| Unnamed: 0 (int64, 0–832k) | id (float64, 2.49B–32.1B) | type (stringclasses, 1 value) | created_at (stringlengths, 19) | repo (stringlengths, 5–112) | repo_url (stringlengths, 34–141) | action (stringclasses, 3 values) | title (stringlengths, 1–1k) | labels (stringlengths, 4–1.38k) | body (stringlengths, 1–262k) | index (stringclasses, 16 values) | text_combine (stringlengths, 96–262k) | label (stringclasses, 2 values) | text (stringlengths, 96–252k) | binary_label (int64, 0–1) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
269,540 | 20,386,304,327 | IssuesEvent | 2022-02-22 07:21:56 | Azure/PSRule.Rules.Azure | https://api.github.com/repos/Azure/PSRule.Rules.Azure | opened | Improve running locally documentation | documentation | We need to improve documentation for testing a repository locally using VSCode extension including any configuration options that are relevant. | 1.0 | Improve running locally documentation - We need to improve documentation for testing a repository locally using VSCode extension including any configuration options that are relevant. | non_priority | improve running locally documentation we need to improve documentation for testing a repository locally using vscode extension including any configuration options that are relevant | 0 |
720,683 | 24,801,502,006 | IssuesEvent | 2022-10-24 22:14:41 | MSRevive/MSCScripts | https://api.github.com/repos/MSRevive/MSCScripts | closed | Replace bloodstone ring ability | help wanted high priority | As players can see monster's hp by default now, the bloodstone ring is currently useless. We need to replace it to do something useful now. | 1.0 | Replace bloodstone ring ability - As players can see monster's hp by default now, the bloodstone ring is currently useless. We need to replace it to do something useful now. | priority | replace bloodstone ring ability as players can see monster s hp by default now the bloodstone ring is currently useless we need to replace it to do something useful now | 1 |
554,126 | 16,389,597,249 | IssuesEvent | 2021-05-17 14:36:12 | ruuvi/com.ruuvi.station | https://api.github.com/repos/ruuvi/com.ruuvi.station | closed | Starting without wifi access causes crash - 1.5.8 - defect | bug medium priority | If mobile is out of range of wifi or otherwise not connected to wifi app will crash.
May be related to issue #350 | 1.0 | Starting without wifi access causes crash - 1.5.8 - defect - If mobile is out of range of wifi or otherwise not connected to wifi app will crash.
May be related to issue #350 | priority | starting without wifi access causes crash defect if mobile is out of range of wifi or otherwise not connected to wifi app will crash may be related to issue | 1 |
138,633 | 11,209,817,471 | IssuesEvent | 2020-01-06 11:29:25 | brave/brave-browser | https://api.github.com/repos/brave/brave-browser | closed | Manual test run on Windows x64 for 1.2.x - Beta | OS/Windows QA/Yes release-notes/exclude tests | ## Per release specialty tests
- [x] Devtools "Audit" (Lighthouse) feature causes browser to freeze / lock up.([#3199](https://github.com/brave/brave-browser/issues/3199))
- [x] Maximum daily ads at 21 instead of 20 (follow up to #3849).([#4207](https://github.com/brave/brave-browser/issues/4207))
- [x] Ads grants notification is shown when Ads switch was OFF.([#4340](https://github.com/brave/brave-browser/issues/4340))
- [x] Ads earnings notification partially translated.([#4861](https://github.com/brave/brave-browser/issues/4861))
- [x] Desktop Bookmark Platform Alignment.([#5158](https://github.com/brave/brave-browser/issues/5158))
- [x] Developer tool doesn't load with remote devices in brave-browser (Desktop) .([#5640](https://github.com/brave/brave-browser/issues/5640))
- [x] Add key for brave services.([#5690](https://github.com/brave/brave-browser/issues/5690))
- [ ] Point browser to dev environment.([#5722](https://github.com/brave/brave-browser/issues/5722))
- [x] Blank page for magnet link after browser restart.([#6472](https://github.com/brave/brave-browser/issues/6472))
- [x] Use infobars for Dapp detection and warn for MetaMask.([#6600](https://github.com/brave/brave-browser/issues/6600))
- [x] Decouple business logic for newly supported regions into ads library.([#6612](https://github.com/brave/brave-browser/issues/6612))
- [x] Brave Rewards : Monthly Contribution : Copy Updates.([#7093](https://github.com/brave/brave-browser/issues/7093))
- [x] Harden contribution flow.([#7201](https://github.com/brave/brave-browser/issues/7201))
- [x] Brave crashes when select Sync option from Tor window.([#7225](https://github.com/brave/brave-browser/issues/7225))
- [x] Disable opening Tor in guest window.([#7237](https://github.com/brave/brave-browser/issues/7237))
- [x] Brave Sec Issue.([#7291](https://github.com/brave/brave-browser/issues/7291))
- [x] Brave.P3A.SentAnswersCount defaults to highest value - follow up to 7261.([#7306](https://github.com/brave/brave-browser/issues/7306))
### Installer
- [x] Check signature: If OS Run `spctl --assess --verbose /Applications/Brave-Browser-Beta.app/` and make sure it returns `accepted`. If Windows right click on the `brave_installer-x64.exe` and go to Properties, go to the Digital Signatures tab and double click on the signature. Make sure it says "The digital signature is OK" in the popup window
### Data(Upgrade from previous release)
- [x] Make sure that data from the last version appears in the new version OK
- [x] With data from the last version, verify that
- [x] bookmarks on the bookmark toolbar and bookmark folders can be opened
- [x] cookies are preserved
- [x] installed extensions are retained and work correctly
- [x] opened tabs can be reloaded
- [x] stored passwords are preserved
- [x] unpinned tabs can be pinned
## Extensions/Plugins tests
- [x] Verify one item from Brave Update server is installable (Example: Ad-block DAT file on fresh extension)
- [x] Verify one item from Google Update server is installable (Example: Extensions from CWS)
- [x] Verify PDFJS, Torrent viewer extensions are installed automatically on fresh profile and cannot be disabled
- [x] Verify magnet links and .torrent files loads Torrent viewer page and able to download torrent
### CWS
- [x] Verify installing ABP from CWS shows warning message `NOT A RECOMMENDED BRAVE EXTENSION!` but still allows to install the extension
- [x] Verify installing LastPass from CWS doesn't show any warning message
### PDF
- [x] Test that PDF is loaded over HTTPS at https://basicattentiontoken.org/BasicAttentionTokenWhitePaper-4.pdf
- [x] Test that PDF is loaded over HTTP at http://www.pdf995.com/samples/pdf.pdf
### Widevine
- [x] Verify `Widevine Notification` is shown when you visit Netflix for the first time
- [x] Test that you can stream on Netflix on a fresh profile after installing Widevine
### Bravery settings
- [x] Verify that HTTPS Everywhere works by loading http://https-everywhere.badssl.com/
- [x] Turning HTTPS Everywhere off and shields off both disable the redirect to https://https-everywhere.badssl.com/
- [x] Verify that toggling `Ads and trackers blocked` works as expected
- [x] Visit https://testsafebrowsing.appspot.com/s/phishing.html, verify that Safe Browsing (via our Proxy) works for all the listed items
- [x] Visit https://brianbondy.com/ and then turn on script blocking, page should not load. Allow it from the script blocking UI in the URL bar and it should load the page correctly
- [x] Test that 3rd party storage results are blank at https://jsfiddle.net/7ke9r14a/9/ when 3rd party cookies are blocked and not blank when 3rd party cookies are unblocked
### Fingerprint Tests
- [x] Visit https://jsfiddle.net/bkf50r8v/13/, ensure 3 blocked items are listed in shields. Result window should show `got canvas fingerprint 0` and `got webgl fingerprint 00`
- [x] Test that audio fingerprint is blocked at https://audiofingerprint.openwpm.com/ only when `Block all fingerprinting protection` is on
- [ ] Test that Brave browser isn't detected on https://extensions.inrialpes.fr/brave/
- [x] Test that https://diafygi.github.io/webrtc-ips/ doesn't leak IP address when `Block all fingerprinting protection` is on
### Rewards
- [x] Verify wallet is auto created after enabling rewards
- [x] Verify account balance shows correct BAT and USD value
- [x] Verify you are able to restore a wallet
- [x] Verify wallet address matches the QR code that is generated under `Add funds`
- [x] Verify actions taken (claiming grant, tipping, auto-contribute) display in wallet panel
- [x] Verify adding funds via any of the currencies flows into wallet after specified amount of time
- [x] Verify adding funds to an existing wallet with amount, adjusts the BAT value appropriately
- [x] Verify monthly budget shows correct BAT and USD value
- [x] Verify you are able to exclude a publisher from the auto-contribute table by clicking on the `x` in auto-contribute table and popup list of sites
- [x] Verify you are able to exclude a publisher by using the toggle on the Rewards Panel
- [x] Verify when you click on the BR panel while on a site, the panel displays site specific information (site favicon, domain, attention %)
- [x] Verify when you click on `Send a tip`, the custom tip banner displays
- [x] Verify you are able to make one-time tip and they display in tips panel
- [x] Verify you are able to make recurring tip and they display in tips panel
- [x] Verify you can tip a verified publisher
- [x] Verify you can tip a verified YouTube creator
- [x] Verify tip panel shows a verified checkmark for a verified publisher/verified YouTube creator
- [x] Verify tip panel shows a message about unverified publisher
- [x] Verify BR panel shows message about an unverified publisher
- [x] Verify you are able to perform a contribution
- [x] Verify if you disable auto-contribute you are still able to tip regular sites and YouTube creators
- [x] Verify that disabling Rewards and enabling it again does not lose state
- [x] Verify that disabling auto-contribute and enabling it again does not lose state
- [x] Adjust min visit/time in settings. Visit some sites and YouTube channels to verify they are added to the table after the specified settings
- [x] Upgrade from older version
- [x] Verify the wallet balance is retained and wallet backup code isn't corrupted
- [x] Verify auto-contribute list is not lost after upgrade
- [x] Verify tips list is not lost after upgrade
- [x] Verify wallet panel transactions list is not lost after upgrade
### Ads Upgrade Tests:
- [x] Install 0.62.51 and enable Rewards (Ads are not available on this version). Update on `test` channel to the hotfix version. Verify Ads are off by default, should get a BAT logo notification to alert you that Ads are available.
- [x] Install 0.64.77 and enable Rewards. Ads are on by default. View an Ad. Update on `test` channel to the hotfix version. Verify Ads are still on after update, Ads panel information was not lost after upgrade, no BAT logo notification.
- [x] Install 0.64.77 and enable Rewards. Disable Ads. Update on `test` channel to the hotfix version. Verify Ads are still off after update, no BAT logo notification.
- [x] Install 1.1.23 and enable Rewards. Ads are on by default. View an ad. Update on `test` channel to the hotfix version. Verify Ads are still on after update, Ads panel information was not lost after upgrade, no BAT logo notification.
- [x] install 1.1.23 and enable Rewards. Disable Ads. Update on `test` channel to the hotfix version. Verify Ads are still off after update, no BAT logo notification.
### Tor Tabs
- [x] Visit https://check.torproject.org in a Tor window, ensure its shows success message for using a Tor exit node
- [x] Visit https://check.torproject.org in a Tor window, note down exit node IP address. Do a hard refresh (Ctrl+Shift+R/Cmd+Shift+R), ensure exit IP changes after page reloads
- [x] Visit https://protonirockerxow.onion/ in a Tor window, ensure login page is shown
- [x] Visit https://browserleaks.com/geo in a Tor window, ensure location isn't shown
### Session storage
- [x] Temporarily move away your browser profile and test that a new profile is created when browser is launched
- macOS - `~/Library/Application\ Support/BraveSoftware/`
- Windows - `%userprofile%\appdata\Local\BraveSoftware\`
- Linux(Ubuntu) - `~/.config/BraveSoftware/`
- [x] Test that windows and tabs restore when closed, including active tab
- [x] Ensure that the tabs in the above session are being lazy loaded when the session is restored
## Update tests
- [x] Verify visiting `brave://settings/help` triggers update check
- [x] Verify once update is downloaded, prompts to `Relaunch` to install update
## Chromium upgrade tests
- [x] Verify `brave://gpu` on Brave and `chrome://gpu` on Chrome are similar for the same Chromium version on both browsers
#### Adblock
- [x] Verify referrer blocking works properly for TLD+1. Visit `https://technology.slashdot.org/` and verify adblock works properly similar to `https://slashdot.org/`
#### Components
- [x] Delete Adblock folder from browser profile and restart browser. Visit `brave://components` and verify `Brave Ad Block Updater` downloads and update the component. Repeat for all Brave components
## Crypto Wallets
- [x] ensure that you can create a new wallet without any issues
- [x] ensure that you can restore a previous CW wallet without any issues
- [x] ensure that you can restore a previous MM wallet without any issues
- [x] ensure that you can create a transaction (sending crypto) with a CW wallet
- [x] ensure that you can create a transaction (sending crypto) using a restored MM wallet | 1.0 | Manual test run on Windows x64 for 1.2.x - Beta - ## Per release specialty tests
- [x] Devtools "Audit" (Lighthouse) feature causes browser to freeze / lock up.([#3199](https://github.com/brave/brave-browser/issues/3199))
- [x] Maximum daily ads at 21 instead of 20 (follow up to #3849).([#4207](https://github.com/brave/brave-browser/issues/4207))
- [x] Ads grants notification is shown when Ads switch was OFF.([#4340](https://github.com/brave/brave-browser/issues/4340))
- [x] Ads earnings notification partially translated.([#4861](https://github.com/brave/brave-browser/issues/4861))
- [x] Desktop Bookmark Platform Alignment.([#5158](https://github.com/brave/brave-browser/issues/5158))
- [x] Developer tool doesn't load with remote devices in brave-browser (Desktop) .([#5640](https://github.com/brave/brave-browser/issues/5640))
- [x] Add key for brave services.([#5690](https://github.com/brave/brave-browser/issues/5690))
- [ ] Point browser to dev environment.([#5722](https://github.com/brave/brave-browser/issues/5722))
- [x] Blank page for magnet link after browser restart.([#6472](https://github.com/brave/brave-browser/issues/6472))
- [x] Use infobars for Dapp detection and warn for MetaMask.([#6600](https://github.com/brave/brave-browser/issues/6600))
- [x] Decouple business logic for newly supported regions into ads library.([#6612](https://github.com/brave/brave-browser/issues/6612))
- [x] Brave Rewards : Monthly Contribution : Copy Updates.([#7093](https://github.com/brave/brave-browser/issues/7093))
- [x] Harden contribution flow.([#7201](https://github.com/brave/brave-browser/issues/7201))
- [x] Brave crashes when select Sync option from Tor window.([#7225](https://github.com/brave/brave-browser/issues/7225))
- [x] Disable opening Tor in guest window.([#7237](https://github.com/brave/brave-browser/issues/7237))
- [x] Brave Sec Issue.([#7291](https://github.com/brave/brave-browser/issues/7291))
- [x] Brave.P3A.SentAnswersCount defaults to highest value - follow up to 7261.([#7306](https://github.com/brave/brave-browser/issues/7306))
### Installer
- [x] Check signature: If OS Run `spctl --assess --verbose /Applications/Brave-Browser-Beta.app/` and make sure it returns `accepted`. If Windows right click on the `brave_installer-x64.exe` and go to Properties, go to the Digital Signatures tab and double click on the signature. Make sure it says "The digital signature is OK" in the popup window
### Data(Upgrade from previous release)
- [x] Make sure that data from the last version appears in the new version OK
- [x] With data from the last version, verify that
- [x] bookmarks on the bookmark toolbar and bookmark folders can be opened
- [x] cookies are preserved
- [x] installed extensions are retained and work correctly
- [x] opened tabs can be reloaded
- [x] stored passwords are preserved
- [x] unpinned tabs can be pinned
## Extensions/Plugins tests
- [x] Verify one item from Brave Update server is installable (Example: Ad-block DAT file on fresh extension)
- [x] Verify one item from Google Update server is installable (Example: Extensions from CWS)
- [x] Verify PDFJS, Torrent viewer extensions are installed automatically on fresh profile and cannot be disabled
- [x] Verify magnet links and .torrent files loads Torrent viewer page and able to download torrent
### CWS
- [x] Verify installing ABP from CWS shows warning message `NOT A RECOMMENDED BRAVE EXTENSION!` but still allows to install the extension
- [x] Verify installing LastPass from CWS doesn't show any warning message
### PDF
- [x] Test that PDF is loaded over HTTPS at https://basicattentiontoken.org/BasicAttentionTokenWhitePaper-4.pdf
- [x] Test that PDF is loaded over HTTP at http://www.pdf995.com/samples/pdf.pdf
### Widevine
- [x] Verify `Widevine Notification` is shown when you visit Netflix for the first time
- [x] Test that you can stream on Netflix on a fresh profile after installing Widevine
### Bravery settings
- [x] Verify that HTTPS Everywhere works by loading http://https-everywhere.badssl.com/
- [x] Turning HTTPS Everywhere off and shields off both disable the redirect to https://https-everywhere.badssl.com/
- [x] Verify that toggling `Ads and trackers blocked` works as expected
- [x] Visit https://testsafebrowsing.appspot.com/s/phishing.html, verify that Safe Browsing (via our Proxy) works for all the listed items
- [x] Visit https://brianbondy.com/ and then turn on script blocking, page should not load. Allow it from the script blocking UI in the URL bar and it should load the page correctly
- [x] Test that 3rd party storage results are blank at https://jsfiddle.net/7ke9r14a/9/ when 3rd party cookies are blocked and not blank when 3rd party cookies are unblocked
### Fingerprint Tests
- [x] Visit https://jsfiddle.net/bkf50r8v/13/, ensure 3 blocked items are listed in shields. Result window should show `got canvas fingerprint 0` and `got webgl fingerprint 00`
- [x] Test that audio fingerprint is blocked at https://audiofingerprint.openwpm.com/ only when `Block all fingerprinting protection` is on
- [ ] Test that Brave browser isn't detected on https://extensions.inrialpes.fr/brave/
- [x] Test that https://diafygi.github.io/webrtc-ips/ doesn't leak IP address when `Block all fingerprinting protection` is on
### Rewards
- [x] Verify wallet is auto created after enabling rewards
- [x] Verify account balance shows correct BAT and USD value
- [x] Verify you are able to restore a wallet
- [x] Verify wallet address matches the QR code that is generated under `Add funds`
- [x] Verify actions taken (claiming grant, tipping, auto-contribute) display in wallet panel
- [x] Verify adding funds via any of the currencies flows into wallet after specified amount of time
- [x] Verify adding funds to an existing wallet with amount, adjusts the BAT value appropriately
- [x] Verify monthly budget shows correct BAT and USD value
- [x] Verify you are able to exclude a publisher from the auto-contribute table by clicking on the `x` in auto-contribute table and popup list of sites
- [x] Verify you are able to exclude a publisher by using the toggle on the Rewards Panel
- [x] Verify when you click on the BR panel while on a site, the panel displays site specific information (site favicon, domain, attention %)
- [x] Verify when you click on `Send a tip`, the custom tip banner displays
- [x] Verify you are able to make one-time tip and they display in tips panel
- [x] Verify you are able to make recurring tip and they display in tips panel
- [x] Verify you can tip a verified publisher
- [x] Verify you can tip a verified YouTube creator
- [x] Verify tip panel shows a verified checkmark for a verified publisher/verified YouTube creator
- [x] Verify tip panel shows a message about unverified publisher
- [x] Verify BR panel shows message about an unverified publisher
- [x] Verify you are able to perform a contribution
- [x] Verify if you disable auto-contribute you are still able to tip regular sites and YouTube creators
- [x] Verify that disabling Rewards and enabling it again does not lose state
- [x] Verify that disabling auto-contribute and enabling it again does not lose state
- [x] Adjust min visit/time in settings. Visit some sites and YouTube channels to verify they are added to the table after the specified settings
- [x] Upgrade from older version
- [x] Verify the wallet balance is retained and wallet backup code isn't corrupted
- [x] Verify auto-contribute list is not lost after upgrade
- [x] Verify tips list is not lost after upgrade
- [x] Verify wallet panel transactions list is not lost after upgrade
### Ads Upgrade Tests:
- [x] Install 0.62.51 and enable Rewards (Ads are not available on this version). Update on `test` channel to the hotfix version. Verify Ads are off by default, should get a BAT logo notification to alert you that Ads are available.
- [x] Install 0.64.77 and enable Rewards. Ads are on by default. View an Ad. Update on `test` channel to the hotfix version. Verify Ads are still on after update, Ads panel information was not lost after upgrade, no BAT logo notification.
- [x] Install 0.64.77 and enable Rewards. Disable Ads. Update on `test` channel to the hotfix version. Verify Ads are still off after update, no BAT logo notification.
- [x] Install 1.1.23 and enable Rewards. Ads are on by default. View an ad. Update on `test` channel to the hotfix version. Verify Ads are still on after update, Ads panel information was not lost after upgrade, no BAT logo notification.
- [x] install 1.1.23 and enable Rewards. Disable Ads. Update on `test` channel to the hotfix version. Verify Ads are still off after update, no BAT logo notification.
### Tor Tabs
- [x] Visit https://check.torproject.org in a Tor window, ensure its shows success message for using a Tor exit node
- [x] Visit https://check.torproject.org in a Tor window, note down exit node IP address. Do a hard refresh (Ctrl+Shift+R/Cmd+Shift+R), ensure exit IP changes after page reloads
- [x] Visit https://protonirockerxow.onion/ in a Tor window, ensure login page is shown
- [x] Visit https://browserleaks.com/geo in a Tor window, ensure location isn't shown
### Session storage
- [x] Temporarily move away your browser profile and test that a new profile is created when browser is launched
- macOS - `~/Library/Application\ Support/BraveSoftware/`
- Windows - `%userprofile%\appdata\Local\BraveSoftware\`
- Linux(Ubuntu) - `~/.config/BraveSoftware/`
- [x] Test that windows and tabs restore when closed, including active tab
- [x] Ensure that the tabs in the above session are being lazy loaded when the session is restored
## Update tests
- [x] Verify visiting `brave://settings/help` triggers update check
- [x] Verify once update is downloaded, prompts to `Relaunch` to install update
## Chromium upgrade tests
- [x] Verify `brave://gpu` on Brave and `chrome://gpu` on Chrome are similar for the same Chromium version on both browsers
#### Adblock
- [x] Verify referrer blocking works properly for TLD+1. Visit `https://technology.slashdot.org/` and verify adblock works properly similar to `https://slashdot.org/`
#### Components
- [x] Delete Adblock folder from browser profile and restart browser. Visit `brave://components` and verify `Brave Ad Block Updater` downloads and update the component. Repeat for all Brave components
## Crypto Wallets
- [x] ensure that you can create a new wallet without any issues
- [x] ensure that you can restore a previous CW wallet without any issues
- [x] ensure that you can restore a previous MM wallet without any issues
- [x] ensure that you can create a transaction (sending crypto) with a CW wallet
- [x] ensure that you can create a transaction (sending crypto) using a restored MM wallet | non_priority | manual test run on windows for x beta per release specialty tests devtools audit lighthouse feature causes browser to freeze lock up maximum daily ads at instead of follow up to ads grants notification is shown when ads switch was off ads earnings notification partially translated desktop bookmark platform alignment developer tool doesn t load with remote devices in brave browser desktop add key for brave services point browser to dev environment blank page for magnet link after browser restart use infobars for dapp detection and warn for metamask decouple business logic for newly supported regions into ads library brave rewards monthly contribution copy updates harden contribution flow brave crashes when select sync option from tor window disable opening tor in guest window brave sec issue brave sentanswerscount defaults to highest value follow up to installer check signature if os run spctl assess verbose applications brave browser beta app and make sure it returns accepted if windows right click on the brave installer exe and go to properties go to the digital signatures tab and double click on the signature make sure it says the digital signature is ok in the popup window data upgrade from previous release make sure that data from the last version appears in the new version ok with data from the last version verify that bookmarks on the bookmark toolbar and bookmark folders can be opened cookies are preserved installed extensions are retained and work correctly opened tabs can be reloaded stored passwords are preserved unpinned tabs can be pinned extensions plugins tests verify one item from brave update server is installable example ad block dat file on fresh extension verify one item from google update server is installable example extensions from cws verify pdfjs torrent viewer extensions are installed automatically on fresh profile and cannot be disabled verify magnet links and torrent files loads torrent viewer page and able to download torrent cws verify installing abp from cws shows warning message not a recommended brave extension but still allows to install the extension verify installing lastpass from cws doesn t show any warning message pdf test that pdf is loaded over https at test that pdf is loaded over http at widevine verify widevine notification is shown when you visit netflix for the first time test that you can stream on netflix on a fresh profile after installing widevine bravery settings verify that https everywhere works by loading turning https everywhere off and shields off both disable the redirect to verify that toggling ads and trackers blocked works as expected visit verify that safe browsing via our proxy works for all the listed items visit and then turn on script blocking page should not load allow it from the script blocking ui in the url bar and it should load the page correctly test that party storage results are blank at when party cookies are blocked and not blank when party cookies are unblocked fingerprint tests visit ensure blocked items are listed in shields result window should show got canvas fingerprint and got webgl fingerprint test that audio fingerprint is blocked at only when block all fingerprinting protection is on test that brave browser isn t detected on test that doesn t leak ip address when block all fingerprinting protection is on rewards verify wallet is auto created after enabling rewards verify account balance shows correct 
bat and usd value verify you are able to restore a wallet verify wallet address matches the qr code that is generated under add funds verify actions taken claiming grant tipping auto contribute display in wallet panel verify adding funds via any of the currencies flows into wallet after specified amount of time verify adding funds to an existing wallet with amount adjusts the bat value appropriately verify monthly budget shows correct bat and usd value verify you are able to exclude a publisher from the auto contribute table by clicking on the x in auto contribute table and popup list of sites verify you are able to exclude a publisher by using the toggle on the rewards panel verify when you click on the br panel while on a site the panel displays site specific information site favicon domain attention verify when you click on send a tip the custom tip banner displays verify you are able to make one time tip and they display in tips panel verify you are able to make recurring tip and they display in tips panel verify you can tip a verified publisher verify you can tip a verified youtube creator verify tip panel shows a verified checkmark for a verified publisher verified youtube creator verify tip panel shows a message about unverified publisher verify br panel shows message about an unverified publisher verify you are able to perform a contribution verify if you disable auto contribute you are still able to tip regular sites and youtube creators verify that disabling rewards and enabling it again does not lose state verify that disabling auto contribute and enabling it again does not lose state adjust min visit time in settings visit some sites and youtube channels to verify they are added to the table after the specified settings upgrade from older version verify the wallet balance is retained and wallet backup code isn t corrupted verify auto contribute list is not lost after upgrade verify tips list is not lost after upgrade verify wallet panel transactions list is not lost after upgrade ads upgrade tests install and enable rewards ads are not available on this version update on test channel to the hotfix version verify ads are off by default should get a bat logo notification to alert you that ads are available install and enable rewards ads are on by default view an ad update on test channel to the hotfix version verify ads are still on after update ads panel information was not lost after upgrade no bat logo notification install and enable rewards disable ads update on test channel to the hotfix version verify ads are still off after update no bat logo notification install and enable rewards ads are on by default view an ad update on test channel to the hotfix version verify ads are still on after update ads panel information was not lost after upgrade no bat logo notification install and enable rewards disable ads update on test channel to the hotfix version verify ads are still off after update no bat logo notification tor tabs visit in a tor window ensure its shows success message for using a tor exit node visit in a tor window note down exit node ip address do a hard refresh ctrl shift r cmd shift r ensure exit ip changes after page reloads visit in a tor window ensure login page is shown visit in a tor window ensure location isn t shown session storage temporarily move away your browser profile and test that a new profile is created when browser is launched macos library application support bravesoftware windows userprofile appdata local bravesoftware linux ubuntu config 
bravesoftware test that windows and tabs restore when closed including active tab ensure that the tabs in the above session are being lazy loaded when the session is restored update tests verify visiting brave settings help triggers update check verify once update is downloaded prompts to relaunch to install update chromium upgrade tests verify brave gpu on brave and chrome gpu on chrome are similar for the same chromium version on both browsers adblock verify referrer blocking works properly for tld visit and verify adblock works properly similar to components delete adblock folder from browser profile and restart browser visit brave components and verify brave ad block updater downloads and update the component repeat for all brave components crypto wallets ensure that you can create a new wallet without any issues ensure that you can restore a previous cw wallet without any issues ensure that you can restore a previous mm wallet without any issues ensure that you can create a transaction sending crypto with a cw wallet ensure that you can create a transaction sending crypto using a restored mm wallet | 0 |
225,523 | 7,482,215,265 | IssuesEvent | 2018-04-04 23:59:47 | cilium/cilium | https://api.github.com/repos/cilium/cilium | closed | GET /healthz blocks when etcd endpoint is down | kind/bug priority/1.0-blocker priority/insane | The following code blocks with a very long timeout when an etcd endpoint is down:
```
func (e *etcdClient) Status() (string, error) {
eps := e.client.Endpoints()
var err1 error
for i, ep := range eps {
if sr, err := e.client.Status(ctx.Background(), ep); err != nil {
err1 = err
} else if sr.Header.MemberId == sr.Leader {
eps[i] = fmt.Sprintf("%s - (Leader) %s", ep, sr.Version)
} else {
eps[i] = fmt.Sprintf("%s - %s", ep, sr.Version)
}
}
return "Etcd: " + strings.Join(eps, "; "), err1
}
```
This causes `cilium status` to block and fail with a `context deadline exceeded` | 2.0 | GET /healthz blocks when etcd endpoint is down - The following code blocks with a very long timeout when an etcd endpoint is down:
```
func (e *etcdClient) Status() (string, error) {
eps := e.client.Endpoints()
var err1 error
for i, ep := range eps {
if sr, err := e.client.Status(ctx.Background(), ep); err != nil {
err1 = err
} else if sr.Header.MemberId == sr.Leader {
eps[i] = fmt.Sprintf("%s - (Leader) %s", ep, sr.Version)
} else {
eps[i] = fmt.Sprintf("%s - %s", ep, sr.Version)
}
}
return "Etcd: " + strings.Join(eps, "; "), err1
}
```
This causes `cilium status` to block and fail with a `context deadline exceeded` | priority | get healthz blocks when etcd endpoint is down the following code blocks with a very long timeout when an etcd endpoint is down func e etcdclient status string error eps e client endpoints var error for i ep range eps if sr err e client status ctx background ep err nil err else if sr header memberid sr leader eps fmt sprintf s leader s ep sr version else eps fmt sprintf s s ep sr version return etcd strings join eps this causes cilium status to block and fail with a context deadline exceeded | 1 |
220,133 | 16,889,073,149 | IssuesEvent | 2021-06-23 06:53:51 | jinseobhong/typescript.reactNative.template | https://api.github.com/repos/jinseobhong/typescript.reactNative.template | closed | [Documentation] Create CODE_OF_CONDUCT.md for repository | documentation task | # Task <a href="#task" id="task">#</a>
1. [What kind to task](#what-kind-to-task)
- [Describe to What you are trying to solve by task](#describe-to-what-you-are-trying-to-solve-by-task)
- [Task goals](#task-goals)
- [Environment for task](#environment-for-task)
- [Tasks of Task](#tasks-of-task)
- [Describe alternatives](#describe-alternatives)
2. [Additional context](#additional-context)
3. [Reference](#reference)
## What kind to task <a href="#what-kind-of-task" id="what-kind-of-task">#</a>
Please check the type of **task** and add label, See [here](../blob/master/CONTRIBUTING.md#how-to-create-issue-about-task) to see what types are available.
### Describe to What you are trying to solve by task <a href="#describe-to-what-you-are-trying-to-solve-by-task" id="describe-to-what-you-are-trying-to-solve-by-task">#</a>
Write documentation for code conductors.
### Task goals <a href="#task-goals" id="Task-goals">#</a>
- [x] Create CODE_OF_CONDUCT.md for repository
### Environment for task <a href="#environment-for-task" id="environment-for-task">#</a>
Please write if there is an environment required for task
#### Environment for Android
- OS : [ e.g: Ubuntu 20.04 LTS, etc .. ]
- Virtual execution environment
- Java : [ e.g: openjdk 11.0.11 2021-04-20, etc .. ]
- Android Studio :
- Android SDK :
- Android SDK Platform :
- Android Virtual Device :
- Development tools
- Node : [ e.g: v16.3.0, etc .. ]
- Package dependency manager :
- npm : [ e.g: 7.15.1, etc .. ]
- yarn :[ e.g: 1.22.10, etc .. ]
- Packages :
- dependencies :
- something dependencies [ e.g: "react-native": "0.64.2", etc .. ]
-
- devDependencies :
- something devDependencies [ e.g: "@babel/core": "^7.12.9", etc .. ]
-
#### Environment for ios
- Mac OS :
- Virtual execution environment
- Xcode :
- CocoaPods :
- Development tools
- Watchman :
- Node : [ e.g: v16.3.0, etc .. ]
- Package dependency manager :
- npm : [ e.g: 7.15.1, etc .. ]
- yarn :[ e.g: 1.22.10, etc .. ]
- Packages :
- dependencies :
- something dependencies [ e.g: "react-native": "0.64.2", etc .. ]
-
- devDependencies :
- something devDependencies [ e.g: "@babel/core": "^7.12.9", etc .. ]
-
### Tasks of Task <a href="#tasks-of-task" id="tasks-of-task">#</a>
- [x] Create CODE_OF_CONDUCT.md for repository
### Describe alternatives <a href="#describe-alternatives" id="describe-alternatives">#</a>
If there is an alternative to this method, please let me know. if you have tried anything before to task(in the order you tried), Steps to reproduce the behavior:
1. Go to '...'
2. Click on '....'
3. Scroll down to '....'
4. See error
## Additional context <a href="#additional-context" id="additional-context">#</a>
Add any other context or screenshots about this issues
## Reference <a href="#reference" id="reference">#</a>
When writing a task, write down what you referenced
| 1.0 | [Documentation] Create CODE_OF_CONDUCT.md for repository - # Task <a href="#task" id="task">#</a>
1. [What kind to task](#what-kind-to-task)
- [Describe to What you are trying to solve by task](#describe-to-what-you-are-trying-to-solve-by-task)
- [Task goals](#task-goals)
- [Environment for task](#environment-for-task)
- [Tasks of Task](#tasks-of-task)
- [Describe alternatives](#describe-alternatives)
2. [Additional context](#additional-context)
3. [Reference](#reference)
## What kind to task <a href="#what-kind-of-task" id="what-kind-of-task">#</a>
Please check the type of **task** and add label, See [here](../blob/master/CONTRIBUTING.md#how-to-create-issue-about-task) to see what types are available.
### Describe to What you are trying to solve by task <a href="#describe-to-what-you-are-trying-to-solve-by-task" id="describe-to-what-you-are-trying-to-solve-by-task">#</a>
Write documentation for code conductors.
### Task goals <a href="#task-goals" id="Task-goals">#</a>
- [x] Create CODE_OF_CONDUCT.md for repository
### Environment for task <a href="#environment-for-task" id="environment-for-task">#</a>
Please write if there is an environment required for task
#### Environment for Android
- OS : [ e.g: Ubuntu 20.04 LTS, etc .. ]
- Virtual execution environment
- Java : [ e.g: openjdk 11.0.11 2021-04-20, etc .. ]
- Android Studio :
- Android SDK :
- Android SDK Platform :
- Android Virtual Device :
- Development tools
- Node : [ e.g: v16.3.0, etc .. ]
- Package dependency manager :
- npm : [ e.g: 7.15.1, etc .. ]
- yarn :[ e.g: 1.22.10, etc .. ]
- Packages :
- dependencies :
- something dependencies [ e.g: "react-native": "0.64.2", etc .. ]
-
- devDependencies :
- something devDependencies [ e.g: "@babel/core": "^7.12.9", etc .. ]
-
#### Environment for ios
- Mac OS :
- Virtual execution environment
- Xcode :
- CocoaPods :
- Development tools
- Watchman :
- Node : [ e.g: v16.3.0, etc .. ]
- Package dependency manager :
- npm : [ e.g: 7.15.1, etc .. ]
- yarn :[ e.g: 1.22.10, etc .. ]
- Packages :
- dependencies :
- something dependencies [ e.g: "react-native": "0.64.2", etc .. ]
-
- devDependencies :
- something devDependencies [ e.g: "@babel/core": "^7.12.9", etc .. ]
-
### Tasks of Task <a href="#tasks-of-task" id="tasks-of-task">#</a>
- [x] Create CODE_OF_CONDUCT.md for repository
### Describe alternatives <a href="#describe-alternatives" id="describe-alternatives">#</a>
If there is an alternative to this method, please let me know. if you have tried anything before to task(in the order you tried), Steps to reproduce the behavior:
1. Go to '...'
2. Click on '....'
3. Scroll down to '....'
4. See error
## Additional context <a href="#additional-context" id="additional-context">#</a>
Add any other context or screenshots about this issues
## Reference <a href="#reference" id="reference">#</a>
When writing a task, write down what you referenced
| non_priority | create code of conduct md for repository task what kind to task describe to what you are trying to solve by task task goals environment for task tasks of task describe alternatives additional context reference what kind to task please check the type of task and add label see blob master contributing md how to create issue about task to see what types are available describe to what you are trying to solve by task write documentation for code conductors task goals create code of conduct md for repository environment for task please write if there is an environment required for task environment for android os virtual execution environment java android studio android sdk android sdk platform android virtual device development tools node package dependency manager npm yarn packages dependencies something dependencies devdependencies something devdependencies environment for ios mac os virtual execution environment xcode cocoapods development tools watchman node package dependency manager npm yarn packages dependencies something dependencies devdependencies something devdependencies tasks of task create code of conduct md for repository describe alternatives if there is an alternative to this method please let me know if you have tried anything before to task in the order you tried steps to reproduce the behavior go to click on scroll down to see error additional context add any other context or screenshots about this issues reference when writing a task write down what you referenced | 0 |
41,926 | 10,709,353,247 | IssuesEvent | 2019-10-24 21:54:20 | idaholab/moose | https://api.github.com/repos/idaholab/moose | opened | TestHarness isn't handling crashes in --recover tests (part1) correctly | C: TestHarness P: normal T: defect | ## Bug Description
<!--A clear and concise description of the problem (Note: A missing feature is not a bug).-->
It appears that if a test crashes when using the --recover flag during Part1, it still passes from the TestHarness point of view. We are getting away with this since Part2 will definitely fail if that happens.
## Steps to Reproduce
<!--Steps to reproduce the behavior (input file, or modifications to an existing input file, etc.)-->
I have a hacked up branch that's crashing in part1 - I'll have to work up a real test case. We should be able to create a hard error (compiled in) to trigger this.
## Impact
<!--Does this prevent you from getting your work done, or is it more of an annoyance?-->
Minor: Luckly this doesn't let anything bad slip through, but it's a pretty significant issue for the TestHarness
| 1.0 | TestHarness isn't handling crashes in --recover tests (part1) correctly - ## Bug Description
<!--A clear and concise description of the problem (Note: A missing feature is not a bug).-->
It appears that if a test crashes when using the --recover flag during Part1, it still passes from the TestHarness point of view. We are getting away with this since Part2 will definitely fail if that happens.
## Steps to Reproduce
<!--Steps to reproduce the behavior (input file, or modifications to an existing input file, etc.)-->
I have a hacked up branch that's crashing in part1 - I'll have to work up a real test case. We should be able to create a hard error (compiled in) to trigger this.
## Impact
<!--Does this prevent you from getting your work done, or is it more of an annoyance?-->
Minor: Luckly this doesn't let anything bad slip through, but it's a pretty significant issue for the TestHarness
| non_priority | testharness isn t handling crashes in recover tests correctly bug description it appears that if a test crashes when using the recover flag during it still passes from the testharness point of view we are getting away with this since will definitely fail if that happens steps to reproduce i have a hacked up branch that s crashing in i ll have to work up a real test case we should be able to create a hard error compiled in to trigger this impact minor luckly this doesn t let anything bad slip through but it s a pretty significant issue for the testharness | 0 |
146,967 | 5,631,510,808 | IssuesEvent | 2017-04-05 14:42:25 | actor-framework/actor-framework | https://api.github.com/repos/actor-framework/actor-framework | closed | Add Pony benchmark | @benchmarks low priority task | The [Pony](http://ponylang.org) language offers first-class actor support. The authors also compare it against CAF 0.13 in [a set of benchmarks](http://ponylang.org/benchmarks_all.pdf). It would be great to reproduce these numbers.
| 1.0 | Add Pony benchmark - The [Pony](http://ponylang.org) language offers first-class actor support. The authors also compare it against CAF 0.13 in [a set of benchmarks](http://ponylang.org/benchmarks_all.pdf). It would be great to reproduce these numbers.
| priority | add pony benchmark the language offers first class actor support the authors also compare it against caf in it would be great to reproduce these numbers | 1 |
723,338 | 24,893,744,412 | IssuesEvent | 2022-10-28 14:09:01 | wso2/api-manager | https://api.github.com/repos/wso2/api-manager | opened | Error Index 1 out of bounds for length 1 when searching api with name "doc" with publisher rest api | Type/Bug Priority/Normal | ### Description
When using the publisher rest api te search an api with the name "doc" a 500 response is returned. This happens when the api in question exists and when it doesn't. When using any other "query", like "do" or "dot" the 500 is not returned.
The following error is logged in the api manager:
```
[2022-10-28 13:52:18,779] ERROR - GlobalThrowableMapper An unknown exception has been captured by the global exception mapper.
java.lang.ArrayIndexOutOfBoundsException: Index 1 out of bounds for length 1
at org.wso2.carbon.apimgt.persistence.RegistryPersistenceImpl.searchAPIsForPublisher(RegistryPersistenceImpl.java:932) ~[org.wso2.carbon.apimgt.persistence_9.20.74.jar:?]
at org.wso2.carbon.apimgt.impl.APIProviderImpl.searchPaginatedAPIs_aroundBody520(APIProviderImpl.java:8434) ~[org.wso2.carbon.apimgt.impl_9.20.74.jar:?]
at org.wso2.carbon.apimgt.impl.APIProviderImpl.searchPaginatedAPIs(APIProviderImpl.java:8422) ~[org.wso2.carbon.apimgt.impl_9.20.74.jar:?]
at org.wso2.carbon.apimgt.impl.UserAwareAPIProvider.searchPaginatedAPIs(UserAwareAPIProvider.java:1) ~[org.wso2.carbon.apimgt.impl_9.20.74.jar:?]
at org.wso2.carbon.apimgt.rest.api.publisher.v1.impl.ApisApiServiceImpl.getAllAPIs(ApisApiServiceImpl.java:261) ~[?:?]
at org.wso2.carbon.apimgt.rest.api.publisher.v1.ApisApi.getAllAPIs(ApisApi.java:1075) ~[?:?]
at jdk.internal.reflect.GeneratedMethodAccessor578.invoke(Unknown Source) ~[?:?]
at jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[?:?]
at java.lang.reflect.Method.invoke(Method.java:566) ~[?:?]
at org.apache.cxf.service.invoker.AbstractInvoker.performInvocation(AbstractInvoker.java:179) ~[?:?]
at org.apache.cxf.service.invoker.AbstractInvoker.invoke(AbstractInvoker.java:96) ~[?:?]
at org.apache.cxf.jaxrs.JAXRSInvoker.invoke(JAXRSInvoker.java:201) ~[?:?]
at org.apache.cxf.jaxrs.JAXRSInvoker.invoke(JAXRSInvoker.java:104) ~[?:?]
at org.apache.cxf.interceptor.ServiceInvokerInterceptor$1.run(ServiceInvokerInterceptor.java:59) ~[?:?]
at org.apache.cxf.interceptor.ServiceInvokerInterceptor.handleMessage(ServiceInvokerInterceptor.java:96) ~[?:?]
at org.apache.cxf.phase.PhaseInterceptorChain.doIntercept(PhaseInterceptorChain.java:307) ~[?:?]
at org.apache.cxf.transport.ChainInitiationObserver.onMessage(ChainInitiationObserver.java:121) ~[?:?]
at org.apache.cxf.transport.http.AbstractHTTPDestination.invoke(AbstractHTTPDestination.java:265) ~[?:?]
at org.apache.cxf.transport.servlet.ServletController.invokeDestination(ServletController.java:234) ~[?:?]
at org.apache.cxf.transport.servlet.ServletController.invoke(ServletController.java:208) ~[?:?]
at org.apache.cxf.transport.servlet.ServletController.invoke(ServletController.java:160) ~[?:?]
at org.apache.cxf.transport.servlet.CXFNonSpringServlet.invoke(CXFNonSpringServlet.java:225) ~[?:?]
at org.apache.cxf.transport.servlet.AbstractHTTPServlet.handleRequest(AbstractHTTPServlet.java:304) ~[?:?]
at org.apache.cxf.transport.servlet.AbstractHTTPServlet.doGet(AbstractHTTPServlet.java:222) ~[?:?]
at javax.servlet.http.HttpServlet.service(HttpServlet.java:655) ~[tomcat-servlet-api_9.0.58.wso2v1.jar:?]
at org.apache.cxf.transport.servlet.AbstractHTTPServlet.service(AbstractHTTPServlet.java:279) ~[?:?]
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:227) ~[tomcat_9.0.58.wso2v1.jar:?]
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:162) ~[tomcat_9.0.58.wso2v1.jar:?]
at org.apache.tomcat.websocket.server.WsFilter.doFilter(WsFilter.java:53) ~[tomcat_9.0.58.wso2v1.jar:?]
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:189) ~[tomcat_9.0.58.wso2v1.jar:?]
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:162) ~[tomcat_9.0.58.wso2v1.jar:?]
at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:197) ~[tomcat_9.0.58.wso2v1.jar:?]
at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:97) ~[tomcat_9.0.58.wso2v1.jar:?]
at org.apache.catalina.authenticator.AuthenticatorBase.invoke(AuthenticatorBase.java:540) ~[tomcat_9.0.58.wso2v1.jar:?]
at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:135) ~[tomcat_9.0.58.wso2v1.jar:?]
at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:92) ~[tomcat_9.0.58.wso2v1.jar:?]
at org.wso2.carbon.identity.context.rewrite.valve.TenantContextRewriteValve.invoke(TenantContextRewriteValve.java:107) ~[org.wso2.carbon.identity.context.rewrite.valve_1.4.52.jar:?]
at org.wso2.carbon.identity.authz.valve.AuthorizationValve.invoke(AuthorizationValve.java:110) ~[org.wso2.carbon.identity.authz.valve_1.4.52.jar:?]
at org.wso2.carbon.identity.auth.valve.AuthenticationValve.invoke(AuthenticationValve.java:102) ~[org.wso2.carbon.identity.auth.valve_1.4.52.jar:?]
at org.wso2.carbon.tomcat.ext.valves.CompositeValve.continueInvocation(CompositeValve.java:101) ~[org.wso2.carbon.tomcat.ext_4.6.3.jar:?]
at org.wso2.carbon.tomcat.ext.valves.TomcatValveContainer.invokeValves(TomcatValveContainer.java:49) ~[org.wso2.carbon.tomcat.ext_4.6.3.jar:?]
at org.wso2.carbon.tomcat.ext.valves.CompositeValve.invoke(CompositeValve.java:62) ~[org.wso2.carbon.tomcat.ext_4.6.3.jar:?]
at org.wso2.carbon.tomcat.ext.valves.CarbonStuckThreadDetectionValve.invoke(CarbonStuckThreadDetectionValve.java:146) ~[org.wso2.carbon.tomcat.ext_4.6.3.jar:?]
at org.apache.catalina.valves.AbstractAccessLogValve.invoke(AbstractAccessLogValve.java:687) ~[tomcat_9.0.58.wso2v1.jar:?]
at org.wso2.carbon.tomcat.ext.valves.CarbonContextCreatorValve.invoke(CarbonContextCreatorValve.java:58) ~[org.wso2.carbon.tomcat.ext_4.6.3.jar:?]
at org.wso2.carbon.tomcat.ext.valves.RequestCorrelationIdValve.invoke(RequestCorrelationIdValve.java:126) ~[org.wso2.carbon.tomcat.ext_4.6.3.jar:?]
at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:78) ~[tomcat_9.0.58.wso2v1.jar:?]
at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:359) ~[tomcat_9.0.58.wso2v1.jar:?]
at org.apache.coyote.http11.Http11Processor.service(Http11Processor.java:399) ~[tomcat_9.0.58.wso2v1.jar:?]
at org.apache.coyote.AbstractProcessorLight.process(AbstractProcessorLight.java:65) ~[tomcat_9.0.58.wso2v1.jar:?]
at org.apache.coyote.AbstractProtocol$ConnectionHandler.process(AbstractProtocol.java:889) ~[tomcat_9.0.58.wso2v1.jar:?]
at org.apache.tomcat.util.net.NioEndpoint$SocketProcessor.doRun(NioEndpoint.java:1735) ~[tomcat_9.0.58.wso2v1.jar:?]
at org.apache.tomcat.util.net.SocketProcessorBase.run(SocketProcessorBase.java:49) ~[tomcat_9.0.58.wso2v1.jar:?]
at org.apache.tomcat.util.threads.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1191) ~[tomcat_9.0.58.wso2v1.jar:?]
at org.apache.tomcat.util.threads.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:659) ~[tomcat_9.0.58.wso2v1.jar:?]
at org.apache.tomcat.util.threads.TaskThread$WrappingRunnable.run(TaskThread.java:61) ~[tomcat_9.0.58.wso2v1.jar:?]
at java.lang.Thread.run(Thread.java:829) [?:?]
```
### Steps to Reproduce
1) Start with a "clean" api manager with docker-compose (following the steps in https://github.com/wso2/docker-apim).
1) Create a dcr, get a token with the right scopes
1) try to search the "doc" api:
```
curl --location --request GET 'https://localhost:9443/api/am/publisher/v3/apis?query=doc' \
--header 'Accept: application/json' \
--header 'Authorization: Bearer YOUR-TOKEN-HERE'
```
1) Verify the response is:
```
{
"code": 500,
"message": "Internal server error",
"description": "The server encountered an internal error. Please contact administrator.",
"moreInfo": "",
"error": []
}
```
1) Verify the error message above is shown in the output.
1) Try to search any other api, e.g. "dot":
```
curl --location --request GET 'https://localhost:9443/api/am/publisher/v3/apis?query=doc' \
--header 'Accept: application/json' \
--header 'Authorization: Bearer YOUR-TOKEN-HERE'
```
1) Verify the response is:
```
{
"count": 0,
"list": [],
"pagination": {
"offset": 0,
"limit": 25,
"total": 0,
"next": "",
"previous": ""
}
}
```
### Affected Component
APIM
### Version
4.1.0
### Environment Details (with versions)
Docker container as in https://github.com/wso2/docker-apim/tree/master/docker-compose/apim-is-as-km-with-analytics
### Relevant Log Output
_No response_
### Related Issues
_No response_
### Suggested Labels
_No response_ | 1.0 | Error Index 1 out of bounds for length 1 when searching api with name "doc" with publisher rest api - ### Description
When using the publisher rest api te search an api with the name "doc" a 500 response is returned. This happens when the api in question exists and when it doesn't. When using any other "query", like "do" or "dot" the 500 is not returned.
The following error is logged in the api manager:
```
[2022-10-28 13:52:18,779] ERROR - GlobalThrowableMapper An unknown exception has been captured by the global exception mapper.
java.lang.ArrayIndexOutOfBoundsException: Index 1 out of bounds for length 1
at org.wso2.carbon.apimgt.persistence.RegistryPersistenceImpl.searchAPIsForPublisher(RegistryPersistenceImpl.java:932) ~[org.wso2.carbon.apimgt.persistence_9.20.74.jar:?]
at org.wso2.carbon.apimgt.impl.APIProviderImpl.searchPaginatedAPIs_aroundBody520(APIProviderImpl.java:8434) ~[org.wso2.carbon.apimgt.impl_9.20.74.jar:?]
at org.wso2.carbon.apimgt.impl.APIProviderImpl.searchPaginatedAPIs(APIProviderImpl.java:8422) ~[org.wso2.carbon.apimgt.impl_9.20.74.jar:?]
at org.wso2.carbon.apimgt.impl.UserAwareAPIProvider.searchPaginatedAPIs(UserAwareAPIProvider.java:1) ~[org.wso2.carbon.apimgt.impl_9.20.74.jar:?]
at org.wso2.carbon.apimgt.rest.api.publisher.v1.impl.ApisApiServiceImpl.getAllAPIs(ApisApiServiceImpl.java:261) ~[?:?]
at org.wso2.carbon.apimgt.rest.api.publisher.v1.ApisApi.getAllAPIs(ApisApi.java:1075) ~[?:?]
at jdk.internal.reflect.GeneratedMethodAccessor578.invoke(Unknown Source) ~[?:?]
at jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[?:?]
at java.lang.reflect.Method.invoke(Method.java:566) ~[?:?]
at org.apache.cxf.service.invoker.AbstractInvoker.performInvocation(AbstractInvoker.java:179) ~[?:?]
at org.apache.cxf.service.invoker.AbstractInvoker.invoke(AbstractInvoker.java:96) ~[?:?]
at org.apache.cxf.jaxrs.JAXRSInvoker.invoke(JAXRSInvoker.java:201) ~[?:?]
at org.apache.cxf.jaxrs.JAXRSInvoker.invoke(JAXRSInvoker.java:104) ~[?:?]
at org.apache.cxf.interceptor.ServiceInvokerInterceptor$1.run(ServiceInvokerInterceptor.java:59) ~[?:?]
at org.apache.cxf.interceptor.ServiceInvokerInterceptor.handleMessage(ServiceInvokerInterceptor.java:96) ~[?:?]
at org.apache.cxf.phase.PhaseInterceptorChain.doIntercept(PhaseInterceptorChain.java:307) ~[?:?]
at org.apache.cxf.transport.ChainInitiationObserver.onMessage(ChainInitiationObserver.java:121) ~[?:?]
at org.apache.cxf.transport.http.AbstractHTTPDestination.invoke(AbstractHTTPDestination.java:265) ~[?:?]
at org.apache.cxf.transport.servlet.ServletController.invokeDestination(ServletController.java:234) ~[?:?]
at org.apache.cxf.transport.servlet.ServletController.invoke(ServletController.java:208) ~[?:?]
at org.apache.cxf.transport.servlet.ServletController.invoke(ServletController.java:160) ~[?:?]
at org.apache.cxf.transport.servlet.CXFNonSpringServlet.invoke(CXFNonSpringServlet.java:225) ~[?:?]
at org.apache.cxf.transport.servlet.AbstractHTTPServlet.handleRequest(AbstractHTTPServlet.java:304) ~[?:?]
at org.apache.cxf.transport.servlet.AbstractHTTPServlet.doGet(AbstractHTTPServlet.java:222) ~[?:?]
at javax.servlet.http.HttpServlet.service(HttpServlet.java:655) ~[tomcat-servlet-api_9.0.58.wso2v1.jar:?]
at org.apache.cxf.transport.servlet.AbstractHTTPServlet.service(AbstractHTTPServlet.java:279) ~[?:?]
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:227) ~[tomcat_9.0.58.wso2v1.jar:?]
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:162) ~[tomcat_9.0.58.wso2v1.jar:?]
at org.apache.tomcat.websocket.server.WsFilter.doFilter(WsFilter.java:53) ~[tomcat_9.0.58.wso2v1.jar:?]
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:189) ~[tomcat_9.0.58.wso2v1.jar:?]
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:162) ~[tomcat_9.0.58.wso2v1.jar:?]
at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:197) ~[tomcat_9.0.58.wso2v1.jar:?]
at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:97) ~[tomcat_9.0.58.wso2v1.jar:?]
at org.apache.catalina.authenticator.AuthenticatorBase.invoke(AuthenticatorBase.java:540) ~[tomcat_9.0.58.wso2v1.jar:?]
at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:135) ~[tomcat_9.0.58.wso2v1.jar:?]
at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:92) ~[tomcat_9.0.58.wso2v1.jar:?]
at org.wso2.carbon.identity.context.rewrite.valve.TenantContextRewriteValve.invoke(TenantContextRewriteValve.java:107) ~[org.wso2.carbon.identity.context.rewrite.valve_1.4.52.jar:?]
at org.wso2.carbon.identity.authz.valve.AuthorizationValve.invoke(AuthorizationValve.java:110) ~[org.wso2.carbon.identity.authz.valve_1.4.52.jar:?]
at org.wso2.carbon.identity.auth.valve.AuthenticationValve.invoke(AuthenticationValve.java:102) ~[org.wso2.carbon.identity.auth.valve_1.4.52.jar:?]
at org.wso2.carbon.tomcat.ext.valves.CompositeValve.continueInvocation(CompositeValve.java:101) ~[org.wso2.carbon.tomcat.ext_4.6.3.jar:?]
at org.wso2.carbon.tomcat.ext.valves.TomcatValveContainer.invokeValves(TomcatValveContainer.java:49) ~[org.wso2.carbon.tomcat.ext_4.6.3.jar:?]
at org.wso2.carbon.tomcat.ext.valves.CompositeValve.invoke(CompositeValve.java:62) ~[org.wso2.carbon.tomcat.ext_4.6.3.jar:?]
at org.wso2.carbon.tomcat.ext.valves.CarbonStuckThreadDetectionValve.invoke(CarbonStuckThreadDetectionValve.java:146) ~[org.wso2.carbon.tomcat.ext_4.6.3.jar:?]
at org.apache.catalina.valves.AbstractAccessLogValve.invoke(AbstractAccessLogValve.java:687) ~[tomcat_9.0.58.wso2v1.jar:?]
at org.wso2.carbon.tomcat.ext.valves.CarbonContextCreatorValve.invoke(CarbonContextCreatorValve.java:58) ~[org.wso2.carbon.tomcat.ext_4.6.3.jar:?]
at org.wso2.carbon.tomcat.ext.valves.RequestCorrelationIdValve.invoke(RequestCorrelationIdValve.java:126) ~[org.wso2.carbon.tomcat.ext_4.6.3.jar:?]
at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:78) ~[tomcat_9.0.58.wso2v1.jar:?]
at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:359) ~[tomcat_9.0.58.wso2v1.jar:?]
at org.apache.coyote.http11.Http11Processor.service(Http11Processor.java:399) ~[tomcat_9.0.58.wso2v1.jar:?]
at org.apache.coyote.AbstractProcessorLight.process(AbstractProcessorLight.java:65) ~[tomcat_9.0.58.wso2v1.jar:?]
at org.apache.coyote.AbstractProtocol$ConnectionHandler.process(AbstractProtocol.java:889) ~[tomcat_9.0.58.wso2v1.jar:?]
at org.apache.tomcat.util.net.NioEndpoint$SocketProcessor.doRun(NioEndpoint.java:1735) ~[tomcat_9.0.58.wso2v1.jar:?]
at org.apache.tomcat.util.net.SocketProcessorBase.run(SocketProcessorBase.java:49) ~[tomcat_9.0.58.wso2v1.jar:?]
at org.apache.tomcat.util.threads.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1191) ~[tomcat_9.0.58.wso2v1.jar:?]
at org.apache.tomcat.util.threads.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:659) ~[tomcat_9.0.58.wso2v1.jar:?]
at org.apache.tomcat.util.threads.TaskThread$WrappingRunnable.run(TaskThread.java:61) ~[tomcat_9.0.58.wso2v1.jar:?]
at java.lang.Thread.run(Thread.java:829) [?:?]
```
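The `Index 1 out of bounds for length 1` at `RegistryPersistenceImpl.searchAPIsForPublisher` suggests the search query is split into a key/value pair and element `[1]` is read without a length check, which would only trip when a bare word such as "doc" is treated as a search prefix with no value. The following is a hypothetical TypeScript illustration of that failure mode and a guarded alternative — it is not the actual WSO2 code, and the function and field names are made up:
```ts
// Hypothetical illustration only – not the WSO2 implementation.
function parseSearchClause(clause: string): { key: string; value: string } {
  const parts = clause.split(":");
  // Unguarded access: for the bare query "doc" there is no parts[1];
  // the equivalent access in Java surfaces as ArrayIndexOutOfBoundsException.
  return { key: parts[0], value: parts[1] };
}

// Guarded variant: fall back to a plain content search when no value part exists.
function parseSearchClauseSafe(clause: string): { key: string; value: string } {
  const parts = clause.split(":");
  return parts.length > 1
    ? { key: parts[0], value: parts[1] }
    : { key: "content", value: clause };
}

console.log(parseSearchClause("name:pizza")); // { key: 'name', value: 'pizza' }
console.log(parseSearchClauseSafe("doc"));    // { key: 'content', value: 'doc' }
console.log(parseSearchClause("doc"));        // value is undefined – the unguarded case
```
If that guess is right, the fix on the server side would be a length check (or treating "doc" as an ordinary content term) before indexing into the split result.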
### Steps to Reproduce
1) Start with a "clean" api manager with docker-compose (following the steps in https://github.com/wso2/docker-apim).
1) Create a dcr, get a token with the right scopes
1) try to search the "doc" api:
```
curl --location --request GET 'https://localhost:9443/api/am/publisher/v3/apis?query=doc' \
--header 'Accept: application/json' \
--header 'Authorization: Bearer YOUR-TOKEN-HERE'
```
1) Verify the response is:
```
{
"code": 500,
"message": "Internal server error",
"description": "The server encountered an internal error. Please contact administrator.",
"moreInfo": "",
"error": []
}
```
1) Verify the error message above is shown in the output.
1) Try to search any other api, e.g. "dot":
```
curl --location --request GET 'https://localhost:9443/api/am/publisher/v3/apis?query=dot' \
--header 'Accept: application/json' \
--header 'Authorization: Bearer YOUR-TOKEN-HERE'
```
1) Verify the response is:
```
{
"count": 0,
"list": [],
"pagination": {
"offset": 0,
"limit": 25,
"total": 0,
"next": "",
"previous": ""
}
}
```
### Affected Component
APIM
### Version
4.1.0
### Environment Details (with versions)
Docker container as in https://github.com/wso2/docker-apim/tree/master/docker-compose/apim-is-as-km-with-analytics
### Relevant Log Output
_No response_
### Related Issues
_No response_
### Suggested Labels
_No response_ | priority | error index out of bounds for length when searching api with name doc with publisher rest api description when using the publisher rest api te search an api with the name doc a response is returned this happens when the api in question exists and when it doesn t when using any other query like do or dot the is not returned the following error is logged in the api manager error globalthrowablemapper an unknown exception has been captured by the global exception mapper java lang arrayindexoutofboundsexception index out of bounds for length at org carbon apimgt persistence registrypersistenceimpl searchapisforpublisher registrypersistenceimpl java at org carbon apimgt impl apiproviderimpl searchpaginatedapis apiproviderimpl java at org carbon apimgt impl apiproviderimpl searchpaginatedapis apiproviderimpl java at org carbon apimgt impl userawareapiprovider searchpaginatedapis userawareapiprovider java at org carbon apimgt rest api publisher impl apisapiserviceimpl getallapis apisapiserviceimpl java at org carbon apimgt rest api publisher apisapi getallapis apisapi java at jdk internal reflect invoke unknown source at jdk internal reflect delegatingmethodaccessorimpl invoke delegatingmethodaccessorimpl java at java lang reflect method invoke method java at org apache cxf service invoker abstractinvoker performinvocation abstractinvoker java at org apache cxf service invoker abstractinvoker invoke abstractinvoker java at org apache cxf jaxrs jaxrsinvoker invoke jaxrsinvoker java at org apache cxf jaxrs jaxrsinvoker invoke jaxrsinvoker java at org apache cxf interceptor serviceinvokerinterceptor run serviceinvokerinterceptor java at org apache cxf interceptor serviceinvokerinterceptor handlemessage serviceinvokerinterceptor java at org apache cxf phase phaseinterceptorchain dointercept phaseinterceptorchain java at org apache cxf transport chaininitiationobserver onmessage chaininitiationobserver java at org apache cxf transport http abstracthttpdestination invoke abstracthttpdestination java at org apache cxf transport servlet servletcontroller invokedestination servletcontroller java at org apache cxf transport servlet servletcontroller invoke servletcontroller java at org apache cxf transport servlet servletcontroller invoke servletcontroller java at org apache cxf transport servlet cxfnonspringservlet invoke cxfnonspringservlet java at org apache cxf transport servlet abstracthttpservlet handlerequest abstracthttpservlet java at org apache cxf transport servlet abstracthttpservlet doget abstracthttpservlet java at javax servlet http httpservlet service httpservlet java at org apache cxf transport servlet abstracthttpservlet service abstracthttpservlet java at org apache catalina core applicationfilterchain internaldofilter applicationfilterchain java at org apache catalina core applicationfilterchain dofilter applicationfilterchain java at org apache tomcat websocket server wsfilter dofilter wsfilter java at org apache catalina core applicationfilterchain internaldofilter applicationfilterchain java at org apache catalina core applicationfilterchain dofilter applicationfilterchain java at org apache catalina core standardwrappervalve invoke standardwrappervalve java at org apache catalina core standardcontextvalve invoke standardcontextvalve java at org apache catalina authenticator authenticatorbase invoke authenticatorbase java at org apache catalina core standardhostvalve invoke standardhostvalve java at org apache catalina valves errorreportvalve invoke 
errorreportvalve java at org carbon identity context rewrite valve tenantcontextrewritevalve invoke tenantcontextrewritevalve java at org carbon identity authz valve authorizationvalve invoke authorizationvalve java at org carbon identity auth valve authenticationvalve invoke authenticationvalve java at org carbon tomcat ext valves compositevalve continueinvocation compositevalve java at org carbon tomcat ext valves tomcatvalvecontainer invokevalves tomcatvalvecontainer java at org carbon tomcat ext valves compositevalve invoke compositevalve java at org carbon tomcat ext valves carbonstuckthreaddetectionvalve invoke carbonstuckthreaddetectionvalve java at org apache catalina valves abstractaccesslogvalve invoke abstractaccesslogvalve java at org carbon tomcat ext valves carboncontextcreatorvalve invoke carboncontextcreatorvalve java at org carbon tomcat ext valves requestcorrelationidvalve invoke requestcorrelationidvalve java at org apache catalina core standardenginevalve invoke standardenginevalve java at org apache catalina connector coyoteadapter service coyoteadapter java at org apache coyote service java at org apache coyote abstractprocessorlight process abstractprocessorlight java at org apache coyote abstractprotocol connectionhandler process abstractprotocol java at org apache tomcat util net nioendpoint socketprocessor dorun nioendpoint java at org apache tomcat util net socketprocessorbase run socketprocessorbase java at org apache tomcat util threads threadpoolexecutor runworker threadpoolexecutor java at org apache tomcat util threads threadpoolexecutor worker run threadpoolexecutor java at org apache tomcat util threads taskthread wrappingrunnable run taskthread java at java lang thread run thread java steps to reproduce start with a clean api manager with docker compose following the steps in create a dcr get a token with the right scopes try to search the doc api curl location request get header accept application json header authorization bearer your token here verify the response is code message internal server error description the server encountered an internal error please contact administrator moreinfo error verify the error message above is shown in the output try to search any other api e g dot curl location request get header accept application json header authorization bearer your token here verify the response is count list pagination offset limit total next previous affected component apim version environment details with versions docker container as in relevant log output no response related issues no response suggested labels no response | 1 |
584,685 | 17,461,692,076 | IssuesEvent | 2021-08-06 11:25:32 | turbot/steampipe-plugin-azure | https://api.github.com/repos/turbot/steampipe-plugin-azure | closed | Add table azure_iothub | enhancement priority:high new table | **References**
https://docs.microsoft.com/en-us/rest/api/iothub/iot-hub-resource/get
We need diagnostic settings details also for iothub. | 1.0 | Add table azure_iothub - **References**
https://docs.microsoft.com/en-us/rest/api/iothub/iot-hub-resource/get
We need diagnostic settings details also for iothub. | priority | add table azure iothub references we need diagnostic settings details also for iothub | 1 |
415,861 | 12,136,052,395 | IssuesEvent | 2020-04-23 13:51:20 | wso2/product-microgateway | https://api.github.com/repos/wso2/product-microgateway | closed | Can we unify path for product performance result | Priority/Normal Type/New Feature | **Description:**
Problem: Path for product performance report is not unified across every product. It is hard to find and the path is not descriptive at all.
Some products have their performance results in the branch root, and the path just has a number that increases.
**Suggested Labels:**
The suggestion is to have a unique path (
example:
**xxxxx/perfomace_results/latest**
while every previous result will have the version in the path like
**xxxxx/perfomace_results/v2.6**
This way we can reference it in documentation without any issue.
| 1.0 | Can we unify path for product performance result - **Description:**
Problem: Path for product performance report is not unified across every product. It is hard to find and the path is not descriptive at all.
Some products have their performance results in the branch root, and the path just has a number that increases.
**Suggested Labels:**
The suggestion is to have a unique path (
example:
**xxxxx/perfomace_results/latest**
while every previous result will have the version in the path like
**xxxxx/perfomace_results/v2.6**
This way we can reference it in documentation without any issue.
| priority | can we unify path for product performance result description problem path for product performance report is not unified across every product it is hard to find and the path is not descriptive at all some product has performance result in branch root and it just has a number that increases suggested labels the suggestion is to have a unique path example xxxxx perfomace results latest while every previous result will have the version in the path like xxxxx perfomace results this way we can reference it in documentation without any issue | 1 |
173,185 | 6,521,363,950 | IssuesEvent | 2017-08-28 20:16:50 | Aubron/scoreshots-templates | https://api.github.com/repos/Aubron/scoreshots-templates | closed | Queens, Multi-Sport Schedule | Priority: Low Status: Needs Finalization / Preview Image | ### Requested by:
Queens University
### Due Date:
2017-08-02
## Template Description:
Original message from Danielle Nicosia of Queens enclosed below:
> I was wondering if there was a way to get a template made up that would allow us to put multiple sports schedules with a cut out of a player from that sport on one template. Also, if it could be made in poster size that would be fantastic!
She's been informed that what we can create right now is rectangular stuff that she could resize to poster dimensions later when that functionality is released.
## Dynamic Considerations:
N/A
## Additional Materials
Client example image, except poster-sized.

| 1.0 | Queens, Multi-Sport Schedule - ### Requested by:
Queens University
### Due Date:
2017-08-02
## Template Description:
Original message from Danielle Nicosia of Queens enclosed below:
> I was wondering if there was a way to get a template made up that would allow us to put multiple sports schedules with a cut out of a player from that sport on one template. Also, if it could be made in poster size that would be fantastic!
She's been informed that what we can create right now is rectangular stuff that she could resize to poster dimensions later when that functionality is released.
## Dynamic Considerations:
N/A
## Additional Materials
Client example image, except poster-sized.

| priority | queens multi sport schedule requested by queens university due date template description original message from danielle nicosia of queens enclosed below i was wondering if there was a way to get a template made up that would allow us to put multiple sports schedules with a cut out of a player from that sport on one template also if it could be made in poster size that would be fantastic she s been informed that what we can create right now is rectangular stuff that she could resize to poster dimensions later when that functionality is released dynamic considerations n a additional materials client example image except poster sized | 1 |
194,180 | 22,261,878,191 | IssuesEvent | 2022-06-10 01:47:36 | Trinadh465/device_renesas_kernel_AOSP10_r33_CVE-2022-0492 | https://api.github.com/repos/Trinadh465/device_renesas_kernel_AOSP10_r33_CVE-2022-0492 | reopened | CVE-2019-19061 (High) detected in linuxlinux-4.19.88 | security vulnerability | ## CVE-2019-19061 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>linuxlinux-4.19.88</b></p></summary>
<p>
<p>The Linux Kernel</p>
<p>Library home page: <a href=https://mirrors.edge.kernel.org/pub/linux/kernel/v4.x/?wsslib=linux>https://mirrors.edge.kernel.org/pub/linux/kernel/v4.x/?wsslib=linux</a></p>
<p>Found in HEAD commit: <a href="https://github.com/Trinadh465/device_renesas_kernel_AOSP10_r33_CVE-2022-0492/commit/8d2169763c8858bce8d07fbb569f01ef9b30383b">8d2169763c8858bce8d07fbb569f01ef9b30383b</a></p>
<p>Found in base branch: <b>master</b></p></p>
</details>
</p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Source Files (2)</summary>
<p></p>
<p>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/linux-4.19.72/drivers/iio/imu/adis_buffer.c</b>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/linux-4.19.72/drivers/iio/imu/adis_buffer.c</b>
</p>
</details>
<p></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
A memory leak in the adis_update_scan_mode_burst() function in drivers/iio/imu/adis_buffer.c in the Linux kernel before 5.3.9 allows attackers to cause a denial of service (memory consumption), aka CID-9c0530e898f3.
<p>Publish Date: 2019-11-18
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2019-19061>CVE-2019-19061</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-19061">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-19061</a></p>
<p>Release Date: 2020-09-25</p>
<p>Fix Resolution: v5.4-rc3</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | True | CVE-2019-19061 (High) detected in linuxlinux-4.19.88 - ## CVE-2019-19061 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>linuxlinux-4.19.88</b></p></summary>
<p>
<p>The Linux Kernel</p>
<p>Library home page: <a href=https://mirrors.edge.kernel.org/pub/linux/kernel/v4.x/?wsslib=linux>https://mirrors.edge.kernel.org/pub/linux/kernel/v4.x/?wsslib=linux</a></p>
<p>Found in HEAD commit: <a href="https://github.com/Trinadh465/device_renesas_kernel_AOSP10_r33_CVE-2022-0492/commit/8d2169763c8858bce8d07fbb569f01ef9b30383b">8d2169763c8858bce8d07fbb569f01ef9b30383b</a></p>
<p>Found in base branch: <b>master</b></p></p>
</details>
</p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Source Files (2)</summary>
<p></p>
<p>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/linux-4.19.72/drivers/iio/imu/adis_buffer.c</b>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/linux-4.19.72/drivers/iio/imu/adis_buffer.c</b>
</p>
</details>
<p></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
A memory leak in the adis_update_scan_mode_burst() function in drivers/iio/imu/adis_buffer.c in the Linux kernel before 5.3.9 allows attackers to cause a denial of service (memory consumption), aka CID-9c0530e898f3.
<p>Publish Date: 2019-11-18
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2019-19061>CVE-2019-19061</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-19061">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-19061</a></p>
<p>Release Date: 2020-09-25</p>
<p>Fix Resolution: v5.4-rc3</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | non_priority | cve high detected in linuxlinux cve high severity vulnerability vulnerable library linuxlinux the linux kernel library home page a href found in head commit a href found in base branch master vulnerable source files linux drivers iio imu adis buffer c linux drivers iio imu adis buffer c vulnerability details a memory leak in the adis update scan mode burst function in drivers iio imu adis buffer c in the linux kernel before allows attackers to cause a denial of service memory consumption aka cid publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution step up your open source security game with mend | 0 |
821,072 | 30,803,056,116 | IssuesEvent | 2023-08-01 04:10:46 | rangav/thunder-client-support | https://api.github.com/repos/rangav/thunder-client-support | closed | Set variable based on test result (or) if header/body property exists? | feature request Priority | Example:
I want to set a variable `{{isVoidable}}` or `{{isRefundable}}` based on the data in the response body.

So something like:
`json.voidable -> equals -> true` - if this passes, the immediate next test is run to set the variable.
If it fails, that next test (to set the variable) is skipped.
Is this kind of functionality already possible? | 1.0 | Set variable based on test result (or) if header/body property exists? - Example:
I want to set a variable `{{isVoidable}}` or `{{isRefundable}}` based on the data in the response body.

So something like:
`json.voidable -> equals -> true` - if this passes, the immediate next test is run to set the variable.
If it fails, that next test (to set the variable) is skipped.
Is this kind of functionality already possible? | priority | set variable based on test result or if header body property exists example i want to set a variable isvoidable or isrefundable based on the data in the response body so something like json voidable equals true if this passes the immediate next test is run to set the variable if it fails that next test to set the variable is skipped is this kind of functionality already possible | 1 |
73,740 | 15,281,690,062 | IssuesEvent | 2021-02-23 08:30:57 | raindigi/site-preview | https://api.github.com/repos/raindigi/site-preview | opened | CVE-2020-7608 (Medium) detected in nodev15.5.0 | security vulnerability | ## CVE-2020-7608 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>nodev15.5.0</b></p></summary>
<p>
<p>Node.js JavaScript runtime :sparkles::turtle::rocket::sparkles:</p>
<p>Library home page: <a href=https://github.com/nodejs/node.git>https://github.com/nodejs/node.git</a></p>
<p>Found in HEAD commit: <a href="https://github.com/raindigi/site-preview/commit/006e82abf45997e0f560bc473e9718eebd131bad">006e82abf45997e0f560bc473e9718eebd131bad</a></p>
</p>
</details>
</p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Source Files (0)</summary>
<p></p>
<p>
</p>
</details>
<p></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
yargs-parser could be tricked into adding or modifying properties of Object.prototype using a "__proto__" payload.
<p>Publish Date: 2020-03-16
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-7608>CVE-2020-7608</a></p>
</p>
</details>
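For context, the payload class described above can be reproduced in a few lines against an unpatched yargs-parser; the sketch below is illustrative only — the `polluted` property name is made up, and the flag shape follows the commonly published proof of concept:
```ts
// Prototype-pollution sketch against a vulnerable yargs-parser (fixed in 13.1.2 / 15.0.1 / 18.1.1).
import parser from "yargs-parser";

// argv smuggles a property onto Object.prototype through "__proto__".
parser(["--foo.__proto__.polluted", "yes"]);

// On a patched release this prints "undefined"; on a vulnerable one it prints "yes",
// because every plain object now inherits the injected property.
console.log(({} as Record<string, unknown>).polluted);
```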
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.3</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: Low
- Privileges Required: Low
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: Low
- Integrity Impact: Low
- Availability Impact: Low
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-7608">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-7608</a></p>
<p>Release Date: 2020-03-16</p>
<p>Fix Resolution: v18.1.1;13.1.2;15.0.1</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | True | CVE-2020-7608 (Medium) detected in nodev15.5.0 - ## CVE-2020-7608 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>nodev15.5.0</b></p></summary>
<p>
<p>Node.js JavaScript runtime :sparkles::turtle::rocket::sparkles:</p>
<p>Library home page: <a href=https://github.com/nodejs/node.git>https://github.com/nodejs/node.git</a></p>
<p>Found in HEAD commit: <a href="https://github.com/raindigi/site-preview/commit/006e82abf45997e0f560bc473e9718eebd131bad">006e82abf45997e0f560bc473e9718eebd131bad</a></p>
</p>
</details>
</p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Source Files (0)</summary>
<p></p>
<p>
</p>
</details>
<p></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
yargs-parser could be tricked into adding or modifying properties of Object.prototype using a "__proto__" payload.
<p>Publish Date: 2020-03-16
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-7608>CVE-2020-7608</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.3</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: Low
- Privileges Required: Low
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: Low
- Integrity Impact: Low
- Availability Impact: Low
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-7608">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-7608</a></p>
<p>Release Date: 2020-03-16</p>
<p>Fix Resolution: v18.1.1;13.1.2;15.0.1</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | non_priority | cve medium detected in cve medium severity vulnerability vulnerable library node js javascript runtime sparkles turtle rocket sparkles library home page a href found in head commit a href vulnerable source files vulnerability details yargs parser could be tricked into adding or modifying properties of object prototype using a proto payload publish date url a href cvss score details base score metrics exploitability metrics attack vector local attack complexity low privileges required low user interaction none scope unchanged impact metrics confidentiality impact low integrity impact low availability impact low for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution step up your open source security game with whitesource | 0 |
488,924 | 14,099,514,912 | IssuesEvent | 2020-11-06 01:35:50 | AY2021S1-CS2103T-T10-1/tp | https://api.github.com/repos/AY2021S1-CS2103T-T10-1/tp | closed | Tag pane resizes when typing in command box | priority.High severity.High type.Bug | 
The command is not yet executed and Enter is not pressed; just typing stuff in the command box causes the tags pane to increase its height. | 1.0 | Tag pane resizes when typing in command box - 
The command is not yet executed and Enter is not pressed; just typing stuff in the command box causes the tags pane to increase its height. | priority | tag pane resizes when typing in command box the command is not yet executed enter is not pressed just typing stuff in the command box causes the tags pane increase its height | 1 |
118,620 | 4,751,262,790 | IssuesEvent | 2016-10-22 19:50:06 | SuperTux/supertux | https://api.github.com/repos/SuperTux/supertux | reopened | 0.4 crashes on startup on Mac OS X 10.7 | os:macos priority:high type:bug | The new supertux release crashes on startup on my admittedly older 2006 intel iMac. I have not found any requirements for versions, so I do not know whether this is expected.
What can I do to help debug the problem? | 1.0 | 0.4 crashes on startup on Mac OS X 10.7 - The new supertux release crashes on startup on my admittedly older 2006 intel iMac. I have not found any requirements for versions, so I do not know whether this is expected.
What can I do to help debug the problem? | priority | crashes on startup on mac os x the new supertux release crashes on startup on my admitedly older intel imac i have not found any requirements for versions so i do not know whether this is expected what can i do to help debug the problem | 1 |
389,819 | 11,517,594,378 | IssuesEvent | 2020-02-14 08:45:10 | DimensionDev/Maskbook | https://api.github.com/repos/DimensionDev/Maskbook | closed | Rationalise localization | Component: i18n Priority: P4 (Do when free) | Some users reports say that Maskbook is being used in Hong Kong too. Looks like we can't just settle with a single `zh`: we need to add some subdivisions. While we are at it, we should also rationalize our localisation system.
- [ ] Use a proper library that has:
- [ ] Plurals
- [ ] Grammatical Gender
- [ ] Put it on a platform with Git hooks
* * *
I recommend Gettext/Jed if we want to put the thing on some translation platform immediately; i18next is sort of decent at that too. If we want something really good and powerful, however, I strongly recommend using [MessageFormat](https://messageformat.github.io/). Few platforms support it for now, but Weblate (WeblateOrg/weblate#2967) might do soon. There's some precedence of its adoption in the JS community by Angular folks, so we aren't doing anything crazy here.
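To make the plural/gender point concrete, here is a small sketch of an ICU MessageFormat string; the `@messageformat/core` package is just one assumed runtime and the message itself is made up:
```ts
// Sketch only: assumes @messageformat/core as the MessageFormat runtime.
import MessageFormat from "@messageformat/core";

const mf = new MessageFormat("en");

// Plurals (and gender, via `select`) live in the message string instead of in code.
const newPosts = mf.compile(
  "{count, plural, one {You have # new post} other {You have # new posts}}"
);

console.log(newPosts({ count: 1 })); // "You have 1 new post"
console.log(newPosts({ count: 5 })); // "You have 5 new posts"
```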
I don't really care about what platform we use, although again I am strongly leaning Weblate. If we are using Crowdin, however, be sure to review everything and don't let these damned stale votes decide what strings to use.
PS: don't change the spelling mix | 1.0 | Rationalise localization - Some users reports say that Maskbook is being used in Hong Kong too. Looks like we can't just settle with a single `zh`: we need to add some subdivisions. While we are at it, we should also rationalize our localisation system.
- [ ] Use a proper library that has:
- [ ] Plurals
- [ ] Grammatical Gender
- [ ] Put it on a platform with Git hooks
* * *
I recommend Gettext/Jed if we want to put the thing on some translation platform immediately; i18next is sort of decent at that too. If we want something really good and powerful, however, I strongly recommend using [MessageFormat](https://messageformat.github.io/). Few platforms support it for now, but Weblate (WeblateOrg/weblate#2967) might do soon. There's some precedence of its adoption in the JS community by Angular folks, so we aren't doing anything crazy here.
I don't really care about what platform we use, although again I am strongly leaning Weblate. If we are using Crowdin, however, be sure to review everything and don't let these damned stale votes decide what strings to use.
PS: don't change the spelling mix | priority | rationalise localization some users reports say that maskbook is being used in hong kong too looks like we can t just settle with a single zh we need to add some subdivisions while we are at it we should also rationalize our localisation system use a proper library that has plurals grammatical gender put it on a platform with git hooks i recommend gettext jed if we want to put the thing on some translation platform immediately is sort of decent at that too if we want something really good and powerful however i strongly recommend using few platforms support it for now but weblate weblateorg weblate might do soon there s some precedence of its adoption in the js community by angular folks so we aren t doing anything crazy here i don t really care about what platform we use although again i am strongly leaning weblate if we are using crowdin however be sure to review everything and don t let these damned stale votes decide what strings to use ps don t change the spelling mix | 1 |
177,353 | 6,577,606,002 | IssuesEvent | 2017-09-12 02:04:34 | apache/incubator-openwhisk-wskdeploy | https://api.github.com/repos/apache/incubator-openwhisk-wskdeploy | closed | Trigger supports "source" according to the code, yet spec has "feed" as trigger sub element. | bug priority: high | We should support `feed` under `trigger`, mark `source` deprecated.
In the use case `alarmtrigger`, a trigger is described as below.
```
package:
  name: helloworld
  triggers:
    Every12Hours:
      source: /whisk.system/alarms/alarm
```
If I change `source` to `feed`, it will create a trigger without `feed` in OpenWhisk.
```
$ wsk trigger get Every12Hours
ok: got trigger Every12Hours
{
"namespace": "guoyingc@cn.ibm.com_dev",
"name": "Every12Hours",
"version": "0.0.1",
"parameters": [
{
"key": "cron",
"value": "0 */12 * * *"
}
],
"limits": {},
"publish": false
}
``` | 1.0 | Trigger supports "source" according to the code, yet spec has "feed" as trigger sub element. - We should support `feed` under `trigger`, mark `source` deprecated.
In the use case `alarmtrigger`, a trigger is described as below.
```
package:
  name: helloworld
  triggers:
    Every12Hours:
      source: /whisk.system/alarms/alarm
```
If I change `source` to `feed`, it will create a trigger without `feed` in OpenWhisk.
```
$ wsk trigger get Every12Hours
ok: got trigger Every12Hours
{
"namespace": "guoyingc@cn.ibm.com_dev",
"name": "Every12Hours",
"version": "0.0.1",
"parameters": [
{
"key": "cron",
"value": "0 */12 * * *"
}
],
"limits": {},
"publish": false
}
``` | priority | trigger supports source according to the code yet spec has feed as trigger sub element we should support feed under trigger mark source deprecated in the use case alarmtrigger a trigger is described as below package name helloworld triggers source whisk system alarms alarm if i change source to feed it will create a trigger without feed in openwhisk wsk trigger get ok got trigger namespace guoyingc cn ibm com dev name version parameters key cron value limits publish false | 1 |
497,476 | 14,371,366,516 | IssuesEvent | 2020-12-01 12:28:14 | replicate/replicate | https://api.github.com/repos/replicate/replicate | closed | Add development support for Linux | priority/medium type/bug | The development environment outlined in `CONTRIBUTING.md` currently does not support Linux systems. Fixing this would enable more developers to contribute to the project. | 1.0 | Add development support for Linux - The development environment outlined in `CONTRIBUTING.md` currently does not support Linux systems. Fixing this would enable more developers to contribute to the project. | priority | add development support for linux the development environment outlined in contributing md currently does not support linux systems fixing this would enable more developers to contribute to the project | 1 |
446,984 | 12,881,367,641 | IssuesEvent | 2020-07-12 11:40:07 | grpc/grpc | https://api.github.com/repos/grpc/grpc | opened | Status(StatusCode=Cancelled, Detail="No grpc-status found on response.") | kind/question priority/P3 | Grpc.Core.RpcException: Status(StatusCode=Cancelled, Detail="No grpc-status found on response.") .
This error is always reported and thrown on the server side when I return the result, here: static readonly grpc::Marshaller<global::GrpcServer.Web.Protos.GetGoodsByGdsCodeResponse> __Marshaller_GetGoodsByGdsCodeResponse = grpc::Marshallers.Create((arg) => global::Google.Protobuf.MessageExtensions.ToByteArray(arg), global::GrpcServer.Web.Protos.GetGoodsByGdsCodeResponse.Parser.ParseFrom);
| 1.0 | Status(StatusCode=Cancelled, Detail="No grpc-status found on response.") - Grpc.Core.RpcException: Status(StatusCode=Cancelled, Detail="No grpc-status found on response.") .
This error is always reported and thrown on the server side when I return the result, here: static readonly grpc::Marshaller<global::GrpcServer.Web.Protos.GetGoodsByGdsCodeResponse> __Marshaller_GetGoodsByGdsCodeResponse = grpc::Marshallers.Create((arg) => global::Google.Protobuf.MessageExtensions.ToByteArray(arg), global::GrpcServer.Web.Protos.GetGoodsByGdsCodeResponse.Parser.ParseFrom);
| priority | status statuscode cancelled detail no grpc status found on response grpc core rpcexception status statuscode cancelled detail no grpc status found on response always report this error and throw it when it happens at the server side when i return result here static readonly grpc marshaller marshaller getgoodsbygdscoderesponse grpc marshallers create arg global google protobuf messageextensions tobytearray arg global grpcserver web protos getgoodsbygdscoderesponse parser parsefrom | 1 |
19,459 | 4,403,393,879 | IssuesEvent | 2016-08-11 07:42:35 | datagraft/datagraft-portal | https://api.github.com/repos/datagraft/datagraft-portal | opened | Add groups | backend documentation enhancement UI | Add support for specifying groups of users and managing them. Make it possible to support access/modifications of assets by groups. | 1.0 | Add groups - Add support for specifying groups of users and managing them. Make it possible to support access/modifications of assets by groups. | non_priority | add groups add support for specifying groups of users and managing them make it possible to support access modifications of assets by groups | 0 |
73,768 | 3,421,077,913 | IssuesEvent | 2015-12-08 17:14:58 | ccswbs/hjckrrh | https://api.github.com/repos/ccswbs/hjckrrh | closed | G0, PG2 - Any page which adds an existing node as a panel generates an empty h2 with a link. | feature: Custom Content (C) feature: general (G) feature: page (P) priority: normal type: accessibility type: bug type: drupal issue type: enhancement request | An empty h2 (linked) title is being generated by the node.tpl.php file. This affects any site that adds an existing node as a panel on their pages.
**Steps necessary to demonstrate issue:**
1. Go to any panel page (Admin > Structure > Pages)
2. Add a panel to any region (e.g. middle)
3. Select "Existing content" and add any node
4. Leave all checkmarks unchecked and select "Full content" build mode.
5. Save the panel page.
6. Inspect the added panel. You'll see an extra (empty) h2 title tag with a link inside, but no text. This causes accessibility issues (i.e. a link with no descriptive text).
Code Sample:
`
<div class="pane-content">
<div id="node-181" class="node node-page clearfix" about="/psychology/graduate/cpade/about" typeof="foaf:Document">
<h2><a href="/psychology/graduate/cpade/about"></a></h2>
<span property="dc:title" content="" class="rdf-meta element-hidden"></span>
<div class="content">
`
**Suggested fix:** Override node.tpl.php using our ug_theme folder. Add an if statement that checks if the title variable has a value before printing out the h2 with the link.
See https://www-stage.uoguelph.ca/psychology/graduate/cpade for an example. @tqureshi-uog can point us to more examples of this as well. | 1.0 | G0, PG2 - Any page which adds an existing node as a panel generates an empty h2 with a link. - An empty h2 (linked) title is being generated by the node.tpl.php file. This affects any site that adds an existing node as a panel on their pages.
**Steps necessary to demonstrate issue:**
1. Go to any panel page (Admin > Structure > Pages)
2. Add a panel to any region (e.g. middle)
3. Select "Existing content" and add any node
4. Leave all checkmarks unchecked and select "Full content" build mode.
5. Save the panel page.
6. Inspect the added panel. You'll see an extra (empty) h2 title tag with a link inside, but no text. This causes accessibility issues (i.e. a link with no descriptive text).
Code Sample:
`
<div class="pane-content">
<div id="node-181" class="node node-page clearfix" about="/psychology/graduate/cpade/about" typeof="foaf:Document">
<h2><a href="/psychology/graduate/cpade/about"></a></h2>
<span property="dc:title" content="" class="rdf-meta element-hidden"></span>
<div class="content">
`
**Suggested fix:** Override node.tpl.php using our ug_theme folder. Add an if statement that checks if the title variable has a value before printing out the h2 with the link.
See https://www-stage.uoguelph.ca/psychology/graduate/cpade for an example. @tqureshi-uog can point us to more examples of this as well. | priority | any page which adds an existing node as a panel generates an empty with a link an empty linked title is being generated by the node tpl php file this affects any site that adds an existing node as a panel on their pages steps necessary to demonstrate issue go to any panel page admin structure pages add a panel to any region e g middle select existing content and add any node leave all checkmarks unchecked and select full content build mode save the panel page inspect the added panel you ll see an extra empty title tag with a link inside but no text this causes accessibility issues i e a link with no descriptive text code sample suggested fix override node tpl php using our ug theme folder add an if statement that checks if the title variable has a value before printing out the with the link see for an example tqureshi uog can point us to more examples of this as well | 1 |
793,934 | 28,017,335,212 | IssuesEvent | 2023-03-28 00:30:33 | matrixorigin/matrixone | https://api.github.com/repos/matrixorigin/matrixone | closed | [Feature Request]: text data type | priority/p0 kind/feature | ### Is there an existing issue for the same feature request?
- [X] I have checked the existing issues.
### Is your feature request related to a problem?
```Markdown
Text is a common data type.
```
### Describe the feature you'd like
MO supports a single TEXT data type instead of tinytext, mediumtext, text, longtext of MySQL 8.0.
It should cover the ranges of these four data types.
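As a rough illustration of what the request implies — a single TEXT column that also absorbs the MEDIUMTEXT/LONGTEXT ranges — the sketch below drives it through a stock MySQL driver, since MatrixOne speaks the MySQL protocol; host, port, credentials and sizes are placeholders, not documented defaults:
```ts
// Sketch: exercise the proposed single TEXT type through a generic MySQL client.
import mysql from "mysql2/promise";

async function main() {
  // Placeholder connection details.
  const conn = await mysql.createConnection({
    host: "127.0.0.1",
    port: 6001,
    user: "root",
    password: "secret",
    database: "test",
  });

  await conn.query("CREATE TABLE notes (id INT, body TEXT)");

  // 70 000 characters would overflow MySQL's 64 KB TEXT, but should fit if
  // MO's single TEXT type also covers the MEDIUMTEXT/LONGTEXT ranges.
  await conn.query("INSERT INTO notes VALUES (1, ?)", ["a".repeat(70_000)]);

  const [rows] = await conn.query("SELECT id, LENGTH(body) AS len FROM notes");
  console.log(rows);
  await conn.end();
}

main().catch(console.error);
```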
### Describe implementation you've considered
_No response_
### Documentation, Adoption, Use Case, Migration Strategy
_No response_
### Additional information
_No response_ | 1.0 | [Feature Request]: text data type - ### Is there an existing issue for the same feature request?
- [X] I have checked the existing issues.
### Is your feature request related to a problem?
```Markdown
Text is a common data type.
```
### Describe the feature you'd like
MO supports a single TEXT data type instead of tinytext, mediumtext, text, longtext of MySQL 8.0.
It should cover the ranges of these four data types.
### Describe implementation you've considered
_No response_
### Documentation, Adoption, Use Case, Migration Strategy
_No response_
### Additional information
_No response_ | priority | text data type is there an existing issue for the same feature request i have checked the existing issues is your feature request related to a problem markdown text is a common data type describe the feature you d like mo supports a single text data type instead of tinytext mediumtext text longtext of mysql it should cover the ranges of these four data types describe implementation you ve considered no response documentation adoption use case migration strategy no response additional information no response | 1 |
339,979 | 10,265,228,963 | IssuesEvent | 2019-08-22 18:20:02 | craftercms/craftercms | https://api.github.com/repos/craftercms/craftercms | closed | [studio] Merge conflict results in empty dialog box | bug priority: high | ## Describe the bug
Pulling from a remote repository and causing a conflict opens an empty dialog.
## To Reproduce
Steps to reproduce the behavior:
1. Create a site using Editorial BP called `parent`
2. Create a site called `child` using remote repository, and point it to {PATH_TO_CRAFTER_BUNDLE}/crafter-authoring/data/repos/sites/parent/sandbox
3. Edit the homepage and make a change in `child`
4. Edit the homepage and make a change in `parent`
5. Go to `child` site
6. Go to SiteConfig > Remote Repositories and pull from the parent
7. See error
## Expected behavior
The dialog should report that there was a merge conflict.
## Screenshots

## Logs
https://gist.github.com/sumerjabri/05c73151a4385889e32724ae1a76efd5
## Specs
### Version
3.1.1-SNAPSHOT
### OS
Linux, Ubuntu 18.04 LTS
### Browser
Chrome
## Additional context
N/A
| 1.0 | [studio] Merge conflict results in empty dialog box - ## Describe the bug
Pulling from a remote repository and causing a conflict opens an empty dialog.
## To Reproduce
Steps to reproduce the behavior:
1. Create a site using Editorial BP called `parent`
2. Create a site called `child` using remote repository, and point it to {PATH_TO_CRAFTER_BUNDLE}/crafter-authoring/data/repos/sites/parent/sandbox
3. Edit the homepage and make a change in `child`
4. Edit the homepage and make a change in `parent`
5. Go to `child` site
6. Go to SiteConfig > Remote Repositories and pull from the parent
7. See error
## Expected behavior
The dialog should report that there was a merge conflict.
## Screenshots

## Logs
https://gist.github.com/sumerjabri/05c73151a4385889e32724ae1a76efd5
## Specs
### Version
3.1.1-SNAPSHOT
### OS
Linux, Ubuntu 18.04 LTS
### Browser
Chrome
## Additional context
N/A
| priority | merge conflict results in empty dialog box describe the bug pulling from a remote repository and causing a conflict opens an empty dialog to reproduce steps to reproduce the behavior create a site using editorial bp called parent create a site called child using remote repository and point it to path to crafter bundle crafter authoring data repos sites parent sandbox edit the homepage and make a change in child edit the home and make a change in parent go to child site go to siteconfig remote repositories and pull from the parent see error expected behavior the dialog should report that there was a merge conflict screenshots logs specs version snapshot os linux ubuntu lts browser chrome additional context n a | 1 |
247,646 | 7,921,558,598 | IssuesEvent | 2018-07-05 07:57:36 | kubernetes/kubeadm | https://api.github.com/repos/kubernetes/kubeadm | reopened | Kube-dns failed to resolve the services inside the pods after stopping one master node in a multi-master/HA setup | area/HA kind/bug priority/backlog sig/cluster-lifecycle sig/network | ## BUG REPORT
<!--
If this is a BUG REPORT, please:
- Fill in as much of the template below as you can. If you leave out information, we can't help you as well.
If this is a FEATURE REQUEST, please:
- Describe *in detail* the feature/behavior/change you'd like to see.
In both cases, be ready for followup questions, and please respond in a timely
manner. If we can't reproduce a bug or think a feature already exists, we
might close your issue. If we're wrong, PLEASE feel free to reopen it and
explain why.
-->
## Versions
**kubeadm version** (use `kubeadm version`): 1.10.5
**Environment**:
- **Kubernetes version** (use `kubectl version`): 1.10.5
- **Kubelet version** 1.10.5
- **Cloud provider or hardware configuration**: EC2 instances on aws
- **OS** (e.g. from /etc/os-release): Centos 7.5
- **Kernel** (e.g. `uname -a`): 3.10.0-862.3.2.el7.x86_64
- **Others**: It's 3 node(ec2 instances) setup on aws. Deployed kubernetes HA using kubeadm. All 3 nodes are in master role and scheduling is allowed on all nodes i.e. on all master nodes.
- **kubernetes components docker images**
```
k8s.gcr.io/kube-scheduler-amd64 v1.10.5 1f3f4b7d8ff7 7 days ago 51.2MB
k8s.gcr.io/kube-controller-manager-amd64 v1.10.5 c38845efbf65 7 days ago 151MB
k8s.gcr.io/kube-apiserver-amd64 v1.10.5 06990282ebc5 7 days ago 228MB
k8s.gcr.io/kube-proxy-amd64 v1.10.5 32609f1b11ae 7 days ago 97.9MB
weaveworks/weave-npc 2.3.0 21545eb3d6f9 2 months ago 47.2MB
weaveworks/weave-kube 2.3.0 f15514acce73 2 months ago 96.8MB
weaveworks/weaveexec 2.3.0 c2030610fb92 2 months ago 79.1MB
k8s.gcr.io/k8s-dns-dnsmasq-nanny-amd64 1.14.8 c2ce1ffb51ed 5 months ago 41MB
k8s.gcr.io/k8s-dns-sidecar-amd64 1.14.8 6f7f2dc7fab5 5 months ago 42.2MB
k8s.gcr.io/k8s-dns-kube-dns-amd64 1.14.8 80cc5ea4b547 5 months ago 50.5MB
k8s.gcr.io/pause-amd64 3.1 da86e6ba6ca1 6 months ago 742kB
```
## What happened?
I brought down one node intentionally to test the k8s ha behaviour. Kubectl get nodes is showing correct output by mentioning 2 nodes in Ready state and 1 node in NotReady state. All the pods in kube-system namespace are up and running.
Kube-dns pod was running on the node which I brought down. Before bringing down the node, everything was working fine. After bringing down the node, kube-dns pod got rescheduled to the one of the node which was in running state.
```
kube-dns-86f4d74b45-7jf8d 3/3 Running 0 1h 10.117.113.109 master-0
kube-dns-86f4d74b45-7whkm 3/3 Unknown 0 3h 10.117.113.129 master-1
```
Kube-dns pod i.e. `kube-dns-86f4d74b45-7whkm` which was running on master-1 went into Unknown state and I believe, this is as per the design. New pod `kube-dns-86f4d74b45-7jf8d` came up on master-0 and all containers of the pod are in running state.
**Please check the ip of the pods**
Kubedns pod which is in **Running** state is having **10.117.113.109**
and the kubedns pod which is in **Unknown** state was having **10.117.113.129**
Following is my get node output
```
[root@ip-10-0-1-104 centos]# kubectl get nodes -o wide
NAME STATUS ROLES AGE VERSION EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
master-0 Ready master 3h v1.10.5 <none> CentOS Linux 7 (Core) 3.10.0-862.3.2.el7.x86_64 docker://17.12.0-ce
master-2 Ready master 3h v1.10.5 <none> CentOS Linux 7 (Core) 3.10.0-862.3.2.el7.x86_64 docker://17.12.0-ce
master-1 NotReady master 3h v1.10.5 <none> CentOS Linux 7 (Core) 3.10.0-862.3.2.el7.x86_64 docker://17.12.0-ce
```
master-0 and master-2 are in running state.
Now the main issue is here,
Pods running on master-0 are resolving the service name properly
> $ kubectl exec -it my-pod-1 -- host kubernetes.default
and output is:
kubernetes.default.svc.cluster.local has address 10.96.0.1
Pods running on master-2 are giving issue in the resolution.
> $ kubectl exec -it my-pod-2 -- host kubernetes.default
and output is:
;; connection timed out; trying next origin
;; connection timed out; trying next origin
Following is my observation and I think the root cause
I went through the iptables rules on both nodes which are in Ready state.
On master-0, on which the resolution is working inside the pods running on it, following are the rules
```
[root@ip-10-0-1-104 centos]# iptables-save |grep dns
-A KUBE-SEP-5Y5UGQTIEY6E6GMK -s 10.117.113.109/32 -m comment --comment "kube-system/kube-dns:dns" -j KUBE-MARK-MASQ
-A KUBE-SEP-5Y5UGQTIEY6E6GMK -p udp -m comment --comment "kube-system/kube-dns:dns" -m udp -j DNAT --to-destination 10.117.113.109:53
-A KUBE-SEP-A3LANKKXX5Q6MODX -s 10.117.113.109/32 -m comment --comment "kube-system/kube-dns:dns-tcp" -j KUBE-MARK-MASQ
-A KUBE-SEP-A3LANKKXX5Q6MODX -p tcp -m comment --comment "kube-system/kube-dns:dns-tcp" -m tcp -j DNAT --to-destination 10.117.113.109:53
-A KUBE-SERVICES ! -s 10.117.113.0/24 -d 10.96.0.10/32 -p udp -m comment --comment "kube-system/kube-dns:dns cluster IP" -m udp --dport 53 -j KUBE-MARK-MASQ
-A KUBE-SERVICES -d 10.96.0.10/32 -p udp -m comment --comment "kube-system/kube-dns:dns cluster IP" -m udp --dport 53 -j KUBE-SVC-TCOU7JCQXEZGVUNU
-A KUBE-SERVICES ! -s 10.117.113.0/24 -d 10.96.0.10/32 -p tcp -m comment --comment "kube-system/kube-dns:dns-tcp cluster IP" -m tcp --dport 53 -j KUBE-MARK-MASQ
-A KUBE-SERVICES -d 10.96.0.10/32 -p tcp -m comment --comment "kube-system/kube-dns:dns-tcp cluster IP" -m tcp --dport 53 -j KUBE-SVC-ERIFXISQEP7F7OF4
-A KUBE-SVC-ERIFXISQEP7F7OF4 -m comment --comment "kube-system/kube-dns:dns-tcp" -j KUBE-SEP-A3LANKKXX5Q6MODX
-A KUBE-SVC-TCOU7JCQXEZGVUNU -m comment --comment "kube-system/kube-dns:dns" -j KUBE-SEP-5Y5UGQTIEY6E6G
```
Check IP in first 4 rules.
On master-2, following are the iptable rules relevant to kube-dns
```
[root@ip-10-0-1-152 centos]# iptables-save |grep dns
-A KUBE-SEP-R6YMCWDUASN32VLL -s 10.117.113.129/32 -m comment --comment "kube-system/kube-dns:dns" -j KUBE-MARK-MASQ
-A KUBE-SEP-R6YMCWDUASN32VLL -p udp -m comment --comment "kube-system/kube-dns:dns" -m udp -j DNAT --to-destination 10.117.113.129:53
-A KUBE-SEP-ZZF2UW7NTBAIKDZD -s 10.117.113.129/32 -m comment --comment "kube-system/kube-dns:dns-tcp" -j KUBE-MARK-MASQ
-A KUBE-SEP-ZZF2UW7NTBAIKDZD -p tcp -m comment --comment "kube-system/kube-dns:dns-tcp" -m tcp -j DNAT --to-destination 10.117.113.129:53
-A KUBE-SERVICES ! -s 10.117.113.0/24 -d 10.96.0.10/32 -p tcp -m comment --comment "kube-system/kube-dns:dns-tcp cluster IP" -m tcp --dport 53 -j KUBE-MARK-MASQ
-A KUBE-SERVICES -d 10.96.0.10/32 -p tcp -m comment --comment "kube-system/kube-dns:dns-tcp cluster IP" -m tcp --dport 53 -j KUBE-SVC-ERIFXISQEP7F7OF4
-A KUBE-SERVICES ! -s 10.117.113.0/24 -d 10.96.0.10/32 -p udp -m comment --comment "kube-system/kube-dns:dns cluster IP" -m udp --dport 53 -j KUBE-MARK-MASQ
-A KUBE-SERVICES -d 10.96.0.10/32 -p udp -m comment --comment "kube-system/kube-dns:dns cluster IP" -m udp --dport 53 -j KUBE-SVC-TCOU7JCQXEZGVUNU
-A KUBE-SVC-ERIFXISQEP7F7OF4 -m comment --comment "kube-system/kube-dns:dns-tcp" -j KUBE-SEP-ZZF2UW7NTBAIKDZD
-A KUBE-SVC-TCOU7JCQXEZGVUNU -m comment --comment "kube-system/kube-dns:dns" -j KUBE-SEP-R6YMCWDUASN32VLL
```
Check IP in first 4 rules.
The kube-dns pod which is in **Running** state has IP **10.117.113.109**,
and the kube-dns pod which is in **Unknown** state had IP **10.117.113.129**.
I can see that the iptables rules on the master-2 node are still pointing at the IP of the kube-dns pod that is in Unknown state. The iptables rules on the master-0 node seem to be correct, hence there is no issue on that node.
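A minimal diagnostic sketch of how one might confirm the stale rules on the affected node — the service name and the kube-proxy label/pod name below are the kubeadm defaults and placeholders, so they may differ in other setups:
```
# Compare what Kubernetes thinks the kube-dns endpoints are with what
# kube-proxy has actually programmed into the NAT table on master-2.
kubectl -n kube-system get endpoints kube-dns -o wide    # should list 10.117.113.109
iptables-save -t nat | grep 'kube-dns:dns'               # still shows 10.117.113.129 here

# If the rules are stale, deleting the kube-proxy pod on master-2 forces a
# full resync of the KUBE-SVC/KUBE-SEP chains when it restarts.
kubectl -n kube-system get pods -o wide | grep kube-proxy     # find the pod running on master-2
kubectl -n kube-system delete pod <kube-proxy-pod-on-master-2>   # placeholder name
```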
## What you expected to happen?
All pods on all Ready nodes should resolve the services. As far as I know, the iptables rules should have been updated to point to the correct kube-dns pod IP on node master-2.
## How to reproduce it (as minimally and precisely as possible)?
In the multi-master setup, bring down the node on which kube-dns pod is running.
## Anything else we need to know?
I followed the https://kubernetes.io/docs/setup/independent/high-availability/ document to setup the k8s HA and in my case etcd cluster is running on same nodes but not inside the containers. | 1.0 | Kube-dns failed to resolve the services inside the pods after stopping one master node in a multi-master/HA setup - ## BUG REPORT
<!--
If this is a BUG REPORT, please:
- Fill in as much of the template below as you can. If you leave out information, we can't help you as well.
If this is a FEATURE REQUEST, please:
- Describe *in detail* the feature/behavior/change you'd like to see.
In both cases, be ready for followup questions, and please respond in a timely
manner. If we can't reproduce a bug or think a feature already exists, we
might close your issue. If we're wrong, PLEASE feel free to reopen it and
explain why.
-->
## Versions
**kubeadm version** (use `kubeadm version`): 1.10.5
**Environment**:
- **Kubernetes version** (use `kubectl version`): 1.10.5
- **Kubelet version** 1.10.5
- **Cloud provider or hardware configuration**: EC2 instances on aws
- **OS** (e.g. from /etc/os-release): Centos 7.5
- **Kernel** (e.g. `uname -a`): 3.10.0-862.3.2.el7.x86_64
- **Others**: It's 3 node(ec2 instances) setup on aws. Deployed kubernetes HA using kubeadm. All 3 nodes are in master role and scheduling is allowed on all nodes i.e. on all master nodes.
- **kubernetes components docker images**
```
k8s.gcr.io/kube-scheduler-amd64 v1.10.5 1f3f4b7d8ff7 7 days ago 51.2MB
k8s.gcr.io/kube-controller-manager-amd64 v1.10.5 c38845efbf65 7 days ago 151MB
k8s.gcr.io/kube-apiserver-amd64 v1.10.5 06990282ebc5 7 days ago 228MB
k8s.gcr.io/kube-proxy-amd64 v1.10.5 32609f1b11ae 7 days ago 97.9MB
weaveworks/weave-npc 2.3.0 21545eb3d6f9 2 months ago 47.2MB
weaveworks/weave-kube 2.3.0 f15514acce73 2 months ago 96.8MB
weaveworks/weaveexec 2.3.0 c2030610fb92 2 months ago 79.1MB
k8s.gcr.io/k8s-dns-dnsmasq-nanny-amd64 1.14.8 c2ce1ffb51ed 5 months ago 41MB
k8s.gcr.io/k8s-dns-sidecar-amd64 1.14.8 6f7f2dc7fab5 5 months ago 42.2MB
k8s.gcr.io/k8s-dns-kube-dns-amd64 1.14.8 80cc5ea4b547 5 months ago 50.5MB
k8s.gcr.io/pause-amd64 3.1 da86e6ba6ca1 6 months ago 742kB
```
## What happened?
I brought down one node intentionally to test the k8s HA behaviour. `kubectl get nodes` shows the correct output: 2 nodes in Ready state and 1 node in NotReady state. All the pods in the kube-system namespace are up and running.
The kube-dns pod was running on the node which I brought down. Before bringing down the node, everything was working fine. After bringing down the node, the kube-dns pod got rescheduled to one of the nodes which was still running.
```
kube-dns-86f4d74b45-7jf8d 3/3 Running 0 1h 10.117.113.109 master-0
kube-dns-86f4d74b45-7whkm 3/3 Unknown 0 3h 10.117.113.129 master-1
```
The kube-dns pod, i.e. `kube-dns-86f4d74b45-7whkm`, which was running on master-1 went into Unknown state, and I believe this is as per the design. A new pod `kube-dns-86f4d74b45-7jf8d` came up on master-0 and all containers of the pod are in Running state.
**Please check the ip of the pods**
The kube-dns pod which is in **Running** state has IP **10.117.113.109**,
and the kube-dns pod which is in **Unknown** state had IP **10.117.113.129**.
Following is my get node output
```
[root@ip-10-0-1-104 centos]# kubectl get nodes -o wide
NAME STATUS ROLES AGE VERSION EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
master-0 Ready master 3h v1.10.5 <none> CentOS Linux 7 (Core) 3.10.0-862.3.2.el7.x86_64 docker://17.12.0-ce
master-2 Ready master 3h v1.10.5 <none> CentOS Linux 7 (Core) 3.10.0-862.3.2.el7.x86_64 docker://17.12.0-ce
master-1 NotReady master 3h v1.10.5 <none> CentOS Linux 7 (Core) 3.10.0-862.3.2.el7.x86_64 docker://17.12.0-ce
```
master-0 and master-2 are in running state.
Now the main issue is here,
Pods running on master-0 are resolving the service name properly
> $ kubectl exec -it my-pod-1 -- host kubernetes.default
and output is:
kubernetes.default.svc.cluster.local has address 10.96.0.1
Pods running on master-2 are having issues with resolution.
> $ kubectl exec -it my-pod-2 -- host kubernetes.default
and output is:
;; connection timed out; trying next origin
;; connection timed out; trying next origin
Following is my observation and what I think is the root cause:
I went through the iptables rules on both nodes which are in Ready state.
On master-0, on which the resolution is working inside the pods running on it, following are the rules
```
[root@ip-10-0-1-104 centos]# iptables-save |grep dns
-A KUBE-SEP-5Y5UGQTIEY6E6GMK -s 10.117.113.109/32 -m comment --comment "kube-system/kube-dns:dns" -j KUBE-MARK-MASQ
-A KUBE-SEP-5Y5UGQTIEY6E6GMK -p udp -m comment --comment "kube-system/kube-dns:dns" -m udp -j DNAT --to-destination 10.117.113.109:53
-A KUBE-SEP-A3LANKKXX5Q6MODX -s 10.117.113.109/32 -m comment --comment "kube-system/kube-dns:dns-tcp" -j KUBE-MARK-MASQ
-A KUBE-SEP-A3LANKKXX5Q6MODX -p tcp -m comment --comment "kube-system/kube-dns:dns-tcp" -m tcp -j DNAT --to-destination 10.117.113.109:53
-A KUBE-SERVICES ! -s 10.117.113.0/24 -d 10.96.0.10/32 -p udp -m comment --comment "kube-system/kube-dns:dns cluster IP" -m udp --dport 53 -j KUBE-MARK-MASQ
-A KUBE-SERVICES -d 10.96.0.10/32 -p udp -m comment --comment "kube-system/kube-dns:dns cluster IP" -m udp --dport 53 -j KUBE-SVC-TCOU7JCQXEZGVUNU
-A KUBE-SERVICES ! -s 10.117.113.0/24 -d 10.96.0.10/32 -p tcp -m comment --comment "kube-system/kube-dns:dns-tcp cluster IP" -m tcp --dport 53 -j KUBE-MARK-MASQ
-A KUBE-SERVICES -d 10.96.0.10/32 -p tcp -m comment --comment "kube-system/kube-dns:dns-tcp cluster IP" -m tcp --dport 53 -j KUBE-SVC-ERIFXISQEP7F7OF4
-A KUBE-SVC-ERIFXISQEP7F7OF4 -m comment --comment "kube-system/kube-dns:dns-tcp" -j KUBE-SEP-A3LANKKXX5Q6MODX
-A KUBE-SVC-TCOU7JCQXEZGVUNU -m comment --comment "kube-system/kube-dns:dns" -j KUBE-SEP-5Y5UGQTIEY6E6G
```
Check IP in first 4 rules.
On master-2, following are the iptable rules relevant to kube-dns
```
[root@ip-10-0-1-152 centos]# iptables-save |grep dns
-A KUBE-SEP-R6YMCWDUASN32VLL -s 10.117.113.129/32 -m comment --comment "kube-system/kube-dns:dns" -j KUBE-MARK-MASQ
-A KUBE-SEP-R6YMCWDUASN32VLL -p udp -m comment --comment "kube-system/kube-dns:dns" -m udp -j DNAT --to-destination 10.117.113.129:53
-A KUBE-SEP-ZZF2UW7NTBAIKDZD -s 10.117.113.129/32 -m comment --comment "kube-system/kube-dns:dns-tcp" -j KUBE-MARK-MASQ
-A KUBE-SEP-ZZF2UW7NTBAIKDZD -p tcp -m comment --comment "kube-system/kube-dns:dns-tcp" -m tcp -j DNAT --to-destination 10.117.113.129:53
-A KUBE-SERVICES ! -s 10.117.113.0/24 -d 10.96.0.10/32 -p tcp -m comment --comment "kube-system/kube-dns:dns-tcp cluster IP" -m tcp --dport 53 -j KUBE-MARK-MASQ
-A KUBE-SERVICES -d 10.96.0.10/32 -p tcp -m comment --comment "kube-system/kube-dns:dns-tcp cluster IP" -m tcp --dport 53 -j KUBE-SVC-ERIFXISQEP7F7OF4
-A KUBE-SERVICES ! -s 10.117.113.0/24 -d 10.96.0.10/32 -p udp -m comment --comment "kube-system/kube-dns:dns cluster IP" -m udp --dport 53 -j KUBE-MARK-MASQ
-A KUBE-SERVICES -d 10.96.0.10/32 -p udp -m comment --comment "kube-system/kube-dns:dns cluster IP" -m udp --dport 53 -j KUBE-SVC-TCOU7JCQXEZGVUNU
-A KUBE-SVC-ERIFXISQEP7F7OF4 -m comment --comment "kube-system/kube-dns:dns-tcp" -j KUBE-SEP-ZZF2UW7NTBAIKDZD
-A KUBE-SVC-TCOU7JCQXEZGVUNU -m comment --comment "kube-system/kube-dns:dns" -j KUBE-SEP-R6YMCWDUASN32VLL
```
Check IP in first 4 rules.
The kube-dns pod which is in **Running** state has IP **10.117.113.109**,
and the kube-dns pod which is in **Unknown** state had IP **10.117.113.129**.
I can see that the iptables rules on the master-2 node are still pointing at the IP of the kube-dns pod that is in Unknown state. The iptables rules on the master-0 node seem to be correct, hence there is no issue on that node.
## What you expected to happen?
All pods on all Ready nodes should resolve the services. As far as I know, the iptables rules should have been updated to point to the correct kube-dns pod IP on node master-2.
## How to reproduce it (as minimally and precisely as possible)?
In the multi-master setup, bring down the node on which kube-dns pod is running.
## Anything else we need to know?
I followed the https://kubernetes.io/docs/setup/independent/high-availability/ document to setup the k8s HA and in my case etcd cluster is running on same nodes but not inside the containers. | priority | kube dns failed to resolve the services inside the pods after stopping one master node in a multi master ha setup bug report if this is a bug report please fill in as much of the template below as you can if you leave out information we can t help you as well if this is a feature request please describe in detail the feature behavior change you d like to see in both cases be ready for followup questions and please respond in a timely manner if we can t reproduce a bug or think a feature already exists we might close your issue if we re wrong please feel free to reopen it and explain why versions kubeadm version use kubeadm version environment kubernetes version use kubectl version kubelet version cloud provider or hardware configuration instances on aws os e g from etc os release centos kernel e g uname a others it s node instances setup on aws deployed kubernetes ha using kubeadm all nodes are in master role and scheduling is allowed on all nodes i e on all master nodes kubernetes components docker images gcr io kube scheduler days ago gcr io kube controller manager days ago gcr io kube apiserver days ago gcr io kube proxy days ago weaveworks weave npc months ago weaveworks weave kube months ago weaveworks weaveexec months ago gcr io dns dnsmasq nanny months ago gcr io dns sidecar months ago gcr io dns kube dns months ago gcr io pause months ago what happened i brought down one node intentionally to test the ha behaviour kubectl get nodes is showing correct output by mentioning nodes in ready state and node in notready state all the pods in kube system namespace are up and running kube dns pod was running on the node which i brought down before bringing down the node everything was working fine after bringing down the node kube dns pod got rescheduled to the one of the node which was in running state kube dns running master kube dns unknown master kube dns pod i e kube dns which was running on master went into unknown state and i believe this is as per the design new pod kube dns came up on master and all containers of the pod are in running state please check the ip of the pods kubedns pod which is in running state is having and the kubedns pod which is in unknown state was having following is my get node output kubectl get nodes o wide name status roles age version external ip os image kernel version container runtime master ready master centos linux core docker ce master ready master centos linux core docker ce master notready master centos linux core docker ce master and master are in running state now the main issue is here pods running on master are resolving the service name properly kubectl exec it my pod host kubernetes default and output is kubernetes default svc cluster local has address pods running on master are giving issue in the resolution kubectl exec it my pod host kubernetes default and output is connection timed out trying next origin connection timed out trying next origin following is my observation and i think the root cause i went through the iptables rules on both nodes which are in ready state on master on which the resolution is working inside the pods running on it following are the rules iptables save grep dns a kube sep s m comment comment kube system kube dns dns j kube mark masq a kube sep p udp m comment comment kube system kube dns dns m udp j dnat to 
destination a kube sep s m comment comment kube system kube dns dns tcp j kube mark masq a kube sep p tcp m comment comment kube system kube dns dns tcp m tcp j dnat to destination a kube services s d p udp m comment comment kube system kube dns dns cluster ip m udp dport j kube mark masq a kube services d p udp m comment comment kube system kube dns dns cluster ip m udp dport j kube svc a kube services s d p tcp m comment comment kube system kube dns dns tcp cluster ip m tcp dport j kube mark masq a kube services d p tcp m comment comment kube system kube dns dns tcp cluster ip m tcp dport j kube svc a kube svc m comment comment kube system kube dns dns tcp j kube sep a kube svc m comment comment kube system kube dns dns j kube sep check ip in first rules on master following are the iptable rules relevant to kube dns iptables save grep dns a kube sep s m comment comment kube system kube dns dns j kube mark masq a kube sep p udp m comment comment kube system kube dns dns m udp j dnat to destination a kube sep s m comment comment kube system kube dns dns tcp j kube mark masq a kube sep p tcp m comment comment kube system kube dns dns tcp m tcp j dnat to destination a kube services s d p tcp m comment comment kube system kube dns dns tcp cluster ip m tcp dport j kube mark masq a kube services d p tcp m comment comment kube system kube dns dns tcp cluster ip m tcp dport j kube svc a kube services s d p udp m comment comment kube system kube dns dns cluster ip m udp dport j kube mark masq a kube services d p udp m comment comment kube system kube dns dns cluster ip m udp dport j kube svc a kube svc m comment comment kube system kube dns dns tcp j kube sep a kube svc m comment comment kube system kube dns dns j kube sep check ip in first rules kubedns pod which is in running state is having ip and the kubedns pod which is in unknown state was having ip i can see that iptable rules on the master node are still showing the kube dns pod ip which is in unknown state iptables rules on master node are seems to be correct one hence no issue there on that node what you expected to happen all pods on all the ready node should resolve the services as far as i know iptables rules should have gotten updated and should have pointed to the correct kube dns pod ip on node master how to reproduce it as minimally and precisely as possible in the multi master setup bring down the node on which kube dns pod is running anything else we need to know i followed the document to setup the ha and in my case etcd cluster is running on same nodes but not inside the containers | 1 |
194,774 | 6,898,863,059 | IssuesEvent | 2017-11-24 11:13:28 | xwikisas/application-flashmessages | https://api.github.com/repos/xwikisas/application-flashmessages | closed | Create template provider | Priority: Major Type: Improvement | Since the AWM dependency is removed in #4, a new way to create flash entries is needed. | 1.0 | Create template provider - Since the AWM dependency is removed in #4, a new way to create flash entries is needed. | priority | create template provider since the awm dependency is removed in a new way to create flash entries is needed | 1 |
49,721 | 13,187,256,728 | IssuesEvent | 2020-08-13 02:50:34 | icecube-trac/tix3 | https://api.github.com/repos/icecube-trac/tix3 | opened | filterscript trunk: Error: »const class OMKey« has no member named »IsIceTop« (Trac #1930) | Incomplete Migration Migrated from Trac combo reconstruction defect | <details>
<summary><em>Migrated from <a href="https://code.icecube.wisc.edu/ticket/1930">https://code.icecube.wisc.edu/ticket/1930</a>, reported by flauber and owned by </em></summary>
<p>
```json
{
"status": "closed",
"changetime": "2017-01-10T17:17:57",
"description": "Hi,\n\nwhile building the current trunk I get this:\n\n{{{\n[ 68%] Built target filter-tools\n[ 68%] Built target tensor-of-inertia\n[ 68%] Built target gulliver-bootstrap\n[ 70%] Built target portia\n[ 70%] Building CXX object filterscripts/CMakeFiles/filterscripts.dir/private/filterscripts/I3CosmicRayFilter_13.cxx.o\n/home/flauber/IceCube/icerec_trunk/src/filterscripts/private/filterscripts/I3CosmicRayFilter_13.cxx: In Elementfunktion \u00bbbool I3CosmicRayFilter_13::KeepEvent(I3Frame&)\u00ab:\n/home/flauber/IceCube/icerec_trunk/src/filterscripts/private/filterscripts/I3CosmicRayFilter_13.cxx:59:16: Fehler: \u00bbconst class OMKey\u00ab has no member named \u00bbIsIceTop\u00ab\n if(omKey.IsIceTop()){\n ^~~~~~~~\nmake[2]: *** [filterscripts/CMakeFiles/filterscripts.dir/build.make:111: filterscripts/CMakeFiles/filterscripts.dir/private/filterscripts/I3CosmicRayFilter_13.cxx.o] Fehler 1\nmake[1]: *** [CMakeFiles/Makefile2:7382: filterscripts/CMakeFiles/filterscripts.dir/all] Fehler 2\nmake: *** [Makefile:128: all] Fehler 2\n}}}\n\n\n\nSvn Info:\n{{{\n[flauber@Shion src]$ svn info\nPfad: .\nWurzelpfad der Arbeitskopie: /home/flauber/IceCube/icerec_trunk/src\nURL: http://code.icecube.wisc.edu/svn/meta-projects/icerec/trunk\nRelative URL: ^/meta-projects/icerec/trunk\nBasis des Projektarchivs: http://code.icecube.wisc.edu/svn\nUUID des Projektarchivs: 16731396-06f5-0310-8873-f7f720988828\nRevision: 152558\nKnotentyp: Verzeichnis\nPlan: normal\nLetzter Autor: nega\nLetzte ge\u00e4nderte Rev: 151396\nLetztes \u00c4nderungsdatum: 2016-11-09 15:23:57 +0100 (Mi, 09. Nov 2016)\n}}}",
"reporter": "flauber",
"cc": "",
"resolution": "invalid",
"_ts": "1484068677758096",
"component": "combo reconstruction",
"summary": "filterscript trunk: Error: \u00bbconst class OMKey\u00ab has no member named \u00bbIsIceTop\u00ab",
"priority": "normal",
"keywords": "",
"time": "2017-01-10T16:33:42",
"milestone": "",
"owner": "",
"type": "defect"
}
```
</p>
</details>
| 1.0 | filterscript trunk: Error: »const class OMKey« has no member named »IsIceTop« (Trac #1930) - <details>
<summary><em>Migrated from <a href="https://code.icecube.wisc.edu/ticket/1930">https://code.icecube.wisc.edu/ticket/1930</a>, reported by flauber and owned by </em></summary>
<p>
```json
{
"status": "closed",
"changetime": "2017-01-10T17:17:57",
"description": "Hi,\n\nwhile building the current trunk I get this:\n\n{{{\n[ 68%] Built target filter-tools\n[ 68%] Built target tensor-of-inertia\n[ 68%] Built target gulliver-bootstrap\n[ 70%] Built target portia\n[ 70%] Building CXX object filterscripts/CMakeFiles/filterscripts.dir/private/filterscripts/I3CosmicRayFilter_13.cxx.o\n/home/flauber/IceCube/icerec_trunk/src/filterscripts/private/filterscripts/I3CosmicRayFilter_13.cxx: In Elementfunktion \u00bbbool I3CosmicRayFilter_13::KeepEvent(I3Frame&)\u00ab:\n/home/flauber/IceCube/icerec_trunk/src/filterscripts/private/filterscripts/I3CosmicRayFilter_13.cxx:59:16: Fehler: \u00bbconst class OMKey\u00ab has no member named \u00bbIsIceTop\u00ab\n if(omKey.IsIceTop()){\n ^~~~~~~~\nmake[2]: *** [filterscripts/CMakeFiles/filterscripts.dir/build.make:111: filterscripts/CMakeFiles/filterscripts.dir/private/filterscripts/I3CosmicRayFilter_13.cxx.o] Fehler 1\nmake[1]: *** [CMakeFiles/Makefile2:7382: filterscripts/CMakeFiles/filterscripts.dir/all] Fehler 2\nmake: *** [Makefile:128: all] Fehler 2\n}}}\n\n\n\nSvn Info:\n{{{\n[flauber@Shion src]$ svn info\nPfad: .\nWurzelpfad der Arbeitskopie: /home/flauber/IceCube/icerec_trunk/src\nURL: http://code.icecube.wisc.edu/svn/meta-projects/icerec/trunk\nRelative URL: ^/meta-projects/icerec/trunk\nBasis des Projektarchivs: http://code.icecube.wisc.edu/svn\nUUID des Projektarchivs: 16731396-06f5-0310-8873-f7f720988828\nRevision: 152558\nKnotentyp: Verzeichnis\nPlan: normal\nLetzter Autor: nega\nLetzte ge\u00e4nderte Rev: 151396\nLetztes \u00c4nderungsdatum: 2016-11-09 15:23:57 +0100 (Mi, 09. Nov 2016)\n}}}",
"reporter": "flauber",
"cc": "",
"resolution": "invalid",
"_ts": "1484068677758096",
"component": "combo reconstruction",
"summary": "filterscript trunk: Error: \u00bbconst class OMKey\u00ab has no member named \u00bbIsIceTop\u00ab",
"priority": "normal",
"keywords": "",
"time": "2017-01-10T16:33:42",
"milestone": "",
"owner": "",
"type": "defect"
}
```
</p>
</details>
| non_priority | filterscript trunk error »const class omkey« has no member named »isicetop« trac migrated from json status closed changetime description hi n nwhile building the current trunk i get this n n n built target filter tools n built target tensor of inertia n built target gulliver bootstrap n built target portia n building cxx object filterscripts cmakefiles filterscripts dir private filterscripts cxx o n home flauber icecube icerec trunk src filterscripts private filterscripts cxx in elementfunktion keepevent n home flauber icecube icerec trunk src filterscripts private filterscripts cxx fehler class omkey has no member named n if omkey isicetop n nmake fehler nmake fehler nmake fehler n n n n nsvn info n n svn info npfad nwurzelpfad der arbeitskopie home flauber icecube icerec trunk src nurl url meta projects icerec trunk nbasis des projektarchivs des projektarchivs nrevision nknotentyp verzeichnis nplan normal nletzter autor nega nletzte ge rev nletztes mi nov n reporter flauber cc resolution invalid ts component combo reconstruction summary filterscript trunk error class omkey has no member named priority normal keywords time milestone owner type defect | 0 |
157,888 | 6,017,838,189 | IssuesEvent | 2017-06-07 10:40:36 | metasfresh/metasfresh | https://api.github.com/repos/metasfresh/metasfresh | opened | Full Test on Translation of en_US in webUI | priority:high type:enhancement | ### Is this a bug or feature request?
feature
### What is the current behavior?
not all is translated
#### Which are the steps to reproduce?
### What is the expected or desired behavior?
everything visible to the user matches his language setting | 1.0 | Full Test on Translation of en_US in webUI - ### Is this a bug or feature request?
feature
### What is the current behavior?
not all is translated
#### Which are the steps to reproduce?
### What is the expected or desired behavior?
everything visible to the user matches his language setting | priority | full test on translation of en us in webui is this a bug or feature request feature what is the current behavior not all is translated which are the steps to reproduce what is the expected or desired behavior everything visible to the user matches his language setting | 1 |
33,366 | 7,700,556,927 | IssuesEvent | 2018-05-20 03:01:22 | joomla/joomla-cms | https://api.github.com/repos/joomla/joomla-cms | closed | [4.0] Frontend login page eye not styled | J4 Issue No Code Attached Yet | ### Steps to reproduce the issue
install Joomla! 4.0.0-alpha3 Alpha [ Amani ] 12-May-2018 15:23 GMT
install demo data in admin
go to frontend
click login button (without entering user/pass)
you get to the login page at
http://example.com/index.php/author-login
### Expected result
eye is styled
### Actual result
eye is not styled
<img width="562" alt="screen shot 2018-05-12 at 21 17 02" src="https://user-images.githubusercontent.com/400092/39961229-d7a1b4ae-5629-11e8-8714-f08325d95a4c.png">
### System information (as much as possible)
### Additional comments
| 1.0 | [4.0] Frontend login page eye not styled - ### Steps to reproduce the issue
install Joomla! 4.0.0-alpha3 Alpha [ Amani ] 12-May-2018 15:23 GMT
install demo data in admin
go to frontend
click login button (without entering user/pass)
you get to the login page at
http://example.com/index.php/author-login
### Expected result
eye is styled
### Actual result
eye is not styled
<img width="562" alt="screen shot 2018-05-12 at 21 17 02" src="https://user-images.githubusercontent.com/400092/39961229-d7a1b4ae-5629-11e8-8714-f08325d95a4c.png">
### System information (as much as possible)
### Additional comments
| non_priority | frontend login page eye not styled steps to reproduce the issue install joomla alpha may gmt install demo data in admin go to frontend click login button without entering user pass you get to the login page at expected result eye is styled actual result eye is not styled img width alt screen shot at src system information as much as possible additional comments | 0 |
284,139 | 21,392,220,540 | IssuesEvent | 2022-04-21 08:16:42 | marmelab/react-admin | https://api.github.com/repos/marmelab/react-admin | closed | Format of source returned from `getSource` has changed | documentation | In previous versions of React Admin, when evaluating code like the following:
```
<ArrayInput source="authors">
<SimpleFormIterator>
<FormDataConsumer>
{({ formData, scopedFormData, getSource, ...rest }) => {
return scopedFormData && scopedFormData.user_id ? (
<SelectInput
source={getSource("role")}
choices={[
{
id: "headwriter",
name: "Head Writer",
},
]}
{...rest}
label="Role"
/>
) : null;
}}
</FormDataConsumer>
</SimpleFormIterator>
</ArrayInput>;
```
I would expect `getSource("role")` to return the following: `authors[0].role`, as this was the value that `getSource` would have returned in v3. Instead we get `authors.0.role`.
You can replicate this by inspecting the console logs in the following [CodeSandbox](https://codesandbox.io/s/divine-cookies-f0e60p?file=/src/posts/PostCreate.tsx).
If this is intentional, due to the change of the underlying form library, then it should be acknowledged in the upgrade notes. Otherwise it seems like a bug.
**Environment**
* React-admin version: 4.0.1
* Last version that did not exhibit the issue (if applicable):
* React version:
* Browser:
* Stack trace (in case of a JS error):
| 1.0 | Format of source returned from `getSource` has changed - In previous versions of React Admin, when evaluating code like the following:
```
<ArrayInput source="authors">
<SimpleFormIterator>
<FormDataConsumer>
{({ formData, scopedFormData, getSource, ...rest }) => {
return scopedFormData && scopedFormData.user_id ? (
<SelectInput
source={getSource("role")}
choices={[
{
id: "headwriter",
name: "Head Writer",
},
]}
{...rest}
label="Role"
/>
) : null;
}}
</FormDataConsumer>
</SimpleFormIterator>
</ArrayInput>;
```
I would expect `getSource("role")` to return the following: `authors[0].role`, as this was the value that `getSource` would have returned in v3. Instead we get `authors.0.role`.
You can replicate this by inspecting the console logs in the following [CodeSandbox](https://codesandbox.io/s/divine-cookies-f0e60p?file=/src/posts/PostCreate.tsx).
If this is intentional, due to the change of the underlying form library, then it should be acknowledged in the upgrade notes. Otherwise it seems like a bug.
**Environment**
* React-admin version: 4.0.1
* Last version that did not exhibit the issue (if applicable):
* React version:
* Browser:
* Stack trace (in case of a JS error):
| non_priority | format of source returned from getsource has changed in previous versions of react admin when evaluating code like the following formdata scopedformdata getsource rest return scopedformdata scopedformdata user id selectinput source getsource role choices id headwriter name head writer rest label role null i would expect getsource role to return the following authors role as this was the value that getsource would have returned in instead we get authors role you can replicate this by inspecting the console logs in the following if this is intentional due to the change of the underlying form library then it should be acknowledged in the upgrade notes otherwise it seems like a bug environment react admin version last version that did not exhibit the issue if applicable react version browser stack trace in case of a js error | 0 |
46,600 | 13,055,944,076 | IssuesEvent | 2020-07-30 03:11:34 | icecube-trac/tix2 | https://api.github.com/repos/icecube-trac/tix2 | opened | [steamshovel] memory leak (Trac #1545) | Incomplete Migration Migrated from Trac combo core defect | Migrated from https://code.icecube.wisc.edu/ticket/1545
```json
{
"status": "closed",
"changetime": "2016-03-18T21:14:15",
"description": "http://software.icecube.wisc.edu/static_analysis/2016-02-10-030213-84904-1/report-2a4604.html#EndPath\nhttp://software.icecube.wisc.edu/static_analysis/2016-02-10-030213-84904-1/report-59eb46.html#EndPath",
"reporter": "david.schultz",
"cc": "",
"resolution": "invalid",
"_ts": "1458335655846260",
"component": "combo core",
"summary": "[steamshovel] memory leak",
"priority": "major",
"keywords": "",
"time": "2016-02-10T20:10:36",
"milestone": "",
"owner": "hdembinski",
"type": "defect"
}
```
| 1.0 | [steamshovel] memory leak (Trac #1545) - Migrated from https://code.icecube.wisc.edu/ticket/1545
```json
{
"status": "closed",
"changetime": "2016-03-18T21:14:15",
"description": "http://software.icecube.wisc.edu/static_analysis/2016-02-10-030213-84904-1/report-2a4604.html#EndPath\nhttp://software.icecube.wisc.edu/static_analysis/2016-02-10-030213-84904-1/report-59eb46.html#EndPath",
"reporter": "david.schultz",
"cc": "",
"resolution": "invalid",
"_ts": "1458335655846260",
"component": "combo core",
"summary": "[steamshovel] memory leak",
"priority": "major",
"keywords": "",
"time": "2016-02-10T20:10:36",
"milestone": "",
"owner": "hdembinski",
"type": "defect"
}
```
| non_priority | memory leak trac migrated from json status closed changetime description reporter david schultz cc resolution invalid ts component combo core summary memory leak priority major keywords time milestone owner hdembinski type defect | 0 |
18,000 | 6,537,167,648 | IssuesEvent | 2017-08-31 21:09:51 | craigbarnes/lua-gumbo | https://api.github.com/repos/craigbarnes/lua-gumbo | closed | Separate build targets for each Lua version/ABI | Build | The build system currently outputs the C module to `gumbo/parse.so`, regardless of the target Lua ABI, without any special naming or indication of which Lua interpreter is required to load it. Originally this was to allow `require "gumbo.parse"` to just work as expected within the build directory, for ad-hoc testing and suchlike. However, the build system supports so many different configurations now that this is no longer a worthwhile goal.
At some point, the Makefile should be changed to build a separate compiler artifact for each major configuration (i.e. at least one each for the 5.1/5.2/5.3 ABIs). Testing should all be done via the Makefile or inside a self-contained installation (i.e. as created by `make install`).
The overall effect of this should be a lesser requirement to run `make clean` when testing different configurations and fewer confusing errors arising from library/interpreter ABI mismatch. It also means that all 3 modules can be compiled and tested in parallel (via `make -j`).
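Part of this is adjusting the pkg-config search per target. A rough sketch of how that lookup might work for one target, falling back to the generic `lua.pc` with an explicit version check — the `.pc` module names vary between distributions, so the ones used here are assumptions:
```
# Probe for a Lua 5.3 build: try the distro-specific .pc names first, then
# the generic lua.pc, accepting it only if its version matches the wanted ABI.
for mod in lua5.3 lua-5.3 lua; do
    pkg-config --exists "$mod" || continue
    case "$(pkg-config --modversion "$mod")" in
        5.3*) LUA_PC=$mod; break ;;
    esac
done
LUA_CFLAGS=$(pkg-config --cflags "$LUA_PC")
echo "building against $LUA_PC ($LUA_CFLAGS)"
```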
- [x] Replace `gumbo/parse.{o,so}` targets with `build/lua5{1,2,3}/parse.{o,so}` targets
- [x] Adjust pkg-config search to query only the relevant `*.pc` files per target
- [x] Add some kind of fallback for `lua.pc`, using pkg-config version constraints
- [x] Adjust test runner commands to use new paths
- [x] Adjust `install` and `uninstall` targets to use new paths
- [x] Change `all` target to work correctly with the new targets
- [x] Update CI scripts
- [ ] ~~Update build instructions in `README.md`~~ (moved to #56)
- [x] Update `rockspec.in` | 1.0 | Separate build targets for each Lua version/ABI - The build system currently outputs the C module to `gumbo/parse.so`, regardless of the target Lua ABI, without any special naming or indication of which Lua interpreter is required to load it. Originally this was to allow `require "gumbo.parse"` to just work as expected within the build directory, for ad-hoc testing and suchlike. However, the build system supports so many different configurations now that this is no longer a worthwhile goal.
At some point, the Makefile should be changed to build a separate compiler artifact for each major configuration (i.e. at least one each for the 5.1/5.2/5.3 ABIs). Testing should all be done via the Makefile or inside a self-contained installation (i.e. as created by `make install`).
The overall effect of this should be a lesser requirement to run `make clean` when testing different configurations and fewer confusing errors arising from library/interpreter ABI mismatch. It also means that all 3 modules can be compiled and tested in parallel (via `make -j`).
- [x] Replace `gumbo/parse.{o,so}` targets with `build/lua5{1,2,3}/parse.{o,so}` targets
- [x] Adjust pkg-config search to query only the relevant `*.pc` files per target
- [x] Add some kind of fallback for `lua.pc`, using pkg-config version constraints
- [x] Adjust test runner commands to use new paths
- [x] Adjust `install` and `uninstall` targets to use new paths
- [x] Change `all` target to work correctly with the new targets
- [x] Update CI scripts
- [ ] ~~Update build instructions in `README.md`~~ (moved to #56)
- [x] Update `rockspec.in` | non_priority | separate build targets for each lua version abi the build system currently outputs the c module to gumbo parse so regardless of the target lua abi without any special naming or indication of which lua interpreter is required to load it originally this was to allow require gumbo parse to just work as expected within the build directory for ad hoc testing and suchlike however the build system supports so many different configurations now that this is no longer a worthwhile goal at some point the makefile should be changed to build a separate compiler artifact for each major configuration i e at least one each for the abis testing should all be done via the makefile or inside a self contained installation i e as created by make install the overall effect of this should be a lesser requirement to run make clean when testing different configurations and fewer confusing errors arising from library interpreter abi mismatch it also means that all modules can be compiled and tested in parallel via make j replace gumbo parse o so targets with build parse o so targets adjust pkg config search to query only the relevant pc files per target add some kind of fallback for lua pc using pkg config version constraints adjust test runner commands to use new paths adjust install and uninstall targets to use new paths change all target to work correctly with the new targets update ci scripts update build instructions in readme md moved to update rockspec in | 0 |
719,940 | 24,774,138,092 | IssuesEvent | 2022-10-23 14:17:57 | bounswe/bounswe2022group5 | https://api.github.com/repos/bounswe/bounswe2022group5 | closed | Deciding on Time and Platform for Backend Team First Meeting | High Priority Type: Communication Status: In Progress | ***Description*:**
As Backend Team (@mehmetemreakbulut , @canberkboun9 , @irfanbozkurt , @oguzhandemirelx), we need to decide the first meeting time and platform. Also, an agenda is crucial for a productive meeting.
Agenda determined during the [Meeting 15.1](https://github.com/bounswe/bounswe2022group5/wiki/Meeting-15.1):
* Deciding on the table structure of the database (decided as PostgreSQL)
* Getting familiar with other technologies we would use
* Determining communication plan for the backend team
* Creating an initial project (decided as Python/Django)
***Todo's*:**
- [x] Deciding on the time and platform for the backend team.
***Reviewers*:** @canberkboun9 , @irfanbozkurt , @oguzhandemirelx
***Task Deadline*:** 20.10.2022 23:00
***Review Deadline*:** 21.10.2022 12:00 | 1.0 | Deciding on Time and Platform for Backend Team First Meeting - ***Description*:**
As Backend Team (@mehmetemreakbulut , @canberkboun9 , @irfanbozkurt , @oguzhandemirelx), we need to decide the first meeting time and platform. Also, an agenda is crucial for a productive meeting.
Agenda determined during the [Meeting 15.1](https://github.com/bounswe/bounswe2022group5/wiki/Meeting-15.1):
* Deciding on the table structure of the database (decided as PostgreSQL)
* Getting familiar with other technologies we would use
* Determining communication plan for the backend team
* Creating an initial project (decided as Python/Django)
***Todo's*:**
- [x] Deciding on the time and platform for the backend team.
***Reviewers*:** @canberkboun9 , @irfanbozkurt , @oguzhandemirelx
***Task Deadline*:** 20.10.2022 23:00
***Review Deadline*:** 21.10.2022 12:00 | priority | deciding on time and platform for backend team first meeting description as backend team mehmetemreakbulut irfanbozkurt oguzhandemirelx we need to decide the first meeting time and platform also an agenda is crucial for a productive meeting agenda determined during the deciding on the table structure of the database decided as postgresql getting familiar with other technologies we would use determining communication plan for the backend team creating an initial project decided as python django todo s deciding on the time and platform for the backend team reviewers irfanbozkurt oguzhandemirelx task deadline review deadline | 1 |
15,973 | 21,047,554,579 | IssuesEvent | 2022-03-31 17:28:32 | bayer-science-for-a-better-life/tiffslide | https://api.github.com/repos/bayer-science-for-a-better-life/tiffslide | closed | Unable to read PNG files | help wanted compatibility | Hi,
My workflow involves extracting patches from WSIs and storing them in PNGs. But when I try to read a PNG file, I get a tifffile error:
```python-traceback
>>> from tiffslide.tiffslide import TiffSlide
>>> path=r"C:\Projects\GaNDLF\testing\histo_patches\histo_patches_output\1\image\image_patch_3792-13696.png"
>>> TiffSlide(path)
Traceback (most recent call last):
File "C:\Projects\GaNDLF\venv\lib\site-packages\tifffile\tifffile.py", line 3142, in __init__
byteorder = {b'II': '<', b'MM': '>', b'EP': '<'}[header[:2]]
KeyError: b'\x89P'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "C:\Projects\GaNDLF\venv\lib\site-packages\tiffslide\tiffslide.py", line 107, in __init__
filename, storage_options=storage_options, tifffile_options=tifffile_options
File "C:\Projects\GaNDLF\venv\lib\site-packages\tiffslide\tiffslide.py", line 527, in _prepare_tifffile
return TiffFile(path, **tf_kw)
File "C:\Projects\GaNDLF\venv\lib\site-packages\tifffile\tifffile.py", line 3144, in __init__
raise TiffFileError(f'not a TIFF file {header!r}')
tifffile.tifffile.TiffFileError: not a TIFF file b'\x89PNG'
```

Attaching a PNG for reference.
Thanks! | True | Unable to read PNG files - Hi,
My workflow involves extracting patches from WSIs and storing them in PNGs. But when I try to read a PNG file, I get a tifffile error:
```python-traceback
>>> from tiffslide.tiffslide import TiffSlide
>>> path=r"C:\Projects\GaNDLF\testing\histo_patches\histo_patches_output\1\image\image_patch_3792-13696.png"
>>> TiffSlide(path)
Traceback (most recent call last):
File "C:\Projects\GaNDLF\venv\lib\site-packages\tifffile\tifffile.py", line 3142, in __init__
byteorder = {b'II': '<', b'MM': '>', b'EP': '<'}[header[:2]]
KeyError: b'\x89P'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "C:\Projects\GaNDLF\venv\lib\site-packages\tiffslide\tiffslide.py", line 107, in __init__
filename, storage_options=storage_options, tifffile_options=tifffile_options
File "C:\Projects\GaNDLF\venv\lib\site-packages\tiffslide\tiffslide.py", line 527, in _prepare_tifffile
return TiffFile(path, **tf_kw)
File "C:\Projects\GaNDLF\venv\lib\site-packages\tifffile\tifffile.py", line 3144, in __init__
raise TiffFileError(f'not a TIFF file {header!r}')
tifffile.tifffile.TiffFileError: not a TIFF file b'\x89PNG'
```

Attaching a PNG for reference.
Thanks! | non_priority | unable to read png files hi my workflow involves extracting patches from wsi and storing them in pngs but when i try to read png file i am getting a tifffile error python traceback from tiffslide tiffslide import tiffslide path r c projects gandlf testing histo patches histo patches output image image patch png tiffslide path traceback most recent call last file c projects gandlf venv lib site packages tifffile tifffile py line in init byteorder b ii b ep keyerror b during handling of the above exception another exception occurred traceback most recent call last file line in file c projects gandlf venv lib site packages tiffslide tiffslide py line in init filename storage options storage options tifffile options tifffile options file c projects gandlf venv lib site packages tiffslide tiffslide py line in prepare tifffile return tifffile path tf kw file c projects gandlf venv lib site packages tifffile tifffile py line in init raise tifffileerror f not a tiff file header r tifffile tifffile tifffileerror not a tiff file b attaching a png for reference thanks | 0 |
310,255 | 9,487,705,986 | IssuesEvent | 2019-04-22 17:38:16 | brave/brave-browser | https://api.github.com/repos/brave/brave-browser | closed | Sync should show warning not to share the code with anyone | feature/sync priority/P2 security | STR:
1. Go to brave://sync
2. Click 'start a new chain'
3. Click either desktop or mobile
4. It shows the sync code (either code words or a QR code) without any type of warning that this is a sensitive encryption key.
Impact: Someone could accidentally share the code with Brave (ex: in a support request) which gives us the access needed to decrypt their sync data. The user has no idea the sync words are security sensitive.
Desired outcome: There should be some kind of bold warning text in this modal.
<img width="631" alt="Screen Shot 2019-04-16 at 12 44 36 PM" src="https://user-images.githubusercontent.com/549654/56239562-31bda480-6046-11e9-995f-43b606643aef.png">
| 1.0 | Sync should show warning not to share the code with anyone - STR:
1. Go to brave://sync
2. Click 'start a new chain'
3. Click either desktop or mobile
4. It shows the sync code (either code words or a QR code) without any type of warning that this is a sensitive encryption key.
Impact: Someone could accidentally share the code with Brave (ex: in a support request) which gives us the access needed to decrypt their sync data. The user has no idea the sync words are security sensitive.
Desired outcome: There should be some kind of bold warning text in this modal.
<img width="631" alt="Screen Shot 2019-04-16 at 12 44 36 PM" src="https://user-images.githubusercontent.com/549654/56239562-31bda480-6046-11e9-995f-43b606643aef.png">
| priority | sync should show warning not to share the code with anyone str go to brave sync click start a new chain click either desktop or mobile it shows the sync code either code words or a qr code without any type of warning that this is a sensitive encryption key impact someone could accidentally share the code with brave ex in a support request which gives us the access needed to decrypt their sync data the user has no idea the sync words are security sensitive desired outcome there should be some kind of bold warning text in this modal img width alt screen shot at pm src | 1 |
4,670 | 3,875,970,613 | IssuesEvent | 2016-04-12 05:02:15 | lionheart/openradar-mirror | https://api.github.com/repos/lionheart/openradar-mirror | opened | 21912095: Autocompleted result in Spotlight Search flickers quickly even when result does not change | classification:ui/usability reproducible:always status:open | #### Description
Summary:
When using Spotlight search, the autocompleted result flickers when continuing to type, even when it has not changed.
Steps to Reproduce:
1. Activate Spotlight with Command+Space.
2. Type “Safari” letter by letter.
3. Notice that the autocompleted result for “Safari” flickers in between keypresses.
Expected Results:
“Safari” should not flicker.
Actual Results:
“Safari” flickers. See attached video.
-
Product Version: 10.11 Beta (15A216g)
Created: 2015-07-21T00:59:01.526460
Originated: 2015-07-20T19:58:00
Open Radar Link: http://www.openradar.me/21912095 | True | 21912095: Autocompleted result in Spotlight Search flickers quickly even when result does not change - #### Description
Summary:
When using Spotlight search, the autocompleted result flickers when continuing to type, even when it has not changed.
Steps to Reproduce:
1. Activate Spotlight with Command+Space.
2. Type “Safari” letter by letter.
3. Notice that the autocompleted result for “Safari” flickers in between keypresses.
Expected Results:
“Safari” should not flicker.
Actual Results:
“Safari” flickers. See attached video.
-
Product Version: 10.11 Beta (15A216g)
Created: 2015-07-21T00:59:01.526460
Originated: 2015-07-20T19:58:00
Open Radar Link: http://www.openradar.me/21912095 | non_priority | autocompleted result in spotlight search flickers quickly even when result does not change description summary when using spotlight search the autocompleted result flickers when continuing to type even when it has not changed steps to reproduce activate spotlight with command space type “safari” letter by letter notice that the autocompleted result for “safari” flickers in between keypresses expected results “safari” should not flicker actual results “safari” flickers see attached video product version beta created originated open radar link | 0 |
751,839 | 26,260,724,367 | IssuesEvent | 2023-01-06 07:28:06 | harvester/harvester | https://api.github.com/repos/harvester/harvester | closed | [BUG] Havester 1.1.0 upgrade to 1.1.1 is missing image docker.io/longhornio/longhorn-ui:v1.3.2 | kind/bug priority/0 reproduce/needed severity/needed area/airgap-env | **Describe the bug**
After the upgrade from harvester 1.1.0 to 1.1.1 I can see this:
`longhorn-ui-94b465b84-2d2zx 0/1 ImagePullBackOff 0 59m`
and
`Failed to pull image "longhornio/longhorn-ui:v1.3.2`
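A possible stop-gap for the air-gapped case, until the image ships on the ISO, is to side-load it into containerd on each node from a machine that does have registry access — the node address and user below are placeholders:
```
# On a machine with internet access:
docker pull docker.io/longhornio/longhorn-ui:v1.3.2
docker save docker.io/longhornio/longhorn-ui:v1.3.2 -o longhorn-ui-v1.3.2.tar

# Copy the tarball to each Harvester node and import it into the k8s.io
# containerd namespace so kubelet can find it:
scp longhorn-ui-v1.3.2.tar rancher@<node-address>:/tmp/
ssh rancher@<node-address> 'sudo ctr -n k8s.io images import /tmp/longhorn-ui-v1.3.2.tar'
```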
**To Reproduce**
Upgrade Harvester 1.1.0 to 1.1.1 in air gapped
**Expected behavior**
Harvester should have all the required images included on the ISO
**Environment**
- Harvester ISO version: 1.1.1
| 1.0 | [BUG] Havester 1.1.0 upgrade to 1.1.1 is missing image docker.io/longhornio/longhorn-ui:v1.3.2 - **Describe the bug**
After the upgrade from harvester 1.1.0 to 1.1.1 I can see this:
`longhorn-ui-94b465b84-2d2zx 0/1 ImagePullBackOff 0 59m`
and
`Failed to pull image "longhornio/longhorn-ui:v1.3.2`
**To Reproduce**
Upgrade Harvester 1.1.0 to 1.1.1 in air gapped
**Expected behavior**
Harvester should have all the required images included on the ISO
**Environment**
- Harvester ISO version: 1.1.1
| priority | havester upgrade to is missing image docker io longhornio longhorn ui describe the bug after the upgrade from harvester to i can see this longhorn ui imagepullbackoff and failed to pull image longhornio longhorn ui to reproduce upgrade harvester to in air gapped expected behavior harvester should have all the required images included on the iso environment harvester iso version | 1 |
30,056 | 2,722,147,107 | IssuesEvent | 2015-04-14 00:24:15 | CruxFramework/crux-smart-faces | https://api.github.com/repos/CruxFramework/crux-smart-faces | closed | DialogBox without close button | bug imported Milestone-M14-C4 Module-CruxWidgets Priority-Medium TargetVersion-5.3.0 | _From [flavia.jesus@triggolabs.com](https://code.google.com/u/flavia.jesus@triggolabs.com/) on March 17, 2015 11:22:45_
The DialogBox used in the showcase project does not have a close button on the small view type.
_Original issue: http://code.google.com/p/crux-framework/issues/detail?id=639_ | 1.0 | DialogBox without close button - _From [flavia.jesus@triggolabs.com](https://code.google.com/u/flavia.jesus@triggolabs.com/) on March 17, 2015 11:22:45_
The DialogBox used in the showcase project does not have a close button on the small view type.
_Original issue: http://code.google.com/p/crux-framework/issues/detail?id=639_ | priority | dialogbox without close button from on march dialogbox used in the showcase project does not have close button on the small view type original issue | 1 |
60,723 | 25,234,777,374 | IssuesEvent | 2022-11-14 23:20:32 | BCDevOps/developer-experience | https://api.github.com/repos/BCDevOps/developer-experience | closed | SSO realm merge planning - vault | ops and shared services | **Describe the issue**
We will be merging the realm that vault uses into another realm in Gold. This ticket is to find out what the impact is and whether the merge is doable for this service.
**What is the plan? How will this get completed?**
Discussion with Service Lead, testing, planning
**Definition of done**
- [x] identify service's SSO usage
- [x] discuss if realm merge is doable
- [x] come up with testing plan
| 1.0 | SSO realm merge planning - vault - **Describe the issue**
We will be merging the realm that vault uses into another realm in Gold. This ticket is to find out what the impact is and whether the merge is doable for this service.
**What is the plan? How will this get completed?**
Discussion with Service Lead, testing, planning
**Definition of done**
- [x] identify service's SSO usage
- [x] discuss if realm merge is doable
- [x] come up with testing plan
| non_priority | sso realm merge planning vault describe the issue we will be merging the realm that vault uses to another realm in gold this ticket is to find out what s the impact and if the merge is doable for this service what is the plan how will this get completed discussion with service lead testing planning definition of done identify service s sso usage discuss if realm merge is doable come up with testing plan | 0 |
20,311 | 29,670,777,064 | IssuesEvent | 2023-06-11 11:32:48 | WordPress/gutenberg | https://api.github.com/repos/WordPress/gutenberg | closed | Editing a page that has a Spacer block with an HTML Anchor produces message “This block contains unexpected or invalid content.” | [Type] Bug [Feature] Blocks Backwards Compatibility [Type] Regression [Status] Duplicate | ### Description
Our website has pages that contain Spacer blocks with HTML Anchors defined. The site has been working for months with this design. Last edits were done in October 2022. Now when I try to edit these pages every one of these Spacer blocks shows the error message “This block contains unexpected or invalid content."
If I choose "Attempt Block Recovery" the Spacer block is restored, but without the anchor. I can then manually add the anchor back, save the page, and all is good... until some future date when a subsequent page edit shows this error again. (What I mean by 'future date' is that I encountered this problem in January 2022, rebuilt the Spacer blocks with Anchors, and everything looked good. The next round of edits in October 2022 worked fine. Then in March 2023 the Spacer Blocks with Anchors choke again when I edit these pages.)
There is no apparent difference in the HTML code generated to specify the anchor between January 2022 and now, but the editor chokes on processing the existing code, then allows adding the Anchor to the Spacer and generates the same code. My expectation is that the existing Spacer does not produce any “This block contains unexpected or invalid content" errors.
### Step-by-step reproduction instructions
Steps to reproduce:
1. Edit an existing page that contains a Spacer block with an HTML Anchor. If the page was last updated over 4–5 months ago the Spacer blocks with Anchors will trip on this error, but if the page was recently edited and saved this error won't occur.
2. Error shown when existing page is edited:

3. HTML code after "Attempt Block Recovery" chosen:

4. HTML code after recovered Spacer edited to add the same HTML Anchor that was dropped by recovery:

### Screenshots, screen recording, code snippet
Excerpt from console log showing Block validation messages:

### Environment info
WordPress Version: 6.1.1
WordPress.com Editing Toolkit 3.60227
Gutenberg 15.4.0
Firefox 111.0
Desktop with Windows 10
### Please confirm that you have searched existing issues in the repo.
Yes
### Please confirm that you have tested with all plugins deactivated except Gutenberg.
No | True | Editing a page that has a Spacer block with an HTML Anchor produces message “This block contains unexpected or invalid content.” - ### Description
Our website has pages that contain Spacer blocks with HTML Anchors defined. The site has been working for months with this design. Last edits were done in October 2022. Now when I try to edit these pages every one of these Spacer blocks shows the error message “This block contains unexpected or invalid content."
If I choose "Attempt Block Recovery" the Spacer block is restored, but without the anchor. I can then manually add the anchor back, save the page, and all is good... until some future date when a subsequent page edit shows this error again. (What I mean by 'future date' is that I encountered this problem in January 2022, rebuilt the Spacer blocks with Anchors, and everything looked good. The next round of edits in October 2022 worked fine. Then in March 2023 the Spacer Blocks with Anchors choke again when I edit these pages.)
There is no apparent difference in the HTML code generated to specify the anchor between January 2022 and now, but the editor chokes on processing the existing code, then allows adding the Anchor to the Spacer and generates the same code. My expectation is that the existing Spacer does not produce any “This block contains unexpected or invalid content" errors.
### Step-by-step reproduction instructions
Steps to reproduce:
1. Edit an existing page that contains a Spacer block with an HTML Anchor. If the page was last updated over 4–5 months ago the Spacer blocks with Anchors will trip on this error, but if the page was recently edited and saved this error won't occur.
2. Error shown when existing page is edited:

3. HTML code after "Attempt Block Recovery" chosen:

4. HTML code after recovered Spacer edited to add the same HTML Anchor that was dropped by recovery:

### Screenshots, screen recording, code snippet
Excerpt from console log showing Block validation messages:

### Environment info
WordPress Version: 6.1.1
WordPress.com Editing Toolkit 3.60227
Gutenberg 15.4.0
Firefox 111.0
Desktop with Windows 10
### Please confirm that you have searched existing issues in the repo.
Yes
### Please confirm that you have tested with all plugins deactivated except Gutenberg.
No | non_priority | editing a page that has a spacer block with an html anchor produces message “this block contains unexpected or invalid content ” description our website has pages that contain spacer blocks with html anchors defined the site has been working for months with this design last edits were done in october now when i try to edit these pages every one of these spacer blocks shows the error message “this block contains unexpected or invalid content if i choose attempt block recovery the spacer block is restored but without the anchor i can then manually add the anchor back save the page and all is good until some future date when a subsequent page edit shows this error again what i mean by future date is that i encountered this problem in january rebuilt the spacer blocks with anchors and everything looked good the next round of edits in october worked fine then in march the spacer blocks with anchors choke again when i edit these pages there is no apparent difference in the html code generated to specify the anchor between january and now but the editor chokes on processing the existing code then allows adding the anchor to the spacer and generates the same code my expectation is that the existing spacer does not produce any “this block contains unexpected or invalid content errors step by step reproduction instructions steps to reproduce edit an existing page that contains a spacer block with an html anchor if the page was last updated over – months ago the spacer blocks with anchors will trip on this error but if the page was recently edited and saved this error won t occur error shown when existing page is edited html code after attempt block recovery chosen html code after recovered spacer edited to add the same html anchor that was dropped by recovery screenshots screen recording code snippet excerpt from console log showing block validation messages environment info wordpress version wordpress com editing toolkit gutenberg firefox desktop with windows please confirm that you have searched existing issues in the repo yes please confirm that you have tested with all plugins deactivated except gutenberg no | 0 |
236,580 | 7,750,965,311 | IssuesEvent | 2018-05-30 15:40:02 | Flynrod/SpawnShield | https://api.github.com/repos/Flynrod/SpawnShield | opened | Ability to enter protection zone during combat | bug confirmed high priority | **SpawnShield version:** 2.0.8
**Description:**
When a player is tagged, he is able to enter the security zone despite the settings in the configuration file. | 1.0 | Ability to enter protection zone during combat - **SpawnShield version:** 2.0.8
**Description:**
When a player is tagged, he is able to enter the security zone despite the settings in the configuration file. | priority | ability to enter protection zone during combat spawnshield version description when a player is tagged he is able to enter the security zone despite the settings in the configuration file | 1 |
33,604 | 12,216,765,438 | IssuesEvent | 2020-05-01 15:48:21 | habusha/CIOIL | https://api.github.com/repos/habusha/CIOIL | opened | CVE-2020-5405 (Medium) detected in spring-cloud-config-client-2.0.1.RELEASE.jar | security vulnerability | ## CVE-2020-5405 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>spring-cloud-config-client-2.0.1.RELEASE.jar</b></p></summary>
<p>This project is a Spring configuration client.</p>
<p>Library home page: <a href="https://spring.io">https://spring.io</a></p>
<p>Path to dependency file: /tmp/ws-scm/CIOIL/infra_github/pom.xml</p>
<p>Path to vulnerable library: /tmp/ws-ua_20200501140025_KHMIDU/downloadResource_IFJBLS/20200501140121/spring-cloud-config-client-2.0.1.RELEASE.jar</p>
<p>
Dependency Hierarchy:
- spring-cloud-starter-config-2.0.3.RELEASE.jar (Root Library)
- :x: **spring-cloud-config-client-2.0.1.RELEASE.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/habusha/CIOIL/commit/bbaa61e2fd7a1837b81f9827e715dc8c1817cd31">bbaa61e2fd7a1837b81f9827e715dc8c1817cd31</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Spring Cloud Config, versions 2.2.x prior to 2.2.2, versions 2.1.x prior to 2.1.7, and older unsupported versions allow applications to serve arbitrary configuration files through the spring-cloud-config-server module. A malicious user, or attacker, can send a request using a specially crafted URL that can lead a directory traversal attack.
<p>Publish Date: 2020-03-05
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-5405>CVE-2020-5405</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>6.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: Required
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: High
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-5405">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-5405</a></p>
<p>Release Date: 2020-03-05</p>
<p>Fix Resolution: org.springframework.cloud:spring-cloud-config-client:2.1.7.RELEASE,2.2.2.RELEASE;org.springframework.cloud:spring-cloud-config-server:2.1.7.RELEASE,2.2.2.RELEASE</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | True | CVE-2020-5405 (Medium) detected in spring-cloud-config-client-2.0.1.RELEASE.jar - ## CVE-2020-5405 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>spring-cloud-config-client-2.0.1.RELEASE.jar</b></p></summary>
<p>This project is a Spring configuration client.</p>
<p>Library home page: <a href="https://spring.io">https://spring.io</a></p>
<p>Path to dependency file: /tmp/ws-scm/CIOIL/infra_github/pom.xml</p>
<p>Path to vulnerable library: /tmp/ws-ua_20200501140025_KHMIDU/downloadResource_IFJBLS/20200501140121/spring-cloud-config-client-2.0.1.RELEASE.jar</p>
<p>
Dependency Hierarchy:
- spring-cloud-starter-config-2.0.3.RELEASE.jar (Root Library)
- :x: **spring-cloud-config-client-2.0.1.RELEASE.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/habusha/CIOIL/commit/bbaa61e2fd7a1837b81f9827e715dc8c1817cd31">bbaa61e2fd7a1837b81f9827e715dc8c1817cd31</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Spring Cloud Config, versions 2.2.x prior to 2.2.2, versions 2.1.x prior to 2.1.7, and older unsupported versions allow applications to serve arbitrary configuration files through the spring-cloud-config-server module. A malicious user, or attacker, can send a request using a specially crafted URL that can lead a directory traversal attack.
<p>Publish Date: 2020-03-05
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-5405>CVE-2020-5405</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>6.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: Required
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: High
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-5405">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-5405</a></p>
<p>Release Date: 2020-03-05</p>
<p>Fix Resolution: org.springframework.cloud:spring-cloud-config-client:2.1.7.RELEASE,2.2.2.RELEASE;org.springframework.cloud:spring-cloud-config-server:2.1.7.RELEASE,2.2.2.RELEASE</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | non_priority | cve medium detected in spring cloud config client release jar cve medium severity vulnerability vulnerable library spring cloud config client release jar this project is a spring configuration client library home page a href path to dependency file tmp ws scm cioil infra github pom xml path to vulnerable library tmp ws ua khmidu downloadresource ifjbls spring cloud config client release jar dependency hierarchy spring cloud starter config release jar root library x spring cloud config client release jar vulnerable library found in head commit a href vulnerability details spring cloud config versions x prior to versions x prior to and older unsupported versions allow applications to serve arbitrary configuration files through the spring cloud config server module a malicious user or attacker can send a request using a specially crafted url that can lead a directory traversal attack publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction required scope unchanged impact metrics confidentiality impact none integrity impact high availability impact none for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution org springframework cloud spring cloud config client release release org springframework cloud spring cloud config server release release step up your open source security game with whitesource | 0 |
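The CVE-2020-5405 record above describes a directory-traversal flaw: a specially crafted URL can make spring-cloud-config-server return files outside its configuration directory. Spring Cloud Config is Java, so the snippet below is only a language-neutral sketch of the bug class and of the usual mitigation (resolve the requested path and verify it still sits under the allowed base directory); the directory and file names are invented for illustration.

```python
from pathlib import Path

BASE_DIR = Path("/var/config-repo").resolve()  # hypothetical config root

def read_config(requested: str) -> bytes:
    """Return a config file, refusing paths that escape BASE_DIR."""
    candidate = (BASE_DIR / requested).resolve()
    # A crafted request such as "../../etc/passwd" (possibly URL-encoded on the
    # wire) resolves to a path outside BASE_DIR and is rejected here.
    if BASE_DIR not in candidate.parents and candidate != BASE_DIR:
        raise PermissionError(f"path traversal blocked: {requested!r}")
    return candidate.read_bytes()
```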
22,881 | 2,651,012,327 | IssuesEvent | 2015-03-16 07:54:09 | ilgrosso/oldSyncopeIdM | https://api.github.com/repos/ilgrosso/oldSyncopeIdM | closed | Virtual attribute cache | 1 star Component-Logic Component-Persistence duplicate enhancement imported Priority-High Release-Soave-2.1 | _From [fabio.ma...@gmail.com](https://code.google.com/u/109095430973973917901/) on January 18, 2012 11:03:48_
Provide a simple cache for virtual attribute values in order to avoid querying external resources every time.
_Original issue: http://code.google.com/p/syncope/issues/detail?id=276_ | 1.0 | Virtual attribute cache - _From [fabio.ma...@gmail.com](https://code.google.com/u/109095430973973917901/) on January 18, 2012 11:03:48_
Provide a simple cache for virtual attribute values in order to avoid querying external resources every time.
_Original issue: http://code.google.com/p/syncope/issues/detail?id=276_ | priority | virtual attribute cache from on january provide a simple cache for virtual attribute values in order to avoid to query external resources every time original issue | 1 |
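The Syncope request above asks for a simple cache so virtual attribute values are not fetched from the external resource on every access. Syncope is a Java project; the sketch below only illustrates the general idea, a small time-to-live cache in front of an expensive lookup, with the fetch function and TTL chosen arbitrarily.

```python
import time

class TTLCache:
    """Minimal time-to-live cache for expensive attribute lookups."""

    def __init__(self, fetch, ttl_seconds=60):
        self._fetch = fetch      # function that queries the external resource
        self._ttl = ttl_seconds
        self._store = {}         # key -> (value, expiry timestamp)

    def get(self, key):
        hit = self._store.get(key)
        if hit is not None and hit[1] > time.monotonic():
            return hit[0]        # still fresh: no external query
        value = self._fetch(key)  # slow path: query the external resource
        self._store[key] = (value, time.monotonic() + self._ttl)
        return value

# Hypothetical usage: wrap whatever function actually talks to the connector.
cache = TTLCache(lambda attr: f"value-of-{attr}", ttl_seconds=30)
print(cache.get("mail"))  # fetches from the "external resource"
print(cache.get("mail"))  # served from the cache
```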
62,982 | 3,193,774,623 | IssuesEvent | 2015-09-30 08:11:04 | fusioninventory/fusioninventory-for-glpi | https://api.github.com/repos/fusioninventory/fusioninventory-for-glpi | closed | Add header "server-type" for answer of agent | Category: Communication Component: For junior contributor Priority: Normal Status: Closed Tracker: Feature | ---
Author Name: **David Durieux** (@ddurieux)
Original Redmine Issue: 1406, http://forge.fusioninventory.org/issues/1406
Original Date: 2011-12-18
Original Assignee: David Durieux
---
Like return header :
```
server-type: glpi/fusioninventory 0.83+1.0
```
| 1.0 | Add header "server-type" for answer of agent - ---
Author Name: **David Durieux** (@ddurieux)
Original Redmine Issue: 1406, http://forge.fusioninventory.org/issues/1406
Original Date: 2011-12-18
Original Assignee: David Durieux
---
Like return header :
```
server-type: glpi/fusioninventory 0.83+1.0
```
| priority | add header server type for answer of agent author name david durieux ddurieux original redmine issue original date original assignee david durieux like return header server type glpi fusioninventory | 1 |
400,259 | 11,771,273,650 | IssuesEvent | 2020-03-15 23:09:14 | GarkGarcia/icon-pie | https://api.github.com/repos/GarkGarcia/icon-pie | reopened | Freedesktop icon theme support | enhancement priority | If I understand correctly:
- IconPie is currently geared towards generating several icons at once into container files;
- the 'entry' term refers to an icon from within the container file.
Is that so ?
What about icon sets that are stored as individual files in Linux distribution packages and follow the [Freedesktop icon theme specification](https://specifications.freedesktop.org/icon-theme-spec/icon-theme-spec-latest.html) ?
For instance, in my machine, if I look for Kate-related icons into the `breeze-icon-theme` package by typing `dpkg -L breeze-icon-theme|rg kate`, I get
```
/usr/share/icons/breeze/apps/16/kate.svg
/usr/share/icons/breeze/apps/22/kate.svg
/usr/share/icons/breeze/apps/32/kate.svg
/usr/share/icons/breeze/apps/48/kate.svg
/usr/share/icons/breeze/apps/64/kate.svg
/usr/share/icons/breeze/mimetypes/16/text-x-katefilelist.svg
/usr/share/icons/breeze/mimetypes/22/text-x-katefilelist.svg
/usr/share/icons/breeze/mimetypes/32/text-x-katefilelist.svg
/usr/share/icons/breeze/mimetypes/64/text-x-katefilelist.svg
/usr/share/icons/breeze-dark/apps/16/kate.svg
/usr/share/icons/breeze-dark/apps/22/kate.svg
/usr/share/icons/breeze-dark/apps/32/kate.svg
/usr/share/icons/breeze-dark/apps/48/kate.svg
/usr/share/icons/breeze-dark/apps/64/kate.svg
/usr/share/icons/breeze-dark/mimetypes/16/text-x-katefilelist.svg
/usr/share/icons/breeze-dark/mimetypes/22/text-x-katefilelist.svg
/usr/share/icons/breeze-dark/mimetypes/32/text-x-katefilelist.svg
/usr/share/icons/breeze-dark/mimetypes/64/text-x-katefilelist.svg
```
Did you consider such usage scenario ? Can IconPie be used within it ?
Also, note that the Freedesktop specification talks about "icon files" and "icon themes", and never about "entries". As a linuxian, the term "entry" is completely unknown to me. | 1.0 | Freedesktop icon theme support - If I understand correctly:
- IconPie is currently geared towards generating several icons at once into container files;
- the 'entry' term refers to an icon from within the container file.
Is that so ?
What about icon sets that are stored as individual files in Linux distribution packages and follow the [Freedesktop icon theme specification](https://specifications.freedesktop.org/icon-theme-spec/icon-theme-spec-latest.html) ?
For instance, in my machine, if I look for Kate-related icons into the `breeze-icon-theme` package by typing `dpkg -L breeze-icon-theme|rg kate`, I get
```
/usr/share/icons/breeze/apps/16/kate.svg
/usr/share/icons/breeze/apps/22/kate.svg
/usr/share/icons/breeze/apps/32/kate.svg
/usr/share/icons/breeze/apps/48/kate.svg
/usr/share/icons/breeze/apps/64/kate.svg
/usr/share/icons/breeze/mimetypes/16/text-x-katefilelist.svg
/usr/share/icons/breeze/mimetypes/22/text-x-katefilelist.svg
/usr/share/icons/breeze/mimetypes/32/text-x-katefilelist.svg
/usr/share/icons/breeze/mimetypes/64/text-x-katefilelist.svg
/usr/share/icons/breeze-dark/apps/16/kate.svg
/usr/share/icons/breeze-dark/apps/22/kate.svg
/usr/share/icons/breeze-dark/apps/32/kate.svg
/usr/share/icons/breeze-dark/apps/48/kate.svg
/usr/share/icons/breeze-dark/apps/64/kate.svg
/usr/share/icons/breeze-dark/mimetypes/16/text-x-katefilelist.svg
/usr/share/icons/breeze-dark/mimetypes/22/text-x-katefilelist.svg
/usr/share/icons/breeze-dark/mimetypes/32/text-x-katefilelist.svg
/usr/share/icons/breeze-dark/mimetypes/64/text-x-katefilelist.svg
```
Did you consider such usage scenario ? Can IconPie be used within it ?
Also, note that the Freedesktop specification talks about "icon files" and "icon themes", and never about "entries". As a linuxian, the term "entry" is completely unknown to me. | priority | freedesktop icon theme support if i understand correctly iconpie is currently geared towards generating several icons at into container files the entry term refers to an icon from within the container file is that so what about icon sets that are stored as individual files in linux distribution packages and follow the for instance in my machine if i look for kate related icons into the breeze icon theme package by typing dpkg l breeze icon theme rg kate i get usr share icons breeze apps kate svg usr share icons breeze apps kate svg usr share icons breeze apps kate svg usr share icons breeze apps kate svg usr share icons breeze apps kate svg usr share icons breeze mimetypes text x katefilelist svg usr share icons breeze mimetypes text x katefilelist svg usr share icons breeze mimetypes text x katefilelist svg usr share icons breeze mimetypes text x katefilelist svg usr share icons breeze dark apps kate svg usr share icons breeze dark apps kate svg usr share icons breeze dark apps kate svg usr share icons breeze dark apps kate svg usr share icons breeze dark apps kate svg usr share icons breeze dark mimetypes text x katefilelist svg usr share icons breeze dark mimetypes text x katefilelist svg usr share icons breeze dark mimetypes text x katefilelist svg usr share icons breeze dark mimetypes text x katefilelist svg did you consider such usage scenario can iconpie be used within it also note that the freedesktop specification talks about icon files and icon themes and never about entries as a linuxian the term entry is completely unknown to me | 1 |
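The report above shows how Freedesktop-style themes store one file per icon size under /usr/share/icons/<theme>/<context>/<size>/. As a rough illustration of how such icons can be located, here is a much simplified version of the spec's lookup idea (it ignores index.theme, theme inheritance and scale factors, and the theme/icon names are just examples):

```python
from pathlib import Path

ICON_ROOT = Path("/usr/share/icons")

def find_icon(name: str, theme: str = "breeze", size: int = 48):
    """Return the first matching icon path, or None if the theme/icon is absent."""
    theme_dir = ICON_ROOT / theme
    if not theme_dir.is_dir():
        return None
    # Themes organise icons as <theme>/<context>/<size>/<name>.<ext>
    # (some use <size>x<size>); try the requested size first, then anything.
    for pattern in (f"*/{size}/{name}.*", f"*/{size}x{size}/{name}.*", f"*/*/{name}.*"):
        for candidate in sorted(theme_dir.glob(pattern)):
            return candidate
    return None

print(find_icon("kate"))  # e.g. /usr/share/icons/breeze/apps/48/kate.svg on a KDE system
```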
2,109 | 2,697,578,893 | IssuesEvent | 2015-04-02 20:49:58 | SleepyTrousers/EnderIO | https://api.github.com/repos/SleepyTrousers/EnderIO | closed | Insert and Export pointers not rendering | bug Code Complete | I am not sure if it is the current version of enderio or cofh updates but the arrows no longer render correctly on the conduits. Must view from weird angles to see them. I noticed the issue on direwolf20 forgecraft video today as well | 1.0 | Insert and Export pointers not rendering - I am not sure if it is the current version of enderio or cofh updates but the arrows no longer render correctly on the conduits. Must view from weird angles to see them. I noticed the issue on direwolf20 forgecraft video today as well | non_priority | insert and export pointers not rendering i am not sure if it is the current version of enderio or cofh updates but the arrows no longer render correctly on the conduits must view from weird angles to see them i noticed the issue on forgecraft video today as well | 0 |
598,710 | 18,250,675,389 | IssuesEvent | 2021-10-02 06:22:23 | FantasticoFox/VerifyPage | https://api.github.com/repos/FantasticoFox/VerifyPage | closed | Only show numbers of revision of the page file in question instead of backend rev_id's | medium priority feature UX | 
The backend revision IDs are only important for debugging purposes (so best is to have a debugging option to enable it). Useful for the user is to understand how many revisions the local file has. | 1.0 | Only show numbers of revision of the page file in question instead of backend rev_id's - 
The backend revision IDs are only important for debugging purposes (so best is to have a debugging option to enable it). Useful for the user is to understand how many revisions the local file has. | priority | only show numbers of revision of the page file in question instead of backend rev id s the backend revision id s are only important for debugging purpose so best is to have a debugging option to enable it useful for the user is to understand how many revisions the local file has | 1 |
32,221 | 12,097,387,710 | IssuesEvent | 2020-04-20 08:32:18 | geea-develop/aurelia-datepicker-range-sample | https://api.github.com/repos/geea-develop/aurelia-datepicker-range-sample | opened | CVE-2016-10540 (High) detected in minimatch-0.3.0.tgz | security vulnerability | ## CVE-2016-10540 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>minimatch-0.3.0.tgz</b></p></summary>
<p>a glob matcher in javascript</p>
<p>Library home page: <a href="https://registry.npmjs.org/minimatch/-/minimatch-0.3.0.tgz">https://registry.npmjs.org/minimatch/-/minimatch-0.3.0.tgz</a></p>
<p>Path to dependency file: /tmp/ws-scm/aurelia-datepicker-range-sample/package.json</p>
<p>Path to vulnerable library: /tmp/ws-scm/aurelia-datepicker-range-sample/node_modules/jasmine/node_modules/minimatch/package.json</p>
<p>
Dependency Hierarchy:
- gulp-protractor-3.0.0.tgz (Root Library)
- protractor-4.0.14.tgz
- jasmine-2.4.1.tgz
- glob-3.2.11.tgz
- :x: **minimatch-0.3.0.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/geea-develop/aurelia-datepicker-range-sample/commit/ec0eb9be8378bd03b51d781ec77956115289c030">ec0eb9be8378bd03b51d781ec77956115289c030</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Minimatch is a minimal matching utility that works by converting glob expressions into JavaScript `RegExp` objects. The primary function, `minimatch(path, pattern)` in Minimatch 3.0.1 and earlier is vulnerable to ReDoS in the `pattern` parameter.
<p>Publish Date: 2018-05-31
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2016-10540>CVE-2016-10540</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://nodesecurity.io/advisories/118">https://nodesecurity.io/advisories/118</a></p>
<p>Release Date: 2016-06-20</p>
<p>Fix Resolution: Update to version 3.0.2 or later.</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | True | CVE-2016-10540 (High) detected in minimatch-0.3.0.tgz - ## CVE-2016-10540 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>minimatch-0.3.0.tgz</b></p></summary>
<p>a glob matcher in javascript</p>
<p>Library home page: <a href="https://registry.npmjs.org/minimatch/-/minimatch-0.3.0.tgz">https://registry.npmjs.org/minimatch/-/minimatch-0.3.0.tgz</a></p>
<p>Path to dependency file: /tmp/ws-scm/aurelia-datepicker-range-sample/package.json</p>
<p>Path to vulnerable library: /tmp/ws-scm/aurelia-datepicker-range-sample/node_modules/jasmine/node_modules/minimatch/package.json</p>
<p>
Dependency Hierarchy:
- gulp-protractor-3.0.0.tgz (Root Library)
- protractor-4.0.14.tgz
- jasmine-2.4.1.tgz
- glob-3.2.11.tgz
- :x: **minimatch-0.3.0.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/geea-develop/aurelia-datepicker-range-sample/commit/ec0eb9be8378bd03b51d781ec77956115289c030">ec0eb9be8378bd03b51d781ec77956115289c030</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Minimatch is a minimal matching utility that works by converting glob expressions into JavaScript `RegExp` objects. The primary function, `minimatch(path, pattern)` in Minimatch 3.0.1 and earlier is vulnerable to ReDoS in the `pattern` parameter.
<p>Publish Date: 2018-05-31
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2016-10540>CVE-2016-10540</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://nodesecurity.io/advisories/118">https://nodesecurity.io/advisories/118</a></p>
<p>Release Date: 2016-06-20</p>
<p>Fix Resolution: Update to version 3.0.2 or later.</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | non_priority | cve high detected in minimatch tgz cve high severity vulnerability vulnerable library minimatch tgz a glob matcher in javascript library home page a href path to dependency file tmp ws scm aurelia datepicker range sample package json path to vulnerable library tmp ws scm aurelia datepicker range sample node modules jasmine node modules minimatch package json dependency hierarchy gulp protractor tgz root library protractor tgz jasmine tgz glob tgz x minimatch tgz vulnerable library found in head commit a href vulnerability details minimatch is a minimal matching utility that works by converting glob expressions into javascript regexp objects the primary function minimatch path pattern in minimatch and earlier is vulnerable to redos in the pattern parameter publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution update to version or later step up your open source security game with whitesource | 0 |
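The advisory above concerns catastrophic backtracking (ReDoS) when minimatch compiles a hostile glob pattern into a RegExp. minimatch is JavaScript; the snippet below only demonstrates the same failure mode with Python's re module, using the classic nested-quantifier pattern `(a+)+$`, so the exponential growth can be observed safely on small inputs.

```python
import re
import time

pattern = re.compile(r"(a+)+$")  # nested quantifiers: exponential backtracking on failure

for n in (14, 16, 18, 20):
    text = "a" * n + "!"         # the trailing "!" forces the match to fail
    start = time.perf_counter()
    pattern.match(text)
    print(f"n={n}: {time.perf_counter() - start:.4f}s")

# The time grows exponentially with n; a long enough input would effectively hang
# the process, which is why user-controlled patterns need limits or a safer matcher.
```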
352,463 | 25,068,877,662 | IssuesEvent | 2022-11-07 10:30:31 | eclipse-dataspaceconnector/docs | https://api.github.com/repos/eclipse-dataspaceconnector/docs | opened | Add Trust Framework Adoption repo to documentation | documentation | # Feature Request
Add Trust Framework Adoption repo as submodule.
## Why Is the Feature Desired?
To also link the provided documentation there.
## Solution Proposal
Add submodule, link it in sidebar.
## Type of Issue
improvement
## Checklist
- [x] assigned appropriate label?
- [x] **Do NOT select a milestone or an assignee!**
| 1.0 | Add Trust Framework Adoption repo to documentation - # Feature Request
Add Trust Framework Adoption repo as submodule.
## Why Is the Feature Desired?
To also link the provided documentation there.
## Solution Proposal
Add submodule, link it in sidebar.
## Type of Issue
improvement
## Checklist
- [x] assigned appropriate label?
- [x] **Do NOT select a milestone or an assignee!**
| non_priority | add trust framework adoption repo to documentation feature request add trust framework adoption repo as submodule why is the feature desired to also link the provided documentation there solution proposal add submodule link it in sidebar type of issue improvement checklist assigned appropriate label do not select a milestone or an assignee | 0 |
758,414 | 26,554,525,336 | IssuesEvent | 2023-01-20 10:48:54 | OffchainLabs/arb-token-bridge | https://api.github.com/repos/OffchainLabs/arb-token-bridge | opened | decouple `useTransactions` from `useArbTokenBridge` | Priority: P2 Low Type: Refactoring | At the moment, all of the state and business logic we need for the Bridge UI is kept inside `useArbTokenBridge`. However, we are slowly working towards more modular refactor that will leave us with a couple of stateless methods (like `depositETH`, `withdrawToken`) and other utilities for keeping state when needed (like `useBalance`).
One big part of that refactor would be to not have `useTransactions` used within `useArbTokenBridge`, but have them be used in the UI independently of each other. We should be able to move `useTransaction` away into the UI, and have it populate through the `txLifecycle` callbacks on different `useArbTokenBridge` methods.
_Originally created by @spsjvc_ | 1.0 | decouple `useTransactions` from `useArbTokenBridge` - At the moment, all of the state and business logic we need for the Bridge UI is kept inside `useArbTokenBridge`. However, we are slowly working towards more modular refactor that will leave us with a couple of stateless methods (like `depositETH`, `withdrawToken`) and other utilities for keeping state when needed (like `useBalance`).
One big part of that refactor would be to not have `useTransactions` used within `useArbTokenBridge`, but have them be used in the UI independently of each other. We should be able to move `useTransaction` away into the UI, and have it populate through the `txLifecycle` callbacks on different `useArbTokenBridge` methods.
_Originally created by @spsjvc_ | priority | decouple usetransactions from usearbtokenbridge at the moment all of the state and business logic we need for the bridge ui is kept inside usearbtokenbridge however we are slowly working towards more modular refactor that will leave us with a couple of stateless methods like depositeth withdrawtoken and other utilities for keeping state when needed like usebalance one big part of that refactor would be to not have usetransactions used within usearbtokenbridge but have them be used in the ui independently of each other we should be able to move usetransaction away into the ui and have it populate through the txlifecycle callbacks on different usearbtokenbridge methods originally created by spsjvc | 1 |
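The refactoring note above wants stateless bridge methods that report progress through `txLifecycle` callbacks, so transaction bookkeeping can live in the UI instead of inside the hook. The actual code base is TypeScript/React; the sketch below only illustrates the callback-based decoupling pattern in generic Python, with invented names and a fake transaction hash.

```python
def deposit_eth(amount, tx_lifecycle):
    """Stateless helper: it reports events through callbacks instead of storing them."""
    tx_hash = f"0x{abs(hash(('deposit', amount))):x}"  # stand-in for a real submission hash
    tx_lifecycle["on_submitted"](tx_hash, amount)
    # ... a real implementation would wait for on-chain confirmation here ...
    tx_lifecycle["on_confirmed"](tx_hash)

# The caller (playing the role of the UI) owns the transaction list.
transactions = []

def on_submitted(tx_hash, amount):
    transactions.append({"hash": tx_hash, "amount": amount, "status": "pending"})

def on_confirmed(tx_hash):
    for tx in transactions:
        if tx["hash"] == tx_hash:
            tx["status"] = "confirmed"

deposit_eth(1.5, {"on_submitted": on_submitted, "on_confirmed": on_confirmed})
print(transactions)
```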
671,022 | 22,738,969,675 | IssuesEvent | 2022-07-07 00:26:50 | PolyhedralDev/TerraOverworldConfig | https://api.github.com/repos/PolyhedralDev/TerraOverworldConfig | opened | Global heightmap refactor | enhancement priority=medium major | Rework all terrain to use a global height map, rather than determining general height via biome distribution. This will make height variation look significantly better as terrain won't need to be interpolated so much. Biome specific detailing can be done by different EQs that utilize the heightmap in different ways.
Here are some examples of an early implementation



| 1.0 | Global heightmap refactor - Rework all terrain to use a global height map, rather than determining general height via biome distribution. This will make height variation look significantly better as terrain won't need to be interpolated so much. Biome specific detailing can be done by different EQs that utilize the heightmap in different ways.
Here are some examples of an early implementation



| priority | global heightmap refactor rework all terrain to use a global height map rather than determining general height via biome distribution this will make height variation look significantly better as terrain won t need to be interpolated so much biome specific detailing can be done by different eqs that utilize the heightmap in different ways here are some examples of an early implementation | 1 |
517,587 | 15,016,585,071 | IssuesEvent | 2021-02-01 09:46:35 | ngageoint/hootenanny | https://api.github.com/repos/ngageoint/hootenanny | opened | Check and update the TDSv7 schema and translation | Category: Core Category: Translation Priority: High Status: In Progress Type: Bug Type: Maintenance Type: Support | Reconcile the differences between the published schema and the implementation.
Treat the implementation as the "standard" | 1.0 | Check and update the TDSv7 schema and translation - Reconcile the differences between the published schema and the implementation.
Treat the implementation as the "standard" | priority | check and update the schema and translation reconcile the differences between the published schema and the implementation treat the implementation as the standard | 1 |
171,442 | 13,233,560,455 | IssuesEvent | 2020-08-18 14:58:22 | RedHatInsights/tower-analytics-frontend | https://api.github.com/repos/RedHatInsights/tower-analytics-frontend | closed | Upgrade to PatternFly4 | needs_test | # Description
The C.R.C platform team wants all apps to upgrade to PatternFly4 by July 13th so they can upgrade the chrome to PatternFly4 then.
- [x] Test app with PatternFly4
- [x] Determine what changes need to be made
- [x] Make those changes
- [x] Verify changes | 1.0 | Upgrade to PatternFly4 - # Description
The C.R.C platform team wants all apps to upgrade to PatternFly4 by July 13th so they can upgrade the chrome to PatternFly4 then.
- [x] Test app with PatternFly4
- [x] Determine what changes need to be made
- [x] Make those changes
- [x] Verify changes | non_priority | upgrade to description the c r c platform team wants all apps to upgrade to by july so they can upgrade the chrome to then test app with determine what changes need to be made make those changes verify changes | 0 |
717,657 | 24,685,708,851 | IssuesEvent | 2022-10-19 03:09:23 | authelia/authelia | https://api.github.com/repos/authelia/authelia | closed | Add matcher for query arguments in ACL rules | priority/4/normal type/feature | ## Bug Report
### Description
I am attempting to restrict access to a resource which, due to the nature of the application, requires me to match a GET argument. Now the first ACL ruleset I sketched was the following:
```
- domain: app.example.com
subject: "group:Admins"
resources:
- "^/path/to/\\?key=sensitive-page"
- domain: app.example.com
policy: deny
resources:
- "^/path/to/\\?key=sensitive-page"
- domain: app.example.com
policy: bypass
```
Now the problem is that user can manipulate the order of GET arguments so first two rules are not matched, and `key=sensitive-page` resource would be matched by the last ACL rule.
### Expected Behaviour
I think the only realistic option here is to allow Authelia to parse and sanitize the GET arguments and have a separate section for matching GET arguments, like this:
```
- domain: app.example.com
subject: "group:Admins"
arguments:
key: "^sensitive-page"
```
Expecting Authelia admin to come up with all the various ways URL can be constructed is not realistic.
| 1.0 | Add matcher for query arguments in ACL rules - ## Bug Report
### Description
I am attempting to restrict access to a resource which, due to the nature of the application, requires me to match a GET argument. Now the first ACL ruleset I sketched was the following:
```
- domain: app.example.com
subject: "group:Admins"
resources:
- "^/path/to/\\?key=sensitive-page"
- domain: app.example.com
policy: deny
resources:
- "^/path/to/\\?key=sensitive-page"
- domain: app.example.com
policy: bypass
```
Now the problem is that user can manipulate the order of GET arguments so first two rules are not matched, and `key=sensitive-page` resource would be matched by the last ACL rule.
### Expected Behaviour
I think the only realistic option here is to allow Authelia to parse and sanitize the GET arguments and have a separate section for matching GET arguments, like this:
```
- domain: app.example.com
subject: "group:Admins"
arguments:
key: "^sensitive-page"
```
Expecting Authelia admin to come up with all the various ways URL can be constructed is not realistic.
| priority | add matcher for query arguments in acl rules bug report description i am attempting to restrict access to resource which due to the nature of application requires me to match an get argument now the first acl ruleset i sketched was following domain app example com subject group admins resources path to key sensitive page domain app example com policy deny resources path to key sensitive page domain app example com policy bypass now the problem is that user can manipulate the order of get arguments so first two rules are not matched and key sensitive page resource would be matched by the last acl rule expected behaviour i think the only realistic option here is to allow authelia to parse and sanitize the get arguments and have separate section for matching get arguments a a like this domain app example com subject group admins arguments key sensitive page expecting authelia admin to come up with all the various ways url can be constructed is not realistic | 1 |
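The request above points out that a regex over the raw query string (`^/path/to/\?key=sensitive-page`) stops matching as soon as the client reorders or adds GET parameters, while the final bypass rule still matches, so the sensitive page leaks through. Below is a small Python sketch of the difference between matching the raw URL and matching parsed arguments; this is not Authelia's code, just an illustration of why per-argument matching is more robust.

```python
import re
from urllib.parse import urlsplit, parse_qs

RAW_RULE = re.compile(r"^/path/to/\?key=sensitive-page")

def raw_match(url: str) -> bool:
    return RAW_RULE.match(url) is not None

def argument_match(url: str, key: str, value_pattern: str) -> bool:
    parts = urlsplit(url)
    values = parse_qs(parts.query).get(key, [])
    return parts.path == "/path/to/" and any(re.match(value_pattern, v) for v in values)

for url in ("/path/to/?key=sensitive-page",
            "/path/to/?other=1&key=sensitive-page"):   # reordered/extra argument
    print(url, raw_match(url), argument_match(url, "key", r"^sensitive-page"))

# The raw regex misses the second URL; the parsed-argument check still catches it.
```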
477,786 | 13,768,462,831 | IssuesEvent | 2020-10-07 17:07:25 | pringyy/Individual-Project | https://api.github.com/repos/pringyy/Individual-Project | opened | Brain storm different story ideas for MMO | Low Priority research | This is not high priority right now, but for this game to have a story I need to brain storm different ideas of how players can progress to different points in the game so they are working towards something. This is key as if players are working to complete something and progress they can communicate and help each other, therefore helping develop communication skills. | 1.0 | Brain storm different story ideas for MMO - This is not high priority right now, but for this game to have a story I need to brain storm different ideas of how players can progress to different points in the game so they are working towards something. This is key as if players are working to complete something and progress they can communicate and help each other, therefore helping develop communication skills. | priority | brain storm different story ideas for mmo this is not high priority right now but for this game to have a story i need to brain storm different ideas of how players can progress to different points in the game so they are working towards something this is key as if players are working to complete something and progress they can communicate and help each other therefore helping develop communication skills | 1 |
121,298 | 12,122,105,364 | IssuesEvent | 2020-04-22 10:23:11 | process-analytics/bpmn-visualization-js | https://api.github.com/repos/process-analytics/bpmn-visualization-js | closed | [TEST] Update the tests of BpmnXmlParser | documentation infra:refactoring | Replace the existing tests of BpmnXmlParser.
For the Xml Parser, we don't want to verify if we can convert all xml objects in json; but, if we can convert the BPMN files from different [vendors](https://github.com/bpmn-miwg/bpmn-miwg-test-suite) & the [BPMN Model Interchange Working Group](https://github.com/bpmn-miwg/bpmn-miwg-test-suite/tree/master/Reference).
We need to verify if we can convert correctly:
- the attributes of the XML object with special characters (French, Japanese..., line endings...)
- a process with different elements & participants (no need all) with different namespaces corresponding to the different vendors
- the numbers as a Number
- the booleans as Boolean
Also update the `bpmn-support-how-to.adoc` documentation: no need to create specific XML tests when adding new bpmn elements support | 1.0 | [TEST] Update the tests of BpmnXmlParser - Replace the existing tests of BpmnXmlParser.
For the Xml Parser, we don't want to verify if we can convert all xml objects in json; but, if we can convert the BPMN files from different [vendors](https://github.com/bpmn-miwg/bpmn-miwg-test-suite) & the [BPMN Model Interchange Working Group](https://github.com/bpmn-miwg/bpmn-miwg-test-suite/tree/master/Reference).
We need to verify if we can convert correctly:
- the attributes of the XML object with special characters (French, Japanese..., line endings...)
- a process with different elements & participants (no need all) with different namespaces corresponding to the different vendors
- the numbers as a Number
- the booleans as Boolean
Also update the `bpmn-support-how-to.adoc` documentation: no need to create specific XML tests when adding new bpmn elements support | non_priority | update the tests of bpmnxmlparser replace the existing tests of bpmnxmlparser for the xml parser we don t want to verify if we can convert all xml objects in json but if we can convert the bpmn files from different the we need to verify if we can convert correctly the attributes of the xml object with special charaters french japan line end a process with different elements participants no need all with different namespaces corresponding to the different vendors the numbers as a number the booleans as boolean also update the bpmn support how to adoc documentation no need to create specific xml tests when adding new bpmn elements support | 0 |
111,617 | 14,114,126,011 | IssuesEvent | 2020-11-07 14:36:05 | thomasmichaelwallace/another-moonshot | https://api.github.com/repos/thomasmichaelwallace/another-moonshot | closed | Game title and pitch | design | Before getting started, I need to:
- Outline an elevator pitch that'll guide the rest of the game development
- Come up with a working title to match | 1.0 | Game title and pitch - Before getting started, I need to:
- Outline an elevator pitch that'll guide the rest of the game development
- Come up with a working title to match | non_priority | game title and pitch before getting started i need to outline an elevator pitch that ll guide the rest of the game development come up with a working title to match | 0 |
79,273 | 10,115,405,421 | IssuesEvent | 2019-07-30 21:41:32 | magento-research/pwa-studio | https://api.github.com/repos/magento-research/pwa-studio | closed | [doc]: Migration banner | docs documentation pkg:pwa-devdocs | **Describe the request**
Since the pwa-studio repository is moving to another org, the URL for the docs site will change and visitors will get a 404 when visiting any old links.
**Possible solutions**
A banner needs to be put up as soon as possible to inform visitors of this change to mitigate the confusion.
**Screenshots**
n/a
| 1.0 | [doc]: Migration banner - **Describe the request**
Since the pwa-studio repository is moving to another org, the URL for the docs site will change and visitors will get a 404 when visiting any old links.
**Possible solutions**
A banner needs to be put up as soon as possible to inform visitors of this change to mitigate the confusion.
**Screenshots**
n/a
| non_priority | migration banner describe the request since the pwa studio repository is moving to another org the url for the docs site will change and visitors will get a when visiting any old links possible solutions a banner needs to be put up as soon as possible to inform visitors of this change to mitigate the confusion screenshots n a | 0 |
51,584 | 6,180,743,740 | IssuesEvent | 2017-07-03 07:07:15 | dotnet/corefx | https://api.github.com/repos/dotnet/corefx | opened | System.IO.Tests.FileInfo_Delete failed with Xunit.Sdk.TrueException in CI | area-System.IO test-run-core | failed test: System.IO.Tests.FileInfo_Delete.Unix_ExistingDirectory_ReadOnlyVolume
detail: https://ci.dot.net/job/dotnet_corefx/job/master/job/outerloop_netcoreapp_ubuntu14.04_debug/95/testReport/System.IO.Tests/FileInfo_Delete/Unix_ExistingDirectory_ReadOnlyVolume/
MESSAGE:
~~~
Assert.True() Failure
Expected: True
Actual: False
~~~
STACK TRACE:
~~~
at System.IO.Tests.FileSystemTest.RunAsSudo(String commandLine) in /mnt/j/workspace/dotnet_corefx/master/outerloop_netcoreapp_ubuntu14.04_debug/src/System.IO.FileSystem/tests/FileSystemTest.cs:line 69
at System.IO.Tests.FileSystemTest.ReadOnly_FileSystemHelper(Action`1 testAction, String subDirectoryName) in /mnt/j/workspace/dotnet_corefx/master/outerloop_netcoreapp_ubuntu14.04_debug/src/System.IO.FileSystem/tests/FileSystemTest.cs:line 108
at System.IO.Tests.File_Delete.Unix_ExistingDirectory_ReadOnlyVolume() in /mnt/j/workspace/dotnet_corefx/master/outerloop_netcoreapp_ubuntu14.04_debug/src/System.IO.FileSystem/tests/File/Delete.cs:line 144
~~~ | 1.0 | System.IO.Tests.FileInfo_Delete failed with Xunit.Sdk.TrueException in CI - failed test: System.IO.Tests.FileInfo_Delete.Unix_ExistingDirectory_ReadOnlyVolume
detail: https://ci.dot.net/job/dotnet_corefx/job/master/job/outerloop_netcoreapp_ubuntu14.04_debug/95/testReport/System.IO.Tests/FileInfo_Delete/Unix_ExistingDirectory_ReadOnlyVolume/
MESSAGE:
~~~
Assert.True() Failure
Expected: True
Actual: False
~~~
STACK TRACE:
~~~
at System.IO.Tests.FileSystemTest.RunAsSudo(String commandLine) in /mnt/j/workspace/dotnet_corefx/master/outerloop_netcoreapp_ubuntu14.04_debug/src/System.IO.FileSystem/tests/FileSystemTest.cs:line 69
at System.IO.Tests.FileSystemTest.ReadOnly_FileSystemHelper(Action`1 testAction, String subDirectoryName) in /mnt/j/workspace/dotnet_corefx/master/outerloop_netcoreapp_ubuntu14.04_debug/src/System.IO.FileSystem/tests/FileSystemTest.cs:line 108
at System.IO.Tests.File_Delete.Unix_ExistingDirectory_ReadOnlyVolume() in /mnt/j/workspace/dotnet_corefx/master/outerloop_netcoreapp_ubuntu14.04_debug/src/System.IO.FileSystem/tests/File/Delete.cs:line 144
~~~ | non_priority | system io tests fileinfo delete failed with xunit sdk trueexception in ci failed test system io tests fileinfo delete unix existingdirectory readonlyvolume detail message assert true failure nexpected true nactual false stack trace at system io tests filesystemtest runassudo string commandline in mnt j workspace dotnet corefx master outerloop netcoreapp debug src system io filesystem tests filesystemtest cs line at system io tests filesystemtest readonly filesystemhelper action testaction string subdirectoryname in mnt j workspace dotnet corefx master outerloop netcoreapp debug src system io filesystem tests filesystemtest cs line at system io tests file delete unix existingdirectory readonlyvolume in mnt j workspace dotnet corefx master outerloop netcoreapp debug src system io filesystem tests file delete cs line | 0 |
30,370 | 2,723,600,757 | IssuesEvent | 2015-04-14 13:36:54 | CruxFramework/crux-widgets | https://api.github.com/repos/CruxFramework/crux-widgets | closed | ClassPathResolver section in UserManual is out of date | bug imported Milestone-3.0.0 Priority-Medium Wiki | _From [brunodep...@gmail.com](https://code.google.com/u/108972312674998482139/) on May 21, 2010 16:20:51_
What steps will reproduce the problem?
1. Go to Wiki / UserManual
2. Check instructions for creating a WeblogicClassPathResolver
3. Check method public URL findWebBaseDir()
The document says to override method public URL findWebBaseDir(). However the class ClassPathResolverImpl doesn't have this method. It has a similar method:
public URL[] findWebBaseDirs().
Seems like this section of the UserManual is out of date. Could you guys update it?
Cheers
B
_Original issue: http://code.google.com/p/crux-framework/issues/detail?id=115_ | 1.0 | ClassPathResolver section in UserManual is out of date - _From [brunodep...@gmail.com](https://code.google.com/u/108972312674998482139/) on May 21, 2010 16:20:51_
What steps will reproduce the problem?
1. Go to Wiki / UserManual
2. Check instructions for creating a WeblogicClassPathResolver
3. Check method public URL findWebBaseDir()
The document says to override method public URL findWebBaseDir(). However the class ClassPathResolverImpl doesn't have this method. It has a similar method:
public URL[] findWebBaseDirs().
Seems like this section of the UserManual is out of date. Could you guys update it?
Cheers
B
_Original issue: http://code.google.com/p/crux-framework/issues/detail?id=115_ | priority | classpathresolver section in usermanual is out of date from on may what steps will reproduce the problem go to wiki usermanual check instructions for creating a weblogicclasspathresolver check method public url findwebbasedir the document says to override method public url findwebbasedir however the class classpathresolverimpl doesn t have this method it has a similar method public url findwebbasedirs seems like this section of the usermanual is out of date could you guys update it cheers b original issue | 1 |
70,117 | 13,429,135,142 | IssuesEvent | 2020-09-07 00:49:38 | EKA2L1/Compatibility-List | https://api.github.com/repos/EKA2L1/Compatibility-List | opened | System Rush | - Game Genre: Racing Bootable IO Component Error N-Gage Unimplemented Opcode | # App summary
- App name: System Rush
# EKA2L1 info
- Build name: 1.0.1463
# Test environment summary
- OS: Windows
- CPU: AMD
- GPU: NVIDIA
- RAM: 8 GB
# Issues
it stops working after running into many "opcode" errors
# Log
[EKA2L1.log](https://github.com/EKA2L1/Compatibility-List/files/5180562/EKA2L1.log) | 1.0 | System Rush - # App summary
- App name: System Rush
# EKA2L1 info
- Build name: 1.0.1463
# Test environment summary
- OS: Windows
- CPU: AMD
- GPU: NVIDIA
- RAM: 8 GB
# Issues
it stops working after running into many "opcode" errors
# Log
[EKA2L1.log](https://github.com/EKA2L1/Compatibility-List/files/5180562/EKA2L1.log) | non_priority | system rush app summary app name system rush info build name test environment summary os windows cpu amd gpu nvidia ram gb issues it stops working after running into many opcode errors log | 0 |
163,502 | 25,828,002,121 | IssuesEvent | 2022-12-12 14:16:33 | canedobox/gymproject | https://api.github.com/repos/canedobox/gymproject | closed | Fix contact form section height bug on small screens | bug design | The contact form section is overflowing on small screens, see screenshot:
 | 1.0 | Fix contact form section height bug on small screens - The contact form section is overflowing on small screens, see screenshot:
 | non_priority | fix contact form section height bug on small screens the contact form section is overflowing on small screens see screenshot | 0 |
545,098 | 15,936,060,913 | IssuesEvent | 2021-04-14 10:38:39 | googleapis/python-aiplatform | https://api.github.com/repos/googleapis/python-aiplatform | reopened | samples.snippets.create_custom_job_sample_test: test_ucaip_generated_create_custom_job failed | api: aiplatform flakybot: flaky flakybot: issue priority: p1 samples type: bug | This test failed!
To configure my behavior, see [the Flaky Bot documentation](https://github.com/googleapis/repo-automation-bots/tree/master/packages/flakybot).
If I'm commenting on this issue too often, add the `flakybot: quiet` label and
I will stop commenting.
---
commit: 1b03775c04db8a99c141c69813c42142c077ceef
buildURL: [Build Status](https://source.cloud.google.com/results/invocations/14b5397a-0a2b-46a5-b25e-6805135f01da), [Sponge](http://sponge2/14b5397a-0a2b-46a5-b25e-6805135f01da)
status: failed
<details><summary>Test output</summary><br><pre>shared_state = {}
job_client = <google.cloud.aiplatform_v1.services.job_service.client.JobServiceClient object at 0x7fcd8501af90>
@pytest.fixture(scope="function", autouse=True)
def teardown(shared_state, job_client):
yield
# Cancel the created custom job
> job_client.cancel_custom_job(name=shared_state["custom_job_name"])
E KeyError: 'custom_job_name'
create_custom_job_sample_test.py:33: KeyError</pre></details> | 1.0 | samples.snippets.create_custom_job_sample_test: test_ucaip_generated_create_custom_job failed - This test failed!
To configure my behavior, see [the Flaky Bot documentation](https://github.com/googleapis/repo-automation-bots/tree/master/packages/flakybot).
If I'm commenting on this issue too often, add the `flakybot: quiet` label and
I will stop commenting.
---
commit: 1b03775c04db8a99c141c69813c42142c077ceef
buildURL: [Build Status](https://source.cloud.google.com/results/invocations/14b5397a-0a2b-46a5-b25e-6805135f01da), [Sponge](http://sponge2/14b5397a-0a2b-46a5-b25e-6805135f01da)
status: failed
<details><summary>Test output</summary><br><pre>shared_state = {}
job_client = <google.cloud.aiplatform_v1.services.job_service.client.JobServiceClient object at 0x7fcd8501af90>
@pytest.fixture(scope="function", autouse=True)
def teardown(shared_state, job_client):
yield
# Cancel the created custom job
> job_client.cancel_custom_job(name=shared_state["custom_job_name"])
E KeyError: 'custom_job_name'
create_custom_job_sample_test.py:33: KeyError</pre></details> | priority | samples snippets create custom job sample test test ucaip generated create custom job failed this test failed to configure my behavior see if i m commenting on this issue too often add the flakybot quiet label and i will stop commenting commit buildurl status failed test output shared state job client pytest fixture scope function autouse true def teardown shared state job client yield cancel the created custom job job client cancel custom job name shared state e keyerror custom job name create custom job sample test py keyerror | 1 |
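The traceback above shows the sample's autouse teardown fixture failing with KeyError because the test never stored `custom_job_name` in `shared_state` (the job was never created). A common way to make such a teardown tolerant is sketched below; it assumes the existing `shared_state` and `job_client` fixtures from the sample and is a generic pytest pattern, not necessarily how the repository fixed it.

```python
import pytest

@pytest.fixture(scope="function", autouse=True)
def teardown(shared_state, job_client):
    yield
    # Only cancel the job if the test actually recorded one; otherwise do nothing
    # instead of raising KeyError during teardown.
    job_name = shared_state.get("custom_job_name")
    if job_name:
        job_client.cancel_custom_job(name=job_name)
```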
704,759 | 24,207,949,885 | IssuesEvent | 2022-09-25 13:57:35 | elabftw/elabftw | https://api.github.com/repos/elabftw/elabftw | closed | Suport HEIF in PDF generation | feature request priority:low | # Feature request
<!-- Please provide a clear description of what problem you are trying to solve and how would you want it to be solved. -->
The PDF generation does not understand HEIF files. Can the PDF generator be modified to support this file format, since it is now the default standard on iOS devices.
I can't tell anymore where in MakePdf the images are generated, but ImageMagick supports this: https://eplt.medium.com/5-minutes-to-install-imagemagick-with-heic-support-on-ubuntu-18-04-digitalocean-fe2d09dcef1, and I see you're already using imagemagick in the MakeThumbail.php
I'm using the hosted version or I have PRO support: NO
| 1.0 | Suport HEIF in PDF generation - # Feature request
<!-- Please provide a clear description of what problem you are trying to solve and how would you want it to be solved. -->
The PDF generation does not understand HEIF files. Can the PDF generator be modified to support this file format, since it is now the default standard on iOS devices.
I can't tell anymore where in MakePdf the images are generated, but ImageMagick supports this: https://eplt.medium.com/5-minutes-to-install-imagemagick-with-heic-support-on-ubuntu-18-04-digitalocean-fe2d09dcef1, and I see you're already using imagemagick in the MakeThumbail.php
I'm using the hosted version or I have PRO support: NO
| priority | suport heif in pdf generation feature request the pdf generation does not understand heif files can the pdf generator be modified to support this file format since it is now the default standard on ios devices i can t tell anymore where in makepdf the images are generated but imagemagick supports this and i see you re already using imagemagick in the makethumbail php i m using the hosted version or i have pro support no | 1 |
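The request above notes that the PDF maker cannot embed HEIF images even though ImageMagick, already used for thumbnails, can read them. elabftw itself is PHP; the snippet below only illustrates the suggested approach (convert HEIF/HEIC to a PDF-friendly format with ImageMagick before embedding) using Python's subprocess. It assumes an ImageMagick build with HEIC delegate support; the file names are hypothetical, and on ImageMagick 7 the binary is `magick` rather than `convert`.

```python
import shutil
import subprocess

def heif_to_jpeg(src: str, dst: str) -> None:
    """Transcode a HEIF/HEIC image so it can be embedded in a generated PDF."""
    magick = shutil.which("magick") or shutil.which("convert")
    if magick is None:
        raise RuntimeError("ImageMagick not found")
    subprocess.run([magick, src, dst], check=True)

heif_to_jpeg("attachment.heic", "attachment.jpg")  # hypothetical attachment names
```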
236,292 | 18,091,955,228 | IssuesEvent | 2021-09-22 03:22:34 | pythonarcade/arcade | https://api.github.com/repos/pythonarcade/arcade | closed | Dead link in the docs | fix waiting for release documentation | ## Documentation request:
### What documentation needs to change?
[Get Started Here](https://api.arcade.academy/en/latest/get_started.html?highlight=Learn%20arcade%20book%20on%20collisions#arcade-skill-tree)
### Where is it located?
`arcade/doc/get_started.rst`
### What is wrong with it? How can it be improved?
The [Learn arcade book on collisions](https://learn.arcade.academy/en/latest/chapters/18_sprites_and_collisions/sprites.html#the-update-method) link 404s. | 1.0 | Dead link in the docs - ## Documentation request:
### What documentation needs to change?
[Get Started Here](https://api.arcade.academy/en/latest/get_started.html?highlight=Learn%20arcade%20book%20on%20collisions#arcade-skill-tree)
### Where is it located?
`arcade/doc/get_started.rst`
### What is wrong with it? How can it be improved?
The [Learn arcade book on collisions](https://learn.arcade.academy/en/latest/chapters/18_sprites_and_collisions/sprites.html#the-update-method) link 404s. | non_priority | dead link in the docs documentation request what documentation needs to change where is it located arcade doc get started rst what is wrong with it how can it be improved the link | 0 |
758,628 | 26,562,439,766 | IssuesEvent | 2023-01-20 16:58:20 | strug-hub/LocusFocus | https://api.github.com/repos/strug-hub/LocusFocus | opened | CI/CD: Update version number using tagged releases | low priority | Depends partly on #29
Not important, but nice to have | 1.0 | CI/CD: Update version number using tagged releases - Depends partly on #29
Not important, but nice to have | priority | ci cd update version number using tagged releases depends partly on not important but nice to have | 1 |
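The issue above asks for the application version to come from tagged releases instead of being hard-coded. One lightweight way to do that, shown as a generic sketch rather than anything LocusFocus-specific, is to ask git for the most recent tag at build or run time:

```python
import subprocess

def version_from_git(default: str = "0.0.0") -> str:
    """Derive a version string like '1.4.2-5-gabc1234' from the latest git tag."""
    try:
        out = subprocess.run(
            ["git", "describe", "--tags", "--always"],
            capture_output=True, text=True, check=True,
        )
        return out.stdout.strip().lstrip("v")
    except (OSError, subprocess.CalledProcessError):
        return default  # not a git checkout (e.g. an installed package): fall back

print(version_from_git())
```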
607,478 | 18,783,321,935 | IssuesEvent | 2021-11-08 09:31:07 | webcompat/web-bugs | https://api.github.com/repos/webcompat/web-bugs | closed | unifiedportal-mem.epfindia.gov.in - site is not usable | priority-important browser-fenix engine-gecko | <!-- @browser: Firefox Mobile 95.0 -->
<!-- @ua_header: Mozilla/5.0 (Android 7.1.1; Mobile; rv:95.0) Gecko/95.0 Firefox/95.0 -->
<!-- @reported_with: android-components-reporter -->
<!-- @public_url: https://github.com/webcompat/web-bugs/issues/92711 -->
<!-- @extra_labels: browser-fenix -->
**URL**: https://unifiedportal-mem.epfindia.gov.in/memberinterface/error.jsp
**Browser / Version**: Firefox Mobile 95.0
**Operating System**: Android 7.1.1
**Tested Another Browser**: Yes Chrome
**Problem type**: Site is not usable
**Description**: Browser unsupported
**Steps to Reproduce**:
Page lock not support epfo and other
<details>
<summary>View the screenshot</summary>
<img alt="Screenshot" src="https://webcompat.com/uploads/2021/11/bd4c6d9f-bf01-46d9-81c7-cd8437237c93.jpeg">
</details>
<details>
<summary>View the screenshot</summary>
<img alt="Screenshot" src="https://webcompat.com/uploads/2021/11/207a89e7-1c15-4451-9b0e-363086d79ed4.jpeg">
</details>
<details>
<summary>Browser Configuration</summary>
<ul>
<li>gfx.webrender.all: false</li><li>gfx.webrender.blob-images: true</li><li>gfx.webrender.enabled: false</li><li>image.mem.shared: true</li><li>buildID: 20211101163752</li><li>channel: beta</li><li>hasTouchScreen: true</li><li>mixed active content blocked: false</li><li>mixed passive content blocked: false</li><li>tracking content blocked: false</li>
</ul>
</details>
[View console log messages](https://webcompat.com/console_logs/2021/11/1db32d00-c006-434a-bff4-ac2ac9621bfc)
_From [webcompat.com](https://webcompat.com/) with ❤️_ | 1.0 | unifiedportal-mem.epfindia.gov.in - site is not usable - <!-- @browser: Firefox Mobile 95.0 -->
<!-- @ua_header: Mozilla/5.0 (Android 7.1.1; Mobile; rv:95.0) Gecko/95.0 Firefox/95.0 -->
<!-- @reported_with: android-components-reporter -->
<!-- @public_url: https://github.com/webcompat/web-bugs/issues/92711 -->
<!-- @extra_labels: browser-fenix -->
**URL**: https://unifiedportal-mem.epfindia.gov.in/memberinterface/error.jsp
**Browser / Version**: Firefox Mobile 95.0
**Operating System**: Android 7.1.1
**Tested Another Browser**: Yes Chrome
**Problem type**: Site is not usable
**Description**: Browser unsupported
**Steps to Reproduce**:
Page lock not support epfo and other
<details>
<summary>View the screenshot</summary>
<img alt="Screenshot" src="https://webcompat.com/uploads/2021/11/bd4c6d9f-bf01-46d9-81c7-cd8437237c93.jpeg">
</details>
<details>
<summary>View the screenshot</summary>
<img alt="Screenshot" src="https://webcompat.com/uploads/2021/11/207a89e7-1c15-4451-9b0e-363086d79ed4.jpeg">
</details>
<details>
<summary>Browser Configuration</summary>
<ul>
<li>gfx.webrender.all: false</li><li>gfx.webrender.blob-images: true</li><li>gfx.webrender.enabled: false</li><li>image.mem.shared: true</li><li>buildID: 20211101163752</li><li>channel: beta</li><li>hasTouchScreen: true</li><li>mixed active content blocked: false</li><li>mixed passive content blocked: false</li><li>tracking content blocked: false</li>
</ul>
</details>
[View console log messages](https://webcompat.com/console_logs/2021/11/1db32d00-c006-434a-bff4-ac2ac9621bfc)
_From [webcompat.com](https://webcompat.com/) with ❤️_ | priority | unifiedportal mem epfindia gov in site is not usable url browser version firefox mobile operating system android tested another browser yes chrome problem type site is not usable description browser unsupported steps to reproduce page lock not support epfo and other view the screenshot img alt screenshot src view the screenshot img alt screenshot src browser configuration gfx webrender all false gfx webrender blob images true gfx webrender enabled false image mem shared true buildid channel beta hastouchscreen true mixed active content blocked false mixed passive content blocked false tracking content blocked false from with ❤️ | 1 |
34,697 | 14,492,652,470 | IssuesEvent | 2020-12-11 07:20:47 | microsoft/BotFramework-Composer | https://api.github.com/repos/microsoft/BotFramework-Composer | closed | Bot composer - Unable to Begin new Dialog after Cancel All active dialog | Bot Services Type: Bug customer-reported | I am using Bot composer. My case is simple.
I want to Cancel all active dialog and begin new dialog. I am not sure why this is not happening with the emulator.

| 1.0 | Bot composer - Unable to Begin new Dialog after Cancel All active dialog - I am using Bot composer. My case is simple.
I want to Cancel all active dialog and begin new dialog. I am not sure why this is not happening with the emulator.

| non_priority | bot composer unable to begin new dialog after cancel all active dialog i am using bot composer my case is simple i want to cancel all active dialog and begin new dialog i am not sure why this is not happening with the emulator | 0 |
524,357 | 15,212,062,272 | IssuesEvent | 2021-02-17 09:55:19 | staxrip/staxrip | https://api.github.com/repos/staxrip/staxrip | closed | NVENC and QSVENS Subtitles file option must be removed | added/fixed/done bug priority medium | **Describe the bug**
The option Other > Subtitle File must be removed from :
- NVEnc : h264, h265
- QSVEnc : h264, h265
because this options allows to **mux (not harcode!)** a subtitle file to the output of NVEnc and QSVEnc. Since in Staxrip the output of the encoder is *.h264 or *.h265, this option makes crash.
| 1.0 | NVENC and QSVENS Subtitles file option must be removed - **Describe the bug**
The option Other > Subtitle File must be removed from :
- NVEnc : h264, h265
- QSVEnc : h264, h265
because this options allows to **mux (not harcode!)** a subtitle file to the output of NVEnc and QSVEnc. Since in Staxrip the output of the encoder is *.h264 or *.h265, this option makes crash.
| priority | nvenc and qsvens subtitles file option must be removed describe the bug the option other subtitle file must be removed from nvenc qsvenc because this options allows to mux not harcode a subtitle file to the output of nvenc and qsvenc since in staxrip the output of the encoder is or this option makes crash | 1 |
133,544 | 29,298,451,238 | IssuesEvent | 2023-05-25 00:03:11 | microsoft/devhome | https://api.github.com/repos/microsoft/devhome | closed | resource.resw issues | Issue-Bug Area-Code-Health | ### Dev Home version
_No response_
### Windows build number
_No response_
### Other software
_No response_
### Steps to reproduce the bug
* Settings_AboutDescription.Text <- not used in code.
### Expected result
_No response_
### Actual result
_No response_
### Included System Information
_No response_
### Included Extensions Information
_No response_ | 1.0 | resource.resw issues - ### Dev Home version
_No response_
### Windows build number
_No response_
### Other software
_No response_
### Steps to reproduce the bug
* Settings_AboutDescription.Text <- not used in code.
### Expected result
_No response_
### Actual result
_No response_
### Included System Information
_No response_
### Included Extensions Information
_No response_ | non_priority | resource resw issues dev home version no response windows build number no response other software no response steps to reproduce the bug settings aboutdescription text not used in code expected result no response actual result no response included system information no response included extensions information no response | 0 |
166,234 | 14,045,201,065 | IssuesEvent | 2020-11-02 00:36:52 | matplotlib/matplotlib | https://api.github.com/repos/matplotlib/matplotlib | closed | Mostly unused glossary still exists in our docs | Documentation | <!--To help us understand and resolve your issue, please fill out the form to the best of your ability.-->
<!--You can feel free to delete the sections that do not apply.-->
### Problem
Discussed in the GSOD call, this feels like something that we wanted to add at some point but never really got around to using. It currently just contains a list of backends (mostly) and links out to their respective external pages.
https://matplotlib.org/glossary/index.html?highlight=glossary
### Suggested Improvement
After making sure that the places that link to the glossary correctly link out to the appropriate pages, the glossary should probably just be deleted (consensus on call). | 1.0 | Mostly unused glossary still exists in our docs - <!--To help us understand and resolve your issue, please fill out the form to the best of your ability.-->
<!--You can feel free to delete the sections that do not apply.-->
### Problem
Discussed in the GSOD call, this feels like something that we wanted to add at some point but never really got around to using. It currently just contains a list of backends (mostly) and links out to their respective external pages.
https://matplotlib.org/glossary/index.html?highlight=glossary
### Suggested Improvement
After making sure that the places that link to the glossary correctly link out to the appropriate pages, the glossary should probably just be deleted (consensus on call). | non_priority | mostly unused glossary still exists in our docs problem discussed in the gsod call this feels like something that we wanted to add at some point but never really got around to using it currently just contains a list of backends mostly and links out to their respective external pages suggested improvement after making sure that the places that link to the glossary correctly link out to the appropriate pages the glossary should probably just be deleted consensus on call | 0 |
113,272 | 9,634,542,110 | IssuesEvent | 2019-05-15 21:32:22 | scylladb/scylla | https://api.github.com/repos/scylladb/scylla | closed | Segfault during repair_kill_2_test involving restricting_mutation_reader::with_reader | bug dtest repair | scylla version b8158dd65d3f9cc23865ff15ed8459a93e15761a
Seen in [dtest-release/56/artifact/logs-release.2/1552468431781_repair_additional_test.RepairAdditionalTest.repair_kill_2_test/node2.log](http://jenkins.cloudius-systems.com:8080/view/master/job/scylla-master/job/dtest-release/56/artifact/logs-release.2/1552468431781_repair_additional_test.RepairAdditionalTest.repair_kill_2_test/node2.log):
```
INFO 2019-03-13 11:13:30,970 [shard 0] compaction_manager - Stopped
INFO 2019-03-13 11:13:30,970 [shard 0] view - Stopping view builder
INFO 2019-03-13 11:13:30,970 [shard 1] view - Stopping view builder
INFO 2019-03-13 11:13:30,970 [shard 0] storage_service - Drain on shutdown: starts
INFO 2019-03-13 11:13:30,970 [shard 0] storage_service - Stop transport: starts
INFO 2019-03-13 11:13:30,971 [shard 0] storage_service - Thrift server stopped
INFO 2019-03-13 11:13:30,974 [shard 0] storage_service - CQL server stopped
INFO 2019-03-13 11:13:30,974 [shard 0] storage_service - Stop transport: shutdown rpc and cql server done
INFO 2019-03-13 11:13:30,974 [shard 0] gossip - My status = NORMAL
INFO 2019-03-13 11:13:30,974 [shard 0] gossip - Announcing shutdown
INFO 2019-03-13 11:13:30,976 [shard 0] storage_service - Node 127.0.75.2 state jump to normal
Segmentation fault on shard 0.
Backtrace:
0x00000000041f2b02
0x00000000040f15b5
0x00000000040f18b5
0x00000000040f1903
0x00007f846257402f
0x0000000001296343
0x00000000012976fb
0x00000000040eea41
0x00000000040eec3e
0x00000000041c2d35
0x0000000004059f85
0x00000000009ea8e4
/lib64/libc.so.6+0x0000000000024412
0x0000000000a4a5cd
seastar::future<> seastar::futurize<seastar::future<> >::apply<restricting_mutation_reader::with_reader<restricting_mutation_reader::fill_buffer(std::chrono::time_point<seastar::lowres_clock, std::chrono::duration<long, std::ratio<1l, 1000l> > >)::{lambda(flat_mutation_reader&)#1}>(restricting_mutation_reader::fill_buffer(std::chrono::time_point<seastar::lowres_clock, std::chrono::duration<long, std::ratio<1l, 1000l> > >)::{lambda(flat_mutation_reader&)#1}, std::chrono::time_point<seastar::lowres_clock, std::chrono::duration<long, std::ratio<1l, 1000l> > >)::{lambda(seastar::lw_shared_ptr<reader_concurrency_semaphore::reader_permit>)#1}, reader_concurrency_semaphore::reader_permit>(restricting_mutation_reader::fill_buffer(std::chrono::time_point<seastar::lowres_clock, std::chrono::duration<long, std::ratio<1l, 1000l> > >)::{lambda(flat_mutation_reader&)#1}&&, std::tuple<reader_concurrency_semaphore::reader_permit>&&) at /jenkins/workspace/scylla-master/dtest-release/scylla/seastar/include/seastar/core/shared_ptr.hh:294
(inlined by) ?? at /jenkins/workspace/scylla-master/dtest-release/scylla/mutation_reader.hh:282
(inlined by) ?? at /jenkins/workspace/scylla-master/dtest-release/scylla/mutation_reader.cc:585
(inlined by) restricting_mutation_reader::with_reader<restricting_mutation_reader::fill_buffer(std::chrono::time_point<seastar::lowres_clock, std::chrono::duration<long, std::ratio<1l, 1000l> > >)::{lambda(flat_mutation_reader&)#1}>(restricting_mutation_reader::fill_buffer(std::chrono::time_point<seastar::lowres_clock, std::chrono::duration<long, std::ratio<1l, 1000l> > >)::{lambda(flat_mutation_reader&)#1}, std::chrono::time_point<seastar::lowres_clock, std::chrono::duration<long, std::ratio<1l, 1000l> > >)::{lambda(seastar::lw_shared_ptr<reader_concurrency_semaphore::reader_permit>)#1}::operator()(reader_concurrency_semaphore::reader_permit) at /jenkins/workspace/scylla-master/dtest-release/scylla/mutation_reader.cc:616
(inlined by) ?? at /jenkins/workspace/scylla-master/dtest-release/scylla/seastar/include/seastar/core/apply.hh:35
(inlined by) ?? at /jenkins/workspace/scylla-master/dtest-release/scylla/seastar/include/seastar/core/apply.hh:43
(inlined by) seastar::future<> seastar::futurize<seastar::future<> >::apply<restricting_mutation_reader::with_reader<restricting_mutation_reader::fill_buffer(std::chrono::time_point<seastar::lowres_clock, std::chrono::duration<long, std::ratio<1l, 1000l> > >)::{lambda(flat_mutation_reader&)#1}>(restricting_mutation_reader::fill_buffer(std::chrono::time_point<seastar::lowres_clock, std::chrono::duration<long, std::ratio<1l, 1000l> > >)::{lambda(flat_mutation_reader&)#1}, std::chrono::time_point<seastar::lowres_clock, std::chrono::duration<long, std::ratio<1l, 1000l> > >)::{lambda(seastar::lw_shared_ptr<reader_concurrency_semaphore::reader_permit>)#1}, reader_concurrency_semaphore::reader_permit>(restricting_mutation_reader::fill_buffer(std::chrono::time_point<seastar::lowres_clock, std::chrono::duration<long, std::ratio<1l, 1000l> > >)::{lambda(flat_mutation_reader&)#1}&&, std::tuple<reader_concurrency_semaphore::reader_permit>&&) at /jenkins/workspace/scylla-master/dtest-release/scylla/seastar/include/seastar/core/future.hh:1478
auto seastar::future<seastar::lw_shared_ptr<reader_concurrency_semaphore::reader_permit> >::then_impl<restricting_mutation_reader::with_reader<restricting_mutation_reader::fill_buffer(std::chrono::time_point<seastar::lowres_clock, std::chrono::duration<long, std::ratio<1l, 1000l> > >)::{lambda(flat_mutation_reader&)#1}>(restricting_mutation_reader::fill_buffer(std::chrono::time_point<seastar::lowres_clock, std::chrono::duration<long, std::ratio<1l, 1000l> > >)::{lambda(flat_mutation_reader&)#1}, std::chrono::time_point<seastar::lowres_clock, std::chrono::duration<long, std::ratio<1l, 1000l> > >)::{lambda(seastar::lw_shared_ptr<reader_concurrency_semaphore::reader_permit>)#1}, seastar::future<> >(restricting_mutation_reader::fill_buffer(std::chrono::time_point<seastar::lowres_clock, std::chrono::duration<long, std::ratio<1l, 1000l> > >)::{lambda(flat_mutation_reader&)#1}&&)::{lambda(seastar::future<>)#1}::operator()<seastar::future_state<seastar::lw_shared_ptr<reader_concurrency_semaphore::reader_permit> > >(seastar::future<>) at /jenkins/workspace/scylla-master/dtest-release/scylla/seastar/include/seastar/core/future.hh:1015
(inlined by) seastar::continuation<seastar::future<seastar::lw_shared_ptr<reader_concurrency_semaphore::reader_permit> >::then_impl<restricting_mutation_reader::with_reader<restricting_mutation_reader::fill_buffer(std::chrono::time_point<seastar::lowres_clock, std::chrono::duration<long, std::ratio<1l, 1000l> > >)::{lambda(flat_mutation_reader&)#1}>(restricting_mutation_reader::fill_buffer(std::chrono::time_point<seastar::lowres_clock, std::chrono::duration<long, std::ratio<1l, 1000l> > >)::{lambda(flat_mutation_reader&)#1}, std::chrono::time_point<seastar::lowres_clock, std::chrono::duration<long, std::ratio<1l, 1000l> > >)::{lambda(seastar::lw_shared_ptr<reader_concurrency_semaphore::reader_permit>)#1}, seastar::future<> >(restricting_mutation_reader::fill_buffer(std::chrono::time_point<seastar::lowres_clock, std::chrono::duration<long, std::ratio<1l, 1000l> > >)::{lambda(flat_mutation_reader&)#1}&&)::{lambda(seastar::future<>)#1}, seastar::lw_shared_ptr<reader_concurrency_semaphore::reader_permit> >::run_and_dispose() at /jenkins/workspace/scylla-master/dtest-release/scylla/seastar/include/seastar/core/future.hh:425
```
| 1.0 | Segfault during repair_kill_2_test involving restricting_mutation_reader::with_reader - scylla version b8158dd65d3f9cc23865ff15ed8459a93e15761a
Seen in [dtest-release/56/artifact/logs-release.2/1552468431781_repair_additional_test.RepairAdditionalTest.repair_kill_2_test/node2.log](http://jenkins.cloudius-systems.com:8080/view/master/job/scylla-master/job/dtest-release/56/artifact/logs-release.2/1552468431781_repair_additional_test.RepairAdditionalTest.repair_kill_2_test/node2.log):
```
INFO 2019-03-13 11:13:30,970 [shard 0] compaction_manager - Stopped
INFO 2019-03-13 11:13:30,970 [shard 0] view - Stopping view builder
INFO 2019-03-13 11:13:30,970 [shard 1] view - Stopping view builder
INFO 2019-03-13 11:13:30,970 [shard 0] storage_service - Drain on shutdown: starts
INFO 2019-03-13 11:13:30,970 [shard 0] storage_service - Stop transport: starts
INFO 2019-03-13 11:13:30,971 [shard 0] storage_service - Thrift server stopped
INFO 2019-03-13 11:13:30,974 [shard 0] storage_service - CQL server stopped
INFO 2019-03-13 11:13:30,974 [shard 0] storage_service - Stop transport: shutdown rpc and cql server done
INFO 2019-03-13 11:13:30,974 [shard 0] gossip - My status = NORMAL
INFO 2019-03-13 11:13:30,974 [shard 0] gossip - Announcing shutdown
INFO 2019-03-13 11:13:30,976 [shard 0] storage_service - Node 127.0.75.2 state jump to normal
Segmentation fault on shard 0.
Backtrace:
0x00000000041f2b02
0x00000000040f15b5
0x00000000040f18b5
0x00000000040f1903
0x00007f846257402f
0x0000000001296343
0x00000000012976fb
0x00000000040eea41
0x00000000040eec3e
0x00000000041c2d35
0x0000000004059f85
0x00000000009ea8e4
/lib64/libc.so.6+0x0000000000024412
0x0000000000a4a5cd
seastar::future<> seastar::futurize<seastar::future<> >::apply<restricting_mutation_reader::with_reader<restricting_mutation_reader::fill_buffer(std::chrono::time_point<seastar::lowres_clock, std::chrono::duration<long, std::ratio<1l, 1000l> > >)::{lambda(flat_mutation_reader&)#1}>(restricting_mutation_reader::fill_buffer(std::chrono::time_point<seastar::lowres_clock, std::chrono::duration<long, std::ratio<1l, 1000l> > >)::{lambda(flat_mutation_reader&)#1}, std::chrono::time_point<seastar::lowres_clock, std::chrono::duration<long, std::ratio<1l, 1000l> > >)::{lambda(seastar::lw_shared_ptr<reader_concurrency_semaphore::reader_permit>)#1}, reader_concurrency_semaphore::reader_permit>(restricting_mutation_reader::fill_buffer(std::chrono::time_point<seastar::lowres_clock, std::chrono::duration<long, std::ratio<1l, 1000l> > >)::{lambda(flat_mutation_reader&)#1}&&, std::tuple<reader_concurrency_semaphore::reader_permit>&&) at /jenkins/workspace/scylla-master/dtest-release/scylla/seastar/include/seastar/core/shared_ptr.hh:294
(inlined by) ?? at /jenkins/workspace/scylla-master/dtest-release/scylla/mutation_reader.hh:282
(inlined by) ?? at /jenkins/workspace/scylla-master/dtest-release/scylla/mutation_reader.cc:585
(inlined by) restricting_mutation_reader::with_reader<restricting_mutation_reader::fill_buffer(std::chrono::time_point<seastar::lowres_clock, std::chrono::duration<long, std::ratio<1l, 1000l> > >)::{lambda(flat_mutation_reader&)#1}>(restricting_mutation_reader::fill_buffer(std::chrono::time_point<seastar::lowres_clock, std::chrono::duration<long, std::ratio<1l, 1000l> > >)::{lambda(flat_mutation_reader&)#1}, std::chrono::time_point<seastar::lowres_clock, std::chrono::duration<long, std::ratio<1l, 1000l> > >)::{lambda(seastar::lw_shared_ptr<reader_concurrency_semaphore::reader_permit>)#1}::operator()(reader_concurrency_semaphore::reader_permit) at /jenkins/workspace/scylla-master/dtest-release/scylla/mutation_reader.cc:616
(inlined by) ?? at /jenkins/workspace/scylla-master/dtest-release/scylla/seastar/include/seastar/core/apply.hh:35
(inlined by) ?? at /jenkins/workspace/scylla-master/dtest-release/scylla/seastar/include/seastar/core/apply.hh:43
(inlined by) seastar::future<> seastar::futurize<seastar::future<> >::apply<restricting_mutation_reader::with_reader<restricting_mutation_reader::fill_buffer(std::chrono::time_point<seastar::lowres_clock, std::chrono::duration<long, std::ratio<1l, 1000l> > >)::{lambda(flat_mutation_reader&)#1}>(restricting_mutation_reader::fill_buffer(std::chrono::time_point<seastar::lowres_clock, std::chrono::duration<long, std::ratio<1l, 1000l> > >)::{lambda(flat_mutation_reader&)#1}, std::chrono::time_point<seastar::lowres_clock, std::chrono::duration<long, std::ratio<1l, 1000l> > >)::{lambda(seastar::lw_shared_ptr<reader_concurrency_semaphore::reader_permit>)#1}, reader_concurrency_semaphore::reader_permit>(restricting_mutation_reader::fill_buffer(std::chrono::time_point<seastar::lowres_clock, std::chrono::duration<long, std::ratio<1l, 1000l> > >)::{lambda(flat_mutation_reader&)#1}&&, std::tuple<reader_concurrency_semaphore::reader_permit>&&) at /jenkins/workspace/scylla-master/dtest-release/scylla/seastar/include/seastar/core/future.hh:1478
auto seastar::future<seastar::lw_shared_ptr<reader_concurrency_semaphore::reader_permit> >::then_impl<restricting_mutation_reader::with_reader<restricting_mutation_reader::fill_buffer(std::chrono::time_point<seastar::lowres_clock, std::chrono::duration<long, std::ratio<1l, 1000l> > >)::{lambda(flat_mutation_reader&)#1}>(restricting_mutation_reader::fill_buffer(std::chrono::time_point<seastar::lowres_clock, std::chrono::duration<long, std::ratio<1l, 1000l> > >)::{lambda(flat_mutation_reader&)#1}, std::chrono::time_point<seastar::lowres_clock, std::chrono::duration<long, std::ratio<1l, 1000l> > >)::{lambda(seastar::lw_shared_ptr<reader_concurrency_semaphore::reader_permit>)#1}, seastar::future<> >(restricting_mutation_reader::fill_buffer(std::chrono::time_point<seastar::lowres_clock, std::chrono::duration<long, std::ratio<1l, 1000l> > >)::{lambda(flat_mutation_reader&)#1}&&)::{lambda(seastar::future<>)#1}::operator()<seastar::future_state<seastar::lw_shared_ptr<reader_concurrency_semaphore::reader_permit> > >(seastar::future<>) at /jenkins/workspace/scylla-master/dtest-release/scylla/seastar/include/seastar/core/future.hh:1015
(inlined by) seastar::continuation<seastar::future<seastar::lw_shared_ptr<reader_concurrency_semaphore::reader_permit> >::then_impl<restricting_mutation_reader::with_reader<restricting_mutation_reader::fill_buffer(std::chrono::time_point<seastar::lowres_clock, std::chrono::duration<long, std::ratio<1l, 1000l> > >)::{lambda(flat_mutation_reader&)#1}>(restricting_mutation_reader::fill_buffer(std::chrono::time_point<seastar::lowres_clock, std::chrono::duration<long, std::ratio<1l, 1000l> > >)::{lambda(flat_mutation_reader&)#1}, std::chrono::time_point<seastar::lowres_clock, std::chrono::duration<long, std::ratio<1l, 1000l> > >)::{lambda(seastar::lw_shared_ptr<reader_concurrency_semaphore::reader_permit>)#1}, seastar::future<> >(restricting_mutation_reader::fill_buffer(std::chrono::time_point<seastar::lowres_clock, std::chrono::duration<long, std::ratio<1l, 1000l> > >)::{lambda(flat_mutation_reader&)#1}&&)::{lambda(seastar::future<>)#1}, seastar::lw_shared_ptr<reader_concurrency_semaphore::reader_permit> >::run_and_dispose() at /jenkins/workspace/scylla-master/dtest-release/scylla/seastar/include/seastar/core/future.hh:425
```
| non_priority | segfault during repair kill test involving restricting mutation reader with reader scylla version seen in info compaction manager stopped info view stopping view builder info view stopping view builder info storage service drain on shutdown starts info storage service stop transport starts info storage service thrift server stopped info storage service cql server stopped info storage service stop transport shutdown rpc and cql server done info gossip my status normal info gossip announcing shutdown info storage service node state jump to normal segmentation fault on shard backtrace libc so seastar future seastar futurize apply lambda flat mutation reader restricting mutation reader fill buffer std chrono time point lambda flat mutation reader std chrono time point lambda seastar lw shared ptr reader concurrency semaphore reader permit restricting mutation reader fill buffer std chrono time point lambda flat mutation reader std tuple at jenkins workspace scylla master dtest release scylla seastar include seastar core shared ptr hh inlined by at jenkins workspace scylla master dtest release scylla mutation reader hh inlined by at jenkins workspace scylla master dtest release scylla mutation reader cc inlined by restricting mutation reader with reader lambda flat mutation reader restricting mutation reader fill buffer std chrono time point lambda flat mutation reader std chrono time point lambda seastar lw shared ptr operator reader concurrency semaphore reader permit at jenkins workspace scylla master dtest release scylla mutation reader cc inlined by at jenkins workspace scylla master dtest release scylla seastar include seastar core apply hh inlined by at jenkins workspace scylla master dtest release scylla seastar include seastar core apply hh inlined by seastar future seastar futurize apply lambda flat mutation reader restricting mutation reader fill buffer std chrono time point lambda flat mutation reader std chrono time point lambda seastar lw shared ptr reader concurrency semaphore reader permit restricting mutation reader fill buffer std chrono time point lambda flat mutation reader std tuple at jenkins workspace scylla master dtest release scylla seastar include seastar core future hh auto seastar future then impl lambda flat mutation reader restricting mutation reader fill buffer std chrono time point lambda flat mutation reader std chrono time point lambda seastar lw shared ptr seastar future restricting mutation reader fill buffer std chrono time point lambda flat mutation reader lambda seastar future operator seastar future at jenkins workspace scylla master dtest release scylla seastar include seastar core future hh inlined by seastar continuation then impl lambda flat mutation reader restricting mutation reader fill buffer std chrono time point lambda flat mutation reader std chrono time point lambda seastar lw shared ptr seastar future restricting mutation reader fill buffer std chrono time point lambda flat mutation reader lambda seastar future seastar lw shared ptr run and dispose at jenkins workspace scylla master dtest release scylla seastar include seastar core future hh | 0 |
242,654 | 7,845,098,396 | IssuesEvent | 2018-06-19 11:53:12 | google/google-api-dotnet-client | https://api.github.com/repos/google/google-api-dotnet-client | closed | unable to assign a file name when uploading data by Analytics API | :rotating_light: Priority: P2+ Type: Enhancement | Hi,
I am using Google.Apis.Analytics.v3 to upload my cost data to GAP and all the data uploaded to GAP are "unknown file". Is there a way to define the filename when using analytics API? Or it will be a new feature in the later version?
This is how I upload data currently.
` AnalyticsService service;`
`...`
`service.Management.Uploads.UploadData(accountId, propertyId, dataSourceId, streamData, "application/octet-stream").Upload();`
What I want is something like
`service.Management.Uploads.UploadData(accountId, propertyId, dataSourceId, streamData, "application/octet-stream", filename).Upload();`
Thanks. | 1.0 | unable to assign a file name when uploading data by Analytics API - Hi,
I am using Google.Apis.Analytics.v3 to upload my cost data to GAP and all the data uploaded to GAP are "unknown file". Is there a way to define the filename when using analytics API? Or it will be a new feature in the later version?
This is how I upload data currently.
` AnalyticsService service;`
`...`
`service.Management.Uploads.UploadData(accountId, propertyId, dataSourceId, streamData, "application/octet-stream").Upload();`
What I want is something like
`service.Management.Uploads.UploadData(accountId, propertyId, dataSourceId, streamData, "application/octet-stream", filename).Upload();`
Thanks. | priority | unable to assign a file name when uploading data by analytics api hi i am using google apis analytics to upload my cost data to gap and all the data uploaded to gap are unknown file is there a way to define the filename when using analytics api or it will be a new feature in the later version this is how i upload data currently analyticsservice service service management uploads uploaddata accountid propertyid datasourceid streamdata application octet stream upload what i want is something like service management uploads uploaddata accountid propertyid datasourceid streamdata application octet stream filename upload thanks | 1 |
48,424 | 13,068,522,279 | IssuesEvent | 2020-07-31 03:50:48 | icecube-trac/tix2 | https://api.github.com/repos/icecube-trac/tix2 | closed | [steamshovel] script execution broken in python3 (Trac #2370) | Migrated from Trac combo core defect | Apparently python3 does not support `execfile()`, breaking automatic execution of scripts using the `-s` command line argument.
Migrated from https://code.icecube.wisc.edu/ticket/2370
```json
{
"status": "closed",
"changetime": "2020-07-15T23:13:13",
"description": "Apparently python3 does not support `execfile()`, breaking automatic execution of scripts using the `-s` command line argument.",
"reporter": "karg",
"cc": "",
"resolution": "fixed",
"_ts": "1594854793115010",
"component": "combo core",
"summary": "[steamshovel] script execution broken in python3",
"priority": "critical",
"keywords": "",
"time": "2019-11-03T14:20:56",
"milestone": "Autumnal Equinox 2020",
"owner": "olivas",
"type": "defect"
}
```
| 1.0 | [steamshovel] script execution broken in python3 (Trac #2370) - Apparently python3 does not support `execfile()`, breaking automatic execution of scripts using the `-s` command line argument.
Migrated from https://code.icecube.wisc.edu/ticket/2370
```json
{
"status": "closed",
"changetime": "2020-07-15T23:13:13",
"description": "Apparently python3 does not support `execfile()`, breaking automatic execution of scripts using the `-s` command line argument.",
"reporter": "karg",
"cc": "",
"resolution": "fixed",
"_ts": "1594854793115010",
"component": "combo core",
"summary": "[steamshovel] script execution broken in python3",
"priority": "critical",
"keywords": "",
"time": "2019-11-03T14:20:56",
"milestone": "Autumnal Equinox 2020",
"owner": "olivas",
"type": "defect"
}
```
| non_priority | script execution broken in trac apparently does not support execfile breaking automatic execution of scripts using the s command line argument migrated from json status closed changetime description apparently does not support execfile breaking automatic execution of scripts using the s command line argument reporter karg cc resolution fixed ts component combo core summary script execution broken in priority critical keywords time milestone autumnal equinox owner olivas type defect | 0 |
339,650 | 10,257,246,307 | IssuesEvent | 2019-08-21 19:38:33 | gamerpals/Backend | https://api.github.com/repos/gamerpals/Backend | closed | POST api/Login creates second user with everything set to null | bug high priority invalid wontfix | # Expected Behavior
Log the user in and return the user object to that user
# Current Behavior
For some reason it created a new user with everything set to null:
Query `{GoogleId: '106288635263380130851'}` returns original user and 7x this:
```
{
"_id": "5d5d693a56d8890004b45704",
"CreateTime": "2019-08-21T15:54:34.434Z",
"GoogleId": "106288635263380130851",
"ProfileName": null,
"ProfileDescription": null,
"ProfilePicture": null,
"Birthday": "0001-01-01T00:00:00.000Z",
"OnlineStatus": null,
"Country": "000000000000000000000000",
"Languages": null,
"Gender": null,
"CurrentSession": null,
"Karma": null,
"GamesSelected": null,
"ActiveSearches": null,
"PassiveSearches": null,
"Role": "000000000000000000000000",
"FriendsList": null,
"RecievedFriendRequests": null,
"SentFriendRequests": null,
"PrivateChats": null,
"Notifications": null,
"ConnectedServices": null,
"ProfileComplete": false
}
```
# Possible Solution
/
# Detailed Description
/ | 1.0 | POST api/Login creates second user with everything set to null - # Expected Behavior
Log the user in and return the user object to that user
# Current Behavior
For some reason it created a new user with everything set to null:
Query `{GoogleId: '106288635263380130851'}` returns original user and 7x this:
```
{
"_id": "5d5d693a56d8890004b45704",
"CreateTime": "2019-08-21T15:54:34.434Z",
"GoogleId": "106288635263380130851",
"ProfileName": null,
"ProfileDescription": null,
"ProfilePicture": null,
"Birthday": "0001-01-01T00:00:00.000Z",
"OnlineStatus": null,
"Country": "000000000000000000000000",
"Languages": null,
"Gender": null,
"CurrentSession": null,
"Karma": null,
"GamesSelected": null,
"ActiveSearches": null,
"PassiveSearches": null,
"Role": "000000000000000000000000",
"FriendsList": null,
"RecievedFriendRequests": null,
"SentFriendRequests": null,
"PrivateChats": null,
"Notifications": null,
"ConnectedServices": null,
"ProfileComplete": false
}
```
# Possible Solution
/
# Detailed Description
/ | priority | post api login creates second user with everything set to null expected behavior log the user in and return the user object to that user current behavior for some reason it created a new user with everything set to null query googleid returns original user and this id createtime googleid profilename null profiledescription null profilepicture null birthday onlinestatus null country languages null gender null currentsession null karma null gamesselected null activesearches null passivesearches null role friendslist null recievedfriendrequests null sentfriendrequests null privatechats null notifications null connectedservices null profilecomplete false possible solution detailed description | 1 |
57,630 | 3,083,237,308 | IssuesEvent | 2015-08-24 07:30:21 | magro/memcached-session-manager | https://api.github.com/repos/magro/memcached-session-manager | closed | Support context configured with cookies="false" | bug imported Milestone-1.6.5 Priority-Medium | _From [lstczhan...@gmail.com](https://code.google.com/u/102339389615967637599/) on April 02, 2013 09:22:14_
<b>What steps will reproduce the problem?</b>
1. tomcat forbid cookies
/conf/context.xml:
<Context cookies="false">
<!-- Default set of monitored resources -->
<WatchedResource>WEB-INF/web.xml</WatchedResource>
<Manager className="de.javakaffee.web.msm.MemcachedBackupSessionManager"
memcachedNodes="n1:192.168.1.55:11211,n2:192.168.1.56:11211"
sticky="false"
sessionBackupAsync="false"
requestUriIgnorePattern=".*\.(ico|png|gif|jpg|css|js)$"
transcoderFactoryClass="de.javakaffee.web.msm.JavaSerializationTranscoderFactory"
/>
</Context>
2.start tomcat server,do Login request,
i put customer info in session,login success
but i do other action,like query customer info with "http://ip:port:/something;jsessionid=46A87DE63835612CAF557AF34013E18D-n2";
fail,note i'm not login.
why memcached do not save Session when tomcat forbid cookies?
3.logs:
2013-4-2 11:56:08 de.javakaffee.web.msm.SessionIdFormat createSessionId
良好: Creating new session id with orig id 'ping' and memcached id 'n1'.
2013-4-2 11:56:08 de.javakaffee.web.msm.NodeAvailabilityCache updateIsNodeAvailable
良好: CacheLoader returned node availability 'true' for node 'n1'.
2013-4-2 11:56:08 de.javakaffee.web.msm.SessionIdFormat createSessionId
良好: Creating new session id with orig id 'E6636323F89B006F4E86DEA12FC02653' and memcached id 'n1'.
2013-4-2 11:56:08 de.javakaffee.web.msm.MemcachedSessionService createSession
良好: Created new session with id E6636323F89B006F4E86DEA12FC02653-n1
2013-4-2 11:56:09 de.javakaffee.web.msm.MemcachedSessionService backupSession
良好: No session found in session map for E6636323F89B006F4E86DEA12FC02653-n1
2013-4-2 11:56:09 de.javakaffee.web.msm.LockingStrategy onBackupWithoutLoadedSession
警告: Found no validity info for session id E6636323F89B006F4E86DEA12FC02653-n1
2013-4-2 11:56:09 de.javakaffee.web.msm.RequestTrackingHostValve logDebugResponseCookie
良好: Request finished, with Set-Cookie header: JSESSIONID=E6636323F89B006F4E86DEA12FC02653-n1; Path=/; HttpOnly
2013-4-2 11:56:09 de.javakaffee.web.msm.RequestTrackingHostValve invoke
良好: <<<<<< Request finished: POST /ark/client/customer/login ==================
2013-4-2 11:56:09 de.javakaffee.web.msm.MemcachedSessionService backupSession
良好: No session found in session map for E6636323F89B006F4E86DEA12FC02653-n1
2013-4-2 11:56:09 de.javakaffee.web.msm.LockingStrategy onBackupWithoutLoadedSession
警告: Found no validity info for session id E6636323F89B006F4E86DEA12FC02653-n1
2013-4-2 11:56:09 de.javakaffee.web.msm.RequestTrackingHostValve logDebugResponseCookie
良好: Request finished, with Set-Cookie header: JSESSIONID=E6636323F89B006F4E86DEA12FC02653-n1; Path=/; HttpOnly
2013-4-2 11:56:09 de.javakaffee.web.msm.RequestTrackingHostValve invoke
良好: <<<<<< Request finished: POST /ark/client/customer/login ==================
2013-4-2 11:56:09 de.javakaffee.web.msm.MemcachedSessionService backupSession
良好: No session found in session map for E6636323F89B006F4E86DEA12FC02653-n1
2013-4-2 11:56:09 de.javakaffee.web.msm.LockingStrategy onBackupWithoutLoadedSession
警告: Found no validity info for session id E6636323F89B006F4E86DEA12FC02653-n1
2013-4-2 11:56:09 de.javakaffee.web.msm.RequestTrackingHostValve logDebugResponseCookie
良好: Request finished, with Set-Cookie header: JSESSIONID=E6636323F89B006F4E86DEA12FC02653-n1; Path=/; HttpOnly
2013-4-2 11:56:09 de.javakaffee.web.msm.RequestTrackingHostValve invoke
_Original issue: http://code.google.com/p/memcached-session-manager/issues/detail?id=159_ | 1.0 | Support context configured with cookies="false" - _From [lstczhan...@gmail.com](https://code.google.com/u/102339389615967637599/) on April 02, 2013 09:22:14_
<b>What steps will reproduce the problem?</b>
1. tomcat forbid cookies
/conf/context.xml:
<Context cookies="false">
<!-- Default set of monitored resources -->
<WatchedResource>WEB-INF/web.xml</WatchedResource>
<Manager className="de.javakaffee.web.msm.MemcachedBackupSessionManager"
memcachedNodes="n1:192.168.1.55:11211,n2:192.168.1.56:11211"
sticky="false"
sessionBackupAsync="false"
requestUriIgnorePattern=".*\.(ico|png|gif|jpg|css|js)$"
transcoderFactoryClass="de.javakaffee.web.msm.JavaSerializationTranscoderFactory"
/>
</Context>
2.start tomcat server,do Login request,
i put customer info in session,login success
but i do other action,like query customer info with "http://ip:port:/something;jsessionid=46A87DE63835612CAF557AF34013E18D-n2";
fail,note i'm not login.
why memcached do not save Session when tomcat forbid cookies?
3.logs:
2013-4-2 11:56:08 de.javakaffee.web.msm.SessionIdFormat createSessionId
良好: Creating new session id with orig id 'ping' and memcached id 'n1'.
2013-4-2 11:56:08 de.javakaffee.web.msm.NodeAvailabilityCache updateIsNodeAvailable
良好: CacheLoader returned node availability 'true' for node 'n1'.
2013-4-2 11:56:08 de.javakaffee.web.msm.SessionIdFormat createSessionId
良好: Creating new session id with orig id 'E6636323F89B006F4E86DEA12FC02653' and memcached id 'n1'.
2013-4-2 11:56:08 de.javakaffee.web.msm.MemcachedSessionService createSession
良好: Created new session with id E6636323F89B006F4E86DEA12FC02653-n1
2013-4-2 11:56:09 de.javakaffee.web.msm.MemcachedSessionService backupSession
良好: No session found in session map for E6636323F89B006F4E86DEA12FC02653-n1
2013-4-2 11:56:09 de.javakaffee.web.msm.LockingStrategy onBackupWithoutLoadedSession
警告: Found no validity info for session id E6636323F89B006F4E86DEA12FC02653-n1
2013-4-2 11:56:09 de.javakaffee.web.msm.RequestTrackingHostValve logDebugResponseCookie
良好: Request finished, with Set-Cookie header: JSESSIONID=E6636323F89B006F4E86DEA12FC02653-n1; Path=/; HttpOnly
2013-4-2 11:56:09 de.javakaffee.web.msm.RequestTrackingHostValve invoke
良好: <<<<<< Request finished: POST /ark/client/customer/login ==================
2013-4-2 11:56:09 de.javakaffee.web.msm.MemcachedSessionService backupSession
良好: No session found in session map for E6636323F89B006F4E86DEA12FC02653-n1
2013-4-2 11:56:09 de.javakaffee.web.msm.LockingStrategy onBackupWithoutLoadedSession
警告: Found no validity info for session id E6636323F89B006F4E86DEA12FC02653-n1
2013-4-2 11:56:09 de.javakaffee.web.msm.RequestTrackingHostValve logDebugResponseCookie
良好: Request finished, with Set-Cookie header: JSESSIONID=E6636323F89B006F4E86DEA12FC02653-n1; Path=/; HttpOnly
2013-4-2 11:56:09 de.javakaffee.web.msm.RequestTrackingHostValve invoke
良好: <<<<<< Request finished: POST /ark/client/customer/login ==================
2013-4-2 11:56:09 de.javakaffee.web.msm.MemcachedSessionService backupSession
良好: No session found in session map for E6636323F89B006F4E86DEA12FC02653-n1
2013-4-2 11:56:09 de.javakaffee.web.msm.LockingStrategy onBackupWithoutLoadedSession
警告: Found no validity info for session id E6636323F89B006F4E86DEA12FC02653-n1
2013-4-2 11:56:09 de.javakaffee.web.msm.RequestTrackingHostValve logDebugResponseCookie
良好: Request finished, with Set-Cookie header: JSESSIONID=E6636323F89B006F4E86DEA12FC02653-n1; Path=/; HttpOnly
2013-4-2 11:56:09 de.javakaffee.web.msm.RequestTrackingHostValve invoke
_Original issue: http://code.google.com/p/memcached-session-manager/issues/detail?id=159_ | priority | support context configured with cookies false from on april what steps will reproduce the problem tomcat forbid cookies conf context xml lt context cookies false gt lt default set of monitored resources gt lt watchedresource gt web inf web xml lt watchedresource gt lt manager classname de javakaffee web msm memcachedbackupsessionmanager memcachednodes sticky false sessionbackupasync false requesturiignorepattern ico png gif jpg css js transcoderfactoryclass de javakaffee web msm javaserializationtranscoderfactory gt lt context gt start tomcat server do login request i put customer info in session login success but i do other action like query customer info with fail note i m not login why memcached do not save session when tomcat forbid cookies logs de javakaffee web msm sessionidformat createsessionid 良好 creating new session id with orig id ping and memcached id de javakaffee web msm nodeavailabilitycache updateisnodeavailable 良好 cacheloader returned node availability true for node de javakaffee web msm sessionidformat createsessionid 良好 creating new session id with orig id and memcached id de javakaffee web msm memcachedsessionservice createsession 良好 created new session with id de javakaffee web msm memcachedsessionservice backupsession 良好 no session found in session map for de javakaffee web msm lockingstrategy onbackupwithoutloadedsession 警告 found no validity info for session id de javakaffee web msm requesttrackinghostvalve logdebugresponsecookie 良好 request finished with set cookie header jsessionid path httponly de javakaffee web msm requesttrackinghostvalve invoke 良好 lt lt lt lt lt lt request finished post ark client customer login de javakaffee web msm memcachedsessionservice backupsession 良好 no session found in session map for de javakaffee web msm lockingstrategy onbackupwithoutloadedsession 警告 found no validity info for session id de javakaffee web msm requesttrackinghostvalve logdebugresponsecookie 良好 request finished with set cookie header jsessionid path httponly de javakaffee web msm requesttrackinghostvalve invoke 良好 lt lt lt lt lt lt request finished post ark client customer login de javakaffee web msm memcachedsessionservice backupsession 良好 no session found in session map for de javakaffee web msm lockingstrategy onbackupwithoutloadedsession 警告 found no validity info for session id de javakaffee web msm requesttrackinghostvalve logdebugresponsecookie 良好 request finished with set cookie header jsessionid path httponly de javakaffee web msm requesttrackinghostvalve invoke original issue | 1 |
74,974 | 3,453,684,228 | IssuesEvent | 2015-12-17 12:36:49 | steve8x8/geotoad | https://api.github.com/repos/steve8x8/geotoad | closed | Date issues: logging dates may be wrong (for archived caches, with GCStatistic) | auto-migrated help wanted Priority-Low question | ```
I did as already written a myfinds as PM
but I have issues with importing the file into my statistic app.
I found out, that the archived caches do not get the right logging date:
<groundspeak:date>1980-01-01T08:00:00Z</groundspeak:date>
```
Original issue reported on code.google.com by `HeinzBro...@gmail.com` on 27 Sep 2013 at 8:24 | 1.0 | Date issues: logging dates may be wrong (for archived caches, with GCStatistic) - ```
I did as already written a myfinds as PM
but I have issues with importing the file into my statistic app.
I found out, that the archived caches do not get the right logging date:
<groundspeak:date>1980-01-01T08:00:00Z</groundspeak:date>
```
Original issue reported on code.google.com by `HeinzBro...@gmail.com` on 27 Sep 2013 at 8:24 | priority | date issues logging dates may be wrong for archived caches with gcstatistic i did as already written a myfinds as pm but i have issues with importing the file into my statistic app i found out that the archived caches do not get the right logging date original issue reported on code google com by heinzbro gmail com on sep at | 1 |
498,148 | 14,401,840,204 | IssuesEvent | 2020-12-03 14:13:25 | googleapis/python-pubsub | https://api.github.com/repos/googleapis/python-pubsub | opened | Synthesis failed for python-pubsub | autosynth failure priority: p1 type: bug | Hello! Autosynth couldn't regenerate python-pubsub. :broken_heart:
Here's the output from running `synth.py`:
```
"/tmpfs/src/github/synthtool/env/lib/python3.6/site-packages/setuptools/__init__.py", line 153, in setup
return distutils.core.setup(**attrs)
File "/home/kbuilder/.pyenv/versions/3.6.9/lib/python3.6/distutils/core.py", line 148, in setup
dist.run_commands()
File "/home/kbuilder/.pyenv/versions/3.6.9/lib/python3.6/distutils/dist.py", line 955, in run_commands
self.run_command(cmd)
File "/home/kbuilder/.pyenv/versions/3.6.9/lib/python3.6/distutils/dist.py", line 974, in run_command
cmd_obj.run()
File "/tmpfs/src/github/synthtool/env/lib/python3.6/site-packages/setuptools/command/install.py", line 61, in run
return orig.install.run(self)
File "/home/kbuilder/.pyenv/versions/3.6.9/lib/python3.6/distutils/command/install.py", line 545, in run
self.run_command('build')
File "/home/kbuilder/.pyenv/versions/3.6.9/lib/python3.6/distutils/cmd.py", line 313, in run_command
self.distribution.run_command(command)
File "/home/kbuilder/.pyenv/versions/3.6.9/lib/python3.6/distutils/dist.py", line 974, in run_command
cmd_obj.run()
File "/home/kbuilder/.pyenv/versions/3.6.9/lib/python3.6/distutils/command/build.py", line 135, in run
self.run_command(cmd_name)
File "/home/kbuilder/.pyenv/versions/3.6.9/lib/python3.6/distutils/cmd.py", line 313, in run_command
self.distribution.run_command(command)
File "/home/kbuilder/.pyenv/versions/3.6.9/lib/python3.6/distutils/dist.py", line 974, in run_command
cmd_obj.run()
File "/tmpfs/src/github/synthtool/env/lib/python3.6/site-packages/setuptools/command/build_ext.py", line 79, in run
_build_ext.run(self)
File "/home/kbuilder/.pyenv/versions/3.6.9/lib/python3.6/distutils/command/build_ext.py", line 339, in run
self.build_extensions()
File "/tmpfs/tmp/pip-install-t6thy66o/grpcio/src/python/grpcio/commands.py", line 272, in build_extensions
"Failed `build_ext` step:\n{}".format(formatted_exception))
commands.CommandError: Failed `build_ext` step:
Traceback (most recent call last):
File "/home/kbuilder/.pyenv/versions/3.6.9/lib/python3.6/distutils/unixccompiler.py", line 118, in _compile
extra_postargs)
File "/tmpfs/tmp/pip-install-t6thy66o/grpcio/src/python/grpcio/_spawn_patch.py", line 54, in _commandfile_spawn
_classic_spawn(self, command)
File "/home/kbuilder/.pyenv/versions/3.6.9/lib/python3.6/distutils/ccompiler.py", line 909, in spawn
spawn(cmd, dry_run=self.dry_run)
File "/home/kbuilder/.pyenv/versions/3.6.9/lib/python3.6/distutils/spawn.py", line 36, in spawn
_spawn_posix(cmd, search_path, dry_run=dry_run)
File "/home/kbuilder/.pyenv/versions/3.6.9/lib/python3.6/distutils/spawn.py", line 159, in _spawn_posix
% (cmd, exit_status))
distutils.errors.DistutilsExecError: command 'gcc' failed with exit status 1
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/tmpfs/tmp/pip-install-t6thy66o/grpcio/src/python/grpcio/commands.py", line 267, in build_extensions
build_ext.build_ext.build_extensions(self)
File "/home/kbuilder/.pyenv/versions/3.6.9/lib/python3.6/distutils/command/build_ext.py", line 448, in build_extensions
self._build_extensions_serial()
File "/home/kbuilder/.pyenv/versions/3.6.9/lib/python3.6/distutils/command/build_ext.py", line 473, in _build_extensions_serial
self.build_extension(ext)
File "/tmpfs/src/github/synthtool/env/lib/python3.6/site-packages/setuptools/command/build_ext.py", line 196, in build_extension
_build_ext.build_extension(self, ext)
File "/home/kbuilder/.pyenv/versions/3.6.9/lib/python3.6/distutils/command/build_ext.py", line 533, in build_extension
depends=ext.depends)
File "/tmpfs/tmp/pip-install-t6thy66o/grpcio/src/python/grpcio/_parallel_compile_patch.py", line 59, in _parallel_compile
_compile_single_file, objects)
File "/home/kbuilder/.pyenv/versions/3.6.9/lib/python3.6/multiprocessing/pool.py", line 266, in map
return self._map_async(func, iterable, mapstar, chunksize).get()
File "/home/kbuilder/.pyenv/versions/3.6.9/lib/python3.6/multiprocessing/pool.py", line 644, in get
raise self._value
File "/home/kbuilder/.pyenv/versions/3.6.9/lib/python3.6/multiprocessing/pool.py", line 119, in worker
result = (True, func(*args, **kwds))
File "/home/kbuilder/.pyenv/versions/3.6.9/lib/python3.6/multiprocessing/pool.py", line 44, in mapstar
return list(map(*args))
File "/tmpfs/tmp/pip-install-t6thy66o/grpcio/src/python/grpcio/_parallel_compile_patch.py", line 54, in _compile_single_file
self._compile(obj, src, ext, cc_args, extra_postargs, pp_opts)
File "/home/kbuilder/.pyenv/versions/3.6.9/lib/python3.6/distutils/unixccompiler.py", line 120, in _compile
raise CompileError(msg)
distutils.errors.CompileError: command 'gcc' failed with exit status 1
----------------------------------------
Command "/tmpfs/src/github/synthtool/env/bin/python3 -u -c "import setuptools, tokenize;__file__='/tmpfs/tmp/pip-install-t6thy66o/grpcio/setup.py';f=getattr(tokenize, 'open', open)(__file__);code=f.read().replace('\r\n', '\n');f.close();exec(compile(code, __file__, 'exec'))" install --record /tmpfs/tmp/pip-record-_ss41kml/install-record.txt --single-version-externally-managed --compile --install-headers /tmpfs/src/github/synthtool/env/include/site/python3.6/grpcio" failed with error code 1 in /tmpfs/tmp/pip-install-t6thy66o/grpcio/
You are using pip version 18.1, however version 20.3.1 is available.
You should consider upgrading via the 'pip install --upgrade pip' command.
Traceback (most recent call last):
File "/home/kbuilder/.pyenv/versions/3.6.9/lib/python3.6/runpy.py", line 193, in _run_module_as_main
"__main__", mod_spec)
File "/home/kbuilder/.pyenv/versions/3.6.9/lib/python3.6/runpy.py", line 85, in _run_code
exec(code, run_globals)
File "/tmpfs/src/github/synthtool/synthtool/__main__.py", line 102, in <module>
main()
File "/tmpfs/src/github/synthtool/env/lib/python3.6/site-packages/click/core.py", line 829, in __call__
return self.main(*args, **kwargs)
File "/tmpfs/src/github/synthtool/env/lib/python3.6/site-packages/click/core.py", line 782, in main
rv = self.invoke(ctx)
File "/tmpfs/src/github/synthtool/env/lib/python3.6/site-packages/click/core.py", line 1066, in invoke
return ctx.invoke(self.callback, **ctx.params)
File "/tmpfs/src/github/synthtool/env/lib/python3.6/site-packages/click/core.py", line 610, in invoke
return callback(*args, **kwargs)
File "/tmpfs/src/github/synthtool/synthtool/__main__.py", line 94, in main
spec.loader.exec_module(synth_module) # type: ignore
File "<frozen importlib._bootstrap_external>", line 678, in exec_module
File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed
File "/home/kbuilder/.cache/synthtool/python-pubsub/synth.py", line 121, in <module>
python.py_samples()
File "/tmpfs/src/github/synthtool/synthtool/languages/python.py", line 132, in py_samples
sample_readme_metadata = _get_sample_readme_metadata(sample_project_dir)
File "/tmpfs/src/github/synthtool/synthtool/languages/python.py", line 85, in _get_sample_readme_metadata
shell.run([sys.executable, "-m", "pip", "install", "-r", requirements])
File "/tmpfs/src/github/synthtool/synthtool/shell.py", line 39, in run
raise exc
File "/tmpfs/src/github/synthtool/synthtool/shell.py", line 33, in run
encoding="utf-8",
File "/home/kbuilder/.pyenv/versions/3.6.9/lib/python3.6/subprocess.py", line 438, in run
output=stdout, stderr=stderr)
subprocess.CalledProcessError: Command '['/tmpfs/src/github/synthtool/env/bin/python3', '-m', 'pip', 'install', '-r', '/home/kbuilder/.cache/synthtool/python-pubsub/samples/snippets/requirements.txt']' returned non-zero exit status 1.
2020-12-03 06:13:22,564 autosynth [ERROR] > Synthesis failed
2020-12-03 06:13:22,565 autosynth [DEBUG] > Running: git reset --hard HEAD
HEAD is now at 40628d0 chore: release 2.2.0 (#234)
2020-12-03 06:13:22,593 autosynth [DEBUG] > Running: git checkout autosynth
Switched to branch 'autosynth'
2020-12-03 06:13:22,603 autosynth [DEBUG] > Running: git clean -fdx
Removing .pre-commit-config.yaml
Removing __pycache__/
Removing google/__pycache__/
Removing google/cloud/__pycache__/
Traceback (most recent call last):
File "/home/kbuilder/.pyenv/versions/3.6.9/lib/python3.6/runpy.py", line 193, in _run_module_as_main
"__main__", mod_spec)
File "/home/kbuilder/.pyenv/versions/3.6.9/lib/python3.6/runpy.py", line 85, in _run_code
exec(code, run_globals)
File "/tmpfs/src/github/synthtool/autosynth/synth.py", line 354, in <module>
main()
File "/tmpfs/src/github/synthtool/autosynth/synth.py", line 189, in main
return _inner_main(temp_dir)
File "/tmpfs/src/github/synthtool/autosynth/synth.py", line 334, in _inner_main
commit_count = synthesize_loop(x, multiple_prs, change_pusher, synthesizer)
File "/tmpfs/src/github/synthtool/autosynth/synth.py", line 65, in synthesize_loop
has_changes = toolbox.synthesize_version_in_new_branch(synthesizer, youngest)
File "/tmpfs/src/github/synthtool/autosynth/synth_toolbox.py", line 259, in synthesize_version_in_new_branch
synthesizer.synthesize(synth_log_path, self.environ)
File "/tmpfs/src/github/synthtool/autosynth/synthesizer.py", line 120, in synthesize
synth_proc.check_returncode() # Raise an exception.
File "/home/kbuilder/.pyenv/versions/3.6.9/lib/python3.6/subprocess.py", line 389, in check_returncode
self.stderr)
subprocess.CalledProcessError: Command '['/tmpfs/src/github/synthtool/env/bin/python3', '-m', 'synthtool', '--metadata', 'synth.metadata', 'synth.py', '--']' returned non-zero exit status 1.
```
Google internal developers can see the full log [here](http://sponge2/results/invocations/a58840ba-ecc5-4600-a07b-5253f3b37cde/targets/github%2Fsynthtool;config=default/tests;query=python-pubsub;failed=false).
| 1.0 | Synthesis failed for python-pubsub - Hello! Autosynth couldn't regenerate python-pubsub. :broken_heart:
Here's the output from running `synth.py`:
```
"/tmpfs/src/github/synthtool/env/lib/python3.6/site-packages/setuptools/__init__.py", line 153, in setup
return distutils.core.setup(**attrs)
File "/home/kbuilder/.pyenv/versions/3.6.9/lib/python3.6/distutils/core.py", line 148, in setup
dist.run_commands()
File "/home/kbuilder/.pyenv/versions/3.6.9/lib/python3.6/distutils/dist.py", line 955, in run_commands
self.run_command(cmd)
File "/home/kbuilder/.pyenv/versions/3.6.9/lib/python3.6/distutils/dist.py", line 974, in run_command
cmd_obj.run()
File "/tmpfs/src/github/synthtool/env/lib/python3.6/site-packages/setuptools/command/install.py", line 61, in run
return orig.install.run(self)
File "/home/kbuilder/.pyenv/versions/3.6.9/lib/python3.6/distutils/command/install.py", line 545, in run
self.run_command('build')
File "/home/kbuilder/.pyenv/versions/3.6.9/lib/python3.6/distutils/cmd.py", line 313, in run_command
self.distribution.run_command(command)
File "/home/kbuilder/.pyenv/versions/3.6.9/lib/python3.6/distutils/dist.py", line 974, in run_command
cmd_obj.run()
File "/home/kbuilder/.pyenv/versions/3.6.9/lib/python3.6/distutils/command/build.py", line 135, in run
self.run_command(cmd_name)
File "/home/kbuilder/.pyenv/versions/3.6.9/lib/python3.6/distutils/cmd.py", line 313, in run_command
self.distribution.run_command(command)
File "/home/kbuilder/.pyenv/versions/3.6.9/lib/python3.6/distutils/dist.py", line 974, in run_command
cmd_obj.run()
File "/tmpfs/src/github/synthtool/env/lib/python3.6/site-packages/setuptools/command/build_ext.py", line 79, in run
_build_ext.run(self)
File "/home/kbuilder/.pyenv/versions/3.6.9/lib/python3.6/distutils/command/build_ext.py", line 339, in run
self.build_extensions()
File "/tmpfs/tmp/pip-install-t6thy66o/grpcio/src/python/grpcio/commands.py", line 272, in build_extensions
"Failed `build_ext` step:\n{}".format(formatted_exception))
commands.CommandError: Failed `build_ext` step:
Traceback (most recent call last):
File "/home/kbuilder/.pyenv/versions/3.6.9/lib/python3.6/distutils/unixccompiler.py", line 118, in _compile
extra_postargs)
File "/tmpfs/tmp/pip-install-t6thy66o/grpcio/src/python/grpcio/_spawn_patch.py", line 54, in _commandfile_spawn
_classic_spawn(self, command)
File "/home/kbuilder/.pyenv/versions/3.6.9/lib/python3.6/distutils/ccompiler.py", line 909, in spawn
spawn(cmd, dry_run=self.dry_run)
File "/home/kbuilder/.pyenv/versions/3.6.9/lib/python3.6/distutils/spawn.py", line 36, in spawn
_spawn_posix(cmd, search_path, dry_run=dry_run)
File "/home/kbuilder/.pyenv/versions/3.6.9/lib/python3.6/distutils/spawn.py", line 159, in _spawn_posix
% (cmd, exit_status))
distutils.errors.DistutilsExecError: command 'gcc' failed with exit status 1
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/tmpfs/tmp/pip-install-t6thy66o/grpcio/src/python/grpcio/commands.py", line 267, in build_extensions
build_ext.build_ext.build_extensions(self)
File "/home/kbuilder/.pyenv/versions/3.6.9/lib/python3.6/distutils/command/build_ext.py", line 448, in build_extensions
self._build_extensions_serial()
File "/home/kbuilder/.pyenv/versions/3.6.9/lib/python3.6/distutils/command/build_ext.py", line 473, in _build_extensions_serial
self.build_extension(ext)
File "/tmpfs/src/github/synthtool/env/lib/python3.6/site-packages/setuptools/command/build_ext.py", line 196, in build_extension
_build_ext.build_extension(self, ext)
File "/home/kbuilder/.pyenv/versions/3.6.9/lib/python3.6/distutils/command/build_ext.py", line 533, in build_extension
depends=ext.depends)
File "/tmpfs/tmp/pip-install-t6thy66o/grpcio/src/python/grpcio/_parallel_compile_patch.py", line 59, in _parallel_compile
_compile_single_file, objects)
File "/home/kbuilder/.pyenv/versions/3.6.9/lib/python3.6/multiprocessing/pool.py", line 266, in map
return self._map_async(func, iterable, mapstar, chunksize).get()
File "/home/kbuilder/.pyenv/versions/3.6.9/lib/python3.6/multiprocessing/pool.py", line 644, in get
raise self._value
File "/home/kbuilder/.pyenv/versions/3.6.9/lib/python3.6/multiprocessing/pool.py", line 119, in worker
result = (True, func(*args, **kwds))
File "/home/kbuilder/.pyenv/versions/3.6.9/lib/python3.6/multiprocessing/pool.py", line 44, in mapstar
return list(map(*args))
File "/tmpfs/tmp/pip-install-t6thy66o/grpcio/src/python/grpcio/_parallel_compile_patch.py", line 54, in _compile_single_file
self._compile(obj, src, ext, cc_args, extra_postargs, pp_opts)
File "/home/kbuilder/.pyenv/versions/3.6.9/lib/python3.6/distutils/unixccompiler.py", line 120, in _compile
raise CompileError(msg)
distutils.errors.CompileError: command 'gcc' failed with exit status 1
----------------------------------------
Command "/tmpfs/src/github/synthtool/env/bin/python3 -u -c "import setuptools, tokenize;__file__='/tmpfs/tmp/pip-install-t6thy66o/grpcio/setup.py';f=getattr(tokenize, 'open', open)(__file__);code=f.read().replace('\r\n', '\n');f.close();exec(compile(code, __file__, 'exec'))" install --record /tmpfs/tmp/pip-record-_ss41kml/install-record.txt --single-version-externally-managed --compile --install-headers /tmpfs/src/github/synthtool/env/include/site/python3.6/grpcio" failed with error code 1 in /tmpfs/tmp/pip-install-t6thy66o/grpcio/
You are using pip version 18.1, however version 20.3.1 is available.
You should consider upgrading via the 'pip install --upgrade pip' command.
Traceback (most recent call last):
File "/home/kbuilder/.pyenv/versions/3.6.9/lib/python3.6/runpy.py", line 193, in _run_module_as_main
"__main__", mod_spec)
File "/home/kbuilder/.pyenv/versions/3.6.9/lib/python3.6/runpy.py", line 85, in _run_code
exec(code, run_globals)
File "/tmpfs/src/github/synthtool/synthtool/__main__.py", line 102, in <module>
main()
File "/tmpfs/src/github/synthtool/env/lib/python3.6/site-packages/click/core.py", line 829, in __call__
return self.main(*args, **kwargs)
File "/tmpfs/src/github/synthtool/env/lib/python3.6/site-packages/click/core.py", line 782, in main
rv = self.invoke(ctx)
File "/tmpfs/src/github/synthtool/env/lib/python3.6/site-packages/click/core.py", line 1066, in invoke
return ctx.invoke(self.callback, **ctx.params)
File "/tmpfs/src/github/synthtool/env/lib/python3.6/site-packages/click/core.py", line 610, in invoke
return callback(*args, **kwargs)
File "/tmpfs/src/github/synthtool/synthtool/__main__.py", line 94, in main
spec.loader.exec_module(synth_module) # type: ignore
File "<frozen importlib._bootstrap_external>", line 678, in exec_module
File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed
File "/home/kbuilder/.cache/synthtool/python-pubsub/synth.py", line 121, in <module>
python.py_samples()
File "/tmpfs/src/github/synthtool/synthtool/languages/python.py", line 132, in py_samples
sample_readme_metadata = _get_sample_readme_metadata(sample_project_dir)
File "/tmpfs/src/github/synthtool/synthtool/languages/python.py", line 85, in _get_sample_readme_metadata
shell.run([sys.executable, "-m", "pip", "install", "-r", requirements])
File "/tmpfs/src/github/synthtool/synthtool/shell.py", line 39, in run
raise exc
File "/tmpfs/src/github/synthtool/synthtool/shell.py", line 33, in run
encoding="utf-8",
File "/home/kbuilder/.pyenv/versions/3.6.9/lib/python3.6/subprocess.py", line 438, in run
output=stdout, stderr=stderr)
subprocess.CalledProcessError: Command '['/tmpfs/src/github/synthtool/env/bin/python3', '-m', 'pip', 'install', '-r', '/home/kbuilder/.cache/synthtool/python-pubsub/samples/snippets/requirements.txt']' returned non-zero exit status 1.
2020-12-03 06:13:22,564 autosynth [ERROR] > Synthesis failed
2020-12-03 06:13:22,565 autosynth [DEBUG] > Running: git reset --hard HEAD
HEAD is now at 40628d0 chore: release 2.2.0 (#234)
2020-12-03 06:13:22,593 autosynth [DEBUG] > Running: git checkout autosynth
Switched to branch 'autosynth'
2020-12-03 06:13:22,603 autosynth [DEBUG] > Running: git clean -fdx
Removing .pre-commit-config.yaml
Removing __pycache__/
Removing google/__pycache__/
Removing google/cloud/__pycache__/
Traceback (most recent call last):
File "/home/kbuilder/.pyenv/versions/3.6.9/lib/python3.6/runpy.py", line 193, in _run_module_as_main
"__main__", mod_spec)
File "/home/kbuilder/.pyenv/versions/3.6.9/lib/python3.6/runpy.py", line 85, in _run_code
exec(code, run_globals)
File "/tmpfs/src/github/synthtool/autosynth/synth.py", line 354, in <module>
main()
File "/tmpfs/src/github/synthtool/autosynth/synth.py", line 189, in main
return _inner_main(temp_dir)
File "/tmpfs/src/github/synthtool/autosynth/synth.py", line 334, in _inner_main
commit_count = synthesize_loop(x, multiple_prs, change_pusher, synthesizer)
File "/tmpfs/src/github/synthtool/autosynth/synth.py", line 65, in synthesize_loop
has_changes = toolbox.synthesize_version_in_new_branch(synthesizer, youngest)
File "/tmpfs/src/github/synthtool/autosynth/synth_toolbox.py", line 259, in synthesize_version_in_new_branch
synthesizer.synthesize(synth_log_path, self.environ)
File "/tmpfs/src/github/synthtool/autosynth/synthesizer.py", line 120, in synthesize
synth_proc.check_returncode() # Raise an exception.
File "/home/kbuilder/.pyenv/versions/3.6.9/lib/python3.6/subprocess.py", line 389, in check_returncode
self.stderr)
subprocess.CalledProcessError: Command '['/tmpfs/src/github/synthtool/env/bin/python3', '-m', 'synthtool', '--metadata', 'synth.metadata', 'synth.py', '--']' returned non-zero exit status 1.
```
Google internal developers can see the full log [here](http://sponge2/results/invocations/a58840ba-ecc5-4600-a07b-5253f3b37cde/targets/github%2Fsynthtool;config=default/tests;query=python-pubsub;failed=false).
| priority | synthesis failed for python pubsub hello autosynth couldn t regenerate python pubsub broken heart here s the output from running synth py tmpfs src github synthtool env lib site packages setuptools init py line in setup return distutils core setup attrs file home kbuilder pyenv versions lib distutils core py line in setup dist run commands file home kbuilder pyenv versions lib distutils dist py line in run commands self run command cmd file home kbuilder pyenv versions lib distutils dist py line in run command cmd obj run file tmpfs src github synthtool env lib site packages setuptools command install py line in run return orig install run self file home kbuilder pyenv versions lib distutils command install py line in run self run command build file home kbuilder pyenv versions lib distutils cmd py line in run command self distribution run command command file home kbuilder pyenv versions lib distutils dist py line in run command cmd obj run file home kbuilder pyenv versions lib distutils command build py line in run self run command cmd name file home kbuilder pyenv versions lib distutils cmd py line in run command self distribution run command command file home kbuilder pyenv versions lib distutils dist py line in run command cmd obj run file tmpfs src github synthtool env lib site packages setuptools command build ext py line in run build ext run self file home kbuilder pyenv versions lib distutils command build ext py line in run self build extensions file tmpfs tmp pip install grpcio src python grpcio commands py line in build extensions failed build ext step n format formatted exception commands commanderror failed build ext step traceback most recent call last file home kbuilder pyenv versions lib distutils unixccompiler py line in compile extra postargs file tmpfs tmp pip install grpcio src python grpcio spawn patch py line in commandfile spawn classic spawn self command file home kbuilder pyenv versions lib distutils ccompiler py line in spawn spawn cmd dry run self dry run file home kbuilder pyenv versions lib distutils spawn py line in spawn spawn posix cmd search path dry run dry run file home kbuilder pyenv versions lib distutils spawn py line in spawn posix cmd exit status distutils errors distutilsexecerror command gcc failed with exit status during handling of the above exception another exception occurred traceback most recent call last file tmpfs tmp pip install grpcio src python grpcio commands py line in build extensions build ext build ext build extensions self file home kbuilder pyenv versions lib distutils command build ext py line in build extensions self build extensions serial file home kbuilder pyenv versions lib distutils command build ext py line in build extensions serial self build extension ext file tmpfs src github synthtool env lib site packages setuptools command build ext py line in build extension build ext build extension self ext file home kbuilder pyenv versions lib distutils command build ext py line in build extension depends ext depends file tmpfs tmp pip install grpcio src python grpcio parallel compile patch py line in parallel compile compile single file objects file home kbuilder pyenv versions lib multiprocessing pool py line in map return self map async func iterable mapstar chunksize get file home kbuilder pyenv versions lib multiprocessing pool py line in get raise self value file home kbuilder pyenv versions lib multiprocessing pool py line in worker result true func args kwds file home kbuilder pyenv versions lib 
multiprocessing pool py line in mapstar return list map args file tmpfs tmp pip install grpcio src python grpcio parallel compile patch py line in compile single file self compile obj src ext cc args extra postargs pp opts file home kbuilder pyenv versions lib distutils unixccompiler py line in compile raise compileerror msg distutils errors compileerror command gcc failed with exit status command tmpfs src github synthtool env bin u c import setuptools tokenize file tmpfs tmp pip install grpcio setup py f getattr tokenize open open file code f read replace r n n f close exec compile code file exec install record tmpfs tmp pip record install record txt single version externally managed compile install headers tmpfs src github synthtool env include site grpcio failed with error code in tmpfs tmp pip install grpcio you are using pip version however version is available you should consider upgrading via the pip install upgrade pip command traceback most recent call last file home kbuilder pyenv versions lib runpy py line in run module as main main mod spec file home kbuilder pyenv versions lib runpy py line in run code exec code run globals file tmpfs src github synthtool synthtool main py line in main file tmpfs src github synthtool env lib site packages click core py line in call return self main args kwargs file tmpfs src github synthtool env lib site packages click core py line in main rv self invoke ctx file tmpfs src github synthtool env lib site packages click core py line in invoke return ctx invoke self callback ctx params file tmpfs src github synthtool env lib site packages click core py line in invoke return callback args kwargs file tmpfs src github synthtool synthtool main py line in main spec loader exec module synth module type ignore file line in exec module file line in call with frames removed file home kbuilder cache synthtool python pubsub synth py line in python py samples file tmpfs src github synthtool synthtool languages python py line in py samples sample readme metadata get sample readme metadata sample project dir file tmpfs src github synthtool synthtool languages python py line in get sample readme metadata shell run file tmpfs src github synthtool synthtool shell py line in run raise exc file tmpfs src github synthtool synthtool shell py line in run encoding utf file home kbuilder pyenv versions lib subprocess py line in run output stdout stderr stderr subprocess calledprocesserror command returned non zero exit status autosynth synthesis failed autosynth running git reset hard head head is now at chore release autosynth running git checkout autosynth switched to branch autosynth autosynth running git clean fdx removing pre commit config yaml removing pycache removing google pycache removing google cloud pycache traceback most recent call last file home kbuilder pyenv versions lib runpy py line in run module as main main mod spec file home kbuilder pyenv versions lib runpy py line in run code exec code run globals file tmpfs src github synthtool autosynth synth py line in main file tmpfs src github synthtool autosynth synth py line in main return inner main temp dir file tmpfs src github synthtool autosynth synth py line in inner main commit count synthesize loop x multiple prs change pusher synthesizer file tmpfs src github synthtool autosynth synth py line in synthesize loop has changes toolbox synthesize version in new branch synthesizer youngest file tmpfs src github synthtool autosynth synth toolbox py line in synthesize version in new branch synthesizer 
synthesize synth log path self environ file tmpfs src github synthtool autosynth synthesizer py line in synthesize synth proc check returncode raise an exception file home kbuilder pyenv versions lib subprocess py line in check returncode self stderr subprocess calledprocesserror command returned non zero exit status google internal developers can see the full log | 1 |
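The failure recorded in this row reduces to one step: synthtool shells out to `pip install -r samples/snippets/requirements.txt`, pip tries to build `grpcio` from source, and the `gcc` build step fails, so the whole synthesis run aborts with a `CalledProcessError`. The sketch below is not synthtool's actual code; it only mirrors that subprocess pattern (the requirements path is a placeholder) to show where the exception in the traceback comes from.

```python
# Minimal sketch of the failing step seen in the traceback above: installing a
# requirements file with pip via subprocess and surfacing the exit status.
# The path is illustrative, not the real synthtool layout.
import subprocess
import sys

def install_requirements(requirements_path: str) -> None:
    """Run `pip install -r <requirements_path>` and re-raise on failure."""
    try:
        subprocess.run(
            [sys.executable, "-m", "pip", "install", "-r", requirements_path],
            check=True,          # raises CalledProcessError on a non-zero exit
            capture_output=True,
            text=True,
        )
    except subprocess.CalledProcessError as exc:
        # Surface pip's stderr (e.g. a grpcio `build_ext` failure) before re-raising.
        print(exc.stderr, file=sys.stderr)
        raise

if __name__ == "__main__":
    install_requirements("samples/snippets/requirements.txt")
```

Upgrading pip (the log itself points at 20.3.1) so it can pick up a prebuilt grpcio wheel is the usual way out of this class of failure, though that is a general observation rather than the fix applied in this repository.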
214,584 | 16,600,012,223 | IssuesEvent | 2021-06-01 18:02:48 | sul-dlss/happy-heron | https://api.github.com/repos/sul-dlss/happy-heron | closed | Display approximate dates as "ca. [date]" | PO R1.0 user testing | These currently display with a question mark behind them.
A single approximate date should read "ca. 1492"
An approximate date range should read as one of the following, depending on whether one or both dates are approximate:
- ca. 1492 - ca. 1529
- ca. 1492 - 1529
- ca 1492 - ca. 1529 | 1.0 | Display approximate dates as "ca. [date]" - These currently display with a question mark behind them.
A single approximate date should read "ca. 1492"
An approximate date range should read as one of the following, depending on whether one or both dates are approximate:
- ca. 1492 - ca. 1529
- ca. 1492 - 1529
- ca 1492 - ca. 1529 | non_priority | display approximate dates as ca these currently display with a question mark behind them a single approximate date should read ca an approximate data range should read as one of the following depending on whether one or both dates are approximate ca ca ca ca ca | 0 |
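The acceptance criteria in the row above reduce to a small formatting rule: prefix each approximate year with "ca." and join the two endpoints of a range. happy-heron is a Rails application, so the snippet below is only an illustrative sketch of that rule, not the project's code.

```python
# Illustrative sketch of the display rule described above: approximate years get
# a "ca." prefix, and ranges combine the two formatted endpoints.
def format_year(year: int, approximate: bool) -> str:
    return f"ca. {year}" if approximate else str(year)

def format_range(start: int, start_approx: bool, end: int, end_approx: bool) -> str:
    return f"{format_year(start, start_approx)} - {format_year(end, end_approx)}"

assert format_year(1492, True) == "ca. 1492"
assert format_range(1492, True, 1529, True) == "ca. 1492 - ca. 1529"
assert format_range(1492, True, 1529, False) == "ca. 1492 - 1529"
```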
157,111 | 24,628,504,961 | IssuesEvent | 2022-10-16 20:25:03 | dotnet/efcore | https://api.github.com/repos/dotnet/efcore | closed | Query causes InvalidOperationException when creating an anonymous type | closed-by-design customer-reported | I've updated an application from EF Core 3.1 to EF Core 6.
The code now produces an InvalidOperationException : Nullable object must have a value on some queries.
Situation :
A subscription entity with a 0 to 1 relationship to a customer entity
In Customer entity configuration :
```C#
builder.HasMany(e => e.Subscriptions)
.WithOne(e => e.Customer)
.IsRequired(false);
```
The following query ran successfully on EF Core 3.1 but now produces an error on EF Core 6:
```C#
var query = from subscription in dbContext.Subscriptions
where subscription.Id == request.SourceSubscriptionId
select new
{
SubscriptionId = subscription.Id,
CustomerId = subscription.Customer.Id,
SerialNumber = subscription.SerialNumber
};
var result = await query.SingleOrDefaultAsync(cancellationToken);
```
The anonymous class is generated with CustomerId of type Guid instead of Guid?, even though the Customer is optional.
And the query fails because the subscription does not have a Customer.
It won't work even when I make the Customer property on the subscription entity nullable like this
```C#
public Customer? Customer { get; set; }
```
The query only runs when I explicitly cast the CustomerId in the query like this :
```C#
var query = from subscription in dbContext.Subscriptions
where subscription.Id == request.SourceSubscriptionId
select new
{
SubscriptionId = subscription.Id,
CustomerId = (Guid?) subscription.Customer.Id,
SerialNumber = subscription.SerialNumber
};
var result = await query.SingleOrDefaultAsync(cancellationToken);
```
part of stacktrace :
```
Message:
Test method xxxxxxxxxxxx threw exception:
System.InvalidOperationException: Nullable object must have a value.
Stack Trace:
lambda_method2553(Closure , QueryContext , DbDataReader , ResultContext , SingleQueryResultCoordinator )
AsyncEnumerator.MoveNextAsync()
ShapedQueryCompilingExpressionVisitor.SingleOrDefaultAsync[TSource](IAsyncEnumerable`1 asyncEnumerable, CancellationToken cancellationToken)
ShapedQueryCompilingExpressionVisitor.SingleOrDefaultAsync[TSource](IAsyncEnumerable`1 asyncEnumerable, CancellationToken cancellationToken)
ListRenewTargetSubscriptionsBySourceSubscriptionHandler.Handle(ListRenewTargetSubscriptionsBySourceSubscription request, CancellationToken cancellationToken) line 55
```
### Include provider and version information
EF Core version: 6.0.1
Database provider: Microsoft.EntityFrameworkCore.SqlServer
Target framework: (e.g. .NET 6.0)
Operating system: Windows 11
IDE: (e.g. Visual Studio 2022 17.0.4)
| 1.0 | Query causes InvalidOperationException when creating an anonymous type - I've updated an application from EF Core 3.1 to EF Core 6.
The code now produces an InvalidOperationException : Nullable object must have a value on some queries.
Situation :
A subscription entity with a 0 to 1 relationship to a customer entity
In Customer entity configuration :
```C#
builder.HasMany(e => e.Subscriptions)
.WithOne(e => e.Customer)
.IsRequired(false);
```
The following query ran successfully on EF Core 3.1 but now produces an error on EF Core 6:
```C#
var query = from subscription in dbContext.Subscriptions
where subscription.Id == request.SourceSubscriptionId
select new
{
SubscriptionId = subscription.Id,
CustomerId = subscription.Customer.Id,
SerialNumber = subscription.SerialNumber
};
var result = await query.SingleOrDefaultAsync(cancellationToken);
```
The anonymous class is generated with CustomerId of type Guid instead of Guid?, even though the Customer is optional.
And the query fails because the subscription does not have a Customer.
It won't work even when I make the Customer property on the subscription entity nullable like this
```C#
public Customer? Customer { get; set; }
```
The query only runs when I explicitly cast the CustomerId in the query like this :
```C#
var query = from subscription in dbContext.Subscriptions
where subscription.Id == request.SourceSubscriptionId
select new
{
SubscriptionId = subscription.Id,
CustomerId = (Guid?) subscription.Customer.Id,
SerialNumber = subscription.SerialNumber
};
var result = await query.SingleOrDefaultAsync(cancellationToken);
```
part of stacktrace :
```
Message:
Test method xxxxxxxxxxxx threw exception:
System.InvalidOperationException: Nullable object must have a value.
Stack Trace:
lambda_method2553(Closure , QueryContext , DbDataReader , ResultContext , SingleQueryResultCoordinator )
AsyncEnumerator.MoveNextAsync()
ShapedQueryCompilingExpressionVisitor.SingleOrDefaultAsync[TSource](IAsyncEnumerable`1 asyncEnumerable, CancellationToken cancellationToken)
ShapedQueryCompilingExpressionVisitor.SingleOrDefaultAsync[TSource](IAsyncEnumerable`1 asyncEnumerable, CancellationToken cancellationToken)
ListRenewTargetSubscriptionsBySourceSubscriptionHandler.Handle(ListRenewTargetSubscriptionsBySourceSubscription request, CancellationToken cancellationToken) line 55
```
### Include provider and version information
EF Core version: 6.0.1
Database provider: Microsoft.EntityFrameworkCore.SqlServer
Target framework: (e.g. .NET 6.0)
Operating system: Windows 11
IDE: (e.g. Visual Studio 2022 17.0.4)
| non_priority | query causes invalidoperationexception when creating an anonymous type i ve updated an application from ef core to ef core the code now produces an invalidoperationexception nullable object must have a value on some queries situation a subscription entity with a to relationship to a customer entity in customer entity configuration c builder hasmany e e subscriptions withone e e customer isrequired false following query ran successful on ef core but now produces an error on ef core c var query from subscription in dbcontext subscriptions where subscription id request sourcesubscriptionid select new subscriptionid subscription id customerid subscription customer id serialnumber subscription serialnumber var result await query singleordefaultasync cancellationtoken the anonymous class is generated with customerid of type guid instead of guid even though the customer is optional and the query fails because the subscription does not have a customer it won t work even when i make the customer property on the subscription entity nullable like this c public customer customer get set the query only runs when i explicitly cast the customerid in the query like this c var query from subscription in dbcontext subscriptions where subscription id request sourcesubscriptionid select new subscriptionid subscription id customerid guid subscription customer id serialnumber subscription serialnumber var result await query singleordefaultasync cancellationtoken part of stacktrace message test method xxxxxxxxxxxx threw exception system invalidoperationexception nullable object must have a value stack trace lambda closure querycontext dbdatareader resultcontext singlequeryresultcoordinator asyncenumerator movenextasync shapedquerycompilingexpressionvisitor singleordefaultasync iasyncenumerable asyncenumerable cancellationtoken cancellationtoken shapedquerycompilingexpressionvisitor singleordefaultasync iasyncenumerable asyncenumerable cancellationtoken cancellationtoken listrenewtargetsubscriptionsbysourcesubscriptionhandler handle listrenewtargetsubscriptionsbysourcesubscription request cancellationtoken cancellationtoken line include provider and version information ef core version database provider microsoft entityframeworkcore sqlserver target framework e g net operating system windows ide e g visual studio | 0 |
760,558 | 26,647,402,661 | IssuesEvent | 2023-01-25 11:02:39 | slsdetectorgroup/motorControlSoftware | https://api.github.com/repos/slsdetectorgroup/motorControlSoftware | closed | Fluorescence (big) move not proportional | action - Bug priority - High status - wont fix | <!-- Preview changes before submitting -->
<!-- Please fill out everything with an *, as this report will be discarded otherwise -->
<!-- This is a comment, the syntax is a bit different from c++ or bash -->
##### *Distribution:
<!-- RHEL7, RHEL6, Fedora, etc -->
##### *Xray Box type:
<!-- If applicable, Laser Box, Big Xray box, Vaccum Box -->
xray box
##### Priority:
<!-- Super Low, Low, Medium, High, Super High -->
##### *Describe the bug
<!-- A clear and concise description of what the bug is -->
Fluorescence motor later targets seem not proportional with position. Also makes weird sounds. umotmin, umotgrad and pitch were as per config (vt80)
@erikfrojdh
##### Expected behavior
<!-- A clear and concise description of what you expected to happen. -->
##### To Reproduce
<!-- Steps to reproduce the behavior: -->
<!-- 1. Go to '...' -->
<!-- 2. Click on '....' -->
<!-- 3. Scroll down to '....' -->
<!-- 4. See error -->
##### Screenshots
<!-- If applicable, add screenshots to help explain your problem. -->
##### Additional context
<!-- Add any other context about the problem here. -->
| 1.0 | Fluorescence (big) move not proportional - <!-- Preview changes before submitting -->
<!-- Please fill out everything with an *, as this report will be discarded otherwise -->
<!-- This is a comment, the syntax is a bit different from c++ or bash -->
##### *Distribution:
<!-- RHEL7, RHEL6, Fedora, etc -->
##### *Xray Box type:
<!-- If applicable, Laser Box, Big Xray box, Vaccum Box -->
xray box
##### Priority:
<!-- Super Low, Low, Medium, High, Super High -->
##### *Describe the bug
<!-- A clear and concise description of what the bug is -->
Fluorescence motor later targets seem not proportional with position. Also makes weird sounds. umotmin, umotgrad and pitch were as per config (vt80)
@erikfrojdh
##### Expected behavior
<!-- A clear and concise description of what you expected to happen. -->
##### To Reproduce
<!-- Steps to reproduce the behavior: -->
<!-- 1. Go to '...' -->
<!-- 2. Click on '....' -->
<!-- 3. Scroll down to '....' -->
<!-- 4. See error -->
##### Screenshots
<!-- If applicable, add screenshots to help explain your problem. -->
##### Additional context
<!-- Add any other context about the problem here. -->
| priority | fluorescence big move not proportional distribution xray box type xray box priority describe the bug fluorescence motor later targets seem not proportional with position also makes weird sounds umotmin umotgrad and pitch were as per config erikfrojdh expected behavior to reproduce screenshots additional context | 1 |
305,676 | 23,126,125,526 | IssuesEvent | 2022-07-28 05:51:48 | exoscale/terraform-provider-exoscale | https://api.github.com/repos/exoscale/terraform-provider-exoscale | closed | ~~Could you offer an darwin_arm64 package?~~ Update Readme | documentation | Hi,
it seems that I cannot use Exoscale/Terraform on my new Macbook:
```
│ Error: Incompatible provider version
│
│ Provider registry.terraform.io/exoscale/exoscale v0.18.2 does not have a package available for your current platform, darwin_arm64.
│
│ Provider releases are separate from Terraform CLI releases, so not all providers are available for all platforms. Other versions of this provider may have different platforms supported.
╵
```
Could you provide a package? | 1.0 | ~~Could you offer an darwin_arm64 package?~~ Update Readme - Hi,
it seems that I cannot use Exoscale/Terraform on my new Macbook:
```
│ Error: Incompatible provider version
│
│ Provider registry.terraform.io/exoscale/exoscale v0.18.2 does not have a package available for your current platform, darwin_arm64.
│
│ Provider releases are separate from Terraform CLI releases, so not all providers are available for all platforms. Other versions of this provider may have different platforms supported.
╵
```
Could you provide a package? | non_priority | could you offer an darwin package update readme hi it seems that i cannot use exoscale terraform on my new macbook │ error incompatible provider version │ │ provider registry terraform io exoscale exoscale does not have a package available for your current platform darwin │ │ provider releases are separate from terraform cli releases so not all providers are available for all platforms other versions of this provider may have different platforms supported ╵ could you provide a package | 0 |
211,804 | 7,208,052,236 | IssuesEvent | 2018-02-07 00:56:30 | Motoxpro/WorldCupStatsSite | https://api.github.com/repos/Motoxpro/WorldCupStatsSite | closed | Get average speed for all riders for qualy and final | High Priority Data Issue | Once we have the venue length we can use that to get the average speed for each rider | 1.0 | Get average speed for all riders for qualy and final - Once we have the venue length we can use that to get the average speed for each rider | priority | get average speed for all riders for qualy and final once we have the venue length we can use that to get the average speed for each rider | 1 |
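The request in the row above is a one-line calculation once the venue length is known: average speed is distance divided by the rider's elapsed time. A minimal sketch follows; the units and field names are assumptions, not the WorldCupStatsSite schema.

```python
# Hedged sketch of the calculation described above: once the venue length is
# known, average speed is just distance over the rider's elapsed time.
def average_speed_kmh(venue_length_km: float, elapsed_seconds: float) -> float:
    hours = elapsed_seconds / 3600.0
    return venue_length_km / hours

# e.g. a 2.5 km track ridden in 210 s -> ~42.9 km/h
print(round(average_speed_kmh(2.5, 210), 1))
```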
7,929 | 20,106,227,151 | IssuesEvent | 2022-02-07 10:45:34 | zowe/community | https://api.github.com/repos/zowe/community | opened | Discuss options for exposing extensions metadata to API ML | new Architecture-Call | We are extending the API ML with an extension that aims to use a custom package name. We will provide code to have this option in the API ML to scan other package names than the ones already defined. An issue that remains to be solved is how to provide this extension base package names to the APIML on startup.
One such possibility would be to use the extension manifest: https://docs.zowe.org/stable/extend/packaging-zos-extensions/#zowe-component-manifest | 1.0 | Discuss options for exposing extensions metadata to API ML - We are extending the API ML with an extension that aims to use a custom package name. We will provide code to have this option in the API ML to scan other package names than the ones already defined. An issue that remains to be solved is how to provide this extension base package names to the APIML on startup.
One such possibility would be to use the extension manifest: https://docs.zowe.org/stable/extend/packaging-zos-extensions/#zowe-component-manifest | non_priority | discuss options for exposing extensions metadata to api ml we are extending the api ml with an extension that aims to use a custom package name we will provide code to have this option in the api ml to scan other package names than the ones already defined an issue that remains to be solved is how to provide this extension base package names to the apiml on startup one such possibility would be to use the extension manifest | 0 |
69,263 | 14,983,494,932 | IssuesEvent | 2021-01-28 17:17:11 | elastic/kibana | https://api.github.com/repos/elastic/kibana | opened | [Security Solution][Hosts] endpoint names apears in host tab multiple times | Team: SecuritySolution bug | **Describe the bug:**
Same host name appears multiple times in the Host tab
**Kibana/Elasticsearch Stack version:**
7.11
**Server OS version:**
**Browser and Browser OS versions:**
Firefox
**Elastic Endpoint version:**
**Original install method (e.g. download page, yum, from source, etc.):**
**Functional Area (e.g. Endpoint management, timelines, resolver, etc.):**
**Steps to reproduce:**
1. Install agent with Endpoint
2. In the Host tab you will see that host name appears multiple times
All Upper case, all lower case and with DNS name
3.
**Current behavior:**
**Expected behavior:**
**Screenshots (if relevant):**

**Errors in browser console (if relevant):**
**Provide logs and/or server output (if relevant):**
**Any additional context (logs, chat logs, magical formulas, etc.):**
| True | [Security Solution][Hosts] endpoint names apears in host tab multiple times - **Describe the bug:**
Same host name appears multiple times in the Host tab
**Kibana/Elasticsearch Stack version:**
7.11
**Server OS version:**
**Browser and Browser OS versions:**
Firefox
**Elastic Endpoint version:**
**Original install method (e.g. download page, yum, from source, etc.):**
**Functional Area (e.g. Endpoint management, timelines, resolver, etc.):**
**Steps to reproduce:**
1. Install agent with Endpoint
2. In the Host tab you will see that host name appears multiple times
All Upper case, all lower case and with DNS name
3.
**Current behavior:**
**Expected behavior:**
**Screenshots (if relevant):**

**Errors in browser console (if relevant):**
**Provide logs and/or server output (if relevant):**
**Any additional context (logs, chat logs, magical formulas, etc.):**
| non_priority | endpoint names apears in host tab multiple times describe the bug same host name appears multiple times in the host tab kibana elasticsearch stack version server os version browser and browser os versions firefox elastic endpoint version original install method e g download page yum from source etc functional area e g endpoint management timelines resolver etc steps to reproduce install agent with endpoint in the host tab you will see that host name appears multiple times all upper case all lower case and with dns name current behavior expected behavior screenshots if relevant errors in browser console if relevant provide logs and or server output if relevant any additional context logs chat logs magical formulas etc | 0 |
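The duplicates described in the row above differ only by letter case and by whether the DNS suffix is attached. The sketch below is not how Kibana or the Elastic stack resolves hosts; it merely illustrates the normalization that would collapse the three variants into a single entry.

```python
# Sketch of the de-duplication idea only; this is not Kibana's implementation.
def normalize_host(name: str) -> str:
    """Lower-case and strip any DNS suffix so HOST, host and host.example.com match."""
    return name.lower().split(".", 1)[0]

names = ["DESKTOP-01", "desktop-01", "desktop-01.example.local"]
print({normalize_host(n) for n in names})  # {'desktop-01'} - a single host entry
```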
26,750 | 4,778,152,780 | IssuesEvent | 2016-10-27 18:25:49 | wheeler-microfluidics/microdrop | https://api.github.com/repos/wheeler-microfluidics/microdrop | closed | Device fails to import (Trac #110) | defect microdrop Migrated from Trac | Trying to import the attached device SVG file causes the following exception:
```
(<type 'exceptions.IndexError'>, IndexError('list index out of range',), <traceback object at 0xa73c2d4>) {}
File "/home/christian/Documents/dev/udrop/microdrop/microdrop/gui/dmf_device_controller.py", line 472, in on_import_dmf_device
app.dmf_device = DmfDevice.load_svg(filename)
File "/home/christian/Documents/dev/udrop/microdrop/microdrop/dmf_device.py", line 82, in load_svg
path_group = PathGroup.load_svg(svg_path, on_error=parse_warning)
File "/home/christian/Documents/dev/udrop/microdrop/microdrop/svg_model/path_group.py", line 31, in load_svg
boundary = svg.get_boundary()
File "/home/christian/Documents/dev/udrop/microdrop/microdrop/svg_model/svgload/svg_parser.py", line 80, in get_boundary
boundary = Path([self.get_bounding_box()])
File "/home/christian/Documents/dev/udrop/microdrop/microdrop/svg_model/svgload/svg_parser.py", line 69, in get_bounding_box
x_vals = zip(*points)[0]
```
Migrated from http://microfluidics.utoronto.ca/microdrop/ticket/110
```json
{
"status": "closed",
"changetime": "2014-04-17T19:39:01",
"description": "Trying to import the attached device SVG file causes the following exception:\n\n{{{\n(<type 'exceptions.IndexError'>, IndexError('list index out of range',), <traceback object at 0xa73c2d4>) {}\n File \"/home/christian/Documents/dev/udrop/microdrop/microdrop/gui/dmf_device_controller.py\", line 472, in on_import_dmf_device\n app.dmf_device = DmfDevice.load_svg(filename)\n File \"/home/christian/Documents/dev/udrop/microdrop/microdrop/dmf_device.py\", line 82, in load_svg\n path_group = PathGroup.load_svg(svg_path, on_error=parse_warning)\n File \"/home/christian/Documents/dev/udrop/microdrop/microdrop/svg_model/path_group.py\", line 31, in load_svg\n boundary = svg.get_boundary()\n File \"/home/christian/Documents/dev/udrop/microdrop/microdrop/svg_model/svgload/svg_parser.py\", line 80, in get_boundary\n boundary = Path([self.get_bounding_box()])\n File \"/home/christian/Documents/dev/udrop/microdrop/microdrop/svg_model/svgload/svg_parser.py\", line 69, in get_bounding_box\n x_vals = zip(*points)[0]\n}}}",
"reporter": "cfobel",
"cc": "",
"resolution": "fixed",
"_ts": "1397763541728826",
"component": "microdrop",
"summary": "Device fails to import",
"priority": "major",
"keywords": "",
"version": "0.1",
"time": "2012-07-01T04:14:46",
"milestone": "Microdrop 1.0",
"owner": "cfobel",
"type": "defect"
}
```
| 1.0 | Device fails to import (Trac #110) - Trying to import the attached device SVG file causes the following exception:
```
(<type 'exceptions.IndexError'>, IndexError('list index out of range',), <traceback object at 0xa73c2d4>) {}
File "/home/christian/Documents/dev/udrop/microdrop/microdrop/gui/dmf_device_controller.py", line 472, in on_import_dmf_device
app.dmf_device = DmfDevice.load_svg(filename)
File "/home/christian/Documents/dev/udrop/microdrop/microdrop/dmf_device.py", line 82, in load_svg
path_group = PathGroup.load_svg(svg_path, on_error=parse_warning)
File "/home/christian/Documents/dev/udrop/microdrop/microdrop/svg_model/path_group.py", line 31, in load_svg
boundary = svg.get_boundary()
File "/home/christian/Documents/dev/udrop/microdrop/microdrop/svg_model/svgload/svg_parser.py", line 80, in get_boundary
boundary = Path([self.get_bounding_box()])
File "/home/christian/Documents/dev/udrop/microdrop/microdrop/svg_model/svgload/svg_parser.py", line 69, in get_bounding_box
x_vals = zip(*points)[0]
```
Migrated from http://microfluidics.utoronto.ca/microdrop/ticket/110
```json
{
"status": "closed",
"changetime": "2014-04-17T19:39:01",
"description": "Trying to import the attached device SVG file causes the following exception:\n\n{{{\n(<type 'exceptions.IndexError'>, IndexError('list index out of range',), <traceback object at 0xa73c2d4>) {}\n File \"/home/christian/Documents/dev/udrop/microdrop/microdrop/gui/dmf_device_controller.py\", line 472, in on_import_dmf_device\n app.dmf_device = DmfDevice.load_svg(filename)\n File \"/home/christian/Documents/dev/udrop/microdrop/microdrop/dmf_device.py\", line 82, in load_svg\n path_group = PathGroup.load_svg(svg_path, on_error=parse_warning)\n File \"/home/christian/Documents/dev/udrop/microdrop/microdrop/svg_model/path_group.py\", line 31, in load_svg\n boundary = svg.get_boundary()\n File \"/home/christian/Documents/dev/udrop/microdrop/microdrop/svg_model/svgload/svg_parser.py\", line 80, in get_boundary\n boundary = Path([self.get_bounding_box()])\n File \"/home/christian/Documents/dev/udrop/microdrop/microdrop/svg_model/svgload/svg_parser.py\", line 69, in get_bounding_box\n x_vals = zip(*points)[0]\n}}}",
"reporter": "cfobel",
"cc": "",
"resolution": "fixed",
"_ts": "1397763541728826",
"component": "microdrop",
"summary": "Device fails to import",
"priority": "major",
"keywords": "",
"version": "0.1",
"time": "2012-07-01T04:14:46",
"milestone": "Microdrop 1.0",
"owner": "cfobel",
"type": "defect"
}
```
| non_priority | device fails to import trac trying to import the attached device svg file causes the following exception indexerror list index out of range file home christian documents dev udrop microdrop microdrop gui dmf device controller py line in on import dmf device app dmf device dmfdevice load svg filename file home christian documents dev udrop microdrop microdrop dmf device py line in load svg path group pathgroup load svg svg path on error parse warning file home christian documents dev udrop microdrop microdrop svg model path group py line in load svg boundary svg get boundary file home christian documents dev udrop microdrop microdrop svg model svgload svg parser py line in get boundary boundary path file home christian documents dev udrop microdrop microdrop svg model svgload svg parser py line in get bounding box x vals zip points migrated from json status closed changetime description trying to import the attached device svg file causes the following exception n n n indexerror list index out of range n file home christian documents dev udrop microdrop microdrop gui dmf device controller py line in on import dmf device n app dmf device dmfdevice load svg filename n file home christian documents dev udrop microdrop microdrop dmf device py line in load svg n path group pathgroup load svg svg path on error parse warning n file home christian documents dev udrop microdrop microdrop svg model path group py line in load svg n boundary svg get boundary n file home christian documents dev udrop microdrop microdrop svg model svgload svg parser py line in get boundary n boundary path n file home christian documents dev udrop microdrop microdrop svg model svgload svg parser py line in get bounding box n x vals zip points n reporter cfobel cc resolution fixed ts component microdrop summary device fails to import priority major keywords version time milestone microdrop owner cfobel type defect | 0 |
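The traceback in the row above ends in `x_vals = zip(*points)[0]`, which raises `IndexError` whenever the parsed SVG yields no points (for example, when no recognisable paths are found). A defensive version of that helper might look like the sketch below; this is an illustration, not the actual microdrop fix.

```python
# Hedged sketch of a defensive bounding-box helper; the real code path
# (svg_parser.get_bounding_box) fails with IndexError when `points` is empty.
def get_bounding_box(points):
    """Return (min_x, min_y, max_x, max_y), or None if the SVG produced no points."""
    if not points:
        return None  # the original `zip(*points)[0]` raises IndexError here
    xs, ys = zip(*points)
    return (min(xs), min(ys), max(xs), max(ys))

print(get_bounding_box([(0, 0), (10, 5), (3, 8)]))  # (0, 0, 10, 8)
print(get_bounding_box([]))                         # None instead of a crash
```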
11,682 | 3,214,553,123 | IssuesEvent | 2015-10-07 03:04:52 | medic/medic-webapp | https://api.github.com/repos/medic/medic-webapp | closed | Text updates to Concierge for DIY | 4 - Acceptance testing DIY UI/UX | Update the text in concierge to make account creation (username and password) clearer. There are updates to both steps 1 and 2. Store the email and phone number that the user enters for the future - no specific purpose currently, but they will eventually be useful for password reset. This issue does not include password reset.
See pages 5 and 6 of this mockup: https://beta.moqups.com/joshnesbit/5YohdufDGG/view/page/a1fa74951 | 1.0 | Text updates to Concierge for DIY - Update the text in concierge to make account creation (username and password) clearer. There are updates to both steps 1 and 2. Store the email and phone number that the user enters for the future - no specific purpose currently, but they will eventually be useful for password reset. This issue does not include password reset.
See pages 5 and 6 of this mockup: https://beta.moqups.com/joshnesbit/5YohdufDGG/view/page/a1fa74951 | non_priority | text updates to concierge for diy update the text in concierge to make account creation username and password clearer there are updates to both steps and store the email and phone number that the user enters for the future no specific purpose currently but they will eventually be useful for password reset this issue does not include password reset see pages and of this mockup | 0 |
17,561 | 3,621,411,322 | IssuesEvent | 2016-02-09 00:00:35 | Lokiedu/libertysoil-site | https://api.github.com/repos/Lokiedu/libertysoil-site | closed | (5) Issue 110: "Minor update" checkbox won't publish the new post in follower's news feeds | Ready for QA / Testing | As a user, I want to be able to publish information without flooding my follower's news feeds - to be able to publish more updates without causing bad user experience for everyone who follows me. | 1.0 | (5) Issue 110: "Minor update" checkbox won't publish the new post in follower's news feeds - As a user, I want to be able to publish information without flooding my follower's news feeds - to be able to publish more updates without causing bad user experience for everyone who follows me. | non_priority | issue minor update checkbox won t publish the new post in follower s news feeds as a user i want to be able to publish information without flooding my follower s news feeds to be able to publish more updates without causing bad user experience for everyone who follows me | 0 |
562,642 | 16,665,719,869 | IssuesEvent | 2021-06-07 03:07:30 | docker-mailserver/docker-mailserver | https://api.github.com/repos/docker-mailserver/docker-mailserver | opened | [FR] Enable "Archive" folder by default | meta/needs triage priority/low | # Feature Request
## Context
After first installation, on my mobile phone I am using Outlook for Android. And on my Linux desktop, I am using Thunderbird.
Thunderbird has an "archive" feature and automatically creates "Archives" folder in IMAP.
Outlook for Android also has an "archive" feature. But it does not use "Archives" folder created by Thunderbird. It uses a folder named "Archive" which is not even created on IMAP.
### Is your Feature Request related to a Problem?
Kind of.
### Describe the Solution you'd like
Add "Archives" folder by default.
### Are you going to implement it?
Yes, because I know the probability of someone else doing it is low and I can learn from it.
### What are you going to contribute?
I changed the config file and verified it works. And it's a simple change.
## Additional context
### Alternatives you've considered
Should this config change has a feature toggle in the mailserv.env so it won't impact existing users?
### Who will that Feature be useful to?
I believe my usage, Outlook for Android and Thunderbird on desktop, is not that unusual.
### What have you done already?
I created the working config file and verified it in my installation.
| 1.0 | [FR] Enable "Archive" folder by default - # Feature Request
## Context
After first installation, on my mobile phone I am using Outlook for Android. And on my Linux desktop, I am using Thunderbird.
Thunderbird has an "archive" feature and automatically creates "Archives" folder in IMAP.
Outlook for Android also has an "archive" feature. But it does not use "Archives" folder created by Thunderbird. It uses a folder named "Archive" which is not even created on IMAP.
### Is your Feature Request related to a Problem?
Kind of.
### Describe the Solution you'd like
Add "Archives" folder by default.
### Are you going to implement it?
Yes, because I know the probability of someone else doing it is low and I can learn from it.
### What are you going to contribute?
I changed the config file and verified it works. And it's a simple change.
## Additional context
### Alternatives you've considered
Should this config change has a feature toggle in the mailserv.env so it won't impact existing users?
### Who will that Feature be useful to?
I believe my usage, Outlook for Android and Thunderbird on desktop, is not that unusual.
### What have you done already?
I created the working config file and verified it in my installation.
| priority | enable archive folder by default feature request context after first installation on my mobile phone i am using outlook for android and on my linux desktop i am using thunderbird thunderbird has an archive feature and automatically creates archives folder in imap outlook for android also has an archive feature but it does not use archives folder created by thunderbird it uses a folder named archive which is not even created on imap is your feature request related to a problem kind of describe the solution you d like add archives folder by default are you going to implement it yes because i know the probability of someone else doing it is low and i can learn from it what are you going to contribute i changed the config file and verified it works and it s a simple change additional context alternatives you ve considered should this config change has a feature toggle in the mailserv env so it won t impact existing users who will that feature be useful to i believe my usage outlook for android and thunderbird on desktop is not that unusual what have you done already i created the working config file and verified it in my installation | 1 |
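The proper fix discussed in the row above is server-side (declaring an Archive mailbox, ideally with the RFC 6154 `\Archive` special-use flag, in the Dovecot configuration shipped by docker-mailserver). Purely as an illustration of a client-side stopgap, the sketch below creates and subscribes the missing folder over IMAP with Python's standard `imaplib`; the host and credentials are placeholders.

```python
# Client-side illustration only: creating the missing "Archive" mailbox over IMAP.
# The real change belongs in the server's Dovecot mailbox configuration.
import imaplib

def ensure_archive_folder(host: str, user: str, password: str) -> None:
    with imaplib.IMAP4_SSL(host) as imap:
        imap.login(user, password)
        status, _ = imap.create("Archive")   # returns an error status if it already exists
        imap.subscribe("Archive")            # make clients list the folder
        print("create:", status)

# ensure_archive_folder("mail.example.com", "user@example.com", "secret")
```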
232,241 | 7,656,635,968 | IssuesEvent | 2018-05-10 16:56:42 | YannCaron/Game4Kids | https://api.github.com/repos/YannCaron/Game4Kids | closed | Game.objectAt | hi priority new | Find the object at position
Check the
``` javascript
Game.Physics.Arcade.getObjectAtLocation()
```
[documentation](https://photonstorm.github.io/phaser-ce/Phaser.Physics.Arcade.html#getObjectsAtLocation) | 1.0 | Game.objectAt - Find the object at position
Check the
``` javascript
Game.Physics.Arcade.getObjectAtLocation()
```
[documentation](https://photonstorm.github.io/phaser-ce/Phaser.Physics.Arcade.html#getObjectsAtLocation) | priority | game objectat find the object at position check the javascript game physics arcade getobjectatlocation | 1 |
20,473 | 6,041,134,590 | IssuesEvent | 2017-06-10 21:05:45 | jtreml/fsxget | https://api.github.com/repos/jtreml/fsxget | opened | Can't open Google Earth from FSXGET menu | CodePlex Discussion | _Discussion thread [#440957](https://fsxget.codeplex.com/discussions/440957) migrated from [CodePlex](https://fsxget.codeplex.com/discussions):_
---
From: [jpfil](https://www.codeplex.com/site/users/view/jpfil)
On: Apr 19, 2013 at 9:04 PM
Edited: Apr 19, 2013 at 9:05 PM
Hi,
When I click Run Google Earth 4 in the FSXGET status bar menu, it opens a KML file with this code:
<?xml version="1.0" encoding="UTF-8"?>
<kml xmlns="http://earth.google.com/kml/2.1">
</kml>
My Google Earth version is 5.1.3535.3218. It works fine standalone.
It worked fine before, but not since I upgraded GE. I don't remember and can't find the older version.
Any idea why?
Thanks | 1.0 | Can't open Google Earth from FSXGET menu - _Discussion thread [#440957](https://fsxget.codeplex.com/discussions/440957) migrated from [CodePlex](https://fsxget.codeplex.com/discussions):_
---
From: [jpfil](https://www.codeplex.com/site/users/view/jpfil)
On: Apr 19, 2013 at 9:04 PM
Edited: Apr 19, 2013 at 9:05 PM
Hi,
When I click Run Google Earth 4 in the FSXGET status bar menu, it opens a KML file with this code:
<?xml version="1.0" encoding="UTF-8"?>
<kml xmlns="http://earth.google.com/kml/2.1">
</kml>
My Google Earth version is 5.1.3535.3218. It works fine standalone.
It worked fine before, but not since I upgraded GE. I don't remember and can't find the older version.
Any idea why?
Thanks | non_priority | can t open google earth from fsxget menu discussion thread migrated from from on apr at pm edited apr at pm hi when i click run google earth in the fsxget status bar menu it open a kml file with this code kml xmlns my google earth version is it works fine standalone it work fine before but since i have upgraded ge not i dont remember and dont find the older version any idea why thanks | 0 |
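The file quoted in the row above is a syntactically valid but empty KML 2.1 document, which is why Google Earth shows nothing after opening it. For comparison, the sketch below generates a minimally populated document with one placemark using the Python standard library; FSXGET's real output schema is not documented here, so everything beyond the core KML elements is an assumption.

```python
# Illustrative only: a non-empty KML 2.1 document with a single placemark.
import xml.etree.ElementTree as ET

KML_NS = "http://earth.google.com/kml/2.1"

def build_kml(name: str, lon: float, lat: float, alt_m: float) -> str:
    kml = ET.Element(f"{{{KML_NS}}}kml")
    placemark = ET.SubElement(kml, f"{{{KML_NS}}}Placemark")
    ET.SubElement(placemark, f"{{{KML_NS}}}name").text = name
    point = ET.SubElement(placemark, f"{{{KML_NS}}}Point")
    ET.SubElement(point, f"{{{KML_NS}}}coordinates").text = f"{lon},{lat},{alt_m}"
    return '<?xml version="1.0" encoding="UTF-8"?>\n' + ET.tostring(kml, encoding="unicode")

print(build_kml("User aircraft", 8.5417, 47.3769, 1200.0))
```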
64,361 | 3,210,920,367 | IssuesEvent | 2015-10-06 07:47:08 | YetiForceCompany/YetiForceCRM | https://api.github.com/repos/YetiForceCompany/YetiForceCRM | closed | [enhancement] Inline editing in listview - UX improvement | Label::Logic Priority::#1 Low Type::Discussion | Fast editing record form listview (ex. status) will be very big user experience improvement and speed up work with CRM.
Something like editing in Summary view, but on list with all other records.
Did you think about that ? | 1.0 | [enhancement] Inline editing in listview - UX improvement - Fast editing record form listview (ex. status) will be very big user experience improvement and speed up work with CRM.
Something like editing in Summary view, but on list with all other records.
Did you think about that ? | priority | inline editing in listview ux improvement fast editing record form listview ex status will be very big user experience improvement and speed up work with crm something like editing in summary view but on list with all other records did you think about that | 1 |
324,078 | 9,883,597,192 | IssuesEvent | 2019-06-24 19:50:01 | desktop/desktop | https://api.github.com/repos/desktop/desktop | closed | styling tweak for GitHub Enterprise Server header | bug priority-3 | ## Description
This is a regression introduced by #7729 which we should polish before shipping:
<img width="483" src="https://user-images.githubusercontent.com/359239/59784665-5ed24180-9299-11e9-9e65-f8d39ee42d3b.png">
And with the tab selected, the text becomes harder to read:
<img width="183" src="https://user-images.githubusercontent.com/359239/59784767-a2c54680-9299-11e9-8e44-b4f13341cebd.png">
## Version
* GitHub Desktop: 7df871b2a9325208567edbad2f27edd585085c3f
* Operating system: macOS
## Steps to Reproduce
1. Open the Clone Repository dialog
### Expected Behavior
Text is visible and readable for user
### Actual Behavior
Text is obscured when tab is selected, and because of the length the label spans two lines
| 1.0 | styling tweak for GitHub Enterprise Server header - ## Description
This is a regression introduced by #7729 which we should polish before shipping:
<img width="483" src="https://user-images.githubusercontent.com/359239/59784665-5ed24180-9299-11e9-9e65-f8d39ee42d3b.png">
And with the tab selected, the text becomes harder to read:
<img width="183" src="https://user-images.githubusercontent.com/359239/59784767-a2c54680-9299-11e9-8e44-b4f13341cebd.png">
## Version
* GitHub Desktop: 7df871b2a9325208567edbad2f27edd585085c3f
* Operating system: macOS
## Steps to Reproduce
1. Open the Clone Repository dialog
### Expected Behavior
Text is visible and readable for user
### Actual Behavior
Text is obscured when tab is selected, and because of the length the label spans two lines
| priority | styling tweak for github enterprise server header description this is a regression introduced by which we should polish before shipping img width src and with the tab selected the text becomes harder to read img width src version github desktop operating system macos steps to reproduce open the clone repository dialog expected behavior text is visible and readable for user actual behavior text is obscured when tab is selected and because of the length the label spans two lines | 1 |
4,619 | 3,057,867,584 | IssuesEvent | 2015-08-14 01:25:13 | winjs/winjs | https://api.github.com/repos/winjs/winjs | closed | AppBar and ToolBar: Overflow button is not present if there are no commands in the overflowarea. | ..pri: 1 .kind: codebug feature: appbar feature: toolbar | There is a bug in the AppBar/ToolBar where the overflowbutton hides itself if the overflowarea is empty. This code is based off the old toolbar control that only ever had 1 display mode for the action area that allowed you to see all commands and their labels.
Unless the AppBar/ToolBar has both closedDisplayMode: 'full', and no commands in the overflowarea, the overflowbutton should be visible and clickable.
This is most severe with closedDisplayMode: "minimal" since users can see the AppBar but are unable to see or interact with their primary commands. | 1.0 | AppBar and ToolBar: Overflow button is not present if there are no commands in the overflowarea. - There is a bug in the AppBar/ToolBar where the overflowbutton hides itself if the overflowarea is empty. This code is based off the old toolbar control that only ever had 1 display mode for the action area that allowed you to see all commands and their labels.
Unless the AppBar/ToolBar has both closedDisplayMode: 'full', and no commands in the overflowarea, the overflowbutton should be visible and clickable.
This is most severe with closedDisplayMode: "minimal" since users can see the AppBar but are unable to see or interact with their primary commands. | non_priority | appbar and toolbar overflow button is not present if there are no commands in the overflowarea there is a bug in the appbar toolbar where the overflowbutton hides itself if the overflowarea is empty this code is based off the old toolbar control that only ever had display mode for the action area that allowed you to see all commands and their labels unless the appbar toolbar has both closeddisplaymode full and no commands in the overflowarea the overflowbutton should be visible and clickable this is most severe with closeddisplaymode minimal since users can see the appbar but are unable to see or interact with their primary commands | 0 |
1,120 | 2,575,545,188 | IssuesEvent | 2015-02-11 23:56:30 | GoogleCloudPlatform/kubernetes | https://api.github.com/repos/GoogleCloudPlatform/kubernetes | closed | kubectl help should display command-specific flags separate from global flags | area/documentation area/usability component/CLI priority/P2 status/help-wanted team/UX | It's visually daunting to parse the available flags in kubectl usage strings. Example:
```
Usage:
kubectl rollingupdate <old-controller-name> -f <new-controller.json> [flags]
Available Flags:
--alsologtostderr=false: log to standard error as well as files
--api-version="": The API version to use when talking to the server
-a, --auth-path="": Path to the auth info file. If missing, prompt the user. Only used if using https.
--certificate-authority="": Path to a cert. file for the certificate authority.
--client-certificate="": Path to a client key file for TLS.
--client-key="": Path to a client key file for TLS.
--cluster="": The name of the kubeconfig cluster to use
--context="": The name of the kubeconfig context to use
-f, --filename="": Filename or URL to file to use to create the new controller
-h, --help=false: help for rollingupdate
--insecure-skip-tls-verify=false: If true, the server's certificate will not be checked for validity. This will make your HTTPS connections insecure.
--kubeconfig="": Path to the kubeconfig file to use for CLI requests.
--log_backtrace_at=:0: when logging hits line file:N, emit a stack trace
--log_dir=: If non-empty, write log files in this directory
--log_flush_frequency=5s: Maximum number of seconds between log flushes
--logtostderr=true: log to standard error instead of files
--match-server-version=false: Require server version to match client version
--namespace="": If present, the namespace scope for this CLI request.
--poll-interval="3s": Time delay between polling controller status after update. Valid time units are "ns", "us" (or "µs"), "ms", "s", "m", "h".
-s, --server="": The address of the Kubernetes API server
--stderrthreshold=2: logs at or above this threshold go to stderr
--timeout="5m0s": Max time to wait for a controller to update before giving up. Valid time units are "ns", "us" (or "µs"), "ms", "s", "m", "h".
--token="": Bearer token for authentication to the API server.
--update-period="1m0s": Time to wait between updating pods. Valid time units are "ns", "us" (or "µs"), "ms", "s", "m", "h".
--user="": The name of the kubeconfig user to use
--v=0: log level for V logs
--validate=false: If true, use a schema to validate the input before sending it
--vmodule=: comma-separated list of pattern=N settings for file-filtered logging
```
Which flags are actually relevant to my command? The usage string helps a bit, but if I want to know the defaults, I have to squint at the wall of text looking for the flag I care about amidst a host of common flags. We should replace Cobra's default usage text with one that looks like
```
Usage:
kubectl rollingupdate <old-controller-name> -f <new-controller.json> [flags]
Flags:
-f, --filename="": Filename or URL to file to use to create the new controller
--poll-interval="3s": Time delay between polling controller status after update. Valid time units are "ns", "us" (or "µs"), "ms", "s", "m", "h".
--timeout="5m0s": Max time to wait for a controller to update before giving up. Valid time units are "ns", "us" (or "µs"), "ms", "s", "m", "h".
--update-period="1m0s": Time to wait between updating pods. Valid time units are "ns", "us" (or "µs"), "ms", "s", "m", "h".
-h, --help=false: help for rollingupdate
General Flags:
--alsologtostderr=false: log to standard error as well as files
--api-version="": The API version to use when talking to the server
-a, --auth-path="": Path to the auth info file. If missing, prompt the user. Only used if using https.
--certificate-authority="": Path to a cert. file for the certificate authority.
--client-certificate="": Path to a client key file for TLS.
--client-key="": Path to a client key file for TLS.
--cluster="": The name of the kubeconfig cluster to use
--context="": The name of the kubeconfig context to use
--insecure-skip-tls-verify=false: If true, the server's certificate will not be checked for validity. This will make your HTTPS connections insecure.
--kubeconfig="": Path to the kubeconfig file to use for CLI requests.
--log_backtrace_at=:0: when logging hits line file:N, emit a stack trace
--log_dir=: If non-empty, write log files in this directory
--log_flush_frequency=5s: Maximum number of seconds between log flushes
--logtostderr=true: log to standard error instead of files
--match-server-version=false: Require server version to match client version
--namespace="": If present, the namespace scope for this CLI request.
-s, --server="": The address of the Kubernetes API server
--stderrthreshold=2: logs at or above this threshold go to stderr
--token="": Bearer token for authentication to the API server.
--user="": The name of the kubeconfig user to use
--v=0: log level for V logs
--validate=false: If true, use a schema to validate the input before sending it
--vmodule=: comma-separated list of pattern=N settings for file-filtered logging
```
Flags specific to the command at the top, inherited/general flags after. Bonus points to upstream it to https://github.com/spf13/cobra .
cc @mbforbes, @MikeJeffrey | 1.0 | kubectl help should display command-specific flags separate from global flags - It's visually daunting to parse the available flags in kubectl usage strings. Example:
```
Usage:
kubectl rollingupdate <old-controller-name> -f <new-controller.json> [flags]
Available Flags:
--alsologtostderr=false: log to standard error as well as files
--api-version="": The API version to use when talking to the server
-a, --auth-path="": Path to the auth info file. If missing, prompt the user. Only used if using https.
--certificate-authority="": Path to a cert. file for the certificate authority.
--client-certificate="": Path to a client key file for TLS.
--client-key="": Path to a client key file for TLS.
--cluster="": The name of the kubeconfig cluster to use
--context="": The name of the kubeconfig context to use
-f, --filename="": Filename or URL to file to use to create the new controller
-h, --help=false: help for rollingupdate
--insecure-skip-tls-verify=false: If true, the server's certificate will not be checked for validity. This will make your HTTPS connections insecure.
--kubeconfig="": Path to the kubeconfig file to use for CLI requests.
--log_backtrace_at=:0: when logging hits line file:N, emit a stack trace
--log_dir=: If non-empty, write log files in this directory
--log_flush_frequency=5s: Maximum number of seconds between log flushes
--logtostderr=true: log to standard error instead of files
--match-server-version=false: Require server version to match client version
--namespace="": If present, the namespace scope for this CLI request.
--poll-interval="3s": Time delay between polling controller status after update. Valid time units are "ns", "us" (or "µs"), "ms", "s", "m", "h".
-s, --server="": The address of the Kubernetes API server
--stderrthreshold=2: logs at or above this threshold go to stderr
--timeout="5m0s": Max time to wait for a controller to update before giving up. Valid time units are "ns", "us" (or "µs"), "ms", "s", "m", "h".
--token="": Bearer token for authentication to the API server.
--update-period="1m0s": Time to wait between updating pods. Valid time units are "ns", "us" (or "µs"), "ms", "s", "m", "h".
--user="": The name of the kubeconfig user to use
--v=0: log level for V logs
--validate=false: If true, use a schema to validate the input before sending it
--vmodule=: comma-separated list of pattern=N settings for file-filtered logging
```
Which flags are actually relevant to my command? The usage string helps a bit, but if I want to know the defaults, I have to squint at the wall of text looking for the flag I care about amidst a host of common flags. We should replace Cobra's default usage text with one that looks like
```
Usage:
kubectl rollingupdate <old-controller-name> -f <new-controller.json> [flags]
Flags:
-f, --filename="": Filename or URL to file to use to create the new controller
--poll-interval="3s": Time delay between polling controller status after update. Valid time units are "ns", "us" (or "µs"), "ms", "s", "m", "h".
--timeout="5m0s": Max time to wait for a controller to update before giving up. Valid time units are "ns", "us" (or "µs"), "ms", "s", "m", "h".
--update-period="1m0s": Time to wait between updating pods. Valid time units are "ns", "us" (or "µs"), "ms", "s", "m", "h".
-h, --help=false: help for rollingupdate
General Flags:
--alsologtostderr=false: log to standard error as well as files
--api-version="": The API version to use when talking to the server
-a, --auth-path="": Path to the auth info file. If missing, prompt the user. Only used if using https.
--certificate-authority="": Path to a cert. file for the certificate authority.
--client-certificate="": Path to a client key file for TLS.
--client-key="": Path to a client key file for TLS.
--cluster="": The name of the kubeconfig cluster to use
--context="": The name of the kubeconfig context to use
--insecure-skip-tls-verify=false: If true, the server's certificate will not be checked for validity. This will make your HTTPS connections insecure.
--kubeconfig="": Path to the kubeconfig file to use for CLI requests.
--log_backtrace_at=:0: when logging hits line file:N, emit a stack trace
--log_dir=: If non-empty, write log files in this directory
--log_flush_frequency=5s: Maximum number of seconds between log flushes
--logtostderr=true: log to standard error instead of files
--match-server-version=false: Require server version to match client version
--namespace="": If present, the namespace scope for this CLI request.
-s, --server="": The address of the Kubernetes API server
--stderrthreshold=2: logs at or above this threshold go to stderr
--token="": Bearer token for authentication to the API server.
--user="": The name of the kubeconfig user to use
--v=0: log level for V logs
--validate=false: If true, use a schema to validate the input before sending it
--vmodule=: comma-separated list of pattern=N settings for file-filtered logging
```
Flags specific to the command at the top, inherited/general flags after. Bonus points to upstream it to https://github.com/spf13/cobra .
cc @mbforbes, @MikeJeffrey | non_priority | kubectl help should display command specific flags separate from global flags it s visually daunting to parse the available flags in kubectl usage strings example usage kubectl rollingupdate f available flags alsologtostderr false log to standard error as well as files api version the api version to use when talking to the server a auth path path to the auth info file if missing prompt the user only used if using https certificate authority path to a cert file for the certificate authority client certificate path to a client key file for tls client key path to a client key file for tls cluster the name of the kubeconfig cluster to use context the name of the kubeconfig context to use f filename filename or url to file to use to create the new controller h help false help for rollingupdate insecure skip tls verify false if true the server s certificate will not be checked for validity this will make your https connections insecure kubeconfig path to the kubeconfig file to use for cli requests log backtrace at when logging hits line file n emit a stack trace log dir if non empty write log files in this directory log flush frequency maximum number of seconds between log flushes logtostderr true log to standard error instead of files match server version false require server version to match client version namespace if present the namespace scope for this cli request poll interval time delay between polling controller status after update valid time units are ns us or µs ms s m h s server the address of the kubernetes api server stderrthreshold logs at or above this threshold go to stderr timeout max time to wait for a controller to update before giving up valid time units are ns us or µs ms s m h token bearer token for authentication to the api server update period time to wait between updating pods valid time units are ns us or µs ms s m h user the name of the kubeconfig user to use v log level for v logs validate false if true use a schema to validate the input before sending it vmodule comma separated list of pattern n settings for file filtered logging which flags are actually relevant to my command the usage string helps a bit but if i want to know the defaults i have to squint at the wall of text looking for the flag i care about amidst a host of common flags we should replace cobra s default usage text with one that looks like usage kubectl rollingupdate f flags f filename filename or url to file to use to create the new controller poll interval time delay between polling controller status after update valid time units are ns us or µs ms s m h timeout max time to wait for a controller to update before giving up valid time units are ns us or µs ms s m h update period time to wait between updating pods valid time units are ns us or µs ms s m h h help false help for rollingupdate general flags alsologtostderr false log to standard error as well as files api version the api version to use when talking to the server a auth path path to the auth info file if missing prompt the user only used if using https certificate authority path to a cert file for the certificate authority client certificate path to a client key file for tls client key path to a client key file for tls cluster the name of the kubeconfig cluster to use context the name of the kubeconfig context to use insecure skip tls verify false if true the server s certificate will not be checked for validity this will make your https connections insecure kubeconfig path to the kubeconfig 
file to use for cli requests log backtrace at when logging hits line file n emit a stack trace log dir if non empty write log files in this directory log flush frequency maximum number of seconds between log flushes logtostderr true log to standard error instead of files match server version false require server version to match client version namespace if present the namespace scope for this cli request s server the address of the kubernetes api server stderrthreshold logs at or above this threshold go to stderr token bearer token for authentication to the api server user the name of the kubeconfig user to use v log level for v logs validate false if true use a schema to validate the input before sending it vmodule comma separated list of pattern n settings for file filtered logging flags specific to the command at the top inherited general flags after bonus points to upstream it to cc mbforbes mikejeffrey | 0 |
260,994 | 27,785,070,101 | IssuesEvent | 2023-03-17 02:00:58 | panasalap/linux-4.19.72_test | https://api.github.com/repos/panasalap/linux-4.19.72_test | opened | CVE-2023-1249 (Medium) detected in multiple libraries | Mend: dependency security vulnerability | ## CVE-2023-1249 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>linux-yoctov5.4.51</b>, <b>linux-yoctov5.4.51</b>, <b>linux-yoctov5.4.51</b></p></summary>
<p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
A use-after-free flaw was found in the Linux kernel’s core dump subsystem. This flaw allows a local user to crash the system.
<p>Publish Date: 2023-03-07
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2023-1249>CVE-2023-1249</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.3</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: High
- Privileges Required: Low
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: Low
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://www.linuxkernelcves.com/cves/CVE-2023-1249">https://www.linuxkernelcves.com/cves/CVE-2023-1249</a></p>
<p>Release Date: 2023-03-07</p>
<p>Fix Resolution: v5.10.110,v5.15.33,v5.16.19,v5.17.2,v5.18-rc1</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | True | CVE-2023-1249 (Medium) detected in multiple libraries - ## CVE-2023-1249 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>linux-yoctov5.4.51</b>, <b>linux-yoctov5.4.51</b>, <b>linux-yoctov5.4.51</b></p></summary>
<p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
A use-after-free flaw was found in the Linux kernel’s core dump subsystem. This flaw allows a local user to crash the system.
<p>Publish Date: 2023-03-07
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2023-1249>CVE-2023-1249</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.3</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: High
- Privileges Required: Low
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: Low
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://www.linuxkernelcves.com/cves/CVE-2023-1249">https://www.linuxkernelcves.com/cves/CVE-2023-1249</a></p>
<p>Release Date: 2023-03-07</p>
<p>Fix Resolution: v5.10.110,v5.15.33,v5.16.19,v5.17.2,v5.18-rc1</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | non_priority | cve medium detected in multiple libraries cve medium severity vulnerability vulnerable libraries linux linux linux vulnerability details a use after free flaw was found in the linux kernel’s core dump subsystem this flaw allows a local user to crash the system publish date url a href cvss score details base score metrics exploitability metrics attack vector local attack complexity high privileges required low user interaction none scope unchanged impact metrics confidentiality impact none integrity impact low availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution step up your open source security game with mend | 0 |
753,636 | 26,356,627,496 | IssuesEvent | 2023-01-11 10:11:35 | stdlib-js/google-summer-of-code | https://api.github.com/repos/stdlib-js/google-summer-of-code | opened | [Idea]: develop a Google Sheets extension which exposes stdlib functionality | idea priority: high tech: javascript tech: nodejs difficulty: 2 | ### Idea
The goal of this idea is to allow users to call stdlib APIs from within Google Sheets. This will allow users to perform linear algebra and various machine learning operations directly on spreadsheet data and all within the browser.
### Expected Outcomes
Google Sheets users will be able to install an add-on which exposes stdlib functionality, run statistical tests, evaluate mathematical functions, and perform linear algebra operations using stdlib.
### Involved Software
No other software is necessary.
### Prerequisite Knowledge
JavaScript, Node.js.
### Difficulty
Beginner/Intermediate.
### Project Length
175/350 hours. Can be scoped accordingly. A skilled contributor can work strategy for performant fused operations. | 1.0 | [Idea]: develop a Google Sheets extension which exposes stdlib functionality - ### Idea
The goal of this idea is to allow users to call stdlib APIs from within Google Sheets. This will allow users to perform linear algebra and various machine learning operations directly on spreadsheet data and all within the browser.
### Expected Outcomes
Google Sheets users will be able to install an add-on which exposes stdlib functionality, run statistical tests, evaluate mathematical functions, and perform linear algebra operations using stdlib.
### Involved Software
No other software is necessary.
### Prerequisite Knowledge
JavaScript, Node.js.
### Difficulty
Beginner/Intermediate.
### Project Length
175/350 hours. Can be scoped accordingly. A skilled contributor can work strategy for performant fused operations. | priority | develop a google sheets extension which exposes stdlib functionality idea the goal of this idea is to allow users to call stdlib apis from within google sheets this will allow users to perform linear algebra and various machine learning operations directly on spreadsheet data and all within the browser expected outcomes google sheets users will be able to install an add on which exposes stdlib functionality run statistical tests evaluate mathematical functions and perform linear algebra operations using stdlib involved software no other software is necessary prerequisite knowledge javascript node js difficulty beginner intermediate project length hours can be scoped accordingly a skilled contributor can work strategy for performant fused operations | 1 |
4,204 | 6,444,184,236 | IssuesEvent | 2017-08-12 07:29:50 | Microsoft/vscode-cpptools | https://api.github.com/repos/Microsoft/vscode-cpptools | closed | Auto-Completion has some error | Language Service more info needed question | Auto-Completion has some error, can't suggest right word
<img width="506" alt="qq20170809-001127 2x" src="https://user-images.githubusercontent.com/7554917/29082470-34885ce8-7c98-11e7-8d02-065ed7f87f21.png">
 | 1.0 | Auto-Completion has some error - Auto-Completion has some error, can't suggest right word
<img width="506" alt="qq20170809-001127 2x" src="https://user-images.githubusercontent.com/7554917/29082470-34885ce8-7c98-11e7-8d02-065ed7f87f21.png">
| non_priority | auto completion has some error auto completion has some error can t suggest right word img width alt src | 0 |
144,314 | 5,538,465,955 | IssuesEvent | 2017-03-22 01:48:34 | Esri/data-assistant | https://api.github.com/repos/Esri/data-assistant | closed | Exception caught: unable to cast DIAMETER to Double : 'None' | bug fixed Installed priority: high | Is this a valid exception? My field map for this has other cast to 0, so I would assume Null in the source would be 0 in the result
@SteveGrise | 1.0 | Exception caught: unable to cast DIAMETER to Double : 'None' - Is this a valid exception? My field map for this has other cast to 0, so I would assume Null in the source would be 0 in the result
@SteveGrise | priority | exception caught unable to cast diameter to double none is this a valid exception my field map for this has other cast to so i would assume null in the source would be in the result stevegrise | 1 |