| Column | Dtype | Range / Classes |
|---|---|---|
| Unnamed: 0 | int64 | 0 – 832k |
| id | float64 | 2.49B – 32.1B |
| type | stringclasses | 1 value |
| created_at | stringlengths | 19 – 19 |
| repo | stringlengths | 4 – 112 |
| repo_url | stringlengths | 33 – 141 |
| action | stringclasses | 3 values |
| title | stringlengths | 1 – 1.02k |
| labels | stringlengths | 4 – 1.54k |
| body | stringlengths | 1 – 262k |
| index | stringclasses | 17 values |
| text_combine | stringlengths | 95 – 262k |
| label | stringclasses | 2 values |
| text | stringlengths | 96 – 252k |
| binary_label | int64 | 0 – 1 |
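The schema above describes a table of GitHub `IssuesEvent` records with a two-class `label` column. As a minimal sketch (plain Python, with hypothetical values taken from the two rows shown below), filtering such records by label could look like:

```python
# Two records mirroring the rows in this dump; only a subset of the
# schema's columns is reproduced here for illustration.
rows = [
    {"id": 28_781_283_236, "type": "IssuesEvent", "action": "opened",
     "repo": "MikeGratsas/currency-converter", "label": "non_test"},
    {"id": 7_103_414_408, "type": "IssuesEvent", "action": "closed",
     "repo": "brave/browser-laptop", "label": "test"},
]

# Keep only the rows whose label class is "test".
test_issues = [r for r in rows if r["label"] == "test"]
print(len(test_issues), test_issues[0]["repo"])
```

In practice a table this size would be loaded with a dataframe or dataset library rather than built inline; the filter expression is the same either way.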
**Row 268,766**

- id: 28,781,283,236
- type: IssuesEvent
- created_at: 2023-05-02 01:04:05
- repo: MikeGratsas/currency-converter
- repo_url: https://api.github.com/repos/MikeGratsas/currency-converter
- action: opened
- title: CVE-2016-1000027 (High) detected in spring-web-5.3.24.jar
- labels: Mend: dependency security vulnerability
- body:
## CVE-2016-1000027 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>spring-web-5.3.24.jar</b></summary>
<p>Spring Web</p>
<p>Path to dependency file: /pom.xml</p>
<p>Path to vulnerable library: /home/wss-scanner/.m2/repository/org/springframework/spring-web/5.3.24/spring-web-5.3.24.jar</p>
Dependency Hierarchy:

- spring-boot-starter-web-2.7.7.jar (Root Library)
  - :x: **spring-web-5.3.24.jar** (Vulnerable Library)

<p>Found in base branch: <b>main</b></p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png?' width=19 height=20> Vulnerability Details</summary>
<p>
Pivotal Spring Framework through 5.3.16 suffers from a potential remote code execution (RCE) issue if used for Java deserialization of untrusted data. Depending on how the library is implemented within a product, this issue may or may not occur, and authentication may be required. NOTE: the vendor's position is that untrusted data is not an intended use case. The product's behavior will not be changed because some users rely on deserialization of trusted data.
Mend Note: After conducting further research, Mend has determined that all versions of spring-web up to version 6.0.0 are vulnerable to CVE-2016-1000027.
<p>Publish Date: 2020-01-02</p>
<p>URL: <a href="https://www.mend.io/vulnerability-database/CVE-2016-1000027">CVE-2016-1000027</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>9.8</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
  - Attack Vector: Network
  - Attack Complexity: Low
  - Privileges Required: None
  - User Interaction: None
  - Scope: Unchanged
- Impact Metrics:
  - Confidentiality Impact: High
  - Integrity Impact: High
  - Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</details>
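The metrics listed above (vector `AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:H`) determine the 9.8 base score. A short sketch of the CVSS 3.0 base-score arithmetic, using the standard coefficient values for those metric choices:

```python
import math

# CVSS 3.0 coefficients for the metrics in this report:
AV, AC, PR, UI = 0.85, 0.77, 0.85, 0.85   # Network / Low / None / None
C = I = A = 0.56                           # High impact on C, I, and A

iss = 1 - (1 - C) * (1 - I) * (1 - A)      # Impact Sub-Score
impact = 6.42 * iss                        # Scope: Unchanged
exploitability = 8.22 * AV * AC * PR * UI

def roundup(x):
    """Round up to one decimal place, as the CVSS spec requires."""
    return math.ceil(x * 10) / 10

base_score = roundup(min(impact + exploitability, 10))
print(base_score)  # 9.8, matching the score reported above
```

This is an illustration of the formula, not a full calculator (temporal and environmental metrics are omitted).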
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/advisories/GHSA-4wrc-f8pq-fpqp">https://github.com/advisories/GHSA-4wrc-f8pq-fpqp</a></p>
<p>Release Date: 2020-01-02</p>
<p>Fix Resolution (org.springframework:spring-web): 6.0.0</p>
<p>Direct dependency fix Resolution (org.springframework.boot:spring-boot-starter-web): 3.0.0</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
- index: True
- text_combine: *(title + body; a verbatim repeat of the title and body above)*
- label: non_test
- text: *(lowercased, punctuation-stripped copy of text_combine)*
- binary_label: 0
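The `label` column has two classes and `binary_label` is an int64 in {0, 1}. A plausible mapping, which is an assumption rather than something the dump states, is that `non_test` maps to 0 (consistent with the record above) and `test` to 1:

```python
def to_binary_label(label: str) -> int:
    """Map the two-class string label to its integer form.
    The non_test->0 / test->1 assignment is an assumption inferred
    from the records in this dump, not documented metadata."""
    mapping = {"non_test": 0, "test": 1}
    return mapping[label]

print(to_binary_label("non_test"))  # 0
print(to_binary_label("test"))      # 1
```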
|
**Row 68,543**

- id: 7,103,414,408
- type: IssuesEvent
- created_at: 2018-01-16 04:53:05
- repo: brave/browser-laptop
- repo_url: https://api.github.com/repos/brave/browser-laptop
- action: closed
- title: Manual test run for Chromium upgrade on OS X for 0.19.x Hotfix 13
- labels: OS/macOS, release-notes/exclude, tests
- body:
## Per release specialty tests
- [x] Update to Muon 4.7.0. ([#12641](https://github.com/brave/browser-laptop/issues/12641))
- [x] Update to Chromium 64. ([#12640](https://github.com/brave/browser-laptop/issues/12640))
- [x] "Folder Upload" feature in Google Drive Broken. ([#8601](https://github.com/brave/browser-laptop/issues/8601))
## Installer
- [x] Check that installer is close to the size of last release.
- [x] Check signature: If macOS, run `spctl --assess --verbose /Applications/Brave.app/` and make sure it returns `accepted`. If Windows, right-click on the installer exe, go to Properties, open the Digital Signatures tab, and double-click on the signature. Make sure it says "The digital signature is OK" in the popup window.
- [x] Check Brave, muon, and libchromiumcontent version in `about:brave` and make sure it is EXACTLY as expected.
## Printing
- [x] Test that you can print a PDF
## Widevine/Netflix test
- [ ] Test that you can log into Netflix and start a show.
## Performance test
_Each start should take less than 7 seconds_
- [x] Enable only sync (new sync group).
- [ ] Enable only sync with a large sync group (many entries).
- [x] Enable only payments.
- [x] Only import a large set of bookmarks.
- [x] Combine sync, payments, and a large set of bookmarks.
# Ledger
- [x] Verify wallet is auto created after enabling payments
- [x] Verify monthly budget and account balance shows correct BAT and USD value
- [x] Click on `add funds` and click on each currency and verify it shows wallet address and QR Code
- [x] Verify that Brave BAT wallet address can be copied
- [ ] Verify adding funds via any of the currencies flows into BAT Wallet after specified amount of time
- [ ] Verify adding funds to an existing wallet with amount, adjusts the BAT value appropriately
- [x] Change min visit and min time in advanced settings and verify that the publisher list gets updated based on the new setting
- [x] Visit nytimes.com for a few seconds and make sure it shows up in the Payments table.
- [x] Check that disabling payments and enabling them again does not lose state.
- [ ] Upgrade from older version
- [ ] Verify the wallet overlay is shown when wallet transition is happening upon upgrade
- [ ] Verify transition overlay is shown post upgrade even if the payment is disabled before upgrade
- [ ] Verify publishers list is not lost after upgrade when payment is disabled in the older version
### Ledger Media
- [x] Visit any YouTube video in a normal/session tab and ensure the video publisher name is listed in the ledger table
- [x] Visit any YouTube video in a private tab and ensure the video publisher name is not listed in the ledger table
- [x] Visit any live YouTube video and ensure the time spent is shown under the ledger table
- [x] Visit any embedded YouTube video and ensure the video publisher name is listed in the ledger table
- [x] Ensure total time spent is correctly calculated for each publisher video
- [x] Ensure total time spent is correctly calculated when switching to a YouTube video from an embedded video
- [x] Ensure YouTube publishers are not listed when `Allow contributions to video` is disabled in advanced settings
- [x] Ensure existing YouTube publishers are not lost when `Allow contributions to video` is disabled in advanced settings
- [x] Ensure YouTube publishers are listed but not included when `auto-include` is disabled
- [x] Update Advanced settings to a different time/visit value and ensure YouTube videos are added to the ledger table once the criteria are met
## Sync
- [x] Verify you are able to sync two devices using the secret code
- [x] Visit a site on device 1 and change shield setting, ensure that the saved site preference is synced to device 2
- [x] Enable Browsing history sync on device 1, ensure the history is shown on device 2
- [x] Import/Add bookmarks on device 1, ensure it is synced on device 2
- [x] Ensure imported bookmark folder structure is maintained on device 2
- [x] Ensure bookmark favicons are shown after sync
## Data
- [x] Make sure that data from the last version appears in the new version OK.
- [x] With data from the last version, test that
- [x] cookies are preserved
- [x] pinned tabs can be opened
- [x] pinned tabs can be unpinned
- [x] unpinned tabs can be re-pinned
- [x] opened tabs can be reloaded
- [x] bookmarks on the bookmark toolbar can be opened
- [x] bookmarks in the bookmark folder toolbar can be opened
## Bookmarks
- [x] Test that creating a bookmark on the bookmarks toolbar works
- [x] Test that creating a bookmark folder on the bookmarks toolbar works
- [x] Test that moving a bookmark into a folder by drag and drop on the bookmarks folder works
- [x] Test that clicking a bookmark in the toolbar loads the bookmark.
- [x] Test that clicking a bookmark in a bookmark toolbar folder loads the bookmark.
## Context menus
- [x] Make sure context menu items in the URL bar work
- [x] Make sure context menu items on content work with no selected text.
- [x] Make sure context menu items on content work with selected text.
- [x] Make sure context menu items on content work inside an editable control on `about:styles` (input, textarea, or contenteditable).
## Find on page
- [x] Ensure search box is shown with shortcut
- [x] Test successful find
- [x] Test forward and backward find navigation
- [x] Test failed find shows 0 results
- [x] Test match case find
## Keyboard Shortcuts
- [x] Open a new window: `Command` + `n` (macOS) || `Ctrl` + `n` (Win/Linux)
- [x] Open a new tab: `Command` + `t` (macOS) || `Ctrl` + `t` (Win/Linux)
- [x] Open a new private tab: `Command` + `Shift` + `p` (macOS) || `Ctrl` + `Shift` + `p` (Win/Linux)
- [x] Reopen the latest closed tab: `Command` + `Shift` + `t` (macOS) || `Ctrl` + `Shift` + `t` (Win/Linux)
- [x] Jump to the next tab: `Command` + `Option` + `->` (macOS) || `Ctrl` + `PgDn` (Win/Linux)
- [x] Jump to the previous tab: `Command` + `Option` + `<-` (macOS) || `Ctrl` + `PgUp` (Win/Linux)
- [x] Jump to the next tab: `Ctrl` + `Tab` (macOS/Win/Linux)
- [x] Jump to the previous tab: `Ctrl` + `Shift` + `Tab` (macOS/Win/Linux)
- [x] Open Brave preferences: `Command` + `,` (macOS) || `Ctrl` + `,` (Win/Linux)
- [x] Jump into the URL bar: `Command` + `l` (macOS) || `Ctrl` + `l` (Win/Linux)
- [x] Reload page: `Command` + `r` (macOS) || `Ctrl` + `r` (Win/Linux)
- [x] Select All: `Command` + `a` (macOS) || `Ctrl` + `a` (Win/Linux)
- [x] Copying text: `Command` + `c` (macOS) || `Ctrl` + `c` (Win/Linux)
- [x] Pasting text: `Command` + `v` (macOS) || `Ctrl` + `v` (Win/Linux)
- [x] Minimize Brave: `Command` + `m` (macOS) || `Ctrl` + `m` (Win/Linux)
- [x] Quit Brave: `Command` + `q` (macOS) || `Ctrl` + `q` (Win/Linux)
## Geolocation
- [ ] Check that https://developer.mozilla.org/en-US/docs/Web/API/Geolocation/Using_geolocation works
## Site hacks
- [x] Test https://www.twitch.tv/adobe sub-page loads a video and you can play it
## Downloads
- [x] Test that downloading a file works and that all actions on the download item work.
## Fullscreen
- [x] Test that entering full-screen mode via View -> Toggle Full Screen works, and exit the same way (not Esc).
- [x] Test that entering HTML5 full screen works. And Esc to go back. (youtube.com)
## Tabs, Pinning and Tear off tabs
- [x] Test that tabs are pinnable
- [x] Test that tabs are unpinnable
- [x] Test that tabs are draggable to same tabset
- [x] Test that tabs are draggable to alternate tabset
- [x] Test that tabs can be detached to create a new window
- [x] Test that you are able to reattach a tab to an existing window
- [x] Test that you can quickly switch tabs
- [x] Test that tabs can be cloned
## Zoom
- [x] Test zoom in / out shortcut works
- [x] Test hamburger menu zooms.
- [x] Test zoom saved when you close the browser and restore on a single site.
- [x] Test zoom saved when you navigate within a single origin site.
- [x] Test that navigating to a different origin resets the zoom
## Bravery settings
- [x] Check that HTTPS Everywhere works by loading https://https-everywhere.badssl.com/
- [x] Turning HTTPS Everywhere off and shields off both disable the redirect to https://https-everywhere.badssl.com/
- [ ] Check that ad replacement works on http://slashdot.org
- [x] Check that toggling to blocking and allow ads works as expected.
- [x] Test that clicking through a cert error in https://badssl.com/ works.
- [x] Test that Safe Browsing works (https://www.raisegame.com/)
- [x] Turning Safe Browsing off and shields off both disable safe browsing for https://www.raisegame.com/.
- [x] Visit https://brianbondy.com/ and then turn on script blocking, nothing should load. Allow it from the script blocking UI in the URL bar and it should work.
- [x] Test that about:preferences default Bravery settings take effect on pages with no site settings.
- [x] Test that turning on fingerprinting protection in about:preferences shows 3 fingerprints blocked at https://jsfiddle.net/bkf50r8v/13/. Test that turning it off in the Bravery menu shows 0 fingerprints blocked.
- [x] Test that 3rd party storage results are blank at https://jsfiddle.net/7ke9r14a/9/ when 3rd party cookies are blocked and not blank when 3rd party cookies are unblocked.
- [x] Test that audio fingerprint is blocked at https://audiofingerprint.openwpm.com/ when fingerprinting protection is on.
- [x] Test that browser is not detected on https://extensions.inrialpes.fr/brave/
## Content tests
- [x] Go to https://brianbondy.com/ and click on the twitter icon on the top right. Test that context menus work in the new twitter tab.
- [x] Load twitter and click on a tweet so the popup div shows. Click to dismiss and repeat with another div. Make sure it shows.
- [x] Go to https://www.bennish.net/web-notifications.html and test that clicking on 'Show' pops up a notification asking for permission. Make sure that clicking 'Deny' leads to no notifications being shown.
- [x] Go to https://trac.torproject.org/projects/tor/login and make sure that the password can be saved. Make sure the saved password shows up in `about:passwords`. Then reload https://trac.torproject.org/projects/tor/login and make sure the password is autofilled.
- [x] Open a github issue and type some misspellings, make sure they are underlined.
- [x] Make sure that right clicking on a word with suggestions gives a suggestion and that clicking on the suggestion replaces the text.
- [x] Make sure that Command + Click (Control + Click on Windows, Control + Click on Ubuntu) on a link opens a new tab but does NOT switch to it. Click on it and make sure it is already loaded.
- [x] Open an email on http://mail.google.com/ or inbox.google.com and click on a link. Make sure it works.
- [x] Test that PDF is loaded over https at https://basicattentiontoken.org/BasicAttentionTokenWhitePaper-4.pdf
- [x] Test that PDF is loaded over http at http://www.pdf995.com/samples/pdf.pdf
- [x] Test that https://mixed-script.badssl.com/ shows up as grey not red (no mixed content scripts are run).
- [x] Test that WebSockets are working by ensuring http://slither.io/ runs once "Play" has been clicked.
## Flash tests
- [x] Test that flash placeholder appears on http://www.homestarrunner.com
- [x] Test with flash enabled in preferences, auto play option is shown when visiting http://www.homestarrunner.com
## Autofill tests
- [x] Test that autofill works on http://www.roboform.com/filling-test-all-fields
## Session storage
Do not forget to make a backup of your entire `~/Library/Application\ Support/Brave` folder.
- [x] Temporarily move away your `~/Library/Application\ Support/Brave/session-store-1` and test that clean session storage works. (`%appdata%\Brave` in Windows, `./config/brave` in Ubuntu)
- [x] Test that windows and tabs restore when closed, including active tab.
- [x] Move away your entire `~/Library/Application\ Support/Brave` folder (`%appdata%\Brave` in Windows, `./config/brave` in Ubuntu)
## Cookie and Cache
- [x] Make a backup of your profile, turn on all clearing in preferences and shut down. Make sure when you bring the browser back up everything is gone that is specified.
- [x] Go to http://samy.pl/evercookie/ and set an evercookie. Check that going to prefs, clearing site data and cache, and going back to the Evercookie site does not remember the old evercookie value.
## Update tests
- [x] Test that updating using `BRAVE_UPDATE_VERSION=0.8.3` env variable works correctly.
- [ ] Test that using `BRAVE_ENABLE_PREVIEW_UPDATES=TRUE` env variable works and prompts for preview build updates.
|
1.0
|
Manual test run for Chromium upgrade on OS X for 0.19.x Hotfix 13 - ## Per release specialty tests
- [x] Update to Muon 4.7.0. ([#12641](https://github.com/brave/browser-laptop/issues/12641))
- [x] Update to Chromium 64. ([#12640](https://github.com/brave/browser-laptop/issues/12640))
- [x] "Folder Upload" feature in Google Drive Broken. ([#8601](https://github.com/brave/browser-laptop/issues/8601))
## Installer
- [x] Check that installer is close to the size of last release.
- [x] Check signature: If OS Run `spctl --assess --verbose /Applications/Brave.app/` and make sure it returns `accepted`. If Windows right click on the installer exe and go to Properties, go to the Digital Signatures tab and double click on the signature. Make sure it says "The digital signature is OK" in the popup window.
- [x] Check Brave, muon, and libchromiumcontent version in `about:brave` and make sure it is EXACTLY as expected.
## Printing
- [x] Test that you can print a PDF
## Widevine/Netflix test
- [ ] Test that you can log into Netflix and start a show.
## Performance test
_Each start should take less than 7 seconds_
- [x] Enable only sync (new sync group).
- [ ] Enable only sync with a large sync group (many entries).
- [x] Enable only payments.
- [x] Only import a large set of bookmarks.
- [x] Combine sync, payments, and a large set of bookmarks.
# Ledger
- [x] Verify wallet is auto created after enabling payments
- [x] Verify monthly budget and account balance shows correct BAT and USD value
- [x] Click on `add funds` and click on each currency and verify it shows wallet address and QR Code
- [x] Verify that Brave BAT wallet address can be copied
- [ ] Verify adding funds via any of the currencies flows into BAT Wallet after specified amount of time
- [ ] Verify adding funds to an existing wallet with amount, adjusts the BAT value appropriately
- [x] Change min visit and min time in advance setting and verify if the publisher list gets updated based on new setting
- [x] Visit nytimes.com for a few seconds and make sure it shows up in the Payments table.
- [x] Check that disabling payments and enabling them again does not lose state.
- [ ] Upgrade from older version
- [ ] Verify the wallet overlay is shown when wallet transition is happening upon upgrade
- [ ] Verify transition overlay is shown post upgrade even if the payment is disabled before upgrade
- [ ] Verify publishers list is not lost after upgrade when payment is disabled in the older version
### Ledger Media
- [x] Visit any YouTube video in a normal/session tab and ensure the video publisher name is listed in ledger table
- [x] Visit any YouTube video in a private tab and ensure the video publisher name is not listed in ledger table
- [x] Visit any live YouTube video and ensure the time spent is shown under ledger table
- [x] Visit any embeded YouTube video and ensure the video publisher name is listed in ledger table
- [x] Ensure total time spent is correctly calculated for each publisher video
- [x] Ensure total time spent is correctly calculated when switching to YouTube video from an embeded video
- [x] Ensure YouTube publishers are not listed when `Allow contributions to video` is disabled in adavanced settings
- [x] Ensure existing YouTube publishers are not lost when `Allow contributions to video` is disabled in adavanced settings
- [x] Ensure YouTube publishers is listed but not included when `auto-include` is disabled
- [x] Update Advanced settings to different time/visit value and ensure YouTube videos are added to ledger table once criteria is met
## Sync
- [x] Verify you are able to sync two devices using the secret code
- [x] Visit a site on device 1 and change shield setting, ensure that the saved site preference is synced to device 2
- [x] Enable Browsing history sync on device 1, ensure the history is shown on device 2
- [x] Import/Add bookmarks on device 1, ensure it is synced on device 2
- [x] Ensure imported bookmark folder structure is maintained on device 2
- [x] Ensure bookmark favicons are shown after sync
## Data
- [x] Make sure that data from the last version appears in the new version OK.
- [x] With data from the last version, test that
- [x] cookies are preserved
- [x] pinned tabs can be opened
- [x] pinned tabs can be unpinned
- [x] unpinned tabs can be re-pinned
- [x] opened tabs can be reloaded
- [x] bookmarks on the bookmark toolbar can be opened
- [x] bookmarks in the bookmark folder toolbar can be opened
## Bookmarks
- [x] Test that creating a bookmark on the bookmarks toolbar works
- [x] Test that creating a bookmark folder on the bookmarks toolbar works
- [x] Test that moving a bookmark into a folder by drag and drop on the bookmarks folder works
- [x] Test that clicking a bookmark in the toolbar loads the bookmark.
- [x] Test that clicking a bookmark in a bookmark toolbar folder loads the bookmark.
## Context menus
- [x] Make sure context menu items in the URL bar work
- [x] Make sure context menu items on content work with no selected text.
- [x] Make sure context menu items on content work with selected text.
- [x] Make sure context menu items on content work inside an editable control on `about:styles` (input, textarea, or contenteditable).
## Find on page
- [x] Ensure search box is shown with shortcut
- [x] Test successful find
- [x] Test forward and backward find navigation
- [x] Test failed find shows 0 results
- [x] Test match case find
## Keyboard Shortcuts
- [x] Open a new window: `Command` + `n` (macOS) || `Ctrl` + `n` (Win/Linux)
- [x] Open a new tab: `Command` + `t` (macOS) || `Ctrl` + `t` (Win/Linux)
- [x] Open a new private tab: `Command` + `Shift` + `p` (macOS) || `Ctrl` + `Shift` + `p` (Win/Linux)
- [x] Reopen the latest closed tab: `Command` + `Shift` + `t` (macOS) || `Ctrl` + `Shift` + `t` (Win/Linux)
- [x] Jump to the next tab: `Command` + `Option` + `->` (macOS) || `Ctrl` + `PgDn` (Win/Linux)
- [x] Jump to the previous tab: `Command` + `Option` + `<-` (macOS) || `Ctrl` + `PgUp` (Win/Linux)
- [x] Jump to the next tab: `Ctrl` + `Tab` (macOS/Win/Linux)
- [x] Jump to the previous tab: `Ctrl` + `Shift` + `Tab` (macOS/Win/Linux)
- [x] Open Brave preferences: `Command` + `,` (macOS) || `Ctrl` + `,` (Win/Linux)
- [x] Jump into the URL bar: `Command` + `l` (macOS) || `Ctrl` + `l` (Win/Linux)
- [x] Reload page: `Command` + `r` (macOS) || `Ctrl` + `r` (Win/Linux)
- [x] Select All: `Command` + `a` (macOS) || `Ctrl` + `a` (Win/Linux)
- [x] Copying text: `Command` + `c` (macOS) || `Ctrl` + `c` (Win/Linux)
- [x] Pasting text: `Command` + `v` (macOS) || `Ctrl` + `v` (Win/Linux)
- [x] Minimize Brave: `Command` + `m` (macOS) || `Ctrl` + `m` (Win/Linux)
- [x] Quit Brave: `Command` + `q` (macOS) || `Ctrl` + `q` (Win/Linux)
## Geolocation
- [ ] Check that https://developer.mozilla.org/en-US/docs/Web/API/Geolocation/Using_geolocation works
## Site hacks
- [x] Test https://www.twitch.tv/adobe sub-page loads a video and you can play it
## Downloads
- [x] Test downloading a file works and that all actions on the download item works.
## Fullscreen
- [x] Test that entering full screen window works View -> Toggle Full Screen. And exit back (Not Esc).
- [x] Test that entering HTML5 full screen works. And Esc to go back. (youtube.com)
## Tabs, Pinning and Tear off tabs
- [x] Test that tabs are pinnable
- [x] Test that tabs are unpinnable
- [x] Test that tabs are draggable to same tabset
- [x] Test that tabs are draggable to alternate tabset
- [x] Test that tabs can be detached to create a new window
- [x] Test that you are able to reattach a tab to an existing window
- [x] Test that you can quickly switch tabs
- [x] Test that tabs can be cloned
## Zoom
- [x] Test zoom in / out shortcut works
- [x] Test hamburger menu zooms.
- [x] Test zoom saved when you close the browser and restore on a single site.
- [x] Test zoom saved when you navigate within a single origin site.
- [x] Test that navigating to a different origin resets the zoom
## Bravery settings
- [x] Check that HTTPS Everywhere works by loading https://https-everywhere.badssl.com/
- [x] Turning HTTPS Everywhere off and shields off both disable the redirect to https://https-everywhere.badssl.com/
- [ ] Check that ad replacement works on http://slashdot.org
- [x] Check that toggling to blocking and allow ads works as expected.
- [x] Test that clicking through a cert error in https://badssl.com/ works.
- [x] Test that Safe Browsing works (https://www.raisegame.com/)
- [x] Turning Safe Browsing off and shields off both disable safe browsing for https://www.raisegame.com/.
- [x] Visit https://brianbondy.com/ and then turn on script blocking, nothing should load. Allow it from the script blocking UI in the URL bar and it should work.
- [x] Test that about:preferences default Bravery settings take effect on pages with no site settings.
- [x] Test that turning on fingerprinting protection in about:preferences shows 3 fingerprints blocked at https://jsfiddle.net/bkf50r8v/13/. Test that turning it off in the Bravery menu shows 0 fingerprints blocked.
- [x] Test that 3rd party storage results are blank at https://jsfiddle.net/7ke9r14a/9/ when 3rd party cookies are blocked and not blank when 3rd party cookies are unblocked.
- [x] Test that audio fingerprint is blocked at https://audiofingerprint.openwpm.com/ when fingerprinting protection is on.
- [x] Test that browser is not detected on https://extensions.inrialpes.fr/brave/
## Content tests
- [x] Go to https://brianbondy.com/ and click on the twitter icon on the top right. Test that context menus work in the new twitter tab.
- [x] Load twitter and click on a tweet so the popup div shows. Click to dismiss and repeat with another div. Make sure it shows.
- [x] Go to https://www.bennish.net/web-notifications.html and test that clicking on 'Show' pops up a notification asking for permission. Make sure that clicking 'Deny' leads to no notifications being shown.
- [x] Go to https://trac.torproject.org/projects/tor/login and make sure that the password can be saved. Make sure the saved password shows up in `about:passwords`. Then reload https://trac.torproject.org/projects/tor/login and make sure the password is autofilled.
- [x] Open a github issue and type some misspellings, make sure they are underlined.
- [x] Make sure that right clicking on a word with suggestions gives a suggestion and that clicking on the suggestion replaces the text.
- [x] Make sure that Command + Click (Control + Click on Windows, Control + Click on Ubuntu) on a link opens a new tab but does NOT switch to it. Click on it and make sure it is already loaded.
- [x] Open an email on http://mail.google.com/ or inbox.google.com and click on a link. Make sure it works.
- [x] Test that PDF is loaded over https at https://basicattentiontoken.org/BasicAttentionTokenWhitePaper-4.pdf
- [x] Test that PDF is loaded over http at http://www.pdf995.com/samples/pdf.pdf
- [x] Test that https://mixed-script.badssl.com/ shows up as grey not red (no mixed content scripts are run).
- [x] Test that WebSockets are working by ensuring http://slither.io/ runs once "Play" has been clicked.
## Flash tests
- [x] Test that flash placeholder appears on http://www.homestarrunner.com
- [x] Test with flash enabled in preferences, auto play option is shown when visiting http://www.homestarrunner.com
## Autofill tests
- [x] Test that autofill works on http://www.roboform.com/filling-test-all-fields
## Session storage
Do not forget to make a backup of your entire `~/Library/Application\ Support/Brave` folder.
- [x] Temporarily move away your `~/Library/Application\ Support/Brave/session-store-1` and test that clean session storage works. (`%appdata%\Brave` in Windows, `~/.config/brave` in Ubuntu)
- [x] Test that windows and tabs restore when closed, including active tab.
- [x] Move away your entire `~/Library/Application\ Support/Brave` folder (`%appdata%\Brave` in Windows, `~/.config/brave` in Ubuntu)
## Cookie and Cache
- [x] Make a backup of your profile, turn on all clearing options in preferences, and shut down. Make sure that when you bring the browser back up, everything specified is gone.
- [x] Go to http://samy.pl/evercookie/ and set an evercookie. Check that after going to preferences and clearing site data and cache, returning to the Evercookie site does not restore the old evercookie value.
## Update tests
- [x] Test that updating using `BRAVE_UPDATE_VERSION=0.8.3` env variable works correctly.
- [ ] Test that using `BRAVE_ENABLE_PREVIEW_UPDATES=TRUE` env variable works and prompts for preview build updates.
|
test
|
| 1
|
341,936
| 30,606,988,969
|
IssuesEvent
|
2023-07-23 05:49:05
|
unifyai/ivy
|
https://api.github.com/repos/unifyai/ivy
|
reopened
|
Fix jax_numpy_linalg.test_jax_matrix_power
|
JAX Frontend Sub Task Failing Test
|
| | |
|---|---|
|tensorflow|<a href="https://github.com/unifyai/ivy/actions/runs/5515503096/jobs/10055891857"><img src=https://img.shields.io/badge/-success-success></a>
|paddle|<a href="https://github.com/unifyai/ivy/actions/runs/5515503096/jobs/10055891857"><img src=https://img.shields.io/badge/-success-success></a>
|jax|<a href="https://github.com/unifyai/ivy/actions/runs/5515503096/jobs/10055891857"><img src=https://img.shields.io/badge/-success-success></a>
|numpy|<a href="https://github.com/unifyai/ivy/actions/runs/5515503096/jobs/10055891857"><img src=https://img.shields.io/badge/-success-success></a>
|torch|<a href="https://github.com/unifyai/ivy/actions/runs/5515503096/jobs/10055891857"><img src=https://img.shields.io/badge/-success-success></a>
|
1.0
|
|
test
|
| 1
|
225,139
| 24,814,593,806
|
IssuesEvent
|
2022-10-25 12:13:07
|
Baneeishaque/Android-Common-Utils16
|
https://api.github.com/repos/Baneeishaque/Android-Common-Utils16
|
closed
|
CVE-2019-17359 (High) detected in bcprov-jdk15on-1.56.jar - autoclosed
|
security vulnerability
|
## CVE-2019-17359 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>bcprov-jdk15on-1.56.jar</b></p></summary>
<p>The Bouncy Castle Crypto package is a Java implementation of cryptographic algorithms. This jar contains JCE provider and lightweight API for the Bouncy Castle Cryptography APIs for JDK 1.5 to JDK 1.8.</p>
<p>Library home page: <a href="http://www.bouncycastle.org/java.html">http://www.bouncycastle.org/java.html</a></p>
<p>Path to dependency file: /tests16/build.gradle</p>
<p>Path to vulnerable library: /home/wss-scanner/.gradle/caches/modules-2/files-2.1/org.bouncycastle/bcprov-jdk15on/1.56/a153c6f9744a3e9dd6feab5e210e1c9861362ec7/bcprov-jdk15on-1.56.jar,/home/wss-scanner/.gradle/caches/modules-2/files-2.1/org.bouncycastle/bcprov-jdk15on/1.56/a153c6f9744a3e9dd6feab5e210e1c9861362ec7/bcprov-jdk15on-1.56.jar</p>
<p>
Dependency Hierarchy:
- lint-gradle-30.0.3.jar (Root Library)
- builder-7.0.3.jar
- :x: **bcprov-jdk15on-1.56.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/Baneeishaque/Android-Common-Utils16/commit/0800a0e70bf571d1c17edd75edb182d01b9521ce">0800a0e70bf571d1c17edd75edb182d01b9521ce</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
The ASN.1 parser in Bouncy Castle Crypto (aka BC Java) 1.63 can trigger a large attempted memory allocation, and resultant OutOfMemoryError error, via crafted ASN.1 data. This is fixed in 1.64.
<p>Publish Date: 2019-10-08
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2019-17359>CVE-2019-17359</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-17359">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-17359</a></p>
<p>Release Date: 2019-10-08</p>
<p>Fix Resolution: org.bouncycastle:bcprov-jdk15on:1.64</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
True
|
|
non_test
|
| 0
|
46,015
| 13,148,138,324
|
IssuesEvent
|
2020-08-08 19:38:17
|
faizulho/vuepress-deploy
|
https://api.github.com/repos/faizulho/vuepress-deploy
|
opened
|
CVE-2019-20149 (High) detected in kind-of-6.0.2.tgz
|
security vulnerability
|
## CVE-2019-20149 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>kind-of-6.0.2.tgz</b></p></summary>
<p>Get the native type of a value.</p>
<p>Library home page: <a href="https://registry.npmjs.org/kind-of/-/kind-of-6.0.2.tgz">https://registry.npmjs.org/kind-of/-/kind-of-6.0.2.tgz</a></p>
<p>Path to dependency file: /tmp/ws-scm/vuepress-deploy/package.json</p>
<p>Path to vulnerable library: /tmp/ws-scm/vuepress-deploy/node_modules/kind-of/package.json</p>
<p>
Dependency Hierarchy:
- vuepress-0.6.1.tgz (Root Library)
- webpack-4.6.0.tgz
- micromatch-3.1.10.tgz
- :x: **kind-of-6.0.2.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/faizulho/vuepress-deploy/commit/b72fdd9b4a95a0a14352d3d76e253eccbdb95192">b72fdd9b4a95a0a14352d3d76e253eccbdb95192</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
ctorName in index.js in kind-of v6.0.2 allows external user input to overwrite certain internal attributes via a conflicting name, as demonstrated by 'constructor': {'name':'Symbol'}. Hence, a crafted payload can overwrite this builtin attribute to manipulate the type detection result.
<p>Publish Date: 2019-12-30
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2019-20149>CVE-2019-20149</a></p>
</p>
</details>
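The crafted-payload spoof described above can be sketched with a standalone re-implementation of the library's constructor-name check. This is illustrative code under stated assumptions, not the real `kind-of` module: the `ctorName` helper below mirrors the 6.0.2 logic, and `isSymbolFixed` mirrors the 6.0.3-style guard.

```javascript
// Illustrative re-implementation of the check behind CVE-2019-20149.
// kind-of 6.0.2 trusted `val.constructor.name` without verifying that
// `constructor` is a real function, so attacker-controlled JSON can
// spoof the type-detection result.
function ctorName(val) {
  return val.constructor ? val.constructor.name : null; // 6.0.2-style behaviour
}

function isSymbolVulnerable(val) {
  return ctorName(val) === 'Symbol';
}

function isSymbolFixed(val) {
  // 6.0.3-style behaviour: only trust an actual constructor function
  return typeof val.constructor === 'function' && val.constructor.name === 'Symbol';
}

const payload = JSON.parse('{"constructor": {"name": "Symbol"}}');
console.log(isSymbolVulnerable(payload)); // true  -- plain object misreported
console.log(isSymbolFixed(payload));      // false -- spoof rejected
```

The fix works because an own `constructor` property parsed from JSON is always a plain object, never a function, so the `typeof` guard rejects it.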
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: High
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="http://web.nvd.nist.gov/view/vuln/detail?vulnId=CVE-2019-20149">http://web.nvd.nist.gov/view/vuln/detail?vulnId=CVE-2019-20149</a></p>
<p>Release Date: 2019-12-30</p>
<p>Fix Resolution: 6.0.3</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
True
|
|
non_test
|
| 0
|
26,896
| 6,812,695,610
|
IssuesEvent
|
2017-11-06 05:06:07
|
BTDF/DeploymentFramework
|
https://api.github.com/repos/BTDF/DeploymentFramework
|
closed
|
Undeploy should not fail if pipeline component assembly cannot be deleted
|
bug CodePlexMigrationInitiated General Impact: Low Release 5.0
|
Undeploy should not fail if a pipeline component assembly cannot be deleted.
#### This work item was migrated from CodePlex
CodePlex work item ID: '3659'
Assigned to: 'tfabraham'
Vote count: '0'
|
1.0
|
|
non_test
|
| 0
|
340,075
| 10,266,394,056
|
IssuesEvent
|
2019-08-22 21:19:15
|
googleapis/gapic-generator-python
|
https://api.github.com/repos/googleapis/gapic-generator-python
|
opened
|
Read service configs to determine method retry configuration
|
priority: p1 type: feature request
|
Some methods have particular custom retry requirements, which are detailed in the service_config file.
Feature request to read in the service config (if provided) and customize method retries in the generated client.
|
1.0
|
|
non_test
|
| 0
|
64,758
| 6,919,219,524
|
IssuesEvent
|
2017-11-29 14:51:14
|
curationexperts/epigaea
|
https://api.github.com/repos/curationexperts/epigaea
|
closed
|
Normalize whitespace for batch ingest
|
acceptance testing bug in progress
|
Whitespace should be stripped and normalized in the same manner as deposit.
|
1.0
|
|
test
|
| 1
|
343,489
| 30,668,797,526
|
IssuesEvent
|
2023-07-25 20:30:02
|
CollinHeist/TitleCardMaker-Blueprints
|
https://api.github.com/repos/CollinHeist/TitleCardMaker-Blueprints
|
closed
|
[Blueprint] 1923
|
blueprint passed-tests
|
### Series Name
1923
### Series Year
2022
### Creator Username
_No response_
### Blueprint Description
Standard format card using the Boul Mich Regular font
### Blueprint
```json
{
"series": {
"font_id": 0,
"card_type": "standard",
"template_ids": []
},
"episodes": {},
"templates": [],
"fonts": [
{
"name": "1923",
"delete_missing": true,
"file": "Boul Mich Regular.ttf"
}
],
"description": [
"Descriptive information about this Blueprint"
],
"creator": "Your (user)name here",
"preview": "preview.jpg"
}
```
### Preview Title Card

### Zip of Font Files
[Boul Mich Regular.ttf.zip](https://github.com/CollinHeist/TitleCardMaker-Blueprints/files/12165330/Boul.Mich.Regular.ttf.zip)
|
1.0
|
|
test
|
| 1
|
144,456
| 11,616,904,882
|
IssuesEvent
|
2020-02-26 16:25:48
|
tendermint/tendermint
|
https://api.github.com/repos/tendermint/tendermint
|
closed
|
blockchain v2: TestReactorTerminationScenarios fails on master
|
C:sync T:test
|
```
--- FAIL: TestReactorTerminationScenarios (0.55s)
--- PASS: TestReactorTerminationScenarios/simple_termination_on_max_peer_height_-_one_peer (0.09s)
--- FAIL: TestReactorTerminationScenarios/simple_termination_on_max_peer_height_-_two_peers (0.12s)
reactor_test.go:337:
Error Trace: reactor_test.go:337
Error: Should be true
Test: TestReactorTerminationScenarios/simple_termination_on_max_peer_height_-_two_peers
```
cc @brapse
|
1.0
|
blockchain v2: TestReactorTerminationScenarios fails on master - ```
--- FAIL: TestReactorTerminationScenarios (0.55s)
--- PASS: TestReactorTerminationScenarios/simple_termination_on_max_peer_height_-_one_peer (0.09s)
--- FAIL: TestReactorTerminationScenarios/simple_termination_on_max_peer_height_-_two_peers (0.12s)
reactor_test.go:337:
Error Trace: reactor_test.go:337
Error: Should be true
Test: TestReactorTerminationScenarios/simple_termination_on_max_peer_height_-_two_peers
```
cc @brapse
|
test
|
blockchain testreactorterminationscenarios fails on master fail testreactorterminationscenarios pass testreactorterminationscenarios simple termination on max peer height one peer fail testreactorterminationscenarios simple termination on max peer height two peers reactor test go error trace reactor test go error should be true test testreactorterminationscenarios simple termination on max peer height two peers cc brapse
| 1
|
213,483
| 7,254,194,138
|
IssuesEvent
|
2018-02-16 09:59:15
|
webcompat/web-bugs
|
https://api.github.com/repos/webcompat/web-bugs
|
closed
|
www.amazon.com - see bug description
|
browser-firefox-mobile priority-critical
|
<!-- @browser: Firefox Mobile 60.0 -->
<!-- @ua_header: Mozilla/5.0 (Android 6.0.1; Mobile; rv:60.0) Gecko/60.0 Firefox/60.0 -->
<!-- @reported_with: web -->
**URL**: https://www.amazon.com/
**Browser / Version**: Firefox Mobile 60.0
**Operating System**: Android 6.0.1
**Tested Another Browser**: Unknown
**Problem type**: Something else
**Description**: Mobile site on my mobile phone. I have a high res screen and hate the limitations of the mobile site
**Steps to Reproduce**:
Firefox does not have a default to desktop setting, and so I manually have to manually request desktop. I wish my Amazon account would only send me desktop site pages.
_From [webcompat.com](https://webcompat.com/) with ❤️_
|
1.0
|
www.amazon.com - see bug description - <!-- @browser: Firefox Mobile 60.0 -->
<!-- @ua_header: Mozilla/5.0 (Android 6.0.1; Mobile; rv:60.0) Gecko/60.0 Firefox/60.0 -->
<!-- @reported_with: web -->
**URL**: https://www.amazon.com/
**Browser / Version**: Firefox Mobile 60.0
**Operating System**: Android 6.0.1
**Tested Another Browser**: Unknown
**Problem type**: Something else
**Description**: Mobile site on my mobile phone. I have a high res screen and hate the limitations of the mobile site
**Steps to Reproduce**:
Firefox does not have a default to desktop setting, and so I manually have to manually request desktop. I wish my Amazon account would only send me desktop site pages.
_From [webcompat.com](https://webcompat.com/) with ❤️_
|
non_test
|
see bug description url browser version firefox mobile operating system android tested another browser unknown problem type something else description mobile site on my mobile phone i have a high res screen and hate the limitations of the mobile site steps to reproduce firefox does not have a default to desktop setting and so i manually have to manually request desktop i wish my amazon account would only send me desktop site pages from with ❤️
| 0
|
251,777
| 21,522,403,475
|
IssuesEvent
|
2022-04-28 15:14:10
|
zephyrproject-rtos/zephyr
|
https://api.github.com/repos/zephyrproject-rtos/zephyr
|
closed
|
twister fails on several stm32 boards with tests/arch/arm testcases
|
bug priority: medium platform: STM32 area: Tests area: Twister
|
**Describe the bug**
It appears that several (various) test cases of the tests/arch/arm running on the nucleo_wb55rg or nucleo_f103rb
are failing due to **FAILED: Testsuite mismatch**
twister reports failed test :
`twister - ERROR - nucleo_wb55rg tests/arch/arm/arm_irq_advanced_features/arch.arm.irq_advanced_features FAILED : Testsuite mismatch`
handler.log reports:
```
================================================================
PROJECT EXECUTION SUCCESSFUL
```
which is not different from a successful handler.log
--> It appears that reverting the https://github.com/zephyrproject-rtos/zephyr/pull/42482/
can fix the pb
**To Reproduce**
Steps to reproduce the behavior:
1. twister --device-testing - --hardware-map ../map.yaml -T tests/arch/arm
**Environment (please complete the following information):**
- OS: Ubuntu 20.04.2 LTS
- Toolchain : zephyr-sdk-0.13.2
- Commit SHA : 33fde4b10a17e3d16b61e7f7838141391c68d6ae
|
1.0
|
twister fails on several stm32 boards with tests/arch/arm testcases - **Describe the bug**
It appears that several (various) test cases of the tests/arch/arm running on the nucleo_wb55rg or nucleo_f103rb
are failing due to **FAILED: Testsuite mismatch**
twister reports failed test :
`twister - ERROR - nucleo_wb55rg tests/arch/arm/arm_irq_advanced_features/arch.arm.irq_advanced_features FAILED : Testsuite mismatch`
handler.log reports:
```
================================================================
PROJECT EXECUTION SUCCESSFUL
```
which is not different from a successful handler.log
--> It appears that reverting the https://github.com/zephyrproject-rtos/zephyr/pull/42482/
can fix the pb
**To Reproduce**
Steps to reproduce the behavior:
1. twister --device-testing - --hardware-map ../map.yaml -T tests/arch/arm
**Environment (please complete the following information):**
- OS: Ubuntu 20.04.2 LTS
- Toolchain : zephyr-sdk-0.13.2
- Commit SHA : 33fde4b10a17e3d16b61e7f7838141391c68d6ae
|
test
|
twister fails on several boards with tests arch arm testcases describe the bug it appears that several various test cases of the tests arch arm running on the nucleo or nucleo are failing due to failed testsuite mismatch twister reports failed test twister error nucleo tests arch arm arm irq advanced features arch arm irq advanced features failed testsuite mismatch handler log reports project execution successful which is not different from a successful handler log it appears that reverting the can fix the pb to reproduce steps to reproduce the behavior twister device testing hardware map map yaml t tests arch arm environment please complete the following information os ubuntu lts toolchain zephyr sdk commit sha
| 1
|
241,239
| 20,112,305,426
|
IssuesEvent
|
2022-02-07 16:06:54
|
wazuh/wazuh-qa
|
https://api.github.com/repos/wazuh/wazuh-qa
|
closed
|
Vulnerability detector test module refactor: `test_providers_no_os`
|
team/qa type/rework test/integration feature/vuln-detector subteam/qa-thunder
|
It is asked to refactor the test module named `test_providers_no_os.py`.
It is disabled for now, as it was failing or unstable, causing false positives.
## Tasks
- [x] Make a study of the objectives of the test, and what is being tested.
- [x] Refactor the test. Clean and modularizable code.
- [x] Check that the test always starts from the same state and restores it completely at the end of the test (independent of the tests previously executed in that environment).
- [x] Review test documentation, and modify if necessary.
- [x] Proven that tests **pass** when they have to pass.
- [x] Proven that tests **fail** when they have to fail.
- [x] Test in 5-10 rounds of execution that the test always shows the same result.
## Checks
- [x] The code complies with the standard PEP-8 format.
- [x] Python codebase is documented following the Google Style for Python docstrings.
- [x] The test is processed by `qa-docs` tool without errors.
|
1.0
|
Vulnerability detector test module refactor: `test_providers_no_os` - It is asked to refactor the test module named `test_providers_no_os.py`.
It is disabled for now, as it was failing or unstable, causing false positives.
## Tasks
- [x] Make a study of the objectives of the test, and what is being tested.
- [x] Refactor the test. Clean and modularizable code.
- [x] Check that the test always starts from the same state and restores it completely at the end of the test (independent of the tests previously executed in that environment).
- [x] Review test documentation, and modify if necessary.
- [x] Proven that tests **pass** when they have to pass.
- [x] Proven that tests **fail** when they have to fail.
- [x] Test in 5-10 rounds of execution that the test always shows the same result.
## Checks
- [x] The code complies with the standard PEP-8 format.
- [x] Python codebase is documented following the Google Style for Python docstrings.
- [x] The test is processed by `qa-docs` tool without errors.
|
test
|
vulnerability detector test module refactor test providers no os it is asked to refactor the test module named test providers no os py it is disabled for now as it was failing or unstable causing false positives tasks make a study of the objectives of the test and what is being tested refactor the test clean and modularizable code check that the test always starts from the same state and restores it completely at the end of the test independent of the tests previously executed in that environment review test documentation and modify if necessary proven that tests pass when they have to pass proven that tests fail when they have to fail test in rounds of execution that the test always shows the same result checks the code complies with the standard pep format python codebase is documented following the google style for python docstrings the test is processed by qa docs tool without errors
| 1
|
562
| 2,532,779,010
|
IssuesEvent
|
2015-01-23 18:25:56
|
fatty-tuna/vr-vs-world
|
https://api.github.com/repos/fatty-tuna/vr-vs-world
|
closed
|
4.0 - Milestones
|
documentation
|
- Weekly goals (items to be completed in order to realize the feature list in section 3.0)
- Specify the dates along with the goals/tasks that are to be completed for each week.
- Create a scrum board for weekly scrum meetings
|
1.0
|
4.0 - Milestones - - Weekly goals (items to be completed in order to realize the feature list in section 3.0)
- Specify the dates along with the goals/tasks that are to be completed for each week.
- Create a scrum board for weekly scrum meetings
|
non_test
|
milestones weekly goals items to be completed in order to realize the feature list in section specify the dates along with the goals tasks that are to be completed for each week create a scrum board for weekly scrum meetings
| 0
|
107,000
| 9,199,883,241
|
IssuesEvent
|
2019-03-07 15:51:57
|
DanRic/Bonus-idrico
|
https://api.github.com/repos/DanRic/Bonus-idrico
|
opened
|
Errore produzione esiti Cessazioni
|
Test Interno
|
Ciao, la generazione degli esiti per le cessazioni da errore nel formato del file prodotto.
Puoi correggere e riprovare?

Anche il file di log è formattato male, gli a capo non sono gestiti bene:
[LOG_Creazione_Esiti_07-03-2019_1646.zip](https://github.com/DanRic/Bonus-idrico/files/2942088/LOG_Creazione_Esiti_07-03-2019_1646.zip)
|
1.0
|
Errore produzione esiti Cessazioni - Ciao, la generazione degli esiti per le cessazioni da errore nel formato del file prodotto.
Puoi correggere e riprovare?

Anche il file di log è formattato male, gli a capo non sono gestiti bene:
[LOG_Creazione_Esiti_07-03-2019_1646.zip](https://github.com/DanRic/Bonus-idrico/files/2942088/LOG_Creazione_Esiti_07-03-2019_1646.zip)
|
test
|
errore produzione esiti cessazioni ciao la generazione degli esiti per le cessazioni da errore nel formato del file prodotto puoi correggere e riprovare anche il file di log è formattato male gli a capo non sono gestiti bene
| 1
|
62,982
| 26,224,701,181
|
IssuesEvent
|
2023-01-04 17:34:12
|
cityofaustin/atd-data-tech
|
https://api.github.com/repos/cityofaustin/atd-data-tech
|
closed
|
Meeting - OSE Neighborhood Zones web map demo 1/4/2023
|
Workgroup: OSE Type: Map Request Type: Data Service: Geo
|
Notes from meeting January 4 2023
“How the map works” A meeting about the new Neighborhood Block Party Map for outreach.
Zach, Maria, Margaret
Zach showed the details of the map and how in depth it can go. We agreed the Web Application can be made in the future for end users like Jim Dale or other departments. The application will be a locked down/un-editable version of the map so that end-users only click on the map layers and search, without editing capabilities.
So that OSE has more understanding of how to work with Arc GIS Online, Zach will compile a list of “how to” courses for us (for whoever in the office is interested). Once we have finished them, Zach will show Margaret and Maria more in depth explanation of how to view and edit the NBP Outreach Map. His point is to “empower” us. Then we at OSE will have more autonomy in editing our maps with Zach (and the GIS crew) as support.
Note: It is possible that once the web application is done that other departments might be able to use the map in a web application format.
|
1.0
|
Meeting - OSE Neighborhood Zones web map demo 1/4/2023 - Notes from meeting January 4 2023
“How the map works” A meeting about the new Neighborhood Block Party Map for outreach.
Zach, Maria, Margaret
Zach showed the details of the map and how in depth it can go. We agreed the Web Application can be made in the future for end users like Jim Dale or other departments. The application will be a locked down/un-editable version of the map so that end-users only click on the map layers and search, without editing capabilities.
So that OSE has more understanding of how to work with Arc GIS Online, Zach will compile a list of “how to” courses for us (for whoever in the office is interested). Once we have finished them, Zach will show Margaret and Maria more in depth explanation of how to view and edit the NBP Outreach Map. His point is to “empower” us. Then we at OSE will have more autonomy in editing our maps with Zach (and the GIS crew) as support.
Note: It is possible that once the web application is done that other departments might be able to use the map in a web application format.
|
non_test
|
meeting ose neighborhood zones web map demo notes from meeting january “how the map works” a meeting about the new neighborhood block party map for outreach zach maria margaret zach showed the details of the map and how in depth it can go we agreed the web application can be made in the future for end users like jim dale or other departments the application will be a locked down un editable version of the map so that end users only click on the map layers and search without editing capabilities so that ose has more understanding of how to work with arc gis online zach will compile a list of “how to” courses for us for whoever in the office is interested once we have finished them zach will show margaret and maria more in depth explanation of how to view and edit the nbp outreach map his point is to “empower” us then we at ose will have more autonomy in editing our maps with zach and the gis crew as support note it is possible that once the web application is done that other departments might be able to use the map in a web application format
| 0
|
82,483
| 7,842,663,240
|
IssuesEvent
|
2018-06-19 00:57:06
|
AlphaConsole/AlphaConsoleBot
|
https://api.github.com/repos/AlphaConsole/AlphaConsoleBot
|
closed
|
Have !mute DM the user
|
Testing Phase enhancement
|
When a user is muted through the `!mute` command, the bot should DM the user the same embed that is posted in `#mute-reasons` and `#mod-log`
|
1.0
|
Have !mute DM the user - When a user is muted through the `!mute` command, the bot should DM the user the same embed that is posted in `#mute-reasons` and `#mod-log`
|
test
|
have mute dm the user when a user is muted through the mute command the bot should dm the user the same embed that is posted in mute reasons and mod log
| 1
|
20,869
| 3,851,435,967
|
IssuesEvent
|
2016-04-06 02:00:49
|
pixelhumain/communecter
|
https://api.github.com/repos/pixelhumain/communecter
|
closed
|
Actualité non affichée
|
bug priority 1 to test
|
Lorsque je crée une actu dans Fil d'actualités, lorsque je la valide j'ai un marqueur "Saved successfully" mais elle n'apparait pas dans mon fil d'actualités.
J'en ai créé 3 (en sélectionnant my Wall, en mettant des tags, une date ou non, même résultat) et j'ai toujours "Sorry no news available..."
|
1.0
|
Actualité non affichée - Lorsque je crée une actu dans Fil d'actualités, lorsque je la valide j'ai un marqueur "Saved successfully" mais elle n'apparait pas dans mon fil d'actualités.
J'en ai créé 3 (en sélectionnant my Wall, en mettant des tags, une date ou non, même résultat) et j'ai toujours "Sorry no news available..."
|
test
|
actualité non affichée lorsque je crée une actu dans fil d actualités lorsque je la valide j ai un marqueur saved successfully mais elle n apparait pas dans mon fil d actualités j en ai créé en sélectionnant my wall en mettant des tags une date ou non même résultat et j ai toujours sorry no news available
| 1
|
111,030
| 9,488,467,587
|
IssuesEvent
|
2019-04-22 19:39:23
|
phetsims/isotopes-and-atomic-mass
|
https://api.github.com/repos/phetsims/isotopes-and-atomic-mass
|
closed
|
Fluorine 18 and 19 should have different amu
|
status:fixed-pending-testing type:bug
|
**Test device:**
Dell
**Operating System:**
Win 10
**Browser:**
Chrome
**Problem description:**
For https://github.com/phetsims/QA/issues/300
Bot F-18 and F-19 say amu=18.99840 on the isotopes screen. According to Wikipedia F-18 should be 18.0009373(5).
**Steps to reproduce:**
1. Go to the Isotopes screen
2. Select F on the periodic table
3. Select Atomic Mass (amu)
4. Remove a neutron to compare the amu of F-18 and F-19
**Screenshots:**

**Troubleshooting information (do not edit):**
<details>
Name: Isotopes and Atomic Mass
URL: https://phet-dev.colorado.edu/html/isotopes-and-atomic-mass/1.1.0-rc.1/phet/isotopes-and-atomic-mass_en_phet.html
Version: 1.1.0-rc.1 2019-03-22 21:49:06 UTC
Features missing: touch
Flags: pixelRatioScaling
User Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/73.0.3683.86 Safari/537.36
Language: en-US
Window: 1536x722
Pixel Ratio: 2.5/1
WebGL: WebGL 1.0 (OpenGL ES 2.0 Chromium)
GLSL: WebGL GLSL ES 1.0 (OpenGL ES GLSL ES 1.0 Chromium)
Vendor: WebKit (WebKit WebGL)
Vertex: attribs: 16 varying: 30 uniform: 4096
Texture: size: 16384 imageUnits: 16 (vertex: 16, combined: 32)
Max viewport: 16384x16384
OES_texture_float: true
Dependencies JSON: {}
</details>
|
1.0
|
Fluorine 18 and 19 should have different amu - **Test device:**
Dell
**Operating System:**
Win 10
**Browser:**
Chrome
**Problem description:**
For https://github.com/phetsims/QA/issues/300
Bot F-18 and F-19 say amu=18.99840 on the isotopes screen. According to Wikipedia F-18 should be 18.0009373(5).
**Steps to reproduce:**
1. Go to the Isotopes screen
2. Select F on the periodic table
3. Select Atomic Mass (amu)
4. Remove a neutron to compare the amu of F-18 and F-19
**Screenshots:**

**Troubleshooting information (do not edit):**
<details>
Name: Isotopes and Atomic Mass
URL: https://phet-dev.colorado.edu/html/isotopes-and-atomic-mass/1.1.0-rc.1/phet/isotopes-and-atomic-mass_en_phet.html
Version: 1.1.0-rc.1 2019-03-22 21:49:06 UTC
Features missing: touch
Flags: pixelRatioScaling
User Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/73.0.3683.86 Safari/537.36
Language: en-US
Window: 1536x722
Pixel Ratio: 2.5/1
WebGL: WebGL 1.0 (OpenGL ES 2.0 Chromium)
GLSL: WebGL GLSL ES 1.0 (OpenGL ES GLSL ES 1.0 Chromium)
Vendor: WebKit (WebKit WebGL)
Vertex: attribs: 16 varying: 30 uniform: 4096
Texture: size: 16384 imageUnits: 16 (vertex: 16, combined: 32)
Max viewport: 16384x16384
OES_texture_float: true
Dependencies JSON: {}
</details>
|
test
|
fluorine and should have different amu test device dell operating system win browser chrome problem description for bot f and f say amu on the isotopes screen according to wikipedia f should be steps to reproduce go to the isotopes screen select f on the periodic table select atomic mass amu remove a neutron to compare the amu of f and f screenshots troubleshooting information do not edit name isotopes and atomic mass url version rc utc features missing touch flags pixelratioscaling user agent mozilla windows nt applewebkit khtml like gecko chrome safari language en us window pixel ratio webgl webgl opengl es chromium glsl webgl glsl es opengl es glsl es chromium vendor webkit webkit webgl vertex attribs varying uniform texture size imageunits vertex combined max viewport oes texture float true dependencies json
| 1
|
223,268
| 17,109,384,535
|
IssuesEvent
|
2021-07-10 01:49:44
|
crankyoldgit/IRremoteESP8266
|
https://api.github.com/repos/crankyoldgit/IRremoteESP8266
|
closed
|
[TODO] Do less with/move away from Travis.
|
Documentation enhancement help wanted
|
We have run out of _free_ credits with Travis's Continuous Integration service.
We need to:
1. Reduce the amount we use Travis to try to get under budget. e.g. Unit Tests are already running under GitHub Actions.
2. Move more of the Travis tasks/processes to GitHub Actions.
3. De-dup the existing Travis tasks/processes.
4. Get more credits.
5. Look at other _free_ services.
This is not critical, yet. But it is heading there fast.
|
1.0
|
[TODO] Do less with/move away from Travis. - We have run out of _free_ credits with Travis's Continuous Integration service.
We need to:
1. Reduce the amount we use Travis to try to get under budget. e.g. Unit Tests are already running under GitHub Actions.
2. Move more of the Travis tasks/processes to GitHub Actions.
3. De-dup the existing Travis tasks/processes.
4. Get more credits.
5. Look at other _free_ services.
This is not critical, yet. But it is heading there fast.
|
non_test
|
do less with move away from travis we have run out of free credits with travis s continuous integration service we need to reduce the amount we use travis to try to get under budget e g unit tests are already running under github actions move more of the travis tasks processes to github actions de dup the existing travis tasks processes get more credits look at other free services this is not critical yet but it is heading there fast
| 0
|
14,493
| 10,169,354,946
|
IssuesEvent
|
2019-08-08 00:06:05
|
Azure/azure-cli
|
https://api.github.com/repos/Azure/azure-cli
|
closed
|
webapp:Add CLI support for Vnet Integration
|
Feature Request Service Attention Web Apps
|
**Is your feature request related to a problem? Please describe.**
customers have been requesting Vnet integration support for some time.
|
1.0
|
webapp:Add CLI support for Vnet Integration - **Is your feature request related to a problem? Please describe.**
customers have been requesting Vnet integration support for some time.
|
non_test
|
webapp add cli support for vnet integration is your feature request related to a problem please describe customers have been requesting vnet integration support for some time
| 0
|
102
| 2,507,838,734
|
IssuesEvent
|
2015-01-12 21:10:43
|
idaholab/moose
|
https://api.github.com/repos/idaholab/moose
|
closed
|
Peacock stumbles when parsing an input file with multiple directives per line
|
C: Peacock P: normal T: defect
|
When reading an input file that does not contain a line break or comment after each directive, it completely messes up the parsed syntax tree. However, such files are read and parsed correctly by moose itself
For example, reading
[Variables]
[./xxx] # my fancy variable comment
x = 3 # and a parameter comment
[../]
[./aaa] a = 1 [../]
[./bbb] b = 2 [../] [./ccc] c = 3 [../]
[./ddd] d = 4 [./eee] e = 5 [../] [../] [./fff] f = 6 [../]
[./ggg]
g = 7 [../]
[]
results in

Especially in recurring definitions with only few parameters like variables, almost identical kernels, etc. I am using the one-line notation quite often since it avoids too much scrolling and shows differences between the entries and possible typos much more obvious, e.g. in
[Kernels]
[./stressdiv_dispx] type=CardiacKirchhoffStressDivergence variable=dispx component=0 displacements='dispx dispy dispz' [../]
[./stressdiv_dispy] type=CardiacKirchhoffStressDivergence variable=dispy component=1 displacements='dispx dispy dispz' [../]
[./stressdiv_dispz] type=CardiacKirchhoffStressDivergence variable=dispz component=2 displacements='dispx dispy dispz' [../]
[]
Therefore, I would like to see this issue fixed.
|
1.0
|
Peacock stumbles when parsing an input file with multiple directives per line - When reading an input file that does not contain a line break or comment after each directive, it completely messes up the parsed syntax tree. However, such files are read and parsed correctly by moose itself
For example, reading
[Variables]
[./xxx] # my fancy variable comment
x = 3 # and a parameter comment
[../]
[./aaa] a = 1 [../]
[./bbb] b = 2 [../] [./ccc] c = 3 [../]
[./ddd] d = 4 [./eee] e = 5 [../] [../] [./fff] f = 6 [../]
[./ggg]
g = 7 [../]
[]
results in

Especially in recurring definitions with only few parameters like variables, almost identical kernels, etc. I am using the one-line notation quite often since it avoids too much scrolling and shows differences between the entries and possible typos much more obvious, e.g. in
[Kernels]
[./stressdiv_dispx] type=CardiacKirchhoffStressDivergence variable=dispx component=0 displacements='dispx dispy dispz' [../]
[./stressdiv_dispy] type=CardiacKirchhoffStressDivergence variable=dispy component=1 displacements='dispx dispy dispz' [../]
[./stressdiv_dispz] type=CardiacKirchhoffStressDivergence variable=dispz component=2 displacements='dispx dispy dispz' [../]
[]
Therefore, I would like to see this issue fixed.
|
non_test
|
peacock stumbles when parsing an input file with multiple directives per line when reading an input file that does not contain a line break or comment after each directive it completely messes up the parsed syntax tree however such files are read and parsed correctly by moose itself for example reading my fancy variable comment x and a parameter comment a b c d e f g results in especially in recurring definitions with only few parameters like variables almost identical kernels etc i am using the one line notation quite often since it avoids too much scrolling and shows differences between the entries and possible typos much more obvious e g in type cardiackirchhoffstressdivergence variable dispx component displacements dispx dispy dispz type cardiackirchhoffstressdivergence variable dispy component displacements dispx dispy dispz type cardiackirchhoffstressdivergence variable dispz component displacements dispx dispy dispz therefore i would like to see this issue fixed
| 0
|
66,304
| 20,146,365,263
|
IssuesEvent
|
2022-02-09 07:59:30
|
primefaces/primeng
|
https://api.github.com/repos/primefaces/primeng
|
closed
|
PrimeNG Calendar (p-calendar) Loses alignment when positioned at the top of the input and the user clicks in the year/month
|
defect
|
**I'm submitting a ...** (check one with "x")
```
[x] bug report => Search github for a similar issue or PR before submitting
[ ] feature request => Please check if request is not on the roadmap already https://github.com/primefaces/primeng/wiki/Roadmap
[ ] support request => Please do not submit support request here, instead see http://forum.primefaces.org/viewforum.php?f=35
```
**Plunkr Case (Bug Reports)**
The original examples from PrimeNG:
https://stackblitz.com/edit/primeng-calendar-demo
**Current behavior**
When the calendar is aligned on top of the input field and we click on the year/month, to display the list of years/months, the calendar loses its bottom alignment
**Expected behavior**
When the calendar is aligned on top of the input field and we click on the year/month, to display the list of years/months the calendar should not loses its alignment
**Minimal reproduction of the problem with instructions**
1. Go to https://stackblitz.com/edit/primeng-calendar-demo
2. Click in the input field(calendar) in the bottom of the page. In my example I am using the calendar Multiple.
<img width="285" alt="multiple-calendar-start" src="https://user-images.githubusercontent.com/11298993/148526374-c7e51c07-73b2-4bfc-b650-dd1469d6e19f.PNG">
3. Click in the year. Result: **The calendar loses it's bottom alignment**
<img width="276" alt="multiple-calendar-year-selection" src="https://user-images.githubusercontent.com/11298993/148526387-d13c7ca2-9800-4191-bf07-2fabb4c682bd.PNG">
4. Select the month. Result: **The calendar loses it's bottom alignment**
<img width="283" alt="multiple-calendar-month-selection" src="https://user-images.githubusercontent.com/11298993/148526494-ef6c0282-da29-4ab2-a7df-499769107c52.PNG">
5. After select month the calendar goes back to starting point. Result: **The calendar is not aligned to the bottom of the input anymore**
<img width="277" alt="multiple-calendar-final-result" src="https://user-images.githubusercontent.com/11298993/148526534-3b27cbbd-c8b2-412d-a1f8-27ba17c30625.PNG">
**What is the motivation / use case for changing the behavior?**
This is misleading behaviour for users.
**Please tell us about your environment:**
* **Angular version:** 13.0.2
* **PrimeNG version:** 13.0.4
* **Browser:** [Chrome 96.0.4664.110(64 bits) | Chrome (Android 11) 96.0.4664.104 | Firefox 95.0.2(64 bits) | Edge 96.0.1054.62 (64 bits) ]
This are the browsers I have available for testing
|
1.0
|
PrimeNG Calendar (p-calendar) Loses alignment when positioned at the top of the input and the user clicks in the year/month - **I'm submitting a ...** (check one with "x")
```
[x] bug report => Search github for a similar issue or PR before submitting
[ ] feature request => Please check if request is not on the roadmap already https://github.com/primefaces/primeng/wiki/Roadmap
[ ] support request => Please do not submit support request here, instead see http://forum.primefaces.org/viewforum.php?f=35
```
**Plunkr Case (Bug Reports)**
The original examples from PrimeNG:
https://stackblitz.com/edit/primeng-calendar-demo
**Current behavior**
When the calendar is aligned on top of the input field and we click on the year/month, to display the list of years/months, the calendar loses its bottom alignment
**Expected behavior**
When the calendar is aligned on top of the input field and we click on the year/month, to display the list of years/months the calendar should not loses its alignment
**Minimal reproduction of the problem with instructions**
1. Go to https://stackblitz.com/edit/primeng-calendar-demo
2. Click in the input field(calendar) in the bottom of the page. In my example I am using the calendar Multiple.
<img width="285" alt="multiple-calendar-start" src="https://user-images.githubusercontent.com/11298993/148526374-c7e51c07-73b2-4bfc-b650-dd1469d6e19f.PNG">
3. Click in the year. Result: **The calendar loses it's bottom alignment**
<img width="276" alt="multiple-calendar-year-selection" src="https://user-images.githubusercontent.com/11298993/148526387-d13c7ca2-9800-4191-bf07-2fabb4c682bd.PNG">
4. Select the month. Result: **The calendar loses it's bottom alignment**
<img width="283" alt="multiple-calendar-month-selection" src="https://user-images.githubusercontent.com/11298993/148526494-ef6c0282-da29-4ab2-a7df-499769107c52.PNG">
5. After select month the calendar goes back to starting point. Result: **The calendar is not aligned to the bottom of the input anymore**
<img width="277" alt="multiple-calendar-final-result" src="https://user-images.githubusercontent.com/11298993/148526534-3b27cbbd-c8b2-412d-a1f8-27ba17c30625.PNG">
**What is the motivation / use case for changing the behavior?**
This is misleading behaviour for users.
**Please tell us about your environment:**
* **Angular version:** 13.0.2
* **PrimeNG version:** 13.0.4
* **Browser:** [Chrome 96.0.4664.110(64 bits) | Chrome (Android 11) 96.0.4664.104 | Firefox 95.0.2(64 bits) | Edge 96.0.1054.62 (64 bits) ]
These are the browsers I have available for testing.
|
non_test
|
primeng calendar p calendar loses alignment when positioned at the top of the input and the user clicks in the year month i m submitting a check one with x bug report search github for a similar issue or pr before submitting feature request please check if request is not on the roadmap already support request please do not submit support request here instead see plunkr case bug reports the original examples from primeng current behavior when the calendar is aligned on top of the input field and we click on the year month to display the list of years months the calendar loses its bottom alignment expected behavior when the calendar is aligned on top of the input field and we click on the year month to display the list of years months the calendar should not loses its alignment minimal reproduction of the problem with instructions go to click in the input field calendar in the bottom of the page in my example i am using the calendar multiple img width alt multiple calendar start src click in the year result the calendar loses it s bottom alignment img width alt multiple calendar year selection src select the month result the calendar loses it s bottom alignment img width alt multiple calendar month selection src after select month the calendar goes back to starting point result the calendar is not aligned to the bottom of the input anymore img width alt multiple calendar final result src what is the motivation use case for changing the behavior this is misleading behaviour for users please tell us about your environment angular version primeng version browser this are the browsers i have available for testing
| 0
|
217,177
| 16,848,832,409
|
IssuesEvent
|
2021-06-20 04:11:19
|
hakehuang/infoflow
|
https://api.github.com/repos/hakehuang/infoflow
|
opened
|
tests-ci :testing.ztest.error_hook.catch_assert_in_isr : zephyr-v2.6.0-286-g46029914a7ac: lpcxpresso55s28: test Timeout
|
area: Tests
|
**Describe the bug**
The testing.ztest.error_hook.catch_assert_in_isr test times out on zephyr-v2.6.0-286-g46029914a7ac on lpcxpresso55s28.
see logs for details
**To Reproduce**
1.
```
scripts/twister --device-testing --device-serial /dev/ttyACM0 -p lpcxpresso55s28 --testcase-root tests --sub-test testing.ztest
```
2. See error
**Expected behavior**
test pass
**Impact**
**Logs and console output**
```
w*** Booting Zephyr OS build zephyr-v2.6.0-286-g46029914a7ac ***
Running test suite error_hook_tests
===================================================================
START - test_catch_assert_fail
ASSERTION FAIL [a != ((void *)0)] @ WEST_TOPDIR/zephyr/tests/ztest/error_hook/src/main.c:41
parameter a should not be NULL!
Caught assert failed
Assert error expected as part of test case.
PASS - test_catch_assert_fail in 0.17 seconds
===================================================================
START - test_catch_fatal_error
case type is 0
ASSERTION FAIL [esf != ((void *)0)] @ WEST_TOPDIR/zephyr/arch/arm/core/aarch32/cortex_m/fault.c:993
ESF could not be retrieved successfully. Shall never occur.
Caught assert failed
Assert failed was unexpected, aborting...
ASSERTION FAIL [esf != ((void *)0)] @ WEST_TOPDIR/zephyr/arch/arm/core/aarch32/cortex_m/fault.c:993
ESF could not be retrieved successfully. Shall never occur.
Caught assert failed
Assert failed was unexpected, aborting...
```
**Environment (please complete the following information):**
- OS: (e.g. Linux )
- Toolchain (e.g Zephyr SDK)
- Commit SHA or Version used: zephyr-v2.6.0-286-g46029914a7ac
|
1.0
|
tests-ci :testing.ztest.error_hook.catch_assert_in_isr : zephyr-v2.6.0-286-g46029914a7ac: lpcxpresso55s28: test Timeout
-
**Describe the bug**
The testing.ztest.error_hook.catch_assert_in_isr test times out on zephyr-v2.6.0-286-g46029914a7ac on lpcxpresso55s28.
see logs for details
**To Reproduce**
1.
```
scripts/twister --device-testing --device-serial /dev/ttyACM0 -p lpcxpresso55s28 --testcase-root tests --sub-test testing.ztest
```
2. See error
**Expected behavior**
test pass
**Impact**
**Logs and console output**
```
w*** Booting Zephyr OS build zephyr-v2.6.0-286-g46029914a7ac ***
Running test suite error_hook_tests
===================================================================
START - test_catch_assert_fail
ASSERTION FAIL [a != ((void *)0)] @ WEST_TOPDIR/zephyr/tests/ztest/error_hook/src/main.c:41
parameter a should not be NULL!
Caught assert failed
Assert error expected as part of test case.
PASS - test_catch_assert_fail in 0.17 seconds
===================================================================
START - test_catch_fatal_error
case type is 0
ASSERTION FAIL [esf != ((void *)0)] @ WEST_TOPDIR/zephyr/arch/arm/core/aarch32/cortex_m/fault.c:993
ESF could not be retrieved successfully. Shall never occur.
Caught assert failed
Assert failed was unexpected, aborting...
ASSERTION FAIL [esf != ((void *)0)] @ WEST_TOPDIR/zephyr/arch/arm/core/aarch32/cortex_m/fault.c:993
ESF could not be retrieved successfully. Shall never occur.
Caught assert failed
Assert failed was unexpected, aborting...
```
**Environment (please complete the following information):**
- OS: (e.g. Linux )
- Toolchain (e.g Zephyr SDK)
- Commit SHA or Version used: zephyr-v2.6.0-286-g46029914a7ac
|
test
|
tests ci testing ztest error hook catch assert in isr zephyr test timeout describe the bug testing ztest error hook catch assert in isr test is timeout on zephyr on see logs for details to reproduce scripts twister device testing device serial dev p testcase root tests sub test testing ztest see error expected behavior test pass impact logs and console output w booting zephyr os build zephyr running test suite error hook tests start test catch assert fail assertion fail west topdir zephyr tests ztest error hook src main c parameter a should not be null caught assert failed assert error expected as part of test case pass test catch assert fail in seconds start test catch fatal error case type is assertion fail west topdir zephyr arch arm core cortex m fault c esf could not be retrieved successfully shall never occur caught assert failed assert failed was unexpected aborting assertion fail west topdir zephyr arch arm core cortex m fault c esf could not be retrieved successfully shall never occur caught assert failed assert failed was unexpected aborting environment please complete the following information os e g linux toolchain e g zephyr sdk commit sha or version used zephyr
| 1
|
175,815
| 13,611,488,398
|
IssuesEvent
|
2020-09-23 08:56:23
|
prisma/prisma-client-js
|
https://api.github.com/repos/prisma/prisma-client-js
|
closed
|
Integration Tests, Errors: Writing a signed Int to an unsigned column
|
kind/improvement team/engines team/typescript topic: error topic: tests
|
We should add a test that confirms and tracks the behavior and error message when you try to write a signed Int to an unsigned Int column.
See https://github.com/prisma/prisma/issues/1823#issuecomment-670524499
|
1.0
|
Integration Tests, Errors: Writing a signed Int to an unsigned column - We should add a test that confirms and tracks the behavior and error message when you try to write a signed Int to an unsigned Int column.
See https://github.com/prisma/prisma/issues/1823#issuecomment-670524499
|
test
|
integration tests errors writing a signed int to an unsigned column we should add a test that confirms and tracks the behavior and error message when you try to write a signed int to an unsigned int column see
| 1
|
83,617
| 15,712,467,847
|
IssuesEvent
|
2021-03-27 12:15:23
|
emilykaldwin1827/goof
|
https://api.github.com/repos/emilykaldwin1827/goof
|
closed
|
WS-2019-0231 (Medium) detected in adm-zip-0.4.7.tgz
|
security vulnerability
|
## WS-2019-0231 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>adm-zip-0.4.7.tgz</b></p></summary>
<p>A Javascript implementation of zip for nodejs. Allows user to create or extract zip files both in memory or to/from disk</p>
<p>Library home page: <a href="https://registry.npmjs.org/adm-zip/-/adm-zip-0.4.7.tgz">https://registry.npmjs.org/adm-zip/-/adm-zip-0.4.7.tgz</a></p>
<p>Path to dependency file: goof/package.json</p>
<p>Path to vulnerable library: goof/node_modules/adm-zip/package.json</p>
<p>
Dependency Hierarchy:
- :x: **adm-zip-0.4.7.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/emilykaldwin1827/goof/commit/27563f2447d85b487d3c44ea67f0f561f0c44b91">27563f2447d85b487d3c44ea67f0f561f0c44b91</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
adm-zip versions before 0.4.9 are vulnerable to Arbitrary File Write due to extraction of a specifically crafted archive that contains path traversal filenames
<p>Publish Date: 2018-04-22
<p>URL: <a href=https://hackerone.com/reports/362118>WS-2019-0231</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 2 Score Details (<b>5.0</b>)</summary>
<p>
Base Score Metrics not available</p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://www.npmjs.com/advisories/994">https://www.npmjs.com/advisories/994</a></p>
<p>Release Date: 2019-09-09</p>
<p>Fix Resolution: 0.4.9</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
True
|
WS-2019-0231 (Medium) detected in adm-zip-0.4.7.tgz - ## WS-2019-0231 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>adm-zip-0.4.7.tgz</b></p></summary>
<p>A Javascript implementation of zip for nodejs. Allows user to create or extract zip files both in memory or to/from disk</p>
<p>Library home page: <a href="https://registry.npmjs.org/adm-zip/-/adm-zip-0.4.7.tgz">https://registry.npmjs.org/adm-zip/-/adm-zip-0.4.7.tgz</a></p>
<p>Path to dependency file: goof/package.json</p>
<p>Path to vulnerable library: goof/node_modules/adm-zip/package.json</p>
<p>
Dependency Hierarchy:
- :x: **adm-zip-0.4.7.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/emilykaldwin1827/goof/commit/27563f2447d85b487d3c44ea67f0f561f0c44b91">27563f2447d85b487d3c44ea67f0f561f0c44b91</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
adm-zip versions before 0.4.9 are vulnerable to Arbitrary File Write due to extraction of a specifically crafted archive that contains path traversal filenames
<p>Publish Date: 2018-04-22
<p>URL: <a href=https://hackerone.com/reports/362118>WS-2019-0231</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 2 Score Details (<b>5.0</b>)</summary>
<p>
Base Score Metrics not available</p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://www.npmjs.com/advisories/994">https://www.npmjs.com/advisories/994</a></p>
<p>Release Date: 2019-09-09</p>
<p>Fix Resolution: 0.4.9</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
non_test
|
ws medium detected in adm zip tgz ws medium severity vulnerability vulnerable library adm zip tgz a javascript implementation of zip for nodejs allows user to create or extract zip files both in memory or to from disk library home page a href path to dependency file goof package json path to vulnerable library goof node modules adm zip package json dependency hierarchy x adm zip tgz vulnerable library found in head commit a href found in base branch master vulnerability details adm zip versions before are vulnerable to arbitrary file write due to extraction of a specifically crafted archive that contains path traversal filenames publish date url a href cvss score details base score metrics not available suggested fix type upgrade version origin a href release date fix resolution step up your open source security game with whitesource
| 0
|
4,492
| 2,729,432,771
|
IssuesEvent
|
2015-04-16 08:31:38
|
GoogleCloudPlatform/kubernetes
|
https://api.github.com/repos/GoogleCloudPlatform/kubernetes
|
closed
|
Density test is failing in 100-node cluster on cleanup
|
area/performance misc/perf-1.0 priority/P1 team/testing
|
In Jenkins scalability suite, Density test is failing on cleanup:
Expected error:
<*errors.errorString | 0xc20d27c460>: {
s: "Error waiting for replication controller my-hostname-density3000-7c8a90c7-e362-11e4-90b1-42010af01555 replicas to reach 0: Get http://127.0.0.1:4001/v2/keys/registry/controllers/default/my-hostname-density3000-7c8a90c7-e362-11e4-90b1-42010af01555?quorum=false&recursive=false&sorted=false: dial tcp 127.0.0.1:4001: cannot assign requested address",
}
Error waiting for replication controller my-hostname-density3000-7c8a90c7-e362-11e4-90b1-42010af01555 replicas to reach 0: Get http://127.0.0.1:4001/v2/keys/registry/controllers/default/my-hostname-density3000-7c8a90c7-e362-11e4-90b1-42010af01555?quorum=false&recursive=false&sorted=false: dial tcp 127.0.0.1:4001: cannot assign requested address
not to have occurred
We need some retrying while waiting for rc to be deleted (similar to what I've done for waiting for all replicas in #6854)
|
1.0
|
Density test is failing in 100-node cluster on cleanup - In Jenkins scalability suite, Density test is failing on cleanup:
Expected error:
<*errors.errorString | 0xc20d27c460>: {
s: "Error waiting for replication controller my-hostname-density3000-7c8a90c7-e362-11e4-90b1-42010af01555 replicas to reach 0: Get http://127.0.0.1:4001/v2/keys/registry/controllers/default/my-hostname-density3000-7c8a90c7-e362-11e4-90b1-42010af01555?quorum=false&recursive=false&sorted=false: dial tcp 127.0.0.1:4001: cannot assign requested address",
}
Error waiting for replication controller my-hostname-density3000-7c8a90c7-e362-11e4-90b1-42010af01555 replicas to reach 0: Get http://127.0.0.1:4001/v2/keys/registry/controllers/default/my-hostname-density3000-7c8a90c7-e362-11e4-90b1-42010af01555?quorum=false&recursive=false&sorted=false: dial tcp 127.0.0.1:4001: cannot assign requested address
not to have occurred
We need some retrying while waiting for rc to be deleted (similar to what I've done for waiting for all replicas in #6854)
|
test
|
density test is failing in node cluster on cleanup in jenkins scalability suite density test is failing on cleanup expected error s error waiting for replication controller my hostname replicas to reach get dial tcp cannot assign requested address error waiting for replication controller my hostname replicas to reach get dial tcp cannot assign requested address not to have occurred we need some retrying while waiting for rc to be deleted similar to what i ve done for waiting for all replicas in
| 1
|
235,378
| 7,737,626,718
|
IssuesEvent
|
2018-05-28 08:56:21
|
fuse-open/fuselibs
|
https://api.github.com/repos/fuse-open/fuselibs
|
closed
|
Example app crashing on Android
|
Priority: Critical Severity: Bug Severity: Release blocker
|
I built the release app to test #1093 but upon launching on Android I get a crash. I'm currently using `master` branches. I will test with 1.8 as well.
```
Running logcat on 'SWVGJV6D4L6HGILZ'
03-13 08:39:15.585 783 3473 I ActivityManager: Start proc 10194:com.apps.example/u0a136 for activity com.apps.example/.example
03-13 08:39:15.700 10194 10194 I art : Late-enabling -Xcheck:jni
03-13 08:39:15.741 783 3473 I libPerfService: perfSetFavorPid - pid:10194, 27d2
03-13 08:39:15.744 1385 1776 D ForegroundUtils: Foreground changed, PID: 10194 UID: 10136 foreground: true
03-13 08:39:15.744 1385 1776 D ForegroundUtils: UID: 10136 PID: 10194
03-13 08:39:15.756 10194 10194 D ActivityThread: BIND_APPLICATION handled : 0 / AppBindData{appInfo=ApplicationInfo{d12a837 com.apps.example}}
03-13 08:39:15.756 10194 10194 V ActivityThread: Handling launch of ActivityRecord{1bc91a4 token=android.os.BinderProxy@a822b0d {com.apps.example/com.apps.example.example}} startsNotResumed=false
03-13 08:39:15.765 10194 10194 D example : SDK: 23
03-13 08:39:15.826 10194 10194 V ActivityThread: ActivityRecord{1bc91a4 token=android.os.BinderProxy@a822b0d {com.apps.example/com.apps.example.example}}: app=android.app.Application@95f90d3, appName=com.apps.example, pkg=com.apps.example, comp={com.apps.example/com.apps.example.example}, dir=/data/app/com.apps.example-2/base.apk
03-13 08:39:15.895 10194 10194 I BufferQueue: [unnamed-10194-0](this:0xf46f6400,id:0,api:0,p:-1,c:-1) BufferQueue core=(10194:com.apps.example)
03-13 08:39:15.897 10194 10194 D BufferQueueDump: [unnamed-10194-0] this:0xf477e0c0, value:0xffadd800, iLen:6
03-13 08:39:15.931 10194 10194 I BufferQueueConsumer: [unnamed-10194-0](this:0xf46f6400,id:0,api:0,p:-1,c:10194) connect(C): consumer=(10194:com.apps.example) controlledByApp=true
03-13 08:39:15.931 10194 10194 D BufferQueueDump: [unnamed-10194-0] this:0xf477e0c0, value:0xffadd840, iLen:6
03-13 08:39:15.931 10194 10194 I BufferQueueConsumer: [unnamed-10194-0](this:0xf46f6400,id:0,api:0,p:-1,c:10194) setConsumerName: unnamed-10194-0
03-13 08:39:15.931 10194 10194 D BufferQueueDump: [SurfaceTexture-1-10194-0] this:0xf477e0c0, value:0xffadd878, iLen:6
03-13 08:39:15.931 10194 10194 I BufferQueueConsumer: [SurfaceTexture-1-10194-0](this:0xf46f6400,id:0,api:0,p:-1,c:10194) setConsumerName: SurfaceTexture-1-10194-0
03-13 08:39:15.943 10194 10194 D Surface : Surface::setBuffersUserDimensions(this=0xef485f00,w=0,h=0)
03-13 08:39:15.943 10194 10194 D Surface : Surface::connect(this=0xef485f00,api=1)
03-13 08:39:15.943 10194 10194 I BufferQueueProducer: [SurfaceTexture-1-10194-0](this:0xf46f6400,id:0,api:1,p:10194,c:10194) connect(P): api=1 producer=(10194:com.apps.example) producerControlledByApp=true
03-13 08:39:15.943 10194 10194 W libEGL : [ANDROID_RECORDABLE] format: 1
03-13 08:39:15.944 10194 10194 D mali_winsys: new_window_surface returns 0x3000
03-13 08:39:15.949 10194 10194 F libc : Fatal signal 11 (SIGSEGV), code 1, fault addr 0x1a in tid 10194 (om.apps.example)
03-13 08:39:15.961 10257 10257 I AEE/AED : check process 10194 name:om.apps.example
03-13 08:39:15.961 10257 10257 I AEE/AED : tid 10194 abort msg address is:0x00000000, si_code is:0 (request from 1:10194)
03-13 08:39:15.961 10257 10257 I AEE/AED : BOOM: pid=10194 uid=10136 gid=10136 tid=10194
03-13 08:39:15.962 10257 10257 I AEE/AED : [OnPurpose Redunant in void preset_info(aed_report_record*, int, int)] pid: 10194, tid: 10194, name: om.apps.example >>> com.apps.example <<<
03-13 08:39:16.014 10257 10257 I AEE/AED : pid: 10194, tid: 10194, name: om.apps.example >>> com.apps.example <<<
03-13 08:39:16.030 783 3473 I libPerfService: perfGetLastBoostPid 10194
03-13 08:39:16.030 783 3473 D PerfServiceManager: [PerfService] getLastBoostPid 10194
03-13 08:39:16.030 783 3473 I libPerfService: perfGetLastBoostPid 10194
03-13 08:39:16.343 783 10267 D AES : pid : 10194
03-13 08:39:16.348 201 201 E lowmemorykiller: Error writing /proc/10194/oom_score_adj; errno=22
03-13 08:39:16.401 308 9940 D GuiExt : [GuiExtS] binder of dump tunnel(BQM-[10194:com.apps.example]) died
03-13 08:39:16.401 783 1439 I ActivityManager: Process com.apps.example (pid 10194) has died
03-13 08:39:16.401 783 1439 D ActivityManager: SVC-handleAppDiedLocked: app = ProcessRecord{e167e3 10194:com.apps.example/u0a136}, app.pid = 10194
03-13 08:39:16.401 783 794 D DisplayManagerService: Display listener for pid 10194 died.
03-13 08:39:16.401 783 1439 D ActivityManager: cleanUpApplicationRecord -- 10194
03-13 08:39:16.416 1385 1802 D ForegroundUtils: Process died; UID 10136 PID 10194
03-13 08:39:16.416 1385 1802 D ForegroundUtils: Foreground changed, PID: 10194 UID: 10136 foreground: false
03-13 08:39:16.449 311 311 I Zygote : Process 10194 exited due to signal (11)
Process com.apps.example terminated.
```
Lenovo TB3-X70F
|
1.0
|
Example app crashing on Android - I built the release app to test #1093 but upon launching on Android I get a crash. I'm currently using `master` branches. I will test with 1.8 as well.
```
Running logcat on 'SWVGJV6D4L6HGILZ'
03-13 08:39:15.585 783 3473 I ActivityManager: Start proc 10194:com.apps.example/u0a136 for activity com.apps.example/.example
03-13 08:39:15.700 10194 10194 I art : Late-enabling -Xcheck:jni
03-13 08:39:15.741 783 3473 I libPerfService: perfSetFavorPid - pid:10194, 27d2
03-13 08:39:15.744 1385 1776 D ForegroundUtils: Foreground changed, PID: 10194 UID: 10136 foreground: true
03-13 08:39:15.744 1385 1776 D ForegroundUtils: UID: 10136 PID: 10194
03-13 08:39:15.756 10194 10194 D ActivityThread: BIND_APPLICATION handled : 0 / AppBindData{appInfo=ApplicationInfo{d12a837 com.apps.example}}
03-13 08:39:15.756 10194 10194 V ActivityThread: Handling launch of ActivityRecord{1bc91a4 token=android.os.BinderProxy@a822b0d {com.apps.example/com.apps.example.example}} startsNotResumed=false
03-13 08:39:15.765 10194 10194 D example : SDK: 23
03-13 08:39:15.826 10194 10194 V ActivityThread: ActivityRecord{1bc91a4 token=android.os.BinderProxy@a822b0d {com.apps.example/com.apps.example.example}}: app=android.app.Application@95f90d3, appName=com.apps.example, pkg=com.apps.example, comp={com.apps.example/com.apps.example.example}, dir=/data/app/com.apps.example-2/base.apk
03-13 08:39:15.895 10194 10194 I BufferQueue: [unnamed-10194-0](this:0xf46f6400,id:0,api:0,p:-1,c:-1) BufferQueue core=(10194:com.apps.example)
03-13 08:39:15.897 10194 10194 D BufferQueueDump: [unnamed-10194-0] this:0xf477e0c0, value:0xffadd800, iLen:6
03-13 08:39:15.931 10194 10194 I BufferQueueConsumer: [unnamed-10194-0](this:0xf46f6400,id:0,api:0,p:-1,c:10194) connect(C): consumer=(10194:com.apps.example) controlledByApp=true
03-13 08:39:15.931 10194 10194 D BufferQueueDump: [unnamed-10194-0] this:0xf477e0c0, value:0xffadd840, iLen:6
03-13 08:39:15.931 10194 10194 I BufferQueueConsumer: [unnamed-10194-0](this:0xf46f6400,id:0,api:0,p:-1,c:10194) setConsumerName: unnamed-10194-0
03-13 08:39:15.931 10194 10194 D BufferQueueDump: [SurfaceTexture-1-10194-0] this:0xf477e0c0, value:0xffadd878, iLen:6
03-13 08:39:15.931 10194 10194 I BufferQueueConsumer: [SurfaceTexture-1-10194-0](this:0xf46f6400,id:0,api:0,p:-1,c:10194) setConsumerName: SurfaceTexture-1-10194-0
03-13 08:39:15.943 10194 10194 D Surface : Surface::setBuffersUserDimensions(this=0xef485f00,w=0,h=0)
03-13 08:39:15.943 10194 10194 D Surface : Surface::connect(this=0xef485f00,api=1)
03-13 08:39:15.943 10194 10194 I BufferQueueProducer: [SurfaceTexture-1-10194-0](this:0xf46f6400,id:0,api:1,p:10194,c:10194) connect(P): api=1 producer=(10194:com.apps.example) producerControlledByApp=true
03-13 08:39:15.943 10194 10194 W libEGL : [ANDROID_RECORDABLE] format: 1
03-13 08:39:15.944 10194 10194 D mali_winsys: new_window_surface returns 0x3000
03-13 08:39:15.949 10194 10194 F libc : Fatal signal 11 (SIGSEGV), code 1, fault addr 0x1a in tid 10194 (om.apps.example)
03-13 08:39:15.961 10257 10257 I AEE/AED : check process 10194 name:om.apps.example
03-13 08:39:15.961 10257 10257 I AEE/AED : tid 10194 abort msg address is:0x00000000, si_code is:0 (request from 1:10194)
03-13 08:39:15.961 10257 10257 I AEE/AED : BOOM: pid=10194 uid=10136 gid=10136 tid=10194
03-13 08:39:15.962 10257 10257 I AEE/AED : [OnPurpose Redunant in void preset_info(aed_report_record*, int, int)] pid: 10194, tid: 10194, name: om.apps.example >>> com.apps.example <<<
03-13 08:39:16.014 10257 10257 I AEE/AED : pid: 10194, tid: 10194, name: om.apps.example >>> com.apps.example <<<
03-13 08:39:16.030 783 3473 I libPerfService: perfGetLastBoostPid 10194
03-13 08:39:16.030 783 3473 D PerfServiceManager: [PerfService] getLastBoostPid 10194
03-13 08:39:16.030 783 3473 I libPerfService: perfGetLastBoostPid 10194
03-13 08:39:16.343 783 10267 D AES : pid : 10194
03-13 08:39:16.348 201 201 E lowmemorykiller: Error writing /proc/10194/oom_score_adj; errno=22
03-13 08:39:16.401 308 9940 D GuiExt : [GuiExtS] binder of dump tunnel(BQM-[10194:com.apps.example]) died
03-13 08:39:16.401 783 1439 I ActivityManager: Process com.apps.example (pid 10194) has died
03-13 08:39:16.401 783 1439 D ActivityManager: SVC-handleAppDiedLocked: app = ProcessRecord{e167e3 10194:com.apps.example/u0a136}, app.pid = 10194
03-13 08:39:16.401 783 794 D DisplayManagerService: Display listener for pid 10194 died.
03-13 08:39:16.401 783 1439 D ActivityManager: cleanUpApplicationRecord -- 10194
03-13 08:39:16.416 1385 1802 D ForegroundUtils: Process died; UID 10136 PID 10194
03-13 08:39:16.416 1385 1802 D ForegroundUtils: Foreground changed, PID: 10194 UID: 10136 foreground: false
03-13 08:39:16.449 311 311 I Zygote : Process 10194 exited due to signal (11)
Process com.apps.example terminated.
```
Lenovo TB3-X70F
|
non_test
|
example app crashing on android i built the release app to test but upon launching on android i get a crash i m currently using master branches i will test with as well running logcat on i activitymanager start proc com apps example for activity com apps example example i art late enabling xcheck jni i libperfservice perfsetfavorpid pid d foregroundutils foreground changed pid uid foreground true d foregroundutils uid pid d activitythread bind application handled appbinddata appinfo applicationinfo com apps example v activitythread handling launch of activityrecord token android os binderproxy com apps example com apps example example startsnotresumed false d example sdk v activitythread activityrecord token android os binderproxy com apps example com apps example example app android app application appname com apps example pkg com apps example comp com apps example com apps example example dir data app com apps example base apk i bufferqueue this id api p c bufferqueue core com apps example d bufferqueuedump this value ilen i bufferqueueconsumer this id api p c connect c consumer com apps example controlledbyapp true d bufferqueuedump this value ilen i bufferqueueconsumer this id api p c setconsumername unnamed d bufferqueuedump this value ilen i bufferqueueconsumer this id api p c setconsumername surfacetexture d surface surface setbuffersuserdimensions this w h d surface surface connect this api i bufferqueueproducer this id api p c connect p api producer com apps example producercontrolledbyapp true w libegl format d mali winsys new window surface returns f libc fatal signal sigsegv code fault addr in tid om apps example i aee aed check process name om apps example i aee aed tid abort msg address is si code is request from i aee aed boom pid uid gid tid i aee aed pid tid name om apps example com apps example i aee aed pid tid name om apps example com apps example i libperfservice perfgetlastboostpid d perfservicemanager getlastboostpid i libperfservice perfgetlastboostpid d aes pid e lowmemorykiller error writing proc oom score adj errno d guiext binder of dump tunnel bqm died i activitymanager process com apps example pid has died d activitymanager svc handleappdiedlocked app processrecord com apps example app pid d displaymanagerservice display listener for pid died d activitymanager cleanupapplicationrecord d foregroundutils process died uid pid d foregroundutils foreground changed pid uid foreground false i zygote process exited due to signal process com apps example terminated lenovo
| 0
|
291,555
| 21,929,690,775
|
IssuesEvent
|
2022-05-23 08:39:30
|
rucio/rucio
|
https://api.github.com/repos/rucio/rucio
|
closed
|
Operators documentation and recipe repository
|
enhancement Documentation
|
Motivation
----------
As discussed at the Community workshop, we want to establish some kind of repository for operator documentation / recipes that allows convenient ways of contributing (a Wiki or similar).
|
1.0
|
Operators documentation and recipe repository - Motivation
----------
As discussed at the Community workshop. We want to establish some kind of repository for operator documentation / recipes which allows convenient ways of contribution (Wiki or similar)
|
non_test
|
operators documentation and recipe repository motivation as discussed at the community workshop we want to establish some kind of repository for operator documentation recipes which allows convenient ways of contribution wiki or similar
| 0
|
320,285
| 27,430,223,186
|
IssuesEvent
|
2023-03-02 00:20:04
|
MPMG-DCC-UFMG/F01
|
https://api.github.com/repos/MPMG-DCC-UFMG/F01
|
closed
|
Generalization test for the Receitas - Dados de Receitas tag - São Geraldo
|
generalization test development
|
DoD: Run the generalization test of the validator for the Receitas - Dados de Receitas tag for the municipality of São Geraldo.
|
1.0
|
Generalization test for the Receitas - Dados de Receitas tag - São Geraldo - DoD: Run the generalization test of the validator for the Receitas - Dados de Receitas tag for the municipality of São Geraldo.
|
test
|
teste de generalizacao para a tag receitas dados de receitas são geraldo dod realizar o teste de generalização do validador da tag receitas dados de receitas para o município de são geraldo
| 1
|
164,449
| 12,806,945,324
|
IssuesEvent
|
2020-07-03 10:25:52
|
zephyrproject-rtos/zephyr
|
https://api.github.com/repos/zephyrproject-rtos/zephyr
|
closed
|
tests/net/utils failed on multiple arm platforms.
|
area: Tests bug
|
**Describe the bug**
tests/net/utils failed on frdm_k64f, mimxrt1050_evk, sam_e70_xplained boards.
**To Reproduce**
Steps to reproduce the behavior:
1. west build -b frdm_k64f -p auto tests/net/utils
2. west flash
3. see error
**Logs and console output**
*** Booting Zephyr OS build zephyr-v2.3.0-720-gab3e778f47d8 ***
Running test suite test_utils_fn
START - test_net_addr
START - test_ipv4_pton_1
Failed to verify
failed
START - test_ipv4_pton_2
Failed to verify
failed
START - test_ipv4_pton_3
Failed to verify
failed
START - test_ipv4_pton_4
Failed to verify 255.255.255
failed
START - test_ipv4_pton_5
passed
START - test_ipv4_pton_6
Failed to verify
failed
START - test_ipv4_pton_7
Failed to verify
failed
START - test_ipv4_pton_8
Failed to verify
failed
**START - test_ipv6_pton_1
E: ***** BUS FAULT *****
E: Precise data bus error**
E: BFAR Address: 0x20002788
E: NXP MPU error, port 3
E: Mode: User, Data Address: 0x20002788
E: Type: Read, Master: 0, Regions: 0x8800
E: r0/a1: 0x00000002 r1/a2: 0x20000d3c r2/a3: 0x00000000
E: r3/a4: 0x20002788 r12/ip: 0x00011137 r14/lr: 0x00000b41
E: xpsr: 0xa1000000
E: Faulting instruction address (r15/pc): 0x00005632
E: >>> ZEPHYR FATAL ERROR 0: CPU exception on CPU 0
E: Current thread: 0x200021f8 (unknown)
E: Halting system
**Environment (please complete the following information):**
- OS: fedora28
- Toolchain: zephyr-sdk-0.11.4
- Commit ID: ab3e778f47d8
|
1.0
|
tests/net/utils failed on multiple arm platforms. - **Describe the bug**
tests/net/utils failed on frdm_k64f, mimxrt1050_evk, sam_e70_xplained boards.
**To Reproduce**
Steps to reproduce the behavior:
1. west build -b frdm_k64f -p auto tests/net/utils
2. west flash
3. see error
**Logs and console output**
*** Booting Zephyr OS build zephyr-v2.3.0-720-gab3e778f47d8 ***
Running test suite test_utils_fn
START - test_net_addr
START - test_ipv4_pton_1
Failed to verify
failed
START - test_ipv4_pton_2
Failed to verify
failed
START - test_ipv4_pton_3
Failed to verify
failed
START - test_ipv4_pton_4
Failed to verify 255.255.255
failed
START - test_ipv4_pton_5
passed
START - test_ipv4_pton_6
Failed to verify
failed
START - test_ipv4_pton_7
Failed to verify
failed
START - test_ipv4_pton_8
Failed to verify
failed
**START - test_ipv6_pton_1
E: ***** BUS FAULT *****
E: Precise data bus error**
E: BFAR Address: 0x20002788
E: NXP MPU error, port 3
E: Mode: User, Data Address: 0x20002788
E: Type: Read, Master: 0, Regions: 0x8800
E: r0/a1: 0x00000002 r1/a2: 0x20000d3c r2/a3: 0x00000000
E: r3/a4: 0x20002788 r12/ip: 0x00011137 r14/lr: 0x00000b41
E: xpsr: 0xa1000000
E: Faulting instruction address (r15/pc): 0x00005632
E: >>> ZEPHYR FATAL ERROR 0: CPU exception on CPU 0
E: Current thread: 0x200021f8 (unknown)
E: Halting system
**Environment (please complete the following information):**
- OS: fedora28
- Toolchain: zephyr-sdk-0.11.4
- Commit ID: ab3e778f47d8
|
test
|
tests net utils failed on multiple arm platforms describe the bug tests net utils failed on frdm evk sam xplained boards to reproduce steps to reproduce the behavior west build b frdm p auto tests net utils west flash see error logs and console output booting zephyr os build zephyr running test suite test utils fn start test net addr start test pton failed to verify failed start test pton failed to verify failed start test pton failed to verify failed start test pton failed to verify failed start test pton passed start test pton failed to verify failed start test pton failed to verify failed start test pton failed to verify failed start test pton e bus fault e precise data bus error e bfar address e nxp mpu error port e mode user data address e type read master regions e e ip lr e xpsr e faulting instruction address pc e zephyr fatal error cpu exception on cpu e current thread unknown e halting system environment please complete the following information os toolchain zephyr sdk commit id
| 1
|
249,958
| 27,012,178,090
|
IssuesEvent
|
2023-02-10 16:13:41
|
snowdensb/SecurityShepherd
|
https://api.github.com/repos/snowdensb/SecurityShepherd
|
opened
|
CVE-2015-0254 (High) detected in jstl-1.2.jar
|
security vulnerability
|
## CVE-2015-0254 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>jstl-1.2.jar</b></p></summary>
<p></p>
<p>Path to dependency file: /pom.xml</p>
<p>Path to vulnerable library: /home/wss-scanner/.m2/repository/jstl/jstl/1.2/jstl-1.2.jar</p>
<p>
Dependency Hierarchy:
- :x: **jstl-1.2.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/snowdensb/SecurityShepherd/commit/1ec9c491ae57398d2c355bac56538b1340c3c189">1ec9c491ae57398d2c355bac56538b1340c3c189</a></p>
<p>Found in base branch: <b>dev</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Apache Standard Taglibs before 1.2.3 allows remote attackers to execute arbitrary code or conduct external XML entity (XXE) attacks via a crafted XSLT extension in a (1) <x:parse> or (2) <x:transform> JSTL XML tag.
<p>Publish Date: 2015-03-09
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2015-0254>CVE-2015-0254</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.3</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: Low
- Integrity Impact: Low
- Availability Impact: Low
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://tomcat.apache.org/taglibs/standard/">https://tomcat.apache.org/taglibs/standard/</a></p>
<p>Release Date: 2015-03-09</p>
<p>Fix Resolution: org.apache.taglibs:taglibs-standard-impl:1.2.3</p>
</p>
</details>
<p></p>
***
:rescue_worker_helmet: Automatic Remediation is available for this issue
|
True
|
CVE-2015-0254 (High) detected in jstl-1.2.jar - ## CVE-2015-0254 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>jstl-1.2.jar</b></p></summary>
<p></p>
<p>Path to dependency file: /pom.xml</p>
<p>Path to vulnerable library: /home/wss-scanner/.m2/repository/jstl/jstl/1.2/jstl-1.2.jar</p>
<p>
Dependency Hierarchy:
- :x: **jstl-1.2.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/snowdensb/SecurityShepherd/commit/1ec9c491ae57398d2c355bac56538b1340c3c189">1ec9c491ae57398d2c355bac56538b1340c3c189</a></p>
<p>Found in base branch: <b>dev</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Apache Standard Taglibs before 1.2.3 allows remote attackers to execute arbitrary code or conduct external XML entity (XXE) attacks via a crafted XSLT extension in a (1) <x:parse> or (2) <x:transform> JSTL XML tag.
<p>Publish Date: 2015-03-09
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2015-0254>CVE-2015-0254</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.3</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: Low
- Integrity Impact: Low
- Availability Impact: Low
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://tomcat.apache.org/taglibs/standard/">https://tomcat.apache.org/taglibs/standard/</a></p>
<p>Release Date: 2015-03-09</p>
<p>Fix Resolution: org.apache.taglibs:taglibs-standard-impl:1.2.3</p>
</p>
</details>
<p></p>
***
:rescue_worker_helmet: Automatic Remediation is available for this issue
|
non_test
|
cve high detected in jstl jar cve high severity vulnerability vulnerable library jstl jar path to dependency file pom xml path to vulnerable library home wss scanner repository jstl jstl jstl jar dependency hierarchy x jstl jar vulnerable library found in head commit a href found in base branch dev vulnerability details apache standard taglibs before allows remote attackers to execute arbitrary code or conduct external xml entity xxe attacks via a crafted xslt extension in a or jstl xml tag publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact low integrity impact low availability impact low for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution org apache taglibs taglibs standard impl rescue worker helmet automatic remediation is available for this issue
| 0
|
130,949
| 10,676,551,494
|
IssuesEvent
|
2019-10-21 13:57:46
|
input-output-hk/ouroboros-network
|
https://api.github.com/repos/input-output-hk/ouroboros-network
|
opened
|
Immutable DB: make the model polymorphic in its block type
|
consensus maintenance testing
|
#1144 adds `dbmTipBlock :: DBModel hash -> Maybe TestBlock`, which requires deserialising the block at the tip. This is needed to generate the next block in the state machine tests. We should avoid such costly/ugly deserialisations by storing the unserialised block directly. The `DBModel` can be made parametric in the block type. It can be instantiated to `TestBlock`, `ByteString`, or even `(TestBlock, ByteString)`. Note that the corruption simulation code needs the `ByteString`s.
Such a change would mean that we no longer have one `run` function in the state machine tests, as the ImmutableDB API is in terms of `Builder`s (for appending) and `ByteString`s (for reading), but one for the model and one for the real implementation. This split will also make simulating corruptions easier and will help us avoid passing around the `runCorruption` function.
|
1.0
|
Immutable DB: make the model polymorphic in its block type - #1144 adds `dbmTipBlock :: DBModel hash -> Maybe TestBlock`, which requires deserialising the block at the tip. This is needed to generate the next block in the state machine tests. We should avoid such costly/ugly deserialisations by storing the unserialised block directly. The `DBModel` can be made parametric in the block type. It can be instantiated to `TestBlock`, `ByteString`, or even `(TestBlock, ByteString)`. Note that the corruption simulation code needs the `ByteString`s.
Such a change would mean that we no longer have one `run` function in the state machine tests, as the ImmutableDB API is in terms of `Builder`s (for appending) and `ByteString`s (for reading), but one for the model and one for the real implementation. This split will also make simulating corruptions easier and will help us avoid passing around the `runCorruption` function.
|
test
|
immutable db make the model polymorphic in its block type adds dbmtipblock dbmodel hash maybe testblock which requires deserialising the block at the tip this is needed to generate the next block in the state machine tests we should avoiding such costly ugly deserialisations by storing the unserialised block directly the dbmodel can be made parametric in the block type it can be instantiated to testblock bytestring or even testblock bytestring note that the corruption simulation code needs the bytestring s such a change would mean that we no longer have one run function in the state machine tests as the immutabledb api is in terms of builder s for appending and bytestring s for reading but one for the model and one for the real implementation this split will also make simulating corruptions easier and will help us avoid passing around the runcorruption function
| 1
|
345,830
| 30,846,509,591
|
IssuesEvent
|
2023-08-02 14:03:16
|
ita-social-projects/Space2Study-Client-mvp
|
https://api.github.com/repos/ita-social-projects/Space2Study-Client-mvp
|
closed
|
(SP: 1) Write unit test for "AppContentSwitcher" component
|
FrontEnd part Unit test
|
### Component unit test
Unit test for **"AppContentSwitcher"** component
Scenario descriptions:
- [x] Should render correctly with props
- [x] Should call the function "onChange" when it was clicked on the switch
- [x] Should render tooltips if the tooltips props are passed
[Link to component](https://github.com/ita-social-projects/Space2Study-Client-mvp/blob/develop/tests/unit/components/app-content-switcher/AppContentSwitcher.spec.jsx)
Current coverage:
<img width="956" alt="image" src="https://github.com/ita-social-projects/Space2Study-Client-mvp/assets/90138904/d46f2e7c-03ee-4722-82d6-0ba8d43a3b22">
|
1.0
|
(SP: 1) Write unit test for "AppContentSwitcher" component - ### Component unit test
Unit test for **"AppContentSwitcher"** component
Scenario descriptions:
- [x] Should render correctly with props
- [x] Should call the function "onChange" when it was clicked on the switch
- [x] Should render tooltips if the tooltips props are passed
[Link to component](https://github.com/ita-social-projects/Space2Study-Client-mvp/blob/develop/tests/unit/components/app-content-switcher/AppContentSwitcher.spec.jsx)
Current coverage:
<img width="956" alt="image" src="https://github.com/ita-social-projects/Space2Study-Client-mvp/assets/90138904/d46f2e7c-03ee-4722-82d6-0ba8d43a3b22">
|
test
|
sp write unit test for appcontentswitcher component component unit test unit test for appcontentswitcher component scenaries descriptions should render correctly with props should call the function onchange when it was clicked on the switch should render tooltips if the tooltips props are passed current coverage img width alt image src
| 1
|
169,022
| 13,111,001,302
|
IssuesEvent
|
2020-08-04 21:52:46
|
cockroachdb/cockroach
|
https://api.github.com/repos/cockroachdb/cockroach
|
opened
|
unknown: package timed out
|
C-test-failure O-robot branch-release-20.1
|
[(unknown).(unknown) failed](https://teamcity.cockroachdb.com/viewLog.html?buildId=2148615&tab=buildLog) on [release-20.1@0dcf35be8c499daef1efc3b9bc5838695b82a352](https://github.com/cockroachdb/cockroach/commits/0dcf35be8c499daef1efc3b9bc5838695b82a352):
```
Slow failing tests:
TestLogic/fakedist-disk/distinct_on - 0.33s
Slow passing tests:
TestBackfillErrors - 927.22s
TestIndexSkipTableReaderMisplannedRangesMetadata - 904.88s
TestNextRowInterleaved - 898.39s
TestCreateRandomSchema - 126.07s
TestRepartitioning - 103.36s
TestInitialPartitioning - 85.58s
TestRingBuffer - 78.40s
TestChangefeedSchemaChangeNoBackfill - 62.23s
TestPrimaryKeyChangeWithCancel - 61.17s
TestBTree - 59.38s
TestExecBuild - 52.33s
TestMigrateSchemaChanges - 51.29s
TestDumpData - 50.83s
TestRemoveDeadReplicas - 50.22s
TestColumnConversions - 46.23s
TestSpillingQueue - 44.43s
TestBTree - 44.28s
TestChangefeedNoBackfill - 42.31s
TestChangefeedDiff - 41.61s
TestAggregatorRandom - 39.60s
```
<details><summary>More</summary><p>
Parameters:
- GOFLAGS=-json
```
make stressrace TESTS=(unknown) PKG=./pkg/unknown TESTTIMEOUT=5m STRESSFLAGS='-timeout 5m' 2>&1
```
Related:
- #46323 unknown: package timed out [C-test-failure](https://api.github.com/repos/cockroachdb/cockroach/labels/C-test-failure) [O-robot](https://api.github.com/repos/cockroachdb/cockroach/labels/O-robot) [branch-release-note-script-update](https://api.github.com/repos/cockroachdb/cockroach/labels/branch-release-note-script-update)
- #43564 unknown: package timed out [C-test-failure](https://api.github.com/repos/cockroachdb/cockroach/labels/C-test-failure) [O-robot](https://api.github.com/repos/cockroachdb/cockroach/labels/O-robot) [branch-master](https://api.github.com/repos/cockroachdb/cockroach/labels/branch-master)
[See this test on roachdash](https://roachdash.crdb.dev/?filter=status%3Aopen+t%3A.%2A%28unknown%29.%2A&sort=title&restgroup=false&display=lastcommented+project)
<sub>powered by [pkg/cmd/internal/issues](https://github.com/cockroachdb/cockroach/tree/master/pkg/cmd/internal/issues)</sub></p></details>
|
1.0
|
unknown: package timed out - [(unknown).(unknown) failed](https://teamcity.cockroachdb.com/viewLog.html?buildId=2148615&tab=buildLog) on [release-20.1@0dcf35be8c499daef1efc3b9bc5838695b82a352](https://github.com/cockroachdb/cockroach/commits/0dcf35be8c499daef1efc3b9bc5838695b82a352):
```
Slow failing tests:
TestLogic/fakedist-disk/distinct_on - 0.33s
Slow passing tests:
TestBackfillErrors - 927.22s
TestIndexSkipTableReaderMisplannedRangesMetadata - 904.88s
TestNextRowInterleaved - 898.39s
TestCreateRandomSchema - 126.07s
TestRepartitioning - 103.36s
TestInitialPartitioning - 85.58s
TestRingBuffer - 78.40s
TestChangefeedSchemaChangeNoBackfill - 62.23s
TestPrimaryKeyChangeWithCancel - 61.17s
TestBTree - 59.38s
TestExecBuild - 52.33s
TestMigrateSchemaChanges - 51.29s
TestDumpData - 50.83s
TestRemoveDeadReplicas - 50.22s
TestColumnConversions - 46.23s
TestSpillingQueue - 44.43s
TestBTree - 44.28s
TestChangefeedNoBackfill - 42.31s
TestChangefeedDiff - 41.61s
TestAggregatorRandom - 39.60s
```
<details><summary>More</summary><p>
Parameters:
- GOFLAGS=-json
```
make stressrace TESTS=(unknown) PKG=./pkg/unknown TESTTIMEOUT=5m STRESSFLAGS='-timeout 5m' 2>&1
```
Related:
- #46323 unknown: package timed out [C-test-failure](https://api.github.com/repos/cockroachdb/cockroach/labels/C-test-failure) [O-robot](https://api.github.com/repos/cockroachdb/cockroach/labels/O-robot) [branch-release-note-script-update](https://api.github.com/repos/cockroachdb/cockroach/labels/branch-release-note-script-update)
- #43564 unknown: package timed out [C-test-failure](https://api.github.com/repos/cockroachdb/cockroach/labels/C-test-failure) [O-robot](https://api.github.com/repos/cockroachdb/cockroach/labels/O-robot) [branch-master](https://api.github.com/repos/cockroachdb/cockroach/labels/branch-master)
[See this test on roachdash](https://roachdash.crdb.dev/?filter=status%3Aopen+t%3A.%2A%28unknown%29.%2A&sort=title&restgroup=false&display=lastcommented+project)
<sub>powered by [pkg/cmd/internal/issues](https://github.com/cockroachdb/cockroach/tree/master/pkg/cmd/internal/issues)</sub></p></details>
|
test
|
unknown package timed out on slow failing tests testlogic fakedist disk distinct on slow passing tests testbackfillerrors testindexskiptablereadermisplannedrangesmetadata testnextrowinterleaved testcreaterandomschema testrepartitioning testinitialpartitioning testringbuffer testchangefeedschemachangenobackfill testprimarykeychangewithcancel testbtree testexecbuild testmigrateschemachanges testdumpdata testremovedeadreplicas testcolumnconversions testspillingqueue testbtree testchangefeednobackfill testchangefeeddiff testaggregatorrandom more parameters goflags json make stressrace tests unknown pkg pkg unknown testtimeout stressflags timeout related unknown package timed out unknown package timed out powered by
| 1
|
349,340
| 31,793,984,194
|
IssuesEvent
|
2023-09-13 06:39:40
|
pytorch/pytorch
|
https://api.github.com/repos/pytorch/pytorch
|
opened
|
DISABLED test_nondeterministic_alert_MaxUnpool2d_cuda_float32 (__main__.TestTorchDeviceTypeCUDA)
|
module: flaky-tests skipped oncall: pt2
|
Platforms: inductor
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_nondeterministic_alert_MaxUnpool2d_cuda_float32&suite=TestTorchDeviceTypeCUDA) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/16736765453).
Over the past 3 hours, it has been determined flaky in 7 workflow(s) with 7 failures and 7 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_nondeterministic_alert_MaxUnpool2d_cuda_float32`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
Test file path: `test_torch.py`
ResponseTimeoutError: Response timeout for 5000ms, GET https://raw.githubusercontent.com/pytorch/pytorch/main/test/test_torch.py 200 (connected: true, keepalive socket: false, socketHandledRequests: 1, socketHandledResponses: 1)
headers: {"connection":"keep-alive","content-length":"422792","cache-control":"max-age=300","content-security-policy":"default-src 'none'; style-src 'unsafe-inline'; sandbox","content-type":"text/plain; charset=utf-8","etag":"\"e774cf397307c4c11e0a581b2258b54d36c079b283546f13beeb2d80ed18b6cf\"","strict-transport-security":"max-age=31536000","x-content-type-options":"nosniff","x-frame-options":"deny","x-xss-protection":"1; mode=block","x-github-request-id":"9A3A:380B:798740:912FAB:65015924","accept-ranges":"bytes","date":"Wed, 13 Sep 2023 06:39:33 GMT","via":"1.1 varnish","x-served-by":"cache-sjc1000140-SJC","x-cache":"HIT","x-cache-hits":"1","x-timer":"S1694587173.107265,VS0,VE176","vary":"Authorization,Accept-Encoding,Origin","access-control-allow-origin":"*","cross-origin-resource-policy":"cross-origin","x-fastly-request-id":"13152540781d8e5f8c129c621c20215635673f7e","expires":"Wed, 13 Sep 2023 06:44:33 GMT","source-age":"0"}
|
1.0
|
DISABLED test_nondeterministic_alert_MaxUnpool2d_cuda_float32 (__main__.TestTorchDeviceTypeCUDA) - Platforms: inductor
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_nondeterministic_alert_MaxUnpool2d_cuda_float32&suite=TestTorchDeviceTypeCUDA) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/16736765453).
Over the past 3 hours, it has been determined flaky in 7 workflow(s) with 7 failures and 7 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_nondeterministic_alert_MaxUnpool2d_cuda_float32`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
Test file path: `test_torch.py`
ResponseTimeoutError: Response timeout for 5000ms, GET https://raw.githubusercontent.com/pytorch/pytorch/main/test/test_torch.py 200 (connected: true, keepalive socket: false, socketHandledRequests: 1, socketHandledResponses: 1)
headers: {"connection":"keep-alive","content-length":"422792","cache-control":"max-age=300","content-security-policy":"default-src 'none'; style-src 'unsafe-inline'; sandbox","content-type":"text/plain; charset=utf-8","etag":"\"e774cf397307c4c11e0a581b2258b54d36c079b283546f13beeb2d80ed18b6cf\"","strict-transport-security":"max-age=31536000","x-content-type-options":"nosniff","x-frame-options":"deny","x-xss-protection":"1; mode=block","x-github-request-id":"9A3A:380B:798740:912FAB:65015924","accept-ranges":"bytes","date":"Wed, 13 Sep 2023 06:39:33 GMT","via":"1.1 varnish","x-served-by":"cache-sjc1000140-SJC","x-cache":"HIT","x-cache-hits":"1","x-timer":"S1694587173.107265,VS0,VE176","vary":"Authorization,Accept-Encoding,Origin","access-control-allow-origin":"*","cross-origin-resource-policy":"cross-origin","x-fastly-request-id":"13152540781d8e5f8c129c621c20215635673f7e","expires":"Wed, 13 Sep 2023 06:44:33 GMT","source-age":"0"}
|
test
|
disabled test nondeterministic alert cuda main testtorchdevicetypecuda platforms inductor this test was disabled because it is failing in ci see and the most recent trunk over the past hours it has been determined flaky in workflow s with failures and successes debugging instructions after clicking on the recent samples link do not assume things are okay if the ci is green we now shield flaky tests from developers so ci will thus be green but it will be harder to parse the logs to find relevant log snippets click on the workflow logs linked above click on the test step of the job so that it is expanded otherwise the grepping will not work grep for test nondeterministic alert cuda there should be several instances run as flaky tests are rerun in ci from which you can study the logs test file path test torch py responsetimeouterror response timeout for get connected true keepalive socket false sockethandledrequests sockethandledresponses headers connection keep alive content length cache control max age content security policy default src none style src unsafe inline sandbox content type text plain charset utf etag strict transport security max age x content type options nosniff x frame options deny x xss protection mode block x github request id accept ranges bytes date wed sep gmt via varnish x served by cache sjc x cache hit x cache hits x timer vary authorization accept encoding origin access control allow origin cross origin resource policy cross origin x fastly request id expires wed sep gmt source age
| 1
|
27,741
| 4,328,253,750
|
IssuesEvent
|
2016-07-26 13:32:19
|
restrictcontentpro/restrict-content-pro
|
https://api.github.com/repos/restrictcontentpro/restrict-content-pro
|
closed
|
Auto-fill Stripe Checkout Email with entered email
|
bug has PR needs testing
|
When a user is creating an account and subscribing in one step the entered email should be pre-filled in Stripe Checkout to reduce steps and prevent confusion.
|
1.0
|
Auto-fill Stripe Checkout Email with entered email - When a user is creating an account and subscribing in one step the entered email should be pre-filled in Stripe Checkout to reduce steps and prevent confusion.
|
test
|
auto fill stripe checkout email with entered email when a user is creating an account and subscribing in one step the entered email should be pre filled in stripe checkout to reduce steps and prevent confusion
| 1
|
93,652
| 8,440,710,126
|
IssuesEvent
|
2018-10-18 08:10:13
|
friends-of-contao/contao-privacy
|
https://api.github.com/repos/friends-of-contao/contao-privacy
|
closed
|
Error in the privacy notice in the registration form. The field content becomes the field name.
|
bug to-test
|
Under Contao 4.4, an error occurs in the registration form when saving, even though all fields are filled in.
The <legend> section of ID ctrl_privacyConsent is output as the field name.
_I have taken note of the terms of use and the privacy policy and hereby declare my consent. I can revoke the storage of my personal data at any time by e-mail ...
Please fill in the field "
I have taken note of the terms of use and the privacy policy and hereby declare my consent. I can revoke the storage of my personal data at any time by e-mail ...
"!_
|
1.0
|
Error in the privacy notice in the registration form. The field content becomes the field name. - Under Contao 4.4, an error occurs in the registration form when saving, even though all fields are filled in.
The <legend> section of ID ctrl_privacyConsent is output as the field name.
_I have taken note of the terms of use and the privacy policy and hereby declare my consent. I can revoke the storage of my personal data at any time by e-mail ...
Please fill in the field "
I have taken note of the terms of use and the privacy policy and hereby declare my consent. I can revoke the storage of my personal data at any time by e-mail ...
"!_
|
test
|
fehler beim datenschutz hinweis im registrierungsformular der feldinhalt wird zum feldnamen unter contao tritt im registrierungsformular beim speichern ein fehler auf obwohl alle felder gefüllt sind der bereich von id ctrl privacyconsent wird als feldname ausgegeben die nutzungsbedingungen sowie die datenschutzerklärung habe ich zur kenntnis genommen und erkläre hierzu mein einverständnis die speicherung meiner personenbezogenen daten kann ich jederzeit per e mail widerrufen bitte füllen sie das feld die nutzungsbedingungen sowie die datenschutzerklärung habe ich zur kenntnis genommen und erkläre hierzu mein einverständnis die speicherung meiner personenbezogenen daten kann ich jederzeit per e mail widerrufen aus
| 1
|
39,629
| 5,241,074,542
|
IssuesEvent
|
2017-01-31 14:52:16
|
ValveSoftware/Source-1-Games
|
https://api.github.com/repos/ValveSoftware/Source-1-Games
|
closed
|
[TF2]Teleport effect persists after round end
|
Need Retest Team Fortress 2
|
If you teleport during the round end before the new round starts the teleport effect persists even when you die.
http://cloud-4.steampowered.com/ugc/703983393821970949/A867A73D8F1A6C217049FFE99F09B10E5E2AA5C1/
http://cloud-2.steampowered.com/ugc/703983393821968260/C205DE76F4D37D0371FCDE5B201394B358B63BB6/
Specs: https://gist.github.com/b4ckd0or/6571637
|
1.0
|
[TF2]Teleport effect persists after round end - If you teleport during the round end before the new round starts the teleport effect persists even when you die.
http://cloud-4.steampowered.com/ugc/703983393821970949/A867A73D8F1A6C217049FFE99F09B10E5E2AA5C1/
http://cloud-2.steampowered.com/ugc/703983393821968260/C205DE76F4D37D0371FCDE5B201394B358B63BB6/
Specs: https://gist.github.com/b4ckd0or/6571637
|
test
|
teleport effect persists after round end if you teleport during the round end before the new round starts the teleport effect persists even when you die specs
| 1
|
162,395
| 12,664,381,388
|
IssuesEvent
|
2020-06-18 04:28:36
|
replicatedhq/kURL
|
https://api.github.com/repos/replicatedhq/kURL
|
closed
|
[testgrid] show sonobuoy results
|
testgrid
|
In completed tests, we show a button to view logs.
Let's rename this button to `kURL Logs`
And add a new button labeled `Sonobuoy Results`.
Clicking this should show the `sonobuoy_results` column from the `testinstance` table for the instance.
|
1.0
|
[testgrid] show sonobuoy results - In completed tests, we show a button to view logs.
Let's rename this button to `kURL Logs`
And add a new button labeled `Sonobuoy Results`.
Clicking this should show the `sonobuoy_results` column from the `testinstance` table for the instance.
|
test
|
show sonobuoy results in completed tests we show a button to view logs let s rename this button to kurl logs and add a new button labeled sonobuoy results clicking this should show the sonobuoy results column from the testinstance table for the instance
| 1
|
73,839
| 7,359,688,623
|
IssuesEvent
|
2018-03-10 10:01:01
|
GTNewHorizons/NewHorizons
|
https://api.github.com/repos/GTNewHorizons/NewHorizons
|
closed
|
super solar panels consistency
|
addition need to be tested
|
So I was just reading the chat on the GregTech Discord and found out there's an IV+ addon for Advanced Solar Panels that just adds higher tiers. Now I was wondering if you want to add that and just do recipes for the panel blocks like the advanced-to-quantum one... I think it would be nice to have.
|
1.0
|
super solar panels consistency - So I was just reading the chat on the GregTech Discord and found out there's an IV+ addon for Advanced Solar Panels that just adds higher tiers. Now I was wondering if you want to add that and just do recipes for the panel blocks like the advanced-to-quantum one... I think it would be nice to have.
|
test
|
super solar panels consistency so i was just reading the chat on the gregtech discord and found out theres an iv addon for advanced solar panels thats just adding higher tiers now i was wondering if u wanna add that and just do recipes for the panel blocks like the advanced to quantum one i think it would be nice to have
| 1
|
174,926
| 13,526,193,064
|
IssuesEvent
|
2020-09-15 13:57:43
|
BRI-EES-House/heat_load_calc
|
https://api.github.com/repos/BRI-EES-House/heat_load_calc
|
closed
|
core test: comparison with hand calculation under a steady-state assumption (steady05_equivalent outdoor temperature)
|
test
|
# Overview
Compare the room temperature, surface temperatures, and surface heat flows against hand calculations, with an equivalent outdoor temperature of 10°C on the roof and south wall, 0°C on the other surfaces, no ventilation, and no internal heat generation.
# Tasks
- [ ] Update the test code to match steady01
- [ ] (Write tasks here)
# Reference information
(Paste attachments, images, URLs, etc. here)
|
1.0
|
core test: comparison with hand calculation under a steady-state assumption (steady05_equivalent outdoor temperature) - # Overview
Compare the room temperature, surface temperatures, and surface heat flows against hand calculations, with an equivalent outdoor temperature of 10°C on the roof and south wall, 0°C on the other surfaces, no ventilation, and no internal heat generation.
# Tasks
- [ ] Update the test code to match steady01
- [ ] (Write tasks here)
# Reference information
(Paste attachments, images, URLs, etc. here)
|
test
|
core test 定常状態想定での手計算との比較( 相当外気温度) 概要 屋根・ ℃、 ℃、換気なし、内部発熱なしでの室温、表面温度、表面熱流を手計算と比較する タスク (ここにタスクを記す) 参考情報 (ここに添付ファイル・画像・url等を貼り付ける)
| 1
|
264,041
| 23,097,773,330
|
IssuesEvent
|
2022-07-26 21:29:29
|
IntellectualSites/PlotSquared
|
https://api.github.com/repos/IntellectualSites/PlotSquared
|
closed
|
p alias set and remove
|
Requires Testing
|
### Server Implementation
Paper
### Server Version
1.19
### Describe the bug
p alias set and remove does not work correctly. If you want to set an alias, the message that alias is already taken appears. However, this name was not assigned as an alias or does a player with this name exist. please help
p alias set and remove does not work correctly. After merging, the plot's alias is gone and can neither be set again nor removed. See the picture. Help would be appreciated, so please do not close this; it would be good to know where the aliases are stored, or please take a look at this. As the picture shows, the alias can neither be set nor removed, and running /p v alias name says it does not exist. And no, the alias is not a player name that exists on our server, just like hogwarts and many other aliases that cannot be used because they are supposedly already in use, which is not the case. Urgently asking for help with this problem.
### To Reproduce
help with this issue, what can I do?
### Expected behaviour
that it works to delete and reassign aliases
### Screenshots / Videos

### Error log (if applicable)
_No response_
### Plot Debugpaste
https://athion.net/ISPaster/paste/view/8eb4d3b881f1443aa104433b15005149
### PlotSquared Version
6.9.3
### Checklist
- [X] I have included a Plot debugpaste.
- [X] I am using the newest build from https://www.spigotmc.org/resources/77506/ and the issue still persists.
### Anything else?
_No response_
|
1.0
|
p alias set and remove - ### Server Implementation
Paper
### Server Version
1.19
### Describe the bug
p alias set and remove does not work correctly. If you want to set an alias, the message that alias is already taken appears. However, this name was not assigned as an alias or does a player with this name exist. please help
p alias set and remove does not work correctly. After merging, the plot's alias is gone and can neither be set again nor removed. See the picture. Help would be appreciated, so please do not close this; it would be good to know where the aliases are stored, or please take a look at this. As the picture shows, the alias can neither be set nor removed, and running /p v alias name says it does not exist. And no, the alias is not a player name that exists on our server, just like hogwarts and many other aliases that cannot be used because they are supposedly already in use, which is not the case. Urgently asking for help with this problem.
### To Reproduce
help with this issue, what can I do?
### Expected behaviour
that it works to delete and reassign aliases
### Screenshots / Videos

### Error log (if applicable)
_No response_
### Plot Debugpaste
https://athion.net/ISPaster/paste/view/8eb4d3b881f1443aa104433b15005149
### PlotSquared Version
6.9.3
### Checklist
- [X] I have included a Plot debugpaste.
- [X] I am using the newest build from https://www.spigotmc.org/resources/77506/ and the issue still persists.
### Anything else?
_No response_
|
test
|
p alias set and remove server implementation paper server version describe the bug p alias set and remove does not work correctly if you want to set an alias the message that alias is already taken appears however this name was not assigned as an alias or does a player with this name exist please help p alias set und remove funktioniert nicht richtig nach dem mergen ist der alias des plots verschwunden und kann weder neu gesetzt noch entfernt werden sihe bild hilfe wäre schön deswegen bitte nicht schließen wäre gut zu wissen wo die aliase gespeichert werden oder das ihr da bitte einen blick drauf werft wie am bild zu erkennen kann man den alias weder setzten noch entfernen und wenn man p v alias name macht existiert es nicht und nein der alias ist kein spielername der auf unserem server existiert ebenso wie hogwarts und viel andere aliase die nicht benutz werden können weil sie angeblich schon benutzt werden was nicht der fall ist bitte dringend um hilfe mit diesem problem to reproduce help with this issui what can i do expected behaviour that it works to delete and reassign aliases screenshots videos error log if applicable no response plot debugpaste plotsquared version checklist i have included a plot debugpaste i am using the newest build from and the issue still persists anything else no response
| 1
|
173,356
| 21,159,506,714
|
IssuesEvent
|
2022-04-07 08:05:43
|
nanopathi/linux-4.19.72_CVE-2020-14381
|
https://api.github.com/repos/nanopathi/linux-4.19.72_CVE-2020-14381
|
opened
|
CVE-2020-29534 (High) detected in multiple libraries
|
security vulnerability
|
## CVE-2020-29534 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>linuxlinux-4.19.237</b>, <b>linuxlinux-4.19.237</b>, <b>linuxlinux-4.19.237</b>, <b>linuxlinux-4.19.237</b>, <b>linuxlinux-4.19.237</b></p></summary>
<p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
An issue was discovered in the Linux kernel before 5.9.3. io_uring takes a non-refcounted reference to the files_struct of the process that submitted a request, causing execve() to incorrectly optimize unshare_fd(), aka CID-0f2122045b94.
<p>Publish Date: 2020-12-03
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-29534>CVE-2020-29534</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.8</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: Low
- Privileges Required: Low
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
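For anyone who wants to verify the 7.8 above, the listed base-score metrics map onto the CVSS v3.0 base equation. A minimal sketch, assuming Scope: Unchanged throughout (weights per the FIRST CVSS v3.0 specification):

```python
import math

# CVSS v3.0 metric weights (Scope: Unchanged) from the FIRST specification.
AV = {"N": 0.85, "A": 0.62, "L": 0.55, "P": 0.2}   # Attack Vector
AC = {"L": 0.77, "H": 0.44}                         # Attack Complexity
PR = {"N": 0.85, "L": 0.62, "H": 0.27}              # Privileges Required (Unchanged)
UI = {"N": 0.85, "R": 0.62}                         # User Interaction
CIA = {"H": 0.56, "L": 0.22, "N": 0.0}              # Confidentiality/Integrity/Availability

def roundup(x: float) -> float:
    """Round up to one decimal place, as the spec requires."""
    return math.ceil(x * 10) / 10

def base_score_unchanged(av, ac, pr, ui, c, i, a):
    """CVSS v3.0 base score for Scope: Unchanged vectors."""
    iss = 1 - (1 - CIA[c]) * (1 - CIA[i]) * (1 - CIA[a])
    impact = 6.42 * iss
    exploitability = 8.22 * AV[av] * AC[ac] * PR[pr] * UI[ui]
    if impact <= 0:
        return 0.0
    return roundup(min(impact + exploitability, 10))

# The metrics listed above: AV:L/AC:L/PR:L/UI:N/S:U/C:H/I:H/A:H
print(base_score_unchanged("L", "L", "L", "N", "H", "H", "H"))  # → 7.8
```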
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-29534">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-29534</a></p>
<p>Release Date: 2020-12-03</p>
<p>Fix Resolution: v5.9.3</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
True
|
CVE-2020-29534 (High) detected in multiple libraries - ## CVE-2020-29534 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>linuxlinux-4.19.237</b>, <b>linuxlinux-4.19.237</b>, <b>linuxlinux-4.19.237</b>, <b>linuxlinux-4.19.237</b>, <b>linuxlinux-4.19.237</b></p></summary>
<p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
An issue was discovered in the Linux kernel before 5.9.3. io_uring takes a non-refcounted reference to the files_struct of the process that submitted a request, causing execve() to incorrectly optimize unshare_fd(), aka CID-0f2122045b94.
<p>Publish Date: 2020-12-03
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-29534>CVE-2020-29534</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.8</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: Low
- Privileges Required: Low
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-29534">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-29534</a></p>
<p>Release Date: 2020-12-03</p>
<p>Fix Resolution: v5.9.3</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
non_test
|
cve high detected in multiple libraries cve high severity vulnerability vulnerable libraries linuxlinux linuxlinux linuxlinux linuxlinux linuxlinux vulnerability details an issue was discovered in the linux kernel before io uring takes a non refcounted reference to the files struct of the process that submitted a request causing execve to incorrectly optimize unshare fd aka cid publish date url a href cvss score details base score metrics exploitability metrics attack vector local attack complexity low privileges required low user interaction none scope unchanged impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution step up your open source security game with whitesource
| 0
|
22,383
| 3,955,297,428
|
IssuesEvent
|
2016-04-29 20:21:44
|
elastic/elasticsearch
|
https://api.github.com/repos/elastic/elasticsearch
|
closed
|
ScalingThreadPoolTests.testScalingThreadPoolThreadsAreTerminatedAfterKeepAlive CI Failure
|
test
|
Seems to be semi-consistently failing:
- https://elasticsearch-ci.elastic.co/job/elastic+elasticsearch+master+multijob-os-compatibility/os=opensuse/345/console
- https://elasticsearch-ci.elastic.co/job/elastic+elasticsearch+master+multijob-os-compatibility/os=oraclelinux/343/console
- https://elasticsearch-ci.elastic.co/job/elastic+elasticsearch+master+multijob-os-compatibility/os=opensuse/341/console
- https://elasticsearch-ci.elastic.co/job/elastic+elasticsearch+master+periodic/615/console
However, I was unable to reproduce any of these failures on either of my two machines. :(
```
gradle :core:test -Dtests.seed=A1BF5ED4DA80E472 -Dtests.class=org.elasticsearch.threadpool.ScalingThreadPoolTests -Dtests.method="testScalingThreadPoolThreadsAreTerminatedAfterKeepAlive" -Des.logger.level=WARN -Dtests.security.manager=true -Dtests.locale=ar-LB -Dtests.timezone=Asia/Anadyr
FAILURE 0.14s J1 | ScalingThreadPoolTests.testScalingThreadPoolThreadsAreTerminatedAfterKeepAlive <<< FAILURES!
> Throwable #1: java.lang.AssertionError:
> Expected: <128L>
> but: was <127L>
> at __randomizedtesting.SeedInfo.seed([A1BF5ED4DA80E472:BC4ACB642517FFFA]:0)
> at org.hamcrest.MatcherAssert.assertThat(MatcherAssert.java:20)
> at org.elasticsearch.threadpool.ScalingThreadPoolTests.lambda$testScalingThreadPoolThreadsAreTerminatedAfterKeepAlive$7(ScalingThreadPoolTests.java:196)
> at org.elasticsearch.threadpool.ScalingThreadPoolTests.runScalingThreadPoolTest(ScalingThreadPoolTests.java:244)
> at org.elasticsearch.threadpool.ScalingThreadPoolTests.testScalingThreadPoolThreadsAreTerminatedAfterKeepAlive(ScalingThreadPoolTests.java:167)
> at java.lang.Thread.run(Thread.java:745)
1> [2016-04-29 19:31:31,374][WARN ][org.elasticsearch.common.settings] [testResizingScalingThreadPoolQueue] failed to apply settings
1> java.lang.IllegalArgumentException: thread pool [warmer] of type scaling can not have its queue re-sized but was [518323132]
1> at org.elasticsearch.threadpool.ThreadPool.rebuild(ThreadPool.java:528)
1> at org.elasticsearch.threadpool.ThreadPool.updateSettings(ThreadPool.java:628)
1> at org.elasticsearch.common.settings.Setting$3$1.apply(Setting.java:727)
1> at org.elasticsearch.common.settings.Setting$3$1.apply(Setting.java:702)
1> at org.elasticsearch.common.settings.AbstractScopedSettings$SettingUpdater.lambda$updater$0(AbstractScopedSettings.java:319)
1> at org.elasticsearch.common.settings.AbstractScopedSettings.applySettings(AbstractScopedSettings.java:165)
1> at org.elasticsearch.threadpool.ScalingThreadPoolTests.lambda$null$9(ScalingThreadPoolTests.java:226)
1> at org.apache.lucene.util.LuceneTestCase.expectThrows(LuceneTestCase.java:2677)
1> at org.elasticsearch.threadpool.ScalingThreadPoolTests.lambda$testResizingScalingThreadPoolQueue$10(ScalingThreadPoolTests.java:224)
1> at org.elasticsearch.threadpool.ScalingThreadPoolTests.runScalingThreadPoolTest(ScalingThreadPoolTests.java:244)
1> at org.elasticsearch.threadpool.ScalingThreadPoolTests.testResizingScalingThreadPoolQueue(ScalingThreadPoolTests.java:222)
1> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
1> at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
1> at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
1> at java.lang.reflect.Method.invoke(Method.java:498)
1> at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
1> at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
1> at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
1> at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
1> at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
1> at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
1> at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
1> at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
1> at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
1> at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
1> at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
1> at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
1> at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
1> at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
1> at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880)
1> at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781)
1> at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816)
1> at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827)
1> at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
1> at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
1> at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
1> at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
1> at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
1> at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
1> at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
1> at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
1> at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
1> at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
1> at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
1> at org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
1> at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
1> at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
1> at java.lang.Thread.run(Thread.java:745)
2> NOTE: leaving temporary files on disk at: /var/lib/jenkins/workspace/elastic+elasticsearch+master+multijob-os-compatibility/os/opensuse/core/build/testrun/test/J1/temp/org.elasticsearch.threadpool.ScalingThreadPoolTests_A1BF5ED4DA80E472-001
2> NOTE: test params are: codec=Asserting(Lucene60): {}, docValues:{}, maxPointsInLeafNode=1221, maxMBSortInHeap=6.699272557998749, sim=RandomSimilarity(queryNorm=false,coord=yes): {}, locale=ar-LB, timezone=Asia/Anadyr
2> NOTE: Linux 3.16.7-29-default amd64/Oracle Corporation 1.8.0_72 (64-bit)/cpus=4,threads=1,free=431645528,total=517996544
2> NOTE: All tests run in this JVM: [JoinProcessorTests, URIPatternTests, PrimaryElectionRoutingTests, BootstrapSettingsTests, RecoveryStatusTests, ExceptionSerializationTests, DateFormatTests, AwarenessAllocationTests, LegacyLongFieldTypeTests, PipelineExecutionServiceTests, NodeVersionAllocationDeciderTests, FsBlobStoreContainerTests, ParentFieldTypeTests, TransportAnalyzeActionTests, SimpleMapperTests, EnvironmentTests, RebalanceAfterActiveTests, FailProcessorTests, TermVectorsUnitTests, SimulateProcessorResultTests, PercentilesTests, LockedRecyclerTests, MaxMapCountCheckTests, QueryPhaseTests, ByteUtilsTests, DiskThresholdDeciderTests, LongNestedSortingTests, ShardRoutingTests, ClusterChangedEventTests, YamlFilteringGeneratorTests, GeoQueryContextTests, BlockingClusterStatePublishResponseHandlerTests, GeoDistanceQueryBuilderTests, IndicesStatsTests, CompoundProcessorTests, HistogramTests, CamelCaseFieldNameTests, FieldDataCacheTests, LaplaceModelTests, FileInfoTests, SpanNearQueryBuilderTests, FuzzinessTests, CardinalityTests, MinTests, ScoreSortBuilderTests, MatchPhrasePrefixQueryBuilderTests, QueryShardContextTests, IndicesRequestCacheTests, Murmur3HashFunctionTests, EnvelopeBuilderTests, RenameProcessorTests, NodeClientHeadersTests, SearchSlowLogTests, WildcardExpressionResolverTests, ReverseNestedTests, KeepFilterFactoryTests, IndexSearcherWrapperTests, GeoUtilsTests, ScriptContextRegistryTests, ScalingThreadPoolTests]
```
```
BUILD INFO
Build 20160429192716-1B1E4023
Log https://elasticsearch-ci.elastic.co/job/elastic+elasticsearch+master+multijob-os-compatibility/os=opensuse/345/console
Duration 5m 41s (341210ms)
Started 2016-04-29T19:27:16.547Z
Ended 2016-04-29T19:32:57.757Z
Exit Code 1
Host slave-f1f8d528 (up 116 days)
OS OpenSuSE 13.2, Linux 3.16.7-29-default
Specs 4 CPUs, 15.45GB RAM
java.version 1.8.0_72
java.vm.name OpenJDK 64-Bit Server VM
java.vm.version 25.72-b15
java.runtime.version 1.8.0_72-b15
java.home /usr/lib64/jvm/java-1.8.0-openjdk-1.8.0
```
|
1.0
|
ScalingThreadPoolTests.testScalingThreadPoolThreadsAreTerminatedAfterKeepAlive CI Failure - Seems to be semi-consistently failing:
- https://elasticsearch-ci.elastic.co/job/elastic+elasticsearch+master+multijob-os-compatibility/os=opensuse/345/console
- https://elasticsearch-ci.elastic.co/job/elastic+elasticsearch+master+multijob-os-compatibility/os=oraclelinux/343/console
- https://elasticsearch-ci.elastic.co/job/elastic+elasticsearch+master+multijob-os-compatibility/os=opensuse/341/console
- https://elasticsearch-ci.elastic.co/job/elastic+elasticsearch+master+periodic/615/console
However, I was unable to reproduce any of these failures on either of my two machines. :(
```
gradle :core:test -Dtests.seed=A1BF5ED4DA80E472 -Dtests.class=org.elasticsearch.threadpool.ScalingThreadPoolTests -Dtests.method="testScalingThreadPoolThreadsAreTerminatedAfterKeepAlive" -Des.logger.level=WARN -Dtests.security.manager=true -Dtests.locale=ar-LB -Dtests.timezone=Asia/Anadyr
FAILURE 0.14s J1 | ScalingThreadPoolTests.testScalingThreadPoolThreadsAreTerminatedAfterKeepAlive <<< FAILURES!
> Throwable #1: java.lang.AssertionError:
> Expected: <128L>
> but: was <127L>
> at __randomizedtesting.SeedInfo.seed([A1BF5ED4DA80E472:BC4ACB642517FFFA]:0)
> at org.hamcrest.MatcherAssert.assertThat(MatcherAssert.java:20)
> at org.elasticsearch.threadpool.ScalingThreadPoolTests.lambda$testScalingThreadPoolThreadsAreTerminatedAfterKeepAlive$7(ScalingThreadPoolTests.java:196)
> at org.elasticsearch.threadpool.ScalingThreadPoolTests.runScalingThreadPoolTest(ScalingThreadPoolTests.java:244)
> at org.elasticsearch.threadpool.ScalingThreadPoolTests.testScalingThreadPoolThreadsAreTerminatedAfterKeepAlive(ScalingThreadPoolTests.java:167)
> at java.lang.Thread.run(Thread.java:745)
1> [2016-04-29 19:31:31,374][WARN ][org.elasticsearch.common.settings] [testResizingScalingThreadPoolQueue] failed to apply settings
1> java.lang.IllegalArgumentException: thread pool [warmer] of type scaling can not have its queue re-sized but was [518323132]
1> at org.elasticsearch.threadpool.ThreadPool.rebuild(ThreadPool.java:528)
1> at org.elasticsearch.threadpool.ThreadPool.updateSettings(ThreadPool.java:628)
1> at org.elasticsearch.common.settings.Setting$3$1.apply(Setting.java:727)
1> at org.elasticsearch.common.settings.Setting$3$1.apply(Setting.java:702)
1> at org.elasticsearch.common.settings.AbstractScopedSettings$SettingUpdater.lambda$updater$0(AbstractScopedSettings.java:319)
1> at org.elasticsearch.common.settings.AbstractScopedSettings.applySettings(AbstractScopedSettings.java:165)
1> at org.elasticsearch.threadpool.ScalingThreadPoolTests.lambda$null$9(ScalingThreadPoolTests.java:226)
1> at org.apache.lucene.util.LuceneTestCase.expectThrows(LuceneTestCase.java:2677)
1> at org.elasticsearch.threadpool.ScalingThreadPoolTests.lambda$testResizingScalingThreadPoolQueue$10(ScalingThreadPoolTests.java:224)
1> at org.elasticsearch.threadpool.ScalingThreadPoolTests.runScalingThreadPoolTest(ScalingThreadPoolTests.java:244)
1> at org.elasticsearch.threadpool.ScalingThreadPoolTests.testResizingScalingThreadPoolQueue(ScalingThreadPoolTests.java:222)
1> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
1> at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
1> at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
1> at java.lang.reflect.Method.invoke(Method.java:498)
1> at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
1> at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
1> at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
1> at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
1> at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
1> at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
1> at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
1> at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
1> at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
1> at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
1> at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
1> at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
1> at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
1> at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
1> at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880)
1> at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781)
1> at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816)
1> at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827)
1> at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
1> at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
1> at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
1> at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
1> at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
1> at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
1> at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
1> at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
1> at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
1> at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
1> at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
1> at org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
1> at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
1> at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
1> at java.lang.Thread.run(Thread.java:745)
2> NOTE: leaving temporary files on disk at: /var/lib/jenkins/workspace/elastic+elasticsearch+master+multijob-os-compatibility/os/opensuse/core/build/testrun/test/J1/temp/org.elasticsearch.threadpool.ScalingThreadPoolTests_A1BF5ED4DA80E472-001
2> NOTE: test params are: codec=Asserting(Lucene60): {}, docValues:{}, maxPointsInLeafNode=1221, maxMBSortInHeap=6.699272557998749, sim=RandomSimilarity(queryNorm=false,coord=yes): {}, locale=ar-LB, timezone=Asia/Anadyr
2> NOTE: Linux 3.16.7-29-default amd64/Oracle Corporation 1.8.0_72 (64-bit)/cpus=4,threads=1,free=431645528,total=517996544
2> NOTE: All tests run in this JVM: [JoinProcessorTests, URIPatternTests, PrimaryElectionRoutingTests, BootstrapSettingsTests, RecoveryStatusTests, ExceptionSerializationTests, DateFormatTests, AwarenessAllocationTests, LegacyLongFieldTypeTests, PipelineExecutionServiceTests, NodeVersionAllocationDeciderTests, FsBlobStoreContainerTests, ParentFieldTypeTests, TransportAnalyzeActionTests, SimpleMapperTests, EnvironmentTests, RebalanceAfterActiveTests, FailProcessorTests, TermVectorsUnitTests, SimulateProcessorResultTests, PercentilesTests, LockedRecyclerTests, MaxMapCountCheckTests, QueryPhaseTests, ByteUtilsTests, DiskThresholdDeciderTests, LongNestedSortingTests, ShardRoutingTests, ClusterChangedEventTests, YamlFilteringGeneratorTests, GeoQueryContextTests, BlockingClusterStatePublishResponseHandlerTests, GeoDistanceQueryBuilderTests, IndicesStatsTests, CompoundProcessorTests, HistogramTests, CamelCaseFieldNameTests, FieldDataCacheTests, LaplaceModelTests, FileInfoTests, SpanNearQueryBuilderTests, FuzzinessTests, CardinalityTests, MinTests, ScoreSortBuilderTests, MatchPhrasePrefixQueryBuilderTests, QueryShardContextTests, IndicesRequestCacheTests, Murmur3HashFunctionTests, EnvelopeBuilderTests, RenameProcessorTests, NodeClientHeadersTests, SearchSlowLogTests, WildcardExpressionResolverTests, ReverseNestedTests, KeepFilterFactoryTests, IndexSearcherWrapperTests, GeoUtilsTests, ScriptContextRegistryTests, ScalingThreadPoolTests]
```
```
BUILD INFO
Build 20160429192716-1B1E4023
Log https://elasticsearch-ci.elastic.co/job/elastic+elasticsearch+master+multijob-os-compatibility/os=opensuse/345/console
Duration 5m 41s (341210ms)
Started 2016-04-29T19:27:16.547Z
Ended 2016-04-29T19:32:57.757Z
Exit Code 1
Host slave-f1f8d528 (up 116 days)
OS OpenSuSE 13.2, Linux 3.16.7-29-default
Specs 4 CPUs, 15.45GB RAM
java.version 1.8.0_72
java.vm.name OpenJDK 64-Bit Server VM
java.vm.version 25.72-b15
java.runtime.version 1.8.0_72-b15
java.home /usr/lib64/jvm/java-1.8.0-openjdk-1.8.0
```
|
test
|
scalingthreadpooltests testscalingthreadpoolthreadsareterminatedafterkeepalive ci failure seems to be semi consistently failing however i was unable to reproduce any of these failures on either of my two machines gradle core test dtests seed dtests class org elasticsearch threadpool scalingthreadpooltests dtests method testscalingthreadpoolthreadsareterminatedafterkeepalive des logger level warn dtests security manager true dtests locale ar lb dtests timezone asia anadyr failure scalingthreadpooltests testscalingthreadpoolthreadsareterminatedafterkeepalive failures throwable java lang assertionerror expected but was at randomizedtesting seedinfo seed at org hamcrest matcherassert assertthat matcherassert java at org elasticsearch threadpool scalingthreadpooltests lambda testscalingthreadpoolthreadsareterminatedafterkeepalive scalingthreadpooltests java at org elasticsearch threadpool scalingthreadpooltests runscalingthreadpooltest scalingthreadpooltests java at org elasticsearch threadpool scalingthreadpooltests testscalingthreadpoolthreadsareterminatedafterkeepalive scalingthreadpooltests java at java lang thread run thread java failed to apply settings java lang illegalargumentexception thread pool of type scaling can not have its queue re sized but was at org elasticsearch threadpool threadpool rebuild threadpool java at org elasticsearch threadpool threadpool updatesettings threadpool java at org elasticsearch common settings setting apply setting java at org elasticsearch common settings setting apply setting java at org elasticsearch common settings abstractscopedsettings settingupdater lambda updater abstractscopedsettings java at org elasticsearch common settings abstractscopedsettings applysettings abstractscopedsettings java at org elasticsearch threadpool scalingthreadpooltests lambda null scalingthreadpooltests java at org apache lucene util lucenetestcase expectthrows lucenetestcase java at org elasticsearch threadpool scalingthreadpooltests lambda 
testresizingscalingthreadpoolqueue scalingthreadpooltests java at org elasticsearch threadpool scalingthreadpooltests runscalingthreadpooltest scalingthreadpooltests java at org elasticsearch threadpool scalingthreadpooltests testresizingscalingthreadpoolqueue scalingthreadpooltests java at sun reflect nativemethodaccessorimpl native method at sun reflect nativemethodaccessorimpl invoke nativemethodaccessorimpl java at sun reflect delegatingmethodaccessorimpl invoke delegatingmethodaccessorimpl java at java lang reflect method invoke method java at com carrotsearch randomizedtesting randomizedrunner invoke randomizedrunner java at com carrotsearch randomizedtesting randomizedrunner evaluate randomizedrunner java at com carrotsearch randomizedtesting randomizedrunner evaluate randomizedrunner java at com carrotsearch randomizedtesting randomizedrunner evaluate randomizedrunner java at com carrotsearch randomizedtesting rules statementadapter evaluate statementadapter java at org apache lucene util testrulesetupteardownchained evaluate testrulesetupteardownchained java at org apache lucene util abstractbeforeafterrule evaluate abstractbeforeafterrule java at org apache lucene util testrulethreadandtestname evaluate testrulethreadandtestname java at org apache lucene util testruleignoreaftermaxfailures evaluate testruleignoreaftermaxfailures java at org apache lucene util testrulemarkfailure evaluate testrulemarkfailure java at com carrotsearch randomizedtesting rules statementadapter evaluate statementadapter java at com carrotsearch randomizedtesting threadleakcontrol statementrunner run threadleakcontrol java at com carrotsearch randomizedtesting threadleakcontrol forktimeoutingtask threadleakcontrol java at com carrotsearch randomizedtesting threadleakcontrol evaluate threadleakcontrol java at com carrotsearch randomizedtesting randomizedrunner runsingletest randomizedrunner java at com carrotsearch randomizedtesting randomizedrunner evaluate randomizedrunner java 
at com carrotsearch randomizedtesting randomizedrunner evaluate randomizedrunner java at com carrotsearch randomizedtesting randomizedrunner evaluate randomizedrunner java at org apache lucene util abstractbeforeafterrule evaluate abstractbeforeafterrule java at com carrotsearch randomizedtesting rules statementadapter evaluate statementadapter java at org apache lucene util testrulestoreclassname evaluate testrulestoreclassname java at com carrotsearch randomizedtesting rules noshadowingoroverridesonmethodsrule evaluate noshadowingoroverridesonmethodsrule java at com carrotsearch randomizedtesting rules noshadowingoroverridesonmethodsrule evaluate noshadowingoroverridesonmethodsrule java at com carrotsearch randomizedtesting rules statementadapter evaluate statementadapter java at com carrotsearch randomizedtesting rules statementadapter evaluate statementadapter java at com carrotsearch randomizedtesting rules statementadapter evaluate statementadapter java at org apache lucene util testruleassertionsrequired evaluate testruleassertionsrequired java at org apache lucene util testrulemarkfailure evaluate testrulemarkfailure java at org apache lucene util testruleignoreaftermaxfailures evaluate testruleignoreaftermaxfailures java at org apache lucene util testruleignoretestsuites evaluate testruleignoretestsuites java at com carrotsearch randomizedtesting rules statementadapter evaluate statementadapter java at com carrotsearch randomizedtesting threadleakcontrol statementrunner run threadleakcontrol java at java lang thread run thread java note leaving temporary files on disk at var lib jenkins workspace elastic elasticsearch master multijob os compatibility os opensuse core build testrun test temp org elasticsearch threadpool scalingthreadpooltests note test params are codec asserting docvalues maxpointsinleafnode maxmbsortinheap sim randomsimilarity querynorm false coord yes locale ar lb timezone asia anadyr note linux default oracle corporation bit cpus threads 
free total note all tests run in this jvm build info build log duration started ended exit code host slave up days os opensuse linux default specs cpus ram java version java vm name openjdk bit server vm java vm version java runtime version java home usr jvm java openjdk
| 1
|
154,297
| 12,199,107,627
|
IssuesEvent
|
2020-04-30 00:39:08
|
kwk/test-llvm-bz-import-5
|
https://api.github.com/repos/kwk/test-llvm-bz-import-5
|
closed
|
[not-a-bug, powerpc-darwin8] bad_alloc during assembly of RecursiveASTVisitorTest.cpp
|
BZ-BUG-STATUS: RESOLVED BZ-RESOLUTION: WONTFIX Test Suite/Programs Tests dummy import from bugzilla
|
This issue was imported from Bugzilla https://bugs.llvm.org/show_bug.cgi?id=15825.
|
2.0
|
[not-a-bug, powerpc-darwin8] bad_alloc during assembly of RecursiveASTVisitorTest.cpp - This issue was imported from Bugzilla https://bugs.llvm.org/show_bug.cgi?id=15825.
|
test
|
bad alloc during assembly of recursiveastvisitortest cpp this issue was imported from bugzilla
| 1
|
22,411
| 11,731,275,482
|
IssuesEvent
|
2020-03-10 23:35:20
|
sourcegraph/sourcegraph
|
https://api.github.com/repos/sourcegraph/sourcegraph
|
closed
|
Should archives and forks be excluded by default?
|
search team/core-services
|
A user suggested making archives and forks excluded from search results by default. I definitely agree with this, but I remember there was some conversation about a code host where this might be a bad idea. So my question is: Should archives and forks be excluded by default? why / why not?
cc product @christinaforney and @sourcegraph/core-services
Code: https://sourcegraph.com/github.com/sourcegraph/sourcegraph@9d2c526/-/blob/cmd/frontend/graphqlbackend/search.go#L353-356
|
1.0
|
Should archives and forks be excluded by default? - A user suggested making archives and forks excluded from search results by default. I definitely agree with this, but I remember there was some conversation about a code host where this might be a bad idea. So my question is: Should archives and forks be excluded by default? why / why not?
cc product @christinaforney and @sourcegraph/core-services
Code: https://sourcegraph.com/github.com/sourcegraph/sourcegraph@9d2c526/-/blob/cmd/frontend/graphqlbackend/search.go#L353-356
|
non_test
|
should archives and forks be excluded by default a user suggested making archives and forks excluded from search results by default i definitely agree with this but i remember there was some conversation about a code host where this might be a bad idea so my question is should archives and forks be excluded by default why why not cc product christinaforney and sourcegraph core services code
| 0
|
179,910
| 13,910,853,068
|
IssuesEvent
|
2020-10-20 16:35:58
|
vmware-tanzu/octant
|
https://api.github.com/repos/vmware-tanzu/octant
|
opened
|
Streamer test flake
|
testing
|
This happens infrequently. Keeping logs here as a reference in case someone wants to tackle this.
```
=== RUN TestStreamer_Stream/in_general
streamer_test.go:55:
Error Trace: streamer_test.go:55
asm_amd64.s:1374
Error: Not equal:
expected: event.Event{Type:"event.octant.dev/app-logs", Data:[]log.Message{log.Message{ID:"1", Date:0, LogLevel:"", Location:"", Text:"", JSON:"", StackTrace:""}}, Err:error(nil)}
actual : event.Event{Type:"event.octant.dev/app-logs", Data:[]log.Message(nil), Err:error(nil)}
Diff:
--- Expected
+++ Actual
@@ -2,13 +2,3 @@
Type: (event.EventType) (len=25) "event.octant.dev/app-logs",
- Data: ([]log.Message) (len=1) {
- (log.Message) {
- ID: (string) (len=1) "1",
- Date: (int64) 0,
- LogLevel: (string) "",
- Location: (string) "",
- Text: (string) "",
- JSON: (string) "",
- StackTrace: (string) ""
- }
- },
+ Data: ([]log.Message) <nil>,
Err: (error) <nil>
Test: TestStreamer_Stream/in_general
panic: test timed out after 10m0s
goroutine 43 [running]:
testing.(*M).startAlarm.func1()
/Users/runner/hostedtoolcache/go/1.15.2/x64/src/testing/testing.go:1628 +0xe5
created by time.goFunc
/Users/runner/hostedtoolcache/go/1.15.2/x64/src/time/sleep.go:167 +0x45
goroutine 1 [chan receive]:
testing.(*T).Run(0xc0003a9080, 0x1e3c7dc, 0x13, 0x1ec85e0, 0x1092a01)
/Users/runner/hostedtoolcache/go/1.15.2/x64/src/testing/testing.go:1179 +0x3ad
testing.runTests.func1(0xc000303380)
/Users/runner/hostedtoolcache/go/1.15.2/x64/src/testing/testing.go:1449 +0x78
testing.tRunner(0xc000303380, 0xc00020bde0)
/Users/runner/hostedtoolcache/go/1.15.2/x64/src/testing/testing.go:1127 +0xef
testing.runTests(0xc000376400, 0x2697d80, 0x8, 0x8, 0xbfdbdd2b1341ac18, 0x8bb3288b01, 0x26a7ec0, 0x100f9f0)
/Users/runner/hostedtoolcache/go/1.15.2/x64/src/testing/testing.go:1447 +0x2e8
testing.(*M).Run(0xc000201b80, 0x0)
/Users/runner/hostedtoolcache/go/1.15.2/x64/src/testing/testing.go:1357 +0x245
main.main()
_testmain.go:57 +0x138
goroutine 19 [chan receive]:
k8s.io/klog/v2.(*loggingT).flushDaemon(0x26a8140)
/Users/runner/work/octant/octant/vendor/k8s.io/klog/v2/klog.go:1131 +0x8b
created by k8s.io/klog/v2.init.0
/Users/runner/work/octant/octant/vendor/k8s.io/klog/v2/klog.go:416 +0xd8
goroutine 38 [chan receive]:
testing.(*T).Run(0xc0003a9200, 0x1e34ea5, 0xa, 0x1ec85d8, 0xc00005d770)
/Users/runner/hostedtoolcache/go/1.15.2/x64/src/testing/testing.go:1179 +0x3ad
github.com/vmware-tanzu/octant/internal/log.TestStreamer_Stream(0xc0003a9080)
/Users/runner/work/octant/octant/internal/log/streamer_test.go:34 +0x8b
testing.tRunner(0xc0003a9080, 0x1ec85e0)
/Users/runner/hostedtoolcache/go/1.15.2/x64/src/testing/testing.go:1127 +0xef
created by testing.(*T).Run
/Users/runner/hostedtoolcache/go/1.15.2/x64/src/testing/testing.go:1178 +0x386
goroutine 37 [chan receive]:
github.com/vmware-tanzu/octant/internal/log.NewStreamer.func1(0xc000372d20, 0xc00033cc00)
/Users/runner/work/octant/octant/internal/log/streamer.go:59 +0x156
created by github.com/vmware-tanzu/octant/internal/log.NewStreamer
/Users/runner/work/octant/octant/internal/log/streamer.go:58 +0x106
goroutine 39 [chan receive]:
github.com/vmware-tanzu/octant/internal/log.TestStreamer_Stream.func1(0xc0003a9200)
/Users/runner/work/octant/octant/internal/log/streamer_test.go:61 +0x22e
testing.tRunner(0xc0003a9200, 0x1ec85d8)
/Users/runner/hostedtoolcache/go/1.15.2/x64/src/testing/testing.go:1127 +0xef
created by testing.(*T).Run
/Users/runner/hostedtoolcache/go/1.15.2/x64/src/testing/testing.go:1178 +0x386
goroutine 40 [chan receive]:
github.com/vmware-tanzu/octant/internal/log.NewStreamer.func1(0xc000372d80, 0xc00033cc40)
/Users/runner/work/octant/octant/internal/log/streamer.go:59 +0x156
created by github.com/vmware-tanzu/octant/internal/log.NewStreamer
/Users/runner/work/octant/octant/internal/log/streamer.go:58 +0x106
FAIL github.com/vmware-tanzu/octant/internal/log 600.388s
? github.com/vmware-tanzu/octant/internal/mime [no test files]
```
- Octant version (use `octant version`): 0.16.1
|
1.0
|
Streamer test flake - This happens infrequently. Keeping logs here as a reference in case someone wants to tackle this.
```
=== RUN TestStreamer_Stream/in_general
streamer_test.go:55:
Error Trace: streamer_test.go:55
asm_amd64.s:1374
Error: Not equal:
expected: event.Event{Type:"event.octant.dev/app-logs", Data:[]log.Message{log.Message{ID:"1", Date:0, LogLevel:"", Location:"", Text:"", JSON:"", StackTrace:""}}, Err:error(nil)}
actual : event.Event{Type:"event.octant.dev/app-logs", Data:[]log.Message(nil), Err:error(nil)}
Diff:
--- Expected
+++ Actual
@@ -2,13 +2,3 @@
Type: (event.EventType) (len=25) "event.octant.dev/app-logs",
- Data: ([]log.Message) (len=1) {
- (log.Message) {
- ID: (string) (len=1) "1",
- Date: (int64) 0,
- LogLevel: (string) "",
- Location: (string) "",
- Text: (string) "",
- JSON: (string) "",
- StackTrace: (string) ""
- }
- },
+ Data: ([]log.Message) <nil>,
Err: (error) <nil>
Test: TestStreamer_Stream/in_general
panic: test timed out after 10m0s
goroutine 43 [running]:
testing.(*M).startAlarm.func1()
/Users/runner/hostedtoolcache/go/1.15.2/x64/src/testing/testing.go:1628 +0xe5
created by time.goFunc
/Users/runner/hostedtoolcache/go/1.15.2/x64/src/time/sleep.go:167 +0x45
goroutine 1 [chan receive]:
testing.(*T).Run(0xc0003a9080, 0x1e3c7dc, 0x13, 0x1ec85e0, 0x1092a01)
/Users/runner/hostedtoolcache/go/1.15.2/x64/src/testing/testing.go:1179 +0x3ad
testing.runTests.func1(0xc000303380)
/Users/runner/hostedtoolcache/go/1.15.2/x64/src/testing/testing.go:1449 +0x78
testing.tRunner(0xc000303380, 0xc00020bde0)
/Users/runner/hostedtoolcache/go/1.15.2/x64/src/testing/testing.go:1127 +0xef
testing.runTests(0xc000376400, 0x2697d80, 0x8, 0x8, 0xbfdbdd2b1341ac18, 0x8bb3288b01, 0x26a7ec0, 0x100f9f0)
/Users/runner/hostedtoolcache/go/1.15.2/x64/src/testing/testing.go:1447 +0x2e8
testing.(*M).Run(0xc000201b80, 0x0)
/Users/runner/hostedtoolcache/go/1.15.2/x64/src/testing/testing.go:1357 +0x245
main.main()
_testmain.go:57 +0x138
goroutine 19 [chan receive]:
k8s.io/klog/v2.(*loggingT).flushDaemon(0x26a8140)
/Users/runner/work/octant/octant/vendor/k8s.io/klog/v2/klog.go:1131 +0x8b
created by k8s.io/klog/v2.init.0
/Users/runner/work/octant/octant/vendor/k8s.io/klog/v2/klog.go:416 +0xd8
goroutine 38 [chan receive]:
testing.(*T).Run(0xc0003a9200, 0x1e34ea5, 0xa, 0x1ec85d8, 0xc00005d770)
/Users/runner/hostedtoolcache/go/1.15.2/x64/src/testing/testing.go:1179 +0x3ad
github.com/vmware-tanzu/octant/internal/log.TestStreamer_Stream(0xc0003a9080)
/Users/runner/work/octant/octant/internal/log/streamer_test.go:34 +0x8b
testing.tRunner(0xc0003a9080, 0x1ec85e0)
/Users/runner/hostedtoolcache/go/1.15.2/x64/src/testing/testing.go:1127 +0xef
created by testing.(*T).Run
/Users/runner/hostedtoolcache/go/1.15.2/x64/src/testing/testing.go:1178 +0x386
goroutine 37 [chan receive]:
github.com/vmware-tanzu/octant/internal/log.NewStreamer.func1(0xc000372d20, 0xc00033cc00)
/Users/runner/work/octant/octant/internal/log/streamer.go:59 +0x156
created by github.com/vmware-tanzu/octant/internal/log.NewStreamer
/Users/runner/work/octant/octant/internal/log/streamer.go:58 +0x106
goroutine 39 [chan receive]:
github.com/vmware-tanzu/octant/internal/log.TestStreamer_Stream.func1(0xc0003a9200)
/Users/runner/work/octant/octant/internal/log/streamer_test.go:61 +0x22e
testing.tRunner(0xc0003a9200, 0x1ec85d8)
/Users/runner/hostedtoolcache/go/1.15.2/x64/src/testing/testing.go:1127 +0xef
created by testing.(*T).Run
/Users/runner/hostedtoolcache/go/1.15.2/x64/src/testing/testing.go:1178 +0x386
goroutine 40 [chan receive]:
github.com/vmware-tanzu/octant/internal/log.NewStreamer.func1(0xc000372d80, 0xc00033cc40)
/Users/runner/work/octant/octant/internal/log/streamer.go:59 +0x156
created by github.com/vmware-tanzu/octant/internal/log.NewStreamer
/Users/runner/work/octant/octant/internal/log/streamer.go:58 +0x106
FAIL github.com/vmware-tanzu/octant/internal/log 600.388s
? github.com/vmware-tanzu/octant/internal/mime [no test files]
```
- Octant version (use `octant version`): 0.16.1
|
test
|
streamer test flake this happens infrequently keeping logs here as a reference in case someone wants to tackle this run teststreamer stream in general streamer test go error trace streamer test go asm s error not equal expected event event type event octant dev app logs data log message log message id date loglevel location text json stacktrace err error nil actual event event type event octant dev app logs data log message nil err error nil diff expected actual type event eventtype len event octant dev app logs data log message len log message id string len date loglevel string location string text string json string stacktrace string data log message err error test teststreamer stream in general panic test timed out after goroutine testing m startalarm users runner hostedtoolcache go src testing testing go created by time gofunc users runner hostedtoolcache go src time sleep go goroutine testing t run users runner hostedtoolcache go src testing testing go testing runtests users runner hostedtoolcache go src testing testing go testing trunner users runner hostedtoolcache go src testing testing go testing runtests users runner hostedtoolcache go src testing testing go testing m run users runner hostedtoolcache go src testing testing go main main testmain go goroutine io klog loggingt flushdaemon users runner work octant octant vendor io klog klog go created by io klog init users runner work octant octant vendor io klog klog go goroutine testing t run users runner hostedtoolcache go src testing testing go github com vmware tanzu octant internal log teststreamer stream users runner work octant octant internal log streamer test go testing trunner users runner hostedtoolcache go src testing testing go created by testing t run users runner hostedtoolcache go src testing testing go goroutine github com vmware tanzu octant internal log newstreamer users runner work octant octant internal log streamer go created by github com vmware tanzu octant internal log newstreamer 
users runner work octant octant internal log streamer go goroutine github com vmware tanzu octant internal log teststreamer stream users runner work octant octant internal log streamer test go testing trunner users runner hostedtoolcache go src testing testing go created by testing t run users runner hostedtoolcache go src testing testing go goroutine github com vmware tanzu octant internal log newstreamer users runner work octant octant internal log streamer go created by github com vmware tanzu octant internal log newstreamer users runner work octant octant internal log streamer go fail github com vmware tanzu octant internal log github com vmware tanzu octant internal mime octant version use octant version
| 1
|
226,927
| 7,524,806,043
|
IssuesEvent
|
2018-04-13 08:29:43
|
GovReady/govready-q
|
https://api.github.com/repos/GovReady/govready-q
|
closed
|
Tables do not seem to display in Markdown templates
|
enhancement essential priority
|
Using markdown syntax for tables does not appear to generate tables in templates.
|
1.0
|
Tables do not seem to display in Markdown templates - Using markdown syntax for tables does not appear to generate tables in templates.
|
non_test
|
tables do not seem to display in markdown templates using markdown syntax for tables does not appear to generate tables in templates
| 0
|
429,136
| 30,025,837,430
|
IssuesEvent
|
2023-06-27 06:00:32
|
containers/podman-desktop
|
https://api.github.com/repos/containers/podman-desktop
|
closed
|
Confusion between engine provider and machine provider
|
kind/enhancement 👋 area/documentation 📖
|
### Is your enhancement related to a problem? Please describe
Currently the documentation* mixes up the extensions:
> Podman Desktop can control various container engines, such as:
>
> Docker
> Lima
> Podman
\* https://podman-desktop.io/docs/Installation
Some of them provide a container engine, like Docker and Podman...
Some of them provide a virtual machine, like Lima and Podman (Machine).
### Describe the solution you'd like
Maybe it could be made a more clear distinction between the extensions...
Then it could also offer more control over the virtual machine, like start/stop etc ?
Currently the Podman Machine is missing from Linux, only available on Mac/Win.
You can run Podman Engine on the host, and that is what the extension connects to.
The Lima extension does not have any way to install or start the Lima virtual machine.
If you already have one, it can connect to the unix socket of either Podman or Docker.
### Describe alternatives you've considered
The default engine of Lima is containerd, but Podman Desktop can't communicate with it...
Currently containerd/buildkitd requires file system access, and does not have a remote API.
### Additional context
There currently doesn't seem to be any way to provision a Docker Machine?
(the links* only go to Moby, but that project does not feature a machine to run)
\* https://github.com/containers/podman-desktop#multiple-container-engine-support
At least not using Open Source tools, but you can run with Docker Desktop...
With the recently added feature, you can now use Lima to provide a `docker` VM.
|
1.0
|
Confusion between engine provider and machine provider - ### Is your enhancement related to a problem? Please describe
Currently the documentation* mixes up the extensions:
> Podman Desktop can control various container engines, such as:
>
> Docker
> Lima
> Podman
\* https://podman-desktop.io/docs/Installation
Some of them provide a container engine, like Docker and Podman...
Some of them provide a virtual machine, like Lima and Podman (Machine).
### Describe the solution you'd like
Maybe it could be made a more clear distinction between the extensions...
Then it could also offer more control over the virtual machine, like start/stop etc ?
Currently the Podman Machine is missing from Linux, only available on Mac/Win.
You can run Podman Engine on the host, and that is what the extension connects to.
The Lima extension does not have any way to install or start the Lima virtual machine.
If you already have one, it can connect to the unix socket of either Podman or Docker.
### Describe alternatives you've considered
The default engine of Lima is containerd, but Podman Desktop can't communicate with it...
Currently containerd/buildkitd requires file system access, and does not have a remote API.
### Additional context
There currently doesn't seem to be any way to provision a Docker Machine?
(the links* only go to Moby, but that project does not feature a machine to run)
\* https://github.com/containers/podman-desktop#multiple-container-engine-support
At least not using Open Source tools, but you can run with Docker Desktop...
With the recently added feature, you can now use Lima to provide a `docker` VM.
|
non_test
|
confusion between engine provider and machine provider is your enhancement related to a problem please describe currently the documentation mixes up the extensions podman desktop can control various container engines such as docker lima podman some of them provide a container engine like docker and podman some of them provide a virtual machine like lima and podman machine describe the solution you d like maybe it could be made a more clear distinction between the extensions then it could also offer more control over the virtual machine like start stop etc currently the podman machine is missing from linux only available on mac win you can run podman engine on the host and that is what the extension connects to the lima extension does not have anyway to install or start the lima virtual machine if you already have one it can connect to the unix socket of either podman or docker describe alternatives you ve considered the default engine of lima is containerd but podman desktop can t communicate with it currently containerd buildkitd requires file system access and does not have a remote api additional context there currently doesn t seem to be any way to provision a docker machine the links only go to moby but that project does not feature a machine to run at least not using open source tools but you can run with docker desktop with the recently added feature you can now use lima to provider a docker vm
| 0
|
329,529
| 28,280,761,806
|
IssuesEvent
|
2023-04-08 01:32:04
|
bckohan/enum-properties
|
https://api.github.com/repos/bckohan/enum-properties
|
closed
|
Address python 3.11+ deprecation warnings.
|
enhancement test
|
/Users/bckohan/Development/enum-properties/enum_properties/__init__.py:370: DeprecationWarning: In 3.13 classes created inside an enum will not become a member. Use the `member` decorator to keep the current behavior.
|
1.0
|
Address python 3.11+ deprecation warnings. - /Users/bckohan/Development/enum-properties/enum_properties/__init__.py:370: DeprecationWarning: In 3.13 classes created inside an enum will not become a member. Use the `member` decorator to keep the current behavior.
|
test
|
address python deprecation warnings users bckohan development enum properties enum properties init py deprecationwarning in classes created inside an enum will not become a member use the member decorator to keep the current behavior
| 1
|
198,045
| 14,959,862,331
|
IssuesEvent
|
2021-01-27 04:19:08
|
nasa/cFE
|
https://api.github.com/repos/nasa/cFE
|
closed
|
UT_CheckForOpenSockets prototype duplicated
|
bug removed unit-test
|
**Is your feature request related to a problem? Please describe.**
Prototype defined in both cFE and OSAL.
https://github.com/nasa/cFE/blob/983157db90bd205977c52762506ccbf2132837f3/fsw/cfe-core/unit-test/ut_support.h#L656-L671
https://github.com/nasa/osal/blob/f12d42ba58837a645d05eda3479d5f613ebad6c4/src/ut-stubs/utstub-helpers.h#L111-L115
Implemented here:
https://github.com/nasa/osal/blob/f12d42ba58837a645d05eda3479d5f613ebad6c4/src/ut-stubs/utstub-helpers.c#L195-L217
Also violation of magic number use in the implementation, and doesn't seem to actually do what it says (I don't see the close).
**Describe the solution you'd like**
Maybe remove if not useful? If not, at least use the correctly scoped prototype and remove the second definition.
**Describe alternatives you've considered**
None.
**Additional context**
None
**Requester Info**
Jacob Hageman - NASA/GSFC
|
1.0
|
UT_CheckForOpenSockets prototype duplicated - **Is your feature request related to a problem? Please describe.**
Prototype defined in both cFE and OSAL.
https://github.com/nasa/cFE/blob/983157db90bd205977c52762506ccbf2132837f3/fsw/cfe-core/unit-test/ut_support.h#L656-L671
https://github.com/nasa/osal/blob/f12d42ba58837a645d05eda3479d5f613ebad6c4/src/ut-stubs/utstub-helpers.h#L111-L115
Implemented here:
https://github.com/nasa/osal/blob/f12d42ba58837a645d05eda3479d5f613ebad6c4/src/ut-stubs/utstub-helpers.c#L195-L217
Also violation of magic number use in the implementation, and doesn't seem to actually do what it says (I don't see the close).
**Describe the solution you'd like**
Maybe remove if not useful? If not, at least use the correctly scoped prototype and remove the second definition.
**Describe alternatives you've considered**
None.
**Additional context**
None
**Requester Info**
Jacob Hageman - NASA/GSFC
|
test
|
ut checkforopensockets prototype duplicated is your feature request related to a problem please describe prototype defined in both cfe and osal implemented here also violation of magic number use in the implementation and doesn t seem to actually do what it says i don t see the close describe the solution you d like maybe remove if not useful if not at least use the correctly scoped prototype and remove the second definition describe alternatives you ve considered none additional context none requester info jacob hageman nasa gsfc
| 1
|
284,866
| 24,624,831,049
|
IssuesEvent
|
2022-10-16 11:35:25
|
dromara/hertzbeat
|
https://api.github.com/repos/dromara/hertzbeat
|
opened
|
[Task] <Unit Test Case> manager/service/AppServiceTest.java
|
status: volunteer wanted unit test case
|
### Description
Help us impl Unit Test For [manager/service/AppServiceTest.java](https://github.com/dromara/hertzbeat/blob/master/manager/src/test/java/com/usthe/manager/service/AppServiceTest.java)
You can learn and refer to the previous test cases impl.
1. controller example unit case: https://github.com/dromara/hertzbeat/blob/master/manager/src/test/java/com/usthe/manager/controller/AccountControllerTest.java
2. service example unit case: https://github.com/dromara/hertzbeat/blob/master/manager/src/test/java/com/usthe/manager/service/TagServiceTest.java
3. jpa sql dao example unit case: https://github.com/dromara/hertzbeat/blob/master/manager/src/test/java/com/usthe/manager/dao/MonitorDaoTest.java
### Task List
- [ ] Impl Unit Test For [manager/service/AppServiceTest.java](https://github.com/dromara/hertzbeat/blob/master/manager/src/test/java/com/usthe/manager/service/AppServiceTest.java)
|
1.0
|
[Task] <Unit Test Case> manager/service/AppServiceTest.java - ### Description
Help us impl Unit Test For [manager/service/AppServiceTest.java](https://github.com/dromara/hertzbeat/blob/master/manager/src/test/java/com/usthe/manager/service/AppServiceTest.java)
You can learn and refer to the previous test cases impl.
1. controller example unit case: https://github.com/dromara/hertzbeat/blob/master/manager/src/test/java/com/usthe/manager/controller/AccountControllerTest.java
2. service example unit case: https://github.com/dromara/hertzbeat/blob/master/manager/src/test/java/com/usthe/manager/service/TagServiceTest.java
3. jpa sql dao example unit case: https://github.com/dromara/hertzbeat/blob/master/manager/src/test/java/com/usthe/manager/dao/MonitorDaoTest.java
### Task List
- [ ] Impl Unit Test For [manager/service/AppServiceTest.java](https://github.com/dromara/hertzbeat/blob/master/manager/src/test/java/com/usthe/manager/service/AppServiceTest.java)
|
test
|
manager service appservicetest java description help us impl unit test for you can learn and refer to the previous test cases impl controller example unit case service example unit case jpa sql dao example unit case task list impl unit test for
| 1
|
62,552
| 26,036,303,302
|
IssuesEvent
|
2022-12-22 05:32:28
|
QRjet/monitor-dev
|
https://api.github.com/repos/QRjet/monitor-dev
|
closed
|
🛑 [Service - Dev ] Analytics Web App is down
|
status service-dev-analytics-web-app
|
In [`ef7a155`](https://github.com/QRjet/monitor-dev/commit/ef7a15573c031d75c964a48f51d4c09f1371e3ef
), [Service - Dev ] Analytics Web App (https://api.dev.truetale.io/api/analytics/health-check) was **down**:
- HTTP code: 503
- Response time: 6 ms
|
1.0
|
🛑 [Service - Dev ] Analytics Web App is down - In [`ef7a155`](https://github.com/QRjet/monitor-dev/commit/ef7a15573c031d75c964a48f51d4c09f1371e3ef
), [Service - Dev ] Analytics Web App (https://api.dev.truetale.io/api/analytics/health-check) was **down**:
- HTTP code: 503
- Response time: 6 ms
|
non_test
|
🛑 analytics web app is down in analytics web app was down http code response time ms
| 0
|
143,601
| 11,570,454,730
|
IssuesEvent
|
2020-02-20 19:32:49
|
CBICA/CaPTk
|
https://api.github.com/repos/CBICA/CaPTk
|
closed
|
Loading CaPTk on cluster
|
Testathon-Feb-2020 wontfix
|
**Describe the bug**
When trying to load CaPTk on the cluster using: module load captk/1.7.6, an error message appears saying no such file or directory
**To Reproduce**
Steps to reproduce the behavior:
1. On cluster type: module load captk/1.7.6
2. type: captk
**Expected behavior**
captk should load
**Error message **
WARNING: Trying to run CaPTk GUI using software rendering - this might not work on all systems and in those cases, only the CLI will be available.
[0220/113047.396168:WARNING:stack_trace_posix.cc(699)] Failed to open file: /scratch/chitalir/#5898244 (deleted)
Error: No such file or directory
qt.qpa.xcb: QXcbConnection: XCB error: 145 (Unknown), sequence: 175, resource id: 0, major code: 139 (Unknown), minor code: 20
ApplicationPreferences::DisplayPreferences()
font = ""
theme = ""
ApplicationPreferences::DisplayPreferences()
font = "Sans Serif,9,-1,5,50,0,0,0,0,0"
theme = "Dark"
ApplicationPreferences::DisplayPreferences()
font = "Sans Serif,9,-1,5,50,0,0,0,0,0"
theme = "Dark"
p11-kit: couldn't list directory: /etc/pki/ca-trust/source/anchors: Permission denied
The X11 connection broke: I/O error (code 1)
XIO: fatal IO error 0 (Success) on X server "170.166.98.52:16.0"
after 710 requests (710 known processed) with 0 events remaining.
**CaPTk Version**
1.7.6
**Desktop (please complete the following information):**
Windows
|
1.0
|
Loading CaPTk on cluster - **Describe the bug**
When trying to load CaPTk on the cluster using: module load captk/1.7.6, an error message appears saying no such file or directory
**To Reproduce**
Steps to reproduce the behavior:
1. On cluster type: module load captk/1.7.6
2. type: captk
**Expected behavior**
captk should load
**Error message **
WARNING: Trying to run CaPTk GUI using software rendering - this might not work on all systems and in those cases, only the CLI will be available.
[0220/113047.396168:WARNING:stack_trace_posix.cc(699)] Failed to open file: /scratch/chitalir/#5898244 (deleted)
Error: No such file or directory
qt.qpa.xcb: QXcbConnection: XCB error: 145 (Unknown), sequence: 175, resource id: 0, major code: 139 (Unknown), minor code: 20
ApplicationPreferences::DisplayPreferences()
font = ""
theme = ""
ApplicationPreferences::DisplayPreferences()
font = "Sans Serif,9,-1,5,50,0,0,0,0,0"
theme = "Dark"
ApplicationPreferences::DisplayPreferences()
font = "Sans Serif,9,-1,5,50,0,0,0,0,0"
theme = "Dark"
p11-kit: couldn't list directory: /etc/pki/ca-trust/source/anchors: Permission denied
The X11 connection broke: I/O error (code 1)
XIO: fatal IO error 0 (Success) on X server "170.166.98.52:16.0"
after 710 requests (710 known processed) with 0 events remaining.
**CaPTk Version**
1.7.6
**Desktop (please complete the following information):**
Windows
|
test
|
loading captk on cluster describe the bug when trying to load captk on the cluster using module load captk an error message appears saying no such file or directory to reproduce steps to reproduce the behavior on cluster type module load captk type captk expected behavior captk should load error message warning trying to run captk gui using software rendering this might not work on all systems and in those cases only the cli will be available failed to open file scratch chitalir deleted error no such file or directory qt qpa xcb qxcbconnection xcb error unknown sequence resource id major code unknown minor code applicationpreferences displaypreferences font theme applicationpreferences displaypreferences font sans serif theme dark applicationpreferences displaypreferences font sans serif theme dark kit couldn t list directory etc pki ca trust source anchors permission denied the connection broke i o error code xio fatal io error success on x server after requests known processed with events remaining captk version desktop please complete the following information windows
| 1
|
261,612
| 8,243,970,158
|
IssuesEvent
|
2018-09-11 03:26:35
|
rpiambulance/website
|
https://api.github.com/repos/rpiambulance/website
|
closed
|
Form validation for Edit Member
|
BUG Priority 1
|
We need form validation for the edit member page as people are submitting forms without password or dobs and it causes them to not appear in the edit member pages. This needs to be fixed ASAP as it makes it hard to get many new members emails. This also applied to Add Member!
|
1.0
|
Form validation for Edit Member - We need form validation for the edit member page as people are submitting forms without password or dobs and it causes them to not appear in the edit member pages. This needs to be fixed ASAP as it makes it hard to get many new members emails. This also applied to Add Member!
|
non_test
|
form validation for edit member we need form validation for the edit member page as people are submitting forms without password or dobs and it causes them to not appear in the edit member pages this needs to be fixed asap as it makes it hard to get many new members emails this also applied to add member
| 0
|
102,839
| 16,590,751,700
|
IssuesEvent
|
2021-06-01 07:24:46
|
Yoavmartin/vulnerable-node
|
https://api.github.com/repos/Yoavmartin/vulnerable-node
|
closed
|
CVE-2017-15010 (High) detected in https://source.codeaurora.org/quic/chrome4sdp/nodejs/node/v0.12.18, io.jsv4.6.1 - autoclosed
|
security vulnerability
|
## CVE-2017-15010 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>https://source.codeaurora.org/quic/chrome4sdp/nodejs/node/v0.12.18</b>, <b>io.jsv4.6.1</b></p></summary>
<p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
A ReDoS (regular expression denial of service) flaw was found in the tough-cookie module before 2.3.3 for Node.js. An attacker that is able to make an HTTP request using a specially crafted cookie may cause the application to consume an excessive amount of CPU.
<p>Publish Date: 2017-10-04
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2017-15010>CVE-2017-15010</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://nvd.nist.gov/vuln/detail/CVE-2017-15010">https://nvd.nist.gov/vuln/detail/CVE-2017-15010</a></p>
<p>Release Date: 2017-10-04</p>
<p>Fix Resolution: 2.3.3</p>
</p>
</details>
<p></p>
<!-- <REMEDIATE>{"isOpenPROnVulnerability":true,"isPackageBased":true,"isDefaultBranch":true,"packages":[],"baseBranches":["master"],"vulnerabilityIdentifier":"CVE-2017-15010","vulnerabilityDetails":"A ReDoS (regular expression denial of service) flaw was found in the tough-cookie module before 2.3.3 for Node.js. An attacker that is able to make an HTTP request using a specially crafted cookie may cause the application to consume an excessive amount of CPU.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2017-15010","cvss3Severity":"high","cvss3Score":"7.5","cvss3Metrics":{"A":"High","AC":"Low","PR":"None","S":"Unchanged","C":"None","UI":"None","AV":"Network","I":"None"},"extraData":{}}</REMEDIATE> -->
|
True
|
CVE-2017-15010 (High) detected in https://source.codeaurora.org/quic/chrome4sdp/nodejs/node/v0.12.18, io.jsv4.6.1 - autoclosed - ## CVE-2017-15010 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>https://source.codeaurora.org/quic/chrome4sdp/nodejs/node/v0.12.18</b>, <b>io.jsv4.6.1</b></p></summary>
<p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
A ReDoS (regular expression denial of service) flaw was found in the tough-cookie module before 2.3.3 for Node.js. An attacker that is able to make an HTTP request using a specially crafted cookie may cause the application to consume an excessive amount of CPU.
<p>Publish Date: 2017-10-04
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2017-15010>CVE-2017-15010</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://nvd.nist.gov/vuln/detail/CVE-2017-15010">https://nvd.nist.gov/vuln/detail/CVE-2017-15010</a></p>
<p>Release Date: 2017-10-04</p>
<p>Fix Resolution: 2.3.3</p>
</p>
</details>
<p></p>
<!-- <REMEDIATE>{"isOpenPROnVulnerability":true,"isPackageBased":true,"isDefaultBranch":true,"packages":[],"baseBranches":["master"],"vulnerabilityIdentifier":"CVE-2017-15010","vulnerabilityDetails":"A ReDoS (regular expression denial of service) flaw was found in the tough-cookie module before 2.3.3 for Node.js. An attacker that is able to make an HTTP request using a specially crafted cookie may cause the application to consume an excessive amount of CPU.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2017-15010","cvss3Severity":"high","cvss3Score":"7.5","cvss3Metrics":{"A":"High","AC":"Low","PR":"None","S":"Unchanged","C":"None","UI":"None","AV":"Network","I":"None"},"extraData":{}}</REMEDIATE> -->
|
non_test
|
cve high detected in io autoclosed cve high severity vulnerability vulnerable libraries io vulnerability details a redos regular expression denial of service flaw was found in the tough cookie module before for node js an attacker that is able to make an http request using a specially crafted cookie may cause the application to consume an excessive amount of cpu publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution isopenpronvulnerability true ispackagebased true isdefaultbranch true packages basebranches vulnerabilityidentifier cve vulnerabilitydetails a redos regular expression denial of service flaw was found in the tough cookie module before for node js an attacker that is able to make an http request using a specially crafted cookie may cause the application to consume an excessive amount of cpu vulnerabilityurl
| 0
|
358,697
| 25,200,768,553
|
IssuesEvent
|
2022-11-13 04:09:48
|
e-dant/watcher
|
https://api.github.com/repos/e-dant/watcher
|
closed
|
API Changes
|
documentation enhancement
|
This library has become somewhat popular. I expect at least one other user is actively using this project. For them/those users, we need to publish upcoming API changes.
They are not fully fleshed out yet, however, at least these two things may be affected:
1. When we implemented heuristics, the template parameter on `watch` for `delay_ms` may be removed.
2. To fully support rename events, either the `event.where` will be changed to a tuple or another field will be added to the event object.
This will happen before the 1.0 release.
|
1.0
|
API Changes - This library has become somewhat popular. I expect at least one other user is actively using this project. For them/those users, we need to publish upcoming API changes.
They are not fully fleshed out yet, however, at least these two things may be affected:
1. When we implemented heuristics, the template parameter on `watch` for `delay_ms` may be removed.
2. To fully support rename events, either the `event.where` will be changed to a tuple or another field will be added to the event object.
This will happen before the 1.0 release.
|
non_test
|
api changes this library has become somewhat popular i expect at least one other user is actively using this project for them those users we need to publish upcoming api changes they are not fully fleshed out yet however at least these two things may be affected when we implemented heuristics the template parameter on watch for delay ms may be removed to fully support rename events either the event where will be changed to a tuple or another field will be added to the event object this will happen before the release
| 0
|
341,700
| 30,596,712,305
|
IssuesEvent
|
2023-07-21 23:22:25
|
ArctosDB/arctos
|
https://api.github.com/repos/ArctosDB/arctos
|
closed
|
Merge locality error
|
Error Messages Error Explanation testable
|
# Error Text
An error has occurred!
May we suggest:
On search pages, make sure you do not have values in fields that you do not mean to be there. Or try filling in fewer fields, the more likely to get a match.
You may not have sufficient privileges to perform that operation. Consult with your supervisor or Arctos mentor.
ERROR_ID | 00F09D7C-1143-4DA9-9AA7E9BA2139A89F
-- | --
ERROR_TYPE | SQL
ERROR_MESSAGE | ERROR: syntax error at or near "<" Position: 645
ERROR_DETAIL |
ERROR_SQL | select LOCALITY_ID, GEOG_AUTH_REC_ID, SPEC_LOCALITY, DEC_LAT, DEC_LONG, MINIMUM_ELEVATION, MAXIMUM_ELEVATION, ORIG_ELEV_UNITS, MIN_DEPTH, MAX_DEPTH, DEPTH_UNITS, MAX_ERROR_DISTANCE, MAX_ERROR_UNITS, DATUM, LOCALITY_REMARKS, GEOREFERENCE_SOURCE, GEOREFERENCE_PROTOCOL, LOCALITY_NAME, getLocalityAttributesAsJson(locality_id)::varchar localityAttrs, ST_AsText(locality_footprint) as locality_footprint, primary_spatial_data from locality where locality_id != 10937835 and GEOG_AUTH_REC_ID=1039 and 1=1 "> limit 1001
# Where it happened
https://arctos.database.museum/duplicateLocality.cfm
# Steps to get there
Attempting to merge 8 variations of 7385 Beryl Lane (MT: Missoula County). I get an error similar to above when the page first loads, before I change anything. When I try to search the pith (7385 Beryl) in SPEC_LOCALITY, I get a similar error regardless of whether I change the remaining fields to ignore, NULL, or leave them empty. (Trying to follow the Arctos georef video https://www.youtube.com/watch?v=YuP-hr6yvCU, merge localities starting at 29:30.) Similar errors with other localities I'm wanting to merge.
# Problem
**Community response to describe the problem that caused the error**
# Solution
**Community response with directions for how to correct the problem**
|
1.0
|
Merge locality error - # Error Text
An error has occurred!
May we suggest:
On search pages, make sure you do not have values in fields that you do not mean to be there. Or try filling in fewer fields, the more likely to get a match.
You may not have sufficient privileges to perform that operation. Consult with your supervisor or Arctos mentor.
ERROR_ID | 00F09D7C-1143-4DA9-9AA7E9BA2139A89F
-- | --
ERROR_TYPE | SQL
ERROR_MESSAGE | ERROR: syntax error at or near "<" Position: 645
ERROR_DETAIL |
ERROR_SQL | select LOCALITY_ID, GEOG_AUTH_REC_ID, SPEC_LOCALITY, DEC_LAT, DEC_LONG, MINIMUM_ELEVATION, MAXIMUM_ELEVATION, ORIG_ELEV_UNITS, MIN_DEPTH, MAX_DEPTH, DEPTH_UNITS, MAX_ERROR_DISTANCE, MAX_ERROR_UNITS, DATUM, LOCALITY_REMARKS, GEOREFERENCE_SOURCE, GEOREFERENCE_PROTOCOL, LOCALITY_NAME, getLocalityAttributesAsJson(locality_id)::varchar localityAttrs, ST_AsText(locality_footprint) as locality_footprint, primary_spatial_data from locality where locality_id != 10937835 and GEOG_AUTH_REC_ID=1039 and 1=1 "> limit 1001
# Where it happened
https://arctos.database.museum/duplicateLocality.cfm
# Steps to get there
Attempting to merge 8 variations of 7385 Beryl Lane (MT: Missoula County). I get an error similar to above when the page first loads, before I change anything. When I try to search the pith (7385 Beryl) in SPEC_LOCALITY, I get a similar error regardless of whether I change the remaining fields to ignore, NULL, or leave them empty. (Trying to follow the Arctos georef video https://www.youtube.com/watch?v=YuP-hr6yvCU, merge localities starting at 29:30.) Similar errors with other localities I'm wanting to merge.
# Problem
**Community response to describe the problem that caused the error**
# Solution
**Community response with directions for how to correct the problem**
|
test
|
merge locality error error text an error has occurred may we suggest on search pages make sure you do not have values in fields that you do not mean to be there or try filling in fewer fields the more likely to get a match you may not have sufficient privileges to perform that operation consult with your supervisor or arctos mentor error id error type sql error message error syntax error at or near position error detail error sql select locality id geog auth rec id spec locality dec lat dec long minimum elevation maximum elevation orig elev units min depth max depth depth units max error distance max error units datum locality remarks georeference source georeference protocol locality name getlocalityattributesasjson locality id varchar localityattrs st astext locality footprint as locality footprint primary spatial data from locality where locality id and geog auth rec id and limit where it happened steps to get there attempting to merge variations of beryl lane mt missoula county i get an error similar to above when the page first loads before i change anything when i try to search the pith beryl in spec locality i get a similar error regardless of whether i change the remaining fields to ignore null or leave them empty trying to follow the arctos georef video merge localities starting at similar errors with other localities i m wanting to merge problem community response to describe the problem that caused the error solution community response with directions for how to correct the problem
| 1
|
7,779
| 5,200,362,258
|
IssuesEvent
|
2017-01-23 23:36:57
|
tbs-sct/gcconnex
|
https://api.github.com/repos/tbs-sct/gcconnex
|
closed
|
GSA results to display title and description in same language (related to language split)
|
bug search Usability
|
As a frequent user, when I search for content (where the language of the content has been split in English and French), I need the GSA search result to display the title and description in the same language so that I can find what I'm looking for quicker and in my language of choice.
ex: I search "OutilsGC" in the GSA (GCconnex), the results show me content with "GCTools" as the title, and a French description (see screen shot).

There is also an issue when using the group filter on the group page (https://gcconnex.gc.ca/groups/all?filter=yours).
For content where the language has been split, I get different search results when filtering groups using keyword, depending on if I'm navigating GCconnex in English or in French. Furthermore, I also get different results when filtering groups and using the English work and the French equivalence (GCTools vs OutilsGC).
Example:
Navigating in ENGLISH and filtering groups using English word vs French word.


Navigating in FRENCH and filtering groups using English word vs French word.


|
True
|
GSA results to display title and description in same language (related to language split) - As a frequent user, when I search for content (where the language of the content has been split in English and French), I need the GSA search result to display the title and description in the same language so that I can find what I'm looking for quicker and in my language of choice.
ex: I search "OutilsGC" in the GSA (GCconnex), the results show me content with "GCTools" as the title, and a French description (see screen shot).

There is also an issue when using the group filter on the group page (https://gcconnex.gc.ca/groups/all?filter=yours).
For content where the language has been split, I get different search results when filtering groups using keyword, depending on if I'm navigating GCconnex in English or in French. Furthermore, I also get different results when filtering groups and using the English work and the French equivalence (GCTools vs OutilsGC).
Example:
Navigating in ENGLISH and filtering groups using English word vs French word.


Navigating in FRENCH and filtering groups using English word vs French word.


|
non_test
|
gsa results to display title and description in same language related to language split as a frequent user when i search for content where the language of the content has been split in english and french i need the gsa search result to display the title and description in the same language so that i can find what i m looking for quicker and in my language of choice ex i search outilsgc in the gsa gcconnex the results show me content with gctools as the title and a french description see screen shot there is also an issue when using the group filter on the group page for content where the language has been split i get different search results when filtering groups using keyword depending on if i m navigating gcconnex in english or in french furthermore i also get different results when filtering groups and using the english work and the french equivalence gctools vs outilsgc example navigating in english and filtering groups using english word vs french word navigating in french and filtering groups using english word vs french word
| 0
|
89,201
| 8,196,894,042
|
IssuesEvent
|
2018-08-31 11:30:24
|
owncloud/QA
|
https://api.github.com/repos/owncloud/QA
|
closed
|
Refactor Acceptance tests to use a globally configurable guzzle
|
2 - Developing Acceptance tests QA-team
|
Related to #581 - when using self-signed certificates, guzzle requires the option `new Client(['defaults' => [ 'verify' => false ]]);`
Instead of copy+pasting it all over the place, we should aim for creating a `ClientFactory` that receives settings from behat
|
1.0
|
Refactor Acceptance tests to use a globally configurable guzzle - Related to #581 - when using self-signed certificates, guzzle requires the option `new Client(['defaults' => [ 'verify' => false ]]);`
Instead of copy+pasting it all over the place, we should aim for creating a `ClientFactory` that receives settings from behat
|
test
|
refactor acceptance tests to use a globally configurable guzzle related to when using self signed certificates guzzle requires the option new client instead of copy pasting it all over the place we should aim for creating a clientfactory that receives settings from behat
| 1
|
274,448
| 23,840,041,381
|
IssuesEvent
|
2022-09-06 09:28:48
|
hoppscotch/hoppscotch
|
https://api.github.com/repos/hoppscotch/hoppscotch
|
closed
|
[bug]: Can't migrate my GitHub account with my Google account
|
bug need testing
|
### Is there an existing issue for this?
- [X] I have searched the existing issues
### Current behavior
I already have the account on Hoppscotch, one with my Google account (my main) and another one with my GitHub account.
I would like to migrate my GitHub account into my Google settings. But can't.

When I click to 'Yes' and connect to my Google account, seems not working, because after disconnected and try with my GitHub account, this message show again.
### Steps to reproduce
1. Try to connect with GitHub
2. Got the message (see the screenshot)
3. Click on 'Yes' to connect
4. Connect with Google account
5. Connected to the main account
6. Disconnect
7. Retry with GitHub account
8. Got the same error
### Environment
Production
### Version
Cloud
|
1.0
|
[bug]: Can't migrate my GitHub account with my Google account - ### Is there an existing issue for this?
- [X] I have searched the existing issues
### Current behavior
I already have the account on Hoppscotch, one with my Google account (my main) and another one with my GitHub account.
I would like to migrate my GitHub account into my Google settings. But can't.

When I click to 'Yes' and connect to my Google account, seems not working, because after disconnected and try with my GitHub account, this message show again.
### Steps to reproduce
1. Try to connect with GitHub
2. Got the message (see the screenshot)
3. Click on 'Yes' to connect
4. Connect with Google account
5. Connected to the main account
6. Disconnect
7. Retry with GitHub account
8. Got the same error
### Environment
Production
### Version
Cloud
|
test
|
can t migrate my github account with my google account is there an existing issue for this i have searched the existing issues current behavior i already have the account on hoppscotch one with my google account my main and another one with my github account i would like to migrate my github account into my google settings but can t when i click to yes and connect to my google account seems not working because after disconnected and try with my github account this message show again steps to reproduce try to connect with github got the message see the screenshot click on yes to connect connect with google account connected to the main account disconnect retry with github account got the same error environment production version cloud
| 1
|
43,951
| 5,578,518,634
|
IssuesEvent
|
2017-03-28 12:38:00
|
zimmerman-zimmerman/OIPA
|
https://api.github.com/repos/zimmerman-zimmerman/OIPA
|
closed
|
Docstore
|
API In Progress Test
|
https://github.com/zimmerman-zimmerman/OIPA/wiki/roadmap
Work on the Nr. 1 Docstore:
Many IATI actvities consist of MetaData descriptions based on the IATI standard (Title, ActivityId, Country etc.etc.). One specific field allows a publishers to *reference* to a file (you can not actually include a binary file inside of the XML). This reference (a document-link element) can be found here:
http://iatistandard.org/202/organisation-standard/iati-organisations/iati-organisation/document-link/
This document-link is described according to the standard as:
"A link to an online, publicly accessible web page or document." and has the attributes @url and @format
https://www.oipa.nl/api/activities/ - see "document_link url, category and title narratives"
**Additional documentation**
IATI Document category: http://iatistandard.org/202/codelists/DocumentCategory/
IATI Document File Format: http://iatistandard.org/202/codelists/FileFormat/
Also see additional documentation: http://iatistandard.org/202/organisation-standard/overview/documents/
|
1.0
|
Docstore - https://github.com/zimmerman-zimmerman/OIPA/wiki/roadmap
Work on the Nr. 1 Docstore:
Many IATI actvities consist of MetaData descriptions based on the IATI standard (Title, ActivityId, Country etc.etc.). One specific field allows a publishers to *reference* to a file (you can not actually include a binary file inside of the XML). This reference (a document-link element) can be found here:
http://iatistandard.org/202/organisation-standard/iati-organisations/iati-organisation/document-link/
This document-link is described according to the standard as:
"A link to an online, publicly accessible web page or document." and has the attributes @url and @format
https://www.oipa.nl/api/activities/ - see "document_link url, category and title narratives"
**Additional documentation**
IATI Document category: http://iatistandard.org/202/codelists/DocumentCategory/
IATI Document File Format: http://iatistandard.org/202/codelists/FileFormat/
Also see additional documentation: http://iatistandard.org/202/organisation-standard/overview/documents/
|
test
|
docstore work on the nr docstore many iati actvities consist of metadata descriptions based on the iati standard title activityid country etc etc one specific field allows a publishers to reference to a file you can not actually include a binary file inside of the xml this reference a document link element can be found here this document link is described according to the standard as a link to an online publicly accessible web page or document and has the attributes url and format see document link url category and title narratives additional documentation iati document category iati document file format also see additional documentation
| 1
|
720,759
| 24,805,505,665
|
IssuesEvent
|
2022-10-25 03:52:53
|
spidernet-io/spiderpool
|
https://api.github.com/repos/spidernet-io/spiderpool
|
opened
|
Edit subnet is rejected
|
issue/not-assign priority/important-soon kind/bug
|
Describe the version
version about:
spiderpool
- v0.2.2
**Describe the bug**
A Subnet, without any IP assigned to the IPPool,But it is not possible to change the size of the ips
**Output of the failure**
```
# Please edit the object below. Lines beginning with a '#' will be ignored,
# and an empty file will abort the edit. If an error occurs while saving this file will be
# reopened with the relevant failures.
#
# spidersubnets.spiderpool.spidernet.io "v4-ss-10" was not valid:
# * spec.ips: Forbidden: remove some IP ranges [10.118.88.101-10.118.88.201] that is being used, total IP addresses of an Subnet are jointly determined by 'spec.ips' and 'spec.excludeIPs'
#
apiVersion: spiderpool.spidernet.io/v1
kind: SpiderSubnet
metadata:
annotations:
kubectl.kubernetes.io/last-applied-configuration: |
{"apiVersion":"spiderpool.spidernet.io/v1","kind":"SpiderSubnet","metadata":{"annotations":{},"deletionGracePeriodSeconds":0,"finalizers":["spiderpool.spidernet.io"],"generation":2,"name":"v4-ss-10","resourceVersion":"43769"},"spec":{"gateway":"10.118.88.1","ipVersion":4,"ips":["10.118.88.2-10.118.88.201"],"subnet":"10.118.88.0/24","vlan":0}}
creationTimestamp: "2022-10-25T03:23:49Z"
finalizers:
- spiderpool.spidernet.io
generation: 1
name: v4-ss-10
resourceVersion: "127189"
uid: 1f9b1561-a7a8-4a14-80ba-105bf77198be
spec:
gateway: 10.118.88.1
ipVersion: 4
ips:
- 10.118.88.2-10.118.88.100
subnet: 10.118.88.0/24
vlan: 0
status:
allocatedIPCount: 0
totalIPCount: 200
```
|
1.0
|
Edit subnet is rejected - Describe the version
version about:
spiderpool
- v0.2.2
**Describe the bug**
A Subnet, without any IP assigned to the IPPool,But it is not possible to change the size of the ips
**Output of the failure**
```
# Please edit the object below. Lines beginning with a '#' will be ignored,
# and an empty file will abort the edit. If an error occurs while saving this file will be
# reopened with the relevant failures.
#
# spidersubnets.spiderpool.spidernet.io "v4-ss-10" was not valid:
# * spec.ips: Forbidden: remove some IP ranges [10.118.88.101-10.118.88.201] that is being used, total IP addresses of an Subnet are jointly determined by 'spec.ips' and 'spec.excludeIPs'
#
apiVersion: spiderpool.spidernet.io/v1
kind: SpiderSubnet
metadata:
annotations:
kubectl.kubernetes.io/last-applied-configuration: |
{"apiVersion":"spiderpool.spidernet.io/v1","kind":"SpiderSubnet","metadata":{"annotations":{},"deletionGracePeriodSeconds":0,"finalizers":["spiderpool.spidernet.io"],"generation":2,"name":"v4-ss-10","resourceVersion":"43769"},"spec":{"gateway":"10.118.88.1","ipVersion":4,"ips":["10.118.88.2-10.118.88.201"],"subnet":"10.118.88.0/24","vlan":0}}
creationTimestamp: "2022-10-25T03:23:49Z"
finalizers:
- spiderpool.spidernet.io
generation: 1
name: v4-ss-10
resourceVersion: "127189"
uid: 1f9b1561-a7a8-4a14-80ba-105bf77198be
spec:
gateway: 10.118.88.1
ipVersion: 4
ips:
- 10.118.88.2-10.118.88.100
subnet: 10.118.88.0/24
vlan: 0
status:
allocatedIPCount: 0
totalIPCount: 200
```
|
non_test
|
edit subnet is rejected describe the version version about spiderpool describe the bug a subnet without any ip assigned to the ippool,but it is not possible to change the size of the ips output of the failure please edit the object below lines beginning with a will be ignored and an empty file will abort the edit if an error occurs while saving this file will be reopened with the relevant failures spidersubnets spiderpool spidernet io ss was not valid spec ips forbidden remove some ip ranges that is being used total ip addresses of an subnet are jointly determined by spec ips and spec excludeips apiversion spiderpool spidernet io kind spidersubnet metadata annotations kubectl kubernetes io last applied configuration apiversion spiderpool spidernet io kind spidersubnet metadata annotations deletiongraceperiodseconds finalizers generation name ss resourceversion spec gateway ipversion ips subnet vlan creationtimestamp finalizers spiderpool spidernet io generation name ss resourceversion uid spec gateway ipversion ips subnet vlan status allocatedipcount totalipcount
| 0
|
179,695
| 21,580,294,154
|
IssuesEvent
|
2022-05-02 17:58:05
|
vincenzodistasio97/excel-to-json
|
https://api.github.com/repos/vincenzodistasio97/excel-to-json
|
opened
|
CVE-2021-23386 (Medium) detected in dns-packet-1.3.1.tgz
|
security vulnerability
|
## CVE-2021-23386 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>dns-packet-1.3.1.tgz</b></p></summary>
<p>An abstract-encoding compliant module for encoding / decoding DNS packets</p>
<p>Library home page: <a href="https://registry.npmjs.org/dns-packet/-/dns-packet-1.3.1.tgz">https://registry.npmjs.org/dns-packet/-/dns-packet-1.3.1.tgz</a></p>
<p>Path to dependency file: /client/package.json</p>
<p>Path to vulnerable library: /client/node_modules/dns-packet/package.json</p>
<p>
Dependency Hierarchy:
- react-scripts-1.1.1.tgz (Root Library)
- webpack-dev-server-2.9.4.tgz
- bonjour-3.5.0.tgz
- multicast-dns-6.2.3.tgz
- :x: **dns-packet-1.3.1.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/vincenzodistasio97/excel-to-json/commit/e367d4db4134dc676344b2b9fb2443300bd3c9c7">e367d4db4134dc676344b2b9fb2443300bd3c9c7</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
This affects the package dns-packet before 5.2.2. It creates buffers with allocUnsafe and does not always fill them before forming network packets. This can expose internal application memory over unencrypted network when querying crafted invalid domain names.
<p>Publish Date: 2021-05-20
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-23386>CVE-2021-23386</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>6.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: Low
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: None
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-23386">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-23386</a></p>
<p>Release Date: 2021-05-20</p>
<p>Fix Resolution (dns-packet): 1.3.2</p>
<p>Direct dependency fix Resolution (react-scripts): 1.1.2</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
True
|
CVE-2021-23386 (Medium) detected in dns-packet-1.3.1.tgz - ## CVE-2021-23386 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>dns-packet-1.3.1.tgz</b></p></summary>
<p>An abstract-encoding compliant module for encoding / decoding DNS packets</p>
<p>Library home page: <a href="https://registry.npmjs.org/dns-packet/-/dns-packet-1.3.1.tgz">https://registry.npmjs.org/dns-packet/-/dns-packet-1.3.1.tgz</a></p>
<p>Path to dependency file: /client/package.json</p>
<p>Path to vulnerable library: /client/node_modules/dns-packet/package.json</p>
<p>
Dependency Hierarchy:
- react-scripts-1.1.1.tgz (Root Library)
- webpack-dev-server-2.9.4.tgz
- bonjour-3.5.0.tgz
- multicast-dns-6.2.3.tgz
- :x: **dns-packet-1.3.1.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/vincenzodistasio97/excel-to-json/commit/e367d4db4134dc676344b2b9fb2443300bd3c9c7">e367d4db4134dc676344b2b9fb2443300bd3c9c7</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
This affects the package dns-packet before 5.2.2. It creates buffers with allocUnsafe and does not always fill them before forming network packets. This can expose internal application memory over unencrypted network when querying crafted invalid domain names.
<p>Publish Date: 2021-05-20
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-23386>CVE-2021-23386</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>6.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: Low
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: None
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-23386">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-23386</a></p>
<p>Release Date: 2021-05-20</p>
<p>Fix Resolution (dns-packet): 1.3.2</p>
<p>Direct dependency fix Resolution (react-scripts): 1.1.2</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
non_test
|
cve medium detected in dns packet tgz cve medium severity vulnerability vulnerable library dns packet tgz an abstract encoding compliant module for encoding decoding dns packets library home page a href path to dependency file client package json path to vulnerable library client node modules dns packet package json dependency hierarchy react scripts tgz root library webpack dev server tgz bonjour tgz multicast dns tgz x dns packet tgz vulnerable library found in head commit a href found in base branch master vulnerability details this affects the package dns packet before it creates buffers with allocunsafe and does not always fill them before forming network packets this can expose internal application memory over unencrypted network when querying crafted invalid domain names publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required low user interaction none scope unchanged impact metrics confidentiality impact high integrity impact none availability impact none for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution dns packet direct dependency fix resolution react scripts step up your open source security game with whitesource
| 0
|
166,183
| 12,906,013,042
|
IssuesEvent
|
2020-07-15 00:13:48
|
elastic/kibana
|
https://api.github.com/repos/elastic/kibana
|
closed
|
[test-failed]: Chrome X-Pack UI Functional Tests1.x-pack/test/functional/apps/visualize/feature_controls/visualize_security·ts - Visualize feature controls security global visualize read-only privileges "after all" hook for "allows clearing the currently loaded saved query"
|
Team:KibanaApp failed-test test-cloud test-ece
|
**Version: 7.7.0**
**Class: Chrome X-Pack UI Functional Tests1.x-pack/test/functional/apps/visualize/feature_controls/visualize_security·ts**
**Stack Trace:**
[Error: Timeout of 360000ms exceeded. For async tests and hooks, ensure "done()" is called; if returning a Promise, ensure it resolves. (/var/lib/jenkins/workspace/elastic+estf-cloud-kibana-tests/JOB/xpackGrp2/TASK/saas_run_kibana_tests/node/linux-immutable/ci/cloud/common/build/kibana/x-pack/test/functional/apps/visualize/feature_controls/visualize_security.ts)]
_Platform: cloud_
_Build Num: 79_
|
3.0
|
[test-failed]: Chrome X-Pack UI Functional Tests1.x-pack/test/functional/apps/visualize/feature_controls/visualize_security·ts - Visualize feature controls security global visualize read-only privileges "after all" hook for "allows clearing the currently loaded saved query" - **Version: 7.7.0**
**Class: Chrome X-Pack UI Functional Tests1.x-pack/test/functional/apps/visualize/feature_controls/visualize_security·ts**
**Stack Trace:**
[Error: Timeout of 360000ms exceeded. For async tests and hooks, ensure "done()" is called; if returning a Promise, ensure it resolves. (/var/lib/jenkins/workspace/elastic+estf-cloud-kibana-tests/JOB/xpackGrp2/TASK/saas_run_kibana_tests/node/linux-immutable/ci/cloud/common/build/kibana/x-pack/test/functional/apps/visualize/feature_controls/visualize_security.ts)]
_Platform: cloud_
_Build Num: 79_
|
test
|
chrome x pack ui functional x pack test functional apps visualize feature controls visualize security·ts visualize feature controls security global visualize read only privileges after all hook for allows clearing the currently loaded saved query version class chrome x pack ui functional x pack test functional apps visualize feature controls visualize security·ts stack trace platform cloud build num
| 1
|
18,544
| 3,696,625,283
|
IssuesEvent
|
2016-02-27 03:44:04
|
softlayer/sl-ember-components
|
https://api.github.com/repos/softlayer/sl-ember-components
|
closed
|
Unit | Component | sl modal footer: There are no references to Ember.$, $ or jQuery
|
sl-modal-footer tests
|
```
not ok 533 PhantomJS 1.9 - Unit | Component | sl modal footer: There are no references to Ember.$, $ or jQuery
---
actual: >
null
message: >
Died on test #1 at test (http://localhost:7357/assets/test-support.js:3025)
at testWrapper (http://localhost:7357/assets/test-support.js:6192)
at test (http://localhost:7357/assets/test-support.js:6205)
at http://localhost:7357/assets/tests.js:25322
at http://localhost:7357/assets/vendor.js:152
at tryFinally (http://localhost:7357/assets/vendor.js:33)
at http://localhost:7357/assets/vendor.js:158
at http://localhost:7357/assets/test-loader.js:60
at http://localhost:7357/assets/test-loader.js:51
at http://localhost:7357/assets/test-loader.js:82
at http://localhost:7357/assets/test-support.js:6024: Attempted to wrap $ which is already wrapped
Log: |
...
```
|
1.0
|
Unit | Component | sl modal footer: There are no references to Ember.$, $ or jQuery - ```
not ok 533 PhantomJS 1.9 - Unit | Component | sl modal footer: There are no references to Ember.$, $ or jQuery
---
actual: >
null
message: >
Died on test #1 at test (http://localhost:7357/assets/test-support.js:3025)
at testWrapper (http://localhost:7357/assets/test-support.js:6192)
at test (http://localhost:7357/assets/test-support.js:6205)
at http://localhost:7357/assets/tests.js:25322
at http://localhost:7357/assets/vendor.js:152
at tryFinally (http://localhost:7357/assets/vendor.js:33)
at http://localhost:7357/assets/vendor.js:158
at http://localhost:7357/assets/test-loader.js:60
at http://localhost:7357/assets/test-loader.js:51
at http://localhost:7357/assets/test-loader.js:82
at http://localhost:7357/assets/test-support.js:6024: Attempted to wrap $ which is already wrapped
Log: |
...
```
|
test
|
unit component sl modal footer there are no references to ember or jquery not ok phantomjs unit component sl modal footer there are no references to ember or jquery actual null message died on test at test at testwrapper at test at at at tryfinally at at at at at attempted to wrap which is already wrapped log
| 1
|
22,366
| 6,245,801,203
|
IssuesEvent
|
2017-07-13 01:06:48
|
xceedsoftware/wpftoolkit
|
https://api.github.com/repos/xceedsoftware/wpftoolkit
|
closed
|
DropDownButton fails to auto-close when inside a Popup with StaysOpen="False"
|
CodePlex
|
<b>kentcb[CodePlex]</b> <br />When a DropDownButton is inside a Popup that has StaysOpen="False", it fails to close when you click outside of the DropDownContent area. This is due to faulty mouse capture logic. Here is a simple repro to show the problem in action:

<Window x:Class="XCTKTest.MainWindow"
        xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
        xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
        xmlns:xctk="http://schemas.xceed.com/wpf/xaml/toolkit"
        Title="MainWindow" Height="350" Width="525">
    <Window.Resources>
        <Style TargetType="TextBlock">
            <Setter Property="Margin" Value="3"/>
        </Style>
        <Style TargetType="CheckBox">
            <Setter Property="Margin" Value="3"/>
        </Style>
        <Style TargetType="ToggleButton">
            <Setter Property="Margin" Value="3"/>
        </Style>
        <Style TargetType="xctk:DropDownButton">
            <Setter Property="Margin" Value="3"/>
        </Style>
    </Window.Resources>
    <StackPanel>
        <TextBlock TextWrapping="Wrap">
            Below is a toggle button that will open up a Popup when clicked. Inside the Popup is a DropDownButton. If the Popup has StaysOpen="True" then everything works
            fine. However, if the Popup has StaysOpen="False" then closing the DropDownButton requires clicking on it again, instead of just being able to click anywhere
            outside of it.
        </TextBlock>
        <CheckBox x:Name="staysOpenCheckBox" IsChecked="True">Popup.StaysOpen</CheckBox>
        <ToggleButton x:Name="toggleButton">
            Click to Open Popup
        </ToggleButton>
        <Popup IsOpen="{Binding IsChecked, ElementName=toggleButton}" Placement="Bottom" PlacementTarget="{Binding ElementName=toggleButton}" AllowsTransparency="True" StaysOpen="{Binding IsChecked, ElementName=staysOpenCheckBox}">
            <Border BorderBrush="Black" CornerRadius="3" BorderThickness="2" Padding="5" Background="LightGray">
                <DockPanel>
                    <TextBlock DockPanel.Dock="Top">
                        This is the Popup. This text gives you somewhere to click outside the DropDownButton, but inside the Popup content.
                    </TextBlock>

                    <xctk:DropDownButton DockPanel.Dock="Top">
                        <xctk:DropDownButton.Content>
                            <TextBlock>
                                This is the DropDownButton Content.
                            </TextBlock>
                        </xctk:DropDownButton.Content>
                        <xctk:DropDownButton.DropDownContent>
                            <StackPanel>
                                <TextBlock>
                                    This is the DropDownButton DropDownContent.
                                </TextBlock>
                                <CheckBox>A CheckBox</CheckBox>
                            </StackPanel>
                        </xctk:DropDownButton.DropDownContent>
                    </xctk:DropDownButton>

                    <TextBlock DockPanel.Dock="Top">
                        Here is a ComboBox to show it behaves correctly regardless.
                    </TextBlock>

                    <ComboBox DockPanel.Dock="Top">
                        <ComboBoxItem>One</ComboBoxItem>
                        <ComboBoxItem>Two</ComboBoxItem>
                        <ComboBoxItem>Three</ComboBoxItem>
                    </ComboBox>
                </DockPanel>
            </Border>
        </Popup>
        <TextBlock TextWrapping="Wrap">
            Here is the same DropDownButton outside a Popup.
        </TextBlock>
        <xctk:DropDownButton>
            <xctk:DropDownButton.Content>
                <TextBlock>
                    This is the DropDownButton Content.
                </TextBlock>
            </xctk:DropDownButton.Content>
            <xctk:DropDownButton.DropDownContent>
                <StackPanel>
                    <TextBlock>
                        This is the DropDownButton DropDownContent.
                    </TextBlock>
                    <CheckBox>A CheckBox</CheckBox>
                </StackPanel>
            </xctk:DropDownButton.DropDownContent>
        </xctk:DropDownButton>
    </StackPanel>
</Window>

I have a fix that I will attach as a comment.
|
1.0
|
DropDownButton fails to auto-close when inside a Popup with StaysOpen="False" - <b>kentcb[CodePlex]</b> <br />When a DropDownButton is inside a Popup that has StaysOpen="False", it fails to close when you click outside of the DropDownContent area. This is due to faulty mouse capture logic. Here is a simple repro to show the problem in action:

<Window x:Class="XCTKTest.MainWindow"
        xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
        xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
        xmlns:xctk="http://schemas.xceed.com/wpf/xaml/toolkit"
        Title="MainWindow" Height="350" Width="525">
    <Window.Resources>
        <Style TargetType="TextBlock">
            <Setter Property="Margin" Value="3"/>
        </Style>
        <Style TargetType="CheckBox">
            <Setter Property="Margin" Value="3"/>
        </Style>
        <Style TargetType="ToggleButton">
            <Setter Property="Margin" Value="3"/>
        </Style>
        <Style TargetType="xctk:DropDownButton">
            <Setter Property="Margin" Value="3"/>
        </Style>
    </Window.Resources>
    <StackPanel>
        <TextBlock TextWrapping="Wrap">
            Below is a toggle button that will open up a Popup when clicked. Inside the Popup is a DropDownButton. If the Popup has StaysOpen="True" then everything works
            fine. However, if the Popup has StaysOpen="False" then closing the DropDownButton requires clicking on it again, instead of just being able to click anywhere
            outside of it.
        </TextBlock>
        <CheckBox x:Name="staysOpenCheckBox" IsChecked="True">Popup.StaysOpen</CheckBox>
        <ToggleButton x:Name="toggleButton">
            Click to Open Popup
        </ToggleButton>
        <Popup IsOpen="{Binding IsChecked, ElementName=toggleButton}" Placement="Bottom" PlacementTarget="{Binding ElementName=toggleButton}" AllowsTransparency="True" StaysOpen="{Binding IsChecked, ElementName=staysOpenCheckBox}">
            <Border BorderBrush="Black" CornerRadius="3" BorderThickness="2" Padding="5" Background="LightGray">
                <DockPanel>
                    <TextBlock DockPanel.Dock="Top">
                        This is the Popup. This text gives you somewhere to click outside the DropDownButton, but inside the Popup content.
                    </TextBlock>

                    <xctk:DropDownButton DockPanel.Dock="Top">
                        <xctk:DropDownButton.Content>
                            <TextBlock>
                                This is the DropDownButton Content.
                            </TextBlock>
                        </xctk:DropDownButton.Content>
                        <xctk:DropDownButton.DropDownContent>
                            <StackPanel>
                                <TextBlock>
                                    This is the DropDownButton DropDownContent.
                                </TextBlock>
                                <CheckBox>A CheckBox</CheckBox>
                            </StackPanel>
                        </xctk:DropDownButton.DropDownContent>
                    </xctk:DropDownButton>

                    <TextBlock DockPanel.Dock="Top">
                        Here is a ComboBox to show it behaves correctly regardless.
                    </TextBlock>

                    <ComboBox DockPanel.Dock="Top">
                        <ComboBoxItem>One</ComboBoxItem>
                        <ComboBoxItem>Two</ComboBoxItem>
                        <ComboBoxItem>Three</ComboBoxItem>
                    </ComboBox>
                </DockPanel>
            </Border>
        </Popup>
        <TextBlock TextWrapping="Wrap">
            Here is the same DropDownButton outside a Popup.
        </TextBlock>
        <xctk:DropDownButton>
            <xctk:DropDownButton.Content>
                <TextBlock>
                    This is the DropDownButton Content.
                </TextBlock>
            </xctk:DropDownButton.Content>
            <xctk:DropDownButton.DropDownContent>
                <StackPanel>
                    <TextBlock>
                        This is the DropDownButton DropDownContent.
                    </TextBlock>
                    <CheckBox>A CheckBox</CheckBox>
                </StackPanel>
            </xctk:DropDownButton.DropDownContent>
        </xctk:DropDownButton>
    </StackPanel>
</Window>

I have a fix that I will attach as a comment.
|
non_test
|
dropdownbutton fails to auto close when inside a popup with staysopen false kentcb when a dropdownbutton is inside a popup that has staysopen quotfalsequot it fails to close when you click outside of the dropdowncontent area this is due to faulty mouse capture logic here is a simple repro to show the problem in action nbsp ltwindow x class quotxctktest mainwindowquot xmlns quot xmlns x quot xmlns xctk quot title quotmainwindowquot height width ltwindow resourcesgt ltstyle targettype quottextblockquotgt ltsetter property quotmarginquot value gt lt stylegt ltstyle targettype quotcheckboxquotgt ltsetter property quotmarginquot value gt lt stylegt ltstyle targettype quottogglebuttonquotgt ltsetter property quotmarginquot value gt lt stylegt ltstyle targettype quotxctk dropdownbuttonquotgt ltsetter property quotmarginquot value gt lt stylegt lt window resourcesgt ltstackpanelgt lttextblock textwrapping quotwrapquotgt below is a toggle button that will open up a popup when clicked inside the popup is a dropdownbutton if the popup has staysopen quottruequot then everything works fine however if the popup has staysopen quotfalsequot then closing the dropdownbutton requires clicking on it again instead of just being able to click anywhere outside of it lt textblockgt ltcheckbox x name quotstaysopencheckboxquot ischecked quottruequotgtpopup staysopenlt checkboxgt lttogglebutton x name quottogglebuttonquotgt click to open popup lt togglebuttongt ltpopup isopen quot binding ischecked elementname togglebutton quot placement quotbottomquot placementtarget quot binding elementname togglebutton quot allowstransparency quottruequot staysopen quot binding ischecked elementname staysopencheckbox quotgt ltborder borderbrush quotblackquot cornerradius borderthickness padding background quotlightgrayquotgt ltdockpanelgt lttextblock dockpanel dock quottopquotgt this is the popup this text gives you somewhere to click outside the dropdownbutton but inside the popup content lt textblockgt 
nbsp ltxctk dropdownbutton dockpanel dock quottopquotgt ltxctk dropdownbutton contentgt lttextblockgt this is the dropdownbutton content lt textblockgt lt xctk dropdownbutton contentgt ltxctk dropdownbutton dropdowncontentgt ltstackpanelgt lttextblockgt this is the dropdownbutton dropdowncontent lt textblockgt ltcheckboxgta checkboxlt checkboxgt lt stackpanelgt lt xctk dropdownbutton dropdowncontentgt lt xctk dropdownbuttongt nbsp lttextblock dockpanel dock quottopquotgt here is a combobox to show it behaves correctly regardless lt textblockgt nbsp ltcombobox dockpanel dock quottopquotgt ltcomboboxitemgtonelt comboboxitemgt ltcomboboxitemgttwolt comboboxitemgt ltcomboboxitemgtthreelt comboboxitemgt lt comboboxgt lt dockpanelgt lt bordergt lt popupgt lttextblock textwrapping quotwrapquotgt here is the same dropdownbutton outside a popup lt textblockgt ltxctk dropdownbuttongt ltxctk dropdownbutton contentgt lttextblockgt this is the dropdownbutton content lt textblockgt lt xctk dropdownbutton contentgt ltxctk dropdownbutton dropdowncontentgt ltstackpanelgt lttextblockgt this is the dropdownbutton dropdowncontent lt textblockgt ltcheckboxgta checkboxlt checkboxgt lt stackpanelgt lt xctk dropdownbutton dropdowncontentgt lt xctk dropdownbuttongt lt stackpanelgt lt windowgt nbsp i have a fix that i will attach as a comment
| 0
|
96,480
| 12,132,141,340
|
IssuesEvent
|
2020-04-23 06:38:52
|
LeventErkok/sbv
|
https://api.github.com/repos/LeventErkok/sbv
|
closed
|
SMTValue class might be deprecated
|
Design exploration
|
It appears we might be able to completely remove the `SMTValue` class since its functionality is not needed by a regular call to `getValue`. This could lead to some major clean-up in code. Need to investigate.
|
1.0
|
SMTValue class might be deprecated - It appears we might be able to completely remove the `SMTValue` class since its functionality is not needed by a regular call to `getValue`. This could lead to some major clean-up in code. Need to investigate.
|
non_test
|
smtvalue class might be deprecated it appears we might be able to completely remove the smtvalue class since it s functionality is not needed by a regular call to getvalue this could lead to some major clean up in code need to investigate
| 0
|
244,020
| 20,603,160,563
|
IssuesEvent
|
2022-03-06 15:32:08
|
theAgingApprentice/hexbotCompiler
|
https://api.github.com/repos/theAgingApprentice/hexbotCompiler
|
opened
|
Need common command set that works with both compilers
|
type: testing
|
We need to establish a common set of library calls that use syntax supported by both OSX and Windows compilers.
|
1.0
|
Need common command set that works with both compilers - We need to establish a common set of library calls that use syntax supported by both OSX and Windows compilers.
|
test
|
need common command set that works with both compilers we need to establish a common set of library calls that use syntax supported by both osx and windows compilers
| 1
|
244,179
| 7,871,537,504
|
IssuesEvent
|
2018-06-25 08:14:46
|
minio/minio-java
|
https://api.github.com/repos/minio/minio-java
|
closed
|
Concurrent Threads to putObject, some threads take about 2.x sec/thread
|
priority: medium
|
## scene
i want test the speed of concurrent threads's putObject. then i set 20 threads and 100 objects/thread. and the upload time is a bit long. some threads take about 2.x sec, even 3 sec. i found client request not cost long time, but wait response always cost more. and i found the thread dump has more okhttp's connectionPool thread.
## my code
```java
@Test
public void stabilityTest3_thread() throws InvalidPortException, InvalidEndpointException, InterruptedException {
int threadnums =20;
final int num = 100;
final int timeout = 2000;
final String filePath = "C:\\文件.jpg";
final List<Long> times = new ArrayList<>();
final CountDownLatch countDownLatch = new CountDownLatch(threadnums);
AtomicInteger countR = new AtomicInteger(0);
AtomicInteger countE = new AtomicInteger(0);
long startTime1 = System.currentTimeMillis();
for(int i = 0 ;i< threadnums; i++){
final int index =i;
new Thread(()->{
for(int j = 0; j< num; j++) {
long startTime2 = System.currentTimeMillis();
try {
FileInputStream fileInputStream = new FileInputStream(filePath);
MinioClient client1 = new MinioClient("http://ip",port,"accessKey", "secretKey");
client1.putObject("zuxp", "test/hello/minio/hei/文件"+(index*100+j)+".jpg",
fileInputStream, Files.probeContentType(Paths.get(filePath)));
if (fileInputStream != null) {
fileInputStream.close();
}
} catch (Exception e) {
e.printStackTrace();
} finally{
long perTime = System.currentTimeMillis() - startTime2;
times.add(perTime);
if(perTime < timeout) {
countR.addAndGet(1);
}else{
countE.addAndGet(1);
System.out.println("time out per upload time:"+perTime);
}
}
}
countDownLatch.countDown();
}).start();
}
countDownLatch.await();
Collections.sort(times);
System.out.println(String.format("upload total time:%s,timeout upload num:%s, good upload num:%s,min time:%s, max time:%s",(System.currentTimeMillis()-startTime1),
countE.get(), countR.get(), times.get(0), times.get(times.size()-1)));
}
```
## result
> set timeout:2000, upload total time:118360,timeout upload num:100, good upload num:1900,min time:278, max time:3012
## env
### client
- java-client:minio-3.0.7.jar
- machine net speed:1000Mb/s
- jdk:1.7
- test pic size: 1.5M
### server
- server-version:2017-11-22T19:55:46Z
- MEMORY
Used: 7.5 MB | Allocated: 3.3 TB | Used-Heap: 7.5 MB | Allocated-Heap: 196 MB
- PLATFORM
Host: localhost.localdomain | OS: linux | Arch: amd64
- RUNTIME
Version: go1.9.1 | CPUs: 4
- machine net speed:1000Mb/s
|
1.0
|
Concurrent Threads to putObject, some threads take about 2.x sec/thread - ## scene
i want test the speed of concurrent threads's putObject. then i set 20 threads and 100 objects/thread. and the upload time is a bit long. some threads take about 2.x sec, even 3 sec. i found client request not cost long time, but wait response always cost more. and i found the thread dump has more okhttp's connectionPool thread.
## my code
```java
@Test
public void stabilityTest3_thread() throws InvalidPortException, InvalidEndpointException, InterruptedException {
int threadnums =20;
final int num = 100;
final int timeout = 2000;
final String filePath = "C:\\文件.jpg";
final List<Long> times = new ArrayList<>();
final CountDownLatch countDownLatch = new CountDownLatch(threadnums);
AtomicInteger countR = new AtomicInteger(0);
AtomicInteger countE = new AtomicInteger(0);
long startTime1 = System.currentTimeMillis();
for(int i = 0 ;i< threadnums; i++){
final int index =i;
new Thread(()->{
for(int j = 0; j< num; j++) {
long startTime2 = System.currentTimeMillis();
try {
FileInputStream fileInputStream = new FileInputStream(filePath);
MinioClient client1 = new MinioClient("http://ip",port,"accessKey", "secretKey");
client1.putObject("zuxp", "test/hello/minio/hei/文件"+(index*100+j)+".jpg",
fileInputStream, Files.probeContentType(Paths.get(filePath)));
if (fileInputStream != null) {
fileInputStream.close();
}
} catch (Exception e) {
e.printStackTrace();
} finally{
long perTime = System.currentTimeMillis() - startTime2;
times.add(perTime);
if(perTime < timeout) {
countR.addAndGet(1);
}else{
countE.addAndGet(1);
System.out.println("time out per upload time:"+perTime);
}
}
}
countDownLatch.countDown();
}).start();
}
countDownLatch.await();
Collections.sort(times);
System.out.println(String.format("upload total time:%s,timeout upload num:%s, good upload num:%s,min time:%s, max time:%s",(System.currentTimeMillis()-startTime1),
countE.get(), countR.get(), times.get(0), times.get(times.size()-1)));
}
```
## result
> set timeout:2000, upload total time:118360,timeout upload num:100, good upload num:1900,min time:278, max time:3012
## env
### client
- java-client:minio-3.0.7.jar
- machine net speed:1000Mb/s
- jdk:1.7
- test pic size: 1.5M
### server
- server-version:2017-11-22T19:55:46Z
- MEMORY
Used: 7.5 MB | Allocated: 3.3 TB | Used-Heap: 7.5 MB | Allocated-Heap: 196 MB
- PLATFORM
Host: localhost.localdomain | OS: linux | Arch: amd64
- RUNTIME
Version: go1.9.1 | CPUs: 4
- machine net speed:1000Mb/s
|
non_test
|
concurrent threads to putobject some threads take about x sec thread scene i want test the speed of concurrent threads s putobject then i set threads and objects thread and the upload time is a bit long some threads take about x sec even sec i found client request not cost long time but wait response always cost more and i found the thread dump has more okhttp s connectionpool thread my code java test public void thread throws invalidportexception invalidendpointexception interruptedexception int threadnums final int num final int timeout final string filepath c 文件 jpg final list times new arraylist final countdownlatch countdownlatch new countdownlatch threadnums atomicinteger countr new atomicinteger atomicinteger counte new atomicinteger long system currenttimemillis for int i i threadnums i final int index i new thread for int j j num j long system currenttimemillis try fileinputstream fileinputstream new fileinputstream filepath minioclient new minioclient secretkey putobject zuxp test hello minio hei 文件 index j jpg fileinputstream files probecontenttype paths get filepath if fileinputstream null fileinputstream close catch exception e e printstacktrace finally long pertime system currenttimemillis times add pertime if pertime timeout countr addandget else counte addandget system out println time out per upload time pertime countdownlatch countdown start countdownlatch await collections sort times system out println string format upload total time s timeout upload num s good upload num s min time s max time s system currenttimemillis counte get countr get times get times get times size result set timeout upload total time timeout upload num good upload num min time max time env client java client minio jar machine net speed s jdk test pic size server server version memory used mb allocated tb used heap mb allocated heap mb platform host localhost localdomain os linux arch runtime version cpus machine net speed s
| 0
|
21,666
| 3,911,757,065
|
IssuesEvent
|
2016-04-20 07:41:32
|
nicolargo/glances
|
https://api.github.com/repos/nicolargo/glances
|
closed
|
Wrong formatting for tree columns
|
bug needs test
|
When I run `glances --tree` the tree menu is messed up. It looks like `W/s` and `Command` columns are swapped for some of child processes. If I run glances command multiple times, this always happens for same processes.

* Glances: 2.6.1
* PSutil: 4.1.0
* Operating System: ArchLinux
|
1.0
|
Wrong formatting for tree columns - When I run `glances --tree` the tree menu is messed up. It looks like `W/s` and `Command` columns are swapped for some of child processes. If I run glances command multiple times, this always happens for same processes.

* Glances: 2.6.1
* PSutil: 4.1.0
* Operating System: ArchLinux
|
test
|
wrong formatting for tree columns when i run glances tree the tree menu is messed up it looks like w s and command columns are swapped for some of child processes if i run glances command multiple times this always happens for same processes glances psutil operating system archlinux
| 1
|
278,365
| 8,640,060,322
|
IssuesEvent
|
2018-11-24 00:16:15
|
borgbase/vorta
|
https://api.github.com/repos/borgbase/vorta
|
closed
|
Run check on selected archive only (if one is selected)
|
priority:low type:enhancement
|
hmm, is there no repo (or repo+archives) check, just single archives?
|
1.0
|
Run check on selected archive only (if one is selected) - hmm, is there no repo (or repo+archives) check, just single archives?
|
non_test
|
run check on selected archive only if one is selected hmm is there no repo or repo archives check just single archives
| 0
|
658,318
| 21,884,599,480
|
IssuesEvent
|
2022-05-19 17:17:46
|
apluslms/a-plus-rst-tools
|
https://api.github.com/repos/apluslms/a-plus-rst-tools
|
closed
|
Automatically add file paths from the exercise container.mounts setting to the file list to be uploaded to the grader
|
type: bug area: config priority: high effort: hours experience: beginner
|
If exercise config.yaml uses the `container.mounts` field, then those file paths must be sent to the grader so that the grader could really mount those files to the exercise grading container. Currently, the `container.mount` field is handled here:
https://github.com/apluslms/a-plus-rst-tools/blob/84b2611e7bea2de50d9c33720aee8d42ebadb4af/directives/submit.py#L183-L184
Add the file path mapping from `container.mounts` there as well.
|
1.0
|
Automatically add file paths from the exercise container.mounts setting to the file list to be uploaded to the grader - If exercise config.yaml uses the `container.mounts` field, then those file paths must be sent to the grader so that the grader could really mount those files to the exercise grading container. Currently, the `container.mount` field is handled here:
https://github.com/apluslms/a-plus-rst-tools/blob/84b2611e7bea2de50d9c33720aee8d42ebadb4af/directives/submit.py#L183-L184
Add the file path mapping from `container.mounts` there as well.
|
non_test
|
automatically add file paths from the exercise container mounts setting to the file list to be uploaded to the grader if exercise config yaml uses the container mounts field then those file paths must be sent to the grader so that the grader could really mount those files to the exercise grading container currently the container mount field is handled here add the file path mapping from container mounts there as well
| 0
|
211,333
| 7,200,364,373
|
IssuesEvent
|
2018-02-05 18:48:50
|
StrangeLoopGames/EcoIssues
|
https://api.github.com/repos/StrangeLoopGames/EcoIssues
|
closed
|
Server CPU Utilization much higher 7
|
High Priority
|
Prior to 7.0.0 beta, the 7.0.0 staging versions had a much lower average CPU utilization. Average CPU usage with 3-5 players was between 20 and 40%. The screenshot below is from 7.0.2 with only 1 player on.

|
1.0
|
Server CPU Utilization much higher 7 - Prior to 7.0.0 beta, the 7.0.0 staging versions had a much lower average CPU utilization. Average CPU usage with 3-5 players was between 20 and 40%. The screenshot below is from 7.0.2 with only 1 player on.

|
non_test
|
server cpu utilization much higher prior to beta the staging versions had a much lower average cpu utilization average cpu usage with players was between and the screenshot below is from with only player on
| 0
|
63,806
| 6,885,009,552
|
IssuesEvent
|
2017-11-21 14:53:49
|
NativeScript/nativescript-cli
|
https://api.github.com/repos/NativeScript/nativescript-cli
|
closed
|
build-*-bundle gets stuck at nativescript-unit-test-runner hook in NS 3.2
|
bug unit testing
|
### Tell us about the problem
The commands `npm run build-android-bundle` and `npm run build-ios-bundle` get stuck at the nativescript-unit-test-runner hook and never fail or complete in NS 3.2:
```
npm run build-ios-bundle
> @ build-ios-bundle /Users/davidbenninger/Desktop/Test
> npm run ns-bundle --ios --build-app
> @ ns-bundle /Users/davidbenninger/Desktop/Test
> ns-bundle
Running tns prepare...
Executing before-prepare hook from /Users/davidbenninger/Desktop/Test/hooks/before-prepare/nativescript-dev-typescript.js
Preparing project...
Project successfully prepared (ios)
Executing after-prepare hook from /Users/davidbenninger/Desktop/Test/hooks/after-prepare/nativescript-unit-test-runner.js
```
a normal `tns run ...` seems to work fine.
### Which platform(s) does your issue occur on?
with iOS and Android
### Please provide the following version numbers that your issue occurs with:
- CLI: 3.2.1
- Cross-platform modules: 3.2.0
- Runtime(s): 3.2.0
- Plugin(s): "nativescript-unit-test-runner": "^0.3.4"
### To reproduce:
1. tns create Test --template ng
2. cd Test
3. npm install --save-dev nativescript-dev-webpack
4. npm install
5. tns test init -> select "jasmine"
6. npm run build-android-bundle
|
1.0
|
build-*-bundle gets stuck at nativescript-unit-test-runner hook in NS 3.2 - ### Tell us about the problem
The commands `npm run build-android-bundle` and `npm run build-ios-bundle` get stuck at the nativescript-unit-test-runner hook and never fail or complete in NS 3.2:
```
npm run build-ios-bundle
> @ build-ios-bundle /Users/davidbenninger/Desktop/Test
> npm run ns-bundle --ios --build-app
> @ ns-bundle /Users/davidbenninger/Desktop/Test
> ns-bundle
Running tns prepare...
Executing before-prepare hook from /Users/davidbenninger/Desktop/Test/hooks/before-prepare/nativescript-dev-typescript.js
Preparing project...
Project successfully prepared (ios)
Executing after-prepare hook from /Users/davidbenninger/Desktop/Test/hooks/after-prepare/nativescript-unit-test-runner.js
```
a normal `tns run ...` seems to work fine.
### Which platform(s) does your issue occur on?
with iOS and Android
### Please provide the following version numbers that your issue occurs with:
- CLI: 3.2.1
- Cross-platform modules: 3.2.0
- Runtime(s): 3.2.0
- Plugin(s): "nativescript-unit-test-runner": "^0.3.4"
### To reproduce:
1. tns create Test --template ng
2. cd Test
3. npm install --save-dev nativescript-dev-webpack
4. npm install
5. tns test init -> select "jasmine"
6. npm run build-android-bundle
|
test
|
build bundle gets stuck at nativescript unit test runner hook in ns tell us about the problem the commands npm run build android bundle and npm run build ios bundle get stuck at the nativescript unit test runner hook and never fail or complete in ns npm run build ios bundle build ios bundle users davidbenninger desktop test npm run ns bundle ios build app ns bundle users davidbenninger desktop test ns bundle running tns prepare executing before prepare hook from users davidbenninger desktop test hooks before prepare nativescript dev typescript js preparing project project successfully prepared ios executing after prepare hook from users davidbenninger desktop test hooks after prepare nativescript unit test runner js a normal tns run seems to work fine which platform s does your issue occur on with ios and android please provide the following version numbers that your issue occurs with cli cross platform modules runtime s plugin s nativescript unit test runner to reproduce tns create test template ng cd test npm install save dev nativescript dev webpack npm install tns test init select jasmine npm run build android bundle
| 1
|
78,017
| 7,612,738,298
|
IssuesEvent
|
2018-05-01 18:38:07
|
chapel-lang/chapel
|
https://api.github.com/repos/chapel-lang/chapel
|
opened
|
ZMQ interoperability test times out with gasnet
|
area: Modules area: Tests type: Bug
|
### Summary of Problem
The ZMQ `interop-py` test (added in #7049) currently times out with gasnet configuration. This test was failing silently on nightly gasnet testing until #9072, which surfaced the test timing out. The configuration will be skipped until this issue is resolved.
Something about the gasnet communication layer is interfering with Chapel's ability to connect to the process it spawns. When running the `client.py` outside of the Chapel program, the connection succeeds and no timeout occurs.
It's unclear to me whether this is an issue with the test or the module so far.
Note that I have only tested gasnet locally (`GASNET_SPAWNFN=L`) on OS X so far.
### Steps to Reproduce
**Associated Future Test(s):**
[`test/library/packages/ZMQ/interop-py`](https://github.com/chapel-lang/chapel/blob/master/test/library/packages/ZMQ/interop-py)
### Configuration Information
- Output of `chpl --version`: `chpl version 1.18.0 pre-release (ed162f8)`
- Output of `$CHPL_HOME/util/printchplenv --anonymize`:
```
CHPL_TARGET_PLATFORM: darwin
CHPL_TARGET_COMPILER: clang
CHPL_TARGET_ARCH: unknown
CHPL_LOCALE_MODEL: flat
CHPL_COMM: gasnet *
CHPL_COMM_SUBSTRATE: udp
CHPL_GASNET_SEGMENT: everything
CHPL_TASKS: qthreads
CHPL_LAUNCHER: amudprun
CHPL_TIMERS: generic
CHPL_UNWIND: none
CHPL_MEM: jemalloc
CHPL_ATOMICS: intrinsics
CHPL_NETWORK_ATOMICS: none
CHPL_GMP: none *
CHPL_HWLOC: hwloc
CHPL_REGEXP: re2 *
CHPL_AUX_FILESYS: none
```
|
1.0
|
ZMQ interoperability test times out with gasnet - ### Summary of Problem
The ZMQ `interop-py` test (added in #7049) currently times out with gasnet configuration. This test was failing silently on nightly gasnet testing until #9072, which surfaced the test timing out. The configuration will be skipped until this issue is resolved.
Something about the gasnet communication layer is interfering with Chapel's ability to connect to the process it spawns. When running the `client.py` outside of the Chapel program, the connection succeeds and no timeout occurs.
It's unclear to me whether this is an issue with the test or the module so far.
Note that I have only tested gasnet locally (`GASNET_SPAWNFN=L`) on OS X so far.
### Steps to Reproduce
**Associated Future Test(s):**
[`test/library/packages/ZMQ/interop-py`](https://github.com/chapel-lang/chapel/blob/master/test/library/packages/ZMQ/interop-py)
### Configuration Information
- Output of `chpl --version`: `chpl version 1.18.0 pre-release (ed162f8)`
- Output of `$CHPL_HOME/util/printchplenv --anonymize`:
```
CHPL_TARGET_PLATFORM: darwin
CHPL_TARGET_COMPILER: clang
CHPL_TARGET_ARCH: unknown
CHPL_LOCALE_MODEL: flat
CHPL_COMM: gasnet *
CHPL_COMM_SUBSTRATE: udp
CHPL_GASNET_SEGMENT: everything
CHPL_TASKS: qthreads
CHPL_LAUNCHER: amudprun
CHPL_TIMERS: generic
CHPL_UNWIND: none
CHPL_MEM: jemalloc
CHPL_ATOMICS: intrinsics
CHPL_NETWORK_ATOMICS: none
CHPL_GMP: none *
CHPL_HWLOC: hwloc
CHPL_REGEXP: re2 *
CHPL_AUX_FILESYS: none
```
|
test
|
zmq interoperability test times out with gasnet summary of problem the zmq interop py test added in currently times out with gasnet configuration this test was failing silently on nightly gasnet testing until which surfaced the test timing out the configuration will be skipped until this issue is resolved something about the gasnet communication layer is interfering with chapel s ability to connect to the process it spawns when running the client py outside of the chapel program the connection succeeds and no timeout occurs it s unclear to me whether this is an issue with the test or the module so far note that i have only tested gasnet locally gasnet spawnfn l on os x so far steps to reproduce associated future test s configuration information output of chpl version chpl version pre release output of chpl home util printchplenv anonymize chpl target platform darwin chpl target compiler clang chpl target arch unknown chpl locale model flat chpl comm gasnet chpl comm substrate udp chpl gasnet segment everything chpl tasks qthreads chpl launcher amudprun chpl timers generic chpl unwind none chpl mem jemalloc chpl atomics intrinsics chpl network atomics none chpl gmp none chpl hwloc hwloc chpl regexp chpl aux filesys none
| 1
|
186,896
| 14,426,868,266
|
IssuesEvent
|
2020-12-06 00:28:35
|
kalexmills/github-vet-tests-dec2020
|
https://api.github.com/repos/kalexmills/github-vet-tests-dec2020
|
closed
|
kubevirt/kubernetes-device-plugins: vendor/k8s.io/kubernetes/pkg/controller/job/job_controller_test.go; 3 LoC
|
fresh test tiny vendored
|
Found a possible issue in [kubevirt/kubernetes-device-plugins](https://www.github.com/kubevirt/kubernetes-device-plugins) at [vendor/k8s.io/kubernetes/pkg/controller/job/job_controller_test.go](https://github.com/kubevirt/kubernetes-device-plugins/blob/2439489f2cd0b3ddc00c5779dd5129680f0c2dcd/vendor/k8s.io/kubernetes/pkg/controller/job/job_controller_test.go#L122-L124)
Below is the message reported by the analyzer for this snippet of code. Beware that the analyzer only reports the first
issue it finds, so please do not limit your consideration to the contents of the below message.
> function call which takes a reference to pod at line 123 may start a goroutine
[Click here to see the code in its original context.](https://github.com/kubevirt/kubernetes-device-plugins/blob/2439489f2cd0b3ddc00c5779dd5129680f0c2dcd/vendor/k8s.io/kubernetes/pkg/controller/job/job_controller_test.go#L122-L124)
<details>
<summary>Click here to show the 3 line(s) of Go which triggered the analyzer.</summary>
```go
for _, pod := range newPodList(pendingPods, v1.PodPending, job) {
podIndexer.Add(&pod)
}
```
</details>
Leave a reaction on this issue to contribute to the project by classifying this instance as a **Bug** :-1:, **Mitigated** :+1:, or **Desirable Behavior** :rocket:
See the descriptions of the classifications [here](https://github.com/github-vet/rangeclosure-findings#how-can-i-help) for more information.
commit ID: 2439489f2cd0b3ddc00c5779dd5129680f0c2dcd
|
1.0
|
kubevirt/kubernetes-device-plugins: vendor/k8s.io/kubernetes/pkg/controller/job/job_controller_test.go; 3 LoC -
Found a possible issue in [kubevirt/kubernetes-device-plugins](https://www.github.com/kubevirt/kubernetes-device-plugins) at [vendor/k8s.io/kubernetes/pkg/controller/job/job_controller_test.go](https://github.com/kubevirt/kubernetes-device-plugins/blob/2439489f2cd0b3ddc00c5779dd5129680f0c2dcd/vendor/k8s.io/kubernetes/pkg/controller/job/job_controller_test.go#L122-L124)
Below is the message reported by the analyzer for this snippet of code. Beware that the analyzer only reports the first
issue it finds, so please do not limit your consideration to the contents of the below message.
> function call which takes a reference to pod at line 123 may start a goroutine
[Click here to see the code in its original context.](https://github.com/kubevirt/kubernetes-device-plugins/blob/2439489f2cd0b3ddc00c5779dd5129680f0c2dcd/vendor/k8s.io/kubernetes/pkg/controller/job/job_controller_test.go#L122-L124)
<details>
<summary>Click here to show the 3 line(s) of Go which triggered the analyzer.</summary>
```go
for _, pod := range newPodList(pendingPods, v1.PodPending, job) {
podIndexer.Add(&pod)
}
```
</details>
Leave a reaction on this issue to contribute to the project by classifying this instance as a **Bug** :-1:, **Mitigated** :+1:, or **Desirable Behavior** :rocket:
See the descriptions of the classifications [here](https://github.com/github-vet/rangeclosure-findings#how-can-i-help) for more information.
commit ID: 2439489f2cd0b3ddc00c5779dd5129680f0c2dcd
|
test
|
kubevirt kubernetes device plugins vendor io kubernetes pkg controller job job controller test go loc found a possible issue in at below is the message reported by the analyzer for this snippet of code beware that the analyzer only reports the first issue it finds so please do not limit your consideration to the contents of the below message function call which takes a reference to pod at line may start a goroutine click here to show the line s of go which triggered the analyzer go for pod range newpodlist pendingpods podpending job podindexer add pod leave a reaction on this issue to contribute to the project by classifying this instance as a bug mitigated or desirable behavior rocket see the descriptions of the classifications for more information commit id
| 1
|
69,268
| 22,305,277,556
|
IssuesEvent
|
2022-06-13 12:31:02
|
vector-im/element-ios
|
https://api.github.com/repos/vector-im/element-ios
|
opened
|
Stuck msgs if you bg the app before they send
|
T-Defect
|
### Steps to reproduce
1. Send some messages on slow connectivity
2. Bg the app rapidly
3. Reopen later
4. Discover that the messages are stuck at the bottom of the timeline, but are not shown as unsent
5. You have no idea if they actually sent or not (it did, though), so you end up resend it and end up with dups.
### Outcome
#### What did you expect?
Messages to not get stuck, even if the echo is delayed, or even if didn’t get a 200 from the send.
#### What happened instead?
Reliably flakey stuck messages since ~1.8.17
### Your phone model
12 pro max
### Operating system version
15.5
### Application version
1.8.18 but since 1.8.17 or so
### Homeserver
m.org
### Will you send logs?
No
|
1.0
|
Stuck msgs if you bg the app before they send - ### Steps to reproduce
1. Send some messages on slow connectivity
2. Bg the app rapidly
3. Reopen later
4. Discover that the messages are stuck at the bottom of the timeline, but are not shown as unsent
5. You have no idea if they actually sent or not (it did, though), so you end up resend it and end up with dups.
### Outcome
#### What did you expect?
Messages to not get stuck, even if the echo is delayed, or even if didn’t get a 200 from the send.
#### What happened instead?
Reliably flakey stuck messages since ~1.8.17
### Your phone model
12 pro max
### Operating system version
15.5
### Application version
1.8.18 but since 1.8.17 or so
### Homeserver
m.org
### Will you send logs?
No
|
non_test
|
stuck msgs if you bg the app before they send steps to reproduce send some messages on slow connectivity bg the app rapidly reopen later discover that the messages are stuck at the bottom of the timeline but are not shown as unsent you have no idea if they actually sent or not it did though so you end up resend it and end up with dups outcome what did you expect messages to not get stuck even if the echo is delayed or even if didn’t get a from the send what happened instead reliably flakey stuck messages since your phone model pro max operating system version application version but since or so homeserver m org will you send logs no
| 0
|
298,986
| 25,874,085,394
|
IssuesEvent
|
2022-12-14 06:17:35
|
openBackhaul/ApplicationLayerTopology
|
https://api.github.com/repos/openBackhaul/ApplicationLayerTopology
|
opened
|
Update address attribute with domain-name along with ipv-4-address : Service Layer
|
testsuite_to_be_changed
|
In test-suites, In multiple scenarios, we are reading data and writing them from and to the config file. In which, multiple services have an attribute ipv-4-address (either tcp-client or tcp-server) which will be parsed from chosen object. In v1.0.0, ipv-4-address is only attribute present in tcp-server or tcp-client instances to depict address on which an application is hosted, but in v2.0.0, it is added with domain-name attribute. So, the path needs to be updated wherever applicable
- [ ] /v1/bequeath-your-data-and-die
- [ ] /v1/regard-application
- [ ] /v1/notify-link-updates
|
1.0
|
Update address attribute with domain-name along with ipv-4-address : Service Layer - In test-suites, In multiple scenarios, we are reading data and writing them from and to the config file. In which, multiple services have an attribute ipv-4-address (either tcp-client or tcp-server) which will be parsed from chosen object. In v1.0.0, ipv-4-address is only attribute present in tcp-server or tcp-client instances to depict address on which an application is hosted, but in v2.0.0, it is added with domain-name attribute. So, the path needs to be updated wherever applicable
- [ ] /v1/bequeath-your-data-and-die
- [ ] /v1/regard-application
- [ ] /v1/notify-link-updates
|
test
|
update address attribute with domain name along with ipv address service layer in test suites in multiple scenarios we are reading data and writing them from and to the config file in which multiple services have an attribute ipv address either tcp client or tcp server which will be parsed from chosen object in ipv address is only attribute present in tcp server or tcp client instances to depict address on which an application is hosted but in it is added with domain name attribute so the path needs to be updated wherever applicable bequeath your data and die regard application notify link updates
| 1
|
508,361
| 14,698,842,600
|
IssuesEvent
|
2021-01-04 07:23:28
|
pingcap/tidb
|
https://api.github.com/repos/pingcap/tidb
|
closed
|
tidb hangs with unable to connect
|
component/server need-more-info priority/awaiting-more-evidence type/bug
|
## Bug Report
Please answer these questions before submitting your issue. Thanks!
### 1. Minimal reproduce step (Required)
All tikv crashed with OOM, after tikv recovered, one of our tidb can't connected, dashboard can't show related info before restart.
tiup cluster shows everything is ok.
### 2. What did you expect to see? (Required)
All works well.
### 3. What did you see instead (Required)
TiDB can not establish new connection.
### 4. What is your TiDB version? (Required)
v4.0.5
debug info before tidb restart.
[debug.zip](https://github.com/pingcap/tidb/files/5185920/debug.zip)
|
1.0
|
tidb hangs with unable to connect - ## Bug Report
Please answer these questions before submitting your issue. Thanks!
### 1. Minimal reproduce step (Required)
All tikv crashed with OOM, after tikv recovered, one of our tidb can't connected, dashboard can't show related info before restart.
tiup cluster shows everything is ok.
### 2. What did you expect to see? (Required)
All works well.
### 3. What did you see instead (Required)
TiDB can not establish new connection.
### 4. What is your TiDB version? (Required)
v4.0.5
debug info before tidb restart.
[debug.zip](https://github.com/pingcap/tidb/files/5185920/debug.zip)
|
non_test
|
tidb hangs with unable to connect bug report please answer these questions before submitting your issue thanks minimal reproduce step required all tikv crashed with oom after tikv recovered one of our tidb can t connected dashboard can t show related info before restart tiup cluster shows everything is ok what did you expect to see required all works well what did you see instead required tidb can not establish new connection what is your tidb version required debug info before tidb restart
| 0
|
87,304
| 8,071,670,164
|
IssuesEvent
|
2018-08-06 13:52:41
|
ONRR/doi-extractives-data
|
https://api.github.com/repos/ONRR/doi-extractives-data
|
closed
|
Plan usability study for home page
|
Home Page p1 research workflow:testing
|
- [x] Create study plan
- [x] Create guide
- [x] Make sure prototypes match guide and are ready for testing
|
1.0
|
Plan usability study for home page - - [x] Create study plan
- [x] Create guide
- [x] Make sure prototypes match guide and are ready for testing
|
test
|
plan usability study for home page create study plan create guide make sure prototypes match guide and are ready for testing
| 1
|
293,444
| 25,292,985,170
|
IssuesEvent
|
2022-11-17 02:50:40
|
milvus-io/milvus
|
https://api.github.com/repos/milvus-io/milvus
|
closed
|
[Bug]: Load collection timeout after many pod kill and pod failure chaos test when Kafka is as MQ for Milvus
|
kind/bug priority/critical-urgent severity/critical test/chaos triage/accepted
|
### Is there an existing issue for this?
- [X] I have searched the existing issues
### Environment
```markdown
- Milvus version:master-20221006-e1124765
- Deployment mode(standalone or cluster): cluster
- SDK version(e.g. pymilvus v2.0.0rc2):
- OS(Ubuntu or CentOS):
- CPU/Memory:
- GPU:
- Others:
```
### Current Behavior
```
[2022-10-07T20:24:57.877Z] [2022-10-07 20:23:32 - INFO - ci_test]: assert index entities: 23020 (test_all_collections_after_chaos.py:70)
[2022-10-07T20:24:57.877Z] [2022-10-07 20:23:32 - DEBUG - ci_test]: (api_request) : [Collection.create_index] args: ['float_vector', {'index_type': 'HNSW', 'metric_type': 'L2', 'params': {'M': 48, 'efConstruction': 500}}], kwargs: {'name': 'test_nLBGhh88', 'timeout': 40, 'index_name': '_default_idx'} (api_request.py:56)
[2022-10-07T20:24:57.877Z] [2022-10-07 20:23:34 - DEBUG - ci_test]: (api_response) : Status(code=0, message='') (api_request.py:31)
[2022-10-07T20:24:57.877Z] [2022-10-07 20:23:34 - INFO - ci_test]: [test][2022-10-07T20:23:32Z] [1.99804586s] Checker__vQjDwsCg create_index -> Status(code=0, message='') (wrapper.py:30)
[2022-10-07T20:24:57.877Z] [2022-10-07 20:23:34 - INFO - ci_test]: assert index: 1.9982595443725586 (test_all_collections_after_chaos.py:77)
[2022-10-07T20:24:57.877Z] [2022-10-07 20:23:34 - DEBUG - ci_test]: (api_request) : [Collection.load] args: [None, 1, 20], kwargs: {} (api_request.py:56)
[2022-10-07T20:24:57.877Z] [2022-10-07 20:24:08 - ERROR - pymilvus.decorators]: RPC error: [wait_for_loading_collection], <MilvusException: (code=-1, message=wait for loading collection timeout)>, <Time:{'RPC start': '2022-10-07 20:23:34.767031', 'RPC error': '2022-10-07 20:24:08.817218'}> (decorators.py:112)
[2022-10-07T20:24:57.877Z] [2022-10-07 20:24:08 - WARNING - pymilvus.decorators]: Retry timeout: 20s (decorators.py:80)
[2022-10-07T20:24:57.877Z] [2022-10-07 20:24:08 - ERROR - pymilvus.decorators]: RPC error: [load_collection], <MilvusException: (code=-1, message=Retry timeout: 20s, message=wait for loading collection timeout)>, <Time:{'RPC start': '2022-10-07 20:23:34.723081', 'RPC error': '2022-10-07 20:24:08.817612'}> (decorators.py:112)
[2022-10-07T20:24:57.877Z] [2022-10-07 20:24:08 - ERROR - ci_test]: Traceback (most recent call last):
[2022-10-07T20:24:57.877Z] File "/usr/local/lib/python3.7/dist-packages/pymilvus/decorators.py", line 51, in handler
[2022-10-07T20:24:57.877Z] return func(self, *args, **kwargs)
[2022-10-07T20:24:57.877Z] File "/usr/local/lib/python3.7/dist-packages/pymilvus/client/grpc_handler.py", line 656, in load_collection
[2022-10-07T20:24:57.877Z] self.wait_for_loading_collection(collection_name, timeout)
[2022-10-07T20:24:57.877Z] File "/usr/local/lib/python3.7/dist-packages/pymilvus/decorators.py", line 113, in handler
[2022-10-07T20:24:57.877Z] raise e
[2022-10-07T20:24:57.877Z] File "/usr/local/lib/python3.7/dist-packages/pymilvus/decorators.py", line 109, in handler
[2022-10-07T20:24:57.877Z] return func(*args, **kwargs)
[2022-10-07T20:24:57.877Z] File "/usr/local/lib/python3.7/dist-packages/pymilvus/decorators.py", line 139, in handler
[2022-10-07T20:24:57.877Z] ret = func(self, *args, **kwargs)
[2022-10-07T20:24:57.877Z] File "/usr/local/lib/python3.7/dist-packages/pymilvus/decorators.py", line 89, in handler
[2022-10-07T20:24:57.877Z] raise e
[2022-10-07T20:24:57.877Z] File "/usr/local/lib/python3.7/dist-packages/pymilvus/decorators.py", line 51, in handler
[2022-10-07T20:24:57.877Z] return func(self, *args, **kwargs)
[2022-10-07T20:24:57.877Z] File "/usr/local/lib/python3.7/dist-packages/pymilvus/client/grpc_handler.py", line 665, in wait_for_loading_collection
[2022-10-07T20:24:57.877Z] return self._wait_for_loading_collection(collection_name, timeout)
[2022-10-07T20:24:57.877Z] File "/usr/local/lib/python3.7/dist-packages/pymilvus/client/grpc_handler.py", line 699, in _wait_for_loading_collection
[2022-10-07T20:24:57.877Z] raise MilvusException(-1, "wait for loading collection timeout")
[2022-10-07T20:24:57.877Z] pymilvus.exceptions.MilvusException: <MilvusException: (code=-1, message=wait for loading collection timeout)>
[2022-10-07T20:24:57.877Z]
[2022-10-07T20:24:57.877Z] During handling of the above exception, another exception occurred:
[2022-10-07T20:24:57.877Z]
[2022-10-07T20:24:57.877Z] Traceback (most recent call last):
[2022-10-07T20:24:57.877Z] File "/home/jenkins/agent/workspace/tests/python_client/utils/api_request.py", line 26, in inner_wrapper
[2022-10-07T20:24:57.877Z] res = func(*args, **_kwargs)
[2022-10-07T20:24:57.877Z] File "/home/jenkins/agent/workspace/tests/python_client/utils/api_request.py", line 57, in api_request
[2022-10-07T20:24:57.877Z] return func(*arg, **kwargs)
[2022-10-07T20:24:57.877Z] File "/usr/local/lib/python3.7/dist-packages/pymilvus/orm/collection.py", line 474, in load
[2022-10-07T20:24:57.877Z] conn.load_collection(self._name, replica_number=replica_number, timeout=timeout, **kwargs)
[2022-10-07T20:24:57.877Z] File "/usr/local/lib/python3.7/dist-packages/pymilvus/decorators.py", line 113, in handler
[2022-10-07T20:24:57.877Z] raise e
[2022-10-07T20:24:57.877Z] File "/usr/local/lib/python3.7/dist-packages/pymilvus/decorators.py", line 109, in handler
[2022-10-07T20:24:57.877Z] return func(*args, **kwargs)
[2022-10-07T20:24:57.877Z] File "/usr/local/lib/python3.7/dist-packages/pymilvus/decorators.py", line 139, in handler
[2022-10-07T20:24:57.877Z] ret = func(self, *args, **kwargs)
[2022-10-07T20:24:57.877Z] File "/usr/local/lib/python3.7/dist-packages/pymilvus/decorators.py", line 81, in handler
[2022-10-07T20:24:57.877Z] raise MilvusException(e.code, f"{timeout_msg}, message={e.message}")
[2022-10-07T20:24:57.877Z] pymilvus.exceptions.MilvusException: <MilvusException: (code=-1, message=Retry timeout: 20s, message=wait for loading collection timeout)>
[2022-10-07T20:24:57.877Z] (api_request.py:39)
[2022-10-07T20:24:57.877Z] [2022-10-07 20:24:08 - ERROR - ci_test]: (api_response) : <MilvusException: (code=-1, message=Retry timeout: 20s, message=wait for loading collection timeout)> (api_request.py:40)
```
### Expected Behavior
all test cases passed
### Steps To Reproduce
_No response_
### Milvus Log
failed job: https://qa-jenkins.milvus.io/blue/organizations/jenkins/chaos-test-kafka/detail/chaos-test-kafka/1780/pipeline
log:
[artifacts-querynode-pod-failure-1780-server-logs.tar.gz](https://github.com/milvus-io/milvus/files/9738826/artifacts-querynode-pod-failure-1780-server-logs.tar.gz)
[artifacts-querynode-pod-failure-1780-pytest-logs.tar.gz](https://github.com/milvus-io/milvus/files/9738827/artifacts-querynode-pod-failure-1780-pytest-logs.tar.gz)
### Anything else?

|
1.0
|
[Bug]: Load collection timeout after many pod kill and pod failure chaos test when Kafka is as MQ for Milvus - ### Is there an existing issue for this?
- [X] I have searched the existing issues
### Environment
```markdown
- Milvus version:master-20221006-e1124765
- Deployment mode(standalone or cluster): cluster
- SDK version(e.g. pymilvus v2.0.0rc2):
- OS(Ubuntu or CentOS):
- CPU/Memory:
- GPU:
- Others:
```
### Current Behavior
```
[2022-10-07T20:24:57.877Z] [2022-10-07 20:23:32 - INFO - ci_test]: assert index entities: 23020 (test_all_collections_after_chaos.py:70)
[2022-10-07T20:24:57.877Z] [2022-10-07 20:23:32 - DEBUG - ci_test]: (api_request) : [Collection.create_index] args: ['float_vector', {'index_type': 'HNSW', 'metric_type': 'L2', 'params': {'M': 48, 'efConstruction': 500}}], kwargs: {'name': 'test_nLBGhh88', 'timeout': 40, 'index_name': '_default_idx'} (api_request.py:56)
[2022-10-07T20:24:57.877Z] [2022-10-07 20:23:34 - DEBUG - ci_test]: (api_response) : Status(code=0, message='') (api_request.py:31)
[2022-10-07T20:24:57.877Z] [2022-10-07 20:23:34 - INFO - ci_test]: [test][2022-10-07T20:23:32Z] [1.99804586s] Checker__vQjDwsCg create_index -> Status(code=0, message='') (wrapper.py:30)
[2022-10-07T20:24:57.877Z] [2022-10-07 20:23:34 - INFO - ci_test]: assert index: 1.9982595443725586 (test_all_collections_after_chaos.py:77)
[2022-10-07T20:24:57.877Z] [2022-10-07 20:23:34 - DEBUG - ci_test]: (api_request) : [Collection.load] args: [None, 1, 20], kwargs: {} (api_request.py:56)
[2022-10-07T20:24:57.877Z] [2022-10-07 20:24:08 - ERROR - pymilvus.decorators]: RPC error: [wait_for_loading_collection], <MilvusException: (code=-1, message=wait for loading collection timeout)>, <Time:{'RPC start': '2022-10-07 20:23:34.767031', 'RPC error': '2022-10-07 20:24:08.817218'}> (decorators.py:112)
[2022-10-07T20:24:57.877Z] [2022-10-07 20:24:08 - WARNING - pymilvus.decorators]: Retry timeout: 20s (decorators.py:80)
[2022-10-07T20:24:57.877Z] [2022-10-07 20:24:08 - ERROR - pymilvus.decorators]: RPC error: [load_collection], <MilvusException: (code=-1, message=Retry timeout: 20s, message=wait for loading collection timeout)>, <Time:{'RPC start': '2022-10-07 20:23:34.723081', 'RPC error': '2022-10-07 20:24:08.817612'}> (decorators.py:112)
[2022-10-07T20:24:57.877Z] [2022-10-07 20:24:08 - ERROR - ci_test]: Traceback (most recent call last):
[2022-10-07T20:24:57.877Z] File "/usr/local/lib/python3.7/dist-packages/pymilvus/decorators.py", line 51, in handler
[2022-10-07T20:24:57.877Z] return func(self, *args, **kwargs)
[2022-10-07T20:24:57.877Z] File "/usr/local/lib/python3.7/dist-packages/pymilvus/client/grpc_handler.py", line 656, in load_collection
[2022-10-07T20:24:57.877Z] self.wait_for_loading_collection(collection_name, timeout)
[2022-10-07T20:24:57.877Z] File "/usr/local/lib/python3.7/dist-packages/pymilvus/decorators.py", line 113, in handler
[2022-10-07T20:24:57.877Z] raise e
[2022-10-07T20:24:57.877Z] File "/usr/local/lib/python3.7/dist-packages/pymilvus/decorators.py", line 109, in handler
[2022-10-07T20:24:57.877Z] return func(*args, **kwargs)
[2022-10-07T20:24:57.877Z] File "/usr/local/lib/python3.7/dist-packages/pymilvus/decorators.py", line 139, in handler
[2022-10-07T20:24:57.877Z] ret = func(self, *args, **kwargs)
[2022-10-07T20:24:57.877Z] File "/usr/local/lib/python3.7/dist-packages/pymilvus/decorators.py", line 89, in handler
[2022-10-07T20:24:57.877Z] raise e
[2022-10-07T20:24:57.877Z] File "/usr/local/lib/python3.7/dist-packages/pymilvus/decorators.py", line 51, in handler
[2022-10-07T20:24:57.877Z] return func(self, *args, **kwargs)
[2022-10-07T20:24:57.877Z] File "/usr/local/lib/python3.7/dist-packages/pymilvus/client/grpc_handler.py", line 665, in wait_for_loading_collection
[2022-10-07T20:24:57.877Z] return self._wait_for_loading_collection(collection_name, timeout)
[2022-10-07T20:24:57.877Z] File "/usr/local/lib/python3.7/dist-packages/pymilvus/client/grpc_handler.py", line 699, in _wait_for_loading_collection
[2022-10-07T20:24:57.877Z] raise MilvusException(-1, "wait for loading collection timeout")
[2022-10-07T20:24:57.877Z] pymilvus.exceptions.MilvusException: <MilvusException: (code=-1, message=wait for loading collection timeout)>
[2022-10-07T20:24:57.877Z]
[2022-10-07T20:24:57.877Z] During handling of the above exception, another exception occurred:
[2022-10-07T20:24:57.877Z]
[2022-10-07T20:24:57.877Z] Traceback (most recent call last):
[2022-10-07T20:24:57.877Z] File "/home/jenkins/agent/workspace/tests/python_client/utils/api_request.py", line 26, in inner_wrapper
[2022-10-07T20:24:57.877Z] res = func(*args, **_kwargs)
[2022-10-07T20:24:57.877Z] File "/home/jenkins/agent/workspace/tests/python_client/utils/api_request.py", line 57, in api_request
[2022-10-07T20:24:57.877Z] return func(*arg, **kwargs)
[2022-10-07T20:24:57.877Z] File "/usr/local/lib/python3.7/dist-packages/pymilvus/orm/collection.py", line 474, in load
[2022-10-07T20:24:57.877Z] conn.load_collection(self._name, replica_number=replica_number, timeout=timeout, **kwargs)
[2022-10-07T20:24:57.877Z] File "/usr/local/lib/python3.7/dist-packages/pymilvus/decorators.py", line 113, in handler
[2022-10-07T20:24:57.877Z] raise e
[2022-10-07T20:24:57.877Z] File "/usr/local/lib/python3.7/dist-packages/pymilvus/decorators.py", line 109, in handler
[2022-10-07T20:24:57.877Z] return func(*args, **kwargs)
[2022-10-07T20:24:57.877Z] File "/usr/local/lib/python3.7/dist-packages/pymilvus/decorators.py", line 139, in handler
[2022-10-07T20:24:57.877Z] ret = func(self, *args, **kwargs)
[2022-10-07T20:24:57.877Z] File "/usr/local/lib/python3.7/dist-packages/pymilvus/decorators.py", line 81, in handler
[2022-10-07T20:24:57.877Z] raise MilvusException(e.code, f"{timeout_msg}, message={e.message}")
[2022-10-07T20:24:57.877Z] pymilvus.exceptions.MilvusException: <MilvusException: (code=-1, message=Retry timeout: 20s, message=wait for loading collection timeout)>
[2022-10-07T20:24:57.877Z] (api_request.py:39)
[2022-10-07T20:24:57.877Z] [2022-10-07 20:24:08 - ERROR - ci_test]: (api_response) : <MilvusException: (code=-1, message=Retry timeout: 20s, message=wait for loading collection timeout)> (api_request.py:40)
```
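The doubled message in the final exception (`Retry timeout: 20s, message=wait for loading collection timeout`) is characteristic of a retry wrapper that re-raises the last underlying error once its deadline expires. A minimal sketch of that pattern — the class and decorator names here are hypothetical stand-ins, not the actual pymilvus internals:

```python
import functools
import time

class MilvusLikeError(Exception):
    """Stand-in for pymilvus.exceptions.MilvusException (hypothetical)."""
    def __init__(self, code, message):
        self.code = code
        self.message = message
        super().__init__(f"<MilvusLikeError: (code={code}, message={message})>")

def retry_with_timeout(timeout_s):
    """Retry the wrapped call until timeout_s elapses; on expiry, re-raise
    the last error with a 'Retry timeout' prefix, producing the nested
    message seen in the log above."""
    def deco(func):
        @functools.wraps(func)
        def handler(*args, **kwargs):
            deadline = time.monotonic() + timeout_s
            while True:
                try:
                    return func(*args, **kwargs)
                except MilvusLikeError as e:
                    if time.monotonic() >= deadline:
                        raise MilvusLikeError(
                            e.code,
                            f"Retry timeout: {timeout_s}s, message={e.message}",
                        )
                    time.sleep(0.01)
        return handler
    return deco

@retry_with_timeout(0.05)
def always_times_out():
    # Simulates load_collection never finishing before the deadline.
    raise MilvusLikeError(-1, "wait for loading collection timeout")

try:
    always_times_out()
except MilvusLikeError as e:
    print(e.message)
```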
### Expected Behavior
all test cases passed
### Steps To Reproduce
_No response_
### Milvus Log
failed job: https://qa-jenkins.milvus.io/blue/organizations/jenkins/chaos-test-kafka/detail/chaos-test-kafka/1780/pipeline
log:
[artifacts-querynode-pod-failure-1780-server-logs.tar.gz](https://github.com/milvus-io/milvus/files/9738826/artifacts-querynode-pod-failure-1780-server-logs.tar.gz)
[artifacts-querynode-pod-failure-1780-pytest-logs.tar.gz](https://github.com/milvus-io/milvus/files/9738827/artifacts-querynode-pod-failure-1780-pytest-logs.tar.gz)
### Anything else?

|
test
|
load collection timeout after many pod kill and pod failure chaos test when kafka is as mq for milvus is there an existing issue for this i have searched the existing issues environment markdown milvus version master deployment mode standalone or cluster cluster sdk version e g pymilvus os ubuntu or centos cpu memory gpu others current behavior assert index entities test all collections after chaos py api request args kwargs name test timeout index name default idx api request py api response status code message api request py checker vqjdwscg create index status code message wrapper py assert index test all collections after chaos py api request args kwargs api request py rpc error decorators py retry timeout decorators py rpc error decorators py traceback most recent call last file usr local lib dist packages pymilvus decorators py line in handler return func self args kwargs file usr local lib dist packages pymilvus client grpc handler py line in load collection self wait for loading collection collection name timeout file usr local lib dist packages pymilvus decorators py line in handler raise e file usr local lib dist packages pymilvus decorators py line in handler return func args kwargs file usr local lib dist packages pymilvus decorators py line in handler ret func self args kwargs file usr local lib dist packages pymilvus decorators py line in handler raise e file usr local lib dist packages pymilvus decorators py line in handler return func self args kwargs file usr local lib dist packages pymilvus client grpc handler py line in wait for loading collection return self wait for loading collection collection name timeout file usr local lib dist packages pymilvus client grpc handler py line in wait for loading collection raise milvusexception wait for loading collection timeout pymilvus exceptions milvusexception during handling of the above exception another exception occurred traceback most recent call last file home jenkins agent workspace tests python 
client utils api request py line in inner wrapper res func args kwargs file home jenkins agent workspace tests python client utils api request py line in api request return func arg kwargs file usr local lib dist packages pymilvus orm collection py line in load conn load collection self name replica number replica number timeout timeout kwargs file usr local lib dist packages pymilvus decorators py line in handler raise e file usr local lib dist packages pymilvus decorators py line in handler return func args kwargs file usr local lib dist packages pymilvus decorators py line in handler ret func self args kwargs file usr local lib dist packages pymilvus decorators py line in handler raise milvusexception e code f timeout msg message e message pymilvus exceptions milvusexception api request py api response api request py expected behavior all test cases passed steps to reproduce no response milvus log failed job log anything else
| 1
|
231,993
| 18,838,808,937
|
IssuesEvent
|
2021-11-11 06:39:08
|
Cookie-AutoDelete/Cookie-AutoDelete
|
https://api.github.com/repos/Cookie-AutoDelete/Cookie-AutoDelete
|
opened
|
[Bug] Discord cookies are not deleted after enabling local storage cleanup option.
|
untested bug/issue
|
### Acknowledgements
- [X] I acknowledge that I have read the above items
### Describe the bug
1. Go to discord.com, make an account, and log in
2. Close the tab and go to discord.com again
3. Click on the login button and it will automatically detect your account.
### To Reproduce
Same as Describe the bug
### Expected Behavior
I expected that discord wouldn't recognize my old account.
### Screenshots
_No response_
### System Info - Operating System (OS)
Arch Linux linux 5.14.16.arch1-1
### System Info - Browser Info
Firefox 94.0.1 (x64) latest
### System Info - CookieAutoDelete Version
3.6.0
### Additional Context
_No response_
|
1.0
|
[Bug] Discord cookies are not deleted after enabling local storage cleanup option. - ### Acknowledgements
- [X] I acknowledge that I have read the above items
### Describe the bug
1. Go to discord.com, make an account, and log in
2. Close the tab and go to discord.com again
3. Click on the login button and it will automatically detect your account.
### To Reproduce
Same as Describe the bug
### Expected Behavior
I expected that discord wouldn't recognize my old account.
### Screenshots
_No response_
### System Info - Operating System (OS)
Arch Linux linux 5.14.16.arch1-1
### System Info - Browser Info
Firefox 94.0.1 (x64) latest
### System Info - CookieAutoDelete Version
3.6.0
### Additional Context
_No response_
|
test
|
discord cookies are not deleted after enabling local storage cleanup option acknowledgements i acknowledge that i have read the above items describe the bug go to discord com and make a account and login close the tab and again go to discord com click on the login button and it will automatically detect your account to reproduce same as describe the bug expected behavior i expected that discord wouldn t recognize my old account screenshots no response system info operating system os arch linux linux system info browser info firefox latest system info cookieautodelete version additional context no response
| 1
|
80,700
| 7,754,239,705
|
IssuesEvent
|
2018-05-31 05:39:49
|
Spooky-Action-Developers/Project-Ironclad
|
https://api.github.com/repos/Spooky-Action-Developers/Project-Ironclad
|
closed
|
List Access Grants
|
backend enhancement low requires test user story
|
As a Mozilla Employee, I want to be able to list the currently usable Access Grants, so that I can see what access grants are still in use.
|
1.0
|
List Access Grants - As a Mozilla Employee, I want to be able to list the currently usable Access Grants, so that I can see what access grants are still in use.
|
test
|
list access grants as a mozilla employee i want to be able to list the currently usable access grants so that i can see what access grants are still in use
| 1
|
55,109
| 13,964,224,053
|
IssuesEvent
|
2020-10-25 17:16:42
|
g-clef/Todo
|
https://api.github.com/repos/g-clef/Todo
|
opened
|
Possible to do segmentation learning on pcaps
|
ML Security
|
Teach a deep learning segmentation learner what protocols look like, and what applications inside those protocols look like, using segmentation masks. Have it learn to identify protocols on other ports, as well as what misbehaving data looks like (TCP on DNS, etc.)
|
True
|
Possible to do segmentation learning on pcaps - Teach a deep learning segmentation learner what protocols look like, and what applications inside those protocols look like, using segmentation masks. Have it learn to identify protocols on other ports, as well as what misbehaving data looks like (TCP on DNS, etc.)
|
non_test
|
possible to do segmentation learning on pcaps teach a deep learning segmentation learner what protocols look like and what applications inside those protocols look like with segmentation masks have it learn to identify protocols on other ports also what mis behaving data looks like tcp on dns etc
| 0
|
80,795
| 7,757,370,240
|
IssuesEvent
|
2018-05-31 16:06:35
|
pouchdb/pouchdb
|
https://api.github.com/repos/pouchdb/pouchdb
|
closed
|
PouchDB.defaults({prefix}) misses `/` separator unless port is set
|
bug has test case pinned
|
### Issue
Write the description of the issue here
### Info
- Environment: Node.js (tested on v6.9.1)
- Adapter: http
- Server: n/a
### Reproduce
This logs `Error: getaddrinfo ENOTFOUND example.comfoo example.comfoo:80`
``` js
const PouchDB = require('pouchdb').defaults({
prefix: 'http://example.com'
})
const db = new PouchDB('foo')
db.info().catch(console.log)
```
If I replace `http://example.com` with `http://example.com:80` the error does not occur
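The symptom (`example.comfoo`) is consistent with the prefix and database name being concatenated without a `/` separator. A plausible sketch of that kind of join bug and its fix — these helpers are hypothetical illustrations, not PouchDB's actual source:

```javascript
// Naive join: plain concatenation drops the '/' separator unless the
// prefix already ends with one.
function naiveJoin(prefix, name) {
  return prefix + name;
}

// Fixed join: insert '/' when the prefix does not already end with one.
function safeJoin(prefix, name) {
  return prefix.endsWith('/') ? prefix + name : prefix + '/' + name;
}

console.log(naiveJoin('http://example.com', 'foo')); // http://example.comfoo
console.log(safeJoin('http://example.com', 'foo'));  // http://example.com/foo
```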
|
1.0
|
PouchDB.defaults({prefix}) misses `/` separator unless port is set - ### Issue
Write the description of the issue here
### Info
- Environment: Node.js (tested on v6.9.1)
- Adapter: http
- Server: n/a
### Reproduce
This logs `Error: getaddrinfo ENOTFOUND example.comfoo example.comfoo:80`
``` js
const PouchDB = require('pouchdb').defaults({
prefix: 'http://example.com'
})
const db = new PouchDB('foo')
db.info().catch(console.log)
```
If I replace `http://example.com` with `http://example.com:80` the error does not occur
|
test
|
pouchdb defaults prefix misses separator unless port is set issue write the description of the issue here info environment node js tested on adapter http server n a reproduce this logs error getaddrinfo enotfound example comfoo example comfoo js const pouchdb require pouchdb defaults prefix const db new pouchdb foo db info catch console log if i replace with the error does not occur
| 1
|
10,751
| 8,706,302,552
|
IssuesEvent
|
2018-12-06 02:08:31
|
square/misk-web
|
https://api.github.com/repos/square/misk-web
|
closed
|
Build initial new-tab.sh
|
infrastructure
|
1. Make it work through `curl | bash -s` so a local shell file doesn't have to be updated
1. Prompt for new name in formats: `foo-bar`, `fooBar`, `FooBar`
1. Download .zip of palette src code from github
1. Unzips to pwd
1. Renames folder to `foo-bar
1. [Recursive find/replace](https://stackoverflow.com/questions/11392478/how-to-replace-a-string-in-multiple-files-in-linux-command-line) in directory of `palette` -> `fooBar`, `Palette` -> `FooBar`
|
1.0
|
Build initial new-tab.sh - 1. Make it work through `curl | bash -s` so a local shell file doesn't have to be updated
1. Prompt for new name in formats: `foo-bar`, `fooBar`, `FooBar`
1. Download .zip of palette src code from github
1. Unzips to pwd
1. Renames folder to `foo-bar
1. [Recursive find/replace](https://stackoverflow.com/questions/11392478/how-to-replace-a-string-in-multiple-files-in-linux-command-line) in directory of `palette` -> `fooBar`, `Palette` -> `FooBar`
|
non_test
|
build initial new tab sh make it work through curl bash s so a local shell file doesn t have to be updated prompt for new name in formats foo bar foobar foobar download zip of palette src code from github unzips to pwd renames folder to foo bar in directory of palette foobar palette foobar
| 0
|
122,696
| 12,157,684,231
|
IssuesEvent
|
2020-04-25 23:23:12
|
mckib2/pygrappa
|
https://api.github.com/repos/mckib2/pygrappa
|
closed
|
Online docs
|
documentation enhancement
|
Probably a static readthedocs Sphinx documentation
- Use numpy style docstrings
|
1.0
|
Online docs - Probably a static readthedocs Sphinx documentation
- Use numpy style docstrings
|
non_test
|
online docs probably a static readthedocs sphinx documentation use numpy style docstrings
| 0
|
140,071
| 11,301,561,571
|
IssuesEvent
|
2020-01-17 15:50:10
|
ValveSoftware/Proton
|
https://api.github.com/repos/ValveSoftware/Proton
|
closed
|
Steering Wheel Force Feedback broke in 4.11-7 (blanket issue)
|
Need Retest
|
It appears there are a number of games where this is confirmed.
It may not be specific to the Logitech G29, this is the only wheel I have to confirm with.
Examples:
- https://github.com/ValveSoftware/Proton/issues/2881#issuecomment-548608453 (Project Cars 1+2, RACE 07, GT Legends)
- https://github.com/ValveSoftware/Proton/issues/758 (Wreckfest)
- https://github.com/ValveSoftware/Proton/issues/2881#issuecomment-517915694 (was working) (F1 2019)
- https://github.com/ValveSoftware/Proton/issues/2881#issuecomment-548608453 (but then lost) (F1 2019)
- https://github.com/ValveSoftware/Proton/issues/2366 (DiRT Rally 2.0)
- https://github.com/ValveSoftware/Proton/issues/244#issuecomment-548611930 (Project Cars)
|
1.0
|
Steering Wheel Force Feedback broke in 4.11-7 (blanket issue) - It appears there are a number of games where this is confirmed.
It may not be specific to the Logitech G29, this is the only wheel I have to confirm with.
Examples:
- https://github.com/ValveSoftware/Proton/issues/2881#issuecomment-548608453 (Project Cars 1+2, RACE 07, GT Legends)
- https://github.com/ValveSoftware/Proton/issues/758 (Wreckfest)
- https://github.com/ValveSoftware/Proton/issues/2881#issuecomment-517915694 (was working) (F1 2019)
- https://github.com/ValveSoftware/Proton/issues/2881#issuecomment-548608453 (but then lost) (F1 2019)
- https://github.com/ValveSoftware/Proton/issues/2366 (DiRT Rally 2.0)
- https://github.com/ValveSoftware/Proton/issues/244#issuecomment-548611930 (Project Cars)
|
test
|
steering wheel force feedback broke in blanket issue it appears there are a number of games where this is confirmed it may not be specific to the logitech this is the only wheel i have to confirm with examples project cars race gt legends wreckfest was working but then lost dirt rally project cars
| 1
|
68,210
| 7,089,516,475
|
IssuesEvent
|
2018-01-12 03:13:29
|
litehelpers/Cordova-sqlite-help
|
https://api.github.com/repos/litehelpers/Cordova-sqlite-help
|
opened
|
Dealing with possible database corruption
|
doc-pitfall doc-todo question testing user community help
|
Reports of sqlite corruption on Cordova have been extremely rare but I have a customer who is dealing with this kind of issue right now. In general I recommend that app developers upgrade to a very recent version of the plugin before diving into much deeper investigation.
In case of an app that seems vulnerable to database corruption it is recommended to do `PRAGMA integrity_check` at certain points and do the following in case it does not report "OK":
- log and report error (sentry.io may be a good friend)
- dump or otherwise capture the users data from the database if possible (sqlite is designed to recover from integrity_check failures, at least to a certain extent)
- obtain a copy of the sqlite database that fails integrity_check (if possible) for further analysis
- remove (delete) the database that fails the integrity check
I found and recommend the following links, despite what looks like some conflicting information:
- <http://www.sqlite.org/howtocorrupt.html>
- <http://sqlite.1065341.n5.nabble.com/Integrity-Check-Failure-Handling-td70289.html>
- <http://blog.niklasottosson.com/?p=852>
- <http://www.froebe.net/blog/2015/05/27/error-sqlite-database-is-malformed-solved/>
|
1.0
|
Dealing with possible database corruption - Reports of sqlite corruption on Cordova have been extremely rare but I have a customer who is dealing with this kind of issue right now. In general I recommend that app developers upgrade to a very recent version of the plugin before diving into much deeper investigation.
In case of an app that seems vulnerable to database corruption it is recommended to do `PRAGMA integrity_check` at certain points and do the following in case it does not report "OK":
- log and report error (sentry.io may be a good friend)
- dump or otherwise capture the users data from the database if possible (sqlite is designed to recover from integrity_check failures, at least to a certain extent)
- obtain a copy of the sqlite database that fails integrity_check (if possible) for further analysis
- remove (delete) the database that fails the integrity check
I found and recommend the following links, despite what looks like some conflicting information:
- <http://www.sqlite.org/howtocorrupt.html>
- <http://sqlite.1065341.n5.nabble.com/Integrity-Check-Failure-Handling-td70289.html>
- <http://blog.niklasottosson.com/?p=852>
- <http://www.froebe.net/blog/2015/05/27/error-sqlite-database-is-malformed-solved/>
|
test
|
dealing with possible database corruption reports of sqlite corruption on cordova have been extremely rare but i have a customer who is dealing with this kind of issue right now in general i recommend that app developers upgrade to a very recent version of the plugin before diving into much deeper investigation in case of an app that seems vulnerable to database corruption it is recommended to do pragma integrity check at certain points and do the following in case it does not report ok log and report error sentry io may be a good friend dump or otherwise capture the users data from the database if possible sqlite is designed to recover from integrity check failures at least to a certain extent obtain a copy of the sqlite database that fails integrity check if possible for further analysis remove delete the database that fails the integrity check i found and recommend the following links despite what looks like some conflicting information
| 1
|
312,531
| 26,870,522,286
|
IssuesEvent
|
2023-02-04 12:04:31
|
ethereum/solidity
|
https://api.github.com/repos/ethereum/solidity
|
closed
|
[Testing] Add semantic tests for imports from multiple files
|
testing :hammer: closed-due-inactivity stale
|
<!--## Prerequisites
- First, many thanks for taking part in the community. We really appreciate that.
- We realize there is a lot of data requested here. We ask only that you do your best to provide as much information as possible so we can better help you.
- Support questions are better asked in one of the following locations:
- [Solidity chat](https://gitter.im/ethereum/solidity)
- [Stack Overflow](https://ethereum.stackexchange.com/)
- Ensure the issue isn't already reported (check `feature` and `language design` labels).
*Delete the above section and the instructions in the sections below before submitting*
-->
## Abstract
<!--
Please describe by example what problem you see in the current Solidity language
and reason about it.
-->
## Motivation
<!--
In this section you describe how you propose to address the problem you described earlier,
including by giving one or more exemplary source code snippets for demonstration.
-->
## Specification
<!--
The technical specification should describe the syntax and semantics of any new feature. The
specification should be detailed enough to allow any developer to implement the functionality.
-->
## Backwards Compatibility
<!--
All language changes that introduce backwards incompatibilities must include a section describing
these incompatibilities and their severity.
Please describe how you propose to deal with these incompatibilities.
-->
|
1.0
|
[Testing] Add semantic tests for imports from multiple files - <!--## Prerequisites
- First, many thanks for taking part in the community. We really appreciate that.
- We realize there is a lot of data requested here. We ask only that you do your best to provide as much information as possible so we can better help you.
- Support questions are better asked in one of the following locations:
- [Solidity chat](https://gitter.im/ethereum/solidity)
- [Stack Overflow](https://ethereum.stackexchange.com/)
- Ensure the issue isn't already reported (check `feature` and `language design` labels).
*Delete the above section and the instructions in the sections below before submitting*
-->
## Abstract
<!--
Please describe by example what problem you see in the current Solidity language
and reason about it.
-->
## Motivation
<!--
In this section you describe how you propose to address the problem you described earlier,
including by giving one or more exemplary source code snippets for demonstration.
-->
## Specification
<!--
The technical specification should describe the syntax and semantics of any new feature. The
specification should be detailed enough to allow any developer to implement the functionality.
-->
## Backwards Compatibility
<!--
All language changes that introduce backwards incompatibilities must include a section describing
these incompatibilities and their severity.
Please describe how you propose to deal with these incompatibilities.
-->
|
test
|
add semantic tests for imports from multiple files prerequisites first many thanks for taking part in the community we really appreciate that we realize there is a lot of data requested here we ask only that you do your best to provide as much information as possible so we can better help you support questions are better asked in one of the following locations ensure the issue isn t already reported check feature and language design labels delete the above section and the instructions in the sections below before submitting abstract please describe by example what problem you see in the current solidity language and reason about it motivation in this section you describe how you propose to address the problem you described earlier including by giving one or more exemplary source code snippets for demonstration specification the technical specification should describe the syntax and semantics of any new feature the specification should be detailed enough to allow any developer to implement the functionality backwards compatibility all language changes that introduce backwards incompatibilities must include a section describing these incompatibilities and their severity please describe how you propose to deal with these incompatibilities
| 1
|
21,434
| 3,710,601,765
|
IssuesEvent
|
2016-03-02 05:38:11
|
www-purple/Mixxy
|
https://api.github.com/repos/www-purple/Mixxy
|
closed
|
Define use cases for the creation and management of comics
|
design
|
This is ignoring the social features; this is really just about drawing and management
|
1.0
|
Define use cases for the creation and management of comics - This is ignoring the social features; this is really just about drawing and management
|
non_test
|
define use cases for the creation and management of comics this is ignoring the social features this is really just about drawing and management
| 0
|
103,412
| 16,602,505,534
|
IssuesEvent
|
2021-06-01 21:41:50
|
gms-ws-sandbox/nibrs
|
https://api.github.com/repos/gms-ws-sandbox/nibrs
|
opened
|
CVE-2020-13934 (High) detected in multiple libraries
|
security vulnerability
|
## CVE-2020-13934 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>tomcat-embed-core-8.5.20.jar</b>, <b>tomcat-embed-core-9.0.19.jar</b>, <b>tomcat-embed-core-8.5.34.jar</b></p></summary>
<p>
<details><summary><b>tomcat-embed-core-8.5.20.jar</b></p></summary>
<p>Core Tomcat implementation</p>
<p>Library home page: <a href="http://tomcat.apache.org/">http://tomcat.apache.org/</a></p>
<p>Path to dependency file: nibrs/tools/nibrs-fbi-service/pom.xml</p>
<p>Path to vulnerable library: /home/wss-scanner/.m2/repository/org/apache/tomcat/embed/tomcat-embed-core/8.5.20/tomcat-embed-core-8.5.20.jar,nibrs/tools/nibrs-fbi-service/target/nibrs-fbi-service-1.0.0/WEB-INF/lib/tomcat-embed-core-8.5.20.jar</p>
<p>
Dependency Hierarchy:
- :x: **tomcat-embed-core-8.5.20.jar** (Vulnerable Library)
</details>
<details><summary><b>tomcat-embed-core-9.0.19.jar</b></p></summary>
<p>Core Tomcat implementation</p>
<p>Library home page: <a href="https://tomcat.apache.org/">https://tomcat.apache.org/</a></p>
<p>Path to dependency file: nibrs/tools/nibrs-summary-report-common/pom.xml</p>
<p>Path to vulnerable library: /home/wss-scanner/.m2/repository/org/apache/tomcat/embed/tomcat-embed-core/9.0.19/tomcat-embed-core-9.0.19.jar</p>
<p>
Dependency Hierarchy:
- spring-boot-starter-web-2.1.5.RELEASE.jar (Root Library)
- spring-boot-starter-tomcat-2.1.5.RELEASE.jar
- :x: **tomcat-embed-core-9.0.19.jar** (Vulnerable Library)
</details>
<details><summary><b>tomcat-embed-core-8.5.34.jar</b></p></summary>
<p>Core Tomcat implementation</p>
<p>Library home page: <a href="https://tomcat.apache.org/">https://tomcat.apache.org/</a></p>
<p>Path to dependency file: nibrs/web/nibrs-web/pom.xml</p>
<p>Path to vulnerable library: /home/wss-scanner/.m2/repository/org/apache/tomcat/embed/tomcat-embed-core/8.5.34/tomcat-embed-core-8.5.34.jar,nibrs/tools/nibrs-route/target/nibrs-route-1.0.0/WEB-INF/lib/tomcat-embed-core-8.5.34.jar,/home/wss-scanner/.m2/repository/org/apache/tomcat/embed/tomcat-embed-core/8.5.34/tomcat-embed-core-8.5.34.jar,/home/wss-scanner/.m2/repository/org/apache/tomcat/embed/tomcat-embed-core/8.5.34/tomcat-embed-core-8.5.34.jar,/home/wss-scanner/.m2/repository/org/apache/tomcat/embed/tomcat-embed-core/8.5.34/tomcat-embed-core-8.5.34.jar,/home/wss-scanner/.m2/repository/org/apache/tomcat/embed/tomcat-embed-core/8.5.34/tomcat-embed-core-8.5.34.jar</p>
<p>
Dependency Hierarchy:
- spring-boot-starter-web-2.0.5.RELEASE.jar (Root Library)
- spring-boot-starter-tomcat-2.0.5.RELEASE.jar
- :x: **tomcat-embed-core-8.5.34.jar** (Vulnerable Library)
</details>
<p>Found in HEAD commit: <a href="https://github.com/gms-ws-sandbox/nibrs/commit/dba6b0930aa319c568021490e9259f5cae89b6c5">dba6b0930aa319c568021490e9259f5cae89b6c5</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
An h2c direct connection to Apache Tomcat 10.0.0-M1 to 10.0.0-M6, 9.0.0.M5 to 9.0.36 and 8.5.1 to 8.5.56 did not release the HTTP/1.1 processor after the upgrade to HTTP/2. If a sufficient number of such requests were made, an OutOfMemoryException could occur leading to a denial of service.
<p>Publish Date: 2020-07-14
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-13934>CVE-2020-13934</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://lists.apache.org/thread.html/r61f411cf82488d6ec213063fc15feeeb88e31b0ca9c29652ee4f962e%40%3Cannounce.tomcat.apache.org%3E">https://lists.apache.org/thread.html/r61f411cf82488d6ec213063fc15feeeb88e31b0ca9c29652ee4f962e%40%3Cannounce.tomcat.apache.org%3E</a></p>
<p>Release Date: 2020-07-14</p>
<p>Fix Resolution: org.apache.tomcat:tomcat-coyote:8.5.57,9.0.37,10.0.0-M7;org.apache.tomcat.embed:tomcat-embed-core:8.5.57,9.0.37,10.0.0-M7</p>
</p>
</details>
<p></p>
<!-- <REMEDIATE>{"isOpenPROnVulnerability":true,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"Java","groupId":"org.apache.tomcat.embed","packageName":"tomcat-embed-core","packageVersion":"8.5.20","packageFilePaths":["/tools/nibrs-fbi-service/pom.xml"],"isTransitiveDependency":false,"dependencyTree":"org.apache.tomcat.embed:tomcat-embed-core:8.5.20","isMinimumFixVersionAvailable":true,"minimumFixVersion":"org.apache.tomcat:tomcat-coyote:8.5.57,9.0.37,10.0.0-M7;org.apache.tomcat.embed:tomcat-embed-core:8.5.57,9.0.37,10.0.0-M7"},{"packageType":"Java","groupId":"org.apache.tomcat.embed","packageName":"tomcat-embed-core","packageVersion":"9.0.19","packageFilePaths":["/tools/nibrs-summary-report-common/pom.xml"],"isTransitiveDependency":true,"dependencyTree":"org.springframework.boot:spring-boot-starter-web:2.1.5.RELEASE;org.springframework.boot:spring-boot-starter-tomcat:2.1.5.RELEASE;org.apache.tomcat.embed:tomcat-embed-core:9.0.19","isMinimumFixVersionAvailable":true,"minimumFixVersion":"org.apache.tomcat:tomcat-coyote:8.5.57,9.0.37,10.0.0-M7;org.apache.tomcat.embed:tomcat-embed-core:8.5.57,9.0.37,10.0.0-M7"},{"packageType":"Java","groupId":"org.apache.tomcat.embed","packageName":"tomcat-embed-core","packageVersion":"8.5.34","packageFilePaths":["/web/nibrs-web/pom.xml","/tools/nibrs-staging-data/pom.xml","/tools/nibrs-staging-data-common/pom.xml","/tools/nibrs-summary-report/pom.xml","/tools/nibrs-route/pom.xml"],"isTransitiveDependency":true,"dependencyTree":"org.springframework.boot:spring-boot-starter-web:2.0.5.RELEASE;org.springframework.boot:spring-boot-starter-tomcat:2.0.5.RELEASE;org.apache.tomcat.embed:tomcat-embed-core:8.5.34","isMinimumFixVersionAvailable":true,"minimumFixVersion":"org.apache.tomcat:tomcat-coyote:8.5.57,9.0.37,10.0.0-M7;org.apache.tomcat.embed:tomcat-embed-core:8.5.57,9.0.37,10.0.0-M7"}],"baseBranches":["master"],"vulnerabilityIdentifier":"CVE-2020-13934","vulnerabilityDetails":"An h2c direct connection to Apache 
Tomcat 10.0.0-M1 to 10.0.0-M6, 9.0.0.M5 to 9.0.36 and 8.5.1 to 8.5.56 did not release the HTTP/1.1 processor after the upgrade to HTTP/2. If a sufficient number of such requests were made, an OutOfMemoryException could occur leading to a denial of service.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-13934","cvss3Severity":"high","cvss3Score":"7.5","cvss3Metrics":{"A":"High","AC":"Low","PR":"None","S":"Unchanged","C":"None","UI":"None","AV":"Network","I":"None"},"extraData":{}}</REMEDIATE> -->
|
True
|
CVE-2020-13934 (High) detected in multiple libraries - ## CVE-2020-13934 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>tomcat-embed-core-8.5.20.jar</b>, <b>tomcat-embed-core-9.0.19.jar</b>, <b>tomcat-embed-core-8.5.34.jar</b></p></summary>
<p>
<details><summary><b>tomcat-embed-core-8.5.20.jar</b></p></summary>
<p>Core Tomcat implementation</p>
<p>Library home page: <a href="http://tomcat.apache.org/">http://tomcat.apache.org/</a></p>
<p>Path to dependency file: nibrs/tools/nibrs-fbi-service/pom.xml</p>
<p>Path to vulnerable library: /home/wss-scanner/.m2/repository/org/apache/tomcat/embed/tomcat-embed-core/8.5.20/tomcat-embed-core-8.5.20.jar,nibrs/tools/nibrs-fbi-service/target/nibrs-fbi-service-1.0.0/WEB-INF/lib/tomcat-embed-core-8.5.20.jar</p>
<p>
Dependency Hierarchy:
- :x: **tomcat-embed-core-8.5.20.jar** (Vulnerable Library)
</details>
<details><summary><b>tomcat-embed-core-9.0.19.jar</b></p></summary>
<p>Core Tomcat implementation</p>
<p>Library home page: <a href="https://tomcat.apache.org/">https://tomcat.apache.org/</a></p>
<p>Path to dependency file: nibrs/tools/nibrs-summary-report-common/pom.xml</p>
<p>Path to vulnerable library: /home/wss-scanner/.m2/repository/org/apache/tomcat/embed/tomcat-embed-core/9.0.19/tomcat-embed-core-9.0.19.jar</p>
<p>
Dependency Hierarchy:
- spring-boot-starter-web-2.1.5.RELEASE.jar (Root Library)
- spring-boot-starter-tomcat-2.1.5.RELEASE.jar
- :x: **tomcat-embed-core-9.0.19.jar** (Vulnerable Library)
</details>
<details><summary><b>tomcat-embed-core-8.5.34.jar</b></p></summary>
<p>Core Tomcat implementation</p>
<p>Library home page: <a href="https://tomcat.apache.org/">https://tomcat.apache.org/</a></p>
<p>Path to dependency file: nibrs/web/nibrs-web/pom.xml</p>
<p>Path to vulnerable library: /home/wss-scanner/.m2/repository/org/apache/tomcat/embed/tomcat-embed-core/8.5.34/tomcat-embed-core-8.5.34.jar,nibrs/tools/nibrs-route/target/nibrs-route-1.0.0/WEB-INF/lib/tomcat-embed-core-8.5.34.jar,/home/wss-scanner/.m2/repository/org/apache/tomcat/embed/tomcat-embed-core/8.5.34/tomcat-embed-core-8.5.34.jar,/home/wss-scanner/.m2/repository/org/apache/tomcat/embed/tomcat-embed-core/8.5.34/tomcat-embed-core-8.5.34.jar,/home/wss-scanner/.m2/repository/org/apache/tomcat/embed/tomcat-embed-core/8.5.34/tomcat-embed-core-8.5.34.jar,/home/wss-scanner/.m2/repository/org/apache/tomcat/embed/tomcat-embed-core/8.5.34/tomcat-embed-core-8.5.34.jar</p>
<p>
Dependency Hierarchy:
- spring-boot-starter-web-2.0.5.RELEASE.jar (Root Library)
- spring-boot-starter-tomcat-2.0.5.RELEASE.jar
- :x: **tomcat-embed-core-8.5.34.jar** (Vulnerable Library)
</details>
<p>Found in HEAD commit: <a href="https://github.com/gms-ws-sandbox/nibrs/commit/dba6b0930aa319c568021490e9259f5cae89b6c5">dba6b0930aa319c568021490e9259f5cae89b6c5</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
An h2c direct connection to Apache Tomcat 10.0.0-M1 to 10.0.0-M6, 9.0.0.M5 to 9.0.36 and 8.5.1 to 8.5.56 did not release the HTTP/1.1 processor after the upgrade to HTTP/2. If a sufficient number of such requests were made, an OutOfMemoryException could occur leading to a denial of service.
<p>Publish Date: 2020-07-14
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-13934>CVE-2020-13934</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
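The 7.5 score above follows from the CVSS v3.0 base-score equations. A minimal Python sketch (metric weights taken from the published CVSS v3.0 specification) reproduces it for the vector AV:N/AC:L/PR:N/UI:N/S:U/C:N/I:N/A:H:

```python
import math

# CVSS v3.0 metric weights for this vulnerability's vector.
AV = 0.85   # Attack Vector: Network
AC = 0.77   # Attack Complexity: Low
PR = 0.85   # Privileges Required: None (Scope: Unchanged)
UI = 0.85   # User Interaction: None
C, I, A = 0.0, 0.0, 0.56  # Confidentiality/Integrity: None, Availability: High

def roundup(x):
    # The CVSS "round up to one decimal place" rule.
    return math.ceil(x * 10) / 10

iss = 1 - (1 - C) * (1 - I) * (1 - A)   # Impact Sub-Score
impact = 6.42 * iss                     # Scope: Unchanged
exploitability = 8.22 * AV * AC * PR * UI
base_score = roundup(min(impact + exploitability, 10)) if impact > 0 else 0.0

print(base_score)  # → 7.5
```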
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://lists.apache.org/thread.html/r61f411cf82488d6ec213063fc15feeeb88e31b0ca9c29652ee4f962e%40%3Cannounce.tomcat.apache.org%3E">https://lists.apache.org/thread.html/r61f411cf82488d6ec213063fc15feeeb88e31b0ca9c29652ee4f962e%40%3Cannounce.tomcat.apache.org%3E</a></p>
<p>Release Date: 2020-07-14</p>
<p>Fix Resolution: org.apache.tomcat:tomcat-coyote:8.5.57,9.0.37,10.0.0-M7;org.apache.tomcat.embed:tomcat-embed-core:8.5.57,9.0.37,10.0.0-M7</p>
</p>
</details>
<p></p>
<!-- <REMEDIATE>{"isOpenPROnVulnerability":true,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"Java","groupId":"org.apache.tomcat.embed","packageName":"tomcat-embed-core","packageVersion":"8.5.20","packageFilePaths":["/tools/nibrs-fbi-service/pom.xml"],"isTransitiveDependency":false,"dependencyTree":"org.apache.tomcat.embed:tomcat-embed-core:8.5.20","isMinimumFixVersionAvailable":true,"minimumFixVersion":"org.apache.tomcat:tomcat-coyote:8.5.57,9.0.37,10.0.0-M7;org.apache.tomcat.embed:tomcat-embed-core:8.5.57,9.0.37,10.0.0-M7"},{"packageType":"Java","groupId":"org.apache.tomcat.embed","packageName":"tomcat-embed-core","packageVersion":"9.0.19","packageFilePaths":["/tools/nibrs-summary-report-common/pom.xml"],"isTransitiveDependency":true,"dependencyTree":"org.springframework.boot:spring-boot-starter-web:2.1.5.RELEASE;org.springframework.boot:spring-boot-starter-tomcat:2.1.5.RELEASE;org.apache.tomcat.embed:tomcat-embed-core:9.0.19","isMinimumFixVersionAvailable":true,"minimumFixVersion":"org.apache.tomcat:tomcat-coyote:8.5.57,9.0.37,10.0.0-M7;org.apache.tomcat.embed:tomcat-embed-core:8.5.57,9.0.37,10.0.0-M7"},{"packageType":"Java","groupId":"org.apache.tomcat.embed","packageName":"tomcat-embed-core","packageVersion":"8.5.34","packageFilePaths":["/web/nibrs-web/pom.xml","/tools/nibrs-staging-data/pom.xml","/tools/nibrs-staging-data-common/pom.xml","/tools/nibrs-summary-report/pom.xml","/tools/nibrs-route/pom.xml"],"isTransitiveDependency":true,"dependencyTree":"org.springframework.boot:spring-boot-starter-web:2.0.5.RELEASE;org.springframework.boot:spring-boot-starter-tomcat:2.0.5.RELEASE;org.apache.tomcat.embed:tomcat-embed-core:8.5.34","isMinimumFixVersionAvailable":true,"minimumFixVersion":"org.apache.tomcat:tomcat-coyote:8.5.57,9.0.37,10.0.0-M7;org.apache.tomcat.embed:tomcat-embed-core:8.5.57,9.0.37,10.0.0-M7"}],"baseBranches":["master"],"vulnerabilityIdentifier":"CVE-2020-13934","vulnerabilityDetails":"An h2c direct connection to Apache 
Tomcat 10.0.0-M1 to 10.0.0-M6, 9.0.0.M5 to 9.0.36 and 8.5.1 to 8.5.56 did not release the HTTP/1.1 processor after the upgrade to HTTP/2. If a sufficient number of such requests were made, an OutOfMemoryException could occur leading to a denial of service.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-13934","cvss3Severity":"high","cvss3Score":"7.5","cvss3Metrics":{"A":"High","AC":"Low","PR":"None","S":"Unchanged","C":"None","UI":"None","AV":"Network","I":"None"},"extraData":{}}</REMEDIATE> -->
|
non_test
|
cve high detected in multiple libraries cve high severity vulnerability vulnerable libraries tomcat embed core jar tomcat embed core jar tomcat embed core jar tomcat embed core jar core tomcat implementation library home page a href path to dependency file nibrs tools nibrs fbi service pom xml path to vulnerable library home wss scanner repository org apache tomcat embed tomcat embed core tomcat embed core jar nibrs tools nibrs fbi service target nibrs fbi service web inf lib tomcat embed core jar dependency hierarchy x tomcat embed core jar vulnerable library tomcat embed core jar core tomcat implementation library home page a href path to dependency file nibrs tools nibrs summary report common pom xml path to vulnerable library home wss scanner repository org apache tomcat embed tomcat embed core tomcat embed core jar dependency hierarchy spring boot starter web release jar root library spring boot starter tomcat release jar x tomcat embed core jar vulnerable library tomcat embed core jar core tomcat implementation library home page a href path to dependency file nibrs web nibrs web pom xml path to vulnerable library home wss scanner repository org apache tomcat embed tomcat embed core tomcat embed core jar nibrs tools nibrs route target nibrs route web inf lib tomcat embed core jar home wss scanner repository org apache tomcat embed tomcat embed core tomcat embed core jar home wss scanner repository org apache tomcat embed tomcat embed core tomcat embed core jar home wss scanner repository org apache tomcat embed tomcat embed core tomcat embed core jar home wss scanner repository org apache tomcat embed tomcat embed core tomcat embed core jar dependency hierarchy spring boot starter web release jar root library spring boot starter tomcat release jar x tomcat embed core jar vulnerable library found in head commit a href found in base branch master vulnerability details an direct connection to apache tomcat to to and to did not release the http processor after the 
upgrade to http if a sufficient number of such requests were made an outofmemoryexception could occur leading to a denial of service publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution org apache tomcat tomcat coyote org apache tomcat embed tomcat embed core isopenpronvulnerability true ispackagebased true isdefaultbranch true packages istransitivedependency false dependencytree org apache tomcat embed tomcat embed core isminimumfixversionavailable true minimumfixversion org apache tomcat tomcat coyote org apache tomcat embed tomcat embed core packagetype java groupid org apache tomcat embed packagename tomcat embed core packageversion packagefilepaths istransitivedependency true dependencytree org springframework boot spring boot starter web release org springframework boot spring boot starter tomcat release org apache tomcat embed tomcat embed core isminimumfixversionavailable true minimumfixversion org apache tomcat tomcat coyote org apache tomcat embed tomcat embed core packagetype java groupid org apache tomcat embed packagename tomcat embed core packageversion packagefilepaths istransitivedependency true dependencytree org springframework boot spring boot starter web release org springframework boot spring boot starter tomcat release org apache tomcat embed tomcat embed core isminimumfixversionavailable true minimumfixversion org apache tomcat tomcat coyote org apache tomcat embed tomcat embed core basebranches vulnerabilityidentifier cve vulnerabilitydetails an direct connection to apache tomcat to to and to did not release the http processor after the upgrade to http if a sufficient number of such requests were 
made an outofmemoryexception could occur leading to a denial of service vulnerabilityurl
| 0
|
25,424
| 4,317,500,611
|
IssuesEvent
|
2016-07-23 10:38:55
|
networkx/networkx
|
https://api.github.com/repos/networkx/networkx
|
closed
|
The minimum_edge_cut() function returns empty set([]) in some case
|
Defect Documentation
|
I've noticed that sometimes the function minimum_edge_cut(digraphA) returns an empty set([]) in my examples. For example:
import networkx as nx
a = nx.DiGraph()
a.add_edges_from([(1,2),(1,3),(1,4),(3,2),(4,2)])
print nx.minimum_edge_cut(a)
###The following would be printed
set([])
So I would be glad to know whether this is correct. For a weakly connected DiGraph with more than 2 nodes, an edge cut should always exist, so minimum_edge_cut should never return an empty set().
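One plausible reading of this behaviour (a sketch of the semantics, not the networkx implementation): for directed graphs, `minimum_edge_cut` works in terms of strong connectivity, and the example digraph is already not strongly connected, so its edge connectivity is 0 and the empty set is a valid cut. A pure-Python check on the same example:

```python
from collections import defaultdict

edges = [(1, 2), (1, 3), (1, 4), (3, 2), (4, 2)]
nodes = {u for e in edges for u in e}

def reachable(start, adj):
    # Iterative DFS over a directed adjacency map.
    seen, stack = set(), [start]
    while stack:
        u = stack.pop()
        if u in seen:
            continue
        seen.add(u)
        stack.extend(adj[u])
    return seen

adj, radj = defaultdict(list), defaultdict(list)
for u, v in edges:
    adj[u].append(v)   # forward edges
    radj[v].append(u)  # reversed edges

start = next(iter(nodes))
# Strongly connected iff every node is reachable from `start`
# both forwards and backwards.
strongly_connected = (reachable(start, adj) == nodes
                      and reachable(start, radj) == nodes)
print(strongly_connected)  # → False: edge connectivity 0, so an empty cut
```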
|
1.0
|
The minimum_edge_cut() function returns empty set([]) in some case - I've noticed that sometimes the function minimum_edge_cut( digraphA) return a empty set([]) in my examples.For example:
import networkx as nx
a = nx.DiGraph()
a.add_edges_from([(1,2),(1,3),(1,4),(3,2),(4,2)])
print nx.minimum_edge_cut(a)
###The following would be printed
set([])
So would be glad to know if you guys know is this correct ? For a weakly connected DiGraph more than 2 nodes, an edge cut should always exist. So may be the minimum_edge_cut should never return an empty set().
|
non_test
|
the minimum edge cut function returns empty set in some case i ve noticed that sometimes the function minimum edge cut digrapha return a empty set in my examples for example import networkx as nx a nx digraph a add edges from print nx minimum edge cut a the following would be printed set so would be glad to know if you guys know is this correct for a weakly connected digraph more than nodes an edge cut should always exist so may be the minimum edge cut should never return an empty set
| 0
|
194,314
| 14,675,515,036
|
IssuesEvent
|
2020-12-30 17:47:35
|
apache/buildstream
|
https://api.github.com/repos/apache/buildstream
|
closed
|
Integration tests dont work by default, and write outside of repo
|
bug tests
|
[See original issue on GitLab](https://gitlab.com/BuildStream/buildstream/-/issues/267)
In GitLab by [[Gitlab user @tristanvb]](https://gitlab.com/tristanvb) on Feb 23, 2018, 12:43
When running the following command, integration tests dont work:
```
./setup.py test --addopts '--integration tests/integration/shell.py'
```
This results in tests trying to write to the tmpfs in `/tmp` on my host, where ostree fails to set extended attributes.
Which leads to the second part of this, that the integration tests leave behind debris on the host by default; that should not happen.
I later discovered the env var `INTEGRATION_CACHE`, which fixes things when setting it to the `tmp/` subdir of BuildStream repo.
Instead of defaulting to a place which likely wont work and leaving behind debris, it's better to default to somewhere local to the repository. If it's desirable to set `INTEGRATION_CACHE` to somewhere else, for performance reasons; this should be documented in the `HACKING.rst` file.
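A sketch of the repo-local default suggested above (the directory layout and helper name are illustrative, not the actual BuildStream code):

```python
import os
import tempfile

def integration_cache_dir(repo_root):
    """Return INTEGRATION_CACHE if set, else a tmp/ dir inside the repo.

    Defaulting to a repository-local directory avoids writing to the
    host's /tmp (often a tmpfs where ostree cannot set extended
    attributes) and keeps test debris out of the host filesystem.
    """
    cache = os.environ.get("INTEGRATION_CACHE")
    if cache is None:
        cache = os.path.join(repo_root, "tmp", "integration-cache")
    os.makedirs(cache, exist_ok=True)
    return cache

repo = tempfile.mkdtemp()           # stand-in for the BuildStream checkout
print(integration_cache_dir(repo))  # .../tmp/integration-cache inside the repo
```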
|
1.0
|
Integration tests dont work by default, and write outside of repo - [See original issue on GitLab](https://gitlab.com/BuildStream/buildstream/-/issues/267)
In GitLab by [[Gitlab user @tristanvb]](https://gitlab.com/tristanvb) on Feb 23, 2018, 12:43
When running the following command, integration tests dont work:
```
./setup.py test --addopts '--integration tests/integration/shell.py'
```
This results in tests trying to write to the tmpfs in `/tmp` on my host, where ostree fails to set extended attributes.
Which leads to the second part of this, that the integration tests leave behind debris on the host by default; that should not happen.
I later discovered the env var `INTEGRATION_CACHE`, which fixes things when setting it to the `tmp/` subdir of BuildStream repo.
Instead of defaulting to a place which likely wont work and leaving behind debris, it's better to default to somewhere local to the repository. If it's desirable to set `INTEGRATION_CACHE` to somewhere else, for performance reasons; this should be documented in the `HACKING.rst` file.
|
test
|
integration tests dont work by default and write outside of repo in gitlab by on feb when running the following command integration tests dont work setup py test addopts integration tests integration shell py this results in tests trying to write to the tmpfs in tmp on my host where ostree fails to set extended attributes which leads to the second part of this that the integration tests leave behind debris on the host by default that should not happen i later discovered the env var integration cache which fixes things when setting it to the tmp subdir of buildstream repo instead of defaulting to a place which likely wont work and leaving behind debris it s better to default to somewhere local to the repository if it s desirable to set integration cache to somewhere else for performance reasons this should be documented in the hacking rst file
| 1
|
80,667
| 23,275,727,710
|
IssuesEvent
|
2022-08-05 06:57:55
|
dotnet/runtime
|
https://api.github.com/repos/dotnet/runtime
|
closed
|
[wasm] Wasm.Build.Tests build failing with `Workload ID wasm-tools is not recognized`
|
arch-wasm area-Build-mono in-pr
|
Wasm.Build.Tests CI builds are [failing](https://dev.azure.com/dnceng/public/_build/results?buildId=1920824&view=logs&jobId=6f051c1a-c3e7-5cde-411e-64125624a208&j=6f051c1a-c3e7-5cde-411e-64125624a208&t=b731ef43-7f67-5a47-0aca-433fa8d3212b) with:
```
/__w/1/s/eng/testing/workloads-testing.targets(218,5): error : Workload ID wasm-tools is not recognized. [/__w/1/s/src/tests/BuildWasmApps/Wasm.Build.Tests/Wasm.Build.Tests.csproj]
##[error]eng/testing/workloads-testing.targets(218,5): error : (NETCORE_ENGINEERING_TELEMETRY=Build) Workload ID wasm-tools is not recognized.
/__w/1/s/eng/testing/workloads-testing.targets(218,5): error : Workload ID wasm-tools is not recognized. [/__w/1/s/src/tests/BuildWasmApps/Wasm.Build.Tests/Wasm.Build.Tests.csproj]
##[error]eng/testing/workloads-testing.targets(218,5): error : (NETCORE_ENGINEERING_TELEMETRY=Build) Workload ID wasm-tools is not recognized.
Exit code: 1
/__w/1/s/eng/testing/workloads-testing.targets(218,5): error : workload install failed with exit code 1: Welcome to .NET 7.0! [/__w/1/s/src/tests/BuildWasmApps/Wasm.Build.Tests/Wasm.Build.Tests.csproj]
/__w/1/s/eng/testing/workloads-testing.targets(218,5): error : --------------------- [/__w/1/s/src/tests/BuildWasmApps/Wasm.Build.Tests/Wasm.Build.Tests.csproj]
/__w/1/s/eng/testing/workloads-testing.targets(218,5): error : SDK Version: 7.0.100-rc.1.22403.4 [/__w/1/s/src/tests/BuildWasmApps/Wasm.Build.Tests/Wasm.Build.Tests.csproj]
/__w/1/s/eng/testing/workloads-testing.targets(218,5): error : ---------------- [/__w/1/s/src/tests/BuildWasmApps/Wasm.Build.Tests/Wasm.Build.Tests.csproj]
/__w/1/s/eng/testing/workloads-testing.targets(218,5): error : Installed an ASP.NET Core HTTPS development certificate. [/__w/1/s/src/tests/BuildWasmApps/Wasm.Build.Tests/Wasm.Build.Tests.csproj]
/__w/1/s/eng/testing/workloads-testing.targets(218,5): error : To trust the certificate run 'dotnet dev-certs https --trust' (Windows and macOS only). [/__w/1/s/src/tests/BuildWasmApps/Wasm.Build.Tests/Wasm.Build.Tests.csproj]
/__w/1/s/eng/testing/workloads-testing.targets(218,5): error : Learn about HTTPS: https://aka.ms/dotnet-https [/__w/1/s/src/tests/BuildWasmApps/Wasm.Build.Tests/Wasm.Build.Tests.csproj]
/__w/1/s/eng/testing/workloads-testing.targets(218,5): error : ---------------- [/__w/1/s/src/tests/BuildWasmApps/Wasm.Build.Tests/Wasm.Build.Tests.csproj]
/__w/1/s/eng/testing/workloads-testing.targets(218,5): error : Write your first app: https://aka.ms/dotnet-hello-world [/__w/1/s/src/tests/BuildWasmApps/Wasm.Build.Tests/Wasm.Build.Tests.csproj]
/__w/1/s/eng/testing/workloads-testing.targets(218,5): error : Find out what's new: https://aka.ms/dotnet-whats-new [/__w/1/s/src/tests/BuildWasmApps/Wasm.Build.Tests/Wasm.Build.Tests.csproj]
/__w/1/s/eng/testing/workloads-testing.targets(218,5): error : Explore documentation: https://aka.ms/dotnet-docs [/__w/1/s/src/tests/BuildWasmApps/Wasm.Build.Tests/Wasm.Build.Tests.csproj]
/__w/1/s/eng/testing/workloads-testing.targets(218,5): error : Report issues and find source on GitHub: https://github.com/dotnet/core [/__w/1/s/src/tests/BuildWasmApps/Wasm.Build.Tests/Wasm.Build.Tests.csproj]
/__w/1/s/eng/testing/workloads-testing.targets(218,5): error : Use 'dotnet --help' to see available commands or visit: https://aka.ms/dotnet-cli [/__w/1/s/src/tests/BuildWasmApps/Wasm.Build.Tests/Wasm.Build.Tests.csproj]
/__w/1/s/eng/testing/workloads-testing.targets(218,5): error : -------------------------------------------------------------------------------------- [/__w/1/s/src/tests/BuildWasmApps/Wasm.Build.Tests/Wasm.Build.Tests.csproj]
/__w/1/s/eng/testing/workloads-testing.targets(218,5): error : Workload ID wasm-tools is not recognized. [/__w/1/s/src/tests/BuildWasmApps/Wasm.Build.Tests/Wasm.Build.Tests.csproj]
##[error]eng/testing/workloads-testing.targets(218,5): error : (NETCORE_ENGINEERING_TELEMETRY=Build) workload install failed with exit code 1: Welcome to .NET 7.0!
```
This is with the latest SDK - `7.0.100-rc.1.22403.4`.
Sdk side issue: https://github.com/dotnet/sdk/issues/26967
|
1.0
|
[wasm] Wasm.Build.Tests build failing with `Workload ID wasm-tools is not recognized` - Wasm.Build.Tests CI builds are [failing](https://dev.azure.com/dnceng/public/_build/results?buildId=1920824&view=logs&jobId=6f051c1a-c3e7-5cde-411e-64125624a208&j=6f051c1a-c3e7-5cde-411e-64125624a208&t=b731ef43-7f67-5a47-0aca-433fa8d3212b) with:
```
/__w/1/s/eng/testing/workloads-testing.targets(218,5): error : Workload ID wasm-tools is not recognized. [/__w/1/s/src/tests/BuildWasmApps/Wasm.Build.Tests/Wasm.Build.Tests.csproj]
##[error]eng/testing/workloads-testing.targets(218,5): error : (NETCORE_ENGINEERING_TELEMETRY=Build) Workload ID wasm-tools is not recognized.
/__w/1/s/eng/testing/workloads-testing.targets(218,5): error : Workload ID wasm-tools is not recognized. [/__w/1/s/src/tests/BuildWasmApps/Wasm.Build.Tests/Wasm.Build.Tests.csproj]
##[error]eng/testing/workloads-testing.targets(218,5): error : (NETCORE_ENGINEERING_TELEMETRY=Build) Workload ID wasm-tools is not recognized.
Exit code: 1
/__w/1/s/eng/testing/workloads-testing.targets(218,5): error : workload install failed with exit code 1: Welcome to .NET 7.0! [/__w/1/s/src/tests/BuildWasmApps/Wasm.Build.Tests/Wasm.Build.Tests.csproj]
/__w/1/s/eng/testing/workloads-testing.targets(218,5): error : --------------------- [/__w/1/s/src/tests/BuildWasmApps/Wasm.Build.Tests/Wasm.Build.Tests.csproj]
/__w/1/s/eng/testing/workloads-testing.targets(218,5): error : SDK Version: 7.0.100-rc.1.22403.4 [/__w/1/s/src/tests/BuildWasmApps/Wasm.Build.Tests/Wasm.Build.Tests.csproj]
/__w/1/s/eng/testing/workloads-testing.targets(218,5): error : ---------------- [/__w/1/s/src/tests/BuildWasmApps/Wasm.Build.Tests/Wasm.Build.Tests.csproj]
/__w/1/s/eng/testing/workloads-testing.targets(218,5): error : Installed an ASP.NET Core HTTPS development certificate. [/__w/1/s/src/tests/BuildWasmApps/Wasm.Build.Tests/Wasm.Build.Tests.csproj]
/__w/1/s/eng/testing/workloads-testing.targets(218,5): error : To trust the certificate run 'dotnet dev-certs https --trust' (Windows and macOS only). [/__w/1/s/src/tests/BuildWasmApps/Wasm.Build.Tests/Wasm.Build.Tests.csproj]
/__w/1/s/eng/testing/workloads-testing.targets(218,5): error : Learn about HTTPS: https://aka.ms/dotnet-https [/__w/1/s/src/tests/BuildWasmApps/Wasm.Build.Tests/Wasm.Build.Tests.csproj]
/__w/1/s/eng/testing/workloads-testing.targets(218,5): error : ---------------- [/__w/1/s/src/tests/BuildWasmApps/Wasm.Build.Tests/Wasm.Build.Tests.csproj]
/__w/1/s/eng/testing/workloads-testing.targets(218,5): error : Write your first app: https://aka.ms/dotnet-hello-world [/__w/1/s/src/tests/BuildWasmApps/Wasm.Build.Tests/Wasm.Build.Tests.csproj]
/__w/1/s/eng/testing/workloads-testing.targets(218,5): error : Find out what's new: https://aka.ms/dotnet-whats-new [/__w/1/s/src/tests/BuildWasmApps/Wasm.Build.Tests/Wasm.Build.Tests.csproj]
/__w/1/s/eng/testing/workloads-testing.targets(218,5): error : Explore documentation: https://aka.ms/dotnet-docs [/__w/1/s/src/tests/BuildWasmApps/Wasm.Build.Tests/Wasm.Build.Tests.csproj]
/__w/1/s/eng/testing/workloads-testing.targets(218,5): error : Report issues and find source on GitHub: https://github.com/dotnet/core [/__w/1/s/src/tests/BuildWasmApps/Wasm.Build.Tests/Wasm.Build.Tests.csproj]
/__w/1/s/eng/testing/workloads-testing.targets(218,5): error : Use 'dotnet --help' to see available commands or visit: https://aka.ms/dotnet-cli [/__w/1/s/src/tests/BuildWasmApps/Wasm.Build.Tests/Wasm.Build.Tests.csproj]
/__w/1/s/eng/testing/workloads-testing.targets(218,5): error : -------------------------------------------------------------------------------------- [/__w/1/s/src/tests/BuildWasmApps/Wasm.Build.Tests/Wasm.Build.Tests.csproj]
/__w/1/s/eng/testing/workloads-testing.targets(218,5): error : Workload ID wasm-tools is not recognized. [/__w/1/s/src/tests/BuildWasmApps/Wasm.Build.Tests/Wasm.Build.Tests.csproj]
##[error]eng/testing/workloads-testing.targets(218,5): error : (NETCORE_ENGINEERING_TELEMETRY=Build) workload install failed with exit code 1: Welcome to .NET 7.0!
```
This is with the latest sdk - `7.0.100-rc.1.22403.4 `.
Sdk side issue: https://github.com/dotnet/sdk/issues/26967
|
non_test
|
wasm build tests build failing with workload id wasm tools is not recognized wasm build tests ci builds are with w s eng testing workloads testing targets error workload id wasm tools is not recognized eng testing workloads testing targets error netcore engineering telemetry build workload id wasm tools is not recognized w s eng testing workloads testing targets error workload id wasm tools is not recognized eng testing workloads testing targets error netcore engineering telemetry build workload id wasm tools is not recognized exit code w s eng testing workloads testing targets error workload install failed with exit code welcome to net w s eng testing workloads testing targets error w s eng testing workloads testing targets error sdk version rc w s eng testing workloads testing targets error w s eng testing workloads testing targets error installed an asp net core https development certificate w s eng testing workloads testing targets error to trust the certificate run dotnet dev certs https trust windows and macos only w s eng testing workloads testing targets error learn about https w s eng testing workloads testing targets error w s eng testing workloads testing targets error write your first app w s eng testing workloads testing targets error find out what s new w s eng testing workloads testing targets error explore documentation w s eng testing workloads testing targets error report issues and find source on github w s eng testing workloads testing targets error use dotnet help to see available commands or visit w s eng testing workloads testing targets error w s eng testing workloads testing targets error workload id wasm tools is not recognized eng testing workloads testing targets error netcore engineering telemetry build workload install failed with exit code welcome to net this is with the latest sdk rc sdk side issue
| 0
|
63,524
| 6,849,099,503
|
IssuesEvent
|
2017-11-13 20:51:43
|
SCIInstitute/SCIRun
|
https://api.github.com/repos/SCIInstitute/SCIRun
|
closed
|
Forward problem network validation
|
IBBM needs project Testing User Testing
|
The Forward/Inverse toolkit network forward_problem needs validation.
|
2.0
|
Forward problem network validation - The Forward/Inverse toolkit network forward_problem needs validation.
|
test
|
forward problem network validation the forward inverse toolkit network forward problem needs validation
| 1
|
249,331
| 21,158,656,335
|
IssuesEvent
|
2022-04-07 07:18:51
|
zephyrproject-rtos/test_results
|
https://api.github.com/repos/zephyrproject-rtos/test_results
|
opened
|
TCP Setup Verify SYN-RCVD state remembers last state; passive open. error
|
area: Tests
|
**Describe the bug**
The "Verify SYN-RCVD state remembers last state; passive open." test fails on Zephyr 3.0.0 on qemu_x86
**References**
RFC 1122: section 4.2.2.11 {SYN-RCVD remembers last state}
**Results**
FAIL: tcp.v4 did not send expected SYN+ACK response.; FAIL: tcp.v4 did not send expected SYN+ACK response.
**Environment (please complete the following information):**
- OS: (e.g. Linux )
- Toolchain (e.g Zephyr SDK)
- Commit SHA or Version used: Zephyr3.0.0
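The transitions this test exercises can be sketched as a tiny state table (simplified from RFC 793/1122; not the Zephyr stack):

```python
# Passive-open transitions relevant to the failing test.
TRANSITIONS = {
    ("LISTEN",   "rcv_syn"): ("SYN-RCVD", "snd_syn_ack"),
    ("SYN-RCVD", "rcv_ack"): ("ESTABLISHED", None),
    # RFC 1122 4.2.2.11: on RST in SYN-RCVD, a passively opened
    # connection returns to LISTEN (the "remembers last state" rule).
    ("SYN-RCVD", "rcv_rst"): ("LISTEN", None),
}

def step(state, event):
    return TRANSITIONS[(state, event)]

state, reply = step("LISTEN", "rcv_syn")
print(state, reply)  # SYN-RCVD snd_syn_ack  (the SYN+ACK the test expects)
state, _ = step(state, "rcv_rst")
print(state)         # back to LISTEN: the last state (passive open) is kept
```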
|
1.0
|
TCP Setup Verify SYN-RCVD state remembers last state; passive open. error
-
**Describe the bug**
Verify SYN-RCVD state remembers last state; passive open. test is Fail on Zephyr3.0.0 on qemu_x86
**References**
RFC 1122: section 4.2.2.11 {SYN-RCVD remembers last state}
**Results**
FAIL: tcp.v4 did not send expected SYN+ACK response.; FAIL: tcp.v4 did not send expected SYN+ACK response.
**Environment (please complete the following information):**
- OS: (e.g. Linux )
- Toolchain (e.g Zephyr SDK)
- Commit SHA or Version used: Zephyr3.0.0
|
test
|
tcp setup verify syn rcvd state remembers last state passive open error describe the bug verify syn rcvd state remembers last state passive open test is fail on on qemu references rfc section syn rcvd remembers last state results fail tcp did not send expected syn ack response fail tcp did not send expected syn ack response environment please complete the following information os e g linux toolchain e g zephyr sdk commit sha or version used
| 1
|
737,664
| 25,525,634,131
|
IssuesEvent
|
2022-11-29 01:51:01
|
yugabyte/yugabyte-db
|
https://api.github.com/repos/yugabyte/yugabyte-db
|
opened
|
[DocDB] Benchmark copying table info map vs holding the catalog manager mutex for tablet splitting
|
kind/enhancement area/docdb priority/medium
|
### Description
We should benchmark how long it takes to copy the table info map for automatic tablet splitting, vs just holding the catalog manager mutex for the whole thing (the latter would be a bigger change, as it would require exposing the catalog manager mutex to the tablet split manager to allow access to certain functions).
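A toy version of the trade-off described above (map size and field names are hypothetical, not YugabyteDB code): time copying a table-info map and scanning the copy, against doing the whole scan under the lock.

```python
import threading
import time

# Hypothetical stand-in for the catalog manager's table info map.
table_info = {f"table_{i}": {"tablets": i % 16} for i in range(100_000)}
lock = threading.Lock()

def time_it(fn, repeats=5):
    best = float("inf")
    for _ in range(repeats):
        t0 = time.perf_counter()
        fn()
        best = min(best, time.perf_counter() - t0)
    return best

def copy_then_scan():
    # Option A: copy under the lock, release early, scan the snapshot.
    with lock:
        snapshot = dict(table_info)
    return sum(v["tablets"] for v in snapshot.values())

def scan_under_lock():
    # Option B: hold the lock for the whole scan (bigger upstream change).
    with lock:
        return sum(v["tablets"] for v in table_info.values())

print(f"copy+scan : {time_it(copy_then_scan):.4f}s")
print(f"under lock: {time_it(scan_under_lock):.4f}s")
```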
|
1.0
|
[DocDB] Benchmark copying table info map vs holding the catalog manager mutex for tablet splitting - ### Description
We should benchmark how long it takes to copy the table info map for automatic tablet splitting, vs just holding the catalog manager mutex for the whole thing (the latter would be a bigger change, as it would require exposing the catalog manager mutex to the tablet split manager) to allow access to certain functions.
|
non_test
|
benchmark copying table info map vs holding the catalog manager mutex for tablet splitting description we should benchmark how long it takes to copy the table info map for automatic tablet splitting vs just holding the catalog manager mutex for the whole thing the latter would be a bigger change as it would require exposing the catalog manager mutex to the tablet split manager to allow access to certain functions
| 0
|