A flattened preview of a GitHub `IssuesEvent` dataset used for binary test-issue classification: `label` is `test` or `non_test`, and `binary_label` holds the matching 1 or 0. Column summary:

| column | dtype | summary |
|---|---|---|
| Unnamed: 0 | int64 | values 0 to 832k |
| id | float64 | values 2.49B to 32.1B |
| type | string | 1 distinct value |
| created_at | string | length 19 |
| repo | string | length 4 to 112 |
| repo_url | string | length 33 to 141 |
| action | string | 3 distinct values |
| title | string | length 1 to 1.02k |
| labels | string | length 4 to 1.54k |
| body | string | length 1 to 262k |
| index | string | 17 distinct values |
| text_combine | string | length 95 to 262k |
| label | string | 2 distinct values |
| text | string | length 96 to 252k |
| binary_label | int64 | values 0 to 1 |
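The `text_combine` and `binary_label` columns appear to be derived from the other fields. A minimal sketch of that apparent derivation, using toy rows rather than real records (the concatenation and labeling rules are inferred from the preview, not documented):

```python
# Toy rows mimicking the schema above; values are illustrative, not real records.
rows = [
    {"type": "IssuesEvent", "title": "Links not migrated",
     "body": "the route has no connection to the article", "label": "non_test"},
    {"type": "IssuesEvent", "title": "Separate build tests from linting tests",
     "body": "speed up the tests", "label": "test"},
]

for row in rows:
    # text_combine looks like title + " - " + body in the preview's rows.
    row["text_combine"] = f'{row["title"]} - {row["body"]}'
    # binary_label appears to be 1 when label == "test", else 0.
    row["binary_label"] = int(row["label"] == "test")

print([r["binary_label"] for r in rows])  # → [0, 1]
```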
row 38,972 · id 5,206,467,320 · IssuesEvent · 2017-01-24 20:42:51
repo: c2corg/v6_ui (https://api.github.com/repos/c2corg/v6_ui)
action: closed
title: On route, links to material article are not migrated
labels: Association fixed and ready for testing
body:
**on v5**, route http://www.camptocamp.org/routes/784566/fr/la-pyramide-traversee-toit-pyramide-petite-traversee
The route has a connection, in material section, to the article "material" http://www.camptocamp.org/articles/185384/fr/le-contenu-du-sac-alpinisme-rocheux-de-f-a-ad
**on v6** the article exists
https://www.demov6.camptocamp.org/articles/185384/fr/le-contenu-du-sac-alpinisme-rocheux-de-f-a-ad
but the route has no connection to the article
https://www.demov6.camptocamp.org/routes/784566/fr/la-pyramide-traversee-toit-pyramide-petite-traversee, only "sangles" was migrated
index: 1.0 · label: test · binary_label: 1

---
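The dataset's `text` column appears to hold a lowercased copy of `text_combine` with URLs, digits, and punctuation stripped. A rough sketch of such a normalizer (the exact rules are an assumption inferred from the rows, which keep some non-ASCII characters this version would drop):

```python
import re

def normalize(text: str) -> str:
    """Lowercase, drop URLs, keep only ASCII letters, collapse whitespace."""
    text = re.sub(r"https?://\S+", " ", text.lower())  # remove URLs first
    text = re.sub(r"[^a-z\s]", " ", text)              # strip digits/punctuation
    return " ".join(text.split())                      # collapse whitespace

print(normalize("Code update : Reboot BMC during PNOR activation!"))
# → code update reboot bmc during pnor activation
```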
row 59,761 · id 6,662,864,283 · IssuesEvent · 2017-10-02 14:32:53
repo: openbmc/openbmc-test-automation (https://api.github.com/repos/openbmc/openbmc-test-automation)
action: closed
title: Code update : Reboot BMC during PNOR activation in progress
labels: Test
body:
- [x] Upload PNOR
- [x] Start activation and reset BMC
- [x] Verify the PNOR failed
index: 1.0 · label: test · binary_label: 1

---
row 16,487 · id 3,535,009,529 · IssuesEvent · 2016-01-16 05:35:36
repo: WormBase/website (https://api.github.com/repos/WormBase/website)
action: closed
title: Great site, thank you! But: pop-up window with seq...
labels: bug HelpDesk source: offline chat UI-data display Under testing
body:
*Help Desk query collected when no chat operators were online. Follow up required.*
Great site, thank you! But:
pop-up window with sequences cuts area on the right side, and there is no way to scroll, so that 3-4 letters on the right are not visible.
Ubuntu 14.04 LTS 64-bit
Chrome Version 47.0.2526.106 (64-bit)
**Reported by:** Leon******** (leon******************)
**Submitted from:** <a target="_blank" href="http://www.wormbase.org/http://www.wormbase.org/species/c_elegans/gene/WBGene00009706#0b1-9ed2f68c457gha3-10">http://www.wormbase.org/species/c_elegans/gene/WBGene00009706#0b1-9ed2f68c457gha3-10</a>
**Browser:** Chrome 47.0.2526.106
index: 1.0 · label: test · binary_label: 1

---
row 64,323 · id 26,688,867,370 · IssuesEvent · 2023-01-27 01:30:49
repo: cityofaustin/atd-data-tech (https://api.github.com/repos/cityofaustin/atd-data-tech)
action: closed
title: Release VZ v1.30.0 (Iguana Cir)
labels: Service: Dev Workgroup: VZ Product: Vision Zero Crash Data System Product: Vision Zero Viewer
body:
**To-do's for upcoming release**
- [x] Schedule release party - @patrickm02L
- [ ] Advise users of downtime - @patrickm02L
- [x] Create a [release PR](https://github.com/cityofaustin/atd-vz-data/pull/1165) - @mddilley
- [x] Propose + vote on release names - @patrickm02L
- [x] Refine [release notes](https://github.com/cityofaustin/atd-vz-data/releases/tag/v1.30.0) @patrickm02L
- [x] Bump VZ **staging** version to `v1.31.0` and VZ production to `v1.30.0`- @mddilley
- [ ] Send out release notes - @patrickm02L
index: 1.0 · label: non_test · binary_label: 0

---
row 65,855 · id 6,976,578,121 · IssuesEvent · 2017-12-12 11:35:25
repo: LiskHQ/lisk (https://api.github.com/repos/LiskHQ/lisk)
action: opened
title: Fix test/unit/logic
labels: *medium test
body:
Parent: #972
Adjust `test/unit/logic` in order to work with new database schema.
- [ ] account.js
- [ ] block.js
- [ ] blockReward.js
- [ ] dapp.js
- [ ] delegate.js
- [ ] inTransfer.js
- [ ] multisignature.js
- [ ] outTransfer.js
- [ ] peer.js
- [ ] peers.js
- [ ] signature.js
- [ ] transaction.js
- [ ] transactionPool.js
- [ ] transactions/pool.js
- [ ] transfer.js
- [ ] vote.js
index: 1.0 · label: test · binary_label: 1

---
row 244,376 · id 26,392,712,068 · IssuesEvent · 2023-01-12 16:49:49
repo: some-natalie/kubernoodles (https://api.github.com/repos/some-natalie/kubernoodles)
action: closed
title: [SECURITY] - Cert Manager Mandatory for Openshift 4.X
labels: security
body:
## Describe the problem
A clear and concise description of the problem.
We are trying to setup ARC on Openshift 4.X. Can we install the ARC without using the cert-manager for openshift 4.X?
index: True · label: non_test · binary_label: 0

---
row 67,614 · id 14,881,915,698 · IssuesEvent · 2021-01-20 11:08:40
repo: jimbob88/wheelers-wort-works (https://api.github.com/repos/jimbob88/wheelers-wort-works)
action: opened
title: CVE-2020-11022 (Medium) detected in jquery-1.12.4.min.js
labels: security vulnerability
body:
## CVE-2020-11022 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>jquery-1.12.4.min.js</b></p></summary>
<p>JavaScript library for DOM operations</p>
<p>Library home page: <a href="https://cdnjs.cloudflare.com/ajax/libs/jquery/1.12.4/jquery.min.js">https://cdnjs.cloudflare.com/ajax/libs/jquery/1.12.4/jquery.min.js</a></p>
<p>Path to dependency file: wheelers-wort-works/docs/_layouts/default.html</p>
<p>Path to vulnerable library: wheelers-wort-works/docs/_layouts/default.html</p>
<p>
Dependency Hierarchy:
- :x: **jquery-1.12.4.min.js** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/jimbob88/wheelers-wort-works/commit/25796e4e26d9fe24b168a4b755e433c8f35c9b2a">25796e4e26d9fe24b168a4b755e433c8f35c9b2a</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
In jQuery versions greater than or equal to 1.2 and before 3.5.0, passing HTML from untrusted sources - even after sanitizing it - to one of jQuery's DOM manipulation methods (i.e. .html(), .append(), and others) may execute untrusted code. This problem is patched in jQuery 3.5.0.
<p>Publish Date: 2020-04-29
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-11022>CVE-2020-11022</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>6.1</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: Required
- Scope: Changed
- Impact Metrics:
- Confidentiality Impact: Low
- Integrity Impact: Low
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://blog.jquery.com/2020/04/10/jquery-3-5-0-released/">https://blog.jquery.com/2020/04/10/jquery-3-5-0-released/</a></p>
<p>Release Date: 2020-04-29</p>
<p>Fix Resolution: jQuery - 3.5.0</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
index: True · label: non_test · binary_label: 0

---
row 75,468 · id 7,472,902,962 · IssuesEvent · 2018-04-03 14:00:11
repo: easy-software-ufal/annotations_repos (https://api.github.com/repos/easy-software-ufal/annotations_repos)
action: opened
title: concordion/concordion ExpectedToFail annotation causes ClassCastException in RunResultsCache.addResults()
labels: ADANCX bug faulty impl. test
body:
Issue: `https://github.com/concordion/concordion/issues/188`
PR: `https://github.com/concordion/concordion/pull/190`
Fix: `https://github.com/concordion/concordion/commit/b211a763d51eaa22b535a671359a0fb8a314e8ec`
index: 1.0 · label: test · binary_label: 1

---
row 612,097 · id 18,990,770,689 · IssuesEvent · 2021-11-22 06:55:22
repo: grafana/grafana (https://api.github.com/repos/grafana/grafana)
action: closed
title: Unexpected cursor snapping to the previous text box (Firefox only)
labels: type/bug priority/important-soon datasource/Prometheus area/frontend area/panel/data
body:
<!--
Please use this template to create your bug report. By providing as much info as possible you help us understand the issue, reproduce it and resolve it for you quicker. Therefor take a couple of extra minutes to make sure you have provided all info needed.
PROTIP: record your screen and attach it as a gif to showcase the issue.
* Questions should be posted to: https://community.grafana.com
* Use query inspector to troubleshoot issues: https://bit.ly/2XNF6YS
* How to record and attach gif: https://bit.ly/2Mi8T6K
-->
**What happened**:
When clicking from one text box to another, the cursor unexpectedly jumps to the beginning of the first text box.

**What you expected to happen**:
Clicking the second text box would move the cursor to the second text box.
**How to reproduce it (as minimally and precisely as possible)**:
Make two queries (nonempty). Edit text in the first one. Click the second one. The cursor will jump to the beginning of the first one, instead of jumping to where you click.
**Anything else we need to know?**:
**Environment**:
- Grafana version: Grafana v7.1.1 (3039f9c3bd)
- Data source type & version: Prometheus, official docker, 2.19.3
- OS Grafana is installed on: Official docker
- User OS & Browser: Mac OSX 10.15.5, Firefox 79.0
- Grafana plugins: None
- Others:
index: 1.0 · label: non_test · binary_label: 0

---
row 216,478 · id 16,766,301,703 · IssuesEvent · 2021-06-14 09:13:42
repo: harens/checkdigit (https://api.github.com/repos/harens/checkdigit)
action: opened
title: Separate build tests from linting tests
labels: tests
body:
Similar to [seaport's tests](https://github.com/harens/seaport/tree/master/scripts).
This can hopefully not only speed up the tests, but also make it easier to see where they've failed.
index: 1.0 · label: test · binary_label: 1

---
row 19,340 · id 3,188,821,872 · IssuesEvent · 2015-09-29 00:10:29
repo: aBitNomadic/shimeji-ee (https://api.github.com/repos/aBitNomadic/shimeji-ee)
action: closed
title: shimejis wont appear but the taskbar icon does
labels: auto-migrated Priority-Medium Type-Defect
body:
```
What steps will reproduce the problem?
1. opening shimeji application
2. choosing any character
3. loading program
What is the expected output? What do you see instead?
i should see the shimeji's drop onto my screen, but instead nothing appears yet
the icon is on the taskbar
What version of the product are you using? On what operating system?
1.0.3, windows 8, most recent java,
Please provide any additional information below.
i did everything right to set it up, i even helped my friend put it on their
computers the same way (they have windows 7) and it worked for theirs but not
mine, and i've tried searching for help videos for shimeji on windows 8 but no
matter what way i try to open it, it still doesn't work. :(
```
Original issue reported on code.google.com by `angelica...@gmail.com` on 25 Mar 2015 at 2:34
index: 1.0 · label: non_test · binary_label: 0

---
row 173,992 · id 13,452,068,208 · IssuesEvent · 2020-09-08 21:24:08
repo: killian-mahe/shareyourproject (https://api.github.com/repos/killian-mahe/shareyourproject)
action: closed
title: Add technologies
labels: controller database orm p:low test
body:
Add technologies in the app (for a user and a project).
- [x] Controller
- [x] Factory
- [x] Relationships in model and migrations
- [x] Tests
index: 1.0 · label: test · binary_label: 1

---
row 46,947 · id 11,936,006,342 · IssuesEvent · 2020-04-02 09:34:06
repo: hashicorp/packer (https://api.github.com/repos/hashicorp/packer)
action: closed
title: Packer Qemu builder does not support newest network devices
labels: bug builder/qemu community-supported plugin
body:
#### Overview of the Issue
https://www.packer.io/docs/builders/qemu.html#net_device
net_device supports a list which is not the same as `/usr/libexec/qemu-kvm` in version 2.12.0 :
```
Network devices:
name "e1000", bus PCI, desc "Intel Gigabit Ethernet"
name "e1000-82540em", bus PCI, desc "Intel Gigabit Ethernet"
name "e1000e", bus PCI, desc "Intel 82574L GbE Controller"
name "rtl8139", bus PCI
name "virtio-net-device", bus virtio-bus
name "virtio-net-pci", bus PCI, alias "virtio-net"
```
its missing `virtio-net-device`.
#### Reproduction Steps
To get the list of available devices :
```
/usr/libexec/qemu-kvm --device help
```
### Packer version
Packer 1.4.3
index: 1.0 · label: non_test · binary_label: 0

---
row 347,903 · id 31,332,828,316 · IssuesEvent · 2023-08-24 02:01:34
repo: 3d-gussner/Prusa-Firmware (https://api.github.com/repos/3d-gussner/Prusa-Firmware)
action: closed
title: 📑Report: Retraction test results
labels: Test-report stale-issue
body:
# [Update 5 June 2020]: Report moved to [Prusa3d-Test-Object](https://github.com/prusa3d/Prusa3D-Test-Objects/issues/11) please don't comment anymore here.
Please report here your Retraction test results.
Use the 3mf file below and try to pint it with PETG or some other material that tends to be stringy.
[Container_50x20_NoCore_v2.zip](https://github.com/3d-gussner/Prusa-Firmware/files/4585431/Container_50x20_NoCore_v2.zip)
- Printer: MK2.5, MK2.5s, MK3, MK3s
- MMU: MMU1, MMU2, MMU2s
- Firmware version printer and MMU
- Bad to good scale : 0 - 10
Feel free to write additional information you want to share.
I will try to update following table so everyone can see what results we got from the community.
|Printer | MMU | PFW | MFW |K value| Quality | User |
| ------ | ------- | ---------- | ---- | ------- | --------- | -------- |
| MK3s | N/A | 3.8.1 | N/A |66| 2 | Average |
| MK3s | N/A | 3.9.0-RC3 | N/A |66| 0 | Average |
| MK3s | N/A | 3.9.0-RC3 | N/A |0.11| 9 | Average |
| | | | | | | | | | |
| MK3s | N/A | 3.8.1 | N/A |66| 2 | Vossberger |
| MK3s | N/A| 3.9.0-RC3 | N/A |66| 0 | Vossberger |
| MK3s | N/A | 3.9.0-RC3 | N/A |0.11| 9| Vossberger |
index: 1.0 · label: test · binary_label: 1

---
row 7,468 · id 2,905,093,147 · IssuesEvent · 2015-06-18 21:35:29
repo: sosol/sosol (https://api.github.com/repos/sosol/sosol)
action: opened
title: Fix SAML for JRuby 1.7.20+
labels: testing
body:
Tests pass under JRuby 1.7.19, but changing to 1.7.20 or 1.7.20.1 results in a slew of `Java::JavaLang::NullPointerException`s in `RubySamlTest`.
index: 1.0 · label: test · binary_label: 1

---
row 202,593 · id 15,835,788,230 · IssuesEvent · 2021-04-06 18:27:35
repo: Spidy-Coder/Forest_Fire_Prevention (https://api.github.com/repos/Spidy-Coder/Forest_Fire_Prevention)
action: opened
title: Front-end required for this project
labels: documentation enhancement
body:
This project is yet under _**construction.**_
I will be adding a front-end web page which will demonstrate this **_project using Flask_** for real-time application .
This is very important part of our machine learning model.
`⭐star` this repository for getting updates and further changes made in this project.
index: 1.0 · label: non_test · binary_label: 0

---
73,790
| 7,358,916,906
|
IssuesEvent
|
2018-03-10 00:30:31
|
medic/medic-webapp
|
https://api.github.com/repos/medic/medic-webapp
|
closed
|
Add support for death reporting workflow
|
Contacts Priority: 2 - Medium Status: 4 - Acceptance testing Type: Feature
|
This issue adds support for a death reporting and confirmation workflow involving CHWs and managers. We need to support reporting death as well as reversing a death.
The basic workflow for reporting a death is as follows:
- CHW reports a death in the community via SMS or an app form
- This death report triggers a task for the manager in the app OR sends her an SMS notification
- This task opens a second form (death confirmation) that the manager fills out and submits OR the manager submits a death confirmation form via SMS
- If the manager confirms the death, the person's profile updates to show that they are deceased
- NOTE: The manager can also fill in the death confirmation form directly (in app or via SMS) and the person's profile updates to show that they are deceased
The basic workflow for reversing a death is as follows:
- CHW requests a correction via SMS or an app form
- This request correction form triggers a task for the manager in the app OR sends her an SMS notification
- This task opens an undo death form that the manager submits OR the manager submits an undo death form via SMS
- If the manager confirms that the death should be reversed, the person's profile updates to show that they are alive
- NOTE: The manager can also directly undo the death (in app or via SMS), which should update the profile to show that the person is alive
[Design spec is here](https://docs.google.com/document/d/1EKmOdip2cebl_BbIJNbl1kTbG3rYxaxVYmI_mL5y9M8/edit#)
Summary of UI changes (these are applied once a death is confirmed). See the design spec for more info:
- [x] person's icon changes from pink to gray (screenshot 1)
- [x] the word "Deceased" is added either next to or immediately below the person's name on their profile (screenshot 1 & 2 are mobile view, on desktop the word "Deceased" should be immediately next to the person's name, see screenshots 6 & 7)
- [x] on the family page, a new row appears at the end of the list of family members that says "View deceased" (screenshot 3)
- [x] when you click on "View deceased", you are taken to a detail page that lists the deceased family members. These are the same as the content rows on the family page except that they show the age at the time of death and have a relative date of death on the right, e.g. "Died 3 years ago" (screenshot 4)
- [x] in search, deceased people should have gray icons and show up at the bottom of the list (screenshot 5)
**Screenshot 1**

**Screenshot 2**

**Screenshot 3**

**Screenshot 4**

**Screenshot 5**

**Screenshot 6**

**Screenshot 7**

|
1.0
|
Add support for death reporting workflow - This issue adds support for a death reporting and confirmation workflow involving CHWs and managers. We need to support reporting death as well as reversing a death.
The basic workflow for reporting a death is as follows:
- CHW reports a death in the community via SMS or an app form
- This death report triggers a task for the manager in the app OR sends her an SMS notification
- This task opens a second form (death confirmation) that the manager fills out and submits OR the manager submits a death confirmation form via SMS
- If the manager confirms the death, the person's profile updates to show that they are deceased
- NOTE: The manager can also fill in the death confirmation form directly (in app or via SMS) and the person's profile updates to show that they are deceased
The basic workflow for reversing a death is as follows:
- CHW requests a correction via SMS or an app form
- This request correction form triggers a task for the manager in the app OR sends her an SMS notification
- This task opens an undo death form that the manager submits OR the manager submits an undo death form via SMS
- If the manager confirms that the death should be reversed, the person's profile updates to show that they are alive
- NOTE: The manager can also directly undo the death (in app or via SMS), which should update the profile to show that the person is alive
[Design spec is here](https://docs.google.com/document/d/1EKmOdip2cebl_BbIJNbl1kTbG3rYxaxVYmI_mL5y9M8/edit#)
Summary of UI changes (these are applied once a death is confirmed). See the design spec for more info:
- [x] person's icon changes from pink to gray (screenshot 1)
- [x] the word "Deceased" is added either next to or immediately below the person's name on their profile (screenshot 1 & 2 are mobile view, on desktop the word "Deceased" should be immediately next to the person's name, see screenshots 6 & 7)
- [x] on the family page, a new row appears at the end of the list of family members that says "View deceased" (screenshot 3)
- [x] when you click on "View deceased", you are taken to a detail page that lists the deceased family members. These are the same as the content rows on the family page except that they show the age at the time of death and have a relative date of death on the right, e.g. "Died 3 years ago" (screenshot 4)
- [x] in search, deceased people should have gray icons and show up at the bottom of the list (screenshot 5)
**Screenshot 1**

**Screenshot 2**

**Screenshot 3**

**Screenshot 4**

**Screenshot 5**

**Screenshot 6**

**Screenshot 7**

|
test
|
add support for death reporting workflow this issue adds support for a death reporting and confirmation workflow involving chws and managers we need to support reporting death as well as reversing a death the basic workflow for reporting a death is as follows chw reports a death in the community via sms or an app form this death report triggers a task for the manager in the app or sends her an sms notification this task opens a second form death confirmation that the manager fills out and submits or the manager submits a death confirmation form via sms if the manager confirms the death the person s profile updates to show that they are deceased note the manager can also fill in the death confirmation form directly in app or via sms and the person s profile updates to show that they are deceased the basic workflow for reversing a death is as follows chw requests a correction via sms or an app form this request correction form triggers a task for the manager in the app or sends her an sms notification this task opens an undo death form that the manager submits or the manager submits an undo death form via sms if the manager confirms that the death should be reversed the person s profile updates to show that they are alive note the manager can also directly undo the death in app or via sms which should update the profile to show that the person is alive summary of ui changes these are applied once a death is confirmed see the design spec for more info person s icon changes from pink to gray screenshot the word deceased is added either next to or immediately below the person s name on their profile screenshot are mobile view on desktop the word deceased should be immediately next to the person s name see screenshots on the family page a new row appears at the end of the list of family members that says view deceased screenshot when you click on view deceased you are taken to a detail page that lists the deceased family members these are the same as the content rows on 
the family page except that they show the age at the time of death and have a relative date of death on the right e g died years ago screenshot in search deceased people should have gray icons and show up at the bottom of the list screenshot screenshot screenshot screenshot screenshot screenshot screenshot screenshot
| 1
|
300,607
| 22,688,608,217
|
IssuesEvent
|
2022-07-04 16:39:25
|
DickinsonCollege/FarmData2
|
https://api.github.com/repos/DickinsonCollege/FarmData2
|
closed
|
FD2 Example Module Readme Clarifications
|
documentation enhancement
|
In the README.md file in farmdata2_modules/fd2_tabs/fd2_example a few points could be clarified:
- Make it more clear that the `xyz` in the document is a place-holder for any module. Maybe just an (e.g. fd2_example or fd2_barn_kit) or something like that.
- In step 2 make it more clear that just a new block like the one shown needs to be added to the function. Currently the //... placeholders are in the wrong spots and its not quite intuitive what they mean anyway. So finding some way to clarify that just the block shown needs to be copied and edited would be good.
- Possibly move the section about creating a main tab to the bottom as that will be an unusual operation.
- Make clear that clearing cache is a console command.
- Make clear that clearing cache only has to happen if .module is modified not when .html is modified.
|
1.0
|
FD2 Example Module Readme Clarifications - In the README.md file in farmdata2_modules/fd2_tabs/fd2_example a few points could be clarified:
- Make it more clear that the `xyz` in the document is a place-holder for any module. Maybe just an (e.g. fd2_example or fd2_barn_kit) or something like that.
- In step 2 make it more clear that just a new block like the one shown needs to be added to the function. Currently the //... placeholders are in the wrong spots and its not quite intuitive what they mean anyway. So finding some way to clarify that just the block shown needs to be copied and edited would be good.
- Possibly move the section about creating a main tab to the bottom as that will be an unusual operation.
- Make clear that clearing cache is a console command.
- Make clear that clearing cache only has to happen if .module is modified not when .html is modified.
|
non_test
|
example module readme clarifications in the readme md file in modules tabs example a few points could be clarified make it more clear that the xyz in the document is a place holder for any module maybe just an e g example or barn kit or something like that in step make it more clear that just a new block like the one shown needs to be added to the function currently the placeholders are in the wrong spots and its not quite intuitive what they mean anyway so finding some way to clarify that just the block shown needs to be copied and edited would be good possibly move the section about creating a main tab to the bottom as that will be an unusual operation make clear that clearing cache is a console command make clear that clearing cache only has to happen if module is modified not when html is modified
| 0
|
204,177
| 23,218,806,906
|
IssuesEvent
|
2022-08-02 16:10:53
|
pravega/pravega
|
https://api.github.com/repos/pravega/pravega
|
opened
|
Update versions of gson, Apache Portable Runtime and Jackson-databind
|
area/security
|
**Describe the bug**
We need to update versions of the following dependencies in order to address vulnerabilities(CVEs) found by running container image scans.
| Library | Current Version | CVEs found |
| --- | --- | --- |
APR | 1.6.5 | CVE-2017-12613 |
com.google.code.gson | 2.8.6 | CVE-2022-25647
com.fasterxml.jackson.core | 2.13.2.2 | CVE-2020-36518
**Problem location**
`gradle.properties`
**Solution**
Bump up the above libraries to higher versions.
|
True
|
Update versions of gson, Apache Portable Runtime and Jackson-databind - **Describe the bug**
We need to update versions of the following dependencies in order to address vulnerabilities(CVEs) found by running container image scans.
| Library | Current Version | CVEs found |
| --- | --- | --- |
APR | 1.6.5 | CVE-2017-12613 |
com.google.code.gson | 2.8.6 | CVE-2022-25647
com.fasterxml.jackson.core | 2.13.2.2 | CVE-2020-36518
**Problem location**
`gradle.properties`
**Solution**
Bump up the above libraries to higher versions.
|
non_test
|
update versions of gson apache portable runtime and jackson databind describe the bug we need to update versions of the following dependencies in order to address vulnerabilities cves found by running container image scans library current version cves found apr cve com google code gson cve com fasterxml jackson core cve problem location gradle properties solution bump up the above libraries to higher versions
| 0
|
227,517
| 25,081,127,399
|
IssuesEvent
|
2022-11-07 19:24:14
|
JMD60260/fetchmeaband
|
https://api.github.com/repos/JMD60260/fetchmeaband
|
closed
|
CVE-2018-19838 (Medium) detected in libsass3.3.6, node-sass-3.13.1.tgz
|
security vulnerability
|
## CVE-2018-19838 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>libsass3.3.6</b>, <b>node-sass-3.13.1.tgz</b></p></summary>
<p>
<details><summary><b>node-sass-3.13.1.tgz</b></p></summary>
<p>Wrapper around libsass</p>
<p>Library home page: <a href="https://registry.npmjs.org/node-sass/-/node-sass-3.13.1.tgz">https://registry.npmjs.org/node-sass/-/node-sass-3.13.1.tgz</a></p>
<p>Path to dependency file: /public/vendor/owl.carousel/package.json</p>
<p>Path to vulnerable library: /public/vendor/owl.carousel/node_modules/node-sass/package.json</p>
<p>
Dependency Hierarchy:
- grunt-sass-1.2.1.tgz (Root Library)
- :x: **node-sass-3.13.1.tgz** (Vulnerable Library)
</details>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
In LibSass prior to 3.5.5, functions inside ast.cpp for IMPLEMENT_AST_OPERATORS expansion allow attackers to cause a denial-of-service resulting from stack consumption via a crafted sass file, as demonstrated by recursive calls involving clone(), cloneChildren(), and copy().
<p>Publish Date: 2018-12-04
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2018-19838>CVE-2018-19838</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>6.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: Required
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Release Date: 2018-12-04</p>
<p>Fix Resolution (node-sass): 5.0.0</p>
<p>Direct dependency fix Resolution (grunt-sass): 3.0.0</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
True
|
CVE-2018-19838 (Medium) detected in libsass3.3.6, node-sass-3.13.1.tgz - ## CVE-2018-19838 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>libsass3.3.6</b>, <b>node-sass-3.13.1.tgz</b></p></summary>
<p>
<details><summary><b>node-sass-3.13.1.tgz</b></p></summary>
<p>Wrapper around libsass</p>
<p>Library home page: <a href="https://registry.npmjs.org/node-sass/-/node-sass-3.13.1.tgz">https://registry.npmjs.org/node-sass/-/node-sass-3.13.1.tgz</a></p>
<p>Path to dependency file: /public/vendor/owl.carousel/package.json</p>
<p>Path to vulnerable library: /public/vendor/owl.carousel/node_modules/node-sass/package.json</p>
<p>
Dependency Hierarchy:
- grunt-sass-1.2.1.tgz (Root Library)
- :x: **node-sass-3.13.1.tgz** (Vulnerable Library)
</details>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
In LibSass prior to 3.5.5, functions inside ast.cpp for IMPLEMENT_AST_OPERATORS expansion allow attackers to cause a denial-of-service resulting from stack consumption via a crafted sass file, as demonstrated by recursive calls involving clone(), cloneChildren(), and copy().
<p>Publish Date: 2018-12-04
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2018-19838>CVE-2018-19838</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>6.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: Required
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Release Date: 2018-12-04</p>
<p>Fix Resolution (node-sass): 5.0.0</p>
<p>Direct dependency fix Resolution (grunt-sass): 3.0.0</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
non_test
|
cve medium detected in node sass tgz cve medium severity vulnerability vulnerable libraries node sass tgz node sass tgz wrapper around libsass library home page a href path to dependency file public vendor owl carousel package json path to vulnerable library public vendor owl carousel node modules node sass package json dependency hierarchy grunt sass tgz root library x node sass tgz vulnerable library found in base branch master vulnerability details in libsass prior to functions inside ast cpp for implement ast operators expansion allow attackers to cause a denial of service resulting from stack consumption via a crafted sass file as demonstrated by recursive calls involving clone clonechildren and copy publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction required scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version release date fix resolution node sass direct dependency fix resolution grunt sass step up your open source security game with mend
| 0
|
278,250
| 30,702,242,140
|
IssuesEvent
|
2023-07-27 01:14:12
|
snykiotcubedev/arangodb-3.7.6
|
https://api.github.com/repos/snykiotcubedev/arangodb-3.7.6
|
opened
|
CVE-2023-3079 (High) detected in v88.3.47, v88.3.47
|
Mend: dependency security vulnerability
|
## CVE-2023-3079 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>v88.3.47</b>, <b>v88.3.47</b></p></summary>
<p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png?' width=19 height=20> Vulnerability Details</summary>
<p>
Type confusion in V8 in Google Chrome prior to 114.0.5735.110 allowed a remote attacker to potentially exploit heap corruption via a crafted HTML page. (Chromium security severity: High)
<p>Publish Date: 2023-06-05
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2023-3079>CVE-2023-3079</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>8.8</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: Required
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://chromereleases.googleblog.com/2023/06/stable-channel-update-for-desktop.html">https://chromereleases.googleblog.com/2023/06/stable-channel-update-for-desktop.html</a></p>
<p>Release Date: 2023-06-05</p>
<p>Fix Resolution: 114.0.5735.110</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
True
|
CVE-2023-3079 (High) detected in v88.3.47, v88.3.47 - ## CVE-2023-3079 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>v88.3.47</b>, <b>v88.3.47</b></p></summary>
<p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png?' width=19 height=20> Vulnerability Details</summary>
<p>
Type confusion in V8 in Google Chrome prior to 114.0.5735.110 allowed a remote attacker to potentially exploit heap corruption via a crafted HTML page. (Chromium security severity: High)
<p>Publish Date: 2023-06-05
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2023-3079>CVE-2023-3079</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>8.8</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: Required
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://chromereleases.googleblog.com/2023/06/stable-channel-update-for-desktop.html">https://chromereleases.googleblog.com/2023/06/stable-channel-update-for-desktop.html</a></p>
<p>Release Date: 2023-06-05</p>
<p>Fix Resolution: 114.0.5735.110</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
non_test
|
cve high detected in cve high severity vulnerability vulnerable libraries vulnerability details type confusion in in google chrome prior to allowed a remote attacker to potentially exploit heap corruption via a crafted html page chromium security severity high publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction required scope unchanged impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution step up your open source security game with mend
| 0
|
120,708
| 10,132,219,378
|
IssuesEvent
|
2019-08-01 21:41:59
|
cockroachdb/cockroach
|
https://api.github.com/repos/cockroachdb/cockroach
|
closed
|
roachtest: jepsen/g2/majority-ring-start-kill-2 failed
|
C-test-failure O-roachtest O-robot
|
SHA: https://github.com/cockroachdb/cockroach/commits/da56c792e968574b8f1d9ef3fdb45d56a530221a
Parameters:
To repro, try:
```
# Don't forget to check out a clean suitable branch and experiment with the
# stress invocation until the desired results present themselves. For example,
# using stress instead of stressrace and passing the '-p' stressflag which
# controls concurrency.
./scripts/gceworker.sh start && ./scripts/gceworker.sh mosh
cd ~/go/src/github.com/cockroachdb/cockroach && \
stdbuf -oL -eL \
make stressrace TESTS=jepsen/g2/majority-ring-start-kill-2 PKG=roachtest TESTTIMEOUT=5m STRESSFLAGS='-maxtime 20m -timeout 10m' 2>&1 | tee /tmp/stress.log
```
Failed test: https://teamcity.cockroachdb.com/viewLog.html?buildId=1415578&tab=buildLog
```
The test failed on branch=master, cloud=gce:
test artifacts and logs in: /home/agent/work/.go/src/github.com/cockroachdb/cockroach/artifacts/20190801-1415578/jepsen/g2/majority-ring-start-kill-2/run_1
jepsen.go:264,jepsen.go:325,test_runner.go:691: /home/agent/work/.go/src/github.com/cockroachdb/cockroach/bin/roachprod run teamcity-1564640260-44-n6cpu4:6 -- bash -e -c "\
cd /mnt/data1/jepsen/cockroachdb && set -eo pipefail && \
~/lein run test \
--tarball file://${PWD}/cockroach.tgz \
--username ${USER} \
--ssh-private-key ~/.ssh/id_rsa \
--os ubuntu \
--time-limit 300 \
--concurrency 30 \
--recovery-time 25 \
--test-count 1 \
-n 10.128.0.86 -n 10.128.0.76 -n 10.128.0.59 -n 10.128.0.55 -n 10.128.0.45 \
--test g2 --nemesis majority-ring --nemesis2 start-kill-2 \
> invoke.log 2>&1 \
" returned:
stderr:
stdout:
Error: exit status 255
: exit status 1
```
|
2.0
|
roachtest: jepsen/g2/majority-ring-start-kill-2 failed - SHA: https://github.com/cockroachdb/cockroach/commits/da56c792e968574b8f1d9ef3fdb45d56a530221a
Parameters:
To repro, try:
```
# Don't forget to check out a clean suitable branch and experiment with the
# stress invocation until the desired results present themselves. For example,
# using stress instead of stressrace and passing the '-p' stressflag which
# controls concurrency.
./scripts/gceworker.sh start && ./scripts/gceworker.sh mosh
cd ~/go/src/github.com/cockroachdb/cockroach && \
stdbuf -oL -eL \
make stressrace TESTS=jepsen/g2/majority-ring-start-kill-2 PKG=roachtest TESTTIMEOUT=5m STRESSFLAGS='-maxtime 20m -timeout 10m' 2>&1 | tee /tmp/stress.log
```
Failed test: https://teamcity.cockroachdb.com/viewLog.html?buildId=1415578&tab=buildLog
```
The test failed on branch=master, cloud=gce:
test artifacts and logs in: /home/agent/work/.go/src/github.com/cockroachdb/cockroach/artifacts/20190801-1415578/jepsen/g2/majority-ring-start-kill-2/run_1
jepsen.go:264,jepsen.go:325,test_runner.go:691: /home/agent/work/.go/src/github.com/cockroachdb/cockroach/bin/roachprod run teamcity-1564640260-44-n6cpu4:6 -- bash -e -c "\
cd /mnt/data1/jepsen/cockroachdb && set -eo pipefail && \
~/lein run test \
--tarball file://${PWD}/cockroach.tgz \
--username ${USER} \
--ssh-private-key ~/.ssh/id_rsa \
--os ubuntu \
--time-limit 300 \
--concurrency 30 \
--recovery-time 25 \
--test-count 1 \
-n 10.128.0.86 -n 10.128.0.76 -n 10.128.0.59 -n 10.128.0.55 -n 10.128.0.45 \
--test g2 --nemesis majority-ring --nemesis2 start-kill-2 \
> invoke.log 2>&1 \
" returned:
stderr:
stdout:
Error: exit status 255
: exit status 1
```
|
test
|
roachtest jepsen majority ring start kill failed sha parameters to repro try don t forget to check out a clean suitable branch and experiment with the stress invocation until the desired results present themselves for example using stress instead of stressrace and passing the p stressflag which controls concurrency scripts gceworker sh start scripts gceworker sh mosh cd go src github com cockroachdb cockroach stdbuf ol el make stressrace tests jepsen majority ring start kill pkg roachtest testtimeout stressflags maxtime timeout tee tmp stress log failed test the test failed on branch master cloud gce test artifacts and logs in home agent work go src github com cockroachdb cockroach artifacts jepsen majority ring start kill run jepsen go jepsen go test runner go home agent work go src github com cockroachdb cockroach bin roachprod run teamcity bash e c cd mnt jepsen cockroachdb set eo pipefail lein run test tarball file pwd cockroach tgz username user ssh private key ssh id rsa os ubuntu time limit concurrency recovery time test count n n n n n test nemesis majority ring start kill invoke log returned stderr stdout error exit status exit status
| 1
|
481,840
| 13,892,890,748
|
IssuesEvent
|
2020-10-19 12:50:17
|
space-wizards/space-station-14
|
https://api.github.com/repos/space-wizards/space-station-14
|
closed
|
The implementation of excited groups is broken
|
Feature: Atmospherics Priority: 1-high
|
Yo, your implementation of excited groups, like all repos using tg LINDA past this pr https://github.com/tgstation/tgstation/pull/19189, is broken.
Because excited groups break down as soon as a turf within them is removed from active, this line breaks the core purpose of excited groups, slowly growing to represent the "processing" turfs in an area, then settling to equalize atmos diffs. https://github.com/space-wizards/space-station-14/blob/master/Content.Server/Atmos/TileAtmosphere.cs#L741
I'm not sure how relevant this is to your codebase, as you have monstermos's equalization to fill somewhat the same role, but I thought you'd want to know.
I've implemented a hellfix here: https://github.com/tgstation/tgstation/pull/52493, but it's a bit bloated (Isn't finished yet), and I only do what I do because I need to lower the active turf count as much as I can. In your case, removing the portion of the line that deals with timers, while it would make planetary turfs a lot laggier, will solve the issue. Or just remove excited groups, or keep them as they function now, as low group size settlers.
|
1.0
|
The implementation of excited groups is broken - Yo, your implementation of excited groups, like all repos using tg LINDA past this pr https://github.com/tgstation/tgstation/pull/19189, is broken.
Because excited groups break down as soon as a turf within them is removed from active, this line breaks the core purpose of excited groups, slowly growing to represent the "processing" turfs in an area, then settling to equalize atmos diffs. https://github.com/space-wizards/space-station-14/blob/master/Content.Server/Atmos/TileAtmosphere.cs#L741
I'm not sure how relevant this is to your codebase, as you have monstermos's equalization to fill somewhat the same role, but I thought you'd want to know.
I've implemented a hellfix here: https://github.com/tgstation/tgstation/pull/52493, but it's a bit bloated (Isn't finished yet), and I only do what I do because I need to lower the active turf count as much as I can. In your case, removing the portion of the line that deals with timers, while it would make planetary turfs a lot laggier, will solve the issue. Or just remove excited groups, or keep them as they function now, as low group size settlers.
|
non_test
|
the implementation of excited groups is broken yo your implementation of excited groups like all repos using tg linda past this pr is broken because excited groups break down as soon as a turf within them is removed from active this line breaks the core purpose of excited groups slowly growing to represent the processing turfs in an area then settling to equalize atmos diffs i m not sure how relevant this is to your codebase as you have monstermos s equalization to fill somewhat the same role but i thought you d want to know i ve implemented a hellfix here but it s a bit bloated isn t finished yet and i only do what i do because i need to lower the active turf count as much as i can in your case removing the portion of the line that deals with timers while it would make planetary turfs a lot laggier will solve the issue or just remove excited groups or keep them as they function now as low group size settlers
| 0
|
326,548
| 28,000,264,548
|
IssuesEvent
|
2023-03-27 11:14:53
|
wazuh/wazuh-qa
|
https://api.github.com/repos/wazuh/wazuh-qa
|
closed
|
Research Engine's Metrics module
|
team/qa research role/qa-runtime-terror subteam/qa-rainbow qa-planning target/5.0.0 level/task type/test
|
# Description
The objective of this issue is to investigate and design a testing plan for the development issue: [Engine - Metrics module](https://github.com/wazuh/wazuh/issues/15988)
# Planning stage
- [x] Research the applied change.
- [x] Research if we have a test for this case.
- [x] Define the test cases. Identify the base cases, and then the rest of the tests as tier 2.
- [x] Define whether it is necessary to test systems, integration, or E2E. Create the corresponding issues.
|
1.0
|
Research Engine's Metrics module - # Description
The objective of this issue is to investigate and design a testing plan for the development issue: [Engine - Metrics module](https://github.com/wazuh/wazuh/issues/15988)
# Planning stage
- [x] Research the applied change.
- [x] Research if we have a test for this case.
- [x] Define the test cases. Identify the base cases, and then the rest of the tests as tier 2.
- [x] Define whether it is necessary to test systems, integration, or E2E. Create the corresponding issues.
|
test
|
research engine s metrics module description the objective of this issue is to investigate and design a testing plan for the development issue planning stage research the applied change research if we have a test for this case define the test cases identify the base cases and then the rest of the tests as tier define whether it is necessary to test systems integration or create the corresponding issues
| 1
|
197,725
| 14,940,686,710
|
IssuesEvent
|
2021-01-25 18:38:23
|
elastic/kibana
|
https://api.github.com/repos/elastic/kibana
|
opened
|
Failing test: Jest Tests.src/setup_node_env - NodeVersionValidator should run the script WITH error
|
failed-test
|
A test failed on a tracked branch
```
Error: thrown: "Exceeded timeout of 5000 ms for a test.
Use jest.setTimeout(newTimeout) to increase the timeout value, if this is a long-running test."
at /dev/shm/workspace/parallel/4/kibana/src/setup_node_env/node_version_validator.test.js:16:3
at _dispatchDescribe (/dev/shm/workspace/kibana/node_modules/jest-circus/build/index.js:67:26)
at describe (/dev/shm/workspace/kibana/node_modules/jest-circus/build/index.js:30:5)
at Object.<anonymous> (/dev/shm/workspace/parallel/4/kibana/src/setup_node_env/node_version_validator.test.js:15:1)
at Runtime._execModule (/dev/shm/workspace/kibana/node_modules/jest-runtime/build/index.js:1299:24)
at Runtime._loadModule (/dev/shm/workspace/kibana/node_modules/jest-runtime/build/index.js:898:12)
at Runtime.requireModule (/dev/shm/workspace/kibana/node_modules/jest-runtime/build/index.js:746:10)
at jestAdapter (/dev/shm/workspace/kibana/node_modules/jest-circus/build/legacy-code-todo-rewrite/jestAdapter.js:106:13)
at processTicksAndRejections (internal/process/task_queues.js:93:5)
at runTestInternal (/dev/shm/workspace/kibana/node_modules/jest-runner/build/runTest.js:380:16)
at runTest (/dev/shm/workspace/kibana/node_modules/jest-runner/build/runTest.js:472:34)
at Object.worker (/dev/shm/workspace/kibana/node_modules/jest-runner/build/testWorker.js:133:12)
```
First failure: [Jenkins Build](https://kibana-ci.elastic.co/job/elastic+kibana+master/11402/)
<!-- kibanaCiData = {"failed-test":{"test.class":"Jest Tests.src/setup_node_env","test.name":"NodeVersionValidator should run the script WITH error","test.failCount":1}} -->
|
1.0
|
Failing test: Jest Tests.src/setup_node_env - NodeVersionValidator should run the script WITH error - A test failed on a tracked branch
```
Error: thrown: "Exceeded timeout of 5000 ms for a test.
Use jest.setTimeout(newTimeout) to increase the timeout value, if this is a long-running test."
at /dev/shm/workspace/parallel/4/kibana/src/setup_node_env/node_version_validator.test.js:16:3
at _dispatchDescribe (/dev/shm/workspace/kibana/node_modules/jest-circus/build/index.js:67:26)
at describe (/dev/shm/workspace/kibana/node_modules/jest-circus/build/index.js:30:5)
at Object.<anonymous> (/dev/shm/workspace/parallel/4/kibana/src/setup_node_env/node_version_validator.test.js:15:1)
at Runtime._execModule (/dev/shm/workspace/kibana/node_modules/jest-runtime/build/index.js:1299:24)
at Runtime._loadModule (/dev/shm/workspace/kibana/node_modules/jest-runtime/build/index.js:898:12)
at Runtime.requireModule (/dev/shm/workspace/kibana/node_modules/jest-runtime/build/index.js:746:10)
at jestAdapter (/dev/shm/workspace/kibana/node_modules/jest-circus/build/legacy-code-todo-rewrite/jestAdapter.js:106:13)
at processTicksAndRejections (internal/process/task_queues.js:93:5)
at runTestInternal (/dev/shm/workspace/kibana/node_modules/jest-runner/build/runTest.js:380:16)
at runTest (/dev/shm/workspace/kibana/node_modules/jest-runner/build/runTest.js:472:34)
at Object.worker (/dev/shm/workspace/kibana/node_modules/jest-runner/build/testWorker.js:133:12)
```
First failure: [Jenkins Build](https://kibana-ci.elastic.co/job/elastic+kibana+master/11402/)
<!-- kibanaCiData = {"failed-test":{"test.class":"Jest Tests.src/setup_node_env","test.name":"NodeVersionValidator should run the script WITH error","test.failCount":1}} -->
|
test
|
failing test jest tests src setup node env nodeversionvalidator should run the script with error a test failed on a tracked branch error thrown exceeded timeout of ms for a test use jest settimeout newtimeout to increase the timeout value if this is a long running test at dev shm workspace parallel kibana src setup node env node version validator test js at dispatchdescribe dev shm workspace kibana node modules jest circus build index js at describe dev shm workspace kibana node modules jest circus build index js at object dev shm workspace parallel kibana src setup node env node version validator test js at runtime execmodule dev shm workspace kibana node modules jest runtime build index js at runtime loadmodule dev shm workspace kibana node modules jest runtime build index js at runtime requiremodule dev shm workspace kibana node modules jest runtime build index js at jestadapter dev shm workspace kibana node modules jest circus build legacy code todo rewrite jestadapter js at processticksandrejections internal process task queues js at runtestinternal dev shm workspace kibana node modules jest runner build runtest js at runtest dev shm workspace kibana node modules jest runner build runtest js at object worker dev shm workspace kibana node modules jest runner build testworker js first failure
| 1
|
427,575
| 29,830,297,005
|
IssuesEvent
|
2023-06-18 07:21:02
|
DavidSolan0/mlops-repository
|
https://api.github.com/repos/DavidSolan0/mlops-repository
|
closed
|
Create UML diagrams
|
documentation
|
As a Data Scientist, I need to document the expected steps to understand and build my project. As starting point, I'll create a UML component diagram. This diagram will be in README.md to help the reader to understand the project structure.
|
1.0
|
Create UML diagrams - As a Data Scientist, I need to document the expected steps to understand and build my project. As starting point, I'll create a UML component diagram. This diagram will be in README.md to help the reader to understand the project structure.
|
non_test
|
create uml diagrams as a data scientist i need to document the expected steps to understand and build my project as starting point i ll create a uml component diagram this diagram will be in readme md to help the reader to understand the project structure
| 0
|
281,107
| 30,873,523,978
|
IssuesEvent
|
2023-08-03 12:57:36
|
eclipse-tractusx/sig-infra
|
https://api.github.com/repos/eclipse-tractusx/sig-infra
|
closed
|
Request access to VeraCode
|
security
|
Please report **undisclosed** or **confidential** vulnerabilities here: https://www.eclipse.org/security/
**Topics (Please mark an [x] to your Topic):**
- [ ] KICS
- [ ] Invicti
- [ ] GitGuardian
- [ ] Pentesting
- [ ] Security assessment (threat modeling, code reviews)
- [ ] Trivy
- [x] Veracode (initial setup, GitHub integration)
- [ ] Other
Hello Team,
please enable my email lucas.capellino@doubleslash.de to use veracode.
This is needed to work on the [tx-traceability-foss](https://github.com/catenax-ng/tx-traceability-foss) project.
Thank you and best regards,
Lucas
|
True
|
Request access to VeraCode - Please report **undisclosed** or **confidential** vulnerabilities here: https://www.eclipse.org/security/
**Topics (Please mark an [x] to your Topic):**
- [ ] KICS
- [ ] Invicti
- [ ] GitGuardian
- [ ] Pentesting
- [ ] Security assessment (threat modeling, code reviews)
- [ ] Trivy
- [x] Veracode (initial setup, GitHub integration)
- [ ] Other
Hello Team,
please enable my email lucas.capellino@doubleslash.de to use veracode.
This is needed to work on the [tx-traceability-foss](https://github.com/catenax-ng/tx-traceability-foss) project.
Thank you and best regards,
Lucas
|
non_test
|
request access to veracode please report undisclosed or confidential vulnerabilities here topics please mark an to your topic kics invicti gitguardian pentesting security assessment threat modeling code reviews trivy veracode initial setup github integration other hello team please enable my email lucas capellino doubleslash de to use veracode this is needed to work on the project thank you and best regards lucas
| 0
|
613,521
| 19,092,682,912
|
IssuesEvent
|
2021-11-29 13:48:33
|
jina-ai/jina
|
https://api.github.com/repos/jina-ai/jina
|
closed
|
Improve env var support with JinaD
|
area/core priority/important-longterm focus/ease-of-use
|
With jina-ai/jina#3251, we added support to pass environment variables to JinaD. But it has couple of limitations.
The env vars passed to a remote Flow are only accessed & processed in main JinaD and not passed to partial-daemon. So if in the executor code, we are trying to access an env var (which should ideally be passed from the Flow), it fails.
Passing env vars for remote Executors. This must be enabled in here.
|
1.0
|
Improve env var support with JinaD - With jina-ai/jina#3251, we added support to pass environment variables to JinaD. But it has couple of limitations.
The env vars passed to a remote Flow are only accessed & processed in main JinaD and not passed to partial-daemon. So if in the executor code, we are trying to access an env var (which should ideally be passed from the Flow), it fails.
Passing env vars for remote Executors. This must be enabled in here.
|
non_test
|
improve env var support with jinad with jina ai jina we added support to pass environment variables to jinad but it has couple of limitations the env vars passed to a remote flow are only accessed processed in main jinad and not passed to partial daemon so if in the executor code we are trying to access an env var which should ideally be passed from the flow it fails passing env vars for remote executors this must be enabled in here
| 0
|
53,759
| 23,054,054,763
|
IssuesEvent
|
2022-07-25 01:31:32
|
filecoin-project/venus
|
https://api.github.com/repos/filecoin-project/venus
|
closed
|
[venus-miner] JaegerTracing 引入及使用说明文档
|
BU-chain-service
|
### Checklist
- [X] This is **not** a new feature or an enhancement to the Filecoin protocol. If it is, please open an [FIP issue](https://github.com/filecoin-project/FIPs/blob/master/FIPS/fip-0001.md).
- [X] This is **not** brainstorming ideas. If you have an idea you'd like to discuss, please open a new discussion on [the venus forum](https://github.com/filecoin-project/venus/discussions/categories/ideas) and select the category as `Ideas`.
- [X] I **have** a specific, actionable, and well motivated feature request to propose.
### Venus component
- [ ] venus daemon - [chain service] chain sync
- [ ] venus auth - [chain service] authentication
- [ ] venus messager - [chain service] message management (mpool)
- [ ] venus gateway - [chain service] gateway
- [X] venus miner - [chain service] mining and block production
- [ ] venus sealer/worker - sealing
- [ ] venus sealer - proving (WindowPoSt)
- [ ] venus market - storage deal
- [ ] venus market - retrieval deal
- [ ] venus market - data transfer
- [ ] venus light-weight client
- [ ] venus JSON-RPC API
- [ ] Other
### What is the motivation behind this feature request? Is your feature request related to a problem? Please describe.
JaegerTracing
### Describe the solution you'd like
JaegerTracing
### Describe alternatives you've considered
_No response_
### Additional context
_No response_
|
1.0
|
[venus-miner] JaegerTracing 引入及使用说明文档 - ### Checklist
- [X] This is **not** a new feature or an enhancement to the Filecoin protocol. If it is, please open an [FIP issue](https://github.com/filecoin-project/FIPs/blob/master/FIPS/fip-0001.md).
- [X] This is **not** brainstorming ideas. If you have an idea you'd like to discuss, please open a new discussion on [the venus forum](https://github.com/filecoin-project/venus/discussions/categories/ideas) and select the category as `Ideas`.
- [X] I **have** a specific, actionable, and well motivated feature request to propose.
### Venus component
- [ ] venus daemon - [chain service] chain sync
- [ ] venus auth - [chain service] authentication
- [ ] venus messager - [chain service] message management (mpool)
- [ ] venus gateway - [chain service] gateway
- [X] venus miner - [chain service] mining and block production
- [ ] venus sealer/worker - sealing
- [ ] venus sealer - proving (WindowPoSt)
- [ ] venus market - storage deal
- [ ] venus market - retrieval deal
- [ ] venus market - data transfer
- [ ] venus light-weight client
- [ ] venus JSON-RPC API
- [ ] Other
### What is the motivation behind this feature request? Is your feature request related to a problem? Please describe.
JaegerTracing
### Describe the solution you'd like
JaegerTracing
### Describe alternatives you've considered
_No response_
### Additional context
_No response_
|
non_test
|
jaegertracing 引入及使用说明文档 checklist this is not a new feature or an enhancement to the filecoin protocol if it is please open an this is not brainstorming ideas if you have an idea you d like to discuss please open a new discussion on and select the category as ideas i have a specific actionable and well motivated feature request to propose venus component venus daemon chain sync venus auth authentication venus messager message management mpool venus gateway gateway venus miner mining and block production venus sealer worker sealing venus sealer proving windowpost venus market storage deal venus market retrieval deal venus market data transfer venus light weight client venus json rpc api other what is the motivation behind this feature request is your feature request related to a problem please describe jaegertracing describe the solution you d like jaegertracing describe alternatives you ve considered no response additional context no response
| 0
|
93,963
| 11,838,220,056
|
IssuesEvent
|
2020-03-23 15:20:22
|
elastic/kibana
|
https://api.github.com/repos/elastic/kibana
|
opened
|
[maps] provide UI feedback for immutable and mutable layer configuration parameters
|
Team:Geo design discuss
|
https://github.com/elastic/kibana/pull/60668 adds `scaling` configuration to the layer creation wizard. In the past, layer creation wizards have only displayed immutable configuration settings, i.e. settings that can not be changed after the layer is created. `scaling` is mutable and can be later changed by the user. How should we provide a clear distinction in the UI between settings that can not be edited later and settings that can be edited layer?
|
1.0
|
[maps] provide UI feedback for immutable and mutable layer configuration parameters - https://github.com/elastic/kibana/pull/60668 adds `scaling` configuration to the layer creation wizard. In the past, layer creation wizards have only displayed immutable configuration settings, i.e. settings that can not be changed after the layer is created. `scaling` is mutable and can be later changed by the user. How should we provide a clear distinction in the UI between settings that can not be edited later and settings that can be edited layer?
|
non_test
|
provide ui feedback for immutable and mutable layer configuration parameters adds scaling configuration to the layer creation wizard in the past layer creation wizards have only displayed immutable configuration settings i e settings that can not be changed after the layer is created scaling is mutable and can be later changed by the user how should we provide a clear distinction in the ui between settings that can not be edited later and settings that can be edited layer
| 0
|
401,659
| 27,334,686,136
|
IssuesEvent
|
2023-02-26 03:13:58
|
kubernetes/kubernetes
|
https://api.github.com/repos/kubernetes/kubernetes
|
closed
|
ConfigMap Hot Update Delay
|
kind/documentation sig/storage needs-triage
|
### What happened?
I use a configMap to mount at container , when i update the configMap , the container volume takes 20s or much longer to catch up the update .
And very strange ,if i update configMap again, the previous change will immediately show at the container volume.
### What did you expect to happen?
the change reflect to the mounted file within 20s
### How can we reproduce it (as minimally and precisely as possible)?
yes
### Anything else we need to know?
_No response_
### Kubernetes version
<details>
```console
# kubeadm version
kubeadm version: &version.Info{Major:"1", Minor:"21", GitVersion:"v1.21.0", GitCommit:"cb303e613a121a29364f75cc67d3d580833a7479", GitTreeState:"clean", BuildDate:"2021-04-08T16:30:03Z", GoVersion:"go1.16.1", Compiler:"gc", Platform:"linux/amd64"}
# kubectl version
Client Version: version.Info{Major:"1", Minor:"21", GitVersion:"v1.21.0", GitCommit:"cb303e613a121a29364f75cc67d3d580833a7479", GitTreeState:"clean", BuildDate:"2021-04-08T16:31:21Z", GoVersion:"go1.16.1", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"21", GitVersion:"v1.21.0", GitCommit:"cb303e613a121a29364f75cc67d3d580833a7479", GitTreeState:"clean", BuildDate:"2021-04-08T16:25:06Z", GoVersion:"go1.16.1", Compiler:"gc", Platform:"linux/amd64"}
```
</details>
### Cloud provider
<details>
None
</details>
### OS version
<details>
```console
# On Linux:
$ cat /etc/os-release
NAME="CentOS Linux"
VERSION="7 (Core)"
ID="centos"
ID_LIKE="rhel fedora"
VERSION_ID="7"
PRETTY_NAME="CentOS Linux 7 (Core)"
ANSI_COLOR="0;31"
CPE_NAME="cpe:/o:centos:centos:7"
HOME_URL="https://www.centos.org/"
BUG_REPORT_URL="https://bugs.centos.org/"
CENTOS_MANTISBT_PROJECT="CentOS-7"
CENTOS_MANTISBT_PROJECT_VERSION="7"
REDHAT_SUPPORT_PRODUCT="centos"
REDHAT_SUPPORT_PRODUCT_VERSION="7"
# On Windows:
```
</details>
### Install tools
<details>
</details>
### Container runtime (CRI) and version (if applicable)
<details>
</details>
### Related plugins (CNI, CSI, ...) and versions (if applicable)
<details>
</details>
|
1.0
|
ConfigMap Hot Update Delay - ### What happened?
I use a configMap to mount at container , when i update the configMap , the container volume takes 20s or much longer to catch up the update .
And very strange ,if i update configMap again, the previous change will immediately show at the container volume.
### What did you expect to happen?
the change reflect to the mounted file within 20s
### How can we reproduce it (as minimally and precisely as possible)?
yes
### Anything else we need to know?
_No response_
### Kubernetes version
<details>
```console
# kubeadm version
kubeadm version: &version.Info{Major:"1", Minor:"21", GitVersion:"v1.21.0", GitCommit:"cb303e613a121a29364f75cc67d3d580833a7479", GitTreeState:"clean", BuildDate:"2021-04-08T16:30:03Z", GoVersion:"go1.16.1", Compiler:"gc", Platform:"linux/amd64"}
# kubectl version
Client Version: version.Info{Major:"1", Minor:"21", GitVersion:"v1.21.0", GitCommit:"cb303e613a121a29364f75cc67d3d580833a7479", GitTreeState:"clean", BuildDate:"2021-04-08T16:31:21Z", GoVersion:"go1.16.1", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"21", GitVersion:"v1.21.0", GitCommit:"cb303e613a121a29364f75cc67d3d580833a7479", GitTreeState:"clean", BuildDate:"2021-04-08T16:25:06Z", GoVersion:"go1.16.1", Compiler:"gc", Platform:"linux/amd64"}
```
</details>
### Cloud provider
<details>
None
</details>
### OS version
<details>
```console
# On Linux:
$ cat /etc/os-release
NAME="CentOS Linux"
VERSION="7 (Core)"
ID="centos"
ID_LIKE="rhel fedora"
VERSION_ID="7"
PRETTY_NAME="CentOS Linux 7 (Core)"
ANSI_COLOR="0;31"
CPE_NAME="cpe:/o:centos:centos:7"
HOME_URL="https://www.centos.org/"
BUG_REPORT_URL="https://bugs.centos.org/"
CENTOS_MANTISBT_PROJECT="CentOS-7"
CENTOS_MANTISBT_PROJECT_VERSION="7"
REDHAT_SUPPORT_PRODUCT="centos"
REDHAT_SUPPORT_PRODUCT_VERSION="7"
# On Windows:
```
</details>
### Install tools
<details>
</details>
### Container runtime (CRI) and version (if applicable)
<details>
</details>
### Related plugins (CNI, CSI, ...) and versions (if applicable)
<details>
</details>
|
non_test
|
configmap hot update delay what happened i use a configmap to mount at container when i update the configmap the container volume takes or much longer to catch up the update and very strange if i update configmap again the previous change will immediately show at the container volume what did you expect to happen the change reflect to the mounted file within how can we reproduce it as minimally and precisely as possible yes anything else we need to know no response kubernetes version console kubeadm version kubeadm version version info major minor gitversion gitcommit gittreestate clean builddate goversion compiler gc platform linux kubectl version client version version info major minor gitversion gitcommit gittreestate clean builddate goversion compiler gc platform linux server version version info major minor gitversion gitcommit gittreestate clean builddate goversion compiler gc platform linux cloud provider none os version console on linux cat etc os release name centos linux version core id centos id like rhel fedora version id pretty name centos linux core ansi color cpe name cpe o centos centos home url bug report url centos mantisbt project centos centos mantisbt project version redhat support product centos redhat support product version on windows install tools container runtime cri and version if applicable related plugins cni csi and versions if applicable
| 0
|
13,467
| 23,177,589,222
|
IssuesEvent
|
2022-07-31 16:50:19
|
renovatebot/renovate
|
https://api.github.com/repos/renovatebot/renovate
|
opened
|
How to configure Renovate to auto-merge our own dependencies
|
type:feature status:requirements priority-5-triage
|
### What would you like Renovate to be able to do?
Hi guys, I am figuring out how to configure renovate to auto-merge my own dependencies.
It saves us some time when we release patches to our type libs.
Some advice about this?
Jose
### If you have any ideas on how this should be implemented, please tell us here.
I need some advice on starting, the feature exists or should work with it feature?
### Is this a feature you are interested in implementing yourself?
Maybe
|
1.0
|
How to configure Renovate to auto-merge our own dependencies - ### What would you like Renovate to be able to do?
Hi guys, I am figuring out how to configure renovate to auto-merge my own dependencies.
It saves us some time when we release patches to our type libs.
Some advice about this?
Jose
### If you have any ideas on how this should be implemented, please tell us here.
I need some advice on starting, the feature exists or should work with it feature?
### Is this a feature you are interested in implementing yourself?
Maybe
|
non_test
|
how to configure renovate to auto merge our own dependencies what would you like renovate to be able to do hi guys i am figuring out how to configure renovate to auto merge my own dependencies it saves us some time when we release patches to our type libs some advice about this jose if you have any ideas on how this should be implemented please tell us here i need some advice on starting the feature exists or should work with it feature is this a feature you are interested in implementing yourself maybe
| 0
|
278,620
| 24,163,746,276
|
IssuesEvent
|
2022-09-22 13:37:16
|
opensrp/opensrp-client-anc
|
https://api.github.com/repos/opensrp/opensrp-client-anc
|
closed
|
User location is null on MeFragment and app crashes when a user clicks on the location button
|
high priority Tech Partner (SID Team) qa+ Client Testing
|
### Affected App or Server Version
v1.7.2
### What kind of support do you need?
Fix missing user assigned location and ensure app doesn't crash when the user is assigned a location.
### What is the acceptance criteria for your support request?
See user assigned location on me fragment.
App should not crash when one selects on user location
### Relevant Information
_No response_
|
1.0
|
User location is null on MeFragment and app crashes when a user clicks on the location button - ### Affected App or Server Version
v1.7.2
### What kind of support do you need?
Fix missing user assigned location and ensure app doesn't crash when the user is assigned a location.
### What is the acceptance criteria for your support request?
See user assigned location on me fragment.
App should not crash when one selects on user location
### Relevant Information
_No response_
|
test
|
user location is null on mefragment and app crashes when a user clicks on the location button affected app or server version what kind of support do you need fix missing user assigned location and ensure app doesn t crash when the user is assigned a location what is the acceptance criteria for your support request see user assigned location on me fragment app should not crash when one selects on user location relevant information no response
| 1
|
256,532
| 22,059,308,154
|
IssuesEvent
|
2022-05-30 15:43:28
|
thexerteproject/xerteonlinetoolkits
|
https://api.github.com/repos/thexerteproject/xerteonlinetoolkits
|
opened
|
Check spelling button crashes editor
|
bug Needs testing Editor
|
From the forums https://xerte.org.uk/index.php/en/forum/bugs-and-issues/2878-spellcheck-using-orientation-page-type#8153
After quickly testing this it seems that it doesn't matter what page or textbox is active when clicking the spelling button/menu and selecting Check Spelling it hangs the editor. I checked this in develop, 3.10 and 3.9 and same result in each. (Testing with Chrome and haven't checked others).
|
1.0
|
Check spelling button crashes editor - From the forums https://xerte.org.uk/index.php/en/forum/bugs-and-issues/2878-spellcheck-using-orientation-page-type#8153
After quickly testing this it seems that it doesn't matter what page or textbox is active when clicking the spelling button/menu and selecting Check Spelling it hangs the editor. I checked this in develop, 3.10 and 3.9 and same result in each. (Testing with Chrome and haven't checked others).
|
test
|
check spelling button crashes editor from the forums after quickly testing this it seems that it doesn t matter what page or textbox is active when clicking the spelling button menu and selecting check spelling it hangs the editor i checked this in develop and and same result in each testing with chrome and haven t checked others
| 1
|
332,058
| 24,335,369,211
|
IssuesEvent
|
2022-10-01 02:28:23
|
mediumroast/mediumroast_js
|
https://api.github.com/repos/mediumroast/mediumroast_js
|
opened
|
Check out the JSDoc better-docs template category
|
documentation enhancement
|
The @category and @subcategory options in source might be helpful to better organize the documentation.
|
1.0
|
Check out the JSDoc better-docs template category - The @category and @subcategory options in source might be helpful to better organize the documentation.
|
non_test
|
check out the jsdoc better docs template category the category and subcategory options in source might be helpful to better organize the documentation
| 0
|
116,324
| 9,830,226,130
|
IssuesEvent
|
2019-06-16 06:50:48
|
meateam/file-service
|
https://api.github.com/repos/meateam/file-service
|
closed
|
IsAllowed for root folder
|
bug test
|
In the function IsAllowed, if the file is the user's root folder the result should be true.
Now, it doesn't check what happens when an `undefined` / `null` / `""` (empty string) is sent, so it will cause an error.
Fix it and write corresponding tests for it.
|
1.0
|
IsAllowed for root folder - In the function IsAllowed, if the file is the user's root folder the result should be true.
Now, it doesn't check what happens when an `undefined` / `null` / `""` (empty string) is sent, so it will cause an error.
Fix it and write corresponding tests for it.
|
test
|
isallowed for root folder in the function isallowed if the file is the user s root folder the result should be true now it doesn t check what happens when an undefined null empty string is sent so it will cause an error fix it and write corresponding tests for it
| 1
|
124,281
| 10,302,654,698
|
IssuesEvent
|
2019-08-28 18:18:45
|
folkarps/F3
|
https://api.github.com/repos/folkarps/F3
|
closed
|
Add civilians to groupData
|
S5:Tested; Awaiting Release T:Component improvement
|
Since we have a civilian briefing we should also add civilians to:
`groupMarkers/fn_groupData.sqf` and `groupMarkers/f_setLocalGroupMarkers.sqf`.
This will make it easier for missionmakers to add civilian slots.
|
1.0
|
Add civilians to groupData - Since we have a civilian briefing we should also add civilians to:
`groupMarkers/fn_groupData.sqf` and `groupMarkers/f_setLocalGroupMarkers.sqf`.
This will make it easier for missionmakers to add civilian slots.
|
test
|
add civilians to groupdata since we have a civilian briefing we should also add civilians to groupmarkers fn groupdata sqf and groupmarkers f setlocalgroupmarkers sqf this will make it easier for missionmakers to add civilian slots
| 1
|
335,418
| 30,029,071,680
|
IssuesEvent
|
2023-06-27 08:24:13
|
Greenstand/treetracker-wallet-api
|
https://api.github.com/repos/Greenstand/treetracker-wallet-api
|
closed
|
[Integration tests refactor] __tests__/trust-relationship-send.spec.js
|
good first issue tests
|
Now we need to refactor all the test under _ _ tests _ _
The problem:
Currently, all the integration tests direct under _ _ tests _ _ are copied from the previous end-to-end tests. And in general, there is a lot of replicated code, and unreadable and hard to maintain. We need to refactor them to an easy way to write and maintain.
There is the example: #192 you can refer to this example to refactor the test, and put the new test file into _ tests _/integration dir, and put mock data into mock-data dir.
This issue/task is to refactor the test: https://github.com/Greenstand/treetracker-wallet-api/blob/master/__tests__/trust-relationship-send.spec.js
|
1.0
|
[Integration tests refactor] __tests__/trust-relationship-send.spec.js - Now we need to refactor all the test under _ _ tests _ _
The problem:
Currently, all the integration tests direct under _ _ tests _ _ are copied from the previous end-to-end tests. And in general, there is a lot of replicated code, and unreadable and hard to maintain. We need to refactor them to an easy way to write and maintain.
There is the example: #192 you can refer to this example to refactor the test, and put the new test file into _ tests _/integration dir, and put mock data into mock-data dir.
This issue/task is to refactor the test: https://github.com/Greenstand/treetracker-wallet-api/blob/master/__tests__/trust-relationship-send.spec.js
|
test
|
tests trust relationship send spec js now we need to refactor all the test under tests the problem currently all the integration tests direct under tests are copied from the previous end to end tests and in general there is a lot of replicated code and unreadable and hard to maintain we need to refactor them to an easy way to write and maintain there is the example you can refer to this example to refactor the test and put the new test file into tests integration dir and put mock data into mock data dir this issue task is to refactor the test
| 1
|
58,168
| 6,576,334,570
|
IssuesEvent
|
2017-09-11 19:25:58
|
zsh-users/zsh-syntax-highlighting
|
https://api.github.com/repos/zsh-users/zsh-syntax-highlighting
|
opened
|
test harness entry points (tests/*.zsh) should Bail Out! if zsh was run without -f
|
component:tests Task
|
Makefile runs tests under `zsh -f`.
The test harness entry points should enforce that zsh had been launched with -f and bail out if that's not the case. `[[ $- == *f* ]]` should work.
This would trigger if someone tries to run `zsh tests/foo.zsh` by hand, or to source that file from an interactive shell (that hadn't been started with -f; this caveat is probably fine).
(Related: the #! lines can't use -f because they use `#!.../env zsh` and passing multiple arguments is unportable.)
|
1.0
|
test harness entry points (tests/*.zsh) should Bail Out! if zsh was run without -f - Makefile runs tests under `zsh -f`.
The test harness entry points should enforce that zsh had been launched with -f and bail out if that's not the case. `[[ $- == *f* ]]` should work.
This would trigger if someone tries to run `zsh tests/foo.zsh` by hand, or to source that file from an interactive shell (that hadn't been started with -f; this caveat is probably fine).
(Related: the #! lines can't use -f because they use `#!.../env zsh` and passing multiple arguments is unportable.)
|
test
|
test harness entry points tests zsh should bail out if zsh was run without f makefile runs tests under zsh f the test harness entry points should enforce that zsh had been launched with f and bail out if that s not the case should work this would trigger if someone tries to run zsh tests foo zsh by hand or to source that file from an interactive shell that hadn t been started with f this caveat is probably fine related the lines can t use f because they use env zsh and passing multiple arguments is unportable
| 1
|
69,497
| 7,136,282,245
|
IssuesEvent
|
2018-01-23 06:13:47
|
dotnet/corefx
|
https://api.github.com/repos/dotnet/corefx
|
opened
|
Test failure: System.Net.Http.Functional.Tests.SchSendAuxRecordHttpTest / HttpClient_ClientUsesAuxRecord_Ok
|
area-System.Net.Http test bug test-run-core
|
## Type of failures
```
System.TimeoutException : Task timed out after 15000
at System.Threading.Tasks.TaskTimeoutExtensions.<TimeoutAfter>d__0.MoveNext()
at System.Net.Http.Functional.Tests.SchSendAuxRecordHttpTest.<HttpClient_ClientUsesAuxRecord_Ok>d__2.MoveNext() in E:\A\_work\2359\s\corefx\src\System.Net.Http\tests\FunctionalTests\SchSendAuxRecordHttpTest.cs:line 80
--- End of stack trace from previous location where exception was thrown ---
```
## History of failures
Day | Build | OS | Details
-- | -- | -- | --
1/3 | 20180103.03 | Win7 |
1/20 | 20180120.02 | Win7 | [link](https://mc.dot.net/#/product/netcore/master/source/official~2Fcorefx~2Fmaster~2F/type/test~2Ffunctional~2Fcli~2F/build/20180120.02/workItem/System.Net.Http.Functional.Tests/analysis/xunit/System.Net.Http.Functional.Tests.SchSendAuxRecordHttpTest~2FHttpClient_ClientUsesAuxRecord_Ok)
1/20 | 20180120.02 | Win7 | [ManagedHandler] [link](https://mc.dot.net/#/product/netcore/master/source/official~2Fcorefx~2Fmaster~2F/type/test~2Ffunctional~2Fcli~2F/build/20180120.02/workItem/System.Net.Http.Functional.Tests/analysis/xunit/System.Net.Http.Functional.Tests.ManagedHandler_SchSendAuxRecordHttpTest~2FHttpClient_ClientUsesAuxRecord_Ok)
|
2.0
|
Test failure: System.Net.Http.Functional.Tests.SchSendAuxRecordHttpTest / HttpClient_ClientUsesAuxRecord_Ok - ## Type of failures
```
System.TimeoutException : Task timed out after 15000
at System.Threading.Tasks.TaskTimeoutExtensions.<TimeoutAfter>d__0.MoveNext()
at System.Net.Http.Functional.Tests.SchSendAuxRecordHttpTest.<HttpClient_ClientUsesAuxRecord_Ok>d__2.MoveNext() in E:\A\_work\2359\s\corefx\src\System.Net.Http\tests\FunctionalTests\SchSendAuxRecordHttpTest.cs:line 80
--- End of stack trace from previous location where exception was thrown ---
```
## History of failures
Day | Build | OS | Details
-- | -- | -- | --
1/3 | 20180103.03 | Win7 |
1/20 | 20180120.02 | Win7 | [link](https://mc.dot.net/#/product/netcore/master/source/official~2Fcorefx~2Fmaster~2F/type/test~2Ffunctional~2Fcli~2F/build/20180120.02/workItem/System.Net.Http.Functional.Tests/analysis/xunit/System.Net.Http.Functional.Tests.SchSendAuxRecordHttpTest~2FHttpClient_ClientUsesAuxRecord_Ok)
1/20 | 20180120.02 | Win7 | [ManagedHandler] [link](https://mc.dot.net/#/product/netcore/master/source/official~2Fcorefx~2Fmaster~2F/type/test~2Ffunctional~2Fcli~2F/build/20180120.02/workItem/System.Net.Http.Functional.Tests/analysis/xunit/System.Net.Http.Functional.Tests.ManagedHandler_SchSendAuxRecordHttpTest~2FHttpClient_ClientUsesAuxRecord_Ok)
|
test
|
test failure system net http functional tests schsendauxrecordhttptest httpclient clientusesauxrecord ok type of failures system timeoutexception task timed out after at system threading tasks tasktimeoutextensions d movenext at system net http functional tests schsendauxrecordhttptest d movenext in e a work s corefx src system net http tests functionaltests schsendauxrecordhttptest cs line end of stack trace from previous location where exception was thrown history of failures day build os details
| 1
|
14,035
| 3,372,457,261
|
IssuesEvent
|
2015-11-23 23:41:19
|
18F/doi-extractives-data
|
https://api.github.com/repos/18F/doi-extractives-data
|
closed
|
Content: How it works > Revenues
|
workflow:testing
|
Add Revenues content (p. 46-48, Exec Summary) `_how_it_works > revenues`
- [x] What revenues do companies pay for extracting natural resources?
- [x] Federal revenue [chart: federal revenue streams]
- [x] Revenue policy provisions (p. 49-50, Exec Summary)
- [x] Where does federal revenue go? (p. 51-55, Exec Summary)
- [x] Taxes
- [x] Corporate income taxes (includes exemptions)
- [x] Tax expenditures (pg. 27–33, Online Only doc)
|
1.0
|
Content: How it works > Revenues - Add Revenues content (p. 46-48, Exec Summary) `_how_it_works > revenues`
- [x] What revenues do companies pay for extracting natural resources?
- [x] Federal revenue [chart: federal revenue streams]
- [x] Revenue policy provisions (p. 49-50, Exec Summary)
- [x] Where does federal revenue go? (p. 51-55, Exec Summary)
- [x] Taxes
- [x] Corporate income taxes (includes exemptions)
- [x] Tax expenditures (pg. 27–33, Online Only doc)
|
test
|
content how it works revenues add revenues content p exec summary how it works revenues what revenues do companies pay for extracting natural resources federal revenue revenue policy provisions p exec summary where does federal revenue go p exec summary taxes corporate income taxes includes exemptions tax expenditures pg – online only doc
| 1
|
56,011
| 14,896,638,258
|
IssuesEvent
|
2021-01-21 10:39:42
|
matrix-org/synapse
|
https://api.github.com/repos/matrix-org/synapse
|
opened
|
Synapse 1.26.0 should depend on psycopg2>=2.8
|
S-Major T-Defect X-Release-Blocker
|
@turt2live reported that his `/sync` broke:
```
[synchrotron_1] 2021-01-21 06:41:20,454 - synapse.http.server - 83 - ERROR - GET-6- Failed handle request via 'SyncRestServlet': <XForwardedForRequest at 0x7fe0a7364e10 method='GET' uri='/_matrix/client/r0/sync?filter=76&timeout=0&since=s30055085_169441467_1384_26059860_26058150_2158_574317_12846128_152' clientproto='HTTP/1.0' site=8050>
Traceback (most recent call last):
File "/home/matrix/synapse/lib/python3.6/site-packages/twisted/internet/defer.py", line 1416, in _inlineCallbacks
result = result.throwExceptionIntoGenerator(g)
File "/home/matrix/synapse/lib/python3.6/site-packages/twisted/python/failure.py", line 512, in throwExceptionIntoGenerator
return g.throw(self.type, self.value, self.tb)
File "/home/matrix/synapse/lib/python3.6/site-packages/synapse/handlers/sync.py", line 313, in _wait_for_sync_for_user
sync_config, since_token, full_state=full_state
File "/home/matrix/synapse/lib/python3.6/site-packages/synapse/handlers/sync.py", line 344, in current_sync_for_user
return await self.generate_sync_result(sync_config, since_token, full_state)
File "/home/matrix/synapse/lib/python3.6/site-packages/synapse/handlers/sync.py", line 1000, in generate_sync_result
sync_result_builder, account_data_by_room
File "/home/matrix/synapse/lib/python3.6/site-packages/synapse/handlers/sync.py", line 1439, in _generate_sync_entry_for_rooms
await concurrently_execute(handle_room_entries, room_entries, 10)
File "/home/matrix/synapse/lib/python3.6/site-packages/twisted/internet/defer.py", line 1416, in _inlineCallbacks
result = result.throwExceptionIntoGenerator(g)
File "/home/matrix/synapse/lib/python3.6/site-packages/twisted/python/failure.py", line 512, in throwExceptionIntoGenerator
return g.throw(self.type, self.value, self.tb)
File "/home/matrix/synapse/lib/python3.6/site-packages/synapse/util/async_helpers.py", line 174, in _concurrently_execute_inner
await maybe_awaitable(func(next(it)))
File "/home/matrix/synapse/lib/python3.6/site-packages/synapse/handlers/sync.py", line 1434, in handle_room_entries
always_include=sync_result_builder.full_state,
File "/home/matrix/synapse/lib/python3.6/site-packages/synapse/handlers/sync.py", line 1831, in _generate_room_entry
newly_joined_room=newly_joined,
File "/home/matrix/synapse/lib/python3.6/site-packages/synapse/handlers/sync.py", line 452, in _load_filtered_recents
room_id
File "/home/matrix/synapse/lib/python3.6/site-packages/synapse/state/__init__.py", line 211, in get_current_state_ids
ret = await self.resolve_state_groups_for_events(room_id, latest_event_ids)
File "/home/matrix/synapse/lib/python3.6/site-packages/synapse/util/metrics.py", line 92, in measured_func
r = await func(self, *args, **kwargs)
File "/home/matrix/synapse/lib/python3.6/site-packages/synapse/state/__init__.py", line 436, in resolve_state_groups_for_events
state_res_store=StateResolutionStore(self.store),
File "/home/matrix/synapse/lib/python3.6/site-packages/synapse/state/__init__.py", line 578, in resolve_state_groups
state_res_store=state_res_store,
File "/home/matrix/synapse/lib/python3.6/site-packages/synapse/state/__init__.py", line 637, in resolve_events_with_store
state_res_store,
File "/home/matrix/synapse/lib/python3.6/site-packages/synapse/state/v2.py", line 101, in resolve_events_with_store
room_id, state_sets, event_map, state_res_store
File "/home/matrix/synapse/lib/python3.6/site-packages/synapse/state/v2.py", line 339, in _get_auth_chain_difference
room_id, state_sets_ids
File "/home/matrix/synapse/lib/python3.6/site-packages/synapse/storage/databases/main/event_federation.py", line 170, in get_auth_chain_difference
state_sets,
File "/home/matrix/synapse/lib/python3.6/site-packages/synapse/storage/database.py", line 664, in runInteraction
**kwargs,
File "/home/matrix/synapse/lib/python3.6/site-packages/synapse/storage/database.py", line 740, in runWithConnection
self._db_pool.runWithConnection(inner_func, *args, **kwargs)
File "/home/matrix/synapse/lib/python3.6/site-packages/twisted/python/threadpool.py", line 250, in inContext
result = inContext.theWork()
File "/home/matrix/synapse/lib/python3.6/site-packages/twisted/python/threadpool.py", line 266, in <lambda>
inContext.theWork = lambda: context.call(ctx, func, *args, **kw)
File "/home/matrix/synapse/lib/python3.6/site-packages/twisted/python/context.py", line 122, in callWithContext
return self.currentContext().callWithContext(ctx, func, *args, **kw)
File "/home/matrix/synapse/lib/python3.6/site-packages/twisted/python/context.py", line 85, in callWithContext
return func(*args,**kw)
File "/home/matrix/synapse/lib/python3.6/site-packages/twisted/enterprise/adbapi.py", line 306, in _runWithConnection
compat.reraise(excValue, excTraceback)
File "/home/matrix/synapse/lib/python3.6/site-packages/twisted/python/compat.py", line 464, in reraise
raise exception.with_traceback(traceback)
File "/home/matrix/synapse/lib/python3.6/site-packages/twisted/enterprise/adbapi.py", line 297, in _runWithConnection
result = func(conn, *args, **kw)
File "/home/matrix/synapse/lib/python3.6/site-packages/synapse/storage/database.py", line 734, in inner_func
return func(db_conn, *args, **kwargs)
File "/home/matrix/synapse/lib/python3.6/site-packages/synapse/storage/database.py", line 534, in new_transaction
r = func(cursor, *args, **kwargs)
File "/home/matrix/synapse/lib/python3.6/site-packages/synapse/storage/databases/main/event_federation.py", line 327, in _get_auth_chain_difference_using_cover_index_txn
rows = txn.execute_values(sql, args)
File "/home/matrix/synapse/lib/python3.6/site-packages/synapse/storage/database.py", line 284, in execute_values
lambda *x: execute_values(self.txn, *x, fetch=True), sql, *args
File "/home/matrix/synapse/lib/python3.6/site-packages/synapse/storage/database.py", line 314, in _do_execute
return func(sql, *args)
File "/home/matrix/synapse/lib/python3.6/site-packages/synapse/storage/database.py", line 284, in <lambda>
lambda *x: execute_values(self.txn, *x, fetch=True), sql, *args
TypeError: execute_values() got an unexpected keyword argument 'fetch'
```
According to the [psycopg2 release notes](https://www.psycopg.org/docs/news.html#what-s-new-in-psycopg-2-8), the `fetch` parameter was added to `execute_values()` function in version 2.8.
However, our dependencies list `2.7` at https://github.com/matrix-org/synapse/blob/release-v1.26.0/synapse/python_dependencies.py#L89-L90
|
1.0
|
Synapse 1.26.0 should depend on psycopg2>=2.8 - @turt2live reported that his `/sync` broke:
```
[synchrotron_1] 2021-01-21 06:41:20,454 - synapse.http.server - 83 - ERROR - GET-6- Failed handle request via 'SyncRestServlet': <XForwardedForRequest at 0x7fe0a7364e10 method='GET' uri='/_matrix/client/r0/sync?filter=76&timeout=0&since=s30055085_169441467_1384_26059860_26058150_2158_574317_12846128_152' clientproto='HTTP/1.0' site=8050>
Traceback (most recent call last):
File "/home/matrix/synapse/lib/python3.6/site-packages/twisted/internet/defer.py", line 1416, in _inlineCallbacks
result = result.throwExceptionIntoGenerator(g)
File "/home/matrix/synapse/lib/python3.6/site-packages/twisted/python/failure.py", line 512, in throwExceptionIntoGenerator
return g.throw(self.type, self.value, self.tb)
File "/home/matrix/synapse/lib/python3.6/site-packages/synapse/handlers/sync.py", line 313, in _wait_for_sync_for_user
sync_config, since_token, full_state=full_state
File "/home/matrix/synapse/lib/python3.6/site-packages/synapse/handlers/sync.py", line 344, in current_sync_for_user
return await self.generate_sync_result(sync_config, since_token, full_state)
File "/home/matrix/synapse/lib/python3.6/site-packages/synapse/handlers/sync.py", line 1000, in generate_sync_result
sync_result_builder, account_data_by_room
File "/home/matrix/synapse/lib/python3.6/site-packages/synapse/handlers/sync.py", line 1439, in _generate_sync_entry_for_rooms
await concurrently_execute(handle_room_entries, room_entries, 10)
File "/home/matrix/synapse/lib/python3.6/site-packages/twisted/internet/defer.py", line 1416, in _inlineCallbacks
result = result.throwExceptionIntoGenerator(g)
File "/home/matrix/synapse/lib/python3.6/site-packages/twisted/python/failure.py", line 512, in throwExceptionIntoGenerator
return g.throw(self.type, self.value, self.tb)
File "/home/matrix/synapse/lib/python3.6/site-packages/synapse/util/async_helpers.py", line 174, in _concurrently_execute_inner
await maybe_awaitable(func(next(it)))
File "/home/matrix/synapse/lib/python3.6/site-packages/synapse/handlers/sync.py", line 1434, in handle_room_entries
always_include=sync_result_builder.full_state,
File "/home/matrix/synapse/lib/python3.6/site-packages/synapse/handlers/sync.py", line 1831, in _generate_room_entry
newly_joined_room=newly_joined,
File "/home/matrix/synapse/lib/python3.6/site-packages/synapse/handlers/sync.py", line 452, in _load_filtered_recents
room_id
File "/home/matrix/synapse/lib/python3.6/site-packages/synapse/state/__init__.py", line 211, in get_current_state_ids
ret = await self.resolve_state_groups_for_events(room_id, latest_event_ids)
File "/home/matrix/synapse/lib/python3.6/site-packages/synapse/util/metrics.py", line 92, in measured_func
r = await func(self, *args, **kwargs)
File "/home/matrix/synapse/lib/python3.6/site-packages/synapse/state/__init__.py", line 436, in resolve_state_groups_for_events
state_res_store=StateResolutionStore(self.store),
File "/home/matrix/synapse/lib/python3.6/site-packages/synapse/state/__init__.py", line 578, in resolve_state_groups
state_res_store=state_res_store,
File "/home/matrix/synapse/lib/python3.6/site-packages/synapse/state/__init__.py", line 637, in resolve_events_with_store
state_res_store,
File "/home/matrix/synapse/lib/python3.6/site-packages/synapse/state/v2.py", line 101, in resolve_events_with_store
room_id, state_sets, event_map, state_res_store
File "/home/matrix/synapse/lib/python3.6/site-packages/synapse/state/v2.py", line 339, in _get_auth_chain_difference
room_id, state_sets_ids
File "/home/matrix/synapse/lib/python3.6/site-packages/synapse/storage/databases/main/event_federation.py", line 170, in get_auth_chain_difference
state_sets,
File "/home/matrix/synapse/lib/python3.6/site-packages/synapse/storage/database.py", line 664, in runInteraction
**kwargs,
File "/home/matrix/synapse/lib/python3.6/site-packages/synapse/storage/database.py", line 740, in runWithConnection
self._db_pool.runWithConnection(inner_func, *args, **kwargs)
File "/home/matrix/synapse/lib/python3.6/site-packages/twisted/python/threadpool.py", line 250, in inContext
result = inContext.theWork()
File "/home/matrix/synapse/lib/python3.6/site-packages/twisted/python/threadpool.py", line 266, in <lambda>
inContext.theWork = lambda: context.call(ctx, func, *args, **kw)
File "/home/matrix/synapse/lib/python3.6/site-packages/twisted/python/context.py", line 122, in callWithContext
return self.currentContext().callWithContext(ctx, func, *args, **kw)
File "/home/matrix/synapse/lib/python3.6/site-packages/twisted/python/context.py", line 85, in callWithContext
return func(*args,**kw)
File "/home/matrix/synapse/lib/python3.6/site-packages/twisted/enterprise/adbapi.py", line 306, in _runWithConnection
compat.reraise(excValue, excTraceback)
File "/home/matrix/synapse/lib/python3.6/site-packages/twisted/python/compat.py", line 464, in reraise
raise exception.with_traceback(traceback)
File "/home/matrix/synapse/lib/python3.6/site-packages/twisted/enterprise/adbapi.py", line 297, in _runWithConnection
result = func(conn, *args, **kw)
File "/home/matrix/synapse/lib/python3.6/site-packages/synapse/storage/database.py", line 734, in inner_func
return func(db_conn, *args, **kwargs)
File "/home/matrix/synapse/lib/python3.6/site-packages/synapse/storage/database.py", line 534, in new_transaction
r = func(cursor, *args, **kwargs)
File "/home/matrix/synapse/lib/python3.6/site-packages/synapse/storage/databases/main/event_federation.py", line 327, in _get_auth_chain_difference_using_cover_index_txn
rows = txn.execute_values(sql, args)
File "/home/matrix/synapse/lib/python3.6/site-packages/synapse/storage/database.py", line 284, in execute_values
lambda *x: execute_values(self.txn, *x, fetch=True), sql, *args
File "/home/matrix/synapse/lib/python3.6/site-packages/synapse/storage/database.py", line 314, in _do_execute
return func(sql, *args)
File "/home/matrix/synapse/lib/python3.6/site-packages/synapse/storage/database.py", line 284, in <lambda>
lambda *x: execute_values(self.txn, *x, fetch=True), sql, *args
TypeError: execute_values() got an unexpected keyword argument 'fetch'
```
According to the [psycopg2 release notes](https://www.psycopg.org/docs/news.html#what-s-new-in-psycopg-2-8), the `fetch` parameter was added to `execute_values()` function in version 2.8.
However, our dependencies list `2.7` at https://github.com/matrix-org/synapse/blob/release-v1.26.0/synapse/python_dependencies.py#L89-L90
|
non_test
|
synapse should depend on reported that his sync broke synapse http server error get failed handle request via syncrestservlet traceback most recent call last file home matrix synapse lib site packages twisted internet defer py line in inlinecallbacks result result throwexceptionintogenerator g file home matrix synapse lib site packages twisted python failure py line in throwexceptionintogenerator return g throw self type self value self tb file home matrix synapse lib site packages synapse handlers sync py line in wait for sync for user sync config since token full state full state file home matrix synapse lib site packages synapse handlers sync py line in current sync for user return await self generate sync result sync config since token full state file home matrix synapse lib site packages synapse handlers sync py line in generate sync result sync result builder account data by room file home matrix synapse lib site packages synapse handlers sync py line in generate sync entry for rooms await concurrently execute handle room entries room entries file home matrix synapse lib site packages twisted internet defer py line in inlinecallbacks result result throwexceptionintogenerator g file home matrix synapse lib site packages twisted python failure py line in throwexceptionintogenerator return g throw self type self value self tb file home matrix synapse lib site packages synapse util async helpers py line in concurrently execute inner await maybe awaitable func next it file home matrix synapse lib site packages synapse handlers sync py line in handle room entries always include sync result builder full state file home matrix synapse lib site packages synapse handlers sync py line in generate room entry newly joined room newly joined file home matrix synapse lib site packages synapse handlers sync py line in load filtered recents room id file home matrix synapse lib site packages synapse state init py line in get current state ids ret await self resolve state groups for events room id latest event ids file home matrix synapse lib site packages synapse util metrics py line in measured func r await func self args kwargs file home matrix synapse lib site packages synapse state init py line in resolve state groups for events state res store stateresolutionstore self store file home matrix synapse lib site packages synapse state init py line in resolve state groups state res store state res store file home matrix synapse lib site packages synapse state init py line in resolve events with store state res store file home matrix synapse lib site packages synapse state py line in resolve events with store room id state sets event map state res store file home matrix synapse lib site packages synapse state py line in get auth chain difference room id state sets ids file home matrix synapse lib site packages synapse storage databases main event federation py line in get auth chain difference state sets file home matrix synapse lib site packages synapse storage database py line in runinteraction kwargs file home matrix synapse lib site packages synapse storage database py line in runwithconnection self db pool runwithconnection inner func args kwargs file home matrix synapse lib site packages twisted python threadpool py line in incontext result incontext thework file home matrix synapse lib site packages twisted python threadpool py line in incontext thework lambda context call ctx func args kw file home matrix synapse lib site packages twisted python context py line in callwithcontext return self currentcontext callwithcontext ctx func args kw file home matrix synapse lib site packages twisted python context py line in callwithcontext return func args kw file home matrix synapse lib site packages twisted enterprise adbapi py line in runwithconnection compat reraise excvalue exctraceback file home matrix synapse lib site packages twisted python compat py line in reraise raise exception with traceback traceback file home matrix synapse lib site packages twisted enterprise adbapi py line in runwithconnection result func conn args kw file home matrix synapse lib site packages synapse storage database py line in inner func return func db conn args kwargs file home matrix synapse lib site packages synapse storage database py line in new transaction r func cursor args kwargs file home matrix synapse lib site packages synapse storage databases main event federation py line in get auth chain difference using cover index txn rows txn execute values sql args file home matrix synapse lib site packages synapse storage database py line in execute values lambda x execute values self txn x fetch true sql args file home matrix synapse lib site packages synapse storage database py line in do execute return func sql args file home matrix synapse lib site packages synapse storage database py line in lambda x execute values self txn x fetch true sql args typeerror execute values got an unexpected keyword argument fetch according to the the fetch parameter was added to execute values function in version however our dependencies list at
| 0
|
84,632
| 24,367,410,226
|
IssuesEvent
|
2022-10-03 16:13:37
|
minetest/minetest
|
https://api.github.com/repos/minetest/minetest
|
closed
|
Why -ffinite-math-only ?
|
Bug @ Build
|
Minetest is built with `-ffinite-math-only`:
https://github.com/minetest/minetest/blob/2d10fa786792a27adb4097abe8c92f36cf47e6ce/src/CMakeLists.txt#L744-L747
(Since #9682. Before that, it was `-ffast-math`.)
But it is not true that we are not dealing with inf or nan. Lua mods can, for example, always pass nan or inf as argument.
This optimisation can for example change `std::isinf(num)` to a constant `false`, which makes checking for inf, nan and the like impossible.
I'm not aware of practical bugs caused by this.
But I once got problems with it in a PR when I've tried to add a setting that allows `inf` values ([code](https://github.com/Desour/minetest/blob/102e78f9e891f30ee3d37722baed93c0bcfd52b7/src/util/time_parsing.cpp#L43)).
|
1.0
|
Why -ffinite-math-only ? - Minetest is built with `-ffinite-math-only`:
https://github.com/minetest/minetest/blob/2d10fa786792a27adb4097abe8c92f36cf47e6ce/src/CMakeLists.txt#L744-L747
(Since #9682. Before that, it was `-ffast-math`.)
But it is not true that we are not dealing with inf or nan. Lua mods can, for example, always pass nan or inf as argument.
This optimisation can for example change `std::isinf(num)` to a constant `false`, which makes checking for inf, nan and the like impossible.
I'm not aware of practical bugs caused by this.
But I once got problems with it in a PR when I've tried to add a setting that allows `inf` values ([code](https://github.com/Desour/minetest/blob/102e78f9e891f30ee3d37722baed93c0bcfd52b7/src/util/time_parsing.cpp#L43)).
|
non_test
|
why ffinite math only minetest is built with ffinite math only since before that it was ffast math but it is not true that we are not dealing with inf or nan lua mods can for example always pass nan or inf as argument this optimisation can for example change std isinf num to a constant false which makes checking for inf nan and the like impossible i m not aware of practical bugs caused by this but i once got problems with it in a pr when i ve tried to add a setting that allows inf values
| 0
|
128,741
| 10,550,358,082
|
IssuesEvent
|
2019-10-03 10:51:57
|
WordPress/gutenberg
|
https://api.github.com/repos/WordPress/gutenberg
|
closed
|
Min-width needed on table columns
|
Needs Testing [Block] Table
|
**Describe the bug**
Hi there, I was trying to add a table of data with three columns. The first two columns contain just a word or two, and the third column contains a full sentence or two, this has caused the first two columns to collapse to the point where they just have two or three letters stacked on top of each other.
Things look OK on the front end of the site, but it's borderline un-usable on the backend. (see screenshot below)
The only way to fix this is to switch to "fixed width" but then you can't adjust the width of the columns. Seems like that feature is being worked on in #9801
**To Reproduce**
Steps to reproduce the behavior:
1. Create a new post
2. Create a new Table block with three columns and a few rows
3. Just put a few characters in the first two columns, and then put a few sentences in the third column
**Expected behavior**
I expect the columns with only a few characters to collapse to a certain point, the characters shouldn't start stacking on top one another.
**Screenshots**

**Desktop (please complete the following information):**
- OS: OSX 10.13.5
- Browser: Chrome 70.0.3538.67
- Plugin Version: 4.1.1
- WordPress Version: 4.9.8
|
1.0
|
Min-width needed on table columns - **Describe the bug**
Hi there, I was trying to add a table of data with three columns. The first two columns contain just a word or two, and the third column contains a full sentence or two, this has caused the first two columns to collapse to the point where they just have two or three letters stacked on top of each other.
Things look OK on the front end of the site, but it's borderline un-usable on the backend. (see screenshot below)
The only way to fix this is to switch to "fixed width" but then you can't adjust the width of the columns. Seems like that feature is being worked on in #9801
**To Reproduce**
Steps to reproduce the behavior:
1. Create a new post
2. Create a new Table block with three columns and a few rows
3. Just put a few characters in the first two columns, and then put a few sentences in the third column
**Expected behavior**
I expect the columns with only a few characters to collapse to a certain point, the characters shouldn't start stacking on top one another.
**Screenshots**

**Desktop (please complete the following information):**
- OS: OSX 10.13.5
- Browser: Chrome 70.0.3538.67
- Plugin Version: 4.1.1
- WordPress Version: 4.9.8
|
test
|
min width needed on table columns describe the bug hi there i was trying to add a table of data with three columns the first two columns contain just a word or two and the third column contains a full sentence or two this has caused the first two columns to collapse to the point where they just have two or three letters stacked on top of each other things look ok on the front end of the site but it s borderline un usable on the backend see screenshot below the only way to fix this is to switch to fixed width but then you can t adjust the width of the columns seems like that feature is being worked on in to reproduce steps to reproduce the behavior create a new post create a new table block with three columns and a few rows just put a few characters in the first two columns and then put a few sentences in the third column expected behavior i expect the columns with only a few characters to collapse to a certain point the characters shouldn t start stacking on top one another screenshots desktop please complete the following information os osx browser chrome plugin version wordpress version
| 1
|
29,029
| 4,467,252,157
|
IssuesEvent
|
2016-08-25 03:23:49
|
dereklokgithub/PSM-UAT
|
https://api.github.com/repos/dereklokgithub/PSM-UAT
|
closed
|
Android - Product page description (stationary)
|
Ready for retest
|
Please put the available colors that are outside the screen to the second line
eg

|
1.0
|
Android - Product page description (stationary) - Please put the available colors that are outside the screen to the second line
eg

|
test
|
android product page description stationary please put the available colors that are outside the screen to the second line eg
| 1
|
221,334
| 24,612,953,621
|
IssuesEvent
|
2022-10-15 01:11:03
|
Symbolk/SmartCommitCore
|
https://api.github.com/repos/Symbolk/SmartCommitCore
|
opened
|
jgrapht-io-1.3.0.jar: 1 vulnerabilities (highest severity is: 5.5)
|
security vulnerability
|
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>jgrapht-io-1.3.0.jar</b></p></summary>
<p></p>
<p>Path to dependency file: /build.gradle</p>
<p>Path to vulnerable library: /home/wss-scanner/.gradle/caches/modules-2/files-2.1/org.apache.commons/commons-text/1.5/e9054ac321b9240440462532991c1d29d517c82d/commons-text-1.5.jar</p>
<p>
</details>
## Vulnerabilities
| CVE | Severity | <img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS | Dependency | Type | Fixed in | Remediation Available |
| ------------- | ------------- | ----- | ----- | ----- | --- | --- |
| [CVE-2022-42889](https://vuln.whitesourcesoftware.com/vulnerability/CVE-2022-42889) | <img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Medium | 5.5 | commons-text-1.5.jar | Transitive | N/A | ❌ |
## Details
<details>
<summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> CVE-2022-42889</summary>
### Vulnerable Library - <b>commons-text-1.5.jar</b></p>
<p>Apache Commons Text is a library focused on algorithms working on strings.</p>
<p>Library home page: <a href="http://commons.apache.org/proper/commons-text">http://commons.apache.org/proper/commons-text</a></p>
<p>Path to dependency file: /build.gradle</p>
<p>Path to vulnerable library: /home/wss-scanner/.gradle/caches/modules-2/files-2.1/org.apache.commons/commons-text/1.5/e9054ac321b9240440462532991c1d29d517c82d/commons-text-1.5.jar</p>
<p>
Dependency Hierarchy:
- jgrapht-io-1.3.0.jar (Root Library)
- :x: **commons-text-1.5.jar** (Vulnerable Library)
<p>Found in base branch: <b>main</b></p>
</p>
<p></p>
### Vulnerability Details
<p>
Apache Commons Text performs variable interpolation, allowing properties to be dynamically evaluated and expanded. The standard format for interpolation is "${prefix:name}", where "prefix" is used to locate an instance of org.apache.commons.text.lookup.StringLookup that performs the interpolation. Starting with version 1.5 and continuing through 1.9, the set of default Lookup instances included interpolators that could result in arbitrary code execution or contact with remote servers. These lookups are: - "script" - execute expressions using the JVM script execution engine (javax.script) - "dns" - resolve dns records - "url" - load values from urls, including from remote servers Applications using the interpolation defaults in the affected versions may be vulnerable to remote code execution or unintentional contact with remote servers if untrusted configuration values are used. Users are recommended to upgrade to Apache Commons Text 1.10.0, which disables the problematic interpolators by default.
<p>Publish Date: 2022-10-13
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2022-42889>CVE-2022-42889</a></p>
</p>
<p></p>
### CVSS 3 Score Details (<b>5.5</b>)
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: Required
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
<p></p>
### Suggested Fix
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://www.openwall.com/lists/oss-security/2022/10/13/4">https://www.openwall.com/lists/oss-security/2022/10/13/4</a></p>
<p>Release Date: 2022-10-13</p>
<p>Fix Resolution: org.apache.commons:commons-text:1.10.0</p>
</p>
<p></p>
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
</details>
|
True
|
jgrapht-io-1.3.0.jar: 1 vulnerabilities (highest severity is: 5.5) - <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>jgrapht-io-1.3.0.jar</b></p></summary>
<p></p>
<p>Path to dependency file: /build.gradle</p>
<p>Path to vulnerable library: /home/wss-scanner/.gradle/caches/modules-2/files-2.1/org.apache.commons/commons-text/1.5/e9054ac321b9240440462532991c1d29d517c82d/commons-text-1.5.jar</p>
<p>
</details>
## Vulnerabilities
| CVE | Severity | <img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS | Dependency | Type | Fixed in | Remediation Available |
| ------------- | ------------- | ----- | ----- | ----- | --- | --- |
| [CVE-2022-42889](https://vuln.whitesourcesoftware.com/vulnerability/CVE-2022-42889) | <img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Medium | 5.5 | commons-text-1.5.jar | Transitive | N/A | ❌ |
## Details
<details>
<summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> CVE-2022-42889</summary>
### Vulnerable Library - <b>commons-text-1.5.jar</b></p>
<p>Apache Commons Text is a library focused on algorithms working on strings.</p>
<p>Library home page: <a href="http://commons.apache.org/proper/commons-text">http://commons.apache.org/proper/commons-text</a></p>
<p>Path to dependency file: /build.gradle</p>
<p>Path to vulnerable library: /home/wss-scanner/.gradle/caches/modules-2/files-2.1/org.apache.commons/commons-text/1.5/e9054ac321b9240440462532991c1d29d517c82d/commons-text-1.5.jar</p>
<p>
Dependency Hierarchy:
- jgrapht-io-1.3.0.jar (Root Library)
- :x: **commons-text-1.5.jar** (Vulnerable Library)
<p>Found in base branch: <b>main</b></p>
</p>
<p></p>
### Vulnerability Details
<p>
Apache Commons Text performs variable interpolation, allowing properties to be dynamically evaluated and expanded. The standard format for interpolation is "${prefix:name}", where "prefix" is used to locate an instance of org.apache.commons.text.lookup.StringLookup that performs the interpolation. Starting with version 1.5 and continuing through 1.9, the set of default Lookup instances included interpolators that could result in arbitrary code execution or contact with remote servers. These lookups are: - "script" - execute expressions using the JVM script execution engine (javax.script) - "dns" - resolve dns records - "url" - load values from urls, including from remote servers Applications using the interpolation defaults in the affected versions may be vulnerable to remote code execution or unintentional contact with remote servers if untrusted configuration values are used. Users are recommended to upgrade to Apache Commons Text 1.10.0, which disables the problematic interpolators by default.
<p>Publish Date: 2022-10-13
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2022-42889>CVE-2022-42889</a></p>
</p>
<p></p>
### CVSS 3 Score Details (<b>5.5</b>)
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: Required
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
<p></p>
### Suggested Fix
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://www.openwall.com/lists/oss-security/2022/10/13/4">https://www.openwall.com/lists/oss-security/2022/10/13/4</a></p>
<p>Release Date: 2022-10-13</p>
<p>Fix Resolution: org.apache.commons:commons-text:1.10.0</p>
</p>
<p></p>
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
</details>
|
non_test
|
jgrapht io jar vulnerabilities highest severity is vulnerable library jgrapht io jar path to dependency file build gradle path to vulnerable library home wss scanner gradle caches modules files org apache commons commons text commons text jar vulnerabilities cve severity cvss dependency type fixed in remediation available medium commons text jar transitive n a details cve vulnerable library commons text jar apache commons text is a library focused on algorithms working on strings library home page a href path to dependency file build gradle path to vulnerable library home wss scanner gradle caches modules files org apache commons commons text commons text jar dependency hierarchy jgrapht io jar root library x commons text jar vulnerable library found in base branch main vulnerability details apache commons text performs variable interpolation allowing properties to be dynamically evaluated and expanded the standard format for interpolation is prefix name where prefix is used to locate an instance of org apache commons text lookup stringlookup that performs the interpolation starting with version and continuing through the set of default lookup instances included interpolators that could result in arbitrary code execution or contact with remote servers these lookups are script execute expressions using the jvm script execution engine javax script dns resolve dns records url load values from urls including from remote servers applications using the interpolation defaults in the affected versions may be vulnerable to remote code execution or unintentional contact with remote servers if untrusted configuration values are used users are recommended to upgrade to apache commons text which disables the problematic interpolators by default publish date url a href cvss score details base score metrics exploitability metrics attack vector local attack complexity low privileges required none user interaction required scope unchanged impact metrics confidentiality impact none 
integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution org apache commons commons text step up your open source security game with mend
| 0
|
97,549
| 8,659,599,684
|
IssuesEvent
|
2018-11-28 06:50:57
|
shahkhan40/shantestrep
|
https://api.github.com/repos/shahkhan40/shantestrep
|
closed
|
testing FX841 : ApiV1TestSuitesIdTestSuiteSearchGetQueryParamPageEmptyValue
|
testing FX841
|
Project : testing FX841
Job : UAT
Env : UAT
Region : US_WEST
Result : fail
Status Code : 404
Headers : {X-Content-Type-Options=[nosniff], X-XSS-Protection=[1; mode=block], Cache-Control=[no-cache, no-store, max-age=0, must-revalidate], Pragma=[no-cache], Expires=[0], X-Frame-Options=[DENY], Set-Cookie=[SESSION=NTVmOTg0M2ItNmYxNC00MmQzLTk4MmItZWVjNTBiMDJlMDdl; Path=/; HttpOnly], Content-Type=[application/json;charset=UTF-8], Transfer-Encoding=[chunked], Date=[Wed, 28 Nov 2018 06:49:14 GMT]}
Endpoint : http://13.56.210.25/api/v1/api/v1/test-suites/VZnkGNAm/test-suite/search?page=
Request :
Response :
{
"timestamp" : "2018-11-28T06:49:14.870+0000",
"status" : 404,
"error" : "Not Found",
"message" : "No message available",
"path" : "/api/v1/api/v1/test-suites/VZnkGNAm/test-suite/search"
}
Logs :
Assertion [@StatusCode != 401] resolved-to [404 != 401] result [Passed]Assertion [@StatusCode != 500] resolved-to [404 != 500] result [Passed]Assertion [@StatusCode != 404] resolved-to [404 != 404] result [Failed]Assertion [@StatusCode != 200] resolved-to [404 != 200] result [Passed]
--- FX Bot ---
|
1.0
|
testing FX841 : ApiV1TestSuitesIdTestSuiteSearchGetQueryParamPageEmptyValue - Project : testing FX841
Job : UAT
Env : UAT
Region : US_WEST
Result : fail
Status Code : 404
Headers : {X-Content-Type-Options=[nosniff], X-XSS-Protection=[1; mode=block], Cache-Control=[no-cache, no-store, max-age=0, must-revalidate], Pragma=[no-cache], Expires=[0], X-Frame-Options=[DENY], Set-Cookie=[SESSION=NTVmOTg0M2ItNmYxNC00MmQzLTk4MmItZWVjNTBiMDJlMDdl; Path=/; HttpOnly], Content-Type=[application/json;charset=UTF-8], Transfer-Encoding=[chunked], Date=[Wed, 28 Nov 2018 06:49:14 GMT]}
Endpoint : http://13.56.210.25/api/v1/api/v1/test-suites/VZnkGNAm/test-suite/search?page=
Request :
Response :
{
"timestamp" : "2018-11-28T06:49:14.870+0000",
"status" : 404,
"error" : "Not Found",
"message" : "No message available",
"path" : "/api/v1/api/v1/test-suites/VZnkGNAm/test-suite/search"
}
Logs :
Assertion [@StatusCode != 401] resolved-to [404 != 401] result [Passed]Assertion [@StatusCode != 500] resolved-to [404 != 500] result [Passed]Assertion [@StatusCode != 404] resolved-to [404 != 404] result [Failed]Assertion [@StatusCode != 200] resolved-to [404 != 200] result [Passed]
--- FX Bot ---
|
test
|
testing project testing job uat env uat region us west result fail status code headers x content type options x xss protection cache control pragma expires x frame options set cookie content type transfer encoding date endpoint request response timestamp status error not found message no message available path api api test suites vznkgnam test suite search logs assertion resolved to result assertion resolved to result assertion resolved to result assertion resolved to result fx bot
| 1
|
150,071
| 11,945,375,152
|
IssuesEvent
|
2020-04-03 05:34:59
|
wcvendors/wcvendors
|
https://api.github.com/repos/wcvendors/wcvendors
|
closed
|
Test WooCommerce 4.0 RC1
|
Needs Further Testing
|
WooCommerce 4.0 is slated for release soon and we need to confirm that the current code base is not going to have any issues.
https://woocommerce.wordpress.com/2020/02/26/woocommerce-4-0-release-candidate-is-now-available/
- [ ] Vendor Registration
- [ ] Order processing
- [ ] Commissions
- [ ] Admin functions
|
1.0
|
Test WooCommerce 4.0 RC1 - WooCommerce 4.0 is slated for release soon and we need to confirm that the current code base is not going to have any issues.
https://woocommerce.wordpress.com/2020/02/26/woocommerce-4-0-release-candidate-is-now-available/
- [ ] Vendor Registration
- [ ] Order processing
- [ ] Commissions
- [ ] Admin functions
|
test
|
test woocommerce woocommerce is slated for release soon and we need to confirm that the current code base is not going to have any issues vendor registration order processing commissions admin functions
| 1
|
34,733
| 4,952,510,963
|
IssuesEvent
|
2016-12-01 12:15:22
|
halestudio/hale
|
https://api.github.com/repos/halestudio/hale
|
closed
|
Add MS SQL Server Reader
|
hale-support io prio-1-must to be tested
|
Add a schema and an instance reader to read from MS SQL Server databases. Support Geometry and Geography data types, as well as arrays. Read out as many constraints as possible from the schema. In case of Arcs, interpolate them as described in #181.
DoD:
- Automatically test with MS SQL Server 2012 or later
|
1.0
|
Add MS SQL Server Reader - Add a schema and an instance reader to read from MS SQL Server databases. Support Geometry and Geography data types, as well as arrays. Read out as many constraints as possible from the schema. In case of Arcs, interpolate them as described in #181.
DoD:
- Automatically test with MS SQL Server 2012 or later
|
test
|
add ms sql server reader add a schema and an instance reader to read from ms sql server databases support geometry and geography data types as well as arrays read out as many constraints as possible from the schema in case of arcs interpolate them as described in dod automatically test with ms sql server or later
| 1
|
4,730
| 7,194,144,135
|
IssuesEvent
|
2018-02-04 00:27:09
|
ismenc/Boo-King
|
https://api.github.com/repos/ismenc/Boo-King
|
opened
|
Cambios que aplicar
|
ACCDA requirement
|
En Stack -> invertir el if
informacion detalle de arrrendador corregir el .equals
en utilidades, el ultimo metodo avanzado ponerlo con printf y que acabe así:
%.2f\n", mediaLibrosPorArrendador);
|
1.0
|
Cambios que aplicar - En Stack -> invertir el if
informacion detalle de arrrendador corregir el .equals
en utilidades, el ultimo metodo avanzado ponerlo con printf y que acabe así:
%.2f\n", mediaLibrosPorArrendador);
|
non_test
|
cambios que aplicar en stack invertir el if informacion detalle de arrrendador corregir el equals en utilidades el ultimo metodo avanzado ponerlo con printf y que acabe así n medialibrosporarrendador
| 0
|
50,116
| 6,060,743,246
|
IssuesEvent
|
2017-06-14 03:12:43
|
1STi/EXBO
|
https://api.github.com/repos/1STi/EXBO
|
opened
|
CT025 - Login again
|
Test Case
|
Pre-conditions: Logged in
**Step 1**
to show me my last active Exbo on the main screen.
_Expected result_
Having previously logged
|
1.0
|
CT025 - Login again - Pre-conditions: Logged in
**Step 1**
to show me my last active Exbo on the main screen.
_Expected result_
Having previously logged
|
test
|
login again pre conditions logged in step to show me my last active exbo on the main screen expected result having previously logged
| 1
|
617,444
| 19,350,838,302
|
IssuesEvent
|
2021-12-15 15:29:45
|
Grasslands-2/GrazeScape
|
https://api.github.com/repos/Grasslands-2/GrazeScape
|
closed
|
Set Field Defaults for Pastures
|
High Priority
|
Fields being collected as pastures need defaults to make sure they wont throw errors when models are run. Set this up in the fieldgrid.js file.
|
1.0
|
Set Field Defaults for Pastures - Fields being collected as pastures need defaults to make sure they wont throw errors when models are run. Set this up in the fieldgrid.js file.
|
non_test
|
set field defaults for pastures fields being collected as pastures need defaults to make sure they wont throw errors when models are run set this up in the fieldgrid js file
| 0
|
336,131
| 30,121,135,948
|
IssuesEvent
|
2023-06-30 15:15:05
|
cockroachdb/cockroach
|
https://api.github.com/repos/cockroachdb/cockroach
|
closed
|
pkg/server/systemconfigwatcher/systemconfigwatcher_test: TestCache failed
|
C-test-failure O-robot branch-master T-kv
|
pkg/server/systemconfigwatcher/systemconfigwatcher_test.TestCache [failed](https://teamcity.cockroachdb.com/buildConfiguration/Cockroach_Nightlies_StressBazel/5540903?buildTab=log) with [artifacts](https://teamcity.cockroachdb.com/buildConfiguration/Cockroach_Nightlies_StressBazel/5540903?buildTab=artifacts#/) on master @ [11408729c7835deda4e6ea90f47a2df5d1a7f57f](https://github.com/cockroachdb/cockroach/commits/11408729c7835deda4e6ea90f47a2df5d1a7f57f):
```
=== RUN TestCache/system
```
<p>Parameters: <code>TAGS=bazel,gss</code>
</p>
<details><summary>Help</summary>
<p>
See also: [How To Investigate a Go Test Failure \(internal\)](https://cockroachlabs.atlassian.net/l/c/HgfXfJgM)
</p>
</details>
<sub>
[This test on roachdash](https://roachdash.crdb.dev/?filter=status:open%20t:.*TestCache.*&sort=title+created&display=lastcommented+project) | [Improve this report!](https://github.com/cockroachdb/cockroach/tree/master/pkg/cmd/internal/issues)
</sub>
Jira issue: CRDB-16957
|
1.0
|
pkg/server/systemconfigwatcher/systemconfigwatcher_test: TestCache failed - pkg/server/systemconfigwatcher/systemconfigwatcher_test.TestCache [failed](https://teamcity.cockroachdb.com/buildConfiguration/Cockroach_Nightlies_StressBazel/5540903?buildTab=log) with [artifacts](https://teamcity.cockroachdb.com/buildConfiguration/Cockroach_Nightlies_StressBazel/5540903?buildTab=artifacts#/) on master @ [11408729c7835deda4e6ea90f47a2df5d1a7f57f](https://github.com/cockroachdb/cockroach/commits/11408729c7835deda4e6ea90f47a2df5d1a7f57f):
```
=== RUN TestCache/system
```
<p>Parameters: <code>TAGS=bazel,gss</code>
</p>
<details><summary>Help</summary>
<p>
See also: [How To Investigate a Go Test Failure \(internal\)](https://cockroachlabs.atlassian.net/l/c/HgfXfJgM)
</p>
</details>
<sub>
[This test on roachdash](https://roachdash.crdb.dev/?filter=status:open%20t:.*TestCache.*&sort=title+created&display=lastcommented+project) | [Improve this report!](https://github.com/cockroachdb/cockroach/tree/master/pkg/cmd/internal/issues)
</sub>
Jira issue: CRDB-16957
|
test
|
pkg server systemconfigwatcher systemconfigwatcher test testcache failed pkg server systemconfigwatcher systemconfigwatcher test testcache with on master run testcache system parameters tags bazel gss help see also jira issue crdb
| 1
|
150,712
| 13,356,332,829
|
IssuesEvent
|
2020-08-31 08:00:08
|
spring-projects/spring-boot
|
https://api.github.com/repos/spring-projects/spring-boot
|
closed
|
Document how to use spring.factories to add auto-configuration to a test slice
|
type: documentation
|
#6001 and #6335 made it possible to add auto-configuration to a test slice by listing it in `spring.factories` but it was only documented in the javadoc of `@ImportAutoConfiguration`. Prompted by a discussion on Gitter with @jnizet, we should also mention it in the reference documentation. The ["Additional Auto-configuration and Slicing"](https://docs.spring.io/spring-boot/docs/2.3.1.RELEASE/reference/htmlsingle/#boot-features-testing-spring-boot-applications-testing-auto-configured-additional-auto-config) section is probably the right place for it.
|
1.0
|
Document how to use spring.factories to add auto-configuration to a test slice - #6001 and #6335 made it possible to add auto-configuration to a test slice by listing it in `spring.factories` but it was only documented in the javadoc of `@ImportAutoConfiguration`. Prompted by a discussion on Gitter with @jnizet, we should also mention it in the reference documentation. The ["Additional Auto-configuration and Slicing"](https://docs.spring.io/spring-boot/docs/2.3.1.RELEASE/reference/htmlsingle/#boot-features-testing-spring-boot-applications-testing-auto-configured-additional-auto-config) section is probably the right place for it.
|
non_test
|
document how to use spring factories to add auto configuration to a test slice and made it possible to add auto configuration to a test slice by listing it in spring factories but it was only documented in the javadoc of importautoconfiguration prompted by a discussion on gitter with jnizet we should also mention it in the reference documentation the section is probably the right place for it
| 0
|
268,421
| 23,368,048,596
|
IssuesEvent
|
2022-08-10 17:05:53
|
mehah/otclient
|
https://api.github.com/repos/mehah/otclient
|
closed
|
Item Animations Bug
|
bug Priority: Low Status: Pending Test Type: Bug
|
### Priority
Low
### Area
- [ ] Data
- [X] Source
- [ ] Docker
- [ ] Other
### What happened?
basically, depending on its position in relation to the item, it ends up not drawing in 32x32 and being a small square (I haven't tested it with other items, I just noticed this detail with the portal and I've come to inform you)
tested using release (main) with the latest commits applied.
tested item properties:
https://user-images.githubusercontent.com/86809689/183529537-0c90f3b7-ac2e-4312-8fa5-a97449ef5a9f.mp4
bug demo:
https://user-images.githubusercontent.com/86809689/183529539-2b9c741c-a5c7-4f80-805a-9536e8e08478.mp4
### What OS are you seeing the problem on?
Windows
### Code of Conduct
- [X] I agree to follow this project's Code of Conduct
|
1.0
|
Item Animations Bug - ### Priority
Low
### Area
- [ ] Data
- [X] Source
- [ ] Docker
- [ ] Other
### What happened?
basically, depending on its position in relation to the item, it ends up not drawing in 32x32 and being a small square (I haven't tested it with other items, I just noticed this detail with the portal and I've come to inform you)
tested using release (main) with the latest commits applied.
tested item properties:
https://user-images.githubusercontent.com/86809689/183529537-0c90f3b7-ac2e-4312-8fa5-a97449ef5a9f.mp4
bug demo:
https://user-images.githubusercontent.com/86809689/183529539-2b9c741c-a5c7-4f80-805a-9536e8e08478.mp4
### What OS are you seeing the problem on?
Windows
### Code of Conduct
- [X] I agree to follow this project's Code of Conduct
|
test
|
item animations bug priority low area data source docker other what happened basically depending on its position in relation to the item it ends up not drawing in and being a small square i haven t tested it with other items i just noticed this detail with the portal and i ve come to inform you tested using release main with the latest commits applied tested item properties bug demo what os are you seeing the problem on windows code of conduct i agree to follow this project s code of conduct
| 1
|
77,739
| 7,601,328,309
|
IssuesEvent
|
2018-04-28 12:17:32
|
fpco/fpco-salt-formula
|
https://api.github.com/repos/fpco/fpco-salt-formula
|
opened
|
multi-host test env on AWS
|
quality assurance test
|
Use the AWS Foundation Terraform modules to build out a super simple, multi-host test env that runs on AWS.
|
1.0
|
multi-host test env on AWS - Use the AWS Foundation Terraform modules to build out a super simple, multi-host test env that runs on AWS.
|
test
|
multi host test env on aws use the aws foundation terraform modules to build out a super simple multi host test env that runs on aws
| 1
|
299,934
| 22,633,686,236
|
IssuesEvent
|
2022-06-30 16:45:15
|
devonfw/devon4j
|
https://api.github.com/repos/devonfw/devon4j
|
closed
|
modernization and alignment with devon4quarkus
|
enhancement documentation cloud
|
We are in the process to offer two general options in the Java stack of devonfw:
* spring-boot
* quarkus
The current `devon4j` modules will remain the `spring-boot` specific libraries. We will fully support and maintain them also in the future.
In case we will create new modules/libraries for `quarkus` we will create those in https://github.com/devonfw/devon4quarkus
In the documentation we have two options:
1. Create and maintain a new and completely independet documentation for devon4quarkus and keep the `devon4j` documentation untouched as "devon4spring" documentation.
2. Adopt the existing `devon4j` documentation as a general documentation of the Java stack in devonfw. Separate options for spring/spring-boot and quarkus where things differ.
Both options have their pros and cons.
However, IMHO on the long run option 1. is better, because:
* It is reducing redundancies (~50% of the content with coding conventions, implementing a REST service with JAX-RS, etc. will be identical).
* Therefore maintenance effort is significantly reduced with 1.
* When searching for a specific topic with 1. you will typically get one hit for Java where as with 2. you will always get two hits even if both hits have the same content.
Please note that for the end-user we need a new and better way to approach the documentation anyway.
A user should choose for technology such as `C#` or `Java` on our devonfw.com website. When `Java` is selected we should give a brief introduction about the two options spring and quarkus with some pros and cons. From these two options you should be guided to the details via simple link to some nice summary guides explaining spring-boot and quarkus approach in Java stack of devonfw.
Therefore I create this issue to link first PRs in extending the `devon4j` documentation accordingly.
For the record: We can still have two separate PDFs generated for spring (classic devon4j) and quarkus (devon4quarkus) if we want (even when going for option 1.).
|
1.0
|
modernization and alignment with devon4quarkus - We are in the process to offer two general options in the Java stack of devonfw:
* spring-boot
* quarkus
The current `devon4j` modules will remain the `spring-boot` specific libraries. We will fully support and maintain them also in the future.
In case we will create new modules/libraries for `quarkus` we will create those in https://github.com/devonfw/devon4quarkus
In the documentation we have two options:
1. Create and maintain a new and completely independet documentation for devon4quarkus and keep the `devon4j` documentation untouched as "devon4spring" documentation.
2. Adopt the existing `devon4j` documentation as a general documentation of the Java stack in devonfw. Separate options for spring/spring-boot and quarkus where things differ.
Both options have their pros and cons.
However, IMHO on the long run option 1. is better, because:
* It is reducing redundancies (~50% of the content with coding conventions, implementing a REST service with JAX-RS, etc. will be identical).
* Therefore maintenance effort is significantly reduced with 1.
* When searching for a specific topic with 1. you will typically get one hit for Java where as with 2. you will always get two hits even if both hits have the same content.
Please note that for the end-user we need a new and better way to approach the documentation anyway.
A user should choose for technology such as `C#` or `Java` on our devonfw.com website. When `Java` is selected we should give a brief introduction about the two options spring and quarkus with some pros and cons. From these two options you should be guided to the details via simple link to some nice summary guides explaining spring-boot and quarkus approach in Java stack of devonfw.
Therefore I create this issue to link first PRs in extending the `devon4j` documentation accordingly.
For the record: We can still have two separate PDFs generated for spring (classic devon4j) and quarkus (devon4quarkus) if we want (even when going for option 1.).
|
non_test
|
modernization and alignment with we are in the process to offer two general options in the java stack of devonfw spring boot quarkus the current modules will remain the spring boot specific libraries we will fully support and maintain them also in the future in case we will create new modules libraries for quarkus we will create those in in the documentation we have two options create and maintain a new and completely independet documentation for and keep the documentation untouched as documentation adopt the existing documentation as a general documentation of the java stack in devonfw separate options for spring spring boot and quarkus where things differ both options have their pros and cons however imho on the long run option is better because it is reducing redundancies of the content with coding conventions implementing a rest service with jax rs etc will be identical therefore maintenance effort is significantly reduced with when searching for a specific topic with you will typically get one hit for java where as with you will always get two hits even if both hits have the same content please note that for the end user we need a new and better way to approach the documentation anyway a user should choose for technology such as c or java on our devonfw com website when java is selected we should give a brief introduction about the two options spring and quarkus with some pros and cons from these two options you should be guided to the details via simple link to some nice summary guides explaining spring boot and quarkus approach in java stack of devonfw therefore i create this issue to link first prs in extending the documentation accordingly for the record we can still have two separate pdfs generated for spring classic and quarkus if we want even when going for option
| 0
|
33,246
| 7,687,784,539
|
IssuesEvent
|
2018-05-17 07:17:50
|
eamodio/vscode-gitlens
|
https://api.github.com/repos/eamodio/vscode-gitlens
|
closed
|
"Running the contributed command:'gitlens.diffWithPrevious' failed." with keybinding not working
|
bug vscode issue
|
<!--
If you are encountering an issue that says `See output channel for more details`, please enable output channel logging by setting `"gitlens.outputLevel": "verbose"` in your settings.json. This will enable logging to the GitLens channel in the Output pane. Once enabled, please attempt to reproduce the issue (if possible) and attach the relevant log lines from the GitLens channel.
-->
- GitLens Version: 8.2.4
- VSCode Version: Version 1.24.0-insider (1.24.0-insider)
- OS Version: 10.13.4
Steps to Reproduce:
1. Disable all extensions, run it with the insiders build with only Gitlens enabled.
2. In your settings.json only keep the `"gitlens.keymap": "alternate",` setting.
2. Try to run the keyboard shortcut `alt+,`
3. See the bug
|
1.0
|
"Running the contributed command:'gitlens.diffWithPrevious' failed." with keybinding not working - <!--
If you are encountering an issue that says `See output channel for more details`, please enable output channel logging by setting `"gitlens.outputLevel": "verbose"` in your settings.json. This will enable logging to the GitLens channel in the Output pane. Once enabled, please attempt to reproduce the issue (if possible) and attach the relevant log lines from the GitLens channel.
-->
- GitLens Version: 8.2.4
- VSCode Version: Version 1.24.0-insider (1.24.0-insider)
- OS Version: 10.13.4
Steps to Reproduce:
1. Disable all extensions and run the Insiders build with only GitLens enabled.
2. In your settings.json, keep only the `"gitlens.keymap": "alternate",` setting.
3. Try to run the keyboard shortcut `alt+,`.
4. See the bug.
|
non_test
|
running the contributed command gitlens diffwithprevious failed with keybinding not working if you are encountering an issue that says see output channel for more details please enable output channel logging by setting gitlens outputlevel verbose in your settings json this will enable logging to the gitlens channel in the output pane once enabled please attempt to reproduce the issue if possible and attach the relevant log lines from the gitlens channel gitlens version vscode version version insider insider os version steps to reproduce disable all extensions run it with the insiders build with only gitlens enabled in your settings json only keep the gitlens keymap alternate setting try to run the keyboard shortcut alt see the bug
| 0
|
107,628
| 16,761,613,040
|
IssuesEvent
|
2021-06-13 22:31:40
|
gms-ws-demo/nibrs
|
https://api.github.com/repos/gms-ws-demo/nibrs
|
closed
|
CVE-2019-20330 (High) detected in multiple libraries - autoclosed
|
security vulnerability
|
## CVE-2019-20330 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>jackson-databind-2.8.0.jar</b>, <b>jackson-databind-2.9.8.jar</b>, <b>jackson-databind-2.9.5.jar</b>, <b>jackson-databind-2.8.10.jar</b>, <b>jackson-databind-2.9.6.jar</b></p></summary>
<p>
<details><summary><b>jackson-databind-2.8.0.jar</b></p></summary>
<p>General data-binding functionality for Jackson: works on core streaming API</p>
<p>Library home page: <a href="http://github.com/FasterXML/jackson">http://github.com/FasterXML/jackson</a></p>
<p>Path to dependency file: nibrs/tools/nibrs-common/pom.xml</p>
<p>Path to vulnerable library: /home/wss-scanner/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.8.0/jackson-databind-2.8.0.jar</p>
<p>
Dependency Hierarchy:
- tika-parsers-1.18.jar (Root Library)
- :x: **jackson-databind-2.8.0.jar** (Vulnerable Library)
</details>
<details><summary><b>jackson-databind-2.9.8.jar</b></p></summary>
<p>General data-binding functionality for Jackson: works on core streaming API</p>
<p>Library home page: <a href="http://github.com/FasterXML/jackson">http://github.com/FasterXML/jackson</a></p>
<p>Path to dependency file: nibrs/tools/nibrs-summary-report-common/pom.xml</p>
<p>Path to vulnerable library: /home/wss-scanner/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.9.8/jackson-databind-2.9.8.jar</p>
<p>
Dependency Hierarchy:
- spring-boot-starter-web-2.1.5.RELEASE.jar (Root Library)
- spring-boot-starter-json-2.1.5.RELEASE.jar
- :x: **jackson-databind-2.9.8.jar** (Vulnerable Library)
</details>
<details><summary><b>jackson-databind-2.9.5.jar</b></p></summary>
<p>General data-binding functionality for Jackson: works on core streaming API</p>
<p>Library home page: <a href="http://github.com/FasterXML/jackson">http://github.com/FasterXML/jackson</a></p>
<p>Path to dependency file: nibrs/tools/nibrs-flatfile/pom.xml</p>
<p>Path to vulnerable library: /home/wss-scanner/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.9.5/jackson-databind-2.9.5.jar,/home/wss-scanner/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.9.5/jackson-databind-2.9.5.jar</p>
<p>
Dependency Hierarchy:
- tika-parsers-1.18.jar (Root Library)
- :x: **jackson-databind-2.9.5.jar** (Vulnerable Library)
</details>
<details><summary><b>jackson-databind-2.8.10.jar</b></p></summary>
<p>General data-binding functionality for Jackson: works on core streaming API</p>
<p>Library home page: <a href="http://github.com/FasterXML/jackson">http://github.com/FasterXML/jackson</a></p>
<p>Path to dependency file: nibrs/tools/nibrs-fbi-service/pom.xml</p>
<p>Path to vulnerable library: canner/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.8.10/jackson-databind-2.8.10.jar,nibrs/tools/nibrs-fbi-service/target/nibrs-fbi-service-1.0.0/WEB-INF/lib/jackson-databind-2.8.10.jar</p>
<p>
Dependency Hierarchy:
- :x: **jackson-databind-2.8.10.jar** (Vulnerable Library)
</details>
<details><summary><b>jackson-databind-2.9.6.jar</b></p></summary>
<p>General data-binding functionality for Jackson: works on core streaming API</p>
<p>Library home page: <a href="http://github.com/FasterXML/jackson">http://github.com/FasterXML/jackson</a></p>
<p>Path to dependency file: nibrs/tools/nibrs-staging-data/pom.xml</p>
<p>Path to vulnerable library: /home/wss-scanner/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.9.6/jackson-databind-2.9.6.jar,nibrs/web/nibrs-web/target/nibrs-web/WEB-INF/lib/jackson-databind-2.9.6.jar,/home/wss-scanner/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.9.6/jackson-databind-2.9.6.jar,canner/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.9.6/jackson-databind-2.9.6.jar,/home/wss-scanner/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.9.6/jackson-databind-2.9.6.jar,/home/wss-scanner/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.9.6/jackson-databind-2.9.6.jar,/home/wss-scanner/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.9.6/jackson-databind-2.9.6.jar,/home/wss-scanner/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.9.6/jackson-databind-2.9.6.jar</p>
<p>
Dependency Hierarchy:
- :x: **jackson-databind-2.9.6.jar** (Vulnerable Library)
</details>
<p>Found in HEAD commit: <a href="https://github.com/gms-ws-demo/nibrs/commit/9fb1c19bd26c2113d1961640de126a33eacdc946">9fb1c19bd26c2113d1961640de126a33eacdc946</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
FasterXML jackson-databind 2.x before 2.9.10.2 lacks certain net.sf.ehcache blocking.
<p>Publish Date: 2020-01-03
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2019-20330>CVE-2019-20330</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>9.8</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/FasterXML/jackson-databind/issues/2526">https://github.com/FasterXML/jackson-databind/issues/2526</a></p>
<p>Release Date: 2020-01-03</p>
<p>Fix Resolution: com.fasterxml.jackson.core:jackson-databind:2.7.9.7,2.8.11.5,2.9.10.2</p>
</p>
</details>
<p></p>
<!-- <REMEDIATE>{"isOpenPROnVulnerability":false,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"Java","groupId":"com.fasterxml.jackson.core","packageName":"jackson-databind","packageVersion":"2.8.0","packageFilePaths":["/tools/nibrs-common/pom.xml"],"isTransitiveDependency":true,"dependencyTree":"org.apache.tika:tika-parsers:1.18;com.fasterxml.jackson.core:jackson-databind:2.8.0","isMinimumFixVersionAvailable":true,"minimumFixVersion":"com.fasterxml.jackson.core:jackson-databind:2.7.9.7,2.8.11.5,2.9.10.2"},{"packageType":"Java","groupId":"com.fasterxml.jackson.core","packageName":"jackson-databind","packageVersion":"2.9.8","packageFilePaths":["/tools/nibrs-summary-report-common/pom.xml"],"isTransitiveDependency":true,"dependencyTree":"org.springframework.boot:spring-boot-starter-web:2.1.5.RELEASE;org.springframework.boot:spring-boot-starter-json:2.1.5.RELEASE;com.fasterxml.jackson.core:jackson-databind:2.9.8","isMinimumFixVersionAvailable":true,"minimumFixVersion":"com.fasterxml.jackson.core:jackson-databind:2.7.9.7,2.8.11.5,2.9.10.2"},{"packageType":"Java","groupId":"com.fasterxml.jackson.core","packageName":"jackson-databind","packageVersion":"2.9.5","packageFilePaths":["/tools/nibrs-flatfile/pom.xml","/tools/nibrs-validate-common/pom.xml"],"isTransitiveDependency":true,"dependencyTree":"org.apache.tika:tika-parsers:1.18;com.fasterxml.jackson.core:jackson-databind:2.9.5","isMinimumFixVersionAvailable":true,"minimumFixVersion":"com.fasterxml.jackson.core:jackson-databind:2.7.9.7,2.8.11.5,2.9.10.2"},{"packageType":"Java","groupId":"com.fasterxml.jackson.core","packageName":"jackson-databind","packageVersion":"2.8.10","packageFilePaths":["/tools/nibrs-fbi-service/pom.xml"],"isTransitiveDependency":false,"dependencyTree":"com.fasterxml.jackson.core:jackson-databind:2.8.10","isMinimumFixVersionAvailable":true,"minimumFixVersion":"com.fasterxml.jackson.core:jackson-databind:2.7.9.7,2.8.11.5,2.9.10.2"},{"packageType":"Java","groupId":"com.faster
xml.jackson.core","packageName":"jackson-databind","packageVersion":"2.9.6","packageFilePaths":["/tools/nibrs-staging-data/pom.xml","/tools/nibrs-summary-report/pom.xml","/tools/nibrs-route/pom.xml","/tools/nibrs-staging-data-common/pom.xml","/tools/nibrs-xmlfile/pom.xml","/tools/nibrs-validation/pom.xml"],"isTransitiveDependency":false,"dependencyTree":"com.fasterxml.jackson.core:jackson-databind:2.9.6","isMinimumFixVersionAvailable":true,"minimumFixVersion":"com.fasterxml.jackson.core:jackson-databind:2.7.9.7,2.8.11.5,2.9.10.2"}],"baseBranches":["master"],"vulnerabilityIdentifier":"CVE-2019-20330","vulnerabilityDetails":"FasterXML jackson-databind 2.x before 2.9.10.2 lacks certain net.sf.ehcache blocking.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2019-20330","cvss3Severity":"high","cvss3Score":"9.8","cvss3Metrics":{"A":"High","AC":"Low","PR":"None","S":"Unchanged","C":"High","UI":"None","AV":"Network","I":"High"},"extraData":{}}</REMEDIATE> -->
|
True
|
CVE-2019-20330 (High) detected in multiple libraries - autoclosed - ## CVE-2019-20330 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>jackson-databind-2.8.0.jar</b>, <b>jackson-databind-2.9.8.jar</b>, <b>jackson-databind-2.9.5.jar</b>, <b>jackson-databind-2.8.10.jar</b>, <b>jackson-databind-2.9.6.jar</b></p></summary>
<p>
<details><summary><b>jackson-databind-2.8.0.jar</b></p></summary>
<p>General data-binding functionality for Jackson: works on core streaming API</p>
<p>Library home page: <a href="http://github.com/FasterXML/jackson">http://github.com/FasterXML/jackson</a></p>
<p>Path to dependency file: nibrs/tools/nibrs-common/pom.xml</p>
<p>Path to vulnerable library: /home/wss-scanner/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.8.0/jackson-databind-2.8.0.jar</p>
<p>
Dependency Hierarchy:
- tika-parsers-1.18.jar (Root Library)
- :x: **jackson-databind-2.8.0.jar** (Vulnerable Library)
</details>
<details><summary><b>jackson-databind-2.9.8.jar</b></p></summary>
<p>General data-binding functionality for Jackson: works on core streaming API</p>
<p>Library home page: <a href="http://github.com/FasterXML/jackson">http://github.com/FasterXML/jackson</a></p>
<p>Path to dependency file: nibrs/tools/nibrs-summary-report-common/pom.xml</p>
<p>Path to vulnerable library: /home/wss-scanner/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.9.8/jackson-databind-2.9.8.jar</p>
<p>
Dependency Hierarchy:
- spring-boot-starter-web-2.1.5.RELEASE.jar (Root Library)
- spring-boot-starter-json-2.1.5.RELEASE.jar
- :x: **jackson-databind-2.9.8.jar** (Vulnerable Library)
</details>
<details><summary><b>jackson-databind-2.9.5.jar</b></p></summary>
<p>General data-binding functionality for Jackson: works on core streaming API</p>
<p>Library home page: <a href="http://github.com/FasterXML/jackson">http://github.com/FasterXML/jackson</a></p>
<p>Path to dependency file: nibrs/tools/nibrs-flatfile/pom.xml</p>
<p>Path to vulnerable library: /home/wss-scanner/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.9.5/jackson-databind-2.9.5.jar,/home/wss-scanner/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.9.5/jackson-databind-2.9.5.jar</p>
<p>
Dependency Hierarchy:
- tika-parsers-1.18.jar (Root Library)
- :x: **jackson-databind-2.9.5.jar** (Vulnerable Library)
</details>
<details><summary><b>jackson-databind-2.8.10.jar</b></p></summary>
<p>General data-binding functionality for Jackson: works on core streaming API</p>
<p>Library home page: <a href="http://github.com/FasterXML/jackson">http://github.com/FasterXML/jackson</a></p>
<p>Path to dependency file: nibrs/tools/nibrs-fbi-service/pom.xml</p>
<p>Path to vulnerable library: canner/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.8.10/jackson-databind-2.8.10.jar,nibrs/tools/nibrs-fbi-service/target/nibrs-fbi-service-1.0.0/WEB-INF/lib/jackson-databind-2.8.10.jar</p>
<p>
Dependency Hierarchy:
- :x: **jackson-databind-2.8.10.jar** (Vulnerable Library)
</details>
<details><summary><b>jackson-databind-2.9.6.jar</b></p></summary>
<p>General data-binding functionality for Jackson: works on core streaming API</p>
<p>Library home page: <a href="http://github.com/FasterXML/jackson">http://github.com/FasterXML/jackson</a></p>
<p>Path to dependency file: nibrs/tools/nibrs-staging-data/pom.xml</p>
<p>Path to vulnerable library: /home/wss-scanner/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.9.6/jackson-databind-2.9.6.jar,nibrs/web/nibrs-web/target/nibrs-web/WEB-INF/lib/jackson-databind-2.9.6.jar,/home/wss-scanner/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.9.6/jackson-databind-2.9.6.jar,canner/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.9.6/jackson-databind-2.9.6.jar,/home/wss-scanner/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.9.6/jackson-databind-2.9.6.jar,/home/wss-scanner/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.9.6/jackson-databind-2.9.6.jar,/home/wss-scanner/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.9.6/jackson-databind-2.9.6.jar,/home/wss-scanner/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.9.6/jackson-databind-2.9.6.jar</p>
<p>
Dependency Hierarchy:
- :x: **jackson-databind-2.9.6.jar** (Vulnerable Library)
</details>
<p>Found in HEAD commit: <a href="https://github.com/gms-ws-demo/nibrs/commit/9fb1c19bd26c2113d1961640de126a33eacdc946">9fb1c19bd26c2113d1961640de126a33eacdc946</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
FasterXML jackson-databind 2.x before 2.9.10.2 lacks certain net.sf.ehcache blocking.
<p>Publish Date: 2020-01-03
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2019-20330>CVE-2019-20330</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>9.8</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/FasterXML/jackson-databind/issues/2526">https://github.com/FasterXML/jackson-databind/issues/2526</a></p>
<p>Release Date: 2020-01-03</p>
<p>Fix Resolution: com.fasterxml.jackson.core:jackson-databind:2.7.9.7,2.8.11.5,2.9.10.2</p>
</p>
</details>
<p></p>
<!-- <REMEDIATE>{"isOpenPROnVulnerability":false,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"Java","groupId":"com.fasterxml.jackson.core","packageName":"jackson-databind","packageVersion":"2.8.0","packageFilePaths":["/tools/nibrs-common/pom.xml"],"isTransitiveDependency":true,"dependencyTree":"org.apache.tika:tika-parsers:1.18;com.fasterxml.jackson.core:jackson-databind:2.8.0","isMinimumFixVersionAvailable":true,"minimumFixVersion":"com.fasterxml.jackson.core:jackson-databind:2.7.9.7,2.8.11.5,2.9.10.2"},{"packageType":"Java","groupId":"com.fasterxml.jackson.core","packageName":"jackson-databind","packageVersion":"2.9.8","packageFilePaths":["/tools/nibrs-summary-report-common/pom.xml"],"isTransitiveDependency":true,"dependencyTree":"org.springframework.boot:spring-boot-starter-web:2.1.5.RELEASE;org.springframework.boot:spring-boot-starter-json:2.1.5.RELEASE;com.fasterxml.jackson.core:jackson-databind:2.9.8","isMinimumFixVersionAvailable":true,"minimumFixVersion":"com.fasterxml.jackson.core:jackson-databind:2.7.9.7,2.8.11.5,2.9.10.2"},{"packageType":"Java","groupId":"com.fasterxml.jackson.core","packageName":"jackson-databind","packageVersion":"2.9.5","packageFilePaths":["/tools/nibrs-flatfile/pom.xml","/tools/nibrs-validate-common/pom.xml"],"isTransitiveDependency":true,"dependencyTree":"org.apache.tika:tika-parsers:1.18;com.fasterxml.jackson.core:jackson-databind:2.9.5","isMinimumFixVersionAvailable":true,"minimumFixVersion":"com.fasterxml.jackson.core:jackson-databind:2.7.9.7,2.8.11.5,2.9.10.2"},{"packageType":"Java","groupId":"com.fasterxml.jackson.core","packageName":"jackson-databind","packageVersion":"2.8.10","packageFilePaths":["/tools/nibrs-fbi-service/pom.xml"],"isTransitiveDependency":false,"dependencyTree":"com.fasterxml.jackson.core:jackson-databind:2.8.10","isMinimumFixVersionAvailable":true,"minimumFixVersion":"com.fasterxml.jackson.core:jackson-databind:2.7.9.7,2.8.11.5,2.9.10.2"},{"packageType":"Java","groupId":"com.faster
xml.jackson.core","packageName":"jackson-databind","packageVersion":"2.9.6","packageFilePaths":["/tools/nibrs-staging-data/pom.xml","/tools/nibrs-summary-report/pom.xml","/tools/nibrs-route/pom.xml","/tools/nibrs-staging-data-common/pom.xml","/tools/nibrs-xmlfile/pom.xml","/tools/nibrs-validation/pom.xml"],"isTransitiveDependency":false,"dependencyTree":"com.fasterxml.jackson.core:jackson-databind:2.9.6","isMinimumFixVersionAvailable":true,"minimumFixVersion":"com.fasterxml.jackson.core:jackson-databind:2.7.9.7,2.8.11.5,2.9.10.2"}],"baseBranches":["master"],"vulnerabilityIdentifier":"CVE-2019-20330","vulnerabilityDetails":"FasterXML jackson-databind 2.x before 2.9.10.2 lacks certain net.sf.ehcache blocking.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2019-20330","cvss3Severity":"high","cvss3Score":"9.8","cvss3Metrics":{"A":"High","AC":"Low","PR":"None","S":"Unchanged","C":"High","UI":"None","AV":"Network","I":"High"},"extraData":{}}</REMEDIATE> -->
|
non_test
|
cve high detected in multiple libraries autoclosed cve high severity vulnerability vulnerable libraries jackson databind jar jackson databind jar jackson databind jar jackson databind jar jackson databind jar jackson databind jar general data binding functionality for jackson works on core streaming api library home page a href path to dependency file nibrs tools nibrs common pom xml path to vulnerable library home wss scanner repository com fasterxml jackson core jackson databind jackson databind jar dependency hierarchy tika parsers jar root library x jackson databind jar vulnerable library jackson databind jar general data binding functionality for jackson works on core streaming api library home page a href path to dependency file nibrs tools nibrs summary report common pom xml path to vulnerable library home wss scanner repository com fasterxml jackson core jackson databind jackson databind jar dependency hierarchy spring boot starter web release jar root library spring boot starter json release jar x jackson databind jar vulnerable library jackson databind jar general data binding functionality for jackson works on core streaming api library home page a href path to dependency file nibrs tools nibrs flatfile pom xml path to vulnerable library home wss scanner repository com fasterxml jackson core jackson databind jackson databind jar home wss scanner repository com fasterxml jackson core jackson databind jackson databind jar dependency hierarchy tika parsers jar root library x jackson databind jar vulnerable library jackson databind jar general data binding functionality for jackson works on core streaming api library home page a href path to dependency file nibrs tools nibrs fbi service pom xml path to vulnerable library canner repository com fasterxml jackson core jackson databind jackson databind jar nibrs tools nibrs fbi service target nibrs fbi service web inf lib jackson databind jar dependency hierarchy x jackson databind jar vulnerable library jackson 
databind jar general data binding functionality for jackson works on core streaming api library home page a href path to dependency file nibrs tools nibrs staging data pom xml path to vulnerable library home wss scanner repository com fasterxml jackson core jackson databind jackson databind jar nibrs web nibrs web target nibrs web web inf lib jackson databind jar home wss scanner repository com fasterxml jackson core jackson databind jackson databind jar canner repository com fasterxml jackson core jackson databind jackson databind jar home wss scanner repository com fasterxml jackson core jackson databind jackson databind jar home wss scanner repository com fasterxml jackson core jackson databind jackson databind jar home wss scanner repository com fasterxml jackson core jackson databind jackson databind jar home wss scanner repository com fasterxml jackson core jackson databind jackson databind jar dependency hierarchy x jackson databind jar vulnerable library found in head commit a href found in base branch master vulnerability details fasterxml jackson databind x before lacks certain net sf ehcache blocking publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution com fasterxml jackson core jackson databind isopenpronvulnerability false ispackagebased true isdefaultbranch true packages istransitivedependency true dependencytree org apache tika tika parsers com fasterxml jackson core jackson databind isminimumfixversionavailable true minimumfixversion com fasterxml jackson core jackson databind packagetype java groupid com fasterxml jackson core packagename jackson databind packageversion packagefilepaths 
istransitivedependency true dependencytree org springframework boot spring boot starter web release org springframework boot spring boot starter json release com fasterxml jackson core jackson databind isminimumfixversionavailable true minimumfixversion com fasterxml jackson core jackson databind packagetype java groupid com fasterxml jackson core packagename jackson databind packageversion packagefilepaths istransitivedependency true dependencytree org apache tika tika parsers com fasterxml jackson core jackson databind isminimumfixversionavailable true minimumfixversion com fasterxml jackson core jackson databind packagetype java groupid com fasterxml jackson core packagename jackson databind packageversion packagefilepaths istransitivedependency false dependencytree com fasterxml jackson core jackson databind isminimumfixversionavailable true minimumfixversion com fasterxml jackson core jackson databind packagetype java groupid com fasterxml jackson core packagename jackson databind packageversion packagefilepaths istransitivedependency false dependencytree com fasterxml jackson core jackson databind isminimumfixversionavailable true minimumfixversion com fasterxml jackson core jackson databind basebranches vulnerabilityidentifier cve vulnerabilitydetails fasterxml jackson databind x before lacks certain net sf ehcache blocking vulnerabilityurl
| 0
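The CVSS 3 metrics listed in the record above (Attack Vector: Network, Attack Complexity: Low, Privileges Required: None, User Interaction: None, Scope: Unchanged, C/I/A: High) determine the reported 9.8 base score. A minimal sketch of the CVSS v3.0 base-score formula for unchanged scope — the metric dictionaries below cover only the values used in this record, not the full specification tables:

```python
import math

def roundup(x):
    # CVSS v3 "Roundup": smallest one-decimal value >= x
    return math.ceil(x * 10) / 10

# Coefficients for the metric values appearing in this record only.
AV = {"Network": 0.85}
AC = {"Low": 0.77}
PR = {"None": 0.85}
UI = {"None": 0.85}
CIA = {"High": 0.56}

def base_score(av, ac, pr, ui, c, i, a):
    """CVSS v3.0 base score, Scope: Unchanged."""
    iss = 1 - (1 - CIA[c]) * (1 - CIA[i]) * (1 - CIA[a])
    impact = 6.42 * iss
    exploitability = 8.22 * AV[av] * AC[ac] * PR[pr] * UI[ui]
    if impact <= 0:
        return 0.0
    return roundup(min(impact + exploitability, 10))

print(base_score("Network", "Low", "None", "None", "High", "High", "High"))
```

Running this reproduces the 9.8 shown in the vulnerability details, which is why the finding is rated Critical/High severity.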
|
137,080
| 11,099,037,145
|
IssuesEvent
|
2019-12-16 16:15:52
|
rancher/rancher
|
https://api.github.com/repos/rancher/rancher
|
closed
|
Listening port for Load Balancer is overwritten in UI and in API
|
[zube]: To Test kind/bug-qa
|
**What kind of request is this (question/bug/enhancement/feature request):**
Bug
**Steps to reproduce (least amount of steps as possible):**
Create an EC2 Cluster with Amazon cloud provider.
Deploy a workload.
Add port as a load balancer.
Set listening port to anything but 0.

Launch.
Edit the workload.
**Result:**
Listening port has been set to 0.

Examining in API yields:
```
"ports": [
  {
    "containerPort": 80,
    "dnsName": "test-loadbalancer",
    "kind": "LoadBalancer",
    "name": "testport",
    "protocol": "TCP",
    "sourcePort": 0,
    "type": "/v3/project/schemas/containerPort"
  }
],
```
**Other details that may be helpful:**
May be related to the following: https://github.com/rancher/rancher/issues/23375
**Environment information**
- Rancher version (`rancher/rancher`/`rancher/server` image tag or shown bottom left in the UI): Rancher v.2.3.3
Seen in master-head
- Installation option (single install/HA): single install
|
1.0
|
Listening port for Load Balancer is overwritten in UI and in API - **What kind of request is this (question/bug/enhancement/feature request):**
Bug
**Steps to reproduce (least amount of steps as possible):**
Create an EC2 Cluster with Amazon cloud provider.
Deploy a workload.
Add port as a load balancer.
Set listening port to anything but 0.

Launch.
Edit the workload.
**Result:**
Listening port has been set to 0.

Examining in API yields:
```
"ports": [
  {
    "containerPort": 80,
    "dnsName": "test-loadbalancer",
    "kind": "LoadBalancer",
    "name": "testport",
    "protocol": "TCP",
    "sourcePort": 0,
    "type": "/v3/project/schemas/containerPort"
  }
],
```
**Other details that may be helpful:**
May be related to the following: https://github.com/rancher/rancher/issues/23375
**Environment information**
- Rancher version (`rancher/rancher`/`rancher/server` image tag or shown bottom left in the UI): Rancher v.2.3.3
Seen in master-head
- Installation option (single install/HA): single install
|
test
|
listening port for load balancer is overwritten in ui and in api what kind of request is this question bug enhancement feature request bug steps to reproduce least amount of steps as possible create an cluster with amazon cloud provider deploy a workload add port as a load balancer set listening port to anything but launch edit the workload result listening port has been set to examining in api yields ports containerport dnsname test loadbalancer kind loadbalancer name testport protocol tcp sourceport type project schemas containerport other details that may be helpful may be related to the following environment information rancher version rancher rancher rancher server image tag or shown bottom left in the ui rancher v seen in master head installation option single install ha single install
| 1
|
133,410
| 5,202,747,802
|
IssuesEvent
|
2017-01-24 10:31:32
|
emfoundation/ce100-app
|
https://api.github.com/repos/emfoundation/ce100-app
|
opened
|
Split ajax request for new organisation
|
enhancement priority-4 T4h technical
|
At the moment the ajax request both uploads the logo image to S3 **and** creates the organisation in Postgres. We can only use ajax for the images, see https://github.com/emfoundation/ce100-app/pull/621#pullrequestreview-18126277
|
1.0
|
Split ajax request for new organisation - At the moment the ajax request both uploads the logo image to S3 **and** creates the organisation in Postgres. We can only use ajax for the images, see https://github.com/emfoundation/ce100-app/pull/621#pullrequestreview-18126277
|
non_test
|
split ajax request for new organisation at the moment the ajax request is uploading the image logo on and creating the organisation in postgres we can only use ajax for the images see
| 0
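The split requested in the record above amounts to two steps: upload the logo to S3 first, then create the organisation with the returned URL. A sketch of that flow — the endpoint paths, payload fields, and `http_post` transport are all hypothetical, not taken from the ce100-app codebase:

```python
def create_organisation(http_post, logo_bytes, org_fields):
    """Two-step flow: 1) ajax-style upload of only the logo to S3,
    2) a plain request creating the organisation with the stored logo URL."""
    # Step 1: only the image goes through the ajax/S3 upload path.
    upload = http_post("/api/uploads/logo", files={"logo": logo_bytes})
    # Step 2: the organisation record references the logo by URL.
    org_fields = dict(org_fields, logo_url=upload["url"])
    return http_post("/api/organisations", json=org_fields)

# Demo with a stand-in transport (no network involved).
def fake_post(path, **kwargs):
    if path == "/api/uploads/logo":
        return {"url": "https://s3.example.com/logos/1.png"}
    return {"created": True, "fields": kwargs["json"]}

result = create_organisation(fake_post, b"\x89PNG...", {"name": "Acme"})
print(result["fields"]["logo_url"])
```

Keeping the upload separate means a failed image upload never leaves a half-created organisation row behind, which is the point of the split.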
|
442,864
| 12,751,977,635
|
IssuesEvent
|
2020-06-27 14:05:09
|
jenkins-x/jx
|
https://api.github.com/repos/jenkins-x/jx
|
closed
|
error: configuring the docker registry: configure cloud provider docker registry: getting cluster from Azure: invalid character 'W' looking for beginning of value
|
area/aks area/jenkins kind/bug lifecycle/rotten priority/important-longterm
|
### Summary
Trying to install Jenkins X in my freshly created cluster in Azure. But whenever I try to install it by running `jx install` and entering the needed information, it always fails with the error:
error: configuring the docker registry: configure cloud provider docker registry: getting cluster from Azure: invalid character 'W' looking for beginning of value
Both the Static Jenkins Server and Serverless Jenkins installations show the same behavior.
### Steps to reproduce the behavior
1. Execute the command `jx install`.
2. Enter the required information in the terminal.
3. Watch out for the log in the terminal after the 'Setting the pipelines Git server https://github.com and user name [your username]'.
### Expected behavior
Install successfully.
### Actual behavior
error: configuring the docker registry: configure cloud provider docker registry: getting cluster from Azure: invalid character 'W' looking for beginning of value
### Jx version
The output of `jx version` is:
```
jx 2.0.463
Kubernetes cluster v1.12.8
kubectl v1.14.1
helm client Client: v2.13.1+g618447c
git git version 2.20.1 (Apple Git-117)
Operating System Mac OS X 10.14.5 build 18F132
```
### Jenkins type
<!--
Select which installation type are you using.
-->
- [x] Serverless Jenkins X Pipelines (Tekton + Prow)
- [x] Classic Jenkins
### Kubernetes cluster
<!--
What kind of Kubernetes cluster are you using & how did you create it?
-->
I am using AKS, created it by following the instruction in Azure portal.
### Operating system / Environment
<!--
In which environment are you running the jx CLI?
-->
Operating System: Ubuntu 16.04
Environment: Dev
|
1.0
|
error: configuring the docker registry: configure cloud provider docker registry: getting cluster from Azure: invalid character 'W' looking for beginning of value - ### Summary
Trying to install Jenkins X in my freshly created cluster in Azure. But whenever I try to install it by running `jx install` and entering the needed information, it always fails with the error:
error: configuring the docker registry: configure cloud provider docker registry: getting cluster from Azure: invalid character 'W' looking for beginning of value
Both the Static Jenkins Server and Serverless Jenkins installations show the same behavior.
### Steps to reproduce the behavior
1. Execute the command `jx install`.
2. Enter the required information in the terminal.
3. Watch out for the log in the terminal after the 'Setting the pipelines Git server https://github.com and user name [your username]'.
### Expected behavior
Install successfully.
### Actual behavior
error: configuring the docker registry: configure cloud provider docker registry: getting cluster from Azure: invalid character 'W' looking for beginning of value
### Jx version
The output of `jx version` is:
```
jx 2.0.463
Kubernetes cluster v1.12.8
kubectl v1.14.1
helm client Client: v2.13.1+g618447c
git git version 2.20.1 (Apple Git-117)
Operating System Mac OS X 10.14.5 build 18F132
```
### Jenkins type
<!--
Select which installation type are you using.
-->
- [x] Serverless Jenkins X Pipelines (Tekton + Prow)
- [x] Classic Jenkins
### Kubernetes cluster
<!--
What kind of Kubernetes cluster are you using & how did you create it?
-->
I am using AKS; I created the cluster by following the instructions in the Azure portal.
### Operating system / Environment
<!--
In which environment are you running the jx CLI?
-->
Operating System: Ubuntu 16.04
Environment: Dev
|
non_test
|
error configuring the docker registry configure cloud provider docker registry getting cluster from azure invalid character w looking for beginning of value summary trying to install jenkins x in my freshly created cluster in azure but whenever i try to install it by executing command jx install and enter the needed information it always fall to the error error configuring the docker registry configure cloud provider docker registry getting cluster from azure invalid character w looking for beginning of value both static jenkins server and serverless jenkins installation has the same behavior steps to reproduce the behavior execute command jx install enter the required information in the terminal watch out for the log in the terminal after the setting the pipelines git server and user name expected behavior install successfully actual behavior error configuring the docker registry configure cloud provider docker registry getting cluster from azure invalid character w looking for beginning of value jx version the output of jx version is jx kubernetes cluster kubectl helm client client git git version apple git operating system mac os x build jenkins type select which installation type are you using serverless jenkins x pipelines tekton prow classic jenkins kubernetes cluster what kind of kubernetes cluster are you using how did you create it i am using aks created it by following the instruction in azure portal operating system environment in which environment are you running the jx cli operating system ubuntu environment dev
| 0
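The "invalid character 'W' looking for beginning of value" message in the record above is Go's `encoding/json` failing on a response body that begins with plain text (most likely a warning line) rather than JSON. As a rough sketch — the "WARNING…" body below is a hypothetical stand-in, since the actual Azure payload was never captured in the issue — the same failure mode looks like this in JavaScript:

```javascript
// Sketch: a JSON decoder fed a plain-text body fails on the first character,
// which is exactly what Go's encoding/json reports as
// "invalid character 'W' looking for beginning of value".
// The "WARNING..." body is a hypothetical stand-in for the real Azure response.
function decodeClusterResponse(body) {
  try {
    return { ok: true, cluster: JSON.parse(body) };
  } catch (err) {
    // Surface the offending prefix so the root cause shows up in logs.
    return { ok: false, error: `${err.message} (body starts with: "${body.slice(0, 20)}")` };
  }
}

const good = decodeClusterResponse('{"name":"my-aks-cluster"}');
const bad = decodeClusterResponse('WARNING: deprecated API version in use');

console.log(good.cluster.name); // my-aks-cluster
console.log(bad.ok);            // false
```

Whatever decoder is involved, logging the first few bytes of the offending body (as above) usually identifies the stray warning line or HTML error page immediately.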
|
247,456
| 20,980,687,605
|
IssuesEvent
|
2022-03-28 19:36:05
|
rstudio/rstudio
|
https://api.github.com/repos/rstudio/rstudio
|
closed
|
Square and resized icon for macOS Big Sur
|
enhancement macos test
|
Most icons on macOS Big Sur use a square shape similar to those found on iOS and iPadOS. For the sake of consistency, it would be great if the RStudio team would also consider updating the app icon in future Mac builds to use a square shape too.
|
1.0
|
Square and resized icon for macOS Big Sur - Most icons on macOS Big Sur use a square shape similar to those found on iOS and iPadOS. For the sake of consistency, it would be great if the RStudio team would also consider updating the app icon in future Mac builds to use a square shape too.
|
test
|
square and resized icon for macos big sur most icons on macos big sur use a square shape similar to those found on ios and ipados for the sake of consistency it would be great if the rstudio team would also consider updating the app icon in future mac builds to use a square shape too
| 1
|
59,261
| 14,369,095,144
|
IssuesEvent
|
2020-12-01 09:19:25
|
ignatandrei/stankins
|
https://api.github.com/repos/ignatandrei/stankins
|
closed
|
CVE-2018-16487 (Medium) detected in multiple libraries
|
security vulnerability
|
## CVE-2018-16487 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>lodash-1.0.2.tgz</b>, <b>lodash-3.10.1.tgz</b>, <b>lodash-2.4.2.tgz</b>, <b>lodash-4.6.1.tgz</b></p></summary>
<p>
<details><summary><b>lodash-1.0.2.tgz</b></p></summary>
<p>A utility library delivering consistency, customization, performance, and extras.</p>
<p>Library home page: <a href="https://registry.npmjs.org/lodash/-/lodash-1.0.2.tgz">https://registry.npmjs.org/lodash/-/lodash-1.0.2.tgz</a></p>
<p>Path to dependency file: /tmp/ws-scm/stankins/stankinsV1/HtmlGenerator/package.json</p>
<p>Path to vulnerable library: /tmp/ws-scm/stankins/stankinsV1/HtmlGenerator/node_modules/lodash/package.json</p>
<p>
Dependency Hierarchy:
- gulp-3.9.1.tgz (Root Library)
- vinyl-fs-0.3.14.tgz
- glob-watcher-0.0.6.tgz
- gaze-0.5.2.tgz
- globule-0.1.0.tgz
- :x: **lodash-1.0.2.tgz** (Vulnerable Library)
</details>
<details><summary><b>lodash-3.10.1.tgz</b></p></summary>
<p>The modern build of lodash modular utilities.</p>
<p>Library home page: <a href="https://registry.npmjs.org/lodash/-/lodash-3.10.1.tgz">https://registry.npmjs.org/lodash/-/lodash-3.10.1.tgz</a></p>
<p>Path to dependency file: /tmp/ws-scm/stankins/stankinsV1/HtmlGenerator/wwwroot/lib/bootstrap/package.json</p>
<p>Path to vulnerable library: /tmp/ws-scm/stankins/stankinsV1/HtmlGenerator/wwwroot/lib/bootstrap/node_modules/xmlbuilder/node_modules/lodash/package.json</p>
<p>
Dependency Hierarchy:
- grunt-jscs-3.0.1.tgz (Root Library)
- jscs-3.0.7.tgz
- :x: **lodash-3.10.1.tgz** (Vulnerable Library)
</details>
<details><summary><b>lodash-2.4.2.tgz</b></p></summary>
<p>A utility library delivering consistency, customization, performance, & extras.</p>
<p>Library home page: <a href="https://registry.npmjs.org/lodash/-/lodash-2.4.2.tgz">https://registry.npmjs.org/lodash/-/lodash-2.4.2.tgz</a></p>
<p>Path to dependency file: /tmp/ws-scm/stankins/stankinsV1/HtmlGenerator/wwwroot/lib/bootstrap/package.json</p>
<p>Path to vulnerable library: /tmp/ws-scm/stankins/stankinsV1/HtmlGenerator/wwwroot/lib/bootstrap/node_modules/fg-lodash/node_modules/lodash/package.json</p>
<p>
Dependency Hierarchy:
- grunt-saucelabs-9.0.1.tgz (Root Library)
- requestretry-1.9.1.tgz
- fg-lodash-0.0.2.tgz
- :x: **lodash-2.4.2.tgz** (Vulnerable Library)
</details>
<details><summary><b>lodash-4.6.1.tgz</b></p></summary>
<p>Lodash modular utilities.</p>
<p>Library home page: <a href="https://registry.npmjs.org/lodash/-/lodash-4.6.1.tgz">https://registry.npmjs.org/lodash/-/lodash-4.6.1.tgz</a></p>
<p>Path to dependency file: /tmp/ws-scm/stankins/stankinsV1/HtmlGenerator/wwwroot/lib/bootstrap/package.json</p>
<p>Path to vulnerable library: /tmp/ws-scm/stankins/stankinsV1/HtmlGenerator/wwwroot/lib/bootstrap/node_modules/grunt-jscs/node_modules/lodash/package.json</p>
<p>
Dependency Hierarchy:
- grunt-jscs-3.0.1.tgz (Root Library)
- :x: **lodash-4.6.1.tgz** (Vulnerable Library)
</details>
<p>Found in HEAD commit: <a href="https://github.com/ignatandrei/stankins/commit/525550ef1e023c62d5d53d2f2bce03d5d168d46e">525550ef1e023c62d5d53d2f2bce03d5d168d46e</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
A prototype pollution vulnerability was found in lodash <4.17.11 where the functions merge, mergeWith, and defaultsDeep can be tricked into adding or modifying properties of Object.prototype.
<p>Publish Date: 2019-02-01
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2018-16487>CVE-2018-16487</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.6</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: High
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: Low
- Integrity Impact: Low
- Availability Impact: Low
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://bugzilla.redhat.com/show_bug.cgi?id=CVE-2018-16487">https://bugzilla.redhat.com/show_bug.cgi?id=CVE-2018-16487</a></p>
<p>Release Date: 2019-02-01</p>
<p>Fix Resolution: 4.17.11</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
True
|
CVE-2018-16487 (Medium) detected in multiple libraries - ## CVE-2018-16487 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>lodash-1.0.2.tgz</b>, <b>lodash-3.10.1.tgz</b>, <b>lodash-2.4.2.tgz</b>, <b>lodash-4.6.1.tgz</b></p></summary>
<p>
<details><summary><b>lodash-1.0.2.tgz</b></p></summary>
<p>A utility library delivering consistency, customization, performance, and extras.</p>
<p>Library home page: <a href="https://registry.npmjs.org/lodash/-/lodash-1.0.2.tgz">https://registry.npmjs.org/lodash/-/lodash-1.0.2.tgz</a></p>
<p>Path to dependency file: /tmp/ws-scm/stankins/stankinsV1/HtmlGenerator/package.json</p>
<p>Path to vulnerable library: /tmp/ws-scm/stankins/stankinsV1/HtmlGenerator/node_modules/lodash/package.json</p>
<p>
Dependency Hierarchy:
- gulp-3.9.1.tgz (Root Library)
- vinyl-fs-0.3.14.tgz
- glob-watcher-0.0.6.tgz
- gaze-0.5.2.tgz
- globule-0.1.0.tgz
- :x: **lodash-1.0.2.tgz** (Vulnerable Library)
</details>
<details><summary><b>lodash-3.10.1.tgz</b></p></summary>
<p>The modern build of lodash modular utilities.</p>
<p>Library home page: <a href="https://registry.npmjs.org/lodash/-/lodash-3.10.1.tgz">https://registry.npmjs.org/lodash/-/lodash-3.10.1.tgz</a></p>
<p>Path to dependency file: /tmp/ws-scm/stankins/stankinsV1/HtmlGenerator/wwwroot/lib/bootstrap/package.json</p>
<p>Path to vulnerable library: /tmp/ws-scm/stankins/stankinsV1/HtmlGenerator/wwwroot/lib/bootstrap/node_modules/xmlbuilder/node_modules/lodash/package.json</p>
<p>
Dependency Hierarchy:
- grunt-jscs-3.0.1.tgz (Root Library)
- jscs-3.0.7.tgz
- :x: **lodash-3.10.1.tgz** (Vulnerable Library)
</details>
<details><summary><b>lodash-2.4.2.tgz</b></p></summary>
<p>A utility library delivering consistency, customization, performance, & extras.</p>
<p>Library home page: <a href="https://registry.npmjs.org/lodash/-/lodash-2.4.2.tgz">https://registry.npmjs.org/lodash/-/lodash-2.4.2.tgz</a></p>
<p>Path to dependency file: /tmp/ws-scm/stankins/stankinsV1/HtmlGenerator/wwwroot/lib/bootstrap/package.json</p>
<p>Path to vulnerable library: /tmp/ws-scm/stankins/stankinsV1/HtmlGenerator/wwwroot/lib/bootstrap/node_modules/fg-lodash/node_modules/lodash/package.json</p>
<p>
Dependency Hierarchy:
- grunt-saucelabs-9.0.1.tgz (Root Library)
- requestretry-1.9.1.tgz
- fg-lodash-0.0.2.tgz
- :x: **lodash-2.4.2.tgz** (Vulnerable Library)
</details>
<details><summary><b>lodash-4.6.1.tgz</b></p></summary>
<p>Lodash modular utilities.</p>
<p>Library home page: <a href="https://registry.npmjs.org/lodash/-/lodash-4.6.1.tgz">https://registry.npmjs.org/lodash/-/lodash-4.6.1.tgz</a></p>
<p>Path to dependency file: /tmp/ws-scm/stankins/stankinsV1/HtmlGenerator/wwwroot/lib/bootstrap/package.json</p>
<p>Path to vulnerable library: /tmp/ws-scm/stankins/stankinsV1/HtmlGenerator/wwwroot/lib/bootstrap/node_modules/grunt-jscs/node_modules/lodash/package.json</p>
<p>
Dependency Hierarchy:
- grunt-jscs-3.0.1.tgz (Root Library)
- :x: **lodash-4.6.1.tgz** (Vulnerable Library)
</details>
<p>Found in HEAD commit: <a href="https://github.com/ignatandrei/stankins/commit/525550ef1e023c62d5d53d2f2bce03d5d168d46e">525550ef1e023c62d5d53d2f2bce03d5d168d46e</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
A prototype pollution vulnerability was found in lodash <4.17.11 where the functions merge, mergeWith, and defaultsDeep can be tricked into adding or modifying properties of Object.prototype.
<p>Publish Date: 2019-02-01
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2018-16487>CVE-2018-16487</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.6</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: High
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: Low
- Integrity Impact: Low
- Availability Impact: Low
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://bugzilla.redhat.com/show_bug.cgi?id=CVE-2018-16487">https://bugzilla.redhat.com/show_bug.cgi?id=CVE-2018-16487</a></p>
<p>Release Date: 2019-02-01</p>
<p>Fix Resolution: 4.17.11</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
non_test
|
cve medium detected in multiple libraries cve medium severity vulnerability vulnerable libraries lodash tgz lodash tgz lodash tgz lodash tgz lodash tgz a utility library delivering consistency customization performance and extras library home page a href path to dependency file tmp ws scm stankins htmlgenerator package json path to vulnerable library tmp ws scm stankins htmlgenerator node modules lodash package json dependency hierarchy gulp tgz root library vinyl fs tgz glob watcher tgz gaze tgz globule tgz x lodash tgz vulnerable library lodash tgz the modern build of lodash modular utilities library home page a href path to dependency file tmp ws scm stankins htmlgenerator wwwroot lib bootstrap package json path to vulnerable library tmp ws scm stankins htmlgenerator wwwroot lib bootstrap node modules xmlbuilder node modules lodash package json dependency hierarchy grunt jscs tgz root library jscs tgz x lodash tgz vulnerable library lodash tgz a utility library delivering consistency customization performance extras library home page a href path to dependency file tmp ws scm stankins htmlgenerator wwwroot lib bootstrap package json path to vulnerable library tmp ws scm stankins htmlgenerator wwwroot lib bootstrap node modules fg lodash node modules lodash package json dependency hierarchy grunt saucelabs tgz root library requestretry tgz fg lodash tgz x lodash tgz vulnerable library lodash tgz lodash modular utilities library home page a href path to dependency file tmp ws scm stankins htmlgenerator wwwroot lib bootstrap package json path to vulnerable library tmp ws scm stankins htmlgenerator wwwroot lib bootstrap node modules grunt jscs node modules lodash package json dependency hierarchy grunt jscs tgz root library x lodash tgz vulnerable library found in head commit a href vulnerability details a prototype pollution vulnerability was found in lodash where the functions merge mergewith and defaultsdeep can be tricked into adding or modifying properties of 
object prototype publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity high privileges required none user interaction none scope unchanged impact metrics confidentiality impact low integrity impact low availability impact low for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution step up your open source security game with whitesource
| 0
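The vulnerability description in the record above (lodash < 4.17.11: `merge`, `mergeWith`, and `defaultsDeep` can be tricked into modifying `Object.prototype`) is an instance of the classic prototype-pollution bug class. The deliberately naive deep merge below illustrates that bug class only — it is not lodash's actual implementation:

```javascript
// Deliberately vulnerable deep merge — an illustration of the
// CVE-2018-16487 bug class only, NOT lodash source code.
function naiveMerge(target, source) {
  for (const key of Object.keys(source)) {
    const src = source[key];
    if (src !== null && typeof src === 'object' &&
        target[key] !== null && typeof target[key] === 'object') {
      // Recursing through a "__proto__" key walks onto Object.prototype,
      // so the inner assignments land on the shared prototype.
      naiveMerge(target[key], src);
    } else {
      target[key] = src;
    }
  }
  return target;
}

// Attacker-controlled JSON: JSON.parse keeps "__proto__" as an own key.
const payload = JSON.parse('{"__proto__": {"polluted": true}}');
naiveMerge({}, payload);

// Every plain object now inherits the attacker's property.
const victim = {};
const wasPolluted = victim.polluted === true;
console.log(wasPolluted);          // true
delete Object.prototype.polluted;  // clean up after the demo
```

Fixed lodash releases (4.17.11, per the advisory above) guard these keys; using `Object.create(null)` targets or a `Map` for untrusted keys also sidesteps the bug class.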
|
270,150
| 23,494,012,307
|
IssuesEvent
|
2022-08-17 22:00:47
|
pulp/pulp_rpm
|
https://api.github.com/repos/pulp/pulp_rpm
|
closed
|
Add test for modular RPM detection
|
Tests
|
Author: @dralley (dalley)
Redmine Issue: 8973, https://pulp.plan.io/issues/8973
---
Test that modular RPMs are indeed marked is_modular=True and vice versa
|
1.0
|
Add test for modular RPM detection - Author: @dralley (dalley)
Redmine Issue: 8973, https://pulp.plan.io/issues/8973
---
Test that modular RPMs are indeed marked is_modular=True and vice versa
|
test
|
add test for modular rpm detection author dralley dalley redmine issue test that modular rpms are indeed marked is modular true and vice versa
| 1
|
123,525
| 10,272,033,463
|
IssuesEvent
|
2019-08-23 15:26:02
|
ValveSoftware/steam-for-linux
|
https://api.github.com/repos/ValveSoftware/steam-for-linux
|
closed
|
Streaming from linux to SteamOS or Steam Link with Dualshock 4 crashes host
|
3rd party game Distro Family: Debian General controller / Steam Input Need Retest Steam client Streaming
|
#### Your system information
Operating System Version:
Debian GNU/Linux buster/sid (64 bit)
Kernel Name: Linux
Kernel Version: 4.18.0-3-amd64
Processor Information:
CPU Brand: Intel(R) Core(TM) i7-4790K CPU @ 4.00GHz
Memory:
RAM: 31681 Mb
Video Card:
Driver: NVIDIA Corporation GeForce GTX 1070
Driver Version: 4.6.0 NVIDIA 390.87
OpenGL Version: 4.6
* Steam client version (build number or date): Nov 26th 2018
* Distribution (e.g. Ubuntu): Debian 9.6
* Opted into Steam client beta?: No
* Have you checked for system updates?: Yes
#### Please describe your issue in as much detail as possible:
Streaming any game from Debian host to SteamOS or SteamLink with a Dualshock 4 controller connected to the client crashes Steam on the host as soon as the game is launched. Controller and streaming work fine if the controller is connected to the host. Steam does not log any errors during the crash.
Tested with Battleblock Theater, Portal 2, and DIRT Rally.
Controller works fine navigating Big Picture Mode menus on the SteamOS or Steam Link client.
###### Steps for reproducing this issue:
1. Pair or connect Dualshock 4 controller to SteamOS or Steam Link.
2. Launch a game stream from the Debian host PC.
##### Bootstrap log:
[2018-12-24 10:15:02] Startup - updater built Nov 26 2018 20:15:21
[2018-12-24 10:15:02] Checking for update on startup
[2018-12-24 10:15:02] Checking for available updates...
[2018-12-24 10:15:02] Downloading manifest: client-download.steampowered.com/client/steam_client_ubuntu12
[2018-12-24 10:15:03] Download skipped: /client/steam_client_ubuntu12 version 1543346820, installed version 1543346820
[2018-12-24 10:15:03] Nothing to do
[2018-12-24 10:15:03] Verifying installation...
[2018-12-24 10:15:03] Performing checksum verification of executable files
[2018-12-24 10:15:03] Verification complete
[2018-12-24 10:16:53] Shutdown
##### Streaming log:
[2018-12-24 10:15:00] Streaming started to steamos at 10.1.255.247:37500, audio channels = 2, MTU = 1468
[2018-12-24 10:15:00] Streaming quality: k_EStreamQualityBalanced
[2018-12-24 10:15:00] Streaming bitrate: Automatic
[2018-12-24 10:15:00] Maximum capture: 1920x1080 59.75 FPS
[2018-12-24 10:15:00] Video Streaming: enabled
[2018-12-24 10:15:00] Audio Streaming: enabled
[2018-12-24 10:15:00] Input Streaming: enabled
[2018-12-24 10:15:00] =====================================================================
[2018-12-24 10:15:00] Game: Portal 2 (620)
[2018-12-24 10:15:00] Setting target bitrate to 15000 Kbit/s, burst bitrate is 75000 Kbit/s
[2018-12-24 10:15:00] Recording system audio
[2018-12-24 10:15:00] Streamed game has created a window
[2018-12-24 10:15:00] Bringing streamed game to foreground - failed
[2018-12-24 10:15:00] Audio mix: start=30413070071, returned=0
[2018-12-24 10:15:00] Audio source [System Pulse]: init=0, through=0, last_start=0, mixed=0, drop_before=0, drop_after=0
[2018-12-24 10:15:00] Changing record window: 0x520000b
[2018-12-24 10:15:01] >>> Switching video stream from NONE to Desktop_MovieStream
[2018-12-24 10:15:01] Detected 8 logical processors, using 4 threads
[2018-12-24 10:15:01] >>> Capture method set to Desktop OpenGL NV12 + libx264 main (4 threads)
[2018-12-24 10:15:01] >>> Capture resolution set to 1920x1080
[2018-12-24 10:15:01] >>> Client video decoder set to VAAPI hardware decoding
[2018-12-24 10:15:01] Detected 8 logical processors, using 4 threads
##### Controller log:
[2018-12-24 10:11:06] No cached sticky mapping in ActivateActionSet.
[2018-12-24 10:12:29] No cached sticky mapping in ActivateActionSet.
[2018-12-24 10:14:59] Opted-in Controller Mask for AppId 620: 0
[2018-12-24 10:15:01] Remote Device Found
type: 054c 09cc
path: /dev/hidraw7
serial_number: f4:93:9f:c3:32:38 - 0
[2018-12-24 10:15:01] Manufacturer: Sony Interactive Entertainment
[2018-12-24 10:15:01] Product: Wireless Controller
[2018-12-24 10:15:01] Release: 100
[2018-12-24 10:15:01] Interface: 3
[2018-12-24 10:15:01] !! Steam controller device opened for index 0.
[2018-12-24 10:15:02] Controller 0 mapping uses xinput : false
[2018-12-24 10:15:05] Opted-in Controller Mask for AppId 0: 0
[2018-12-24 10:16:52] Exiting workitem thread
[2018-12-24 10:16:56] Opted-in Controller Mask for AppId 0: 0
|
1.0
|
Streaming from linux to SteamOS or Steam Link with Dualshock 4 crashes host - #### Your system information
Operating System Version:
Debian GNU/Linux buster/sid (64 bit)
Kernel Name: Linux
Kernel Version: 4.18.0-3-amd64
Processor Information:
CPU Brand: Intel(R) Core(TM) i7-4790K CPU @ 4.00GHz
Memory:
RAM: 31681 Mb
Video Card:
Driver: NVIDIA Corporation GeForce GTX 1070
Driver Version: 4.6.0 NVIDIA 390.87
OpenGL Version: 4.6
* Steam client version (build number or date): Nov 26th 2018
* Distribution (e.g. Ubuntu): Debian 9.6
* Opted into Steam client beta?: No
* Have you checked for system updates?: Yes
#### Please describe your issue in as much detail as possible:
Streaming any game from Debian host to SteamOS or SteamLink with a Dualshock 4 controller connected to the client crashes Steam on the host as soon as the game is launched. Controller and streaming work fine if the controller is connected to the host. Steam does not log any errors during the crash.
Tested with Battleblock Theater, Portal 2, and DIRT Rally.
Controller works fine navigating Big Picture Mode menus on the SteamOS or Steam Link client.
###### Steps for reproducing this issue:
1. Pair or connect Dualshock 4 controller to SteamOS or Steam Link.
2. Launch a game stream from the Debian host PC.
##### Bootstrap log:
[2018-12-24 10:15:02] Startup - updater built Nov 26 2018 20:15:21
[2018-12-24 10:15:02] Checking for update on startup
[2018-12-24 10:15:02] Checking for available updates...
[2018-12-24 10:15:02] Downloading manifest: client-download.steampowered.com/client/steam_client_ubuntu12
[2018-12-24 10:15:03] Download skipped: /client/steam_client_ubuntu12 version 1543346820, installed version 1543346820
[2018-12-24 10:15:03] Nothing to do
[2018-12-24 10:15:03] Verifying installation...
[2018-12-24 10:15:03] Performing checksum verification of executable files
[2018-12-24 10:15:03] Verification complete
[2018-12-24 10:16:53] Shutdown
##### Streaming log:
[2018-12-24 10:15:00] Streaming started to steamos at 10.1.255.247:37500, audio channels = 2, MTU = 1468
[2018-12-24 10:15:00] Streaming quality: k_EStreamQualityBalanced
[2018-12-24 10:15:00] Streaming bitrate: Automatic
[2018-12-24 10:15:00] Maximum capture: 1920x1080 59.75 FPS
[2018-12-24 10:15:00] Video Streaming: enabled
[2018-12-24 10:15:00] Audio Streaming: enabled
[2018-12-24 10:15:00] Input Streaming: enabled
[2018-12-24 10:15:00] =====================================================================
[2018-12-24 10:15:00] Game: Portal 2 (620)
[2018-12-24 10:15:00] Setting target bitrate to 15000 Kbit/s, burst bitrate is 75000 Kbit/s
[2018-12-24 10:15:00] Recording system audio
[2018-12-24 10:15:00] Streamed game has created a window
[2018-12-24 10:15:00] Bringing streamed game to foreground - failed
[2018-12-24 10:15:00] Audio mix: start=30413070071, returned=0
[2018-12-24 10:15:00] Audio source [System Pulse]: init=0, through=0, last_start=0, mixed=0, drop_before=0, drop_after=0
[2018-12-24 10:15:00] Changing record window: 0x520000b
[2018-12-24 10:15:01] >>> Switching video stream from NONE to Desktop_MovieStream
[2018-12-24 10:15:01] Detected 8 logical processors, using 4 threads
[2018-12-24 10:15:01] >>> Capture method set to Desktop OpenGL NV12 + libx264 main (4 threads)
[2018-12-24 10:15:01] >>> Capture resolution set to 1920x1080
[2018-12-24 10:15:01] >>> Client video decoder set to VAAPI hardware decoding
[2018-12-24 10:15:01] Detected 8 logical processors, using 4 threads
##### Controller log:
[2018-12-24 10:11:06] No cached sticky mapping in ActivateActionSet.
[2018-12-24 10:12:29] No cached sticky mapping in ActivateActionSet.
[2018-12-24 10:14:59] Opted-in Controller Mask for AppId 620: 0
[2018-12-24 10:15:01] Remote Device Found
type: 054c 09cc
path: /dev/hidraw7
serial_number: f4:93:9f:c3:32:38 - 0
[2018-12-24 10:15:01] Manufacturer: Sony Interactive Entertainment
[2018-12-24 10:15:01] Product: Wireless Controller
[2018-12-24 10:15:01] Release: 100
[2018-12-24 10:15:01] Interface: 3
[2018-12-24 10:15:01] !! Steam controller device opened for index 0.
[2018-12-24 10:15:02] Controller 0 mapping uses xinput : false
[2018-12-24 10:15:05] Opted-in Controller Mask for AppId 0: 0
[2018-12-24 10:16:52] Exiting workitem thread
[2018-12-24 10:16:56] Opted-in Controller Mask for AppId 0: 0
|
test
|
streaming from linux to steamos or steam link with dualshock crashes host your system information operating system version debian gnu linux buster sid bit kernel name linux kernel version processor information cpu brand intel r core tm cpu memory ram mb video card driver nvidia corporation geforce gtx driver version nvidia opengl version steam client version build number or date nov distribution e g ubuntu debian opted into steam client beta no have you checked for system updates yes please describe your issue in as much detail as possible streaming any game from debian host to steamos or steamlink with a dualshock controller connected to the client crashes steam on the host as soon as the game is launched controller and streaming work fine if the controller is connected to the host steam does not log any errors during the crash tested with battleblock theater portal and dirt rally controller works fine navigating big picture mode menus on the steamos or steam link client steps for reproducing this issue pair or connect dualshock controller to steamos or steam link launch a game stream from the debian host pc bootstrap log startup updater built nov checking for update on startup checking for available updates downloading manifest client download steampowered com client steam client download skipped client steam client version installed version nothing to do verifying installation performing checksum verification of executable files verification complete shutdown streaming log streaming started to steamos at audio channels mtu streaming quality k estreamqualitybalanced streaming bitrate automatic maximum capture fps video streaming enabled audio streaming enabled input streaming enabled game portal setting target bitrate to kbit s burst bitrate is kbit s recording system audio streamed game has created a window bringing streamed game to foreground failed audio mix start returned audio source init through last start mixed drop before drop after changing record window 
switching video stream from none to desktop moviestream detected logical processors using threads capture method set to desktop opengl main threads capture resolution set to client video decoder set to vaapi hardware decoding detected logical processors using threads controller log no cached sticky mapping in activateactionset no cached sticky mapping in activateactionset opted in controller mask for appid remote device found type path dev serial number manufacturer sony interactive entertainment product wireless controller release interface steam controller device opened for index controller mapping uses xinput false opted in controller mask for appid exiting workitem thread opted in controller mask for appid
| 1
|
169,332
| 26,782,327,254
|
IssuesEvent
|
2023-01-31 22:23:13
|
readthedocs/sphinx_rtd_theme
|
https://api.github.com/repos/readthedocs/sphinx_rtd_theme
|
opened
|
Add "scroll-behavior: smooth;" in CSS?
|
Improvement Design Feature
|
There are lots of anchor links that can be easily animated by using:
```
html {
scroll-behavior: smooth;
}
```
It's fairly uncomplicated to add and supported by most browsers: https://caniuse.com/?search=scroll-behavior
|
1.0
|
Add "scroll-behavior: smooth;" in CSS? - There are lots of anchor links that can be easily animated by using:
```
html {
scroll-behavior: smooth;
}
```
It's fairly uncomplicated to add and supported by most browsers: https://caniuse.com/?search=scroll-behavior
|
non_test
|
add scroll behavior smooth in css there are lots of anchor links that can be easily animated by using html scroll behavior smooth it s fairly uncomplicated to add and supported by most browsers
| 0
|
31,407
| 4,705,698,875
|
IssuesEvent
|
2016-10-13 15:10:10
|
INN/Largo
|
https://api.github.com/repos/INN/Largo
|
opened
|
undefined variables in inc/featured-media.php
|
priority: low status: needs tests type: bug
|
Notice: Undefined index: tag_ID in /srv/www/citylimits/htdocs/wp-content/themes/Largo/inc/featured-media.php on line 269
Notice: Trying to get property of non-object in /srv/www/citylimits/htdocs/wp-content/themes/Largo/inc/featured-media.php on line 276
Notice: Trying to get property of non-object in /srv/www/citylimits/htdocs/wp-content/themes/Largo/inc/featured-media.php on line 312
Notice: Trying to get property of non-object in /srv/www/citylimits/htdocs/wp-content/themes/Largo/inc/featured-media.php on line 207
Notice: Trying to get property of non-object in /srv/www/citylimits/htdocs/wp-content/themes/Largo/inc/featured-media.php on line 211
Notice: Trying to get property of non-object in /srv/www/citylimits/htdocs/wp-content/themes/Largo/inc/featured-media.php on line 224
|
1.0
|
undefined variables in inc/featured-media.php -
Notice: Undefined index: tag_ID in /srv/www/citylimits/htdocs/wp-content/themes/Largo/inc/featured-media.php on line 269
Notice: Trying to get property of non-object in /srv/www/citylimits/htdocs/wp-content/themes/Largo/inc/featured-media.php on line 276
Notice: Trying to get property of non-object in /srv/www/citylimits/htdocs/wp-content/themes/Largo/inc/featured-media.php on line 312
Notice: Trying to get property of non-object in /srv/www/citylimits/htdocs/wp-content/themes/Largo/inc/featured-media.php on line 207
Notice: Trying to get property of non-object in /srv/www/citylimits/htdocs/wp-content/themes/Largo/inc/featured-media.php on line 211
Notice: Trying to get property of non-object in /srv/www/citylimits/htdocs/wp-content/themes/Largo/inc/featured-media.php on line 224
|
test
|
undefined variables in inc featured media php notice undefined index tag id in srv www citylimits htdocs wp content themes largo inc featured media php on line notice trying to get property of non object in srv www citylimits htdocs wp content themes largo inc featured media php on line notice trying to get property of non object in srv www citylimits htdocs wp content themes largo inc featured media php on line notice trying to get property of non object in srv www citylimits htdocs wp content themes largo inc featured media php on line notice trying to get property of non object in srv www citylimits htdocs wp content themes largo inc featured media php on line notice trying to get property of non object in srv www citylimits htdocs wp content themes largo inc featured media php on line
| 1
|
30,405
| 4,613,750,676
|
IssuesEvent
|
2016-09-25 06:16:37
|
daemonraco/json-validator
|
https://api.github.com/repos/daemonraco/json-validator
|
closed
|
Test Container Types against Non-containers
|
JSONValidator Unit Testing
|
## What to test
Test both container types when they are used to validate something that is not a container.
|
1.0
|
Test Container Types against Non-containers - ## What to test
Test both container types when they are used to validate something that is not a container.
|
test
|
test container types against non containers what to test test both container types when they are used to validate something that is not a container
| 1
|
308,694
| 26,625,095,761
|
IssuesEvent
|
2023-01-24 14:03:40
|
elastic/kibana
|
https://api.github.com/repos/elastic/kibana
|
closed
|
Failing test: Execution Context Functional Tests.x-pack/test/functional_execution_context/tests/browser·ts - Execution context Browser apps discover app propagates context for Discover
|
Team:Core failed-test
|
A test failed on a tracked branch
```
Error: Timeout of 360000ms exceeded. For async tests and hooks, ensure "done()" is called; if returning a Promise, ensure it resolves. (/dev/shm/workspace/parallel/21/kibana/x-pack/test/functional_execution_context/tests/browser.ts)
at listOnTimeout (internal/timers.js:557:17)
at processTimers (internal/timers.js:500:7)
```
First failure: [Jenkins Build](https://kibana-ci.elastic.co/job/elastic+kibana+master/17017/)
<!-- kibanaCiData = {"failed-test":{"test.class":"Execution Context Functional Tests.x-pack/test/functional_execution_context/tests/browser·ts","test.name":"Execution context Browser apps discover app propagates context for Discover","test.failCount":9}} -->
|
1.0
|
Failing test: Execution Context Functional Tests.x-pack/test/functional_execution_context/tests/browser·ts - Execution context Browser apps discover app propagates context for Discover - A test failed on a tracked branch
```
Error: Timeout of 360000ms exceeded. For async tests and hooks, ensure "done()" is called; if returning a Promise, ensure it resolves. (/dev/shm/workspace/parallel/21/kibana/x-pack/test/functional_execution_context/tests/browser.ts)
at listOnTimeout (internal/timers.js:557:17)
at processTimers (internal/timers.js:500:7)
```
First failure: [Jenkins Build](https://kibana-ci.elastic.co/job/elastic+kibana+master/17017/)
<!-- kibanaCiData = {"failed-test":{"test.class":"Execution Context Functional Tests.x-pack/test/functional_execution_context/tests/browser·ts","test.name":"Execution context Browser apps discover app propagates context for Discover","test.failCount":9}} -->
|
test
|
failing test execution context functional tests x pack test functional execution context tests browser·ts execution context browser apps discover app propagates context for discover a test failed on a tracked branch error timeout of exceeded for async tests and hooks ensure done is called if returning a promise ensure it resolves dev shm workspace parallel kibana x pack test functional execution context tests browser ts at listontimeout internal timers js at processtimers internal timers js first failure
| 1
|
562,832
| 16,670,567,293
|
IssuesEvent
|
2021-06-07 10:17:51
|
robotframework/robotframework
|
https://api.github.com/repos/robotframework/robotframework
|
closed
|
Creating multiline documentation from CLI using `\n` like `--doc line1\nline2` doesn't work
|
bug priority: medium
|
I would like to add documentation using `-D --doc` arguments on multiple lines using command line.
rebot/robot help says
> -D --doc documentation Set the documentation of the top level suite.
> **Simple formatting is supported** (e.g. *bold*). If the
> documentation contains spaces, it must be quoted.
>
According to manual, simple formatting includes new line management using \n, but it does not work.
Using
`-D "line1\nline2"`
results in
`line1\nline2`
being displayed in the documentation instead of
```
line1
line2
```
I've tried `<br>` instead of `\n`, using an argument file, but behavior is the same.
This was tested with versions 3.2.2 and 4.0.2.
I have not tested all features of "simple formatting"
Aside from fixing what I think is a bug/limitation, having the possibility to include a file formatted like the documentation section of the *** settings *** could show some usefulness.
Use case:
I'm merging different files, I would like to have in the documentation field, on separate lines (for readability) links to the individual files.
|
1.0
|
Creating multiline documentation from CLI using `\n` like `--doc line1\nline2` doesn't work - I would like to add documentation using `-D --doc` arguments on multiple lines using command line.
rebot/robot help says
> -D --doc documentation Set the documentation of the top level suite.
> **Simple formatting is supported** (e.g. *bold*). If the
> documentation contains spaces, it must be quoted.
>
According to manual, simple formatting includes new line management using \n, but it does not work.
Using
`-D "line1\nline2"`
results in
`line1\nline2`
being displayed in the documentation instead of
```
line1
line2
```
I've tried `<br>` instead of `\n`, using an argument file, but behavior is the same.
This was tested with versions 3.2.2 and 4.0.2.
I have not tested all features of "simple formatting"
Aside from fixing what I think is a bug/limitation, having the possibility to include a file formatted like the documentation section of the *** settings *** could show some usefulness.
Use case:
I'm merging different files, I would like to have in the documentation field, on separate lines (for readability) links to the individual files.
|
non_test
|
creating multiline documentation from cli using n like doc doesn t work i would like to add documentation using d doc arguments on multiple lines using command line rebot robot help says d doc documentation set the documentation of the top level suite simple formatting is supported e g bold if the documentation contains spaces it must be quoted according to manual simple formatting includes new line management using n but it does not work using d results in beeing displayed in the documentation instead of i ve tried instead of n using an argument file but behavior is the same this was tested with versions and i have not tested all features of simple formatting aside from fixing what i think is a bug limitation having the possibility to include a file formated like the documentation section of the settings could show some usefullness use case i m merging different files i would like to have in the documentation field on separate lines for readability links to the individual files
| 0
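A quick bash sketch of the shell side of the behavior described in the Robot Framework row above: the two-character sequence `\n` inside ordinary quotes reaches a program literally, while bash's ANSI-C quoting (`$'...'`) expands it into a real newline before the argument is passed. This illustrates shell argument handling only; it makes no assumption about how `robot -D` itself parses the value.

```shell
# Literal backslash-n: the program receives the two characters '\' and 'n'.
literal="line1\nline2"
# ANSI-C quoting: bash expands \n to an actual newline before exec.
real=$'line1\nline2'

# The literal form contains no newline characters at all...
printf '%s' "$literal" | wc -l
# ...while the ANSI-C form contains exactly one.
printf '%s' "$real" | wc -l
```

This is why a value typed as `-D "line1\nline2"` can arrive at the program still containing a literal backslash-n: nothing in the quoting above ever turned it into a newline.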
|
70,064
| 7,176,390,925
|
IssuesEvent
|
2018-01-31 09:53:18
|
resin-io-modules/resin-procbots
|
https://api.github.com/repos/resin-io-modules/resin-procbots
|
closed
|
SyncBot: Not all text synchronised
|
flow/testing
|
In the user's comment beginning 'working on solving...' on https://forums.resin.io/t/auth-token-without-expiration-to-use-api/302/25 not all of the text ended up in Front.
|
1.0
|
SyncBot: Not all text synchronised - In the user's comment beginning 'working on solving...' on https://forums.resin.io/t/auth-token-without-expiration-to-use-api/302/25 not all of the text ended up in Front.
|
test
|
syncbot not all text synchronised in the user s comment beginning working on solving on not all of the text ended up in front
| 1
|
326,737
| 28,015,634,674
|
IssuesEvent
|
2023-03-27 22:22:45
|
CrazyOldBuffalo/calendar-events-skill
|
https://api.github.com/repos/CrazyOldBuffalo/calendar-events-skill
|
opened
|
Setup Integration Testing using behave python framework
|
testing
|
Setup Integration Testing using the behave framework as covered in the mycroft.ai documentation
|
1.0
|
Setup Integration Testing using behave python framework - Setup Integration Testing using the behave framework as covered in the mycroft.ai documentation
|
test
|
setup integration testing using behave python framework setup integration testing using the behave framework as covered in the mycroft ai documentation
| 1
|
290,376
| 32,068,543,104
|
IssuesEvent
|
2023-09-25 06:12:14
|
OpenZeppelin/cairo-contracts
|
https://api.github.com/repos/OpenZeppelin/cairo-contracts
|
closed
|
Replace env injection method
|
security
|
As reported by @nikitastupin ❤️
> I've noticed the [release](https://github.com/OpenZeppelin/cairo-contracts/blob/main/.github/workflows/release.yml) workflow was added previous week. It's safe but I'd rather use `"$RELEASE_VERSION"` instead of `${{ env.RELEASE_VERSION }}` as a best practice so that other devs can see it and replicate safe pattern in their workflows
> If `env.RELEASE_VERSION` was attacker-controlled then `${{ env.RELEASE_VERSION }}` would lead to command injection
> If you'd like to dive deeper I'd recommend https://securitylab.github.com/research/github-actions-untrusted-input/ for this case and https://github.com/nikitastupin/pwnhub for all other cases :slightly_smiling_face:
|
True
|
Replace env injection method - As reported by @nikitastupin ❤️
> I've noticed the [release](https://github.com/OpenZeppelin/cairo-contracts/blob/main/.github/workflows/release.yml) workflow was added previous week. It's safe but I'd rather use `"$RELEASE_VERSION"` instead of `${{ env.RELEASE_VERSION }}` as a best practice so that other devs can see it and replicate safe pattern in their workflows
> If `env.RELEASE_VERSION` was attacker-controlled then `${{ env.RELEASE_VERSION }}` would lead to command injection
> If you'd like to dive deeper I'd recommend https://securitylab.github.com/research/github-actions-untrusted-input/ for this case and https://github.com/nikitastupin/pwnhub for all other cases :slightly_smiling_face:
|
non_test
|
replace env injection method as reported by nikitastupin ❤️ i ve noticed the workflow was added previous week it s safe but i d rather use release version instead of env release version as a best practice so that other devs can see it and replicate safe pattern in their workflows if env release version was attacker controlled then env release version would lead to command injection if you d like to dive deeper i d recommend for this case and for all other cases slightly smiling face
| 0
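A minimal bash sketch of the pattern the row above warns about: pasting an attacker-controlled value into script text (which is what `${{ env.RELEASE_VERSION }}` interpolation does) lets the value be parsed as code, while passing it through the environment and quoting `"$RELEASE_VERSION"` keeps it as data. The payload value below is an invented example, not taken from any real workflow.

```shell
# Hypothetical attacker-controlled value.
RELEASE_VERSION='1.0"; echo INJECTED; echo "x'

# Unsafe: the value is spliced into the script text before bash parses
# it -- analogous to ${{ env.RELEASE_VERSION }} in a workflow step --
# so the embedded 'echo INJECTED' executes as a command.
bash -c "echo \"$RELEASE_VERSION\""

# Safe: the value travels via the environment; bash expands it after
# parsing, so it is printed verbatim and can never become a command.
RELEASE_VERSION="$RELEASE_VERSION" bash -c 'echo "$RELEASE_VERSION"'
```

The same reasoning applies to any templating layer that performs textual substitution before a shell sees the script.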
|
325,171
| 27,852,776,989
|
IssuesEvent
|
2023-03-20 20:05:09
|
microsoft/vscode-remote-release
|
https://api.github.com/repos/microsoft/vscode-remote-release
|
opened
|
Test: Remove dependence on built-in repository configs
|
containers testplan-item
|
Refs: https://github.com/microsoft/vscode-remote-release/issues/7532
- [ ] anyOS
- [ ] anyOS
Complexity: 2
---
Use Dev Containers 0.286.0-pre-release or later.
Randomly checkout one or two of the below repositories that previously had a dev container config built-in and check that when you reopen its folder in a container you are asked for the dev container template to use and that the dev container opened is indeed built from that template:
- https://github.com/aymericdamien/TensorFlow-Examples
- https://github.com/barryclark/jekyll-now
- https://github.com/django/django
- https://github.com/microsoft/vscode-azure-account
- https://github.com/python/cpython
- https://github.com/spmallick/learnopencv
- https://github.com/tensorflow/addons
- https://github.com/tensorflow/tensorflow
- https://github.com/terraform-providers/terraform-provider-azurerm
|
1.0
|
Test: Remove dependence on built-in repository configs - Refs: https://github.com/microsoft/vscode-remote-release/issues/7532
- [ ] anyOS
- [ ] anyOS
Complexity: 2
---
Use Dev Containers 0.286.0-pre-release or later.
Randomly checkout one or two of the below repositories that previously had a dev container config built-in and check that when you reopen its folder in a container you are asked for the dev container template to use and that the dev container opened is indeed built from that template:
- https://github.com/aymericdamien/TensorFlow-Examples
- https://github.com/barryclark/jekyll-now
- https://github.com/django/django
- https://github.com/microsoft/vscode-azure-account
- https://github.com/python/cpython
- https://github.com/spmallick/learnopencv
- https://github.com/tensorflow/addons
- https://github.com/tensorflow/tensorflow
- https://github.com/terraform-providers/terraform-provider-azurerm
|
test
|
test remove dependance on built in repository configs refs anyos anyos complexity use dev containers pre release or later randomly checkout one or two of the below repositories that previously had a dev container config built in and check that when you reopen its folder in a container you are asked for the dev container template to use and that the dev container opened is indeed built from that template
| 1
|
118,427
| 4,744,923,981
|
IssuesEvent
|
2016-10-21 04:05:27
|
CovertJaguar/Railcraft
|
https://api.github.com/repos/CovertJaguar/Railcraft
|
closed
|
Redstone Condition Incorrect on RF Loader
|
bug priority-medium
|
Description:
The "complete" redstone condition for the RF loader should be "when cart is full", but instead it's "Process until cart is empty".
Tested With:
RailCraft: 1.10.2-10.0.0-beta-3
Forge: 1.10.2-12.18.2.2099
|
1.0
|
Redstone Condition Incorrect on RF Loader - Description:
The "complete" redstone condition for the RF loader should be "when cart is full", but instead it's "Process until cart is empty".
Tested With:
RailCraft: 1.10.2-10.0.0-beta-3
Forge: 1.10.2-12.18.2.2099
|
non_test
|
redstone condition incorrect on rf loader description the complete redstone condition for the rf loader should be when cart is full but instead it s process until cart is empty tested with railcraft beta forge
| 0
|
296,886
| 25,583,096,663
|
IssuesEvent
|
2022-12-01 06:57:03
|
cockroachdb/cockroach
|
https://api.github.com/repos/cockroachdb/cockroach
|
opened
|
acceptance: TestDockerC failed
|
C-test-failure O-robot branch-release-22.2
|
acceptance.TestDockerC [failed](https://teamcity.cockroachdb.com/buildConfiguration/Cockroach_Nightlies_StressBazel/7783894?buildTab=log) with [artifacts](https://teamcity.cockroachdb.com/buildConfiguration/Cockroach_Nightlies_StressBazel/7783894?buildTab=artifacts#/) on release-22.2 @ [ca673993473629a46370100108dd8c185a592e91](https://github.com/cockroachdb/cockroach/commits/ca673993473629a46370100108dd8c185a592e91):
```
=== RUN TestDockerC/Success/runMode=docker
F221201 06:57:01.522177 71 acceptance/cluster/dockercluster.go:155 [-] 1 "": does not exist
F221201 06:57:01.522177 71 acceptance/cluster/dockercluster.go:155 [-] 1 !goroutine 71 [running]:
F221201 06:57:01.522177 71 acceptance/cluster/dockercluster.go:155 [-] 1 !github.com/cockroachdb/cockroach/pkg/util/log.getStacks(0x0)
F221201 06:57:01.522177 71 acceptance/cluster/dockercluster.go:155 [-] 1 ! github.com/cockroachdb/cockroach/pkg/util/log/get_stacks.go:25 +0x9c
F221201 06:57:01.522177 71 acceptance/cluster/dockercluster.go:155 [-] 1 !github.com/cockroachdb/cockroach/pkg/util/log.(*loggerT).outputLogEntry(0xc000ca5020, {{{0x0, 0x0}, {0x0, 0x0}, {0x0, 0x0}, {0x0, 0x0}}, 0x172c99080221ecbf, ...})
F221201 06:57:01.522177 71 acceptance/cluster/dockercluster.go:155 [-] 1 ! github.com/cockroachdb/cockroach/pkg/util/log/clog.go:260 +0xb7
F221201 06:57:01.522177 71 acceptance/cluster/dockercluster.go:155 [-] 1 !github.com/cockroachdb/cockroach/pkg/util/log.logfDepthInternal({0x8a6bb68, 0xc000082088}, 0x2, 0x4, 0x0, 0x0, {0x6915638, 0x14}, {0xc001946800, 0x1, ...})
F221201 06:57:01.522177 71 acceptance/cluster/dockercluster.go:155 [-] 1 ! github.com/cockroachdb/cockroach/pkg/util/log/channels.go:106 +0x6e6
F221201 06:57:01.522177 71 acceptance/cluster/dockercluster.go:155 [-] 1 !github.com/cockroachdb/cockroach/pkg/util/log.logfDepth(...)
F221201 06:57:01.522177 71 acceptance/cluster/dockercluster.go:155 [-] 1 ! github.com/cockroachdb/cockroach/pkg/util/log/channels.go:39
F221201 06:57:01.522177 71 acceptance/cluster/dockercluster.go:155 [-] 1 !github.com/cockroachdb/cockroach/pkg/util/log.Fatalf(...)
F221201 06:57:01.522177 71 acceptance/cluster/dockercluster.go:155 [-] 1 ! github.com/cockroachdb/cockroach/bazel-out/k8-fastbuild-ST-1665e0aa65f7/bin/pkg/util/log/log_channels_generated.go:848
F221201 06:57:01.522177 71 acceptance/cluster/dockercluster.go:155 [-] 1 !github.com/cockroachdb/cockroach/pkg/acceptance/cluster.CreateDocker({0x8a6bb68, 0xc000082088}, {{0x68e027f, 0x1}, {0xc000ca5890, 0x1, 0x1}, 0x12a05f200, 0x0, 0x0}, ...)
F221201 06:57:01.522177 71 acceptance/cluster/dockercluster.go:155 [-] 1 ! github.com/cockroachdb/cockroach/pkg/acceptance/cluster/pkg/acceptance/cluster/dockercluster.go:155 +0x1d6
F221201 06:57:01.522177 71 acceptance/cluster/dockercluster.go:155 [-] 1 !github.com/cockroachdb/cockroach/pkg/acceptance.StartCluster({0x8a6bb68, 0xc000082088}, 0xc000583ba0, {{0x68e027f, 0x1}, {0xc000ca5890, 0x1, 0x1}, 0x12a05f200, 0x0, ...})
F221201 06:57:01.522177 71 acceptance/cluster/dockercluster.go:155 [-] 1 ! github.com/cockroachdb/cockroach/pkg/acceptance/util_cluster.go:79 +0x3fd
F221201 06:57:01.522177 71 acceptance/cluster/dockercluster.go:155 [-] 1 !github.com/cockroachdb/cockroach/pkg/acceptance.testDocker.func1(0xc000583ba0)
F221201 06:57:01.522177 71 acceptance/cluster/dockercluster.go:155 [-] 1 ! github.com/cockroachdb/cockroach/pkg/acceptance/util_docker.go:124 +0xae7
F221201 06:57:01.522177 71 acceptance/cluster/dockercluster.go:155 [-] 1 !testing.tRunner(0xc000583ba0, 0xc00083a340)
F221201 06:57:01.522177 71 acceptance/cluster/dockercluster.go:155 [-] 1 ! GOROOT/src/testing/testing.go:1446 +0x217
F221201 06:57:01.522177 71 acceptance/cluster/dockercluster.go:155 [-] 1 !created by testing.(*T).Run
F221201 06:57:01.522177 71 acceptance/cluster/dockercluster.go:155 [-] 1 ! GOROOT/src/testing/testing.go:1493 +0x75e
F221201 06:57:01.522177 71 acceptance/cluster/dockercluster.go:155 [-] 1 !
F221201 06:57:01.522177 71 acceptance/cluster/dockercluster.go:155 [-] 1 !For more context, check log files in: /artifacts/tmp/_tmp/b82317f51c7daead0d1312a2f3300dbe/logTestDockerC1078540355
F221201 06:57:01.522177 71 acceptance/cluster/dockercluster.go:155 [-] 1 !
F221201 06:57:01.522177 71 acceptance/cluster/dockercluster.go:155 [-] 1 !
F221201 06:57:01.522177 71 acceptance/cluster/dockercluster.go:155 [-] 1 !****************************************************************************
F221201 06:57:01.522177 71 acceptance/cluster/dockercluster.go:155 [-] 1 !
F221201 06:57:01.522177 71 acceptance/cluster/dockercluster.go:155 [-] 1 !This node experienced a fatal error (printed above), and as a result the
F221201 06:57:01.522177 71 acceptance/cluster/dockercluster.go:155 [-] 1 !process is terminating.
F221201 06:57:01.522177 71 acceptance/cluster/dockercluster.go:155 [-] 1 !
F221201 06:57:01.522177 71 acceptance/cluster/dockercluster.go:155 [-] 1 !Fatal errors can occur due to faulty hardware (disks, memory, clocks) or a
F221201 06:57:01.522177 71 acceptance/cluster/dockercluster.go:155 [-] 1 !problem in CockroachDB. With your help, the support team at Cockroach Labs
F221201 06:57:01.522177 71 acceptance/cluster/dockercluster.go:155 [-] 1 !will try to determine the root cause, recommend next steps, and we can
F221201 06:57:01.522177 71 acceptance/cluster/dockercluster.go:155 [-] 1 !improve CockroachDB based on your report.
F221201 06:57:01.522177 71 acceptance/cluster/dockercluster.go:155 [-] 1 !
F221201 06:57:01.522177 71 acceptance/cluster/dockercluster.go:155 [-] 1 !Please submit a crash report by following the instructions here:
F221201 06:57:01.522177 71 acceptance/cluster/dockercluster.go:155 [-] 1 !
F221201 06:57:01.522177 71 acceptance/cluster/dockercluster.go:155 [-] 1 ! https://github.com/cockroachdb/cockroach/issues/new/choose
F221201 06:57:01.522177 71 acceptance/cluster/dockercluster.go:155 [-] 1 !
F221201 06:57:01.522177 71 acceptance/cluster/dockercluster.go:155 [-] 1 !If you would rather not post publicly, please contact us directly at:
F221201 06:57:01.522177 71 acceptance/cluster/dockercluster.go:155 [-] 1 !
F221201 06:57:01.522177 71 acceptance/cluster/dockercluster.go:155 [-] 1 ! support@cockroachlabs.com
F221201 06:57:01.522177 71 acceptance/cluster/dockercluster.go:155 [-] 1 !
F221201 06:57:01.522177 71 acceptance/cluster/dockercluster.go:155 [-] 1 !The Cockroach Labs team appreciates your feedback.
```
<p>Parameters: <code>TAGS=bazel,gss,race</code>
</p>
<details><summary>Help</summary>
<p>
See also: [How To Investigate a Go Test Failure \(internal\)](https://cockroachlabs.atlassian.net/l/c/HgfXfJgM)
</p>
</details>
<details><summary>Same failure on other branches</summary>
<p>
- #78305 acceptance: TestDockerC/Fail/runMode=docker failed [C-test-failure O-robot branch-release-22.1]
- #63746 acceptance: TestDockerC failed [C-test-failure O-robot branch-master]
</p>
</details>
/cc @cockroachdb/sql-experience
<sub>
[This test on roachdash](https://roachdash.crdb.dev/?filter=status:open%20t:.*TestDockerC.*&sort=title+created&display=lastcommented+project) | [Improve this report!](https://github.com/cockroachdb/cockroach/tree/master/pkg/cmd/internal/issues)
</sub>
|
1.0
|
acceptance: TestDockerC failed - acceptance.TestDockerC [failed](https://teamcity.cockroachdb.com/buildConfiguration/Cockroach_Nightlies_StressBazel/7783894?buildTab=log) with [artifacts](https://teamcity.cockroachdb.com/buildConfiguration/Cockroach_Nightlies_StressBazel/7783894?buildTab=artifacts#/) on release-22.2 @ [ca673993473629a46370100108dd8c185a592e91](https://github.com/cockroachdb/cockroach/commits/ca673993473629a46370100108dd8c185a592e91):
```
=== RUN TestDockerC/Success/runMode=docker
F221201 06:57:01.522177 71 acceptance/cluster/dockercluster.go:155 [-] 1 "": does not exist
F221201 06:57:01.522177 71 acceptance/cluster/dockercluster.go:155 [-] 1 !goroutine 71 [running]:
F221201 06:57:01.522177 71 acceptance/cluster/dockercluster.go:155 [-] 1 !github.com/cockroachdb/cockroach/pkg/util/log.getStacks(0x0)
F221201 06:57:01.522177 71 acceptance/cluster/dockercluster.go:155 [-] 1 ! github.com/cockroachdb/cockroach/pkg/util/log/get_stacks.go:25 +0x9c
F221201 06:57:01.522177 71 acceptance/cluster/dockercluster.go:155 [-] 1 !github.com/cockroachdb/cockroach/pkg/util/log.(*loggerT).outputLogEntry(0xc000ca5020, {{{0x0, 0x0}, {0x0, 0x0}, {0x0, 0x0}, {0x0, 0x0}}, 0x172c99080221ecbf, ...})
F221201 06:57:01.522177 71 acceptance/cluster/dockercluster.go:155 [-] 1 ! github.com/cockroachdb/cockroach/pkg/util/log/clog.go:260 +0xb7
F221201 06:57:01.522177 71 acceptance/cluster/dockercluster.go:155 [-] 1 !github.com/cockroachdb/cockroach/pkg/util/log.logfDepthInternal({0x8a6bb68, 0xc000082088}, 0x2, 0x4, 0x0, 0x0, {0x6915638, 0x14}, {0xc001946800, 0x1, ...})
F221201 06:57:01.522177 71 acceptance/cluster/dockercluster.go:155 [-] 1 ! github.com/cockroachdb/cockroach/pkg/util/log/channels.go:106 +0x6e6
F221201 06:57:01.522177 71 acceptance/cluster/dockercluster.go:155 [-] 1 !github.com/cockroachdb/cockroach/pkg/util/log.logfDepth(...)
F221201 06:57:01.522177 71 acceptance/cluster/dockercluster.go:155 [-] 1 ! github.com/cockroachdb/cockroach/pkg/util/log/channels.go:39
F221201 06:57:01.522177 71 acceptance/cluster/dockercluster.go:155 [-] 1 !github.com/cockroachdb/cockroach/pkg/util/log.Fatalf(...)
F221201 06:57:01.522177 71 acceptance/cluster/dockercluster.go:155 [-] 1 ! github.com/cockroachdb/cockroach/bazel-out/k8-fastbuild-ST-1665e0aa65f7/bin/pkg/util/log/log_channels_generated.go:848
F221201 06:57:01.522177 71 acceptance/cluster/dockercluster.go:155 [-] 1 !github.com/cockroachdb/cockroach/pkg/acceptance/cluster.CreateDocker({0x8a6bb68, 0xc000082088}, {{0x68e027f, 0x1}, {0xc000ca5890, 0x1, 0x1}, 0x12a05f200, 0x0, 0x0}, ...)
F221201 06:57:01.522177 71 acceptance/cluster/dockercluster.go:155 [-] 1 ! github.com/cockroachdb/cockroach/pkg/acceptance/cluster/pkg/acceptance/cluster/dockercluster.go:155 +0x1d6
F221201 06:57:01.522177 71 acceptance/cluster/dockercluster.go:155 [-] 1 !github.com/cockroachdb/cockroach/pkg/acceptance.StartCluster({0x8a6bb68, 0xc000082088}, 0xc000583ba0, {{0x68e027f, 0x1}, {0xc000ca5890, 0x1, 0x1}, 0x12a05f200, 0x0, ...})
F221201 06:57:01.522177 71 acceptance/cluster/dockercluster.go:155 [-] 1 ! github.com/cockroachdb/cockroach/pkg/acceptance/util_cluster.go:79 +0x3fd
F221201 06:57:01.522177 71 acceptance/cluster/dockercluster.go:155 [-] 1 !github.com/cockroachdb/cockroach/pkg/acceptance.testDocker.func1(0xc000583ba0)
F221201 06:57:01.522177 71 acceptance/cluster/dockercluster.go:155 [-] 1 ! github.com/cockroachdb/cockroach/pkg/acceptance/util_docker.go:124 +0xae7
F221201 06:57:01.522177 71 acceptance/cluster/dockercluster.go:155 [-] 1 !testing.tRunner(0xc000583ba0, 0xc00083a340)
F221201 06:57:01.522177 71 acceptance/cluster/dockercluster.go:155 [-] 1 ! GOROOT/src/testing/testing.go:1446 +0x217
F221201 06:57:01.522177 71 acceptance/cluster/dockercluster.go:155 [-] 1 !created by testing.(*T).Run
F221201 06:57:01.522177 71 acceptance/cluster/dockercluster.go:155 [-] 1 ! GOROOT/src/testing/testing.go:1493 +0x75e
F221201 06:57:01.522177 71 acceptance/cluster/dockercluster.go:155 [-] 1 !
F221201 06:57:01.522177 71 acceptance/cluster/dockercluster.go:155 [-] 1 !For more context, check log files in: /artifacts/tmp/_tmp/b82317f51c7daead0d1312a2f3300dbe/logTestDockerC1078540355
F221201 06:57:01.522177 71 acceptance/cluster/dockercluster.go:155 [-] 1 !
F221201 06:57:01.522177 71 acceptance/cluster/dockercluster.go:155 [-] 1 !
F221201 06:57:01.522177 71 acceptance/cluster/dockercluster.go:155 [-] 1 !****************************************************************************
F221201 06:57:01.522177 71 acceptance/cluster/dockercluster.go:155 [-] 1 !
F221201 06:57:01.522177 71 acceptance/cluster/dockercluster.go:155 [-] 1 !This node experienced a fatal error (printed above), and as a result the
F221201 06:57:01.522177 71 acceptance/cluster/dockercluster.go:155 [-] 1 !process is terminating.
F221201 06:57:01.522177 71 acceptance/cluster/dockercluster.go:155 [-] 1 !
F221201 06:57:01.522177 71 acceptance/cluster/dockercluster.go:155 [-] 1 !Fatal errors can occur due to faulty hardware (disks, memory, clocks) or a
F221201 06:57:01.522177 71 acceptance/cluster/dockercluster.go:155 [-] 1 !problem in CockroachDB. With your help, the support team at Cockroach Labs
F221201 06:57:01.522177 71 acceptance/cluster/dockercluster.go:155 [-] 1 !will try to determine the root cause, recommend next steps, and we can
F221201 06:57:01.522177 71 acceptance/cluster/dockercluster.go:155 [-] 1 !improve CockroachDB based on your report.
F221201 06:57:01.522177 71 acceptance/cluster/dockercluster.go:155 [-] 1 !
F221201 06:57:01.522177 71 acceptance/cluster/dockercluster.go:155 [-] 1 !Please submit a crash report by following the instructions here:
F221201 06:57:01.522177 71 acceptance/cluster/dockercluster.go:155 [-] 1 !
F221201 06:57:01.522177 71 acceptance/cluster/dockercluster.go:155 [-] 1 ! https://github.com/cockroachdb/cockroach/issues/new/choose
F221201 06:57:01.522177 71 acceptance/cluster/dockercluster.go:155 [-] 1 !
F221201 06:57:01.522177 71 acceptance/cluster/dockercluster.go:155 [-] 1 !If you would rather not post publicly, please contact us directly at:
F221201 06:57:01.522177 71 acceptance/cluster/dockercluster.go:155 [-] 1 !
F221201 06:57:01.522177 71 acceptance/cluster/dockercluster.go:155 [-] 1 ! support@cockroachlabs.com
F221201 06:57:01.522177 71 acceptance/cluster/dockercluster.go:155 [-] 1 !
F221201 06:57:01.522177 71 acceptance/cluster/dockercluster.go:155 [-] 1 !The Cockroach Labs team appreciates your feedback.
```
<p>Parameters: <code>TAGS=bazel,gss,race</code>
</p>
<details><summary>Help</summary>
<p>
See also: [How To Investigate a Go Test Failure \(internal\)](https://cockroachlabs.atlassian.net/l/c/HgfXfJgM)
</p>
</details>
<details><summary>Same failure on other branches</summary>
<p>
- #78305 acceptance: TestDockerC/Fail/runMode=docker failed [C-test-failure O-robot branch-release-22.1]
- #63746 acceptance: TestDockerC failed [C-test-failure O-robot branch-master]
</p>
</details>
/cc @cockroachdb/sql-experience
<sub>
[This test on roachdash](https://roachdash.crdb.dev/?filter=status:open%20t:.*TestDockerC.*&sort=title+created&display=lastcommented+project) | [Improve this report!](https://github.com/cockroachdb/cockroach/tree/master/pkg/cmd/internal/issues)
</sub>
|
test
|
acceptance testdockerc failed acceptance testdockerc with on release run testdockerc success runmode docker acceptance cluster dockercluster go does not exist acceptance cluster dockercluster go goroutine acceptance cluster dockercluster go github com cockroachdb cockroach pkg util log getstacks acceptance cluster dockercluster go github com cockroachdb cockroach pkg util log get stacks go acceptance cluster dockercluster go github com cockroachdb cockroach pkg util log loggert outputlogentry acceptance cluster dockercluster go github com cockroachdb cockroach pkg util log clog go acceptance cluster dockercluster go github com cockroachdb cockroach pkg util log logfdepthinternal acceptance cluster dockercluster go github com cockroachdb cockroach pkg util log channels go acceptance cluster dockercluster go github com cockroachdb cockroach pkg util log logfdepth acceptance cluster dockercluster go github com cockroachdb cockroach pkg util log channels go acceptance cluster dockercluster go github com cockroachdb cockroach pkg util log fatalf acceptance cluster dockercluster go github com cockroachdb cockroach bazel out fastbuild st bin pkg util log log channels generated go acceptance cluster dockercluster go github com cockroachdb cockroach pkg acceptance cluster createdocker acceptance cluster dockercluster go github com cockroachdb cockroach pkg acceptance cluster pkg acceptance cluster dockercluster go acceptance cluster dockercluster go github com cockroachdb cockroach pkg acceptance startcluster acceptance cluster dockercluster go github com cockroachdb cockroach pkg acceptance util cluster go acceptance cluster dockercluster go github com cockroachdb cockroach pkg acceptance testdocker acceptance cluster dockercluster go github com cockroachdb cockroach pkg acceptance util docker go acceptance cluster dockercluster go testing trunner acceptance cluster dockercluster go goroot src testing testing go acceptance cluster dockercluster go created by testing t run 
acceptance cluster dockercluster go goroot src testing testing go acceptance cluster dockercluster go acceptance cluster dockercluster go for more context check log files in artifacts tmp tmp acceptance cluster dockercluster go acceptance cluster dockercluster go acceptance cluster dockercluster go acceptance cluster dockercluster go acceptance cluster dockercluster go this node experienced a fatal error printed above and as a result the acceptance cluster dockercluster go process is terminating acceptance cluster dockercluster go acceptance cluster dockercluster go fatal errors can occur due to faulty hardware disks memory clocks or a acceptance cluster dockercluster go problem in cockroachdb with your help the support team at cockroach labs acceptance cluster dockercluster go will try to determine the root cause recommend next steps and we can acceptance cluster dockercluster go improve cockroachdb based on your report acceptance cluster dockercluster go acceptance cluster dockercluster go please submit a crash report by following the instructions here acceptance cluster dockercluster go acceptance cluster dockercluster go acceptance cluster dockercluster go acceptance cluster dockercluster go if you would rather not post publicly please contact us directly at acceptance cluster dockercluster go acceptance cluster dockercluster go support cockroachlabs com acceptance cluster dockercluster go acceptance cluster dockercluster go the cockroach labs team appreciates your feedback parameters tags bazel gss race help see also same failure on other branches acceptance testdockerc fail runmode docker failed acceptance testdockerc failed cc cockroachdb sql experience
| 1
|
309,278
| 26,659,393,558
|
IssuesEvent
|
2023-01-25 19:39:07
|
OpenModelica/OpenModelica
|
https://api.github.com/repos/OpenModelica/OpenModelica
|
closed
|
Add reference results to Buildings library testing
|
COMP/Library Testing
|
Reference results for the latest version of Buildings are available here: https://simulationresearch.lbl.gov/jmodelica/modelica-buildings/Dymola/
@sjoelund, @adrpo, is the file format ok? Can you add them to the CI tests?
|
1.0
|
Add reference results to Buildings library testing - Reference results for the latest version of Buildings are available here: https://simulationresearch.lbl.gov/jmodelica/modelica-buildings/Dymola/
@sjoelund, @adrpo, is the file format ok? Can you add them to the CI tests?
|
test
|
add reference results to buildings library testing reference results for the latest version of buildings are available here sjoelund adrpo is the file format ok can you add them to the ci tests
| 1
|
264,950
| 20,049,703,208
|
IssuesEvent
|
2022-02-03 03:53:04
|
timescale/docs
|
https://api.github.com/repos/timescale/docs
|
closed
|
[Content Bug] Outdated code example
|
bug documentation
|
_Use this template for reporting bugs in the docs._
# Describe the bug
SQL query no longer returns any rows because there is no data within the previous 6 months. (Newest data in the dataset is from 2021-04-27.)
## What do the docs say now?
```sql
SELECT time_bucket('15 days', time) as "bucket"
,city_name, avg(temp_c)
FROM weather_metrics
WHERE time > now() - (6* INTERVAL '1 month')
GROUP BY bucket, city_name
ORDER BY bucket DESC;
```
## What should the docs say?
# Page affected
[Getting started > Query your data](https://docs.timescale.com/timescaledb/latest/getting-started/query-data/#time-bucket)
# Version affected
latest
# Subject matter expert (SME)
[If known, who is a good person to ask about this topic]
# Screenshots
[Attach images of screenshots showing the bug]
# Any further info
[Anything else you want to add, or further links]
|
1.0
|
[Content Bug] Outdated code example - _Use this template for reporting bugs in the docs._
# Describe the bug
SQL query no longer returns any rows because there is no data within the previous 6 months. (Newest data in the dataset is from 2021-04-27.)
## What do the docs say now?
```sql
SELECT time_bucket('15 days', time) as "bucket"
,city_name, avg(temp_c)
FROM weather_metrics
WHERE time > now() - (6* INTERVAL '1 month')
GROUP BY bucket, city_name
ORDER BY bucket DESC;
```
## What should the docs say?
# Page affected
[Getting started > Query your data](https://docs.timescale.com/timescaledb/latest/getting-started/query-data/#time-bucket)
# Version affected
latest
# Subject matter expert (SME)
[If known, who is a good person to ask about this topic]
# Screenshots
[Attach images of screenshots showing the bug]
# Any further info
[Anything else you want to add, or further links]
|
non_test
|
outdated code example use this template for reporting bugs in the docs describe the bug sql query no longer returns any rows because there is no data within the previous months newest data in the dataset is from what do the docs say now sql select time bucket days time as bucket city name avg temp c from weather metrics where time now interval month group by bucket city name order by bucket desc what should the docs say page affected version affected latest subject matter expert sme screenshots any further info
| 0
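The Timescale row above reports that the documented `time_bucket` query returns nothing because `now() - 6 months` has moved past a static dataset (newest row: 2021-04-27). A minimal sketch of one possible fix, building the query with an explicit window anchored at the dataset's newest timestamp instead of `now()` (the table and column names are taken from the issue; the query builder itself is ours, not from the docs):

```python
from datetime import date, timedelta

def bucketed_query(newest: date, months: int = 6, bucket: str = "15 days") -> str:
    """Build the docs' time_bucket query with an explicit date window so it
    keeps returning rows for a static dataset (rough 30-day month arithmetic)."""
    start = newest - timedelta(days=30 * months)
    return (
        f"SELECT time_bucket('{bucket}', time) AS bucket, city_name, avg(temp_c)\n"
        f"FROM weather_metrics\n"
        f"WHERE time BETWEEN '{start.isoformat()}' AND '{newest.isoformat()}'\n"
        f"GROUP BY bucket, city_name\n"
        f"ORDER BY bucket DESC;"
    )

print(bucketed_query(date(2021, 4, 27)))
```

An alternative the docs could adopt is keeping the relative window but computing it from `max(time)` in a subquery, which stays correct as the sample dataset ages.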
|
332,672
| 29,490,942,899
|
IssuesEvent
|
2023-06-02 13:25:24
|
MPMG-DCC-UFMG/F01
|
https://api.github.com/repos/MPMG-DCC-UFMG/F01
|
closed
|
Teste de generalizacao para a tag Informações Insitucionais - Registro das Competências - Capinópolis
|
generalization test development template - GRP (27) tag - Informações Institucionais subtag - Registro das Competências
|
DoD: Realizar o teste de Generalização do validador da tag Informações Insitucionais - Registro das Competências para o Município de Capinópolis.
|
1.0
|
Teste de generalizacao para a tag Informações Insitucionais - Registro das Competências - Capinópolis - DoD: Realizar o teste de Generalização do validador da tag Informações Insitucionais - Registro das Competências para o Município de Capinópolis.
|
test
|
teste de generalizacao para a tag informações insitucionais registro das competências capinópolis dod realizar o teste de generalização do validador da tag informações insitucionais registro das competências para o município de capinópolis
| 1
|
446,300
| 31,467,104,784
|
IssuesEvent
|
2023-08-30 03:32:36
|
supabase/supabase
|
https://api.github.com/repos/supabase/supabase
|
opened
|
Update Documentation to Include Docker Setup
|
documentation
|
# Improve documentation
## Link
https://github.com/supabase/supabase/blob/master/DEVELOPERS.md
## Describe the problem
I think it would benefit newcomers if the documentation included a dedicated section on setting up Docker and how to do so within the context of a forked Supabase repository. Personally, I was unaware that Docker might be necessary and how to go about setting it up when I first started trying to contribute.
## Describe the improvement
A section on the provided link on how to set up Docker
|
1.0
|
Update Documentation to Include Docker Setup - # Improve documentation
## Link
https://github.com/supabase/supabase/blob/master/DEVELOPERS.md
## Describe the problem
I think it would benefit newcomers if the documentation included a dedicated section on setting up Docker and how to do so within the context of a forked Supabase repository. Personally, I was unaware that Docker might be necessary and how to go about setting it up when I first started trying to contribute.
## Describe the improvement
A section on the provided link on how to set up Docker
|
non_test
|
update documentation to include docker setup improve documentation link describe the problem i think it would benefit newcomers if the documentation included a dedicated section on setting up docker and how to do so within the context of a forked supabase repository personally i was unaware that docker might be necessary and how to go about setting it up when i first started trying to contribute describe the improvement a section on the provided link on how to set up docker
| 0
|
87,246
| 8,068,856,366
|
IssuesEvent
|
2018-08-06 01:34:32
|
capitalone/cloud-custodian
|
https://api.github.com/repos/capitalone/cloud-custodian
|
closed
|
ci - drone failures on master branch because of docs
|
area/test-infra kind/bug
|
the past few commits have been failing on master due to the ghpages make recipe failing
```
$ make ghpages
git checkout gh-pages && \
mv docs/build/html new-docs && \
rm -rf docs && \
mv new-docs docs && \
git add -u && \
git add -A && \
git commit -m "Updated generated Sphinx documentation"
error: The following untracked working tree files would be overwritten by checkout:
.pytest_cache/README.md
tools/c7n_azure/tests/cassettes/TagsTest.test_tag_trim_space_must_be_btwn_0_and_15.yaml
Please move or remove them before you can switch branches.
Aborting
Makefile:47: recipe for target 'ghpages' failed
make: [ghpages] Error 1 (ignored)
$ git branch
* master
$ git checkout master
Already on 'master'
[info] Pulling image plugins/drone-git-push:latest
Drone Git Push Plugin built from e1d0995
$ git push origin gh-pages:gh-pages
error: src refspec gh-pages does not match any.
error: failed to push some refs to 'https://github.com/capitalone/cloud-custodian.git'
exit status 1
[info] build failed (exit code 1)
```
|
1.0
|
ci - drone failures on master branch because of docs - the past few commits have been failing on master due to the ghpages make recipe failing
```
$ make ghpages
git checkout gh-pages && \
mv docs/build/html new-docs && \
rm -rf docs && \
mv new-docs docs && \
git add -u && \
git add -A && \
git commit -m "Updated generated Sphinx documentation"
error: The following untracked working tree files would be overwritten by checkout:
.pytest_cache/README.md
tools/c7n_azure/tests/cassettes/TagsTest.test_tag_trim_space_must_be_btwn_0_and_15.yaml
Please move or remove them before you can switch branches.
Aborting
Makefile:47: recipe for target 'ghpages' failed
make: [ghpages] Error 1 (ignored)
$ git branch
* master
$ git checkout master
Already on 'master'
[info] Pulling image plugins/drone-git-push:latest
Drone Git Push Plugin built from e1d0995
$ git push origin gh-pages:gh-pages
error: src refspec gh-pages does not match any.
error: failed to push some refs to 'https://github.com/capitalone/cloud-custodian.git'
exit status 1
[info] build failed (exit code 1)
```
|
test
|
ci drone failures on master branch because of docs the past few commits have been failing on master due to the ghpages make recipe failing make ghpages git checkout gh pages mv docs build html new docs rm rf docs mv new docs docs git add u git add a git commit m updated generated sphinx documentation error the following untracked working tree files would be overwritten by checkout pytest cache readme md tools azure tests cassettes tagstest test tag trim space must be btwn and yaml please move or remove them before you can switch branches aborting makefile recipe for target ghpages failed make error ignored git branch master git checkout master already on master pulling image plugins drone git push latest drone git push plugin built from git push origin gh pages gh pages error src refspec gh pages does not match any error failed to push some refs to exit status build failed exit code
| 1
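In the cloud-custodian CI log above, `git checkout gh-pages` aborts because untracked files (`.pytest_cache/README.md`, a cassette YAML) would be overwritten. A small sketch of how a recipe could detect those files before switching branches, by parsing `git status --porcelain` output (the function name is ours; the sample paths are the ones from the log):

```python
def untracked_paths(porcelain: str) -> list[str]:
    """Return the untracked paths from `git status --porcelain` output --
    the files that make `git checkout gh-pages` abort in the log above.
    Porcelain marks untracked entries with a leading '?? '."""
    return [line[3:] for line in porcelain.splitlines() if line.startswith("?? ")]
```

A recipe could then `git clean` or `.gitignore` exactly those paths before the checkout, rather than letting the `ghpages` target fail and push a nonexistent branch.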
|
139,590
| 12,875,793,342
|
IssuesEvent
|
2020-07-11 00:45:46
|
pacificclimate/thunderbird
|
https://api.github.com/repos/pacificclimate/thunderbird
|
opened
|
Update version
|
documentation
|
The version is out of date in several places in the code. It should match the latest version lists in ["releases"](https://github.com/pacificclimate/thunderbird/releases).
|
1.0
|
Update version - The version is out of date in several places in the code. It should match the latest version lists in ["releases"](https://github.com/pacificclimate/thunderbird/releases).
|
non_test
|
update version the version is out of date in several places in the code it should match the latest version lists in
| 0
|
74,506
| 15,350,043,517
|
IssuesEvent
|
2021-03-01 01:13:35
|
woopignet/woopig-wordpress
|
https://api.github.com/repos/woopignet/woopig-wordpress
|
closed
|
Remove Files With Passwords from woopig-wordpress repo.
|
security
|
Leave on the server of course and add the filenames to .gitignore so they don't try to push a file delete to the server. That will wreck things.
|
True
|
Remove Files With Passwords from woopig-wordpress repo. - Leave on the server of course and add the filenames to .gitignore so they don't try to push a file delete to the server. That will wreck things.
|
non_test
|
remove files with passwords from woopig wordpress repo leave on the server of course and add the filenames to gitignore so they don t try to push a file delete to the server that will wreck things
| 0
|
345,734
| 24,872,878,129
|
IssuesEvent
|
2022-10-27 16:31:24
|
submariner-io/submariner-website
|
https://api.github.com/repos/submariner-io/submariner-website
|
closed
|
Document OVN Globalnet support
|
documentation
|
Globalnet is now supported with OVN (https://github.com/submariner-io/submariner/issues/383). The following docs should be updated:
- [ ] https://submariner.io/getting-started/architecture/networkplugin-syncer/ovn-kubernetes/
- [ ] https://submariner.io/operations/known-issues/#globalnet
|
1.0
|
Document OVN Globalnet support - Globalnet is now supported with OVN (https://github.com/submariner-io/submariner/issues/383). The following docs should be updated:
- [ ] https://submariner.io/getting-started/architecture/networkplugin-syncer/ovn-kubernetes/
- [ ] https://submariner.io/operations/known-issues/#globalnet
|
non_test
|
document ovn globalnet support globalnet is now supported with ovn the following docs should be updated
| 0
|
158,325
| 24,823,550,176
|
IssuesEvent
|
2022-10-25 18:31:48
|
SasView/sasview
|
https://api.github.com/repos/SasView/sasview
|
closed
|
How does/should SasView handle zero intensities? (Trac #277)
|
Enhancement Migrated from Trac Major SasView Fitting Redesign Stale
|
Zero intensities can arise in a reduced data file as a result of
'beam blocked' or 'electronic background' measurements, or masking.
Issues arising are:
- how does one discern between genuinely zero intensities and
intensities that (for whatever reason) are unknown or to be ignored;
- what uncertainty should be assigned to a zero intensity? (The issue
of zero uncertainties has been previously raised as a problem);
- should SasView have a 'how to handle zeros' option in its View ->
Startup Setting, for example (AJJ suggestion)?
Migrated from http://trac.sasview.org/ticket/277
```json
{
"status": "new",
"changetime": "2014-12-23T09:53:39",
"_ts": "2014-12-23 09:53:39.528593+00:00",
"description": "Zero intensities can arise in a reduced data file as a result of \n'beam blocked' or 'electronic background' measurements, or masking.\n\nIssues arising are:\n- how does one discern between genuinely zero intensities and \nintensities that (for whatever reason) are unknown or to be ignored;\n\n- what uncertainty should be assigned to a zero intensity? (The issue \nof zero uncertainties has been previously raised as a problem);\n\n- should SasView have a 'how to handle zeros' option in its View -> \nStartup Setting, for example (AJJ suggestion)?\n",
"reporter": "smk78",
"cc": "",
"resolution": "",
"workpackage": "SasView Fitting Redesign",
"time": "2014-12-23T09:53:39",
"component": "SasView",
"summary": "How does/should SasView handle zero intensities?",
"priority": "major",
"keywords": "",
"milestone": "SasView Next Release +1",
"owner": "",
"type": "enhancement"
}
```
|
1.0
|
How does/should SasView handle zero intensities? (Trac #277) - Zero intensities can arise in a reduced data file as a result of
'beam blocked' or 'electronic background' measurements, or masking.
Issues arising are:
- how does one discern between genuinely zero intensities and
intensities that (for whatever reason) are unknown or to be ignored;
- what uncertainty should be assigned to a zero intensity? (The issue
of zero uncertainties has been previously raised as a problem);
- should SasView have a 'how to handle zeros' option in its View ->
Startup Setting, for example (AJJ suggestion)?
Migrated from http://trac.sasview.org/ticket/277
```json
{
"status": "new",
"changetime": "2014-12-23T09:53:39",
"_ts": "2014-12-23 09:53:39.528593+00:00",
"description": "Zero intensities can arise in a reduced data file as a result of \n'beam blocked' or 'electronic background' measurements, or masking.\n\nIssues arising are:\n- how does one discern between genuinely zero intensities and \nintensities that (for whatever reason) are unknown or to be ignored;\n\n- what uncertainty should be assigned to a zero intensity? (The issue \nof zero uncertainties has been previously raised as a problem);\n\n- should SasView have a 'how to handle zeros' option in its View -> \nStartup Setting, for example (AJJ suggestion)?\n",
"reporter": "smk78",
"cc": "",
"resolution": "",
"workpackage": "SasView Fitting Redesign",
"time": "2014-12-23T09:53:39",
"component": "SasView",
"summary": "How does/should SasView handle zero intensities?",
"priority": "major",
"keywords": "",
"milestone": "SasView Next Release +1",
"owner": "",
"type": "enhancement"
}
```
|
non_test
|
how does should sasview handle zero intensities trac zero intensities can arise in a reduced data file as a result of beam blocked or electronic background measurements or masking issues arising are how does one discern between genuinely zero intensities and intensities that for whatever reason are unknown or to be ignored what uncertainty should be assigned to a zero intensity the issue of zero uncertainties has been previously raised as a problem should sasview have a how to handle zeros option in its view startup setting for example ajj suggestion migrated from json status new changetime ts description zero intensities can arise in a reduced data file as a result of n beam blocked or electronic background measurements or masking n nissues arising are n how does one discern between genuinely zero intensities and nintensities that for whatever reason are unknown or to be ignored n n what uncertainty should be assigned to a zero intensity the issue nof zero uncertainties has been previously raised as a problem n n should sasview have a how to handle zeros option in its view nstartup setting for example ajj suggestion n reporter cc resolution workpackage sasview fitting redesign time component sasview summary how does should sasview handle zero intensities priority major keywords milestone sasview next release owner type enhancement
| 0
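The SasView ticket above asks how zero intensities and zero uncertainties should be handled. One possible policy, sketched here only as an illustration (this is not SasView's actual behavior): treat exactly-zero intensities as masked points and replace zero uncertainties on the remaining points with a relative floor.

```python
import numpy as np

def mask_zeros(intensity, d_intensity, min_rel_err=0.01):
    """Drop exactly-zero intensities (treated as unknown/masked) and give the
    surviving points with zero uncertainty a floor of min_rel_err * |I|."""
    intensity = np.asarray(intensity, dtype=float)
    d_intensity = np.asarray(d_intensity, dtype=float)
    keep = intensity != 0.0
    i, di = intensity[keep], d_intensity[keep]
    di = np.where(di == 0.0, np.abs(i) * min_rel_err, di)
    return i, di
```

The open design question in the ticket remains: a genuinely-measured zero is indistinguishable from a masked point in this scheme, which is why a startup setting (the AJJ suggestion) may be needed to choose between policies.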
|
147,789
| 13,215,580,740
|
IssuesEvent
|
2020-08-17 00:09:04
|
MusicTheorist/ArrayVisualizer
|
https://api.github.com/repos/MusicTheorist/ArrayVisualizer
|
closed
|
Use Aphitorite's cleaner version of original disparity formula
|
documentation enhancement fixed
|
double len = 2 * Math.abs(Math.abs(array[i] - i) - ArrayVisualizer.getCurrentLength() * 0.5) / ArrayVisualizer.getCurrentLength();
|
1.0
|
Use Aphitorite's cleaner version of original disparity formula - double len = 2 * Math.abs(Math.abs(array[i] - i) - ArrayVisualizer.getCurrentLength() * 0.5) / ArrayVisualizer.getCurrentLength();
|
non_test
|
use aphitorite s cleaner version of original disparity formula double len math abs math abs array i arrayvisualizer getcurrentlength arrayvisualizer getcurrentlength
| 0
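The row above quotes the cleaner disparity formula as a single Java expression. A direct Python transcription, to make its behavior easy to check (the function name is ours): the result is 1 when an element is exactly in place and 0 when it is half the array length away.

```python
def disparity(value, index, length):
    """Python transcription of the quoted one-liner:
    2 * |  |value - index| - length/2  | / length."""
    return 2 * abs(abs(value - index) - length * 0.5) / length
```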
|
229,064
| 18,279,714,955
|
IssuesEvent
|
2021-10-05 00:31:51
|
aces/Loris
|
https://api.github.com/repos/aces/Loris
|
opened
|
[DQT & Data Query Tool (Beta)] Running query with a filter shows results that should have been filtered out
|
Bug 24.0.0-testing
|
**Describe the bug**
For both the older version of the DQT and the 'Data Query Tool (Beta)', running a query after defining a filter shows results that should have been filtered out.
**To Reproduce**
Steps to reproduce the behavior (attach screenshots if applicable):
1. Go to either the older version of the DQT or the 'Data Query Tool (Beta)'
2. On the 'Define Fields' tab create a query by selecting an instrument and fields (eg. AOSI, all fields)
3. On the 'Define Filters' tab, create a filter (eg. AOSI, 'Administration' = 'All')
4. Run the query. The results table will display results that should have been filtered out.
**What did you expect to happen?**
All results in the results table should correspond to the criteria established for the filter.
**Browser Environment (please complete the following information):**
- Browser: Chrome
**Server Environment (if known):**
- LORIS Version: v24.0 testing
|
1.0
|
[DQT & Data Query Tool (Beta)] Running query with a filter shows results that should have been filtered out - **Describe the bug**
For both the older version of the DQT and the 'Data Query Tool (Beta)', running a query after defining a filter shows results that should have been filtered out.
**To Reproduce**
Steps to reproduce the behavior (attach screenshots if applicable):
1. Go to either the older version of the DQT or the 'Data Query Tool (Beta)'
2. On the 'Define Fields' tab create a query by selecting an instrument and fields (eg. AOSI, all fields)
3. On the 'Define Filters' tab, create a filter (eg. AOSI, 'Administration' = 'All')
4. Run the query. The results table will display results that should have been filtered out.
**What did you expect to happen?**
All results in the results table should correspond to the criteria established for the filter.
**Browser Environment (please complete the following information):**
- Browser: Chrome
**Server Environment (if known):**
- LORIS Version: v24.0 testing
|
test
|
running query with a filter shows results that should have been filtered out describe the bug for both the older version of the dqt and the data query tool beta running a query after defining a filter shows results that should have been filtered out to reproduce steps to reproduce the behavior attach screenshots if applicable go to either the older version of the dqt and the data query tool beta on the define fields tab create a query by selecting an instrument and fields eg aosi all fields on the define filters tab create a filter eg aosi administration all run the query the results table will display results that should have been filtered out what did you expect to happen all results in the results table should correspond to the criteria established for the filter browser environment please complete the following information browser chrome server environment if known loris version testing
| 1
|
249,742
| 21,188,962,844
|
IssuesEvent
|
2022-04-08 15:19:28
|
dotnet/sdk
|
https://api.github.com/repos/dotnet/sdk
|
opened
|
Testhost runtimeconfig is rewritten
|
Area-DotNet Test untriaged
|
### Describe the bug
Testhost uses a collection of testhost-*runtimeconfig.json files that are shipped with SDK, and that help projects that are not using Microsoft.NET.Test.SDK, or that don't generate their own runtimeconfig to run. They are used as fallback and should help the application target similar runtime as what it defines in its attributes.
An example file (testhost-3.1.runtimeconfig.json) looks like this:
```json
{
"runtimeOptions": {
"tfm": "netcoreapp3.1",
"framework": {
"name": "Microsoft.NETCore.App",
"version": "3.1.0-preview.0"
}
}
}
```
This file is rewritten by sdk build to always contain the current SDK version, which is not what the test app should target.
It happens here: https://github.com/dotnet/sdk/blob/main/src/Layout/redist/targets/GenerateLayout.targets#L403-L428
### To Reproduce
Look into `"C:\Program Files\dotnet\sdk\6.0.201\testhost-3.1.runtimeconfig.json"`
```json
{
"runtimeOptions": {
"tfm": "netcoreapp3.1",
"framework": {
"name": "Microsoft.NETCore.App",
"version": "6.0.3"
}
}
}
```
### Exceptions (if any)
None.
### Further technical details
|
1.0
|
Testhost runtimeconfig is rewritten - ### Describe the bug
Testhost uses a collection of testhost-*runtimeconfig.json files that are shipped with SDK, and that help projects that are not using Microsoft.NET.Test.SDK, or that don't generate their own runtimeconfig to run. They are used as fallback and should help the application target similar runtime as what it defines in its attributes.
An example file (testhost-3.1.runtimeconfig.json) looks like this:
```json
{
"runtimeOptions": {
"tfm": "netcoreapp3.1",
"framework": {
"name": "Microsoft.NETCore.App",
"version": "3.1.0-preview.0"
}
}
}
```
This file is rewritten by sdk build to always contain the current SDK version, which is not what the test app should target.
It happens here: https://github.com/dotnet/sdk/blob/main/src/Layout/redist/targets/GenerateLayout.targets#L403-L428
### To Reproduce
Look into `"C:\Program Files\dotnet\sdk\6.0.201\testhost-3.1.runtimeconfig.json"`
```json
{
"runtimeOptions": {
"tfm": "netcoreapp3.1",
"framework": {
"name": "Microsoft.NETCore.App",
"version": "6.0.3"
}
}
}
```
### Exceptions (if any)
None.
### Further technical details
|
test
|
testhost runtimeconfig is rewritten describe the bug testhost uses a collection of testhost runtimeconfig json files that are shipped with sdk and that help projects that are not using microsoft net test sdk or that don t generate their own runtimeconfig to run they are used as fallback and should help the application target similar runtime as what it defines in its attributes an example file testhost runtimeconfig json looks like this json runtimeoptions tfm framework name microsoft netcore app version preview this file is rewritten by sdk build to always contain the current sdk version which is not what the test app should target it happens here to reproduce look into c program files dotnet sdk testhost runtimeconfig json json runtimeoptions tfm framework name microsoft netcore app version exceptions if any none further technical details
| 1
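The dotnet/sdk issue above shows the per-TFM `testhost-3.1.runtimeconfig.json` being stamped with the SDK's own framework version (`6.0.3`) instead of the 3.1 fallback. A sketch of the rewrite step in question, reduced to its essence so the bug is easy to see (the function name is ours, not from GenerateLayout.targets):

```python
import json

def patch_version(config_text: str, sdk_version: str) -> str:
    """Rewrite the framework version in a testhost runtimeconfig --
    unconditionally, as the layout step currently does. Per the issue,
    the per-TFM files should arguably be excluded from this rewrite."""
    cfg = json.loads(config_text)
    cfg["runtimeOptions"]["framework"]["version"] = sdk_version
    return json.dumps(cfg, indent=2)
```

Applied to the 3.1 file, this is exactly how `"3.1.0-preview.0"` becomes `"6.0.3"` while the `tfm` still says `netcoreapp3.1`.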
|
330,579
| 28,442,221,673
|
IssuesEvent
|
2023-04-16 02:50:26
|
pombase/pombase-chado
|
https://api.github.com/repos/pombase/pombase-chado
|
closed
|
Read and store the allele_synonym column in PHAF files
|
bug needs testing
|
Currently the loading code ignores the `allele_synonym` column. There aren't many PHAF files with synonyms which is probably why we haven't bothered before.
Manu's new file has quite a few: `pombe-embl/external_data/phaf_files/chado_load/htp_phafs/PMID_34984977_phaf.tsv`
so I'll use that for testing.
Apart from that file there are only four other allele synonyms in the PHAF files.
@manulera
|
1.0
|
Read and store the allele_synonym column in PHAF files - Currently the loading code ignores the `allele_synonym` column. There aren't many PHAF files with synonyms which is probably why we haven't bothered before.
Manu's new file has quite a few: `pombe-embl/external_data/phaf_files/chado_load/htp_phafs/PMID_34984977_phaf.tsv`
so I'll use that for testing.
Apart from that file there are only four other allele synonyms in the PHAF files.
@manulera
|
test
|
read and store the allele synonym column in phaf files currently the loading code ignores the allele synonym column there aren t many phaf files with synonyms which is probably why we haven t bothered before manu s new file has quite a few pombe embl external data phaf files chado load htp phafs pmid phaf tsv so i ll use that for testing apart from that file there are only four other allele synonyms in the phaf files manulera
| 1
|
179,797
| 21,581,123,021
|
IssuesEvent
|
2022-05-02 18:50:47
|
xmidt-org/talaria
|
https://api.github.com/repos/xmidt-org/talaria
|
closed
|
CVE-2021-3121 (High) detected in github.com/hashicorp/consul-v1.7.0 - autoclosed
|
security vulnerability
|
## CVE-2021-3121 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>github.com/hashicorp/consul-v1.7.0</b></p></summary>
<p>Consul is a distributed, highly available, and data center aware solution to connect and configure applications across dynamic, distributed infrastructure.</p>
<p>
Dependency Hierarchy:
- github.com/go-kit/kit-v0.10.0 (Root Library)
- :x: **github.com/hashicorp/consul-v1.7.0** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/xmidt-org/talaria/commit/5efaa8f45af40b95fc2c19fa836010de6338e8fc">5efaa8f45af40b95fc2c19fa836010de6338e8fc</a></p>
<p>Found in base branch: <b>main</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
An issue was discovered in GoGo Protobuf before 1.3.2. plugin/unmarshal/unmarshal.go lacks certain index validation, aka the "skippy peanut butter" issue.
<p>Publish Date: 2021-01-11
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-3121>CVE-2021-3121</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>8.6</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: Low
- Integrity Impact: Low
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-3121">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-3121</a></p>
<p>Release Date: 2021-01-11</p>
<p>Fix Resolution: v1.3.2</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
True
|
CVE-2021-3121 (High) detected in github.com/hashicorp/consul-v1.7.0 - autoclosed - ## CVE-2021-3121 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>github.com/hashicorp/consul-v1.7.0</b></p></summary>
<p>Consul is a distributed, highly available, and data center aware solution to connect and configure applications across dynamic, distributed infrastructure.</p>
<p>
Dependency Hierarchy:
- github.com/go-kit/kit-v0.10.0 (Root Library)
- :x: **github.com/hashicorp/consul-v1.7.0** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/xmidt-org/talaria/commit/5efaa8f45af40b95fc2c19fa836010de6338e8fc">5efaa8f45af40b95fc2c19fa836010de6338e8fc</a></p>
<p>Found in base branch: <b>main</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
An issue was discovered in GoGo Protobuf before 1.3.2. plugin/unmarshal/unmarshal.go lacks certain index validation, aka the "skippy peanut butter" issue.
<p>Publish Date: 2021-01-11
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-3121>CVE-2021-3121</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>8.6</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: Low
- Integrity Impact: Low
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-3121">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-3121</a></p>
<p>Release Date: 2021-01-11</p>
<p>Fix Resolution: v1.3.2</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
non_test
|
cve high detected in github com hashicorp consul autoclosed cve high severity vulnerability vulnerable library github com hashicorp consul consul is a distributed highly available and data center aware solution to connect and configure applications across dynamic distributed infrastructure dependency hierarchy github com go kit kit root library x github com hashicorp consul vulnerable library found in head commit a href found in base branch main vulnerability details an issue was discovered in gogo protobuf before plugin unmarshal unmarshal go lacks certain index validation aka the skippy peanut butter issue publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact low integrity impact low availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution step up your open source security game with whitesource
| 0
|
204,145
| 15,419,225,182
|
IssuesEvent
|
2021-03-05 09:52:30
|
TuragaLab/DECODE
|
https://api.github.com/repos/TuragaLab/DECODE
|
closed
|
Segmentation fault on testing the train implementation
|
bug help wanted tests
|
Got segmentation fault on `decode/test/test_train_val_impl.py` even though the test itself passed.
Can only reproduce this on Ubuntu with not so much RAM.
Though I cannot see excessive RAM usage in the monitor. The model is mocked and super small ...
Maybe this has something to do with the fact that multiprocessing fails for some people?
Debugging with gdb gives:
```
(gdb) r -m pytest decode/test/test_train_val_impl.py
Starting program: /home/lucas/miniconda3/envs/decode_dev_cpu/bin/python -m pytest decode/test/test_train_val_impl.py
[Thread debugging using libthread_db enabled]
Using host libthread_db library "/lib/x86_64-linux-gnu/libthread_db.so.1".
====================================================================== test session starts =======================================================================
platform linux -- Python 3.9.1, pytest-6.2.2, py-1.10.0, pluggy-0.13.1
rootdir: /home/lucas/git/DECODE/decode/test, configfile: pytest.ini
collected 2 items
decode/test/test_train_val_impl.py [New Thread 0x7fff1958d700 (LWP 2866)]
[New Thread 0x7fff13fff700 (LWP 2867)]
[New Thread 0x7fff0eee1700 (LWP 2868)]
.. [100%]
======================================================================== warnings summary ========================================================================
../../miniconda3/envs/decode_dev_cpu/lib/python3.9/site-packages/torch/cuda/__init__.py:52
/home/lucas/miniconda3/envs/decode_dev_cpu/lib/python3.9/site-packages/torch/cuda/__init__.py:52: UserWarning: CUDA initialization: Found no NVIDIA driver on your system. Please check that you have an NVIDIA GPU and installed a driver from http://www.nvidia.com/Download/index.aspx (Triggered internally at /opt/conda/conda-bld/pytorch_1607370151529/work/c10/cuda/CUDAFunctions.cpp:100.)
return torch._C._cuda_getDeviceCount() > 0
-- Docs: https://docs.pytest.org/en/stable/warnings.html
================================================================= 2 passed, 1 warning in 25.09s ==================================================================
[Thread 0x7fff1958d700 (LWP 2866) exited]
Thread 4 "python" received signal SIGSEGV, Segmentation fault.
[Switching to Thread 0x7fff0eee1700 (LWP 2868)]
0x000055555576b8ba in PyThreadState_Clear () at /home/conda/feedstock_root/build_artifacts/python-split_1611624120657/work/Python/pystate.c:785
785 /home/conda/feedstock_root/build_artifacts/python-split_1611624120657/work/Python/pystate.c: No such file or directory.
(gdb) where
#0 0x000055555576b8ba in PyThreadState_Clear () at /home/conda/feedstock_root/build_artifacts/python-split_1611624120657/work/Python/pystate.c:785
#1 0x00007fffc64edb1c in pybind11::gil_scoped_acquire::dec_ref() ()
from /home/lucas/miniconda3/envs/decode_dev_cpu/lib/python3.9/site-packages/torch/lib/libtorch_python.so
#2 0x00007fffc64edb59 in pybind11::gil_scoped_acquire::~gil_scoped_acquire() ()
from /home/lucas/miniconda3/envs/decode_dev_cpu/lib/python3.9/site-packages/torch/lib/libtorch_python.so
#3 0x00007fffc6815dd9 in torch::autograd::python::PythonEngine::thread_init(int, std::shared_ptr<torch::autograd::ReadyQueue> const&, bool) ()
from /home/lucas/miniconda3/envs/decode_dev_cpu/lib/python3.9/site-packages/torch/lib/libtorch_python.so
#4 0x00007ffff4431067 in std::execute_native_thread_routine (__p=0x555559bca5f0)
at /home/conda/feedstock_root/build_artifacts/ctng-compilers_1610729750655/work/.build/x86_64-conda-linux-gnu/src/gcc/libstdc++-v3/src/c++11/thread.cc:80
#5 0x00007ffff7bbd6db in start_thread (arg=0x7fff0eee1700) at pthread_create.c:463
#6 0x00007ffff6f39a3f in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:95
(gdb)
```
|
1.0
|
Segmentation fault on testing the train implementation - Got segmentation fault on `decode/test/test_train_val_impl.py` even though the test itself passed.
Can only reproduce this on Ubuntu with not so much RAM.
Though I cannot see excessive RAM usage in themonitor. The model is mocked and super small ...
Maybe this has something to do with the fact that multiprocessing fails for some people?
Debugging with gdb gives:
```
(gdb) r -m pytest decode/test/test_train_val_impl.py
Starting program: /home/lucas/miniconda3/envs/decode_dev_cpu/bin/python -m pytest decode/test/test_train_val_impl.py
[Thread debugging using libthread_db enabled]
Using host libthread_db library "/lib/x86_64-linux-gnu/libthread_db.so.1".
====================================================================== test session starts =======================================================================
platform linux -- Python 3.9.1, pytest-6.2.2, py-1.10.0, pluggy-0.13.1
rootdir: /home/lucas/git/DECODE/decode/test, configfile: pytest.ini
collected 2 items
decode/test/test_train_val_impl.py [New Thread 0x7fff1958d700 (LWP 2866)]
[New Thread 0x7fff13fff700 (LWP 2867)]
[New Thread 0x7fff0eee1700 (LWP 2868)]
.. [100%]
======================================================================== warnings summary ========================================================================
../../miniconda3/envs/decode_dev_cpu/lib/python3.9/site-packages/torch/cuda/__init__.py:52
/home/lucas/miniconda3/envs/decode_dev_cpu/lib/python3.9/site-packages/torch/cuda/__init__.py:52: UserWarning: CUDA initialization: Found no NVIDIA driver on your system. Please check that you have an NVIDIA GPU and installed a driver from http://www.nvidia.com/Download/index.aspx (Triggered internally at /opt/conda/conda-bld/pytorch_1607370151529/work/c10/cuda/CUDAFunctions.cpp:100.)
return torch._C._cuda_getDeviceCount() > 0
-- Docs: https://docs.pytest.org/en/stable/warnings.html
================================================================= 2 passed, 1 warning in 25.09s ==================================================================
[Thread 0x7fff1958d700 (LWP 2866) exited]
Thread 4 "python" received signal SIGSEGV, Segmentation fault.
[Switching to Thread 0x7fff0eee1700 (LWP 2868)]
0x000055555576b8ba in PyThreadState_Clear () at /home/conda/feedstock_root/build_artifacts/python-split_1611624120657/work/Python/pystate.c:785
785 /home/conda/feedstock_root/build_artifacts/python-split_1611624120657/work/Python/pystate.c: No such file or directory.
(gdb) where
#0 0x000055555576b8ba in PyThreadState_Clear () at /home/conda/feedstock_root/build_artifacts/python-split_1611624120657/work/Python/pystate.c:785
#1 0x00007fffc64edb1c in pybind11::gil_scoped_acquire::dec_ref() ()
from /home/lucas/miniconda3/envs/decode_dev_cpu/lib/python3.9/site-packages/torch/lib/libtorch_python.so
#2 0x00007fffc64edb59 in pybind11::gil_scoped_acquire::~gil_scoped_acquire() ()
from /home/lucas/miniconda3/envs/decode_dev_cpu/lib/python3.9/site-packages/torch/lib/libtorch_python.so
#3 0x00007fffc6815dd9 in torch::autograd::python::PythonEngine::thread_init(int, std::shared_ptr<torch::autograd::ReadyQueue> const&, bool) ()
from /home/lucas/miniconda3/envs/decode_dev_cpu/lib/python3.9/site-packages/torch/lib/libtorch_python.so
#4 0x00007ffff4431067 in std::execute_native_thread_routine (__p=0x555559bca5f0)
at /home/conda/feedstock_root/build_artifacts/ctng-compilers_1610729750655/work/.build/x86_64-conda-linux-gnu/src/gcc/libstdc++-v3/src/c++11/thread.cc:80
#5 0x00007ffff7bbd6db in start_thread (arg=0x7fff0eee1700) at pthread_create.c:463
#6 0x00007ffff6f39a3f in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:95
(gdb)
```
|
test
|
segmentation fault on testing the train implementation got segmentation fault on decode test test train val impl py even though the test itself passed can only reproduce this on ubuntu with not so much ram though i cannot see excessive ram usage in themonitor the model is mocked and super small maybe this has something to do with the fact that multiprocessing fails for some people debugging with gdb gives gdb r m pytest decode test test train val impl py starting program home lucas envs decode dev cpu bin python m pytest decode test test train val impl py using host libthread db library lib linux gnu libthread db so test session starts platform linux python pytest py pluggy rootdir home lucas git decode decode test configfile pytest ini collected items decode test test train val impl py warnings summary envs decode dev cpu lib site packages torch cuda init py home lucas envs decode dev cpu lib site packages torch cuda init py userwarning cuda initialization found no nvidia driver on your system please check that you have an nvidia gpu and installed a driver from triggered internally at opt conda conda bld pytorch work cuda cudafunctions cpp return torch c cuda getdevicecount docs passed warning in thread python received signal sigsegv segmentation fault in pythreadstate clear at home conda feedstock root build artifacts python split work python pystate c home conda feedstock root build artifacts python split work python pystate c no such file or directory gdb where in pythreadstate clear at home conda feedstock root build artifacts python split work python pystate c in gil scoped acquire dec ref from home lucas envs decode dev cpu lib site packages torch lib libtorch python so in gil scoped acquire gil scoped acquire from home lucas envs decode dev cpu lib site packages torch lib libtorch python so in torch autograd python pythonengine thread init int std shared ptr const bool from home lucas envs decode dev cpu lib site packages torch lib libtorch python so in std 
execute native thread routine p at home conda feedstock root build artifacts ctng compilers work build conda linux gnu src gcc libstdc src c thread cc in start thread arg at pthread create c in clone at sysdeps unix sysv linux clone s gdb
| 1
|
180,117
| 13,921,727,225
|
IssuesEvent
|
2020-10-21 12:21:28
|
ballerina-platform/ballerina-lang
|
https://api.github.com/repos/ballerina-platform/ballerina-lang
|
opened
|
Fix Multi-module coverage for modules with 'dot .' in the name
|
Component/Testerina Team/TestFramework
|
**Description:**
Muti-module coverage does not work with modules that have a dot in the name.
The `ISourceFileCoverage` class used in CodeCoverage report returns a package name which has the dot `.` replaced with an underscore `_`.
```
String sourceFileModule = sourceFileCoverage.getPackageName().split("/")[1];
ModuleCoverage.getInstance().updateSourceFileCoverage(jsonCachePath, sourceFileModule,
sourceFileCoverage.getName(), coveredLines, missedLines);
```
This causes the path to the source files to be resolved incorrectly, thus resulting in a NoSuchFileException.
This can be seen for modules such as java.jdbc which inturn becomes java_jdbc.
Simply replacing the `_` with `.` in sourceFileModule will not work as this can interfere with modules that contain `_` originally in their name.
**Steps to reproduce:**
**Affected Versions:**
**OS, DB, other environment details and versions:**
**Related Issues (optional):**
<!-- Any related issues such as sub tasks, issues reported in other repositories (e.g component repositories), similar problems, etc. -->
https://github.com/ballerina-platform/ballerina-lang/issues/26477
**Suggested Labels (optional):**
<!-- Optional comma separated list of suggested labels. Non committers can’t assign labels to issues, so this will help issue creators who are not a committer to suggest possible labels-->
**Suggested Assignees (optional):**
<!--Optional comma separated list of suggested team members who should attend the issue. Non committers can’t assign issues to assignees, so this will help issue creators who are not a committer to suggest possible assignees-->
|
2.0
|
Fix Multi-module coverage for modules with 'dot .' in the name - **Description:**
Muti-module coverage does not work with modules that have a dot in the name.
The `ISourceFileCoverage` class used in CodeCoverage report returns a package name which has the dot `.` replaced with an underscore `_`.
```
String sourceFileModule = sourceFileCoverage.getPackageName().split("/")[1];
ModuleCoverage.getInstance().updateSourceFileCoverage(jsonCachePath, sourceFileModule,
sourceFileCoverage.getName(), coveredLines, missedLines);
```
This causes the path to the source files to be resolved incorrectly, thus resulting in a NoSuchFileException.
This can be seen for modules such as java.jdbc which inturn becomes java_jdbc.
Simply replacing the `_` with `.` in sourceFileModule will not work as this can interfere with modules that contain `_` originally in their name.
**Steps to reproduce:**
**Affected Versions:**
**OS, DB, other environment details and versions:**
**Related Issues (optional):**
<!-- Any related issues such as sub tasks, issues reported in other repositories (e.g component repositories), similar problems, etc. -->
https://github.com/ballerina-platform/ballerina-lang/issues/26477
**Suggested Labels (optional):**
<!-- Optional comma separated list of suggested labels. Non committers can’t assign labels to issues, so this will help issue creators who are not a committer to suggest possible labels-->
**Suggested Assignees (optional):**
<!--Optional comma separated list of suggested team members who should attend the issue. Non committers can’t assign issues to assignees, so this will help issue creators who are not a committer to suggest possible assignees-->
|
test
|
fix multi module coverage for modules with dot in the name description muti module coverage does not work with modules that have a dot in the name the isourcefilecoverage class used in codecoverage report returns a package name which has the dot replaced with an underscore string sourcefilemodule sourcefilecoverage getpackagename split modulecoverage getinstance updatesourcefilecoverage jsoncachepath sourcefilemodule sourcefilecoverage getname coveredlines missedlines this causes the path to the source files to be resolved incorrectly thus resulting in a nosuchfileexception this can be seen for modules such as java jdbc which inturn becomes java jdbc simply replacing the with in sourcefilemodule will not work as this can interfere with modules that contain originally in their name steps to reproduce affected versions os db other environment details and versions related issues optional suggested labels optional suggested assignees optional
| 1
|
41,599
| 16,824,243,039
|
IssuesEvent
|
2021-06-17 16:22:28
|
Azure/azure-sdk-for-net
|
https://api.github.com/repos/Azure/azure-sdk-for-net
|
closed
|
[Compute Swagger] Dead link to virtual-machines-linux-use-root-privileges
|
Compute - VM Service Service Attention bug customer-reported needs-team-attention
|
Hi,
The link to https://docs.microsoft.com/en-us/azure/virtual-machines/virtual-machines-linux-use-root-privileges?toc=/azure/virtual-machines/linux/toc.json returns 404.
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: 696674d3-df55-bb81-adb4-d87e3ffc7c45
* Version Independent ID: 2afe6e9c-7bad-80ff-57bc-5fc356aee93d
* Content: [OSProfile Class (Microsoft.Azure.Management.Compute.Models) - Azure for .NET Developers](https://docs.microsoft.com/en-us/dotnet/api/microsoft.azure.management.compute.models.osprofile?view=azure-dotnet)
* Content Source: [xml/Microsoft.Azure.Management.Compute.Models/OSProfile.xml](https://github.com/Azure/azure-docs-sdk-dotnet/blob/master/xml/Microsoft.Azure.Management.Compute.Models/OSProfile.xml)
* Service: **multiple**
* GitHub Login: @CamSoper
* Microsoft Alias: **casoper**
|
2.0
|
[Compute Swagger] Dead link to virtual-machines-linux-use-root-privileges -
Hi,
The link to https://docs.microsoft.com/en-us/azure/virtual-machines/virtual-machines-linux-use-root-privileges?toc=/azure/virtual-machines/linux/toc.json returns 404.
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: 696674d3-df55-bb81-adb4-d87e3ffc7c45
* Version Independent ID: 2afe6e9c-7bad-80ff-57bc-5fc356aee93d
* Content: [OSProfile Class (Microsoft.Azure.Management.Compute.Models) - Azure for .NET Developers](https://docs.microsoft.com/en-us/dotnet/api/microsoft.azure.management.compute.models.osprofile?view=azure-dotnet)
* Content Source: [xml/Microsoft.Azure.Management.Compute.Models/OSProfile.xml](https://github.com/Azure/azure-docs-sdk-dotnet/blob/master/xml/Microsoft.Azure.Management.Compute.Models/OSProfile.xml)
* Service: **multiple**
* GitHub Login: @CamSoper
* Microsoft Alias: **casoper**
|
non_test
|
dead link to virtual machines linux use root privileges hi the link to returns document details ⚠ do not edit this section it is required for docs microsoft com ➟ github issue linking id version independent id content content source service multiple github login camsoper microsoft alias casoper
| 0
|
27,737
| 4,328,054,878
|
IssuesEvent
|
2016-07-26 12:53:56
|
openshift/origin
|
https://api.github.com/repos/openshift/origin
|
opened
|
flake: timeout building images/openvswitch
|
kind/test-flake priority/P2
|
```
[ERROR] PID 16337: hack/build-images.sh:64: `'/data/src/github.com/openshift/origin/_output/local/bin/linux/amd64/oc' ex dockerbuild images/openvswitch openshift/openvswitch:latest` exited with status 1.
[INFO] Stack Trace:
[INFO] 1: hack/build-images.sh:64: `'/data/src/github.com/openshift/origin/_output/local/bin/linux/amd64/oc' ex dockerbuild images/openvswitch openshift/openvswitch:latest`
[INFO] 2: hack/build-images.sh:98: build
[INFO] 3: hack/build-images.sh:109: image
[INFO] Exiting with code 1.
```
The operation timedout on:
```
RUN curl -L -o /etc/yum.repos.d/origin-next-epel-7.repo https://copr.fedoraproject.org/coprs/maxamillion/origin-next/repo/epel-7/maxamillion-origin-next-epel-7.repo && INSTALL_PKGS="openvswitch" && yum install -y $INSTALL_PKGS && rpm -V $INSTALL_PKGS && yum clean all && chmod +x /usr/local/bin/*
...
url: (28) Operation timed out after 300760 milliseconds with 0 out of 0 bytes received
error: running '/bin/sh -c curl -L -o /etc/yum.repos.d/origin-next-epel-7.repo https://copr.fedoraproject.org/coprs/maxamillion/origin-next/repo/epel-7/maxamillion-origin-next-epel-7.repo && INSTALL_PKGS="openvswitch" && yum install -y $INSTALL_PKGS && rpm -V $INSTALL_PKGS && yum clean all && chmod +x /usr/local/bin/*' failed with exit code 28
[ERROR] PID 16337: hack/build-images.sh:64: `'/data/src/github.com/openshift/origin/_output/local/bin/linux/amd64/oc' ex dockerbuild images/openvswitch openshift/openvswitch:latest` exited with status 1.
```
|
1.0
|
flake: timeout building images/openvswitch - ```
[ERROR] PID 16337: hack/build-images.sh:64: `'/data/src/github.com/openshift/origin/_output/local/bin/linux/amd64/oc' ex dockerbuild images/openvswitch openshift/openvswitch:latest` exited with status 1.
[INFO] Stack Trace:
[INFO] 1: hack/build-images.sh:64: `'/data/src/github.com/openshift/origin/_output/local/bin/linux/amd64/oc' ex dockerbuild images/openvswitch openshift/openvswitch:latest`
[INFO] 2: hack/build-images.sh:98: build
[INFO] 3: hack/build-images.sh:109: image
[INFO] Exiting with code 1.
```
The operation timedout on:
```
RUN curl -L -o /etc/yum.repos.d/origin-next-epel-7.repo https://copr.fedoraproject.org/coprs/maxamillion/origin-next/repo/epel-7/maxamillion-origin-next-epel-7.repo && INSTALL_PKGS="openvswitch" && yum install -y $INSTALL_PKGS && rpm -V $INSTALL_PKGS && yum clean all && chmod +x /usr/local/bin/*
...
url: (28) Operation timed out after 300760 milliseconds with 0 out of 0 bytes received
error: running '/bin/sh -c curl -L -o /etc/yum.repos.d/origin-next-epel-7.repo https://copr.fedoraproject.org/coprs/maxamillion/origin-next/repo/epel-7/maxamillion-origin-next-epel-7.repo && INSTALL_PKGS="openvswitch" && yum install -y $INSTALL_PKGS && rpm -V $INSTALL_PKGS && yum clean all && chmod +x /usr/local/bin/*' failed with exit code 28
[ERROR] PID 16337: hack/build-images.sh:64: `'/data/src/github.com/openshift/origin/_output/local/bin/linux/amd64/oc' ex dockerbuild images/openvswitch openshift/openvswitch:latest` exited with status 1.
```
|
test
|
flake timeout building images openvswitch pid hack build images sh data src github com openshift origin output local bin linux oc ex dockerbuild images openvswitch openshift openvswitch latest exited with status stack trace hack build images sh data src github com openshift origin output local bin linux oc ex dockerbuild images openvswitch openshift openvswitch latest hack build images sh build hack build images sh image exiting with code the operation timedout on run curl l o etc yum repos d origin next epel repo install pkgs openvswitch yum install y install pkgs rpm v install pkgs yum clean all chmod x usr local bin url operation timed out after milliseconds with out of bytes received error running bin sh c curl l o etc yum repos d origin next epel repo install pkgs openvswitch yum install y install pkgs rpm v install pkgs yum clean all chmod x usr local bin failed with exit code pid hack build images sh data src github com openshift origin output local bin linux oc ex dockerbuild images openvswitch openshift openvswitch latest exited with status
| 1
|
231,552
| 18,778,081,367
|
IssuesEvent
|
2021-11-08 00:17:30
|
kubernetes/minikube
|
https://api.github.com/repos/kubernetes/minikube
|
closed
|
Frequent test failures of `TestDownloadOnly/v1.22.3-rc.0/preload-exists`
|
priority/backlog kind/failing-test
|
This test has high flake rates for the following environments:
|Environment|Flake Rate (%)|
|---|---|
|[Docker_Linux_crio_arm64](https://storage.googleapis.com/minikube-flake-rate/flake_chart.html?env=Docker_Linux_crio_arm64&test=TestDownloadOnly/v1.22.3-rc.0/preload-exists)|23.08|
|[Docker_macOS](https://storage.googleapis.com/minikube-flake-rate/flake_chart.html?env=Docker_macOS&test=TestDownloadOnly/v1.22.3-rc.0/preload-exists)|20.00|
|
1.0
|
Frequent test failures of `TestDownloadOnly/v1.22.3-rc.0/preload-exists` - This test has high flake rates for the following environments:
|Environment|Flake Rate (%)|
|---|---|
|[Docker_Linux_crio_arm64](https://storage.googleapis.com/minikube-flake-rate/flake_chart.html?env=Docker_Linux_crio_arm64&test=TestDownloadOnly/v1.22.3-rc.0/preload-exists)|23.08|
|[Docker_macOS](https://storage.googleapis.com/minikube-flake-rate/flake_chart.html?env=Docker_macOS&test=TestDownloadOnly/v1.22.3-rc.0/preload-exists)|20.00|
|
test
|
frequent test failures of testdownloadonly rc preload exists this test has high flake rates for the following environments environment flake rate
| 1
|
74,643
| 7,434,643,143
|
IssuesEvent
|
2018-03-26 11:48:01
|
Chartes-TNAH/dico-proso
|
https://api.github.com/repos/Chartes-TNAH/dico-proso
|
closed
|
Problèmes lancement serveur
|
bug test
|
J'ai commencé la création de mes pages et des routes et j'ai constaté plusieurs problèmes au lancement du serveur. Les erreurs sont arrivées successivement, donc j'ai réussi à régler certains d'entre eux, mais sur d'autres je bloque complétément.
Voici les fichiers sur lesquels je suis intervenue:
- dans le fichier "constantes.py", il y avait la partie tirée de l'exemple 18 de l'application gazetteer que j'ai corrigé comme suit:
```py
SQLALCHEMY_DATABASE_URI = 'mysql://hoozhoo_user:password@localhost/hoozhoo'
```
J'ai fait pointer la config de test et de production sur notre seule base. En même temps, nous n'avons pas de db.sqlite. En faisant ce changement, nous n'avons plus d'erreurs sur la config.
- dans le fichier données.py j'ai corrigé une typo et j'ai rajouté un db.relationship aux classes Authorship_link et Link
- dans le fichier "conteneur.html", j'ai fermé deux balises {% endif %} et {% endwith %}
(En plus, j'ai constaté que tant que la page accueil n'est pas créé, la page conteneur provoque toujours des erreurs, donc il vaut mieux exécuter le code sans {% extends "conteneur.html" %} de la page html )
Malgré ces corrections, l'application continue de planter, mais maintenant uniquement sur la base de données. Il semble que sql alchemy a un problème avec "double foreign key on the same table". Je ne ne sais pas comme remédier ce problème. Voici le message d'erreur précis:
"Could not determine join condition between parent/child tables on relationship Person.link_pers1 - there are multiple foreign key paths linking the tables. Specify the 'foreign_keys' argument, providing a list of those columns which should be counted as containing a foreign key reference to the parent table."
|
1.0
|
Problèmes lancement serveur - J'ai commencé la création de mes pages et des routes et j'ai constaté plusieurs problèmes au lancement du serveur. Les erreurs sont arrivées successivement, donc j'ai réussi à régler certains d'entre eux, mais sur d'autres je bloque complétément.
Voici les fichiers sur lesquels je suis intervenue:
- dans le fichier "constantes.py", il y avait la partie tirée de l'exemple 18 de l'application gazetteer que j'ai corrigé comme suit:
```py
SQLALCHEMY_DATABASE_URI = 'mysql://hoozhoo_user:password@localhost/hoozhoo'
```
J'ai fait pointer la config de test et de production sur notre seule base. En même temps, nous n'avons pas de db.sqlite. En faisant ce changement, nous n'avons plus d'erreurs sur la config.
- dans le fichier données.py j'ai corrigé une typo et j'ai rajouté un db.relationship aux classes Authorship_link et Link
- dans le fichier "conteneur.html", j'ai fermé deux balises {% endif %} et {% endwith %}
(En plus, j'ai constaté que tant que la page accueil n'est pas créé, la page conteneur provoque toujours des erreurs, donc il vaut mieux exécuter le code sans {% extends "conteneur.html" %} de la page html )
Malgré ces corrections, l'application continue de planter, mais maintenant uniquement sur la base de données. Il semble que sql alchemy a un problème avec "double foreign key on the same table". Je ne ne sais pas comme remédier ce problème. Voici le message d'erreur précis:
"Could not determine join condition between parent/child tables on relationship Person.link_pers1 - there are multiple foreign key paths linking the tables. Specify the 'foreign_keys' argument, providing a list of those columns which should be counted as containing a foreign key reference to the parent table."
|
test
|
problèmes lancement serveur j ai commencé la création de mes pages et des routes et j ai constaté plusieurs problèmes au lancement du serveur les erreurs sont arrivées successivement donc j ai réussi à régler certains d entre eux mais sur d autres je bloque complétément voici les fichiers sur lesquels je suis intervenue dans le fichier constantes py il y avait la partie tirée de l exemple de l application gazetteer que j ai corrigé comme suit py sqlalchemy database uri mysql hoozhoo user password localhost hoozhoo j ai fait pointer la config de test et de production sur notre seule base en même temps nous n avons pas de db sqlite en faisant ce changement nous n avons plus d erreurs sur la config dans le fichier données py j ai corrigé une typo et j ai rajouté un db relationship aux classes authorship link et link dans le fichier conteneur html j ai fermé deux balises endif et endwith en plus j ai constaté que tant que la page accueil n est pas créé la page conteneur provoque toujours des erreurs donc il vaut mieux exécuter le code sans extends conteneur html de la page html malgré ces corrections l application continue de planter mais maintenant uniquement sur la base de données il semble que sql alchemy a un problème avec double foreign key on the same table je ne ne sais pas comme remédier ce problème voici le message d erreur précis could not determine join condition between parent child tables on relationship person link there are multiple foreign key paths linking the tables specify the foreign keys argument providing a list of those columns which should be counted as containing a foreign key reference to the parent table
| 1
|
520,613
| 15,089,357,437
|
IssuesEvent
|
2021-02-06 05:14:57
|
cs130-w21/8
|
https://api.github.com/repos/cs130-w21/8
|
closed
|
Implement `BillObject` class methods (Mutators)
|
Priority ⚠️ Task ⏳
|
Implement mutator methods inside [Models/BillObject.swift](https://github.com/cs130-w21/8/tree/master/Dots/Dots/Models/BillObject.swift)
### TODO
- [x] mutating func clearEntries()
- [x] mutating func setTitle(newTitle: String)
- [x] mutating func setDate(date: Date)
- [x] mutating func setTaxRate(tax: Double)
- [x] mutating func setInitiator(initiator: Int)
- [x] mutating func setParticipants(participants: [Int])
- [x] mutating func removeParticipant(at: Int)
- [x] mutating func addNewEntry(entry: EntryObject)
- [x] mutating func addNewEntry(entryTitle: String, participants: [Int], value: Double, amount: Int, withTax: Bool)
- [x] mutating func removeEntry(at: Int)
|
1.0
|
Implement `BillObject` class methods (Mutators) - Implement mutator methods inside [Models/BillObject.swift](https://github.com/cs130-w21/8/tree/master/Dots/Dots/Models/BillObject.swift)
### TODO
- [x] mutating func clearEntries()
- [x] mutating func setTitle(newTitle: String)
- [x] mutating func setDate(date: Date)
- [x] mutating func setTaxRate(tax: Double)
- [x] mutating func setInitiator(initiator: Int)
- [x] mutating func setParticipants(participants: [Int])
- [x] mutating func removeParticipant(at: Int)
- [x] mutating func addNewEntry(entry: EntryObject)
- [x] mutating func addNewEntry(entryTitle: String, participants: [Int], value: Double, amount: Int, withTax: Bool)
- [x] mutating func removeEntry(at: Int)
|
non_test
|
implement billobject class methods mutators implement mutator methods inside todo mutating func clearentries mutating func settitle newtitle string mutating func setdate date date mutating func settaxrate tax double mutating func setinitiator initiator int mutating func setparticipants participants mutating func removeparticipant at int mutating func addnewentry entry entryobject mutating func addnewentry entrytitle string participants value double amount int withtax bool mutating func removeentry at int
| 0
|
274,919
| 23,879,592,637
|
IssuesEvent
|
2022-09-07 23:10:25
|
julia-vscode/julia-vscode
|
https://api.github.com/repos/julia-vscode/julia-vscode
|
closed
|
more verbose output
|
area-testing
|
Please let me know if you prefer this king of comment here or on Discourse.
I have some tests that should not take long (4 seconds). However, when I ran them for the first time, it took quite a while to get the response (would say 3 minutes).
The issue is that there is no feedback on what is going on, which sort of triggers one to kill the tests:

The test finally ended without errors:

It seemed to report two runs of the tests, one taking 3 minutes, the other 4 seconds.
Subsequent runs of the same test take 3-4 seconds.
The issue here is not knowing what is going on. If there was more data being output to the log we would feels assured that nothing is broken.
|
1.0
|
more verbose output -
Please let me know if you prefer this king of comment here or on Discourse.
I have some tests that should not take long (4 seconds). However, when I ran them for the first time, it took quite a while to get the response (would say 3 minutes).
The issue is that there is no feedback on what is going on, which sort of triggers one to kill the tests:

The test finally ended without errors:

It seemed to report two runs of the tests, one taking 3 minutes, the other 4 seconds.
Subsequent runs of the same test take 3-4 seconds.
The issue here is not knowing what is going on. If there was more data being output to the log we would feels assured that nothing is broken.
|
test
|
more verbose output please let me know if you prefer this king of comment here or on discourse i have some tests that should not take long seconds however when i ran them for the first time it took quite a while to get the response would say minutes the issue is that there is no feedback on what is going on which sort of triggers one to kill the tests the test finally ended without errors it seemed to report two runs of the tests one taking minutes the other seconds subsequent runs of the same test take seconds the issue here is not knowing what is going on if there was more data being output to the log we would feels assured that nothing is broken
| 1
|
183,131
| 14,201,189,885
|
IssuesEvent
|
2020-11-16 07:13:55
|
microsoft/AzureStorageExplorer
|
https://api.github.com/repos/microsoft/AzureStorageExplorer
|
closed
|
There are untranslated strings in the 'Undelete' dialog
|
:gear: blobs 🌐 localization 🧪 testing
|
**Storage Explorer Version:** 1.15.1
**Build**: 20200904.2
**Branch**: main
**Platform/OS**: Windows 10/ Linux Ubuntu 16.04 / MacOS Catalina
**Architecture**: ia32/x64
**Language**: All
**Regression From:** Not a regression
**Steps to reproduce:**
1. Launch Storage Explorer.
2. Open 'Settings' -> Application (Regional Settings) -> Select 'Deutsch' -> Restart Storage Explorer.
3. Expand one storage account -> Blob Containers.
4. Select one blob container -> Right click one folder -> Click 'Undelete -> Undelete Selected‘.
5. Check the strings in the localized 'Udelete' dialog.
**Expect Experience:**
All strings are translated in the 'Undelete' dialog.
**Actual Experience:**
There are untranslated strings in the 'Undelete' dialog.

**More Info:**
For one blob.

|
1.0
|
There are untranslated strings in the 'Undelete' dialog - **Storage Explorer Version:** 1.15.1
**Build**: 20200904.2
**Branch**: main
**Platform/OS**: Windows 10/ Linux Ubuntu 16.04 / MacOS Catalina
**Architecture**: ia32/x64
**Language**: All
**Regression From:** Not a regression
**Steps to reproduce:**
1. Launch Storage Explorer.
2. Open 'Settings' -> Application (Regional Settings) -> Select 'Deutsch' -> Restart Storage Explorer.
3. Expand one storage account -> Blob Containers.
4. Select one blob container -> Right click one folder -> Click 'Undelete -> Undelete Selected‘.
5. Check the strings in the localized 'Udelete' dialog.
**Expect Experience:**
All strings are translated in the 'Undelete' dialog.
**Actual Experience:**
There are untranslated strings in the 'Undelete' dialog.

**More Info:**
For one blob.

|
test
|
there are untranslated strings in the undelete dialog storage explorer version build branch main platform os windows linux ubuntu macos catalina architecture language all regression from not a regression steps to reproduce launch storage explorer open settings application regional settings select deutsch restart storage explorer expand one storage account blob containers select one blob container right click one folder click undelete undelete selected‘ check the strings in the localized udelete dialog expect experience all strings are translated in the undelete dialog actual experience there are untranslated strings in the undelete dialog more info for one blob
| 1
|
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.