Unnamed: 0 int64 0 832k | id float64 2.49B 32.1B | type stringclasses 1 value | created_at stringlengths 19 19 | repo stringlengths 5 112 | repo_url stringlengths 34 141 | action stringclasses 3 values | title stringlengths 1 757 | labels stringlengths 4 664 | body stringlengths 3 261k | index stringclasses 10 values | text_combine stringlengths 96 261k | label stringclasses 2 values | text stringlengths 96 232k | binary_label int64 0 1 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
96,987 | 28,070,731,603 | IssuesEvent | 2023-03-29 18:51:45 | MinaProtocol/mina | https://api.github.com/repos/MinaProtocol/mina | closed | Merge the archive node artifact job with the main build-artifact job | Size: XS ~ 1 day buildkite archive-node build-pipeline-refactor | All of the binaries required for the archive node package are built as part of build-artifact anyway, this should reduce overall runtime / total cpu cycles | 2.0 | Merge the archive node artifact job with the main build-artifact job - All of the binaries required for the archive node package are built as part of build-artifact anyway, this should reduce overall runtime / total cpu cycles | non_defect | merge the archive node artifact job with the main build artifact job all of the binaries required for the archive node package are built as part of build artifact anyway this should reduce overall runtime total cpu cycles | 0 |
24,279 | 5,042,284,941 | IssuesEvent | 2016-12-19 13:29:39 | google/material-design-lite | https://api.github.com/repos/google/material-design-lite | closed | mdl-card__menu not documented | Cards Documentation v1-bug | Component : Card
What are you trying to do or find out more about?
Element class mdl-card__menu
Where have you looked?
http://www.getmdl.io/components/index.html#cards-section
Where did you expect to find this information?
In section "Configuration options"
What did I find out?
- Container for positioning child elements in top right card corner
- Docs show an example share button
Thanks.
| 1.0 | mdl-card__menu not documented - Component : Card
What are you trying to do or find out more about?
Element class mdl-card__menu
Where have you looked?
http://www.getmdl.io/components/index.html#cards-section
Where did you expect to find this information?
In section "Configuration options"
What did I find out?
- Container for positioning child elements in top right card corner
- Docs show an example share button
Thanks.
| non_defect | mdl card menu not documented component card what are you trying to do or find out more about element class mdl card menu where have you looked where did you expect to find this information in section configuration options what did i find out container for positioning child elements in top right card corner docs show an example share button thanks | 0 |
66,607 | 20,373,819,324 | IssuesEvent | 2022-02-21 13:49:33 | OpenMS/OpenMS | https://api.github.com/repos/OpenMS/OpenMS | closed | PSMFeatureextractor and Percolator | defect major TOPP usability | Dear all,
I have been trying to use the new Percolator adapter with no luck. I do searches with Comet, using the Comet adaptor in both OpenMS 2.2 (using TOPPAS) and the 2.3 nightly builds (last of 05.12.2017) in both TOPPAS and Knime.
Briefly my workflow is as follows, CometAdpator -> Peptideindexer -> PSMFeatureextractor -> Percolator.
If I run it in TOPPAS the PSMFeatureExtractor always crashes without any information. In Knime, PSMFeatureExtractor passes but Percolator always crashes. What I can get out of the log is: Merging Peptides IDs, merging Proteins IDs, Percolator problem... and then crashes.
If I use then the PSMFeatureExtractor result from Knime and use it as input for PercolatorAdapter in TOPPAS, I can run Percolator, however it always uses the -U option, even if I select -Y option (tdc true):
which my guess should be used as the search in Comet was done using a database containing decoys, and this should be indexed when using PeptideIndexer. Moreover, Percolator reports:
"Separate target and decoy search inputs detected, using target-decoy competition on Percolator scores", even though the -Y option is seen. Nonetheless it continues and finish. However if I then do IDFilter on that (q < 0.01) everything is gone. If I run Percolator including the -f option (Protein inference) I also can't filter the file. However, if I export this file doing text export without IDFilter and filter manually for proteins with q-value of less than 0.01 and compare this with the same search files processed with PeptideProphet/ProteinProphet I get an overlap of ~80% at the protein level. However, I would need to have the filtered file as input for IDMapper, to only map peptides at 1%FDR to features.
Maybe somebody has more experience on running PercolatorAdapter and I might be using it wrong.
Cheers,
Alejandro
| 1.0 | PSMFeatureextractor and Percolator - Dear all,
I have been trying to use the new Percolator adapter with no luck. I do searches with Comet, using the Comet adaptor in both OpenMS 2.2 (using TOPPAS) and the 2.3 nightly builds (last of 05.12.2017) in both TOPPAS and Knime.
Briefly my workflow is as follows, CometAdpator -> Peptideindexer -> PSMFeatureextractor -> Percolator.
If I run it in TOPPAS the PSMFeatureExtractor always crashes without any information. In Knime, PSMFeatureExtractor passes but Percolator always crashes. What I can get out of the log is: Merging Peptides IDs, merging Proteins IDs, Percolator problem... and then crashes.
If I use then the PSMFeatureExtractor result from Knime and use it as input for PercolatorAdapter in TOPPAS, I can run Percolator, however it always uses the -U option, even if I select -Y option (tdc true):
which my guess should be used as the search in Comet was done using a database containing decoys, and this should be indexed when using PeptideIndexer. Moreover, Percolator reports:
"Separate target and decoy search inputs detected, using target-decoy competition on Percolator scores", even though the -Y option is seen. Nonetheless it continues and finish. However if I then do IDFilter on that (q < 0.01) everything is gone. If I run Percolator including the -f option (Protein inference) I also can't filter the file. However, if I export this file doing text export without IDFilter and filter manually for proteins with q-value of less than 0.01 and compare this with the same search files processed with PeptideProphet/ProteinProphet I get an overlap of ~80% at the protein level. However, I would need to have the filtered file as input for IDMapper, to only map peptides at 1%FDR to features.
Maybe somebody has more experience on running PercolatorAdapter and I might be using it wrong.
Cheers,
Alejandro
| defect | psmfeatureextractor and percolator dear all i have been trying to use the new percolator adapter with no luck i do searches with comet using the comet adaptor in both openms using toppas and the nightly builds last of in both toppas and knime briefly my workflow is as follows cometadpator peptideindexer psmfeatureextractor percolator if i run it in toppas the psmfeatureextractor always crashes without any information in knime psmfeatureextractor passes but percolator always crashes what i can get out of the log is merging peptides ids merging proteins ids percolator problem and then crashes if i use then the psmfeatureextractor result from knime and use it as input for percolatoradapter in toppas i can run percolator however it always uses the u option even if i select y option tdc true which my guess should be used as the search in comet was done using a database containing decoys and this should be indexed when using peptideindexer moreover percolator reports separate target and decoy search inputs detected using target decoy competition on percolator scores even though the y option is seen nonetheless it continues and finish however if i then do idfilter on that q everything is gone if i run percolator including the f option protein inference i also can t filter the file however if i export this file doing text export without idfilter and filter manually for proteins with q value of less than and compare this with the same search files processed with peptideprophet proteinprophet i get an overlap of at the protein level however i would need to have the filtered file as input for idmapper to only map peptides at fdr to features maybe somebody has more experience on running percolatoradapter and i might be using it wrong cheers alejandro | 1 |
43,251 | 11,580,638,159 | IssuesEvent | 2020-02-21 20:37:04 | vector-im/riot-web | https://api.github.com/repos/vector-im/riot-web | closed | Pasting an mxid into the new invite dialog reliably coughs up bogus "failed to find users" error | bug defect p1 project:ftue-user-lists | To repro:
* copy an mxid like `@vincent_houlot:matrix.org`
* Paste it into the invite box
* See this error dialog
<img width="804" alt="Screenshot 2020-02-21 at 14 23 06" src="https://user-images.githubusercontent.com/1294269/75042102-cb02e600-54b5-11ea-99d1-7bd4c5be7d65.png">
| 1.0 | Pasting an mxid into the new invite dialog reliably coughs up bogus "failed to find users" error - To repro:
* copy an mxid like `@vincent_houlot:matrix.org`
* Paste it into the invite box
* See this error dialog
<img width="804" alt="Screenshot 2020-02-21 at 14 23 06" src="https://user-images.githubusercontent.com/1294269/75042102-cb02e600-54b5-11ea-99d1-7bd4c5be7d65.png">
| defect | pasting an mxid into the new invite dialog reliably coughs up bogus failed to find users error to repro copy an mxid like vincent houlot matrix org paste it into the invite box see this error dialog img width alt screenshot at src | 1 |
36,507 | 7,974,036,312 | IssuesEvent | 2018-07-17 02:51:59 | cakephp/cakephp | https://api.github.com/repos/cakephp/cakephp | closed | `DateTimeType::marshal()` occasionally returns string instead of object | Defect ORM RFC | This is a (multiple allowed):
* [x] bug
* [ ] enhancement
* [ ] feature-discussion (RFC)
* CakePHP Version: 3.3.11
### What you did
Create a form containing a text input field for a datetime property.
```php
echo $this->Form->create($article);
echo $this->Form->input('published', ['type' => 'text']);
echo $this->Form->button('Submit');
echo $this->Form->end();
```
Dump the datetime property after `patchEntity()`.
```php
$article = $this->Articles->get(1);
$this->Articles->patchEntity($article, $this->request->data);
if ($this->Articles->save($article)) {
var_dump($article->published);
}
$this->set('article', $article);
```
### What happened
When the posted value is `2017-01-01 00:00:00`, I get:
```
object(Cake\I18n\FrozenTime)#156 (3) {
["time"]=>
string(25) "2017-01-01T00:00:00+09:00"
["timezone"]=>
string(10) "Asia/Tokyo"
["fixedNowTime"]=>
bool(false)
}
```
However, when the posted value is `2017-01-01 00:00`, I get:
```
string(16) "2017-01-01 00:00"
```
### What you expected to happen
I always get:
```
object(Cake\I18n\FrozenTime)#156 (3) {
["time"]=>
string(25) "2017-01-01T00:00:00+09:00"
["timezone"]=>
string(10) "Asia/Tokyo"
["fixedNowTime"]=>
bool(false)
}
```
### Related code
When the input format is not `Y-m-d H:i:s`, `DateTimeType::marshal()` returns a string.
https://github.com/cakephp/cakephp/blob/3b341696e13ad6aed51e330d8e54479a41780512/src/Database/Type/DateTimeType.php#L164-L166
The following line would also return a string.
https://github.com/cakephp/cakephp/blob/3b341696e13ad6aed51e330d8e54479a41780512/src/Database/Type/DateTimeType.php#L170-L172 | 1.0 | `DateTimeType::marshal()` occasionally returns string instead of object - This is a (multiple allowed):
* [x] bug
* [ ] enhancement
* [ ] feature-discussion (RFC)
* CakePHP Version: 3.3.11
### What you did
Create a form containing a text input field for a datetime property.
```php
echo $this->Form->create($article);
echo $this->Form->input('published', ['type' => 'text']);
echo $this->Form->button('Submit');
echo $this->Form->end();
```
Dump the datetime property after `patchEntity()`.
```php
$article = $this->Articles->get(1);
$this->Articles->patchEntity($article, $this->request->data);
if ($this->Articles->save($article)) {
var_dump($article->published);
}
$this->set('article', $article);
```
### What happened
When the posted value is `2017-01-01 00:00:00`, I get:
```
object(Cake\I18n\FrozenTime)#156 (3) {
["time"]=>
string(25) "2017-01-01T00:00:00+09:00"
["timezone"]=>
string(10) "Asia/Tokyo"
["fixedNowTime"]=>
bool(false)
}
```
However, when the posted value is `2017-01-01 00:00`, I get:
```
string(16) "2017-01-01 00:00"
```
### What you expected to happen
I always get:
```
object(Cake\I18n\FrozenTime)#156 (3) {
["time"]=>
string(25) "2017-01-01T00:00:00+09:00"
["timezone"]=>
string(10) "Asia/Tokyo"
["fixedNowTime"]=>
bool(false)
}
```
### Related code
When the input format is not `Y-m-d H:i:s`, `DateTimeType::marshal()` returns a string.
https://github.com/cakephp/cakephp/blob/3b341696e13ad6aed51e330d8e54479a41780512/src/Database/Type/DateTimeType.php#L164-L166
The following line would also return a string.
https://github.com/cakephp/cakephp/blob/3b341696e13ad6aed51e330d8e54479a41780512/src/Database/Type/DateTimeType.php#L170-L172 | defect | datetimetype marshal occasionally returns string instead of object this is a multiple allowed bug enhancement feature discussion rfc cakephp version what you did create a form containing a text input field for a datetime property php echo this form create article echo this form input published echo this form button submit echo this form end dump the datetime property after patchentity php article this articles get this articles patchentity article this request data if this articles save article var dump article published this set article article what happened when the posted value is i get object cake frozentime string string asia tokyo bool false however when the posted value is i get string what you expected to happen i always get object cake frozentime string string asia tokyo bool false related code when the input format is not y m d h i s datetimetype marshal returns a string the following line would also return a string | 1 |
18,966 | 11,102,187,081 | IssuesEvent | 2019-12-16 23:13:30 | Azure/azure-cli | https://api.github.com/repos/Azure/azure-cli | closed | Issue with "az aks get-credentials" command | AKS Question Service Attention | When I try to use "az aks get-credentials" command I get the following error:
$ az aks get-credentials --resource-group "$RESOURCE_GROUP" --name "$AKS_CLUSTER_NAME" --admin
The HTTP method 'POST' is not supported.
It worked OK few days ago.
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: d75c3a8d-1187-bea1-a7d4-7ebba71c1790
* Version Independent ID: 75b0c352-762e-16ae-5cc6-d807ffc1dc3f
* Content: [az aks](https://docs.microsoft.com/en-us/cli/azure/aks?view=azure-cli-latest#az-aks-get-credentials)
* Content Source: [latest/docs-ref-autogen/aks.yml](https://github.com/Azure/azure-docs-cli-python/blob/live/latest/docs-ref-autogen/aks.yml)
* Service: **container-service**
* GitHub Login: @rloutlaw
* Microsoft Alias: **routlaw** | 1.0 | Issue with "az aks get-credentials" command - When I try to use "az aks get-credentials" command I get the following error:
$ az aks get-credentials --resource-group "$RESOURCE_GROUP" --name "$AKS_CLUSTER_NAME" --admin
The HTTP method 'POST' is not supported.
It worked OK few days ago.
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: d75c3a8d-1187-bea1-a7d4-7ebba71c1790
* Version Independent ID: 75b0c352-762e-16ae-5cc6-d807ffc1dc3f
* Content: [az aks](https://docs.microsoft.com/en-us/cli/azure/aks?view=azure-cli-latest#az-aks-get-credentials)
* Content Source: [latest/docs-ref-autogen/aks.yml](https://github.com/Azure/azure-docs-cli-python/blob/live/latest/docs-ref-autogen/aks.yml)
* Service: **container-service**
* GitHub Login: @rloutlaw
* Microsoft Alias: **routlaw** | non_defect | issue with az aks get credentials command when i try to use az aks get credentials command i get the following error az aks get credentials resource group resource group name aks cluster name admin the http method post is not supported it worked ok few days ago document details ⚠ do not edit this section it is required for docs microsoft com ➟ github issue linking id version independent id content content source service container service github login rloutlaw microsoft alias routlaw | 0 |
22,414 | 3,644,970,404 | IssuesEvent | 2016-02-15 12:31:56 | contao/core | https://api.github.com/repos/contao/core | closed | Creating new template folders is not possible in the back end | defect | In both Contao 3.5.5 and Contao 3.5.6, no new template directories can be created.
The link for this is missing.
If I create a directory via SSH, it is displayed and can be used.
For me this affects both updates and fresh installations. | 1.0 | Creating new template folders is not possible in the back end - In both Contao 3.5.5 and Contao 3.5.6, no new template directories can be created.
The link for this is missing.
If I create a directory via SSH, it is displayed and can be used.
For me this affects both updates and fresh installations. | defect | creating new template folders is not possible in the back end in both contao and contao no new template directories can be created the link for this is missing if i create a directory via ssh it is displayed and can be used for me this affects both updates and fresh installations | 1 |
14,312 | 17,200,340,489 | IssuesEvent | 2021-07-17 04:53:17 | oilshell/oil | https://api.github.com/repos/oilshell/oil | closed | cell sublanguage: a[i] support in -n (nameref) and ${!ref} | compatibility osh-language | - usage from @abathur https://oilshell.zulipchat.com/#narrow/stream/121540-oil-discuss/topic/nameref.20implemented
- I think bash-completion used it in the `${!ref}` context, although I may have patched around it
| True | cell sublanguage: a[i] support in -n (nameref) and ${!ref} - - usage from @abathur https://oilshell.zulipchat.com/#narrow/stream/121540-oil-discuss/topic/nameref.20implemented
- I think bash-completion used it in the `${!ref}` context, although I may have patched around it
| non_defect | cell sublanguage a support in n nameref and ref usage from abathur i think bash completion used it in the ref context although i may have patched around it | 0 |
2,771 | 2,607,944,861 | IssuesEvent | 2015-02-26 00:32:52 | chrsmithdemos/switchlist | https://api.github.com/repos/chrsmithdemos/switchlist | opened | Have "print all" raise a dialog box to prompt for which documents to print | auto-migrated Priority-Medium Type-Defect | ```
Doing a print action from the main window causes all switch lists for all
trains to be printed. It would be nice if it also printed all the interesting
reports - car positions and yard reports. Open a dialog box where the user can
indicate which should be printed.
Requires making the reports printable using the "print all" view.
```
-----
Original issue reported on code.google.com by `rwbowdi...@gmail.com` on 24 Apr 2011 at 5:20 | 1.0 | Have "print all" raise a dialog box to prompt for which documents to print - ```
Doing a print action from the main window causes all switch lists for all
trains to be printed. It would be nice if it also printed all the interesting
reports - car positions and yard reports. Open a dialog box where the user can
indicate which should be printed.
Requires making the reports printable using the "print all" view.
```
-----
Original issue reported on code.google.com by `rwbowdi...@gmail.com` on 24 Apr 2011 at 5:20 | defect | have print all raise a dialog box to prompt for which documents to print doing a print action from the main window causes all switch lists for all trains to be printed it would be nice if it also printed all the interesting reports car positions and yard reports open a dialog box where the user can indicate which should be printed requires making the reports printable using the print all view original issue reported on code google com by rwbowdi gmail com on apr at | 1 |
12,783 | 3,645,676,886 | IssuesEvent | 2016-02-15 15:36:55 | dbpedia-spotlight/dbpedia-spotlight | https://api.github.com/repos/dbpedia-spotlight/dbpedia-spotlight | closed | Splitting Occs and Spotlight Live | bug documentation | - Mapping licenses in the pom and wiki
- Problem on Maven/Github - blob is too big for the dependencies:
<dependency>
<groupId>org.wiki.harvester.dependency</groupId>
<artifactId>morphadorner</artifactId>
<version>1.0</version>
</dependency>
<dependency>
<groupId>org.wiki.harvester.dependency</groupId>
<artifactId>pedia.uima.harvester</artifactId>
<version>1.0</version>
</dependency> | 1.0 | Splitting Occs and Spotlight Live - - Mapping licenses in the pom and wiki
- Problem on Maven/Github - blob is too big for the dependencies:
<dependency>
<groupId>org.wiki.harvester.dependency</groupId>
<artifactId>morphadorner</artifactId>
<version>1.0</version>
</dependency>
<dependency>
<groupId>org.wiki.harvester.dependency</groupId>
<artifactId>pedia.uima.harvester</artifactId>
<version>1.0</version>
</dependency> | non_defect | splitting occs and spotlight live mapping licenses in the pom and wiki problem on maven github blob is too big for the dependencies org wiki harvester dependency morphadorner org wiki harvester dependency pedia uima harvester | 0 |
72,339 | 24,059,166,461 | IssuesEvent | 2022-09-16 20:09:21 | scipy/scipy | https://api.github.com/repos/scipy/scipy | closed | New CI failures in `sparse` with nightly numpy | defect scipy.sparse | If you have submitted a pull request recently, you probably noticed some new failures in the Azure job `prerelease_deps_coverage_64bit_blas` that are unrelated to your changes:
```
=========================== short test summary info ============================
FAILED scipy/sparse/tests/test_base.py::TestCOO::test_reshape_copy - Assertio...
FAILED scipy/sparse/tests/test_base.py::TestCOONonCanonical::test_reshape_copy
FAILED scipy/sparse/tests/test_base.py::Test64Bit::test_resiliency_limit_10[TestCOO-test_reshape_copy]
FAILED scipy/sparse/tests/test_base.py::Test64Bit::test_no_64[TestCOO-test_reshape_copy]
FAILED scipy/sparse/tests/test_base.py::Test64Bit::test_resiliency_random[TestCOO-test_reshape_copy]
FAILED scipy/sparse/tests/test_base.py::Test64Bit::test_resiliency_all_32[TestCOO-test_reshape_copy]
FAILED scipy/sparse/tests/test_base.py::Test64Bit::test_resiliency_all_64[TestCOO-test_reshape_copy]
```
This job is the one that uses a "nightly" NumPy build. The failures are the result of a [recent change](https://github.com/numpy/numpy/pull/21995) in NumPy; see https://github.com/numpy/numpy/pull/21995#issuecomment-1249447631 for a description of how the NumPy change breaks the tests in `scipy.sparse`.
Based on the follow-up comment and the tests in that PR that were approved, it looks like this is an intentional change that is not considered a backwards compatibility break. It changes behavior, but it is apparently behavior that was never guaranteed to always remain the same.
I think we'll be able to work-around the NumPy change pretty easily.
| 1.0 | New CI failures in `sparse` with nightly numpy - If you have submitted a pull request recently, you probably noticed some new failures in the Azure job `prerelease_deps_coverage_64bit_blas` that are unrelated to your changes:
```
=========================== short test summary info ============================
FAILED scipy/sparse/tests/test_base.py::TestCOO::test_reshape_copy - Assertio...
FAILED scipy/sparse/tests/test_base.py::TestCOONonCanonical::test_reshape_copy
FAILED scipy/sparse/tests/test_base.py::Test64Bit::test_resiliency_limit_10[TestCOO-test_reshape_copy]
FAILED scipy/sparse/tests/test_base.py::Test64Bit::test_no_64[TestCOO-test_reshape_copy]
FAILED scipy/sparse/tests/test_base.py::Test64Bit::test_resiliency_random[TestCOO-test_reshape_copy]
FAILED scipy/sparse/tests/test_base.py::Test64Bit::test_resiliency_all_32[TestCOO-test_reshape_copy]
FAILED scipy/sparse/tests/test_base.py::Test64Bit::test_resiliency_all_64[TestCOO-test_reshape_copy]
```
This job is the one that uses a "nightly" NumPy build. The failures are the result of a [recent change](https://github.com/numpy/numpy/pull/21995) in NumPy; see https://github.com/numpy/numpy/pull/21995#issuecomment-1249447631 for a description of how the NumPy change breaks the tests in `scipy.sparse`.
Based on the follow-up comment and the tests in that PR that were approved, it looks like this is an intentional change that is not considered a backwards compatibility break. It changes behavior, but it is apparently behavior that was never guaranteed to always remain the same.
I think we'll be able to work-around the NumPy change pretty easily.
| defect | new ci failures in sparse with nightly numpy if you have submitted a pull request recently you probably noticed some new failures in the azure job prerelease deps coverage blas that are unrelated to your changes short test summary info failed scipy sparse tests test base py testcoo test reshape copy assertio failed scipy sparse tests test base py testcoononcanonical test reshape copy failed scipy sparse tests test base py test resiliency limit failed scipy sparse tests test base py test no failed scipy sparse tests test base py test resiliency random failed scipy sparse tests test base py test resiliency all failed scipy sparse tests test base py test resiliency all this job is the one that uses a nightly numpy build the failures are the result of a in numpy see for a description of how the numpy change breaks the tests in scipy sparse based on the follow up comment and the tests in that pr that were approved it looks like this is an intentional change that is not considered a backwards compatibility break it changes behavior but it is apparently behavior that was never guaranteed to always remain the same i think we ll be able to work around the numpy change pretty easily | 1 |
28,625 | 5,312,757,016 | IssuesEvent | 2017-02-13 10:03:22 | scipy/scipy | https://api.github.com/repos/scipy/scipy | closed | FAIL: scipy.integrate - test_odeint_full_jac | defect scipy.integrate | teapot:scipy andrew$ python runtests.py -s integrate
Building, see build.log...
Build OK
Running unit tests for scipy.integrate
NumPy version 1.12.0
NumPy relaxed strides checking option: True
NumPy is installed in /Users/andrew/miniconda3/envs/dev3/lib/python3.5/site-packages/numpy
SciPy version 0.19.0.dev0+
SciPy is installed in /Users/andrew/Documents/Andy/programming/scipy/build/testenv/lib/python3.5/site-packages/scipy
Python version 3.5.2 |Continuum Analytics, Inc.| (default, Jul 2 2016, 17:52:12) [GCC 4.2.1 Compatible Apple LLVM 4.2 (clang-425.0.28)]
nose version 1.3.7
.......................................................................................................................................................................................................................F...K............................................
======================================================================
FAIL: test_odeint_jac.test_odeint_full_jac
----------------------------------------------------------------------
Traceback (most recent call last):
File "/Users/andrew/miniconda3/envs/dev3/lib/python3.5/site-packages/nose/case.py", line 198, in runTest
self.test(*self.arg)
File "/Users/andrew/Documents/Andy/programming/scipy/build/testenv/lib/python3.5/site-packages/scipy/integrate/tests/test_odeint_jac.py", line 71, in test_odeint_full_jac
check_odeint(JACTYPE_FULL)
File "/Users/andrew/Documents/Andy/programming/scipy/build/testenv/lib/python3.5/site-packages/scipy/integrate/tests/test_odeint_jac.py", line 66, in check_odeint
assert_allclose(yfinal, y1, rtol=1e-12)
File "/Users/andrew/miniconda3/envs/dev3/lib/python3.5/site-packages/numpy/testing/utils.py", line 1411, in assert_allclose
verbose=verbose, header=header, equal_nan=equal_nan)
File "/Users/andrew/miniconda3/envs/dev3/lib/python3.5/site-packages/numpy/testing/utils.py", line 796, in assert_array_compare
raise AssertionError(msg)
AssertionError:
Not equal to tolerance rtol=1e-12, atol=0
(mismatch 100.0%)
x: array([ 4.266167e-04, 2.668761e-05, 2.054494e-06, 2.566184e-08,
3.395174e-10])
y: array([ 4.266167e-04, 2.668761e-05, 2.054494e-06, 2.566184e-08,
3.395237e-10])
----------------------------------------------------------------------
Ran 264 tests in 2.622s
| 1.0 | FAIL: scipy.integrate - test_odeint_full_jac - teapot:scipy andrew$ python runtests.py -s integrate
Building, see build.log...
Build OK
Running unit tests for scipy.integrate
NumPy version 1.12.0
NumPy relaxed strides checking option: True
NumPy is installed in /Users/andrew/miniconda3/envs/dev3/lib/python3.5/site-packages/numpy
SciPy version 0.19.0.dev0+
SciPy is installed in /Users/andrew/Documents/Andy/programming/scipy/build/testenv/lib/python3.5/site-packages/scipy
Python version 3.5.2 |Continuum Analytics, Inc.| (default, Jul 2 2016, 17:52:12) [GCC 4.2.1 Compatible Apple LLVM 4.2 (clang-425.0.28)]
nose version 1.3.7
.......................................................................................................................................................................................................................F...K............................................
======================================================================
FAIL: test_odeint_jac.test_odeint_full_jac
----------------------------------------------------------------------
Traceback (most recent call last):
File "/Users/andrew/miniconda3/envs/dev3/lib/python3.5/site-packages/nose/case.py", line 198, in runTest
self.test(*self.arg)
File "/Users/andrew/Documents/Andy/programming/scipy/build/testenv/lib/python3.5/site-packages/scipy/integrate/tests/test_odeint_jac.py", line 71, in test_odeint_full_jac
check_odeint(JACTYPE_FULL)
File "/Users/andrew/Documents/Andy/programming/scipy/build/testenv/lib/python3.5/site-packages/scipy/integrate/tests/test_odeint_jac.py", line 66, in check_odeint
assert_allclose(yfinal, y1, rtol=1e-12)
File "/Users/andrew/miniconda3/envs/dev3/lib/python3.5/site-packages/numpy/testing/utils.py", line 1411, in assert_allclose
verbose=verbose, header=header, equal_nan=equal_nan)
File "/Users/andrew/miniconda3/envs/dev3/lib/python3.5/site-packages/numpy/testing/utils.py", line 796, in assert_array_compare
raise AssertionError(msg)
AssertionError:
Not equal to tolerance rtol=1e-12, atol=0
(mismatch 100.0%)
x: array([ 4.266167e-04, 2.668761e-05, 2.054494e-06, 2.566184e-08,
3.395174e-10])
y: array([ 4.266167e-04, 2.668761e-05, 2.054494e-06, 2.566184e-08,
3.395237e-10])
----------------------------------------------------------------------
Ran 264 tests in 2.622s
| defect | fail scipy integrate test odeint full jac teapot scipy andrew python runtests py s integrate building see build log build ok running unit tests for scipy integrate numpy version numpy relaxed strides checking option true numpy is installed in users andrew envs lib site packages numpy scipy version scipy is installed in users andrew documents andy programming scipy build testenv lib site packages scipy python version continuum analytics inc default jul nose version f k fail test odeint jac test odeint full jac traceback most recent call last file users andrew envs lib site packages nose case py line in runtest self test self arg file users andrew documents andy programming scipy build testenv lib site packages scipy integrate tests test odeint jac py line in test odeint full jac check odeint jactype full file users andrew documents andy programming scipy build testenv lib site packages scipy integrate tests test odeint jac py line in check odeint assert allclose yfinal rtol file users andrew envs lib site packages numpy testing utils py line in assert allclose verbose verbose header header equal nan equal nan file users andrew envs lib site packages numpy testing utils py line in assert array compare raise assertionerror msg assertionerror not equal to tolerance rtol atol mismatch x array y array ran tests in | 1 |
203,759 | 23,180,381,354 | IssuesEvent | 2022-08-01 01:07:04 | haeli05/source | https://api.github.com/repos/haeli05/source | opened | CVE-2022-2564 (High) detected in mongoose-5.4.15.tgz | security vulnerability | ## CVE-2022-2564 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>mongoose-5.4.15.tgz</b></p></summary>
<p>Mongoose MongoDB ODM</p>
<p>Library home page: <a href="https://registry.npmjs.org/mongoose/-/mongoose-5.4.15.tgz">https://registry.npmjs.org/mongoose/-/mongoose-5.4.15.tgz</a></p>
<p>Path to dependency file: /BackEnd/package.json</p>
<p>Path to vulnerable library: /BackEnd/node_modules/mongoose/package.json</p>
<p>
Dependency Hierarchy:
- :x: **mongoose-5.4.15.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/haeli05/source/commit/cd3dfb369e611687baef858fdd1cf268baed1b52">cd3dfb369e611687baef858fdd1cf268baed1b52</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Prototype Pollution in GitHub repository automattic/mongoose prior to 6.4.6.
<p>Publish Date: 2022-07-28
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2022-2564>CVE-2022-2564</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>9.8</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-2564">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-2564</a></p>
<p>Release Date: 2022-07-28</p>
<p>Fix Resolution: 6.4.6</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | True | CVE-2022-2564 (High) detected in mongoose-5.4.15.tgz - ## CVE-2022-2564 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>mongoose-5.4.15.tgz</b></p></summary>
<p>Mongoose MongoDB ODM</p>
<p>Library home page: <a href="https://registry.npmjs.org/mongoose/-/mongoose-5.4.15.tgz">https://registry.npmjs.org/mongoose/-/mongoose-5.4.15.tgz</a></p>
<p>Path to dependency file: /BackEnd/package.json</p>
<p>Path to vulnerable library: /BackEnd/node_modules/mongoose/package.json</p>
<p>
Dependency Hierarchy:
- :x: **mongoose-5.4.15.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/haeli05/source/commit/cd3dfb369e611687baef858fdd1cf268baed1b52">cd3dfb369e611687baef858fdd1cf268baed1b52</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Prototype Pollution in GitHub repository automattic/mongoose prior to 6.4.6.
<p>Publish Date: 2022-07-28
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2022-2564>CVE-2022-2564</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>9.8</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-2564">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-2564</a></p>
<p>Release Date: 2022-07-28</p>
<p>Fix Resolution: 6.4.6</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | non_defect | cve high detected in mongoose tgz cve high severity vulnerability vulnerable library mongoose tgz mongoose mongodb odm library home page a href path to dependency file backend package json path to vulnerable library backend node modules mongoose package json dependency hierarchy x mongoose tgz vulnerable library found in head commit a href vulnerability details prototype pollution in github repository automattic mongoose prior to publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution step up your open source security game with mend | 0 |
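For reference, the 9.8 base score in the metrics above can be reproduced from the CVSS v3.0 base-score formula. This is a sketch for illustration, not an official scoring tool; the metric weights are the ones published in the CVSS v3.0 specification for the unchanged-scope case (AV:N=0.85, AC:L=0.77, PR:N=0.85, UI:N=0.85, C/I/A High=0.56):

```python
import math

def roundup(x):
    # CVSS v3.0 "round up to one decimal place".
    return math.ceil(x * 10) / 10

def base_score_unchanged(av, ac, pr, ui, c, i, a):
    """CVSS v3.0 base score for Scope: Unchanged."""
    iss = 1 - (1 - c) * (1 - i) * (1 - a)
    impact = 6.42 * iss
    exploitability = 8.22 * av * ac * pr * ui
    if impact <= 0:
        return 0.0
    return roundup(min(impact + exploitability, 10))

# AV:N AC:L PR:N UI:N with C:H I:H A:H, as listed in the row above.
print(base_score_unchanged(0.85, 0.77, 0.85, 0.85, 0.56, 0.56, 0.56))  # → 9.8
```

The same function reproduces the 6.5 score of the medium-severity row later in this file (C:L and I:L weigh 0.22, A:N weighs 0), which is a quick sanity check that the weights are wired up correctly.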
316,735 | 23,645,612,330 | IssuesEvent | 2022-08-25 21:45:38 | awslabs/diagram-maker | https://api.github.com/repos/awslabs/diagram-maker | closed | What is the preferred way to update the store data? | documentation | **Describe the issue with documentation**
I'm trying to dynamically update the consumerData on a particular node after it was created.
What is the preferred way to update the store data?
It would be nice to include this in the documentation or in the examples.
| 1.0 | What is the preferred way to update the store data? - **Describe the issue with documentation**
I'm trying to dynamically update the consumerData on a particular node after it was created.
What is the preferred way to update the store data?
It would be nice to include this in the documentation or in the examples.
| non_defect | what is the preferred way to update the store data describe the issue with documentation i m tryng to dynamically update the consumerdata on a particular node after it was created what is the preffered way to update the store data it would be nice to include this in the documentation or in the examples | 0 |
9,513 | 2,615,154,434 | IssuesEvent | 2015-03-01 06:32:27 | chrsmith/reaver-wps | https://api.github.com/repos/chrsmith/reaver-wps | opened | Restoring Session file | auto-migrated Priority-Triage Type-Defect | ```
Every time I try to restore a previous session file,
reaver basically re-arranges the characters within the file
and it starts to use random characters to crack the pin.
For example, with 05:b5670 it actually used the bssid to try and crack
the pin. I would like to know if there's a precise way, or a different
method, to restore a session file. Is this a problem that everyone is
getting?
```
Original issue reported on code.google.com by `SkeThVi...@gmail.com` on 1 Mar 2012 at 4:17 | 1.0 | Restoring Session file - ```
Every time I try to restore a previous session file,
reaver basically re-arranges the characters within the file
and it starts to use random characters to crack the pin.
For example, with 05:b5670 it actually used the bssid to try and crack
the pin. I would like to know if there's a precise way, or a different
method, to restore a session file. Is this a problem that everyone is
getting?
```
Original issue reported on code.google.com by `SkeThVi...@gmail.com` on 1 Mar 2012 at 4:17 | defect | restoring session file every time i try to restore a previous session file reaver basically re arranges the characters withing the file and it start to use random character to crack the pin for example it actually used the bssid to try and crack the pin i would like to know if there s a precise way or a different method to restoring a session file is this a problem that everyone is getting original issue reported on code google com by skethvi gmail com on mar at | 1 |
185,324 | 21,786,158,429 | IssuesEvent | 2022-05-14 06:44:45 | classicvalues/AA-ionic-login | https://api.github.com/repos/classicvalues/AA-ionic-login | closed | WS-2019-0379 (Medium) detected in commons-codec-1.10.jar - autoclosed | security vulnerability | ## WS-2019-0379 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>commons-codec-1.10.jar</b></p></summary>
<p>The Apache Commons Codec package contains simple encoder and decoders for
various formats such as Base64 and Hexadecimal. In addition to these
widely used encoders and decoders, the codec package also maintains a
collection of phonetic encoding utilities.</p>
<p>Path to dependency file: /node_modules/@capacitor/status-bar/android/build.gradle</p>
<p>Path to vulnerable library: /home/wss-scanner/.gradle/caches/modules-2/files-2.1/commons-codec/commons-codec/1.10/4b95f4897fa13f2cd904aee711aeafc0c5295cd8/commons-codec-1.10.jar,/home/wss-scanner/.gradle/caches/modules-2/files-2.1/commons-codec/commons-codec/1.10/4b95f4897fa13f2cd904aee711aeafc0c5295cd8/commons-codec-1.10.jar,/home/wss-scanner/.gradle/caches/modules-2/files-2.1/commons-codec/commons-codec/1.10/4b95f4897fa13f2cd904aee711aeafc0c5295cd8/commons-codec-1.10.jar,/home/wss-scanner/.gradle/caches/modules-2/files-2.1/commons-codec/commons-codec/1.10/4b95f4897fa13f2cd904aee711aeafc0c5295cd8/commons-codec-1.10.jar,/home/wss-scanner/.gradle/caches/modules-2/files-2.1/commons-codec/commons-codec/1.10/4b95f4897fa13f2cd904aee711aeafc0c5295cd8/commons-codec-1.10.jar,/home/wss-scanner/.gradle/caches/modules-2/files-2.1/commons-codec/commons-codec/1.10/4b95f4897fa13f2cd904aee711aeafc0c5295cd8/commons-codec-1.10.jar,/home/wss-scanner/.gradle/caches/modules-2/files-2.1/commons-codec/commons-codec/1.10/4b95f4897fa13f2cd904aee711aeafc0c5295cd8/commons-codec-1.10.jar</p>
<p>
Dependency Hierarchy:
- lint-gradle-27.2.1.jar (Root Library)
- builder-4.2.1.jar
- sdklib-27.2.1.jar
- httpmime-4.5.6.jar
- httpclient-4.5.6.jar
- :x: **commons-codec-1.10.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/classicvalues/AA-ionic-login/commit/d4f4480b7ddd8c520e4b02ea2008621ded4be6ab">d4f4480b7ddd8c520e4b02ea2008621ded4be6ab</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Apache commons-codec before version “commons-codec-1.13-RC1” is vulnerable to information disclosure due to Improper Input validation.
<p>Publish Date: 2019-05-20
<p>URL: <a href=https://github.com/apache/commons-codec/commit/48b615756d1d770091ea3322eefc08011ee8b113>WS-2019-0379</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>6.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: Low
- Integrity Impact: Low
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/apache/commons-codec/commit/48b615756d1d770091ea3322eefc08011ee8b113">https://github.com/apache/commons-codec/commit/48b615756d1d770091ea3322eefc08011ee8b113</a></p>
<p>Release Date: 2019-05-20</p>
<p>Fix Resolution: commons-codec:commons-codec:1.13</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | True | WS-2019-0379 (Medium) detected in commons-codec-1.10.jar - autoclosed - ## WS-2019-0379 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>commons-codec-1.10.jar</b></p></summary>
<p>The Apache Commons Codec package contains simple encoder and decoders for
various formats such as Base64 and Hexadecimal. In addition to these
widely used encoders and decoders, the codec package also maintains a
collection of phonetic encoding utilities.</p>
<p>Path to dependency file: /node_modules/@capacitor/status-bar/android/build.gradle</p>
<p>Path to vulnerable library: /home/wss-scanner/.gradle/caches/modules-2/files-2.1/commons-codec/commons-codec/1.10/4b95f4897fa13f2cd904aee711aeafc0c5295cd8/commons-codec-1.10.jar,/home/wss-scanner/.gradle/caches/modules-2/files-2.1/commons-codec/commons-codec/1.10/4b95f4897fa13f2cd904aee711aeafc0c5295cd8/commons-codec-1.10.jar,/home/wss-scanner/.gradle/caches/modules-2/files-2.1/commons-codec/commons-codec/1.10/4b95f4897fa13f2cd904aee711aeafc0c5295cd8/commons-codec-1.10.jar,/home/wss-scanner/.gradle/caches/modules-2/files-2.1/commons-codec/commons-codec/1.10/4b95f4897fa13f2cd904aee711aeafc0c5295cd8/commons-codec-1.10.jar,/home/wss-scanner/.gradle/caches/modules-2/files-2.1/commons-codec/commons-codec/1.10/4b95f4897fa13f2cd904aee711aeafc0c5295cd8/commons-codec-1.10.jar,/home/wss-scanner/.gradle/caches/modules-2/files-2.1/commons-codec/commons-codec/1.10/4b95f4897fa13f2cd904aee711aeafc0c5295cd8/commons-codec-1.10.jar,/home/wss-scanner/.gradle/caches/modules-2/files-2.1/commons-codec/commons-codec/1.10/4b95f4897fa13f2cd904aee711aeafc0c5295cd8/commons-codec-1.10.jar</p>
<p>
Dependency Hierarchy:
- lint-gradle-27.2.1.jar (Root Library)
- builder-4.2.1.jar
- sdklib-27.2.1.jar
- httpmime-4.5.6.jar
- httpclient-4.5.6.jar
- :x: **commons-codec-1.10.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/classicvalues/AA-ionic-login/commit/d4f4480b7ddd8c520e4b02ea2008621ded4be6ab">d4f4480b7ddd8c520e4b02ea2008621ded4be6ab</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Apache commons-codec before version “commons-codec-1.13-RC1” is vulnerable to information disclosure due to Improper Input validation.
<p>Publish Date: 2019-05-20
<p>URL: <a href=https://github.com/apache/commons-codec/commit/48b615756d1d770091ea3322eefc08011ee8b113>WS-2019-0379</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>6.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: Low
- Integrity Impact: Low
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/apache/commons-codec/commit/48b615756d1d770091ea3322eefc08011ee8b113">https://github.com/apache/commons-codec/commit/48b615756d1d770091ea3322eefc08011ee8b113</a></p>
<p>Release Date: 2019-05-20</p>
<p>Fix Resolution: commons-codec:commons-codec:1.13</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | non_defect | ws medium detected in commons codec jar autoclosed ws medium severity vulnerability vulnerable library commons codec jar the apache commons codec package contains simple encoder and decoders for various formats such as and hexadecimal in addition to these widely used encoders and decoders the codec package also maintains a collection of phonetic encoding utilities path to dependency file node modules capacitor status bar android build gradle path to vulnerable library home wss scanner gradle caches modules files commons codec commons codec commons codec jar home wss scanner gradle caches modules files commons codec commons codec commons codec jar home wss scanner gradle caches modules files commons codec commons codec commons codec jar home wss scanner gradle caches modules files commons codec commons codec commons codec jar home wss scanner gradle caches modules files commons codec commons codec commons codec jar home wss scanner gradle caches modules files commons codec commons codec commons codec jar home wss scanner gradle caches modules files commons codec commons codec commons codec jar dependency hierarchy lint gradle jar root library builder jar sdklib jar httpmime jar httpclient jar x commons codec jar vulnerable library found in head commit a href found in base branch master vulnerability details apache commons codec before version “commons codec ” is vulnerable to information disclosure due to improper input validation publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact low integrity impact low availability impact none for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution 
commons codec commons codec step up your open source security game with whitesource | 0 |
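The commons-codec advisory in the row above is about a decoder that quietly accepts malformed input before 1.13-RC1. Python's `base64` module exposes the same lenient/strict split, which makes the risk easy to see; this is a hedged analogue in Python, not the commons-codec Java API:

```python
import base64
import binascii

# "hello" encodes to aGVsbG8=; a stray "%" is outside the base64 alphabet.
good = "aGVsbG8="
mangled = "aGV%sbG8="

# Lenient mode (the default) silently discards the invalid byte, so two
# *different* inputs decode to the same bytes.
assert base64.b64decode(good) == b"hello"
assert base64.b64decode(mangled, validate=False) == b"hello"

# Strict mode rejects the malformed input instead of guessing.
try:
    base64.b64decode(mangled, validate=True)
    raise AssertionError("strict decode should have raised")
except binascii.Error:
    pass
```

Silent discarding is exactly the kind of improper input validation the advisory describes: when distinct inputs collapse to one decoded value, attacker-controlled junk can slip past filters that only inspect the encoded form.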
58,529 | 16,589,157,876 | IssuesEvent | 2021-06-01 04:52:15 | SAP/fundamental-ngx | https://api.github.com/repos/SAP/fundamental-ngx | closed | Bug: Platform Object Status - focus and tabbing issue for clickable object status | Defect Hunting Low QA Approved bug platform | #### Is this a bug, enhancement, or feature request?
bug
#### Briefly describe your proposal.
For linked object status, focus should happen after clicking. Also, clickable objects should be tabbable. Fundamental Styles supports tabbing https://fundamental-styles.netlify.app/?path=/docs/components-object-status--primary#clickable-object-status
#### Which versions of Angular and Fundamental Library for Angular are affected? (If this is a feature request, use current version.)
latest
#### If this is a bug, please provide steps for reproducing it.
1. Go to https://fundamental-ngx.netlify.app/#/platform/object-status
2. Go to the Clickable Object Status example.
3. Try to tab through the clickable object statuses. None of them are tabbable.
Also, the UI5 example shows focus for a clickable object status, but it is not present here. Generally, if something is a link, one would expect focus and tabbing for it.
UI5:

| 1.0 | Bug: Platform Object Status - focus and tabbing issue for clickable object status - #### Is this a bug, enhancement, or feature request?
bug
#### Briefly describe your proposal.
For linked object status, focus should happen after clicking. Also, clickable objects should be tabbable. Fundamental Styles supports tabbing https://fundamental-styles.netlify.app/?path=/docs/components-object-status--primary#clickable-object-status
#### Which versions of Angular and Fundamental Library for Angular are affected? (If this is a feature request, use current version.)
latest
#### If this is a bug, please provide steps for reproducing it.
1. Go to https://fundamental-ngx.netlify.app/#/platform/object-status
2. Go to the Clickable Object Status example.
3. Try to tab through the clickable object statuses. None of them are tabbable.
Also, the UI5 example shows focus for a clickable object status, but it is not present here. Generally, if something is a link, one would expect focus and tabbing for it.
UI5:

| defect | bug platform object status focus and tabbing issue for clickable object status is this a bug enhancement or feature request bug briefly describe your proposal for linked object status focus should happen after clicking also clickable objects should be tabbable fundamental styles supports tabbing which versions of angular and fundamental library for angular are affected if this is a feature request use current version latest if this is a bug please provide steps for reproducing it go to go to clickable object status example try to tab through the clickable object statuses none of them are tabbable also example shows focus for clickable object status but it is not present here generally if something is a link one would expect focus and tabbing for it | 1 |
7,019 | 2,610,322,341 | IssuesEvent | 2015-02-26 19:43:56 | chrsmith/republic-at-war | https://api.github.com/repos/chrsmith/republic-at-war | closed | Map Issue | auto-migrated Priority-Medium Type-Defect | ```
Default Fondor Reinforcement Points
No reinforcement points on Fondor when defending as the Republic, though they
were there when I attacked it, which is weird.
Clone Wars GC, normal, as the Republic
```
-----
Original issue reported on code.google.com by `z3r0...@gmail.com` on 11 May 2011 at 12:40 | 1.0 | Map Issue - ```
Default Fondor Reinforcement Points
No reinforcement points on Fondor when defending as the Republic, though they
were there when I attacked it, which is weird.
Clone Wars GC, normal, as the Republic
```
-----
Original issue reported on code.google.com by `z3r0...@gmail.com` on 11 May 2011 at 12:40 | defect | map issue default fondor reinforcement points no reinforcement points on fondor when defending as the republic though they were there when i attacked it which is weird clones wars gc normal as republic original issue reported on code google com by gmail com on may at | 1 |
416,329 | 28,076,878,131 | IssuesEvent | 2023-03-30 00:57:06 | f1tenth/f1tenth_gym | https://api.github.com/repos/f1tenth/f1tenth_gym | closed | Example agent Dockerfile | documentation enhancement | Add example files for setting up an agent using the gym environment.
- Template Dockerfile, including comments on how to add dependencies
- Build and run scripts
- (tentative) native installation script | 1.0 | Example agent Dockerfile - Add example files for setting up an agent using the gym environment.
- Template Dockerfile, including comments on how to add dependencies
- Build and run scripts
- (tentative) native installation script | non_defect | example agent dockerfile add example files for setting up an agent using the gym environment template dockerfile including comments on how to add dependencies build and run scripts tentative native installation script | 0 |
46,561 | 13,174,660,718 | IssuesEvent | 2020-08-11 23:06:55 | shaundmorris/ddf | https://api.github.com/repos/shaundmorris/ddf | closed | CVE-2014-0114 High Severity Vulnerability detected by WhiteSource | security vulnerability wontfix | ## CVE-2014-0114 - High Severity Vulnerability
<details><summary><img src='https://www.whitesourcesoftware.com/wp-content/uploads/2018/10/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>commons-beanutils-1.7.0.jar</b></p></summary>
<p>The Java language provides Reflection and Introspection APIs (see the java.lang.reflect and java.beans packages in the JDK Javadocs). However, these APIs can be quite complex to understand and utilize. The BeanUtils component provides easy-to-use wrappers around these capabilities</p>
<p>path: /root/.m2/repository/commons-beanutils/commons-beanutils/1.7.0/commons-beanutils-1.7.0.jar</p>
<p>
<p>Library home page: <a href=http://jakarta.apache.org/commons/beanutils/>http://jakarta.apache.org/commons/beanutils/</a></p>
Dependency Hierarchy:
- commons-configuration-1.6.jar (Root Library)
- commons-digester-1.8.jar
- :x: **commons-beanutils-1.7.0.jar** (Vulnerable Library)
</p>
</details>
<p></p>
<details><summary><img src='https://www.whitesourcesoftware.com/wp-content/uploads/2018/10/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Apache Commons BeanUtils, as distributed in lib/commons-beanutils-1.8.0.jar in Apache Struts 1.x through 1.3.10 and in other products requiring commons-beanutils through 1.9.2, does not suppress the class property, which allows remote attackers to "manipulate" the ClassLoader and execute arbitrary code via the class parameter, as demonstrated by the passing of this parameter to the getClass method of the ActionForm object in Struts 1.
<p>Publish Date: 2014-04-30
<p>URL: <a href=https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2014-0114>CVE-2014-0114</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://www.whitesourcesoftware.com/wp-content/uploads/2018/10/cvss3.png' width=19 height=20> CVSS 2 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics not available</p>
</p>
</details>
<p></p>
<details><summary><img src='https://www.whitesourcesoftware.com/wp-content/uploads/2018/10/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://issues.apache.org/jira/browse/BEANUTILS-463">https://issues.apache.org/jira/browse/BEANUTILS-463</a></p>
<p>Release Date: 2014-05-24</p>
<p>Fix Resolution: Upgrade to version 1.9.2 or greater</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | True | CVE-2014-0114 High Severity Vulnerability detected by WhiteSource - ## CVE-2014-0114 - High Severity Vulnerability
<details><summary><img src='https://www.whitesourcesoftware.com/wp-content/uploads/2018/10/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>commons-beanutils-1.7.0.jar</b></p></summary>
<p>The Java language provides Reflection and Introspection APIs (see the java.lang.reflect and java.beans packages in the JDK Javadocs). However, these APIs can be quite complex to understand and utilize. The BeanUtils component provides easy-to-use wrappers around these capabilities</p>
<p>path: /root/.m2/repository/commons-beanutils/commons-beanutils/1.7.0/commons-beanutils-1.7.0.jar</p>
<p>
<p>Library home page: <a href=http://jakarta.apache.org/commons/beanutils/>http://jakarta.apache.org/commons/beanutils/</a></p>
Dependency Hierarchy:
- commons-configuration-1.6.jar (Root Library)
- commons-digester-1.8.jar
- :x: **commons-beanutils-1.7.0.jar** (Vulnerable Library)
</p>
</details>
<p></p>
<details><summary><img src='https://www.whitesourcesoftware.com/wp-content/uploads/2018/10/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Apache Commons BeanUtils, as distributed in lib/commons-beanutils-1.8.0.jar in Apache Struts 1.x through 1.3.10 and in other products requiring commons-beanutils through 1.9.2, does not suppress the class property, which allows remote attackers to "manipulate" the ClassLoader and execute arbitrary code via the class parameter, as demonstrated by the passing of this parameter to the getClass method of the ActionForm object in Struts 1.
<p>Publish Date: 2014-04-30
<p>URL: <a href=https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2014-0114>CVE-2014-0114</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://www.whitesourcesoftware.com/wp-content/uploads/2018/10/cvss3.png' width=19 height=20> CVSS 2 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics not available</p>
</p>
</details>
<p></p>
<details><summary><img src='https://www.whitesourcesoftware.com/wp-content/uploads/2018/10/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://issues.apache.org/jira/browse/BEANUTILS-463">https://issues.apache.org/jira/browse/BEANUTILS-463</a></p>
<p>Release Date: 2014-05-24</p>
<p>Fix Resolution: Upgrade to version 1.9.2 or greater</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | non_defect | cve high severity vulnerability detected by whitesource cve high severity vulnerability vulnerable library commons beanutils jar the java language provides reflection and introspection apis see the java lang reflect and java beans packages in the jdk javadocs however these apis can be quite complex to understand and utilize the beanutils component provides easy to use wrappers around these capabilities path root repository commons beanutils commons beanutils commons beanutils jar library home page a href dependency hierarchy commons configuration jar root library commons digester jar x commons beanutils jar vulnerable library vulnerability details apache commons beanutils as distributed in lib commons beanutils jar in apache struts x through and in other products requiring commons beanutils through does not suppress the class property which allows remote attackers to manipulate the classloader and execute arbitrary code via the class parameter as demonstrated by the passing of this parameter to the getclass method of the actionform object in struts publish date url a href cvss score details base score metrics not available suggested fix type upgrade version origin a href release date fix resolution upgrade to version or greater step up your open source security game with whitesource | 0 |
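CVE-2014-0114 in the row above comes down to bean property access that fails to block the `class` property, letting a request parameter walk from any form bean to the ClassLoader. A hedged Python analogue of the same mistake (this is illustrative code of ours, not the BeanUtils implementation): a dotted-path resolver with no deny-list, where `__class__` plays the role of `getClass()`:

```python
from functools import reduce

class User:
    def __init__(self, name):
        self.name = name

def get_path(obj, path):
    # Naive bean-style resolver: no deny-list on the path segments.
    return reduce(getattr, path.split("."), obj)

u = User("alice")
assert get_path(u, "name") == "alice"

# The same resolver happily walks to the class object, the Python
# analogue of BeanUtils exposing the "class" property.
assert get_path(u, "__class__.__name__") == "User"

def get_path_safe(obj, path):
    # Fixed resolver: reject underscore-prefixed segments outright,
    # mirroring the BeanUtils 1.9.2 suppression of "class".
    parts = path.split(".")
    if any(p.startswith("_") for p in parts):
        raise ValueError("forbidden property segment")
    return reduce(getattr, parts, obj)
```

The fix in BeanUtils 1.9.2 is the deny-list approach shown in `get_path_safe`: suppress the dangerous property name rather than trusting callers never to pass it.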
2,555 | 2,607,927,771 | IssuesEvent | 2015-02-26 00:25:24 | chrsmithdemos/minify | https://api.github.com/repos/chrsmithdemos/minify | closed | Minify_Cache_File write empty files | auto-migrated Priority-Critical Type-Defect | ```
On at least one server tested, Minify 2.0.2b writes only zero-length
files. Removing the LOCK_EX option from file_put_contents() fixed the
issue, but I don't know why.
Both my test/prod are similar Apache/mod_php setups on Red Hat 5, but only
the production had the issue.
```
-----
Original issue reported on code.google.com by `mrclay....@gmail.com` on 23 Jul 2008 at 7:50 | 1.0 | Minify_Cache_File write empty files - ```
On at least one server tested, Minify 2.0.2b writes only zero-length
files. Removing the LOCK_EX option from file_put_contents() fixed the
issue, but I don't know why.
Both my test/prod are similar Apache/mod_php setups on Red Hat 5, but only
the production had the issue.
```
-----
Original issue reported on code.google.com by `mrclay....@gmail.com` on 23 Jul 2008 at 7:50 | defect | minify cache file write empty files on at least one server tested minify writes only zero length files removing the lock ex option from file put contents fixed the issue but i don t know why both my test prod are similar apache mod php setups on red hat but only the production had the issue original issue reported on code google com by mrclay gmail com on jul at | 1 |
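The Minify row above describes `file_put_contents(..., LOCK_EX)` leaving zero-length cache files on one server. Independent of why that particular lock misbehaved, a common way to sidestep lock-dependent cache writes is write-to-temp-then-rename, which readers can never observe half-written. A Python sketch of that pattern (not Minify's PHP code; the file name is hypothetical):

```python
import os
import tempfile

def atomic_write(path, data):
    # Write to a temp file in the same directory, then rename over the
    # target. On POSIX, rename is atomic, so a concurrent reader sees
    # either the old contents or the new ones -- never an empty or
    # partially written file.
    d = os.path.dirname(os.path.abspath(path))
    fd, tmp = tempfile.mkstemp(dir=d)
    try:
        with os.fdopen(fd, "wb") as f:
            f.write(data)
            f.flush()
            os.fsync(f.fileno())   # make sure bytes hit disk first
        os.replace(tmp, path)
    except BaseException:
        os.unlink(tmp)
        raise

# Demo: two successive writes; the target is never observed empty.
tmpdir = tempfile.mkdtemp()
target = os.path.join(tmpdir, "minify_cache.txt")
atomic_write(target, b"first")
atomic_write(target, b"second")
assert open(target, "rb").read() == b"second"
```

The temp file must live in the same directory as the target because rename is only atomic within a filesystem; crossing a mount point would degrade to copy-and-delete.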
21,112 | 3,461,696,089 | IssuesEvent | 2015-12-20 09:26:25 | arti01/jkursy | https://api.github.com/repos/arti01/jkursy | closed | Logowanie kursanta | auto-migrated Priority-Low Type-Defect | ```
After a failed login, the user returns to the main page instead of the login page
```
Original issue reported on code.google.com by `stasiom...@gmail.com` on 14 Mar 2011 at 9:35 | 1.0 | Logowanie kursanta - ```
After a failed login, the user returns to the main page instead of the login page
```
Original issue reported on code.google.com by `stasiom...@gmail.com` on 14 Mar 2011 at 9:35 | defect | logowanie kursanta po nieudanym logowaniu powrót do głównej a nie do strony logowania original issue reported on code google com by stasiom gmail com on mar at | 1 |
20,526 | 2,622,852,040 | IssuesEvent | 2015-03-04 08:05:46 | max99x/pagemon-chrome-ext | https://api.github.com/repos/max99x/pagemon-chrome-ext | closed | "Advanced Monitor" in Popup | auto-migrated Priority-Medium | ```
The majority of the pages I monitor require a selector.
It would be nice if there was an "Advanced Monitor" option in the popup that
would kick you straight to the advanced configuration of the new monitor. As
the process now stands, I must first add the page with the popup (via left
click), then open the button's context menu (via right click), find my new
monitor, and turn on advanced monitoring.
If cluttering the popup is a concern, maybe just make this behavior
configurable from advanced options. That way, those of us who need it can set
it once and forget it.
```
Original issue reported on code.google.com by `aftermar...@gmail.com` on 14 Oct 2012 at 12:08
* Merged into: #208 | 1.0 | "Advanced Monitor" in Popup - ```
The majority of the pages I monitor require a selector.
It would be nice if there was an "Advanced Monitor" option in the popup that
would kick you straight to the advanced configuration of the new monitor. As
the process now stands, I must first add the page with the popup (via left
click), then open the button's context menu (via right click), find my new
monitor, and turn on advanced monitoring.
If cluttering the popup is a concern, maybe just make this behavior
configurable from advanced options. That way, those of us who need it can set
it once and forget it.
```
Original issue reported on code.google.com by `aftermar...@gmail.com` on 14 Oct 2012 at 12:08
* Merged into: #208 | non_defect | advanced monitor in popup the majority of the pages i monitor require a selector it would be nice if there was an advanced monitor option in the popup that would kick you straight to the advanced configuration of the new monitor as the process now stands i must first add the page with the popup via left click then open the button s context menu via right click find my new monitor and turn on advanced monitoring if cluttering the popup is a concern maybe just make this behavior configurable from advanced options that way those of us who need it can set it once and forget it original issue reported on code google com by aftermar gmail com on oct at merged into | 0 |
20,110 | 10,459,781,771 | IssuesEvent | 2019-09-20 11:53:50 | Alfresco/alfresco-transform-core | https://api.github.com/repos/Alfresco/alfresco-transform-core | closed | CVE-2019-12402 (Medium) detected in commons-compress-1.18.jar | security vulnerability | ## CVE-2019-12402 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>commons-compress-1.18.jar</b></p></summary>
<p>Apache Commons Compress software defines an API for working with
compression and archive formats. These include: bzip2, gzip, pack200,
lzma, xz, Snappy, traditional Unix Compress, DEFLATE, DEFLATE64, LZ4,
Brotli, Zstandard and ar, cpio, jar, tar, zip, dump, 7z, arj.</p>
<p>Library home page: <a href="https://commons.apache.org/proper/commons-compress/">https://commons.apache.org/proper/commons-compress/</a></p>
<p>Path to dependency file: /alfresco-transform-core/alfresco-docker-tika/pom.xml</p>
<p>Path to vulnerable library: /root/.m2/repository/org/apache/commons/commons-compress/1.18/commons-compress-1.18.jar,2/repository/org/apache/commons/commons-compress/1.18/commons-compress-1.18.jar</p>
<p>
Dependency Hierarchy:
- :x: **commons-compress-1.18.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/Alfresco/alfresco-transform-core/commit/8142836caf3a42dc77a0e74346e16bbc3eaf9c7b">8142836caf3a42dc77a0e74346e16bbc3eaf9c7b</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
The file name encoding algorithm used internally in Apache Commons Compress 1.15 to 1.18 can get into an infinite loop when faced with specially crafted inputs. This can lead to a denial of service attack if an attacker can choose the file names inside of an archive created by Compress.
<p>Publish Date: 2019-08-30
<p>URL: <a href=https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-12402>CVE-2019-12402</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 2 Score Details (<b>5.0</b>)</summary>
<p>
Base Score Metrics not available</p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-12402">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-12402</a></p>
<p>Release Date: 2019-08-30</p>
<p>Fix Resolution: 1.19</p>
</p>
</details>
<p></p>
| True | CVE-2019-12402 (Medium) detected in commons-compress-1.18.jar - ## CVE-2019-12402 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>commons-compress-1.18.jar</b></p></summary>
<p>Apache Commons Compress software defines an API for working with
compression and archive formats. These include: bzip2, gzip, pack200,
lzma, xz, Snappy, traditional Unix Compress, DEFLATE, DEFLATE64, LZ4,
Brotli, Zstandard and ar, cpio, jar, tar, zip, dump, 7z, arj.</p>
<p>Library home page: <a href="https://commons.apache.org/proper/commons-compress/">https://commons.apache.org/proper/commons-compress/</a></p>
<p>Path to dependency file: /alfresco-transform-core/alfresco-docker-tika/pom.xml</p>
<p>Path to vulnerable library: /root/.m2/repository/org/apache/commons/commons-compress/1.18/commons-compress-1.18.jar,2/repository/org/apache/commons/commons-compress/1.18/commons-compress-1.18.jar</p>
<p>
Dependency Hierarchy:
- :x: **commons-compress-1.18.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/Alfresco/alfresco-transform-core/commit/8142836caf3a42dc77a0e74346e16bbc3eaf9c7b">8142836caf3a42dc77a0e74346e16bbc3eaf9c7b</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
The file name encoding algorithm used internally in Apache Commons Compress 1.15 to 1.18 can get into an infinite loop when faced with specially crafted inputs. This can lead to a denial of service attack if an attacker can choose the file names inside of an archive created by Compress.
<p>Publish Date: 2019-08-30
<p>URL: <a href=https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-12402>CVE-2019-12402</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 2 Score Details (<b>5.0</b>)</summary>
<p>
Base Score Metrics not available</p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-12402">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-12402</a></p>
<p>Release Date: 2019-08-30</p>
<p>Fix Resolution: 1.19</p>
</p>
</details>
<p></p>
| non_defect | cve medium detected in commons compress jar cve medium severity vulnerability vulnerable library commons compress jar apache commons compress software defines an api for working with compression and archive formats these include gzip lzma xz snappy traditional unix compress deflate brotli zstandard and ar cpio jar tar zip dump arj library home page a href path to dependency file alfresco transform core alfresco docker tika pom xml path to vulnerable library root repository org apache commons commons compress commons compress jar repository org apache commons commons compress commons compress jar dependency hierarchy x commons compress jar vulnerable library found in head commit a href vulnerability details the file name encoding algorithm used internally in apache commons compress to can get into an infinite loop when faced with specially crafted inputs this can lead to a denial of service attack if an attacker can choose the file names inside of an archive created by compress publish date url a href cvss score details base score metrics not available suggested fix type upgrade version origin a href release date fix resolution | 0 |
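The suggested fix in this record is simply a version bump of commons-compress to 1.19. As a hedged illustration of how such a check can be automated (Python, stdlib only; `vulnerable_compress_deps` and the inline pom fragment are invented for this sketch, and real pom files carry a default namespace that the sketch deliberately omits):

```python
import xml.etree.ElementTree as ET

FIXED = (1, 19)  # CVE-2019-12402 is fixed in commons-compress 1.19

def parse_version(v):
    """Turn a dotted version string like '1.18' into a comparable tuple."""
    return tuple(int(p) for p in v.split("."))

def vulnerable_compress_deps(pom_xml):
    """Scan a simplified, namespace-free pom.xml string and return any
    declared commons-compress versions older than the fixed 1.19 release."""
    root = ET.fromstring(pom_xml)
    hits = []
    for dep in root.iter("dependency"):
        artifact = dep.findtext("artifactId")
        version = dep.findtext("version")
        if artifact == "commons-compress" and version:
            if parse_version(version) < FIXED:
                hits.append(version)
    return hits

# Toy dependency fragment mirroring the record above.
pom = """<project><dependencies>
<dependency>
<groupId>org.apache.commons</groupId>
<artifactId>commons-compress</artifactId>
<version>1.18</version>
</dependency>
</dependencies></project>"""
```

Running the helper on this fragment flags the 1.18 dependency; after the upgrade suggested in the record, it reports nothing.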
85,883 | 16,756,864,742 | IssuesEvent | 2021-06-13 00:48:34 | virocon-organization/virocon | https://api.github.com/repos/virocon-organization/virocon | closed | Axis limits of plot_2D_isodensity too small | code improvement | **I'm submitting a ...**
- [ ] bug report
- [ ] feature request
- [x] code improvement request
## Expected behavior
We should not have the impression that maybe not all datapoints are shown.
## Actual behavior
**How does it currently work (with the bug causing problems or without the feature)?**

## Steps to reproduce the problem (how to see the actual behavior)
import numpy as np
import matplotlib.pyplot as plt
from virocon import (
read_ec_benchmark_dataset,
GlobalHierarchicalModel,
ExponentiatedWeibullDistribution,
LogNormalDistribution,
DependenceFunction,
WidthOfIntervalSlicer,
IFORMContour,
plot_2D_isodensity
)
# Load sea state measurements from NDBC buoy 44007.
data = read_ec_benchmark_dataset("datasets/ec-benchmark_dataset_A.txt")
# Define the marginal distribution for Hs.
dist_description_hs = {
"distribution": ExponentiatedWeibullDistribution(),
"intervals": WidthOfIntervalSlicer(width=0.5, min_n_points=50),
}
# Define the conditional distribution for Tz
def _asymdecrease3(x, a, b, c):
    return a + b / (1 + c * x)

def _lnsquare2(x, a, b, c):
    return np.log(a + b * np.sqrt(np.divide(x, 9.81)))
bounds = [(0, None), (0, None), (None, None)]
sigma_dep = DependenceFunction(_asymdecrease3, bounds=bounds, latex="$a + b / (1 + c * x)$")
mu_dep = DependenceFunction(_lnsquare2, bounds=bounds, latex="$\ln(a + b \sqrt{x / 9.81})$")
dist_description_tz = {
"distribution": LogNormalDistribution(),
"conditional_on": 0,
"parameters": {"sigma": sigma_dep, "mu": mu_dep,},
}
# Create the joint model structure.
dist_descriptions = [dist_description_hs, dist_description_tz]
model = GlobalHierarchicalModel(dist_descriptions)
# Define how the model shall be fitted to data
fit_description_hs = {"method": "wlsq", "weights": "quadratic"}
fit_descriptions = [fit_description_hs, None]
# Fit the model to the data (estimate the model's parameter values).
model.fit(data, fit_descriptions)
# Print the estimated parameter values
print(model)
# Analyze the model's goodness of fit with an isodensity plot.
semantics = {
"names": ["Significant wave height", "Zero-up-crossing period"],
"symbols": ["H_s", "T_z"],
"units": ["m", "s"],
}
plot_2D_isodensity(model, data, semantics, swap_axis=True)
plt.show()
| 1.0 | Axis limits of plot_2D_isodensity too small - **I'm submitting a ...**
- [ ] bug report
- [ ] feature request
- [x] code improvement request
## Expected behavior
We should not have the impression that maybe not all datapoints are shown.
## Actual behavior
**How does it currently work (with the bug causing problems or without the feature)?**

## Steps to reproduce the problem (how to see the actual behavior)
import numpy as np
import matplotlib.pyplot as plt
from virocon import (
read_ec_benchmark_dataset,
GlobalHierarchicalModel,
ExponentiatedWeibullDistribution,
LogNormalDistribution,
DependenceFunction,
WidthOfIntervalSlicer,
IFORMContour,
plot_2D_isodensity
)
# Load sea state measurements from NDBC buoy 44007.
data = read_ec_benchmark_dataset("datasets/ec-benchmark_dataset_A.txt")
# Define the marginal distribution for Hs.
dist_description_hs = {
"distribution": ExponentiatedWeibullDistribution(),
"intervals": WidthOfIntervalSlicer(width=0.5, min_n_points=50),
}
# Define the conditional distribution for Tz
def _asymdecrease3(x, a, b, c):
    return a + b / (1 + c * x)

def _lnsquare2(x, a, b, c):
    return np.log(a + b * np.sqrt(np.divide(x, 9.81)))
bounds = [(0, None), (0, None), (None, None)]
sigma_dep = DependenceFunction(_asymdecrease3, bounds=bounds, latex="$a + b / (1 + c * x)$")
mu_dep = DependenceFunction(_lnsquare2, bounds=bounds, latex="$\ln(a + b \sqrt{x / 9.81})$")
dist_description_tz = {
"distribution": LogNormalDistribution(),
"conditional_on": 0,
"parameters": {"sigma": sigma_dep, "mu": mu_dep,},
}
# Create the joint model structure.
dist_descriptions = [dist_description_hs, dist_description_tz]
model = GlobalHierarchicalModel(dist_descriptions)
# Define how the model shall be fitted to data
fit_description_hs = {"method": "wlsq", "weights": "quadratic"}
fit_descriptions = [fit_description_hs, None]
# Fit the model to the data (estimate the model's parameter values).
model.fit(data, fit_descriptions)
# Print the estimated parameter values
print(model)
# Analyze the model's goodness of fit with an isodensity plot.
semantics = {
"names": ["Significant wave height", "Zero-up-crossing period"],
"symbols": ["H_s", "T_z"],
"units": ["m", "s"],
}
plot_2D_isodensity(model, data, semantics, swap_axis=True)
plt.show()
| non_defect | axis limits of plot isodensity too small i m submitting a bug report feature request code improvement request expected behavior we should not have the impression that maybe not all datapoints are shown actual behavior how does it currently work with the bug causing problems or without the feature steps to reproduce the problem how to see the actual behavior import numpy as np import matplotlib pyplot as plt from virocon import read ec benchmark dataset globalhierarchicalmodel exponentiatedweibulldistribution lognormaldistribution dependencefunction widthofintervalslicer iformcontour plot isodensity load sea state measurements from ndbc buoy data read ec benchmark dataset datasets ec benchmark dataset a txt define the marginal distribution for hs dist description hs distribution exponentiatedweibulldistribution intervals widthofintervalslicer width min n points define the conditional distribution for tz def x a b c return a b c x def x a b c return np log a b np sqrt np divide x bounds sigma dep dependencefunction bounds bounds latex a b c x mu dep dependencefunction bounds bounds latex ln a b sqrt x dist description tz distribution lognormaldistribution conditional on parameters sigma sigma dep mu mu dep create the joint model structure dist descriptions model globalhierarchicalmodel dist descriptions define how the model shall be fitted to data fit description hs method wlsq weights quadratic fit descriptions fit the model to the data estimate the model s parameter values model fit data fit descriptions print the estimated parameter values print model analyze the model s goodnes of fit based with an isodensity plot semantics names symbols units plot isodensity model data semantics swap axis true plt show | 0 |
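The cramped limits reported in this record can be avoided by deriving the axis range from the data itself with a small relative margin. A minimal, library-independent sketch (plain Python; `padded_limits` is a hypothetical helper for illustration, not part of virocon's API):

```python
def padded_limits(values, margin=0.05):
    """Compute (lo, hi) axis limits that cover every data point with a
    relative margin on each side, so no point sits on the plot border."""
    lo, hi = min(values), max(values)
    span = hi - lo
    if span == 0:              # degenerate case: all points identical
        span = abs(lo) or 1.0
    pad = margin * span
    return lo - pad, hi + pad
```

With matplotlib this would be applied as something like `ax.set_xlim(*padded_limits(x_values))` before showing the figure, guaranteeing that every sample is visible.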
823,636 | 31,027,768,748 | IssuesEvent | 2023-08-10 10:18:04 | autonity/docs.autonity.org | https://api.github.com/repos/autonity/docs.autonity.org | closed | Feature/oracle: document | enhancement Priority 1 | ### Description
Update docs to document accountability.
### References
- https://github.com/autonity/autonity/commit/c35d82559e6617493fc790659b88d33bc506ccc9
| 1.0 | Feature/oracle: document - ### Description
Update docs to document accountability.
### References
- https://github.com/autonity/autonity/commit/c35d82559e6617493fc790659b88d33bc506ccc9
| non_defect | feature oracle document description update docs to document accountability references | 0 |
78,301 | 22,193,320,231 | IssuesEvent | 2022-06-07 02:59:12 | tensorflow/tensorflow | https://api.github.com/repos/tensorflow/tensorflow | closed | tensorflow cpu module's speed lower on windows than linux | stat:awaiting response type:build/install type:support stalled 1.4.0 | System information
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow):yes
- OS Platform and Distribution (e.g., Linux Ubuntu 16.04):windows7 64bit and ubuntu 16.04 64bit
- TensorFlow installed from (source or binary):build tensorflow source to shared lib
- TensorFlow version (use command below):tensorflow v1.3.0
- Python version: 3.5
- Bazel version (if compiling from source):N/A
- GCC/Compiler version (if compiling from source):N/A
- CUDA/cuDNN version:N/A
- GPU model and memory:N/A
- Exact command to reproduce:N/A
Describe the problem
Training a TensorFlow module and detecting faces on both Windows 7 and Ubuntu 16.04 takes about twice as much time on Windows 7 as on Ubuntu 16.04. So we want to know whether this is normal or not, and if it is normal, what the reason is.
windows7 PC environment:
CPU: Intel Core i3 2120
time: 80~160 ms
ubuntu16.04 PC environment:
CPU: Intel(R) Core(TM) i3-3220 CPU@3.30GHz
time: 40~100 ms
| 1.0 | tensorflow cpu module's speed lower on windows than linux - System information
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow):yes
- OS Platform and Distribution (e.g., Linux Ubuntu 16.04):windows7 64bit and ubuntu 16.04 64bit
- TensorFlow installed from (source or binary):build tensorflow source to shared lib
- TensorFlow version (use command below):tensorflow v1.3.0
- Python version: 3.5
- Bazel version (if compiling from source):N/A
- GCC/Compiler version (if compiling from source):N/A
- CUDA/cuDNN version:N/A
- GPU model and memory:N/A
- Exact command to reproduce:N/A
Describe the problem
Training a TensorFlow module and detecting faces on both Windows 7 and Ubuntu 16.04 takes about twice as much time on Windows 7 as on Ubuntu 16.04. So we want to know whether this is normal or not, and if it is normal, what the reason is.
windows7 PC environment:
CPU: Intel Core i3 2120
time: 80~160 ms
ubuntu16.04 PC environment:
CPU: Intel(R) Core(TM) i3-3220 CPU@3.30GHz
time: 40~100 ms
| non_defect | tensorflow cpu module s speed lower on windows than linux system information have i written custom code as opposed to using a stock example script provided in tensorflow yes os platform and distribution e g linux ubuntu and ubuntu tensorflow installed from source or binary build tensorflow source to shared lib tensorflow version use command below tensorflow python version bazel version if compiling from source n a gcc compiler version if compiling from source n a cuda cudnn version n a gpu model and memory n a exact command to reproduce n a describe the problem training tensorflow module and detect faces both on and ubuntu but it costs about twice time on than so we want to know this issue is normal or not and if it is normal what s the reason pc environment cpu intel core time ms pc environment cpu intel r core tm cpu time ms | 0 |
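Single measurements like the 80~160 ms range quoted above mix model cost with scheduler noise; a best-of-N loop makes cross-OS comparisons more meaningful. A generic timing sketch (plain Python, not TensorFlow-specific; `time_call` is a name invented here):

```python
import time

def time_call(fn, *args, repeats=10):
    """Call fn(*args) `repeats` times and return (best_ms, mean_ms).
    Best-of-N is the steadier figure when comparing the same workload
    across machines or operating systems."""
    samples = []
    for _ in range(repeats):
        t0 = time.perf_counter()
        fn(*args)
        samples.append((time.perf_counter() - t0) * 1000.0)
    return min(samples), sum(samples) / len(samples)
```

Wrapping the face-detection call in such a loop on each OS would show whether the 2x gap holds up once warm-up and scheduling jitter are averaged out.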
58,008 | 8,222,606,086 | IssuesEvent | 2018-09-06 08:03:24 | Jeedom-Zigate/jeedom-plugin-zigate | https://api.github.com/repos/Jeedom-Zigate/jeedom-plugin-zigate | opened | Improve the code documentation | documentation enhancement standardization | The code is somewhat lacking in documentation... Consider producing a PHPDoc. | 1.0 | Improve the code documentation - The code is somewhat lacking in documentation... Consider producing a PHPDoc. | non_defect | improve the code documentation the code is somewhat lacking in documentation consider producing a phpdoc | 0
18,443 | 3,061,970,026 | IssuesEvent | 2015-08-16 03:45:54 | eczarny/spectacle | https://api.github.com/repos/eczarny/spectacle | closed | Empty space left at the top for apps with hidden menubar | defect ★ | When an app has menubar hidden and is resized by spectacle there's an empty space left at the top where the menubar used to be.
I'm adding
<key>LSUIPresentationMode</key>
<integer>4</integer>
to info.plist of an app to hide its menubar but it happens with other apps like Emacs that have an option to hide it. | 1.0 | Empty space left at the top for apps with hidden menubar - When an app has menubar hidden and is resized by spectacle there's an empty space left at the top where the menubar used to be.
I'm adding
<key>LSUIPresentationMode</key>
<integer>4</integer>
to info.plist of an app to hide its menubar but it happens with other apps like Emacs that have an option to hide it. | defect | empty space left at the top for apps with hidden menubar when an app has menubar hidden and is resized by spectacle there s an empty space left at the top where the menubar used to be i m adding lsuipresentationmode to info plist of an app to hide its menubar but it happens with other apps like emacs that have an option to hide it | 1 |
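The hand-edit of Info.plist described in this record can be scripted. A hedged Python sketch using the standard plistlib module (`hide_menubar` is a helper invented for illustration; `LSUIPresentationMode = 4` is the key/value quoted in the report):

```python
import plistlib

def hide_menubar(plist_bytes):
    """Return Info.plist bytes with LSUIPresentationMode set to 4,
    the setting the reporter adds to hide an app's menubar."""
    info = plistlib.loads(plist_bytes)
    info["LSUIPresentationMode"] = 4
    return plistlib.dumps(info)
```

Applied to an app bundle's Info.plist, this reproduces the reporter's setup, which is useful when trying to reproduce the empty-space bug against apps that do not expose a hide-menubar option themselves.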
103,302 | 16,602,467,524 | IssuesEvent | 2021-06-01 21:38:08 | gms-ws-sandbox/nibrs | https://api.github.com/repos/gms-ws-sandbox/nibrs | opened | CVE-2019-0221 (Medium) detected in tomcat-embed-core-8.5.20.jar, tomcat-embed-core-8.5.34.jar | security vulnerability | ## CVE-2019-0221 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>tomcat-embed-core-8.5.20.jar</b>, <b>tomcat-embed-core-8.5.34.jar</b></p></summary>
<p>
<details><summary><b>tomcat-embed-core-8.5.20.jar</b></p></summary>
<p>Core Tomcat implementation</p>
<p>Library home page: <a href="http://tomcat.apache.org/">http://tomcat.apache.org/</a></p>
<p>Path to dependency file: nibrs/tools/nibrs-fbi-service/pom.xml</p>
<p>Path to vulnerable library: /home/wss-scanner/.m2/repository/org/apache/tomcat/embed/tomcat-embed-core/8.5.20/tomcat-embed-core-8.5.20.jar,nibrs/tools/nibrs-fbi-service/target/nibrs-fbi-service-1.0.0/WEB-INF/lib/tomcat-embed-core-8.5.20.jar</p>
<p>
Dependency Hierarchy:
- :x: **tomcat-embed-core-8.5.20.jar** (Vulnerable Library)
</details>
<details><summary><b>tomcat-embed-core-8.5.34.jar</b></p></summary>
<p>Core Tomcat implementation</p>
<p>Library home page: <a href="https://tomcat.apache.org/">https://tomcat.apache.org/</a></p>
<p>Path to dependency file: nibrs/web/nibrs-web/pom.xml</p>
<p>Path to vulnerable library: /home/wss-scanner/.m2/repository/org/apache/tomcat/embed/tomcat-embed-core/8.5.34/tomcat-embed-core-8.5.34.jar,nibrs/tools/nibrs-route/target/nibrs-route-1.0.0/WEB-INF/lib/tomcat-embed-core-8.5.34.jar,/home/wss-scanner/.m2/repository/org/apache/tomcat/embed/tomcat-embed-core/8.5.34/tomcat-embed-core-8.5.34.jar,/home/wss-scanner/.m2/repository/org/apache/tomcat/embed/tomcat-embed-core/8.5.34/tomcat-embed-core-8.5.34.jar,/home/wss-scanner/.m2/repository/org/apache/tomcat/embed/tomcat-embed-core/8.5.34/tomcat-embed-core-8.5.34.jar,/home/wss-scanner/.m2/repository/org/apache/tomcat/embed/tomcat-embed-core/8.5.34/tomcat-embed-core-8.5.34.jar</p>
<p>
Dependency Hierarchy:
- spring-boot-starter-web-2.0.5.RELEASE.jar (Root Library)
- spring-boot-starter-tomcat-2.0.5.RELEASE.jar
- :x: **tomcat-embed-core-8.5.34.jar** (Vulnerable Library)
</details>
<p>Found in HEAD commit: <a href="https://github.com/gms-ws-sandbox/nibrs/commit/dba6b0930aa319c568021490e9259f5cae89b6c5">dba6b0930aa319c568021490e9259f5cae89b6c5</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
The SSI printenv command in Apache Tomcat 9.0.0.M1 to 9.0.0.17, 8.5.0 to 8.5.39 and 7.0.0 to 7.0.93 echoes user provided data without escaping and is, therefore, vulnerable to XSS. SSI is disabled by default. The printenv command is intended for debugging and is unlikely to be present in a production website.
<p>Publish Date: 2019-05-28
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2019-0221>CVE-2019-0221</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>6.1</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: Required
- Scope: Changed
- Impact Metrics:
- Confidentiality Impact: Low
- Integrity Impact: Low
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-0221">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-0221</a></p>
<p>Release Date: 2019-05-28</p>
<p>Fix Resolution: 9.0.0.18,8.5.40,7.0.94</p>
</p>
</details>
<p></p>
<!-- <REMEDIATE>{"isOpenPROnVulnerability":true,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"Java","groupId":"org.apache.tomcat.embed","packageName":"tomcat-embed-core","packageVersion":"8.5.20","packageFilePaths":["/tools/nibrs-fbi-service/pom.xml"],"isTransitiveDependency":false,"dependencyTree":"org.apache.tomcat.embed:tomcat-embed-core:8.5.20","isMinimumFixVersionAvailable":true,"minimumFixVersion":"9.0.0.18,8.5.40,7.0.94"},{"packageType":"Java","groupId":"org.apache.tomcat.embed","packageName":"tomcat-embed-core","packageVersion":"8.5.34","packageFilePaths":["/web/nibrs-web/pom.xml","/tools/nibrs-staging-data/pom.xml","/tools/nibrs-staging-data-common/pom.xml","/tools/nibrs-summary-report/pom.xml","/tools/nibrs-route/pom.xml"],"isTransitiveDependency":true,"dependencyTree":"org.springframework.boot:spring-boot-starter-web:2.0.5.RELEASE;org.springframework.boot:spring-boot-starter-tomcat:2.0.5.RELEASE;org.apache.tomcat.embed:tomcat-embed-core:8.5.34","isMinimumFixVersionAvailable":true,"minimumFixVersion":"9.0.0.18,8.5.40,7.0.94"}],"baseBranches":["master"],"vulnerabilityIdentifier":"CVE-2019-0221","vulnerabilityDetails":"The SSI printenv command in Apache Tomcat 9.0.0.M1 to 9.0.0.17, 8.5.0 to 8.5.39 and 7.0.0 to 7.0.93 echoes user provided data without escaping and is, therefore, vulnerable to XSS. SSI is disabled by default. The printenv command is intended for debugging and is unlikely to be present in a production website.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2019-0221","cvss3Severity":"medium","cvss3Score":"6.1","cvss3Metrics":{"A":"None","AC":"Low","PR":"None","S":"Changed","C":"Low","UI":"Required","AV":"Network","I":"Low"},"extraData":{}}</REMEDIATE> --> | True | CVE-2019-0221 (Medium) detected in tomcat-embed-core-8.5.20.jar, tomcat-embed-core-8.5.34.jar - ## CVE-2019-0221 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>tomcat-embed-core-8.5.20.jar</b>, <b>tomcat-embed-core-8.5.34.jar</b></p></summary>
<p>
<details><summary><b>tomcat-embed-core-8.5.20.jar</b></p></summary>
<p>Core Tomcat implementation</p>
<p>Library home page: <a href="http://tomcat.apache.org/">http://tomcat.apache.org/</a></p>
<p>Path to dependency file: nibrs/tools/nibrs-fbi-service/pom.xml</p>
<p>Path to vulnerable library: /home/wss-scanner/.m2/repository/org/apache/tomcat/embed/tomcat-embed-core/8.5.20/tomcat-embed-core-8.5.20.jar,nibrs/tools/nibrs-fbi-service/target/nibrs-fbi-service-1.0.0/WEB-INF/lib/tomcat-embed-core-8.5.20.jar</p>
<p>
Dependency Hierarchy:
- :x: **tomcat-embed-core-8.5.20.jar** (Vulnerable Library)
</details>
<details><summary><b>tomcat-embed-core-8.5.34.jar</b></p></summary>
<p>Core Tomcat implementation</p>
<p>Library home page: <a href="https://tomcat.apache.org/">https://tomcat.apache.org/</a></p>
<p>Path to dependency file: nibrs/web/nibrs-web/pom.xml</p>
<p>Path to vulnerable library: /home/wss-scanner/.m2/repository/org/apache/tomcat/embed/tomcat-embed-core/8.5.34/tomcat-embed-core-8.5.34.jar,nibrs/tools/nibrs-route/target/nibrs-route-1.0.0/WEB-INF/lib/tomcat-embed-core-8.5.34.jar,/home/wss-scanner/.m2/repository/org/apache/tomcat/embed/tomcat-embed-core/8.5.34/tomcat-embed-core-8.5.34.jar,/home/wss-scanner/.m2/repository/org/apache/tomcat/embed/tomcat-embed-core/8.5.34/tomcat-embed-core-8.5.34.jar,/home/wss-scanner/.m2/repository/org/apache/tomcat/embed/tomcat-embed-core/8.5.34/tomcat-embed-core-8.5.34.jar,/home/wss-scanner/.m2/repository/org/apache/tomcat/embed/tomcat-embed-core/8.5.34/tomcat-embed-core-8.5.34.jar</p>
<p>
Dependency Hierarchy:
- spring-boot-starter-web-2.0.5.RELEASE.jar (Root Library)
- spring-boot-starter-tomcat-2.0.5.RELEASE.jar
- :x: **tomcat-embed-core-8.5.34.jar** (Vulnerable Library)
</details>
<p>Found in HEAD commit: <a href="https://github.com/gms-ws-sandbox/nibrs/commit/dba6b0930aa319c568021490e9259f5cae89b6c5">dba6b0930aa319c568021490e9259f5cae89b6c5</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
The SSI printenv command in Apache Tomcat 9.0.0.M1 to 9.0.0.17, 8.5.0 to 8.5.39 and 7.0.0 to 7.0.93 echoes user provided data without escaping and is, therefore, vulnerable to XSS. SSI is disabled by default. The printenv command is intended for debugging and is unlikely to be present in a production website.
<p>Publish Date: 2019-05-28
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2019-0221>CVE-2019-0221</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>6.1</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: Required
- Scope: Changed
- Impact Metrics:
- Confidentiality Impact: Low
- Integrity Impact: Low
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-0221">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-0221</a></p>
<p>Release Date: 2019-05-28</p>
<p>Fix Resolution: 9.0.0.18,8.5.40,7.0.94</p>
</p>
</details>
<p></p>
<!-- <REMEDIATE>{"isOpenPROnVulnerability":true,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"Java","groupId":"org.apache.tomcat.embed","packageName":"tomcat-embed-core","packageVersion":"8.5.20","packageFilePaths":["/tools/nibrs-fbi-service/pom.xml"],"isTransitiveDependency":false,"dependencyTree":"org.apache.tomcat.embed:tomcat-embed-core:8.5.20","isMinimumFixVersionAvailable":true,"minimumFixVersion":"9.0.0.18,8.5.40,7.0.94"},{"packageType":"Java","groupId":"org.apache.tomcat.embed","packageName":"tomcat-embed-core","packageVersion":"8.5.34","packageFilePaths":["/web/nibrs-web/pom.xml","/tools/nibrs-staging-data/pom.xml","/tools/nibrs-staging-data-common/pom.xml","/tools/nibrs-summary-report/pom.xml","/tools/nibrs-route/pom.xml"],"isTransitiveDependency":true,"dependencyTree":"org.springframework.boot:spring-boot-starter-web:2.0.5.RELEASE;org.springframework.boot:spring-boot-starter-tomcat:2.0.5.RELEASE;org.apache.tomcat.embed:tomcat-embed-core:8.5.34","isMinimumFixVersionAvailable":true,"minimumFixVersion":"9.0.0.18,8.5.40,7.0.94"}],"baseBranches":["master"],"vulnerabilityIdentifier":"CVE-2019-0221","vulnerabilityDetails":"The SSI printenv command in Apache Tomcat 9.0.0.M1 to 9.0.0.17, 8.5.0 to 8.5.39 and 7.0.0 to 7.0.93 echoes user provided data without escaping and is, therefore, vulnerable to XSS. SSI is disabled by default. 
The printenv command is intended for debugging and is unlikely to be present in a production website.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2019-0221","cvss3Severity":"medium","cvss3Score":"6.1","cvss3Metrics":{"A":"None","AC":"Low","PR":"None","S":"Changed","C":"Low","UI":"Required","AV":"Network","I":"Low"},"extraData":{}}</REMEDIATE> --> | non_defect | 0
21,724 | 3,548,871,967 | IssuesEvent | 2016-01-20 16:00:38 | josecl/cool-php-captcha | https://api.github.com/repos/josecl/cool-php-captcha | closed | caching problem | auto-migrated Priority-Medium Security Type-Defect | ```
next bug is in example-form.php,
line 76 and following: <img src="captcha.php" id="captcha" />
if example-form.php is served with headers that allow caching, e.g.:
- Expires set in the future
- or Expires set to the current time
- or Cache-Control WITHOUT must-revalidate, post-check=0, pre-check=0
- or Pragma private or public
then: the user opens example-form.php -> submits the form -> e.g. to send-form.php,
a validation error is found in the user data, so an error is shown together with a
javascript window.history.go(-1) link ... or the user goes back in the browser himself
>> the user gets back to example-form.php, BUT this page WILL NOT BE
RE-REQUESTED, BECAUSE IT IS CACHED, so he gets the page with the data he typed
before... that is OK
>> the bug is in the next step... WHEN THE USER SUBMITS THE FORM AGAIN...
-> either the captcha is always the SAME and always VALID, because the cached page
never changed the captcha,
-> or the captcha fails every time, because the first execution of send-form.php
already cleared the previous captcha
a solution exists: request captcha.php by javascript:
<script type="text/javascript">
var captchaAction = null;
function captcha() {
  var captchaDate = new Date();
  if (captchaAction == null || captchaAction + 1000 < captchaDate.getTime()) {
    // request a new captcha code and invalidate the old one
    document.getElementById('captcha').src = 'captcha.php?' + Math.random();
  }
  captchaAction = captchaDate.getTime();
  setTimeout(captcha, 250);
}
var timerID = setTimeout(captcha, 250);
</script>
```
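The report's two remedies — anti-caching headers on the form page and a random query string on the captcha URL — can be sketched as follows. This is a hedged illustration written in Python (cool-php-captcha itself is PHP, and the helper names here are invented):

```python
import email.utils
import random

def no_cache_headers() -> dict:
    """The anti-caching header set: forces the browser to re-request the form page."""
    return {
        "Expires": email.utils.formatdate(0, usegmt=True),  # a date in the past (1970)
        "Cache-Control": "no-store, no-cache, must-revalidate, post-check=0, pre-check=0",
        "Pragma": "no-cache",
    }

def cache_busted_url(base: str) -> str:
    """Append a random query string so the captcha image bypasses every cache,
    mirroring the report's 'captcha.php?' + Math.random() trick."""
    return f"{base}?{random.random()}"

print(no_cache_headers()["Cache-Control"])
print(cache_busted_url("captcha.php"))
```

Either approach alone would avoid the stale-form / stale-captcha mismatch; the report combines the cache-busting URL with a timer so a restored-from-cache page still fetches a fresh captcha.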
Original issue reported on code.google.com by `svecp...@gmail.com` on 31 Aug 2010 at 5:47 | 1.0 | defect | 1
104,839 | 13,130,030,072 | IssuesEvent | 2020-08-06 14:46:10 | AvaloniaUI/Avalonia | https://api.github.com/repos/AvaloniaUI/Avalonia | closed | Improve error message when missing IAssetLoader in AvaloniaXamlLoader.Load | API designer | Right now, when using ```AvaloniaXamlLoader.Load``` inside the Application's constructor, you get the error: ```Could not create IAssetLoader : maybe Application.RegisterServices() wasn't called?```.
How about we improve this message for the case of initializing an application? It could read:
```Could not create IAssetLoader : maybe Application.RegisterServices() wasn't called, or is AvaloniaXamlLoader.Load being called in the Application's constructor?```
(Exact message text is of course up for discussion, my main point is that it hints about the location of the .Load call) | 1.0 | non_defect | 0
52,484 | 6,258,606,211 | IssuesEvent | 2017-07-14 15:58:24 | Microsoft/vscode | https://api.github.com/repos/Microsoft/vscode | opened | Test: Multi Root Workspaces | testplan-item | Test for: Multi Root Workspaces
Complexity: 5
- [ ] Windows
- [ ] Linux
- [ ] macOS
In this milestone we rewrote how multi root workspaces surface in VS Code. The previous solution with having a `workspace` setting in user settings is obsolete (there is no migration). See https://github.com/Microsoft/vscode/issues/396#issuecomment-315079618 for more details on our approach.
`Basics`
Most workspace related operations center around a new submenu under the file menu:

Try to work with multi root workspaces and play around with the available actions. Transition between empty workspaces, single folder workspaces and multi-root workspaces. Some things to keep an eye on:
* explorer and search operations work as before in any of the contexts
* you can save an "Untitled Workspace" to some location on disk and open it from there
* you can switch workspaces via `File > Open Recent` as well as the recently opened picker (F1 `>open recent`)
* you can add and remove root folders from a multi-root workspace
* you see that you are inside a workspace by a new status bar color as well as the workspace name showing up in the explorer section for folders
* workspaces that are opened will restore in the same way as folders do (you can set `window.restoreWindows`: all to restore multiple windows)
* debugging (node.js and extension host debugging) works as before
`Data`
Once you are in a workspace context, we use the workspaces identifier to associate:
* UI state (e.g. the files you have opened as tabs)
* hot-exit state (e.g. dirty files you left dirty when quitting)
* extension storage (a location on disk where extensions can store data via the [`ExtensionContext.storagePath`](https://github.com/Microsoft/vscode/blob/master/src/vs/vscode.d.ts#L3513) API)
Verify:
* UI state you have inside a workspace is restored next time you open it
* dirty files are restored when you quit and reopen the workspace
* extensions have a stable `ExtensionContext.storagePath` location per workspace
`Settings`
Once you are in a workspace context, workspace settings are no longer stored within the `.vscode` folder, but within the workspace file. Verify that you can still define workspace settings when you are in a workspace context and that settings apply as usual. Also verify that folder settings (the ones we do support, e.g. editor settings) still apply per resource you open of that folder.
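The per-workspace association described under `Data` hinges on a stable workspace identifier keying UI state, hot-exit state, and extension storage. A rough sketch of such keying (the hashing scheme below is an assumption for illustration, not VS Code's actual implementation):

```python
import hashlib
from pathlib import Path

def workspace_storage_path(base: Path, workspace_id_source: str) -> Path:
    """Derive a stable per-workspace storage directory from a workspace identifier.

    Hashing the workspace file path gives the same directory every time the
    same workspace is opened, and distinct directories for distinct workspaces.
    """
    workspace_id = hashlib.sha256(workspace_id_source.encode("utf-8")).hexdigest()[:16]
    return base / "workspaceStorage" / workspace_id

base = Path("user-data")
p1 = workspace_storage_path(base, "/home/me/projects.code-workspace")
p2 = workspace_storage_path(base, "/home/me/projects.code-workspace")
p3 = workspace_storage_path(base, "/home/me/other.code-workspace")
print(p1 == p2, p1 == p3)  # prints: True False
```

The stability property (same workspace, same path) is what the test items above verify from the user's point of view: restored tabs, restored dirty files, and a constant `ExtensionContext.storagePath`.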
| 1.0 | non_defect | 0
18,736 | 11,045,938,311 | IssuesEvent | 2019-12-09 15:58:49 | opencb/opencga | https://api.github.com/repos/opencb/opencga | closed | DataResponse changes | catalog web services | DataResponse is now defined in the datastore module of the java-common-libs repository. We need to:
- [x] Create a replica of the DataResponse from datastore in OpenCGA but containing a list of OpenCGAResults instead of DataResults.
- [x] Call the class RestResponse instead of DataResponse.
- [x] Add an _events_ field to the new RestResponse.
- [x] Avoid adding the _apiVersion_ query parameter to the _params_ field. There is already an _apiVersion_ field within the RestResponse.
- [x] Upgrade apiVersion to v2. | 1.0 | non_defect | 0
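The checklist above describes a response envelope. A rough sketch of the intended RestResponse shape (field names follow the issue text, but the layout and the `Event` type here are assumptions, shown in Python rather than OpenCGA's Java):

```python
from dataclasses import dataclass, field
from typing import Any, Dict, List

@dataclass
class Event:
    type: str      # e.g. "ERROR", "WARNING", "INFO" -- illustrative values
    message: str

@dataclass
class RestResponse:
    apiVersion: str = "v2"                               # upgraded from v1
    events: List[Event] = field(default_factory=list)    # the new _events_ field
    params: Dict[str, Any] = field(default_factory=dict)
    responses: List[Any] = field(default_factory=list)   # OpenCGAResults, not DataResults

    def add_param(self, key: str, value: Any) -> None:
        # apiVersion already has its own top-level field, so it is never
        # duplicated inside params.
        if key == "apiVersion":
            return
        self.params[key] = value

r = RestResponse()
r.add_param("apiVersion", "v2")
r.add_param("study", "demo")
print(r.apiVersion, sorted(r.params))  # prints: v2 ['study']
```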
3,133 | 12,027,111,919 | IssuesEvent | 2020-04-12 16:58:28 | tgstation/tgstation-server | https://api.github.com/repos/tgstation/tgstation-server | closed | Create separate versions | API Bridge Maintainability Issue | Core version. Purely related to release cycle: 4.\<Feature\>.\<Patch\>
HTTP API Version. C# API version inherits this. Semver: \<Major\>.\<Minor\>.\<Patch\>
DMAPI Version. Semver: \<Major\>.\<Minor\>.\<Patch\>
C# Client Version. Versioned similarly to the API based on its own functionality.
Web Client Version. Inherited from tgstation-server-control-panel
Downtime version. Indicates the earliest core version a soft upgrade can occur from.
We will use the GitHub prerelease flag until we are stable. | True | non_defect | 0
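The "Downtime version" rule above — a soft upgrade is possible only from that core version or later — might be checked like this (an illustration of the semver comparison, not tgstation-server's actual logic):

```python
from typing import Tuple

def parse_version(v: str) -> Tuple[int, int, int]:
    """Parse a 'major.minor.patch' string into a comparable tuple."""
    major, minor, patch = (int(part) for part in v.split("."))
    return major, minor, patch

def can_soft_upgrade(running_core: str, downtime_version: str) -> bool:
    """A soft (no-downtime) upgrade is possible only when the running core
    version is at or above the announced downtime version."""
    return parse_version(running_core) >= parse_version(downtime_version)

print(can_soft_upgrade("4.3.1", "4.2.0"), can_soft_upgrade("4.1.9", "4.2.0"))  # prints: True False
```

Tuple comparison gives the usual semver ordering here because each component is compared numerically, left to right.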
48,928 | 13,184,779,266 | IssuesEvent | 2020-08-12 20:04:44 | icecube-trac/tix3 | https://api.github.com/repos/icecube-trac/tix3 | opened | Bug in log_fatal_stream() (Trac #470) | Incomplete Migration Migrated from Trac dataclasses defect | <details>
<summary>_Migrated from https://code.icecube.wisc.edu/ticket/470
, reported by sflis and owned by blaufuss_</summary>
<p>
```json
{
"status": "closed",
"changetime": "2015-02-11T19:38:36",
"description": "Hi,\n\nI recall that you remodelled the Logging-module some time ago and I\nthink I found a bug in I3Logging.cxx:181 where I get the compiler error\nwith cxx-compiler:\n\n31%] Building CXX object\nHiveSplitter/CMakeFiles/HiveSplitter.dir/private/HiveSplitter/Hive-lib.cxx.o\n/home/mzoll/i3/meta-projects/icerec/trunk/src/HiveSplitter/private/HiveSplitter/Hive-lib.cxx:494:0:\nerror: unterminated argument list invoking macro \"log_info\"\n/home/mzoll/i3/meta-\nprojects/icerec/trunk/src/HiveSplitter/private/HiveSplitter/Hive-lib.cxx:\nIn function \u2018int honey::ExpandToNextRing(honey::DOMHoneyCombRegister\n&)\u2019:\n/home/mzoll/i3/meta-projects/icerec/trunk/src/HiveSplitter/private/HiveSplitter/Hive-lib.cxx:176:4:\nerror: \u2018s\u2019 was not declared in this scope\n\n\nwhere in code i have on that mentioned line a\n\nlog_fatal_stream(); // see below\n\n-statement.\n\n\nIf I compile with clang I get:\n\n/home/mzoll/i3/meta-projects/icerec/trunk/src/HiveSplitter/private/HiveSplitter/Hive-lib.cxx:176:4:\nerror: use of undeclared identifier 's'\nlog_fatal_stream(\"There are non mutual registered strings\n\"<<combs_iter->first<<\" and \"<<*outer_stringPtr<<\" ; cannot ...\n^\n/home/mzoll/i3/meta-projects/icerec/trunk/src/icetray/public/icetray/I3Logging.h:181:30:\nnote: expanded from:\nthrow std::runtime_error(s.str() + \" (in \" + __PRETTY_FUNCTION__ + \")\");)\n^\n/home/mzoll/i3/meta-projects/icerec/trunk/src/icetray/public/icetray/I3Logging.h:99:54:\nnote: expanded from:\nid, file, line, func, _i3_str_logger_str.str()); epilogue }\n\n\nwhich is pointing to said I3Logging.cxx:181 with the following line:\n\n#define log_fatal_stream(msg) I3_STREAM_LOGGER(I3LOG_FATAL, \\\n__icetray_logger_id(), __FILE__, __LINE__, __PRETTY_FUNCTION__, msg, \\\nthrow std::runtime_error(s.str() + \" (in \" + __PRETTY_FUNCTION__ + \")\");)\n\nmy first guess solutions did unfortunately not work :/\ncould you please correct this ?",
"reporter": "sflis",
"cc": "",
"resolution": "wontfix",
"_ts": "1423683516164964",
"component": "dataclasses",
"summary": "Bug in log_fatal_stream()",
"priority": "normal",
"keywords": "Icetray",
"time": "2013-09-16T12:08:29",
"milestone": "",
"owner": "blaufuss",
"type": "defect"
}
```
</p>
</details>
| 1.0 | Bug in log_fatal_stream() (Trac #470) - <details>
<summary>_Migrated from https://code.icecube.wisc.edu/ticket/470
, reported by sflis and owned by blaufuss_</summary>
<p>
```json
{
"status": "closed",
"changetime": "2015-02-11T19:38:36",
"description": "Hi,\n\nI recall that you remodelled the Logging-module some time ago and I\nthink I found a bug in I3Logging.cxx:181 where I get the compiler error\nwith cxx-compiler:\n\n31%] Building CXX object\nHiveSplitter/CMakeFiles/HiveSplitter.dir/private/HiveSplitter/Hive-lib.cxx.o\n/home/mzoll/i3/meta-projects/icerec/trunk/src/HiveSplitter/private/HiveSplitter/Hive-lib.cxx:494:0:\nerror: unterminated argument list invoking macro \"log_info\"\n/home/mzoll/i3/meta-\nprojects/icerec/trunk/src/HiveSplitter/private/HiveSplitter/Hive-lib.cxx:\nIn function \u2018int honey::ExpandToNextRing(honey::DOMHoneyCombRegister\n&)\u2019:\n/home/mzoll/i3/meta-projects/icerec/trunk/src/HiveSplitter/private/HiveSplitter/Hive-lib.cxx:176:4:\nerror: \u2018s\u2019 was not declared in this scope\n\n\nwhere in code i have on that mentioned line a\n\nlog_fatal_stream(); // see below\n\n-statement.\n\n\nIf I compile with clang I get:\n\n/home/mzoll/i3/meta-projects/icerec/trunk/src/HiveSplitter/private/HiveSplitter/Hive-lib.cxx:176:4:\nerror: use of undeclared identifier 's'\nlog_fatal_stream(\"There are non mutual registered strings\n\"<<combs_iter->first<<\" and \"<<*outer_stringPtr<<\" ; cannot ...\n^\n/home/mzoll/i3/meta-projects/icerec/trunk/src/icetray/public/icetray/I3Logging.h:181:30:\nnote: expanded from:\nthrow std::runtime_error(s.str() + \" (in \" + __PRETTY_FUNCTION__ + \")\");)\n^\n/home/mzoll/i3/meta-projects/icerec/trunk/src/icetray/public/icetray/I3Logging.h:99:54:\nnote: expanded from:\nid, file, line, func, _i3_str_logger_str.str()); epilogue }\n\n\nwhich is pointing to said I3Logging.cxx:181 with the following line:\n\n#define log_fatal_stream(msg) I3_STREAM_LOGGER(I3LOG_FATAL, \\\n__icetray_logger_id(), __FILE__, __LINE__, __PRETTY_FUNCTION__, msg, \\\nthrow std::runtime_error(s.str() + \" (in \" + __PRETTY_FUNCTION__ + \")\");)\n\nmy first guess solutions did unfortunately not work :/\ncould you please correct this ?",
"reporter": "sflis",
"cc": "",
"resolution": "wontfix",
"_ts": "1423683516164964",
"component": "dataclasses",
"summary": "Bug in log_fatal_stream()",
"priority": "normal",
"keywords": "Icetray",
"time": "2013-09-16T12:08:29",
"milestone": "",
"owner": "blaufuss",
"type": "defect"
}
```
</p>
</details>
| defect | bug in log fatal stream trac migrated from reported by sflis and owned by blaufuss json status closed changetime description hi n ni recall that you remodelled the logging module some time ago and i nthink i found a bug in cxx where i get the compiler error nwith cxx compiler n building cxx object nhivesplitter cmakefiles hivesplitter dir private hivesplitter hive lib cxx o n home mzoll meta projects icerec trunk src hivesplitter private hivesplitter hive lib cxx nerror unterminated argument list invoking macro log info n home mzoll meta nprojects icerec trunk src hivesplitter private hivesplitter hive lib cxx nin function honey expandtonextring honey domhoneycombregister n n home mzoll meta projects icerec trunk src hivesplitter private hivesplitter hive lib cxx nerror was not declared in this scope n n nwhere in code i have on that mentioned line a n nlog fatal stream see below n n statement n n nif i compile with clang i get n n home mzoll meta projects icerec trunk src hivesplitter private hivesplitter hive lib cxx nerror use of undeclared identifier s nlog fatal stream there are non mutual registered strings n first and outer stringptr cannot n n home mzoll meta projects icerec trunk src icetray public icetray h nnote expanded from nthrow std runtime error s str in pretty function n n home mzoll meta projects icerec trunk src icetray public icetray h nnote expanded from nid file line func str logger str str epilogue n n nwhich is pointing to said cxx with the following line n n define log fatal stream msg stream logger fatal n icetray logger id file line pretty function msg nthrow std runtime error s str in pretty function n nmy first guess solutions did unfortunately not work ncould you please correct this reporter sflis cc resolution wontfix ts component dataclasses summary bug in log fatal stream priority normal keywords icetray time milestone owner blaufuss type defect | 1 |
37,890 | 8,562,840,924 | IssuesEvent | 2018-11-09 12:02:09 | hazelcast/hazelcast | https://api.github.com/repos/hazelcast/hazelcast | closed | mapEvictionPolicy config becomes null when using a JSON file to configure the PCF service | Module: Config Team: Core Team: Integration Type: Defect | On the PCF side, we use a JSON file to start the service (a hazelcast instance). [hazelcast-json-starter](https://github.com/hazelcast/hazelcast-json-starter/blob/master/src/main/java/GsonStarter.java) takes the JSON file, converts it into a Config object, and then starts the hazelcast instance. While investigating a customer ticket reporting that eviction does not work in the PCF environment, we realized that `mapEvictionPolicy` is null even though `evictionPolicy` is configured in the JSON file. The expected behaviour is: if `evictionPolicy` is configured but `mapEvictionPolicy` is **not**, then `mapEvictionPolicy` should be set from the value of `evictionPolicy`.
`mapEvictionPolicy` becomes null because it is set only by the XmlConfigBuilder class, while on the [hazelcast-json-starter](https://github.com/hazelcast/hazelcast-json-starter/blob/master/src/main/java/GsonStarter.java) side the [gson](https://github.com/google/gson) library uses reflection on the Config class to create it from the JSON file:
https://github.com/hazelcast/hazelcast/blob/master/hazelcast/src/main/java/com/hazelcast/config/XmlConfigBuilder.java#L1375
https://github.com/hazelcast/hazelcast/blob/master/hazelcast/src/main/java/com/hazelcast/config/MapConfig.java#L467
First, @gurbuzali and I tried to fix this issue on the [hazelcast-json-starter](https://github.com/hazelcast/hazelcast-json-starter/blob/master/src/main/java/GsonStarter.java) side:
- We started with the [jackson](https://github.com/FasterXML/jackson) library, which by default maps the fields of a JSON object to fields in a Java object by matching the JSON field names to the getter and setter methods of the Java object, but we encountered a lot of errors.
- Then we tried to convert the JSON file into XML and create the Config object from that, but JSON --> XML conversion is very problematic: we would need to handle a lot of hazelcast-xml specific edge cases and to update our [hazelcast-full.json](https://github.com/hazelcast/hazelcast-code-samples/tree/master/hazelcast-integration/pcf-integration/hazelcast-full.json).
As a result we decided this issue should be fixed on the hazelcast side. One possible solution is to resolve the value of `mapEvictionPolicy` in its own getter, not in `setEvictionPolicy()`, so that it will not become null:
https://github.com/hazelcast/hazelcast/blob/master/hazelcast/src/main/java/com/hazelcast/config/MapConfig.java#L465-L492
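The proposed getter-side fix works because reflection-based construction (gson) bypasses setters entirely, so any derived field must be resolved lazily when it is read. A sketch of the pattern (a simplified Python stand-in, not hazelcast's real MapConfig):

```python
# Hypothetical mapping from the configured policy name to its implementation;
# the names are illustrative, not hazelcast's actual classes.
POLICY_IMPLS = {
    "LRU": "LRUEvictionPolicy",
    "LFU": "LFUEvictionPolicy",
    "NONE": None,
}

class MapConfig:
    def __init__(self):
        self.eviction_policy = "NONE"
        self._map_eviction_policy = None  # never populated when built via reflection

    def get_map_eviction_policy(self):
        # Lazy fallback in the getter: even if only eviction_policy was
        # populated (e.g. by gson/reflection, which never calls setters),
        # derive the policy implementation here instead of returning null.
        if self._map_eviction_policy is None:
            self._map_eviction_policy = POLICY_IMPLS[self.eviction_policy]
        return self._map_eviction_policy

# Simulate reflection-style construction: fields set directly, setters bypassed.
cfg = MapConfig.__new__(MapConfig)
cfg.eviction_policy = "LRU"
cfg._map_eviction_policy = None
print(cfg.get_map_eviction_policy())  # prints: LRUEvictionPolicy
```

Moving the resolution into the getter means every construction path — XML builder, reflection, or programmatic — observes a consistent value, which is the behaviour the issue asks for.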
| 1.0 | mapEvictionPolicy config becomes null when use JSON file to configure PCF service - At the PCF side, we are using JSON file to start service(hazelcast instance). [hazelcast-json-starter](https://github.com/hazelcast/hazelcast-json-starter/blob/master/src/main/java/GsonStarter.java) takes the json file, converts it into Config object then starts hazelcast instance. While we are investigating one of the customer tickets reports that eviction does not work on PCF env, we have realized that `mapEvictionPolicy` is null though `evictionPolicy` is already configured at JSON file. The expected behaviour is, if `evictionPolicy` is configured but `mapEvictionPolicy` is **not**, `mapEvictionPolicy` should be configured into value of `evictionPolicy`.
`mapEvictionPolicy` becomes null because it is set only by XMLConfigBuilder class and at the [hazelcast-json-starter](https://github.com/hazelcast/hazelcast-json-starter/blob/master/src/main/java/GsonStarter.java) side, [gson](https://github.com/google/gson) library uses reflection of Config class to create it from JSON file:
https://github.com/hazelcast/hazelcast/blob/master/hazelcast/src/main/java/com/hazelcast/config/XmlConfigBuilder.java#L1375
https://github.com/hazelcast/hazelcast/blob/master/hazelcast/src/main/java/com/hazelcast/config/MapConfig.java#L467
First, @gurbuzali and I tried to fix this issue on the [hazelcast-json-starter](https://github.com/hazelcast/hazelcast-json-starter/blob/master/src/main/java/GsonStarter.java) side:
- We started with the [jackson](https://github.com/FasterXML/jackson) library, which by default maps the fields of a JSON object to fields in a Java object by matching JSON field names to the getter and setter methods of the Java object, but we encountered a lot of errors.
- Then we tried to convert the JSON file into XML and create the Config object from it, but the JSON --> XML conversion is very problematic: we would need to handle a lot of Hazelcast-XML-specific edge cases and update our [hazelcast-full.json](https://github.com/hazelcast/hazelcast-code-samples/tree/master/hazelcast-integration/pcf-integration/hazelcast-full.json).
As a result, we have decided that this issue should be fixed on the Hazelcast side. One possible solution is to set the value of `mapEvictionPolicy` in its own getter, not in `setEvictionPolicy()`, so that it will not become null:
https://github.com/hazelcast/hazelcast/blob/master/hazelcast/src/main/java/com/hazelcast/config/MapConfig.java#L465-L492
| defect | mapevictionpolicy config becomes null when use json file to configure pcf service at the pcf side we are using json file to start service hazelcast instance takes the json file converts it into config object then starts hazelcast instance while we are investigating one of the customer tickets reports that eviction does not work on pcf env we have realized that mapevictionpolicy is null though evictionpolicy is already configured at json file the expected behaviour is if evictionpolicy is configured but mapevictionpolicy is not mapevictionpolicy should be configured into value of evictionpolicy mapevictionpolicy becomes null because it is set only by xmlconfigbuilder class and at the side library uses reflection of config class to create it from json file first me and gurbuzali have tried to fix this issue at side we started with library which maps the fields of a json object to fields in a java object by matching the names of the json field to the getter and setter methods in the java object as a default but we encountered a lot errors then we tried to convert the json file into xml then create config object from it but json xml conversion is very problematic we need to handle a lot of hazelcast xml specific edge cases and we need to update our as a result we have decided to this issue should be fixed at hazelcast side one of the possible solution of this issue is we can set value of mapevictionpolicy at own getter not at setevictionpolicy thus it will not become null | 1 |
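The lazy-getter fix proposed in the record above can be sketched in a few lines. This is an illustrative sketch only — the class, field, and mapping names are assumptions, not the actual Hazelcast API: a derived field is resolved on read instead of on write, so it stays correct even when the object is built via reflection/JSON and the setter is bypassed.

```python
# Hypothetical mapping from policy name to implementation (illustrative only).
POLICY_TO_IMPL = {
    "LRU": "LRUEvictionPolicy",
    "LFU": "LFUEvictionPolicy",
    "NONE": "NullEvictionPolicy",
}

class MapConfigSketch:
    def __init__(self):
        self.eviction_policy = "NONE"
        # May stay None when the object is constructed reflectively,
        # because no setter ever runs to populate it.
        self._map_eviction_policy = None

    def get_map_eviction_policy(self):
        # Resolve lazily on read: if the derived field was never set
        # (e.g. the setter was skipped), derive it from eviction_policy.
        if self._map_eviction_policy is None:
            self._map_eviction_policy = POLICY_TO_IMPL[self.eviction_policy]
        return self._map_eviction_policy

# Simulate JSON/reflection-based construction that bypasses setters:
cfg = MapConfigSketch()
cfg.eviction_policy = "LRU"           # field assigned directly, setter never called
print(cfg.get_map_eviction_policy())  # → LRUEvictionPolicy
```

The same pattern would keep any setter-derived field consistent regardless of how the config object was constructed.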
21,159 | 3,463,518,946 | IssuesEvent | 2015-12-21 10:29:02 | primefaces/primefaces | https://api.github.com/repos/primefaces/primefaces | closed | ColumnToggler not compatible with Priority Columns | 5.2.17 5.3.4 defect | When responsive mode is enabled via priority columns, column toggler items should also be responsive. | 1.0 | ColumnToggler not compatible with Priority Columns - When responsive mode is enabled via priority columns, column toggler items should also be responsive. | defect | columntoggler not compatible with priority columns when responsive mode is enabled via priority columns column toggler items should also be responsive | 1 |
13,860 | 2,789,402,756 | IssuesEvent | 2015-05-08 19:11:01 | jimradford/superputty | https://api.github.com/repos/jimradford/superputty | closed | Setting GUI opacity to 0 is non recoverable. | auto-migrated Priority-Medium Type-Defect | ```
What steps will reproduce the problem?
1. In GUI tab set opacity to 0
2.
3.
What is the expected output? What do you see instead?
Makes window completely invisible without any chance of altering the setting
back.
What version of the product are you using? On what operating system?
Current stable version 1.4.0.5 Windows 7
Please provide any additional information below.
Is there a config file or registry value that can be changed?
Russell Rockefeller
http://russellrockefeller.wordpress.com
Twitter @RockefellerRuss
```
Original issue reported on code.google.com by `digitalt...@gmail.com` on 4 Apr 2015 at 1:00 | 1.0 | Setting GUI opacity to 0 is non recoverable. - ```
What steps will reproduce the problem?
1. In GUI tab set opacity to 0
2.
3.
What is the expected output? What do you see instead?
Makes window completely invisible without any chance of altering the setting
back.
What version of the product are you using? On what operating system?
Current stable version 1.4.0.5 Windows 7
Please provide any additional information below.
Is there a config file or registry value that can be changed?
Russell Rockefeller
http://russellrockefeller.wordpress.com
Twitter @RockefellerRuss
```
Original issue reported on code.google.com by `digitalt...@gmail.com` on 4 Apr 2015 at 1:00 | defect | setting gui opacity to is non recoverable what steps will reproduce the problem in gui tab set opacity to what is the expected output what do you see instead makes window completely invisible without any chance of altering the setting back what version of the product are you using on what operating system current stable version windows please provide any additional information below is there a config file or registry value that can be changed russell rockefeller twitter rockefellerruss original issue reported on code google com by digitalt gmail com on apr at | 1 |
625,537 | 19,751,950,636 | IssuesEvent | 2022-01-15 06:16:34 | TeamSparker/Spark-iOS | https://api.github.com/repos/TeamSparker/Spark-iOS | opened | [Feat] Connect room-creation flow screen transitions | Feat 🦹t없e맑은水빈 P1 / Priority High | ## 📌 Issue
<!-- Briefly describe the issue -->
Connect the `Home -> Room Creation -> Waiting Room` flow screen transitions
## 📝 To-do
<!-- List the tasks to be done -->
- [ ] Apply navigation controller
- [ ] Screen transitions
- [ ] Check and fix layout
| 1.0 | [Feat] Connect room-creation flow screen transitions - ## 📌 Issue
<!-- Briefly describe the issue -->
Connect the `Home -> Room Creation -> Waiting Room` flow screen transitions
## 📝 To-do
<!-- List the tasks to be done -->
- [ ] Apply navigation controller
- [ ] Screen transitions
- [ ] Check and fix layout
| non_defect | connect room creation flow screen transitions 📌 issue connect the home room creation waiting room flow screen transitions 📝 to do apply navigation controller screen transitions check and fix layout | 0 |
73,056 | 24,431,812,764 | IssuesEvent | 2022-10-06 08:40:50 | naev/naev | https://api.github.com/repos/naev/naev | closed | Failure to handle spobs suddenly selling commodities | Type-Defect Priority-Critical | The current economy code seems to go a bit bonkers when a spob suddenly starts dealing with commodities. The only case of this in the game is currently Antlejos I think.
For reference, a screenshot from LJ_Dude on discord:

| 1.0 | Failure to handle spobs suddenly selling commodities - The current economy code seems to go a bit bonkers when a spob suddenly starts dealing with commodities. The only case of this in the game is currently Antlejos I think.
For reference, a screenshot from LJ_Dude on discord:

| defect | failure to handle spobs suddenly selling commodities the current economy code seems to go a bit bonkers when a spob suddenly starts dealing with commodities the only case of this in the game is currently antlejos i think for reference a screenshot from lj dude on discord | 1 |
74,185 | 3,435,942,614 | IssuesEvent | 2015-12-12 01:26:21 | MoonRaker/pvlib-python | https://api.github.com/repos/MoonRaker/pvlib-python | closed | model variables | medium priority | A possibly incomplete list of problems with the model variables that we're requesting (or not requesting):
- [x] GFS: missing non-gust wind speed or components.
- [x] HRRR_ESRL: missing non-gust wind speed or components.
- [x] NAM: missing non-gust wind speed or components. What's the difference between ``'Downward_Short-Wave_Radiation_Flux_surface'`` and ``'Downward_Short-Wave_Radiation_Flux_surface_Mixed_intervals_Average'``?
- [x] We should either get surface pressure for all models or for no models (missing from NDFD only).
I just pulled anything that seemed reasonably interesting in the proof of concept test, but now we need to do it right. | 1.0 | model variables - A possibly incomplete list of problems with the model variables that we're requesting (or not requesting):
- [x] GFS: missing non-gust wind speed or components.
- [x] HRRR_ESRL: missing non-gust wind speed or components.
- [x] NAM: missing non-gust wind speed or components. What's the difference between ``'Downward_Short-Wave_Radiation_Flux_surface'`` and ``'Downward_Short-Wave_Radiation_Flux_surface_Mixed_intervals_Average'``?
- [x] We should either get surface pressure for all models or for no models (missing from NDFD only).
I just pulled anything that seemed reasonably interesting in the proof of concept test, but now we need to do it right. | non_defect | model variables a possibly incomplete list of problems with the model variables that we re requesting or not requesting gfs missing non gust wind speed or components hrrr esrl missing non gust wind speed or components nam missing non gust wind speed or components what s the difference between downward short wave radiation flux surface and downward short wave radiation flux surface mixed intervals average we should either get surface pressure for all models or for no models missing from ndfd only i just pulled anything that seemed reasonably interesting in the proof of concept test but now we need to do it right | 0 |
53,391 | 13,261,508,159 | IssuesEvent | 2020-08-20 20:01:33 | icecube-trac/tix4 | https://api.github.com/repos/icecube-trac/tix4 | closed | [iceprod2] global state is evil (Trac #1292) | Migrated from Trac defect iceprod | `iceprod.core.exe.config` is global state. Find a way to make this better.
This breaks the test suite when testing `core.i3exec` and `core.exe` in the same run.
<details>
<summary><em>Migrated from <a href="https://code.icecube.wisc.edu/projects/icecube/ticket/1292">https://code.icecube.wisc.edu/projects/icecube/ticket/1292</a>, reported by david.schultzand owned by david.schultz</em></summary>
<p>
```json
{
"status": "closed",
"changetime": "2019-02-13T14:11:28",
"_ts": "1550067088921308",
"description": "`iceprod.core.exe.config` is global state. Find a way to make this better.\n\nThis breaks the test suite when testing `core.i3exec` and `core.exe` in the same run.",
"reporter": "david.schultz",
"cc": "ddelventhal",
"resolution": "fixed",
"time": "2015-08-25T22:38:23",
"component": "iceprod",
"summary": "[iceprod2] global state is evil",
"priority": "critical",
"keywords": "",
"milestone": "",
"owner": "david.schultz",
"type": "defect"
}
```
</p>
</details>
| 1.0 | [iceprod2] global state is evil (Trac #1292) - `iceprod.core.exe.config` is global state. Find a way to make this better.
This breaks the test suite when testing `core.i3exec` and `core.exe` in the same run.
<details>
<summary><em>Migrated from <a href="https://code.icecube.wisc.edu/projects/icecube/ticket/1292">https://code.icecube.wisc.edu/projects/icecube/ticket/1292</a>, reported by david.schultzand owned by david.schultz</em></summary>
<p>
```json
{
"status": "closed",
"changetime": "2019-02-13T14:11:28",
"_ts": "1550067088921308",
"description": "`iceprod.core.exe.config` is global state. Find a way to make this better.\n\nThis breaks the test suite when testing `core.i3exec` and `core.exe` in the same run.",
"reporter": "david.schultz",
"cc": "ddelventhal",
"resolution": "fixed",
"time": "2015-08-25T22:38:23",
"component": "iceprod",
"summary": "[iceprod2] global state is evil",
"priority": "critical",
"keywords": "",
"milestone": "",
"owner": "david.schultz",
"type": "defect"
}
```
</p>
</details>
| defect | global state is evil trac iceprod core exe config is global state find a way to make this better this breaks the test suite when testing core and core exe in the same run migrated from json status closed changetime ts description iceprod core exe config is global state find a way to make this better n nthis breaks the test suite when testing core and core exe in the same run reporter david schultz cc ddelventhal resolution fixed time component iceprod summary global state is evil priority critical keywords milestone owner david schultz type defect | 1 |
256,838 | 22,104,631,495 | IssuesEvent | 2022-06-01 16:02:57 | ResetNetwork/recoding-tech | https://api.github.com/repos/ResetNetwork/recoding-tech | closed | standardize styling for "headlines & highlights" / further reading sections | needs testing | * Remove the author from display on the homepage (creates a strange hyphen when no author is present)
* Use the text styling from the homepage (i.e. smaller, italic text for the date)
| 1.0 | standardize styling for "headlines & highlights" / further reading sections - * Remove the author from display on the homepage (creates a strange hyphen when no author is present)
* Use the text styling from the homepage (i.e. smaller, italic text for the date)
| non_defect | standardize styling for headlines highlights further reading sections remove the author from display on the homepage creates a strange hyphen when no author is present use the text styling from the homepage ie smaller italics text for the date | 0 |
24,107 | 3,917,070,254 | IssuesEvent | 2016-04-21 06:23:36 | irnawansuprapti/openbiz-cubi | https://api.github.com/repos/irnawansuprapti/openbiz-cubi | closed | health 75005655 | auto-migrated Priority-Medium spam Type-Defect | ```
Indigenous people of many islands and continents are apparently showing better
health and skin condition that most of us. That can be because of the fact that
the natives have been practicing several ancient and indigenous regimens that
are known to help better take care of the body. The Aborigines of Australia are
no different. Through the years, many medical practitioners have been
monitoring and observing the natives' lifestyles to find out why they have
better health, particularly their skin condition. The secret to their natural
skin care lies in emu oil. http://www.strongmenmuscle.com/pure-moringa-slim/
```
Original issue reported on code.google.com by `OliverV...@gmail.com` on 16 Apr 2015 at 10:10 | 1.0 | health 75005655 - ```
Indigenous people of many islands and continents are apparently showing better
health and skin condition that most of us. That can be because of the fact that
the natives have been practicing several ancient and indigenous regimens that
are known to help better take care of the body. The Aborigines of Australia are
no different. Through the years, many medical practitioners have been
monitoring and observing the natives' lifestyles to find out why they have
better health, particularly their skin condition. The secret to their natural
skin care lies in emu oil. http://www.strongmenmuscle.com/pure-moringa-slim/
```
Original issue reported on code.google.com by `OliverV...@gmail.com` on 16 Apr 2015 at 10:10 | defect | health indigenous people of many islands and continents are apparently showing better health and skin condition that most of us that can be because of the fact that the natives have been practicing several ancient and indigenous regimens that are known to help better take care of the body the aborigines of australia are no different through the years many medical practitioners have been monitoring and observing the natives lifestyles to find out why they have better health particularly their skin condition the secret to their natural skin care lies in emu oil original issue reported on code google com by oliverv gmail com on apr at | 1 |
81,564 | 31,027,085,968 | IssuesEvent | 2023-08-10 09:51:43 | vector-im/element-x-ios | https://api.github.com/repos/vector-im/element-x-ios | closed | Turning on airplane mode then opening a room displays undismissable "Failed loading messages" error | T-Defect S-Major O-Occasional A-Offline | ### Steps to reproduce
1. Be on the room list view with some rooms populating the list
2. Switch on airplane mode
3. Tap on a room
4. Wait
### Outcome
#### What did you expect?
Either display cached messages or display a dismissible "Cannot load messages while offline" message or something like that.
#### What happened instead?
When tapping a room the OFFLINE pill turns into a Syncing pill that spins for several minutes. Once it times out it displays an error, "Failed loading messages" that I cannot seem to dismiss. Turning off airplane mode restores the connection and loads the messages as expected.

### Your phone model
iPhone 12 mini
### Operating system version
iOS 16.5.1
### Application version
1.1.8 (271)
### Homeserver
matrix.org
### Will you send logs?
Yes | 1.0 | Turning on airplane mode then opening a room displays undismissable "Failed loading messages" error - ### Steps to reproduce
1. Be on the room list view with some rooms populating the list
2. Switch on airplane mode
3. Tap on a room
4. Wait
### Outcome
#### What did you expect?
Either display cached messages or display a dismissible "Cannot load messages while offline" message or something like that.
#### What happened instead?
When tapping a room the OFFLINE pill turns into a Syncing pill that spins for several minutes. Once it times out it displays an error, "Failed loading messages" that I cannot seem to dismiss. Turning off airplane mode restores the connection and loads the messages as expected.

### Your phone model
iPhone 12 mini
### Operating system version
iOS 16.5.1
### Application version
1.1.8 (271)
### Homeserver
matrix.org
### Will you send logs?
Yes | defect | turning on airplane mode then opening a room displays undismissable failed loading messages error steps to reproduce be on the room list view with some rooms populating the list switch on airplane mode tap on a room wait outcome what did you expect either display cached messages or display a dismissible cannot load messages while offline message or something like that what happened instead when tapping a room the offline pill turns into a syncing pill that spins for several minutes once it times out it displays an error failed loading messages that i cannot seem to dismiss turning off airplane mode restores the connection and loads the messages as expected your phone model iphone mini operating system version ios application version homeserver matrix org will you send logs yes | 1 |
2,762 | 2,607,938,832 | IssuesEvent | 2015-02-26 00:29:58 | chrsmithdemos/minify | https://api.github.com/repos/chrsmithdemos/minify | closed | Minifying method calls on bare Numbers fails | auto-migrated Priority-Medium Release-2.1.5 Type-Defect | ```
Minify commit/version: http://tweakimg.net/files/upload/jsminplus-1.4.zip
PHP version: 5.5.9-1ubuntu4.4
What steps will reproduce the problem?
<?php
require_once('jsminplus.php');
var_dump(JSMinPlus::minify('(6).toString()'));
Expected output:
string(12) "(6).toString()"
Actual output:
string(12) "6.toString()"
Note:
Actual output yields the following error in Chrome:
Error: Line 1: Unexpected token ILLEGAL
```
-----
Original issue reported on code.google.com by `corv...@gmail.com` on 29 Oct 2014 at 12:14 | 1.0 | Minifying method calls on bare Numbers fails - ```
Minify commit/version: http://tweakimg.net/files/upload/jsminplus-1.4.zip
PHP version: 5.5.9-1ubuntu4.4
What steps will reproduce the problem?
<?php
require_once('jsminplus.php');
var_dump(JSMinPlus::minify('(6).toString()'));
Expected output:
string(12) "(6).toString()"
Actual output:
string(12) "6.toString()"
Note:
Actual output yields the following error in Chrome:
Error: Line 1: Unexpected token ILLEGAL
```
-----
Original issue reported on code.google.com by `corv...@gmail.com` on 29 Oct 2014 at 12:14 | defect | minifying method calls on bare numbers fails minify commit version php version what steps will reproduce the problem php require once jsminplus php var dump jsminplus minify tostring expected output string tostring actual output string tostring note actual output yields the following error in chrome error line unexpected token illegal original issue reported on code google com by corv gmail com on oct at | 1 |
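The minify record above hinges on a JavaScript lexing detail: `6.toString()` is invalid because the dot after a bare integer literal is parsed as a decimal point, so the parentheses in `(6).toString()` are load-bearing and must survive minification. The sketch below is a hypothetical post-minification guard written in Python, not JSMinPlus's real implementation; it re-wraps a bare integer literal followed by a method call in parentheses.

```python
import re

# Match a bare integer literal immediately followed by ".identifier(",
# but not when preceded by a word char, ')' or '.' (which would mean the
# digits are part of a larger expression such as x6.f() or 6.5.f()).
BARE_INT_CALL = re.compile(r'(?<![\w).])(\d+)\.([A-Za-z_$][\w$]*\()')

def guard_number_calls(minified_js):
    """Re-add parentheses around bare integer literals before method calls."""
    return BARE_INT_CALL.sub(r'(\1).\2', minified_js)

print(guard_number_calls('6.toString()'))   # → (6).toString()
print(guard_number_calls('x.toString()'))   # unchanged: receiver is not a bare int
```

A regex pass like this is only a heuristic; a real minifier would make the decision in its parser, where it knows the token is a numeric literal.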
20,634 | 10,861,467,024 | IssuesEvent | 2019-11-14 11:08:49 | cuba-platform/fts | https://api.github.com/repos/cuba-platform/fts | closed | Limit the length of the hit information | state: fixed type: performance ver: 7.2.0 | The hit information is built in the `HitInfo.init()` method. The hit information is a string that displays in which field the search term was found and also there is a text surrounding the search term.
If, for example, a file's content is indexed, the search term may appear many times in it. And *each occurrence* will be reflected in the hit info string.
As a result the hit info will be huge, and a lot of time will be spent going through the whole file and analyzing each word in it.

The solution may be to limit the number of search term occurrences in the field by, say 3 or event 1. If the number is exceeded then we should stop analyzing the field and switch to the next one.
The new application property name should be `fts.maxNumberOfSearchTermsInHitInfo`. The default value is `1` | True | Limit the length of the hit information - The hit information is built in the `HitInfo.init()` method. The hit information is a string that displays in which field the search term was found and also there is a text surrounding the search term.
If, for example, a file's content is indexed, the search term may appear many times in it. And *each occurrence* will be reflected in the hit info string.
As a result the hit info will be huge, and a lot of time will be spent going through the whole file and analyzing each word in it.

The solution may be to limit the number of search term occurrences in the field by, say 3 or event 1. If the number is exceeded then we should stop analyzing the field and switch to the next one.
The new application property name should be `fts.maxNumberOfSearchTermsInHitInfo`. The default value is `1` | non_defect | limit the length of the hit information the hit information is built in the hitinfo init method the hit information is a string that displays in which field the search term was found and also there is a text surrounding the search term if for example there is a file content indexed then the search term may appear a lot of times there and each occurrence will be reflected in the hit info string as a result the hit info will be huge and a lot of time will be spent to go through all the file and analyze each word in the file the solution may be to limit the number of search term occurrences in the field by say or event if the number is exceeded then we should stop analyzing the field and switch to the next one the new application property name should be fts maxnumberofsearchtermsinhitinfo the default value is | 0 |
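The optimization proposed in the record above — stop scanning a field once the search term has been found a bounded number of times — can be sketched as follows. The function and parameter names are assumptions for illustration, not the actual CUBA fts API:

```python
def build_hit_info(field_text, term, max_terms=1, context=10):
    """Collect up to max_terms snippets surrounding `term` in `field_text`.

    Scanning stops as soon as max_terms occurrences have been collected,
    so a huge field with thousands of matches is not traversed in full.
    """
    snippets = []
    lowered = field_text.lower()
    term = term.lower()
    start = 0
    while len(snippets) < max_terms:
        idx = lowered.find(term, start)
        if idx == -1:
            break  # no more occurrences
        lo = max(0, idx - context)
        hi = min(len(field_text), idx + len(term) + context)
        snippets.append(field_text[lo:hi])
        start = idx + len(term)  # resume after this occurrence
    return snippets

text = "error in file, error again, error once more"
print(build_hit_info(text, "error", max_terms=1))  # only the first snippet is collected
```

With `max_terms=1` (the proposed default), the loop exits after the first hit, which bounds the work per field regardless of the indexed content's size.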
560,935 | 16,606,636,152 | IssuesEvent | 2021-06-02 05:18:43 | gardners/surveysystem | https://api.github.com/repos/gardners/surveysystem | closed | backend: fcgi - code separation and cleanup | Priority: HIGH backend task | fcgimain code has become unwieldy - separate helper functions from page handlers
also, get rid of some superfluous free calls due to (my) misunderstanding of kcgi responses
related: #462 | 1.0 | backend: fcgi - code separation and cleanup - fcgimain code has become unwieldy - separate helper functions from page handlers
also, get rid of some superfluous free calls due to (my) misunderstanding of kcgi responses
related: #462 | non_defect | backend fcgi code separation and cleanup fcgimain code has become unwieldy separate helper functions from page handlers also get rid of some superfluous free calls due to my misunderstanding of kcgi responses related | 0 |
80,328 | 3,560,902,468 | IssuesEvent | 2016-01-23 12:02:17 | matan-sh/commitment-wall | https://api.github.com/repos/matan-sh/commitment-wall | closed | the "Commitment Wall" web page (shows the visitors pictures). | Done Priority: High | number of screens (Connected together to be big one screen) that shows the pictures of the last 100 visitors that registered. | 1.0 | the "Commitment Wall" web page (shows the visitors pictures). - number of screens (Connected together to be big one screen) that shows the pictures of the last 100 visitors that registered. | non_defect | the commitment wall web page shows the visitors pictures number of screens connected together to be big one screen that shows the pictures of the last visitors that registered | 0 |
41,436 | 10,459,783,250 | IssuesEvent | 2019-09-20 11:54:05 | ba-st/Sagan | https://api.github.com/repos/ba-st/Sagan | closed | More than one repository not working | Severity: Blocker Type: Defect | When using more than one repository, during #configureMappingsIn: the first repository sets up the session, causing the descriptor system to perform a validation (see method: GlorpSession>>system:).
Due to how Sagan's configurable descriptor system works, this causes only the first repository's tables, class descriptors and mappings to be created. The rest only exist as definitions which are never evaluated because of lazy variable initializations. | 1.0 | More than one repository not working - When using more than one repository, during #configureMappingsIn: the first repository sets up the session, causing the descriptor system to perform a validation (see method: GlorpSession>>system:).
Due to how Sagan's configurable descriptor system works, this causes only the first repository's tables, class descriptors and mappings to be created. The rest only exist as definitions which are never evaluated because of lazy variable initializations. | defect | more than one repository not working when using more than one repository during configuremappingsin the first repository setups the session causing the descriptor system to perform a validation see method glorpsession system due to how sagan s configurable descriptor system works this causes only the first repository s tables class descriptors and mappings to be created the rest only exist as definitions which are never evaluated because of lazy variable initializations | 1 |
209,946 | 7,181,424,216 | IssuesEvent | 2018-02-01 04:55:36 | wso2/message-broker | https://api.github.com/repos/wso2/message-broker | opened | Expose pending message count through Admin REST API | Complexity/Moderate Module/broker-core Priority/High Severity/Major Type/Improvement enhancement | **Description:**
<!-- Give a brief description of the issue -->
Add a parameter for queue meta-data to show the pending (un-acknowledged) message count for the queue.
| 1.0 | Expose pending message count through Admin REST API - **Description:**
<!-- Give a brief description of the issue -->
Add a parameter for queue meta-data to show the pending (un-acknowledged) message count for the queue.
| non_defect | expose pending message count through admin rest api description add a parameter for queue meta data to show the pending un acknowledged message count for the queue | 0 |
20,465 | 3,358,729,106 | IssuesEvent | 2015-11-19 10:58:22 | hazelcast/hazelcast | https://api.github.com/repos/hazelcast/hazelcast | closed | [TEST-FAILURE] ListenerLeakTestSmartRouting Timeout & Failures | Team: Client Type: Defect | ```
04:45:28 Running com.hazelcast.client.listeners.leak.ListenerLeakTestSmartRouting
06:23:44 Build timed out (after 180 minutes). Marking the build as aborted.
```
https://hazelcast-l337.ci.cloudbees.com/view/Hazelcast/job/Hazelcast-3.x-IbmJDK1.6/751/console
https://hazelcast-l337.ci.cloudbees.com/view/Hazelcast/job/Hazelcast-3.x-pr-builder/16875/console | 1.0 | [TEST-FAILURE] ListenerLeakTestSmartRouting Timeout & Failures - ```
04:45:28 Running com.hazelcast.client.listeners.leak.ListenerLeakTestSmartRouting
06:23:44 Build timed out (after 180 minutes). Marking the build as aborted.
```
https://hazelcast-l337.ci.cloudbees.com/view/Hazelcast/job/Hazelcast-3.x-IbmJDK1.6/751/console
https://hazelcast-l337.ci.cloudbees.com/view/Hazelcast/job/Hazelcast-3.x-pr-builder/16875/console | defect | listenerleaktestsmartrouting timeout failures running com hazelcast client listeners leak listenerleaktestsmartrouting build timed out after minutes marking the build as aborted | 1 |
17,847 | 10,817,513,948 | IssuesEvent | 2019-11-08 09:54:38 | goharbor/harbor | https://api.github.com/repos/goharbor/harbor | closed | [Scanner] Vul policy check still work on trivvy | area/API area/interrogation-service priority/high target/1.10.0 | 1. enable vul policy check with Critical
2. set trivvy to default
3. scan a image | 1.0 | [Scanner] Vul policy check still work on trivvy - 1. enable vul policy check with Critical
2. set trivvy to default
3. scan a image | non_defect | vul policy check still work on trivvy enable vul policy check with critical set trivvy to default scan a image | 0 |
9,088 | 2,615,128,415 | IssuesEvent | 2015-03-01 05:57:43 | chrsmith/google-api-java-client | https://api.github.com/repos/chrsmith/google-api-java-client | closed | Link to hello world program is broken | auto-migrated Milestone-Version1.11.0 Priority-Medium Type-Defect | ```
Describe the problem.
The link that points to the HelloAnalyticsApiSample.java is broken.
http://code.google.com/p/google-api-java-client/source/browse/analytics-cmdline-
sample/src/main/java/com/google/api/services/samples/analytics/cmdline/HelloAnal
yticsApi.java?repo=samples
How would you expect it to be fixed?
Replace it with the correct link.
http://code.google.com/p/google-api-java-client/source/browse/analytics-cmdline-
sample/src/main/java/com/google/api/services/samples/analytics/cmdline/HelloAnal
yticsApiSample.java?repo=samples
```
Original issue reported on code.google.com by `anand...@gmail.com` on 25 Jul 2012 at 6:18 | 1.0 | Link to hello world program is broken - ```
Describe the problem.
The link that points to the HelloAnalyticsApiSample.java is broken.
http://code.google.com/p/google-api-java-client/source/browse/analytics-cmdline-
sample/src/main/java/com/google/api/services/samples/analytics/cmdline/HelloAnal
yticsApi.java?repo=samples
How would you expect it to be fixed?
Replace it with the correct link.
http://code.google.com/p/google-api-java-client/source/browse/analytics-cmdline-
sample/src/main/java/com/google/api/services/samples/analytics/cmdline/HelloAnal
yticsApiSample.java?repo=samples
```
Original issue reported on code.google.com by `anand...@gmail.com` on 25 Jul 2012 at 6:18 | defect | link to hello world program is broken describe the problem the link that points to the helloanalyticsapisample java is broken sample src main java com google api services samples analytics cmdline helloanal yticsapi java repo samples how would you expect it to be fixed replace it with the correct link sample src main java com google api services samples analytics cmdline helloanal yticsapisample java repo samples original issue reported on code google com by anand gmail com on jul at | 1 |
10,170 | 2,618,940,047 | IssuesEvent | 2015-03-03 00:03:45 | marmarek/test | https://api.github.com/repos/marmarek/test | closed | USB flashdrive not mounted automatically by AppVMs | C: core P: major R: fixed T: defect | **Reported by joanna on 26 Jun 40300225 16:53 UTC**
When one attaches a USB disk to an AppVM (via the xm block-attach command issued in Dom0), the USB disk is not mounted by the Dolphin File Manager in the AppVM. The user must start a shell, switch to root, and then manually mount it. This is inconvenient and should be fixed. | 1.0 | USB flashdrive not mounted automatically by AppVMs - **Reported by joanna on 26 Jun 40300225 16:53 UTC**
When one attaches a USB disk to an AppVM (via the xm block-attach command issued in Dom0), the USB disk is not mounted by the Dolphin File Manager in the AppVM. The user must start a shell, switch to root, and then manually mount it. This is inconvenient and should be fixed. | defect | usb flashdrive not mounted automatically by appvms reported by joanna on jun utc when one attaches a usb disk to an appvm via xm block attach command issued in then the usb disk is not mounted by the dolphin file manager in the appvm user must start a shell switch to root and then manually mount it this is inconvenient and should be fixed | 1 |
60,293 | 17,023,389,510 | IssuesEvent | 2021-07-03 01:46:39 | tomhughes/trac-tickets | https://api.github.com/repos/tomhughes/trac-tickets | closed | Joining roads creates error when uploading to OSM | Component: merkaartor Priority: major Resolution: duplicate Type: defect | **[Submitted to the original trac issue database at 4.39am, Friday, 24th April 2009]**
Using v.0.13.1 and API 0.6.
When joining two roads, it always causes an error when uploading it to OSM. Error seems to relate to the removal of the "old" road that has been joined into the other one. Error is "Bad request" and/or "Conflict". | 1.0 | Joining roads creates error when uploading to OSM - **[Submitted to the original trac issue database at 4.39am, Friday, 24th April 2009]**
Using v.0.13.1 and API 0.6.
When joining two roads, it always causes an error when uploading it to OSM. Error seems to relate to the removal of the "old" road that has been joined into the other one. Error is "Bad request" and/or "Conflict". | defect | joining roads creates error when uploading to osm using v and api when joining two roads it always causes an error when uploading it to osm error seems to relate to the removal of the old road that has been joined into the other one error is bad request and or conflict | 1 |
50,002 | 13,187,305,192 | IssuesEvent | 2020-08-13 02:59:36 | icecube-trac/tix3 | https://api.github.com/repos/icecube-trac/tix3 | opened | DOMLauncher bug causing 25ns offset in SLC Launches (Trac #2427) | Incomplete Migration Migrated from Trac combo simulation defect | <details>
<summary><em>Migrated from <a href="https://code.icecube.wisc.edu/ticket/2427">https://code.icecube.wisc.edu/ticket/2427</a>, reported by mrongen and owned by mjansson</em></summary>
<p>
```json
{
"status": "assigned",
"changetime": "2020-06-24T12:32:17",
"description": "As reported by Jan Weldert (https://icecube-spno.slack.com/archives/CAJ193MB3/p1588754143032200) there is a ~25ns offset comparing the dT of MCPEs to HLC hits and SLC hits.\n\nThis was also checked against the LE datasets NuE_120150_000110 and NuE_120150_000111 and is likely to affect all simulation.\n\nChecking DOMLauncher this may be due to an off-by-one error in https://code.icecube.wisc.edu/projects/icecube/browser/IceCube/meta-projects/combo/trunk/DOMLauncher/private/DOMLauncher/I3InIceDOM.cxx#L135 but further investigation is needed.",
"reporter": "mrongen",
"cc": "",
"resolution": "",
"_ts": "1593001937450890",
"component": "combo simulation",
"summary": "DOMLauncher bug causing 25ns offset in SLC Launches",
"priority": "major",
"keywords": "DOMLauncher",
"time": "2020-05-08T08:57:59",
"milestone": "Autumnal Equinox 2020",
"owner": "mjansson",
"type": "defect"
}
```
</p>
</details>
| 1.0 | DOMLauncher bug causing 25ns offset in SLC Launches (Trac #2427) - <details>
<summary><em>Migrated from <a href="https://code.icecube.wisc.edu/ticket/2427">https://code.icecube.wisc.edu/ticket/2427</a>, reported by mrongen and owned by mjansson</em></summary>
<p>
```json
{
"status": "assigned",
"changetime": "2020-06-24T12:32:17",
"description": "As reported by Jan Weldert (https://icecube-spno.slack.com/archives/CAJ193MB3/p1588754143032200) there is a ~25ns offset comparing the dT of MCPEs to HLC hits and SLC hits.\n\nThis was also checked against the LE datasets NuE_120150_000110 and NuE_120150_000111 and is likely to affect all simulation.\n\nChecking DOMLauncher this may be due to an off-by-one error in https://code.icecube.wisc.edu/projects/icecube/browser/IceCube/meta-projects/combo/trunk/DOMLauncher/private/DOMLauncher/I3InIceDOM.cxx#L135 but further investigation is needed.",
"reporter": "mrongen",
"cc": "",
"resolution": "",
"_ts": "1593001937450890",
"component": "combo simulation",
"summary": "DOMLauncher bug causing 25ns offset in SLC Launches",
"priority": "major",
"keywords": "DOMLauncher",
"time": "2020-05-08T08:57:59",
"milestone": "Autumnal Equinox 2020",
"owner": "mjansson",
"type": "defect"
}
```
</p>
</details>
| defect | domlauncher bug causing offset in slc launches trac migrated from json status assigned changetime description as reported by jan weldert there is a offset comparing the dt of mcpes to hlc hits and slc hits n nthis was also checked against the le datasets nue and nue and is likely to affect all simulation n nchecking domlauncher this may be due to an off by one error in but further investigation is needed reporter mrongen cc resolution ts component combo simulation summary domlauncher bug causing offset in slc launches priority major keywords domlauncher time milestone autumnal equinox owner mjansson type defect | 1 |
56,900 | 15,437,940,371 | IssuesEvent | 2021-03-07 18:32:29 | martinrotter/rssguard | https://api.github.com/repos/martinrotter/rssguard | reopened | [BUG]: Windows version does not work due to missing Visual C++ runtime dependencies | Status-Invalid Type-Defect Type-Deployment | Following errors prevent it from opening:
The code execution cannot proceed because VCRUNTIME140_1.dll was not found. Reinstalling the program may fix this problem.
The code execution cannot proceed because MSVCP140_1.dll was not found. Reinstalling the program may fix this problem. | 1.0 | [BUG]: Windows version does not work due to missing Visual C++ runtime dependencies - Following errors prevent it from opening:
The code execution cannot proceed because VCRUNTIME140_1.dll was not found. Reinstalling the program may fix this problem.
The code execution cannot proceed because MSVCP140_1.dll was not found. Reinstalling the program may fix this problem. | defect | windows version does not work due to missing visual c runtime dependencies following errors prevent it from opening the code execution cannot proceed because dll was not found reinstalling the program may fix this problem the code execution cannot proceed because dll was not found reinstalling the program may fix this problem | 1 |
16,268 | 11,889,658,711 | IssuesEvent | 2020-03-28 14:52:31 | lorenzwalthert/precommit | https://api.github.com/repos/lorenzwalthert/precommit | closed | test when pre-commit was installed with a method other than coda | Complexity: Medium Priority: High Status: Unassigned Type: Infrastructure | Just one more travis build and install with pip or similar. One with installation on `$PATH`, one without. | 1.0 | test when pre-commit was installed with a method other than coda - Just one more travis build and install with pip or similar. One with installation on `$PATH`, one without. | non_defect | test when pre commit was installed with a method other than coda just one more travis build and install with pip or similar one with installation on path one without | 0 |
284,344 | 24,592,840,485 | IssuesEvent | 2022-10-14 05:04:50 | Do-you-wanna-study/do-you-wanna-study-backend | https://api.github.com/repos/Do-you-wanna-study/do-you-wanna-study-backend | opened | Auth 테스트 | TEST | ## 🚅 Issue 한 줄 요약
<!-- 구현할 기능에 대한 내용을 짧게 설명해주세요. -->
Auth 관련 테스트 진행
## 🤷 Issue 세부 내용
<!-- 해야 할 일들을 적어주세요. -->
- [ ] 로그인 성공
- [ ] 로그인 실패 - 잘못된 아이디
- [ ] 로그인 실패 - 잘못된 비밀번호
- [ ] 로그인 실패 - 비어있는 아이디
- [ ] 로그인 실패 - 비어있는 비밀번호
- [ ] 로그아웃 성공
- [ ] 로그아웃 실패 - 잘못된 토큰
- [ ] 회원가입 성공
- [ ] 회원가입 실패 - 잘못된 이메일 패턴
- [ ] 회원가입 실패 - 너무 짧은 비밀번호
- [ ] 회원가입 실패 - 너무 긴 비밀번호
- [ ] 회원가입 실패 - 비어있는 칸 존재
- [ ] 회원가입 실패 - 이미 회원가입 되어있는 아이디
- [ ] 회원가입 실패 - 이미 존재하는 닉네임
| 1.0 | Auth 테스트 - ## 🚅 Issue 한 줄 요약
<!-- 구현할 기능에 대한 내용을 짧게 설명해주세요. -->
Auth 관련 테스트 진행
## 🤷 Issue 세부 내용
<!-- 해야 할 일들을 적어주세요. -->
- [ ] 로그인 성공
- [ ] 로그인 실패 - 잘못된 아이디
- [ ] 로그인 실패 - 잘못된 비밀번호
- [ ] 로그인 실패 - 비어있는 아이디
- [ ] 로그인 실패 - 비어있는 비밀번호
- [ ] 로그아웃 성공
- [ ] 로그아웃 실패 - 잘못된 토큰
- [ ] 회원가입 성공
- [ ] 회원가입 실패 - 잘못된 이메일 패턴
- [ ] 회원가입 실패 - 너무 짧은 비밀번호
- [ ] 회원가입 실패 - 너무 긴 비밀번호
- [ ] 회원가입 실패 - 비어있는 칸 존재
- [ ] 회원가입 실패 - 이미 회원가입 되어있는 아이디
- [ ] 회원가입 실패 - 이미 존재하는 닉네임
| non_defect | auth 테스트 🚅 issue 한 줄 요약 auth 관련 테스트 진행 🤷 issue 세부 내용 로그인 성공 로그인 실패 잘못된 아이디 로그인 실패 잘못된 비밀번호 로그인 실패 비어있는 아이디 로그인 실패 비어있는 비밀번호 로그아웃 성공 로그아웃 실패 잘못된 토큰 회원가입 성공 회원가입 실패 잘못된 이메일 패턴 회원가입 실패 너무 짧은 비밀번호 회원가입 실패 너무 긴 비밀번호 회원가입 실패 비어있는 칸 존재 회원가입 실패 이미 회원가입 되어있는 아이디 회원가입 실패 이미 존재하는 닉네임 | 0 |
129,636 | 10,581,180,988 | IssuesEvent | 2019-10-08 08:39:42 | kyma-project/kyma | https://api.github.com/repos/kyma-project/kyma | opened | Enable core-apiserver-proxy test | priority/critical test-failing test-missing | <!-- Thank you for your contribution. Before you submit the issue:
1. Search open and closed issues for duplicates.
2. Read the contributing guidelines.
-->
**Description**
As we agreed in https://github.com/kyma-project/community/issues/369 we are turning off flaky tests in [this](https://github.com/kyma-project/kyma/pull/5901) PR, and `core-apiserver-proxy` is one of them. This needs to be re-enabled
| 2.0 | Enable core-apiserver-proxy test - <!-- Thank you for your contribution. Before you submit the issue:
1. Search open and closed issues for duplicates.
2. Read the contributing guidelines.
-->
**Description**
As we agreed in https://github.com/kyma-project/community/issues/369 we are turning off flaky tests in [this](https://github.com/kyma-project/kyma/pull/5901) PR, and `core-apiserver-proxy` is one of them. This needs to be re-enabled
| non_defect | enable core apiserver proxy test thank you for your contribution before you submit the issue search open and closed issues for duplicates read the contributing guidelines description as we agreed in we are turning off flaky tests in pr and core apiserver proxy is one of them this needs to be re enabled | 0 |
545,321 | 15,948,111,678 | IssuesEvent | 2021-04-15 05:09:37 | wso2/product-apim | https://api.github.com/repos/wso2/product-apim | closed | Additional property rest api should return in a proper format | API-M 4.0.0 Priority/Normal Type/Improvement | $subject should be in [ { name : abc, value: 123, display: false}] format and breaking changes to UI should be fixed | 1.0 | Additional property rest api should return in a proper format - $subject should be in [ { name : abc, value: 123, display: false}] format and breaking changes to UI should be fixed | non_defect | additional property rest api should return in a proper format subject should be in format and breaking changes to ui should be fixed | 0 |
22,327 | 2,648,760,707 | IssuesEvent | 2015-03-14 07:07:15 | pyroscope/pyroscope | https://api.github.com/repos/pyroscope/pyroscope | closed | Announce URL mass editing | auto-migrated Component-WebUI Milestone-WebUI Priority-Low Type-Enhancement Type-Task | ```
Two entry fields for old and new URL. Auto-completion on the first field,
and require that the old URL is an exact match to an existing one.
```
Original issue reported on code.google.com by `pyroscope.project` on 27 Jun 2009 at 5:29 | 1.0 | Announce URL mass editing - ```
Two entry fields for old and new URL. Auto-completion on the first field,
and require that the old URL is an exact match to an existing one.
```
Original issue reported on code.google.com by `pyroscope.project` on 27 Jun 2009 at 5:29 | non_defect | announce url mass editing two entry fields for old and new url auto completion on the first field and require that the old url is an exact match to an existing one original issue reported on code google com by pyroscope project on jun at | 0 |
7,863 | 2,611,054,152 | IssuesEvent | 2015-02-27 00:24:58 | alistairreilly/andors-trail | https://api.github.com/repos/alistairreilly/andors-trail | closed | defeat Vacor over and over with rest | auto-migrated Milestone-0.6.7 Priority-Medium Type-Defect | ```
What steps will reproduce the problem?
1.Defeat vacor
2.rest in tavern
3.walk past vacor's trail area
What is the expected output? What do you see instead?
I expect he would stay dead, I was able to kill him twice and receive all his
3items twice
What version of the product are you using? On what operating system?
0.6.7b
Please provide any additional information below.
After you kill him a second time, and go to unzel with vacor's ring, he does
Not reward you a second time with gold/experience
Love this game!
```
Original issue reported on code.google.com by `JamesMR...@gmail.com` on 19 Dec 2010 at 7:07 | 1.0 | defeat Vacor over and over with rest - ```
What steps will reproduce the problem?
1.Defeat vacor
2.rest in tavern
3.walk past vacor's trail area
What is the expected output? What do you see instead?
I expect he would stay dead, I was able to kill him twice and receive all his
3items twice
What version of the product are you using? On what operating system?
0.6.7b
Please provide any additional information below.
After you kill him a second time, and go to unzel with vacor's ring, he does
Not reward you a second time with gold/experience
Love this game!
```
Original issue reported on code.google.com by `JamesMR...@gmail.com` on 19 Dec 2010 at 7:07 | defect | defeat vacor over and over with rest what steps will reproduce the problem defeat vacor rest in tavern walk past vacor s trail area what is the expected output what do you see instead i expect he would stay dead i was able to kill him twice and receive all his twice what version of the product are you using on what operating system please provide any additional information below after you kill him a second time and go to unzel with vacor s ring he does not reward you a second time with gold experience love this game original issue reported on code google com by jamesmr gmail com on dec at | 1 |
133,075 | 10,788,769,319 | IssuesEvent | 2019-11-05 10:26:43 | kubernetes/kubernetes | https://api.github.com/repos/kubernetes/kubernetes | opened | [Failing Test] apg-conformance-stable-k8s-master (ci-cluster-api-provider-gcp-make-conformance-stable-k8s-ci-artifacts) | kind/failing-test | **Which jobs are failing**:
```
apg-conformance-stable-k8s-master (ci-cluster-api-provider-gcp-make-conformance-stable-k8s-ci-artifacts)
```
**Since when has it been failing**:
`4th Nov 15:20 PST`
**Testgrid link**:
https://testgrid.k8s.io/sig-release-master-informing#capg-conformance-stable-k8s-master
**Reason for failure**:
```console
Activated service account credentials for: [pr-kubekins@kubernetes-jenkins-pull.iam.gserviceaccount.com]
+ ./scripts/ci-e2e.sh
/usr/local/bin/runner.sh: line 101: ./scripts/ci-e2e.sh: No such file or directory
```
**Anything else we need to know**:
/cc @alenkacz @hasheddan @alejandrox1
/milestone v1.17
/priority critical-urgent
| 1.0 | [Failing Test] apg-conformance-stable-k8s-master (ci-cluster-api-provider-gcp-make-conformance-stable-k8s-ci-artifacts) - **Which jobs are failing**:
```
apg-conformance-stable-k8s-master (ci-cluster-api-provider-gcp-make-conformance-stable-k8s-ci-artifacts)
```
**Since when has it been failing**:
`4th Nov 15:20 PST`
**Testgrid link**:
https://testgrid.k8s.io/sig-release-master-informing#capg-conformance-stable-k8s-master
**Reason for failure**:
```console
Activated service account credentials for: [pr-kubekins@kubernetes-jenkins-pull.iam.gserviceaccount.com]
+ ./scripts/ci-e2e.sh
/usr/local/bin/runner.sh: line 101: ./scripts/ci-e2e.sh: No such file or directory
```
**Anything else we need to know**:
/cc @alenkacz @hasheddan @alejandrox1
/milestone v1.17
/priority critical-urgent
| non_defect | apg conformance stable master ci cluster api provider gcp make conformance stable ci artifacts which jobs are failing apg conformance stable master ci cluster api provider gcp make conformance stable ci artifacts since when has it been failing nov pst testgrid link reason for failure console activated service account credentials for scripts ci sh usr local bin runner sh line scripts ci sh no such file or directory anything else we need to know cc alenkacz hasheddan milestone priority critical urgent | 0 |
65,765 | 19,682,939,991 | IssuesEvent | 2022-01-11 18:39:11 | vector-im/element-web | https://api.github.com/repos/vector-im/element-web | closed | Images never loading after waking up laptop | T-Defect X-Needs-Info X-Regression S-Minor A-Media | The last couple of days I've opened my laptop and there's been an image in the room element was open in that has arrived overnight, but it's just a blurhash and never loads. Switch rooms and back to the same room again makes it load. | 1.0 | Images never loading after waking up laptop - The last couple of days I've opened my laptop and there's been an image in the room element was open in that has arrived overnight, but it's just a blurhash and never loads. Switch rooms and back to the same room again makes it load. | defect | images never loading after waking up laptop the last couple of days i ve opened my laptop and there s been an image in the room element was open in that has arrived overnight but it s just a blurhash and never loads switch rooms and back to the same room again makes it load | 1 |
24,291 | 3,954,987,813 | IssuesEvent | 2016-04-29 19:05:00 | vim/vim | https://api.github.com/repos/vim/vim | closed | Add support for marshalling JSON | auto-migrated Priority-Medium Type-Defect | ```
Could vim have a built-in tojson({value}) and fromjson({json}) helpers to
serialize and deserialize JSON?
JSON is used in lots of vim plugins like
https://github.com/Valloric/YouCompleteMe,
https://github.com/google/vim-maktaba,
https://github.com/MarcWeber/vim-addon-manager, and eventually Vundle
(https://github.com/VundleVim/Vundle.vim/pull/560). These can either use slow
hacks or depend on python support, but it would be best if vim just had native,
performant support for JSON marshalling built in.
Expected behavior
:echo tojson({'a': [1, 'foo'], 'b': 2.1}) ==# '{"a": [1, "foo"], "b": 2.1}'
1
:echo fromjson("[1.0, {}, []]") ==# [1.0, {}, []]
1
:echo tojson(fromjson('[null, true, false]')) ==# '[null, true, false]'
1
Note in the last example there needs to be a way to represent null, true, and
false unambiguously even though vim doesn't have these primitives. Also,
fromjson() could use an option to translate into standard vim equivalents like
'', 1, and 0.
```
Original issue reported on code.google.com by `daviebd...@gmail.com` on 13 Jul 2015 at 11:30 | 1.0 | Add support for marshalling JSON - ```
Could vim have a built-in tojson({value}) and fromjson({json}) helpers to
serialize and deserialize JSON?
JSON is used in lots of vim plugins like
https://github.com/Valloric/YouCompleteMe,
https://github.com/google/vim-maktaba,
https://github.com/MarcWeber/vim-addon-manager, and eventually Vundle
(https://github.com/VundleVim/Vundle.vim/pull/560). These can either use slow
hacks or depend on python support, but it would be best if vim just had native,
performant support for JSON marshalling built in.
Expected behavior
:echo tojson({'a': [1, 'foo'], 'b': 2.1}) ==# '{"a": [1, "foo"], "b": 2.1}'
1
:echo fromjson("[1.0, {}, []]") ==# [1.0, {}, []]
1
:echo tojson(fromjson('[null, true, false]')) ==# '[null, true, false]'
1
Note in the last example there needs to be a way to represent null, true, and
false unambiguously even though vim doesn't have these primitives. Also,
fromjson() could use an option to translate into standard vim equivalents like
'', 1, and 0.
```
Original issue reported on code.google.com by `daviebd...@gmail.com` on 13 Jul 2015 at 11:30 | defect | add support for marshalling json could vim have a built in tojson value and fromjson json helpers to serialize and deserialize json json is used in lots of vim plugins like and eventually vundle these can either use slow hacks or depend on python support but it would be best if vim just had native performant support for json marshalling built in expected behavior echo tojson a b a b echo fromjson echo tojson fromjson note in the last example there needs to be a way to represent null true and false unambiguously even though vim doesn t have these primitives also fromjson could use an option to translate into standard vim equivalents like and original issue reported on code google com by daviebd gmail com on jul at | 1 |
45,098 | 7,158,340,722 | IssuesEvent | 2018-01-27 00:03:06 | machinabio/crucible | https://api.github.com/repos/machinabio/crucible | closed | Document specifications for UV light source | crucible documentation hardware integration | Document the specifications for the UV light source for use with Crucible.
| 1.0 | Document specifications for UV light source - Document the specifications for the UV light source for use with Crucible.
| non_defect | document specifications for uv light source document the specifications for the uv light source for use with crucible | 0 |
36,455 | 7,936,777,345 | IssuesEvent | 2018-07-09 10:32:15 | hazelcast/hazelcast | https://api.github.com/repos/hazelcast/hazelcast | reopened | With ReplicatedMap there's still a Memory leak in SecondsBasedEntryTaskScheduler | Interface: ReplicatedMap Team: Core Type: Defect | Like this old post: https://github.com/hazelcast/hazelcast/issues/2343, i am facing the same problem. When i use a ReplicatedMap with the 3.9.3 version (I tried the 3.10.2 it's the same), i set a TTL when i put an entry into my replicated map. But when i clear the map, the heap still contains a lot of instances of SecondsBasedEntryTaskScheduler (~3 Go). Any tips to make it work with ReplicatedMap ? Thanks in advance. | 1.0 | With ReplicatedMap there's still a Memory leak in SecondsBasedEntryTaskScheduler - Like this old post: https://github.com/hazelcast/hazelcast/issues/2343, i am facing the same problem. When i use a ReplicatedMap with the 3.9.3 version (I tried the 3.10.2 it's the same), i set a TTL when i put an entry into my replicated map. But when i clear the map, the heap still contains a lot of instances of SecondsBasedEntryTaskScheduler (~3 Go). Any tips to make it work with ReplicatedMap ? Thanks in advance. | defect | with replicatedmap there s still a memory leak in secondsbasedentrytaskscheduler like this old post i am facing the same problem when i use a replicatedmap with the version i tried the it s the same i set a ttl when i put an entry into my replicated map but when i clear the map the heap still contains a lot of instances of secondsbasedentrytaskscheduler go any tips to make it work with replicatedmap thanks in advance | 1 |
96,378 | 8,609,012,732 | IssuesEvent | 2018-11-18 17:29:18 | wearerequired/traduttore | https://api.github.com/repos/wearerequired/traduttore | opened | Add GlotPress requirement check | [Component] CLI [Component] Tests [Type] Enhancement | **Issue Overview**
Right now the plugin just assumes that GlotPress is installed, without verifying whether it's actually the case.
Function calls and constants like `DATE_MYSQL` and `GP_VERSION` are not available without GlotPress.
For example, running `wp traduttore info` without GlotPress being active errors because of that.
**Expected behavior**
Limited functionality and/or warnings when GlotPress is not active. | 1.0 | Add GlotPress requirement check - **Issue Overview**
Right now the plugin just assumes that GlotPress is installed, without verifying whether it's actually the case.
Function calls and constants like `DATE_MYSQL` and `GP_VERSION` are not available without GlotPress.
For example, running `wp traduttore info` without GlotPress being active errors because of that.
**Expected behavior**
Limited functionality and/or warnings when GlotPress is not active. | non_defect | add glotpress requirement check issue overview right now the plugin just assumes that glotpress is installed without verifying whether it s actually the case function calls and constants like date mysql and gp version are not available without glotpress for example running wp traduttore info without glotpress being active errors because of that expected behavior limited functionality and or warnings when glotpress is not active | 0 |
849 | 16,085,828,317 | IssuesEvent | 2021-04-26 11:04:17 | microsoft/fluentui | https://api.github.com/repos/microsoft/fluentui | closed | Adjustable width of ListPeoplePicker suggestions | Component: PeoplePicker Fluent UI react Needs: Backlog review Resolution: Soft Close Type: Feature | Is there a property that we can set to ensure the width of the suggestions box is same as the input box width? Combo Box seems to have a property useComboBoxAsMenuWidth but not ListPeoplePicker.

| 1.0 | Adjustable width of ListPeoplePicker suggestions - Is there a property that we can set to ensure the width of the suggestions box is same as the input box width? Combo Box seems to have a property useComboBoxAsMenuWidth but not ListPeoplePicker.

| non_defect | adjustable width of listpeoplepicker suggestions is there a property that we can set to ensure the width of the suggestions box is same as the input box width combo box seems to have a property usecomboboxasmenuwidth but not listpeoplepicker | 0 |
97,690 | 28,441,909,006 | IssuesEvent | 2023-04-16 01:49:06 | blueprint-freespeech/gosling | https://api.github.com/repos/blueprint-freespeech/gosling | opened | Publish native shared+static libraries on Github | build sponsor 2 packaging | Once we're feature complete we should tag+publish releases on Github | 1.0 | Publish native shared+static libraries on Github - Once we're feature complete we should tag+publish releases on Github | non_defect | publish native shared static libraries on github once we re feature complete we should tag publish releases on github | 0 |
70,293 | 23,107,489,023 | IssuesEvent | 2022-07-27 10:03:27 | vector-im/element-web | https://api.github.com/repos/vector-im/element-web | closed | Replying to a message with /fireworks /snowball or /spaceinvaders fails to include the original message | T-Defect S-Minor A-Replies A-Effects O-Uncommon | ### Steps to reproduce
Click to reply on a message.
Type: `/fireworks congrats!`

### Outcome
#### What did you expect?
I expect to see the original message and the reply.
#### What happened instead?
Instead, the message is sent without the original message and the reply prompt remains open waiting for another reply:

### Operating system
Windows
### Browser information
Version 103.0.5060.114 (Official Build) (64-bit)
### URL for webapp
app.element.io
### Application version
Element version: 1.11.0 Olm version: 3.2.8
### Homeserver
_No response_
### Will you send logs?
No | 1.0 | Replying to a message with /fireworks /snowball or /spaceinvaders fails to include the original message - ### Steps to reproduce
Click to reply on a message.
Type: `/fireworks congrats!`

### Outcome
#### What did you expect?
I expect to see the original message and the reply.
#### What happened instead?
Instead, the message is sent without the original message and the reply prompt remains open waiting for another reply:

### Operating system
Windows
### Browser information
Version 103.0.5060.114 (Official Build) (64-bit)
### URL for webapp
app.element.io
### Application version
Element version: 1.11.0 Olm version: 3.2.8
### Homeserver
_No response_
### Will you send logs?
No | defect | replying to a message with fireworks snowball or spaceinvaders fails to include the original message steps to reproduce click to reply on a message type fireworks congrats outcome what did you expect i expect to see the original message and the reply what happened instead instead the message is sent without the original message and the reply prompt remains open waiting for another reply operating system windows browser information version official build bit url for webapp app element io application version element version olm version homeserver no response will you send logs no | 1 |
297,486 | 22,359,543,598 | IssuesEvent | 2022-06-15 18:59:55 | golang/go | https://api.github.com/repos/golang/go | closed | doc: cmd/go: document use with modules "from the ground up" | Documentation help wanted modules | <!-- Please answer these questions before submitting your issue. Thanks! -->
### What version of Go are you using (`go version`)?
<pre>
$ go version
go version go1.11.3 darwin/amd64
</pre>
### Does this issue reproduce with the latest release?
NA
### What operating system and processor architecture are you using (`go env`)?
<details><summary><code>go env</code> Output</summary><br><pre>
$ go env
NA
</pre></details>
### What did you do?
<!--
If possible, provide a recipe for reproducing the error.
A complete runnable program is good.
A link on play.golang.org is best.
-->
I am a go newbie trying to set up several folders with go code in a monorepo (which also contains code built in other languages). Some of these folders will contain services, others library utilities.
### What did you expect to see?
I would like documentation based on the new "module" paradigm that explain:
* module
* package
* file
* folder
and their relationship to each other "from the ground up" (without assuming knowledge of how things used to work). In particular, in a monorepo,
* should there be an overall "go.mod", or one in each folder (or both??)
* if there is only one "go.mod" how do I specify different dependencies for each folder? If not,
is there a way to specify common dependencies to force version synchronization (or if there isn't
doc should flag as gotcha).
* how do I refer to code in one folder from another?
### What did you see instead?
The tutorials don't cover modules. The official doc for modules are geared to explain what is different for experienced go users. There seems to be no good documentation anywhere on
use of modules in monorepos.
| 1.0 | doc: cmd/go: document use with modules "from the ground up" - <!-- Please answer these questions before submitting your issue. Thanks! -->
### What version of Go are you using (`go version`)?
<pre>
$ go version
go version go1.11.3 darwin/amd64
</pre>
### Does this issue reproduce with the latest release?
NA
### What operating system and processor architecture are you using (`go env`)?
<details><summary><code>go env</code> Output</summary><br><pre>
$ go env
NA
</pre></details>
### What did you do?
<!--
If possible, provide a recipe for reproducing the error.
A complete runnable program is good.
A link on play.golang.org is best.
-->
I am a go newbie trying to set up several folders with go code in a monorepo (which also contains code built in other languages). Some of these folders will contain services, others library utilities.
### What did you expect to see?
I would like documentation based on the new "module" paradigm that explain:
* module
* package
* file
* folder
and their relationship to each other "from the ground up" (without assuming knowledge of how things used to work). In particular, in a monorepo,
* should there be an overall "go.mod", or one in each folder (or both??)
* if there is only one "go.mod" how do I specify different dependencies for each folder? If not,
is there a way to specify common dependencies to force version synchronization (or if there isn't
doc should flag as gotcha).
* how do I refer to code in one folder from another?
### What did you see instead?
The tutorials don't cover modules. The official doc for modules are geared to explain what is different for experienced go users. There seems to be no good documentation anywhere on
use of modules in monorepos.
| non_defect | doc cmd go document use with modules from the ground up what version of go are you using go version go version go version darwin does this issue reproduce with the latest release na what operating system and processor architecture are you using go env go env output go env na what did you do if possible provide a recipe for reproducing the error a complete runnable program is good a link on play golang org is best i am a go newbie trying to set up several folders with go code in a monorepo which also contains code built in other languages some of these folders will contain services others library utilities what did you expect to see i would like documentation based on the new module paradigm that explain module package file folder and their relationship to each other from the ground up without assuming knowledge of how things used to work in particular in a monorepo should there be an overall go mod or one in each folder or both if there is only one go mod how do i specify different dependencies for each folder if not is there a way to specify common dependencies to force version synchronization or if there isn t doc should flag as gotcha how do i refer to code in one folder from another what did you see instead the tutorials don t cover modules the official doc for modules are geared to explain what is different for experienced go users there seems to be no good documentation anywhere on use of modules in monorepos | 0 |
8,475 | 3,754,498,680 | IssuesEvent | 2016-03-12 01:57:47 | HeavensGate/Eternal | https://api.github.com/repos/HeavensGate/Eternal | closed | Roundstart with no players ready | bug code help wanted Woothie | Just what it says on the box. The round starts based on number of players online (not ready, just online), while we want it to be based of number of players ready. This to circumvent empty rounds with no admins or players, new players join to an empty, powerless station. Also allow admins to override this, but start-now already does this persumably (but check it anyways). On that note, redo number of players required, assume a couple more players will latejoin as the round goes. Codenote: Each gamemode .dm handles required players, go check them. For numbers, ask Core, wait for Jakk/Dav/Static response or start discussion. | 1.0 | Roundstart with no players ready - Just what it says on the box. The round starts based on number of players online (not ready, just online), while we want it to be based of number of players ready. This to circumvent empty rounds with no admins or players, new players join to an empty, powerless station. Also allow admins to override this, but start-now already does this persumably (but check it anyways). On that note, redo number of players required, assume a couple more players will latejoin as the round goes. Codenote: Each gamemode .dm handles required players, go check them. For numbers, ask Core, wait for Jakk/Dav/Static response or start discussion. 
| non_defect | roundstart with no players ready just what it says on the box the round starts based on number of players online not ready just online while we want it to be based of number of players ready this to circumvent empty rounds with no admins or players new players join to an empty powerless station also allow admins to override this but start now already does this persumably but check it anyways on that note redo number of players required assume a couple more players will latejoin as the round goes codenote each gamemode dm handles required players go check them for numbers ask core wait for jakk dav static response or start discussion | 0 |
312,080 | 23,415,019,943 | IssuesEvent | 2022-08-12 22:52:32 | dogecoinfoundation/dogecoin.com | https://api.github.com/repos/dogecoinfoundation/dogecoin.com | closed | Wallets Page: Including Open/Closed Source | documentation help wanted | I think it is important to include in the Wallets page also whether a wallet is closed or open source, exactly as they do on the Bitcoin's website.
I know MyDoge had plans to open up their source code a while ago, but since their app now includes a lot of additional functionality unrelated to basic Dogecoin wallet functionality, this might be unlikely.
I think this is important.
| 1.0 | Wallets Page: Including Open/Closed Source - I think it is important to include in the Wallets page also whether a wallet is closed or open source, exactly as they do on the Bitcoin's website.
I know MyDoge had plans to open up their source code a while ago, but since their app now includes a lot of additional functionality unrelated to basic Dogecoin wallet functionality, this might be unlikely.
I think this is important.
| non_defect | wallets page including open closed source i think it is important to include in the wallets page also whether a wallet is closed or open source exactly as they do on the bitcoin s website i know mydoge had plans to open up their source code a while ago but since their app now includes a lot of additional functionality unrelated to basic dogecoin wallet functionality this might be unlikely i think this is important | 0 |
18,633 | 3,077,761,588 | IssuesEvent | 2015-08-21 04:00:47 | netty/netty | https://api.github.com/repos/netty/netty | closed | OSGi manifests in javadocs/sources jars | defect | This problem affected many people more than year ago, but it's still here: https://stackoverflow.com/questions/23149966/classnotfoundexception-for-a-type-that-is-available-to-the-osgi-runtime-io-net
You are including OSGi manifests in sources/javadocs jars, so osgi container treats them as correct dependencies when resolving from OBR repository. So runtime fails with non-descriptive ClassNotFoundException.
Fix it ASAP please. | 1.0 | OSGi manifests in javadocs/sources jars - This problem affected many people more than year ago, but it's still here: https://stackoverflow.com/questions/23149966/classnotfoundexception-for-a-type-that-is-available-to-the-osgi-runtime-io-net
You are including OSGi manifests in sources/javadocs jars, so osgi container treats them as correct dependencies when resolving from OBR repository. So runtime fails with non-descriptive ClassNotFoundException.
Fix it ASAP please. | defect | osgi manifests in javadocs sources jars this problem affected many people more than year ago but it s still here you are including osgi manifests in sources javadocs jars so osgi container treats them as correct dependencies when resolving from obr repository so runtime fails with non descriptive classnotfoundexception fix it asap please | 1 |
16,576 | 2,919,077,027 | IssuesEvent | 2015-06-24 12:20:55 | akvo/akvo-flow | https://api.github.com/repos/akvo/akvo-flow | opened | Restrict 'Use as data point name' only for registration forms in monitored surveys | 1 - Defect | Currently the user can create a data point name based on a property in any form. However, when monitoring is enabled, with the registration form you fill in the general information that identifies the data point for later monitoring. So, the data point name should also be created from this report only.
| 1.0 | Restrict 'Use as data point name' only for registration forms in monitored surveys - Currently the user can create a data point name based on a property in any form. However, when monitoring is enabled, with the registration form you fill in the general information that identifies the data point for later monitoring. So, the data point name should also be created from this report only.
| defect | restrict use as data point name only for registration forms in monitored surveys currently the user can create a data point name based on a property in any form however when monitoring is enabled with the registration form you fill in the general information that identifies the data point for later monitoring so the data point name should also be created from this report only | 1 |
48,583 | 13,152,196,975 | IssuesEvent | 2020-08-09 20:50:52 | OpenMS/OpenMS | https://api.github.com/repos/OpenMS/OpenMS | closed | Private Thirdparty tests | (Unit) Tests TOPP defect minor wontfix | I added Msfragger and novor to the nightlies but they fail. On win the allocated jvm memory is probably too high for both. For Mac/Linux there are some differences in the results like missing search engine parameters. Not sure what XTandem is doing there. | 1.0 | Private Thirdparty tests - I added Msfragger and novor to the nightlies but they fail. On win the allocated jvm memory is probably too high for both. For Mac/Linux there are some differences in the results like missing search engine parameters. Not sure what XTandem is doing there. | defect | private thirdparty tests i added msfragger and novor to the nightlies but they fail on win the allocated jvm memory is probably too high for both for mac linux there are some differences in the results like missing search engine parameters not sure what xtandem is doing there | 1 |
25,548 | 4,381,829,000 | IssuesEvent | 2016-08-06 13:56:55 | rafael2k/darkice | https://api.github.com/repos/rafael2k/darkice | closed | stream audio without a sound card | auto-migrated Priority-Medium Type-Defect | ```
What steps will reproduce the problem?
1.
2.
3.
What is the expected output? What do you see instead?
trying to use audio that is coming into a Raspberry PI2 running Fedora21 via a
VOIP application. There really isn't any connected sound card. The application
I am using provide for input and out put using a Logitech headset, But I don't
want to stream the audio from the headset. The problem is that Darkice sees
only the headset as sound card. I just want to stream the audio coming in from
the internet.
Is it doable?
What version of the product are you using? On what operating system?
Darkice 1.0 on Fedora21
Please provide any additional information below.
```
Original issue reported on code.google.com by `FranMi...@gmail.com` on 21 Mar 2015 at 7:28 | 1.0 | stream audio without a sound card - ```
What steps will reproduce the problem?
1.
2.
3.
What is the expected output? What do you see instead?
trying to use audio that is coming into a Raspberry PI2 running Fedora21 via a
VOIP application. There really isn't any connected sound card. The application
I am using provide for input and out put using a Logitech headset, But I don't
want to stream the audio from the headset. The problem is that Darkice sees
only the headset as sound card. I just want to stream the audio coming in from
the internet.
Is it doable?
What version of the product are you using? On what operating system?
Darkice 1.0 on Fedora21
Please provide any additional information below.
```
Original issue reported on code.google.com by `FranMi...@gmail.com` on 21 Mar 2015 at 7:28 | defect | stream audio without a sound card what steps will reproduce the problem what is the expected output what do you see instead trying to use audio that is coming into a raspberry running via a voip application there really isn t any connected sound card the application i am using provide for input and out put using a logitech headset but i don t want to stream the audio from the headset the problem is that darkice sees only the headset as sound card i just want to stream the audio coming in from the internet is it doable what version of the product are you using on what operating system darkice on please provide any additional information below original issue reported on code google com by franmi gmail com on mar at | 1 |
113,484 | 9,648,272,438 | IssuesEvent | 2019-05-17 15:50:02 | microsoft/azure-pipelines-tasks | https://api.github.com/repos/microsoft/azure-pipelines-tasks | closed | PublishTestResults reports incorrect test titles for .trx produced by mocha-trx-reporter | Area: Test bug | **Bug**
**Task Name**: PublishTestResults@2
## Environment
- Server - https://devdiv.visualstudio.com
- Build - https://devdiv.visualstudio.com/DevDiv/_build/results?buildId=2554821
## Issue Description
We have VSCode unit tests that use mocha. The test runner reports the results with mocha-trx-reporter@3.2.0, saves the results to a .trx file, and then PublishTestResults@2 task publishes the results from the .trx file to the run.
All test names in the `Tests` tab of the run are wrong for all the tests. All tests have the same "<Embeddable /> binds Editor.code to state.code" name.
The trx file is fine though. I could open it in VS and see the test names correct. It doesn't contain any "<Embeddable /> binds Editor.code to state.code" string at all.
### Task logs
[log_25_2554821.zip](https://github.com/Microsoft/azure-pipelines-tasks/files/3040539/log_25_2554821.zip)
### Error logs
I didn't see any error in the build logs | 1.0 | PublishTestResults reports incorrect test titles for .trx produced by mocha-trx-reporter - **Bug**
**Task Name**: PublishTestResults@2
## Environment
- Server - https://devdiv.visualstudio.com
- Build - https://devdiv.visualstudio.com/DevDiv/_build/results?buildId=2554821
## Issue Description
We have VSCode unit tests that use mocha. The test runner reports the results with mocha-trx-reporter@3.2.0, saves the results to a .trx file, and then PublishTestResults@2 task publishes the results from the .trx file to the run.
All test names in the `Tests` tab of the run are wrong for all the tests. All tests have the same "<Embeddable /> binds Editor.code to state.code" name.
The trx file is fine though. I could open it in VS and see the test names correct. It doesn't contain any "<Embeddable /> binds Editor.code to state.code" string at all.
### Task logs
[log_25_2554821.zip](https://github.com/Microsoft/azure-pipelines-tasks/files/3040539/log_25_2554821.zip)
### Error logs
I didn't see any error in the build logs | non_defect | publishtestresults reports incorrect test titles for trx produced by mocha trx reporter bug task name publishtestresults environment server build issue description we have vscode unit tests that use mocha the test runner reports the results with mocha trx reporter saves the results to a trx file and then publishtestresults task publishes the results from the trx file to the run all test names in the tests tab of the run are wrong for all the tests all tests have the same binds editor code to state code name the trx file is fine though i could open it in vs and see the test names correct it doesn t contain any binds editor code to state code string at all task logs error logs i didn t see any error in the build logs | 0 |
15,109 | 2,849,128,763 | IssuesEvent | 2015-05-30 12:15:34 | sierkb/portsnotifier | https://api.github.com/repos/sierkb/portsnotifier | closed | Source tarballs are not available for downloading (ideally from SourceForge.net servers) | auto-migrated Priority-Medium Type-Defect | ```
I am writing a Portfile for this application and would prefer to be able to use
the source tarball
corresponding to the code in the .dmg at SourceForge.net.
```
Original issue reported on code.google.com by `randall....@gmail.com` on 14 Jul 2007 at 10:02 | 1.0 | Source tarballs are not available for downloading (ideally from SourceForge.net servers) - ```
I am writing a Portfile for this application and would prefer to be able to use
the source tarball
corresponding to the code in the .dmg at SourceForge.net.
```
Original issue reported on code.google.com by `randall....@gmail.com` on 14 Jul 2007 at 10:02 | defect | source tarballs are not available for downloading ideally from sourceforge net servers i am writing a portfile for this application and would prefer to be able to use the source tarball corresponding to the code in the dmg at sourceforge net original issue reported on code google com by randall gmail com on jul at | 1 |
404,143 | 27,451,299,472 | IssuesEvent | 2023-03-02 17:34:21 | linkml/linkml | https://api.github.com/repos/linkml/linkml | closed | vote: switch to material as default theme? | documentation help wanted Low severity | CCDH prefers material
https://cancerdhc.github.io/ccdhmodel/v1.0.1/
- 👍 if you think material should be the default (including for linkml model itself)
- 👎 to vote against, and stick with rtd | 1.0 | vote: switch to material as default theme? - CCDH prefers material
https://cancerdhc.github.io/ccdhmodel/v1.0.1/
- 👍 if you think material should be the default (including for linkml model itself)
- 👎 to vote against, and stick with rtd | non_defect | vote switch to material as default theme ccdh prefers material 👍 if you think material should be the default including for linkml model itself 👎 to vote against and stick with rtd | 0 |
68,157 | 21,524,013,809 | IssuesEvent | 2022-04-28 16:33:55 | vector-im/element-web | https://api.github.com/repos/vector-im/element-web | closed | Event text over multiple rows overflow the bubble | T-Defect S-Minor A-Message-Bubbles O-Uncommon | ### Steps to reproduce
1. Change the message layout to bubble message
2. Create a test room
2. Change your display name to a long one
3. Check the event text
Also:
1. Set a maximized widget
2. Open the chat panel
### Outcome
#### What did you expect?
The event text should not overflow the bubble and the chat panel.

#### What happened instead?
On the main panel, the event text overflows the bubble.

On the chat panel, the event text overflows the space for bubbles and even the right edge of the panel.

See: https://github.com/vector-im/element-web/issues/21774
### Operating system
Debian
### Browser information
Firefox
### URL for webapp
localhost
### Application version
develop branch
### Homeserver
_No response_
### Will you send logs?
No | 1.0 | Event text over multiple rows overflow the bubble - ### Steps to reproduce
1. Change the message layout to bubble message
2. Create a test room
2. Change your display name to a long one
3. Check the event text
Also:
1. Set a maximized widget
2. Open the chat panel
### Outcome
#### What did you expect?
The event text should not overflow the bubble and the chat panel.

#### What happened instead?
On the main panel, the event text overflows the bubble.

On the chat panel, the event text overflows the space for bubbles and even the right edge of the panel.

See: https://github.com/vector-im/element-web/issues/21774
### Operating system
Debian
### Browser information
Firefox
### URL for webapp
localhost
### Application version
develop branch
### Homeserver
_No response_
### Will you send logs?
No | defect | event text over multiple rows overflow the bubble steps to reproduce change the message layout to bubble message create a test room change your display name to a long one check the event text also set a maximized widget open the chat panel outcome what did you expect the event text should not overflow the bubble and the chat panel what happened instead on the main panel the event text overflows the bubble on the chat panel the event text overflows the space for bubbles and even the right edge of the panel see operating system debian browser information firefox url for webapp localhost application version develop branch homeserver no response will you send logs no | 1 |
7,989 | 2,611,071,377 | IssuesEvent | 2015-02-27 00:33:20 | alistairreilly/andors-trail | https://api.github.com/repos/alistairreilly/andors-trail | opened | minor aesthetic problem with health and experience bars | auto-migrated Type-Defect | ```
In empty health and experience bars of player and health bars of monsters the
"shadow" is not at full width. In the center there is an other "shadow" than at
the ends. See attachment.
What is the expected output? What do you see instead?
The "shadow" of the bars should be the same and not change within the bar.
What version of the product are you using? On what device?
Nexus 4 (Android 4.2.2)
```
Original issue reported on code.google.com by `haefn...@gmail.com` on 28 Feb 2013 at 4:20
Attachments:
* [2013-02-28 16.08.45.png](https://storage.googleapis.com/google-code-attachments/andors-trail/issue-339/comment-0/2013-02-28 16.08.45.png)
| 1.0 | minor aesthetic problem with health and experience bars - ```
In empty health and experience bars of player and health bars of monsters the
"shadow" is not at full width. In the center there is an other "shadow" than at
the ends. See attachment.
What is the expected output? What do you see instead?
The "shadow" of the bars should be the same and not change within the bar.
What version of the product are you using? On what device?
Nexus 4 (Android 4.2.2)
```
Original issue reported on code.google.com by `haefn...@gmail.com` on 28 Feb 2013 at 4:20
Attachments:
* [2013-02-28 16.08.45.png](https://storage.googleapis.com/google-code-attachments/andors-trail/issue-339/comment-0/2013-02-28 16.08.45.png)
| defect | minor aesthetic problem with health and experience bars in empty health and experience bars of player and health bars of monsters the shadow is not at full width in the center there is an other shadow than at the ends see attachment what is the expected output what do you see instead the shadow of the bars should be the same and not change within the bar what version of the product are you using on what device nexus android original issue reported on code google com by haefn gmail com on feb at attachments png | 1 |
230,849 | 18,719,434,250 | IssuesEvent | 2021-11-03 10:04:06 | elastic/kibana | https://api.github.com/repos/elastic/kibana | closed | Failing test: Chrome UI Functional Tests.test/functional/apps/visualize/_tsvb_time_series·ts - visualize app visualize ciGroup11 visual builder Time Series Elastic charts should display correct chart data, label names and area colors for min aggregation when split by filters | blocker Feature:TSVB Team:VisEditors failed-test v8.0.0 skipped-test | A test failed on a tracked branch
```
Error: expected [ '#54B399', 'rgba(114,207,194,1)' ] to sort of equal [ 'rgba(0,188,163,1)', 'rgba(114,207,194,1)' ]
at Assertion.assert (node_modules/@kbn/expect/expect.js:100:11)
at Assertion.eql (node_modules/@kbn/expect/expect.js:244:8)
at Context.<anonymous> (test/functional/apps/visualize/_tsvb_time_series.ts:305:33)
at runMicrotasks (<anonymous>)
at processTicksAndRejections (node:internal/process/task_queues:96:5)
at Object.apply (node_modules/@kbn/test/target_node/functional_test_runner/lib/mocha/wrap_function.js:87:16) {
actual: '[\n "#54B399"\n "rgba(114,207,194,1)"\n]',
expected: '[\n "rgba(0,188,163,1)"\n "rgba(114,207,194,1)"\n]',
showDiff: true
}
```
First failure: [CI Build - master](https://buildkite.com/elastic/kibana-hourly/builds/1610#f7df2b85-17f8-4289-8e61-6c96d52478d3)
<!-- kibanaCiData = {"failed-test":{"test.class":"Chrome UI Functional Tests.test/functional/apps/visualize/_tsvb_time_series·ts","test.name":"visualize app visualize ciGroup11 visual builder Time Series Elastic charts should display correct chart data, label names and area colors for min aggregation when split by filters","test.failCount":4}} --> | 2.0 | Failing test: Chrome UI Functional Tests.test/functional/apps/visualize/_tsvb_time_series·ts - visualize app visualize ciGroup11 visual builder Time Series Elastic charts should display correct chart data, label names and area colors for min aggregation when split by filters - A test failed on a tracked branch
```
Error: expected [ '#54B399', 'rgba(114,207,194,1)' ] to sort of equal [ 'rgba(0,188,163,1)', 'rgba(114,207,194,1)' ]
at Assertion.assert (node_modules/@kbn/expect/expect.js:100:11)
at Assertion.eql (node_modules/@kbn/expect/expect.js:244:8)
at Context.<anonymous> (test/functional/apps/visualize/_tsvb_time_series.ts:305:33)
at runMicrotasks (<anonymous>)
at processTicksAndRejections (node:internal/process/task_queues:96:5)
at Object.apply (node_modules/@kbn/test/target_node/functional_test_runner/lib/mocha/wrap_function.js:87:16) {
actual: '[\n "#54B399"\n "rgba(114,207,194,1)"\n]',
expected: '[\n "rgba(0,188,163,1)"\n "rgba(114,207,194,1)"\n]',
showDiff: true
}
```
First failure: [CI Build - master](https://buildkite.com/elastic/kibana-hourly/builds/1610#f7df2b85-17f8-4289-8e61-6c96d52478d3)
<!-- kibanaCiData = {"failed-test":{"test.class":"Chrome UI Functional Tests.test/functional/apps/visualize/_tsvb_time_series·ts","test.name":"visualize app visualize ciGroup11 visual builder Time Series Elastic charts should display correct chart data, label names and area colors for min aggregation when split by filters","test.failCount":4}} --> | non_defect | failing test chrome ui functional tests test functional apps visualize tsvb time series·ts visualize app visualize visual builder time series elastic charts should display correct chart data label names and area colors for min aggregation when split by filters a test failed on a tracked branch error expected to sort of equal at assertion assert node modules kbn expect expect js at assertion eql node modules kbn expect expect js at context test functional apps visualize tsvb time series ts at runmicrotasks at processticksandrejections node internal process task queues at object apply node modules kbn test target node functional test runner lib mocha wrap function js actual expected showdiff true first failure | 0 |
149,640 | 13,285,472,412 | IssuesEvent | 2020-08-24 08:10:46 | fga-eps-mds/2020-1-Grupo-4 | https://api.github.com/repos/fga-eps-mds/2020-1-Grupo-4 | opened | Gestão e planejamento de Sprint | documentation organization study | **<h2>Descrição</h2>**
Gestão e planejamento de Sprint.
**<h2>Tarefas</h2>**
- [ ] Estudar o framework Scrum;
- [ ] Ler o Guia do Scrum;
- [ ] Definir na próxima reunião o timebox das sprints;
- [ ] Definir horário da Daily;
- [ ] Planejar a próxima sprint;
- [ ] Difundir para o grupo, na próxima reunião, tudo o que foi aprendido de forma sintetizada;
**<h2>Critério de aceitação</h2>**
- [ ] Difundir para o grupo como será a gestão das sprints;
- [ ] Planejar a próxima sprint
| 1.0 | Gestão e planejamento de Sprint - **<h2>Descrição</h2>**
Gestão e planejamento de Sprint.
**<h2>Tarefas</h2>**
- [ ] Estudar o framework Scrum;
- [ ] Ler o Guia do Scrum;
- [ ] Definir na próxima reunião o timebox das sprints;
- [ ] Definir horário da Daily;
- [ ] Planejar a próxima sprint;
- [ ] Difundir para o grupo, na próxima reunião, tudo o que foi aprendido de forma sintetizada;
**<h2>Critério de aceitação</h2>**
- [ ] Difundir para o grupo como será a gestão das sprints;
- [ ] Planejar a próxima sprint
| non_defect | gestão e planejamento de sprint descrição gestão e planejamento de sprint tarefas estudar o framework scrum ler o guia do scrum definir na próxima reunião o timebox das sprints definir horário da daily planejar a próxima sprint difundir para o grupo na próxima reunião tudo o que foi aprendido de forma sintetizada critério de aceitação difundir para o grupo como será a gestão das sprints planejar a próxima sprint | 0 |
61,627 | 25,578,160,623 | IssuesEvent | 2022-12-01 00:42:15 | devssa/onde-codar-em-salvador | https://api.github.com/repos/devssa/onde-codar-em-salvador | closed | [Banco de Talentos] Pessoa Desenvolvedora iOS Júnior na [CUBOS] | SALVADOR MVC GIT IOS SWIFT WEBSERVICES COCOAPODS ALAMOFIRE MOYA Stale | <!--
==================================================
POR FAVOR, SÓ POSTE SE A VAGA FOR PARA SALVADOR E CIDADES VIZINHAS!
Use: "Desenvolvedor Front-end" ao invés de
"Front-End Developer" \o/
Exemplo: `[JAVASCRIPT] [MYSQL] [NODE.JS] Desenvolvedor Front-End na [NOME DA EMPRESA]`
==================================================
-->
## Pessoa Desenvolvedora iOS Júnior
- Como dev iOS você trabalhará desenvolvendo soluções tecnológicas que geram um impacto e que trazem significado para a vida das pessoas.
## Local
- Salvador
## Benefícios
Não descontamos benefícios do salário.
- Vale refeição ou alimentação (R$ 26,00 por dia);
- Plano de saúde;
- Plano odontológico;
- Vale transporte (também pago em um cartão de crédito);
- Happy hour mensal;
- Aulas semanais multidisciplinares.
## Requisitos
**Obrigatórios:**
- Conhecimentos em Swift;
- Noção de gerenciamento de dependência;
- Conhecimentos em gerenciamento de dependências com CocoaPods (saber adicionar um pod);
- Conhecimentos em webservices (Alamofire, Moya);
- Conhecimentos em controle de versão (Git).
**Desejáveis:**
- Conhecimentos nas guild lines de desenvolvimento e design da Apple;
- Conhecimentos no padrão MVC;
- Conhecimentos em gerenciamento de threads e programação assíncrona;
- Conhecimentos em metodologias de desenvolvimento ágil.
## Contratação
- a combinar
## CUBOS
- Aprender e compartilhar é o nosso valor mais intenso;
- Nos preocupamos verdadeiramente com as nossas entregas;
- Somos uma empresa focada no desenvolvimento das pessoas;
- Não nos limitamos pelo padrão e buscamos trabalhar com tecnologias novas;
- Queremos ver os cúbicos crescendo e evoluindo sempre.
## Como se candidatar
- Link: https://cubos.gupy.io/jobs/65489?jobBoardSource=gupy_public_page | 1.0 | [Banco de Talentos] Pessoa Desenvolvedora iOS Júnior na [CUBOS] - <!--
==================================================
POR FAVOR, SÓ POSTE SE A VAGA FOR PARA SALVADOR E CIDADES VIZINHAS!
Use: "Desenvolvedor Front-end" ao invés de
"Front-End Developer" \o/
Exemplo: `[JAVASCRIPT] [MYSQL] [NODE.JS] Desenvolvedor Front-End na [NOME DA EMPRESA]`
==================================================
-->
## Pessoa Desenvolvedora iOS Júnior
- Como dev iOS você trabalhará desenvolvendo soluções tecnológicas que geram um impacto e que trazem significado para a vida das pessoas.
## Local
- Salvador
## Benefícios
Não descontamos benefícios do salário.
- Vale refeição ou alimentação (R$ 26,00 por dia);
- Plano de saúde;
- Plano odontológico;
- Vale transporte (também pago em um cartão de crédito);
- Happy hour mensal;
- Aulas semanais multidisciplinares.
## Requisitos
**Obrigatórios:**
- Conhecimentos em Swift;
- Noção de gerenciamento de dependência;
- Conhecimentos em gerenciamento de dependências com CocoaPods (saber adicionar um pod);
- Conhecimentos em webservices (Alamofire, Moya);
- Conhecimentos em controle de versão (Git).
**Desejáveis:**
- Conhecimentos nas guild lines de desenvolvimento e design da Apple;
- Conhecimentos no padrão MVC;
- Conhecimentos em gerenciamento de threads e programação assíncrona;
- Conhecimentos em metodologias de desenvolvimento ágil.
## Contratação
- a combinar
## CUBOS
- Aprender e compartilhar é o nosso valor mais intenso;
- Nos preocupamos verdadeiramente com as nossas entregas;
- Somos uma empresa focada no desenvolvimento das pessoas;
- Não nos limitamos pelo padrão e buscamos trabalhar com tecnologias novas;
- Queremos ver os cúbicos crescendo e evoluindo sempre.
## Como se candidatar
- Link: https://cubos.gupy.io/jobs/65489?jobBoardSource=gupy_public_page | non_defect | pessoa desenvolvedora ios júnior na por favor só poste se a vaga for para salvador e cidades vizinhas use desenvolvedor front end ao invés de front end developer o exemplo desenvolvedor front end na pessoa desenvolvedora ios júnior como dev ios você trabalhará desenvolvendo soluções tecnológicas que geram um impacto e que trazem significado para a vida das pessoas local salvador benefícios não descontamos benefícios do salário vale refeição ou alimentação r por dia plano de saúde plano odontológico vale transporte também pago em um cartão de crédito happy hour mensal aulas semanais multidisciplinares requisitos obrigatórios conhecimentos em swift noção de gerenciamento de dependência conhecimentos em gerenciamento de dependências com cocoapods saber adicionar um pod conhecimentos em webservices alamofire moya conhecimentos em controle de versão git desejáveis conhecimentos nas guild lines de desenvolvimento e design da apple conhecimentos no padrão mvc conhecimentos em gerenciamento de threads e programação assíncrona conhecimentos em metodologias de desenvolvimento ágil contratação a combinar cubos aprender e compartilhar é o nosso valor mais intenso nos preocupamos verdadeiramente com as nossas entregas somos uma empresa focada no desenvolvimento das pessoas não nos limitamos pelo padrão e buscamos trabalhar com tecnologias novas queremos ver os cúbicos crescendo e evoluindo sempre como se candidatar link | 0 |
40,455 | 9,999,722,277 | IssuesEvent | 2019-07-12 11:29:22 | contao/contao | https://api.github.com/repos/contao/contao | closed | Cache is not cleared after TTL-time | defect | **Affected version(s)**
4.4.18 - 4.4.35
**Description**
It seems that some pages cached in http_cache (i have just examined files in folder 'en') are not deleted, even when they are older than the TTL-time that was given in website root.
By this the cache grows permanently. In huge installation with many sites to cache this can be problematic, as we have experienced. We ran into a disk space error caused by a 30GB+ cache.
**How to reproduce**
1. Enable serverside cache in website root.
2. Set server-cache time to i.e. 6 hours (this is what i have tested with).
3. Examin cache entries after cache time has elapsed.
Several entries are still in the cache although they have been created more than 6 hours ago. This has nothing to do with issue #231, as i first thought. You can also find entries that are not 404-pages, but older than the TTL.
| 1.0 | Cache is not cleared after TTL-time - **Affected version(s)**
4.4.18 - 4.4.35
**Description**
It seems that some pages cached in http_cache (i have just examined files in folder 'en') are not deleted, even when they are older than the TTL-time that was given in website root.
By this the cache grows permanently. In huge installation with many sites to cache this can be problematic, as we have experienced. We ran into a disk space error caused by a 30GB+ cache.
**How to reproduce**
1. Enable serverside cache in website root.
2. Set server-cache time to i.e. 6 hours (this is what i have tested with).
3. Examin cache entries after cache time has elapsed.
Several entries are still in the cache although they have been created more than 6 hours ago. This has nothing to do with issue #231, as i first thought. You can also find entries that are not 404-pages, but older than the TTL.
| defect | cache is not cleared after ttl time affected version s description it seems that some pages cached in http cache i have just examined files in folder en are not deleted even when they are older than the ttl time that was given in website root by this the cache grows permanently in huge installation with many sites to cache this can be problematic as we have experienced we ran into a disk space error caused by a cache how to reproduce enable serverside cache in website root set server cache time to i e hours this is what i have tested with examin cache entries after cache time has elapsed several entries are still in the cache although they have been created more than hours ago this has nothing to do with issue as i first thought you can also find entries that are not pages but older than the ttl | 1 |
53,882 | 13,262,417,124 | IssuesEvent | 2020-08-20 21:45:04 | icecube-trac/tix4 | https://api.github.com/repos/icecube-trac/tix4 | closed | [gcdserver] Import IC79+ data from I3OmDb (Trac #2234) | Migrated from Trac analysis defect | Mark the old data as archival and sort on that field to avoid needing to process these documents in operational queries. We'll draw straws on who gets first shot with the sledgehammer when we drag the server hosting I3OmDb out to the field.
<details>
<summary><em>Migrated from <a href="https://code.icecube.wisc.edu/projects/icecube/ticket/2234">https://code.icecube.wisc.edu/projects/icecube/ticket/2234</a>, reported by jbraun and owned by jbraun</em></summary>
<p>
```json
{
"status": "closed",
"changetime": "2019-09-18T07:51:29",
"_ts": "1568793089537035",
"description": "Mark the old data as archival and sort on that field to avoid needing to process these documents in operational queries. We'll draw straws on who gets first shot with the sledgehammer when we drag the server hosting I3OmDb out to the field.",
"reporter": "jbraun",
"cc": "",
"resolution": "insufficient resources",
"time": "2019-01-18T22:44:51",
"component": "analysis",
"summary": "[gcdserver] Import IC79+ data from I3OmDb",
"priority": "normal",
"keywords": "",
"milestone": "Long-Term Future",
"owner": "jbraun",
"type": "defect"
}
```
</p>
</details>
| 1.0 | [gcdserver] Import IC79+ data from I3OmDb (Trac #2234) - Mark the old data as archival and sort on that field to avoid needing to process these documents in operational queries. We'll draw straws on who gets first shot with the sledgehammer when we drag the server hosting I3OmDb out to the field.
<details>
<summary><em>Migrated from <a href="https://code.icecube.wisc.edu/projects/icecube/ticket/2234">https://code.icecube.wisc.edu/projects/icecube/ticket/2234</a>, reported by jbraun and owned by jbraun</em></summary>
<p>
```json
{
"status": "closed",
"changetime": "2019-09-18T07:51:29",
"_ts": "1568793089537035",
"description": "Mark the old data as archival and sort on that field to avoid needing to process these documents in operational queries. We'll draw straws on who gets first shot with the sledgehammer when we drag the server hosting I3OmDb out to the field.",
"reporter": "jbraun",
"cc": "",
"resolution": "insufficient resources",
"time": "2019-01-18T22:44:51",
"component": "analysis",
"summary": "[gcdserver] Import IC79+ data from I3OmDb",
"priority": "normal",
"keywords": "",
"milestone": "Long-Term Future",
"owner": "jbraun",
"type": "defect"
}
```
</p>
</details>
| defect | import data from trac mark the old data as archival and sort on that field to avoid needing to process these documents in operational queries we ll draw straws on who gets first shot with the sledgehammer when we drag the server hosting out to the field migrated from json status closed changetime ts description mark the old data as archival and sort on that field to avoid needing to process these documents in operational queries we ll draw straws on who gets first shot with the sledgehammer when we drag the server hosting out to the field reporter jbraun cc resolution insufficient resources time component analysis summary import data from priority normal keywords milestone long term future owner jbraun type defect | 1 |
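The archival-flag idea from this ticket reduces to filtering on one boolean field so operational queries never touch the old documents. The `archival` field name and the document shape below are assumptions for illustration; the ticket does not specify them.

```python
def operational_documents(documents):
    """Keep only records not flagged as archival.

    Documents without the (hypothetical) 'archival' field are treated as
    live, so marking the old imported data is the only migration step.
    """
    return [doc for doc in documents if not doc.get("archival", False)]
```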
193,424 | 6,884,893,894 | IssuesEvent | 2017-11-21 14:33:12 | zephyrproject-rtos/zephyr | https://api.github.com/repos/zephyrproject-rtos/zephyr | closed | sporadic bad RAM pointer error under qemu_nios2 | area: Kernel bug priority: low | **_Reported by Andrew Boie:_**
08:55:56 ---------------------sanity-out/qemu_nios2/tests/kernel/test_mutex/test/run.log---------------------
08:55:56 make[1]: Entering directory `/jenkins/workspace/zephyr-verify/tests/kernel/test_mutex'
08:55:56 make[2]: Entering directory `/jenkins/workspace/zephyr-verify'
08:55:56 make[3]: Entering directory `/jenkins/workspace/zephyr-verify/sanity-out/qemu_nios2/tests/kernel/test_mutex/test'
08:55:56 Using /jenkins/workspace/zephyr-verify as source for kernel
08:55:56 GEN ./Makefile
08:55:56 CHK include/generated/version.h
08:55:56 CHK misc/generated/configs.c
08:55:56 CHK include/generated/offsets.h
08:55:56 CHK misc/generated/sysgen/prj.mdef
08:55:56 [QEMU] CPU: nios2
08:55:56 QEMU 2.1.3 monitor - type 'help' for more information
08:55:56 (qemu) Bad ram pointer (nil)
08:55:56 make[3]: *** [qemu] Aborted (core dumped)
08:55:56 make[3]: Leaving directory `/jenkins/workspace/zephyr-verify/sanity-out/qemu_nios2/tests/kernel/test_mutex/test'
08:55:56 make[2]: *** [sub-make] Error 2
08:55:56 make[2]: Target `qemu' not remade because of errors.
08:55:56 make[2]: Leaving directory `/jenkins/workspace/zephyr-verify'
08:55:56 make[1]: *** [qemu] Error 2
08:55:56 make[1]: Leaving directory `/jenkins/workspace/zephyr-verify/tests/kernel/test_mutex'
09:02:53 ---------------------sanity-out/qemu_nios2/tests/kernel/test_mutex/test/run.log---------------------
This issue is only intermittently reproducible. It's not known whether this can affect any test or is specific to test_mutex. The crash appears to happen before any console output is displayed.
(Imported from Jira ZEP-678) | 1.0 | sporadic bad RAM pointer error under qemu_nios2 - **_Reported by Andrew Boie:_**
08:55:56 ---------------------sanity-out/qemu_nios2/tests/kernel/test_mutex/test/run.log---------------------
08:55:56 make[1]: Entering directory `/jenkins/workspace/zephyr-verify/tests/kernel/test_mutex'
08:55:56 make[2]: Entering directory `/jenkins/workspace/zephyr-verify'
08:55:56 make[3]: Entering directory `/jenkins/workspace/zephyr-verify/sanity-out/qemu_nios2/tests/kernel/test_mutex/test'
08:55:56 Using /jenkins/workspace/zephyr-verify as source for kernel
08:55:56 GEN ./Makefile
08:55:56 CHK include/generated/version.h
08:55:56 CHK misc/generated/configs.c
08:55:56 CHK include/generated/offsets.h
08:55:56 CHK misc/generated/sysgen/prj.mdef
08:55:56 [QEMU] CPU: nios2
08:55:56 QEMU 2.1.3 monitor - type 'help' for more information
08:55:56 (qemu) Bad ram pointer (nil)
08:55:56 make[3]: *** [qemu] Aborted (core dumped)
08:55:56 make[3]: Leaving directory `/jenkins/workspace/zephyr-verify/sanity-out/qemu_nios2/tests/kernel/test_mutex/test'
08:55:56 make[2]: *** [sub-make] Error 2
08:55:56 make[2]: Target `qemu' not remade because of errors.
08:55:56 make[2]: Leaving directory `/jenkins/workspace/zephyr-verify'
08:55:56 make[1]: *** [qemu] Error 2
08:55:56 make[1]: Leaving directory `/jenkins/workspace/zephyr-verify/tests/kernel/test_mutex'
09:02:53 ---------------------sanity-out/qemu_nios2/tests/kernel/test_mutex/test/run.log---------------------
This issue is only intermittently reproducible. It's not known whether this can affect any test or is specific to test_mutex. The crash appears to happen before any console output is displayed.
(Imported from Jira ZEP-678) | non_defect | sporadic bad ram pointer error under qemu reported by andrew boie sanity out qemu tests kernel test mutex test run log make entering directory jenkins workspace zephyr verify tests kernel test mutex make entering directory jenkins workspace zephyr verify make entering directory jenkins workspace zephyr verify sanity out qemu tests kernel test mutex test using jenkins workspace zephyr verify as source for kernel gen makefile chk include generated version h chk misc generated configs c chk include generated offsets h chk misc generated sysgen prj mdef cpu qemu monitor type help for more information qemu bad ram pointer nil make aborted core dumped make leaving directory jenkins workspace zephyr verify sanity out qemu tests kernel test mutex test make error make target qemu not remade because of errors make leaving directory jenkins workspace zephyr verify make error make leaving directory jenkins workspace zephyr verify tests kernel test mutex sanity out qemu tests kernel test mutex test run log this issue is only intermittently reproducible it s not known whether this can affect any test or is specific to test mutex the crash appears to happen before any console output is displayed imported from jira zep | 0 |
53,007 | 13,260,069,723 | IssuesEvent | 2020-08-20 17:38:35 | jkoan/test-navit | https://api.github.com/repos/jkoan/test-navit | closed | navit seg faults when all mapsets are disabled (Trac #48) | Incomplete Migration Migrated from Trac core defect/bug somebody | Migrated from http://trac.navit-project.org/ticket/48
```json
{
"status": "closed",
"changetime": "2008-01-29T16:54:27",
"_ts": "1201625667000000",
"description": "Navit gets segfaults when all mapsets are disabled and you click on the destination button in the menubar.[[BR]]\nThe function mapset_search_new() can't search on nothing.[[BR]]\nThe function navit_get_mapset() can't return nothing.[[BR]]\n[[BR]]\nI think navit should load (finds no maps), print an information dialog, and disable the destination button.",
"reporter": "reddog@mastersword.de",
"cc": "",
"resolution": "fixed",
"time": "2007-12-14T12:46:47",
"component": "core",
"summary": "navit seg faults when all mapsets are disabled",
"priority": "critical",
"keywords": "",
"version": "0.0.3",
"milestone": "version 0.0.4",
"owner": "somebody",
"type": "defect/bug",
"severity": ""
}
```
| 1.0 | navit seg faults when all mapsets are disabled (Trac #48) - Migrated from http://trac.navit-project.org/ticket/48
```json
{
"status": "closed",
"changetime": "2008-01-29T16:54:27",
"_ts": "1201625667000000",
"description": "Navit gets segfaults when all mapsets are disabled and you click on the destination button in the menubar.[[BR]]\nThe function mapset_search_new() can't search on nothing.[[BR]]\nThe function navit_get_mapset() can't return nothing.[[BR]]\n[[BR]]\nI think navit should load (finds no maps), print an information dialog, and disable the destination button.",
"reporter": "reddog@mastersword.de",
"cc": "",
"resolution": "fixed",
"time": "2007-12-14T12:46:47",
"component": "core",
"summary": "navit seg faults when all mapsets are disabled",
"priority": "critical",
"keywords": "",
"version": "0.0.3",
"milestone": "version 0.0.4",
"owner": "somebody",
"type": "defect/bug",
"severity": ""
}
```
| defect | navit seg faults when all mapsets are disabled trac migrated from json status closed changetime ts description navit gets segfaults when all mapsets are disabled and you click on the destination button in the menubar nthe function mapset search new can t search on nothing nthe function navit get mapset can t return nothing n ni think navit should load finds no maps print an information dialog and disable the destination button reporter reddog mastersword de cc resolution fixed time component core summary navit seg faults when all mapsets are disabled priority critical keywords version milestone version owner somebody type defect bug severity | 1 |
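The fix the reporter proposes — load with no maps, show an information dialog, and disable the destination button — amounts to a guard before any mapset search runs. This is a hypothetical sketch of that control flow; navit's real mapset_search_new()/navit_get_mapset() are C functions, and only the guard logic is modelled here.

```python
def start_destination_search(mapsets):
    """Guard the destination search against an empty mapset list.

    With no maps loaded, report the problem and disable the button
    instead of dereferencing a null mapset (the reported segfault).
    """
    if not mapsets:
        return {"enabled": False,
                "message": "No maps found; destination search disabled."}
    return {"enabled": True, "mapset": mapsets[0]}
```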
65,106 | 19,096,981,848 | IssuesEvent | 2021-11-29 17:42:44 | vector-im/element-web | https://api.github.com/repos/vector-im/element-web | opened | Important messages such as "This room is a continuation of another room" appear above new room intro text | T-Defect | ### Steps to reproduce

I got quite confused when I got an invite for a room that I was sure already existed in another form - yet I couldn't find that other room in my room list.
Turns out that this new room was the old room, upgraded. That's why I couldn't find the old room. I had missed the message about this room being a continuation of a previous room, I *believe* because it was far up in the timeline (above the new room copy).
### Outcome
#### What did you expect?
To not be confused.
#### What happened instead?
I was confused and had to ask in the room about what happened. How foolish I looked!
### Operating system
Arch Linux
### Application version
Element version: 1.9.5 Olm version: 3.2.3
### How did you install the app?
The AUR
### Homeserver
_No response_
### Will you send logs?
No | 1.0 | Important messages such as "This room is a continuation of another room" appear above new room intro text - ### Steps to reproduce

I got quite confused when I got an invite for a room that I was sure already existed in another form - yet I couldn't find that other room in my room list.
Turns out that this new room was the old room, upgraded. That's why I couldn't find the old room. I had missed the message about this room being a continuation of a previous room, I *believe* because it was far up in the timeline (above the new room copy).
### Outcome
#### What did you expect?
To not be confused.
#### What happened instead?
I was confused and had to ask in the room about what happened. How foolish I looked!
### Operating system
Arch Linux
### Application version
Element version: 1.9.5 Olm version: 3.2.3
### How did you install the app?
The AUR
### Homeserver
_No response_
### Will you send logs?
No | defect | important messages such as this room is a continuation of another room appear above new room intro text steps to reproduce i got quite confused when i got an invite for a room that i was sure already existed in another form yet i couldn t find that other room in my room list turns out that this new room was the old room upgraded that s why i couldn t find the old room i had missed the message about this room being a continuation of a previous room i believe because it was far up in the timeline above the new room copy outcome what did you expect to not be confused what happened instead i was confused and had to ask in the room about what happened how foolish i looked operating system arch linux application version element version olm version how did you install the app the aur homeserver no response will you send logs no | 1 |
38,457 | 15,702,699,635 | IssuesEvent | 2021-03-26 12:58:50 | SwissDataScienceCenter/renku-graph | https://api.github.com/repos/SwissDataScienceCenter/renku-graph | closed | Commits synchronisation events to be initiated from the event-log for single projects | event-log refactoring webhook-service | As a renku-graph maintainer, I'd like the commits synchronisation process to be based on the subscription model. The commits synchronization process is a safety net in case we deploy and the webhooks don't work.
### Current design
- There is a batch job on Webhook which checks now and then (about every hour) if there are commits to be synchronized. There is a major problem: scalability. We can't run more than one instance: each of the new instances would do the synchronization by itself, and on Prod this would kill GitLab. Even with one instance, when the `WebHook` service polls GitLab it creates too much load.
**Acceptance-criteria:**
- [x] a new `COMMITS_SYNC` subscription category to be created on the EL;
- [x] the subscription payload should look like this:
```json
{
"categoryName": "COMMITS_SYNC",
"subscriber": {
"url": "http://host/path",
"id": "20210302140653-8641"
}
}
```
- [x] a sync event should be issued for a single project;
- [x] a sync event should be issued every:
- [x] for projects with `latest_event_date` <= 7 days -> hour
- [x] for projects with `latest_event_date` > 7 days -> day
- [x] the category should update a relevant row in the `subscription_category_sync_time` on issuing an event;
- [x] update README;
- [x] a new XXX service to be created with the logic copied from the relevant process in the webhook-service
- [x] the name of the service should be good enough to describe the purpose of it (commits synchronisation, detect and create new events for force push cases, project renames, project deletions, new commit events creation from a push event etc)
- [x] the subscription should be built using the relevant functionality for the graph commons;
- [x] the service to expose a `POST /events` endpoint for accepting the `COMMITS_SYNC` events;
- [x] the event from EL to Commit Event Service should look like follows:
```json
{
"categoryName": "COMMITS_SYNC",
"id": "df654c3b1bd105a29d658f78f6380a842feac879",
"project": {
"id": 123,
"path": "namespace/project-name"
},
"lastSynced": "2001-09-04T10:48:29.457Z"
}
```
- [x] for a single project, compare the two commit Ids: one from EL and the other from GitLab to check latest commit.
- [x] if there is a difference, sync events for that project and post them to event log
- [x] the endpoint should have the same logic as the `POST /events` in EL or TG;
- [ ] it should call the toEvent method like in EL
- [x] the service should subscribe to the `COMMITS_SYNC` category
- [x] the service should be able to process `COMMITS_SYNC` events sent by EL (the logic can be found in the webhook-service, see `MissedEventsLoader#loadEvents`)
- [x] create README;
- [x] create helm chart and make the new service be built during the chartpress run;
- [x] clean-up webhook-service;
- [ ] add panels showing JVM metrics for the new service;
- [x] add acceptance test (similar to `ZombieEventDetectionSpec`);
- [x] the `GET /events?latest_per_project=true` on the EL should be removed;
- update README;
| 1.0 | Commits synchronisation events to be initiated from the event-log for single projects - As a renku-graph maintainer, I'd like the commits synchronisation process to be based on the subscription model. The commits synchronization process is a safety net in case we deploy and the webhooks don't work.
### Current design
- There is a batch job on Webhook which checks now and then (about every hour) if there are commits to be synchronized. There is a major problem: scalability. We can't run more than one instance: each of the new instances would do the synchronization by itself, and on Prod this would kill GitLab. Even with one instance, when the `WebHook` service polls GitLab it creates too much load.
**Acceptance-criteria:**
- [x] a new `COMMITS_SYNC` subscription category to be created on the EL;
- [x] the subscription payload should look like this:
```json
{
"categoryName": "COMMITS_SYNC",
"subscriber": {
"url": "http://host/path",
"id": "20210302140653-8641"
}
}
```
- [x] a sync event should be issued for a single project;
- [x] a sync event should be issued every:
- [x] for projects with `latest_event_date` <= 7 days -> hour
- [x] for projects with `latest_event_date` > 7 days -> day
- [x] the category should update a relevant row in the `subscription_category_sync_time` on issuing an event;
- [x] update README;
- [x] a new XXX service to be created with the logic copied from the relevant process in the webhook-service
- [x] the name of the service should be good enough to describe the purpose of it (commits synchronisation, detect and create new events for force push cases, project renames, project deletions, new commit events creation from a push event etc)
- [x] the subscription should be built using the relevant functionality for the graph commons;
- [x] the service to expose a `POST /events` endpoint for accepting the `COMMITS_SYNC` events;
- [x] the event from EL to Commit Event Service should look like follows:
```json
{
"categoryName": "COMMITS_SYNC",
"id": "df654c3b1bd105a29d658f78f6380a842feac879",
"project": {
"id": 123,
"path": "namespace/project-name"
},
"lastSynced": "2001-09-04T10:48:29.457Z"
}
```
- [x] for a single project, compare the two commit Ids: one from EL and the other from GitLab to check latest commit.
- [x] if there is a difference, sync events for that project and post them to event log
- [x] the endpoint should have the same logic as the `POST /events` in EL or TG;
- [ ] it should call the toEvent method like in EL
- [x] the service should subscribe to the `COMMITS_SYNC` category
- [x] the service should be able to process `COMMITS_SYNC` events sent by EL (the logic can be found in the webhook-service, see `MissedEventsLoader#loadEvents`)
- [x] create README;
- [x] create helm chart and make the new service be built during the chartpress run;
- [x] clean-up webhook-service;
- [ ] add panels showing JVM metrics for the new service;
- [x] add acceptance test (similar to `ZombieEventDetectionSpec`);
- [x] the `GET /events?latest_per_project=true` on the EL should be removed;
- update README;
| non_defect | commits synchronisation events to be initiated from the event log for single projects as a renku graph maintainer i d like the commits synchronisation process to be based on the subscription model the commits synchronization process is a safety net in case we deploy and the webhooks don t work curent design there is a batch job on webhook which checks now and then about every hour if there are commits to be synchronized there is a major problem scalability we can t put more than one instance each of the new instances will be doing synchronizing by itself on prod this would kill gitlab even with one instance when webhook service polls gitlab it creates too much load acceptance criteria a new commits sync subscription category to be created on the el the subscription payload should look like this json categoryname commits sync subscriber url id a sync event should be issued for a single project a sync event should be issued every for projects with latest event date hour for projects with latest event date days day the category should update a relevant row in the subscription category sync time on issuing an event update readme a new xxx service to be created with the logic copied from the relevant process in the webhook service the name of the service should be good enough to describe the purpose of it commits synchronisation detect and create new events for force push cases project renames project deletions new commit events creation from a push event etc the subscription should be built using the relevant functionality for the graph commons the service to expose a post events endpoint for accepting the commits sync events the event from el to commit event service should look like follows json categoryname commits sync id project id path namespace project name lastsynced for a single project compare the two commit ids one from el and the other from gitlab to check latest commit if there is a difference sync events for that project and post them to 
event log the endpoint should have the same logic as the post events in el or tg it should call the toevent method like in el the service should subscribe to the commits sync category the service should be able to process commits sync events sent by el the logic can be found in the webhook service see missedeventsloader loadevents create readme create helm chart and make the new service to be built during charpress run clean up webhook service add panels showing jvm metrics for the new service add acceptance test similar to zombieeventdetectionspec the get events latest per project true on the el should be removed update readme | 0 |
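The payload shapes and scheduling rules in the acceptance criteria can be captured in two small helpers. The field names are copied verbatim from the criteria; the helper functions themselves are illustrative and not part of renku-graph.

```python
import json

def commits_sync_event(commit_id, project_id, project_path, last_synced):
    """Build the EL -> commit-event-service payload quoted in the criteria."""
    return {
        "categoryName": "COMMITS_SYNC",
        "id": commit_id,
        "project": {"id": project_id, "path": project_path},
        "lastSynced": last_synced,
    }

def sync_interval_hours(days_since_latest_event):
    """Hourly sync events for recently active projects, daily otherwise."""
    return 1 if days_since_latest_event <= 7 else 24
```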
55,140 | 7,963,075,505 | IssuesEvent | 2018-07-13 16:14:36 | ga4gh/dockstore | https://api.github.com/repos/ga4gh/dockstore | closed | License on the Swagger UI page is LGPL | bug documentation | ## Bug Report
The license on the [Dockstore Swagger UI](https://dockstore.org:8443/static/swagger-ui/index.html) is LGPL. The license in the [source code](https://github.com/ga4gh/dockstore/blob/develop/LICENSE) is Apache 2.
Note the that license displayed in Swagger UI comes from dockstore-webservice/src/main/java/io/dockstore/webservice/resources/Description.java
Shouldn't they be the same, or is there some reason for them to be different? | 1.0 | License on the Swagger UI page is LGPL - ## Bug Report
The license on the [Dockstore Swagger UI](https://dockstore.org:8443/static/swagger-ui/index.html) is LGPL. The license in the [source code](https://github.com/ga4gh/dockstore/blob/develop/LICENSE) is Apache 2.
Note the that license displayed in Swagger UI comes from dockstore-webservice/src/main/java/io/dockstore/webservice/resources/Description.java
Shouldn't they be the same, or is there some reason for them to be different? | non_defect | license on the swagger ui page is lgpl bug report the license on the is lgpl the license in the is apache note the that license displayed in swagger ui comes from dockstore webservice src main java io dockstore webservice resources description java shouldn t they be the same or is there some reason for them to be different | 0 |
32,014 | 12,058,620,044 | IssuesEvent | 2020-04-15 17:47:09 | istio/istio | https://api.github.com/repos/istio/istio | closed | SDS is not yet working with istio/installer istio/operator | area/environments/installer area/security lifecycle/needs-escalation lifecycle/stale | SDS is not yet working with istio/installer istio/operator. There are a couple of issues:
1. Pilot, Galley, and telemetry-Mixer are still using file mount certs in their bootstrap config, which means they will read their file mounts certificates. This will break Istio if the control plane is set to use an SDS-based CA (pods like ingress gateway will fail to connect to Pilot, etc. due to having different root certs).
2. Policy-Mixer has a sidecar but it's talking to Galley directly using file mounts cert. | True | SDS is not yet working with istio/installer istio/operator - SDS is not yet working with istio/installer istio/operator. There are a couple of issues:
1. Pilot, Galley, and telemetry-Mixer are still using file mount certs in their bootstrap config, which means they will read their file mounts certificates. This will break Istio if the control plane is set to use an SDS-based CA (pods like ingress gateway will fail to connect to Pilot, etc. due to having different root certs).
2. Policy-Mixer has a sidecar but it's talking to Galley directly using file mounts cert. | non_defect | sds is not yet working with istio installer istio operator sds is not yet working with istio installer istio operator there are a couple of issues pilot galley and telemetry mixer are still using file mount certs in their bootstrap config which means they will read their file mounts certificates this will break istio if the control plane is set to use an sds based ca pods like ingress gateway will fail to connect to pilot etc due to having different root certs policy mixer has a sidecar but it s talking to galley directly using file mounts cert | 0 |
54,576 | 13,771,915,703 | IssuesEvent | 2020-10-07 23:06:30 | mozilla/jetstream | https://api.github.com/repos/mozilla/jetstream | opened | bug-1665467-pref-shirley-omnibus-experiment-release-81-82 failing | Defect | Just noticed that the experiment `bug-1665467-pref-shirley-omnibus-experiment-release-81-82` has an [external config](https://github.com/mozilla/jetstream-config/blob/main/bug-1665467-pref-shirley-omnibus-experiment-release-81-82.toml) and that re-running raises the following error:
```
Traceback (most recent call last):
  File "/usr/local/lib/python3.8/site-packages/jetstream/cli.py", line 207, in rerun
    Analysis(project_id, dataset_id, config).run(date)
  File "/usr/local/lib/python3.8/site-packages/jetstream/analysis.py", line 362, in run
    self._calculate_statistics(metrics_table, period)
  File "/usr/local/lib/python3.8/site-packages/jetstream/analysis.py", line 214, in _calculate_statistics
    stats = m.run(segment_data, self.config.experiment).set_segment(segment)
  File "/usr/local/lib/python3.8/site-packages/jetstream/statistics.py", line 49, in run
    return self.statistic.apply(data, self.metric.name, experiment)
  File "/usr/local/lib/python3.8/site-packages/jetstream/statistics.py", line 154, in apply
    statistic_result_collection.data += self.transform(
  File "/usr/local/lib/python3.8/site-packages/jetstream/statistics.py", line 516, in transform
    start = group[metric].nsmallest(2)[1]
  File "/usr/local/lib/python3.8/site-packages/pandas/core/series.py", line 882, in __getitem__
    return self._get_value(key)
  File "/usr/local/lib/python3.8/site-packages/pandas/core/series.py", line 991, in _get_value
    loc = self.index.get_loc(label)
  File "/usr/local/lib/python3.8/site-packages/pandas/core/indexes/base.py", line 2891, in get_loc
    raise KeyError(key) from err
KeyError: 1
``` | 1.0 | bug-1665467-pref-shirley-omnibus-experiment-release-81-82 failing - Just noticed that the experiment `bug-1665467-pref-shirley-omnibus-experiment-release-81-82` has an [external config](https://github.com/mozilla/jetstream-config/blob/main/bug-1665467-pref-shirley-omnibus-experiment-release-81-82.toml) and that re-running raises the following error:
```
Traceback (most recent call last):
  File "/usr/local/lib/python3.8/site-packages/jetstream/cli.py", line 207, in rerun
    Analysis(project_id, dataset_id, config).run(date)
  File "/usr/local/lib/python3.8/site-packages/jetstream/analysis.py", line 362, in run
    self._calculate_statistics(metrics_table, period)
  File "/usr/local/lib/python3.8/site-packages/jetstream/analysis.py", line 214, in _calculate_statistics
    stats = m.run(segment_data, self.config.experiment).set_segment(segment)
  File "/usr/local/lib/python3.8/site-packages/jetstream/statistics.py", line 49, in run
    return self.statistic.apply(data, self.metric.name, experiment)
  File "/usr/local/lib/python3.8/site-packages/jetstream/statistics.py", line 154, in apply
    statistic_result_collection.data += self.transform(
  File "/usr/local/lib/python3.8/site-packages/jetstream/statistics.py", line 516, in transform
    start = group[metric].nsmallest(2)[1]
  File "/usr/local/lib/python3.8/site-packages/pandas/core/series.py", line 882, in __getitem__
    return self._get_value(key)
  File "/usr/local/lib/python3.8/site-packages/pandas/core/series.py", line 991, in _get_value
    loc = self.index.get_loc(label)
  File "/usr/local/lib/python3.8/site-packages/pandas/core/indexes/base.py", line 2891, in get_loc
    raise KeyError(key) from err
KeyError: 1
``` | defect | bug pref shirley omnibus experiment release failing just noticed that the experiment bug pref shirley omnibus experiment release has an and that re running raises the following error traceback most recent call last file usr local lib site packages jetstream cli py line in rerun analysis project id dataset id config run date file usr local lib site packages jetstream analysis py line in run self calculate statistics metrics table period file usr local lib site packages jetstream analysis py line in calculate statistics stats m run segment data self config experiment set segment segment file usr local lib site packages jetstream statistics py line in run return self statistic apply data self metric name experiment file usr local lib site packages jetstream statistics py line in apply statistic result collection data self transform file usr local lib site packages jetstream statistics py line in transform start group nsmallest file usr local lib site packages pandas core series py line in getitem return self get value key file usr local lib site packages pandas core series py line in get value loc self index get loc label file usr local lib site packages pandas core indexes base py line in get loc raise keyerror key from err keyerror | 1 |
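The failing line in the traceback is `start = group[metric].nsmallest(2)[1]`: pandas' `Series.nsmallest` keeps the original index, so `[1]` is a label lookup and raises `KeyError: 1` whenever no row happens to carry that label; positional access (`.iloc[1]`), plus a length check for one-row groups, avoids both failure modes. Below is a dependency-free sketch of the positional version, not jetstream's actual fix.

```python
def second_smallest(values):
    """Positional analogue of Series.nsmallest(2).iloc[1].

    Raises a clear error for groups with fewer than two values instead
    of an index-label lookup failure like the KeyError above.
    """
    if len(values) < 2:
        raise ValueError("need at least two values for the second smallest")
    return sorted(values)[1]
```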
51,776 | 13,211,304,922 | IssuesEvent | 2020-08-15 22:10:49 | icecube-trac/tix4 | https://api.github.com/repos/icecube-trac/tix4 | opened | wavedeform - heap buffer overflow in I3Wavedeform.cxx (Trac #1024) | Incomplete Migration Migrated from Trac combo reconstruction defect | <details>
<summary><em>Migrated from <a href="https://code.icecube.wisc.edu/projects/icecube/ticket/1024">https://code.icecube.wisc.edu/projects/icecube/ticket/1024</a>, reported by nega and owned by jbraun</em></summary>
<p>
```json
{
"status": "closed",
"changetime": "2015-06-19T02:50:17",
"_ts": "1434682217191362",
"description": "at the bootcamp we discovered a heap-buffer-overflow in `I3Wavedeform.cxx:612`.\n\n{{{\n#!c\n608 basis = cholmod_l_allocate_sparse(basis_trip->nrow, basis_trip->ncol,\n609: basis_trip->nnz, true, true, 0, CHOLMOD_REAL, &c); \n610: for (int i = 0, accum = 0; i < nspes; ++i) { \n611: ((long *)(basis->p))[i] = accum; \n612: accum += col_counts[i]; \n613: } \n614: std::vector<long> col_indices(nspes,0); \n}}}\n\nASan output:\n{{{\n=================================================================\n==23247== ERROR: AddressSanitizer: heap-buffer-overflow on address 0x60700001dfe8 at pc 0x7fd5b962bbb1 bp 0x7fff110978d0 sp 0x7fff110978c8\nREAD of size 4 at 0x60700001dfe8 thread T0\n #0 0x7fd5b962bbb0 in I3Wavedeform::GetPulses(__gnu_cxx::__normal_iterator<I3Waveform const*, std::vector<I3Waveform, std::allocator<I3Waveform> > >, __gnu_cxx::__normal_iterator<I3Waveform const*, std::vector<I3Waveform, std::allocator<I3Waveform> > >, OMKey const&, bool, WaveformTemplate const&, I3DOMCalibration const&, double) /home/nega/i3/combo/build/wavedeform/../../src/wavedeform/private/wavedeform/I3Wavedeform.cxx:612\n #1 0x7fd5b9627492 in I3Wavedeform::DAQ(boost::shared_ptr<I3Frame>) /home/nega/i3/combo/build/wavedeform/../../src/wavedeform/private/wavedeform/I3Wavedeform.cxx:213\n #2 0x7fd5b45ecdac in I3Module::Process() /home/nega/i3/combo/build/icetray/../../src/icetray/private/icetray/I3Module.cxx:226\n #3 0x7fd5b45eb86c in I3Module::Process_() /home/nega/i3/combo/build/icetray/../../src/icetray/private/icetray/I3Module.cxx:182\n #4 0x7fd5b45ea014 in I3Module::Do(void (I3Module::*)()) /home/nega/i3/combo/build/icetray/../../src/icetray/private/icetray/I3Module.cxx:111\n #5 0x7fd5b45ea457 in I3Module::Do(void (I3Module::*)()) /home/nega/i3/combo/build/icetray/../../src/icetray/private/icetray/I3Module.cxx:132\n #6 0x7fd5b45ea457 in I3Module::Do(void (I3Module::*)()) /home/nega/i3/combo/build/icetray/../../src/icetray/private/icetray/I3Module.cxx:132\n #7 
0x7fd5b45ea457 in I3Module::Do(void (I3Module::*)()) /home/nega/i3/combo/build/icetray/../../src/icetray/private/icetray/I3Module.cxx:132\n #8 0x7fd5b45ea457 in I3Module::Do(void (I3Module::*)()) /home/nega/i3/combo/build/icetray/../../src/icetray/private/icetray/I3Module.cxx:132\n #9 0x7fd5b45ea457 in I3Module::Do(void (I3Module::*)()) /home/nega/i3/combo/build/icetray/../../src/icetray/private/icetray/I3Module.cxx:132\n #10 0x7fd5b45d29d6 in I3Tray::Execute(unsigned int) /home/nega/i3/combo/build/icetray/../../src/icetray/private/icetray/I3Tray.cxx:494\n #11 0x4f599a in local_test_routine_PulseTemplateTest() /home/nega/i3/combo/build/DOMLauncher/../../src/DOMLauncher/private/test/PulseTemplateTests.cxx:158\n #12 0x482196 in I3Test::test_group::run(std::string const&, bool) /home/nega/i3/combo/build/DOMLauncher/../../src/cmake/tool-patches/common/I3TestMain.ixx:182\n #13 0x483ebf in I3Test::test_suite::run(std::string const&) /home/nega/i3/combo/build/DOMLauncher/../../src/cmake/tool-patches/common/I3TestMain.ixx:372\n #14 0x485069 in main /home/nega/i3/combo/build/DOMLauncher/../../src/cmake/tool-patches/common/I3TestMain.ixx:563\n #15 0x7fd5aae99ec4 in __libc_start_main /build/buildd/eglibc-2.19/csu/libc-start.c:287\n #16 0x480fa8 in _start (/home/nega/i3/combo/build/bin/DOMLauncher-test+0x480fa8)\n0x60700001dfe8 is located 0 bytes to the right of 7912-byte region [0x60700001c100,0x60700001dfe8)\nallocated by thread T0 here:\n #0 0x7fd5bb14e81a in operator new(unsigned long) (/usr/lib/x86_64-linux-gnu/libasan.so.0+0x1181a)\n #1 0x7fd5ba93a7b6 in __gnu_cxx::new_allocator<int>::allocate(unsigned long, void const*) /usr/include/c++/4.8/ext/new_allocator.h:104\n #2 0x7fd5ba93407c in std::_Vector_base<int, std::allocator<int> >::_M_allocate(unsigned long) /usr/include/c++/4.8/bits/stl_vector.h:168\n #3 0x7fd5b96402a8 in void std::vector<int, std::allocator<int> >::_M_initialize_dispatch<int>(int, int, std::__true_type) /usr/include/c++/4.8/bits/stl_vector.h:1163\n #4 
0x7fd5b9637a82 in std::vector<int, std::allocator<int> >::vector<int>(int, int, std::allocator<int> const&) /usr/include/c++/4.8/bits/stl_vector.h:404\n #5 0x7fd5b962b0f1 in I3Wavedeform::GetPulses(__gnu_cxx::__normal_iterator<I3Waveform const*, std::vector<I3Waveform, std::allocator<I3Waveform> > >, __gnu_cxx::__normal_iterator<I3Waveform const*, std::vector<I3Waveform, std::allocator<I3Waveform> > >, OMKey const&, bool, WaveformTemplate const&, I3DOMCalibration const&, double) /home/nega/i3/combo/build/wavedeform/../../src/wavedeform/private/wavedeform/I3Wavedeform.cxx:551\n #6 0x7fd5b9627492 in I3Wavedeform::DAQ(boost::shared_ptr<I3Frame>) /home/nega/i3/combo/build/wavedeform/../../src/wavedeform/private/wavedeform/I3Wavedeform.cxx:213\n #7 0x7fd5b45ecdac in I3Module::Process() /home/nega/i3/combo/build/icetray/../../src/icetray/private/icetray/I3Module.cxx:226\n #8 0x7fd5b45eb86c in I3Module::Process_() /home/nega/i3/combo/build/icetray/../../src/icetray/private/icetray/I3Module.cxx:182\n #9 0x7fd5b45ea014 in I3Module::Do(void (I3Module::*)()) /home/nega/i3/combo/build/icetray/../../src/icetray/private/icetray/I3Module.cxx:111\n #10 0x7fd5b45ea457 in I3Module::Do(void (I3Module::*)()) /home/nega/i3/combo/build/icetray/../../src/icetray/private/icetray/I3Module.cxx:132\n #11 0x7fd5b45ea457 in I3Module::Do(void (I3Module::*)()) /home/nega/i3/combo/build/icetray/../../src/icetray/private/icetray/I3Module.cxx:132\n #12 0x7fd5b45ea457 in I3Module::Do(void (I3Module::*)()) /home/nega/i3/combo/build/icetray/../../src/icetray/private/icetray/I3Module.cxx:132\n #13 0x7fd5b45ea457 in I3Module::Do(void (I3Module::*)()) /home/nega/i3/combo/build/icetray/../../src/icetray/private/icetray/I3Module.cxx:132\n #14 0x7fd5b45ea457 in I3Module::Do(void (I3Module::*)()) /home/nega/i3/combo/build/icetray/../../src/icetray/private/icetray/I3Module.cxx:132\n #15 0x7fd5b45d29d6 in I3Tray::Execute(unsigned int) 
/home/nega/i3/combo/build/icetray/../../src/icetray/private/icetray/I3Tray.cxx:494\n #16 0x4f599a in local_test_routine_PulseTemplateTest() /home/nega/i3/combo/build/DOMLauncher/../../src/DOMLauncher/private/test/PulseTemplateTests.cxx:158\n #17 0x482196 in I3Test::test_group::run(std::string const&, bool) /home/nega/i3/combo/build/DOMLauncher/../../src/cmake/tool-patches/common/I3TestMain.ixx:182\n #18 0x483ebf in I3Test::test_suite::run(std::string const&) /home/nega/i3/combo/build/DOMLauncher/../../src/cmake/tool-patches/common/I3TestMain.ixx:372\n #19 0x485069 in main /home/nega/i3/combo/build/DOMLauncher/../../src/cmake/tool-patches/common/I3TestMain.ixx:563\n #20 0x7fd5aae99ec4 in __libc_start_main /build/buildd/eglibc-2.19/csu/libc-start.c:287\nSUMMARY: AddressSanitizer: heap-buffer-overflow /home/nega/i3/combo/build/wavedeform/../../src/wavedeform/private/wavedeform/I3Wavedeform.cxx:612 I3Wavedeform::GetPulses(__gnu_cxx::__normal_iterator<I3Waveform const*, std::vector<I3Waveform, std::allocator<I3Waveform> > >, __gnu_cxx::__normal_iterator<I3Waveform const*, std::vector<I3Waveform, std::allocator<I3Waveform> > >, OMKey const&, bool, WaveformTemplate const&, I3DOMCalibration const&, double)\nShadow bytes around the buggy address:\n 0x0c0e7fffbba0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00\n 0x0c0e7fffbbb0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00\n 0x0c0e7fffbbc0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00\n 0x0c0e7fffbbd0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00\n 0x0c0e7fffbbe0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00\n=>0x0c0e7fffbbf0: 00 00 00 00 00 00 00 00 00 00 00 00 00[fa]fa fa\n 0x0c0e7fffbc00: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa\n 0x0c0e7fffbc10: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa\n 0x0c0e7fffbc20: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00\n 0x0c0e7fffbc30: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00\n 0x0c0e7fffbc40: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00\nShadow byte 
legend (one shadow byte represents 8 application bytes):\n Addressable: 00\n Partially addressable: 01 02 03 04 05 06 07 \n Heap left redzone: fa\n Heap righ redzone: fb\n Freed Heap region: fd\n Stack left redzone: f1\n Stack mid redzone: f2\n Stack right redzone: f3\n Stack partial redzone: f4\n Stack after return: f5\n Stack use after scope: f8\n Global redzone: f9\n Global init order: f6\n Poisoned by user: f7\n ASan internal: fe\n==23247== ABORTING\n}}}",
"reporter": "nega",
"cc": "",
"resolution": "fixed",
"time": "2015-06-18T22:18:27",
"component": "combo reconstruction",
"summary": "wavedeform - heap buffer overflow in I3Wavedeform.cxx",
"priority": "normal",
"keywords": "",
"milestone": "",
"owner": "jbraun",
"type": "defect"
}
```
</p>
</details>
| 1.0 | wavedeform - heap buffer overflow in I3Wavedeform.cxx (Trac #1024) - <details>
<summary><em>Migrated from <a href="https://code.icecube.wisc.edu/projects/icecube/ticket/1024">https://code.icecube.wisc.edu/projects/icecube/ticket/1024</a>, reported by negaand owned by jbraun</em></summary>
<p>
```json
{
"status": "closed",
"changetime": "2015-06-19T02:50:17",
"_ts": "1434682217191362",
"description": "at the bootcamp we discovered a heap-buffer-overflow in `I3Wavedeform.cxx:612`.\n\n{{{\n#!c\n608 basis = cholmod_l_allocate_sparse(basis_trip->nrow, basis_trip->ncol,\n609: basis_trip->nnz, true, true, 0, CHOLMOD_REAL, &c); \n610: for (int i = 0, accum = 0; i < nspes; ++i) { \n611: ((long *)(basis->p))[i] = accum; \n612: accum += col_counts[i]; \n613: } \n614: std::vector<long> col_indices(nspes,0); \n}}}\n\nASan output:\n{{{\n=================================================================\n==23247== ERROR: AddressSanitizer: heap-buffer-overflow on address 0x60700001dfe8 at pc 0x7fd5b962bbb1 bp 0x7fff110978d0 sp 0x7fff110978c8\nREAD of size 4 at 0x60700001dfe8 thread T0\n #0 0x7fd5b962bbb0 in I3Wavedeform::GetPulses(__gnu_cxx::__normal_iterator<I3Waveform const*, std::vector<I3Waveform, std::allocator<I3Waveform> > >, __gnu_cxx::__normal_iterator<I3Waveform const*, std::vector<I3Waveform, std::allocator<I3Waveform> > >, OMKey const&, bool, WaveformTemplate const&, I3DOMCalibration const&, double) /home/nega/i3/combo/build/wavedeform/../../src/wavedeform/private/wavedeform/I3Wavedeform.cxx:612\n #1 0x7fd5b9627492 in I3Wavedeform::DAQ(boost::shared_ptr<I3Frame>) /home/nega/i3/combo/build/wavedeform/../../src/wavedeform/private/wavedeform/I3Wavedeform.cxx:213\n #2 0x7fd5b45ecdac in I3Module::Process() /home/nega/i3/combo/build/icetray/../../src/icetray/private/icetray/I3Module.cxx:226\n #3 0x7fd5b45eb86c in I3Module::Process_() /home/nega/i3/combo/build/icetray/../../src/icetray/private/icetray/I3Module.cxx:182\n #4 0x7fd5b45ea014 in I3Module::Do(void (I3Module::*)()) /home/nega/i3/combo/build/icetray/../../src/icetray/private/icetray/I3Module.cxx:111\n #5 0x7fd5b45ea457 in I3Module::Do(void (I3Module::*)()) /home/nega/i3/combo/build/icetray/../../src/icetray/private/icetray/I3Module.cxx:132\n #6 0x7fd5b45ea457 in I3Module::Do(void (I3Module::*)()) /home/nega/i3/combo/build/icetray/../../src/icetray/private/icetray/I3Module.cxx:132\n #7 
0x7fd5b45ea457 in I3Module::Do(void (I3Module::*)()) /home/nega/i3/combo/build/icetray/../../src/icetray/private/icetray/I3Module.cxx:132\n #8 0x7fd5b45ea457 in I3Module::Do(void (I3Module::*)()) /home/nega/i3/combo/build/icetray/../../src/icetray/private/icetray/I3Module.cxx:132\n #9 0x7fd5b45ea457 in I3Module::Do(void (I3Module::*)()) /home/nega/i3/combo/build/icetray/../../src/icetray/private/icetray/I3Module.cxx:132\n #10 0x7fd5b45d29d6 in I3Tray::Execute(unsigned int) /home/nega/i3/combo/build/icetray/../../src/icetray/private/icetray/I3Tray.cxx:494\n #11 0x4f599a in local_test_routine_PulseTemplateTest() /home/nega/i3/combo/build/DOMLauncher/../../src/DOMLauncher/private/test/PulseTemplateTests.cxx:158\n #12 0x482196 in I3Test::test_group::run(std::string const&, bool) /home/nega/i3/combo/build/DOMLauncher/../../src/cmake/tool-patches/common/I3TestMain.ixx:182\n #13 0x483ebf in I3Test::test_suite::run(std::string const&) /home/nega/i3/combo/build/DOMLauncher/../../src/cmake/tool-patches/common/I3TestMain.ixx:372\n #14 0x485069 in main /home/nega/i3/combo/build/DOMLauncher/../../src/cmake/tool-patches/common/I3TestMain.ixx:563\n #15 0x7fd5aae99ec4 in __libc_start_main /build/buildd/eglibc-2.19/csu/libc-start.c:287\n #16 0x480fa8 in _start (/home/nega/i3/combo/build/bin/DOMLauncher-test+0x480fa8)\n0x60700001dfe8 is located 0 bytes to the right of 7912-byte region [0x60700001c100,0x60700001dfe8)\nallocated by thread T0 here:\n #0 0x7fd5bb14e81a in operator new(unsigned long) (/usr/lib/x86_64-linux-gnu/libasan.so.0+0x1181a)\n #1 0x7fd5ba93a7b6 in __gnu_cxx::new_allocator<int>::allocate(unsigned long, void const*) /usr/include/c++/4.8/ext/new_allocator.h:104\n #2 0x7fd5ba93407c in std::_Vector_base<int, std::allocator<int> >::_M_allocate(unsigned long) /usr/include/c++/4.8/bits/stl_vector.h:168\n #3 0x7fd5b96402a8 in void std::vector<int, std::allocator<int> >::_M_initialize_dispatch<int>(int, int, std::__true_type) /usr/include/c++/4.8/bits/stl_vector.h:1163\n #4 
0x7fd5b9637a82 in std::vector<int, std::allocator<int> >::vector<int>(int, int, std::allocator<int> const&) /usr/include/c++/4.8/bits/stl_vector.h:404\n #5 0x7fd5b962b0f1 in I3Wavedeform::GetPulses(__gnu_cxx::__normal_iterator<I3Waveform const*, std::vector<I3Waveform, std::allocator<I3Waveform> > >, __gnu_cxx::__normal_iterator<I3Waveform const*, std::vector<I3Waveform, std::allocator<I3Waveform> > >, OMKey const&, bool, WaveformTemplate const&, I3DOMCalibration const&, double) /home/nega/i3/combo/build/wavedeform/../../src/wavedeform/private/wavedeform/I3Wavedeform.cxx:551\n #6 0x7fd5b9627492 in I3Wavedeform::DAQ(boost::shared_ptr<I3Frame>) /home/nega/i3/combo/build/wavedeform/../../src/wavedeform/private/wavedeform/I3Wavedeform.cxx:213\n #7 0x7fd5b45ecdac in I3Module::Process() /home/nega/i3/combo/build/icetray/../../src/icetray/private/icetray/I3Module.cxx:226\n #8 0x7fd5b45eb86c in I3Module::Process_() /home/nega/i3/combo/build/icetray/../../src/icetray/private/icetray/I3Module.cxx:182\n #9 0x7fd5b45ea014 in I3Module::Do(void (I3Module::*)()) /home/nega/i3/combo/build/icetray/../../src/icetray/private/icetray/I3Module.cxx:111\n #10 0x7fd5b45ea457 in I3Module::Do(void (I3Module::*)()) /home/nega/i3/combo/build/icetray/../../src/icetray/private/icetray/I3Module.cxx:132\n #11 0x7fd5b45ea457 in I3Module::Do(void (I3Module::*)()) /home/nega/i3/combo/build/icetray/../../src/icetray/private/icetray/I3Module.cxx:132\n #12 0x7fd5b45ea457 in I3Module::Do(void (I3Module::*)()) /home/nega/i3/combo/build/icetray/../../src/icetray/private/icetray/I3Module.cxx:132\n #13 0x7fd5b45ea457 in I3Module::Do(void (I3Module::*)()) /home/nega/i3/combo/build/icetray/../../src/icetray/private/icetray/I3Module.cxx:132\n #14 0x7fd5b45ea457 in I3Module::Do(void (I3Module::*)()) /home/nega/i3/combo/build/icetray/../../src/icetray/private/icetray/I3Module.cxx:132\n #15 0x7fd5b45d29d6 in I3Tray::Execute(unsigned int) 
/home/nega/i3/combo/build/icetray/../../src/icetray/private/icetray/I3Tray.cxx:494\n #16 0x4f599a in local_test_routine_PulseTemplateTest() /home/nega/i3/combo/build/DOMLauncher/../../src/DOMLauncher/private/test/PulseTemplateTests.cxx:158\n #17 0x482196 in I3Test::test_group::run(std::string const&, bool) /home/nega/i3/combo/build/DOMLauncher/../../src/cmake/tool-patches/common/I3TestMain.ixx:182\n #18 0x483ebf in I3Test::test_suite::run(std::string const&) /home/nega/i3/combo/build/DOMLauncher/../../src/cmake/tool-patches/common/I3TestMain.ixx:372\n #19 0x485069 in main /home/nega/i3/combo/build/DOMLauncher/../../src/cmake/tool-patches/common/I3TestMain.ixx:563\n #20 0x7fd5aae99ec4 in __libc_start_main /build/buildd/eglibc-2.19/csu/libc-start.c:287\nSUMMARY: AddressSanitizer: heap-buffer-overflow /home/nega/i3/combo/build/wavedeform/../../src/wavedeform/private/wavedeform/I3Wavedeform.cxx:612 I3Wavedeform::GetPulses(__gnu_cxx::__normal_iterator<I3Waveform const*, std::vector<I3Waveform, std::allocator<I3Waveform> > >, __gnu_cxx::__normal_iterator<I3Waveform const*, std::vector<I3Waveform, std::allocator<I3Waveform> > >, OMKey const&, bool, WaveformTemplate const&, I3DOMCalibration const&, double)\nShadow bytes around the buggy address:\n 0x0c0e7fffbba0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00\n 0x0c0e7fffbbb0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00\n 0x0c0e7fffbbc0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00\n 0x0c0e7fffbbd0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00\n 0x0c0e7fffbbe0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00\n=>0x0c0e7fffbbf0: 00 00 00 00 00 00 00 00 00 00 00 00 00[fa]fa fa\n 0x0c0e7fffbc00: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa\n 0x0c0e7fffbc10: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa\n 0x0c0e7fffbc20: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00\n 0x0c0e7fffbc30: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00\n 0x0c0e7fffbc40: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00\nShadow byte 
legend (one shadow byte represents 8 application bytes):\n Addressable: 00\n Partially addressable: 01 02 03 04 05 06 07 \n Heap left redzone: fa\n Heap righ redzone: fb\n Freed Heap region: fd\n Stack left redzone: f1\n Stack mid redzone: f2\n Stack right redzone: f3\n Stack partial redzone: f4\n Stack after return: f5\n Stack use after scope: f8\n Global redzone: f9\n Global init order: f6\n Poisoned by user: f7\n ASan internal: fe\n==23247== ABORTING\n}}}",
"reporter": "nega",
"cc": "",
"resolution": "fixed",
"time": "2015-06-18T22:18:27",
"component": "combo reconstruction",
"summary": "wavedeform - heap buffer overflow in I3Wavedeform.cxx",
"priority": "normal",
"keywords": "",
"milestone": "",
"owner": "jbraun",
"type": "defect"
}
```
</p>
</details>
| defect | wavedeform heap buffer overflow in cxx trac migrated from json status closed changetime ts description at the bootcamp we discovered a heap buffer overflow in cxx n n n c basis cholmod l allocate sparse basis trip nrow basis trip ncol basis trip nnz true true cholmod real c for int i accum i p accum accum col counts std vector col indices nspes n n nasan output n n n error addresssanitizer heap buffer overflow on address at pc bp sp nread of size at thread n in getpulses gnu cxx normal iterator gnu cxx normal iterator omkey const bool waveformtemplate const const double home nega combo build wavedeform src wavedeform private wavedeform cxx n in daq boost shared ptr home nega combo build wavedeform src wavedeform private wavedeform cxx n in process home nega combo build icetray src icetray private icetray cxx n in process home nega combo build icetray src icetray private icetray cxx n in do void home nega combo build icetray src icetray private icetray cxx n in do void home nega combo build icetray src icetray private icetray cxx n in do void home nega combo build icetray src icetray private icetray cxx n in do void home nega combo build icetray src icetray private icetray cxx n in do void home nega combo build icetray src icetray private icetray cxx n in do void home nega combo build icetray src icetray private icetray cxx n in execute unsigned int home nega combo build icetray src icetray private icetray cxx n in local test routine pulsetemplatetest home nega combo build domlauncher src domlauncher private test pulsetemplatetests cxx n in test group run std string const bool home nega combo build domlauncher src cmake tool patches common ixx n in test suite run std string const home nega combo build domlauncher src cmake tool patches common ixx n in main home nega combo build domlauncher src cmake tool patches common ixx n in libc start main build buildd eglibc csu libc start c n in start home nega combo build bin domlauncher test is located bytes to 
the right of byte region fa fa n fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa n fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa n n n nshadow byte legend one shadow byte represents application bytes n addressable n partially addressable n heap left redzone fa n heap righ redzone fb n freed heap region fd n stack left redzone n stack mid redzone n stack right redzone n stack partial redzone n stack after return n stack use after scope n global redzone n global init order n poisoned by user n asan internal fe n aborting n reporter nega cc resolution fixed time component combo reconstruction summary wavedeform heap buffer overflow in cxx priority normal keywords milestone owner jbraun type defect | 1 |
18,723 | 13,168,406,229 | IssuesEvent | 2020-08-11 12:04:32 | topcoder-platform/qa-fun | https://api.github.com/repos/topcoder-platform/qa-fun | closed | [TCO-20 REGIONAL] [Web-Chrome] Contact us page should provide a form for user to write a message | UX/Usability | Steps:
1. In [web-chrome] click on contact us link down the bottom
Expected result:
This page should provide a form where the user can write a message to the company and provide their name.
Actual result:
No form for message is provided.
Screenshot:

| True | [TCO-20 REGIONAL] [Web-Chrome] Contact us page should provide a form for user to write a message - Steps:
1. In [web-chrome] click on contact us link down the bottom
Expected result:
This page should provide a form where the user can write a message to the company and provide their name.
Actual result:
No form for message is provided.
Screenshot:

| non_defect | contact us page should provide a form for user to write a message steps in click on contact us link down the bottom expected result this page should provide a form where the user can write a message to the company and provide their name actual result no form for message is provided screenshot | 0 |