Unnamed: 0 int64 0 832k | id float64 2.49B 32.1B | type stringclasses 1 value | created_at stringlengths 19 19 | repo stringlengths 5 112 | repo_url stringlengths 34 141 | action stringclasses 3 values | title stringlengths 1 757 | labels stringlengths 4 664 | body stringlengths 3 261k | index stringclasses 10 values | text_combine stringlengths 96 261k | label stringclasses 2 values | text stringlengths 96 232k | binary_label int64 0 1 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
355,747 | 25,176,013,013 | IssuesEvent | 2022-11-11 09:19:47 | boredcoco/pe | https://api.github.com/repos/boredcoco/pe | opened | ListCommandParser UML inaccurate diagram | type.DocumentationBug severity.Medium | 
I don't see any part of the code which destroys the ListCommandParser; this diagram is slightly inaccurate
<!--session: 1668153950994-46ee6784-85ca-4802-99de-176c3e01881c-->
<!--Version: Web v3.4.4--> | 1.0 | ListCommandParser UML inaccurate diagram - 
I don't see any part of the code which destroys the ListCommandParser; this diagram is slightly inaccurate
<!--session: 1668153950994-46ee6784-85ca-4802-99de-176c3e01881c-->
<!--Version: Web v3.4.4--> | non_defect | listcommandparser uml inaccurate diagram i don t see any part of the code which destroys the listcommandparser this diagram is slightly inaccurate | 0 |
624,028 | 19,684,781,012 | IssuesEvent | 2022-01-11 20:45:06 | UniVE-SSV/lisa | https://api.github.com/repos/UniVE-SSV/lisa | closed | [FEATURE REQUEST] Make `CartesianProduct` methods non-final (and check more classes) | enhancement resolved priority-p1 | **Description**
`CartesianProduct` (and probably more classes) have final methods that prevent their extensibility even in cases where it makes sense. | 1.0 | [FEATURE REQUEST] Make `CartesianProduct` methods non-final (and check more classes) - **Description**
`CartesianProduct` (and probably more classes) have final methods that prevent their extensibility even in cases where it makes sense. | non_defect | make cartesianproduct methods non final and check more classes description cartesianproduct and probably more classes have final methods that prevent their extensibility even in cases where it makes sense | 0 |
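The `CartesianProduct` issue above concerns Java's `final` methods. The same extensibility trade-off can be sketched in Python with `typing.final`; the class and method names below are hypothetical stand-ins, not LiSA's actual API:

```python
from typing import final


class CartesianProduct:
    """Hypothetical stand-in for a class that locks down a method."""

    @final
    def lub(self, other: "CartesianProduct") -> str:
        # @final tells type checkers to reject overrides, blocking
        # extension even in cases where specializing would make sense.
        return "componentwise-lub"


class LoggingProduct(CartesianProduct):
    # mypy/pyright flag this override as an error; Python itself does not
    # enforce @final at runtime, unlike Java's `final`.
    def lub(self, other: "CartesianProduct") -> str:  # type: ignore[misc]
        return "logged:" + super().lub(other)
```

Dropping `@final` (what the issue requests) re-opens the extension point without changing existing callers.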
37,279 | 5,109,766,637 | IssuesEvent | 2017-01-05 21:50:01 | owtf/owtf | https://api.github.com/repos/owtf/owtf | closed | Session_Management_Schema@OWTF-SM-001 now breaks OWTF tests | Bug Priority Medium Testing | The plugin `Session_Management_Schema@OWTF-SM-001` is currently breaking the OWTF tests.
Since forever, the plugin was actually doing nothing (`return ([])`) but it has been re-enabled with https://github.com/owtf/owtf/commit/9207e4bccf41d155067e7135e653ed2ae5447051#diff-40bec1612db6779ef9810c15e2e79e23L24
However, since it has been re-enabled, it breaks the OWTF tests due to unknown side effects. The issue should be troubleshot in order to ensure that the plugin is working properly.
The plugin has been temporarily disabled in the testing configuration but should be re-enabled as soon as possible. | 1.0 | Session_Management_Schema@OWTF-SM-001 now breaks OWTF tests - The plugin `Session_Management_Schema@OWTF-SM-001` is currently breaking the OWTF tests.
Since forever, the plugin was actually doing nothing (`return ([])`) but it has been re-enabled with https://github.com/owtf/owtf/commit/9207e4bccf41d155067e7135e653ed2ae5447051#diff-40bec1612db6779ef9810c15e2e79e23L24
However, since it has been re-enabled, it breaks the OWTF tests due to unknown side effects. The issue should be troubleshot in order to ensure that the plugin is working properly.
The plugin has been temporarily disabled in the testing configuration but should be re-enabled as soon as possible. | non_defect | session management schema owtf sm now breaks owtf tests the plugin session management schema owtf sm is currently breaking the owtf tests since forever the plugin was actually doing nothing return but it has been re enabled with however since it has been re enabled it breaks the owtf tests due to unknown side effects the issue should be troubleshooted in order to ensure that the plugin is working properly the plugin has been temporarily disabled in the testing configuration but should be re enabled as soon as possible | 0 |
97,628 | 28,374,887,198 | IssuesEvent | 2023-04-12 19:59:49 | MicrosoftDocs/powerbi-docs | https://api.github.com/repos/MicrosoftDocs/powerbi-docs | closed | Confusing text, "When you want the data in your Power BI report and in your Report Builder report to be the same, " | powerbi/svc Pri2 report-builder/subsvc | What is a _Report Builder report_? As far as I know, there are two types of Power BI reports, Standard and Paginated. Which Power BI report type is it referring to here?
---
#### Document Details
⚠ *Do not edit this section. It is required for learn.microsoft.com ➟ GitHub issue linking.*
* ID: ed0cd82d-e7c4-4b5b-700f-1152a2a6bf38
* Version Independent ID: b2d4aeb5-1a98-ad43-4cf6-9539d9073600
* Content: [Create a paginated report based on a Power BI shared dataset - Power BI](https://learn.microsoft.com/en-us/power-bi/paginated-reports/report-builder-shared-datasets)
* Content Source: [powerbi-docs/paginated-reports/report-builder-shared-datasets.md](https://github.com/MicrosoftDocs/powerbi-docs/blob/main/powerbi-docs/paginated-reports/report-builder-shared-datasets.md)
* Service: **powerbi**
* Sub-service: **report-builder**
* GitHub Login: @maggiesMSFT
* Microsoft Alias: **maggies** | 1.0 | Confusing text, "When you want the data in your Power BI report and in your Report Builder report to be the same, " - What is a _Report Builder report_? As far as I know, there are two types of Power BI reports, Standard and Paginated. Which Power BI report type is it referring to here?
---
#### Document Details
⚠ *Do not edit this section. It is required for learn.microsoft.com ➟ GitHub issue linking.*
* ID: ed0cd82d-e7c4-4b5b-700f-1152a2a6bf38
* Version Independent ID: b2d4aeb5-1a98-ad43-4cf6-9539d9073600
* Content: [Create a paginated report based on a Power BI shared dataset - Power BI](https://learn.microsoft.com/en-us/power-bi/paginated-reports/report-builder-shared-datasets)
* Content Source: [powerbi-docs/paginated-reports/report-builder-shared-datasets.md](https://github.com/MicrosoftDocs/powerbi-docs/blob/main/powerbi-docs/paginated-reports/report-builder-shared-datasets.md)
* Service: **powerbi**
* Sub-service: **report-builder**
* GitHub Login: @maggiesMSFT
* Microsoft Alias: **maggies** | non_defect | confusing text when you want the data in your power bi report and in your report builder report to be the same what is a report builder report and as far as i know there are two types of power bi reports standard and paginated which power bi report type is it referring to here document details ⚠ do not edit this section it is required for learn microsoft com ➟ github issue linking id version independent id content content source service powerbi sub service report builder github login maggiesmsft microsoft alias maggies | 0 |
107,244 | 16,751,740,813 | IssuesEvent | 2021-06-12 02:01:39 | turkdevops/graphql-tools | https://api.github.com/repos/turkdevops/graphql-tools | opened | CVE-2021-23386 (Medium) detected in dns-packet-1.3.1.tgz | security vulnerability | ## CVE-2021-23386 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>dns-packet-1.3.1.tgz</b></summary>
<p>An abstract-encoding compliant module for encoding / decoding DNS packets</p>
<p>Library home page: <a href="https://registry.npmjs.org/dns-packet/-/dns-packet-1.3.1.tgz">https://registry.npmjs.org/dns-packet/-/dns-packet-1.3.1.tgz</a></p>
<p>Path to dependency file: graphql-tools/website/package.json</p>
<p>Path to vulnerable library: graphql-tools/website/node_modules/dns-packet/package.json,graphql-tools/docs/node_modules/dns-packet/package.json</p>
<p>
Dependency Hierarchy:
- gatsby-2.20.18.tgz (Root Library)
- webpack-dev-server-3.10.3.tgz
- bonjour-3.5.0.tgz
- multicast-dns-6.2.3.tgz
- :x: **dns-packet-1.3.1.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/turkdevops/graphql-tools/commit/9314ebf95bf01bdeaeac7c0cb1fed8e1ad967dc4">9314ebf95bf01bdeaeac7c0cb1fed8e1ad967dc4</a></p>
<p>Found in base branch: <b>v14</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
This affects the package dns-packet before 5.2.2. It creates buffers with allocUnsafe and does not always fill them before forming network packets. This can expose internal application memory over unencrypted network when querying crafted invalid domain names.
<p>Publish Date: 2021-05-20
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-23386>CVE-2021-23386</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>6.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: Low
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: None
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-23386">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-23386</a></p>
<p>Release Date: 2021-05-20</p>
<p>Fix Resolution: dns-packet - 5.2.2</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | True | CVE-2021-23386 (Medium) detected in dns-packet-1.3.1.tgz - ## CVE-2021-23386 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>dns-packet-1.3.1.tgz</b></summary>
<p>An abstract-encoding compliant module for encoding / decoding DNS packets</p>
<p>Library home page: <a href="https://registry.npmjs.org/dns-packet/-/dns-packet-1.3.1.tgz">https://registry.npmjs.org/dns-packet/-/dns-packet-1.3.1.tgz</a></p>
<p>Path to dependency file: graphql-tools/website/package.json</p>
<p>Path to vulnerable library: graphql-tools/website/node_modules/dns-packet/package.json,graphql-tools/docs/node_modules/dns-packet/package.json</p>
<p>
Dependency Hierarchy:
- gatsby-2.20.18.tgz (Root Library)
- webpack-dev-server-3.10.3.tgz
- bonjour-3.5.0.tgz
- multicast-dns-6.2.3.tgz
- :x: **dns-packet-1.3.1.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/turkdevops/graphql-tools/commit/9314ebf95bf01bdeaeac7c0cb1fed8e1ad967dc4">9314ebf95bf01bdeaeac7c0cb1fed8e1ad967dc4</a></p>
<p>Found in base branch: <b>v14</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
This affects the package dns-packet before 5.2.2. It creates buffers with allocUnsafe and does not always fill them before forming network packets. This can expose internal application memory over unencrypted network when querying crafted invalid domain names.
<p>Publish Date: 2021-05-20
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-23386>CVE-2021-23386</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>6.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: Low
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: None
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-23386">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-23386</a></p>
<p>Release Date: 2021-05-20</p>
<p>Fix Resolution: dns-packet - 5.2.2</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | non_defect | cve medium detected in dns packet tgz cve medium severity vulnerability vulnerable library dns packet tgz an abstract encoding compliant module for encoding decoding dns packets library home page a href path to dependency file graphql tools website package json path to vulnerable library graphql tools website node modules dns packet package json graphql tools docs node modules dns packet package json dependency hierarchy gatsby tgz root library webpack dev server tgz bonjour tgz multicast dns tgz x dns packet tgz vulnerable library found in head commit a href found in base branch vulnerability details this affects the package dns packet before it creates buffers with allocunsafe and does not always fill them before forming network packets this can expose internal application memory over unencrypted network when querying crafted invalid domain names publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required low user interaction none scope unchanged impact metrics confidentiality impact high integrity impact none availability impact none for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution dns packet step up your open source security game with whitesource | 0 |
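The dns-packet advisory above describes buffers created with `allocUnsafe` and only partially filled before being sent. The package is JavaScript, but the mechanism can be simulated in Python; the stale buffer pool below is a contrived stand-in for recycled process memory, not anything the real package does:

```python
# "Uninitialized" allocation simulated by handing out stale bytes from a
# reused pool, the way Node's Buffer.allocUnsafe can return recycled memory.
_stale_pool = bytearray(b"SECRET-SESSION-TOKEN" * 4)


def alloc_unsafe(n: int) -> bytearray:
    return bytearray(_stale_pool[:n])  # old contents, not zeroed


def alloc_safe(n: int) -> bytearray:
    return bytearray(n)  # zero-filled


def encode_name(name: bytes, alloc) -> bytes:
    buf = alloc(32)
    buf[: len(name)] = name  # only the prefix gets written...
    return bytes(buf)        # ...but the whole buffer goes on the wire


leaky = encode_name(b"a.example", alloc_unsafe)  # trailing bytes leak memory
fixed = encode_name(b"a.example", alloc_safe)    # trailing bytes are zeros
```

The suggested upgrade (dns-packet 5.2.2) corresponds, roughly, to the zero-filled pattern: every byte is filled or zeroed before the packet leaves the process.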
116,763 | 9,882,951,268 | IssuesEvent | 2019-06-24 18:10:05 | astropy/astropy | https://api.github.com/repos/astropy/astropy | opened | TST: Mirror pandas intersphinx | Docs testing | `html-doc` job on CircleCI is failing like this: https://circleci.com/gh/astropy/astropy/35218
```
failed to reach any of the inventories with the following issues:
intersphinx inventory 'http://pandas.pydata.org/pandas-docs/stable/objects.inv' not fetchable
due to <class 'requests.exceptions.ConnectionError'>:
HTTPConnectionPool(host='pandas.pydata.org', port=80):
Max retries exceeded with url:
/pandas-docs/stable/objects.inv (Caused by NewConnectionError...
... [Errno 110] Connection timed out')
Sphinx Documentation subprocess failed with return code 2
Exited with code 2
```
@Cadair thinks it is related to https://news.ycombinator.com/item?id=20262214 . @bsipocz suggested that we can mirror it as a workaround.
| 1.0 | TST: Mirror pandas intersphinx - `html-doc` job on CircleCI is failing like this: https://circleci.com/gh/astropy/astropy/35218
```
failed to reach any of the inventories with the following issues:
intersphinx inventory 'http://pandas.pydata.org/pandas-docs/stable/objects.inv' not fetchable
due to <class 'requests.exceptions.ConnectionError'>:
HTTPConnectionPool(host='pandas.pydata.org', port=80):
Max retries exceeded with url:
/pandas-docs/stable/objects.inv (Caused by NewConnectionError...
... [Errno 110] Connection timed out')
Sphinx Documentation subprocess failed with return code 2
Exited with code 2
```
@Cadair thinks it is related to https://news.ycombinator.com/item?id=20262214 . @bsipocz suggested that we can mirror it as a workaround.
| non_defect | tst mirror pandas intersphinx html doc job on circleci is failing like this failed to reach any of the inventories with the following issues intersphinx inventory not fetchable due to httpconnectionpool host pandas pydata org port max retries exceeded with url pandas docs stable objects inv caused by newconnectionerror connection timed out sphinx documentation subprocess failed with return code exited with code cadair thinks it is related to bsipocz suggested that we can mirror it as a workaround | 0 |
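One concrete form of the mirroring workaround suggested in the row above, sketched as a Sphinx `conf.py` fragment. Intersphinx accepts a tuple of alternative inventory locations tried in order (as I understand its configuration); the local mirror path is hypothetical:

```python
# conf.py fragment: try the upstream objects.inv first (None means the
# default location under the target URL), then fall back to a mirror
# committed alongside the docs (hypothetical path).
intersphinx_mapping = {
    "pandas": (
        "https://pandas.pydata.org/pandas-docs/stable/",
        (None, "_intersphinx/pandas-objects.inv"),
    ),
}
```

With this in place, a pandas.pydata.org outage degrades to a stale local inventory instead of a failed docs build.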
105,258 | 13,172,543,014 | IssuesEvent | 2020-08-11 18:37:39 | Opentrons/opentrons | https://api.github.com/repos/Opentrons/opentrons | closed | PD refactor: _mmFromBottom instead of _tip_position for new delay fields | :spider: SPDDRS protocol designer refactor | # Overview
Oops, we mis-named a field! These two:
```
aspirate_delay_tip_position
dispense_delay_tip_position
```
should be renamed:
```
aspirate_delay_mmFromBottom
dispense_delay_mmFromBottom
```
so that they all match the `_mmFromBottom` convention we have for the other fields (as specified in https://github.com/Opentrons/opentrons/issues/6004)
# Implementation notes
- Make sure the `..._tip_position` fields didn't exist in previous PD release | 1.0 | PD refactor: _mmFromBottom instead of _tip_position for new delay fields - # Overview
Oops, we mis-named a field! These two:
```
aspirate_delay_tip_position
dispense_delay_tip_position
```
should be renamed:
```
aspirate_delay_mmFromBottom
dispense_delay_mmFromBottom
```
so that they all match the `_mmFromBottom` convention we have for the other fields (as specified in https://github.com/Opentrons/opentrons/issues/6004)
# Implementation notes
- Make sure the `..._tip_position` fields didn't exist in previous PD release | non_defect | pd refactor mmfrombottom instead of tip position for new delay fields overview oops we mis named a field these two aspirate delay tip position dispense delay tip position should be renamed aspirate delay mmfrombottom dispense delay mmfrombottom so that they all match the mmfrombottom convention have for the other fields as specified in implementation notes make sure the tip position fields didn t exist in previous pd release | 0 |
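If the mis-named fields had shipped in a release, saved protocols would need a rename pass on load, which is exactly what the implementation note above checks. A minimal sketch of such a migration (helper name and form shape are hypothetical):

```python
# Map from the mis-named fields to the `_mmFromBottom` convention.
RENAMES = {
    "aspirate_delay_tip_position": "aspirate_delay_mmFromBottom",
    "dispense_delay_tip_position": "dispense_delay_mmFromBottom",
}


def migrate_form(form: dict) -> dict:
    """Return a copy of a saved form with the delay fields renamed."""
    return {RENAMES.get(key, key): value for key, value in form.items()}


migrated = migrate_form({"aspirate_delay_tip_position": 1.5, "volume": 30})
```

Since the issue concludes the old names never shipped, renaming them in the source is enough and no such migration is actually required.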
53,479 | 13,261,730,522 | IssuesEvent | 2020-08-20 20:25:55 | icecube-trac/tix4 | https://api.github.com/repos/icecube-trac/tix4 | closed | [cvmfs] gfortran issue and nugen (Trac #1501) | Migrated from Trac cvmfs defect | I'm getting the following error when attempting to run nugen:
Loading neutrino-generator................................FATAL (I3Tray): Failed to load library (<type 'exceptions.RuntimeError'>): dlopen() dynamic loading error: /cvmfs/icecube.opensciencegrid.org/py2-v2/Ubuntu_14_x86_64/tools/gfortran/libgfortran.so.3: version `GFORTRAN_1.4' not found (required by /cvmfs/icecube.opensciencegrid.org/py2-v1/Ubuntu_14_x86_64/i3ports/lib/libPythia6.so) (I3Tray.py:34 in load)
Traceback (most recent call last):
File "/cvmfs/icecube.opensciencegrid.org/py2-v1/Ubuntu_14_x86_64/metaprojects/simulation/V04-00-12/neutrino-generator/resources/scripts/NuGen.py", line 13, in <module>
from icecube import icetray, dataclasses, phys_services, sim_services, dataio, neutrino_generator
File "/cvmfs/icecube.opensciencegrid.org/py2-v1/Ubuntu_14_x86_64/metaprojects/simulation/V04-00-12/lib/icecube/neutrino_generator/__init__.py", line 6, in <module>
I3Tray.load("neutrino-generator")
File "/cvmfs/icecube.opensciencegrid.org/py2-v1/Ubuntu_14_x86_64/metaprojects/simulation/V04-00-12/lib/I3Tray.py", line 34, in load
icetray.logging.log_fatal("Failed to load library (%s): %s" % (sys.exc_info()[0], sys.exc_info()[1]), "I3Tray")
File "/cvmfs/icecube.opensciencegrid.org/py2-v1/Ubuntu_14_x86_64/metaprojects/simulation/V04-00-12/lib/icecube/icetray/i3logging.py", line 150, in log_fatal
raise RuntimeError(message + " (in " + tb[2] + ")")
RuntimeError: Failed to load library (<type 'exceptions.RuntimeError'>): dlopen() dynamic loading error: /cvmfs/icecube.opensciencegrid.org/py2-v2/Ubuntu_14_x86_64/tools/gfortran/libgfortran.so.3: version `GFORTRAN_1.4' not found (required by /cvmfs/icecube.opensciencegrid.org/py2-v1/Ubuntu_14_x86_64/i3ports/lib/libPythia6.so) (in load)
Seems like an inconsistent toolset where the pythia lib was built against a different version of gfortran than was bundled with the distribution.
<details>
<summary><em>Migrated from <a href="https://code.icecube.wisc.edu/projects/icecube/ticket/1501">https://code.icecube.wisc.edu/projects/icecube/ticket/1501</a>, reported by olivasand owned by david.schultz</em></summary>
<p>
```json
{
"status": "closed",
"changetime": "2016-03-18T21:14:15",
"_ts": "1458335655846260",
"description": "I'm getting th following error when attempting to run nugen:\n\nLoading neutrino-generator................................FATAL (I3Tray): Failed to load library (<type 'exceptions.RuntimeError'>): dlopen() dynamic loading error: /cvmfs/icecube.opensciencegrid.org/py2-v2/Ubuntu_14_x86_64/tools/gfortran/libgfortran.so.3: version `GFORTRAN_1.4' not found (required by /cvmfs/icecube.opensciencegrid.org/py2-v1/Ubuntu_14_x86_64/i3ports/lib/libPythia6.so) (I3Tray.py:34 in load)\nTraceback (most recent call last):\n File \"/cvmfs/icecube.opensciencegrid.org/py2-v1/Ubuntu_14_x86_64/metaprojects/simulation/V04-00-12/neutrino-generator/resources/scripts/NuGen.py\", line 13, in <module>\n from icecube import icetray, dataclasses, phys_services, sim_services, dataio, neutrino_generator\n File \"/cvmfs/icecube.opensciencegrid.org/py2-v1/Ubuntu_14_x86_64/metaprojects/simulation/V04-00-12/lib/icecube/neutrino_generator/__init__.py\", line 6, in <module>\n I3Tray.load(\"neutrino-generator\")\n File \"/cvmfs/icecube.opensciencegrid.org/py2-v1/Ubuntu_14_x86_64/metaprojects/simulation/V04-00-12/lib/I3Tray.py\", line 34, in load\n icetray.logging.log_fatal(\"Failed to load library (%s): %s\" % (sys.exc_info()[0], sys.exc_info()[1]), \"I3Tray\")\n File \"/cvmfs/icecube.opensciencegrid.org/py2-v1/Ubuntu_14_x86_64/metaprojects/simulation/V04-00-12/lib/icecube/icetray/i3logging.py\", line 150, in log_fatal\n raise RuntimeError(message + \" (in \" + tb[2] + \")\")\nRuntimeError: Failed to load library (<type 'exceptions.RuntimeError'>): dlopen() dynamic loading error: /cvmfs/icecube.opensciencegrid.org/py2-v2/Ubuntu_14_x86_64/tools/gfortran/libgfortran.so.3: version `GFORTRAN_1.4' not found (required by /cvmfs/icecube.opensciencegrid.org/py2-v1/Ubuntu_14_x86_64/i3ports/lib/libPythia6.so) (in load)\n\nSeems like an inconsistent toolset where the pythia lib was built against a different version of gfortran than was bundled with the distribution.\n",
"reporter": "olivas",
"cc": "",
"resolution": "invalid",
"time": "2016-01-06T23:24:29",
"component": "cvmfs",
"summary": "[cvmfs] gfortran issue and nugen",
"priority": "major",
"keywords": "",
"milestone": "",
"owner": "david.schultz",
"type": "defect"
}
```
</p>
</details>
| 1.0 | [cvmfs] gfortran issue and nugen (Trac #1501) - I'm getting the following error when attempting to run nugen:
Loading neutrino-generator................................FATAL (I3Tray): Failed to load library (<type 'exceptions.RuntimeError'>): dlopen() dynamic loading error: /cvmfs/icecube.opensciencegrid.org/py2-v2/Ubuntu_14_x86_64/tools/gfortran/libgfortran.so.3: version `GFORTRAN_1.4' not found (required by /cvmfs/icecube.opensciencegrid.org/py2-v1/Ubuntu_14_x86_64/i3ports/lib/libPythia6.so) (I3Tray.py:34 in load)
Traceback (most recent call last):
File "/cvmfs/icecube.opensciencegrid.org/py2-v1/Ubuntu_14_x86_64/metaprojects/simulation/V04-00-12/neutrino-generator/resources/scripts/NuGen.py", line 13, in <module>
from icecube import icetray, dataclasses, phys_services, sim_services, dataio, neutrino_generator
File "/cvmfs/icecube.opensciencegrid.org/py2-v1/Ubuntu_14_x86_64/metaprojects/simulation/V04-00-12/lib/icecube/neutrino_generator/__init__.py", line 6, in <module>
I3Tray.load("neutrino-generator")
File "/cvmfs/icecube.opensciencegrid.org/py2-v1/Ubuntu_14_x86_64/metaprojects/simulation/V04-00-12/lib/I3Tray.py", line 34, in load
icetray.logging.log_fatal("Failed to load library (%s): %s" % (sys.exc_info()[0], sys.exc_info()[1]), "I3Tray")
File "/cvmfs/icecube.opensciencegrid.org/py2-v1/Ubuntu_14_x86_64/metaprojects/simulation/V04-00-12/lib/icecube/icetray/i3logging.py", line 150, in log_fatal
raise RuntimeError(message + " (in " + tb[2] + ")")
RuntimeError: Failed to load library (<type 'exceptions.RuntimeError'>): dlopen() dynamic loading error: /cvmfs/icecube.opensciencegrid.org/py2-v2/Ubuntu_14_x86_64/tools/gfortran/libgfortran.so.3: version `GFORTRAN_1.4' not found (required by /cvmfs/icecube.opensciencegrid.org/py2-v1/Ubuntu_14_x86_64/i3ports/lib/libPythia6.so) (in load)
Seems like an inconsistent toolset where the pythia lib was built against a different version of gfortran than was bundled with the distribution.
<details>
<summary><em>Migrated from <a href="https://code.icecube.wisc.edu/projects/icecube/ticket/1501">https://code.icecube.wisc.edu/projects/icecube/ticket/1501</a>, reported by olivasand owned by david.schultz</em></summary>
<p>
```json
{
"status": "closed",
"changetime": "2016-03-18T21:14:15",
"_ts": "1458335655846260",
"description": "I'm getting th following error when attempting to run nugen:\n\nLoading neutrino-generator................................FATAL (I3Tray): Failed to load library (<type 'exceptions.RuntimeError'>): dlopen() dynamic loading error: /cvmfs/icecube.opensciencegrid.org/py2-v2/Ubuntu_14_x86_64/tools/gfortran/libgfortran.so.3: version `GFORTRAN_1.4' not found (required by /cvmfs/icecube.opensciencegrid.org/py2-v1/Ubuntu_14_x86_64/i3ports/lib/libPythia6.so) (I3Tray.py:34 in load)\nTraceback (most recent call last):\n File \"/cvmfs/icecube.opensciencegrid.org/py2-v1/Ubuntu_14_x86_64/metaprojects/simulation/V04-00-12/neutrino-generator/resources/scripts/NuGen.py\", line 13, in <module>\n from icecube import icetray, dataclasses, phys_services, sim_services, dataio, neutrino_generator\n File \"/cvmfs/icecube.opensciencegrid.org/py2-v1/Ubuntu_14_x86_64/metaprojects/simulation/V04-00-12/lib/icecube/neutrino_generator/__init__.py\", line 6, in <module>\n I3Tray.load(\"neutrino-generator\")\n File \"/cvmfs/icecube.opensciencegrid.org/py2-v1/Ubuntu_14_x86_64/metaprojects/simulation/V04-00-12/lib/I3Tray.py\", line 34, in load\n icetray.logging.log_fatal(\"Failed to load library (%s): %s\" % (sys.exc_info()[0], sys.exc_info()[1]), \"I3Tray\")\n File \"/cvmfs/icecube.opensciencegrid.org/py2-v1/Ubuntu_14_x86_64/metaprojects/simulation/V04-00-12/lib/icecube/icetray/i3logging.py\", line 150, in log_fatal\n raise RuntimeError(message + \" (in \" + tb[2] + \")\")\nRuntimeError: Failed to load library (<type 'exceptions.RuntimeError'>): dlopen() dynamic loading error: /cvmfs/icecube.opensciencegrid.org/py2-v2/Ubuntu_14_x86_64/tools/gfortran/libgfortran.so.3: version `GFORTRAN_1.4' not found (required by /cvmfs/icecube.opensciencegrid.org/py2-v1/Ubuntu_14_x86_64/i3ports/lib/libPythia6.so) (in load)\n\nSeems like an inconsistent toolset where the pythia lib was built against a different version of gfortran than was bundled with the distribution.\n",
"reporter": "olivas",
"cc": "",
"resolution": "invalid",
"time": "2016-01-06T23:24:29",
"component": "cvmfs",
"summary": "[cvmfs] gfortran issue and nugen",
"priority": "major",
"keywords": "",
"milestone": "",
"owner": "david.schultz",
"type": "defect"
}
```
</p>
</details>
| defect | gfortran issue and nugen trac i m getting th following error when attempting to run nugen loading neutrino generator fatal failed to load library dlopen dynamic loading error cvmfs icecube opensciencegrid org ubuntu tools gfortran libgfortran so version gfortran not found required by cvmfs icecube opensciencegrid org ubuntu lib so py in load traceback most recent call last file cvmfs icecube opensciencegrid org ubuntu metaprojects simulation neutrino generator resources scripts nugen py line in from icecube import icetray dataclasses phys services sim services dataio neutrino generator file cvmfs icecube opensciencegrid org ubuntu metaprojects simulation lib icecube neutrino generator init py line in load neutrino generator file cvmfs icecube opensciencegrid org ubuntu metaprojects simulation lib py line in load icetray logging log fatal failed to load library s s sys exc info sys exc info file cvmfs icecube opensciencegrid org ubuntu metaprojects simulation lib icecube icetray py line in log fatal raise runtimeerror message in tb runtimeerror failed to load library dlopen dynamic loading error cvmfs icecube opensciencegrid org ubuntu tools gfortran libgfortran so version gfortran not found required by cvmfs icecube opensciencegrid org ubuntu lib so in load seems like an inconsistent toolset where the pythia lib was built against a different version of gfortran than was bundled with the distribution migrated from json status closed changetime ts description i m getting th following error when attempting to run nugen n nloading neutrino generator fatal failed to load library dlopen dynamic loading error cvmfs icecube opensciencegrid org ubuntu tools gfortran libgfortran so version gfortran not found required by cvmfs icecube opensciencegrid org ubuntu lib so py in load ntraceback most recent call last n file cvmfs icecube opensciencegrid org ubuntu metaprojects simulation neutrino generator resources scripts nugen py line in n from icecube import icetray 
dataclasses phys services sim services dataio neutrino generator n file cvmfs icecube opensciencegrid org ubuntu metaprojects simulation lib icecube neutrino generator init py line in n load neutrino generator n file cvmfs icecube opensciencegrid org ubuntu metaprojects simulation lib py line in load n icetray logging log fatal failed to load library s s sys exc info sys exc info n file cvmfs icecube opensciencegrid org ubuntu metaprojects simulation lib icecube icetray py line in log fatal n raise runtimeerror message in tb nruntimeerror failed to load library dlopen dynamic loading error cvmfs icecube opensciencegrid org ubuntu tools gfortran libgfortran so version gfortran not found required by cvmfs icecube opensciencegrid org ubuntu lib so in load n nseems like an inconsistent toolset where the pythia lib was built against a different version of gfortran than was bundled with the distribution n reporter olivas cc resolution invalid time component cvmfs summary gfortran issue and nugen priority major keywords milestone owner david schultz type defect | 1 |
210,147 | 23,737,492,238 | IssuesEvent | 2022-08-31 09:23:10 | NilFoundation/crypto3-blueprint | https://api.github.com/repos/NilFoundation/crypto3-blueprint | opened | Substitute component-oriented selector with gate-oriented | security efficiency undefined behaviour | Component-oriented selector choice is incorrect and may lead to efficiency issues, since we cannot distinguish two instances of one component with different input params (parametrized with different input variables) - such instances will have the same selectors. And if we directly use variables from params instead of using copy constraints (this can happen in small components, for example), it will lead to undefined behaviour - the constraints are different, but the selectors are the same.
The most obvious way to fix this is to switch to gate-oriented selectors. But it doesn't sound like an easy task, since it will require building a gate ID based on its content. Most likely we will do it after implementing a stable math expression type: https://github.com/NilFoundation/crypto3-math/issues/5 .
**Until we close this issue, we must always use copy constraints for params variables!** | True | Substitute component-oriented selector with gate-oriented - Component-oriented selector choice is incorrect and may lead to efficiency issues, since we cannot distinguish two instances of one component with different input params (parametrized with different input variables) - such instances will have the same selectors. And if we directly use variables from params instead of using copy constraints (this can happen in small components, for example), it will lead to undefined behaviour - the constraints are different, but the selectors are the same.
The most obvious way to fix this is to switch to gate-oriented selectors. But it doesn't sound like an easy task, since it will require building a gate ID based on its content. Most likely we will do it after implementing a stable math expression type: https://github.com/NilFoundation/crypto3-math/issues/5 .
**Until we close this issue, we must always use copy constraints for params variables!** | non_defect | substitute component oriented selector with gate oriented component oriented selector choice is incorrect and may lead to potential efficiency issues since we cannot distinguish two instances of one component with different input params parametrized with different input variables such instances will have the same selectors and if we directly use variables from params instead of using copy constraints it may take place in small components for example it will lead to undefined behaviour constraints are different but selectors are the same the most obvious way to fix this is to switch to gate oriented selectors but it doesn t sound like an easy task since it will require building a gate id based on its content most likely we will do it after implementing a stable math expression type until we close this issue we must always use copy constraints for params variables | 0 |
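As a hedged sketch of the fix this issue proposes, the snippet below derives a selector ID from a gate's *content* by hashing its normalized constraint strings, so structurally identical gates deduplicate to one selector while differently-parametrized instances get distinct ones. All names, the string encoding of constraints, and the witness-column notation are hypothetical illustrations, not crypto3's real types.

```python
import hashlib

def gate_id(constraints):
    """Derive a selector ID from gate content (its constraint expressions),
    so identical gates share one selector and gates with different
    constraints never collide (illustrative sketch only)."""
    digest = hashlib.sha256("\n".join(sorted(constraints)).encode()).hexdigest()
    return digest[:16]

# Two instances of the "same" component, parametrized with different input
# variables and used without copy constraints -> different constraints:
inst_a = ["w_0 * w_1 - w_2"]   # uses witness columns 0, 1, 2
inst_b = ["w_3 * w_4 - w_5"]   # same component shape, different inputs

# A component-oriented scheme would give both the same selector; a
# gate-oriented (content-addressed) scheme distinguishes them:
assert gate_id(inst_a) != gate_id(inst_b)
# ...while genuinely identical gates still deduplicate to one selector:
assert gate_id(inst_a) == gate_id(["w_0 * w_1 - w_2"])
```

Sorting the constraint strings before hashing makes the ID insensitive to the order gates are registered in, which is one simple way to get a stable content-based identity.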
53,956 | 13,262,555,611 | IssuesEvent | 2020-08-20 22:02:47 | icecube-trac/tix4 | https://api.github.com/repos/icecube-trac/tix4 | closed | Error in copy-constructor of I3MCTree when adding simprod DetectorSim to the tray (Trac #2387) | Migrated from Trac combo core defect | The following code runs fine if the DetectorSim traysegment is not added after calling the I3MCTree copy-constructor.
Failure can be caused by executing the script with --fail
```text
from icecube import dataclasses as dc
from icecube.icetray import I3Frame
from icecube import dataio, phys_services
from I3Tray import *
from icecube.simprod import segments
from icecube.filterscripts.offlineL2.level2_Reconstruction_SLOP import SLOPLevel2
from argparse import ArgumentParser
parser = ArgumentParser()
parser.add_argument("--fail", action="store_true")
args = parser.parse_args()
gcdFile = '/cvmfs/icecube.opensciencegrid.org/data/GCD/GeoCalibDetectorStatus_IC86.All_Pass3.i3.gz'
tray = I3Tray()
tray.Add("I3Reader", Filename="/data/user/sdharani/signal/22/trial.i3.zst")
def replace(frame):
tree = frame["I3MCTree"]
primary = tree.get_primaries()[0]
particle = dc.I3Particle()
tree.replace(primary.id, particle)
print("Copy construct")
tree = dc.I3MCTree(tree)
print("Done")
del frame["I3MCTree"]
frame.Put("I3MCTree", tree)
print(tree.get_primaries()[0])
tray.Add(replace, Streams=[I3Frame.DAQ])
randomService = phys_services.I3GSLRandomService(
seed = 500,
track_state=True)
tray.AddService("I3GSLRandomServiceFactory","random")
if args.fail:
tray.AddSegment(segments.DetectorSim, "DetectorSim",
RandomService = "I3RandomService",
RunID = 1234,
GCDFile = gcdFile,
InputPESeriesMapName = "I3MCPESeriesMap",
KeepMCHits = True,
SkipNoiseGenerator = False,
FilterTrigger = False,
)
tray.Execute()
```
<details>
<summary><em>Migrated from <a href="https://code.icecube.wisc.edu/projects/icecube/ticket/2387">https://code.icecube.wisc.edu/projects/icecube/ticket/2387</a>, reported by chaackand owned by david.schultz</em></summary>
<p>
```json
{
"status": "closed",
"changetime": "2020-06-24T12:31:42",
"_ts": "1593001902142004",
"description": "The following code runs fine if the DetectorSim traysegment is not added after calling the I3MCTree copy-constructor.\nFailure can be caused by executing the script with --fail\n{{{\nfrom icecube import dataclasses as dc\nfrom icecube.icetray import I3Frame\nfrom icecube import dataio, phys_services\nfrom I3Tray import *\nfrom icecube.simprod import segments\nfrom icecube.filterscripts.offlineL2.level2_Reconstruction_SLOP import SLOPLevel2\n\u200b\nfrom argparse import ArgumentParser\n\u200b\nparser = ArgumentParser()\nparser.add_argument(\"--fail\", action=\"store_true\")\nargs = parser.parse_args()\n\u200b\ngcdFile = '/cvmfs/icecube.opensciencegrid.org/data/GCD/GeoCalibDetectorStatus_IC86.All_Pass3.i3.gz'\ntray = I3Tray()\ntray.Add(\"I3Reader\", Filename=\"/data/user/sdharani/signal/22/trial.i3.zst\")\n\u200b\ndef replace(frame):\n\u200b\n tree = frame[\"I3MCTree\"]\n primary = tree.get_primaries()[0]\n particle = dc.I3Particle()\n tree.replace(primary.id, particle)\n print(\"Copy construct\")\n tree = dc.I3MCTree(tree)\n print(\"Done\")\n del frame[\"I3MCTree\"]\n frame.Put(\"I3MCTree\", tree)\n print(tree.get_primaries()[0])\n\u200b\ntray.Add(replace, Streams=[I3Frame.DAQ])\nrandomService = phys_services.I3GSLRandomService(\n seed = 500,\n track_state=True)\n\u200b\ntray.AddService(\"I3GSLRandomServiceFactory\",\"random\")\nif args.fail:\n tray.AddSegment(segments.DetectorSim, \"DetectorSim\",\n RandomService = \"I3RandomService\",\n RunID = 1234,\n GCDFile = gcdFile,\n InputPESeriesMapName = \"I3MCPESeriesMap\",\n KeepMCHits = True,\n SkipNoiseGenerator = False,\n FilterTrigger = False,\n )\ntray.Execute()\n}}}\n",
"reporter": "chaack",
"cc": "",
"resolution": "fixed",
"time": "2019-12-18T16:41:25",
"component": "combo core",
"summary": "Error in copy-constructor of I3MCTree when adding simprod DetectorSim to the tray",
"priority": "blocker",
"keywords": "",
"milestone": "Autumnal Equinox 2020",
"owner": "david.schultz",
"type": "defect"
}
```
</p>
</details>
| 1.0 | Error in copy-constructor of I3MCTree when adding simprod DetectorSim to the tray (Trac #2387) - The following code runs fine if the DetectorSim traysegment is not added after calling the I3MCTree copy-constructor.
Failure can be caused by executing the script with --fail
```text
from icecube import dataclasses as dc
from icecube.icetray import I3Frame
from icecube import dataio, phys_services
from I3Tray import *
from icecube.simprod import segments
from icecube.filterscripts.offlineL2.level2_Reconstruction_SLOP import SLOPLevel2
from argparse import ArgumentParser
parser = ArgumentParser()
parser.add_argument("--fail", action="store_true")
args = parser.parse_args()
gcdFile = '/cvmfs/icecube.opensciencegrid.org/data/GCD/GeoCalibDetectorStatus_IC86.All_Pass3.i3.gz'
tray = I3Tray()
tray.Add("I3Reader", Filename="/data/user/sdharani/signal/22/trial.i3.zst")
def replace(frame):
tree = frame["I3MCTree"]
primary = tree.get_primaries()[0]
particle = dc.I3Particle()
tree.replace(primary.id, particle)
print("Copy construct")
tree = dc.I3MCTree(tree)
print("Done")
del frame["I3MCTree"]
frame.Put("I3MCTree", tree)
print(tree.get_primaries()[0])
tray.Add(replace, Streams=[I3Frame.DAQ])
randomService = phys_services.I3GSLRandomService(
seed = 500,
track_state=True)
tray.AddService("I3GSLRandomServiceFactory","random")
if args.fail:
tray.AddSegment(segments.DetectorSim, "DetectorSim",
RandomService = "I3RandomService",
RunID = 1234,
GCDFile = gcdFile,
InputPESeriesMapName = "I3MCPESeriesMap",
KeepMCHits = True,
SkipNoiseGenerator = False,
FilterTrigger = False,
)
tray.Execute()
```
<details>
<summary><em>Migrated from <a href="https://code.icecube.wisc.edu/projects/icecube/ticket/2387">https://code.icecube.wisc.edu/projects/icecube/ticket/2387</a>, reported by chaackand owned by david.schultz</em></summary>
<p>
```json
{
"status": "closed",
"changetime": "2020-06-24T12:31:42",
"_ts": "1593001902142004",
"description": "The following code runs fine if the DetectorSim traysegment is not added after calling the I3MCTree copy-constructor.\nFailure can be caused by executing the script with --fail\n{{{\nfrom icecube import dataclasses as dc\nfrom icecube.icetray import I3Frame\nfrom icecube import dataio, phys_services\nfrom I3Tray import *\nfrom icecube.simprod import segments\nfrom icecube.filterscripts.offlineL2.level2_Reconstruction_SLOP import SLOPLevel2\n\u200b\nfrom argparse import ArgumentParser\n\u200b\nparser = ArgumentParser()\nparser.add_argument(\"--fail\", action=\"store_true\")\nargs = parser.parse_args()\n\u200b\ngcdFile = '/cvmfs/icecube.opensciencegrid.org/data/GCD/GeoCalibDetectorStatus_IC86.All_Pass3.i3.gz'\ntray = I3Tray()\ntray.Add(\"I3Reader\", Filename=\"/data/user/sdharani/signal/22/trial.i3.zst\")\n\u200b\ndef replace(frame):\n\u200b\n tree = frame[\"I3MCTree\"]\n primary = tree.get_primaries()[0]\n particle = dc.I3Particle()\n tree.replace(primary.id, particle)\n print(\"Copy construct\")\n tree = dc.I3MCTree(tree)\n print(\"Done\")\n del frame[\"I3MCTree\"]\n frame.Put(\"I3MCTree\", tree)\n print(tree.get_primaries()[0])\n\u200b\ntray.Add(replace, Streams=[I3Frame.DAQ])\nrandomService = phys_services.I3GSLRandomService(\n seed = 500,\n track_state=True)\n\u200b\ntray.AddService(\"I3GSLRandomServiceFactory\",\"random\")\nif args.fail:\n tray.AddSegment(segments.DetectorSim, \"DetectorSim\",\n RandomService = \"I3RandomService\",\n RunID = 1234,\n GCDFile = gcdFile,\n InputPESeriesMapName = \"I3MCPESeriesMap\",\n KeepMCHits = True,\n SkipNoiseGenerator = False,\n FilterTrigger = False,\n )\ntray.Execute()\n}}}\n",
"reporter": "chaack",
"cc": "",
"resolution": "fixed",
"time": "2019-12-18T16:41:25",
"component": "combo core",
"summary": "Error in copy-constructor of I3MCTree when adding simprod DetectorSim to the tray",
"priority": "blocker",
"keywords": "",
"milestone": "Autumnal Equinox 2020",
"owner": "david.schultz",
"type": "defect"
}
```
</p>
</details>
| defect | error in copy constructor of when adding simprod detectorsim to the tray trac the following code runs fine if the detectorsim traysegment is not added after calling the copy constructor failure can be caused by executing the script with fail text from icecube import dataclasses as dc from icecube icetray import from icecube import dataio phys services from import from icecube simprod import segments from icecube filterscripts reconstruction slop import from argparse import argumentparser parser argumentparser parser add argument fail action store true args parser parse args gcdfile cvmfs icecube opensciencegrid org data gcd geocalibdetectorstatus all gz tray tray add filename data user sdharani signal trial zst def replace frame tree frame primary tree get primaries particle dc tree replace primary id particle print copy construct tree dc tree print done del frame frame put tree print tree get primaries tray add replace streams randomservice phys services seed track state true tray addservice random if args fail tray addsegment segments detectorsim detectorsim randomservice runid gcdfile gcdfile inputpeseriesmapname keepmchits true skipnoisegenerator false filtertrigger false tray execute migrated from json status closed changetime ts description the following code runs fine if the detectorsim traysegment is not added after calling the copy constructor nfailure can be caused by executing the script with fail n nfrom icecube import dataclasses as dc nfrom icecube icetray import nfrom icecube import dataio phys services nfrom import nfrom icecube simprod import segments nfrom icecube filterscripts reconstruction slop import n nfrom argparse import argumentparser n nparser argumentparser nparser add argument fail action store true nargs parser parse args n ngcdfile cvmfs icecube opensciencegrid org data gcd geocalibdetectorstatus all gz ntray ntray add filename data user sdharani signal trial zst n ndef replace frame n n tree frame n primary tree get 
primaries n particle dc n tree replace primary id particle n print copy construct n tree dc tree n print done n del frame n frame put tree n print tree get primaries n ntray add replace streams nrandomservice phys services n seed n track state true n ntray addservice random nif args fail n tray addsegment segments detectorsim detectorsim n randomservice n runid n gcdfile gcdfile n inputpeseriesmapname n keepmchits true n skipnoisegenerator false n filtertrigger false n ntray execute n n reporter chaack cc resolution fixed time component combo core summary error in copy constructor of when adding simprod detectorsim to the tray priority blocker keywords milestone autumnal equinox owner david schultz type defect | 1 |
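The class of bug reported here, a copy constructor that yields a tree still entangled with the original's internal state, can be illustrated in a few lines of Python. This is a toy sketch of the failure mode, not the actual I3MCTree implementation.

```python
import copy

class Node:
    def __init__(self, name, children=None):
        self.name = name
        self.children = children or []

class Tree:
    """Toy tree with a naive 'copy constructor' that only copies the
    top-level reference -- the failure mode a C++ copy constructor can hit
    when it copies pointers instead of the pointed-to state."""
    def __init__(self, other=None):
        self.root = other.root if other is not None else None  # shared!

    def deep_clone(self):
        clone = Tree()
        clone.root = copy.deepcopy(self.root)  # fully independent copy
        return clone

t1 = Tree(); t1.root = Node("primary", [Node("secondary")])
t2 = Tree(t1)             # naive copy: both trees share the same nodes
t2.root.name = "replaced"
assert t1.root.name == "replaced"   # original mutated through the "copy"

t3 = t1.deep_clone()      # independent copy
t3.root.name = "safe"
assert t1.root.name == "replaced"   # original untouched this time
```

The script in the report replaces a primary, copy-constructs the tree, and re-inserts it into the frame; if the copy shares state the way `Tree(t1)` does above, later consumers such as DetectorSim can observe a corrupted tree.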
68,699 | 21,788,308,301 | IssuesEvent | 2022-05-14 13:55:05 | hazelcast/hazelcast | https://api.github.com/repos/hazelcast/hazelcast | opened | With CP Subsystem disabled (unsafe mode) cannot get hold of the cp subsystem mgmt service | Type: Defect | Using version 5.1.1.
Have CPSubsystem disabled since my configuration is as follows :
final CPSubsystemConfig cpSubsystemConfig = new CPSubsystemConfig();
cpSubsystemConfig.setCPMemberCount(0);
cpSubsystemConfig.setGroupSize(0);
cpSubsystemConfig.setSessionTimeToLiveSeconds(30);
cpSubsystemConfig.setSessionHeartbeatIntervalSeconds(5);
cpSubsystemConfig.setMissingCPMemberAutoRemovalSeconds(60);
config.setCPSubsystemConfig(cpSubsystemConfig);
When I do the following
hazelcastInstance.getCPSubsystem().getCPSessionManagementService()
I get an exception because the subsystem is not enabled.
But with the subsystem disabled, I can still end up in a situation where I want to force-stop a session if I do not want to wait for the sessionTimeToLiveSeconds when, say, a session owner dies. Due to the above restriction this is not possible, and I have to wait for the timeout for the session to be removed.
We need ability to forceStopSession as we do not want to set the sessionTTLSeconds to a too small a value, but we want the flexibility to force stop when we are sure a node is down.
Happens with Java 11, Windows and Linux. Project running in latest spring boot. | 1.0 | With CP Subsystem disabled (unsafe mode) cannot get hold of the cp subsystem mgmt service - Using version 5.1.1.
Have CPSubsystem disabled since my configuration is as follows :
final CPSubsystemConfig cpSubsystemConfig = new CPSubsystemConfig();
cpSubsystemConfig.setCPMemberCount(0);
cpSubsystemConfig.setGroupSize(0);
cpSubsystemConfig.setSessionTimeToLiveSeconds(30);
cpSubsystemConfig.setSessionHeartbeatIntervalSeconds(5);
cpSubsystemConfig.setMissingCPMemberAutoRemovalSeconds(60);
config.setCPSubsystemConfig(cpSubsystemConfig);
When I do the following
hazelcastInstance.getCPSubsystem().getCPSessionManagementService()
I get an exception because the subsystem is not enabled.
But with the subsystem disabled, I can still end up in a situation where I want to force-stop a session if I do not want to wait for the sessionTimeToLiveSeconds when, say, a session owner dies. Due to the above restriction this is not possible, and I have to wait for the timeout for the session to be removed.
We need ability to forceStopSession as we do not want to set the sessionTTLSeconds to a too small a value, but we want the flexibility to force stop when we are sure a node is down.
Happens with Java 11, Windows and Linux. Project running in latest spring boot. | defect | with cp subsystem disabled unsafe mode cannot get hold of the cp subsystem mgmt service using version have cpsubsystem disabled since my configuration is as follows final cpsubsystemconfig cpsubsystemconfig new cpsubsystemconfig cpsubsystemconfig setcpmembercount cpsubsystemconfig setgroupsize cpsubsystemconfig setsessiontimetoliveseconds cpsubsystemconfig setsessionheartbeatintervalseconds cpsubsystemconfig setmissingcpmemberautoremovalseconds config setcpsubsystemconfig cpsubsystemconfig when i do the following hazelcastinstance getcpsubsystem getcpsessionmanagementservice i get an exception because the subsystem is not enabled but with the subsystem disabled i still can end up in a situation where i want to force stop a session if i do not want to wait for the sessiontimetoliveseconds when say a session owner dies due to the above restriction this is not possible and have to wait for the timeout for the session to be removed we need ability to forcestopsession as we do not want to set the sessionttlseconds to a too small a value but we want the flexibility to force stop when we are sure a node is down happens with java windows and linux project running in latest spring boot | 1 |
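The trade-off this report describes, a session TTL that is only safe to keep long if sessions can also be closed on demand, can be sketched with a toy registry. This is illustrative only and is not Hazelcast's CP session API.

```python
import time

class SessionRegistry:
    """Minimal sketch (not Hazelcast's API): sessions are kept alive by
    heartbeats and expire after a TTL, but an operator who *knows* an
    owner is dead can force-close it immediately instead of waiting
    out the TTL."""
    def __init__(self, ttl_seconds):
        self.ttl = ttl_seconds
        self.last_heartbeat = {}

    def heartbeat(self, session_id, now=None):
        self.last_heartbeat[session_id] = now if now is not None else time.time()

    def expire_stale(self, now):
        """Remove and return sessions whose last heartbeat is older than TTL."""
        stale = [s for s, t in self.last_heartbeat.items() if now - t > self.ttl]
        for s in stale:
            del self.last_heartbeat[s]
        return stale

    def force_close(self, session_id):
        """Operator override: end a session without waiting for the TTL."""
        return self.last_heartbeat.pop(session_id, None) is not None

reg = SessionRegistry(ttl_seconds=30)
reg.heartbeat("owner-1", now=0.0)
assert reg.expire_stale(now=10.0) == []   # within TTL: session still held
assert reg.force_close("owner-1")         # operator ends it immediately
assert reg.expire_stale(now=100.0) == []  # nothing left to time out
```

With only the TTL path available (as when the management service is inaccessible), the `force_close` branch above simply does not exist, which is the gap the reporter is pointing at.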
55,505 | 14,526,284,464 | IssuesEvent | 2020-12-14 14:02:26 | SAP/fundamental-ngx | https://api.github.com/repos/SAP/fundamental-ngx | closed | Bug: (docs) Tabs example – Programmatic Selection works incorrectly | Defect Hunting bug documentation | #### Is this a bug, enhancement, or feature request?
bug
#### Briefly describe your proposal.
When the user clicks "Select Tab 2" and then clicks "Select Tab 1", the tabs do not switch to "tab 1":

#### Which versions of Angular and Fundamental Library for Angular are affected? (If this is a feature request, use current version.)
fundamental-ngx: v 0.25.0
#### If this is a bug, please provide steps for reproducing it.
#### Please provide relevant source code if applicable.
#### Is there anything else we should know?
Chrome | 1.0 | Bug: (docs) Tabs example – Programmatic Selection works incorrect - #### Is this a bug, enhancement, or feature request?
bug
#### Briefly describe your proposal.
When the user clicks "Select Tab 2" and then clicks "Select Tab 1", the tabs do not switch to "tab 1":

#### Which versions of Angular and Fundamental Library for Angular are affected? (If this is a feature request, use current version.)
fundamental-ngx: v 0.25.0
#### If this is a bug, please provide steps for reproducing it.
#### Please provide relevant source code if applicable.
#### Is there anything else we should know?
Chrome | defect | bug docs tabs example – programmatic selection works incorrectly is this a bug enhancement or feature request bug briefly describe your proposal when the user clicks select tab and then clicks select tab the tabs do not switch to tab which versions of angular and fundamental library for angular are affected if this is a feature request use current version fundamental ngx v if this is a bug please provide steps for reproducing it please provide relevant source code if applicable is there anything else we should know chrome | 1 |
26,800 | 4,789,082,764 | IssuesEvent | 2016-10-30 21:55:56 | CompEvol/beast2 | https://api.github.com/repos/CompEvol/beast2 | reopened | Beauti 2.4.3 freezes on Mac Sierra | defect HIGH priority | Hi all,
I'm trying to open Beauti 2.4.3 on a recently updated iMac (Sierra) and it freezes. Is anyone else having the same issue?
Thanks
Jose | 1.0 | Beauti 2.4.3 freezes on Mac Sierra - Hi all,
I'm trying to open Beauti 2.4.3 on a recently updated iMac (Sierra) and it freezes. Is anyone else having the same issue?
Thanks
Jose | defect | beauti freezes on mac sierra hi all i m trying to open beauti on a recently updated imac sierra and it freezes is anyone else having the same issue thanks jose | 1 |
16,358 | 31,143,043,309 | IssuesEvent | 2023-08-16 02:46:31 | hackforla/HomeUniteUs | https://api.github.com/repos/hackforla/HomeUniteUs | closed | Section 1 | SubFlow 1a: Coordinator Account Creation | Role: PM Role: UI/UX p-Feature: IAM points: 1 requirements Section 1 | ## Problem Alignment
### The Problem:
Currently, Coordinators do not have a way to create an account for Home Unite Us. As a result, Coordinators cannot access Home Unite Us to view and/or manage Guest and Host applications.
#### User Story:
As a Coordinator, I want to login to Home Unite Us so that I can track and manage the Guests and Hosts in the system.
### High Level Approach:
Enable Coordinators to create a Home Unite Us user account.
## Solution Alignment
### Considerations:
* In the future (not this phase), the ability for Coordinators to add other Coordinators may be needed
### Goals & Success:
Success is if Coordinators can create a user account from the Home Unite Us website.
### Key Features:
* Button to create a new account on the Home Unite Us webpage
* Option to select Coordinator (other option: Host)
* Field to enter an email address, or continue with Google or Apple
* Fields to input and confirm password, with validation
* A verification email that is sent to the inputted email address, containing a verification link
### Acceptance Criteria:
* Users can select create new account from the dev Home Unite Us homepage (dev.homeunite.us)
* Users can select user type (Coordinator, Host)
* Users can enter an email address or continue with Google, Apple
* Users can create a password
* If the password meets the validation requirements, a verification email is sent to the provided email containing a verification link
* If the password does not meet the validation requirements, the user will see which validation requirements were not met and can try again with another password
* Users will click the verification link in the email which will confirm the user account and redirect the user to the login portal
* The user can then enter an email address and password or continue with Google or Apple
* If the login was successful, users will be redirected to the Coordinator home page
* If the login was not successful, the user will see an error message and can try logging in again
* User flows above can be completed on desktop, mobile, and tablet
### Designs:
- [x] Review existing designs on Figma
- [x] Edit existing designs as needed
- [x] Designs are edited and ready for engineering. Link to the Figma: (share Figma link here when ready)
### Key Decisions:
* After successfully logging in for the first time, the user will be redirected to the app (versus the default redirect to an AWS page). The rationale is that users will have a better first experience with the app if redirected to the app versus the AWS page. This will require additional dev effort - if significant dev effort is needed will revisit. | 1.0 | Section 1 | SubFlow 1a: Coordinator Account Creation - ## Problem Alignment
### The Problem:
Currently, Coordinators do not have a way to create an account for Home Unite Us. As a result, Coordinators cannot access Home Unite Us to view and/or manage Guest and Host applications.
#### User Story:
As a Coordinator, I want to login to Home Unite Us so that I can track and manage the Guests and Hosts in the system.
### High Level Approach:
Enable Coordinators to create a Home Unite Us user account.
## Solution Alignment
### Considerations:
* In the future (not this phase), the ability for Coordinators to add other Coordinators may be needed
### Goals & Success:
Success is if Coordinators can create a user account from the Home Unite Us website.
### Key Features:
* Button to create a new account on the Home Unite Us webpage
* Option to select Coordinator (other option: Host)
* Field to enter an email address, or continue with Google or Apple
* Fields to input and confirm password, with validation
* A verification email that is sent to the inputted email address, containing a verification link
### Acceptance Criteria:
* Users can select create new account from the dev Home Unite Us homepage (dev.homeunite.us)
* Users can select user type (Coordinator, Host)
* Users can enter an email address or continue with Google, Apple
* Users can create a password
* If the password meets the validation requirements, a verification email is sent to the provided email containing a verification link
* If the password does not meet the validation requirements, the user will see which validation requirements were not met and can try again with another password
* Users will click the verification link in the email which will confirm the user account and redirect the user to the login portal
* The user can then enter an email address and password or continue with Google or Apple
* If the login was successful, users will be redirected to the Coordinator home page
* If the login was not successful, the user will see an error message and can try logging in again
* User flows above can be completed on desktop, mobile, and tablet
### Designs:
- [x] Review existing designs on Figma
- [x] Edit existing designs as needed
- [x] Designs are edited and ready for engineering. Link to the Figma: (share Figma link here when ready)
### Key Decisions:
* After successfully logging in for the first time, the user will be redirected to the app (versus the default redirect to an AWS page). The rationale is that users will have a better first experience with the app if redirected to the app versus the AWS page. This will require additional dev effort - if significant dev effort is needed will revisit. | non_defect | section subflow coordinator account creation problem alignment the problem currently coordinators do not have a way to create an account for home unite us as a result coordinators cannot access home unite us to view and or manage guest and host applications user story as a coordinator i want to login to home unite us so that i can track and manage the guests and hosts in the system high level approach enable coordinators to create a home unite us user account solution alignment considerations in the future not this phase the ability for coordinators to add other coordinators may be needed goals success success is if coordinators can create a user account from the home unite us website key features button to create a new account on the home unite us webpage option to select coordinator other option host field to enter an email address or continue with google or apple fields to input and confirm password with validation a verification email that is sent to the inputted email address containing a verification link acceptance criteria users can select create new account from the dev home unite us homepage dev homeunite us users can select user type coordinator host users can enter an email address or continue with google apple users can create a password if the password meets the validation requirements a verification email is sent to the provided email containing a verification link if the password does not meet the validation requirements the user will see which validation requirements were not met and can try again with another password users will click the verification link in the email which will confirm
the user account and redirect the user to the login portal the user can then enter an email address and password or continue with google or apple if the login was successful users will be redirected to the coordinator home page if the login was not successful the user will see an error message and can try logging in again user flows above can be completed on desktop mobile and tablet designs review existing designs on figma edit existing designs as needed designs are edited and ready for engineering link to the figma share figma link here when ready key decisions after successfully logging in for the first time the user will be redirected to the app versus the default redirect to an aws page the rationale is that users will have a better first experience with the app if redirected to the app versus the aws page this will require additional dev effort if significant dev effort is needed will revisit | 0 |
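The acceptance criteria above call for showing the user exactly which password requirements were not met. A minimal sketch of that check follows; the concrete rule set (length, case, digit) is an assumption for illustration, since the document does not specify the actual validation rules.

```python
import re

# Assumed rules for illustration only -- the real requirements are not
# specified in this requirements document.
RULES = [
    ("at least 8 characters", lambda p: len(p) >= 8),
    ("an uppercase letter",   lambda p: re.search(r"[A-Z]", p) is not None),
    ("a lowercase letter",    lambda p: re.search(r"[a-z]", p) is not None),
    ("a digit",               lambda p: re.search(r"[0-9]", p) is not None),
]

def unmet_requirements(password):
    """Return the labels of the requirements a candidate password fails,
    so the UI can display exactly which checks were not met."""
    return [label for label, ok in RULES if not ok(password)]

assert unmet_requirements("Str0ngPass") == []
assert unmet_requirements("short") == ["at least 8 characters",
                                       "an uppercase letter", "a digit"]
```

Returning the full list of failed rules (rather than a single boolean) is what lets the form highlight each unmet requirement, matching the "user will see which validation requirements were not met" criterion.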
14,714 | 2,831,388,640 | IssuesEvent | 2015-05-24 15:53:31 | nobodyguy/dslrdashboard | https://api.github.com/repos/nobodyguy/dslrdashboard | closed | Nikon D4S | auto-migrated Priority-Medium Type-Defect | ```
What steps will reproduce the problem?
1.Do not work on Nikon D4S
2.push the lv button
3.illegal stop the app.
What is the expected output? What do you see instead?
Illegal stop.
What version of the product are you using? On what operating system?
Dslrdashboard : 30.32 test1
Android : 4.4.2
Please provide any additional information below.
```
Original issue reported on code.google.com by `cocoa.pe...@gmail.com` on 6 Mar 2014 at 2:18 | 1.0 | Nikon D4S - ```
What steps will reproduce the problem?
1.Do not work on Nikon D4S
2.push the lv button
3.illegal stop the app.
What is the expected output? What do you see instead?
Illegal stop.
What version of the product are you using? On what operating system?
Dslrdashboard : 30.32 test1
Android : 4.4.2
Please provide any additional information below.
```
Original issue reported on code.google.com by `cocoa.pe...@gmail.com` on 6 Mar 2014 at 2:18 | defect | nikon what steps will reproduce the problem do not work on nikon push the lv button illegal stop the app what is the expected output what do you see instead illegal stop what version of the product are you using on what operating system dslrdashboard android please provide any additional information below original issue reported on code google com by cocoa pe gmail com on mar at | 1 |
347,999 | 31,391,369,434 | IssuesEvent | 2023-08-26 11:38:00 | dieter-project/WithPT-BE | https://api.github.com/repos/dieter-project/WithPT-BE | closed | feat(common) : API docs setup using Swagger UI | ✨ Feature ✅ Test | ## Issue description
- The goal is to visualize the backend team's API resources through a Swagger UI setup and make interaction with the frontend easier.
<br>
## Tasks
- [ ] Set up the Swagger UI config
<br>
| 1.0 | feat(common) : API docs setup using Swagger UI - ## Issue description
- The goal is to visualize the backend team's API resources through a Swagger UI setup and make interaction with the frontend easier.
<br>
## Tasks
- [ ] Set up the Swagger UI config
<br>
| non_defect | feat common api docs setup using swagger ui issue description the goal is to visualize the backend team s api resources through a swagger ui setup and make interaction with the frontend easier tasks set up the swagger ui config | 0 |
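A Swagger UI setup of this kind renders an OpenAPI document. As a rough, hypothetical sketch (the project's real endpoints and framework configuration are not shown in the issue), a minimal OpenAPI 3.0 spec can be assembled and serialized like this:

```python
import json

# Hypothetical minimal OpenAPI 3.0 document -- the kind of spec a Swagger UI
# setup renders so the frontend team can browse the backend's API resources.
# The title and path below are placeholders, not the project's real endpoints.
spec = {
    "openapi": "3.0.1",
    "info": {"title": "WithPT API (example)", "version": "v1"},
    "paths": {
        "/api/v1/members": {
            "get": {
                "summary": "List members (placeholder endpoint)",
                "responses": {"200": {"description": "OK"}},
            }
        }
    },
}

document = json.dumps(spec, indent=2)
assert json.loads(document)["openapi"].startswith("3.0")
assert "/api/v1/members" in json.loads(document)["paths"]
```

In practice a framework integration generates this document from annotated handlers; the point of the sketch is only the shape of the artifact Swagger UI consumes.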
73,241 | 24,520,945,909 | IssuesEvent | 2022-10-11 09:27:28 | BOINC/boinc | https://api.github.com/repos/BOINC/boinc | closed | [Explained] 7.16.3 systemd startup file prevents LHC VirtualBox jobs running | P: Blocker R: fixed T: Defect E: 1 day C: Client - Linux | **Describe the bug**
I tried an update to 7.16.3, but it produced an oddity, so I switched back to 7.14.2
But I left the 7.16.3 systemd start-up file in place.
The result was that an LHC CMS task started, but then stalled.
It ran for ~6mins (during which the VirtualBox process wasn't consuming CPU) then this showed up in the log:
```
Oct 28 20:03:53 benuc boinc[16837]: 28-Oct-2019 20:03:53 [LHC@home] Task CMS_1316885_1572237828.669283_0 postponed for 86400 seconds: Communication with VM Hypervisor failed.
```
Re-installing the 7.14.2 start-up file and re-starting BOINC (no other change) has allowed the job to run.
**Steps To Reproduce**
1. Try running an LHC CMS job using the 7.16.3 start-up file.
**Expected behavior**
I'd expect jobs to be able to run successfully.
**System Information**
- OS: Kubuntu 19.10
- BOINC Version: 7.16.3 (start-up file only)
**Additional context**
I'll post the working and non-working systemd start-up files in the follow-ups.
| 1.0 | [Explained] 7.16.3 systemd startup file prevents LHC VirtualBox jobs running - **Describe the bug**
I tried an update to 7.16.3, but it produced an oddity, so I switched back to 7.14.2
But I left the 7.16.3 systemd start-up file in place.
The result was that an LHC CMS task started, but then stalled.
It ran for ~6mins (during which the VirtualBox process wasn't consuming CPU) then this showed up in the log:
```
Oct 28 20:03:53 benuc boinc[16837]: 28-Oct-2019 20:03:53 [LHC@home] Task CMS_1316885_1572237828.669283_0 postponed for 86400 seconds: Communication with VM Hypervisor failed.
```
Re-installing the 7.14.2 start-up file and re-starting BOINC (no other change) has allowed the job to run.
**Steps To Reproduce**
1. Try running an LHC CMS job using the 7.16.3 start-up file.
**Expected behavior**
I'd expect jobs to be able to run successfully.
**System Information**
- OS: Kubuntu 19.10
- BOINC Version: 7.16.3 (start-up file only)
**Additional context**
I'll post the working and non-working systemd start-up files in the follow-ups.
| defect | systemd startup file prevents lhc virtualbox jobs running describe the bug i tried an update to but it produced an oddity so i switched back to but i left the systemd start up file in place the result was that an lhc cms task started but then stalled it ran for during which the virtualbox process wasn t consuming cpu then this showed up in the log oct benuc boinc oct task cms postponed for seconds communication with vm hypervisor failed re installing the start up file and re starting boinc no other change has allowed the job to run steps to reproduce try running an lhc cms job using the start up file expected behavior i d expect jobs to be able to run successfully system information os kubuntu boinc version start up file only additional context i ll post the working and non working systemd start up files in the follow ups | 1 |
31,274 | 6,485,144,719 | IssuesEvent | 2017-08-19 07:15:41 | scipy/scipy | https://api.github.com/repos/scipy/scipy | closed | scipy.statsbinned_statistic_2d: incorrect binnumbers returned | defect scipy.stats | For certain inputs binned_statistic_2d returns incorrect bin numbers
`xEdges = np.arange(79950.,500050.,100.)`
`yEdges = np.arange(7489950.,7860050.,100.)`
`x = 356643.378`
`y = 7813944.500`
`binned, xedges, yedges, binnums = binned_statistic_2d((x,), (y,), (0.5,), 'mean', bins=[xEdges,yEdges],expand_binnumbers=True)`
The binnums seem to be incorrect:
`binnums` returns: `array([[3678], [1291]])`
`np.where(np.isfinite(binned))` returns: `(array([2766]), array([3239]))`
I worked around this using the following code to calculate the bin numbers:
`x_inds = np.searchsorted(xEdges, x)`
`y_inds = np.searchsorted(yEdges , y)` | 1.0 | scipy.statsbinned_statistic_2d: incorrect binnumbers returned - For certain inputs binned_statistic_2d returns incorrect bin numbers
`xEdges = np.arange(79950.,500050.,100.)`
`yEdges = np.arange(7489950.,7860050.,100.)`
`x = 356643.378`
`y = 7813944.500`
`binned, xedges, yedges, binnums = binned_statistic_2d((x,), (y,), (0.5,), 'mean', bins=[xEdges,yEdges],expand_binnumbers=True)`
The binnums seem to be incorrect:
`binnums` returns: `array([[3678], [1291]])`
`np.where(np.isfinite(binned))` returns: `(array([2766]), array([3239]))`
I worked around this using the following code to calculate the bin numbers:
`x_inds = np.searchsorted(xEdges, x)`
`y_inds = np.searchsorted(yEdges , y)` | defect | scipy statsbinned statistic incorrect binnumbers returned for certain inputs binned statistic returns incorrect bin numbers xedges np arange yedges np arange x y binned xedges yedges binnums binned statistic x y mean bins expand binnumbers true the binnums seem to be incorrect binnums returns array np where np isfinite binned returns array array i worked around this using the following code to calculate the bin numbers x inds np searchsorted xedges x y inds np searchsorted yedges y | 1 |
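The `np.searchsorted` workaround quoted in the scipy record above can be sketched end to end. The edge arrays and the sample point are copied verbatim from the issue body; subtracting 1 from the insertion index yields the 0-based bin indices that the report's `np.where(np.isfinite(binned))` output shows.

```python
import numpy as np

# Bin edges and sample point taken verbatim from the bug report above.
x_edges = np.arange(79950.0, 500050.0, 100.0)
y_edges = np.arange(7489950.0, 7860050.0, 100.0)
x, y = 356643.378, 7813944.500

# searchsorted returns the insertion index i with edges[i-1] < value <= edges[i],
# so for a point strictly inside the grid, i - 1 is the 0-based bin index.
x_bin = np.searchsorted(x_edges, x) - 1
y_bin = np.searchsorted(y_edges, y) - 1

print(x_bin, y_bin)  # 2766 3239, matching np.where(np.isfinite(binned))
```

This agrees with the statistic array's indices in the report, not with the (3678, 1291) binnumbers that `binned_statistic_2d` returned.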
27,762 | 8,033,761,050 | IssuesEvent | 2018-07-29 10:37:08 | scikit-learn/scikit-learn | https://api.github.com/repos/scikit-learn/scikit-learn | reopened | Circle CI failure and Travis cron job failure | Blocker Build / CI help wanted | Circle CI failure:
Seems that it's due to https://github.com/scikit-learn/scikit-learn/commit/e888c0d65cfaef4a3a2c087ad7609d2296be8062, but I cannot figure out the reason.
Travis cron job failure:
See e.g., https://travis-ci.org/scikit-learn/scikit-learn/builds/408877802
Typical log:
FutureWarning: Using a non-tuple sequence for multidimensional indexing is deprecated; use
`arr[tuple(seq)]` instead of `arr[seq]`. In the future this will be interpreted as an array index,
`arr[np.array(seq)]`, which will result either in an error or a different result.
| 1.0 | Circle CI failure and Travis cron job failure - Circle CI failure:
Seems that it's due to https://github.com/scikit-learn/scikit-learn/commit/e888c0d65cfaef4a3a2c087ad7609d2296be8062, but I cannot figure out the reason.
Travis cron job failure:
See e.g., https://travis-ci.org/scikit-learn/scikit-learn/builds/408877802
Typical log:
FutureWarning: Using a non-tuple sequence for multidimensional indexing is deprecated; use
`arr[tuple(seq)]` instead of `arr[seq]`. In the future this will be interpreted as an array index,
`arr[np.array(seq)]`, which will result either in an error or a different result.
| non_defect | circle ci failure and travis cron job failure circle ci failure seems that it s due to but i cannot figure out the reason travis cron job failure see e g typical log futurewarning using a non tuple sequence for multidimensional indexing is deprecated use arr instead of arr in the future this will be interpreted as an array index arr which will result either in an error or a different result | 0 |
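The FutureWarning quoted in the scikit-learn record above concerns NumPy's handling of list indices. A minimal illustration of the two explicit spellings the warning recommends in place of the deprecated `arr[seq]`:

```python
import numpy as np

a = np.arange(12).reshape(3, 4)
seq = [0, 1]  # one index per axis, but stored in a list

# The deprecated form a[seq] used to mean the multidimensional index a[0, 1].
# The two unambiguous replacements from the warning text:
element = a[tuple(seq)]   # multidimensional index -> the scalar a[0, 1]
rows = a[np.array(seq)]   # fancy indexing on axis 0 -> rows 0 and 1

print(element)      # 1
print(rows.shape)   # (2, 4)
```

The same list thus selects either one element or two whole rows depending on the spelling, which is why NumPy deprecated the ambiguous form.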
387,477 | 26,724,575,837 | IssuesEvent | 2023-01-29 15:01:16 | zed-industries/feedback | https://api.github.com/repos/zed-industries/feedback | opened | Multiple keybindings for one action are listed inconsistently | documentation triage | ### Check for existing issues
- [X] Completed
### Page link
https://zed.dev/docs/configuration/key-bindings
### Description
Sometimes multiple keybindings for the same action are
- listed in one row: https://zed.dev/docs/configuration/key-bindings#workspace (see `Toggle theme selctor` for example (typo is not a quotation error, it's literally on the page :P))
- sometimes before the comma there's a space (e.g. `Open key map`)
- sometimes there's no space before the comma (e.g. `Select next`)
- split up into multiple rows: https://zed.dev/docs/configuration/key-bindings#editor (e.g. `Backspace` or `Select down`)
This is sprinkled throughout the whole document, not just bound to one place. | 1.0 | Multiple keybindings for one action are listed inconsistently - ### Check for existing issues
- [X] Completed
### Page link
https://zed.dev/docs/configuration/key-bindings
### Description
Sometimes multiple keybindings for the same action are
- listed in one row: https://zed.dev/docs/configuration/key-bindings#workspace (see `Toggle theme selctor` for example (typo is not a quotation error, it's literally on the page :P))
- sometimes before the comma there's a space (e.g. `Open key map`)
- sometimes there's no space before the comma (e.g. `Select next`)
- split up into multiple rows: https://zed.dev/docs/configuration/key-bindings#editor (e.g. `Backspace` or `Select down`)
This is sparkled throughout the whole document, not just bound to one place. | non_defect | multiple keybindings for one action are listed inconsistently check for existing issues completed page link description sometimes multiple keybindings for the same action are listed in one row see toggle theme selctor for example typo is not a quotation error it s literally on the page p sometimes before the comma there s a space e g open key map sometimes there s no space before the comma e g select next split up into multiple rows e g backspace or select down this is sparkled throughout the whole document not just bound to one place | 0 |
8,833 | 3,009,831,330 | IssuesEvent | 2015-07-28 09:18:44 | printdotio/printio-ios-sdk | https://api.github.com/repos/printdotio/printio-ios-sdk | closed | Images of an order don't show on Admin | bug Ready to Test | This order on staging admin of Mini Book don't show the images we chose in the app
https://staging-admin-v2.print.io/Home#/orders/154460/images | 1.0 | Images of an order don't show on Admin - This order on staging admin of Mini Book don't show the images we chose in the app
https://staging-admin-v2.print.io/Home#/orders/154460/images | non_defect | images of an order don t show on admin this order on staging admin of mini book don t show the images we chose in the app | 0 |
9,950 | 2,616,014,026 | IssuesEvent | 2015-03-02 00:56:58 | jasonhall/bwapi | https://api.github.com/repos/jasonhall/bwapi | closed | WinXP incompatible with LUDP | auto-migrated Priority-Medium Type-Defect Usability | ```
For the multiple-instance hack, the custom network mode LUDP (Local UDP) is not
compatible with Windows XP machines.
```
Original issue reported on code.google.com by `AHeinerm` on 6 Nov 2010 at 10:15 | 1.0 | WinXP incompatible with LUDP - ```
For the multiple-instance hack, the custom network mode LUDP (Local UDP) is not
compatible with Windows XP machines.
```
Original issue reported on code.google.com by `AHeinerm` on 6 Nov 2010 at 10:15 | defect | winxp incompatible with ludp for the multiple instance hack the custom network mode ludp local udp is not compatible with windows xp machines original issue reported on code google com by aheinerm on nov at | 1 |
37,036 | 8,206,556,783 | IssuesEvent | 2018-09-03 13:54:56 | contao/contao | https://api.github.com/repos/contao/contao | closed | handle mod_article_teaser and mod_article_plain selection | defect | <a href="https://github.com/fritzmg"><img src="https://avatars0.githubusercontent.com/u/4970961?v=4" align="left" width="42" height="42"></img></a> [Issue](https://github.com/contao/installation-bundle/issues/97) by @fritzmg
August 17th, 2018, 12:05 GMT
In Contao 3 you were able to select a variety of custom template for articles, including `mod_article_plain` and `mod_article_teaser` which were removed in Contao 4.
However, this will lead to an
```
Could not find template "mod_article_plain"
```
error after upgrading to Contao 4 if these templates were previously set as a `customTpl` manually (for whatever reason) and no such template is present as a custom template.
Thus maybe `Version400Update` should check whether or not custom templates with these names are present and if not, `tl_module.customTpl` should be emptied for these records. | 1.0 | handle mod_article_teaser and mod_article_plain selection - <a href="https://github.com/fritzmg"><img src="https://avatars0.githubusercontent.com/u/4970961?v=4" align="left" width="42" height="42"></img></a> [Issue](https://github.com/contao/installation-bundle/issues/97) by @fritzmg
August 17th, 2018, 12:05 GMT
In Contao 3 you were able to select a variety of custom template for articles, including `mod_article_plain` and `mod_article_teaser` which were removed in Contao 4.
However, this will lead to an
```
Could not find template "mod_article_plain"
```
error after upgrading to Contao 4 if these template were previously set as a `customTpl` manually (for whatever reason) and no such template is present as a custom template.
Thus may be `Version400Update` should check whether or not a custom template with these names are present and if not, `tl_module.customTpl` should be emptied for these records. | defect | handle mod article teaser and mod article plain selection by fritzmg august gmt in contao you were able to select a variety of custom template for articles including mod article plain and mod article teaser which were removed in contao however this will lead to an could not find template mod article plain error after upgrading to contao if these template were previously set as a customtpl manually for whatever reason and no such template is present as a custom template thus may be should check whether or not a custom template with these names are present and if not tl module customtpl should be emptied for these records | 1 |
48,530 | 6,098,813,606 | IssuesEvent | 2017-06-20 08:34:25 | geetsisbac/WCVVENIXYFVIRBXH3BYTI6TE | https://api.github.com/repos/geetsisbac/WCVVENIXYFVIRBXH3BYTI6TE | reopened | AwwrsXx7T57MB8/09hew/33fjdWKbnfXZyl6fDTfrfo1IzYwjH40et5InaNAFGye7CYnc86P3qcXCFj7yPAGummbcXANuEtvNvxGXtm+0zdnswGxAOoQHONRqFLnGFaHgwIE/x1sJHHevkZyCUPyK1vXA/URVN7vqtZ5kF9Imc4= | design | dPuKwZjtFnMQ1uR3M9yD2bMdwzQMA1UsYXhzL9owNrxmSUff74VqLKTL/0aq2y/COeIhdbNmSHYL/OCak1Z0ktkQHW9laH3VmJGYa2MqRpHevL0vFixavYPyOFhIK1EiU0wtcRfBPAZzbbcCLivar85OTuqN8mytP0HEdzgX4fvl2nV4fni1MJG9i8XhpDt7H7Ay4UYePa9XaD+H8+bUGR4im0uSKm/wusqLWO+cP3V7U9TAg5cYL4SlF34T9o+sqVW+uHyIJ63fuOujMdsuYNMWqHibBvW2PrHtU95ENgWO3V6c5SLcAGoNnSjkGKo3e8ldmetKBsGu6WlJ9R4ezKL+cMtm9Axnc2ASkBKQKZFrL9giKBr/jTe/6vq0XZqz+lr/fpaoKjLMmrcjvIHhD/+3YHlBGN2kVSeNzXxtX0BrL9giKBr/jTe/6vq0XZqzUcdYtwyHpzvhWlgcgQtbxAcqG2qlrsmjzLLouM9d10bqZd9PrsgT/tUL+ubpuOOp8FmZwV/+TsZWayDtfJriDTX9FDhpT8KQyfw4j6EJ2fPN9jEzjbbP2GxENMulGiK+dtJ8Js3KkmXyFTiEisGUwXAa1fNmtWvfRoOShpAx8/V03sBe0tXE9JP3HpTdzCd9UsNT497Wy4bau68f0BVmgurWiuzg0M39K1IMSE4GlcjmXZsf20TXhNeMRHgKOgP5bz3ddwZ2zL7f/KIPill7jnJLKBU7X1zGD3JDcoTqDZqlZBVwK62b7qs6mx79xgODEzMR//M93jrB2xfR7GC19y8hqsYzHaIkG905JSwHQWlm1O+Kx2TZRuyflFTPXpO6 | 1.0 | AwwrsXx7T57MB8/09hew/33fjdWKbnfXZyl6fDTfrfo1IzYwjH40et5InaNAFGye7CYnc86P3qcXCFj7yPAGummbcXANuEtvNvxGXtm+0zdnswGxAOoQHONRqFLnGFaHgwIE/x1sJHHevkZyCUPyK1vXA/URVN7vqtZ5kF9Imc4= - 
dPuKwZjtFnMQ1uR3M9yD2bMdwzQMA1UsYXhzL9owNrxmSUff74VqLKTL/0aq2y/COeIhdbNmSHYL/OCak1Z0ktkQHW9laH3VmJGYa2MqRpHevL0vFixavYPyOFhIK1EiU0wtcRfBPAZzbbcCLivar85OTuqN8mytP0HEdzgX4fvl2nV4fni1MJG9i8XhpDt7H7Ay4UYePa9XaD+H8+bUGR4im0uSKm/wusqLWO+cP3V7U9TAg5cYL4SlF34T9o+sqVW+uHyIJ63fuOujMdsuYNMWqHibBvW2PrHtU95ENgWO3V6c5SLcAGoNnSjkGKo3e8ldmetKBsGu6WlJ9R4ezKL+cMtm9Axnc2ASkBKQKZFrL9giKBr/jTe/6vq0XZqz+lr/fpaoKjLMmrcjvIHhD/+3YHlBGN2kVSeNzXxtX0BrL9giKBr/jTe/6vq0XZqzUcdYtwyHpzvhWlgcgQtbxAcqG2qlrsmjzLLouM9d10bqZd9PrsgT/tUL+ubpuOOp8FmZwV/+TsZWayDtfJriDTX9FDhpT8KQyfw4j6EJ2fPN9jEzjbbP2GxENMulGiK+dtJ8Js3KkmXyFTiEisGUwXAa1fNmtWvfRoOShpAx8/V03sBe0tXE9JP3HpTdzCd9UsNT497Wy4bau68f0BVmgurWiuzg0M39K1IMSE4GlcjmXZsf20TXhNeMRHgKOgP5bz3ddwZ2zL7f/KIPill7jnJLKBU7X1zGD3JDcoTqDZqlZBVwK62b7qs6mx79xgODEzMR//M93jrB2xfR7GC19y8hqsYzHaIkG905JSwHQWlm1O+Kx2TZRuyflFTPXpO6 | non_defect | coeihdbnmshyl wusqlwo sqvw jte lr fpaokjlmmrcjvihhd jte tul | 0 |
14,509 | 2,814,314,051 | IssuesEvent | 2015-05-18 19:19:35 | m-lab/mlab-wikis | https://api.github.com/repos/m-lab/mlab-wikis | closed | Split the check_status cron job into one check per tool/slice | auto-migrated Priority-High Type-Defect | ```
The 'check_status' cron job takes too long, sometimes causing 'Deadline
exceeded' errors, so we must split this task into one check per
tool/slice.
```
Original issue reported on code.google.com by `claudiu....@gmail.com` on 31 Jan 2013 at 5:10 | 1.0 | Split the check_status cron job into one check per tool/slice - ```
The 'check_status' cron job takes too long, sometimes causing 'Deadline
exceeded' errors, so we must split this task into one check per
tool/slice.
```
Original issue reported on code.google.com by `claudiu....@gmail.com` on 31 Jan 2013 at 5:10 | defect | split the check status cron job into one check per tool slice the check status cron job takes too long sometimes causing deadline exceeded errors so we must split this task into one check per tool slice original issue reported on code google com by claudiu gmail com on jan at | 1 |
47,406 | 13,056,172,311 | IssuesEvent | 2020-07-30 03:52:52 | icecube-trac/tix2 | https://api.github.com/repos/icecube-trac/tix2 | closed | icetray development email list (Trac #512) | Migrated from Trac defect tools/ports | create an email list for icetray development.
icetray-dev or something like that
Let's use umdgrb's mailman and avoid the collective at UW
Migrated from https://code.icecube.wisc.edu/ticket/512
```json
{
"status": "closed",
"changetime": "2009-01-22T18:45:35",
"description": "create a email list for icetray development.\n\nicetray-dev or something like that\n\nLet's use umdgrb's mailman and avoid the collective at UW",
"reporter": "blaufuss",
"cc": "",
"resolution": "fixed",
"_ts": "1232649935000000",
"component": "tools/ports",
"summary": "icetray development email list",
"priority": "normal",
"keywords": "",
"time": "2009-01-09T21:14:57",
"milestone": "",
"owner": "cgils",
"type": "defect"
}
```
| 1.0 | icetray development email list (Trac #512) - create a email list for icetray development.
icetray-dev or something like that
Let's use umdgrb's mailman and avoid the collective at UW
Migrated from https://code.icecube.wisc.edu/ticket/512
```json
{
"status": "closed",
"changetime": "2009-01-22T18:45:35",
"description": "create a email list for icetray development.\n\nicetray-dev or something like that\n\nLet's use umdgrb's mailman and avoid the collective at UW",
"reporter": "blaufuss",
"cc": "",
"resolution": "fixed",
"_ts": "1232649935000000",
"component": "tools/ports",
"summary": "icetray development email list",
"priority": "normal",
"keywords": "",
"time": "2009-01-09T21:14:57",
"milestone": "",
"owner": "cgils",
"type": "defect"
}
```
| defect | icetray development email list trac create a email list for icetray development icetray dev or something like that let s use umdgrb s mailman and avoid the collective at uw migrated from json status closed changetime description create a email list for icetray development n nicetray dev or something like that n nlet s use umdgrb s mailman and avoid the collective at uw reporter blaufuss cc resolution fixed ts component tools ports summary icetray development email list priority normal keywords time milestone owner cgils type defect | 1 |
58,972 | 11,914,138,511 | IssuesEvent | 2020-03-31 13:10:07 | nopSolutions/nopCommerce | https://api.github.com/repos/nopSolutions/nopCommerce | closed | Upgrade .NET Core to the latest version | refactoring / source code | Currently it's 3.1.201
Let's do it before the RTM of nopCommerce 4.30.
| 1.0 | Upgrade .NET Core to the latest version - Currently it's 3.1.201
Let's do it before the RTM of nopCommerce 4.30.
| non_defect | upgrade net core to the latest version currently it s let s do it before the rtm of nopcommerce | 0 |
56,616 | 15,218,932,954 | IssuesEvent | 2021-02-17 18:29:57 | galasa-dev/projectmanagement | https://api.github.com/repos/galasa-dev/projectmanagement | opened | Change 'Submit' to 'Submit tests' on button | defect webui | Change _Submit_ to _Submit tests_ on button on the Submit tests tile | 1.0 | Change 'Submit' to 'Submit tests' on button - Change _Submit_ to _Submit tests_ on button on the Submit tests tile | defect | change submit to submit tests on button change submit to submit tests on button on the submit tests tile | 1 |
6,561 | 2,610,256,870 | IssuesEvent | 2015-02-26 19:22:00 | chrsmith/dsdsdaadf | https://api.github.com/repos/chrsmith/dsdsdaadf | opened | 深圳激光祛痘需要多少钱 | auto-migrated Priority-Medium Type-Defect | ```
How much does laser acne removal cost in Shenzhen? [Shenzhen Hanfang Keyan, national hotline 400-869-1818, 24-hour QQ 4008691818] Shenzhen Hanfang Keyan is a professional acne-removal chain. Centred on the Korean-formula "Hanfang Keyan", a state-licensed therapeutic brand and premier acne remedy, the chain combines the Korean secret formula with a professional "no-rebound" healthy acne-removal technique and an advanced "deluxe colour light" device, pioneering signed-contract guaranteed treatment of pimples and acne in China and successfully clearing the pimples from many customers' faces.
```
-----
Original issue reported on code.google.com by `szft...@163.com` on 14 May 2014 at 8:37 | 1.0 | 深圳激光祛痘需要多少钱 - ```
How much does laser acne removal cost in Shenzhen? [Shenzhen Hanfang Keyan, national hotline 400-869-1818, 24-hour QQ 4008691818] Shenzhen Hanfang Keyan is a professional acne-removal chain. Centred on the Korean-formula "Hanfang Keyan", a state-licensed therapeutic brand and premier acne remedy, the chain combines the Korean secret formula with a professional "no-rebound" healthy acne-removal technique and an advanced "deluxe colour light" device, pioneering signed-contract guaranteed treatment of pimples and acne in China and successfully clearing the pimples from many customers' faces.
```
-----
Original issue reported on code.google.com by `szft...@163.com` on 14 May 2014 at 8:37 | defect | 深圳激光祛痘需要多少钱 深圳激光祛痘需要多少钱【 �� � 】深圳韩方科颜专业祛痘连锁机构,机构以�� �国秘方——韩方科颜这一国妆准字号治疗型权威,祛痘佳品� ��韩方科颜专业祛痘连锁机构,采用韩国秘方配合专业“不反 弹”健康祛痘技术并结合先进“先进豪华彩光”仪,开创国�� �专业治疗粉刺、痤疮签约包治先河,成功消除了许多顾客脸� ��的痘痘。 original issue reported on code google com by szft com on may at | 1 |
69,171 | 22,263,971,009 | IssuesEvent | 2022-06-10 05:08:16 | vector-im/element-web | https://api.github.com/repos/vector-im/element-web | opened | vertical scroll is shown though content fits viewport | T-Defect | ### Steps to reproduce

1. Create a new room
2. Write a message.
3. Scroll
### Outcome
#### What did you expect?
No scroll
#### What happened instead?
Scroll
### Operating system
linux
### Browser information
Firefox 101.0.1
### URL for webapp
https://app.element.io
### Application version
1.10.14
### Homeserver
matrix.org
### Will you send logs?
No | 1.0 | vertical scroll is shown though content fits viewport - ### Steps to reproduce

1. Create a new room
2. Write a message.
3. Scroll
### Outcome
#### What did you expect?
No scroll
#### What happened instead?
Scroll
### Operating system
linux
### Browser information
Firefox 101.0.1
### URL for webapp
https://app.element.io
### Application version
1.10.14
### Homeserver
matrix.org
### Will you send logs?
No | defect | vertical scroll is shown though content fits viewport steps to reproduce create a new room write a message scroll outcome what did you expect no scroll what happened instead scroll operating system linux browser information firefox url for webapp application version homeserver matrix org will you send logs no | 1 |
59,970 | 17,023,301,228 | IssuesEvent | 2021-07-03 01:18:54 | tomhughes/trac-tickets | https://api.github.com/repos/tomhughes/trac-tickets | closed | Minimise/maximise doesn't redraw background | Component: potlatch (flash editor) Priority: minor Resolution: fixed Type: defect | **[Submitted to the original trac issue database at 6.26am, Friday, 26th September 2008]**
e.g. if NPE is selected and the SWF is maximised, it won't load the extra tiles until you drag.
| 1.0 | Minimise/maximise doesn't redraw background - **[Submitted to the original trac issue database at 6.26am, Friday, 26th September 2008]**
e.g. if NPE is selected and the SWF is maximised, it won't load the extra tiles until you drag.
| defect | minimise maximise doesn t redraw background e g if npe is selected and the swf is maximised it won t load the extra tiles until you drag | 1 |
56,175 | 14,963,109,844 | IssuesEvent | 2021-01-27 10:08:35 | primefaces/primereact | https://api.github.com/repos/primefaces/primereact | closed | DataTable with editMode="cell" doesn't work as expected | defect | ### There is no guarantee in receiving an immediate response in GitHub Issue Tracker, If you'd like to secure our response, you may consider *PrimeReact PRO Support* where support is provided within 4 business hours
**I'm submitting a ...** (check one with "x")
```
[X] bug report
[X] feature request
[ ] support request => Please do not submit support request here, instead see https://forum.primefaces.org/viewforum.php?f=57
```
**Current behavior**
<!-- Describe how the bug manifests. -->
Cannot open line edit dropdown
**Expected behavior**
<!-- Describe what the behavior would be without the bug. -->
Dropdown in line edit should be made visible by anyway. currently it just gets closed.
**Minimal reproduction of the problem with instructions**
<!--
If the current behavior is a bug or you can illustrate your feature request better with an example,
please provide the *STEPS TO REPRODUCE* and if possible a *MINIMAL DEMO* of the problem via
https://codesandbox.io or similar (you can use this template as a starting point: https://codesandbox.io/s/qjx332qq4).
-->
**Please tell us about your environment:**
<!-- Operating system, IDE, package manager, HTTP server, ... -->
* **React version:**
<!-- Check whether this is still an issue in the most recent React version -->
* **PrimeReact version:**
<!-- Check whether this is still an issue in the most recent PrimeReact version -->
* **Browser:** [all | Chrome XX | Firefox XX | IE XX | Safari XX | Mobile Chrome XX | Android X.X Web Browser | iOS XX Safari | iOS XX UIWebView | iOS XX WKWebView ]
<!-- All browsers where this could be reproduced -->
* **Language:** [all | TypeScript X.X | ES6/7 | ES5]
| 1.0 | DataTable with editMode="cell" doesn't work as expected - ### There is no guarantee in receiving an immediate response in GitHub Issue Tracker, If you'd like to secure our response, you may consider *PrimeReact PRO Support* where support is provided within 4 business hours
**I'm submitting a ...** (check one with "x")
```
[X] bug report
[X] feature request
[ ] support request => Please do not submit support request here, instead see https://forum.primefaces.org/viewforum.php?f=57
```
**Current behavior**
<!-- Describe how the bug manifests. -->
Cannot open line edit dropdown
**Expected behavior**
<!-- Describe what the behavior would be without the bug. -->
Dropdown in line edit should be made visible by anyway. currently it just gets closed.
**Minimal reproduction of the problem with instructions**
<!--
If the current behavior is a bug or you can illustrate your feature request better with an example,
please provide the *STEPS TO REPRODUCE* and if possible a *MINIMAL DEMO* of the problem via
https://codesandbox.io or similar (you can use this template as a starting point: https://codesandbox.io/s/qjx332qq4).
-->
**Please tell us about your environment:**
<!-- Operating system, IDE, package manager, HTTP server, ... -->
* **React version:**
<!-- Check whether this is still an issue in the most recent React version -->
* **PrimeReact version:**
<!-- Check whether this is still an issue in the most recent PrimeReact version -->
* **Browser:** [all | Chrome XX | Firefox XX | IE XX | Safari XX | Mobile Chrome XX | Android X.X Web Browser | iOS XX Safari | iOS XX UIWebView | iOS XX WKWebView ]
<!-- All browsers where this could be reproduced -->
* **Language:** [all | TypeScript X.X | ES6/7 | ES5]
| defect | datatable with editmode cell doesn t work as expected there is no guarantee in receiving an immediate response in github issue tracker if you d like to secure our response you may consider primereact pro support where support is provided within business hours i m submitting a check one with x bug report feature request support request please do not submit support request here instead see current behavior cannot open line edit dropdown expected behavior dropdown in line edit should be made visible by anyway currently it just gets closed minimal reproduction of the problem with instructions if the current behavior is a bug or you can illustrate your feature request better with an example please provide the steps to reproduce and if possible a minimal demo of the problem via or similar you can use this template as a starting point please tell us about your environment react version primereact version browser language | 1 |
64,063 | 18,160,562,514 | IssuesEvent | 2021-09-27 09:08:02 | vector-im/element-ios | https://api.github.com/repos/vector-im/element-ios | opened | Unable to accept verification requests | T-Defect | ### Steps to reproduce
More details here: https://github.com/vector-im/element-android/issues/4085
I am unable to accept verification requests from this user.
### What happened?
### What did you expect?
I was expecting to receive a verification request. I received the notification for it, but when clicking it nothing happened. I've submitted rageshake logs associated with this.
### Your phone model
iPhone 11 Pro
### Operating system version
iOS 14.8
### Application version
Element version 1.5.4
### Homeserver
vector.modular.im
### Have you submitted a rageshake?
Yes | 1.0 | Unable to accept verification requests - ### Steps to reproduce
More details here: https://github.com/vector-im/element-android/issues/4085
I am unable to accept verification requests from this user.
### What happened?
### What did you expect?
I was expecting to receive a verification request. I received the notification for it, but when clicking it nothing happened. I've submitted rageshake logs associated with this.
### Your phone model
iPhone 11 Pro
### Operating system version
iOS 14.8
### Application version
Element version 1.5.4
### Homeserver
vector.modular.im
### Have you submitted a rageshake?
Yes | defect | unable to accept verification requests steps to reproduce more details here i am unable to accept verification requests from this user what happened what did you expect i was expecting to receive a verification request i received the notification for it but when clicking it nothing happened i ve submmited rageshake logs associated with this your phone model iphone pro operating system version ios application version element version homeserver vector modular im have you submitted a rageshake yes | 1 |
11,406 | 2,651,204,853 | IssuesEvent | 2015-03-16 09:41:54 | Budenzauber/truecrack-extended | https://api.github.com/repos/Budenzauber/truecrack-extended | reopened | OSX compile errors, warnings | auto-migrated Priority-Medium Type-Defect | ```
What steps will reproduce the problem?
1. compile with "make GPU=false"
2.
3.
What is the expected output? What do you see instead?
compiled truecrack_optimized version
cc -c -I./Common/ -I./Crypto/ -I./Cuda/ -I./Main/ -I./ -lm Main/Utils.c -o Utils.o
clang: warning: -lm: 'linker' input unused
In file included from Main/Utils.c:37:
./Crypto/CpuAes.h:243:1: warning: '/*' within block comment [-Wcomment]
/*
^
./Crypto/CpuAes.h:1221:9: warning: 'ALIGN' macro redefined
#define ALIGN
^
/usr/include/i386/param.h:83:9: note: previous definition is here
#define ALIGN(p) __DARWIN_ALIGN(p)
^
Main/Utils.c:126:9: error: non-void function 'file_readHeader' should return a value [-Wreturn-type]
return ;
^
2 warnings and 1 error generated.
make: *** [Utils.o] Error 1
What version of the product are you using? On what operating system?
OSX 10.9.2, Truecrack_optimized (May2012)
Please provide any additional information below.
```
Original issue reported on code.google.com by `robertre...@gmail.com` on 27 Feb 2014 at 10:09 | 1.0 | OSX compile errors, warnings - ```
What steps will reproduce the problem?
1. compile with "make GPU=false"
2.
3.
What is the expected output? What do you see instead?
compiled truecrack_optimized version
cc -c -I./Common/ -I./Crypto/ -I./Cuda/ -I./Main/ -I./ -lm Main/Utils.c -o Utils.o
clang: warning: -lm: 'linker' input unused
In file included from Main/Utils.c:37:
./Crypto/CpuAes.h:243:1: warning: '/*' within block comment [-Wcomment]
/*
^
./Crypto/CpuAes.h:1221:9: warning: 'ALIGN' macro redefined
#define ALIGN
^
/usr/include/i386/param.h:83:9: note: previous definition is here
#define ALIGN(p) __DARWIN_ALIGN(p)
^
Main/Utils.c:126:9: error: non-void function 'file_readHeader' should return a value [-Wreturn-type]
return ;
^
2 warnings and 1 error generated.
make: *** [Utils.o] Error 1
What version of the product are you using? On what operating system?
OSX 10.9.2, Truecrack_optimized (May2012)
Please provide any additional information below.
```
Original issue reported on code.google.com by `robertre...@gmail.com` on 27 Feb 2014 at 10:09 | defect | osx compile errors warnings what steps will reproduce the problem compile with make gpu false what is the expected output what do you see instead compiled truecrack optimized version cc c i common i crypto i cuda i main i lm main utils c o utils o clang warning lm linker input unused in file included from main utils c crypto cpuaes h warning within block comment crypto cpuaes h warning align macro redefined define align usr include param h note previous definition is here define align p darwin align p main utils c error non void function file readheader should return a value return warnings and error generated make error what version of the product are you using on what operating system osx truecrack optimized please provide any additional information below original issue reported on code google com by robertre gmail com on feb at | 1 |
60,508 | 17,023,444,373 | IssuesEvent | 2021-07-03 02:03:55 | tomhughes/trac-tickets | https://api.github.com/repos/tomhughes/trac-tickets | closed | language fallback mechanism could be more robust | Component: admin Priority: major Resolution: wontfix Type: defect | **[Submitted to the original trac issue database at 5.28am, Monday, 20th July 2009]**
There seems to be some sort of an error in Slovenian translation on page
http://www.openstreetmap.org/browse/way/27028715
which works perfectly with English locale.
My guess is that either some interpolation strings don't match the en.yml or there is some pluralization issue (missing key...).
In such cases the server could fallback to English (or whatever language is next on user's preference list :)).
Unfortunately the server hides the actual error with a generic error page, which tells us nothing about specific error.
Alternatively translations could be more thoroughly checked (pluralization keys, interpolation strings...) before using them (/scripts/locale/diff doesn't do that yet).
| 1.0 | language fallback mechanism could be more robust - **[Submitted to the original trac issue database at 5.28am, Monday, 20th July 2009]**
There seems to be some sort of an error in Slovenian translation on page
http://www.openstreetmap.org/browse/way/27028715
which works perfectly with English locale.
My guess is that either some interpolation strings don't match the en.yml or there is some pluralization issue (missing key...).
In such cases the server could fallback to English (or whatever language is next on user's preference list :)).
Unfortunately the server hides the actual error with a generic error page, which tells us nothing about specific error.
Alternatively translations could be more thoroughly checked (pluralization keys, interpolation strings...) before using them (/scripts/locale/diff doesn't do that yet).
| defect | language fallback mechanism could be more robust there seems to be some sort of an error in slovenian translation on page which works perfectly with english locale my guess is that either some interpolation strings don t match the en yml or there is some pluralization issue missing key in such cases the server could fallback to english or whatever language is next on user s preference list unfortunately the server hides the actual error with a generic error page which tells us nothing about specific error alternatively translations could be more thoroughly checked pluralization keys interpolation strings before using them scripts locale diff doesn t do that yet | 1 |
31,038 | 6,412,152,935 | IssuesEvent | 2017-08-08 01:56:45 | cakephp/cakephp | https://api.github.com/repos/cakephp/cakephp | closed | EntityTrait::setDirty($property, false) should return $this | Defect ORM | This is a (multiple allowed):
* [x] bug
* [ ] enhancement
* [ ] feature-discussion (RFC)
* CakePHP Version: 3.4.12.
`EntityTrait::setDirty($property, false)` returns `false` instead of `$this`.
https://github.com/cakephp/cakephp/blob/master/src/Datasource/EntityTrait.php#L756
I'm creating this issue because I don't have time to write a PR with tests. | 1.0 | EntityTrait::setDirty($property, false) should return $this - This is a (multiple allowed):
* [x] bug
* [ ] enhancement
* [ ] feature-discussion (RFC)
* CakePHP Version: 3.4.12.
`EntityTrait::setDirty($property, false)` returns `false` instead of `$this`.
https://github.com/cakephp/cakephp/blob/master/src/Datasource/EntityTrait.php#L756
I'm creating this issue because I don't have time to write a PR with tests. | defect | entitytrait setdirty property false should return this this is a multiple allowed bug enhancement feature discussion rfc cakephp version entitytrait setdirty property false returns false instead this i m creating this issue because i don t have time to write a pr with tests | 1
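The bug class in this report is easy to illustrate outside PHP: a fluent setter must return the object itself on every code path, or method chaining breaks. The `Entity` class below is a hypothetical Python stand-in, not CakePHP's actual `EntityTrait`.

```python
class Entity:
    """Toy entity tracking which properties have been modified."""

    def __init__(self):
        self._dirty = set()

    def set_dirty(self, prop, is_dirty=True):
        if not is_dirty:
            self._dirty.discard(prop)
            return self          # the reported bug returned False here instead
        self._dirty.add(prop)
        return self

    def is_dirty(self, prop):
        return prop in self._dirty
```

Returning `self` in the `is_dirty=False` branch keeps chains like `entity.set_dirty("title").set_dirty("title", False)` valid.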
6,799 | 2,860,725,074 | IssuesEvent | 2015-06-03 17:09:46 | alexweissman/UserFrosting | https://api.github.com/repos/alexweissman/UserFrosting | opened | New demo site | 0.3.0 testing website | There is a demo up for the new version at http://uf-demo.alexanderweissman.com/. Right now you get the dashboard placeholder page, and settings page. I'll add some groups and let people play with adding/removing users in another group later today. | 1.0 | New demo site - There is a demo up for the new version at http://uf-demo.alexanderweissman.com/. Right now you get the dashboard placeholder page, and settings page. I'll add some groups and let people play with adding/removing users in another group later today. | non_defect | new demo site there is a demo up for the new version at right now you get the dashboard placeholder page and settings page i ll add some groups and let people play with adding removing users in another group later today | 0 |
45,639 | 12,965,195,289 | IssuesEvent | 2020-07-20 21:52:19 | googlefonts/noto-fonts | https://api.github.com/repos/googlefonts/noto-fonts | closed | NotoSansDisplay-MM.glyphs glyph uniA652 has incompatible masters | Type-Defect |
NotoSansDisplay-MM.glyphs glyph uniA652 has incompatible masters and we cannot make a variable font for this glyph
fontmake currently generates this warning
WARNING:fontTools.varLib:glyph uniA652 has incompatible masters; skipping
this means that the glyph above won't have any variation; just the "base" shape. | 1.0 | NotoSansDisplay-MM.glyphs glyph uniA652 has incompatible masters -
NotoSansDisplay-MM.glyphs glyph uniA652 has incompatible masters and we cannot make a variable font for this glyph
fontmake currently generates this warning
WARNING:fontTools.varLib:glyph uniA652 has incompatible masters; skipping
this means that the glyph above won't have any variation; just the "base" shape. | defect | notosansdisplay mm glyphs glyph has incompatible masters notosansdisplay mm glyphs glyph has incompatible masters and we cannot make a variable font for this glyph fontmake currently generates this warning warning fonttools varlib glyph has incompatible masters skipping this means that the glyph above won t have any variation just the base shape | 1 |
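The "incompatible masters" warning comes from interpolation requirements: to vary a glyph between masters, every master must have the same contour structure (same number of contours, same number of points per contour, in the same order). The sketch below is a conceptual illustration of that constraint, not fontTools' actual compatibility check.

```python
def contour_structure(outline):
    """outline: list of contours, each a list of (x, y) points."""
    return [len(contour) for contour in outline]

def masters_compatible(*outlines):
    structures = [contour_structure(o) for o in outlines]
    return all(s == structures[0] for s in structures[1:])

def interpolate(outline_a, outline_b, t):
    """Linear blend between two compatible masters at position t in [0, 1]."""
    if not masters_compatible(outline_a, outline_b):
        # This is the situation varLib warns about: no variation is possible.
        raise ValueError("incompatible masters; skipping")
    return [
        [(ax + (bx - ax) * t, ay + (by - ay) * t)
         for (ax, ay), (bx, by) in zip(ca, cb)]
        for ca, cb in zip(outline_a, outline_b)
    ]
```

When a master has a different point count, per-point blending is undefined, which is why the glyph ends up frozen at its base shape.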
351,985 | 10,526,091,280 | IssuesEvent | 2019-09-30 16:19:49 | woocommerce/woocommerce-gateway-stripe | https://api.github.com/repos/woocommerce/woocommerce-gateway-stripe | closed | Use customer IDs when creating payment intents | Priority: High | A requirement from Stripe: we must save Merchants’ customers on Merchants’ Stripe accounts and make charge requests by specifying a Customer ID per the Stripe Documentation. | 1.0 | Use customer IDs when creating payment intents - A requirement from Stripe: we must save Merchants’ customers on Merchants’ Stripe accounts and make charge requests by specifying a Customer ID per the Stripe Documentation. | non_defect | use customer ids when creating payment intents a requirement from stripe we must save merchants’ customers on merchants’ stripe accounts and make charge requests by specifying a customer id per the stripe documentation | 0 |
39,259 | 9,358,560,257 | IssuesEvent | 2019-04-02 02:57:04 | SangheeKim/xerela | https://api.github.com/repos/SangheeKim/xerela | closed | HTTP ERROR: 404 | Priority-Medium Type-Defect auto-migrated | ```
What steps will reproduce the problem?
1.Install of the software
2.
3.
What is the expected output? I should be able to go to https://127.0.0.1:8080
and see the management console. What do you see instead? This: HTTP ERROR: 404
NOT_FOUND
RequestURI=/
Powered by jetty://
What version of the product are you using? On what operating system? Windows
2008 R2 64 bit
Please provide any additional information below.
```
Original issue reported on code.google.com by `surfa...@gmail.com` on 23 Jul 2013 at 3:33
| 1.0 | HTTP ERROR: 404 - ```
What steps will reproduce the problem?
1.Install of the software
2.
3.
What is the expected output? I should be able to go to https://127.0.0.1:8080
and see the management console. What do you see instead? This: HTTP ERROR: 404
NOT_FOUND
RequestURI=/
Powered by jetty://
What version of the product are you using? On what operating system? Windows
2008 R2 64 bit
Please provide any additional information below.
```
Original issue reported on code.google.com by `surfa...@gmail.com` on 23 Jul 2013 at 3:33
| defect | http error what steps will reproduce the problem install of the software what is the expected output i should be able to go to and see the management console what do you see instead this http error not found requesturi powered by jetty what version of the product are you using on what operating system windows bit please provide any additional information below original issue reported on code google com by surfa gmail com on jul at | 1
8,623 | 2,611,533,486 | IssuesEvent | 2015-02-27 06:04:18 | chrsmith/hedgewars | https://api.github.com/repos/chrsmith/hedgewars | closed | Blowtorch crash | auto-migrated Priority-Medium ReleaseBug-0.9.19 Type-Defect | ```
What steps will reproduce the problem?
1. In network game, use blowtorch
2. While it is doing its job, quit game/disconnect, so others see your team gone
What is the expected output? What do you see instead?
Engine crashes
What version of the product are you using? On what operating system?
0.9.19
```
Original issue reported on code.google.com by `unC0Rr` on 12 Aug 2013 at 1:52 | 1.0 | Blowtorch crash - ```
What steps will reproduce the problem?
1. In network game, use blowtorch
2. While it is doing its job, quit game/disconnect, so others see your team gone
What is the expected output? What do you see instead?
Engine crashes
What version of the product are you using? On what operating system?
0.9.19
```
Original issue reported on code.google.com by `unC0Rr` on 12 Aug 2013 at 1:52 | defect | blowtorch crash what steps will reproduce the problem in network game use blowtorch while it is doing its job quit game disconnect so others see your team gone what is the expected output what do you see instead engine crashes what version of the product are you using on what operating system original issue reported on code google com by on aug at | 1 |
46,096 | 13,055,851,648 | IssuesEvent | 2020-07-30 02:55:34 | icecube-trac/tix2 | https://api.github.com/repos/icecube-trac/tix2 | opened | Conditional execution doesn't work for Dump and I3Writer (Trac #578) | Incomplete Migration Migrated from Trac dataio defect | Migrated from https://code.icecube.wisc.edu/ticket/578
```json
{
"status": "closed",
"changetime": "2009-12-02T18:07:51",
"description": "I am not sure if this ever worked before but I just tried to conditionally execute the I3Writer and the Dump module and it didn't work even if these \nmodules inherit from I3ConditionalModule. The reason seems to be that the Process-method got re-implemented in these modules and therefore the \nShouldDoPhysics-method will never be called. There might be other modules out there with the same problem. Was this intended?",
"reporter": "tilo",
"cc": "tilo.waldenmaier@desy.de",
"resolution": "fixed",
"_ts": "1259777271000000",
"component": "dataio",
"summary": "Conditional execution doesn't work for Dump and I3Writer",
"priority": "normal",
"keywords": "conditional execution",
"time": "2009-11-24T10:17:25",
"milestone": "",
"owner": "troy",
"type": "defect"
}
```
| 1.0 | Conditional execution doesn't work for Dump and I3Writer (Trac #578) - Migrated from https://code.icecube.wisc.edu/ticket/578
```json
{
"status": "closed",
"changetime": "2009-12-02T18:07:51",
"description": "I am not sure if this ever worked before but I just tried to conditionally execute the I3Writer and the Dump module and it didn't work even if these \nmodules inherit from I3ConditionalModule. The reason seems to be that the Process-method got re-implemented in these modules and therefore the \nShouldDoPhysics-method will never be called. There might be other modules out there with the same problem. Was this intended?",
"reporter": "tilo",
"cc": "tilo.waldenmaier@desy.de",
"resolution": "fixed",
"_ts": "1259777271000000",
"component": "dataio",
"summary": "Conditional execution doesn't work for Dump and I3Writer",
"priority": "normal",
"keywords": "conditional execution",
"time": "2009-11-24T10:17:25",
"milestone": "",
"owner": "troy",
"type": "defect"
}
```
| defect | conditional execution doesn t work for dump and trac migrated from json status closed changetime description i am not sure if this ever worked before but i just tried to conditionally execute the and the dump module and it didn t work even if these nmodules inherit from the reason seems to be that the process method got re implemented in these modules and therefore the nshoulddophysics method will never be called there might be other modules out there with the same problem was this intended reporter tilo cc tilo waldenmaier desy de resolution fixed ts component dataio summary conditional execution doesn t work for dump and priority normal keywords conditional execution time milestone owner troy type defect | 1 |
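The diagnosis in this report is a classic template-method pitfall: when the base class puts the condition check inside `process()`, any module that re-implements `process()` silently bypasses `ShouldDoPhysics()`. The safe pattern keeps the gate in `process()` and has subclasses override a separate hook. The class names below merely echo the report; the code is a hypothetical Python sketch, not IceCube's framework.

```python
class ConditionalModule:
    def __init__(self, condition=lambda frame: True):
        self.should_do_physics = condition

    def process(self, frame):
        # The conditional gate lives here, once, for every subclass.
        if not self.should_do_physics(frame):
            return frame            # condition failed: pass the frame through
        return self.physics(frame)  # hook for subclasses to override

    def physics(self, frame):
        return frame

class Writer(ConditionalModule):
    # Correct: override the hook, keep the conditional gate.
    def physics(self, frame):
        return frame + ["written"]

class BrokenWriter(ConditionalModule):
    # The bug pattern from the report: re-implementing process()
    # skips the condition check entirely.
    def process(self, frame):
        return frame + ["written"]
```

Any module shaped like `BrokenWriter` will run even when its condition says not to, which matches the behaviour described for Dump and I3Writer.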
123,069 | 17,772,150,206 | IssuesEvent | 2021-08-30 14:47:54 | kapseliboi/postman-sandbox | https://api.github.com/repos/kapseliboi/postman-sandbox | opened | CVE-2021-33587 (High) detected in css-what-2.1.3.tgz | security vulnerability | ## CVE-2021-33587 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>css-what-2.1.3.tgz</b></p></summary>
<p>a CSS selector parser</p>
<p>Library home page: <a href="https://registry.npmjs.org/css-what/-/css-what-2.1.3.tgz">https://registry.npmjs.org/css-what/-/css-what-2.1.3.tgz</a></p>
<p>Path to dependency file: postman-sandbox/package.json</p>
<p>Path to vulnerable library: postman-sandbox/node_modules/css-what/package.json</p>
<p>
Dependency Hierarchy:
- cheerio-0.22.0.tgz (Root Library)
- css-select-1.2.0.tgz
- :x: **css-what-2.1.3.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/kapseliboi/postman-sandbox/commit/10929c07413c8c4f948eb1e876eccd61884f04a8">10929c07413c8c4f948eb1e876eccd61884f04a8</a></p>
<p>Found in base branch: <b>develop</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
The css-what package 4.0.0 through 5.0.0 for Node.js does not ensure that attribute parsing has Linear Time Complexity relative to the size of the input.
<p>Publish Date: 2021-05-28
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-33587>CVE-2021-33587</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-33587">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-33587</a></p>
<p>Release Date: 2021-05-28</p>
<p>Fix Resolution: css-what - 5.0.1</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | True | CVE-2021-33587 (High) detected in css-what-2.1.3.tgz - ## CVE-2021-33587 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>css-what-2.1.3.tgz</b></p></summary>
<p>a CSS selector parser</p>
<p>Library home page: <a href="https://registry.npmjs.org/css-what/-/css-what-2.1.3.tgz">https://registry.npmjs.org/css-what/-/css-what-2.1.3.tgz</a></p>
<p>Path to dependency file: postman-sandbox/package.json</p>
<p>Path to vulnerable library: postman-sandbox/node_modules/css-what/package.json</p>
<p>
Dependency Hierarchy:
- cheerio-0.22.0.tgz (Root Library)
- css-select-1.2.0.tgz
- :x: **css-what-2.1.3.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/kapseliboi/postman-sandbox/commit/10929c07413c8c4f948eb1e876eccd61884f04a8">10929c07413c8c4f948eb1e876eccd61884f04a8</a></p>
<p>Found in base branch: <b>develop</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
The css-what package 4.0.0 through 5.0.0 for Node.js does not ensure that attribute parsing has Linear Time Complexity relative to the size of the input.
<p>Publish Date: 2021-05-28
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-33587>CVE-2021-33587</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-33587">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-33587</a></p>
<p>Release Date: 2021-05-28</p>
<p>Fix Resolution: css-what - 5.0.1</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | non_defect | cve high detected in css what tgz cve high severity vulnerability vulnerable library css what tgz a css selector parser library home page a href path to dependency file postman sandbox package json path to vulnerable library postman sandbox node modules css what package json dependency hierarchy cheerio tgz root library css select tgz x css what tgz vulnerable library found in head commit a href found in base branch develop vulnerability details the css what package through for node js does not ensure that attribute parsing has linear time complexity relative to the size of the input publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution css what step up your open source security game with whitesource | 0 |
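The vulnerability class here is super-linear parsing of CSS attribute selectors: adversarial input makes parsing time blow up. A single left-to-right scan with no backtracking is linear by construction; the toy `[attr=value]` parser below illustrates that idea and is not css-what's actual grammar or fix.

```python
def parse_attribute(selector):
    """Parse '[name=value]' in one pass; return (name, value) or None."""
    if not (selector.startswith("[") and selector.endswith("]")):
        return None
    inner = selector[1:-1]
    eq = inner.find("=")          # single scan, O(n) in the input length
    if eq == -1:
        return (inner, None)      # existence test like [disabled]
    return (inner[:eq], inner[eq + 1:])
```

Because every character is examined at most once, a 100,000-character attribute value parses in the same pass as a short one; there is no input that forces re-scanning.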
310,995 | 9,526,944,368 | IssuesEvent | 2019-04-29 00:26:56 | Darkosto/SkyFactory4Issues | https://api.github.com/repos/Darkosto/SkyFactory4Issues | closed | Bounding Box pickup when using Carryon with Mekanism's Advanced Solar Panel | Category: Config Priority: High Type: Bug | <!-- Thank you for submitting an issue for the relevant topic. Please ensure that you fill in all the required information needed as specified by the template below. -->
<!-- Note: As you are reporting a bug, please ensure that you have logs uploaded to either PasteBin/Gist etc... No logs = Closing and Ignoring of the issue! -->
<!-- NOTE: If you have other mods installed or you have changed versions; please revert to a clean install and test again with a crash/bug before posting. -->
## Bug Report
<!--- If you're describing a bug, describe the current behaviour -->
<!--- If you're suggesting a change/improvement, tell us how it should work -->
When using carryon to pick up the Advanced Solar Panel (from Mekanism) from the top side you pick up a Bounding Block instead.
## Expected Behaviour
<!--- If describing a bug, tell us what happens instead of the expected behaviour -->
<!--- If suggesting a change/improvement, explain the difference from current behaviour -->
You should pick up the Advanced Solar Panel.
## Possible Solution
<!--- Not obligatory, but suggest a fix/reason for the bug, -->
<!--- or ideas how to implement the addition or change -->
Blacklist the Advanced Solar Panel from carryon.
## Steps to Reproduce (for bugs)
<!--- Provide a link to a live example, or an unambiguous set of steps to -->
1. Shift+right click the top face of an Advanced Solar Panel.
<!--- Add more if needed -->
## Logs
<!-- Twitch logs can be found in the installation directory for the Twitch App. Or click the ... button on SkyFactory and hit "Open Folder" -->
<!-- ATLauncher logs can be found in the installation directory. Or you can "Open Folder" from the launcher to view the instance. -->
<!-- Then upload the latest/crash logs to PasteBin or Gist. DON'T Upload them to GitHub -->
* Client/Server Log:
* Crash Log:
## World Information
<!-- Which Topography world are you using? -->
* Preset: Skyfactory 4
<!-- Do you have Prestige enabled? -->
* Prestige: Yes
<!-- Please provide the version of the modpack that the world was created in if known. Rough estimates are OK -->
* Modpack Version world created in: 4.0.3
<!-- If there are any additional mods, please state them below -->
* Additional Content Installed:
## Client Information
<!--- Include as many relevant details about the environment you experienced the bug in -->
<!-- Please tell us how much memory you have allocated to the game. For Twitch/ATLauncher look in the settings -->
* Modpack Version: 4.0.3
* Java Version: Java 8 update 211
* Launcher Used: Twitch
* Memory Allocated:
* Server/LAN/Single Player: Server
* Optifine Installed: No
* Shaders Enabled: No
<!--- Additional Information if you are using a server setup (DELETE THIS SECTION IF YOUR ISSUE IS CLIENT ONLY) -->
## Server Information
* Java Version: Java 8 update 211
* Operating System: Mac OS
* Hoster/Hosting Solution: Server files on own machine
* Sponge (Non-Vanilla Forge) Server: No | 1.0 | Bounding Box pickup when using Carryon with Mekanism's Advanced Solar Panel - <!-- Thank you for submitting an issue for the relevant topic. Please ensure that you fill in all the required information needed as specified by the template below. -->
<!-- Note: As you are reporting a bug, please ensure that you have logs uploaded to either PasteBin/Gist etc... No logs = Closing and Ignoring of the issue! -->
<!-- NOTE: If you have other mods installed or you have changed versions; please revert to a clean install and test again with a crash/bug before posting. -->
## Bug Report
<!--- If you're describing a bug, describe the current behaviour -->
<!--- If you're suggesting a change/improvement, tell us how it should work -->
When using carryon to pick up the Advanced Solar Panel (from Mekanism) from the top side you pick up a Bounding Block instead.
## Expected Behaviour
<!--- If describing a bug, tell us what happens instead of the expected behaviour -->
<!--- If suggesting a change/improvement, explain the difference from current behaviour -->
You should pick up the Advanced Solar Panel.
## Possible Solution
<!--- Not obligatory, but suggest a fix/reason for the bug, -->
<!--- or ideas how to implement the addition or change -->
Blacklist the Advanced Solar Panel from carryon.
## Steps to Reproduce (for bugs)
<!--- Provide a link to a live example, or an unambiguous set of steps to -->
1. Shift+right click the top face of an Advanced Solar Panel.
<!--- Add more if needed -->
## Logs
<!-- Twitch logs can be found in the installation directory for the Twitch App. Or click the ... button on SkyFactory and hit "Open Folder" -->
<!-- ATLauncher logs can be found in the installation directory. Or you can "Open Folder" from the launcher to view the instance. -->
<!-- Then upload the latest/crash logs to PasteBin or Gist. DON'T Upload them to GitHub -->
* Client/Server Log:
* Crash Log:
## World Information
<!-- Which Topography world are you using? -->
* Preset: Skyfactory 4
<!-- Do you have Prestige enabled? -->
* Prestige: Yes
<!-- Please provide the version of the modpack that the world was created in if known. Rough estimates are OK -->
* Modpack Version world created in: 4.0.3
<!-- If there are any additional mods, please state them below -->
* Additional Content Installed:
## Client Information
<!--- Include as many relevant details about the environment you experienced the bug in -->
<!-- Please tell us how much memory you have allocated to the game. For Twitch/ATLauncher look in the settings -->
* Modpack Version: 4.0.3
* Java Version: Java 8 update 211
* Launcher Used: Twitch
* Memory Allocated:
* Server/LAN/Single Player: Server
* Optifine Installed: No
* Shaders Enabled: No
<!--- Additional Information if you are using a server setup (DELETE THIS SECTION IF YOUR ISSUE IS CLIENT ONLY) -->
## Server Information
* Java Version: Java 8 update 211
* Operating System: Mac OS
* Hoster/Hosting Solution: Server files on own machine
* Sponge (Non-Vanilla Forge) Server: No | non_defect | bounding box pickup when using carryon with mekanism s advanced solar panel bug report when using carryon to pick up the advanced solar panel from mekanism from the top side you pick up a bounding block instead expected behaviour you should pick up the advanced solar panel possible solution blacklist the advanced solar panel from carryon steps to reproduce for bugs shift right click the top face of an advanced solar panel logs client server log crash log world information preset skyfactory prestige yes modpack version world created in additional content installed client information modpack version java version java update launcher used twitch memory allocated server lan single player server optifine installed no shaders enabled no server information java version java update operating system mac os hoster hosting solution server files on own machine sponge non vanilla forge server no | 0 |
80,988 | 30,647,327,884 | IssuesEvent | 2023-07-25 06:16:09 | primefaces/primefaces | https://api.github.com/repos/primefaces/primefaces | opened | panel: Panel without header leads to broken aria reference | :lady_beetle: defect :bangbang: needs-triage | ### Describe the bug
If you use a p:panel and don't declare a header attribute, you will get a broken aria reference ("aria-labelledby").
### Reproducer
Any use of just the panel will reproduce the issue:
`<p:panel>`
Using a header seems to fix it, but sometimes I don't need a header:
`<p:panel header="headertext">`
### Expected behavior
Aria labels should be fixed
### PrimeFaces edition
None
### PrimeFaces version
13.0.0
### Theme
_No response_
### JSF implementation
Mojarra
### JSF version
2.x
### Java version
11
### Browser(s)
_No response_ | 1.0 | panel: Panel without header leads to broken aria reference - ### Describe the bug
If you use a p:panel and don't declare a header attribute, you will get a broken aria reference ("aria-labelledby").
### Reproducer
Any use of just the panel will reproduce the issue:
`<p:panel>`
Using a header seems to fix it, but sometimes I don't need a header:
`<p:panel header="headertext">`
### Expected behavior
Aria labels should be fixed
### PrimeFaces edition
None
### PrimeFaces version
13.0.0
### Theme
_No response_
### JSF implementation
Mojarra
### JSF version
2.x
### Java version
11
### Browser(s)
_No response_ | defect | panel panel without header leads to broken aria reference describe the bug if you use a p panel and don t declare a header attribute you will get a broken aria reference aria labelledby reproducer any use of just the panel will reproduce the issue using a header seems to fix it but sometimes i don t need a header expected behavior aria labels should be fixed primefaces edition none primefaces version theme no response jsf implementation mojarra jsf version x java version browser s no response | 1 |
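Bugs of this shape can be caught mechanically by collecting every `id` in the rendered page and flagging `aria-labelledby` values that reference an id that does not exist. The stdlib-only sketch below shows the check; the sample HTML in the test is invented, not PrimeFaces' actual panel markup.

```python
from html.parser import HTMLParser

class AriaChecker(HTMLParser):
    """Collect element ids and every id referenced by aria-labelledby."""

    def __init__(self):
        super().__init__()
        self.ids = set()
        self.referenced = []

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if "id" in attrs:
            self.ids.add(attrs["id"])
        if "aria-labelledby" in attrs:
            # aria-labelledby may hold a space-separated id list.
            self.referenced.extend(attrs["aria-labelledby"].split())

def broken_aria_refs(html):
    checker = AriaChecker()
    checker.feed(html)
    return [ref for ref in checker.referenced if ref not in checker.ids]
```

Running this over rendered output of a header-less panel would surface the dangling reference the report describes.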
323,722 | 23,962,403,266 | IssuesEvent | 2022-09-12 20:25:07 | gravitational/teleport | https://api.github.com/repos/gravitational/teleport | closed | Update PagerDuty URL | documentation | ## Details
Update URL for PagerDuty on docs/pages/access-controls/access-request-plugins/index.mdx to https://goteleport.com/docs/access-controls/access-request-plugins/ssh-approval-pagerduty/ as the current URL reaches a 404
### Category
<!-- Delete non-applicable category -->
- Improve Existing
| 1.0 | Update PagerDuty URL - ## Details
Update URL for PagerDuty on docs/pages/access-controls/access-request-plugins/index.mdx to https://goteleport.com/docs/access-controls/access-request-plugins/ssh-approval-pagerduty/ as the current URL reaches a 404
### Category
<!-- Delete non-applicable category -->
- Improve Existing
| non_defect | update pagerduty url details update url for pagerduty on docs pages access controls access request plugins index mdx to as the current url reaches a category improve existing | 0 |
38,275 | 8,725,299,031 | IssuesEvent | 2018-12-10 09:00:15 | opencaching/opencaching-pl | https://api.github.com/repos/opencaching/opencaching-pl | closed | Quotation marks in nickname destroys SQL queries and profile view | Component UserProfile Priority High Type Defect | 

Also, registration (or nick change) should not allow setting a nick containing `'` or `"` marks
Reported by telefonalarmowy via mail | 1.0 | Quotation marks in nickname destroys SQL queries and profile view - 

Also, registration (or nick change) should not allow setting a nick containing `'` or `"` marks
Reported by telefonalarmowy via mail | defect | quotation marks in nickname destroys sql queries and profile view also registration or nick change should not allow to set nick with marks reported by telefonalarmowy via mail | 1 |
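The root cause described in this report is SQL built by string concatenation, where a quote in a nickname becomes SQL syntax. With parameterized queries the quote characters are data, so lookups keep working and injection is off the table. `sqlite3` below stands in for the site's real database layer; the schema is invented for illustration.

```python
import sqlite3

def find_user(conn, nickname):
    # Placeholder binding: the driver never splices the value into SQL text,
    # so quotes in the nickname cannot break or rewrite the query.
    row = conn.execute(
        "SELECT nickname FROM users WHERE nickname = ?", (nickname,)
    ).fetchone()
    return row[0] if row else None

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (nickname TEXT)")
conn.execute("INSERT INTO users VALUES (?)", ('O"Ha\'ra',))
```

A nickname containing both quote characters round-trips cleanly, and classic injection strings simply match no row.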
27,508 | 13,261,357,863 | IssuesEvent | 2020-08-20 19:45:05 | zcash/zcash | https://api.github.com/repos/zcash/zcash | opened | performance: startup time has increased - GetConsensus() | I-performance | I happened to notice that lately `zcashd` takes longer to start up. Most of the difference happens while the UI status is `Init message: Rewinding blocks if needed...`, and in `debug.log`, notice the approximately 45 seconds between the first two lines:
```
Aug 20 13:17:06.364 INFO Init: main: LoadBlockIndexDB: hashBestChain=00697f0bbd6cf51e93f2a91af0e94e09cad2f238c83b9f2d8e032ce808c90973 height=1046227 date=2020-08-18 18:49:04 progress=0.998992
Aug 20 13:17:52.058 INFO Init: main: Verifying last 288 blocks at level 3
Aug 20 13:17:52.436 INFO Init: main: No coin database inconsistencies in last 289 blocks (297 transactions)
Aug 20 13:17:52.436 INFO Init: main: block index 82678ms
```
I broke in with the debugger several times, and it seems to be spending much of its time in `GetConsensus()`:
```
#0 0x0000555555602ad1 in prevector<28u, unsigned char, unsigned int, int>::is_direct (this=0x5555ccdb0598) at prevector.h:158
#1 0x0000555555602a2e in prevector<28u, unsigned char, unsigned int, int>::item_ptr (this=0x5555ccdb0598, pos=20) at prevector.h:191
#2 0x00005555555f9f59 in prevector<28u, unsigned char, unsigned int, int>::prevector<prevector<28u, unsigned char, unsigned int, int>::const_iterator> (this=0x5555ccdb0598, first=..., last=...) at prevector.h:240
#3 0x00005555555f4857 in CScript::CScript (this=0x5555ccdb0598, b=...) at ./script/script.h:395
#4 0x0000555555678fc2 in boost::detail::variant::copy_into::internal_visit<CScript> (this=0x7fffffffc750, operand=...) at /home/larry/zcash/depends/x86_64-unknown-linux-gnu/share/../include/boost/variant/variant.hpp:458
#5 0x00005555556750fa in boost::detail::variant::visitation_impl_invoke_impl<boost::detail::variant::copy_into, void const*, CScript> (internal_which=1, visitor=..., storage=0x55555684a138) at /home/larry/zcash/depends/x86_64-unknown-linux-gnu/share/../include/boost/variant/detail/visitation_impl.hpp:126
#6 0x000055555566f01d in boost::detail::variant::visitation_impl_invoke<boost::detail::variant::copy_into, void const*, CScript, boost::variant<libzcash::SaplingPaymentAddress, CScript>::has_fallback_type_> (internal_which=1, visitor=..., storage=0x55555684a138, t=0x0)
at /home/larry/zcash/depends/x86_64-unknown-linux-gnu/share/../include/boost/variant/detail/visitation_impl.hpp:150
#7 0x000055555566740f in boost::detail::variant::visitation_impl<mpl_::int_<0>, boost::detail::variant::visitation_impl_step<boost::mpl::l_iter<boost::mpl::l_item<mpl_::long_<2l>, libzcash::SaplingPaymentAddress, boost::mpl::l_item<mpl_::long_<1l>, CScript, boost::mpl::l_end> > >, boost::mpl::l_iter<boost::mpl::l_end> >, boost::detail::variant::copy_into, void const*, boost::variant<libzcash::SaplingPaymentAddress, CScript>::has_fallback_type_> (internal_which=1, logical_which=1, visitor=..., storage=0x55555684a138, no_backup_flag=...) at /home/larry/zcash/depends/x86_64-unknown-linux-gnu/share/../include/boost/variant/detail/visitation_impl.hpp:231
#8 0x000055555565a6e7 in boost::variant<libzcash::SaplingPaymentAddress, CScript>::internal_apply_visitor_impl<boost::detail::variant::copy_into, void const*> (internal_which=1, logical_which=1, visitor=..., storage=0x55555684a138) at /home/larry/zcash/depends/x86_64-unknown-linux-gnu/share/../include/boost/variant/variant.hpp:2333
#9 0x000055555564c723 in boost::variant<libzcash::SaplingPaymentAddress, CScript>::internal_apply_visitor<boost::detail::variant::copy_into> (this=0x55555684a130, visitor=...) at /home/larry/zcash/depends/x86_64-unknown-linux-gnu/share/../include/boost/variant/variant.hpp:2354
#10 0x000055555563dea5 in boost::variant<libzcash::SaplingPaymentAddress, CScript>::variant (this=0x5555ccdb0590, operand=...) at /home/larry/zcash/depends/x86_64-unknown-linux-gnu/share/../include/boost/variant/variant.hpp:1759
#11 0x000055555562d108 in std::_Construct<boost::variant<libzcash::SaplingPaymentAddress, CScript>, boost::variant<libzcash::SaplingPaymentAddress, CScript> const&> (__p=0x5555ccdb0590) at /usr/include/c++/7/bits/stl_construct.h:75
#12 0x000055555561d310 in std::__uninitialized_copy<false>::__uninit_copy<__gnu_cxx::__normal_iterator<boost::variant<libzcash::SaplingPaymentAddress, CScript> const*, std::vector<boost::variant<libzcash::SaplingPaymentAddress, CScript>, std::allocator<boost::variant<libzcash::SaplingPaymentAddress, CScript> > > >, boost::variant<libzcash::SaplingPaymentAddress, CScript>*> (__first=..., __last=..., __result=0x5555ccdafdb0) at /usr/include/c++/7/bits/stl_uninitialized.h:83
#13 0x000055555560f6f0 in std::uninitialized_copy<__gnu_cxx::__normal_iterator<boost::variant<libzcash::SaplingPaymentAddress, CScript> const*, std::vector<boost::variant<libzcash::SaplingPaymentAddress, CScript>, std::allocator<boost::variant<libzcash::SaplingPaymentAddress, CScript> > > >, boost::variant<libzcash::SaplingPaymentAddress, CScript>*> (__first=..., __last=..., __result=0x5555ccdafdb0) at /usr/include/c++/7/bits/stl_uninitialized.h:134
#14 0x0000555555602fa3 in std::__uninitialized_copy_a<__gnu_cxx::__normal_iterator<boost::variant<libzcash::SaplingPaymentAddress, CScript> const*, std::vector<boost::variant<libzcash::SaplingPaymentAddress, CScript>, std::allocator<boost::variant<libzcash::SaplingPaymentAddress, CScript> > > >, boost::variant<libzcash::SaplingPaymentAddress, CScript>*, boost::variant<libzcash::SaplingPaymentAddress, CScript> > (__first=..., __last=..., __result=0x5555ccdafdb0) at /usr/include/c++/7/bits/stl_uninitialized.h:289
#15 0x00005555555fa185 in std::vector<boost::variant<libzcash::SaplingPaymentAddress, CScript>, std::allocator<boost::variant<libzcash::SaplingPaymentAddress, CScript> > >::vector (this=0x7fffffffcc18, __x=...) at /usr/include/c++/7/bits/stl_vector.h:331
#16 0x00005555555f4908 in Consensus::FundingStream::FundingStream (this=0x7fffffffcc10, fs=...) at ./consensus/params.h:116
#17 0x00005555556f175f in boost::optional_detail::optional_base<Consensus::FundingStream>::construct (this=0x7fffffffcc08, val=...) at /home/larry/zcash/depends/x86_64-unknown-linux-gnu/share/../include/boost/optional/optional.hpp:402
#18 0x00005555556d892e in boost::optional_detail::optional_base<Consensus::FundingStream>::optional_base (this=0x7fffffffcc08, rhs=...) at /home/larry/zcash/depends/x86_64-unknown-linux-gnu/share/../include/boost/optional/optional.hpp:199
#19 0x00005555556cbfce in boost::optional<Consensus::FundingStream>::optional (this=0x7fffffffcc08) at /home/larry/zcash/depends/x86_64-unknown-linux-gnu/share/../include/boost/optional/optional.hpp:960
#20 0x00005555556cc10e in Consensus::Params::Params (this=0x7fffffffca40) at ./consensus/params.h:159
#21 0x00005555556a66fc in <lambda(const CBlockIndex*)>::operator()(const CBlockIndex *) const (__closure=0x7fffffffcd00, pindex=0x5555689e3080) at main.cpp:4841
#22 0x00005555556a7154 in RewindBlockIndex (chainparams=..., clearWitnessCaches=@0x7fffffffd0a6: false) at main.cpp:4937
#23 0x00005555555ea61a in AppInit2 (threadGroup=..., scheduler=...) at init.cpp:1446
#24 0x00005555555cd3f5 in AppInit (argc=argc@entry=2, argv=argv@entry=0x7fffffffdf18) at bitcoind.cpp:174
#25 0x00005555555cdce6 in main (argc=2, argv=0x7fffffffdf18) at bitcoind.cpp:206
```
I tried this small patch to simply cache the return value of `GetConsensus()`:
```
--- a/src/main.cpp
+++ b/src/main.cpp
@@ -4857,8 +4857,8 @@ bool RewindBlockIndex(const CChainParams& chainparams, bool& clearWitnessCaches)
//
// - BLOCK_ACTIVATES_UPGRADE is set only on blocks that activate upgrades.
// - nCachedBranchId for each block matches what we expect.
- auto sufficientlyValidated = [&chainparams](const CBlockIndex* pindex) {
- auto consensus = chainparams.GetConsensus();
+ const auto consensus = chainparams.GetConsensus();
+ auto sufficientlyValidated = [&consensus](const CBlockIndex* pindex) {
bool fFlagSet = pindex->nStatus & BLOCK_ACTIVATES_UPGRADE;
bool fFlagExpected = IsActivationHeightForAnyUpgrade(pindex->nHeight, consensus);
return fFlagSet == fFlagExpected &&
```
and with this change the `debug.log` looks like this:
```
Aug 20 13:24:55.553 INFO Init: main: LoadBlockIndexDB: hashBestChain=00697f0bbd6cf51e93f2a91af0e94e09cad2f238c83b9f2d8e032ce808c90973 height=1046227 date=2020-08-18 18:49:04 progress=0.998990
Aug 20 13:25:00.512 INFO Init: main: Verifying last 288 blocks at level 3
Aug 20 13:25:00.905 INFO Init: main: No coin database inconsistencies in last 289 blocks (297 transactions)
Aug 20 13:25:00.905 INFO Init: main: block index 37861ms
```
(only about 5 seconds). But I don't know if this is the best way to fix it (assuming we should fix it). It's probably better if `GetConsensus()` can do some caching internally, if possible.
I also wondered if there are other places that might be affected significantly if `GetConsensus()` is slower, and there are over 200 calls to that function. Most of them are not performance-critical, but I did notice, for example, many calls in `ContextualCheckTransaction()` -- and that function's performance is probably important. | True | performance: startup time has increased - GetConsensus() - I happened to notice that lately `zcashd` takes longer to start up. Most of the difference happens while the UI status is `Init message: Rewinding blocks if needed...`, and in `debug.log`, notice the approximately 45 seconds between the first two lines:
```
Aug 20 13:17:06.364 INFO Init: main: LoadBlockIndexDB: hashBestChain=00697f0bbd6cf51e93f2a91af0e94e09cad2f238c83b9f2d8e032ce808c90973 height=1046227 date=2020-08-18 18:49:04 progress=0.998992
Aug 20 13:17:52.058 INFO Init: main: Verifying last 288 blocks at level 3
Aug 20 13:17:52.436 INFO Init: main: No coin database inconsistencies in last 289 blocks (297 transactions)
Aug 20 13:17:52.436 INFO Init: main: block index 82678ms
```
I broke in with the debugger several times, and it seems to be spending much of its time `GetConsensus()`:
```
#0 0x0000555555602ad1 in prevector<28u, unsigned char, unsigned int, int>::is_direct (this=0x5555ccdb0598) at prevector.h:158
#1 0x0000555555602a2e in prevector<28u, unsigned char, unsigned int, int>::item_ptr (this=0x5555ccdb0598, pos=20) at prevector.h:191
#2 0x00005555555f9f59 in prevector<28u, unsigned char, unsigned int, int>::prevector<prevector<28u, unsigned char, unsigned int, int>::const_iterator> (this=0x5555ccdb0598, first=..., last=...) at prevector.h:240
#3 0x00005555555f4857 in CScript::CScript (this=0x5555ccdb0598, b=...) at ./script/script.h:395
#4 0x0000555555678fc2 in boost::detail::variant::copy_into::internal_visit<CScript> (this=0x7fffffffc750, operand=...) at /home/larry/zcash/depends/x86_64-unknown-linux-gnu/share/../include/boost/variant/variant.hpp:458
#5 0x00005555556750fa in boost::detail::variant::visitation_impl_invoke_impl<boost::detail::variant::copy_into, void const*, CScript> (internal_which=1, visitor=..., storage=0x55555684a138) at /home/larry/zcash/depends/x86_64-unknown-linux-gnu/share/../include/boost/variant/detail/visitation_impl.hpp:126
#6 0x000055555566f01d in boost::detail::variant::visitation_impl_invoke<boost::detail::variant::copy_into, void const*, CScript, boost::variant<libzcash::SaplingPaymentAddress, CScript>::has_fallback_type_> (internal_which=1, visitor=..., storage=0x55555684a138, t=0x0)
at /home/larry/zcash/depends/x86_64-unknown-linux-gnu/share/../include/boost/variant/detail/visitation_impl.hpp:150
#7 0x000055555566740f in boost::detail::variant::visitation_impl<mpl_::int_<0>, boost::detail::variant::visitation_impl_step<boost::mpl::l_iter<boost::mpl::l_item<mpl_::long_<2l>, libzcash::SaplingPaymentAddress, boost::mpl::l_item<mpl_::long_<1l>, CScript, boost::mpl::l_end> > >, boost::mpl::l_iter<boost::mpl::l_end> >, boost::detail::variant::copy_into, void const*, boost::variant<libzcash::SaplingPaymentAddress, CScript>::has_fallback_type_> (internal_which=1, logical_which=1, visitor=..., storage=0x55555684a138, no_backup_flag=...) at /home/larry/zcash/depends/x86_64-unknown-linux-gnu/share/../include/boost/variant/detail/visitation_impl.hpp:231
#8 0x000055555565a6e7 in boost::variant<libzcash::SaplingPaymentAddress, CScript>::internal_apply_visitor_impl<boost::detail::variant::copy_into, void const*> (internal_which=1, logical_which=1, visitor=..., storage=0x55555684a138) at /home/larry/zcash/depends/x86_64-unknown-linux-gnu/share/../include/boost/variant/variant.hpp:2333
#9 0x000055555564c723 in boost::variant<libzcash::SaplingPaymentAddress, CScript>::internal_apply_visitor<boost::detail::variant::copy_into> (this=0x55555684a130, visitor=...) at /home/larry/zcash/depends/x86_64-unknown-linux-gnu/share/../include/boost/variant/variant.hpp:2354
#10 0x000055555563dea5 in boost::variant<libzcash::SaplingPaymentAddress, CScript>::variant (this=0x5555ccdb0590, operand=...) at /home/larry/zcash/depends/x86_64-unknown-linux-gnu/share/../include/boost/variant/variant.hpp:1759
#11 0x000055555562d108 in std::_Construct<boost::variant<libzcash::SaplingPaymentAddress, CScript>, boost::variant<libzcash::SaplingPaymentAddress, CScript> const&> (__p=0x5555ccdb0590) at /usr/include/c++/7/bits/stl_construct.h:75
#12 0x000055555561d310 in std::__uninitialized_copy<false>::__uninit_copy<__gnu_cxx::__normal_iterator<boost::variant<libzcash::SaplingPaymentAddress, CScript> const*, std::vector<boost::variant<libzcash::SaplingPaymentAddress, CScript>, std::allocator<boost::variant<libzcash::SaplingPaymentAddress, CScript> > > >, boost::variant<libzcash::SaplingPaymentAddress, CScript>*> (__first=..., __last=..., __result=0x5555ccdafdb0) at /usr/include/c++/7/bits/stl_uninitialized.h:83
#13 0x000055555560f6f0 in std::uninitialized_copy<__gnu_cxx::__normal_iterator<boost::variant<libzcash::SaplingPaymentAddress, CScript> const*, std::vector<boost::variant<libzcash::SaplingPaymentAddress, CScript>, std::allocator<boost::variant<libzcash::SaplingPaymentAddress, CScript> > > >, boost::variant<libzcash::SaplingPaymentAddress, CScript>*> (__first=..., __last=..., __result=0x5555ccdafdb0) at /usr/include/c++/7/bits/stl_uninitialized.h:134
#14 0x0000555555602fa3 in std::__uninitialized_copy_a<__gnu_cxx::__normal_iterator<boost::variant<libzcash::SaplingPaymentAddress, CScript> const*, std::vector<boost::variant<libzcash::SaplingPaymentAddress, CScript>, std::allocator<boost::variant<libzcash::SaplingPaymentAddress, CScript> > > >, boost::variant<libzcash::SaplingPaymentAddress, CScript>*, boost::variant<libzcash::SaplingPaymentAddress, CScript> > (__first=..., __last=..., __result=0x5555ccdafdb0) at /usr/include/c++/7/bits/stl_uninitialized.h:289
#15 0x00005555555fa185 in std::vector<boost::variant<libzcash::SaplingPaymentAddress, CScript>, std::allocator<boost::variant<libzcash::SaplingPaymentAddress, CScript> > >::vector (this=0x7fffffffcc18, __x=...) at /usr/include/c++/7/bits/stl_vector.h:331
#16 0x00005555555f4908 in Consensus::FundingStream::FundingStream (this=0x7fffffffcc10, fs=...) at ./consensus/params.h:116
#17 0x00005555556f175f in boost::optional_detail::optional_base<Consensus::FundingStream>::construct (this=0x7fffffffcc08, val=...) at /home/larry/zcash/depends/x86_64-unknown-linux-gnu/share/../include/boost/optional/optional.hpp:402
#18 0x00005555556d892e in boost::optional_detail::optional_base<Consensus::FundingStream>::optional_base (this=0x7fffffffcc08, rhs=...) at /home/larry/zcash/depends/x86_64-unknown-linux-gnu/share/../include/boost/optional/optional.hpp:199
#19 0x00005555556cbfce in boost::optional<Consensus::FundingStream>::optional (this=0x7fffffffcc08) at /home/larry/zcash/depends/x86_64-unknown-linux-gnu/share/../include/boost/optional/optional.hpp:960
#20 0x00005555556cc10e in Consensus::Params::Params (this=0x7fffffffca40) at ./consensus/params.h:159
#21 0x00005555556a66fc in <lambda(const CBlockIndex*)>::operator()(const CBlockIndex *) const (__closure=0x7fffffffcd00, pindex=0x5555689e3080) at main.cpp:4841
#22 0x00005555556a7154 in RewindBlockIndex (chainparams=..., clearWitnessCaches=@0x7fffffffd0a6: false) at main.cpp:4937
#23 0x00005555555ea61a in AppInit2 (threadGroup=..., scheduler=...) at init.cpp:1446
#24 0x00005555555cd3f5 in AppInit (argc=argc@entry=2, argv=argv@entry=0x7fffffffdf18) at bitcoind.cpp:174
#25 0x00005555555cdce6 in main (argc=2, argv=0x7fffffffdf18) at bitcoind.cpp:206
```
I tried this small patch to simply cache the return value of `GetConsensus()`:
```
--- a/src/main.cpp
+++ b/src/main.cpp
@@ -4857,8 +4857,8 @@ bool RewindBlockIndex(const CChainParams& chainparams, bool& clearWitnessCaches)
//
// - BLOCK_ACTIVATES_UPGRADE is set only on blocks that activate upgrades.
// - nCachedBranchId for each block matches what we expect.
- auto sufficientlyValidated = [&chainparams](const CBlockIndex* pindex) {
- auto consensus = chainparams.GetConsensus();
+ const auto consensus = chainparams.GetConsensus();
+ auto sufficientlyValidated = [&consensus](const CBlockIndex* pindex) {
bool fFlagSet = pindex->nStatus & BLOCK_ACTIVATES_UPGRADE;
bool fFlagExpected = IsActivationHeightForAnyUpgrade(pindex->nHeight, consensus);
return fFlagSet == fFlagExpected &&
```
and with this change the `debug.log` looks like this:
```
Aug 20 13:24:55.553 INFO Init: main: LoadBlockIndexDB: hashBestChain=00697f0bbd6cf51e93f2a91af0e94e09cad2f238c83b9f2d8e032ce808c90973 height=1046227 date=2020-08-18 18:49:04 progress=0.998990
Aug 20 13:25:00.512 INFO Init: main: Verifying last 288 blocks at level 3
Aug 20 13:25:00.905 INFO Init: main: No coin database inconsistencies in last 289 blocks (297 transactions)
Aug 20 13:25:00.905 INFO Init: main: block index 37861ms
```
(only about 5 seconds). But I don't know if this is the best way to fix it (assuming we should fix it). It's probably better if `GetConsensus()` can do some caching internally, if possible.
I also wondered if there are other places that might be affected significantly if `GetConsensus()` is slower, and there are over 200 calls to that function. Most of them are not performance-critical, but I did notice, for example, many calls in `ContextualCheckTransaction()` -- and that function's performance is probably important. | non_defect | performance startup time has increased getconsensus i happened to notice that lately zcashd takes longer to start up most of the difference happens while the ui status is init message rewinding blocks if needed and in debug log notice the approximately seconds between the first two lines aug info init main loadblockindexdb hashbestchain height date progress aug info init main verifying last blocks at level aug info init main no coin database inconsistencies in last blocks transactions aug info init main block index i broke in with the debugger several times and it seems to be spending much of its time getconsensus in prevector is direct this at prevector h in prevector item ptr this pos at prevector h in prevector prevector const iterator this first last at prevector h in cscript cscript this b at script script h in boost detail variant copy into internal visit this operand at home larry zcash depends unknown linux gnu share include boost variant variant hpp in boost detail variant visitation impl invoke impl internal which visitor storage at home larry zcash depends unknown linux gnu share include boost variant detail visitation impl hpp in boost detail variant visitation impl invoke has fallback type internal which visitor storage t at home larry zcash depends unknown linux gnu share include boost variant detail visitation impl hpp in boost detail variant visitation impl boost detail variant visitation impl step libzcash saplingpaymentaddress boost mpl l item cscript boost mpl l end boost mpl l iter boost detail variant copy into void const boost variant has fallback type internal which logical which visitor storage no 
backup flag at home larry zcash depends unknown linux gnu share include boost variant detail visitation impl hpp in boost variant internal apply visitor impl internal which logical which visitor storage at home larry zcash depends unknown linux gnu share include boost variant variant hpp in boost variant internal apply visitor this visitor at home larry zcash depends unknown linux gnu share include boost variant variant hpp in boost variant variant this operand at home larry zcash depends unknown linux gnu share include boost variant variant hpp in std construct boost variant const p at usr include c bits stl construct h in std uninitialized copy uninit copy const std vector std allocator boost variant first last result at usr include c bits stl uninitialized h in std uninitialized copy const std vector std allocator boost variant first last result at usr include c bits stl uninitialized h in std uninitialized copy a const std vector std allocator boost variant boost variant first last result at usr include c bits stl uninitialized h in std vector std allocator vector this x at usr include c bits stl vector h in consensus fundingstream fundingstream this fs at consensus params h in boost optional detail optional base construct this val at home larry zcash depends unknown linux gnu share include boost optional optional hpp in boost optional detail optional base optional base this rhs at home larry zcash depends unknown linux gnu share include boost optional optional hpp in boost optional optional this at home larry zcash depends unknown linux gnu share include boost optional optional hpp in consensus params params this at consensus params h in operator const cblockindex const closure pindex at main cpp in rewindblockindex chainparams clearwitnesscaches false at main cpp in threadgroup scheduler at init cpp in appinit argc argc entry argv argv entry at bitcoind cpp in main argc argv at bitcoind cpp i tried this small patch to simply cache the return value of 
getconsensus a src main cpp b src main cpp bool rewindblockindex const cchainparams chainparams bool clearwitnesscaches block activates upgrade is set only on blocks that activate upgrades ncachedbranchid for each block matches what we expect auto sufficientlyvalidated const cblockindex pindex auto consensus chainparams getconsensus const auto consensus chainparams getconsensus auto sufficientlyvalidated const cblockindex pindex bool fflagset pindex nstatus block activates upgrade bool fflagexpected isactivationheightforanyupgrade pindex nheight consensus return fflagset fflagexpected and with this change the debug log looks like this aug info init main loadblockindexdb hashbestchain height date progress aug info init main verifying last blocks at level aug info init main no coin database inconsistencies in last blocks transactions aug info init main block index only about seconds but i don t know if this is the best way to fix it assuming we should fix it it s probably better if getconsensus can do some caching internally if possible i also wondered if there are other places that might be affected significantly if getconsensus is slower and there are over calls to that function most of them are not performance critical but i did notice for example many calls in contextualchecktransaction and that function s performance is probably important | 0 |
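The fix in the record above — hoisting `chainparams.GetConsensus()` out of the per-block lambda — is an instance of a general pattern, and the reporter's suggested alternative (caching inside the accessor) is the other half of it. A minimal, language-agnostic sketch in Python (the names `expensive_params`, `cached_params`, and the heights tuple are hypothetical stand-ins, not zcashd code):

```python
import functools
import time

def expensive_params():
    """Stand-in for a call such as GetConsensus() that copies a large struct."""
    time.sleep(0.001)  # simulate rebuilding funding streams, scripts, ...
    return {"upgrade_heights": (0, 100, 200)}

# Anti-pattern: the expensive call re-runs once per block index checked.
def sufficiently_validated_slow(height):
    params = expensive_params()
    return height in params["upgrade_heights"]

# Fix 1 (what the patch above does): evaluate once, capture the result.
_params = expensive_params()
def sufficiently_validated_fast(height):
    return height in _params["upgrade_heights"]

# Fix 2 (the reporter's alternative): cache inside the accessor itself,
# so the 200+ call sites benefit without each being edited.
@functools.lru_cache(maxsize=1)
def cached_params():
    return expensive_params()
```

Either way the per-item cost drops from one full copy per call to a single construction; in C++ the analogous change would presumably be binding the result as a `const` reference instead of copying it by value, as the backtrace's repeated `Consensus::Params::Params` copy-constructor frames suggest.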
61,998 | 8,565,268,343 | IssuesEvent | 2018-11-09 19:19:44 | honey-dos/web | https://api.github.com/repos/honey-dos/web | opened | Web Design | documentation good first issue help wanted | If you long press a list or todo item, it brings up the same bottom window pane to edit your object and it has the fields all populated with the data from the todo object.
```complex-add-list-todo```

```simple-add-list-or-todo```

```one-down-from-root```

```home-root-lists```

| 1.0 | Web Design - If you long press a list or todo item, it brings up the same bottom window pane to edit your object and it has the fields all populated with the data from the todo object.
```complex-add-list-todo```

```simple-add-list-or-todo```

```one-down-from-root```

```home-root-lists```

| non_defect | web design if you long press a list or todo item it brings up the same bottom window pane to edit your object and it has the fields all populated with the data from the todo object complex add list todo simple add list or todo one down from root home root lists | 0 |
11,915 | 2,668,989,229 | IssuesEvent | 2015-03-23 13:03:16 | contao/core-bundle | https://api.github.com/repos/contao/core-bundle | closed | Cannot load resource ".". | defect | > <a href="https://github.com/leofeyer"><img src="https://avatars.githubusercontent.com/u/1192057?v=3" align="left" width="42" height="42" hspace="10"></img></a> [Issue](https://github.com/contao/contao/issues/11) by @leofeyer
Sunday Jun 29, 2014 at 19:33 GMT
Right now, Symfony does not fully support the custom route we are using for our front end router. If you e.g. try to run `app/console cache:clear` on the command line, it will throw an Exception:
```
[Symfony\Component\Config\Exception\FileLoaderLoadException]
Cannot load resource ".".
```
@aschempp @contao/developers
| 1.0 | Cannot load resource ".". - > <a href="https://github.com/leofeyer"><img src="https://avatars.githubusercontent.com/u/1192057?v=3" align="left" width="42" height="42" hspace="10"></img></a> [Issue](https://github.com/contao/contao/issues/11) by @leofeyer
Sunday Jun 29, 2014 at 19:33 GMT
Right now, Symfony does not fully support the custom route we are using for our front end router. If you e.g. try to run `app/console cache:clear` on the command line, it will throw an Exception:
```
[Symfony\Component\Config\Exception\FileLoaderLoadException]
Cannot load resource ".".
```
@aschempp @contao/developers
| defect | cannot load resource by leofeyer sunday jun at gmt right now symfony does not fully support the custom route we are using for our front end router if you e g try to run app console cache clear on the command line it will throw an exception cannot load resource aschempp contao developers | 1 |
637,455 | 20,628,940,206 | IssuesEvent | 2022-03-08 03:11:41 | space-wizards/RobustToolbox | https://api.github.com/repos/space-wizards/RobustToolbox | opened | StringBuilder sandbox fail | Type: Discussion Size: Very Small Difficulty: 2-Medium Priority: 2-Before Release | ```
Sandbox violation: Access to method not allowed: [System.Runtime]System.Text.StringBuilder [System.Runtime]System.Text.StringBuilder.AppendLine([System.Runtime]System.Text.StringBuilder/AppendInterpolatedStringHandler&
```
Steps:
- Create stringbuilder from empty ctor
- Use AppendLine
- Fail
Creating stringbuilder with a string works okay as Content.Client already does this so not sure if they're different internally. | 1.0 | StringBuilder sandbox fail - ```
Sandbox violation: Access to method not allowed: [System.Runtime]System.Text.StringBuilder [System.Runtime]System.Text.StringBuilder.AppendLine([System.Runtime]System.Text.StringBuilder/AppendInterpolatedStringHandler&
```
Steps:
- Create stringbuilder from empty ctor
- Use AppendLine
- Fail
Creating stringbuilder with a string works okay as Content.Client already does this so not sure if they're different internally. | non_defect | stringbuilder sandbox fail sandbox violation access to method not allowed system text stringbuilder system text stringbuilder appendline system text stringbuilder appendinterpolatedstringhandler steps create stringbuilder from empty ctor use appendline fail creating stringbuilder with a string works okay as content client already does this so not sure if they re different internally | 0 |
271,307 | 29,418,955,746 | IssuesEvent | 2023-05-31 01:05:26 | MidnightBSD/src | https://api.github.com/repos/MidnightBSD/src | closed | CVE-2019-6706 (High) detected in freebsd-srcrelease/12.3.0 - autoclosed | Mend: dependency security vulnerability | ## CVE-2019-6706 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>freebsd-srcrelease/12.3.0</b></p></summary>
<p>
<p>FreeBSD src tree (read-only mirror)</p>
<p>Library home page: <a href=https://github.com/freebsd/freebsd-src.git>https://github.com/freebsd/freebsd-src.git</a></p>
<p>Found in HEAD commit: <a href="https://github.com/MidnightBSD/src/commit/816463d989cc5839c1cca2efb5bf2503408507fb">816463d989cc5839c1cca2efb5bf2503408507fb</a></p>
<p>Found in base branch: <b>stable/2.1</b></p></p>
</details>
</p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Source Files (2)</summary>
<p></p>
<p>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/lapi.c</b>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/lapi.c</b>
</p>
</details>
<p></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png?' width=19 height=20> Vulnerability Details</summary>
<p>
Lua 5.3.5 has a use-after-free in lua_upvaluejoin in lapi.c. For example, a crash outcome might be achieved by an attacker who is able to trigger a debug.upvaluejoin call in which the arguments have certain relationships.
<p>Publish Date: 2019-01-23
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2019-6706>CVE-2019-6706</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://nvd.nist.gov/vuln/detail/CVE-2019-6706">https://nvd.nist.gov/vuln/detail/CVE-2019-6706</a></p>
<p>Release Date: 2019-11-06</p>
<p>Fix Resolution: lua-debuginfo - 5.3.4-11,5.3.4-11;lua-libs - 5.3.4-11,5.3.4-11,5.3.4-11,5.3.4-11,5.3.4-11;lua - 5.3.4-11,5.3.4-11,5.3.4-11,5.3.4-11,5.3.4-11;lua-debugsource - 5.3.4-11,5.3.4-11;lua-libs-debuginfo - 5.3.4-11,5.3.4-11</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | True | CVE-2019-6706 (High) detected in freebsd-srcrelease/12.3.0 - autoclosed - ## CVE-2019-6706 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>freebsd-srcrelease/12.3.0</b></p></summary>
<p>
<p>FreeBSD src tree (read-only mirror)</p>
<p>Library home page: <a href=https://github.com/freebsd/freebsd-src.git>https://github.com/freebsd/freebsd-src.git</a></p>
<p>Found in HEAD commit: <a href="https://github.com/MidnightBSD/src/commit/816463d989cc5839c1cca2efb5bf2503408507fb">816463d989cc5839c1cca2efb5bf2503408507fb</a></p>
<p>Found in base branch: <b>stable/2.1</b></p></p>
</details>
</p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Source Files (2)</summary>
<p></p>
<p>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/lapi.c</b>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/lapi.c</b>
</p>
</details>
<p></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png?' width=19 height=20> Vulnerability Details</summary>
<p>
Lua 5.3.5 has a use-after-free in lua_upvaluejoin in lapi.c. For example, a crash outcome might be achieved by an attacker who is able to trigger a debug.upvaluejoin call in which the arguments have certain relationships.
<p>Publish Date: 2019-01-23
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2019-6706>CVE-2019-6706</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://nvd.nist.gov/vuln/detail/CVE-2019-6706">https://nvd.nist.gov/vuln/detail/CVE-2019-6706</a></p>
<p>Release Date: 2019-11-06</p>
<p>Fix Resolution: lua-debuginfo - 5.3.4-11,5.3.4-11;lua-libs - 5.3.4-11,5.3.4-11,5.3.4-11,5.3.4-11,5.3.4-11;lua - 5.3.4-11,5.3.4-11,5.3.4-11,5.3.4-11,5.3.4-11;lua-debugsource - 5.3.4-11,5.3.4-11;lua-libs-debuginfo - 5.3.4-11,5.3.4-11</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | non_defect | cve high detected in freebsd srcrelease autoclosed cve high severity vulnerability vulnerable library freebsd srcrelease freebsd src tree read only mirror library home page a href found in head commit a href found in base branch stable vulnerable source files lapi c lapi c vulnerability details lua has a use after free in lua upvaluejoin in lapi c for example a crash outcome might be achieved by an attacker who is able to trigger a debug upvaluejoin call in which the arguments have certain relationships publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution lua debuginfo lua libs lua lua debugsource lua libs debuginfo step up your open source security game with mend | 0 |
29,747 | 11,771,448,463 | IssuesEvent | 2020-03-16 00:01:16 | Agent-Fennec/modmail | https://api.github.com/repos/Agent-Fennec/modmail | closed | Fix 1 Security, 3 Maintainability issues in core\thread.py | Stale enhancement security | [CodeFactor](https://www.codefactor.io/repository/github/agent-fennec/modmail/overview/master) found multiple issues:
#### Try, Except, Pass detected.
[core\thread.py:138
](https://www.codefactor.io/repository/github/agent-fennec/modmail/source/master/core/thread.py#L138)
#### No exception type(s) specified
[core\thread.py:119
](https://www.codefactor.io/repository/github/agent-fennec/modmail/source/master/core/thread.py#L119)[core\thread.py:138
](https://www.codefactor.io/repository/github/agent-fennec/modmail/source/master/core/thread.py#L138)
#### Catching too general exception Exception
[core\thread.py:391
](https://www.codefactor.io/repository/github/agent-fennec/modmail/source/master/core/thread.py#L391) | True | Fix 1 Security, 3 Maintainability issues in core\thread.py - [CodeFactor](https://www.codefactor.io/repository/github/agent-fennec/modmail/overview/master) found multiple issues:
#### Try, Except, Pass detected.
[core\thread.py:138
](https://www.codefactor.io/repository/github/agent-fennec/modmail/source/master/core/thread.py#L138)
#### No exception type(s) specified
[core\thread.py:119
](https://www.codefactor.io/repository/github/agent-fennec/modmail/source/master/core/thread.py#L119)[core\thread.py:138
](https://www.codefactor.io/repository/github/agent-fennec/modmail/source/master/core/thread.py#L138)
#### Catching too general exception Exception
[core\thread.py:391
](https://www.codefactor.io/repository/github/agent-fennec/modmail/source/master/core/thread.py#L391) | non_defect | fix security maintainability issues in core thread py found multiple issues try except pass detected core thread py no exception type s specified core thread py catching too general exception exception core thread py | 0 |
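The three findings above name standard Python exception-handling smells. A toy illustration of the flagged patterns and the usual fix (hypothetical `channel` object, not the actual modmail code):

```python
import logging

# Findings 1 + 2: "Try, Except, Pass" with no exception type specified —
# a bare except silently swallows everything, even KeyboardInterrupt.
def close_thread_bad(channel):
    try:
        channel.delete()
    except:
        pass

# Finding 3: "except Exception" is only slightly better. Prefer the
# narrowest type that can actually occur here, and at least log it.
def close_thread_good(channel):
    try:
        channel.delete()
    except AttributeError as exc:  # e.g. channel is None or already gone
        logging.warning("could not delete channel: %s", exc)
```

Both variants survive a missing channel, but only the second leaves a trace of the failure in the logs.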
74,876 | 25,378,806,834 | IssuesEvent | 2022-11-21 15:58:03 | scipy/scipy | https://api.github.com/repos/scipy/scipy | closed | BUG: scipy.linalg.eig is missing documentation on the orthogonality of eigenvectors | defect scipy.linalg Documentation | ### Describe your issue.
Hello,
the documentation mentions that scipy.linalg.eig solves the generalised eigenvalue problem of a square matrix.
But it does not necessarily produce orthogonal eigenvectors, e.g. for the symmetric real matrix I attached here ([test_array.txt](https://github.com/AlexiaNomena/Tests_Files/tree/main/scipy_linalg))
I would like to suggest at least changing the documentation to "non-symmetric" matrices instead of "ordinary or generalised".
EDIT: The issue is rather on the orthogonality of the eigenvectors. The documentation should mention that `eigh` always computes orthogonal eigenvectors (if that is True) while `eig` does not necessarily do so
### Reproducing Code Example
```python
import pdb
import numpy as np
import scipy.linalg as splinalg
M = np.loadtxt("test_array.txt")
print("M is symmetric:", np.all(np.isclose(M, M.T))) ### test symmetry of M
M[np.isclose(M, np.zeros(M.shape))] = 0 # remove numerical zeros
E, U = splinalg.eigh(M) ### eigendecomposition for symmetric matrices
sort = False
Ec, Uc = splinalg.eig(M) ### eigendecomposition for general matrices
sortc = False
if sort:
E = np.real(E) # remove tiny imaginary parts (symmetric matrices have real eigenvalues)
sort = np.argsort(E)[::-1] # descending order
E = E[sort]
U = U[:, sort]
print("All eigenvalues are equal:", np.all(np.isclose(E, Ec))) # True
if sortc:
Ec = np.real(E) # remove tiny imaginary parts (symmetric matrices have real eigenvalues)
sortc = np.argsort(Ec)[::-1] # descending order
Ec = Ec[sortc]
Uc = Uc[:, sortc]
print("All eigenvectors are equal:", np.all(np.isclose(U, Uc))) # False
print("eigh eigenvalues", E)
### Recovering M from Eigendecomposition ####
SS = np.sqrt(np.diag(E))
tX0 = np.real(U.dot(SS))
Gram = tX0.dot(tX0.T)
SSc = np.sqrt(np.diag(Ec))
tX0c = np.real(Uc.dot(SSc))
Gramc = tX0c.dot(tX0c.T)
### Test general decomposition ###
SSc2 = np.diag(Ec)
gMc = Uc.dot(SSc2.dot(splinalg.inv(Uc)))
print("Gram matrices are the same:", np.all(np.isclose(Gram,Gramc))) # False
print("sp.linalg.eigh recovers M:", np.all(np.isclose(Gram, M))) # True
print("sp.linalg.eig recovers M:", np.all(np.isclose(M, Gramc))) # False
print("sp.linalg.eig recovers M (general decomposition):", np.all(np.isclose(M, gMc))) # True
```
### Error message
```shell
M is symmetric: True
eigh eigenvalues [ 0. 22.0865683 37.31990533 40.15032657 41.44211427
42.22858225 42.9206473 43.55066949 43.96554338 44.45700434
44.96375497 45.30847694 49.8142849 49.8142849 49.8142849
49.8142849 49.8142849 49.8142849 49.8142849 49.8142849
49.8142849 49.8142849 49.8142849 49.8142849 49.8142849
49.8142849 49.8142849 49.8142849 49.8142849 49.8142849
49.8142849 49.8142849 49.8142849 49.8142849 49.8142849
49.8142849 49.8142849 49.8142849 49.8142849 49.8142849
49.8142849 49.8142849 49.8142849 49.8142849 49.8142849
49.8142849 49.8142849 49.8142849 49.8142849 49.8142849
49.8142849 49.8142849 49.8142849 49.8142849 49.8142849
49.8142849 49.8142849 49.8142849 49.8142849 49.8142849
49.8142849 49.8142849 49.8142849 50.70736511 51.63337703
53.10796584 53.67034273 53.82246955 54.85078408 55.65500791
56.71016463 57.0178392 58.02958643 60.7358963 66.31964316
77.75053236 80.70438303 84.68630203 94.02987113 96.31747861
108.40075002 142.41016691 4181.29685039]
Gram matrices are the same: False
sp.linalg.eigh recovers M (iff orthogonal eigenvectors): True
sp.linalg.eig recovers M (iff orthogonal eigenvectors): False
sp.linalg.eig recovers M (general decomposition): True
```
### SciPy/NumPy/Python version information
1.7.3 1.21.5 sys.version_info(major=3, minor=10, micro=4, releaselevel='final', serial=0) | 1.0 | BUG: scipy.linalg.eig is missing documentation on the orthogonality of eigenvectors - ### Describe your issue.
Hello,
the documentation mentions that scipy.linalg.eig solves the generalised eigenvalue problem of a square matrix.
But it does not necessarily produce orthogonal eigenvectors, e.g. for the symmetric real matrix I attached here ([test_array.txt](https://github.com/AlexiaNomena/Tests_Files/tree/main/scipy_linalg))
I would like to suggest at least changing the documentation to "non-symmetric" matrices instead of "ordinary or generalised".
EDIT: The issue is rather on the orthogonality of the eigenvectors. The documentation should mention that `eigh` always computes orthogonal eigenvectors (if that is True) while `eig` does not necessarily do so
### Reproducing Code Example
```python
import pdb
import numpy as np
import scipy.linalg as splinalg
M = np.loadtxt("test_array.txt")
print("M is symmetric:", np.all(np.isclose(M, M.T))) ### test symmetry of M
M[np.isclose(M, np.zeros(M.shape))] = 0 # remove numerical zeros
E, U = splinalg.eigh(M) ### eigendecomposition for symmetric matrices
sort = False
Ec, Uc = splinalg.eig(M) ### eigendecomposition for general matrices
sortc = False
if sort:
E = np.real(E) # remove tiny imaginary parts (symmetric matrices have real eigenvalues)
sort = np.argsort(E)[::-1] # descending order
E = E[sort]
U = U[:, sort]
print("All eigenvalues are equal:", np.all(np.isclose(E, Ec))) # True
if sortc:
Ec = np.real(E) # remove tiny imaginary parts (symmetric matrices have real eigenvalues)
sortc = np.argsort(Ec)[::-1] # descending order
Ec = Ec[sortc]
Uc = Uc[:, sortc]
print("All eigenvectors are equal:", np.all(np.isclose(U, Uc))) # False
print("eigh eigenvalues", E)
### Recovering M from Eigendecomposition ####
SS = np.sqrt(np.diag(E))
tX0 = np.real(U.dot(SS))
Gram = tX0.dot(tX0.T)
SSc = np.sqrt(np.diag(Ec))
tX0c = np.real(Uc.dot(SSc))
Gramc = tX0c.dot(tX0c.T)
### Test general decomposition ###
SSc2 = np.diag(Ec)
gMc = Uc.dot(SSc2.dot(splinalg.inv(Uc)))
print("Gram matrices are the same:", np.all(np.isclose(Gram,Gramc))) # False
print("sp.linalg.eigh recovers M:", np.all(np.isclose(Gram, M))) # True
print("sp.linalg.eig recovers M:", np.all(np.isclose(M, Gramc))) # False
print("sp.linalg.eig recovers M (general decomposition):", np.all(np.isclose(M, gMc))) # True
```
### Error message
```shell
M is symmetric: True
eigh eigenvalues [ 0. 22.0865683 37.31990533 40.15032657 41.44211427
42.22858225 42.9206473 43.55066949 43.96554338 44.45700434
44.96375497 45.30847694 49.8142849 49.8142849 49.8142849
49.8142849 49.8142849 49.8142849 49.8142849 49.8142849
49.8142849 49.8142849 49.8142849 49.8142849 49.8142849
49.8142849 49.8142849 49.8142849 49.8142849 49.8142849
49.8142849 49.8142849 49.8142849 49.8142849 49.8142849
49.8142849 49.8142849 49.8142849 49.8142849 49.8142849
49.8142849 49.8142849 49.8142849 49.8142849 49.8142849
49.8142849 49.8142849 49.8142849 49.8142849 49.8142849
49.8142849 49.8142849 49.8142849 49.8142849 49.8142849
49.8142849 49.8142849 49.8142849 49.8142849 49.8142849
49.8142849 49.8142849 49.8142849 50.70736511 51.63337703
53.10796584 53.67034273 53.82246955 54.85078408 55.65500791
56.71016463 57.0178392 58.02958643 60.7358963 66.31964316
77.75053236 80.70438303 84.68630203 94.02987113 96.31747861
108.40075002 142.41016691 4181.29685039]
Gram matrices are the same: False
sp.linalg.eigh recovers M (iff orthogonal eigenvectors): True
sp.linalg.eig recovers M (iff orthogonal eigenvectors): False
sp.linalg.eig recovers M (general decomposition): True
```
### SciPy/NumPy/Python version information
1.7.3 1.21.5 sys.version_info(major=3, minor=10, micro=4, releaselevel='final', serial=0) | defect | bug scipy linalg eig is missing documentation on the orthogonality of eigenvectors describe your issue hello the documentation mentions that scipy linalg eig solves generalised eigenvalue problem of a square matrix but it does not necessarily produce orthogonal eigenvectors e g symmetric real matrix i attached here i would like to suggest to at least change the documentation to non symmetric matrices instead of ordinary or generalised edit the issue is rather on the orthogonality of the eigenvectors the documentation should mention that eigh always computes orthogonal eigenvectors if that is true while eig does not necessarily do so reproducing code example python import pdb import numpy as np import scipy linalg as splinalg m np loadtxt test array txt print m is symmetric np all np isclose m m t test symmetry of m m remove numerical zeros e u splinalg eigh m eigendecomposition for symmetric matrices sort false ec uc splinalg eig m eigendecomposition for general matrices sortc false if sort e np real e remove tiny imaginary parts symmetric matrices have real eigenvalues sort np argsort e descending order e e u u print all eigenvalues are equal np all np isclose e ec true if sortc ec np real e remove tiny imaginary parts symmetric matrices have real eigenvalues sortc np argsort ec descending order ec ec uc uc print all eigenvectors are equal np all np isclose u uc false print eigh eigenvalues e recovering m from eigendecomposition ss np sqrt np diag e np real u dot ss gram dot t ssc np sqrt np diag ec np real uc dot ssc gramc dot t test general decomposition np diag ec gmc uc dot dot splinalg inv uc print gram matrices are the same np all np isclose gram gramc false print sp linalg eigh recovers m np all np isclose gram m true print sp linalg eig recovers m np all np isclose m gramc false print sp linalg eig recovers m general decomposition np all np isclose m gmc 
true error message shell m is symmetric true eigh eigenvalues gram matrices are the same false sp linalg eigh recovers m iff orthogonal eigenvectors true sp linalg eig recovers m iff orthogonal eigenvectors false sp linalg eig recovers m general decomposition true scipy numpy python version information sys version info major minor micro releaselevel final serial | 1 |
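The distinction reported above can be checked directly; a minimal sketch using NumPy's analogous routines (assuming NumPy is available; `scipy.linalg.eigh`/`eig` behave the same way on this point):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))
M = A + A.T                                  # real symmetric matrix

# eigh is specialised for symmetric/Hermitian input: it returns real
# eigenvalues and an orthonormal eigenvector matrix U, so M = U E U^T.
E, U = np.linalg.eigh(M)
assert np.allclose(U.T @ U, np.eye(4))       # eigenvectors orthonormal
assert np.allclose(U @ np.diag(E) @ U.T, M)  # symmetric reconstruction

# eig assumes nothing about symmetry, so its eigenvectors need not be
# orthogonal; the always-valid reconstruction is the general U E U^-1.
Ec, Uc = np.linalg.eig(M)
assert np.allclose(Uc @ np.diag(Ec) @ np.linalg.inv(Uc), M)
```

With degenerate eigenvalues (as in the attached matrix, where 49.8142849 repeats), `eig` is free to return a non-orthogonal basis for the repeated eigenspace, which is exactly why the `U E U^T` reconstruction fails there while `U E U^-1` still works.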
54,331 | 13,578,687,422 | IssuesEvent | 2020-09-20 09:07:53 | lazydroid/auto-update-apk-client | https://api.github.com/repos/lazydroid/auto-update-apk-client | closed | SilentAutoUpdate is not silent | Priority-Medium Type-Defect auto-migrated | ```
What steps will reproduce the problem?
1. Rooted the device
2. Made a sample project
3. Used autoUpdateApk - worked as it should
4. Changed to silentAutoUpdate
What is the expected output? What do you see instead?
I expect the version to update without any user involvement, but the app is waiting
for the user to agree to the installation.
What version of the product are you using? On what operating system?
latest version
Please provide any additional information below.
```
Original issue reported on code.google.com by `entechAc...@gmail.com` on 7 Apr 2013 at 8:49
| 1.0 | SilentAutoUpdate is not silent - ```
What steps will reproduce the problem?
1. Rooted the device
2. Made a sample project
3. Used autoUpdateApk - worked as it should
4. Changed to silentAutoUpdate
What is the expected output? What do you see instead?
I expect the version to update without any user involvement, but the app is waiting
for the user to agree to the installation.
What version of the product are you using? On what operating system?
latest version
Please provide any additional information below.
```
Original issue reported on code.google.com by `entechAc...@gmail.com` on 7 Apr 2013 at 8:49
| defect | silentautoupdate is not silent what steps will reproduce the problem rooted the device made a sample project used autoupdateapk worked as it should changed to silentautoupdat what is the expected output what do you see instead i expect the version to update without any user involved but the app is wating for the user to agree the installation what version of the product are you using on what operating system latest version please provide any additional information below original issue reported on code google com by entechac gmail com on apr at | 1 |
995 | 2,594,418,545 | IssuesEvent | 2015-02-20 03:07:37 | BALL-Project/ball | https://api.github.com/repos/BALL-Project/ball | opened | PDB HIP inter-fragment bonds missing | C: FragmentDB P: major T: defect | **Reported by wolfgang on 2 May 41904562 13:42 UTC**
PDB ID: 1JEM
PDB Residue ID: HIP 16 (ND1-PHOSPHONOHISTIDINE)
is missing inter-fragment bonds to ILE 14; ALA 16.
I'm not sure if that will clash with our handling of AMBERs HIP (= PDB HIS) | 1.0 | PDB HIP inter-fragment bonds missing - **Reported by wolfgang on 2 May 41904562 13:42 UTC**
PDB ID: 1JEM
PDB Residue ID: HIP 16 (ND1-PHOSPHONOHISTIDINE)
is missing inter-fragment bonds to ILE 14; ALA 16.
I'm not sure if that will clash with our handling of AMBERs HIP (= PDB HIS) | defect | pdb hip inter fragment bonds missing reported by wolfgang on may utc pdb id pdb residue id hip phosphonohistidine is missing inter fragment bonds to ile ala i m not sure if that will clash with our handling of ambers hip pdb his | 1 |
811,200 | 30,278,640,064 | IssuesEvent | 2023-07-07 22:41:40 | libp2p/rust-libp2p | https://api.github.com/repos/libp2p/rust-libp2p | closed | ci: better caching for interop-tests docker build | priority:nicetohave difficulty:moderate help wanted | ## Description
<!-- Describe the enhancement that you are proposing.-->
For every PR, we build a docker container of the interop tests. The dockerfile uses `--mount=type=cache` for the `target` directory: https://github.com/libp2p/rust-libp2p/blob/dda6fc5dd74db7e00321ea21d0b06fe1b2da7f83/interop-tests/Dockerfile#L12
This speeds up the build as long as the host system doesn't change, i.e. when building locally. For the ephemeral runners on GitHub Actions, this has no effect whatsoever and we completely rebuild all dependencies for every commit pushed to every PR.
We should figure out a way of how we can make this build faster and reuse this cache, ideally without any hacks to the Dockerfile.
The cache must be somewhere on disk, probably in a directory claimed by docker. Perhaps we can use GitHub actions cache to cache this directory between CI runs?
## Motivation
Faster builds and less resource waste.
<!-- Explain why this enhancement is beneficial.-->
## Current Implementation
<!-- Describe the current implementation. -->
## Are you planning to do it yourself in a pull request?
<!--Any contribution is greatly appreciated. We are more than happy to provide help on the process.-->
Maybe, but not in the near future.
| 1.0 | ci: better caching for interop-tests docker build - ## Description
<!-- Describe the enhancement that you are proposing.-->
For every PR, we build a docker container of the interop tests. The dockerfile uses `--mount=type=cache` for the `target` directory: https://github.com/libp2p/rust-libp2p/blob/dda6fc5dd74db7e00321ea21d0b06fe1b2da7f83/interop-tests/Dockerfile#L12
This speeds up the build as long as the host system doesn't change, i.e. when building locally. For the ephemeral runners on GitHub Actions, this has no effect whatsoever and we completely rebuild all dependencies for every commit pushed to every PR.
We should figure out a way of how we can make this build faster and reuse this cache, ideally without any hacks to the Dockerfile.
The cache must be somewhere on disk, probably in a directory claimed by docker. Perhaps we can use GitHub actions cache to cache this directory between CI runs?
## Motivation
Faster builds and less resource waste.
<!-- Explain why this enhancement is beneficial.-->
## Current Implementation
<!-- Describe the current implementation. -->
## Are you planning to do it yourself in a pull request?
<!--Any contribution is greatly appreciated. We are more than happy to provide help on the process.-->
Maybe, but not in the near future.
| non_defect | ci better caching for interop tests docker build description for every pr we build a docker container of the interop tests the dockerfile uses mount type cache for the target directory this speeds up the build as long as the host system doesn t change i e when building locally for the ephemeral runners on github actions this has no effect whatsoever and we completely rebuild all dependencies for every commit pushed to every pr we should figure out a way of how we can make this build faster and reuse this cache ideally without any hacks to the dockerfile the cache must be somewhere on disk probably in a directory claimed by docker perhaps we can use github actions cache to cache this directory between ci runs motivation faster builds and less resource waste current implementation are you planning to do it yourself in a pull request maybe but not in the near future | 0 |
290,315 | 32,060,695,420 | IssuesEvent | 2023-09-24 16:18:16 | hinoshiba/news | https://api.github.com/repos/hinoshiba/news | closed | [SecurityWeek] In Other News: New Analysis of Snowden Files, Yubico Goes Public, Election Hacking | SecurityWeek Stale |
Noteworthy stories that might have slipped under the radar: Snowden file analysis, Yubico starts trading, election hacking event.
The post [In Other News: New Analysis of Snowden Files, Yubico Goes Public, Election Hacking](https://www.securityweek.com/in-other-news-new-analysis-of-snowden-files-yubico-goes-public-election-hacking/) appeared first on [SecurityWeek](https://www.securityweek.com).
<https://www.securityweek.com/in-other-news-new-analysis-of-snowden-files-yubico-goes-public-election-hacking/>
| True | [SecurityWeek] In Other News: New Analysis of Snowden Files, Yubico Goes Public, Election Hacking -
Noteworthy stories that might have slipped under the radar: Snowden file analysis, Yubico starts trading, election hacking event.
The post [In Other News: New Analysis of Snowden Files, Yubico Goes Public, Election Hacking](https://www.securityweek.com/in-other-news-new-analysis-of-snowden-files-yubico-goes-public-election-hacking/) appeared first on [SecurityWeek](https://www.securityweek.com).
<https://www.securityweek.com/in-other-news-new-analysis-of-snowden-files-yubico-goes-public-election-hacking/>
| non_defect | in other news new analysis of snowden files yubico goes public election hacking noteworthy stories that might have slipped under the radar snowden file analysis yubico starts trading election hacking event the post appeared first on | 0 |
533,127 | 15,577,309,008 | IssuesEvent | 2021-03-17 13:24:06 | schemathesis/schemathesis | https://api.github.com/repos/schemathesis/schemathesis | closed | [FEATURE] Control the "code to reproduce" section style from CLI | Difficulty: Medium Priority: Low Type: Feature | **Is your feature request related to a problem? Please describe.**
At the moment, Schemathesis CLI always produces Python code that the end-user should run to reproduce the problem. It might not be desired, especially if Python is not the main language of the app under test or the user doesn't want to use it.
**Describe the solution you'd like**
Provide a way to control whether Python or cURL will be used in these code samples
Schemathesis 2.8.4
| 1.0 | [FEATURE] Control the "code to reproduce" section style from CLI - **Is your feature request related to a problem? Please describe.**
At the moment, Schemathesis CLI always produces Python code that the end-user should run to reproduce the problem. It might not be desired, especially if Python is not the main language of the app under test or the user doesn't want to use it.
**Describe the solution you'd like**
Provide a way to control whether Python or cURL will be used in these code samples
Schemathesis 2.8.4
| non_defect | control the code to reproduce section style from cli is your feature request related to a problem please describe at the moment schemathesis cli always produces python code that the end user should run to reproduce the problem it might not be desired especially if python is not the main language of the app under test or the user doesn t want to use it describe the solution you d like provide a way to control whether python or curl will be used in these code samples schemathesis | 0 |
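Rendering a reproduction as cURL instead of Python is mostly a quoting exercise; a small hedged sketch of the idea (illustrative only, not Schemathesis's actual implementation):

```python
import shlex

def as_curl(method, url, headers=None, data=None):
    # Build a paste-safe cURL command: every user-controlled piece is
    # shell-quoted so query strings and header values survive intact.
    parts = ["curl", "-X", method.upper(), shlex.quote(url)]
    for key, value in (headers or {}).items():
        parts += ["-H", shlex.quote(f"{key}: {value}")]
    if data is not None:
        parts += ["-d", shlex.quote(data)]
    return " ".join(parts)

cmd = as_curl("get", "http://127.0.0.1/users?id=1",
              headers={"Accept": "application/json"})
print(cmd)
```

A CLI flag could then simply select between this renderer and the existing Python-code renderer for the "code to reproduce" section.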
81,657 | 31,233,490,438 | IssuesEvent | 2023-08-20 01:25:16 | microsoft/TypeScript | https://api.github.com/repos/microsoft/TypeScript | closed | parseIsolatedEntityName doesn't return correct pos | Not a Defect | ### 🔎 Search Terms
parseIsolatedEntityName,pos,invalid
### 🕗 Version & Regression Information
Fails on `typescript@5.1.6` and `typescript@next`.
### 💻 Code
```ts
import ts from "npm:typescript@next";
console.log(ts.parseIsolatedEntityName(" aa ", ts.ScriptTarget.ESNext));
```
### 🙁 Actual behavior
```
$ deno run --check --allow-read code.ts
IdentifierObject {
pos: 0,
end: 4,
flags: 262144,
modifierFlagsCache: 0,
transformFlags: 0,
parent: undefined,
kind: 80,
escapedText: "aa",
jsDoc: undefined,
flowNode: undefined,
symbol: undefined
}
```
The value of `pos` is 0 even though the identifier doesn't start at position 0.
### 🙂 Expected behavior
```
$ deno run --check --allow-read code.ts
IdentifierObject {
pos: 2,
end: 4,
flags: 262144,
modifierFlagsCache: 0,
transformFlags: 0,
parent: undefined,
kind: 80,
escapedText: "aa",
jsDoc: undefined,
flowNode: undefined,
symbol: undefined
}
```
The value of `pos` is 2.
### Additional information about the issue
Deno is used for simplicity but same behavior would be displayed using nodejs | 1.0 | parseIsolatedEntityName doesn't return correct pos - ### 🔎 Search Terms
parseIsolatedEntityName,pos,invalid
### 🕗 Version & Regression Information
Fails on `typescript@5.1.6` and `typescript@next`.
### 💻 Code
```ts
import ts from "npm:typescript@next";
console.log(ts.parseIsolatedEntityName(" aa ", ts.ScriptTarget.ESNext));
```
### 🙁 Actual behavior
```
$ deno run --check --allow-read code.ts
IdentifierObject {
pos: 0,
end: 4,
flags: 262144,
modifierFlagsCache: 0,
transformFlags: 0,
parent: undefined,
kind: 80,
escapedText: "aa",
jsDoc: undefined,
flowNode: undefined,
symbol: undefined
}
```
The value of `pos` is 0 even though the identifier doesn't start at position 0.
### 🙂 Expected behavior
```
$ deno run --check --allow-read code.ts
IdentifierObject {
pos: 2,
end: 4,
flags: 262144,
modifierFlagsCache: 0,
transformFlags: 0,
parent: undefined,
kind: 80,
escapedText: "aa",
jsDoc: undefined,
flowNode: undefined,
symbol: undefined
}
```
The value of `pos` is 2.
### Additional information about the issue
Deno is used for simplicity but same behavior would be displayed using nodejs | defect | parseisolatedentityname doesn t return correct pos 🔎 search terms parseisolatedentityname pos invalid 🕗 version regression information fails on typescript and typescript next 💻 code ts import ts from npm typescript next console log ts parseisolatedentityname aa ts scripttarget esnext 🙁 actual behavior deno run check allow read code ts identifierobject pos end flags modifierflagscache transformflags parent undefined kind escapedtext aa jsdoc undefined flownode undefined symbol undefined the value of pos is even though the identifier doesn t start at position 🙂 expected behavior deno run check allow read code ts identifierobject pos end flags modifierflagscache transformflags parent undefined kind escapedtext aa jsdoc undefined flownode undefined symbol undefined the value of pos is additional information about the issue deno is used for simplicity but same behavior would be displayed using nodejs | 1 |
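The expectation above - that a node's `pos` marks where the identifier's text begins, not where scanning started - is easy to mirror in any language. A tiny Python analogue (illustrative; unrelated to the TypeScript parser internals):

```python
import re

def isolated_name(source):
    # Locate the first identifier-like token and report its start/end
    # offsets in the original string, mirroring pos/end on the node.
    match = re.search(r"[A-Za-z_$][A-Za-z0-9_$]*", source)
    if match is None:
        return None
    return {"text": match.group(0), "pos": match.start(), "end": match.end()}

print(isolated_name("  aa  "))  # {'text': 'aa', 'pos': 2, 'end': 4}
```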
294,996 | 9,064,231,069 | IssuesEvent | 2019-02-14 00:09:57 | samiha-rahman/SOEN341 | https://api.github.com/repos/samiha-rahman/SOEN341 | closed | Set up required infrastructure on the back end | 1 SP Priority: HIGH Risk: LOW back end epic | Set up the required infrastructure on the back end side of WordPress according to needs to support development of further features.
Relevant user stories:
- [x] Edit user registration procedure and create new user role #15
- [x] Create new post type #20 | 1.0 | Set up required infrastructure on the back end - Set up the required infrastructure on the back end side of WordPress according to needs to support development of further features.
Relevant user stories:
- [x] Edit user registration procedure and create new user role #15
- [x] Create new post type #20 | non_defect | set up required infrastructure on the back end set up the required infrastructure on the back end side of wordpress according to needs to support development of further features relevant user stories edit user registration procedure and create new user role create new post type | 0 |
43,638 | 7,056,810,188 | IssuesEvent | 2018-01-04 14:17:32 | janephp/openapi | https://api.github.com/repos/janephp/openapi | closed | No Host/Scheme in generated resources | documentation | Hi
with the current release of openapi, it does not generate a complete $url in the generated resource/* files.
This raises an issue with guzzle/curl, which throws the following exception:
```
URL error 3: <url> malformed (see http://curl.haxx.se/libcurl/c/libcurl-errors.html)" when emitting request: "<method> /<path>"
```
We are currently fixing this by manually setting the full URI on the first $url variable; after that everything works as it should.
Stack we currently using (according to composer.lock):
- guzzlehttp/guzzle: v6.3.0
- jane/openapi-runtime: v2.0.0
- jane/runtime: v2.1.0
- jane/jane: v3.0.0
- jane/open-api: v3.0.0
- php-http/guzzle6-adapter: v1.1.1
- php-http/httplug: v1.1.0
- php-http/httplug-bundle: v1.7.1
- php-http/message-factory: v.1.0.2
- guzzlehttp/promises: v1.3.1
- php-http/promise: v.1.0.0
- symfony/symfony: v.3.3.9
- php7.1.9-1+ubuntu16.04.1+deb.sury.org+1 | 1.0 | No Host/Scheme in generated resources - Hi
with the current release of openapi, it does not generate a complete $url in the generated resource/* files.
This raises an issue with guzzle/curl, which throws the following exception:
```
URL error 3: <url> malformed (see http://curl.haxx.se/libcurl/c/libcurl-errors.html)" when emitting request: "<method> /<path>"
```
We are currently fixing this by manually setting the full URI on the first $url variable; after that everything works as it should.
Stack we currently using (according to composer.lock):
- guzzlehttp/guzzle: v6.3.0
- jane/openapi-runtime: v2.0.0
- jane/runtime: v2.1.0
- jane/jane: v3.0.0
- jane/open-api: v3.0.0
- php-http/guzzle6-adapter: v1.1.1
- php-http/httplug: v1.1.0
- php-http/httplug-bundle: v1.7.1
- php-http/message-factory: v.1.0.2
- guzzlehttp/promises: v1.3.1
- php-http/promise: v.1.0.0
- symfony/symfony: v.3.3.9
- php7.1.9-1+ubuntu16.04.1+deb.sury.org+1 | non_defect | no host scheme in generated resources hi with the current release of openapi it does not generate a complete url in the generated resource files this raises a issue with guzzle curl which throw s the following exception url error malformed see when emitting request we currently fixing this by manually set the full uri on the first url variable after that everything works as it should stack we currently using according to composer lock guzzlehttp guzzle jane openapi runtime jane runtime jane jane jane open api php http adapter php http httplug php http httplug bundle php http message factory v guzzlehttp promises php http promise v symfony symfony v deb sury org | 0 |
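The workaround described above - prefixing the generated path with a host - can be sketched generically; the base URL below is a made-up placeholder, not anything Jane emits:

```python
from urllib.parse import urljoin, urlsplit

def ensure_absolute(url, base="https://api.example.com"):
    # Generated clients sometimes emit only a path ("/pets/1"); curl
    # then sees a URL with no scheme/host and reports it as malformed.
    if urlsplit(url).scheme:
        return url                       # already absolute, keep as-is
    return urljoin(base, url)

print(ensure_absolute("/pets/1"))               # https://api.example.com/pets/1
print(ensure_absolute("https://other.test/x"))  # unchanged
```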
81,290 | 30,783,544,248 | IssuesEvent | 2023-07-31 11:45:24 | vector-im/element-x-android | https://api.github.com/repos/vector-im/element-x-android | opened | Banner about actions in previous session after restarting app | T-Defect | ### Steps to reproduce
1. Experience #1006 and leave room
2. Restart app
### Outcome
#### What did you expect?
No banner saying that I've left a room after an app restart
#### What happened instead?
See banner saying that I've left room (still see that room which I am not a member of in the timeline)
### Your phone model
Pixel 6a
### Operating system version
Graphene OS
### Application version and app store
Nightly
### Homeserver
matrix.org
### Will you send logs?
Yes
### Are you willing to provide a PR?
No | 1.0 | Banner about actions in previous session after restarting app - ### Steps to reproduce
1. Experience #1006 and leave room
2. Restart app
### Outcome
#### What did you expect?
No banner saying that I've left a room after an app restart
#### What happened instead?
See banner saying that I've left room (still see that room which I am not a member of in the timeline)
### Your phone model
Pixel 6a
### Operating system version
Graphene OS
### Application version and app store
Nightly
### Homeserver
matrix.org
### Will you send logs?
Yes
### Are you willing to provide a PR?
No | defect | banner about actions in previous session after restarting app steps to reproduce experience and leave room restart app outcome what did you expect no banner saying that i ve left a room after an app restart what happened instead see banner saying that i ve left room still see that room which i am not a member of in the timeline your phone model pixel operating system version graphene os application version and app store nightly homeserver matrix org will you send logs yes are you willing to provide a pr no | 1 |
249,780 | 7,964,690,210 | IssuesEvent | 2018-07-13 22:50:02 | samsung-cnct/kraken-lib | https://api.github.com/repos/samsung-cnct/kraken-lib | closed | ec2_elb_facts sometimes fails fatally | bug kraken-lib priority-p2 | ```
FAILED - RETRYING: Wait for ELBs to be deleted (109 retries left).
An exception occurred during task execution. To see the full traceback, use -vvv. The error was: </ErrorResponse>
fatal: [localhost]: FAILED! => {"attempts": 13, "changed": false, "failed": true, "msg": "LoadBalancerNotFound: Cannot find Load Balancer ad7dc468e56cc11e7a4ac02570a8bbd5"}
```
This is an upstream bug which can be followed at:
https://github.com/ansible/ansible/issues/25982 | 1.0 | ec2_elb_facts sometimes fails fatally - ```
FAILED - RETRYING: Wait for ELBs to be deleted (109 retries left).
An exception occurred during task execution. To see the full traceback, use -vvv. The error was: </ErrorResponse>
fatal: [localhost]: FAILED! => {"attempts": 13, "changed": false, "failed": true, "msg": "LoadBalancerNotFound: Cannot find Load Balancer ad7dc468e56cc11e7a4ac02570a8bbd5"}
```
This is an upstream bug which can be followed at:
https://github.com/ansible/ansible/issues/25982 | non_defect | elb facts sometimes fails fatally failed retrying wait for elbs to be deleted retries left an exception occurred during task execution to see the full traceback use vvv the error was fatal failed attempts changed false failed true msg loadbalancernotfound cannot find load balancer this is an upstream bug which can be followed at | 0 |
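The upstream bug boils down to a deletion-wait loop treating `LoadBalancerNotFound` as fatal, when for a delete it is the success signal. A library-free sketch of the intended behaviour (names are illustrative, not the Ansible module's code):

```python
import time

class LoadBalancerNotFound(Exception):
    """Raised by a hypothetical describe call once the ELB is gone."""

def wait_for_elb_deletion(describe, name, retries=10, delay=0.0):
    # Poll until describe() fails with NotFound; "not found" means the
    # delete finished, so it must end the wait rather than crash it.
    for _ in range(retries):
        try:
            describe(name)
        except LoadBalancerNotFound:
            return True                  # deleted - success
        time.sleep(delay)
    return False                         # still present after retries

state = {"calls": 0}
def fake_describe(name):
    state["calls"] += 1
    if state["calls"] >= 3:              # deletion completes on call 3
        raise LoadBalancerNotFound(name)

print(wait_for_elb_deletion(fake_describe, "ad7dc468e56cc11e7a4ac02570a8bbd5"))  # True
```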
40,480 | 2,868,922,250 | IssuesEvent | 2015-06-05 21:58:51 | dart-lang/pub | https://api.github.com/repos/dart-lang/pub | closed | Use friendlier YAML formatting for pubspec.lock | enhancement Fixed Priority-Medium | <a href="https://github.com/munificent"><img src="https://avatars.githubusercontent.com/u/46275?v=3" align="left" width="96" height="96"hspace="10"></img></a> **Issue by [munificent](https://github.com/munificent)**
_Originally opened as dart-lang/sdk#5104_
----
The lock file that pub generates doesn't use any whitespace which makes it hard to read and unfriendly in diffs. We should try to write a cleaner, more human-friendly YAML file. | 1.0 | Use friendlier YAML formatting for pubspec.lock - <a href="https://github.com/munificent"><img src="https://avatars.githubusercontent.com/u/46275?v=3" align="left" width="96" height="96"hspace="10"></img></a> **Issue by [munificent](https://github.com/munificent)**
_Originally opened as dart-lang/sdk#5104_
----
The lock file that pub generates doesn't use any whitespace which makes it hard to read and unfriendly in diffs. We should try to write a cleaner, more human-friendly YAML file. | non_defect | use friendlier yaml formatting for pubspec lock issue by originally opened as dart lang sdk the lock file that pub generates doesn t use any whitespace which makes it hard to read and unfriendly in diffs we should try to write a cleaner more human friendly yaml file | 0 |
536,117 | 15,704,466,074 | IssuesEvent | 2021-03-26 15:01:48 | CMSCompOps/WmAgentScripts | https://api.github.com/repos/CMSCompOps/WmAgentScripts | opened | Make a case insensitive check for pilot keyword check in SubRequestType | New Feature Priority: High | **Impact of the new feature**
pilot workflows
**Is your feature request related to a problem? Please describe.**
PdmV provides `SubRequestType` with `Pilot` value for pilot workflows whereas Unified checks for `pilot` value: https://github.com/CMSCompOps/WmAgentScripts/blob/master/utils.py#L8138
**Describe the solution you'd like**
Make the check case insensitive
**Describe alternatives you've considered**
None
**Additional context**
None
| 1.0 | Make a case insensitive check for pilot keyword check in SubRequestType - **Impact of the new feature**
pilot workflows
**Is your feature request related to a problem? Please describe.**
PdmV provides `SubRequestType` with `Pilot` value for pilot workflows whereas Unified checks for `pilot` value: https://github.com/CMSCompOps/WmAgentScripts/blob/master/utils.py#L8138
**Describe the solution you'd like**
Make the check case insensitive
**Describe alternatives you've considered**
None
**Additional context**
None
| non_defect | make a case insensitive check for pilot keyword check in subrequesttype impact of the new feature pilot workflows is your feature request related to a problem please describe pdmv provides subrequesttype with pilot value for pilot workflows whereas unified checks for pilot value describe the solution you d like make the check case insensitive describe alternatives you ve considered none additional context none | 0 |
310,677 | 9,522,753,599 | IssuesEvent | 2019-04-27 11:31:36 | containrrr/watchtower | https://api.github.com/repos/containrrr/watchtower | closed | Log Level Issue | Priority: Medium Status: Available Type: Question | Hey,
I used v2tec Watchtower a longer time.
Was it renamed? Moved? Is there somewhere info about that?
With v2tec I always got a log entry for "First Run". And also got a mail for the "First Run" log level entry.
I changed to containrrr/watchtower and when i open the log there only is one line "Attaching to watchtower", no "First Run".
Using the environment variable "WATCHTOWER_NOFITICATIONS_LEVEL=debug" changes nothing. Log still empty.
Using the "--debug" gives me more information in the log. But still no mail.
I want again to get a mail every time watchtower is started and containers are updated.
My Compose:
version: "2"
services:
watchtower:
container_name: watchtower
image: containrrr/watchtower
restart: always
volumes:
- /var/run/docker.sock:/var/run/docker.sock
- /etc/timezone:/etc/timezone:ro
- /etc/localtime:/etc/localtime:ro
command: --debug
environment:
- WATCHTOWER_NOTIFICATIONS=email
- WATCHTOWER_NOTIFICATION_EMAIL_FROM=abc@abc.abc
- WATCHTOWER_NOTIFICATION_EMAIL_TO=abc@abc.abc
- WATCHTOWER_NOTIFICATION_EMAIL_SERVER=smtp.abc.abc
- WATCHTOWER_NOTIFICATION_EMAIL_SERVER_USER=abc@abc.abc
- WATCHTOWER_NOTIFICATION_EMAIL_SERVER_PASSWORD=secretpassword | 1.0 | Log Level Issue - Hey,
I used v2tec Watchtower a longer time.
Was it renamed? Moved? Is there somewhere info about that?
With v2tec I always got a log entry for "First Run". And also got a mail for the "First Run" log level entry.
I changed to containrrr/watchtower and when i open the log there only is one line "Attaching to watchtower", no "First Run".
Using the environment variable "WATCHTOWER_NOFITICATIONS_LEVEL=debug" changes nothing. Log still empty.
Using the "--debug" gives me more information in the log. But still no mail.
I want again to get a mail every time watchtower is started and containers are updated.
My Compose:
version: "2"
services:
watchtower:
container_name: watchtower
image: containrrr/watchtower
restart: always
volumes:
- /var/run/docker.sock:/var/run/docker.sock
- /etc/timezone:/etc/timezone:ro
- /etc/localtime:/etc/localtime:ro
command: --debug
environment:
- WATCHTOWER_NOTIFICATIONS=email
- WATCHTOWER_NOTIFICATION_EMAIL_FROM=abc@abc.abc
- WATCHTOWER_NOTIFICATION_EMAIL_TO=abc@abc.abc
- WATCHTOWER_NOTIFICATION_EMAIL_SERVER=smtp.abc.abc
- WATCHTOWER_NOTIFICATION_EMAIL_SERVER_USER=abc@abc.abc
- WATCHTOWER_NOTIFICATION_EMAIL_SERVER_PASSWORD=secretpassword | non_defect | log level issue hey i used watchtower a longer time was it renamed moved is there somewhere info about that with i always got a log entry for first run and also got a mail for the first run log level entry i changed to containrrr watchtower and when i open the log there only is one line attaching to watchtower no first run using the environment variable watchtower nofitications level debug changes nothing log still empty using the debug gives me more information in the log but still no mail i want again to get a mail every time watchtower is started and containers are updated my compose version services watchtower container name watchtower image containrrr watchtower restart always volumes var run docker sock var run docker sock etc timezone etc timezone ro etc localtime etc localtime ro command debug environment watchtower notifications email watchtower notification email from abc abc abc watchtower notification email to abc abc abc watchtower notification email server smtp abc abc watchtower notification email server user abc abc abc watchtower notification email server password secretpassword | 0 |
75,968 | 26,184,901,442 | IssuesEvent | 2023-01-02 21:53:55 | dotCMS/core | https://api.github.com/repos/dotCMS/core | closed | Relate Content screen thumbnails are not scaled correctly | Type : Defect dotCMS : User Interface WF: Can't reproduce Team : Lunik Next Release | [](https://mrkr.io/s/6392468927507e7e77984acc/0)
## Problem Statement
Relate Content screen thumbnails are not scaled correctly
## Steps to Reproduce
See screenshot
## Acceptance Criteria
Image should be cropped not stretched
## Wireframes / Designs / Prototypes
## Testing Notes
## Assumptions
## Estimates
## Sub-Tasks
---
**Reported by:** Jason A Smith (jason@dotcms.com)
**Source URL:** [https://local.dotcms.site:8443/dotAdmin/#/c/c_Blogs_list/d9355094-cc5b-4754-8301-a27d6bf3a902](https://local.dotcms.site:8443/dotAdmin/#/c/c_Blogs_list/d9355094-cc5b-4754-8301-a27d6bf3a902)
**Issue details:** [Open in Marker.io](https://app.marker.io/i/6392468927507e7e77984acf_e2c465caf54e77f7?advanced=1)
<table><tr><td><strong>Device type</strong></td><td>desktop</td></tr><tr><td><strong>Browser</strong></td><td>Chrome 108.0.0.0</td></tr><tr><td><strong>Screen Size</strong></td><td>3200 x 1333</td></tr><tr><td><strong>OS</strong></td><td>OS X 12.6.0</td></tr><tr><td><strong>Viewport Size</strong></td><td>1666 x 1007</td></tr><tr><td><strong>Zoom Level</strong></td><td>100%</td></tr><tr><td><strong>Pixel Ratio</strong></td><td>@​2x</td></tr></table> | 1.0 | Relate Content screen thumbnails are not scaled correctly - [](https://mrkr.io/s/6392468927507e7e77984acc/0)
## Problem Statement
Relate Content screen thumbnails are not scaled correctly
## Steps to Reproduce
See screenshot
## Acceptance Criteria
Image should be cropped not stretched
## Wireframes / Designs / Prototypes
## Testing Notes
## Assumptions
## Estimates
## Sub-Tasks
---
**Reported by:** Jason A Smith (jason@dotcms.com)
**Source URL:** [https://local.dotcms.site:8443/dotAdmin/#/c/c_Blogs_list/d9355094-cc5b-4754-8301-a27d6bf3a902](https://local.dotcms.site:8443/dotAdmin/#/c/c_Blogs_list/d9355094-cc5b-4754-8301-a27d6bf3a902)
**Issue details:** [Open in Marker.io](https://app.marker.io/i/6392468927507e7e77984acf_e2c465caf54e77f7?advanced=1)
<table><tr><td><strong>Device type</strong></td><td>desktop</td></tr><tr><td><strong>Browser</strong></td><td>Chrome 108.0.0.0</td></tr><tr><td><strong>Screen Size</strong></td><td>3200 x 1333</td></tr><tr><td><strong>OS</strong></td><td>OS X 12.6.0</td></tr><tr><td><strong>Viewport Size</strong></td><td>1666 x 1007</td></tr><tr><td><strong>Zoom Level</strong></td><td>100%</td></tr><tr><td><strong>Pixel Ratio</strong></td><td>@​2x</td></tr></table> | defect | relate content screen thumbnails are not scaled correctly problem statement relate content screen thumbnails are not scaled correctly steps to reproduce see screenshot acceptance criteria image should be cropped not stretched wireframes designs prototypes testing notes assumptions estimates sub tasks reported by jason a smith jason dotcms com source url issue details device type desktop browser chrome screen size x os os x viewport size x zoom level pixel ratio | 1 |
82,148 | 23,687,266,366 | IssuesEvent | 2022-08-29 07:36:00 | lowRISC/opentitan | https://api.github.com/repos/lowRISC/opentitan | closed | [bazel, sca] Ensure that englishbreakfast CI builds can use Bazel to build their SW artifacts. | Component:Software Component:Tooling Priority:P1 Component:CI SW:Bazel Adoption Requirement SW:Build System | See https://github.com/lowRISC/opentitan/pull/12083, which enables doing this (but does not turn it on in CI).
I don't understand what SCA CI is doing, so I would appreciate guidance here, @vogelpi | 1.0 | [bazel, sca] Ensure that englishbreakfast CI builds can use Bazel to build their SW artifacts. - See https://github.com/lowRISC/opentitan/pull/12083, which enables doing this (but does not turn it on in CI).
I don't understand what SCA CI is doing, so I would appreciate guidance here, @vogelpi | non_defect | ensure that englishbreakfast ci builds can use bazel to build their sw artifacts see which enables doing this but does not turn it on in ci i don t understand what sca ci is doing so i would appreciate guidance here vogelpi | 0 |
831,863 | 32,063,873,544 | IssuesEvent | 2023-09-24 23:54:04 | space-wizards/space-station-14 | https://api.github.com/repos/space-wizards/space-station-14 | closed | Replay feature request (The Trailer Park) | Status: Help Wanted Priority: 2-Before Release Issue: Feature Request Difficulty: 2-Medium | ## Description
<!-- Explain your issue in detail. Issues without proper explanation are liable to be closed by maintainers. -->
from the sacred mouth of bobda
- [ ] An easy way to tell what round number it is during a round as both a player and an admin, bonus points for it telling you round length too
- [ ] 16:9 viewport in replays
- [ ] hiding your ghost and other ghosts in replays
- [ ] lighting to not die when the camera is zoomed out slightly
- [ ] fast forward
| 1.0 | Replay feature request (The Trailer Park) - ## Description
<!-- Explain your issue in detail. Issues without proper explanation are liable to be closed by maintainers. -->
from the sacred mouth of bobda
- [ ] An easy way to tell what round number it is during a round as both a player and an admin, bonus points for it telling you round length too
- [ ] 16:9 viewport in replays
- [ ] hiding your ghost and other ghosts in replays
- [ ] lighting to not die when the camera is zoomed out slightly
- [ ] fast forward
| non_defect | replay feature request the trailer park description from the sacred mouth of bobda an easy way to tell what round number it is during a round as both a player and an admin bonus points for it telling you round length too viewport in replays hiding your ghost and other ghosts in replays lighting to not die when the camera is zoomed out slightly fast forward | 0 |
586,260 | 17,573,740,588 | IssuesEvent | 2021-08-15 07:34:10 | fosscord/fosscord-server | https://api.github.com/repos/fosscord/fosscord-server | opened | Can’t accept vanity urls | bug api high priority | We have two options to fix it, either store vanity urls in the Invites collection or also check for vanity urls in the accept invite route. | 1.0 | Can’t accept vanity urls - We have two options to fix it, either store vanity urls in the Invites collection or also check for vanity urls in the accept invite route. | non_defect | can’t accept vanity urls we have two options to fix it either store vanity urls in the invites collection or also check for vanity urls in the accept invite route | 0 |
60,953 | 17,023,565,804 | IssuesEvent | 2021-07-03 02:41:07 | tomhughes/trac-tickets | https://api.github.com/repos/tomhughes/trac-tickets | closed | osm2pgsql accepts multiple conflicting arguments | Component: nominatim Priority: major Resolution: invalid Type: defect | **[Submitted to the original trac issue database at 11.16am, Sunday, 21st March 2010]**
by accident, I specified the -O twice, but got no warning.
this is bad :
./osm2pgsql -O gazetteer -U gis -H localhost -lsc -O pgsql -C 2000 -d gisdb test.osm.bz2
osm2pgsql should check for duplicate and conflicting arguments, I can make a patch for this if needed.
mike | 1.0 | osm2pgsql accepts multiple conflicting arguments - **[Submitted to the original trac issue database at 11.16am, Sunday, 21st March 2010]**
by accident, I specified the -O twice, but got no warning.
this is bad :
./osm2pgsql -O gazetteer -U gis -H localhost -lsc -O pgsql -C 2000 -d gisdb test.osm.bz2
osm2pgsql should check for duplicate and conflicting arguments, I can make a patch for this if needed.
mike | defect | accepts multiple conflicting arguments by accident i specified the o twice but got no warning this is bad o gazetteer u gis h localhost lsc o pgsql c d gisdb test osm should check for duplicate and conflicting arguments i can make a patch for this if needed mike | 1 |
12,881 | 2,723,939,234 | IssuesEvent | 2015-04-14 15:15:37 | LarsTi/opcua4j | https://api.github.com/repos/LarsTi/opcua4j | closed | testGetChildren(bpi.most.server.services.opcua.server.MostNodeManagerTest) FAILS | auto-migrated Priority-Medium Type-Defect | ```
1. Checked out the MOST from GIT @ ..tuwien.ac.at
2. from Eclipse: maven clean, maven build
WIN7, JDK 1.6
Looked at MostOpcUaStandaloneServer.java and found:
private void run() {
String endpointUrl = "opc.tcp://127.0.0.1:6001/mostopcua";
String keyPhrase = "mostrulez";
String certPath = "META-INF/pki/server.pem";
String keyPath = "META-INF/pki/server.key";
Is this endpointUrl right?
```
Original issue reported on code.google.com by `crimblet...@gmail.com` on 5 Oct 2013 at 5:53 | 1.0 | testGetChildren(bpi.most.server.services.opcua.server.MostNodeManagerTest) FAILS - ```
1. Checked out the MOST from GIT @ ..tuwien.ac.at
2. from Eclipse: maven clean, maven build
WIN7, JDK 1.6
Looked at MostOpcUaStandaloneServer.java and found:
private void run() {
String endpointUrl = "opc.tcp://127.0.0.1:6001/mostopcua";
String keyPhrase = "mostrulez";
String certPath = "META-INF/pki/server.pem";
String keyPath = "META-INF/pki/server.key";
Is this endpointUrl right?
```
Original issue reported on code.google.com by `crimblet...@gmail.com` on 5 Oct 2013 at 5:53 | defect | testgetchildren bpi most server services opcua server mostnodemanagertest fails checked out the most from git tuwien ac at from eclipse maven clean maven build jdk looked at mostopcuastandaloneserver java and found private void run string endpointurl opc tcp mostopcua string keyphrase mostrulez string certpath meta inf pki server pem string keypath meta inf pki server key is this endpointurl right original issue reported on code google com by crimblet gmail com on oct at | 1 |
9,527 | 12,500,607,182 | IssuesEvent | 2020-06-01 22:43:58 | qgis/QGIS | https://api.github.com/repos/qgis/QGIS | closed | Using Fields Mapper parameter in Modeler throws "NotImplementedError" | Bug Processing | Steps to reproduce:
1. Open Processing Modeler.
2. From the Inputs panel, drag and drop a Fields Mapper parameter to the model.
3. Once you give the parameter a name and you click on OK, you get the error:
```
NotImplementedError
QgsProcessingParameterDefinition.type() is abstract and must be overridden
```
Using a just compiled master (2020.05.25). | 1.0 | Using Fields Mapper parameter in Modeler throws "NotImplementedError" - Steps to reproduce:
1. Open Processing Modeler.
2. From the Inputs panel, drag and drop a Fields Mapper parameter to the model.
3. Once you give the parameter a name and you click on OK, you get the error:
```
NotImplementedError
QgsProcessingParameterDefinition.type() is abstract and must be overridden
```
Using a just compiled master (2020.05.25). | non_defect | using fields mapper parameter in modeler throws notimplementederror steps to reproduce open processing modeler from the inputs panel drag and drop a fields mapper parameter to the model once you give the parameter a name and you click on ok you get the error notimplementederror qgsprocessingparameterdefinition type is abstract and must be overridden using a just compiled master | 0 |
126,047 | 10,374,396,840 | IssuesEvent | 2019-09-09 09:30:58 | readsoftware/ReadIssues | https://api.github.com/repos/readsoftware/ReadIssues | closed | Paleographic tagging does not update properly | Bug Priority 1 Test V1RC | When tagging in the properties window a syllable using a paleographic tag does not refresh the table layout.

| 1.0 | Paleographic tagging does not update properly - When tagging in the properties window a syllable using a paleographic tag does not refresh the table layout.

| non_defect | paleographic tagging does not update properly when tagging in the properties window a syllable using a paleographic tag does not refresh the table layout | 0 |
75,181 | 25,572,111,666 | IssuesEvent | 2022-11-30 18:34:41 | SeleniumHQ/selenium | https://api.github.com/repos/SeleniumHQ/selenium | closed | [🐛 Bug]: Selenium Manager not directly usable in Ubuntu 22.04 due to missing library libssl.so.1.1 | I-defect needs-triaging | ### What happened?
It seems that selenium-manager binary fails when trying to run it in a clean Ubuntu 22.04 machine. It is missing library **` libssl.so.1.1`**:
```console
root@ip-177-77-77-222:/home/ubuntu# cat /etc/lsb-release
DISTRIB_ID=Ubuntu
DISTRIB_RELEASE=22.04
DISTRIB_CODENAME=jammy
DISTRIB_DESCRIPTION="Ubuntu 22.04.1 LTS"
```
```console
root@ip-177-77-77-222:/home/ubuntu# ./selenium-manager --help
./selenium-manager: error while loading shared libraries: libssl.so.1.1: cannot open shared object file: No such file or directory
```
After installing the missing library manually, it worked fine:
```console
wget http://nz2.archive.ubuntu.com/ubuntu/pool/main/o/openssl/libssl1.1_1.1.1l-1ubuntu1.6_amd64.deb
sudo dpkg -i libssl1.1_1.1.1l-1ubuntu1.6_amd64.deb
```
```console
root@ip-172-31-21-204:/home/ubuntu# ./selenium-manager --help
selenium-manager 1.0.0-M1
Automated driver management for Selenium
Usage: selenium-manager [OPTIONS]
Options:
-b, --browser <BROWSER>
Browser name (chrome, firefox, or edge) [default: ]
-d, --driver <DRIVER>
Driver name (chromedriver, geckodriver, or msedgedriver) [default: ]
-v, --driver-version <DRIVER_VERSION>
Driver version (e.g., 106.0.5249.61, 0.31.0, etc.) [default: ]
-B, --browser-version <BROWSER_VERSION>
Major browser version (e.g., 105, 106, etc.) [default: ]
-D, --debug
Display DEBUG messages
-T, --trace
Display TRACE messages
-c, --clear-cache
Clear driver cache
-h, --help
Print help information
-V, --version
Print version information
```
I don't know if this is something expected, but I just find a little weird that Selenium Manager does not work out-of-the-box with the current LTS version of the most popular Linux distro.
### How can we reproduce the issue?
```shell
1) Initiate a clean Ubuntu 22.04 machine.
2) Run selenium-manager binary.
```
### Relevant log output
```shell
./selenium-manager: error while loading shared libraries: libssl.so.1.1: cannot open shared object file: No such file or directory
```
### Operating System
Ubuntu 22.04
### Selenium version
4.6.0
### What are the browser(s) and version(s) where you see this issue?
_Not relevant_
### What are the browser driver(s) and version(s) where you see this issue?
_Not relevant_
### Are you using Selenium Grid?
_Not relevant_ | 1.0 | [🐛 Bug]: Selenium Manager not directly usable in Ubuntu 22.04 due to missing library libssl.so.1.1 - ### What happened?
It seems that selenium-manager binary fails when trying to run it in a clean Ubuntu 22.04 machine. It is missing library **` libssl.so.1.1`**:
```console
root@ip-177-77-77-222:/home/ubuntu# cat /etc/lsb-release
DISTRIB_ID=Ubuntu
DISTRIB_RELEASE=22.04
DISTRIB_CODENAME=jammy
DISTRIB_DESCRIPTION="Ubuntu 22.04.1 LTS"
```
```console
root@ip-177-77-77-222:/home/ubuntu# ./selenium-manager --help
./selenium-manager: error while loading shared libraries: libssl.so.1.1: cannot open shared object file: No such file or directory
```
After installing the missing library manually, it worked fine:
```console
wget http://nz2.archive.ubuntu.com/ubuntu/pool/main/o/openssl/libssl1.1_1.1.1l-1ubuntu1.6_amd64.deb
sudo dpkg -i libssl1.1_1.1.1l-1ubuntu1.6_amd64.deb
```
```console
root@ip-172-31-21-204:/home/ubuntu# ./selenium-manager --help
selenium-manager 1.0.0-M1
Automated driver management for Selenium
Usage: selenium-manager [OPTIONS]
Options:
-b, --browser <BROWSER>
Browser name (chrome, firefox, or edge) [default: ]
-d, --driver <DRIVER>
Driver name (chromedriver, geckodriver, or msedgedriver) [default: ]
-v, --driver-version <DRIVER_VERSION>
Driver version (e.g., 106.0.5249.61, 0.31.0, etc.) [default: ]
-B, --browser-version <BROWSER_VERSION>
Major browser version (e.g., 105, 106, etc.) [default: ]
-D, --debug
Display DEBUG messages
-T, --trace
Display TRACE messages
-c, --clear-cache
Clear driver cache
-h, --help
Print help information
-V, --version
Print version information
```
I don't know if this is something expected, but I just find a little weird that Selenium Manager does not work out-of-the-box with the current LTS version of the most popular Linux distro.
### How can we reproduce the issue?
```shell
1) Initiate a clean Ubuntu 22.04 machine.
2) Run selenium-manager binary.
```
### Relevant log output
```shell
./selenium-manager: error while loading shared libraries: libssl.so.1.1: cannot open shared object file: No such file or directory
```
### Operating System
Ubuntu 22.04
### Selenium version
4.6.0
### What are the browser(s) and version(s) where you see this issue?
_Not relevant_
### What are the browser driver(s) and version(s) where you see this issue?
_Not relevant_
### Are you using Selenium Grid?
_Not relevant_ | defect | selenium manager not directly usable in ubuntu due to missing library libssl so what happened it seems that selenium manager binary fails when trying to run it in a clean ubuntu machine it is missing library libssl so console root ip home ubuntu cat etc lsb release distrib id ubuntu distrib release distrib codename jammy distrib description ubuntu lts console root ip home ubuntu selenium manager help selenium manager error while loading shared libraries libssl so cannot open shared object file no such file or directory after installing the missing library manually it worked fine console wget sudo dpkg i deb console root ip home ubuntu selenium manager help selenium manager automated driver management for selenium usage selenium manager options b browser browser name chrome firefox or edge d driver driver name chromedriver geckodriver or msedgedriver v driver version driver version e g etc b browser version major browser version e g etc d debug display debug messages t trace display trace messages c clear cache clear driver cache h help print help information v version print version information i don t know if this is something expected but i just find a little weird that selenium manager does not work out of the box with the current lts version of the most popular linux distro how can we reproduce the issue shell initiate a clean ubuntu machine run selenium manager binary relevant log output shell selenium manager error while loading shared libraries libssl so cannot open shared object file no such file or directory operating system ubuntu selenium version what are the browser s and version s where you see this issue not relevant what are the browser driver s and version s where you see this issue not relevant are you using selenium grid not relevant | 1 |
414,758 | 12,111,276,759 | IssuesEvent | 2020-04-21 11:56:56 | cms-gem-daq-project/cmsgemos | https://api.github.com/repos/cms-gem-daq-project/cmsgemos | closed | Detector cold start | Priority: High Type: Feature Request | <!--- Provide a general summary of the issue in the Title above -->
## Brief summary of issue
<!--- Provide a description of the issue, including any other issues or pull requests it references -->
At the moment there's no mechanism to provide a cold start (after powercycle) for the front-end. If the requested OH is not programmed, the system will go into error state, with no recovering actions possible inside `cmsgemos`. We need to provide the mechanism of cold start either during "Initialize" or "Configure" state transition.
### Types of issue
<!--- Proposed labels (see CONTRIBUTING.md) to help maintainers label your issue: -->
- [ ] Bug report (report an issue with the code)
- [x] Feature request (request for change which adds functionality)
## Expected Behavior
<!--- If you're describing a bug, tell us what should happen -->
<!--- If you're suggesting a change/improvement, tell us how it should work -->
Freshly powercycled front end should be initialized and configured correctly
## Current Behavior
<!--- If describing a bug, tell us what happens instead of the expected behavior -->
<!--- If suggesting a change/improvement, explain the difference from current behavior -->
The system will go into error state during the "Initialize" FSM transition
### Steps to Reproduce (for bugs)
<!--- Provide a link to a live example, or an unambiguous set of steps to -->
<!--- reproduce this bug. Include code to reproduce, if relevant -->
Powercycle OH, start the `cmsgemos` `gemsupervisor` application and press "Initialize" button
## Context
<!--- How has this issue affected you? What are you trying to accomplish? -->
<!--- Providing context helps us come up with a solution that is most useful in the real world -->
Requires manual intervention outside the `cmsgemos` to recover the front end to operational state. Significantly complicates debugging with the templated `rpc` modules as no tool for automatic front end recovery with templated `rpc` modules is present at the moment.
## Your Environment
<!--- Include as many relevant details about the environment you experienced the bug in -->
* Version used: valid for all stable and dev releases
* Shell used:
<!--- Template thanks to https://www.talater.com/open-source-templates/#/page/98 -->
| 1.0 | Detector cold start - <!--- Provide a general summary of the issue in the Title above -->
## Brief summary of issue
<!--- Provide a description of the issue, including any other issues or pull requests it references -->
At the moment there's no mechanism to provide a cold start (after powercycle) for the front-end. If the requested OH is not programmed, the system will go into error state, with no recovering actions possible inside `cmsgemos`. We need to provide the mechanism of cold start either during "Initialize" or "Configure" state transition.
### Types of issue
<!--- Proposed labels (see CONTRIBUTING.md) to help maintainers label your issue: -->
- [ ] Bug report (report an issue with the code)
- [x] Feature request (request for change which adds functionality)
## Expected Behavior
<!--- If you're describing a bug, tell us what should happen -->
<!--- If you're suggesting a change/improvement, tell us how it should work -->
Freshly powercycled front end should be initialized and configured correctly
## Current Behavior
<!--- If describing a bug, tell us what happens instead of the expected behavior -->
<!--- If suggesting a change/improvement, explain the difference from current behavior -->
The system will go into error state during the "Initialize" FSM transition
### Steps to Reproduce (for bugs)
<!--- Provide a link to a live example, or an unambiguous set of steps to -->
<!--- reproduce this bug. Include code to reproduce, if relevant -->
Powercycle OH, start the `cmsgemos` `gemsupervisor` application and press "Initialize" button
## Context
<!--- How has this issue affected you? What are you trying to accomplish? -->
<!--- Providing context helps us come up with a solution that is most useful in the real world -->
Requires manual intervention outside the `cmsgemos` to recover the front end to operational state. Significantly complicates debugging with the templated `rpc` modules as no tool for automatic front end recovery with templated `rpc` modules is present at the moment.
## Your Environment
<!--- Include as many relevant details about the environment you experienced the bug in -->
* Version used: valid for all stable and dev releases
* Shell used:
<!--- Template thanks to https://www.talater.com/open-source-templates/#/page/98 -->
| non_defect | detector cold start brief summary of issue at the moment there s no mechanism to provide a cold start after powercycle for the front end if the requested oh is not programmed the system will go into error state with no recovering actions possible inside cmsgemos we need to provide the mechanism of cold start either during initialize or configure state transition types of issue bug report report an issue with the code feature request request for change which adds functionality expected behavior freshly powercycled front end should be initialized and configured correctly current behavior the system will go into error state during the initialize fsm transition steps to reproduce for bugs powercycle oh start the cmsgemos gemsupervisor application and press initialize button context requires manual intervention outside the cmsgemos to recover the front end to operational state significantly complicates debugging with the templated rpc modules as no tool for automatic front end recovery with templated rpc modules is present at the moment your environment version used valid for all stable and dev releases shell used | 0 |
11,334 | 30,089,408,547 | IssuesEvent | 2023-06-29 11:07:47 | DependencyTrack/hyades | https://api.github.com/repos/DependencyTrack/hyades | closed | Proposal: Moving Mirroring Tasks On to Quarkus Application | proposal 🤔 architecture 🔮 domain/vuln-mirroring 🪞 | Current Implementation:
Currently the DT performs Mirroring of GitHub, OSV, NVD and EPSS on the scheduled time and also when user updates the config on the UI to enable mirroring for particular advisory.
In Current implementation, we sequentially download, parse and add vulnerabilities from the URL
Few bottlenecks with current approach:
1. Mirroring happens in DT and take lot of CPU and can interfere with other functionality of DT
2. All the ecosystems ( in case of OSV) and api hit for each year (NVD) happens sequentially, taking lot of time
Proposed Architecture:
Move the mirroring tasks out from DT to Quarkus Application and run the ecosystems in parallel.
Few positives outcomes from this approach:
1.The DT server can be offloaded from the mirror task and this could enhance the performance of DT
2.Parallel download of ecosystems can speed up the mirroring, however since the database remains with the DT, parallel processing of mirroring tasks could cause too many update requests to Database.
3. Can achieve parallelism using kafka streams consumer, which could be dynamically scaled
A rough sketch around the proposal : https://excalidraw.com/#room=2eabf2bfa48e3cdc9dd3,jrGbPL7Vgn2RwAXTnxeceg
| 1.0 | Proposal: Moving Mirroring Tasks On to Quarkus Application - Current Implementation:
Currently the DT performs Mirroring of GitHub, OSV, NVD and EPSS on the scheduled time and also when user updates the config on the UI to enable mirroring for particular advisory.
In Current implementation, we sequentially download, parse and add vulnerabilities from the URL
Few bottlenecks with current approach:
1. Mirroring happens in DT and take lot of CPU and can interfere with other functionality of DT
2. All the ecosystems ( in case of OSV) and api hit for each year (NVD) happens sequentially, taking lot of time
Proposed Architecture:
Move the mirroring tasks out from DT to Quarkus Application and run the ecosystems in parallel.
Few positives outcomes from this approach:
1.The DT server can be offloaded from the mirror task and this could enhance the performance of DT
2.Parallel download of ecosystems can speed up the mirroring, however since the database remains with the DT, parallel processing of mirroring tasks could cause too many update requests to Database.
3. Can achieve parallelism using kafka streams consumer, which could be dynamically scaled
A rough sketch around the proposal : https://excalidraw.com/#room=2eabf2bfa48e3cdc9dd3,jrGbPL7Vgn2RwAXTnxeceg
| non_defect | proposal moving mirroring tasks on to quarkus application current implementation currently the dt performs mirroring of github osv nvd and epss on the scheduled time and also when user updates the config on the ui to enable mirroring for particular advisory in current implementation we sequentially download parse and add vulnerabilities from the url few bottlenecks with current approach mirroring happens in dt and take lot of cpu and can interfere with other functionality of dt all the ecosystems in case of osv and api hit for each year nvd happens sequentially taking lot of time proposed architecture move the mirroring tasks out from dt to quarkus application and run the ecosystems in parallel few positives outcomes from this approach the dt server can be offloaded from the mirror task and this could enhance the performance of dt parallel download of ecosystems can speed up the mirroring however since the database remains with the dt parallel processing of mirroring tasks could cause too many update requests to database can achieve parallelism using kafka streams consumer which could be dynamically scaled a rough sketch around the proposal | 0 |
79,303 | 28,096,507,055 | IssuesEvent | 2023-03-30 16:09:11 | department-of-veterans-affairs/va.gov-team | https://api.github.com/repos/department-of-veterans-affairs/va.gov-team | closed | 508-defect-2: Focus on alert header after showing submit error | frontend 508/Accessibility 508-defect-2 Supplemental Claims benefits-team-1 squad-2 | ### Point of contact
Josh Kim
### Severity level
2, Serious. Should be fixed in 1-2 sprints post-launch.
### Details
After submitting an application but an error is thrown, focus should be sent to the alert's `H3`
<img width="671" alt="review & submit page showing a error alert with focus on header of you're decision review request didn't go through" src="https://user-images.githubusercontent.com/136959/225957659-90d56064-f3ed-418d-8ba1-f9e462131c20.png">
### Reproduction steps
1. I had to duplicate this issue locally by changing the API endpoint...
2. Get to the review & submit page and submit
### Proposed solution or next steps
The submission error component is provided to the form config, so within it we can shift focus to the `H3` once the alert is visible
### References, articles, or WCAG support
1.
2.
3.
...
### Type of issue
- [X] Screenreader
- [ ] Keyboard
- [X] Focus
- [ ] Headings
- [ ] Color
- [ ] Zoom
- [ ] Semantics
- [ ] Axe-core
- [ ] Something else | 1.0 | 508-defect-2: Focus on alert header after showing submit error - ### Point of contact
Josh Kim
### Severity level
2, Serious. Should be fixed in 1-2 sprints post-launch.
### Details
After submitting an application but an error is thrown, focus should be sent to the alert's `H3`
<img width="671" alt="review & submit page showing a error alert with focus on header of you're decision review request didn't go through" src="https://user-images.githubusercontent.com/136959/225957659-90d56064-f3ed-418d-8ba1-f9e462131c20.png">
### Reproduction steps
1. I had to duplicate this issue locally by changing the API endpoint...
2. Get to the review & submit page and submit
### Proposed solution or next steps
The submission error component is provided to the form config, so within it we can shift focus to the `H3` once the alert is visible
### References, articles, or WCAG support
1.
2.
3.
...
### Type of issue
- [X] Screenreader
- [ ] Keyboard
- [X] Focus
- [ ] Headings
- [ ] Color
- [ ] Zoom
- [ ] Semantics
- [ ] Axe-core
- [ ] Something else | defect | defect focus on alert header after showing submit error point of contact josh kim severity level serious should be fixed in sprints post launch details after submitting an application but an error is thrown focus should be sent to the alert s img width alt review submit page showing a error alert with focus on header of you re decision review request didn t go through src reproduction steps i had to duplicate this issue locally by changing the api endpoint get to the review submit page and submit proposed solution or next steps the submission error component is provided to the form config so within it we can shift focus to the once the alert is visible references articles or wcag support type of issue screenreader keyboard focus headings color zoom semantics axe core something else | 1 |
248,722 | 26,819,396,123 | IssuesEvent | 2023-02-02 08:20:11 | Trinadh465/linux-4.1.15_CVE-2017-1000371 | https://api.github.com/repos/Trinadh465/linux-4.1.15_CVE-2017-1000371 | opened | CVE-2019-19037 (Medium) detected in linux-stable-rtv4.1.33 | security vulnerability | ## CVE-2019-19037 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>linux-stable-rtv4.1.33</b></p></summary>
<p>
<p>Julia Cartwright's fork of linux-stable-rt.git</p>
<p>Library home page: <a href=https://git.kernel.org/pub/scm/linux/kernel/git/julia/linux-stable-rt.git>https://git.kernel.org/pub/scm/linux/kernel/git/julia/linux-stable-rt.git</a></p>
<p>Found in HEAD commit: <a href="https://github.com/Trinadh465/linux-4.1.15_CVE-2017-1000371/commit/8cc0bec3d85a996d6015c27f949826b9ffc4d1ae">8cc0bec3d85a996d6015c27f949826b9ffc4d1ae</a></p>
<p>Found in base branch: <b>master</b></p></p>
</details>
</p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Source Files (1)</summary>
<p></p>
<p>
</p>
</details>
<p></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
ext4_empty_dir in fs/ext4/namei.c in the Linux kernel through 5.3.12 allows a NULL pointer dereference because ext4_read_dirblock(inode,0,DIRENT_HTREE) can be zero.
Mend Note: After conducting further research, Mend has determined that versions v2.6.30-rc1-v4.9.207, v4.10-rc1-v4.14.160, v4.15-rc1--v4.19.91, v5.0-rc1--v5.4.6 and v5.5-rc1--v5.5-rc2 of Linux Kernel are vulnerable to CVE-2019-19037.
<p>Publish Date: 2019-11-21
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2019-19037>CVE-2019-19037</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: Required
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://www.linuxkernelcves.com/cves/CVE-2019-19037">https://www.linuxkernelcves.com/cves/CVE-2019-19037</a></p>
<p>Release Date: 2019-11-21</p>
<p>Fix Resolution: v4.9.208, v4.14.161, v4.19.92, v5.4.7, v5.5-rc3,</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | True | CVE-2019-19037 (Medium) detected in linux-stable-rtv4.1.33 - ## CVE-2019-19037 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>linux-stable-rtv4.1.33</b></p></summary>
<p>
<p>Julia Cartwright's fork of linux-stable-rt.git</p>
<p>Library home page: <a href=https://git.kernel.org/pub/scm/linux/kernel/git/julia/linux-stable-rt.git>https://git.kernel.org/pub/scm/linux/kernel/git/julia/linux-stable-rt.git</a></p>
<p>Found in HEAD commit: <a href="https://github.com/Trinadh465/linux-4.1.15_CVE-2017-1000371/commit/8cc0bec3d85a996d6015c27f949826b9ffc4d1ae">8cc0bec3d85a996d6015c27f949826b9ffc4d1ae</a></p>
<p>Found in base branch: <b>master</b></p></p>
</details>
</p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Source Files (1)</summary>
<p></p>
<p>
</p>
</details>
<p></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
ext4_empty_dir in fs/ext4/namei.c in the Linux kernel through 5.3.12 allows a NULL pointer dereference because ext4_read_dirblock(inode,0,DIRENT_HTREE) can be zero.
Mend Note: After conducting further research, Mend has determined that versions v2.6.30-rc1-v4.9.207, v4.10-rc1-v4.14.160, v4.15-rc1--v4.19.91, v5.0-rc1--v5.4.6 and v5.5-rc1--v5.5-rc2 of Linux Kernel are vulnerable to CVE-2019-19037.
<p>Publish Date: 2019-11-21
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2019-19037>CVE-2019-19037</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: Required
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://www.linuxkernelcves.com/cves/CVE-2019-19037">https://www.linuxkernelcves.com/cves/CVE-2019-19037</a></p>
<p>Release Date: 2019-11-21</p>
<p>Fix Resolution: v4.9.208, v4.14.161, v4.19.92, v5.4.7, v5.5-rc3,</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | non_defect | cve medium detected in linux stable cve medium severity vulnerability vulnerable library linux stable julia cartwright s fork of linux stable rt git library home page a href found in head commit a href found in base branch master vulnerable source files vulnerability details empty dir in fs namei c in the linux kernel through allows a null pointer dereference because read dirblock inode dirent htree can be zero mend note after conducting further research mend has determined that versions and of linux kernel are vulnerable to cve publish date url a href cvss score details base score metrics exploitability metrics attack vector local attack complexity low privileges required none user interaction required scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution step up your open source security game with mend | 0 |
65,336 | 19,406,141,973 | IssuesEvent | 2021-12-20 01:09:20 | vector-im/element-web | https://api.github.com/repos/vector-im/element-web | reopened | Calling someone at the same time they call you results in neither client being able to accept either call request. | T-Defect X-Needs-Info | ### Steps to reproduce
1. Call a friend at the same time they call you
2. Observe the "connecting..." dialogue
3. The call never starts unless you hang up and recall.
### Outcome
#### What did you expect?
It to either show the incoming call (could use the existing dialogue for when you're called from another individual while in a call) or automatically answer (since I sent a call request at the same time, to the exact same person)
#### What happened instead?
The call waits indefinitely. I've had situations happen where we both hit call at the same time, hit hang up, hit call at the same time, etc.
### Operating system
Windows
### Application version
Element version: 1.9.7 Olm version: 3.2.8
### How did you install the app?
https://community.chocolatey.org/packages/element-desktop
### Homeserver
yiff.social
### Will you send logs?
No | 1.0 | Calling someone at the same time they call you results in neither client being able to accept either call request. - ### Steps to reproduce
1. Call a friend at the same time they call you
2. Observe the "connecting..." dialogue
3. The call never starts unless you hang up and recall.
### Outcome
#### What did you expect?
It to either show the incoming call (could use the existing dialogue for when you're called from another individual while in a call) or automatically answer (since I sent a call request at the same time, to the exact same person)
#### What happened instead?
The call waits indefinitely. I've had situations happen where we both hit call at the same time, hit hang up, hit call at the same time, etc.
### Operating system
Windows
### Application version
Element version: 1.9.7 Olm version: 3.2.8
### How did you install the app?
https://community.chocolatey.org/packages/element-desktop
### Homeserver
yiff.social
### Will you send logs?
No | defect | calling someone at the same time they call you results in neither client being able to accept either call request steps to reproduce call a friend at the same time they call you observe the connecting dialogue the call never starts unless you hang up and recall outcome what did you expect it to either show the incoming call could use the existing dialogue for when you re called from another individual while in a call or automatically answer since i sent a call request at the same time to the exact same person what happened instead the call waits indefinitely i ve had situations happen where we both hit call at the same time hit hang up hit call at the same time etc operating system windows application version element version olm version how did you install the app homeserver yiff social will you send logs no | 1 |
20,064 | 3,293,846,084 | IssuesEvent | 2015-10-30 21:00:30 | biocodellc/biocode-fims | https://api.github.com/repos/biocodellc/biocode-fims | closed | Make data.biscicol.org robust | auto-migrated Priority-Medium Type-Defect | ```
What happens to FIMS if this goes down?
Why would it go down?
How do we prevent it from going down.
How do we test if this site goes down?
Database doing down on 8/16/2014 produced the following:
I was just testing a new version of the Biocode LIMS plugin and I can't seem to
get any results back from the new Biocode FIMS.
http://biscicol.org/biocode-fims/rest/query/json?project_id=5&graphs=urn:uuid:e2
2c08ae-1da5-44e9-8d43-674b5dbd3897
produces the following stack trace. I also tried using the page at
http://biscicol.org/biocode-fims/query.jsp but I'm only getting errors.
exception
javax.servlet.ServletException: settings.FIMSException
com.sun.jersey.spi.container.servlet.WebComponent.service(WebComponent.java:418)
com.sun.jersey.spi.container.servlet.ServletContainer.service(ServletContainer.java:537)
com.sun.jersey.spi.container.servlet.ServletContainer.service(ServletContainer.java:708)
javax.servlet.http.HttpServlet.service(HttpServlet.java:727)
org.apache.tomcat.websocket.server.WsFilter.doFilter(WsFilter.java:52)
root cause
settings.FIMSException
run.process.query(process.java:337)
rest.query.GETQueryResult(query.java:332)
rest.query.queryJson(query.java:42)
sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
java.lang.reflect.Method.invoke(Method.java:606)
com.sun.jersey.spi.container.JavaMethodInvokerFactory$1.invoke(JavaMethodInvokerFactory.java:60)
com.sun.jersey.server.impl.model.method.dispatch.AbstractResourceMethodDispatchProvider$ResponseOutInvoker._dispatch(AbstractResourceMethodDispatchProvider.java:205)
com.sun.jersey.server.impl.model.method.dispatch.ResourceJavaMethodDispatcher.dispatch(ResourceJavaMethodDispatcher.java:75)
com.sun.jersey.server.impl.uri.rules.HttpMethodRule.accept(HttpMethodRule.java:288)
com.sun.jersey.server.impl.uri.rules.RightHandPathRule.accept(RightHandPathRule.java:147)
com.sun.jersey.server.impl.uri.rules.ResourceClassRule.accept(ResourceClassRule.java:108)
com.sun.jersey.server.impl.uri.rules.RightHandPathRule.accept(RightHandPathRule.java:147)
com.sun.jersey.server.impl.uri.rules.RootResourceClassesRule.accept(RootResourceClassesRule.java:84)
com.sun.jersey.server.impl.application.WebApplicationImpl._handleRequest(WebApplicationImpl.java:1483)
com.sun.jersey.server.impl.application.WebApplicationImpl._handleRequest(WebApplicationImpl.java:1414)
com.sun.jersey.server.impl.application.WebApplicationImpl.handleRequest(WebApplicationImpl.java:1363)
com.sun.jersey.server.impl.application.WebApplicationImpl.handleRequest(WebApplicationImpl.java:1353)
com.sun.jersey.spi.container.servlet.WebComponent.service(WebComponent.java:414)
com.sun.jersey.spi.container.servlet.ServletContainer.service(ServletContainer.java:537)
com.sun.jersey.spi.container.servlet.ServletContainer.service(ServletContainer.java:708)
javax.servlet.http.HttpServlet.service(HttpServlet.java:727)
org.apache.tomcat.websocket.server.WsFilter.doFilter(WsFilter.java:52)
root cause
java.lang.NullPointerException
fims.fimsQueryBuilder.run(fimsQueryBuilder.java:228)
run.process.query(process.java:335)
rest.query.GETQueryResult(query.java:332)
rest.query.queryJson(query.java:42)
sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
java.lang.reflect.Method.invoke(Method.java:606)
com.sun.jersey.spi.container.JavaMethodInvokerFactory$1.invoke(JavaMethodInvokerFactory.java:60)
com.sun.jersey.server.impl.model.method.dispatch.AbstractResourceMethodDispatchProvider$ResponseOutInvoker._dispatch(AbstractResourceMethodDispatchProvider.java:205)
com.sun.jersey.server.impl.model.method.dispatch.ResourceJavaMethodDispatcher.dispatch(ResourceJavaMethodDispatcher.java:75)
com.sun.jersey.server.impl.uri.rules.HttpMethodRule.accept(HttpMethodRule.java:288)
com.sun.jersey.server.impl.uri.rules.RightHandPathRule.accept(RightHandPathRule.java:147)
com.sun.jersey.server.impl.uri.rules.ResourceClassRule.accept(ResourceClassRule.java:108)
com.sun.jersey.server.impl.uri.rules.RightHandPathRule.accept(RightHandPathRule.java:147)
com.sun.jersey.server.impl.uri.rules.RootResourceClassesRule.accept(RootResourceClassesRule.java:84)
com.sun.jersey.server.impl.application.WebApplicationImpl._handleRequest(WebApplicationImpl.java:1483)
com.sun.jersey.server.impl.application.WebApplicationImpl._handleRequest(WebApplicationImpl.java:1414)
com.sun.jersey.server.impl.application.WebApplicationImpl.handleRequest(WebApplicationImpl.java:1363)
com.sun.jersey.server.impl.application.WebApplicationImpl.handleRequest(WebApplicationImpl.java:1353)
com.sun.jersey.spi.container.servlet.WebComponent.service(WebComponent.java:414)
com.sun.jersey.spi.container.servlet.ServletContainer.service(ServletContainer.java:537)
com.sun.jersey.spi.container.servlet.ServletContainer.service(ServletContainer.java:708)
javax.servlet.http.HttpServlet.service(HttpServlet.java:727)
org.apache.tomcat.websocket.server.WsFilter.doFilter(WsFilter.java:52)
```
Original issue reported on code.google.com by `jdec...@gmail.com` on 18 Aug 2014 at 10:02 | 1.0 | Make data.biscicol.org robust - ```
What happens to FIMS if this goes down?
Why would it go down?
How do we prevent it from going down.
How do we test if this site goes down?
Database doing down on 8/16/2014 produced the following:
I was just testing a new version of the Biocode LIMS plugin and I can't seem to
get any results back from the new Biocode FIMS.
http://biscicol.org/biocode-fims/rest/query/json?project_id=5&graphs=urn:uuid:e2
2c08ae-1da5-44e9-8d43-674b5dbd3897
produces the following stack trace. I also tried using the page at
http://biscicol.org/biocode-fims/query.jsp but I'm only getting errors.
exception
javax.servlet.ServletException: settings.FIMSException
com.sun.jersey.spi.container.servlet.WebComponent.service(WebComponent.java:418)
com.sun.jersey.spi.container.servlet.ServletContainer.service(ServletContainer.java:537)
com.sun.jersey.spi.container.servlet.ServletContainer.service(ServletContainer.java:708)
javax.servlet.http.HttpServlet.service(HttpServlet.java:727)
org.apache.tomcat.websocket.server.WsFilter.doFilter(WsFilter.java:52)
root cause
settings.FIMSException
run.process.query(process.java:337)
rest.query.GETQueryResult(query.java:332)
rest.query.queryJson(query.java:42)
sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
java.lang.reflect.Method.invoke(Method.java:606)
com.sun.jersey.spi.container.JavaMethodInvokerFactory$1.invoke(JavaMethodInvokerFactory.java:60)
com.sun.jersey.server.impl.model.method.dispatch.AbstractResourceMethodDispatchProvider$ResponseOutInvoker._dispatch(AbstractResourceMethodDispatchProvider.java:205)
com.sun.jersey.server.impl.model.method.dispatch.ResourceJavaMethodDispatcher.dispatch(ResourceJavaMethodDispatcher.java:75)
com.sun.jersey.server.impl.uri.rules.HttpMethodRule.accept(HttpMethodRule.java:288)
com.sun.jersey.server.impl.uri.rules.RightHandPathRule.accept(RightHandPathRule.java:147)
com.sun.jersey.server.impl.uri.rules.ResourceClassRule.accept(ResourceClassRule.java:108)
com.sun.jersey.server.impl.uri.rules.RightHandPathRule.accept(RightHandPathRule.java:147)
com.sun.jersey.server.impl.uri.rules.RootResourceClassesRule.accept(RootResourceClassesRule.java:84)
com.sun.jersey.server.impl.application.WebApplicationImpl._handleRequest(WebApplicationImpl.java:1483)
com.sun.jersey.server.impl.application.WebApplicationImpl._handleRequest(WebApplicationImpl.java:1414)
com.sun.jersey.server.impl.application.WebApplicationImpl.handleRequest(WebApplicationImpl.java:1363)
com.sun.jersey.server.impl.application.WebApplicationImpl.handleRequest(WebApplicationImpl.java:1353)
com.sun.jersey.spi.container.servlet.WebComponent.service(WebComponent.java:414)
com.sun.jersey.spi.container.servlet.ServletContainer.service(ServletContainer.java:537)
com.sun.jersey.spi.container.servlet.ServletContainer.service(ServletContainer.java:708)
javax.servlet.http.HttpServlet.service(HttpServlet.java:727)
org.apache.tomcat.websocket.server.WsFilter.doFilter(WsFilter.java:52)
root cause
java.lang.NullPointerException
fims.fimsQueryBuilder.run(fimsQueryBuilder.java:228)
run.process.query(process.java:335)
rest.query.GETQueryResult(query.java:332)
rest.query.queryJson(query.java:42)
sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
java.lang.reflect.Method.invoke(Method.java:606)
com.sun.jersey.spi.container.JavaMethodInvokerFactory$1.invoke(JavaMethodInvokerFactory.java:60)
com.sun.jersey.server.impl.model.method.dispatch.AbstractResourceMethodDispatchProvider$ResponseOutInvoker._dispatch(AbstractResourceMethodDispatchProvider.java:205)
com.sun.jersey.server.impl.model.method.dispatch.ResourceJavaMethodDispatcher.dispatch(ResourceJavaMethodDispatcher.java:75)
com.sun.jersey.server.impl.uri.rules.HttpMethodRule.accept(HttpMethodRule.java:288)
com.sun.jersey.server.impl.uri.rules.RightHandPathRule.accept(RightHandPathRule.java:147)
com.sun.jersey.server.impl.uri.rules.ResourceClassRule.accept(ResourceClassRule.java:108)
com.sun.jersey.server.impl.uri.rules.RightHandPathRule.accept(RightHandPathRule.java:147)
com.sun.jersey.server.impl.uri.rules.RootResourceClassesRule.accept(RootResourceClassesRule.java:84)
com.sun.jersey.server.impl.application.WebApplicationImpl._handleRequest(WebApplicationImpl.java:1483)
com.sun.jersey.server.impl.application.WebApplicationImpl._handleRequest(WebApplicationImpl.java:1414)
com.sun.jersey.server.impl.application.WebApplicationImpl.handleRequest(WebApplicationImpl.java:1363)
com.sun.jersey.server.impl.application.WebApplicationImpl.handleRequest(WebApplicationImpl.java:1353)
com.sun.jersey.spi.container.servlet.WebComponent.service(WebComponent.java:414)
com.sun.jersey.spi.container.servlet.ServletContainer.service(ServletContainer.java:537)
com.sun.jersey.spi.container.servlet.ServletContainer.service(ServletContainer.java:708)
javax.servlet.http.HttpServlet.service(HttpServlet.java:727)
org.apache.tomcat.websocket.server.WsFilter.doFilter(WsFilter.java:52)
```
Original issue reported on code.google.com by `jdec...@gmail.com` on 18 Aug 2014 at 10:02 | defect | make data biscicol org robust what happens to fims if this goes down why would it go down how do we prevent it from going down how do we test if this site goes down database doing down on produced the following i was just testing a new version of the biocode lims plugin and i can t seem to get any results back from the new biocode fims produces the following stack trace i also tried using the page at but i m only getting errors exception javax servlet servletexception settings fimsexception com sun jersey spi container servlet webcomponent service webcomponent java com sun jersey spi container servlet servletcontainer service servletcontainer java com sun jersey spi container servlet servletcontainer service servletcontainer java javax servlet http httpservlet service httpservlet java org apache tomcat websocket server wsfilter dofilter wsfilter java root cause settings fimsexception run process query process java rest query getqueryresult query java rest query queryjson query java sun reflect nativemethodaccessorimpl native method sun reflect nativemethodaccessorimpl invoke nativemethodaccessorimpl java sun reflect delegatingmethodaccessorimpl invoke delegatingmethodaccessorimpl java java lang reflect method invoke method java com sun jersey spi container javamethodinvokerfactory invoke javamethodinvokerfactory java com sun jersey server impl model method dispatch abstractresourcemethoddispatchprovider responseoutinvoker dispatch abstractresourcemethoddispatchprovider java com sun jersey server impl model method dispatch resourcejavamethoddispatcher dispatch resourcejavamethoddispatcher java com sun jersey server impl uri rules httpmethodrule accept httpmethodrule java com sun jersey server impl uri rules righthandpathrule accept righthandpathrule java com sun jersey server impl uri rules resourceclassrule accept resourceclassrule java com sun jersey server impl 
uri rules righthandpathrule accept righthandpathrule java com sun jersey server impl uri rules rootresourceclassesrule accept rootresourceclassesrule java com sun jersey server impl application webapplicationimpl handlerequest webapplicationimpl java com sun jersey server impl application webapplicationimpl handlerequest webapplicationimpl java com sun jersey server impl application webapplicationimpl handlerequest webapplicationimpl java com sun jersey server impl application webapplicationimpl handlerequest webapplicationimpl java com sun jersey spi container servlet webcomponent service webcomponent java com sun jersey spi container servlet servletcontainer service servletcontainer java com sun jersey spi container servlet servletcontainer service servletcontainer java javax servlet http httpservlet service httpservlet java org apache tomcat websocket server wsfilter dofilter wsfilter java root cause java lang nullpointerexception fims fimsquerybuilder run fimsquerybuilder java run process query process java rest query getqueryresult query java rest query queryjson query java sun reflect nativemethodaccessorimpl native method sun reflect nativemethodaccessorimpl invoke nativemethodaccessorimpl java sun reflect delegatingmethodaccessorimpl invoke delegatingmethodaccessorimpl java java lang reflect method invoke method java com sun jersey spi container javamethodinvokerfactory invoke javamethodinvokerfactory java com sun jersey server impl model method dispatch abstractresourcemethoddispatchprovider responseoutinvoker dispatch abstractresourcemethoddispatchprovider java com sun jersey server impl model method dispatch resourcejavamethoddispatcher dispatch resourcejavamethoddispatcher java com sun jersey server impl uri rules httpmethodrule accept httpmethodrule java com sun jersey server impl uri rules righthandpathrule accept righthandpathrule java com sun jersey server impl uri rules resourceclassrule accept resourceclassrule java com sun jersey server impl uri 
rules righthandpathrule accept righthandpathrule java com sun jersey server impl uri rules rootresourceclassesrule accept rootresourceclassesrule java com sun jersey server impl application webapplicationimpl handlerequest webapplicationimpl java com sun jersey server impl application webapplicationimpl handlerequest webapplicationimpl java com sun jersey server impl application webapplicationimpl handlerequest webapplicationimpl java com sun jersey server impl application webapplicationimpl handlerequest webapplicationimpl java com sun jersey spi container servlet webcomponent service webcomponent java com sun jersey spi container servlet servletcontainer service servletcontainer java com sun jersey spi container servlet servletcontainer service servletcontainer java javax servlet http httpservlet service httpservlet java org apache tomcat websocket server wsfilter dofilter wsfilter java original issue reported on code google com by jdec gmail com on aug at | 1 |
429,136 | 30,025,837,430 | IssuesEvent | 2023-06-27 06:00:32 | containers/podman-desktop | https://api.github.com/repos/containers/podman-desktop | closed | Confusion between engine provider and machine provider | kind/enhancement 👋 area/documentation 📖 | ### Is your enhancement related to a problem? Please describe
Currently the documentation* mixes up the extensions:
> Podman Desktop can control various container engines, such as:
>
> Docker
> Lima
> Podman
\* https://podman-desktop.io/docs/Installation
Some of them provide a container engine, like Docker and Podman...
Some of them provide a virtual machine, like Lima and Podman (Machine).
### Describe the solution you'd like
Maybe it could be made a more clear distinction between the extensions...
Then it could also offer more control over the virtual machine, like start/stop etc ?
Currently the Podman Machine is missing from Linux, only available on Mac/Win.
You can run Podman Engine on the host, and that is what the extension connects to.
The Lima extension does not have anyway to install or start the Lima virtual machine.
If you already have one, it can connect to the unix socket of either Podman or Docker.
### Describe alternatives you've considered
The default engine of Lima is containerd, but Podman Desktop can't communicate with it...
Currently containerd/buildkitd requires file system access, and does not have a remote API.
### Additional context
There currently doesn't seem to be any way to provision a Docker Machine?
(the links* only go to Moby, but that project does not feature a machine to run)
\* https://github.com/containers/podman-desktop#multiple-container-engine-support
At least not using Open Source tools, but you can run with Docker Desktop...
With the recently added feature, you can now use Lima to provide a `docker` VM.
| 1.0 | Confusion between engine provider and machine provider - ### Is your enhancement related to a problem? Please describe
Currently the documentation* mixes up the extensions:
> Podman Desktop can control various container engines, such as:
>
> Docker
> Lima
> Podman
\* https://podman-desktop.io/docs/Installation
Some of them provide a container engine, like Docker and Podman...
Some of them provide a virtual machine, like Lima and Podman (Machine).
### Describe the solution you'd like
Maybe a clearer distinction could be made between the extensions...
Then it could also offer more control over the virtual machine, like start/stop etc.?
Currently the Podman Machine is missing from Linux, only available on Mac/Win.
You can run Podman Engine on the host, and that is what the extension connects to.
The Lima extension does not have any way to install or start the Lima virtual machine.
If you already have one, it can connect to the unix socket of either Podman or Docker.
### Describe alternatives you've considered
The default engine of Lima is containerd, but Podman Desktop can't communicate with it...
Currently containerd/buildkitd requires file system access, and does not have a remote API.
### Additional context
There currently doesn't seem to be any way to provision a Docker Machine?
(the links* only go to Moby, but that project does not feature a machine to run)
\* https://github.com/containers/podman-desktop#multiple-container-engine-support
At least not using Open Source tools, but you can run with Docker Desktop...
With the recently added feature, you can now use Lima to provide a `docker` VM.
| non_defect | confusion between engine provider and machine provider is your enhancement related to a problem please describe currently the documentation mixes up the extensions podman desktop can control various container engines such as docker lima podman some of them provide a container engine like docker and podman some of them provide a virtual machine like lima and podman machine describe the solution you d like maybe it could be made a more clear distinction between the extensions then it could also offer more control over the virtual machine like start stop etc currently the podman machine is missing from linux only available on mac win you can run podman engine on the host and that is what the extension connects to the lima extension does not have anyway to install or start the lima virtual machine if you already have one it can connect to the unix socket of either podman or docker describe alternatives you ve considered the default engine of lima is containerd but podman desktop can t communicate with it currently containerd buildkitd requires file system access and does not have a remote api additional context there currently doesn t seem to be any way to provision a docker machine the links only go to moby but that project does not feature a machine to run at least not using open source tools but you can run with docker desktop with the recently added feature you can now use lima to provider a docker vm | 0 |
30,997 | 6,393,060,884 | IssuesEvent | 2017-08-04 05:57:08 | lagom/lagom | https://api.github.com/repos/lagom/lagom | closed | Sample code in Testing subscription may not succeed eventually. | topic:documentation type:defect | Hi there, I'm using lagom to implement my service. And I've found a little problem there when get across to `eventually`. The sample code is in [Testing subscription](https://www.lagomframework.com/documentation/1.3.x/scala/MessageBrokerTesting.html#Testing-subscription)
It seems that `eventually` does nothing special for a `Future`. If what is wrapped in the future fails on the first attempt, the test fails, especially when the Kafka message has not yet been delivered to the subscriber.
So my test just runs OK sometimes. It depends on how quickly the message is delivered to the subscriber.
I think it may be better to chain the future in [Testing subscription](https://www.lagomframework.com/documentation/1.3.x/scala/MessageBrokerTesting.html#Testing-subscription) with an .await operation. This ensures that the test will eventually succeed, because on failure it will throw an exception.
I've found this because `elastic4s-embedded` will create some data files at the first time which takes a while. But every time I run my coverage test I will clean my project first. Although I'm quite sure it will succeed in 10 seconds, the query for new entity won't succeed because it's wrapped in a `Future`. This future fail as a value without throwing any exception. The sematic of `eventually` won't retry as no exception thrown. So for `Future`, it just fail and won't retry until time expected reached. | 1.0 | Sample code in Testing subscription may not succeed eventually. - Hi there, I'm using lagom to implement my service. And I've found a little problem there when get across to `eventually`. The sample code is in [Testing subscription](https://www.lagomframework.com/documentation/1.3.x/scala/MessageBrokerTesting.html#Testing-subscription)
It seems that `eventually` does nothing special for a `Future`. If what is wrapped in the future fails on the first attempt, the test fails, especially when the Kafka message has not yet been delivered to the subscriber.
So my test just runs OK sometimes. It depends on how quickly the message is delivered to the subscriber.
I think it may be better to chain the future in [Testing subscription](https://www.lagomframework.com/documentation/1.3.x/scala/MessageBrokerTesting.html#Testing-subscription) with an .await operation. This ensures that the test will eventually succeed, because on failure it will throw an exception.
I've found this because `elastic4s-embedded` will create some data files at the first time which takes a while. But every time I run my coverage test I will clean my project first. Although I'm quite sure it will succeed in 10 seconds, the query for new entity won't succeed because it's wrapped in a `Future`. This future fail as a value without throwing any exception. The sematic of `eventually` won't retry as no exception thrown. So for `Future`, it just fail and won't retry until time expected reached. | defect | sample code in testing subscription may not succeed eventually hi there i m using lagom to implement my service and i ve found a little problem there when get across to eventually the sample code is in it seems that eventully have done nothing specially to future and if at the first time what wrapped in future failed then the test failed especially when kafka message is not sent to the subscriber so my test just runs ok sometimes it depends on the speed message sent to subscriber i think it may be better chaning the future in with await operation this ensure that the test will eventually succeed because when failed it will throw exception i ve found this because embedded will create some data files at the first time which takes a while but every time i run my coverage test i will clean my project first although i m quite sure it will succeed in seconds the query for new entity won t succeed because it s wrapped in a future this future fail as a value without throwing any exception the sematic of eventually won t retry as no exception thrown so for future it just fail and won t retry until time expected reached | 1 |
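The lagom record above hinges on a general point: a retry helper that only retries on thrown exceptions will never retry a future that fails as a value. A minimal Python sketch of that failure mode (names like `eventually`, `FakeFuture`, and `flaky_lookup` are illustrative stand-ins, not lagom or ScalaTest APIs):

```python
import time

def eventually(fn, timeout=1.0, interval=0.05):
    """Retry fn until it stops raising, or until the timeout expires.
    Note: it only retries on *exceptions*; a returned failure
    (e.g. a failed future-like object) is passed straight through."""
    deadline = time.monotonic() + timeout
    while True:
        try:
            return fn()
        except Exception:
            if time.monotonic() >= deadline:
                raise
            time.sleep(interval)

class FakeFuture:
    """Stand-in for an async result that can fail as a *value*."""
    def __init__(self, value=None, error=None):
        self.value, self.error = value, error
    def await_result(self):
        if self.error:
            raise self.error   # awaiting surfaces the failure
        return self.value

attempts = {"n": 0}

def flaky_lookup():
    # Succeeds only from the 3rd call on (e.g. the message finally arrived).
    attempts["n"] += 1
    if attempts["n"] < 3:
        return FakeFuture(error=AssertionError("entity not found yet"))
    return FakeFuture(value="entity")

# Without awaiting, the first (failed) future is returned and no retry happens:
first = eventually(flaky_lookup)
print(first.error)   # entity not found yet

# Awaiting inside the retried closure converts the failed value into an
# exception, so eventually() keeps retrying until success:
attempts["n"] = 0
result = eventually(lambda: flaky_lookup().await_result())
print(result)        # entity
```

Awaiting inside the retried closure is what converts a failed value into a thrown exception, which is exactly the `.await` fix the reporter suggests.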
65,856 | 19,721,987,359 | IssuesEvent | 2022-01-13 16:11:07 | department-of-veterans-affairs/va.gov-team | https://api.github.com/repos/department-of-veterans-affairs/va.gov-team | opened | [Markup and meta data] HTML markup isn't valid. (09.01.2) | vsa-public-websites 508/Accessibility 508-defect-3 collab-cycle-feedback Staging CCIssue09.01 CC-Dashboard | ### General Information
#### VFS team name
Public Websites
#### VFS product name
Outreach & Events Enhancements
#### Point of Contact/Reviewers
Brian DeConinck (@briandeconinck) - Accessibility
---
### Platform Issue
HTML markup isn't valid.
### Issue Details
When viewing an individual event, the More Details button is coded as a <a> link with a <button> nested inside of it. This is not valid HTML.
### Link, screenshot or steps to recreate
### VA.gov Experience Standard
[Category Number 09, Issue Number 01](https://depo-platform-documentation.scrollhelp.site/collaboration-cycle/VA.gov-experience-standards.1683980311.html)
### Other References
WCAG SC 1.3.1_A
---
### Platform Recommendation
Remove the <button> and just have a <a> hyperlink. Consider using an Action Link rather than styling a link to look like a button: https://design.va.gov/experimental-design/action_links
### VFS Team Tasks to Complete
- [ ] Comment on the ticket if there are questions or concerns
- [ ] Close the ticket when the issue has been resolved or validated by your Product Owner. If a team has additional questions or needs Platform help validating the issue, please comment in the ticket. | 1.0 | [Markup and meta data] HTML markup isn't valid. (09.01.2) - ### General Information
#### VFS team name
Public Websites
#### VFS product name
Outreach & Events Enhancements
#### Point of Contact/Reviewers
Brian DeConinck (@briandeconinck) - Accessibility
---
### Platform Issue
HTML markup isn't valid.
### Issue Details
When viewing an individual event, the More Details button is coded as a <a> link with a <button> nested inside of it. This is not valid HTML.
### Link, screenshot or steps to recreate
### VA.gov Experience Standard
[Category Number 09, Issue Number 01](https://depo-platform-documentation.scrollhelp.site/collaboration-cycle/VA.gov-experience-standards.1683980311.html)
### Other References
WCAG SC 1.3.1_A
---
### Platform Recommendation
Remove the <button> and just have a <a> hyperlink. Consider using an Action Link rather than styling a link to look like a button: https://design.va.gov/experimental-design/action_links
### VFS Team Tasks to Complete
- [ ] Comment on the ticket if there are questions or concerns
- [ ] Close the ticket when the issue has been resolved or validated by your Product Owner. If a team has additional questions or needs Platform help validating the issue, please comment in the ticket. | defect | html markup isn t valid general information vfs team name public websites vfs product name outreach events enhancements point of contact reviewers brian deconinck briandeconinck accessibility platform issue html markup isn t valid issue details when viewing an individual event the more details button is coded as a link with a nested inside of it this is not valid html link screenshot or steps to recreate va gov experience standard other references wcag sc a platform recommendation remove the and just have a hyperlink consider using an action link rather than styling a link to look like a button vfs team tasks to complete comment on the ticket if there are questions or concerns close the ticket when the issue has been resolved or validated by your product owner if a team has additional questions or needs platform help validating the issue please comment in the ticket | 1 |
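The defect above, a `<button>` nested inside an `<a>`, is mechanically detectable. A small sketch using Python's stdlib `html.parser` (the sample markup and the action-link class name are illustrative assumptions, not taken from the VA.gov codebase):

```python
from html.parser import HTMLParser

class ButtonInAnchorChecker(HTMLParser):
    """Flags <button> elements that appear inside an open <a>."""
    def __init__(self):
        super().__init__()
        self.anchor_depth = 0
        self.violations = 0
    def handle_starttag(self, tag, attrs):
        if tag == "a":
            self.anchor_depth += 1
        elif tag == "button" and self.anchor_depth > 0:
            self.violations += 1
    def handle_endtag(self, tag):
        if tag == "a" and self.anchor_depth > 0:
            self.anchor_depth -= 1

def count_button_in_anchor(markup: str) -> int:
    checker = ButtonInAnchorChecker()
    checker.feed(markup)
    return checker.violations

# Invalid: interactive content nested inside interactive content.
bad = '<a href="/event/1"><button>More details</button></a>'
# Valid alternative: a single link styled as an action link.
good = '<a class="vads-c-action-link--green" href="/event/1">More details</a>'

print(count_button_in_anchor(bad))   # 1
print(count_button_in_anchor(good))  # 0
```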
28,790 | 5,368,058,590 | IssuesEvent | 2017-02-22 07:22:00 | opencaching/opencaching-pl | https://api.github.com/repos/opencaching/opencaching-pl | opened | Openchecker - count attempts for cache | Component_OpenChecker Priority_Medium Type_Defect | At this moment the openchecker counts all attempts globally, and after X tries it will show the error: too many tries!
So if you check X caches, even if all attempts finish with success, it will be impossible to check the X + 1 cache.
Attempts should be counted per cache, not globally.
Copied from e-mail:
```
Hi
Making use of the winter evenings, I printed out and solved a series of sudoku puzzles, then started entering the results into the openchecker. At the 8th cache (I had made mistakes on the previous ones) I got the message:
You have guessed too many times!!
I understand that this is protection against brute force, but it should probably apply to a single cache, not to entering the solutions of a dozen or so quizzes. Can this be fixed?
By the way, when entering coordinates into the modifier you have to use a dot, not a comma. Let's not Americanize ourselves by force. I reported this before and it is probably sitting somewhere on GitHub, but it should not be a big problem to fix in the program (even if a dot has to be used internally).
Regards, Paweł
``` | 1.0 | Openchecker - count attempts for cache - At this moment openchecker counts all attempts globally and after X tries you will show error: to much tries!
So if you check X caches, even if all attempts finish with success, it will be impossible to check the X + 1 cache.
Attempts should be counted per cache, not globally.
Copied from e-mail:
```
Hi
Making use of the winter evenings, I printed out and solved a series of sudoku puzzles, then started entering the results into the openchecker. At the 8th cache (I had made mistakes on the previous ones) I got the message:
You have guessed too many times!!
I understand that this is protection against brute force, but it should probably apply to a single cache, not to entering the solutions of a dozen or so quizzes. Can this be fixed?
By the way, when entering coordinates into the modifier you have to use a dot, not a comma. Let's not Americanize ourselves by force. I reported this before and it is probably sitting somewhere on GitHub, but it should not be a big problem to fix in the program (even if a dot has to be used internally).
Regards, Paweł
``` | defect | openchecker count attempts for cache at this moment openchecker counts all attempts globally and after x tries you will show error to much tries so if you check x caches and even all attempts will be finished with success there will be impossible to check x cache attempts should be count per cache not globally copied from e mail cześć wykorzystując zimowe wieczory wydrukowałem sobie i rozwiązałem serię sudoku następnie zacząłem wpisywać wyniki do opensprawdzacza przy keszu przy poprzednich się pomyliłem wyskoczył mi komunikat zgadywałeś aś za dużo razy rozumiem że jest zabezpieczenie przed bruteforcem ale chyba powinno to dotyczyć jednej skrzynki a nie jeśli próbuje się wpisać rozwiązania kilku nastu quizów czy jest to do naprawienia przy okazji wpisując do modyfikatora współrzędne trzeba podać kropkę a nie przecinek nie amerykanizujmy się na siłę kiedyś już to zgłaszałem i pewnie wisi gdzieś na githubie ale przecież to nie jest wielki problem poprawić w programie nawet jeśli musi być użyta kropka pozdrawiam paweł | 1 |
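The fix requested above, counting guesses per cache rather than globally, amounts to keying the limiter on (user, cache) instead of the user alone. A minimal sketch (the limit and the names are illustrative, not OpenChecker's actual values):

```python
from collections import defaultdict

class GuessLimiter:
    """Limits solution attempts per (user, cache), not per user."""
    def __init__(self, max_attempts=5):
        self.max_attempts = max_attempts
        self.attempts = defaultdict(int)   # (user, cache_id) -> count
    def try_guess(self, user, cache_id, correct):
        key = (user, cache_id)
        if self.attempts[key] >= self.max_attempts:
            raise PermissionError("You have guessed too many times!")
        self.attempts[key] += 1
        if correct:
            self.attempts[key] = 0         # reset the counter on success
        return correct

limiter = GuessLimiter(max_attempts=5)

# Solving 8 different caches, one correct guess each: always allowed,
# because the counter is scoped to each cache.
results = [limiter.try_guess("pawel", cache_id, correct=True)
           for cache_id in range(8)]
print(all(results))   # True

# Only repeated wrong guesses at the *same* cache hit the limit:
for _ in range(5):
    limiter.try_guess("pawel", 99, correct=False)
try:
    limiter.try_guess("pawel", 99, correct=False)
except PermissionError as e:
    print(e)          # You have guessed too many times!
```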
44,770 | 12,374,788,129 | IssuesEvent | 2020-05-19 02:39:01 | department-of-veterans-affairs/va.gov-team | https://api.github.com/repos/department-of-veterans-affairs/va.gov-team | opened | [COGNITION]: Section 103 - CONSIDER updating slash yr and slash mo to include screen reader only text | 508-defect-3 508-issue-cognition 508/Accessibility bah-section103 | # [508-defect-3](https://github.com/department-of-veterans-affairs/va.gov-team/blob/master/platform/accessibility/guidance/defect-severity-rubric.md#508-defect-3)
<!--
Enter an issue title using the format [ERROR TYPE]: Brief description of the problem
---
[SCREENREADER]: Edit buttons need aria-label for context
[KEYBOARD]: Add another user link will not receive keyboard focus
[AXE-CORE]: Heading levels should increase by one
[COGNITION]: Error messages should be more specific
[COLOR]: Blue button on blue background does not have sufficient contrast ratio
---
-->
<!-- It's okay to delete the instructions above, but leave the link to the 508 defect severity level for your issue. -->
**Feedback framework**
- **❗️ Must** for if the feedback must be applied
- **⚠️ Should** if the feedback is best practice
- **✔️ Consider** for suggestions/enhancements
## Description
<!-- This is a detailed description of the issue. It should include a restatement of the title, and provide more background information. -->
The Your estimated benefits panel has several `/yr` and `/mo` text strings. These are well-understood by a good percentage of users, but read out very ambiguously to screen readers. I'd like to consider adding screen reader only text to make these more descriptive. Screenshot attached below.
## Point of Contact
<!-- If this issue is being opened by a VFS team member, please add a point of contact. Usually this is the same person who enters the issue ticket.
-->
**VFS Point of Contact:** _Trevor_
## Acceptance Criteria
<!-- As a keyboard user, I want to open the Level of Coverage widget by pressing Spacebar or pressing Enter. These keypress actions should not interfere with the mouse click event also opening the widget. -->
- [ ] Text reads out clearly and plainly to screen readers
## Possible Fixes (optional)
```diff
- <h5>$10,981/yr</h5>
+ <h5>
+ $10,981
+ <span aria-hidden="true">/yr</span>
+ <span class="sr-only">per year</span>
+ </h5>
```
## WCAG or Vendor Guidance (optional)
* [Info and Relationships: Understanding SC 1.3.1](https://www.w3.org/TR/UNDERSTANDING-WCAG20/content-structure-separation-programmatic.html)
## Screenshots or Trace Logs
<!-- Drop any screenshots or error logs that might be useful for debugging -->

| 1.0 | [COGNITION]: Section 103 - CONSIDER updating slash yr and slash mo to include screen reader only text - # [508-defect-3](https://github.com/department-of-veterans-affairs/va.gov-team/blob/master/platform/accessibility/guidance/defect-severity-rubric.md#508-defect-3)
<!--
Enter an issue title using the format [ERROR TYPE]: Brief description of the problem
---
[SCREENREADER]: Edit buttons need aria-label for context
[KEYBOARD]: Add another user link will not receive keyboard focus
[AXE-CORE]: Heading levels should increase by one
[COGNITION]: Error messages should be more specific
[COLOR]: Blue button on blue background does not have sufficient contrast ratio
---
-->
<!-- It's okay to delete the instructions above, but leave the link to the 508 defect severity level for your issue. -->
**Feedback framework**
- **❗️ Must** for if the feedback must be applied
- **⚠️ Should** if the feedback is best practice
- **✔️ Consider** for suggestions/enhancements
## Description
<!-- This is a detailed description of the issue. It should include a restatement of the title, and provide more background information. -->
The Your estimated benefits panel has several `/yr` and `/mo` text strings. These are well-understood by a good percentage of users, but read out very ambiguously to screen readers. I'd like to consider adding screen reader only text to make these more descriptive. Screenshot attached below.
## Point of Contact
<!-- If this issue is being opened by a VFS team member, please add a point of contact. Usually this is the same person who enters the issue ticket.
-->
**VFS Point of Contact:** _Trevor_
## Acceptance Criteria
<!-- As a keyboard user, I want to open the Level of Coverage widget by pressing Spacebar or pressing Enter. These keypress actions should not interfere with the mouse click event also opening the widget. -->
- [ ] Text reads out clearly and plainly to screen readers
## Possible Fixes (optional)
```diff
- <h5>$10,981/yr</h5>
+ <h5>
+ $10,981
+ <span aria-hidden="true">/yr</span>
+ <span class="sr-only">per year</span>
+ </h5>
```
## WCAG or Vendor Guidance (optional)
* [Info and Relationships: Understanding SC 1.3.1](https://www.w3.org/TR/UNDERSTANDING-WCAG20/content-structure-separation-programmatic.html)
## Screenshots or Trace Logs
<!-- Drop any screenshots or error logs that might be useful for debugging -->

| defect | section consider updating slash yr and slash mo to include screen reader only text enter an issue title using the format brief description of the problem edit buttons need aria label for context add another user link will not receive keyboard focus heading levels should increase by one error messages should be more specific blue button on blue background does not have sufficient contrast ratio feedback framework ❗️ must for if the feedback must be applied ⚠️ should if the feedback is best practice ✔️ consider for suggestions enhancements description the your estimated benefits panel has several yr and mo text strings these are well understood by a good percentage of users but read out very ambiguously to screen readers i d like to consider adding screen reader only text to make these more descriptive screenshot attached below point of contact if this issue is being opened by a vfs team member please add a point of contact usually this is the same person who enters the issue ticket vfs point of contact trevor acceptance criteria text reads out clearly and plainly to screen readers possible fixes optional diff yr yr per year wcag or vendor guidance optional screenshots or trace logs | 1 |
79,497 | 15,586,159,141 | IssuesEvent | 2021-03-18 01:18:24 | cniweb/cniweb-demo | https://api.github.com/repos/cniweb/cniweb-demo | opened | CVE-2020-36181 (High) detected in jackson-databind-2.8.11.1.jar | security vulnerability | ## CVE-2020-36181 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>jackson-databind-2.8.11.1.jar</b></p></summary>
<p>General data-binding functionality for Jackson: works on core streaming API</p>
<p>Library home page: <a href="http://github.com/FasterXML/jackson">http://github.com/FasterXML/jackson</a></p>
<p>Path to dependency file: cniweb-demo/pom.xml</p>
<p>Path to vulnerable library: /home/wss-scanner/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.8.11.1/jackson-databind-2.8.11.1.jar</p>
<p>
Dependency Hierarchy:
- jackson-jaxrs-json-provider-2.8.11.jar (Root Library)
- :x: **jackson-databind-2.8.11.1.jar** (Vulnerable Library)
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
FasterXML jackson-databind 2.x before 2.9.10.8 mishandles the interaction between serialization gadgets and typing, related to org.apache.tomcat.dbcp.dbcp.cpdsadapter.DriverAdapterCPDS.
<p>Publish Date: 2021-01-06
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-36181>CVE-2020-36181</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>8.1</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: High
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/FasterXML/jackson-databind/issues/3004">https://github.com/FasterXML/jackson-databind/issues/3004</a></p>
<p>Release Date: 2021-01-06</p>
<p>Fix Resolution: com.fasterxml.jackson.core:jackson-databind:2.9.10.8</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | True | CVE-2020-36181 (High) detected in jackson-databind-2.8.11.1.jar - ## CVE-2020-36181 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>jackson-databind-2.8.11.1.jar</b></p></summary>
<p>General data-binding functionality for Jackson: works on core streaming API</p>
<p>Library home page: <a href="http://github.com/FasterXML/jackson">http://github.com/FasterXML/jackson</a></p>
<p>Path to dependency file: cniweb-demo/pom.xml</p>
<p>Path to vulnerable library: /home/wss-scanner/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.8.11.1/jackson-databind-2.8.11.1.jar</p>
<p>
Dependency Hierarchy:
- jackson-jaxrs-json-provider-2.8.11.jar (Root Library)
- :x: **jackson-databind-2.8.11.1.jar** (Vulnerable Library)
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
FasterXML jackson-databind 2.x before 2.9.10.8 mishandles the interaction between serialization gadgets and typing, related to org.apache.tomcat.dbcp.dbcp.cpdsadapter.DriverAdapterCPDS.
<p>Publish Date: 2021-01-06
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-36181>CVE-2020-36181</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>8.1</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: High
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/FasterXML/jackson-databind/issues/3004">https://github.com/FasterXML/jackson-databind/issues/3004</a></p>
<p>Release Date: 2021-01-06</p>
<p>Fix Resolution: com.fasterxml.jackson.core:jackson-databind:2.9.10.8</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | non_defect | cve high detected in jackson databind jar cve high severity vulnerability vulnerable library jackson databind jar general data binding functionality for jackson works on core streaming api library home page a href path to dependency file cniweb demo pom xml path to vulnerable library home wss scanner repository com fasterxml jackson core jackson databind jackson databind jar dependency hierarchy jackson jaxrs json provider jar root library x jackson databind jar vulnerable library found in base branch master vulnerability details fasterxml jackson databind x before mishandles the interaction between serialization gadgets and typing related to org apache tomcat dbcp dbcp cpdsadapter driveradaptercpds publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity high privileges required none user interaction none scope unchanged impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution com fasterxml jackson core jackson databind step up your open source security game with whitesource | 0 |
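One practical check behind a report like the one above is whether an installed version predates the fix resolution. A minimal sketch using plain tuple comparison (it handles dotted numeric versions only; real tooling should use a proper version parser):

```python
def parse_version(v: str) -> tuple:
    """Turn '2.9.10.8' into (2, 9, 10, 8) for lexicographic comparison."""
    return tuple(int(part) for part in v.split("."))

def is_vulnerable(installed: str, fixed_in: str) -> bool:
    """True if the installed version predates the version carrying the fix."""
    return parse_version(installed) < parse_version(fixed_in)

print(is_vulnerable("2.8.11.1", "2.9.10.8"))  # True  (the flagged jar)
print(is_vulnerable("2.9.10.8", "2.9.10.8"))  # False (the fix itself)
print(is_vulnerable("2.12.1", "2.9.10.8"))    # False (newer line)
```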
52,037 | 13,211,371,741 | IssuesEvent | 2020-08-15 22:39:39 | icecube-trac/tix4 | https://api.github.com/repos/icecube-trac/tix4 | opened | [cmake] -DNDEBUG not set for DEBUG build type (Trac #1547) | Incomplete Migration Migrated from Trac cmake defect | <details>
<summary><em>Migrated from <a href="https://code.icecube.wisc.edu/projects/icecube/ticket/1547">https://code.icecube.wisc.edu/projects/icecube/ticket/1547</a>, reported by david.schultzand owned by nega</em></summary>
<p>
```json
{
"status": "closed",
"changetime": "2016-02-12T21:09:06",
"_ts": "1455311346033035",
"description": "It appears that all other release types have special flags set, except the DEBUG release type. Thus, we're missing the `-DNDEBUG` flag. (and maybe other things, I'm not sure)",
"reporter": "david.schultz",
"cc": "",
"resolution": "invalid",
"time": "2016-02-12T21:06:24",
"component": "cmake",
"summary": "[cmake] -DNDEBUG not set for DEBUG build type",
"priority": "major",
"keywords": "",
"milestone": "",
"owner": "nega",
"type": "defect"
}
```
</p>
</details>
| 1.0 | [cmake] -DNDEBUG not set for DEBUG build type (Trac #1547) - <details>
<summary><em>Migrated from <a href="https://code.icecube.wisc.edu/projects/icecube/ticket/1547">https://code.icecube.wisc.edu/projects/icecube/ticket/1547</a>, reported by david.schultzand owned by nega</em></summary>
<p>
```json
{
"status": "closed",
"changetime": "2016-02-12T21:09:06",
"_ts": "1455311346033035",
"description": "It appears that all other release types have special flags set, except the DEBUG release type. Thus, we're missing the `-DNDEBUG` flag. (and maybe other things, I'm not sure)",
"reporter": "david.schultz",
"cc": "",
"resolution": "invalid",
"time": "2016-02-12T21:06:24",
"component": "cmake",
"summary": "[cmake] -DNDEBUG not set for DEBUG build type",
"priority": "major",
"keywords": "",
"milestone": "",
"owner": "nega",
"type": "defect"
}
```
</p>
</details>
| defect | dndebug not set for debug build type trac migrated from json status closed changetime ts description it appears that all other release types have special flags set except the debug release type thus we re missing the dndebug flag and maybe other things i m not sure reporter david schultz cc resolution invalid time component cmake summary dndebug not set for debug build type priority major keywords milestone owner nega type defect | 1 |
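The ticket above was closed as invalid for a reason worth spelling out: in CMake's default per-configuration flags, `-DNDEBUG` is added to Release-style builds (where it disables C's `assert`), while the Debug build type deliberately omits it so assertions stay active. Python's `-O` flag strips `assert` statements in the same spirit, which gives a quick runnable analog (this illustrates the assert-stripping semantics, not CMake itself):

```python
import subprocess
import sys

# A tiny program that fails only while assertions are live:
prog = "assert 1 + 1 == 3, 'assertions are enabled'"

default_run = subprocess.run([sys.executable, "-c", prog],
                             capture_output=True)
optimized_run = subprocess.run([sys.executable, "-O", "-c", prog],
                               capture_output=True)

print(default_run.returncode)    # nonzero: the AssertionError fired
print(optimized_run.returncode)  # 0: -O strips asserts, like -DNDEBUG
```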
142,134 | 5,459,722,744 | IssuesEvent | 2017-03-09 01:45:06 | NostraliaWoW/mangoszero | https://api.github.com/repos/NostraliaWoW/mangoszero | reopened | Garona: A Study on Stealth and Treachery | Awaiting Feedback Priority - Medium System | This book you can get at 54 but you can't start the quest as the quest is 57 in the database please change this.
http://db.vanillagaming.org/?search=Garona%3A+A+Study+on+Stealth+and+Treachery#items | 1.0 | Garona: A Study on Stealth and Treachery - This book you can get at 54 but you can't start the quest as the quest is 57 in the database please change this.
http://db.vanillagaming.org/?search=Garona%3A+A+Study+on+Stealth+and+Treachery#items | non_defect | garona a study on stealth and treachery this book you can get at but you can t start the quest as the quest is in the database please change this | 0 |
15,918 | 5,195,651,909 | IssuesEvent | 2017-01-23 10:07:41 | SemsTestOrg/combinearchive-web | https://api.github.com/repos/SemsTestOrg/combinearchive-web | closed | Lock Timeout after creation from M2CAT-Hack | code fixed major migrated task | ## Trac Ticket #107
**component:** code
**owner:** martinP
**reporter:** martinP
**created:** 2015-02-24 10:06:05
**milestone:**
**type:** task
**version:**
**keywords:**
## comment 1
**time:** 2015-02-24 11:26:40
**author:** anonymous
when downloading files, the lock isn't released
## comment 2
**time:** 2015-02-24 11:27:06
**author:** martinP
## comment 3
**time:** 2015-02-24 11:27:06
**author:** martinP
Updated **owner** to **martinP**
## comment 4
**time:** 2015-02-24 11:27:06
**author:** martinP
Updated **status** to **assigned**
## comment 5
**time:** 2015-02-25 15:45:33
**author:** mp487 <martin.peters3@uni-rostock.de>
In changeset:"cb919194f00e913ba9920a852ae1bf20ea12e88a"]:
```CommitTicketReference repository="" revision="cb919194f00e913ba9920a852ae1bf20ea12e88a"
[fixes #107] closes archive, not just combineArchive, in Download
Servlet.
```
## comment 6
**time:** 2015-02-25 15:45:33
**author:** mp487 <martin.peters3@uni-rostock.de>
Updated **resolution** to **fixed**
## comment 7
**time:** 2015-02-25 15:45:33
**author:** mp487 <martin.peters3@uni-rostock.de>
Updated **status** to **closed**
| 1.0 | Lock Timeout after creation from M2CAT-Hack - ## Trac Ticket #107
**component:** code
**owner:** martinP
**reporter:** martinP
**created:** 2015-02-24 10:06:05
**milestone:**
**type:** task
**version:**
**keywords:**
## comment 1
**time:** 2015-02-24 11:26:40
**author:** anonymous
when donwloading files, the lock isn't released
## comment 2
**time:** 2015-02-24 11:27:06
**author:** martinP
## comment 3
**time:** 2015-02-24 11:27:06
**author:** martinP
Updated **owner** to **martinP**
## comment 4
**time:** 2015-02-24 11:27:06
**author:** martinP
Updated **status** to **assigned**
## comment 5
**time:** 2015-02-25 15:45:33
**author:** mp487 <martin.peters3@uni-rostock.de>
In [changeset:"cb919194f00e913ba9920a852ae1bf20ea12e88a"]:
```CommitTicketReference repository="" revision="cb919194f00e913ba9920a852ae1bf20ea12e88a"
[fixes #107] closes archive, not just combineArchive, in Download
Servlet.
```
## comment 6
**time:** 2015-02-25 15:45:33
**author:** mp487 <martin.peters3@uni-rostock.de>
Updated **resolution** to **fixed**
## comment 7
**time:** 2015-02-25 15:45:33
**author:** mp487 <martin.peters3@uni-rostock.de>
Updated **status** to **closed**
| non_defect | lock timeout after creation from hack trac ticket component code owner martinp reporter martinp created milestone type task version keywords comment time author anonymous when donwloading files the lock isn t released comment time author martinp comment time author martinp updated owner to martinp comment time author martinp updated status to assigned comment time author in changeset committicketreference repository revision closes archive not just combinearchive in download servlet comment time author updated resolution to fixed comment time author updated status to closed | 0 |
61,868 | 17,023,795,785 | IssuesEvent | 2021-07-03 03:54:09 | tomhughes/trac-tickets | https://api.github.com/repos/tomhughes/trac-tickets | closed | Potlatch 2 initializes but fails to get past loading screen | Component: potlatch2 Priority: major Resolution: worksforme Type: defect | **[Submitted to the original trac issue database at 9.31am, Saturday, 5th May 2012]**
If I try and use Potlatch 2 on the OSM website by clicking Edit, the loading screen appears, it gets to 100% loaded, and then nothing happens - the screen stays the same big green block. No map, nothing.
Clearly, that's not enough detail to fix anything, but please let me know what I can do to provide further information.
System: Ubuntu 12.04 (but same problem appeared on 10.10, which I recently upgraded from, and has been present for at least the past few weeks), Firefox Nightly, Adobe Shockwave Flash 11.2 r 202.
Gerv
| 1.0 | Potlatch 2 initializes but fails to get past loading screen - **[Submitted to the original trac issue database at 9.31am, Saturday, 5th May 2012]**
If I try and use Potlatch 2 on the OSM website by clicking Edit, the loading screen appears, it gets to 100% loaded, and then nothing happens - the screen stays the same big green block. No map, nothing.
Clearly, that's not enough detail to fix anything, but please let me know what I can do to provide further information.
System: Ubuntu 12.04 (but same problem appeared on 10.10, which I recently upgraded from, and has been present for at least the past few weeks), Firefox Nightly, Adobe Shockwave Flash 11.2 r 202.
Gerv
| defect | potlatch initializes but fails to get past loading screen if i try and use potlatch on the osm website by clicking edit the loading screen appears it gets to loaded and then nothing happens the screen stays the same big green block no map nothing clearly that s not enough detail to fix anything but please let me know what i can do to provide further information system ubuntu but same problem appeared on which i recently upgraded from and has been present for at least the past few weeks firefox nightly adobe shockwave flash r gerv | 1 |
14,936 | 10,228,219,198 | IssuesEvent | 2019-08-17 00:24:16 | MicrosoftDocs/azure-docs | https://api.github.com/repos/MicrosoftDocs/azure-docs | closed | Web App CPU percantage | Pri2 app-service/svc cxp product-question triaged | Why don't we have a CPU percentage metrics for each web app? But for App service plan we have.
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: 2550993f-34f1-d963-3ee5-3dcebe2a64d7
* Version Independent ID: 3da9bc96-6e24-54fa-bd99-6a6a704f583a
* Content: [Monitor apps - Azure App Service](https://docs.microsoft.com/en-us/azure/app-service/web-sites-monitor)
* Content Source: [articles/app-service/web-sites-monitor.md](https://github.com/Microsoft/azure-docs/blob/master/articles/app-service/web-sites-monitor.md)
* Service: **app-service**
* GitHub Login: @btardif
* Microsoft Alias: **byvinyal** | 1.0 | Web App CPU percantage - Why don't we have a CPU percentage metrics for each web app? But for App service plan we have.
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: 2550993f-34f1-d963-3ee5-3dcebe2a64d7
* Version Independent ID: 3da9bc96-6e24-54fa-bd99-6a6a704f583a
* Content: [Monitor apps - Azure App Service](https://docs.microsoft.com/en-us/azure/app-service/web-sites-monitor)
* Content Source: [articles/app-service/web-sites-monitor.md](https://github.com/Microsoft/azure-docs/blob/master/articles/app-service/web-sites-monitor.md)
* Service: **app-service**
* GitHub Login: @btardif
* Microsoft Alias: **byvinyal** | non_defect | web app cpu percantage why don t we have a cpu percentage metrics for each web app but for app service plan we have document details ⚠ do not edit this section it is required for docs microsoft com ➟ github issue linking id version independent id content content source service app service github login btardif microsoft alias byvinyal | 0 |
249,325 | 18,858,183,389 | IssuesEvent | 2021-11-12 09:28:42 | skythefire/pe | https://api.github.com/repos/skythefire/pe | opened | Inconsistency in UML Diagram in 4.2 Interactions feature | severity.VeryLow type.DocumentationBug | The following sequence diagram in the 4.2 Interactions feature
1. font is too small to be read, is significantly smaller than the remaining text in the DG
2. different fonts used in this particular diagram as compared to other sequence diagrams

<!--session: 1636703064291-bc0fbfc3-d40b-492d-ab13-2561dfaece1e-->
<!--Version: Web v3.4.1--> | 1.0 | Inconsistency in UML Diagram in 4.2 Interactions feature - The following sequence diagram in the 4.2 Interactions feature
1. font is too small to be read, is significantly smaller than the remaining text in the DG
2. different fonts used in this particular diagram as compared to other sequence diagrams

<!--session: 1636703064291-bc0fbfc3-d40b-492d-ab13-2561dfaece1e-->
<!--Version: Web v3.4.1--> | non_defect | inconsistency in uml diagram in interactions feature the following sequence diagram in the interactions feature font is too small to be read is significantly smaller than the remaining text in the dg different fonts used in this particular diagram as compared to other sequence diagrams | 0 |
16,868 | 2,955,783,717 | IssuesEvent | 2015-07-08 06:53:25 | icza/gowut | https://api.github.com/repos/icza/gowut | closed | Expander.Content() wrong implementation | auto-migrated Priority-Medium Type-Defect | ```
What steps will reproduce the problem?
func (c *expanderImpl) Content() Comp {
return c.header
}
What version of the product are you using? On what operating system?
go 1.1.2 for windows/amd64
Please provide any additional information below.
```
Original issue reported on code.google.com by `biorh...@gmail.com` on 7 Nov 2013 at 7:19 | 1.0 | Expander.Content() wrong implementation - ```
What steps will reproduce the problem?
func (c *expanderImpl) Content() Comp {
return c.header
}
What version of the product are you using? On what operating system?
go 1.1.2 for windows/amd64
Please provide any additional information below.
```
Original issue reported on code.google.com by `biorh...@gmail.com` on 7 Nov 2013 at 7:19 | defect | expander content wrong implementation what steps will reproduce the problem func c expanderimpl content comp return c header what version of the product are you using on what operating system go for windows please provide any additional information below original issue reported on code google com by biorh gmail com on nov at | 1 |
164,507 | 6,227,497,094 | IssuesEvent | 2017-07-10 20:54:00 | javascript-obfuscator/javascript-obfuscator | https://api.github.com/repos/javascript-obfuscator/javascript-obfuscator | reopened | eval() breaks Chrome Extensions | high priority | I just tried out the obfuscator online here:
https://javascriptobfuscator.herokuapp.com/
...which I believe is based on this TS obfuscator. But after obfuscating the js files in my Chrome Extension, my extension breaks hard because eval() is illegal in some contexts of Chrome Extensions. This is a show-stopper for me, so I was wondering if there's some way to instruct the obfuscator to not ever use eval() in the obfuscation. It would be especially nice if you could tell me which setting(s) to avoid at javascriptobfuscator.herokuapp.com so I can retry there first. And then, how would I do the same thing in the downloaded TS app?
Thanks so much in advance, this seems like an awesome free tool. | 1.0 | eval() breaks Chrome Extensions - I just tried out the obfuscator online here:
https://javascriptobfuscator.herokuapp.com/
...which I believe is based on this TS obfuscator. But after obfuscating the js files in my Chrome Extension, my extension breaks hard because eval() is illegal in some contexts of Chrome Extensions. This is a show-stopper for me, so I was wondering if there's some way to instruct the obfuscator to not ever use eval() in the obfuscation. It would be especially nice if you could tell me which setting(s) to avoid at javascriptobfuscator.herokuapp.com so I can retry there first. And then, how would I do the same thing in the downloaded TS app?
Thanks so much in advance, this seems like an awesome free tool. | non_defect | eval breaks chrome extensions i just tried out the obfuscator online here which i believe is based on this ts obfuscator but after obfuscating the js files in my chrome extension my extension breaks hard because eval is illegal in some contexts of chrome extensions this is a show stopper for me so i was wondering if there s some way to instruct the obfuscator to not ever use eval in the obfuscation it would be especially nice if you could tell me which setting s to avoid at javascriptobfuscator herokuapp com so i can retry there first and then how would i do the same thing in the downloaded ts app thanks so much in advance this seems like an awesome free tool | 0 |
5,697 | 2,610,193,847 | IssuesEvent | 2015-02-26 19:01:14 | chrsmith/quchuseban | https://api.github.com/repos/chrsmith/quchuseban | opened | 深入吃什么水果能淡化色斑 | auto-migrated Priority-Medium Type-Defect | ```
《摘要》
令人崩溃的是,痘痘只是青春期的烦恼,而祛斑大战,一旦��
�响就没有尽头。
买更贵的化妆品、去专业美容院护理、尝试各种散发古怪气��
�的药膏……我们以神农试百草的精神尝试各种看到的、听到�
��偏方,但效果真的寥寥。吃什么水果能淡化色斑,
《客户案例》
好像做什么事都有得有失,比如生孩子,在宝宝的降临��
�给我们无限喜悦的同时,其他的问题又让我们陷入了烦恼,�
��娠斑就是我最大的烦恼。<br>
在没怀宝宝之前,白白嫩嫩的皮肤是最让我引以为豪的��
�每个人见了都说我是个很清秀的女孩子。结婚没多久就怀孕�
��,本为肚子里的孩子而欣喜,但是,我的脸蛋也越来越糟糕
。斑点陆续冒出来,据说是妊娠斑,颧骨两边的色斑由于蝴��
�的翅膀一样,原来较好的面容一下子大打折扣。这使我很是�
��恼,由于要顾及肚子里的孩子,没敢用什么祛斑产品。<br>
生完宝宝后,本以为妊娠斑会慢慢消失,可是,顾及是��
�有产后抑郁症的原因,变得情绪很突变,不爱说话,斑也就�
��多了。为此我跑了无数趟美容院,先是用祛斑霜,但斑一点
没下去还越来越多了,在美容院老板的介绍下又做了光子嫩��
�,开始还有效果,但是没去做之后又反弹了,比没做之前还�
��重。后来还陆续用了不少祛斑产品,都不见效。<br>
祛斑,成了我的心头大事,花了不少时间在上面,当我��
�到「黛芙薇尔精华液」的介绍时,心中暗喜,说不定这次真�
��希望了。我觉得这个产品成分独特,一定错不了,就买了两
个周期的。不到一个月,脸上的斑真的淡了,板块明显消退��
�两个月的时候,板块的面积也变小了,皮肤变得湿润细腻,�
��细小皱纹都淡化了。又用了一个周期之后,斑基本上就没有
了,脸色红润有光泽,精神状况也明显改善。加上饮食调节��
�效果更好,更让我想不到的是睡眠好了,原来的便秘也没有�
��。脸色白里透红,皮肤细腻有弹性。<br>
也许是斑去掉了,我的心情好转,也恢复了以前开朗、��
�观的个性,产后抑郁症也消失了,现在,看到宝宝健康成长�
��又加上丈夫的精心照顾,我是倍感幸福啊!
阅读了吃什么水果能淡化色斑,再看脸上容易长斑的原因:
《色斑形成原因》
内部因素
一、压力
当人受到压力时,就会分泌肾上腺素,为对付压力而做��
�备。如果长期受到压力,人体新陈代谢的平衡就会遭到破坏�
��皮肤所需的营养供应趋于缓慢,色素母细胞就会变得很活跃
。
二、荷尔蒙分泌失调
避孕药里所含的女性荷尔蒙雌激素,会刺激麦拉宁细胞��
�分泌而形成不均匀的斑点,因避孕药而形成的斑点,虽然在�
��药中断后会停止,但仍会在皮肤上停留很长一段时间。怀孕
中因女性荷尔蒙雌激素的增加,从怀孕4—5个月开始会容易出
现斑,这时候出现的斑点在产后大部分会消失。可是,新陈��
�谢不正常、肌肤裸露在强烈的紫外线下、精神上受到压力等�
��因,都会使斑加深。有时新长出的斑,产后也不会消失,所
以需要更加注意。
三、新陈代谢缓慢
肝的新陈代谢功能不正常或卵巢功能减退时也会出现斑��
�因为新陈代谢不顺畅、或内分泌失调,使身体处于敏感状态�
��,从而加剧色素问题。我们常说的便秘会形成斑,其实就是
内分泌失调导致过敏体质而形成的。另外,身体状态不正常��
�时候,紫外线的照射也会加速斑的形成。
四、错误的使用化妆品
使用了不适合自己皮肤的化妆品,会导致皮肤过敏。在��
�疗的过程中如过量照射到紫外线,皮肤会为了抵御外界的侵�
��,在有炎症的部位聚集麦拉宁色素,这样会出现色素沉着的
问题。
外部因素
一、紫外线
照射紫外线的时候,人体为了保护皮肤,会在基底层产��
�很多麦拉宁色素。所以为了保护皮肤,会在敏感部位聚集更�
��的色素。经常裸露在强烈的阳光底下不仅促进皮肤的老化,
还会引起黑斑、雀斑等色素沉着的皮肤疾患。
二、不良的清洁习惯
因强烈的清洁习惯使皮肤变得敏感,这样会刺激皮肤。��
�皮肤敏感时,人体为了保护皮肤,黑色素细胞会分泌很多麦�
��宁色素,当色素过剩时就出现了斑、瑕疵等皮肤色素沉着的
问题。
三、遗传基因
父母中有长斑的,则本人长斑的概率就很高,这种情况��
�一定程度上就可判定是遗传基因的作用。所以家里特别是长�
��有长斑的人,要注意避免引发长斑的重要因素之一——紫外
线照射,这是预防斑必须注意的。
《有疑问帮你解决》
1,黛芙薇尔精华液真的有效果吗?真的可以把脸上的黄褐��
�去掉吗?
答:黛芙薇尔精华液DNA精华能够有效的修复周围难以触��
�的色斑,其独有的纳豆成分为皮肤的美白与靓丽,提供了必�
��可少的营养物质,可以有效的去除黄褐斑,黄褐斑,黄褐斑
,蝴蝶斑,晒斑、妊娠斑等。它它完全突破了传统的美肤时��
�,宛如在皮肤中注入了一杯兼具活化、再生、滋养等功效的�
��尾酒,同时为脸部提供大量有机维生素精华,脸部的改变显
而易见。自产品上市以来,老顾客纷纷介绍新顾客,71%的新��
�客都是通过老顾客介绍而来,口碑由此而来!
2,服用黛芙薇尔美白,会伤身体吗?有副作用吗?
答:黛芙薇尔精华液应用了精纯复合配方和领先的分类��
�斑科技,并将“DNA美肤系统”疗法应用到了该产品中,能彻�
��祛除黄褐斑,蝴蝶斑,妊娠斑,晒斑,黄褐斑,老年斑,有
效淡化黄褐斑至接近肤色。黛芙薇尔通过法国、美国、台湾��
�地的专家通力协作,超过10年的研究以全新的DNA肌肤修复技��
�,挑战传统化学护肤理念,不懈追寻发现破译大自然的美丽�
��迹,令每一位爱美的女性都能享受到科技创新所带来的自然
之美。
专为亚洲女性肤质研制,精心呵护女性美丽,多年来,为数��
�百万计的女性解除了黄褐斑困扰。深得广大女性朋友的信赖!
3,去除黄褐斑之后,会反弹吗?
答:很多曾经长了黄褐斑的人士,自从选择了黛芙薇尔��
�白,就一劳永逸。这款祛斑产品是经过数十位权威祛斑专家�
��据斑的形成原因精心研制而成用事实说话,让消费者打分。
树立权威品牌!我们的很多新客户都是老客户介绍而来,请问�
��如果效果不好,会有客户转介绍吗?
4,你们的价格有点贵,能不能便宜一点?
答:如果您使用西药最少需要2000元,煎服的药最少需要3
000元,做手术最少是5000元,而这些毫无疑问,不会对彻底去�
��你的斑点有任何帮助!一分价钱,一份价值,我们现在做的��
�是一个口碑,一个品牌,价钱并不高。如果花这点钱把你的�
��褐斑彻底去除,你还会觉得贵吗?你还会再去花那么多冤枉��
�,不但斑没去掉,还把自己的皮肤弄的越来越糟吗
5,我适合用黛芙薇尔精华液吗?
答:黛芙薇尔适用人群:
1、生理紊乱引起的黄褐斑人群
2、生育引起的妊娠斑人群
3、年纪增长引起的老年斑人群
4、化妆品色素沉积、辐射斑人群
5、长期日照引起的日晒斑人群
6、肌肤暗淡急需美白的人群
《祛斑小方法》
吃什么水果能淡化色斑,同时为您分享祛斑小方法
1
采含苞待放的桃花与冬瓜子(鲜品100克、干品则用30克),一��
�捣烂如泥
2 将白丁香研成粉末,与白蜂蜜调入桃花瓜子泥中即成
3 早晚用以涂抹面部,20-30分钟后用清水洗去
本方有祛除痤疮,消斑痕的功效。
```
-----
Original issue reported on code.google.com by `additive...@gmail.com` on 1 Jul 2014 at 5:47 | 1.0 | 深入吃什么水果能淡化色斑 - ```
《摘要》
令人崩溃的是,痘痘只是青春期的烦恼,而祛斑大战,一旦��
�响就没有尽头。
买更贵的化妆品、去专业美容院护理、尝试各种散发古怪气��
�的药膏……我们以神农试百草的精神尝试各种看到的、听到�
��偏方,但效果真的寥寥。吃什么水果能淡化色斑,
《客户案例》
好像做什么事都有得有失,比如生孩子,在宝宝的降临��
�给我们无限喜悦的同时,其他的问题又让我们陷入了烦恼,�
��娠斑就是我最大的烦恼。<br>
在没怀宝宝之前,白白嫩嫩的皮肤是最让我引以为豪的��
�每个人见了都说我是个很清秀的女孩子。结婚没多久就怀孕�
��,本为肚子里的孩子而欣喜,但是,我的脸蛋也越来越糟糕
。斑点陆续冒出来,据说是妊娠斑,颧骨两边的色斑由于蝴��
�的翅膀一样,原来较好的面容一下子大打折扣。这使我很是�
��恼,由于要顾及肚子里的孩子,没敢用什么祛斑产品。<br>
生完宝宝后,本以为妊娠斑会慢慢消失,可是,顾及是��
�有产后抑郁症的原因,变得情绪很突变,不爱说话,斑也就�
��多了。为此我跑了无数趟美容院,先是用祛斑霜,但斑一点
没下去还越来越多了,在美容院老板的介绍下又做了光子嫩��
�,开始还有效果,但是没去做之后又反弹了,比没做之前还�
��重。后来还陆续用了不少祛斑产品,都不见效。<br>
祛斑,成了我的心头大事,花了不少时间在上面,当我��
�到「黛芙薇尔精华液」的介绍时,心中暗喜,说不定这次真�
��希望了。我觉得这个产品成分独特,一定错不了,就买了两
个周期的。不到一个月,脸上的斑真的淡了,板块明显消退��
�两个月的时候,板块的面积也变小了,皮肤变得湿润细腻,�
��细小皱纹都淡化了。又用了一个周期之后,斑基本上就没有
了,脸色红润有光泽,精神状况也明显改善。加上饮食调节��
�效果更好,更让我想不到的是睡眠好了,原来的便秘也没有�
��。脸色白里透红,皮肤细腻有弹性。<br>
也许是斑去掉了,我的心情好转,也恢复了以前开朗、��
�观的个性,产后抑郁症也消失了,现在,看到宝宝健康成长�
��又加上丈夫的精心照顾,我是倍感幸福啊!
阅读了吃什么水果能淡化色斑,再看脸上容易长斑的原因:
《色斑形成原因》
内部因素
一、压力
当人受到压力时,就会分泌肾上腺素,为对付压力而做��
�备。如果长期受到压力,人体新陈代谢的平衡就会遭到破坏�
��皮肤所需的营养供应趋于缓慢,色素母细胞就会变得很活跃
。
二、荷尔蒙分泌失调
避孕药里所含的女性荷尔蒙雌激素,会刺激麦拉宁细胞��
�分泌而形成不均匀的斑点,因避孕药而形成的斑点,虽然在�
��药中断后会停止,但仍会在皮肤上停留很长一段时间。怀孕
中因女性荷尔蒙雌激素的增加,从怀孕4—5个月开始会容易出
现斑,这时候出现的斑点在产后大部分会消失。可是,新陈��
�谢不正常、肌肤裸露在强烈的紫外线下、精神上受到压力等�
��因,都会使斑加深。有时新长出的斑,产后也不会消失,所
以需要更加注意。
三、新陈代谢缓慢
肝的新陈代谢功能不正常或卵巢功能减退时也会出现斑��
�因为新陈代谢不顺畅、或内分泌失调,使身体处于敏感状态�
��,从而加剧色素问题。我们常说的便秘会形成斑,其实就是
内分泌失调导致过敏体质而形成的。另外,身体状态不正常��
�时候,紫外线的照射也会加速斑的形成。
四、错误的使用化妆品
使用了不适合自己皮肤的化妆品,会导致皮肤过敏。在��
�疗的过程中如过量照射到紫外线,皮肤会为了抵御外界的侵�
��,在有炎症的部位聚集麦拉宁色素,这样会出现色素沉着的
问题。
外部因素
一、紫外线
照射紫外线的时候,人体为了保护皮肤,会在基底层产��
�很多麦拉宁色素。所以为了保护皮肤,会在敏感部位聚集更�
��的色素。经常裸露在强烈的阳光底下不仅促进皮肤的老化,
还会引起黑斑、雀斑等色素沉着的皮肤疾患。
二、不良的清洁习惯
因强烈的清洁习惯使皮肤变得敏感,这样会刺激皮肤。��
�皮肤敏感时,人体为了保护皮肤,黑色素细胞会分泌很多麦�
��宁色素,当色素过剩时就出现了斑、瑕疵等皮肤色素沉着的
问题。
三、遗传基因
父母中有长斑的,则本人长斑的概率就很高,这种情况��
�一定程度上就可判定是遗传基因的作用。所以家里特别是长�
��有长斑的人,要注意避免引发长斑的重要因素之一——紫外
线照射,这是预防斑必须注意的。
《有疑问帮你解决》
1,黛芙薇尔精华液真的有效果吗?真的可以把脸上的黄褐��
�去掉吗?
答:黛芙薇尔精华液DNA精华能够有效的修复周围难以触��
�的色斑,其独有的纳豆成分为皮肤的美白与靓丽,提供了必�
��可少的营养物质,可以有效的去除黄褐斑,黄褐斑,黄褐斑
,蝴蝶斑,晒斑、妊娠斑等。它它完全突破了传统的美肤时��
�,宛如在皮肤中注入了一杯兼具活化、再生、滋养等功效的�
��尾酒,同时为脸部提供大量有机维生素精华,脸部的改变显
而易见。自产品上市以来,老顾客纷纷介绍新顾客,71%的新��
�客都是通过老顾客介绍而来,口碑由此而来!
2,服用黛芙薇尔美白,会伤身体吗?有副作用吗?
答:黛芙薇尔精华液应用了精纯复合配方和领先的分类��
�斑科技,并将“DNA美肤系统”疗法应用到了该产品中,能彻�
��祛除黄褐斑,蝴蝶斑,妊娠斑,晒斑,黄褐斑,老年斑,有
效淡化黄褐斑至接近肤色。黛芙薇尔通过法国、美国、台湾��
�地的专家通力协作,超过10年的研究以全新的DNA肌肤修复技��
�,挑战传统化学护肤理念,不懈追寻发现破译大自然的美丽�
��迹,令每一位爱美的女性都能享受到科技创新所带来的自然
之美。
专为亚洲女性肤质研制,精心呵护女性美丽,多年来,为数��
�百万计的女性解除了黄褐斑困扰。深得广大女性朋友的信赖!
3,去除黄褐斑之后,会反弹吗?
答:很多曾经长了黄褐斑的人士,自从选择了黛芙薇尔��
�白,就一劳永逸。这款祛斑产品是经过数十位权威祛斑专家�
��据斑的形成原因精心研制而成用事实说话,让消费者打分。
树立权威品牌!我们的很多新客户都是老客户介绍而来,请问�
��如果效果不好,会有客户转介绍吗?
4,你们的价格有点贵,能不能便宜一点?
答:如果您使用西药最少需要2000元,煎服的药最少需要3
000元,做手术最少是5000元,而这些毫无疑问,不会对彻底去�
��你的斑点有任何帮助!一分价钱,一份价值,我们现在做的��
�是一个口碑,一个品牌,价钱并不高。如果花这点钱把你的�
��褐斑彻底去除,你还会觉得贵吗?你还会再去花那么多冤枉��
�,不但斑没去掉,还把自己的皮肤弄的越来越糟吗
5,我适合用黛芙薇尔精华液吗?
答:黛芙薇尔适用人群:
1、生理紊乱引起的黄褐斑人群
2、生育引起的妊娠斑人群
3、年纪增长引起的老年斑人群
4、化妆品色素沉积、辐射斑人群
5、长期日照引起的日晒斑人群
6、肌肤暗淡急需美白的人群
《祛斑小方法》
吃什么水果能淡化色斑,同时为您分享祛斑小方法
1
采含苞待放的桃花与冬瓜子(鲜品100克、干品则用30克),一��
�捣烂如泥
2 将白丁香研成粉末,与白蜂蜜调入桃花瓜子泥中即成
3 早晚用以涂抹面部,20-30分钟后用清水洗去
本方有祛除痤疮,消斑痕的功效。
```
-----
Original issue reported on code.google.com by `additive...@gmail.com` on 1 Jul 2014 at 5:47 | defect | 深入吃什么水果能淡化色斑 《摘要》 令人崩溃的是,痘痘只是青春期的烦恼,而祛斑大战,一旦�� �响就没有尽头。 买更贵的化妆品、去专业美容院护理、尝试各种散发古怪气�� �的药膏……我们以神农试百草的精神尝试各种看到的、听到� ��偏方,但效果真的寥寥。吃什么水果能淡化色斑, 《客户案例》 好像做什么事都有得有失,比如生孩子,在宝宝的降临�� �给我们无限喜悦的同时,其他的问题又让我们陷入了烦恼,� ��娠斑就是我最大的烦恼。 在没怀宝宝之前,白白嫩嫩的皮肤是最让我引以为豪的�� �每个人见了都说我是个很清秀的女孩子。结婚没多久就怀孕� ��,本为肚子里的孩子而欣喜,但是,我的脸蛋也越来越糟糕 。斑点陆续冒出来,据说是妊娠斑,颧骨两边的色斑由于蝴�� �的翅膀一样,原来较好的面容一下子大打折扣。这使我很是� ��恼,由于要顾及肚子里的孩子,没敢用什么祛斑产品。 生完宝宝后,本以为妊娠斑会慢慢消失,可是,顾及是�� �有产后抑郁症的原因,变得情绪很突变,不爱说话,斑也就� ��多了。为此我跑了无数趟美容院,先是用祛斑霜,但斑一点 没下去还越来越多了,在美容院老板的介绍下又做了光子嫩�� �,开始还有效果,但是没去做之后又反弹了,比没做之前还� ��重。后来还陆续用了不少祛斑产品,都不见效。 祛斑,成了我的心头大事,花了不少时间在上面,当我�� �到「黛芙薇尔精华液」的介绍时,心中暗喜,说不定这次真� ��希望了。我觉得这个产品成分独特,一定错不了,就买了两 个周期的。不到一个月,脸上的斑真的淡了,板块明显消退�� �两个月的时候,板块的面积也变小了,皮肤变得湿润细腻,� ��细小皱纹都淡化了。又用了一个周期之后,斑基本上就没有 了,脸色红润有光泽,精神状况也明显改善。加上饮食调节�� �效果更好,更让我想不到的是睡眠好了,原来的便秘也没有� ��。脸色白里透红,皮肤细腻有弹性。 也许是斑去掉了,我的心情好转,也恢复了以前开朗、�� �观的个性,产后抑郁症也消失了,现在,看到宝宝健康成长� ��又加上丈夫的精心照顾,我是倍感幸福啊 阅读了吃什么水果能淡化色斑,再看脸上容易长斑的原因: 《色斑形成原因》 内部因素 一、压力 当人受到压力时,就会分泌肾上腺素,为对付压力而做�� �备。如果长期受到压力,人体新陈代谢的平衡就会遭到破坏� ��皮肤所需的营养供应趋于缓慢,色素母细胞就会变得很活跃 。 二、荷尔蒙分泌失调 避孕药里所含的女性荷尔蒙雌激素,会刺激麦拉宁细胞�� �分泌而形成不均匀的斑点,因避孕药而形成的斑点,虽然在� ��药中断后会停止,但仍会在皮肤上停留很长一段时间。怀孕 中因女性荷尔蒙雌激素的增加, — 现斑,这时候出现的斑点在产后大部分会消失。可是,新陈�� �谢不正常、肌肤裸露在强烈的紫外线下、精神上受到压力等� ��因,都会使斑加深。有时新长出的斑,产后也不会消失,所 以需要更加注意。 三、新陈代谢缓慢 肝的新陈代谢功能不正常或卵巢功能减退时也会出现斑�� �因为新陈代谢不顺畅、或内分泌失调,使身体处于敏感状态� ��,从而加剧色素问题。我们常说的便秘会形成斑,其实就是 内分泌失调导致过敏体质而形成的。另外,身体状态不正常�� �时候,紫外线的照射也会加速斑的形成。 四、错误的使用化妆品 使用了不适合自己皮肤的化妆品,会导致皮肤过敏。在�� �疗的过程中如过量照射到紫外线,皮肤会为了抵御外界的侵� ��,在有炎症的部位聚集麦拉宁色素,这样会出现色素沉着的 问题。 外部因素 一、紫外线 照射紫外线的时候,人体为了保护皮肤,会在基底层产�� �很多麦拉宁色素。所以为了保护皮肤,会在敏感部位聚集更� ��的色素。经常裸露在强烈的阳光底下不仅促进皮肤的老化, 还会引起黑斑、雀斑等色素沉着的皮肤疾患。 二、不良的清洁习惯 因强烈的清洁习惯使皮肤变得敏感,这样会刺激皮肤。�� �皮肤敏感时,人体为了保护皮肤,黑色素细胞会分泌很多麦� ��宁色素,当色素过剩时就出现了斑、瑕疵等皮肤色素沉着的 问题。 三、遗传基因 父母中有长斑的,则本人长斑的概率就很高,这种情况�� �一定程度上就可判定是遗传基因的作用。所以家里特别是长� ��有长斑的人,要注意避免引发长斑的重要因素之一——紫外 线照射,这是预防斑必须注意的。 《有疑问帮你解决》 黛芙薇尔精华液真的有效果吗 真的可以把脸上的黄褐�� �去掉吗 答:黛芙薇尔精华液dna精华能够有效的修复周围难以触�� �的色斑,其独有的纳豆成分为皮肤的美白与靓丽,提供了必� 
��可少的营养物质,可以有效的去除黄褐斑,黄褐斑,黄褐斑 ,蝴蝶斑,晒斑、妊娠斑等。它它完全突破了传统的美肤时�� �,宛如在皮肤中注入了一杯兼具活化、再生、滋养等功效的� ��尾酒,同时为脸部提供大量有机维生素精华,脸部的改变显 而易见。自产品上市以来,老顾客纷纷介绍新顾客, 的新�� �客都是通过老顾客介绍而来,口碑由此而来 ,服用黛芙薇尔美白,会伤身体吗 有副作用吗 答:黛芙薇尔精华液应用了精纯复合配方和领先的分类�� �斑科技,并将“dna美肤系统”疗法应用到了该产品中,能彻� ��祛除黄褐斑,蝴蝶斑,妊娠斑,晒斑,黄褐斑,老年斑,有 效淡化黄褐斑至接近肤色。黛芙薇尔通过法国、美国、台湾�� �地的专家通力协作, �� �,挑战传统化学护肤理念,不懈追寻发现破译大自然的美丽� ��迹,令每一位爱美的女性都能享受到科技创新所带来的自然 之美。 专为亚洲女性肤质研制,精心呵护女性美丽,多年来,为数�� �百万计的女性解除了黄褐斑困扰。深得广大女性朋友的信赖 ,去除黄褐斑之后,会反弹吗 答:很多曾经长了黄褐斑的人士,自从选择了黛芙薇尔�� �白,就一劳永逸。这款祛斑产品是经过数十位权威祛斑专家� ��据斑的形成原因精心研制而成用事实说话,让消费者打分。 树立权威品牌 我们的很多新客户都是老客户介绍而来,请问� ��如果效果不好,会有客户转介绍吗 ,你们的价格有点贵,能不能便宜一点 答: , , ,而这些毫无疑问,不会对彻底去� ��你的斑点有任何帮助 一分价钱,一份价值,我们现在做的�� �是一个口碑,一个品牌,价钱并不高。如果花这点钱把你的� ��褐斑彻底去除,你还会觉得贵吗 你还会再去花那么多冤枉�� �,不但斑没去掉,还把自己的皮肤弄的越来越糟吗 ,我适合用黛芙薇尔精华液吗 答:黛芙薇尔适用人群: 、生理紊乱引起的黄褐斑人群 、生育引起的妊娠斑人群 、年纪增长引起的老年斑人群 、化妆品色素沉积、辐射斑人群 、长期日照引起的日晒斑人群 、肌肤暗淡急需美白的人群 《祛斑小方法》 吃什么水果能淡化色斑,同时为您分享祛斑小方法 采含苞待放的桃花与冬瓜子 、 ),一�� �捣烂如泥 将白丁香研成粉末,与白蜂蜜调入桃花瓜子泥中即成 早晚用以涂抹面部, - 本方有祛除痤疮,消斑痕的功效。 original issue reported on code google com by additive gmail com on jul at | 1 |
100,018 | 30,597,735,863 | IssuesEvent | 2023-07-22 01:54:52 | tensorflow/tensorflow | https://api.github.com/repos/tensorflow/tensorflow | closed | Libraries missing, according to console | stat:awaiting response type:build/install stale subtype: ubuntu/linux TF 2.10 | Ok so I've got this issue where when I run my program I get a few different errors. I believe this likely has been brought up before but the issue that I did find, was all over the place with a ton of different edits and things which were frankly too difficult to follow, at least for me. I'm using tensorflow 2.10.0, cuda 11.7 and the corresponding cudnn, and I'm not sure if it's a version issue, or perhaps just a non-issue in general but I'll provide the error that I'm receiving:
2023-05-17 12:32:37.272738: E tensorflow/stream_executor/cuda/cuda_blas.cc:2981] Unable to register cuBLAS factory: Attempting to register factory for plugin cuBLAS when one has already been registered
2023-05-17 12:32:38.702176: W tensorflow/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'libnvinfer.so.7'; dlerror: libnvinfer.so.7: cannot open shared object file: No such file or directory
2023-05-17 12:32:38.702271: W tensorflow/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'libnvinfer_plugin.so.7'; dlerror: libnvinfer_plugin.so.7: cannot open shared object file: No such file or directory
2023-05-17 12:32:38.702297: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Cannot dlopen some TensorRT libraries. If you would like to use Nvidia GPU with TensorRT, please make sure the missing libraries mentioned above are installed properly.
2023-05-17 12:32:41.132568: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:966] could not open file to read NUMA node: /sys/bus/pci/devices/0000:01:00.0/numa_node
Your kernel may have been built without NUMA support.
2023-05-17 12:32:41.141491: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:966] could not open file to read NUMA node: /sys/bus/pci/devices/0000:01:00.0/numa_node
Your kernel may have been built without NUMA support.
2023-05-17 12:32:41.141557: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:966] could not open file to read NUMA node: /sys/bus/pci/devices/0000:01:00.0/numa_node
Your kernel may have been built without NUMA support.
It's just that in order to get my program to work, as far as I can tell, I need tensorflow that has GPU support which was removed in the latest versions. Anyway, would really appreciate it if anyone knows how I can solve this please and thank you. | 1.0 | Libraries missing, according to console - Ok so I've got this issue where when I run my program I get a few different errors. I believe this likely has been brought up before but the issue that I did find, was all over the place with a ton of different edits and things which were frankly too difficult to follow, at least for me. I'm using tensorflow 2.10.0, cuda 11.7 and the corresponding cudnn, and I'm not sure if it's a version issue, or perhaps just a non-issue in general but I'll provide the error that I'm receiving:
2023-05-17 12:32:37.272738: E tensorflow/stream_executor/cuda/cuda_blas.cc:2981] Unable to register cuBLAS factory: Attempting to register factory for plugin cuBLAS when one has already been registered
2023-05-17 12:32:38.702176: W tensorflow/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'libnvinfer.so.7'; dlerror: libnvinfer.so.7: cannot open shared object file: No such file or directory
2023-05-17 12:32:38.702271: W tensorflow/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'libnvinfer_plugin.so.7'; dlerror: libnvinfer_plugin.so.7: cannot open shared object file: No such file or directory
2023-05-17 12:32:38.702297: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Cannot dlopen some TensorRT libraries. If you would like to use Nvidia GPU with TensorRT, please make sure the missing libraries mentioned above are installed properly.
2023-05-17 12:32:41.132568: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:966] could not open file to read NUMA node: /sys/bus/pci/devices/0000:01:00.0/numa_node
Your kernel may have been built without NUMA support.
2023-05-17 12:32:41.141491: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:966] could not open file to read NUMA node: /sys/bus/pci/devices/0000:01:00.0/numa_node
Your kernel may have been built without NUMA support.
2023-05-17 12:32:41.141557: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:966] could not open file to read NUMA node: /sys/bus/pci/devices/0000:01:00.0/numa_node
Your kernel may have been built without NUMA support.
It's just that in order to get my program to work, as far as I can tell, I need tensorflow that has GPU support which was removed in the latest versions. Anyway, would really appreciate it if anyone knows how I can solve this please and thank you. | non_defect | libraries missing according to console ok so i ve got this issue where when i run my program i get a few different errors i believe this likely has been brought up before but the issue that i did find was all over the place with a ton of different edits and things which were frankly too difficult to follow at least for me i m using tensorflow cuda and the corresponding cudnn and i m not sure if it s a version issue or perhaps just a non issue in general but i ll provide the error that i m receiving e tensorflow stream executor cuda cuda blas cc unable to register cublas factory attempting to register factory for plugin cublas when one has already been registered w tensorflow stream executor platform default dso loader cc could not load dynamic library libnvinfer so dlerror libnvinfer so cannot open shared object file no such file or directory w tensorflow stream executor platform default dso loader cc could not load dynamic library libnvinfer plugin so dlerror libnvinfer plugin so cannot open shared object file no such file or directory w tensorflow compiler utils py utils cc tf trt warning cannot dlopen some tensorrt libraries if you would like to use nvidia gpu with tensorrt please make sure the missing libraries mentioned above are installed properly i tensorflow stream executor cuda cuda gpu executor cc could not open file to read numa node sys bus pci devices numa node your kernel may have been built without numa support i tensorflow stream executor cuda cuda gpu executor cc could not open file to read numa node sys bus pci devices numa node your kernel may have been built without numa support i tensorflow stream executor cuda cuda gpu executor cc could not open file to read numa node sys bus pci 
devices numa node your kernel may have been built without numa support it s just that in order to get my program to work as far as i can tell i need tensorflow that has gpu support which was removed in the latest versions anyway would really appreciate it if anyone knows how i can solve this please and thank you | 0 |
1,272 | 2,603,742,474 | IssuesEvent | 2015-02-24 17:41:34 | chrsmith/bwapi | https://api.github.com/repos/chrsmith/bwapi | closed | Vulture AttackMove Crashes StartCraft | auto-migrated Priority-Critical Type-Defect | ```
What steps will reproduce the problem?
1. Compile ExampleAIModule.cpp
2. Run the map: vulture.scm
3. After 5 seconds, the module will try to attack move the vulture, but
StarCraft will crash
Additional Information:
-Other orders given to vultures will crash StarCraft, such as attack,
rightClick, patrol, etc.
-Placing mines does not crash StarCraft
-This may be related to Issue 39: which was caused by the select code.
What is the expected output? What do you see instead?
Crash after 5 seconds
What version of the product are you using? On what operating system?
BW Beta 2.1.4
Please provide any additional information below.
ExampleAIModule.cpp:
#include "ExampleAIModule.h"
using namespace BWAPI;
void ExampleAIModule::onStart()
{
Broodwar->sendText("Vulture Test!");
}
void ExampleAIModule::onFrame()
{
if (Broodwar->getFrameCount()%120==0)
{
std::set<Unit*> myUnits = Broodwar->self()->getUnits();
for(std::set<Unit*>::iterator i=myUnits.begin();i!=myUnits.end();i++)
{
if ((*i)->getType().canMove()) {
Broodwar->sendText("attackMove");
(*i)->attackMove(BWAPI::Position(32, 32));
}
}
}
}
```
-----
Original issue reported on code.google.com by `bgwe...@gmail.com` on 17 Nov 2009 at 12:33
Attachments:
* [vulture.scm](https://storage.googleapis.com/google-code-attachments/bwapi/issue-108/comment-0/vulture.scm)
| 1.0 | Vulture AttackMove Crashes StartCraft - ```
What steps will reproduce the problem?
1. Compile ExampleAIModule.cpp
2. Run the map: vulture.scm
3. After 5 seconds, the module will try to attack move the vulture, but
StarCraft will crash
Additional Information:
-Other orders given to vultures will crash StarCraft, such as attack,
rightClick, patrol, etc.
-Placing mines does not crash StarCraft
-This may be related to Issue 39: which was caused by the select code.
What is the expected output? What do you see instead?
Crash after 5 seconds
What version of the product are you using? On what operating system?
BW Beta 2.1.4
Please provide any additional information below.
ExampleAIModule.cpp:
#include "ExampleAIModule.h"
using namespace BWAPI;
void ExampleAIModule::onStart()
{
Broodwar->sendText("Vulture Test!");
}
void ExampleAIModule::onFrame()
{
if (Broodwar->getFrameCount()%120==0)
{
std::set<Unit*> myUnits = Broodwar->self()->getUnits();
for(std::set<Unit*>::iterator i=myUnits.begin();i!=myUnits.end();i++)
{
if ((*i)->getType().canMove()) {
Broodwar->sendText("attackMove");
(*i)->attackMove(BWAPI::Position(32, 32));
}
}
}
}
```
-----
Original issue reported on code.google.com by `bgwe...@gmail.com` on 17 Nov 2009 at 12:33
Attachments:
* [vulture.scm](https://storage.googleapis.com/google-code-attachments/bwapi/issue-108/comment-0/vulture.scm)
| defect | vulture attackmove crashes startcraft what steps will reproduce the problem compile exampleaimodule cpp run the map vulture scm after seconds the module will try to attack move the vulture but starcraft will crash additional information other orders given to vultures will crash starcraft such as attack rightclick patrol etc placing mines does not crash starcraft this may be related to issue which was caused by the select code what is the expected output what do you see instead crash after seconds what version of the product are you using on what operating system bw beta please provide any additional information below exampleaimodule cpp include exampleaimodule h using namespace bwapi void exampleaimodule onstart broodwar sendtext vulture test void exampleaimodule onframe if broodwar getframecount std set myunits broodwar self getunits for std set iterator i myunits begin i myunits end i if i gettype canmove broodwar sendtext attackmove i attackmove bwapi position original issue reported on code google com by bgwe gmail com on nov at attachments | 1 |
26,917 | 4,827,984,483 | IssuesEvent | 2016-11-07 15:10:02 | barricklab/breseq | https://api.github.com/repos/barricklab/breseq | closed | 3 low quality snps (false positives) rather than 1 quality deletion. change in call between versions | auto-migrated Priority-Medium Type-Defect | ```
What steps will reproduce the problem?
Run Breseq Version 0.16 on REL2057 and compare results to Version 0.23.
What is the expected output? What do you see instead?
In version 0.16 a 84 bp deletion is predicted starting at 474,258 junctions
look very good as does the missing coverage evidence, but there is a small
amount of contaminant reads. In version 0.23, a smaller (66) bp missing
coverage evidence is predicted along with 2 predicted snps and a 3rd possible
mixed read alignment evidence. None of the 3 read alignments look good (very
few registers, and poor aligned bases on ends).
This may be related to issue 68.
Please use labels and text to provide additional information.
```
Original issue reported on code.google.com by `Daniel.D...@gmail.com` on 26 Jun 2013 at 5:50
| 1.0 | 3 low quality snps (false positives) rather than 1 quality deletion. change in call between versions - ```
What steps will reproduce the problem?
Run Breseq Version 0.16 on REL2057 and compare results to Version 0.23.
What is the expected output? What do you see instead?
In version 0.16 a 84 bp deletion is predicted starting at 474,258 junctions
look very good as does the missing coverage evidence, but there is a small
amount of contaminant reads. In version 0.23, a smaller (66) bp missing
coverage evidence is predicted along with 2 predicted snps and a 3rd possible
mixed read alignment evidence. None of the 3 read alignments look good (very
few registers, and poor aligned bases on ends).
This may be related to issue 68.
Please use labels and text to provide additional information.
```
Original issue reported on code.google.com by `Daniel.D...@gmail.com` on 26 Jun 2013 at 5:50
| defect | low quality snps false positives rather than quality deletion change in call between versions what steps will reproduce the problem run breseq version on and compare results to version what is the expected output what do you see instead in version a bp deletion is predicted starting at junctions look very good as does the missing coverage evidence but there is a small amount of contaminant reads in version a smaller bp missing coverage evidence is predicted along with predicted snps and a possible mixed read alignment evidence none of the read alignments look good very few registers and poor aligned bases on ends this may be related to issue please use labels and text to provide additional information original issue reported on code google com by daniel d gmail com on jun at | 1 |
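Each row above pairs a normalized `text` field with a `binary_label` (1 for a defect report, 0 for a non-defect issue such as a feature request or documentation bug). As a hedged illustration of how records in this shape might be consumed, here is a minimal pure-Python sketch that tallies label-conditional word counts from a few toy records and scores a new text with a naive-Bayes-style log-likelihood ratio. The toy records, the word-splitting, and the zero threshold are assumptions made for illustration only; they are not part of the dataset.

```python
import math
from collections import Counter

# Toy records in the shape of the dataset rows above: (text, binary_label),
# where 1 marks a defect report and 0 a non-defect issue. These four examples
# are invented for illustration; they are not rows from the dataset.
records = [
    ("crash when attack move ordered starcraft crash", 1),
    ("wrong implementation returns header instead of content", 1),
    ("feature request make methods non final", 0),
    ("documentation bug uml diagram inaccurate", 0),
]

def train(records):
    """Count words per label; returns (counts, totals) for scoring."""
    counts = {0: Counter(), 1: Counter()}
    for text, label in records:
        counts[label].update(text.split())
    totals = {lbl: sum(c.values()) for lbl, c in counts.items()}
    return counts, totals

def defect_score(text, counts, totals):
    """Log-likelihood ratio of label 1 vs label 0 with add-one smoothing.

    A positive score means the text looks more like the defect class."""
    vocab = len(set(counts[0]) | set(counts[1]))
    score = 0.0
    for word in text.split():
        p1 = (counts[1][word] + 1) / (totals[1] + vocab)
        p0 = (counts[0][word] + 1) / (totals[0] + vocab)
        score += math.log(p1 / p0)
    return score

counts, totals = train(records)
print(defect_score("crash on startup", counts, totals) > 0)  # prints True
```

A real pipeline would of course train on the full 832k rows and use the same normalization the `text` column already applies (lowercasing, punctuation stripping), but the scoring rule itself would keep this form.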