Dataset schema (column, dtype, observed value or length range):

- `Unnamed: 0` (int64): 0 to 832k
- `id` (float64): 2.49B to 32.1B
- `type` (string): 1 distinct value (IssuesEvent)
- `created_at` (string): length 19
- `repo` (string): length 5 to 112
- `repo_url` (string): length 34 to 141
- `action` (string): 3 distinct values
- `title` (string): length 1 to 757
- `labels` (string): length 4 to 664
- `body` (string): length 3 to 261k
- `index` (string): 10 distinct values
- `text_combine` (string): length 96 to 261k
- `label` (string): 2 distinct values
- `text` (string): length 96 to 232k
- `binary_label` (int64): 0 or 1
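A minimal sketch of working with rows shaped like this preview. The dump does not name its source file, so a small in-memory frame built from two of the records below stands in for loading the real dataset:

```python
import pandas as pd

# Two records copied from the preview; only the fields needed for the sketch.
df = pd.DataFrame(
    [
        {"repo": "idaholab/moose", "label": "defect", "binary_label": 1},
        {"repo": "metafizzy/flickity", "label": "non_defect", "binary_label": 0},
    ]
)

# In every row shown, `binary_label` is just the integer encoding of `label`.
assert (df["binary_label"] == (df["label"] == "defect").astype(int)).all()

# Select the defect-labeled rows.
defects = df[df["binary_label"] == 1]
print(defects["repo"].tolist())  # ['idaholab/moose']
```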
Row 28,056
- id: 5,168,065,213
- type: IssuesEvent
- created_at: 2017-01-17 20:32:06
- repo: idaholab/moose
- repo_url: https://api.github.com/repos/idaholab/moose
- action: closed
- title: Doc string for MooseMesh/dim is not accurate
- labels: C: MOOSE P: normal T: defect
- body:
### Description of the enhancement or error report
It says that `This is completely ignored for ExodusII meshes!`, but actually not.
https://github.com/idaholab/moose/blob/devel/framework/src/mesh/MooseMesh.C#L191 causes the default spatial dimension of the mesh is 3 regardless of the actual dimension of the mesh. I discovered this with our custom mesh extruder with a 2D exodus mesh file where we check if the mesh dimension and the mesh spatial dimension are the same. Once we specify `Mesh/dim=2 or 1` in the input, the extruder will work fine.
### Rationale for the enhancement or information for reproducing the error
Something is not very right. I suspect we should use dim=1 to construct the default empty mesh.
### Identified impact
(i.e. Internal object changes, limited interface changes, public API change, or a list of specific applications impacted)
No impact except my particular mesh extruder.
- index: 1.0
- text_combine:
Doc string for MooseMesh/dim is not accurate - ### Description of the enhancement or error report
It says that `This is completely ignored for ExodusII meshes!`, but actually not.
https://github.com/idaholab/moose/blob/devel/framework/src/mesh/MooseMesh.C#L191 causes the default spatial dimension of the mesh is 3 regardless of the actual dimension of the mesh. I discovered this with our custom mesh extruder with a 2D exodus mesh file where we check if the mesh dimension and the mesh spatial dimension are the same. Once we specify `Mesh/dim=2 or 1` in the input, the extruder will work fine.
### Rationale for the enhancement or information for reproducing the error
Something is not very right. I suspect we should use dim=1 to construct the default empty mesh.
### Identified impact
(i.e. Internal object changes, limited interface changes, public API change, or a list of specific applications impacted)
No impact except my particular mesh extruder.
- label: defect
- text:
doc string for moosemesh dim is not accurate description of the enhancement or error report it says that this is completely ignored for exodusii meshes but actually not causes the default spatial dimension of the mesh is regardless of the actual dimension of the mesh i discovered this with our custom mesh extruder with a exodus mesh file where we check if the mesh dimension and the mesh spatial dimension are the same once we specify mesh dim or in the input the extruder will work fine rationale for the enhancement or information for reproducing the error something is not very right i suspect we should use dim to construct the default empty mesh identified impact i e internal object changes limited interface changes public api change or a list of specific applications impacted no impact except my particular mesh extruder
- binary_label: 1
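Across the rows shown here, the `text` column looks like a normalized form of `text_combine`: lowercased, markdown images and URLs dropped, and every digit-bearing token (versions, line numbers, `2D`, `x64`) removed. The exact pipeline is not given anywhere in this dump; the following is purely a reconstruction inferred from comparing the two columns:

```python
import re

def normalize(text: str) -> str:
    """Assumed cleaning step behind the `text` column: lowercase, drop
    markdown images and bare URLs, split on non-alphanumerics, and
    discard any token that contains a digit."""
    text = re.sub(r"!\[[^\]]*\]\([^)]*\)", " ", text)  # markdown images
    text = re.sub(r"https?://\S+", " ", text)          # bare URLs
    tokens = re.split(r"[^0-9A-Za-z]+", text)
    return " ".join(
        t.lower() for t in tokens if t and not any(c.isdigit() for c in t)
    )

print(normalize("Mesh/dim=2 or 1 works, see https://example.com/a#L191"))
# → mesh dim or works see
```

This reproduces, for example, how `Mesh/dim=2 or 1` in the row above becomes `mesh dim or` in its `text` field.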

Row 255,496
- id: 21,929,659,253
- type: IssuesEvent
- created_at: 2022-05-23 08:37:51
- repo: enonic/app-contentstudio
- repo_url: https://api.github.com/repos/enonic/app-contentstudio
- action: closed
- title: Shortcut- selected options are not refreshed after reverting versions
- labels: Bug Test is Failing
- body:
1. Open wizard for new shortcut, fill in the name input and select an option in target selector, save
2. Open Versions Panel and revert the previous version.
3. Revert the verdion with selected target.
BUG - selected options not refreshed after revertiong versions
https://user-images.githubusercontent.com/3728712/169778088-bd261c7d-c469-4d23-a3fe-50c379d66534.mp4
- index: 1.0
- text_combine:
Shortcut- selected options are not refreshed after reverting versions - 1. Open wizard for new shortcut, fill in the name input and select an option in target selector, save
2. Open Versions Panel and revert the previous version.
3. Revert the verdion with selected target.
BUG - selected options not refreshed after revertiong versions
https://user-images.githubusercontent.com/3728712/169778088-bd261c7d-c469-4d23-a3fe-50c379d66534.mp4
- label: non_defect
- text:
shortcut selected options are not refreshed after reverting versions open wizard for new shortcut fill in the name input and select an option in target selector save open versions panel and revert the previous version revert the verdion with selected target bug selected options not refreshed after revertiong versions
- binary_label: 0

Row 31,496
- id: 6,541,485,678
- type: IssuesEvent
- created_at: 2017-09-01 20:12:29
- repo: ironjan/metal-only
- repo_url: https://api.github.com/repos/ironjan/metal-only
- action: closed
- title: KotlinNullPointerException reported via Play Store
- labels: defect
- body:
Reported on 2 devices for 0.6.7.
```
kotlin.KotlinNullPointerException:
at com.github.ironjan.metalonly.client_library.MetalOnlyAPIWrapper.getStats (MetalOnlyAPIWrapper.kt:55)
at com.codingspezis.android.metalonly.player.StreamControlActivity$2.run (StreamControlActivity.java:195)
at java.lang.Thread.run (Thread.java:856)
```
- index: 1.0
- text_combine:
KotlinNullPointerException reported via Play Store - Reported on 2 devices for 0.6.7.
```
kotlin.KotlinNullPointerException:
at com.github.ironjan.metalonly.client_library.MetalOnlyAPIWrapper.getStats (MetalOnlyAPIWrapper.kt:55)
at com.codingspezis.android.metalonly.player.StreamControlActivity$2.run (StreamControlActivity.java:195)
at java.lang.Thread.run (Thread.java:856)
```
- label: defect
- text:
kotlinnullpointerexception reported via play store reported on devices for kotlin kotlinnullpointerexception at com github ironjan metalonly client library metalonlyapiwrapper getstats metalonlyapiwrapper kt at com codingspezis android metalonly player streamcontrolactivity run streamcontrolactivity java at java lang thread run thread java
- binary_label: 1

Row 48,768
- id: 13,184,733,450
- type: IssuesEvent
- created_at: 2020-08-12 19:59:46
- repo: icecube-trac/tix3
- repo_url: https://api.github.com/repos/icecube-trac/tix3
- action: opened
- title: amanda-core giving nasty segfault with release build on gorgon (ubu 8.10 x64) (Trac #163)
- labels: Incomplete Migration Migrated from Trac combo core defect
- body:
<details>
<summary>_Migrated from https://code.icecube.wisc.edu/ticket/163
, reported by blaufuss and owned by blaufuss_</summary>
<p>
```json
{
"status": "closed",
"changetime": "2009-10-21T15:56:26",
"description": "nasty tracebacks:\n*** buffer overflow detected ***: python terminated\n======= Backtrace: =========\n/lib/libc.so.6(__fortify_fail+0x37)[0x7f4eec560887]\n/lib/libc.so.6[0x7f4eec55e750]\n/lib/libc.so.6[0x7f4eec55dae9]\n/lib/libc.so.6(_IO_default_xsputn+0x96)[0x7f4eec4d9116]\n/lib/libc.so.6(_IO_vfprintf+0x1c1c)[0x7f4eec4aa29c]\n/lib/libc.so.6(__vsprintf_chk+0x9d)[0x7f4eec55db8d]\n/lib/libc.so.6(__sprintf_chk+0x80)[0x7f4eec55dad0]\n/opt/slave_build/manual/offline-software/build_release/lib/libamanda-core.so(_ZN9F2kReader20FillTrigger_Muon_DAQERK4mhitR6I3TreeI9I3TriggerE+0x40)[0x7f4ee1a53140]\n",
"reporter": "blaufuss",
"cc": "fabian.kislat@desy.de",
"resolution": "fixed",
"_ts": "1256140586000000",
"component": "combo core",
"summary": "amanda-core giving nasty segfault with release build on gorgon (ubu 8.10 x64)",
"priority": "normal",
"keywords": "",
"time": "2009-06-12T20:59:39",
"milestone": "",
"owner": "blaufuss",
"type": "defect"
}
```
</p>
</details>
- index: 1.0
- text_combine:
amanda-core giving nasty segfault with release build on gorgon (ubu 8.10 x64) (Trac #163) - <details>
<summary>_Migrated from https://code.icecube.wisc.edu/ticket/163
, reported by blaufuss and owned by blaufuss_</summary>
<p>
```json
{
"status": "closed",
"changetime": "2009-10-21T15:56:26",
"description": "nasty tracebacks:\n*** buffer overflow detected ***: python terminated\n======= Backtrace: =========\n/lib/libc.so.6(__fortify_fail+0x37)[0x7f4eec560887]\n/lib/libc.so.6[0x7f4eec55e750]\n/lib/libc.so.6[0x7f4eec55dae9]\n/lib/libc.so.6(_IO_default_xsputn+0x96)[0x7f4eec4d9116]\n/lib/libc.so.6(_IO_vfprintf+0x1c1c)[0x7f4eec4aa29c]\n/lib/libc.so.6(__vsprintf_chk+0x9d)[0x7f4eec55db8d]\n/lib/libc.so.6(__sprintf_chk+0x80)[0x7f4eec55dad0]\n/opt/slave_build/manual/offline-software/build_release/lib/libamanda-core.so(_ZN9F2kReader20FillTrigger_Muon_DAQERK4mhitR6I3TreeI9I3TriggerE+0x40)[0x7f4ee1a53140]\n",
"reporter": "blaufuss",
"cc": "fabian.kislat@desy.de",
"resolution": "fixed",
"_ts": "1256140586000000",
"component": "combo core",
"summary": "amanda-core giving nasty segfault with release build on gorgon (ubu 8.10 x64)",
"priority": "normal",
"keywords": "",
"time": "2009-06-12T20:59:39",
"milestone": "",
"owner": "blaufuss",
"type": "defect"
}
```
</p>
</details>
- label: defect
- text:
amanda core giving nasty segfault with release build on gorgon ubu trac migrated from reported by blaufuss and owned by blaufuss json status closed changetime description nasty tracebacks n buffer overflow detected python terminated n backtrace n lib libc so fortify fail n lib libc so n lib libc so n lib libc so io default xsputn n lib libc so io vfprintf n lib libc so vsprintf chk n lib libc so sprintf chk n opt slave build manual offline software build release lib libamanda core so muon n reporter blaufuss cc fabian kislat desy de resolution fixed ts component combo core summary amanda core giving nasty segfault with release build on gorgon ubu priority normal keywords time milestone owner blaufuss type defect
- binary_label: 1

Row 77,410
- id: 7,573,865,422
- type: IssuesEvent
- created_at: 2018-04-23 19:06:45
- repo: metafizzy/flickity
- repo_url: https://api.github.com/repos/metafizzy/flickity
- action: closed
- title: padding on caroucel-cell needed when dpr > 1
- labels: test case required
- body:
on my google pixel 2 (device pixel ratio about 2.5) all of my carousel-cell content is cropped from the left side. i need to set a left padding to see the content, which is hard to find a correct value that works for all pixel ratios (1 for desktop, and 2-3 for mobile phones)
the easiest way to test it is with firefox developer tools and the responsive size tools


- index: 1.0
- text_combine:
padding on caroucel-cell needed when dpr > 1 - on my google pixel 2 (device pixel ratio about 2.5) all of my carousel-cell content is cropped from the left side. i need to set a left padding to see the content, which is hard to find a correct value that works for all pixel ratios (1 for desktop, and 2-3 for mobile phones)
the easiest way to test it is with firefox developer tools and the responsive size tools


- label: non_defect
- text:
padding on caroucel cell needed when dpr on my google pixel device pixel ratio about all of my carousel cell content is cropped from the left side i need to set a left padding to see the content which is hard to find a correct value that works for all pixel ratios for desktop and for mobile phones the easiest way to test it is with firefox developer tools and the responsive size tools
- binary_label: 0

Row 208,793
- id: 23,654,450,677
- type: IssuesEvent
- created_at: 2022-08-26 09:49:43
- repo: finos/FDC3-conformance-framework
- repo_url: https://api.github.com/repos/finos/FDC3-conformance-framework
- action: closed
- title: CVE-2021-23424 (High) detected in ansi-html-0.0.7.tgz
- labels: security vulnerability
- body:
## CVE-2021-23424 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>ansi-html-0.0.7.tgz</b></p></summary>
<p>An elegant lib that converts the chalked (ANSI) text to HTML.</p>
<p>Library home page: <a href="https://registry.npmjs.org/ansi-html/-/ansi-html-0.0.7.tgz">https://registry.npmjs.org/ansi-html/-/ansi-html-0.0.7.tgz</a></p>
<p>Path to dependency file: /package.json</p>
<p>Path to vulnerable library: /node_modules/ansi-html/package.json</p>
<p>
Dependency Hierarchy:
- @fdc3-conformance-framework/app-1.0.0.tgz (Root Library)
- react-scripts-4.0.3.tgz
- webpack-dev-server-3.11.1.tgz
- :x: **ansi-html-0.0.7.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/finos/FDC3-conformance-framework/commit/464478c8d773c9f1db106df334cccbe96b76f1e7">464478c8d773c9f1db106df334cccbe96b76f1e7</a></p>
<p>Found in base branch: <b>main</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
This affects all versions of package ansi-html. If an attacker provides a malicious string, it will get stuck processing the input for an extremely long time.
<p>Publish Date: 2021-08-18
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-23424>CVE-2021-23424</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://nvd.nist.gov/vuln/detail/CVE-2021-23424">https://nvd.nist.gov/vuln/detail/CVE-2021-23424</a></p>
<p>Release Date: 2021-08-18</p>
<p>Fix Resolution: VueJS.NetCore - 1.1.1;Indianadavy.VueJsWebAPITemplate.CSharp - 1.0.1;NorDroN.AngularTemplate - 0.1.6;CoreVueWebTest - 3.0.101;dotnetng.template - 1.0.0.4;Fable.Template.Elmish.React - 0.1.6;SAFE.Template - 3.0.1;GR.PageRender.Razor - 1.8.0;Envisia.DotNet.Templates - 3.0.1</p>
</p>
</details>
<p></p>
- index: True
- text_combine:
CVE-2021-23424 (High) detected in ansi-html-0.0.7.tgz - ## CVE-2021-23424 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>ansi-html-0.0.7.tgz</b></p></summary>
<p>An elegant lib that converts the chalked (ANSI) text to HTML.</p>
<p>Library home page: <a href="https://registry.npmjs.org/ansi-html/-/ansi-html-0.0.7.tgz">https://registry.npmjs.org/ansi-html/-/ansi-html-0.0.7.tgz</a></p>
<p>Path to dependency file: /package.json</p>
<p>Path to vulnerable library: /node_modules/ansi-html/package.json</p>
<p>
Dependency Hierarchy:
- @fdc3-conformance-framework/app-1.0.0.tgz (Root Library)
- react-scripts-4.0.3.tgz
- webpack-dev-server-3.11.1.tgz
- :x: **ansi-html-0.0.7.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/finos/FDC3-conformance-framework/commit/464478c8d773c9f1db106df334cccbe96b76f1e7">464478c8d773c9f1db106df334cccbe96b76f1e7</a></p>
<p>Found in base branch: <b>main</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
This affects all versions of package ansi-html. If an attacker provides a malicious string, it will get stuck processing the input for an extremely long time.
<p>Publish Date: 2021-08-18
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-23424>CVE-2021-23424</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://nvd.nist.gov/vuln/detail/CVE-2021-23424">https://nvd.nist.gov/vuln/detail/CVE-2021-23424</a></p>
<p>Release Date: 2021-08-18</p>
<p>Fix Resolution: VueJS.NetCore - 1.1.1;Indianadavy.VueJsWebAPITemplate.CSharp - 1.0.1;NorDroN.AngularTemplate - 0.1.6;CoreVueWebTest - 3.0.101;dotnetng.template - 1.0.0.4;Fable.Template.Elmish.React - 0.1.6;SAFE.Template - 3.0.1;GR.PageRender.Razor - 1.8.0;Envisia.DotNet.Templates - 3.0.1</p>
</p>
</details>
<p></p>
- label: non_defect
- text:
cve high detected in ansi html tgz cve high severity vulnerability vulnerable library ansi html tgz an elegant lib that converts the chalked ansi text to html library home page a href path to dependency file package json path to vulnerable library node modules ansi html package json dependency hierarchy conformance framework app tgz root library react scripts tgz webpack dev server tgz x ansi html tgz vulnerable library found in head commit a href found in base branch main vulnerability details this affects all versions of package ansi html if an attacker provides a malicious string it will get stuck processing the input for an extremely long time publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution vuejs netcore indianadavy vuejswebapitemplate csharp nordron angulartemplate corevuewebtest dotnetng template fable template elmish react safe template gr pagerender razor envisia dotnet templates
- binary_label: 0

Row 53,429
- id: 13,261,597,960
- type: IssuesEvent
- created_at: 2020-08-20 20:11:25
- repo: icecube-trac/tix4
- repo_url: https://api.github.com/repos/icecube-trac/tix4
- action: closed
- title: muongun - UNIX api violation (malloc) (Trac #1374)
- labels: Migrated from Trac combo simulation defect
- body:
http://software.icecube.wisc.edu/static_analysis/00_LATEST/report-c5454a.html#EndPath
The behavior of `malloc()` when allocating zero bytes is implementation defined. This could result in this section of code **always** failing. This code should be refactored to "do what I want" from "do what I mean".
<details>
<summary><em>Migrated from <a href="https://code.icecube.wisc.edu/projects/icecube/ticket/1374">https://code.icecube.wisc.edu/projects/icecube/ticket/1374</a>, reported by negaand owned by jvansanten</em></summary>
<p>
```json
{
"status": "closed",
"changetime": "2016-03-18T21:14:07",
"_ts": "1458335647931556",
"description": "http://software.icecube.wisc.edu/static_analysis/00_LATEST/report-c5454a.html#EndPath\n\nThe behavior of `malloc()` when allocating zero bytes is implementation defined. This could result in this section of code **always** failing. This code should be refactored to \"do what I want\" from \"do what I mean\".",
"reporter": "nega",
"cc": "",
"resolution": "fixed",
"time": "2015-10-01T13:59:52",
"component": "combo simulation",
"summary": "muongun - UNIX api violation (malloc)",
"priority": "normal",
"keywords": "",
"milestone": "",
"owner": "jvansanten",
"type": "defect"
}
```
</p>
</details>
- index: 1.0
- text_combine:
muongun - UNIX api violation (malloc) (Trac #1374) - http://software.icecube.wisc.edu/static_analysis/00_LATEST/report-c5454a.html#EndPath
The behavior of `malloc()` when allocating zero bytes is implementation defined. This could result in this section of code **always** failing. This code should be refactored to "do what I want" from "do what I mean".
<details>
<summary><em>Migrated from <a href="https://code.icecube.wisc.edu/projects/icecube/ticket/1374">https://code.icecube.wisc.edu/projects/icecube/ticket/1374</a>, reported by negaand owned by jvansanten</em></summary>
<p>
```json
{
"status": "closed",
"changetime": "2016-03-18T21:14:07",
"_ts": "1458335647931556",
"description": "http://software.icecube.wisc.edu/static_analysis/00_LATEST/report-c5454a.html#EndPath\n\nThe behavior of `malloc()` when allocating zero bytes is implementation defined. This could result in this section of code **always** failing. This code should be refactored to \"do what I want\" from \"do what I mean\".",
"reporter": "nega",
"cc": "",
"resolution": "fixed",
"time": "2015-10-01T13:59:52",
"component": "combo simulation",
"summary": "muongun - UNIX api violation (malloc)",
"priority": "normal",
"keywords": "",
"milestone": "",
"owner": "jvansanten",
"type": "defect"
}
```
</p>
</details>
- label: defect
- text:
muongun unix api violation malloc trac the behavior of malloc when allocating zero bytes is implementation defined this could result in this section of code always failing this code should be refactored to do what i want from do what i mean migrated from json status closed changetime ts description behavior of malloc when allocating zero bytes is implementation defined this could result in this section of code always failing this code should be refactored to do what i want from do what i mean reporter nega cc resolution fixed time component combo simulation summary muongun unix api violation malloc priority normal keywords milestone owner jvansanten type defect
- binary_label: 1

Row 87,507
- id: 15,779,916,193
- type: IssuesEvent
- created_at: 2021-04-01 09:17:51
- repo: AlexRogalskiy/gradle-java-sample
- repo_url: https://api.github.com/repos/AlexRogalskiy/gradle-java-sample
- action: opened
- title: CVE-2021-21343 (High) detected in xstream-1.4.10.jar
- labels: security vulnerability
- body:
## CVE-2021-21343 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>xstream-1.4.10.jar</b></p></summary>
<p>XStream is a serialization library from Java objects to XML and back.</p>
<p>Library home page: <a href="http://x-stream.github.io">http://x-stream.github.io</a></p>
<p>Path to dependency file: gradle-java-sample/buildSrc/build.gradle</p>
<p>Path to vulnerable library: /home/wss-scanner/.gradle/caches/modules-2/files-2.1/com.thoughtworks.xstream/xstream/1.4.10/dfecae23647abc9d9fd0416629a4213a3882b101/xstream-1.4.10.jar</p>
<p>
Dependency Hierarchy:
- gradle-versions-plugin-0.28.0.jar (Root Library)
- :x: **xstream-1.4.10.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/AlexRogalskiy/gradle-java-sample/commit/faab29c6da2c042014b345fb42b18ed6d5648688">faab29c6da2c042014b345fb42b18ed6d5648688</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
XStream is a Java library to serialize objects to XML and back again. In XStream before version 1.4.16, there is a vulnerability where the processed stream at unmarshalling time contains type information to recreate the formerly written objects. XStream creates therefore new instances based on these type information. An attacker can manipulate the processed input stream and replace or inject objects, that result in the deletion of a file on the local host. No user is affected, who followed the recommendation to setup XStream's security framework with a whitelist limited to the minimal required types. If you rely on XStream's default blacklist of the Security Framework, you will have to use at least version 1.4.16.
<p>Publish Date: 2021-03-23
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-21343>CVE-2021-21343</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: High
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/x-stream/xstream/security/advisories/GHSA-74cv-f58x-f9wf">https://github.com/x-stream/xstream/security/advisories/GHSA-74cv-f58x-f9wf</a></p>
<p>Release Date: 2021-03-23</p>
<p>Fix Resolution: com.thoughtworks.xstream:xstream:1.4.16</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
- index: True
- text_combine:
CVE-2021-21343 (High) detected in xstream-1.4.10.jar - ## CVE-2021-21343 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>xstream-1.4.10.jar</b></p></summary>
<p>XStream is a serialization library from Java objects to XML and back.</p>
<p>Library home page: <a href="http://x-stream.github.io">http://x-stream.github.io</a></p>
<p>Path to dependency file: gradle-java-sample/buildSrc/build.gradle</p>
<p>Path to vulnerable library: /home/wss-scanner/.gradle/caches/modules-2/files-2.1/com.thoughtworks.xstream/xstream/1.4.10/dfecae23647abc9d9fd0416629a4213a3882b101/xstream-1.4.10.jar</p>
<p>
Dependency Hierarchy:
- gradle-versions-plugin-0.28.0.jar (Root Library)
- :x: **xstream-1.4.10.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/AlexRogalskiy/gradle-java-sample/commit/faab29c6da2c042014b345fb42b18ed6d5648688">faab29c6da2c042014b345fb42b18ed6d5648688</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
XStream is a Java library to serialize objects to XML and back again. In XStream before version 1.4.16, there is a vulnerability where the processed stream at unmarshalling time contains type information to recreate the formerly written objects. XStream creates therefore new instances based on these type information. An attacker can manipulate the processed input stream and replace or inject objects, that result in the deletion of a file on the local host. No user is affected, who followed the recommendation to setup XStream's security framework with a whitelist limited to the minimal required types. If you rely on XStream's default blacklist of the Security Framework, you will have to use at least version 1.4.16.
<p>Publish Date: 2021-03-23
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-21343>CVE-2021-21343</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: High
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/x-stream/xstream/security/advisories/GHSA-74cv-f58x-f9wf">https://github.com/x-stream/xstream/security/advisories/GHSA-74cv-f58x-f9wf</a></p>
<p>Release Date: 2021-03-23</p>
<p>Fix Resolution: com.thoughtworks.xstream:xstream:1.4.16</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
- label: non_defect
- text:
cve high detected in xstream jar cve high severity vulnerability vulnerable library xstream jar xstream is a serialization library from java objects to xml and back library home page a href path to dependency file gradle java sample buildsrc build gradle path to vulnerable library home wss scanner gradle caches modules files com thoughtworks xstream xstream xstream jar dependency hierarchy gradle versions plugin jar root library x xstream jar vulnerable library found in head commit a href vulnerability details xstream is a java library to serialize objects to xml and back again in xstream before version there is a vulnerability where the processed stream at unmarshalling time contains type information to recreate the formerly written objects xstream creates therefore new instances based on these type information an attacker can manipulate the processed input stream and replace or inject objects that result in the deletion of a file on the local host no user is affected who followed the recommendation to setup xstream s security framework with a whitelist limited to the minimal required types if you rely on xstream s default blacklist of the security framework you will have to use at least version publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact high availability impact none for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution com thoughtworks xstream xstream step up your open source security game with whitesource
- binary_label: 0

Row 66,210
- id: 20,052,363,116
- type: IssuesEvent
- created_at: 2022-02-03 08:22:14
- repo: martinrotter/rssguard
- repo_url: https://api.github.com/repos/martinrotter/rssguard
- action: closed
- title: [BUG]: "Has new articles" - but actually doesn't
- labels: Type-Defect
- body:
### Brief description of the issue

 (all articles already read long ago and no new ones are there as you can see)
### How to reproduce the bug?
This feed: `https://hairywizardztranslations.blogspot.com/feeds/posts/default`
### What was the expected result?
says "new articles" only when there are new articles
### What actually happened?
the opposite
### Other information
If I click "mark selected as read" on the fead - it is still shown as unread,
but if I click "mark unread" on it first and then mark as read - then it becomes "read", but only before next feed update
[rssguard.log](https://github.com/martinrotter/rssguard/files/7988622/rssguard.log)
### Operating system and version
* OS: Win10 x64
* RSS Guard version:
````
RSS Guard
Version: 4.1.3 (built on Windows/x86_64)
Revision: 03d56b30-nowebengine
Build date: 1/19/22 11:57 AM
Qt: 6.2.2 (compiled against 6.2.2)
````
- index: 1.0
- text_combine:
[BUG]: "Has new articles" - but actually doesn't - ### Brief description of the issue

 (all articles already read long ago and no new ones are there as you can see)
### How to reproduce the bug?
This feed: `https://hairywizardztranslations.blogspot.com/feeds/posts/default`
### What was the expected result?
says "new articles" only when there are new articles
### What actually happened?
the opposite
### Other information
If I click "mark selected as read" on the fead - it is still shown as unread,
but if I click "mark unread" on it first and then mark as read - then it becomes "read", but only before next feed update
[rssguard.log](https://github.com/martinrotter/rssguard/files/7988622/rssguard.log)
### Operating system and version
* OS: Win10 x64
* RSS Guard version:
````
RSS Guard
Version: 4.1.3 (built on Windows/x86_64)
Revision: 03d56b30-nowebengine
Build date: 1/19/22 11:57 AM
Qt: 6.2.2 (compiled against 6.2.2)
````
|
defect
|
has new articles but actually doesn t brief description of the issue all articles already read long ago and no new ones are there as you can see how to reproduce the bug this feed what was the expected result says new articles only when there are new articles what actually happened the opposite other information if i click mark selected as read on the fead it is still shown as unread but if i click mark unread on it first and then mark as read then it becomes read but only before next feed update operating system and version os rss guard version rss guard version built on windows revision nowebengine build date am qt compiled against
| 1
|
70,758
| 18,269,011,620
|
IssuesEvent
|
2021-10-04 11:54:04
|
tailscale/tailscale
|
https://api.github.com/repos/tailscale/tailscale
|
closed
|
F-Droid build failed
|
OS-android L1 Very few P6 Blocks build T7 Build/test failure
|
```
+ make -C .. release_aar
make: Entering directory '/home/vagrant/build/com.tailscale.ipn'
find: ‘/home/vagrant/.cache/tailscale-android-go-6fa85e8201f1f75fb7323eb48a0b24274a6e33b2’: No such file or directory
find: ‘/home/vagrant/.cache/tailscale-android-go-6fa85e8201f1f75fb7323eb48a0b24274a6e33b2’: No such file or directory
want: e336267ea2a637426c0302945d0c1f8fa2f2f627
got: 5cd337198ead0768975610a135e26257153198c7
rm -rf /home/vagrant/.cache/tailscale-android-go-*
wget https://github.com/tailscale/go/releases/download/build-6fa85e8201f1f75fb7323eb48a0b24274a6e33b2/linux.tar.gz -O "/tmp/tmp.cIl4qYDSFH.tgz"
--2021-08-06 05:49:53-- https://github.com/tailscale/go/releases/download/build-6fa85e8201f1f75fb7323eb48a0b24274a6e33b2/linux.tar.gz
Resolving github.com (github.com)... 140.82.121.4
Connecting to github.com (github.com)|140.82.121.4|:443... connected.
HTTP request sent, awaiting response... 404 Not Found
2021-08-06 05:49:54 ERROR 404: Not Found.
Makefile:42: recipe for target 'toolchain' failed
make: *** [toolchain] Error 8
make: Leaving directory '/home/vagrant/build/com.tailscale.ipn'
```
It seems since 1.13.42-tfd7b738e5-ga68462ec65f tailscale-android uses your own go fork. But it seems the go binary hosted on GItHub is removed? And if it works with the origin go?
|
2.0
|
F-Droid build failed - ```
+ make -C .. release_aar
make: Entering directory '/home/vagrant/build/com.tailscale.ipn'
find: ‘/home/vagrant/.cache/tailscale-android-go-6fa85e8201f1f75fb7323eb48a0b24274a6e33b2’: No such file or directory
find: ‘/home/vagrant/.cache/tailscale-android-go-6fa85e8201f1f75fb7323eb48a0b24274a6e33b2’: No such file or directory
want: e336267ea2a637426c0302945d0c1f8fa2f2f627
got: 5cd337198ead0768975610a135e26257153198c7
rm -rf /home/vagrant/.cache/tailscale-android-go-*
wget https://github.com/tailscale/go/releases/download/build-6fa85e8201f1f75fb7323eb48a0b24274a6e33b2/linux.tar.gz -O "/tmp/tmp.cIl4qYDSFH.tgz"
--2021-08-06 05:49:53-- https://github.com/tailscale/go/releases/download/build-6fa85e8201f1f75fb7323eb48a0b24274a6e33b2/linux.tar.gz
Resolving github.com (github.com)... 140.82.121.4
Connecting to github.com (github.com)|140.82.121.4|:443... connected.
HTTP request sent, awaiting response... 404 Not Found
2021-08-06 05:49:54 ERROR 404: Not Found.
Makefile:42: recipe for target 'toolchain' failed
make: *** [toolchain] Error 8
make: Leaving directory '/home/vagrant/build/com.tailscale.ipn'
```
It seems since 1.13.42-tfd7b738e5-ga68462ec65f tailscale-android uses your own go fork. But it seems the go binary hosted on GItHub is removed? And if it works with the origin go?
|
non_defect
|
f droid build failed make c release aar make entering directory home vagrant build com tailscale ipn find ‘ home vagrant cache tailscale android go ’ no such file or directory find ‘ home vagrant cache tailscale android go ’ no such file or directory want got rm rf home vagrant cache tailscale android go wget o tmp tmp tgz resolving github com github com connecting to github com github com connected http request sent awaiting response not found error not found makefile recipe for target toolchain failed make error make leaving directory home vagrant build com tailscale ipn it seems since tailscale android uses your own go fork but it seems the go binary hosted on github is removed and if it works with the origin go
| 0
|
691,406
| 23,696,024,400
|
IssuesEvent
|
2022-08-29 14:43:04
|
celo-org/celo-monorepo
|
https://api.github.com/repos/celo-org/celo-monorepo
|
closed
|
New contract instances of existing versioned implementations should have appropriate semantic versioning
|
Priority: P3 Component: Contracts CAP stale
|
### Expected Behavior
Formally specified versioning of new contracts (`StableTokenEUR is StableToken`) inheriting from existing implementations
### Current Behavior
Independent versioning in `getVersionNumber` and as understood by contract release/verification tooling
|
1.0
|
New contract instances of existing versioned implementations should have appropriate semantic versioning - ### Expected Behavior
Formally specified versioning of new contracts (`StableTokenEUR is StableToken`) inheriting from existing implementations
### Current Behavior
Independent versioning in `getVersionNumber` and as understood by contract release/verification tooling
|
non_defect
|
new contract instances of existing versioned implementations should have appropriate semantic versioning expected behavior formally specified versioning of new contracts stabletokeneur is stabletoken inheriting from existing implementations current behavior independent versioning in getversionnumber and as understood by contract release verification tooling
| 0
|
139,301
| 20,823,065,545
|
IssuesEvent
|
2022-03-18 17:22:01
|
nextcloud/desktop
|
https://api.github.com/repos/nextcloud/desktop
|
closed
|
Main dialog should respect system theme
|
bug design approved
|
<!--
Thanks for reporting issues back to Nextcloud!
This is the **issue tracker of Nextcloud**, please do NOT use this to get answers to your questions or get help for fixing your installation. You can find help debugging your system on our home user forums: https://help.nextcloud.com or, if you use Nextcloud in a large organization, ask our engineers on https://portal.nextcloud.com. See also https://nextcloud.com/support for support options.
Guidelines for submitting issues:
* Please search the existing issues first, it's likely that your issue was already reported or even fixed.
- Go to https://github.com/nextcloud and type any word in the top search/command bar. You probably see something like "We couldn’t find any repositories matching ..." then click "Issues" in the left navigation.
- You can also filter by appending e. g. "state:open" to the search string.
- More info on search syntax within github: https://help.github.com/articles/searching-issues
* Please fill in as much of the template below as possible. The logs are absolutely crucial for the developers to be able to help you. Expect us to quickly close issues without logs or other information we need.
* Also note that we have a https://nextcloud.com/contribute/code-of-conduct/ that applies on Github. To summarize it: be kind. We try our best to be nice, too. If you can't be bothered to be polite, please just don't bother to report issues as we won't feel motivated to help you.
-->
<!--- Please keep the note below for others who read your bug report -->
### How to use GitHub
* Please use the 👍 [reaction](https://blog.github.com/2016-03-10-add-reactions-to-pull-requests-issues-and-comments/) to show that you are affected by the same issue.
* Please don't comment if you have no relevant information to add. It's just extra noise for everyone subscribed to this issue.
* Subscribe to receive notifications on status change and new comments.
### Expected behaviour
The main dialog should respect the system theme. Like for example if a dark color theme is chosen, the main dialogs background should not be white. Since the rest of the client respects the system theme, this looks off in my opinion.
### Actual behaviour
The main dialogs background has always the same color, regardless which theme was chosen.
### Steps to reproduce
1. Set a dark theme
2. Open the main dialog
### Client configuration
Client version: 3.1.81
As this involves design questions, a comment from @jancborchardt you would be valuable:)
|
1.0
|
Main dialog should respect system theme - <!--
Thanks for reporting issues back to Nextcloud!
This is the **issue tracker of Nextcloud**, please do NOT use this to get answers to your questions or get help for fixing your installation. You can find help debugging your system on our home user forums: https://help.nextcloud.com or, if you use Nextcloud in a large organization, ask our engineers on https://portal.nextcloud.com. See also https://nextcloud.com/support for support options.
Guidelines for submitting issues:
* Please search the existing issues first, it's likely that your issue was already reported or even fixed.
- Go to https://github.com/nextcloud and type any word in the top search/command bar. You probably see something like "We couldn’t find any repositories matching ..." then click "Issues" in the left navigation.
- You can also filter by appending e. g. "state:open" to the search string.
- More info on search syntax within github: https://help.github.com/articles/searching-issues
* Please fill in as much of the template below as possible. The logs are absolutely crucial for the developers to be able to help you. Expect us to quickly close issues without logs or other information we need.
* Also note that we have a https://nextcloud.com/contribute/code-of-conduct/ that applies on Github. To summarize it: be kind. We try our best to be nice, too. If you can't be bothered to be polite, please just don't bother to report issues as we won't feel motivated to help you.
-->
<!--- Please keep the note below for others who read your bug report -->
### How to use GitHub
* Please use the 👍 [reaction](https://blog.github.com/2016-03-10-add-reactions-to-pull-requests-issues-and-comments/) to show that you are affected by the same issue.
* Please don't comment if you have no relevant information to add. It's just extra noise for everyone subscribed to this issue.
* Subscribe to receive notifications on status change and new comments.
### Expected behaviour
The main dialog should respect the system theme. Like for example if a dark color theme is chosen, the main dialogs background should not be white. Since the rest of the client respects the system theme, this looks off in my opinion.
### Actual behaviour
The main dialogs background has always the same color, regardless which theme was chosen.
### Steps to reproduce
1. Set a dark theme
2. Open the main dialog
### Client configuration
Client version: 3.1.81
As this involves design questions, a comment from @jancborchardt you would be valuable:)
|
non_defect
|
main dialog should respect system theme thanks for reporting issues back to nextcloud this is the issue tracker of nextcloud please do not use this to get answers to your questions or get help for fixing your installation you can find help debugging your system on our home user forums or if you use nextcloud in a large organization ask our engineers on see also for support options guidelines for submitting issues please search the existing issues first it s likely that your issue was already reported or even fixed go to and type any word in the top search command bar you probably see something like we couldn’t find any repositories matching then click issues in the left navigation you can also filter by appending e g state open to the search string more info on search syntax within github please fill in as much of the template below as possible the logs are absolutely crucial for the developers to be able to help you expect us to quickly close issues without logs or other information we need also note that we have a that applies on github to summarize it be kind we try our best to be nice too if you can t be bothered to be polite please just don t bother to report issues as we won t feel motivated to help you how to use github please use the 👍 to show that you are affected by the same issue please don t comment if you have no relevant information to add it s just extra noise for everyone subscribed to this issue subscribe to receive notifications on status change and new comments expected behaviour the main dialog should respect the system theme like for example if a dark color theme is chosen the main dialogs background should not be white since the rest of the client respects the system theme this looks off in my opinion actual behaviour the main dialogs background has always the same color regardless which theme was chosen steps to reproduce set a dark theme open the main dialog client configuration client version as this involves design questions a comment from jancborchardt you would be valuable
| 0
|
512,905
| 14,911,861,256
|
IssuesEvent
|
2021-01-22 11:44:43
|
conan-io/conan
|
https://api.github.com/repos/conan-io/conan
|
closed
|
[feature] Expose scm data to ``conan info``
|
complex: low priority: medium stage: queue type: feature
|
Would be very useful to have information about the commits for scm=auto specially, to allow checking out dependencies at the exact commits used in a dependency graph.
|
1.0
|
[feature] Expose scm data to ``conan info`` - Would be very useful to have information about the commits for scm=auto specially, to allow checking out dependencies at the exact commits used in a dependency graph.
|
non_defect
|
expose scm data to conan info would be very useful to have information about the commits for scm auto specially to allow checking out dependencies at the exact commits used in a dependency graph
| 0
|
23,344
| 3,796,387,369
|
IssuesEvent
|
2016-03-23 00:10:42
|
extnet/Ext.NET
|
https://api.github.com/repos/extnet/Ext.NET
|
opened
|
pt_BR locale wrong for Ext.locale.pt_BR.grid.feature.Grouping.showGroupsText
|
3.x 4.x defect sencha
|
The translated text reads _"Mostrar agrupad"_ while it should show _"Mostrar agrupado"_.
Reported on this forum thread: [Ext.locale.pt_BR.grid.feature.Grouping](http://forums.ext.net/showthread.php?60750).
|
1.0
|
pt_BR locale wrong for Ext.locale.pt_BR.grid.feature.Grouping.showGroupsText - The translated text reads _"Mostrar agrupad"_ while it should show _"Mostrar agrupado"_.
Reported on this forum thread: [Ext.locale.pt_BR.grid.feature.Grouping](http://forums.ext.net/showthread.php?60750).
|
defect
|
pt br locale wrong for ext locale pt br grid feature grouping showgroupstext the translated text reads mostrar agrupad while it should show mostrar agrupado reported on this forum thread
| 1
|
338,165
| 10,225,164,344
|
IssuesEvent
|
2019-08-16 14:32:11
|
linkerd/website
|
https://api.github.com/repos/linkerd/website
|
closed
|
Update Tap RBAC docs with linkerd dashboard info
|
docs priority/P0
|
https://linkerd.io/tap-rbac provides info on managing Tap RBAC access. This becomes more relevant to `linkerd dashboard` once linkerd/linkerd2#3203 and linkerd/linkerd2#3208 ship. Update that page with information around managing Tap RBAC for the `linkerd-web` service account.
also document how to create a binding using the new `linkerd-linkerd-tap-admin` ClusterRole.
|
1.0
|
Update Tap RBAC docs with linkerd dashboard info - https://linkerd.io/tap-rbac provides info on managing Tap RBAC access. This becomes more relevant to `linkerd dashboard` once linkerd/linkerd2#3203 and linkerd/linkerd2#3208 ship. Update that page with information around managing Tap RBAC for the `linkerd-web` service account.
also document how to create a binding using the new `linkerd-linkerd-tap-admin` ClusterRole.
|
non_defect
|
update tap rbac docs with linkerd dashboard info provides info on managing tap rbac access this becomes more relevant to linkerd dashboard once linkerd and linkerd ship update that page with information around managing tap rbac for the linkerd web service account also document how to create a binding using the new linkerd linkerd tap admin clusterrole
| 0
|
68,945
| 21,995,097,857
|
IssuesEvent
|
2022-05-26 05:02:04
|
vector-im/element-ios
|
https://api.github.com/repos/vector-im/element-ios
|
opened
|
Showing 'Joined' button instead of 'Join' while user trying to rejoin into a room
|
T-Defect
|
### Steps to reproduce
1. Leave any room
2. Search same room to rejoin
3. It shows 'Joined' button instead of 'Join'
4. But inside this room it shows option for 'Join'
### Outcome
#### What did you expect?
While trying to rejoin the room Button status should be 'Join' instead of 'Joined'
#### What happened instead?
Now While trying to rejoin the room It shows 'Joined' button instead of 'Join'
### Your phone model
iPad mini
### Operating system version
iOS 15.1
### Application version
Element 1.8.16
### Homeserver
matrix.org
### Will you send logs?
No
|
1.0
|
Showing 'Joined' button instead of 'Join' while user trying to rejoin into a room - ### Steps to reproduce
1. Leave any room
2. Search same room to rejoin
3. It shows 'Joined' button instead of 'Join'
4. But inside this room it shows option for 'Join'
### Outcome
#### What did you expect?
While trying to rejoin the room Button status should be 'Join' instead of 'Joined'
#### What happened instead?
Now While trying to rejoin the room It shows 'Joined' button instead of 'Join'
### Your phone model
iPad mini
### Operating system version
iOS 15.1
### Application version
Element 1.8.16
### Homeserver
matrix.org
### Will you send logs?
No
|
defect
|
showing joined button instead of join while user trying to rejoin into a room steps to reproduce leave any room search same room to rejoin it shows joined button instead of join but inside this room it shows option for join outcome what did you expect while trying to rejoin the room button status should be join instead of joined what happened instead now while trying to rejoin the room it shows joined button instead of join your phone model ipad mini operating system version ios application version element homeserver matrix org will you send logs no
| 1
|
590,895
| 17,790,523,350
|
IssuesEvent
|
2021-08-31 15:40:40
|
cdklabs/construct-hub-webapp
|
https://api.github.com/repos/cdklabs/construct-hub-webapp
|
closed
|
Small fixes to the site terms
|
risk/low priority/p3 effort/half-day
|
Remove the the external link warning for the AWS site terms link.
In addition, there is an orphan "." which should be removed
|
1.0
|
Small fixes to the site terms - Remove the the external link warning for the AWS site terms link.
In addition, there is an orphan "." which should be removed
|
non_defect
|
small fixes to the site terms remove the the external link warning for the aws site terms link in addition there is an orphan which should be removed
| 0
|
104,845
| 9,011,420,614
|
IssuesEvent
|
2019-02-05 14:39:38
|
cockroachdb/cockroach
|
https://api.github.com/repos/cockroachdb/cockroach
|
closed
|
teamcity: failed test: 05:06.789zulu
|
C-test-failure O-robot
|
The following tests appear to have failed on master (testrace): 05:06.789zulu/ParseTime
You may want to check [for open issues](https://github.com/cockroachdb/cockroach/issues?q=is%3Aissue+is%3Aopen+05:06.789zulu).
[#1124489](https://teamcity.cockroachdb.com/viewLog.html?buildId=1124489):
```
05:06.789zulu/ParseTime
--- FAIL: testrace/TestParse/ParseModeMDY/January_8,_99_BC/04:05:06.789zulu/ParseTime (0.000s)
Test ended in panic.
```
Please assign, take a look and update the issue accordingly.
|
1.0
|
teamcity: failed test: 05:06.789zulu - The following tests appear to have failed on master (testrace): 05:06.789zulu/ParseTime
You may want to check [for open issues](https://github.com/cockroachdb/cockroach/issues?q=is%3Aissue+is%3Aopen+05:06.789zulu).
[#1124489](https://teamcity.cockroachdb.com/viewLog.html?buildId=1124489):
```
05:06.789zulu/ParseTime
--- FAIL: testrace/TestParse/ParseModeMDY/January_8,_99_BC/04:05:06.789zulu/ParseTime (0.000s)
Test ended in panic.
```
Please assign, take a look and update the issue accordingly.
|
non_defect
|
teamcity failed test the following tests appear to have failed on master testrace parsetime you may want to check parsetime fail testrace testparse parsemodemdy january bc parsetime test ended in panic please assign take a look and update the issue accordingly
| 0
|
53,670
| 13,262,079,507
|
IssuesEvent
|
2020-08-20 21:03:57
|
icecube-trac/tix4
|
https://api.github.com/repos/icecube-trac/tix4
|
closed
|
ice-models project cannot be listed as a dependency (Trac #1852)
|
Migrated from Trac cmake defect
|
Since we now have the ice-models project, we would like to transition CLSim to fetching more (eventually all) of its parameterizations from there. To prevent mistakes it seems like a good idea to list ice-models as a dependency of clsim, so that if it isn't present the user will be warned/stopped by cmake. However, simply adding ice-models to clsim's `USE_PROJECTS` list actually prevents clsim from building, since `i3_project` currently assumes that a used project includes a library which must be linked against, and ice-models doesn't actually contain any code at all. It should be possible to instead manually add a check in clsim's CMakeLists that the ice-models directory is present, without going through `USE_PROJECTS`, but I'm not sure if that's a good long-term solution or not.
<details>
<summary><em>Migrated from <a href="https://code.icecube.wisc.edu/projects/icecube/ticket/1852">https://code.icecube.wisc.edu/projects/icecube/ticket/1852</a>, reported by cweaverand owned by olivas</em></summary>
<p>
```json
{
"status": "closed",
"changetime": "2019-02-13T14:08:39",
"_ts": "1550066919084021",
"description": "Since we now have the ice-models project, we would like to transition CLSim to fetching more (eventually all) of its parameterizations from there. To prevent mistakes it seems like a good idea to list ice-models as a dependency of clsim, so that if it isn't present the user will be warned/stopped by cmake. However, simply adding ice-models to clsim's `USE_PROJECTS` list actually prevents clsim from building, since `i3_project` currently assumes that a used project includes a library which must be linked against, and ice-models doesn't actually contain any code at all. It should be possible to instead manually add a check in clsim's CMakeLists that the ice-models directory is present, without going through `USE_PROJECTS`, but I'm not sure if that's a good long-term solution or not. ",
"reporter": "cweaver",
"cc": "",
"resolution": "insufficient resources",
"time": "2016-09-07T17:16:08",
"component": "cmake",
"summary": "ice-models project cannot be listed as a dependency",
"priority": "normal",
"keywords": "",
"milestone": "Long-Term Future",
"owner": "olivas",
"type": "defect"
}
```
</p>
</details>
|
1.0
|
ice-models project cannot be listed as a dependency (Trac #1852) - Since we now have the ice-models project, we would like to transition CLSim to fetching more (eventually all) of its parameterizations from there. To prevent mistakes it seems like a good idea to list ice-models as a dependency of clsim, so that if it isn't present the user will be warned/stopped by cmake. However, simply adding ice-models to clsim's `USE_PROJECTS` list actually prevents clsim from building, since `i3_project` currently assumes that a used project includes a library which must be linked against, and ice-models doesn't actually contain any code at all. It should be possible to instead manually add a check in clsim's CMakeLists that the ice-models directory is present, without going through `USE_PROJECTS`, but I'm not sure if that's a good long-term solution or not.
<details>
<summary><em>Migrated from <a href="https://code.icecube.wisc.edu/projects/icecube/ticket/1852">https://code.icecube.wisc.edu/projects/icecube/ticket/1852</a>, reported by cweaverand owned by olivas</em></summary>
<p>
```json
{
"status": "closed",
"changetime": "2019-02-13T14:08:39",
"_ts": "1550066919084021",
"description": "Since we now have the ice-models project, we would like to transition CLSim to fetching more (eventually all) of its parameterizations from there. To prevent mistakes it seems like a good idea to list ice-models as a dependency of clsim, so that if it isn't present the user will be warned/stopped by cmake. However, simply adding ice-models to clsim's `USE_PROJECTS` list actually prevents clsim from building, since `i3_project` currently assumes that a used project includes a library which must be linked against, and ice-models doesn't actually contain any code at all. It should be possible to instead manually add a check in clsim's CMakeLists that the ice-models directory is present, without going through `USE_PROJECTS`, but I'm not sure if that's a good long-term solution or not. ",
"reporter": "cweaver",
"cc": "",
"resolution": "insufficient resources",
"time": "2016-09-07T17:16:08",
"component": "cmake",
"summary": "ice-models project cannot be listed as a dependency",
"priority": "normal",
"keywords": "",
"milestone": "Long-Term Future",
"owner": "olivas",
"type": "defect"
}
```
</p>
</details>
|
defect
|
ice models project cannot be listed as a dependency trac since we now have the ice models project we would like to transition clsim to fetching more eventually all of its parameterizations from there to prevent mistakes it seems like a good idea to list ice models as a dependency of clsim so that if it isn t present the user will be warned stopped by cmake however simply adding ice models to clsim s use projects list actually prevents clsim from building since project currently assumes that a used project includes a library which must be linked against and ice models doesn t actually contain any code at all it should be possible to instead manually add a check in clsim s cmakelists that the ice models directory is present without going through use projects but i m not sure if that s a good long term solution or not migrated from json status closed changetime ts description since we now have the ice models project we would like to transition clsim to fetching more eventually all of its parameterizations from there to prevent mistakes it seems like a good idea to list ice models as a dependency of clsim so that if it isn t present the user will be warned stopped by cmake however simply adding ice models to clsim s use projects list actually prevents clsim from building since project currently assumes that a used project includes a library which must be linked against and ice models doesn t actually contain any code at all it should be possible to instead manually add a check in clsim s cmakelists that the ice models directory is present without going through use projects but i m not sure if that s a good long term solution or not reporter cweaver cc resolution insufficient resources time component cmake summary ice models project cannot be listed as a dependency priority normal keywords milestone long term future owner olivas type defect
| 1
|
28,649
| 7,010,369,439
|
IssuesEvent
|
2017-12-19 22:55:41
|
PyvesB/AdvancedAchievements
|
https://api.github.com/repos/PyvesB/AdvancedAchievements
|
closed
|
SQL error while retrieving playedtime stats
|
code
|
Hello this error i found when first player join on server.
`[12:02:23] [Server thread/ERROR]: [AdvancedAchievements] SQL error while retrieving playedtime stats:
com.mysql.jdbc.exceptions.jdbc4.CommunicationsException: Communications link failure
The last packet successfully received from the server was 152,494 milliseconds ago. The last packet sent successfully to the server was 0 milliseconds ago.
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) ~[?:1.8.0_144]
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) ~[?:1.8.0_144]
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) ~[?:1.8.0_144]
at java.lang.reflect.Constructor.newInstance(Constructor.java:423) ~[?:1.8.0_144]
at com.mysql.jdbc.Util.handleNewInstance(Util.java:425) ~[spigot.jar:git-Spigot-3d850ec-809c399]
at com.mysql.jdbc.SQLError.createCommunicationsException(SQLError.java:989) ~[spigot.jar:git-Spigot-3d850ec-809c399]
at com.mysql.jdbc.MysqlIO.reuseAndReadPacket(MysqlIO.java:3559) ~[spigot.jar:git-Spigot-3d850ec-809c399]
at com.mysql.jdbc.MysqlIO.reuseAndReadPacket(MysqlIO.java:3459) ~[spigot.jar:git-Spigot-3d850ec-809c399]
at com.mysql.jdbc.MysqlIO.checkErrorPacket(MysqlIO.java:3900) ~[spigot.jar:git-Spigot-3d850ec-809c399]
at com.mysql.jdbc.MysqlIO.sendCommand(MysqlIO.java:2527) ~[spigot.jar:git-Spigot-3d850ec-809c399]
at com.mysql.jdbc.MysqlIO.sqlQueryDirect(MysqlIO.java:2680) ~[spigot.jar:git-Spigot-3d850ec-809c399]
at com.mysql.jdbc.ConnectionImpl.execSQL(ConnectionImpl.java:2483) ~[spigot.jar:git-Spigot-3d850ec-809c399]
at com.mysql.jdbc.ConnectionImpl.execSQL(ConnectionImpl.java:2441) ~[spigot.jar:git-Spigot-3d850ec-809c399]
at com.mysql.jdbc.StatementImpl.executeQuery(StatementImpl.java:1381) ~[spigot.jar:git-Spigot-3d850ec-809c399]
at com.hm.achievement.db.AbstractSQLDatabaseManager.getNormalAchievementAmount(AbstractSQLDatabaseManager.java:478) ~[?:?]
at com.hm.achievement.db.DatabaseCacheManager.getAndIncrementStatisticAmount(DatabaseCacheManager.java:116) ~[?:?]
at com.hm.achievement.runnable.AchievePlayTimeRunnable.updateTime(AchievePlayTimeRunnable.java:77) ~[?:?]
at java.util.Iterator.forEachRemaining(Iterator.java:116) [?:1.8.0_144]
at java.util.Spliterators$IteratorSpliterator.forEachRemaining(Spliterators.java:1801) [?:1.8.0_144]
at java.util.stream.ReferencePipeline$Head.forEach(ReferencePipeline.java:580) [?:1.8.0_144]
at com.hm.achievement.runnable.AchievePlayTimeRunnable.run(AchievePlayTimeRunnable.java:48) [AdvancedAchievements%20(2).jar:?]
at org.bukkit.craftbukkit.v1_12_R1.scheduler.CraftTask.run(CraftTask.java:71) [spigot.jar:git-Spigot-3d850ec-809c399]
at org.bukkit.craftbukkit.v1_12_R1.scheduler.CraftScheduler.mainThreadHeartbeat(CraftScheduler.java:353) [spigot.jar:git-Spigot-3d850ec-809c399]
at net.minecraft.server.v1_12_R1.MinecraftServer.D(MinecraftServer.java:739) [spigot.jar:git-Spigot-3d850ec-809c399]
at net.minecraft.server.v1_12_R1.DedicatedServer.D(DedicatedServer.java:406) [spigot.jar:git-Spigot-3d850ec-809c399]
at net.minecraft.server.v1_12_R1.MinecraftServer.C(MinecraftServer.java:679) [spigot.jar:git-Spigot-3d850ec-809c399]
at net.minecraft.server.v1_12_R1.MinecraftServer.run(MinecraftServer.java:577) [spigot.jar:git-Spigot-3d850ec-809c399]
at java.lang.Thread.run(Thread.java:748) [?:1.8.0_144]
Caused by: java.io.EOFException: Can not read response from server. Expected to read 4 bytes, read 0 bytes before connection was unexpectedly lost.
at com.mysql.jdbc.MysqlIO.readFully(MysqlIO.java:3011) ~[spigot.jar:git-Spigot-3d850ec-809c399]
at com.mysql.jdbc.MysqlIO.reuseAndReadPacket(MysqlIO.java:3469) ~[spigot.jar:git-Spigot-3d850ec-809c399]
... 21 more`
|
1.0
|
SQL error while retrieving playedtime stats - Hello this error i found when first player join on server.
`[12:02:23] [Server thread/ERROR]: [AdvancedAchievements] SQL error while retrieving playedtime stats:
com.mysql.jdbc.exceptions.jdbc4.CommunicationsException: Communications link failure
The last packet successfully received from the server was 152,494 milliseconds ago. The last packet sent successfully to the server was 0 milliseconds ago.
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) ~[?:1.8.0_144]
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) ~[?:1.8.0_144]
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) ~[?:1.8.0_144]
at java.lang.reflect.Constructor.newInstance(Constructor.java:423) ~[?:1.8.0_144]
at com.mysql.jdbc.Util.handleNewInstance(Util.java:425) ~[spigot.jar:git-Spigot-3d850ec-809c399]
at com.mysql.jdbc.SQLError.createCommunicationsException(SQLError.java:989) ~[spigot.jar:git-Spigot-3d850ec-809c399]
at com.mysql.jdbc.MysqlIO.reuseAndReadPacket(MysqlIO.java:3559) ~[spigot.jar:git-Spigot-3d850ec-809c399]
at com.mysql.jdbc.MysqlIO.reuseAndReadPacket(MysqlIO.java:3459) ~[spigot.jar:git-Spigot-3d850ec-809c399]
at com.mysql.jdbc.MysqlIO.checkErrorPacket(MysqlIO.java:3900) ~[spigot.jar:git-Spigot-3d850ec-809c399]
at com.mysql.jdbc.MysqlIO.sendCommand(MysqlIO.java:2527) ~[spigot.jar:git-Spigot-3d850ec-809c399]
at com.mysql.jdbc.MysqlIO.sqlQueryDirect(MysqlIO.java:2680) ~[spigot.jar:git-Spigot-3d850ec-809c399]
at com.mysql.jdbc.ConnectionImpl.execSQL(ConnectionImpl.java:2483) ~[spigot.jar:git-Spigot-3d850ec-809c399]
at com.mysql.jdbc.ConnectionImpl.execSQL(ConnectionImpl.java:2441) ~[spigot.jar:git-Spigot-3d850ec-809c399]
at com.mysql.jdbc.StatementImpl.executeQuery(StatementImpl.java:1381) ~[spigot.jar:git-Spigot-3d850ec-809c399]
at com.hm.achievement.db.AbstractSQLDatabaseManager.getNormalAchievementAmount(AbstractSQLDatabaseManager.java:478) ~[?:?]
at com.hm.achievement.db.DatabaseCacheManager.getAndIncrementStatisticAmount(DatabaseCacheManager.java:116) ~[?:?]
at com.hm.achievement.runnable.AchievePlayTimeRunnable.updateTime(AchievePlayTimeRunnable.java:77) ~[?:?]
at java.util.Iterator.forEachRemaining(Iterator.java:116) [?:1.8.0_144]
at java.util.Spliterators$IteratorSpliterator.forEachRemaining(Spliterators.java:1801) [?:1.8.0_144]
at java.util.stream.ReferencePipeline$Head.forEach(ReferencePipeline.java:580) [?:1.8.0_144]
at com.hm.achievement.runnable.AchievePlayTimeRunnable.run(AchievePlayTimeRunnable.java:48) [AdvancedAchievements%20(2).jar:?]
at org.bukkit.craftbukkit.v1_12_R1.scheduler.CraftTask.run(CraftTask.java:71) [spigot.jar:git-Spigot-3d850ec-809c399]
at org.bukkit.craftbukkit.v1_12_R1.scheduler.CraftScheduler.mainThreadHeartbeat(CraftScheduler.java:353) [spigot.jar:git-Spigot-3d850ec-809c399]
at net.minecraft.server.v1_12_R1.MinecraftServer.D(MinecraftServer.java:739) [spigot.jar:git-Spigot-3d850ec-809c399]
at net.minecraft.server.v1_12_R1.DedicatedServer.D(DedicatedServer.java:406) [spigot.jar:git-Spigot-3d850ec-809c399]
at net.minecraft.server.v1_12_R1.MinecraftServer.C(MinecraftServer.java:679) [spigot.jar:git-Spigot-3d850ec-809c399]
at net.minecraft.server.v1_12_R1.MinecraftServer.run(MinecraftServer.java:577) [spigot.jar:git-Spigot-3d850ec-809c399]
at java.lang.Thread.run(Thread.java:748) [?:1.8.0_144]
Caused by: java.io.EOFException: Can not read response from server. Expected to read 4 bytes, read 0 bytes before connection was unexpectedly lost.
at com.mysql.jdbc.MysqlIO.readFully(MysqlIO.java:3011) ~[spigot.jar:git-Spigot-3d850ec-809c399]
at com.mysql.jdbc.MysqlIO.reuseAndReadPacket(MysqlIO.java:3469) ~[spigot.jar:git-Spigot-3d850ec-809c399]
... 21 more`
|
non_defect
|
sql error while retrieving playedtime stats hello this error i found when first player join on server sql error while retrieving playedtime stats com mysql jdbc exceptions communicationsexception communications link failure the last packet successfully received from the server was milliseconds ago the last packet sent successfully to the server was milliseconds ago at sun reflect nativeconstructoraccessorimpl native method at sun reflect nativeconstructoraccessorimpl newinstance nativeconstructoraccessorimpl java at sun reflect delegatingconstructoraccessorimpl newinstance delegatingconstructoraccessorimpl java at java lang reflect constructor newinstance constructor java at com mysql jdbc util handlenewinstance util java at com mysql jdbc sqlerror createcommunicationsexception sqlerror java at com mysql jdbc mysqlio reuseandreadpacket mysqlio java at com mysql jdbc mysqlio reuseandreadpacket mysqlio java at com mysql jdbc mysqlio checkerrorpacket mysqlio java at com mysql jdbc mysqlio sendcommand mysqlio java at com mysql jdbc mysqlio sqlquerydirect mysqlio java at com mysql jdbc connectionimpl execsql connectionimpl java at com mysql jdbc connectionimpl execsql connectionimpl java at com mysql jdbc statementimpl executequery statementimpl java at com hm achievement db abstractsqldatabasemanager getnormalachievementamount abstractsqldatabasemanager java at com hm achievement db databasecachemanager getandincrementstatisticamount databasecachemanager java at com hm achievement runnable achieveplaytimerunnable updatetime achieveplaytimerunnable java at java util iterator foreachremaining iterator java at java util spliterators iteratorspliterator foreachremaining spliterators java at java util stream referencepipeline head foreach referencepipeline java at com hm achievement runnable achieveplaytimerunnable run achieveplaytimerunnable java at org bukkit craftbukkit scheduler crafttask run crafttask java at org bukkit craftbukkit scheduler craftscheduler mainthreadheartbeat craftscheduler java at net minecraft server minecraftserver d minecraftserver java at net minecraft server dedicatedserver d dedicatedserver java at net minecraft server minecraftserver c minecraftserver java at net minecraft server minecraftserver run minecraftserver java at java lang thread run thread java caused by java io eofexception can not read response from server expected to read bytes read bytes before connection was unexpectedly lost at com mysql jdbc mysqlio readfully mysqlio java at com mysql jdbc mysqlio reuseandreadpacket mysqlio java more
| 0
|
17,286
| 2,997,349,603
|
IssuesEvent
|
2015-07-23 06:50:36
|
contao/core
|
https://api.github.com/repos/contao/core
|
closed
|
tinymce_legacy does not work 100%
|
defect
|
It would be nice if TinyMCE 3.5 also worked with Contao 3.5.
https://contao.org/de/erweiterungsliste/view/tinymce_legacy.10000009.de.html
In principle it does, but links are not possible, neither a reference to a page nor to a file - the "Apply" button does not work!
BugBuster's guess:
The CSS classes have changed, which is why the JavaScript no longer works.
|
1.0
|
tinymce_legacy does not work 100% - It would be nice if TinyMCE 3.5 also worked with Contao 3.5.
https://contao.org/de/erweiterungsliste/view/tinymce_legacy.10000009.de.html
In principle it does, but links are not possible, neither a reference to a page nor to a file - the "Apply" button does not work!
BugBuster's guess:
The CSS classes have changed, which is why the JavaScript no longer works.
|
defect
|
tinymce legacy does not work it would be nice if tinymce also worked with contao in principle it does but links are not possible neither a reference to a page nor to a file the apply button does not work bugbuster s guess the css classes have changed which is why the javascript no longer works
| 1
|
39,705
| 12,698,859,343
|
IssuesEvent
|
2020-06-22 14:03:10
|
mahonec/WebGoat-Legacy
|
https://api.github.com/repos/mahonec/WebGoat-Legacy
|
opened
|
CVE-2020-11620 (High) detected in jackson-databind-2.0.4.jar
|
security vulnerability
|
## CVE-2020-11620 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>jackson-databind-2.0.4.jar</b></p></summary>
<p>General data-binding functionality for Jackson: works on core streaming API</p>
<p>Path to dependency file: /tmp/ws-scm/WebGoat-Legacy/pom.xml</p>
<p>Path to vulnerable library: canner/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.0.4/jackson-databind-2.0.4.jar,/WebGoat-Legacy/target/WebGoat-6.0.1/WEB-INF/lib/jackson-databind-2.0.4.jar</p>
<p>
Dependency Hierarchy:
- :x: **jackson-databind-2.0.4.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/mahonec/WebGoat-Legacy/commit/9b9155ac6645ae2fcb5f2195a346a9a39d3137e7">9b9155ac6645ae2fcb5f2195a346a9a39d3137e7</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
FasterXML jackson-databind 2.x before 2.9.10.4 mishandles the interaction between serialization gadgets and typing, related to org.apache.commons.jelly.impl.Embedded (aka commons-jelly).
<p>Publish Date: 2020-04-07
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-11620>CVE-2020-11620</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>9.8</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-11620">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-11620</a></p>
<p>Release Date: 2020-04-07</p>
<p>Fix Resolution: com.fasterxml.jackson.core:jackson-databind:2.9.10.4</p>
</p>
</details>
<p></p>
***
:rescue_worker_helmet: Automatic Remediation is available for this issue
<!-- <REMEDIATE>{"isOpenPROnVulnerability":true,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"Java","groupId":"com.fasterxml.jackson.core","packageName":"jackson-databind","packageVersion":"2.0.4","isTransitiveDependency":false,"dependencyTree":"com.fasterxml.jackson.core:jackson-databind:2.0.4","isMinimumFixVersionAvailable":true,"minimumFixVersion":"com.fasterxml.jackson.core:jackson-databind:2.9.10.4"}],"vulnerabilityIdentifier":"CVE-2020-11620","vulnerabilityDetails":"FasterXML jackson-databind 2.x before 2.9.10.4 mishandles the interaction between serialization gadgets and typing, related to org.apache.commons.jelly.impl.Embedded (aka commons-jelly).","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-11620","cvss3Severity":"high","cvss3Score":"9.8","cvss3Metrics":{"A":"High","AC":"Low","PR":"None","S":"Unchanged","C":"High","UI":"None","AV":"Network","I":"High"},"extraData":{}}</REMEDIATE> -->
|
True
|
CVE-2020-11620 (High) detected in jackson-databind-2.0.4.jar - ## CVE-2020-11620 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>jackson-databind-2.0.4.jar</b></p></summary>
<p>General data-binding functionality for Jackson: works on core streaming API</p>
<p>Path to dependency file: /tmp/ws-scm/WebGoat-Legacy/pom.xml</p>
<p>Path to vulnerable library: canner/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.0.4/jackson-databind-2.0.4.jar,/WebGoat-Legacy/target/WebGoat-6.0.1/WEB-INF/lib/jackson-databind-2.0.4.jar</p>
<p>
Dependency Hierarchy:
- :x: **jackson-databind-2.0.4.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/mahonec/WebGoat-Legacy/commit/9b9155ac6645ae2fcb5f2195a346a9a39d3137e7">9b9155ac6645ae2fcb5f2195a346a9a39d3137e7</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
FasterXML jackson-databind 2.x before 2.9.10.4 mishandles the interaction between serialization gadgets and typing, related to org.apache.commons.jelly.impl.Embedded (aka commons-jelly).
<p>Publish Date: 2020-04-07
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-11620>CVE-2020-11620</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>9.8</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-11620">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-11620</a></p>
<p>Release Date: 2020-04-07</p>
<p>Fix Resolution: com.fasterxml.jackson.core:jackson-databind:2.9.10.4</p>
</p>
</details>
<p></p>
***
:rescue_worker_helmet: Automatic Remediation is available for this issue
<!-- <REMEDIATE>{"isOpenPROnVulnerability":true,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"Java","groupId":"com.fasterxml.jackson.core","packageName":"jackson-databind","packageVersion":"2.0.4","isTransitiveDependency":false,"dependencyTree":"com.fasterxml.jackson.core:jackson-databind:2.0.4","isMinimumFixVersionAvailable":true,"minimumFixVersion":"com.fasterxml.jackson.core:jackson-databind:2.9.10.4"}],"vulnerabilityIdentifier":"CVE-2020-11620","vulnerabilityDetails":"FasterXML jackson-databind 2.x before 2.9.10.4 mishandles the interaction between serialization gadgets and typing, related to org.apache.commons.jelly.impl.Embedded (aka commons-jelly).","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-11620","cvss3Severity":"high","cvss3Score":"9.8","cvss3Metrics":{"A":"High","AC":"Low","PR":"None","S":"Unchanged","C":"High","UI":"None","AV":"Network","I":"High"},"extraData":{}}</REMEDIATE> -->
|
non_defect
|
cve high detected in jackson databind jar cve high severity vulnerability vulnerable library jackson databind jar general data binding functionality for jackson works on core streaming api path to dependency file tmp ws scm webgoat legacy pom xml path to vulnerable library canner repository com fasterxml jackson core jackson databind jackson databind jar webgoat legacy target webgoat web inf lib jackson databind jar dependency hierarchy x jackson databind jar vulnerable library found in head commit a href vulnerability details fasterxml jackson databind x before mishandles the interaction between serialization gadgets and typing related to org apache commons jelly impl embedded aka commons jelly publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution com fasterxml jackson core jackson databind rescue worker helmet automatic remediation is available for this issue isopenpronvulnerability true ispackagebased true isdefaultbranch true packages vulnerabilityidentifier cve vulnerabilitydetails fasterxml jackson databind x before mishandles the interaction between serialization gadgets and typing related to org apache commons jelly impl embedded aka commons jelly vulnerabilityurl
| 0
|
306,808
| 26,497,913,849
|
IssuesEvent
|
2023-01-18 07:55:32
|
wazuh/wazuh
|
https://api.github.com/repos/wazuh/wazuh
|
closed
|
Release 4.4.0 - Beta 1 - C Unit tests
|
release test/4.4.0
|
|Main RC issue|Version|RC|Tag|Previous issue|
|---|---|---|---|---|
|#15891|4.4.0|Beta 1|[v4.4.0-beta1](https://github.com/wazuh/wazuh/tree/v4.4.0-beta1)|#15528|
This issue aims to run all `C unit tests` for the current RC and report the results. Any failing test should be properly addressed with a new issue, detailing the error and the possible cause. Then, it will be determined if that failure is to be fixed immediately or marked as expected-fail if there is enough support and approval for it. In case a test is marked as expected-fail, the issue detailing the error should always be used as a reference in the test.
## Auditors' validation
In order to close and proceed with the release or the next candidate version, the following auditors must give the green light to this RC.
- [x] @chemamartinez
|
1.0
|
Release 4.4.0 - Beta 1 - C Unit tests - |Main RC issue|Version|RC|Tag|Previous issue|
|---|---|---|---|---|
|#15891|4.4.0|Beta 1|[v4.4.0-beta1](https://github.com/wazuh/wazuh/tree/v4.4.0-beta1)|#15528|
This issue aims to run all `C unit tests` for the current RC and report the results. Any failing test should be properly addressed with a new issue, detailing the error and the possible cause. Then, it will be determined if that failure is to be fixed immediately or marked as expected-fail if there is enough support and approval for it. In case a test is marked as expected-fail, the issue detailing the error should always be used as a reference in the test.
## Auditors' validation
In order to close and proceed with the release or the next candidate version, the following auditors must give the green light to this RC.
- [x] @chemamartinez
|
non_defect
|
release beta c unit tests main rc issue version rc tag previous issue beta this issue aims to run all c unit tests for the current rc and report the results any failing test should be properly addressed with a new issue detailing the error and the possible cause then it will be determined if that failure is to be fixed immediately or marked as expected fail if there is enough support and approval for it in case a test is marked as expected fail the issue detailing the error should always be used as a reference in the test auditors validation in order to close and proceed with the release or the next candidate version the following auditors must give the green light to this rc chemamartinez
| 0
|
65,734
| 19,674,441,318
|
IssuesEvent
|
2022-01-11 10:48:46
|
jOOQ/jOOQ
|
https://api.github.com/repos/jOOQ/jOOQ
|
closed
|
Parser doesn't support Derby's FOR BIT DATA data type modifier
|
T: Defect P: Medium E: All Editions C: Parser
|
Derby doesn't have the usual binary data types:
- `BINARY`
- `VARBINARY`
But instead supports these:
- `{ CHAR | CHARACTER }[(length)] FOR BIT DATA` (instead of `BINARY`)
- `{ VARCHAR | CHAR VARYING | CHARACTER VARYING } (length) FOR BIT DATA` (instead of `VARBINARY`)
- `LONG VARCHAR FOR BIT DATA` instead of `LONGVARBINARY`
Our parser does not yet support these types
----
See also:
https://db.apache.org/derby/docs/10.15/ref/rrefsqlj32714.html
|
1.0
|
Parser doesn't support Derby's FOR BIT DATA data type modifier - Derby doesn't have the usual binary data types:
- `BINARY`
- `VARBINARY`
But instead supports these:
- `{ CHAR | CHARACTER }[(length)] FOR BIT DATA` (instead of `BINARY`)
- `{ VARCHAR | CHAR VARYING | CHARACTER VARYING } (length) FOR BIT DATA` (instead of `VARBINARY`)
- `LONG VARCHAR FOR BIT DATA` instead of `LONGVARBINARY`
Our parser does not yet support these types
----
See also:
https://db.apache.org/derby/docs/10.15/ref/rrefsqlj32714.html
|
defect
|
parser doesn t support derby s for bit data data type modifier derby doesn t have the usual binary data types binary varbinary but instead supports these char character for bit data instead of binary varchar char varying character varying length for bit data instead of varbinary long varchar for bit data instead of longvarbinary our parser does not yet support these types see also
| 1
|
8,845
| 2,612,907,190
|
IssuesEvent
|
2015-02-27 17:26:16
|
chrsmith/windows-package-manager
|
https://api.github.com/repos/chrsmith/windows-package-manager
|
closed
|
OS X Homebrew style package management
|
auto-migrated Type-Defect
|
```
Please use more 'agile' approach to package management, e.g. like OS X Homebrew
app http://mxcl.github.com/homebrew/
Personally, I think that your app is essential for Windows power-users but I'd
like to see more community involvement other than raising issues about adding
new packages or upgrading their versions.
Also, once a month for upgrades seems a bit to long nowadays.
Tnx :)
```
Original issue reported on code.google.com by `sanja.ko...@gmail.com` on 1 Sep 2012 at 10:13
|
1.0
|
OS X Homebrew style package management - ```
Please use more 'agile' approach to package management, e.g. like OS X Homebrew
app http://mxcl.github.com/homebrew/
Personally, I think that your app is essential for Windows power-users but I'd
like to see more community involvement other than raising issues about adding
new packages or upgrading their versions.
Also, once a month for upgrades seems a bit to long nowadays.
Tnx :)
```
Original issue reported on code.google.com by `sanja.ko...@gmail.com` on 1 Sep 2012 at 10:13
|
defect
|
os x homebrew style package management please use more agile approach to package management e g like os x homebrew app personally i think that your app is essential for windows power users but i d like to see more community involvement other than raising issues about adding new packages or upgrading their versions also once a month for upgrades seems a bit to long nowadays tnx original issue reported on code google com by sanja ko gmail com on sep at
| 1
|
39,376
| 8,637,583,836
|
IssuesEvent
|
2018-11-23 11:47:21
|
openhealthcare/elcid-rfh
|
https://api.github.com/repos/openhealthcare/elcid-rfh
|
closed
|
Add Assessment and a reason for interaction
|
Code Review
|
Adds an assessment text field and a reason for interaction of "LTBI Assessment" to the inline clinical discussion form.
|
1.0
|
Add Assessment and a reason for interaction - Adds an assessment text field and a reason for interaction of "LTBI Assessment" to the inline clinical discussion form.
|
non_defect
|
add assessment and a reason for interaction adds an assessment text field and a reason for interaction of ltbi assessment to the inline clinical discussion form
| 0
|
14,358
| 2,799,330,194
|
IssuesEvent
|
2015-05-12 23:45:27
|
FIX94/Nintendont
|
https://api.github.com/repos/FIX94/Nintendont
|
closed
|
F-Zero GX, No controls, and memory card not loading.
|
auto-migrated Priority-Medium Type-Defect
|
```
What steps will reproduce the problem?
1.Use Generic Blue V-Shape PS2/1 USB Adapter (using bottom slot)
2.Boot F-Zero GX (Using USB storage, top slot)
3.No Input in-game or on boot, game asks if it should save after automatically
selecting yes on previous prompt.
What is the expected output? What do you see instead?
Game starts normally but input on either controller slot does not go through.
What revision of Nintendont are you using? On what system Wii/Wii U?
Latest available at the time of post (R167) and Latest Release (R160)
Wii U 5.1.2E
Please provide any additional information below.
HIDTest recognizes the controller and all inputs correctly, and loads the
controller.ini, displaying the buttons below the hex IDs properly.
```
Original issue reported on code.google.com by `i4cuh...@gmail.com` on 6 Oct 2014 at 4:30
|
1.0
|
F-Zero GX, No controls, and memory card not loading. - ```
What steps will reproduce the problem?
1.Use Generic Blue V-Shape PS2/1 USB Adapter (using bottom slot)
2.Boot F-Zero GX (Using USB storage, top slot)
3.No Input in-game or on boot, game asks if it should save after automatically
selecting yes on previous prompt.
What is the expected output? What do you see instead?
Game starts normally but input on either controller slot does not go through.
What revision of Nintendont are you using? On what system Wii/Wii U?
Latest available at the time of post (R167) and Latest Release (R160)
Wii U 5.1.2E
Please provide any additional information below.
HIDTest recognizes the controller and all inputs correctly, and loads the
controller.ini, displaying the buttons below the hex IDs properly.
```
Original issue reported on code.google.com by `i4cuh...@gmail.com` on 6 Oct 2014 at 4:30
|
defect
|
f zero gx no controls and memory card not loading what steps will reproduce the problem use generic blue v shape usb adapter using bottom slot boot f zero gx using usb storage top slot no input in game or on boot game asks if it should save after automatically selecting yes on previous prompt what is the expected output what do you see instead game starts normally but input on either controller slot does not go through what revision of nintendont are you using on what system wii wii u latest available at the time of post and latest release wii u please provide any additional information below hidtest recognizes the controller and all inputs correctly and loads the controller ini displaying the buttons below the hex ids properly original issue reported on code google com by gmail com on oct at
| 1
|
28,036
| 5,167,306,368
|
IssuesEvent
|
2017-01-17 18:26:26
|
eliasferreyra/googlesitemapgenerator
|
https://api.github.com/repos/eliasferreyra/googlesitemapgenerator
|
closed
|
Incorrect namespace -: Your Sitemap or Sitemap index file doesn't properly declare the namespace.
|
auto-migrated Priority-Medium Type-Defect
|
```
Hello Sir,
When i add ROR Sitemap in Google webmaster tools the google show's error.
Error-:
"Incorrect namespace -: Your Sitemap or Sitemap index file doesn't properly
declare the namespace"
http://www.dixiepackingandseal.com/ror.xml
Please reply ASAP its urgent
Regards
WSISearchresults
```
Original issue reported on code.google.com by `wsisearc...@gmail.com` on 3 Sep 2009 at 3:52
Attachments:
- [ROR-Incorrect-namespace.JPG](https://storage.googleapis.com/google-code-attachments/googlesitemapgenerator/issue-67/comment-0/ROR-Incorrect-namespace.JPG)
|
1.0
|
Incorrect namespace -: Your Sitemap or Sitemap index file doesn't properly declare the namespace. - ```
Hello Sir,
When i add ROR Sitemap in Google webmaster tools the google show's error.
Error-:
"Incorrect namespace -: Your Sitemap or Sitemap index file doesn't properly
declare the namespace"
http://www.dixiepackingandseal.com/ror.xml
Please reply ASAP its urgent
Regards
WSISearchresults
```
Original issue reported on code.google.com by `wsisearc...@gmail.com` on 3 Sep 2009 at 3:52
Attachments:
- [ROR-Incorrect-namespace.JPG](https://storage.googleapis.com/google-code-attachments/googlesitemapgenerator/issue-67/comment-0/ROR-Incorrect-namespace.JPG)
|
defect
|
incorrect namespace your sitemap or sitemap index file doesn t properly declare the namespace hello sir when i add ror sitemap in google webmaster tools the google show s error error incorrect namespace your sitemap or sitemap index file doesn t properly declare the namespace please reply asap its urgent regards wsisearchresults original issue reported on code google com by wsisearc gmail com on sep at attachments
| 1
|
21,319
| 3,488,405,187
|
IssuesEvent
|
2016-01-02 22:53:06
|
catmaid/CATMAID
|
https://api.github.com/repos/catmaid/CATMAID
|
opened
|
The confidence-compartment-subgraph returns way too much data in the JSON
|
type: defect
|
The premise of the "basic_graph" was to return a minimal representation. Now it returns an array per entry, containing the distribution of each synapse in the edge into their confidence values.
While I understand the need for homogeneity in the responses, this change broke a number of scripts using the default basic graph for analysis. Changing the structure of the returned JSON of a server function is not a good idea. Instead, add a new function. The CATMAID client is not the only client anymore.
Additionally there is little point in a gigantic JSON file containing a 5-entry array for every single synapse in the neuron, when none of these are to be displayed or used in any way in almost all cases. It adds overhead to the server and overhead to the client both for processing and for memory storage.
The general rule when additional information is necessary to construct the graph is to fetch that additional data. For example when splitting neurons into axon and dendrite in the graph. For the case of the confidence filter, this would mean fetching only when changing the confidence to a value other than the default and then storing the extra data for further filtering to other values. And fetch ONLY what is needed, as opposed to the current situation where confidence values are fetched for ALL connectors related to the loaded neurons. And these data would be reloaded when pushing "Refresh". In other words, pay the price only when necessary.
I advocate for the return of the basic representation as a separate REST API entry point, and for slimming down the current one to that basic graph when no filtering is to occur.
|
1.0
|
The confidence-compartment-subgraph returns way too much data in the JSON - The premise of the "basic_graph" was to return a minimal representation. Now it returns an array per entry, containing the distribution of each synapse in the edge into their confidence values.
While I understand the need for homogeneity in the responses, this change broke a number of scripts using the default basic graph for analysis. Changing the structure of the returned JSON of a server function is not a good idea. Instead, add a new function. The CATMAID client is not the only client anymore.
Additionally there is little point in a gigantic JSON file containing a 5-entry array for every single synapse in the neuron, when none of these are to be displayed or used in any way in almost all cases. It adds overhead to the server and overhead to the client both for processing and for memory storage.
The general rule when additional information is necessary to construct the graph is to fetch that additional data. For example when splitting neurons into axon and dendrite in the graph. For the case of the confidence filter, this would mean fetching only when changing the confidence to a value other than the default and then storing the extra data for further filtering to other values. And fetch ONLY what is needed, as opposed to the current situation where confidence values are fetched for ALL connectors related to the loaded neurons. And these data would be reloaded when pushing "Refresh". In other words, pay the price only when necessary.
I advocate for the return of the basic representation as a separate REST API entry point, and for slimming down the current one to that basic graph when no filtering is to occur.
|
defect
|
the confidence compartment subgraph returns way too much data in the json the premise of the basic graph was to return a minimal representation now it returns an array per entry containing the distribution of each synapse in the edge into their confidence values while i understand the need for homogeneity in the responses this change broke a number of scripts using the default basic graph for analysis changing the structure of the returned json of a server function is not a good idea instead add a new function the catmaid client is not the only client anymore additionally there is little point in a gigantic json file containing a entry array for every single synapse in the neuron when none of these are to be displayed or used in any way in almost all cases it adds overhead to the server and overhead to the client both for processing and for memory storage the general rule when additional information is necessary to construct the graph is to fetch that additional data for example when splitting neurons into axon and dendrite in the graph for the case of the confidence filter this would mean fetching only when changing the confidence to a value other than the default and then storing the extra data for further filtering to other values and fetch only what is needed as opposed to the current situation where confidence values are fetched for all connectors related to the loaded neurons and these data would be reloaded when pushing refresh in other words pay the price only when necessary i advocate for the return of the basic representation as a separate rest api entry point and for slimming down the current one to that basic graph when no filtering is to occur
| 1
|
832,213
| 32,076,311,613
|
IssuesEvent
|
2023-09-25 11:15:08
|
webcompat/web-bugs
|
https://api.github.com/repos/webcompat/web-bugs
|
closed
|
www.politico.com - The gallery is not shown with "Trackers and Scripts to Block- Content" option enabled
|
priority-normal browser-focus-geckoview engine-gecko
|
<!-- @browser: Firefox Focus -->
<!-- @ua_header: Mozilla/5.0 (Android 13; Mobile; rv:109.0) Gecko/117.0 Firefox/117.0 -->
<!-- @reported_with: android-components-reporter -->
<!-- @public_url: https://github.com/webcompat/web-bugs/issues/127471 -->
<!-- @extra_labels: browser-focus-geckoview -->
**URL**: https://www.politico.com/gallery/2023/09/22/the-nations-cartoonists-on-the-week-in-politics-00117479
**Browser / Version**: Firefox Focus
**Operating System**: Android 13
**Tested Another Browser**: Yes Firefox
**Problem type**: Site is not usable
**Description**: Missing items
**Steps to Reproduce**:
Content doesn't load properly.
<details>
<summary>View the screenshot</summary>
<img alt="Screenshot" src="https://webcompat.com/uploads/2023/9/cda46de1-6e56-4633-8202-eaedcb7af690.jpeg">
</details>
<details>
<summary>Browser Configuration</summary>
<ul>
<li>gfx.webrender.all: false</li><li>gfx.webrender.blob-images: true</li><li>gfx.webrender.enabled: false</li><li>buildID: 20230912013654</li><li>channel: release</li><li>hasTouchScreen: true</li><li>mixed active content blocked: false</li><li>mixed passive content blocked: false</li><li>tracking content blocked: false</li>
</ul>
</details>
[View console log messages](https://webcompat.com/console_logs/2023/9/f04c5d6b-9167-4ab8-9de2-053afb6d484e)
_From [webcompat.com](https://webcompat.com/) with ❤️_
|
1.0
|
www.politico.com - The gallery is not shown with "Trackers and Scripts to Block- Content" option enabled - <!-- @browser: Firefox Focus -->
<!-- @ua_header: Mozilla/5.0 (Android 13; Mobile; rv:109.0) Gecko/117.0 Firefox/117.0 -->
<!-- @reported_with: android-components-reporter -->
<!-- @public_url: https://github.com/webcompat/web-bugs/issues/127471 -->
<!-- @extra_labels: browser-focus-geckoview -->
**URL**: https://www.politico.com/gallery/2023/09/22/the-nations-cartoonists-on-the-week-in-politics-00117479
**Browser / Version**: Firefox Focus
**Operating System**: Android 13
**Tested Another Browser**: Yes Firefox
**Problem type**: Site is not usable
**Description**: Missing items
**Steps to Reproduce**:
Content doesn't load properly.
<details>
<summary>View the screenshot</summary>
<img alt="Screenshot" src="https://webcompat.com/uploads/2023/9/cda46de1-6e56-4633-8202-eaedcb7af690.jpeg">
</details>
<details>
<summary>Browser Configuration</summary>
<ul>
<li>gfx.webrender.all: false</li><li>gfx.webrender.blob-images: true</li><li>gfx.webrender.enabled: false</li><li>buildID: 20230912013654</li><li>channel: release</li><li>hasTouchScreen: true</li><li>mixed active content blocked: false</li><li>mixed passive content blocked: false</li><li>tracking content blocked: false</li>
</ul>
</details>
[View console log messages](https://webcompat.com/console_logs/2023/9/f04c5d6b-9167-4ab8-9de2-053afb6d484e)
_From [webcompat.com](https://webcompat.com/) with ❤️_
|
non_defect
|
the gallery is not shown with trackers and scripts to block content option enabled url browser version firefox focus operating system android tested another browser yes firefox problem type site is not usable description missing items steps to reproduce content doesn t load properly view the screenshot img alt screenshot src browser configuration gfx webrender all false gfx webrender blob images true gfx webrender enabled false buildid channel release hastouchscreen true mixed active content blocked false mixed passive content blocked false tracking content blocked false from with ❤️
| 0
|
4,130
| 2,610,087,901
|
IssuesEvent
|
2015-02-26 18:26:39
|
chrsmith/dsdsdaadf
|
https://api.github.com/repos/chrsmith/dsdsdaadf
|
opened
|
深圳痤疮如何祛除最好
|
auto-migrated Priority-Medium Type-Defect
|
```
深圳痤疮如何祛除最好【深圳韩方科颜全国热线400-869-1818,24
小时QQ4008691818】深圳韩方科颜专业祛痘连锁机构,机构以韩��
�秘方——韩方科颜这一国妆准字号治疗型权威,祛痘佳品,�
��方科颜专业祛痘连锁机构,采用韩国秘方配合专业“不反弹
”健康祛痘技术并结合先进“先进豪华彩光”仪,开创国内��
�业治疗粉刺、痤疮签约包治先河,成功消除了许多顾客脸上�
��痘痘。
```
-----
Original issue reported on code.google.com by `szft...@163.com` on 14 May 2014 at 7:23
|
1.0
|
深圳痤疮如何祛除最好 - ```
深圳痤疮如何祛除最好【深圳韩方科颜全国热线400-869-1818,24
小时QQ4008691818】深圳韩方科颜专业祛痘连锁机构,机构以韩��
�秘方——韩方科颜这一国妆准字号治疗型权威,祛痘佳品,�
��方科颜专业祛痘连锁机构,采用韩国秘方配合专业“不反弹
”健康祛痘技术并结合先进“先进豪华彩光”仪,开创国内��
�业治疗粉刺、痤疮签约包治先河,成功消除了许多顾客脸上�
��痘痘。
```
-----
Original issue reported on code.google.com by `szft...@163.com` on 14 May 2014 at 7:23
|
defect
|
深圳痤疮如何祛除最好 深圳痤疮如何祛除最好【 , 】深圳韩方科颜专业祛痘连锁机构,机构以韩�� �秘方——韩方科颜这一国妆准字号治疗型权威,祛痘佳品,� ��方科颜专业祛痘连锁机构,采用韩国秘方配合专业“不反弹 ”健康祛痘技术并结合先进“先进豪华彩光”仪,开创国内�� �业治疗粉刺、痤疮签约包治先河,成功消除了许多顾客脸上� ��痘痘。 original issue reported on code google com by szft com on may at
| 1
|
43,174
| 11,531,568,071
|
IssuesEvent
|
2020-02-17 01:23:41
|
idaholab/moose
|
https://api.github.com/repos/idaholab/moose
|
closed
|
InterfaceUserObject can NOT get MaterialProperty and NeighborMaterialProperty
|
C: MOOSE P: normal T: defect
|
## Bug Description
`InterfaceUserObject` can't properly get neighbor material property.
@lindsayad The introduction of the interface material should have granted this to work automatically but apparently it doesn't and I can't figure out where the problem is.
## Steps to Reproduce
I prepared a branch here demonstrating the issue [interfaceUO_neighbor_material](https://github.com/arovinelli/moose/tree/interfaceUO_neighbor_material)
execute the `/moose/test/tests/userobjects/interface_user_object/interface_material_value_user_object_QP.i` test and see that _mp[qp] and _mp_neighbor[qp] are always 0, when they should be 10 and 4, respectively, after one step
## Impact
This does prevent user to compute interface average values material properties, and yes does prevent some of my work to be done
|
1.0
|
InterfaceUserObject can NOT get MaterialProperty and NeighborMaterialProperty - ## Bug Description
`InterfaceUserObject` can't properly get neighbor material property.
@lindsayad The introduction of the interface material should have granted this to work automatically but apparently it doesn't and I can't figure out where the problem is.
## Steps to Reproduce
I prepared a branch here demonstrating the issue [interfaceUO_neighbor_material](https://github.com/arovinelli/moose/tree/interfaceUO_neighbor_material)
execute the `/moose/test/tests/userobjects/interface_user_object/interface_material_value_user_object_QP.i` test and see that _mp[qp] and _mp_neighbor[qp] are always 0, when they should be 10 and 4, respectively, after one step
## Impact
This does prevent user to compute interface average values material properties, and yes does prevent some of my work to be done
|
defect
|
interfaceuserobject can not get materialproperty and neighbormaterialproperty bug description interfaceuserobject can t properly get neighbor material property lindsayad the introduction of the interface material should have granted this to work automatically but apparently it doesn t and i can t figure out where the problem is steps to reproduce i prepared a branch here demonstrating the issue execute the moose test tests userobjects interface user object interface material value user object qp i test and see that mp and mp neighbor are always when they should be and respectively after one step impact this does prevent user to compute interface average values material properties and yes does prevent some of my work to be done
| 1
|
22,879
| 3,727,389,282
|
IssuesEvent
|
2016-03-06 08:04:54
|
godfather1103/mentohust
|
https://api.github.com/repos/godfather1103/mentohust
|
closed
|
软件在windows下安装
|
auto-migrated Priority-Medium Type-Defect
|
```
软件安装系统为:windows8.1 x64
首先采用bat文件安装相应文件,点击认证软件显示无:winpcap.
dll;之后安装那个winpcap软件后不出现上述错误,反而是认证�
��件点击之后,没有任何反应,切换到任务管理器,可以看到
软件已经在后台运行,但是没有出现任何提示,而且还是上��
�去网。。。求高手解决。。。
```
Original issue reported on code.google.com by `longteng...@gmail.com` on 30 Jul 2014 at 7:51
|
1.0
|
软件在windows下安装 - ```
软件安装系统为:windows8.1 x64
首先采用bat文件安装相应文件,点击认证软件显示无:winpcap.
dll;之后安装那个winpcap软件后不出现上述错误,反而是认证�
��件点击之后,没有任何反应,切换到任务管理器,可以看到
软件已经在后台运行,但是没有出现任何提示,而且还是上��
�去网。。。求高手解决。。。
```
Original issue reported on code.google.com by `longteng...@gmail.com` on 30 Jul 2014 at 7:51
|
defect
|
软件在windows下安装 软件安装系统为: 首先采用bat文件安装相应文件,点击认证软件显示无:winpcap dll;之后安装那个winpcap软件后不出现上述错误,反而是认证� ��件点击之后,没有任何反应,切换到任务管理器,可以看到 软件已经在后台运行,但是没有出现任何提示,而且还是上�� �去网。。。求高手解决。。。 original issue reported on code google com by longteng gmail com on jul at
| 1
|
335,587
| 30,053,533,476
|
IssuesEvent
|
2023-06-28 04:00:48
|
unifyai/ivy
|
https://api.github.com/repos/unifyai/ivy
|
closed
|
Fix nn.test_tensorflow_silu
|
TensorFlow Frontend Sub Task Failing Test
|
| | |
|---|---|
|jax|<a href="https://github.com/unifyai/ivy/actions/runs/5396462220/jobs/9800117174"><img src=https://img.shields.io/badge/-success-success></a>
|numpy|<a href="https://github.com/unifyai/ivy/actions/runs/5396462220/jobs/9800117174"><img src=https://img.shields.io/badge/-success-success></a>
|tensorflow|<a href="https://github.com/unifyai/ivy/actions/runs/5396462220/jobs/9800117174"><img src=https://img.shields.io/badge/-success-success></a>
|torch|<a href="https://github.com/unifyai/ivy/actions/runs/5396462220/jobs/9800117174"><img src=https://img.shields.io/badge/-success-success></a>
|paddle|<a href="https://github.com/unifyai/ivy/actions/runs/5396462220/jobs/9800117174"><img src=https://img.shields.io/badge/-success-success></a>
|
1.0
|
Fix nn.test_tensorflow_silu - | | |
|---|---|
|jax|<a href="https://github.com/unifyai/ivy/actions/runs/5396462220/jobs/9800117174"><img src=https://img.shields.io/badge/-success-success></a>
|numpy|<a href="https://github.com/unifyai/ivy/actions/runs/5396462220/jobs/9800117174"><img src=https://img.shields.io/badge/-success-success></a>
|tensorflow|<a href="https://github.com/unifyai/ivy/actions/runs/5396462220/jobs/9800117174"><img src=https://img.shields.io/badge/-success-success></a>
|torch|<a href="https://github.com/unifyai/ivy/actions/runs/5396462220/jobs/9800117174"><img src=https://img.shields.io/badge/-success-success></a>
|paddle|<a href="https://github.com/unifyai/ivy/actions/runs/5396462220/jobs/9800117174"><img src=https://img.shields.io/badge/-success-success></a>
|
non_defect
|
fix nn test tensorflow silu jax a href src numpy a href src tensorflow a href src torch a href src paddle a href src
| 0
|
5,307
| 2,610,185,063
|
IssuesEvent
|
2015-02-26 18:58:47
|
chrsmith/quchuseban
|
https://api.github.com/repos/chrsmith/quchuseban
|
opened
|
纠结治疗色斑的最佳方法
|
auto-migrated Priority-Medium Type-Defect
|
```
《摘要》
有一天,一只狐狸走到一个葡萄园外,看见里面水灵灵的葡萄垂涎欲滴。可是外面有栅栏挡着,无法进去。于是它一狠心绝食三日,减肥之后,终于钻进葡萄园内饱餐一顿。当它心满意足地想离开葡萄园时,发觉自己吃得太饱,怎么也钻不出栅栏了。相信任何人都不愿做这样的狐狸。退路同样重要。饱带干粮,晴带雨伞,点滴积累,水到渠成。有的东西今天似乎一文不值,但有朝一日也许就会身价百倍。治疗色斑的最佳方法,
《客户案例》
我之所以会长斑跟我的生活压力有很大的关系,本以为��
�满的婚姻会幸福的过一辈子的,可我万万没有想到,在我的�
��子还没到一周岁时,我就和我的先生闹离婚,只能说生活很
现实、也很无奈,因为种种原因我只能带着我那可怜的孩子��
�我先生离婚了,虽然已经是心力交瘁了,可为了我的孩子,�
��苦再累我还要坚强的活下去,就这样,我白天把孩子放在妈
妈那里自己起早贪黑的去上班,晚上回来还要带我孩子,生��
�就这样过着,很快孩子就已经是三周岁了,看着孩子一天天�
��长大、董事,自己真的比谁都开心,可心里的痛却只能自己
体会,因为婚姻不幸带来的压力,和这几年生活的劳累,本��
�一张白皙靓丽的脸现在却长满了斑,在孩子两周岁的时候,�
��爸妈妈还有邻居一直劝我再婚,可是因为孩子还小,还因为
这张烦人的脸实在拿不出手,所以就一直这样耗着。说实话��
�了脸上的斑,我也费了不少心。</br>
我下定决心一定要祛除脸上的黄褐斑,美容院我是放弃��
�,以前曾听说有些祛斑产品也能达到彻底祛斑的效果,于是�
��大量的收集黄褐斑治疗的相关资料和各种祛斑信息,希望能
从这里找到一丝希望。通过这种比较和朋友们的建议,最终��
�选择了天然精华祛斑产品——黛芙薇尔。这个产品口碑很好�
��而且得到了很多斑友们的好评,我相信群众的眼睛一定是雪
亮,于是我从他们的官网订购了三个周期的黛芙薇尔。</br>
慢慢的用一个周期之后,发现斑才开始慢慢的变淡,效��
�确实不错,而且对身体也没有任何的影响,后来又接着买了�
��个周期。一共下来用完了3个周期脸上斑基本已经看不清楚��
�,而且皮肤也比以前干净光泽多了,而且还很有弹性,没想�
��黛芙薇尔能有这么好的效果,能成功祛除我脸上的黄褐斑。
现在的我像变了一个人似的,变的乐观开朗了,积极向上了��
�工作起来也有了动力了,生活也美满幸福了。真的太感谢黛�
��薇尔了,希望它能让更多的姐妹们摆脱黄褐斑的困惑,恢复
靓丽容颜,重新找到自信的人生。
阅读了治疗色斑的最佳方法,再看脸上容易长斑的原因:
《色斑形成原因》
内部因素
一、压力
当人受到压力时,就会分泌肾上腺素,为对付压力而做��
�备。如果长期受到压力,人体新陈代谢的平衡就会遭到破坏�
��皮肤所需的营养供应趋于缓慢,色素母细胞就会变得很活跃
。
二、荷尔蒙分泌失调
避孕药里所含的女性荷尔蒙雌激素,会刺激麦拉宁细胞��
�分泌而形成不均匀的斑点,因避孕药而形成的斑点,虽然在�
��药中断后会停止,但仍会在皮肤上停留很长一段时间。怀孕
中因女性荷尔蒙雌激素的增加,从怀孕4—5个月开始会容易出
现斑,这时候出现的斑点在产后大部分会消失。可是,新陈��
�谢不正常、肌肤裸露在强烈的紫外线下、精神上受到压力等�
��因,都会使斑加深。有时新长出的斑,产后也不会消失,所
以需要更加注意。
三、新陈代谢缓慢
肝的新陈代谢功能不正常或卵巢功能减退时也会出现斑��
�因为新陈代谢不顺畅、或内分泌失调,使身体处于敏感状态�
��,从而加剧色素问题。我们常说的便秘会形成斑,其实就是
内分泌失调导致过敏体质而形成的。另外,身体状态不正常��
�时候,紫外线的照射也会加速斑的形成。
四、错误的使用化妆品
使用了不适合自己皮肤的化妆品,会导致皮肤过敏。在��
�疗的过程中如过量照射到紫外线,皮肤会为了抵御外界的侵�
��,在有炎症的部位聚集麦拉宁色素,这样会出现色素沉着的
问题。
外部因素
一、紫外线
照射紫外线的时候,人体为了保护皮肤,会在基底层产��
�很多麦拉宁色素。所以为了保护皮肤,会在敏感部位聚集更�
��的色素。经常裸露在强烈的阳光底下不仅促进皮肤的老化,
还会引起黑斑、雀斑等色素沉着的皮肤疾患。
二、不良的清洁习惯
因强烈的清洁习惯使皮肤变得敏感,这样会刺激皮肤。��
�皮肤敏感时,人体为了保护皮肤,黑色素细胞会分泌很多麦�
��宁色素,当色素过剩时就出现了斑、瑕疵等皮肤色素沉着的
问题。
三、遗传基因
父母中有长斑的,则本人长斑的概率就很高,这种情况��
�一定程度上就可判定是遗传基因的作用。所以家里特别是长�
��有长斑的人,要注意避免引发长斑的重要因素之一——紫外
线照射,这是预防斑必须注意的。
《有疑问帮你解决》
1,黛芙薇尔精华液真的有效果吗?真的可以把脸上的黄褐��
�去掉吗?
答:黛芙薇尔精华液DNA精华能够有效的修复周围难以触��
�的色斑,其独有的纳豆成分为皮肤的美白与靓丽,提供了必�
��可少的营养物质,可以有效的去除黄褐斑,黄褐斑,黄褐斑
,蝴蝶斑,晒斑、妊娠斑等。它它完全突破了传统的美肤时��
�,宛如在皮肤中注入了一杯兼具活化、再生、滋养等功效的�
��尾酒,同时为脸部提供大量有机维生素精华,脸部的改变显
而易见。自产品上市以来,老顾客纷纷介绍新顾客,71%的新��
�客都是通过老顾客介绍而来,口碑由此而来!
2,服用黛芙薇尔美白,会伤身体吗?有副作用吗?
答:黛芙薇尔精华液应用了精纯复合配方和领先的分类��
�斑科技,并将“DNA美肤系统”疗法应用到了该产品中,能彻�
��祛除黄褐斑,蝴蝶斑,妊娠斑,晒斑,黄褐斑,老年斑,有
效淡化黄褐斑至接近肤色。黛芙薇尔通过法国、美国、台湾��
�地的专家通力协作,超过10年的研究以全新的DNA肌肤修复技��
�,挑战传统化学护肤理念,不懈追寻发现破译大自然的美丽�
��迹,令每一位爱美的女性都能享受到科技创新所带来的自然
之美。
专为亚洲女性肤质研制,精心呵护女性美丽,多年来,为数��
�百万计的女性解除了黄褐斑困扰。深得广大女性朋友的信赖!
3,去除黄褐斑之后,会反弹吗?
答:很多曾经长了黄褐斑的人士,自从选择了黛芙薇尔��
�白,就一劳永逸。这款祛斑产品是经过数十位权威祛斑专家�
��据斑的形成原因精心研制而成用事实说话,让消费者打分。
树立权威品牌!我们的很多新客户都是老客户介绍而来,请问�
��如果效果不好,会有客户转介绍吗?
4,你们的价格有点贵,能不能便宜一点?
答:如果您使用西药最少需要2000元,煎服的药最少需要3
000元,做手术最少是5000元,而这些毫无疑问,不会对彻底去�
��你的斑点有任何帮助!一分价钱,一份价值,我们现在做的��
�是一个口碑,一个品牌,价钱并不高。如果花这点钱把你的�
��褐斑彻底去除,你还会觉得贵吗?你还会再去花那么多冤枉��
�,不但斑没去掉,还把自己的皮肤弄的越来越糟吗
5,我适合用黛芙薇尔精华液吗?
答:黛芙薇尔适用人群:
1、生理紊乱引起的黄褐斑人群
2、生育引起的妊娠斑人群
3、年纪增长引起的老年斑人群
4、化妆品色素沉积、辐射斑人群
5、长期日照引起的日晒斑人群
6、肌肤暗淡急需美白的人群
《祛斑小方法》
治疗色斑的最佳方法,同时为您分享祛斑小方法
每天要保证充足的睡眠,劳累会导致皮肤紧张疲倦,血液偏��
�,新陈代谢减缓,那时皮肤将无法取得充足的养分;角质层因
缺乏水分而使皮肤黯然无光。
```
-----
Original issue reported on code.google.com by `additive...@gmail.com` on 1 Jul 2014 at 3:31
|
1.0
|
纠结治疗色斑的最佳方法 - ```
《摘要》
有一天,一只狐狸走到一个葡萄园外,看见里面水灵灵的葡萄垂涎欲滴。可是外面有栅栏挡着,无法进去。于是它一狠心绝食三日,减肥之后,终于钻进葡萄园内饱餐一顿。当它心满意足地想离开葡萄园时,发觉自己吃得太饱,怎么也钻不出栅栏了。相信任何人都不愿做这样的狐狸。退路同样重要。饱带干粮,晴带雨伞,点滴积累,水到渠成。有的东西今天似乎一文不值,但有朝一日也许就会身价百倍。治疗色斑的最佳方法,
《客户案例》
我之所以会长斑跟我的生活压力有很大的关系,本以为��
�满的婚姻会幸福的过一辈子的,可我万万没有想到,在我的�
��子还没到一周岁时,我就和我的先生闹离婚,只能说生活很
现实、也很无奈,因为种种原因我只能带着我那可怜的孩子��
�我先生离婚了,虽然已经是心力交瘁了,可为了我的孩子,�
��苦再累我还要坚强的活下去,就这样,我白天把孩子放在妈
妈那里自己起早贪黑的去上班,晚上回来还要带我孩子,生��
�就这样过着,很快孩子就已经是三周岁了,看着孩子一天天�
��长大、董事,自己真的比谁都开心,可心里的痛却只能自己
体会,因为婚姻不幸带来的压力,和这几年生活的劳累,本��
�一张白皙靓丽的脸现在却长满了斑,在孩子两周岁的时候,�
��爸妈妈还有邻居一直劝我再婚,可是因为孩子还小,还因为
这张烦人的脸实在拿不出手,所以就一直这样耗着。说实话��
�了脸上的斑,我也费了不少心。</br>
我下定决心一定要祛除脸上的黄褐斑,美容院我是放弃��
�,以前曾听说有些祛斑产品也能达到彻底祛斑的效果,于是�
��大量的收集黄褐斑治疗的相关资料和各种祛斑信息,希望能
从这里找到一丝希望。通过这种比较和朋友们的建议,最终��
�选择了天然精华祛斑产品——黛芙薇尔。这个产品口碑很好�
��而且得到了很多斑友们的好评,我相信群众的眼睛一定是雪
亮,于是我从他们的官网订购了三个周期的黛芙薇尔。</br>
慢慢的用一个周期之后,发现斑才开始慢慢的变淡,效��
�确实不错,而且对身体也没有任何的影响,后来又接着买了�
��个周期。一共下来用完了3个周期脸上斑基本已经看不清楚��
�,而且皮肤也比以前干净光泽多了,而且还很有弹性,没想�
��黛芙薇尔能有这么好的效果,能成功祛除我脸上的黄褐斑。
现在的我像变了一个人似的,变的乐观开朗了,积极向上了��
�工作起来也有了动力了,生活也美满幸福了。真的太感谢黛�
��薇尔了,希望它能让更多的姐妹们摆脱黄褐斑的困惑,恢复
靓丽容颜,重新找到自信的人生。
阅读了治疗色斑的最佳方法,再看脸上容易长斑的原因:
《色斑形成原因》
内部因素
一、压力
当人受到压力时,就会分泌肾上腺素,为对付压力而做��
�备。如果长期受到压力,人体新陈代谢的平衡就会遭到破坏�
��皮肤所需的营养供应趋于缓慢,色素母细胞就会变得很活跃
。
二、荷尔蒙分泌失调
避孕药里所含的女性荷尔蒙雌激素,会刺激麦拉宁细胞��
�分泌而形成不均匀的斑点,因避孕药而形成的斑点,虽然在�
��药中断后会停止,但仍会在皮肤上停留很长一段时间。怀孕
中因女性荷尔蒙雌激素的增加,从怀孕4—5个月开始会容易出
现斑,这时候出现的斑点在产后大部分会消失。可是,新陈��
�谢不正常、肌肤裸露在强烈的紫外线下、精神上受到压力等�
��因,都会使斑加深。有时新长出的斑,产后也不会消失,所
以需要更加注意。
三、新陈代谢缓慢
肝的新陈代谢功能不正常或卵巢功能减退时也会出现斑��
�因为新陈代谢不顺畅、或内分泌失调,使身体处于敏感状态�
��,从而加剧色素问题。我们常说的便秘会形成斑,其实就是
内分泌失调导致过敏体质而形成的。另外,身体状态不正常��
�时候,紫外线的照射也会加速斑的形成。
四、错误的使用化妆品
使用了不适合自己皮肤的化妆品,会导致皮肤过敏。在��
�疗的过程中如过量照射到紫外线,皮肤会为了抵御外界的侵�
��,在有炎症的部位聚集麦拉宁色素,这样会出现色素沉着的
问题。
外部因素
一、紫外线
照射紫外线的时候,人体为了保护皮肤,会在基底层产��
�很多麦拉宁色素。所以为了保护皮肤,会在敏感部位聚集更�
��的色素。经常裸露在强烈的阳光底下不仅促进皮肤的老化,
还会引起黑斑、雀斑等色素沉着的皮肤疾患。
二、不良的清洁习惯
因强烈的清洁习惯使皮肤变得敏感,这样会刺激皮肤。��
�皮肤敏感时,人体为了保护皮肤,黑色素细胞会分泌很多麦�
��宁色素,当色素过剩时就出现了斑、瑕疵等皮肤色素沉着的
问题。
三、遗传基因
父母中有长斑的,则本人长斑的概率就很高,这种情况��
�一定程度上就可判定是遗传基因的作用。所以家里特别是长�
��有长斑的人,要注意避免引发长斑的重要因素之一——紫外
线照射,这是预防斑必须注意的。
《有疑问帮你解决》
1,黛芙薇尔精华液真的有效果吗?真的可以把脸上的黄褐��
�去掉吗?
答:黛芙薇尔精华液DNA精华能够有效的修复周围难以触��
�的色斑,其独有的纳豆成分为皮肤的美白与靓丽,提供了必�
��可少的营养物质,可以有效的去除黄褐斑,黄褐斑,黄褐斑
,蝴蝶斑,晒斑、妊娠斑等。它它完全突破了传统的美肤时��
�,宛如在皮肤中注入了一杯兼具活化、再生、滋养等功效的�
��尾酒,同时为脸部提供大量有机维生素精华,脸部的改变显
而易见。自产品上市以来,老顾客纷纷介绍新顾客,71%的新��
�客都是通过老顾客介绍而来,口碑由此而来!
2,服用黛芙薇尔美白,会伤身体吗?有副作用吗?
答:黛芙薇尔精华液应用了精纯复合配方和领先的分类��
�斑科技,并将“DNA美肤系统”疗法应用到了该产品中,能彻�
��祛除黄褐斑,蝴蝶斑,妊娠斑,晒斑,黄褐斑,老年斑,有
效淡化黄褐斑至接近肤色。黛芙薇尔通过法国、美国、台湾��
�地的专家通力协作,超过10年的研究以全新的DNA肌肤修复技��
�,挑战传统化学护肤理念,不懈追寻发现破译大自然的美丽�
��迹,令每一位爱美的女性都能享受到科技创新所带来的自然
之美。
专为亚洲女性肤质研制,精心呵护女性美丽,多年来,为数��
�百万计的女性解除了黄褐斑困扰。深得广大女性朋友的信赖!
3,去除黄褐斑之后,会反弹吗?
答:很多曾经长了黄褐斑的人士,自从选择了黛芙薇尔��
�白,就一劳永逸。这款祛斑产品是经过数十位权威祛斑专家�
��据斑的形成原因精心研制而成用事实说话,让消费者打分。
树立权威品牌!我们的很多新客户都是老客户介绍而来,请问�
��如果效果不好,会有客户转介绍吗?
4,你们的价格有点贵,能不能便宜一点?
答:如果您使用西药最少需要2000元,煎服的药最少需要3
000元,做手术最少是5000元,而这些毫无疑问,不会对彻底去�
��你的斑点有任何帮助!一分价钱,一份价值,我们现在做的��
�是一个口碑,一个品牌,价钱并不高。如果花这点钱把你的�
��褐斑彻底去除,你还会觉得贵吗?你还会再去花那么多冤枉��
�,不但斑没去掉,还把自己的皮肤弄的越来越糟吗
5,我适合用黛芙薇尔精华液吗?
答:黛芙薇尔适用人群:
1、生理紊乱引起的黄褐斑人群
2、生育引起的妊娠斑人群
3、年纪增长引起的老年斑人群
4、化妆品色素沉积、辐射斑人群
5、长期日照引起的日晒斑人群
6、肌肤暗淡急需美白的人群
《祛斑小方法》
治疗色斑的最佳方法,同时为您分享祛斑小方法
每天要保证充足的睡眠,劳累会导致皮肤紧张疲倦,血液偏��
�,新陈代谢减缓,那时皮肤将无法取得充足的养分;角质层因
缺乏水分而使皮肤黯然无光。
```
-----
Original issue reported on code.google.com by `additive...@gmail.com` on 1 Jul 2014 at 3:31
|
defect
|
纠结治疗色斑的最佳方法 《摘要》 有一天,一只狐狸走到一个葡萄园外,看见里面水灵灵的葡萄垂涎欲滴。可是外面有栅栏挡着,无法进去。于是它一狠心绝食三日,减肥之后,终于钻进葡萄园内饱餐一顿。当它心满意足地想离开葡萄园时,发觉自己吃得太饱,怎么也钻不出栅栏了。相信任何人都不愿做这样的狐狸。退路同样重要。饱带干粮,晴带雨伞,点滴积累,水到渠成。有的东西今天似乎一文不值,但有朝一日也许就会身价百倍。治疗色斑的最佳方法, 《客户案例》 我之所以会长斑跟我的生活压力有很大的关系,本以为�� �满的婚姻会幸福的过一辈子的,可我万万没有想到,在我的� ��子还没到一周岁时,我就和我的先生闹离婚,只能说生活很 现实、也很无奈,因为种种原因我只能带着我那可怜的孩子�� �我先生离婚了,虽然已经是心力交瘁了,可为了我的孩子,� ��苦再累我还要坚强的活下去,就这样,我白天把孩子放在妈 妈那里自己起早贪黑的去上班,晚上回来还要带我孩子,生�� �就这样过着,很快孩子就已经是三周岁了,看着孩子一天天� ��长大、董事,自己真的比谁都开心,可心里的痛却只能自己 体会,因为婚姻不幸带来的压力,和这几年生活的劳累,本�� �一张白皙靓丽的脸现在却长满了斑,在孩子两周岁的时候,� ��爸妈妈还有邻居一直劝我再婚,可是因为孩子还小,还因为 这张烦人的脸实在拿不出手,所以就一直这样耗着。说实话�� �了脸上的斑,我也费了不少心。 我下定决心一定要祛除脸上的黄褐斑,美容院我是放弃�� �,以前曾听说有些祛斑产品也能达到彻底祛斑的效果,于是� ��大量的收集黄褐斑治疗的相关资料和各种祛斑信息,希望能 从这里找到一丝希望。通过这种比较和朋友们的建议,最终�� �选择了天然精华祛斑产品——黛芙薇尔。这个产品口碑很好� ��而且得到了很多斑友们的好评,我相信群众的眼睛一定是雪 亮,于是我从他们的官网订购了三个周期的黛芙薇尔。 慢慢的用一个周期之后,发现斑才开始慢慢的变淡,效�� �确实不错,而且对身体也没有任何的影响,后来又接着买了� ��个周期。 �� �,而且皮肤也比以前干净光泽多了,而且还很有弹性,没想� ��黛芙薇尔能有这么好的效果,能成功祛除我脸上的黄褐斑。 现在的我像变了一个人似的,变的乐观开朗了,积极向上了�� �工作起来也有了动力了,生活也美满幸福了。真的太感谢黛� ��薇尔了,希望它能让更多的姐妹们摆脱黄褐斑的困惑,恢复 靓丽容颜,重新找到自信的人生。 阅读了治疗色斑的最佳方法,再看脸上容易长斑的原因: 《色斑形成原因》 内部因素 一、压力 当人受到压力时,就会分泌肾上腺素,为对付压力而做�� �备。如果长期受到压力,人体新陈代谢的平衡就会遭到破坏� ��皮肤所需的营养供应趋于缓慢,色素母细胞就会变得很活跃 。 二、荷尔蒙分泌失调 避孕药里所含的女性荷尔蒙雌激素,会刺激麦拉宁细胞�� �分泌而形成不均匀的斑点,因避孕药而形成的斑点,虽然在� ��药中断后会停止,但仍会在皮肤上停留很长一段时间。怀孕 中因女性荷尔蒙雌激素的增加, — 现斑,这时候出现的斑点在产后大部分会消失。可是,新陈�� �谢不正常、肌肤裸露在强烈的紫外线下、精神上受到压力等� ��因,都会使斑加深。有时新长出的斑,产后也不会消失,所 以需要更加注意。 三、新陈代谢缓慢 肝的新陈代谢功能不正常或卵巢功能减退时也会出现斑�� �因为新陈代谢不顺畅、或内分泌失调,使身体处于敏感状态� ��,从而加剧色素问题。我们常说的便秘会形成斑,其实就是 内分泌失调导致过敏体质而形成的。另外,身体状态不正常�� �时候,紫外线的照射也会加速斑的形成。 四、错误的使用化妆品 使用了不适合自己皮肤的化妆品,会导致皮肤过敏。在�� �疗的过程中如过量照射到紫外线,皮肤会为了抵御外界的侵� ��,在有炎症的部位聚集麦拉宁色素,这样会出现色素沉着的 问题。 外部因素 一、紫外线 照射紫外线的时候,人体为了保护皮肤,会在基底层产�� �很多麦拉宁色素。所以为了保护皮肤,会在敏感部位聚集更� ��的色素。经常裸露在强烈的阳光底下不仅促进皮肤的老化, 还会引起黑斑、雀斑等色素沉着的皮肤疾患。 二、不良的清洁习惯 因强烈的清洁习惯使皮肤变得敏感,这样会刺激皮肤。�� �皮肤敏感时,人体为了保护皮肤,黑色素细胞会分泌很多麦� ��宁色素,当色素过剩时就出现了斑、瑕疵等皮肤色素沉着的 问题。 三、遗传基因 父母中有长斑的,则本人长斑的概率就很高,这种情况�� �一定程度上就可判定是遗传基因的作用。所以家里特别是长� ��有长斑的人,要注意避免引发长斑的重要因素之一——紫外 线照射,这是预防斑必须注意的。 《有疑问帮你解决》 黛芙薇尔精华液真的有效果吗 真的可以把脸上的黄褐�� �去掉吗 
答:黛芙薇尔精华液dna精华能够有效的修复周围难以触�� �的色斑,其独有的纳豆成分为皮肤的美白与靓丽,提供了必� ��可少的营养物质,可以有效的去除黄褐斑,黄褐斑,黄褐斑 ,蝴蝶斑,晒斑、妊娠斑等。它它完全突破了传统的美肤时�� �,宛如在皮肤中注入了一杯兼具活化、再生、滋养等功效的� ��尾酒,同时为脸部提供大量有机维生素精华,脸部的改变显 而易见。自产品上市以来,老顾客纷纷介绍新顾客, 的新�� �客都是通过老顾客介绍而来,口碑由此而来 ,服用黛芙薇尔美白,会伤身体吗 有副作用吗 答:黛芙薇尔精华液应用了精纯复合配方和领先的分类�� �斑科技,并将“dna美肤系统”疗法应用到了该产品中,能彻� ��祛除黄褐斑,蝴蝶斑,妊娠斑,晒斑,黄褐斑,老年斑,有 效淡化黄褐斑至接近肤色。黛芙薇尔通过法国、美国、台湾�� �地的专家通力协作, �� �,挑战传统化学护肤理念,不懈追寻发现破译大自然的美丽� ��迹,令每一位爱美的女性都能享受到科技创新所带来的自然 之美。 专为亚洲女性肤质研制,精心呵护女性美丽,多年来,为数�� �百万计的女性解除了黄褐斑困扰。深得广大女性朋友的信赖 ,去除黄褐斑之后,会反弹吗 答:很多曾经长了黄褐斑的人士,自从选择了黛芙薇尔�� �白,就一劳永逸。这款祛斑产品是经过数十位权威祛斑专家� ��据斑的形成原因精心研制而成用事实说话,让消费者打分。 树立权威品牌 我们的很多新客户都是老客户介绍而来,请问� ��如果效果不好,会有客户转介绍吗 ,你们的价格有点贵,能不能便宜一点 答: , , ,而这些毫无疑问,不会对彻底去� ��你的斑点有任何帮助 一分价钱,一份价值,我们现在做的�� �是一个口碑,一个品牌,价钱并不高。如果花这点钱把你的� ��褐斑彻底去除,你还会觉得贵吗 你还会再去花那么多冤枉�� �,不但斑没去掉,还把自己的皮肤弄的越来越糟吗 ,我适合用黛芙薇尔精华液吗 答:黛芙薇尔适用人群: 、生理紊乱引起的黄褐斑人群 、生育引起的妊娠斑人群 、年纪增长引起的老年斑人群 、化妆品色素沉积、辐射斑人群 、长期日照引起的日晒斑人群 、肌肤暗淡急需美白的人群 《祛斑小方法》 治疗色斑的最佳方法,同时为您分享祛斑小方法 每天要保证充足的睡眠,劳累会导致皮肤紧张疲倦,血液偏�� �,新陈代谢减缓,那时皮肤将无法取得充足的养分 角质层因 缺乏水分而使皮肤黯然无光。 original issue reported on code google com by additive gmail com on jul at
| 1
|
7,063
| 2,610,324,815
|
IssuesEvent
|
2015-02-26 19:44:42
|
chrsmith/republic-at-war
|
https://api.github.com/repos/chrsmith/republic-at-war
|
closed
|
Typo
|
auto-migrated Priority-Low Type-Defect
|
```
There are 2 squares that need getting rid of at the end of Plo Koon's fighter
description. (I think Plokoon needs to be changed to Plo Koon, too.)
```
-----
Original issue reported on code.google.com by `z3r0...@gmail.com` on 6 Jun 2011 at 10:26
|
1.0
|
Typo - ```
There are 2 squares that need getting rid of at the end of Plo Koon's fighter
description. (I think Plokoon needs to be changed to Plo Koon, too.)
```
-----
Original issue reported on code.google.com by `z3r0...@gmail.com` on 6 Jun 2011 at 10:26
|
defect
|
typo there are squares that need getting rid of at the end of plo koon s fighter description i think plokoon needs to be changed to plo koon too original issue reported on code google com by gmail com on jun at
| 1
|
211,193
| 16,189,341,008
|
IssuesEvent
|
2021-05-04 05:37:11
|
yujenyu/Group11_Project
|
https://api.github.com/repos/yujenyu/Group11_Project
|
closed
|
Testing strategy
|
Testing help wanted
|
Did anyone start to think about the testing strategy? (I'm sorry - I didn't although I'm asking)
|
1.0
|
Testing strategy - Did anyone start to think about the testing strategy? (I'm sorry - I didn't although I'm asking)
|
non_defect
|
testing strategy did anyone start to think about the testing strategy i m sorry i didn t although i m asking
| 0
|
23,468
| 2,659,727,068
|
IssuesEvent
|
2015-03-18 22:52:43
|
jeffbryner/MozDef
|
https://api.github.com/repos/jeffbryner/MozDef
|
closed
|
healthAndStatus should allow more granular notifications
|
category:feature priority:medium
|
The current healthAndStatus script does an excellent job of monitoring whether events from category A are still flowing.
I'd like to have it alert for every host that does not send N events of category CAT in M minutes. That would be very useful for the NSM, to monitor if every sensor sends data and not just 'something in Bro category' does.
I can (on my end) create kind of an internal "cron" job in Bro to periodically output a status message saying "i am still alive". If this script would not see it from every preconfigured sensor in the last 15 minutes, it will let us know.
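The per-sensor check requested above amounts to comparing each sensor's last-seen timestamp against a time window. A minimal sketch, assuming a `last_seen` map maintained from incoming events (the function name and its arguments are hypothetical, not part of MozDef's actual API):

```python
import time

def stale_sensors(last_seen, sensors, window_minutes=15, now=None):
    """Return the sensors that have not reported within the window.

    last_seen: {sensor_name: unix timestamp of its last event}
    sensors:   iterable of sensor names expected to report
    """
    now = time.time() if now is None else now
    cutoff = now - window_minutes * 60
    # A sensor with no recorded event at all is treated as stale too.
    return [s for s in sensors if last_seen.get(s, 0) < cutoff]
```

Running this on a schedule against the preconfigured sensor list would flag any Bro sensor whose "i am still alive" heartbeat has been missing for 15 minutes.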
|
1.0
|
healthAndStatus should allow more granular notifications - The current healthAndStatus script does excellent job for monitoring if events from category A are still flowing.
I'd like to have it alert for every host that does not send N events of category CAT in M minutes. That would be very useful for the NSM, to monitor if every sensor sends data and not just 'something in Bro category' does.
I can (on my end) create kind of an internal "cron" job in Bro to periodically output a status message saying "i am still alive". If this script would not see it from every preconfigured sensor in the last 15 minutes, it will let us know.
|
non_defect
|
healthandstatus should allow more granular notifications the current healthandstatus script does excellent job for monitoring if events from category a are still flowing i d like to have it alert for every host that does not send n events of category cat in m minutes that would be very useful for the nsm to monitor if every sensor sends data and not just something in bro category does i can on my end create kind of an internal cron job in bro to periodically output a status message saying i am still alive if this script would not see it from every preconfigured sensor in the last minutes it will let us know
| 0
|
21,740
| 3,549,144,728
|
IssuesEvent
|
2016-01-20 16:56:01
|
dart-lang/sdk
|
https://api.github.com/repos/dart-lang/sdk
|
closed
|
Isolate.spawn is broken in snapshots when run near a foreign .packages file
|
area-vm priority-critical Type-Defect vm-regression
|
This breakage appears under the following conditions:
* The script contains a `package:` import.
* The script uses `Isolate.spawn()`.
* The script is run from a snapshot.
* The directory containing the snapshot, or a parent of that directory, contains a `.packages` file that's different from the one used to create the snapshot.
To reproduce this, create a package with the following files:
```dart
// bin/bin.dart
import 'dart:isolate';
import 'package:app/app.dart';
main() async {
await Isolate.spawn(entrypoint, null);
}
void entrypoint(_) {
}
```
```dart
// lib/app.dart
// This file can be empty.
```
```yaml
# pubspec.yaml
name: app
```
Run:
```
$ pub get
$ dart --snapshot=bin.dart.snapshot bin/bin.dart
$ rm .packages
$ touch .packages
$ dart bin.dart.snapshot
```
You should see an error like the following:
```
Unhandled exception:
IsolateSpawnException: Unable to spawn isolate: Unhandled exception:
Load Error for "package:app/app.dart": No mapping for 'app' package when resolving 'package:app/app.dart'.
#0 _asyncLoadErrorCallback (dart:_builtin:155)
#1 _asyncLoadError (dart:_builtin:566)
#2 _loadPackage (dart:_builtin:605)
#3 _loadData (dart:_builtin:637)
#4 _loadDataAsync (dart:_builtin:657)
#5 _loadScriptCallback (dart:_builtin:153)
#6 _handleLoaderReply (dart:_builtin:370)
#7 _RawReceivePortImpl._handleMessage (dart:isolate-patch/isolate_patch.dart:148)
'file:///tmp/app/bin/bin.dart': error: line 3 pos 1: library handler failed
import 'package:app/app.dart';
^
#0 Isolate.spawn.<spawn_async_body> (dart:isolate-patch/isolate_patch.dart)
#1 _asyncErrorWrapperHelper.<anonymous closure> (dart:async-patch/async_patch.dart:34)
#2 _RootZone.runBinary (dart:async/zone.dart:1154)
#3 _Future._propagateToListeners.handleError (dart:async/future_impl.dart:579)
#4 _Future._propagateToListeners (dart:async/future_impl.dart:641)
#5 _Future._completeError (dart:async/future_impl.dart:432)
#6 _SyncCompleter._completeError (dart:async/future_impl.dart:56)
#7 _Completer.completeError (dart:async/future_impl.dart:27)
#8 Isolate._spawnCommon.<anonymous closure> (dart:isolate-patch/isolate_patch.dart:413)
#9 _RawReceivePortImpl._handleMessage (dart:isolate-patch/isolate_patch.dart:148)
```
This is a regression. Running `git bisect` indicates that this error appeared as of dart-lang/sdk@6d066c7e53f82b70c3a5bfc4916d8a0c9a26a6f4. It's likely related to dart-lang/pub#1379.
|
1.0
|
Isolate.spawn is broken in snapshots when run near a foreign .packages file - This breakage appears under the following conditions:
* The script contains a `package:` import.
* The script uses `Isolate.spawn()`.
* The script is run from a snapshot.
* The directory containing the snapshot, or a parent of that directory, contains a `.packages` file that's different from the one used to create the snapshot.
To reproduce this, create a package with the following files:
```dart
// bin/bin.dart
import 'dart:isolate';
import 'package:app/app.dart';
main() async {
await Isolate.spawn(entrypoint, null);
}
void entrypoint(_) {
}
```
```dart
// lib/app.dart
// This file can be empty.
```
```yaml
# pubspec.yaml
name: app
```
Run:
```
$ pub get
$ dart --snapshot=bin.dart.snapshot bin/bin.dart
$ rm .packages
$ touch .packages
$ dart bin.dart.snapshot
```
You should see an error like the following:
```
Unhandled exception:
IsolateSpawnException: Unable to spawn isolate: Unhandled exception:
Load Error for "package:app/app.dart": No mapping for 'app' package when resolving 'package:app/app.dart'.
#0 _asyncLoadErrorCallback (dart:_builtin:155)
#1 _asyncLoadError (dart:_builtin:566)
#2 _loadPackage (dart:_builtin:605)
#3 _loadData (dart:_builtin:637)
#4 _loadDataAsync (dart:_builtin:657)
#5 _loadScriptCallback (dart:_builtin:153)
#6 _handleLoaderReply (dart:_builtin:370)
#7 _RawReceivePortImpl._handleMessage (dart:isolate-patch/isolate_patch.dart:148)
'file:///tmp/app/bin/bin.dart': error: line 3 pos 1: library handler failed
import 'package:app/app.dart';
^
#0 Isolate.spawn.<spawn_async_body> (dart:isolate-patch/isolate_patch.dart)
#1 _asyncErrorWrapperHelper.<anonymous closure> (dart:async-patch/async_patch.dart:34)
#2 _RootZone.runBinary (dart:async/zone.dart:1154)
#3 _Future._propagateToListeners.handleError (dart:async/future_impl.dart:579)
#4 _Future._propagateToListeners (dart:async/future_impl.dart:641)
#5 _Future._completeError (dart:async/future_impl.dart:432)
#6 _SyncCompleter._completeError (dart:async/future_impl.dart:56)
#7 _Completer.completeError (dart:async/future_impl.dart:27)
#8 Isolate._spawnCommon.<anonymous closure> (dart:isolate-patch/isolate_patch.dart:413)
#9 _RawReceivePortImpl._handleMessage (dart:isolate-patch/isolate_patch.dart:148)
```
This is a regression. Running `git bisect` indicates that this error appeared as of dart-lang/sdk@6d066c7e53f82b70c3a5bfc4916d8a0c9a26a6f4. It's likely related to dart-lang/pub#1379.
|
defect
|
isolate spawn is broken in snapshots when run near a foreign packages file this breakage appears under the following conditions the script contains a package import the script uses isolate spawn the script is run from a snapshot the directory containing the snapshot or a parent of that directory contains a packages file that s different from the one used to create the snapshot to reproduce this create a package with the following files dart bin bin dart import dart isolate import package app app dart main async await isolate spawn entrypoint null void entrypoint dart lib app dart this file can be empty yaml pubspec yaml name app run pub get dart snapshot bin dart snapshot bin bin dart rm packages touch packages dart bin dart snapshot you should see an error like the following unhandled exception isolatespawnexception unable to spawn isolate unhandled exception load error for package app app dart no mapping for app package when resolving package app app dart asyncloaderrorcallback dart builtin asyncloaderror dart builtin loadpackage dart builtin loaddata dart builtin loaddataasync dart builtin loadscriptcallback dart builtin handleloaderreply dart builtin rawreceiveportimpl handlemessage dart isolate patch isolate patch dart file tmp app bin bin dart error line pos library handler failed import package app app dart isolate spawn dart isolate patch isolate patch dart asyncerrorwrapperhelper dart async patch async patch dart rootzone runbinary dart async zone dart future propagatetolisteners handleerror dart async future impl dart future propagatetolisteners dart async future impl dart future completeerror dart async future impl dart synccompleter completeerror dart async future impl dart completer completeerror dart async future impl dart isolate spawncommon dart isolate patch isolate patch dart rawreceiveportimpl handlemessage dart isolate patch isolate patch dart this is a regression running git bisect indicates that this error appeared as of dart lang sdk it s 
likely related to dart lang pub
| 1
|
16,361
| 2,889,796,463
|
IssuesEvent
|
2015-06-13 19:24:13
|
damonkohler/android-scripting
|
https://api.github.com/repos/damonkohler/android-scripting
|
closed
|
Importing java class files within beanshell
|
auto-migrated Priority-Medium Type-Defect
|
```
What device(s) are you experiencing the problem on?
HTC Legend
What firmware version are you running on the device?
2.1-update1
What steps will reproduce the problem?
1. I put 2 java6 classes (MyStaticClass.class and MyDynamicClass.class) in
/sdcard/myjavaclasses
2. MyStaticClass.doIt() prints "do it"
MyDynamicClass.doIt() returns "do it" String
3. I executed the useMyClasses.bsh script, this is the code:
addClassPath( "/sdcard/myjavaclasses" );
try {
MyStaticClass.doIt();
} catch( e ){
print( e + "\n" );
}
try {
MyDynamicClass p = new MyDynamicClass();
print( p.doIt() );
} catch( e ){
print( e + "\n" );
}
What is the expected output? What do you see instead?
I have the following error. Evaluation error: Sourced file: [...] unknown
error: can't load this type of class file : at line 3 : in file [...]
What version of the product are you using? On what operating system?
sl4a_r2.apk beanshell_for_android_r1.apk on Android 2.1
Please provide any additional information below.
```
Original issue reported on code.google.com by `andrea.i...@gmail.com` on 9 Sep 2010 at 3:04
|
1.0
|
Importing java class files within beanshell - ```
What device(s) are you experiencing the problem on?
HTC Legend
What firmware version are you running on the device?
2.1-update1
What steps will reproduce the problem?
1. I put 2 java6 classes (MyStaticClass.class and MyDynamicClass.class) in
/sdcard/myjavaclasses
2. MyStaticClass.doIt() prints "do it"
MyDynamicClass.doIt() returns "do it" String
3. I executed the useMyClasses.bsh script, this is the code:
addClassPath( "/sdcard/myjavaclasses" );
try {
MyStaticClass.doIt();
} catch( e ){
print( e + "\n" );
}
try {
MyDynamicClass p = new MyDynamicClass();
print( p.doIt() );
} catch( e ){
print( e + "\n" );
}
What is the expected output? What do you see instead?
I have the following error. Evaluation error: Sourced file: [...] unknown
error: can't load this type of class file : at line 3 : in file [...]
What version of the product are you using? On what operating system?
sl4a_r2.apk beanshell_for_android_r1.apk on Android 2.1
Please provide any additional information below.
```
Original issue reported on code.google.com by `andrea.i...@gmail.com` on 9 Sep 2010 at 3:04
|
defect
|
importing java class files within beanshell what device s are you experiencing the problem on htc legend what firmware version are you running on the device what steps will reproduce the problem i put classes mystaticclass class and mydynamicclass class in sdcard myjavaclasses mystaticclass doit prints do it mydynamicclass doit returns do it string i executed the usemyclasses bsh script this is the code addclasspath sdcard myjavaclasses try mystaticclass doit catch e print e n try mydynamicclass p new mydynamicclass print p doit catch e print e n what is the expected output what do you see instead i have the following error evaluation error sourced file unknown error can t load this type of class file at line in file what version of the product are you using on what operating system apk beanshell for android apk on android please provide any additional information below original issue reported on code google com by andrea i gmail com on sep at
| 1
|
28,225
| 5,221,389,134
|
IssuesEvent
|
2017-01-27 01:18:46
|
elTiempoVuela/https-finder
|
https://api.github.com/repos/elTiempoVuela/https-finder
|
closed
|
HTTPS Finder refreshes pages unexpectedly instead of testing for HTTPS in the background
|
auto-migrated Priority-Medium Type-Defect
|
```
What steps will reproduce the problem?
1. Click a Fishbowl e-mail link (Example: Applebees birthday e-mail) that shows
a coupon once
2. HTTPS Finder happily refreshes the page automatically to HTTPS
3. The coupon flashes on the screen then refuses to load again
What is the expected output? What do you see instead?
I wanted it to not automatically refresh, but it automatically refreshed.
What version of the product are you using? On what operating system?
0.85 on OS X 10.7 in Firefox 16.0
Please provide any additional information below.
I like the idea of HTTPS Finder, but why can't it try these things in the
background? I have been whitelisting a ton of sites but this bug takes the
cake. Why can't HTTPS finder, instead of defaulting to HTTPS and giving me the
option to whitelist, say "HTTPS found for this site, do you want to switch?"
It could even be done as an alternate method of "discovery" in the background.
I wager if HTTPS Everywhere already knew that https://applebees.fbmta.com
existed, it wouldn't have had the chance to load the http version first and
would have succeeded in the ViewOnce coupon displaying.
Thanks.
```
Original issue reported on code.google.com by `tob...@gmail.com` on 15 Oct 2012 at 11:27
|
1.0
|
HTTPS Finder refreshes pages unexpectedly instead of testing for HTTPS in the background - ```
What steps will reproduce the problem?
1. Click a Fishbowl e-mail link (Example: Applebees birthday e-mail) that shows
a coupon once
2. HTTPS Finder happily refreshes the page automatically to HTTPS
3. The coupon flashes on the screen then refuses to load again
What is the expected output? What do you see instead?
I wanted it to not automatically refresh, but it automatically refreshed.
What version of the product are you using? On what operating system?
0.85 on OS X 10.7 in Firefox 16.0
Please provide any additional information below.
I like the idea of HTTPS Finder, but why can't it try these things in the
background? I have been whitelisting a ton of sites but this bug takes the
cake. Why can't HTTPS finder, instead of defaulting to HTTPS and giving me the
option to whitelist, say "HTTPS found for this site, do you want to switch?"
It could even be done as an alternate method of "discovery" in the background.
I wager if HTTPS Everywhere already knew that https://applebees.fbmta.com
existed, it wouldn't have had the chance to load the http version first and
would have succeeded in the ViewOnce coupon displaying.
Thanks.
```
Original issue reported on code.google.com by `tob...@gmail.com` on 15 Oct 2012 at 11:27
|
defect
|
https finder refreshes pages unexpectedly instead of testing for https in the background what steps will reproduce the problem click a fishbowl e mail link example applebees birthday e mail that shows a coupon once https finder happily refreshes the page automatically to https the coupon flashes on the screen then refuses to load again what is the expected output what do you see instead i wanted it to not automatically refresh but it automatically refreshed what version of the product are you using on what operating system on os x in firefox please provide any additional information below i like the idea of https finder but why can t it try these things in the background i have been whitelisting a ton of sites but this bug takes the cake why can t https finder instead of defaulting to https and giving me the option to whitelist say https found for this site do you want to switch it could even be done as an alternate method of discovery in the background i wager if https everywhere already knew that existed it wouldn t have had the chance to load the http version first and would have succeeded in the viewonce coupon displaying thanks original issue reported on code google com by tob gmail com on oct at
| 1
|
218,763
| 17,020,041,231
|
IssuesEvent
|
2021-07-02 17:24:30
|
geerlingguy/raspberry-pi-pcie-devices
|
https://api.github.com/repos/geerlingguy/raspberry-pi-pcie-devices
|
closed
|
USB 3.0 uPD720201 working
|
not-on-site-yet testing complete
|
I have been running a uPD720201 based card for a while now.
The card is a [6amLifestyle USB 3.0 PCIe card from amazon](https://www.amazon.de/gp/product/B07S6S9Y96/).
It required a [mad flashing](https://github.com/markusj/upd72020x-load/issues/15) session to working reliably (or use upd72020x-load).
I would assume that any uPD720201 card with decent on-card flash should work out of the box. This chip seems to be a common competitor vs. the VL805.
|
1.0
|
USB 3.0 uPD720201 working - I have been running a uPD720201 based card for a while now.
The card is a [6amLifestyle USB 3.0 PCIe card from amazon](https://www.amazon.de/gp/product/B07S6S9Y96/).
It required a [mad flashing](https://github.com/markusj/upd72020x-load/issues/15) session to working reliably (or use upd72020x-load).
I would assume that any uPD720201 card with decent on-card flash should work out of the box. This chip seems to be a common competitor vs. the VL805.
|
non_defect
|
usb working i have been running a based card for a while now the card is a it required a session to working reliably or use load i would assume that any card with decent on card flash should work out of the box this chip seems to be a common competitor vs the
| 0
|
177,371
| 13,710,956,665
|
IssuesEvent
|
2020-10-02 02:48:11
|
FasterXML/jackson-dataformats-text
|
https://api.github.com/repos/FasterXML/jackson-dataformats-text
|
closed
|
(no Creators, like default construct, exist): no String-argument constructor/factory method to deserialize from String value ('Name')
|
csv need-test-case
|
I'm trying to receive a list of objects, but i have this error. Could you help me?
```
@Service
class DomainService {
fun getDomains(): List<Domain> {
val mapper = CsvMapper()
val csvFile = File("myCsv")
val response = mapper.readerFor(Domain::class.java).readValues<Domain>(csvFile).readAll()
return response
}
}
data class Domain(val name: String){}
```
|
1.0
|
(no Creators, like default construct, exist): no String-argument constructor/factory method to deserialize from String value ('Name') - I'm trying to receive a list of objects, but i have this error. Could you help me?
```
@Service
class DomainService {
fun getDomains(): List<Domain> {
val mapper = CsvMapper()
val csvFile = File("myCsv")
val response = mapper.readerFor(Domain::class.java).readValues<Domain>(csvFile).readAll()
return response
}
}
data class Domain(val name: String){}
```
|
non_defect
|
no creators like default construct exist no string argument constructor factory method to deserialize from string value name i m trying to receive a list of objects but i have this error could you help me service class domainservice fun getdomains list val mapper csvmapper val csvfile file mycsv val response mapper readerfor domain class java readvalues csvfile readall return response data class domain val name string
| 0
|
52,272
| 6,225,953,757
|
IssuesEvent
|
2017-07-10 17:21:25
|
dotnet/coreclr
|
https://api.github.com/repos/dotnet/coreclr
|
closed
|
Test failure: Interop_PrimitiveMarshalling._Bool_BoolTest_BoolTest_/_Bool_BoolTest_BoolTest_cmd
|
arch-arm32 test-run-uwp-coreclr
|
Opened on behalf of @Jiayili1
The test `Interop_PrimitiveMarshalling._Bool_BoolTest_BoolTest_/_Bool_BoolTest_BoolTest_cmd` has failed.
Return code: 1
Raw output file: C:\dotnetbuild\work\424f1bc6-7df7-4bc4-b3fd-842a890e5e8f\Work\e845cd8d-4a00-40c2-8eee-7453008a68a8\Unzip\Reports\Interop.PrimitiveMarshalling\Bool\BoolTest\BoolTest.output.txt
Raw output:
BEGIN EXECUTION\r
"C:\dotnetbuild\work\424f1bc6-7df7-4bc4-b3fd-842a890e5e8f\Payload\corerun.exe" BoolTest.exe \r
Expected: 100\r
Actual: -532462766\r
END EXECUTION - FAILED\r
FAILED\r
Test Harness Exitcode is : 1\r
To run the test:
> set CORE_ROOT=C:\dotnetbuild\work\424f1bc6-7df7-4bc4-b3fd-842a890e5e8f\Payload
> C:\dotnetbuild\work\424f1bc6-7df7-4bc4-b3fd-842a890e5e8f\Work\e845cd8d-4a00-40c2-8eee-7453008a68a8\Unzip\Bool\BoolTest\BoolTest.cmd
\r
Expected: True\r
Actual: False
Stack Trace:
at Interop_PrimitiveMarshalling._Bool_BoolTest_BoolTest_._Bool_BoolTest_BoolTest_cmd()
Build : Master - 20170627.02 (Core Tests)
Failing configurations:
- windows.10.arm64
- arm
Detail: https://mc.dot.net/#/product/netcore/master/source/official~2Fcoreclr~2Fmaster~2F/type/test~2Ffunctional~2Fcli~2F/build/20170627.02/workItem/Interop.PrimitiveMarshalling.XUnitWrapper/analysis/xunit/Interop_PrimitiveMarshalling._Bool_BoolTest_BoolTest_~2F_Bool_BoolTest_BoolTest_cmd
|
1.0
|
Test failure: Interop_PrimitiveMarshalling._Bool_BoolTest_BoolTest_/_Bool_BoolTest_BoolTest_cmd - Opened on behalf of @Jiayili1
The test `Interop_PrimitiveMarshalling._Bool_BoolTest_BoolTest_/_Bool_BoolTest_BoolTest_cmd` has failed.
Return code: 1
Raw output file: C:\dotnetbuild\work\424f1bc6-7df7-4bc4-b3fd-842a890e5e8f\Work\e845cd8d-4a00-40c2-8eee-7453008a68a8\Unzip\Reports\Interop.PrimitiveMarshalling\Bool\BoolTest\BoolTest.output.txt
Raw output:
BEGIN EXECUTION\r
"C:\dotnetbuild\work\424f1bc6-7df7-4bc4-b3fd-842a890e5e8f\Payload\corerun.exe" BoolTest.exe \r
Expected: 100\r
Actual: -532462766\r
END EXECUTION - FAILED\r
FAILED\r
Test Harness Exitcode is : 1\r
To run the test:
> set CORE_ROOT=C:\dotnetbuild\work\424f1bc6-7df7-4bc4-b3fd-842a890e5e8f\Payload
> C:\dotnetbuild\work\424f1bc6-7df7-4bc4-b3fd-842a890e5e8f\Work\e845cd8d-4a00-40c2-8eee-7453008a68a8\Unzip\Bool\BoolTest\BoolTest.cmd
\r
Expected: True\r
Actual: False
Stack Trace:
at Interop_PrimitiveMarshalling._Bool_BoolTest_BoolTest_._Bool_BoolTest_BoolTest_cmd()
Build : Master - 20170627.02 (Core Tests)
Failing configurations:
- windows.10.arm64
- arm
Detail: https://mc.dot.net/#/product/netcore/master/source/official~2Fcoreclr~2Fmaster~2F/type/test~2Ffunctional~2Fcli~2F/build/20170627.02/workItem/Interop.PrimitiveMarshalling.XUnitWrapper/analysis/xunit/Interop_PrimitiveMarshalling._Bool_BoolTest_BoolTest_~2F_Bool_BoolTest_BoolTest_cmd
|
non_defect
|
test failure interop primitivemarshalling bool booltest booltest bool booltest booltest cmd opened on behalf of the test interop primitivemarshalling bool booltest booltest bool booltest booltest cmd has failed return code raw output file c dotnetbuild work work unzip reports interop primitivemarshalling bool booltest booltest output txt raw output begin execution r c dotnetbuild work payload corerun exe booltest exe r expected r actual r end execution failed r failed r test harness exitcode is r to run the test set core root c dotnetbuild work payload c dotnetbuild work work unzip bool booltest booltest cmd r expected true r actual false stack trace at interop primitivemarshalling bool booltest booltest bool booltest booltest cmd build master core tests failing configurations windows arm detail
| 0
|
110,371
| 16,979,871,564
|
IssuesEvent
|
2021-06-30 07:26:19
|
SmartBear/ready-mqtt-plugin
|
https://api.github.com/repos/SmartBear/ready-mqtt-plugin
|
closed
|
CVE-2019-12086 (High) detected in jackson-databind-2.1.4.jar - autoclosed
|
security vulnerability
|
## CVE-2019-12086 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>jackson-databind-2.1.4.jar</b></p></summary>
<p>General data-binding functionality for Jackson: works on core streaming API</p>
<p>Path to dependency file: ready-mqtt-plugin/pom.xml</p>
<p>Path to vulnerable library: /home/wss-scanner/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.1.4/jackson-databind-2.1.4.jar</p>
<p>
Dependency Hierarchy:
- ready-api-soapui-pro-3.3.1.jar (Root Library)
- jasperreports-6.4.0-sb-fixed.jar
- :x: **jackson-databind-2.1.4.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/SmartBear/ready-mqtt-plugin/commit/72456065a443f2258660fde64bebd87fcbc170bb">72456065a443f2258660fde64bebd87fcbc170bb</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
A Polymorphic Typing issue was discovered in FasterXML jackson-databind 2.x before 2.9.9. When Default Typing is enabled (either globally or for a specific property) for an externally exposed JSON endpoint, the service has the mysql-connector-java jar (8.0.14 or earlier) in the classpath, and an attacker can host a crafted MySQL server reachable by the victim, an attacker can send a crafted JSON message that allows them to read arbitrary local files on the server. This occurs because of missing com.mysql.cj.jdbc.admin.MiniAdmin validation.
<p>Publish Date: 2019-05-17
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2019-12086>CVE-2019-12086</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: None
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-12086">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-12086</a></p>
<p>Release Date: 2019-05-17</p>
<p>Fix Resolution: 2.9.9</p>
</p>
</details>
<p></p>
<!-- <REMEDIATE>{"isOpenPROnVulnerability":false,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"Java","groupId":"com.fasterxml.jackson.core","packageName":"jackson-databind","packageVersion":"2.1.4","packageFilePaths":["/pom.xml"],"isTransitiveDependency":true,"dependencyTree":"com.smartbear:ready-api-soapui-pro:3.3.1;net.sf.jasperreports:jasperreports:6.4.0-sb-fixed;com.fasterxml.jackson.core:jackson-databind:2.1.4","isMinimumFixVersionAvailable":true,"minimumFixVersion":"2.9.9"}],"baseBranches":["master"],"vulnerabilityIdentifier":"CVE-2019-12086","vulnerabilityDetails":"A Polymorphic Typing issue was discovered in FasterXML jackson-databind 2.x before 2.9.9. When Default Typing is enabled (either globally or for a specific property) for an externally exposed JSON endpoint, the service has the mysql-connector-java jar (8.0.14 or earlier) in the classpath, and an attacker can host a crafted MySQL server reachable by the victim, an attacker can send a crafted JSON message that allows them to read arbitrary local files on the server. This occurs because of missing com.mysql.cj.jdbc.admin.MiniAdmin validation.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2019-12086","cvss3Severity":"high","cvss3Score":"7.5","cvss3Metrics":{"A":"None","AC":"Low","PR":"None","S":"Unchanged","C":"High","UI":"None","AV":"Network","I":"None"},"extraData":{}}</REMEDIATE> -->
|
True
|
CVE-2019-12086 (High) detected in jackson-databind-2.1.4.jar - autoclosed - ## CVE-2019-12086 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>jackson-databind-2.1.4.jar</b></p></summary>
<p>General data-binding functionality for Jackson: works on core streaming API</p>
<p>Path to dependency file: ready-mqtt-plugin/pom.xml</p>
<p>Path to vulnerable library: /home/wss-scanner/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.1.4/jackson-databind-2.1.4.jar</p>
<p>
Dependency Hierarchy:
- ready-api-soapui-pro-3.3.1.jar (Root Library)
- jasperreports-6.4.0-sb-fixed.jar
- :x: **jackson-databind-2.1.4.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/SmartBear/ready-mqtt-plugin/commit/72456065a443f2258660fde64bebd87fcbc170bb">72456065a443f2258660fde64bebd87fcbc170bb</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
A Polymorphic Typing issue was discovered in FasterXML jackson-databind 2.x before 2.9.9. When Default Typing is enabled (either globally or for a specific property) for an externally exposed JSON endpoint, the service has the mysql-connector-java jar (8.0.14 or earlier) in the classpath, and an attacker can host a crafted MySQL server reachable by the victim, an attacker can send a crafted JSON message that allows them to read arbitrary local files on the server. This occurs because of missing com.mysql.cj.jdbc.admin.MiniAdmin validation.
<p>Publish Date: 2019-05-17
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2019-12086>CVE-2019-12086</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: None
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-12086">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-12086</a></p>
<p>Release Date: 2019-05-17</p>
<p>Fix Resolution: 2.9.9</p>
</p>
</details>
<p></p>
<!-- <REMEDIATE>{"isOpenPROnVulnerability":false,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"Java","groupId":"com.fasterxml.jackson.core","packageName":"jackson-databind","packageVersion":"2.1.4","packageFilePaths":["/pom.xml"],"isTransitiveDependency":true,"dependencyTree":"com.smartbear:ready-api-soapui-pro:3.3.1;net.sf.jasperreports:jasperreports:6.4.0-sb-fixed;com.fasterxml.jackson.core:jackson-databind:2.1.4","isMinimumFixVersionAvailable":true,"minimumFixVersion":"2.9.9"}],"baseBranches":["master"],"vulnerabilityIdentifier":"CVE-2019-12086","vulnerabilityDetails":"A Polymorphic Typing issue was discovered in FasterXML jackson-databind 2.x before 2.9.9. When Default Typing is enabled (either globally or for a specific property) for an externally exposed JSON endpoint, the service has the mysql-connector-java jar (8.0.14 or earlier) in the classpath, and an attacker can host a crafted MySQL server reachable by the victim, an attacker can send a crafted JSON message that allows them to read arbitrary local files on the server. This occurs because of missing com.mysql.cj.jdbc.admin.MiniAdmin validation.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2019-12086","cvss3Severity":"high","cvss3Score":"7.5","cvss3Metrics":{"A":"None","AC":"Low","PR":"None","S":"Unchanged","C":"High","UI":"None","AV":"Network","I":"None"},"extraData":{}}</REMEDIATE> -->
|
non_defect
|
cve high detected in jackson databind jar autoclosed cve high severity vulnerability vulnerable library jackson databind jar general data binding functionality for jackson works on core streaming api path to dependency file ready mqtt plugin pom xml path to vulnerable library home wss scanner repository com fasterxml jackson core jackson databind jackson databind jar dependency hierarchy ready api soapui pro jar root library jasperreports sb fixed jar x jackson databind jar vulnerable library found in head commit a href found in base branch master vulnerability details a polymorphic typing issue was discovered in fasterxml jackson databind x before when default typing is enabled either globally or for a specific property for an externally exposed json endpoint the service has the mysql connector java jar or earlier in the classpath and an attacker can host a crafted mysql server reachable by the victim an attacker can send a crafted json message that allows them to read arbitrary local files on the server this occurs because of missing com mysql cj jdbc admin miniadmin validation publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact high integrity impact none availability impact none for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution isopenpronvulnerability false ispackagebased true isdefaultbranch true packages istransitivedependency true dependencytree com smartbear ready api soapui pro net sf jasperreports jasperreports sb fixed com fasterxml jackson core jackson databind isminimumfixversionavailable true minimumfixversion basebranches vulnerabilityidentifier cve vulnerabilitydetails a polymorphic typing issue was discovered in fasterxml jackson databind x before when default typing is enabled either globally or for a 
specific property for an externally exposed json endpoint the service has the mysql connector java jar or earlier in the classpath and an attacker can host a crafted mysql server reachable by the victim an attacker can send a crafted json message that allows them to read arbitrary local files on the server this occurs because of missing com mysql cj jdbc admin miniadmin validation vulnerabilityurl
| 0
|
39,231
| 9,333,022,315
|
IssuesEvent
|
2019-03-28 13:35:59
|
SasView/temp
|
https://api.github.com/repos/SasView/temp
|
closed
|
Clarify no longer any pure python orientation/magnetism plugin support in docs (Trac #1253)
|
Migrated from Trac SasView defect
|
When @pkienzle converted to the new jitter-view orientation representation he dropped python support, judging that it would be too slow to be useful, and so not worth the extra hassle to maintain it in both python and C (release 0.97). Similarly, he never implemented magnetism for pure python models.
That Users can no longer produce pure python oriented/magnetism plugins needs to be clarified in the documentation (plugin.rst).
Migrated from http://trac.sasview.org/ticket/1253
```json
{
"status": "closed",
"changetime": "2019-03-25T10:38:16",
"_ts": "2019-03-25 10:38:16.458460+00:00",
"description": "When @pkienzle converted to the new jitter-view orientation representation he dropped python support, judging that it would be too slow to be useful, and so not worth the extra hassle to maintain it in both python and C (release 0.97). Similarly, he never implemented magnetism for pure python models.\n\nThat Users can no longer produce pure python oriented/magnetism plugins needs to be clarified in the documentation (plugin.rst).",
"reporter": "smk78",
"cc": "",
"resolution": "fixed",
"workpackage": "SasView Documentation",
"time": "2019-03-22T10:28:31",
"component": "SasView",
"summary": "Clarify no longer any pure python orientation/magnetism plugin support in docs",
"priority": "critical",
"keywords": "",
"milestone": "SasView 4.2.2",
"owner": "smk78",
"type": "defect"
}
```
|
1.0
|
Clarify no longer any pure python orientation/magnetism plugin support in docs (Trac #1253) - When @pkienzle converted to the new jitter-view orientation representation he dropped python support, judging that it would be too slow to be useful, and so not worth the extra hassle to maintain it in both python and C (release 0.97). Similarly, he never implemented magnetism for pure python models.
That Users can no longer produce pure python oriented/magnetism plugins needs to be clarified in the documentation (plugin.rst).
Migrated from http://trac.sasview.org/ticket/1253
```json
{
"status": "closed",
"changetime": "2019-03-25T10:38:16",
"_ts": "2019-03-25 10:38:16.458460+00:00",
"description": "When @pkienzle converted to the new jitter-view orientation representation he dropped python support, judging that it would be too slow to be useful, and so not worth the extra hassle to maintain it in both python and C (release 0.97). Similarly, he never implemented magnetism for pure python models.\n\nThat Users can no longer produce pure python oriented/magnetism plugins needs to be clarified in the documentation (plugin.rst).",
"reporter": "smk78",
"cc": "",
"resolution": "fixed",
"workpackage": "SasView Documentation",
"time": "2019-03-22T10:28:31",
"component": "SasView",
"summary": "Clarify no longer any pure python orientation/magnetism plugin support in docs",
"priority": "critical",
"keywords": "",
"milestone": "SasView 4.2.2",
"owner": "smk78",
"type": "defect"
}
```
|
defect
|
clarify no longer any pure python orientation magnetism plugin support in docs trac when pkienzle converted to the new jitter view orientation representation he dropped python support judging that it would be too slow to be useful and so not worth the extra hassle to maintain it in both python and c release similarly he never implemented magnetism for pure python models that users can no longer produce pure python oriented magnetism plugins needs to be clarified in the documentation plugin rst migrated from json status closed changetime ts description when pkienzle converted to the new jitter view orientation representation he dropped python support judging that it would be too slow to be useful and so not worth the extra hassle to maintain it in both python and c release similarly he never implemented magnetism for pure python models n nthat users can no longer produce pure python oriented magnetism plugins needs to be clarified in the documentation plugin rst reporter cc resolution fixed workpackage sasview documentation time component sasview summary clarify no longer any pure python orientation magnetism plugin support in docs priority critical keywords milestone sasview owner type defect
| 1
|
27,555
| 5,048,292,501
|
IssuesEvent
|
2016-12-20 12:24:18
|
TASVideos/BizHawk
|
https://api.github.com/repos/TASVideos/BizHawk
|
closed
|
Console log window initially displays no output
|
Assigned-zeromus auto-migrated Core-EmuHawk OpSys-Windows Priority-Low Type-Defect
|
```
What steps will reproduce the problem?
1. Install Bizhawk into a new and completely empty folder.
2. Start Bizhawk.
3. Ensure that Config -> GUI -> Log Window as Console is enabled. (This is the
default setting).
4. Open the log window.
5. Open a ROM.
What is the expected output? What do you see instead?
Expected output: ROM information.
Actual output: Nothing. Data is still being written to standard output as
confirmed by starting via the command line.
What version of the product are you using? On what operating system?
r2313, Win 7 x64
Please provide any additional information below.
The desired behavior is achievable by restarting when the log window is open
(on restart, both the console and main frame are visible).
```
Original issue reported on code.google.com by `stop.squark` on 1 Jun 2012 at 3:47
|
1.0
|
Console log window initially displays no output - ```
What steps will reproduce the problem?
1. Install Bizhawk into a new and completely empty folder.
2. Start Bizhawk.
3. Ensure that Config -> GUI -> Log Window as Console is enabled. (This is the
default setting).
4. Open the log window.
5. Open a ROM.
What is the expected output? What do you see instead?
Expected output: ROM information.
Actual output: Nothing. Data is still being written to standard output as
confirmed by starting via the command line.
What version of the product are you using? On what operating system?
r2313, Win 7 x64
Please provide any additional information below.
The desired behavior is achievable by restarting when the log window is open
(on restart, both the console and main frame are visible).
```
Original issue reported on code.google.com by `stop.squark` on 1 Jun 2012 at 3:47
|
defect
|
console log window initially displays no output what steps will reproduce the problem install bizhawk into a new and completely empty folder start bizhawk ensure that config gui log window as console is enabled this is the default setting open the log window open a rom what is the expected output what do you see instead expected output rom information actual output nothing data is still being written to standard output as confirmed by starting via the command line what version of the product are you using on what operating system win please provide any additional information below the desired behavior is achievable by restarting when the log window is open on restart both the console and main frame are visible original issue reported on code google com by stop squark on jun at
| 1
|
281,457
| 8,695,495,746
|
IssuesEvent
|
2018-12-04 15:18:20
|
webcompat/web-bugs
|
https://api.github.com/repos/webcompat/web-bugs
|
closed
|
agscientific.com - design is broken
|
browser-firefox-mobile priority-normal
|
<!-- @browser: Firefox Mobile 64.0 -->
<!-- @ua_header: Mozilla/5.0 (Android 9; Mobile; rv:64.0) Gecko/64.0 Firefox/64.0 -->
<!-- @reported_with: mobile-reporter -->
**URL**: http://agscientific.com/blog/2011/11/side-by-side-comparison-dtt-vs-tcep-preferred-reducing-reagents/
**Browser / Version**: Firefox Mobile 64.0
**Operating System**: Android
**Tested Another Browser**: No
**Problem type**: Design is broken
**Description**: table on page is cut off on mobile view and can't scroll sideways
**Steps to Reproduce**:
<details>
<summary>Browser Configuration</summary>
<ul>
<li>None</li>
</ul>
</details>
_From [webcompat.com](https://webcompat.com/) with ❤️_
|
1.0
|
agscientific.com - design is broken - <!-- @browser: Firefox Mobile 64.0 -->
<!-- @ua_header: Mozilla/5.0 (Android 9; Mobile; rv:64.0) Gecko/64.0 Firefox/64.0 -->
<!-- @reported_with: mobile-reporter -->
**URL**: http://agscientific.com/blog/2011/11/side-by-side-comparison-dtt-vs-tcep-preferred-reducing-reagents/
**Browser / Version**: Firefox Mobile 64.0
**Operating System**: Android
**Tested Another Browser**: No
**Problem type**: Design is broken
**Description**: table on page is cut off on mobile view and can't scroll sideways
**Steps to Reproduce**:
<details>
<summary>Browser Configuration</summary>
<ul>
<li>None</li>
</ul>
</details>
_From [webcompat.com](https://webcompat.com/) with ❤️_
|
non_defect
|
agscientific com design is broken url browser version firefox mobile operating system android tested another browser no problem type design is broken description table on page is cut off on mobile view and can t scroll sideways steps to reproduce browser configuration none from with ❤️
| 0
|
635,512
| 20,404,325,979
|
IssuesEvent
|
2022-02-23 02:12:33
|
sonia-auv/sonia-simulation
|
https://api.github.com/repos/sonia-auv/sonia-simulation
|
closed
|
Unity is running slow, but only took 1% cpu usage and 2% gpu
|
Priority: High Type: Bug
|
## Expected Behavior
The simulation should be smooth and responsive.
## Current Behavior
Currently, the simulation is locked at 10 fps. There is also input latency.
## Possible Solution
check for optimisation for ros msg.
|
1.0
|
Unity is running slow, but only took 1% cpu usage and 2% gpu -
## Expected Behavior
The simulation should be smooth and responsive.
## Current Behavior
Currently, the simulation is locked at 10 fps. There is also input latency.
## Possible Solution
check for optimisation for ros msg.
|
non_defect
|
unity is running slow but only took cpu usage and gpu expected behavior the simulation should be smooth and responsive current behavior curently the simulation is lock at also there is a input latency possible solution check for optimisation for ros msg
| 0
|
40,479
| 10,014,630,571
|
IssuesEvent
|
2019-07-15 18:00:47
|
sm0svx/svxlink
|
https://api.github.com/repos/sm0svx/svxlink
|
closed
|
AsyncTcpClient unable to handle unreliable internet connection
|
T: defect
|
Function connectRemote is not called when the reconnection timer expires and a previous connect
is still pending. It would be better to cancel the previous connection and try to establish a new one.
This is essential for using svxlink in unreliable 3G/4G network environments.
|
1.0
|
AsyncTcpClient unable to handle unreliable internet connection - Function connectRemote is not called when the reconnection timer expires and a previous connect
is still pending. It would be better to cancel the previous connection and try to establish a new one.
This is essential for using svxlink in unreliable 3G/4G network environments.
|
defect
|
asynctcpclient unable to handle unreliable internet connection function connectremote is not called when the reconnection timer expires and previous connect is still pending it would be better to cancel previous connection and try to establish a new connection this essential for using svxlink in unreliable networks environments
| 1
|
2,570
| 2,607,928,833
|
IssuesEvent
|
2015-02-26 00:25:49
|
chrsmithdemos/minify
|
https://api.github.com/repos/chrsmithdemos/minify
|
opened
|
preg_replace_callback crashes on PHP 5.1.6
|
auto-migrated Priority-High Type-Defect
|
```
Minify version: 2.1.0
PHP version: 5.1.6
What steps will reproduce the problem?
1. Minify the attached CSS file
Expected output: A minified CSS
Actual output: Nothing, Apache (2.0.61) crashes
* When I switch to PHP 5.2.5, it works fine.
* When I comment the "remove ws in selectors", it works.
* _selectorsCB is not called.
Hi,
Even if I can use PHP 5.2.5, I have to develop with PHP 5.1.6.
Can you fix this ?
Thanks in advance
```
-----
Original issue reported on code.google.com by `coelho....@gmail.com` on 15 Oct 2008 at 2:03
Attachments:
* [get.css](https://storage.googleapis.com/google-code-attachments/minify/issue-62/comment-0/get.css)
|
1.0
|
preg_replace_callback crashes on PHP 5.1.6 - ```
Minify version: 2.1.0
PHP version: 5.1.6
What steps will reproduce the problem?
1. Minify the attached CSS file
Expected output: A minified CSS
Actual output: Nothing, Apache (2.0.61) crashes
* When I switch to PHP 5.2.5, it works fine.
* When I comment the "remove ws in selectors", it works.
* _selectorsCB is not called.
Hi,
Even if I can use PHP 5.2.5, I have to develop with PHP 5.1.6.
Can you fix this ?
Thanks in advance
```
-----
Original issue reported on code.google.com by `coelho....@gmail.com` on 15 Oct 2008 at 2:03
Attachments:
* [get.css](https://storage.googleapis.com/google-code-attachments/minify/issue-62/comment-0/get.css)
|
defect
|
preg replace callback crashes on php minify version php version what steps will reproduce the problem minify the attached css file expected output a minified css actual output nothing apache crashes when i switch to php it works fine when i comment the remove ws in selectors it works selectorscb is not called hi even if i can use php i have to develop with php can you fix this thanks in advance original issue reported on code google com by coelho gmail com on oct at attachments
| 1
|
88,942
| 17,754,907,255
|
IssuesEvent
|
2021-08-28 15:06:14
|
D1sconnected/sandbox
|
https://api.github.com/repos/D1sconnected/sandbox
|
opened
|
Arrays - Inserting, Deleting, Search
|
С LeetCode Arrays
|
## What is the goal of this task?
Complete the tasks from the following blocks:
1. Inserting Items Into an Array
1.1. [Duplicate Zeros](https://leetcode.com/explore/learn/card/fun-with-arrays/525/inserting-items-into-an-array/3245/)
1.2. [Merge Sorted Array](https://leetcode.com/explore/learn/card/fun-with-arrays/525/inserting-items-into-an-array/3253/)
2. Deleting Items From an Array
2.1. [Remove Element](https://leetcode.com/explore/learn/card/fun-with-arrays/526/deleting-items-from-an-array/3247/)
2.2. [Remove Duplicates from Sorted Array](https://leetcode.com/explore/learn/card/fun-with-arrays/526/deleting-items-from-an-array/3248/)
3. Searching for Items in an Array
3.1 [Check If N and Its Double Exist](https://leetcode.com/explore/learn/card/fun-with-arrays/527/searching-for-items-in-an-array/3250/)
3.2. [Valid Mountain Array](https://leetcode.com/explore/learn/card/fun-with-arrays/527/searching-for-items-in-an-array/3251/)
## What is the output?
* [ ] Inserting Items Into an Array
* [ ] Deleting Items From an Array
* [ ] Searching for Items in an Array
|
1.0
|
Arrays - Inserting, Deleting, Search - ## What is the goal of this task?
Complete the tasks from the following blocks:
1. Inserting Items Into an Array
1.1. [Duplicate Zeros](https://leetcode.com/explore/learn/card/fun-with-arrays/525/inserting-items-into-an-array/3245/)
1.2. [Merge Sorted Array](https://leetcode.com/explore/learn/card/fun-with-arrays/525/inserting-items-into-an-array/3253/)
2. Deleting Items From an Array
2.1. [Remove Element](https://leetcode.com/explore/learn/card/fun-with-arrays/526/deleting-items-from-an-array/3247/)
2.2. [Remove Duplicates from Sorted Array](https://leetcode.com/explore/learn/card/fun-with-arrays/526/deleting-items-from-an-array/3248/)
3. Searching for Items in an Array
3.1 [Check If N and Its Double Exist](https://leetcode.com/explore/learn/card/fun-with-arrays/527/searching-for-items-in-an-array/3250/)
3.2. [Valid Mountain Array](https://leetcode.com/explore/learn/card/fun-with-arrays/527/searching-for-items-in-an-array/3251/)
## What is the output?
* [ ] Inserting Items Into an Array
* [ ] Deleting Items From an Array
* [ ] Searching for Items in an Array
|
non_defect
|
arrays inserting deleting search what is the goal of this task complete the tasks from the following blocks inserting items into an array deleting items from an array searching for items in an array what is the output inserting items into an array deleting items from an array searching for items in an array
| 0
|
142,104
| 19,058,121,197
|
IssuesEvent
|
2021-11-26 01:05:51
|
dreamboy9/mongo
|
https://api.github.com/repos/dreamboy9/mongo
|
closed
|
CVE-2019-20838 (High) detected in mongor5.0.0-rc5 - autoclosed
|
security vulnerability
|
## CVE-2019-20838 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>mongor5.0.0-rc5</b></p></summary>
<p>
<p>The MongoDB Database</p>
<p>Library home page: <a href=https://github.com/mongodb/mongo.git>https://github.com/mongodb/mongo.git</a></p>
<p>Found in HEAD commit: <a href="https://github.com/dreamboy9/mongo/commit/60ef70ebd8d46f4c893b3fb90ccf2616f8e21d2b">60ef70ebd8d46f4c893b3fb90ccf2616f8e21d2b</a></p>
<p>Found in base branch: <b>master</b></p></p>
</details>
</p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Source Files (3)</summary>
<p></p>
<p>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>mongo/src/third_party/pcre-8.42/pcre_jit_compile.c</b>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>mongo/src/third_party/pcre-8.42/pcre_jit_compile.c</b>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>mongo/src/third_party/pcre-8.42/pcre_jit_compile.c</b>
</p>
</details>
<p></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
libpcre in PCRE before 8.43 allows a subject buffer over-read in JIT when UTF is disabled, and \X or \R has more than one fixed quantifier, a related issue to CVE-2019-20454.
<p>Publish Date: 2020-06-15
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2019-20838>CVE-2019-20838</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-20838">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-20838</a></p>
<p>Release Date: 2020-06-15</p>
<p>Fix Resolution: 8.43</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
True
|
CVE-2019-20838 (High) detected in mongor5.0.0-rc5 - autoclosed - ## CVE-2019-20838 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>mongor5.0.0-rc5</b></p></summary>
<p>
<p>The MongoDB Database</p>
<p>Library home page: <a href=https://github.com/mongodb/mongo.git>https://github.com/mongodb/mongo.git</a></p>
<p>Found in HEAD commit: <a href="https://github.com/dreamboy9/mongo/commit/60ef70ebd8d46f4c893b3fb90ccf2616f8e21d2b">60ef70ebd8d46f4c893b3fb90ccf2616f8e21d2b</a></p>
<p>Found in base branch: <b>master</b></p></p>
</details>
</p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Source Files (3)</summary>
<p></p>
<p>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>mongo/src/third_party/pcre-8.42/pcre_jit_compile.c</b>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>mongo/src/third_party/pcre-8.42/pcre_jit_compile.c</b>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>mongo/src/third_party/pcre-8.42/pcre_jit_compile.c</b>
</p>
</details>
<p></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
libpcre in PCRE before 8.43 allows a subject buffer over-read in JIT when UTF is disabled, and \X or \R has more than one fixed quantifier, a related issue to CVE-2019-20454.
<p>Publish Date: 2020-06-15
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2019-20838>CVE-2019-20838</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-20838">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-20838</a></p>
<p>Release Date: 2020-06-15</p>
<p>Fix Resolution: 8.43</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
non_defect
|
cve high detected in autoclosed cve high severity vulnerability vulnerable library the mongodb database library home page a href found in head commit a href found in base branch master vulnerable source files mongo src third party pcre pcre jit compile c mongo src third party pcre pcre jit compile c mongo src third party pcre pcre jit compile c vulnerability details libpcre in pcre before allows a subject buffer over read in jit when utf is disabled and x or r has more than one fixed quantifier a related issue to cve publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution step up your open source security game with whitesource
| 0
|
61,194
| 8,494,336,976
|
IssuesEvent
|
2018-10-28 20:34:23
|
jupyterlab/jupyterlab
|
https://api.github.com/repos/jupyterlab/jupyterlab
|
closed
|
Describe terminal copy/paste operation to windows user
|
help wanted pkg:terminal tag:Documentation type:Enhancement
|
Using the terminal under Linux is pretty comfortable, since copy/paste mostly happens via text selection and double-click.
I would suggest indicating to users the generic copy/paste hotkeys that hardly anyone uses but that will work under Windows:
- Cut: Shift + Delete
- Copy : Ctrl + Insert
- Paste: Shift + Insert
Ctrl+C, for instance, is reserved under Linux for sending SIGINT. Some browser extensions override Ctrl+Shift+C, for instance.
|
1.0
|
Describe terminal copy/paste operation to windows user - Using the terminal under Linux is pretty comfortable, since copy/paste mostly happens via text selection and double-click.
I would suggest indicating to users the generic copy/paste hotkeys that hardly anyone uses but that will work under Windows:
- Cut: Shift + Delete
- Copy : Ctrl + Insert
- Paste: Shift + Insert
Ctrl+C, for instance, is reserved under Linux for sending SIGINT. Some browser extensions override Ctrl+Shift+C, for instance.
|
non_defect
|
describe terminal copy paste operation to windows user using the terminal under linux is pretty confortable since copy paste occurs moste of the time on text selection on double mouse i would suggest to indicate to users the generic copy paste hotkey that nobody use but will work under windows cut shift delete copy ctrl insert paste shift insert ctrl c for instance is reserved under linux for sigkill some browser extentions override ctrl shift c for instance
| 0
|
281,815
| 24,421,910,880
|
IssuesEvent
|
2022-10-05 21:10:05
|
microsoft/vscode
|
https://api.github.com/repos/microsoft/vscode
|
closed
|
Test item range is wrong when workspace edit is applied
|
bug testing
|
<!-- ⚠️⚠️ Do Not Delete This! bug_report_template ⚠️⚠️ -->
<!-- Please read our Rules of Conduct: https://opensource.microsoft.com/codeofconduct/ -->
<!-- 🕮 Read our guide about submitting issues: https://github.com/microsoft/vscode/wiki/Submitting-Bugs-and-Suggestions -->
<!-- 🔎 Search existing issues to avoid creating duplicates. -->
<!-- 🧪 Test using the latest Insiders build to see if your issue has already been fixed: https://code.visualstudio.com/insiders/ -->
<!-- 💡 Instead of creating your report here, use 'Report Issue' from the 'Help' menu in VS Code to pre-fill useful information. -->
<!-- 🔧 Launch with `code --disable-extensions` to check. -->
Does this issue occur when all extensions are disabled?: Yes/No
<!-- 🪓 If you answered No above, use 'Help: Start Extension Bisect' from Command Palette to try to identify the cause. -->
<!-- 📣 Issues caused by an extension need to be reported directly to the extension publisher. The 'Help > Report Issue' dialog can assist with this. -->
Version: 1.66.1 (user setup)
Commit: 8dfae7a5cd50421d10cd99cb873990460525a898
Date: 2022-04-06T14:50:12.141Z
Electron: 17.2.0
Chromium: 98.0.4758.109
Node.js: 16.13.0
V8: 9.8.177.11-electron.0
OS: Windows_NT x64 10.0.22000
Steps to Reproduce:
1. Clone the repository: https://github.com/jdneo/test-api-issue.
2. Setup the extension and launch it.
3. Open `test.txt`
4. Trigger command `Hello World`, which will apply a workspace edit.
5. One test item disappear and the range is different from that set via api.
https://user-images.githubusercontent.com/6193897/162564176-84c73b2e-ef0a-4031-8000-eb67878ac57d.mp4
Note: From my observation, this error relates to the number of lines between the 'parent' and 'child' items. When the number of lines added by the workspace edit is the same as the number of lines between the parent and child, this will happen. If they are different, everything works fine:
https://user-images.githubusercontent.com/6193897/162564220-bbcd9564-5b6d-445f-a6f9-c1b56d4c2ea7.mp4
|
1.0
|
Test item range is wrong when workspace edit is applied - <!-- ⚠️⚠️ Do Not Delete This! bug_report_template ⚠️⚠️ -->
<!-- Please read our Rules of Conduct: https://opensource.microsoft.com/codeofconduct/ -->
<!-- 🕮 Read our guide about submitting issues: https://github.com/microsoft/vscode/wiki/Submitting-Bugs-and-Suggestions -->
<!-- 🔎 Search existing issues to avoid creating duplicates. -->
<!-- 🧪 Test using the latest Insiders build to see if your issue has already been fixed: https://code.visualstudio.com/insiders/ -->
<!-- 💡 Instead of creating your report here, use 'Report Issue' from the 'Help' menu in VS Code to pre-fill useful information. -->
<!-- 🔧 Launch with `code --disable-extensions` to check. -->
Does this issue occur when all extensions are disabled?: Yes/No
<!-- 🪓 If you answered No above, use 'Help: Start Extension Bisect' from Command Palette to try to identify the cause. -->
<!-- 📣 Issues caused by an extension need to be reported directly to the extension publisher. The 'Help > Report Issue' dialog can assist with this. -->
Version: 1.66.1 (user setup)
Commit: 8dfae7a5cd50421d10cd99cb873990460525a898
Date: 2022-04-06T14:50:12.141Z
Electron: 17.2.0
Chromium: 98.0.4758.109
Node.js: 16.13.0
V8: 9.8.177.11-electron.0
OS: Windows_NT x64 10.0.22000
Steps to Reproduce:
1. Clone the repository: https://github.com/jdneo/test-api-issue.
2. Setup the extension and launch it.
3. Open `test.txt`
4. Trigger command `Hello World`, which will apply a workspace edit.
5. One test item disappear and the range is different from that set via api.
https://user-images.githubusercontent.com/6193897/162564176-84c73b2e-ef0a-4031-8000-eb67878ac57d.mp4
Note: From my observation, this error relates to the number of lines between the 'parent' and 'child' items. When the number of lines added by the workspace edit is the same as the number of lines between the parent and child, this will happen. If they are different, everything works fine:
https://user-images.githubusercontent.com/6193897/162564220-bbcd9564-5b6d-445f-a6f9-c1b56d4c2ea7.mp4
|
non_defect
|
test item range is wrong when workspace edit is applied does this issue occur when all extensions are disabled yes no report issue dialog can assist with this version user setup commit date electron chromium node js electron os windows nt steps to reproduce clone the repository setup the extension and launch it open test txt trigger command hello world which will apply a workspace edit one test item disappear and the range is different from that set via api note from my observation this error relates with how many lines between the parent and child item when the lines added by the workspace edit is the same as the lines between the parent and child then this will happen if they are different then everything works fine
| 0
|
145,099
| 19,319,792,645
|
IssuesEvent
|
2021-12-14 03:15:07
|
AOSC-Dev/aosc-os-abbs
|
https://api.github.com/repos/AOSC-Dev/aosc-os-abbs
|
opened
|
firefox: update to 95.0
|
security
|
### CVE IDs
CVE-2021-43536,CVE-2021-43537,CVE-2021-43538,CVE-2021-43539,CVE-2021-43540,CVE-2021-43541,CVE-2021-43542,CVE-2021-43543,CVE-2021-43544,CVE-2021-43545,CVE-2021-43546
### Other security advisory IDs
N/A
### Description
https://www.mozilla.org/en-US/security/advisories/mfsa2021-52/
### Patches
N/A
### PoC(s)
N/A
|
True
|
firefox: update to 95.0 - ### CVE IDs
CVE-2021-43536,CVE-2021-43537,CVE-2021-43538,CVE-2021-43539,CVE-2021-43540,CVE-2021-43541,CVE-2021-43542,CVE-2021-43543,CVE-2021-43544,CVE-2021-43545,CVE-2021-43546
### Other security advisory IDs
N/A
### Description
https://www.mozilla.org/en-US/security/advisories/mfsa2021-52/
### Patches
N/A
### PoC(s)
N/A
|
non_defect
|
firefox update to cve ids cve cve cve cve cve cve cve cve cve cve cve other security advisory ids n a description patches n a poc s n a
| 0
|
17,634
| 3,631,877,545
|
IssuesEvent
|
2016-02-11 05:30:17
|
dotnet/roslyn
|
https://api.github.com/repos/dotnet/roslyn
|
opened
|
Add Snippets integration test for caret non-movement on Enter outside snippet fields
|
Area-IDE Test
|
Add a test to cover the scenario fixed in https://github.com/dotnet/roslyn/pull/8426
|
1.0
|
Add Snippets integration test for caret non-movement on Enter outside snippet fields - Add a test to cover the scenario fixed in https://github.com/dotnet/roslyn/pull/8426
|
non_defect
|
add snippets integration test for caret non movement on enter outside snippet fields add a test to cover the scenario fixed in
| 0
|
33,708
| 7,200,400,851
|
IssuesEvent
|
2018-02-05 18:56:04
|
DivinumOfficium/divinum-officium
|
https://api.github.com/repos/DivinumOfficium/divinum-officium
|
closed
|
Sancta Missa: Vigil of the Apostles
|
Priority-Medium Rubric-Divino Rubric-Tridentine Type-Defect auto-migrated
|
```
The Mass from the Vigil of the Apostles (Ego Autem,
~/missa/Latin/Commune/C1v.txt) is not appearing for 11/29 and for 12/20 in the
Divino Afflatu and earlier forms of the Mass.
```
Original issue reported on code.google.com by `canon.mi...@gmail.com` on 16 Dec 2011 at 5:47
|
1.0
|
Sancta Missa: Vigil of the Apostles - ```
The Mass from the Vigil of the Apostles (Ego Autem,
~/missa/Latin/Commune/C1v.txt) is not appearing for 11/29 and for 12/20 in the
Divino Afflatu and earlier forms of the Mass.
```
Original issue reported on code.google.com by `canon.mi...@gmail.com` on 16 Dec 2011 at 5:47
|
defect
|
sancta missa vigil of the apostles the mass from the vigil of the apostles ego autem missa latin commune txt is not appearing for and for in the divino afflatu and earlier forms of the mass original issue reported on code google com by canon mi gmail com on dec at
| 1
|
4,756
| 2,610,154,851
|
IssuesEvent
|
2015-02-26 18:49:14
|
chrsmith/republic-at-war
|
https://api.github.com/repos/chrsmith/republic-at-war
|
closed
|
Text
|
auto-migrated Priority-Medium Type-Defect
|
```
Sabaoth Squadron missing texts
```
-----
Original issue reported on code.google.com by `z3r0...@gmail.com` on 30 Jan 2011 at 2:15
|
1.0
|
Text - ```
Sabaoth Squadron missing texts
```
-----
Original issue reported on code.google.com by `z3r0...@gmail.com` on 30 Jan 2011 at 2:15
|
defect
|
text sabaoth squadron missing texts original issue reported on code google com by gmail com on jan at
| 1
|
242,816
| 20,265,068,092
|
IssuesEvent
|
2022-02-15 11:14:24
|
ably/ably-go
|
https://api.github.com/repos/ably/ably-go
|
opened
|
Test improvements - Standardise test assertions
|
tests
|
Currently this project uses two different ways of asserting in its tests.
In some places like `error_test.go` the `github.com/stretchr/testify/assert` package is used to assert values are equal with `assert.Equal`, values are true with `assert.True` etc.
This project also appears to have 'rolled its own' assertion code in `ably_test.go` where there are functions named `assertEquals`, `assertTrue` etc. Our own assertion code in `ably_test.go` is not covered by any unit tests.
We should standardise our approach to making assertions in tests. As we already use the `/testify/assert` package, which has 15.5k stars on GitHub, we should not have to reinvent the wheel and also write our own assertion code.
The advantages of using a widely adopted assertion package like `/testify/assert` are that this package is already covered by many unit tests and is actively maintained and updated. Its API has been well designed and thought out, making tests that use it highly readable.
I would propose we move from partial adoption of `/testify/assert` and fully adopt this package. This would allow us to delete the assertion code in `ably_test.go`.
|
1.0
|
Test improvements - Standardise test assertions - Currently this project uses two different ways of asserting in its tests.
In some places like `error_test.go` the `github.com/stretchr/testify/assert` package is used to assert values are equal with `assert.Equal`, values are true with `assert.True` etc.
This project also appears to have 'rolled its own' assertion code in `ably_test.go` where there are functions named `assertEquals`, `assertTrue` etc. Our own assertion code in `ably_test.go` is not covered by any unit tests.
We should standardise our approach to making assertions in tests. As we already use the `/testify/assert` package, which has 15.5k stars on GitHub, we should not have to reinvent the wheel and also write our own assertion code.
The advantages of using a widely adopted assertion package like `/testify/assert` are that this package is already covered by many unit tests and is actively maintained and updated. Its API has been well designed and thought out, making tests that use it highly readable.
I would propose we move from partial adoption of `/testify/assert` and fully adopt this package. This would allow us to delete the assertion code in `ably_test.go`.
|
non_defect
|
test improvements standardise test assertions currently this project uses two different ways of asserting in its tests in some places like error test go the github com stretchr testify assert package is used to assert values are equal with assert equal values are true with assert true etc this project also appears to have rolled its own assertion code in ably test go where there are functions named assertequals asserttrue etc our own assertion code in ably test go is not covered by any unit tests we should standardise our approach to making assertions in tests as we already use the testify assert package which has stars on github we should not have to reinvent the wheel and also write our own assertion code the advantages of using a widely adopted assertion package like testify assert are that this package is already covered by many of unit tests is actively maintained and updated it s api has been well designed and thought out making tests that use it highly readable i would propose we move from partial adoption of testify assert and fully adopt this package this would allow us to delete the assertion code in ably test go
| 0
|
540,984
| 15,819,926,940
|
IssuesEvent
|
2021-04-05 18:12:48
|
buttercup/buttercup-desktop
|
https://api.github.com/repos/buttercup/buttercup-desktop
|
closed
|
Shared files in GoogleDrive are not shown
|
Priority: Medium Status: In Progress Type: Bug
|
When connecting GoogleDrive as a cloud source shared files (as in files shared with me) don't show up.
I think I found a fix for it in https://github.com/buttercup/file-interface/pull/2 but I need someone to review and test it on other platforms.
Any feedback would be much appreciated.
|
1.0
|
Shared files in GoogleDrive are not shown - When connecting GoogleDrive as a cloud source shared files (as in files shared with me) don't show up.
I think I found a fix for it in https://github.com/buttercup/file-interface/pull/2 but I need someone to review and test it on other platforms.
Any feedback would be much appreciated.
|
non_defect
|
shared files in googledrive are not shown when connecting googledrive as a cloud source shared files as in files shared with me don t show up i think i found a fix for it in but i need someone to review and test it on other platforms any feedback would be much appreciated
| 0
|
19,247
| 3,167,867,897
|
IssuesEvent
|
2015-09-22 00:43:18
|
prettydiff/prettydiff
|
https://api.github.com/repos/prettydiff/prettydiff
|
closed
|
Minify and ternarius-js-code
|
Defect Underway
|
simple example:
```js
if(1)
var a = 1
? 2
: 3;
```
=>
```js
if(1)var a=1;?2:3;
```
need without ";"
```js
if(1)var a=1?2:3;
```
|
1.0
|
Minify and ternarius-js-code - simple example:
```js
if(1)
var a = 1
? 2
: 3;
```
=>
```js
if(1)var a=1;?2:3;
```
need without ";"
```js
if(1)var a=1?2:3;
```
|
defect
|
minify and ternarius js code simple example js if var a js if var a need without js if var a
| 1
|
79,648
| 28,495,914,241
|
IssuesEvent
|
2023-04-18 14:12:11
|
vector-im/element-desktop
|
https://api.github.com/repos/vector-im/element-desktop
|
opened
|
Element-desktop starts hidden - can't disable
|
T-Defect
|
### Description
When enabling `Start automatically after system login` in the Ubuntu Element-desktop package, the following command is added to the list of startup applications:
`/opt/Element/element-desktop --hidden`
There is no way to disable the --hidden flag. If manually removed, it is automatically added back on the next run of Element-desktop.
Also, when starting automatically, Element does not remember its last window position.
### Steps to reproduce
1. Enable `Start automatically after system login`
2. Log out, and back in
3. Element starts hidden
### Version information
- **Platform**: desktop
- **OS**: Ubuntu 20.04
- **Version**: 1.7.17
|
1.0
|
Element-desktop starts hidden - can't disable - ### Description
When enabling `Start automatically after system login` in the Ubuntu Element-desktop package, the following command is added to the list of startup applications:
`/opt/Element/element-desktop --hidden`
There is no way to disable the --hidden flag. If manually removed, it is automatically added back on the next run of Element-desktop.
Also, when starting automatically, Element does not remember its last window position.
### Steps to reproduce
1. Enable `Start automatically after system login`
2. Log out, and back in
3. Element starts hidden
### Version information
- **Platform**: desktop
- **OS**: Ubuntu 20.04
- **Version**: 1.7.17
|
defect
|
element desktop starts hidden can t disable description when enabling start automatically after system login in the ubuntu element desktop package the following command is added to the list of startup applications opt element element desktop hidden there is no way to disable the hidden flag if manually removed it is automatically added back on the next run of element desktop also when starting automatically element does not remember its last window position steps to reproduce enable start automatically after system login log out and back in element starts hidden version information platform desktop os ubuntu version
| 1
|
57,762
| 16,045,993,085
|
IssuesEvent
|
2021-04-22 13:42:44
|
snowplow/snowplow-android-tracker
|
https://api.github.com/repos/snowplow/snowplow-android-tracker
|
closed
|
Crash when sending screen event in beta version 2.0.0
|
type:defect
|
**Describe the bug**
When I send an event the app crashes.
**To Reproduce**
Send a screen event with the following dependencies.
```kotlin
@AnalyticsScope
@Provides
fun provideNetworkConfiguration(): NetworkConfiguration {
return NetworkConfiguration(
analyticsTrackerConfig.endpoint
)
}
@AnalyticsScope
@Provides
fun provideTrackerConfiguration(): TrackerConfiguration {
return TrackerConfiguration(analyticsTrackerConfig.appId)
.logLevel(LogLevel.DEBUG)
.installAutotracking(false)
}
@AnalyticsScope
@Provides
fun provideTrackerController(
networkConfiguration: NetworkConfiguration,
trackerConfiguration: TrackerConfiguration,
): TrackerController {
return Snowplow.createTracker(
analyticsTrackerConfig.context,
analyticsTrackerConfig.namespace,
networkConfiguration,
trackerConfiguration
)
}
```
Event code:
```kotlin
fun screenViewEvent(screenEvent: ScreenEvent) {
trackerController.track(
ScreenView.builder()
.id(screenEvent.id)
.name(screenEvent.name)
.type(screenEvent.type)
.previousName(screenEvent.previousName)
.previousType(screenEvent.previousType)
.build()
)
}
```
**Expected behavior**
The app should not crash.
**Screenshots**
If applicable, add screenshots to help explain your problem.
**Device information (please complete the following information):**
- Device: Pixel 2 XL
- OS: Android 11
- Browser N/A
- Version 2.0.0 beta 1
**Additional context**
`2021-04-22 11:09:30.013 7671-7671/cartrawler.integration E/AndroidRuntime: FATAL EXCEPTION: main
Process: cartrawler.integration, PID: 7671
java.lang.NoClassDefFoundError: Failed resolution of: Landroidx/lifecycle/ProcessLifecycleOwner;
at com.snowplowanalytics.snowplow.internal.tracker.Tracker$5.run(Tracker.java:574)
at android.os.Handler.handleCallback(Handler.java:938)
at android.os.Handler.dispatchMessage(Handler.java:99)
at android.os.Looper.loop(Looper.java:223)
at android.app.ActivityThread.main(ActivityThread.java:7656)
at java.lang.reflect.Method.invoke(Native Method)
at com.android.internal.os.RuntimeInit$MethodAndArgsCaller.run(RuntimeInit.java:592)
at com.android.internal.os.ZygoteInit.main(ZygoteInit.java:947)
Caused by: java.lang.ClassNotFoundException: Didn't find class "androidx.lifecycle.ProcessLifecycleOwner" on path: DexPathList[[zip file "/data/app/~~niY9nc6cWz2RINKsmkwIvw==/cartrawler.integration-CL7zSPlc7Tp8KAmxPeukTw==/base.apk"],nativeLibraryDirectories=[/data/app/~~niY9nc6cWz2RINKsmkwIvw==/cartrawler.integration-CL7zSPlc7Tp8KAmxPeukTw==/lib/arm64, /system/lib64, /system/product/lib64]]
at dalvik.system.BaseDexClassLoader.findClass(BaseDexClassLoader.java:207)
at java.lang.ClassLoader.loadClass(ClassLoader.java:379)
at java.lang.ClassLoader.loadClass(ClassLoader.java:312)
at com.snowplowanalytics.snowplow.internal.tracker.Tracker$5.run(Tracker.java:574)
at android.os.Handler.handleCallback(Handler.java:938)
at android.os.Handler.dispatchMessage(Handler.java:99)
at android.os.Looper.loop(Looper.java:223)
at android.app.ActivityThread.main(ActivityThread.java:7656)
at java.lang.reflect.Method.invoke(Native Method)
at com.android.internal.os.RuntimeInit$MethodAndArgsCaller.run(RuntimeInit.java:592)
at com.android.internal.os.ZygoteInit.main(ZygoteInit.java:947) `
|
1.0
|
Crash when sending screen event in beta version 2.0.0 - **Describe the bug**
When I send an event the app crashes.
**To Reproduce**
Send a screen event with the following dependencies.
```kotlin
@AnalyticsScope
@Provides
fun provideNetworkConfiguration(): NetworkConfiguration {
return NetworkConfiguration(
analyticsTrackerConfig.endpoint
)
}
@AnalyticsScope
@Provides
fun provideTrackerConfiguration(): TrackerConfiguration {
return TrackerConfiguration(analyticsTrackerConfig.appId)
.logLevel(LogLevel.DEBUG)
.installAutotracking(false)
}
@AnalyticsScope
@Provides
fun provideTrackerController(
networkConfiguration: NetworkConfiguration,
trackerConfiguration: TrackerConfiguration,
): TrackerController {
return Snowplow.createTracker(
analyticsTrackerConfig.context,
analyticsTrackerConfig.namespace,
networkConfiguration,
trackerConfiguration
)
}
```
Event code:
```kotlin
fun screenViewEvent(screenEvent: ScreenEvent) {
trackerController.track(
ScreenView.builder()
.id(screenEvent.id)
.name(screenEvent.name)
.type(screenEvent.type)
.previousName(screenEvent.previousName)
.previousType(screenEvent.previousType)
.build()
)
}
```
**Expected behavior**
The app should not crash.
**Screenshots**
If applicable, add screenshots to help explain your problem.
**Device information (please complete the following information):**
- Device: Pixel 2 XL
- OS: Android 11
- Browser N/A
- Version 2.0.0 beta 1
**Additional context**
`2021-04-22 11:09:30.013 7671-7671/cartrawler.integration E/AndroidRuntime: FATAL EXCEPTION: main
Process: cartrawler.integration, PID: 7671
java.lang.NoClassDefFoundError: Failed resolution of: Landroidx/lifecycle/ProcessLifecycleOwner;
at com.snowplowanalytics.snowplow.internal.tracker.Tracker$5.run(Tracker.java:574)
at android.os.Handler.handleCallback(Handler.java:938)
at android.os.Handler.dispatchMessage(Handler.java:99)
at android.os.Looper.loop(Looper.java:223)
at android.app.ActivityThread.main(ActivityThread.java:7656)
at java.lang.reflect.Method.invoke(Native Method)
at com.android.internal.os.RuntimeInit$MethodAndArgsCaller.run(RuntimeInit.java:592)
at com.android.internal.os.ZygoteInit.main(ZygoteInit.java:947)
Caused by: java.lang.ClassNotFoundException: Didn't find class "androidx.lifecycle.ProcessLifecycleOwner" on path: DexPathList[[zip file "/data/app/~~niY9nc6cWz2RINKsmkwIvw==/cartrawler.integration-CL7zSPlc7Tp8KAmxPeukTw==/base.apk"],nativeLibraryDirectories=[/data/app/~~niY9nc6cWz2RINKsmkwIvw==/cartrawler.integration-CL7zSPlc7Tp8KAmxPeukTw==/lib/arm64, /system/lib64, /system/product/lib64]]
at dalvik.system.BaseDexClassLoader.findClass(BaseDexClassLoader.java:207)
at java.lang.ClassLoader.loadClass(ClassLoader.java:379)
at java.lang.ClassLoader.loadClass(ClassLoader.java:312)
at com.snowplowanalytics.snowplow.internal.tracker.Tracker$5.run(Tracker.java:574)
at android.os.Handler.handleCallback(Handler.java:938)
at android.os.Handler.dispatchMessage(Handler.java:99)
at android.os.Looper.loop(Looper.java:223)
at android.app.ActivityThread.main(ActivityThread.java:7656)
at java.lang.reflect.Method.invoke(Native Method)
at com.android.internal.os.RuntimeInit$MethodAndArgsCaller.run(RuntimeInit.java:592)
at com.android.internal.os.ZygoteInit.main(ZygoteInit.java:947) `
|
defect
|
crash when sending screen event in beta version describe the bug when i send an event the app crashes to reproduce send a screen event with the following dependencies kotlin analyticsscope provides fun providenetworkconfiguration networkconfiguration return networkconfiguration analyticstrackerconfig endpoint analyticsscope provides fun providetrackerconfiguration trackerconfiguration return trackerconfiguration analyticstrackerconfig appid loglevel loglevel debug installautotracking false analyticsscope provides fun providetrackercontroller networkconfiguration networkconfiguration trackerconfiguration trackerconfiguration trackercontroller return snowplow createtracker analyticstrackerconfig context analyticstrackerconfig namespace networkconfiguration trackerconfiguration event code kotlin fun screenviewevent screenevent screenevent trackercontroller track screenview builder id screenevent id name screenevent name type screenevent type previousname screenevent previousname previoustype screenevent previoustype build expected behavior the app should not crash screenshots if applicable add screenshots to help explain your problem device informatoin please complete the following information device pixel xl os android browser n a version beta additional context cartrawler integration e androidruntime fatal exception main process cartrawler integration pid java lang noclassdeffounderror failed resolution of landroidx lifecycle processlifecycleowner at com snowplowanalytics snowplow internal tracker tracker run tracker java at android os handler handlecallback handler java at android os handler dispatchmessage handler java at android os looper loop looper java at android app activitythread main activitythread java at java lang reflect method invoke native method at com android internal os runtimeinit methodandargscaller run runtimeinit java at com android internal os zygoteinit main zygoteinit java caused by java lang classnotfoundexception didn t find class androidx 
lifecycle processlifecycleowner on path dexpathlist nativelibrarydirectories at dalvik system basedexclassloader findclass basedexclassloader java at java lang classloader loadclass classloader java at java lang classloader loadclass classloader java at com snowplowanalytics snowplow internal tracker tracker run tracker java at android os handler handlecallback handler java at android os handler dispatchmessage handler java at android os looper loop looper java at android app activitythread main activitythread java at java lang reflect method invoke native method at com android internal os runtimeinit methodandargscaller run runtimeinit java at com android internal os zygoteinit main zygoteinit java
| 1
|
26,609
| 11,351,890,813
|
IssuesEvent
|
2020-01-24 12:20:40
|
nodejs/security-wg
|
https://api.github.com/repos/nodejs/security-wg
|
opened
|
Introducing Audit Hooks to Node.js
|
security-wg-agenda
|
Last year, Python introduced audit hooks to the language.
In short, this is an API allowing users to:
* subscribe to certain actions (for instance, file system access)
* prevent certain calls (for instance, preventing accessing a file)
* add userland hooks (we could have some in DB libraries for instance)
Another bonus is that we could add a hook to `eval` and finally provide a good way to monitor its usage.
I remember that the Microsoft team was interested in having such an API (I understand they drove the effort in Python).
For reference, the Python spec is here https://www.python.org/dev/peps/pep-0578/
Is there any strong opinions on that topic?
|
True
|
Introducing Audit Hooks to Node.js - Last year, Python introduced audit hooks to the language.
In short, this is an API allowing users to:
* subscribe to certain actions (for instance, file system access)
* prevent certain calls (for instance, preventing accessing a file)
* add userland hooks (we could have some in DB libraries for instance)
Another bonus is that we could add a hook to `eval` and finally provide a good way to monitor its usage.
I remember that the Microsoft team was interested in having such an API (I understand they drove the effort in Python).
For reference, the Python spec is here https://www.python.org/dev/peps/pep-0578/
Is there any strong opinions on that topic?
|
non_defect
|
introducing audit hooks to node js last year python introduced audit hooks to the language in short this is an api allowing users to subscribe to certain actions for instance file system access prevent certain calls for instance preventing accessing a file add userland hooks we could have some in db libraries for instance another bonus is that we could add a hook to eval and finally provide a good way to monitor its usage i remember that microsoft team was intereted in having such an api i understand they drove the effort in python for reference the python spec is here is there any strong opinions on that topic
| 0
|
55,200
| 14,269,132,691
|
IssuesEvent
|
2020-11-21 00:18:12
|
scipy/scipy
|
https://api.github.com/repos/scipy/scipy
|
closed
|
Wrong output format for empty sparse results of kron
|
defect scipy.sparse
|
<!--
Thank you for taking the time to file a bug report.
Please fill in the fields below, deleting the sections that
don't apply to your issue. You can view the final output
by clicking the preview button above.
Note: This is a comment, and won't appear in the output.
-->
My issue is about the default behaviour of the Kronecker product: If the input matrices have nnz == 0, the output defaults to the output format "coo" even if I explicitly ask for a format like 'csr'. I would add a ".asformat(format)" in line 333 of construct.py (and potentially also in lines 319/ 325) or add additional documentation if that behaviour is intended.
#### Reproducing code example:
<!--
If you place your code between the triple backticks below,
it will be rendered as a code block.
-->
```
# Sample code to reproduce the problem
import scipy.sparse as sp
type(sp.kron(sp.csr_matrix((3,4)), sp.csr_matrix((2,1)), format='csr')) # -> should be csr but is coo
```
#### Error message:
<!-- If any, paste the *full* error message inside a code block
as above (starting from line Traceback)
-->
```
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
...
```
#### Scipy/Numpy/Python version information:
<!-- You can simply run the following and paste the result in a code block
```
import sys, scipy, numpy; print(scipy.__version__, numpy.__version__, sys.version_info)
```
--> 1.5.0/1.18.5/3.8.3 # but I don't think that matters
|
1.0
|
Wrong output format for empty sparse results of kron - <!--
Thank you for taking the time to file a bug report.
Please fill in the fields below, deleting the sections that
don't apply to your issue. You can view the final output
by clicking the preview button above.
Note: This is a comment, and won't appear in the output.
-->
My issue is about the default behaviour of the Kronecker product: If the input matrices have nnz == 0, the output defaults to the output format "coo" even if I explicitly ask for a format like 'csr'. I would add a ".asformat(format)" in line 333 of construct.py (and potentially also in lines 319/ 325) or add additional documentation if that behaviour is intended.
#### Reproducing code example:
<!--
If you place your code between the triple backticks below,
it will be rendered as a code block.
-->
```
# Sample code to reproduce the problem
import scipy.sparse as sp
type(sp.kron(sp.csr_matrix((3,4)), sp.csr_matrix((2,1)), format='csr')) # -> should be csr but is coo
```
#### Error message:
<!-- If any, paste the *full* error message inside a code block
as above (starting from line Traceback)
-->
```
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
...
```
#### Scipy/Numpy/Python version information:
<!-- You can simply run the following and paste the result in a code block
```
import sys, scipy, numpy; print(scipy.__version__, numpy.__version__, sys.version_info)
```
--> 1.5.0/1.18.5/3.8.3 # but I don't think that matters
|
defect
|
wrong output format for empty sparse results of kron thank you for taking the time to file a bug report please fill in the fields below deleting the sections that don t apply to your issue you can view the final output by clicking the preview button above note this is a comment and won t appear in the output my issue is about the default behaviour of the kronecker product if the input matrices have nnz the output defaults to the output format coo even if i explicitly ask for a format like csr i would add a asformat format in line of construct py and potentially also in lines or add additional documentation if that behaviour is intended reproducing code example if you place your code between the triple backticks below it will be rendered as a code block sample code to reproduce the problem import scipy sparse as sp type sp kron sp csr matrix sp csr matrix format csr should be csr but is coo error message if any paste the full error message inside a code block as above starting from line traceback traceback most recent call last file line in scipy numpy python version information you can simply run the following and paste the result in a code block import sys scipy numpy print scipy version numpy version sys version info but i don t think that matters
| 1
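The scipy record above reports that `kron` with two all-zero sparse inputs ignores the requested output format. A minimal sketch of the reporter's proposed workaround (an explicit `.asformat(format)` call, mirroring the suggested fix in construct.py) — the variable names here are illustrative, not from the report:

```python
import scipy.sparse as sp

# Two empty (nnz == 0) sparse matrices, as in the report.
a = sp.csr_matrix((3, 4))
b = sp.csr_matrix((2, 1))

# On the affected versions (reported against scipy 1.5.0) this returns a
# COO result even though format='csr' was requested.
result = sp.kron(a, b, format='csr')
print(type(result))

# Workaround in the spirit of the proposed fix: convert explicitly.
result = sp.kron(a, b).asformat('csr')
print(result.format)
```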
|
69,038
| 22,064,241,145
|
IssuesEvent
|
2022-05-30 23:35:50
|
openzfs/zfs
|
https://api.github.com/repos/openzfs/zfs
|
opened
|
2.1.4 is unable to build on ubuntu LTS 22.04
|
Type: Defect
|
<!-- Please fill out the following template, which will help other contributors address your issue. -->
<!--
Thank you for reporting an issue.
*IMPORTANT* - Please check our issue tracker before opening a new issue.
Additional valuable information can be found in the OpenZFS documentation
and mailing list archives.
Please fill in as much of the template as possible.
-->
### System information
<!-- add version after "|" character -->
Type | Version/Name
--- | ---
Distribution Name | ubuntu
Distribution Version | 22.04
Kernel Version | 5.15.0-33
Architecture | x86_64
OpenZFS Version | 2.1.4
### Describe the problem you're observing
I try to build 2.1.4 from source on ubuntu 22.04, it failed to build and throw error on 'strlcpy'
Not sure what I am missing.
### Describe how to reproduce the problem
Install ubuntu 22.04, download 2.1.4 source and try to build.
### Include any warning/errors/backtraces from the system logs
```
configure:67388: result: no
configure:67394: checking for strlcpy
configure:67394: gcc -o conftest -g -O2 conftest.c >&5
/usr/bin/ld: /tmp/ccliDnPJ.o: in function `main':
/usr/src/zfs/conftest.c:103: undefined reference to `strlcpy'
collect2: error: ld returned 1 exit status
configure:67394: $? = 1
configure: failed program was:
| /* confdefs.h */
| #define PACKAGE_NAME "zfs"
| #define PACKAGE_TARNAME "zfs"
| #define PACKAGE_VERSION "2.1.4"
| #define PACKAGE_STRING "zfs 2.1.4"
| #define PACKAGE_BUGREPORT ""
| #define PACKAGE_URL ""
| #define ZFS_META_NAME "zfs"
| #define ZFS_META_VERSION "2.1.4"
| #define SPL_META_VERSION ZFS_META_VERSION
| #define ZFS_META_RELEASE "1"
| #define SPL_META_RELEASE ZFS_META_RELEASE
| #define ZFS_META_LICENSE "CDDL"
| #define ZFS_META_ALIAS "zfs-2.1.4-1"
| #define SPL_META_ALIAS ZFS_META_ALIAS
| #define ZFS_META_AUTHOR "OpenZFS"
| #define ZFS_META_KVER_MIN "3.10"
| #define ZFS_META_KVER_MAX "5.17"
| #define PACKAGE "zfs"
| #define VERSION "2.1.4"
| #define HAVE_STDIO_H 1
| #define HAVE_STDLIB_H 1
| #define HAVE_STRING_H 1
| #define HAVE_INTTYPES_H 1
| #define HAVE_STDINT_H 1
| #define HAVE_STRINGS_H 1
| #define HAVE_SYS_STAT_H 1
| #define HAVE_SYS_TYPES_H 1
| #define HAVE_UNISTD_H 1
| #define STDC_HEADERS 1
| #define HAVE_DLFCN_H 1
| #define LT_OBJDIR ".libs/"
| #define HAVE_IMPLICIT_FALLTHROUGH 1
| #define HAVE_SSE 1
| #define HAVE_SSE2 1
| #define HAVE_SSE3 1
| #define HAVE_SSSE3 1
| #define HAVE_SSE4_1 1
| #define HAVE_SSE4_2 1
| #define HAVE_AVX 1
| #define HAVE_AVX2 1
| #define HAVE_AVX512F 1
| #define HAVE_AVX512CD 1
| #define HAVE_AVX512DQ 1
| #define HAVE_AVX512BW 1
| #define HAVE_AVX512IFMA 1
| #define HAVE_AVX512VBMI 1
| #define HAVE_AVX512PF 1
| #define HAVE_AVX512ER 1
| #define HAVE_AVX512VL 1
| #define HAVE_AES 1
| #define HAVE_PCLMULQDQ 1
| #define HAVE_MOVBE 1
| #define HAVE_XSAVE 1
| #define HAVE_XSAVEOPT 1
| #define HAVE_XSAVES 1
| #define SYSTEM_LINUX 1
| #define ENABLE_NLS 1
| #define HAVE_GETTEXT 1
| #define HAVE_DCGETTEXT 1
| #define HAVE_ZLIB 1
| #define HAVE_LIBUUID 1
| #define HAVE_LIBBLKID 1
| #define HAVE_LIBTIRPC 1
| #define HAVE_LIBUDEV 1
| #define HAVE_UDEV_DEVICE_GET_IS_INITIALIZED 1
| #define HAVE_LIBCRYPTO 1
| #define HAVE_LIBAIO 1
| #define LIBFETCH_IS_FETCH 0
| #define LIBFETCH_IS_LIBCURL 0
| #define LIBFETCH_DYNAMIC 0
| #define LIBFETCH_SONAME ""
| #define HAVE_MAKEDEV_IN_SYSMACROS 1
| #define HAVE_MLOCKALL 1
| /* end confdefs.h. */
| /* Define strlcpy to an innocuous variant, in case <limits.h> declares strlcpy.
| For example, HP-UX 11i <limits.h> declares gettimeofday. */
| #define strlcpy innocuous_strlcpy
|
| /* System header to define __stub macros and hopefully few prototypes,
| which can conflict with char strlcpy (); below. */
|
| #include <limits.h>
| #undef strlcpy
|
| /* Override any GCC internal prototype to avoid an error.
| Use char because int might match the return type of a GCC
| builtin and then its argument prototype would still apply. */
| #ifdef __cplusplus
| extern "C"
| #endif
| char strlcpy ();
| /* The GNU C library defines this for functions which it implements
| to always fail with ENOSYS. Some functions are actually named
| something starting with __ and the normal name is an alias. */
| #if defined __stub_strlcpy || defined __stub___strlcpy
| choke me
| #endif
|
| int
| main (void)
| {
| return strlcpy ();
| ;
| return 0;
| }
configure:67394: result: no
configure:67422: checking kernel source and build directories
configure:67505: result: done
configure:67507: checking kernel source directory
configure:67509: result: /usr/src/linux-headers-5.15.0-33-generic
configure:67511: checking kernel build directory
configure:67513: result: /usr/src/linux-headers-5.15.0-33-generic
configure:67526: checking kernel source version
configure:67579: result: 5.15.0-33-generic
configure:67611: checking kernel file name for module symbols
configure:67644: result: Module.symvers
configure:67755: checking whether modules can be built
configure:67931:
KBUILD_MODPOST_NOFINAL= KBUILD_MODPOST_WARN=
make modules -k -j8 -C /usr/src/linux-headers-5.15.0-33-generic
M=/usr/src/zfs/build/conftest >build/conftest/build.log 2>&1
configure:67934: $? = 2
configure:67937: test -f build/conftest/conftest.ko
configure:67940: $? = 1
configure:67949: result: no
configure:67952: error:
*** Unable to build an empty module.
```
|
1.0
|
2.1.4 is unable to build on ubuntu LTS 22.04 - <!-- Please fill out the following template, which will help other contributors address your issue. -->
<!--
Thank you for reporting an issue.
*IMPORTANT* - Please check our issue tracker before opening a new issue.
Additional valuable information can be found in the OpenZFS documentation
and mailing list archives.
Please fill in as much of the template as possible.
-->
### System information
<!-- add version after "|" character -->
Type | Version/Name
--- | ---
Distribution Name | ubuntu
Distribution Version | 22.04
Kernel Version | 5.15.0-33
Architecture | x86_64
OpenZFS Version | 2.1.4
### Describe the problem you're observing
I try to build 2.1.4 from source on ubuntu 22.04, it failed to build and throw error on 'strlcpy'
Not sure what I am missing.
### Describe how to reproduce the problem
Install ubuntu 22.04, download 2.1.4 source and try to build.
### Include any warning/errors/backtraces from the system logs
```
configure:67388: result: no
configure:67394: checking for strlcpy
configure:67394: gcc -o conftest -g -O2 conftest.c >&5
/usr/bin/ld: /tmp/ccliDnPJ.o: in function `main':
/usr/src/zfs/conftest.c:103: undefined reference to `strlcpy'
collect2: error: ld returned 1 exit status
configure:67394: $? = 1
configure: failed program was:
| /* confdefs.h */
| #define PACKAGE_NAME "zfs"
| #define PACKAGE_TARNAME "zfs"
| #define PACKAGE_VERSION "2.1.4"
| #define PACKAGE_STRING "zfs 2.1.4"
| #define PACKAGE_BUGREPORT ""
| #define PACKAGE_URL ""
| #define ZFS_META_NAME "zfs"
| #define ZFS_META_VERSION "2.1.4"
| #define SPL_META_VERSION ZFS_META_VERSION
| #define ZFS_META_RELEASE "1"
| #define SPL_META_RELEASE ZFS_META_RELEASE
| #define ZFS_META_LICENSE "CDDL"
| #define ZFS_META_ALIAS "zfs-2.1.4-1"
| #define SPL_META_ALIAS ZFS_META_ALIAS
| #define ZFS_META_AUTHOR "OpenZFS"
| #define ZFS_META_KVER_MIN "3.10"
| #define ZFS_META_KVER_MAX "5.17"
| #define PACKAGE "zfs"
| #define VERSION "2.1.4"
| #define HAVE_STDIO_H 1
| #define HAVE_STDLIB_H 1
| #define HAVE_STRING_H 1
| #define HAVE_INTTYPES_H 1
| #define HAVE_STDINT_H 1
| #define HAVE_STRINGS_H 1
| #define HAVE_SYS_STAT_H 1
| #define HAVE_SYS_TYPES_H 1
| #define HAVE_UNISTD_H 1
| #define STDC_HEADERS 1
| #define HAVE_DLFCN_H 1
| #define LT_OBJDIR ".libs/"
| #define HAVE_IMPLICIT_FALLTHROUGH 1
| #define HAVE_SSE 1
| #define HAVE_SSE2 1
| #define HAVE_SSE3 1
| #define HAVE_SSSE3 1
| #define HAVE_SSE4_1 1
| #define HAVE_SSE4_2 1
| #define HAVE_AVX 1
| #define HAVE_AVX2 1
| #define HAVE_AVX512F 1
| #define HAVE_AVX512CD 1
| #define HAVE_AVX512DQ 1
| #define HAVE_AVX512BW 1
| #define HAVE_AVX512IFMA 1
| #define HAVE_AVX512VBMI 1
| #define HAVE_AVX512PF 1
| #define HAVE_AVX512ER 1
| #define HAVE_AVX512VL 1
| #define HAVE_AES 1
| #define HAVE_PCLMULQDQ 1
| #define HAVE_MOVBE 1
| #define HAVE_XSAVE 1
| #define HAVE_XSAVEOPT 1
| #define HAVE_XSAVES 1
| #define SYSTEM_LINUX 1
| #define ENABLE_NLS 1
| #define HAVE_GETTEXT 1
| #define HAVE_DCGETTEXT 1
| #define HAVE_ZLIB 1
| #define HAVE_LIBUUID 1
| #define HAVE_LIBBLKID 1
| #define HAVE_LIBTIRPC 1
| #define HAVE_LIBUDEV 1
| #define HAVE_UDEV_DEVICE_GET_IS_INITIALIZED 1
| #define HAVE_LIBCRYPTO 1
| #define HAVE_LIBAIO 1
| #define LIBFETCH_IS_FETCH 0
| #define LIBFETCH_IS_LIBCURL 0
| #define LIBFETCH_DYNAMIC 0
| #define LIBFETCH_SONAME ""
| #define HAVE_MAKEDEV_IN_SYSMACROS 1
| #define HAVE_MLOCKALL 1
| /* end confdefs.h. */
| /* Define strlcpy to an innocuous variant, in case <limits.h> declares strlcpy.
| For example, HP-UX 11i <limits.h> declares gettimeofday. */
| #define strlcpy innocuous_strlcpy
|
| /* System header to define __stub macros and hopefully few prototypes,
| which can conflict with char strlcpy (); below. */
|
| #include <limits.h>
| #undef strlcpy
|
| /* Override any GCC internal prototype to avoid an error.
| Use char because int might match the return type of a GCC
| builtin and then its argument prototype would still apply. */
| #ifdef __cplusplus
| extern "C"
| #endif
| char strlcpy ();
| /* The GNU C library defines this for functions which it implements
| to always fail with ENOSYS. Some functions are actually named
| something starting with __ and the normal name is an alias. */
| #if defined __stub_strlcpy || defined __stub___strlcpy
| choke me
| #endif
|
| int
| main (void)
| {
| return strlcpy ();
| ;
| return 0;
| }
configure:67394: result: no
configure:67422: checking kernel source and build directories
configure:67505: result: done
configure:67507: checking kernel source directory
configure:67509: result: /usr/src/linux-headers-5.15.0-33-generic
configure:67511: checking kernel build directory
configure:67513: result: /usr/src/linux-headers-5.15.0-33-generic
configure:67526: checking kernel source version
configure:67579: result: 5.15.0-33-generic
configure:67611: checking kernel file name for module symbols
configure:67644: result: Module.symvers
configure:67755: checking whether modules can be built
configure:67931:
KBUILD_MODPOST_NOFINAL= KBUILD_MODPOST_WARN=
make modules -k -j8 -C /usr/src/linux-headers-5.15.0-33-generic
M=/usr/src/zfs/build/conftest >build/conftest/build.log 2>&1
configure:67934: $? = 2
configure:67937: test -f build/conftest/conftest.ko
configure:67940: $? = 1
configure:67949: result: no
configure:67952: error:
*** Unable to build an empty module.
```
|
defect
|
is unable to build on ubuntu lts thank you for reporting an issue important please check our issue tracker before opening a new issue additional valuable information can be found in the openzfs documentation and mailing list archives please fill in as much of the template as possible system information type version name distribution name ubuntu distribution version kernel version architecture openzfs version describe the problem you re observing i try to build from source on ubuntu it failed to build and throw error on strlcpy now sure what i am missing describe how to reproduce the problem install ubuntu download source and try to build include any warning errors backtraces from the system logs configure result no configure checking for strlcpy configure gcc o conftest g conftest c usr bin ld tmp cclidnpj o in function main usr src zfs conftest c undefined reference to strlcpy error ld returned exit status configure configure failed program was confdefs h define package name zfs define package tarname zfs define package version define package string zfs define package bugreport define package url define zfs meta name zfs define zfs meta version define spl meta version zfs meta version define zfs meta release define spl meta release zfs meta release define zfs meta license cddl define zfs meta alias zfs define spl meta alias zfs meta alias define zfs meta author openzfs define zfs meta kver min define zfs meta kver max define package zfs define version define have stdio h define have stdlib h define have string h define have inttypes h define have stdint h define have strings h define have sys stat h define have sys types h define have unistd h define stdc headers define have dlfcn h define lt objdir libs define have implicit fallthrough define have sse define have define have define have define have define have define have avx define have define have define have define have define have define have define have define have define have define have define have aes 
define have pclmulqdq define have movbe define have xsave define have xsaveopt define have xsaves define system linux define enable nls define have gettext define have dcgettext define have zlib define have libuuid define have libblkid define have libtirpc define have libudev define have udev device get is initialized define have libcrypto define have libaio define libfetch is fetch define libfetch is libcurl define libfetch dynamic define libfetch soname define have makedev in sysmacros define have mlockall end confdefs h define strlcpy to an innocuous variant in case declares strlcpy for example hp ux declares gettimeofday define strlcpy innocuous strlcpy system header to define stub macros and hopefully few prototypes which can conflict with char strlcpy below include undef strlcpy override any gcc internal prototype to avoid an error use char because int might match the return type of a gcc builtin and then its argument prototype would still apply ifdef cplusplus extern c endif char strlcpy the gnu c library defines this for functions which it implements to always fail with enosys some functions are actually named something starting with and the normal name is an alias if defined stub strlcpy defined stub strlcpy choke me endif int main void return strlcpy return configure result no configure checking kernel source and build directories configure result done configure checking kernel source directory configure result usr src linux headers generic configure checking kernel build directory configure result usr src linux headers generic configure checking kernel source version configure result generic configure checking kernel file name for module symbols configure result module symvers configure checking whether modules can be built configure kbuild modpost nofinal kbuild modpost warn make modules k c usr src linux headers generic m usr src zfs build conftest build conftest build log configure configure test f build conftest conftest ko configure configure result 
no configure error unable to build an empty module
| 1
|
36,078
| 7,858,409,789
|
IssuesEvent
|
2018-06-21 13:51:59
|
contao/core-bundle
|
https://api.github.com/repos/contao/core-bundle
|
opened
|
Use \pL\pN_ instead of \w in regular expressions
|
defect
|
Depending on whether https://bugs.php.net/bug.php?id=76512 is a bug or an intended change, we might have to replace `\w` with `\pL\pN_` in regular expressions with a `/u` flag.
|
1.0
|
Use \pL\pN_ instead of \w in regular expressions - Depending on whether https://bugs.php.net/bug.php?id=76512 is a bug or an intended change, we might have to replace `\w` with `\pL\pN_` in regular expressions with a `/u` flag.
|
defect
|
use pl pn instead of w in regular expressions depending on whether is a bug or an intended change we might have to replace w with pl pn in regular expressions with a u flag
| 1
|
31,971
| 6,671,286,777
|
IssuesEvent
|
2017-10-04 06:22:24
|
primefaces/primeng
|
https://api.github.com/repos/primefaces/primeng
|
closed
|
DataTable RowGroup and Expand bug
|
defect
|
**I'm submitting a ...** (check one with "x")
```
[x] bug report => Search github for a similar issue or PR before submitting
[ ] feature request => Please check if request is not on the roadmap already https://github.com/primefaces/primeng/wiki/Roadmap
[ ] support request => Please do not submit support request here, instead see http://forum.primefaces.org/viewforum.php?f=35
```
**Plunkr Case (Bug Reports)**
https://plnkr.co/edit/2wsVTp?p=preview
**Current behavior**
<!-- Describe how the bug manifests. -->
The last row in each rowGroup expands below the rowgroupfooter.
**Expected behavior**
<!-- Describe what the behavior would be without the bug. -->
Should expand above the rowgroupfooter.
* **Angular version:** 4.3.3
<!-- Check whether this is still an issue in the most recent Angular version -->
* **PrimeNG version:** 4.2.0
<!-- Check whether this is still an issue in the most recent Angular version -->
* **Browser:** [ Chrome XX | Firefox XX | IE(dge) XX ]
<!-- All browsers where this could be reproduced -->
|
1.0
|
DataTable RowGroup and Expand bug - **I'm submitting a ...** (check one with "x")
```
[x] bug report => Search github for a similar issue or PR before submitting
[ ] feature request => Please check if request is not on the roadmap already https://github.com/primefaces/primeng/wiki/Roadmap
[ ] support request => Please do not submit support request here, instead see http://forum.primefaces.org/viewforum.php?f=35
```
**Plunkr Case (Bug Reports)**
https://plnkr.co/edit/2wsVTp?p=preview
**Current behavior**
<!-- Describe how the bug manifests. -->
The last row in each rowGroup expands below the rowgroupfooter.
**Expected behavior**
<!-- Describe what the behavior would be without the bug. -->
Should expand above the rowgroupfooter.
* **Angular version:** 4.3.3
<!-- Check whether this is still an issue in the most recent Angular version -->
* **PrimeNG version:** 4.2.0
<!-- Check whether this is still an issue in the most recent Angular version -->
* **Browser:** [ Chrome XX | Firefox XX | IE(dge) XX ]
<!-- All browsers where this could be reproduced -->
|
defect
|
datatable rowgroup and expand bug i m submitting a check one with x bug report search github for a similar issue or pr before submitting feature request please check if request is not on the roadmap already support request please do not submit support request here instead see plunkr case bug reports current behavior the last row in each rowgroup expands below the rowgroupfooter expected behavior should expand above the rowgroupfooter angular version primeng version browser
| 1
|
26,078
| 4,568,787,318
|
IssuesEvent
|
2016-09-15 15:23:04
|
NREL/EnergyPlus
|
https://api.github.com/repos/NREL/EnergyPlus
|
opened
|
Invalid Severe with CoolingTower:TwoSpeed when low speed autocalculated
|
Defect
|
For the CoolingTower:TwoSpeed, the error message:
** Severe ** CoolingTower:TwoSpeed "COOLING TOWER". Free convection nominal capacity must be less than the low-speed nominal capacity.
Generated when
```
CoolingTower:TwoSpeed,
Cooling Tower, !- Name
Cooling Tower Water Inlet Node, !- Water Inlet Node Name
Cooling Tower Water Outlet Node, !- Water Outlet Node Name
, !- Design Water Flow Rate {m3/s}
autosize, !- High Fan Speed Air Flow Rate {m3/s}
autosize, !- High Fan Speed Fan Power {W}
, !- High Fan Speed U-Factor Times Area Value {W/K}
autocalculate, !- Low Fan Speed Air Flow Rate {m3/s}
0.5000, !- Low Fan Speed Air Flow Rate Sizing Factor
autocalculate, !- Low Fan Speed Fan Power {W}
0.1600, !- Low Fan Speed Fan Power Sizing Factor
, !- Low Fan Speed U-Factor Times Area Value {W/K}
0.6000, !- Low Fan Speed U-Factor Times Area Sizing Factor
autocalculate, !- Free Convection Regime Air Flow Rate {m3/s}
0.1000, !- Free Convection Regime Air Flow Rate Sizing Factor
, !- Free Convection Regime U-Factor Times Area Value {W/K}
0.1000, !- Free Convection U-Factor Times Area Value Sizing Factor
NominalCapacity, !- Performance Input Method
1.2500, !- Heat Rejection Capacity and Nominal Capacity Sizing Ratio
100000.0000, !- High Speed Nominal Capacity {W}
autocalculate, !- Low Speed Nominal Capacity {W}
0.5000, !- Low Speed Nominal Capacity Sizing Factor
10000.0000, !- Free Convection Nominal Capacity {W}
0.1000, !- Free Convection Nominal Capacity Sizing Factor
0.00, !- Basin Heater Capacity {W/K}
2.00, !- Basin Heater Setpoint Temperature {C}
On 24/7, !- Basin Heater Operating Schedule Name
SaturatedExit, !- Evaporation Loss Mode
0.2000, !- Evaporation Loss Factor {percent/K}
0, !- Drift Loss Percent {percent}
ConcentrationRatio, !- Blowdown Calculation Mode
3.00, !- Blowdown Concentration Ratio
On 24/7, !- Blowdown Makeup Water Usage Schedule Name
, !- Supply Water Storage Tank Name
, !- Outdoor Air Inlet Node Name
, !- Number of Cells
MinimalCell, !- Cell Control
0.3300, !- Cell Minimum Water Flow Rate Fraction
2.5000, !- Cell Maximum Water Flow Rate Fraction
1.00; !- Sizing Factor
```
The calculated value for the low speed should be 50,000 which is greater than the free convection capacity entered of 10,000.
This was reported in helpdesk ticket #11539
- [ ] Ticket added to Pivotal for defect (development team task)
- [ ] Pull request created (the pull request will have additional tasks related to reviewing changes that fix this defect)
|
1.0
|
Invalid Severe with CoolingTower:TwoSpeed when low speed autocalculated - For the CoolingTower:TwoSpeed, the error message:
** Severe ** CoolingTower:TwoSpeed "COOLING TOWER". Free convection nominal capacity must be less than the low-speed nominal capacity.
Generated when
```
CoolingTower:TwoSpeed,
Cooling Tower, !- Name
Cooling Tower Water Inlet Node, !- Water Inlet Node Name
Cooling Tower Water Outlet Node, !- Water Outlet Node Name
, !- Design Water Flow Rate {m3/s}
autosize, !- High Fan Speed Air Flow Rate {m3/s}
autosize, !- High Fan Speed Fan Power {W}
, !- High Fan Speed U-Factor Times Area Value {W/K}
autocalculate, !- Low Fan Speed Air Flow Rate {m3/s}
0.5000, !- Low Fan Speed Air Flow Rate Sizing Factor
autocalculate, !- Low Fan Speed Fan Power {W}
0.1600, !- Low Fan Speed Fan Power Sizing Factor
, !- Low Fan Speed U-Factor Times Area Value {W/K}
0.6000, !- Low Fan Speed U-Factor Times Area Sizing Factor
autocalculate, !- Free Convection Regime Air Flow Rate {m3/s}
0.1000, !- Free Convection Regime Air Flow Rate Sizing Factor
, !- Free Convection Regime U-Factor Times Area Value {W/K}
0.1000, !- Free Convection U-Factor Times Area Value Sizing Factor
NominalCapacity, !- Performance Input Method
1.2500, !- Heat Rejection Capacity and Nominal Capacity Sizing Ratio
100000.0000, !- High Speed Nominal Capacity {W}
autocalculate, !- Low Speed Nominal Capacity {W}
0.5000, !- Low Speed Nominal Capacity Sizing Factor
10000.0000, !- Free Convection Nominal Capacity {W}
0.1000, !- Free Convection Nominal Capacity Sizing Factor
0.00, !- Basin Heater Capacity {W/K}
2.00, !- Basin Heater Setpoint Temperature {C}
On 24/7, !- Basin Heater Operating Schedule Name
SaturatedExit, !- Evaporation Loss Mode
0.2000, !- Evaporation Loss Factor {percent/K}
0, !- Drift Loss Percent {percent}
ConcentrationRatio, !- Blowdown Calculation Mode
3.00, !- Blowdown Concentration Ratio
On 24/7, !- Blowdown Makeup Water Usage Schedule Name
, !- Supply Water Storage Tank Name
, !- Outdoor Air Inlet Node Name
, !- Number of Cells
MinimalCell, !- Cell Control
0.3300, !- Cell Minimum Water Flow Rate Fraction
2.5000, !- Cell Maximum Water Flow Rate Fraction
1.00; !- Sizing Factor
```
The calculated value for the low speed should be 50,000 which is greater than the free convection capacity entered of 10,000.
This was reported in helpdesk ticket #11539
- [ ] Ticket added to Pivotal for defect (development team task)
- [ ] Pull request created (the pull request will have additional tasks related to reviewing changes that fix this defect)
|
defect
|
invalid severe with coolingtower twospeed when low speed autocalculated for the coolingtower twospeed the error message severe coolingtower twospeed cooling tower free convection nominal capacity must be less than the low speed nominal capacity generated when coolingtower twospeed cooling tower name cooling tower water inlet node water inlet node name cooling tower water outlet node water outlet node name design water flow rate s autosize high fan speed air flow rate s autosize high fan speed fan power w high fan speed u factor times area value w k autocalculate low fan speed air flow rate s low fan speed air flow rate sizing factor autocalculate low fan speed fan power w low fan speed fan power sizing factor low fan speed u factor times area value w k low fan speed u factor times area sizing factor autocalculate free convection regime air flow rate s free convection regime air flow rate sizing factor free convection regime u factor times area value w k free convection u factor times area value sizing factor nominalcapacity performance input method heat rejection capacity and nominal capacity sizing ratio high speed nominal capacity w autocalculate low speed nominal capacity w low speed nominal capacity sizing factor free convection nominal capacity w free convection nominal capacity sizing factor basin heater capacity w k basin heater setpoint temperature c on basin heater operating schedule name saturatedexit evaporation loss mode evaporation loss factor percent k drift loss percent percent concentrationratio blowdown calculation mode blowdown concentration ratio on blowdown makeup water usage schedule name supply water storage tank name outdoor air inlet node name number of cells minimalcell cell control cell minimum water flow rate fraction cell maximum water flow rate fraction sizing factor the calculated value for the low speed should be which is greater than the free convection capacity entered of this was reported in helpdesk ticket ticket added to pivotal 
for defect development team task pull request created the pull request will have additional tasks related to reviewing changes that fix this defect
| 1
|
17,824
| 3,013,057,989
|
IssuesEvent
|
2015-07-29 05:53:03
|
yawlfoundation/yawl
|
https://api.github.com/repos/yawlfoundation/yawl
|
closed
|
Case ID incorrectly changed on checking out a workitem
|
auto-migrated Priority-Medium Type-Defect
|
```
When checking out an enabled workitem (at least from a custom service), the
WorkItemRecord case ID gets changed to the workitem ID e.g. 114 --> 114.2.
I had to work round this in my code by saving the case ID from the enabled
workitem before checking it out. Since the custom service just does an HTTP
Interface B call, I assume that this is a global problem in the engine. Had
a quick look in the code, but this area is quite tortuous and I couldn't
see anything obvious.
What steps will reproduce the problem?
1. Create a custom service which checks out a workitem received via the
handleEnabledWorkItemEvent method
2. Observe the WorkItemRecord's case ID in the enabled workitem and the
checked out (Executing) equivalent returned from the checkOut method
What is the expected output? What do you see instead?
Should retain the case ID; instead gets set to the numeric part of the
workitem ID.
Occurs on YAWL 2.0.1
```
Original issue reported on code.google.com by `monsieur...@gmail.com` on 31 Mar 2010 at 10:29
|
1.0
|
Case ID incorrectly changed on checking out a workitem - ```
When checking out an enabled workitem (at least from a custom service), the
WorkItemRecord case ID gets changed to the workitem ID e.g. 114 --> 114.2.
I had to work round this in my code by saving the case ID from the enabled
workitem before checking it out. Since the custom service just does an HTTP
Interface B call, I assume that this is a global problem in the engine. Had
a quick look in the code, but this area is quite tortuous and I couldn't
see anything obvious.
What steps will reproduce the problem?
1. Create a custom service which checks out a workitem received via the
handleEnabledWorkItemEvent method
2. Observe the WorkItemRecord's case ID in the enabled workitem and the
checked out (Executing) equivalent returned from the checkOut method
What is the expected output? What do you see instead?
Should retain the case ID; instead gets set to the numeric part of the
workitem ID.
Occurs on YAWL 2.0.1
```
Original issue reported on code.google.com by `monsieur...@gmail.com` on 31 Mar 2010 at 10:29
|
defect
|
case id incorrectly changed on checking out a workitem when checking out an enabled workitem at least from a custom service the workitemrecord case id gets changed to the workitem id e g i had to work round this in my code by saving the case id from the enabled workitem before checking it out since the custom service just does an http interface b call i assume that this is a global problem in the engine had a quick look in the code but this area is quite tortuous and i couldn t see anything obvious what steps will reproduce the problem create a custom service which checks out a workitem received via the handleenabledworkitemevent method observe the workitemrecord s case id in the enabled workitem and the checked out executing equivalent returned from the checkout method what is the expected output what do you see instead should retain the case id instead gets set to the numeric part of the workitem id occurs on yawl original issue reported on code google com by monsieur gmail com on mar at
| 1
|
282,062
| 30,889,146,710
|
IssuesEvent
|
2023-08-04 02:18:27
|
madhans23/linux-4.1.15
|
https://api.github.com/repos/madhans23/linux-4.1.15
|
reopened
|
CVE-2019-20811 (Medium) detected in linux-stable-rtv4.1.33
|
Mend: dependency security vulnerability
|
## CVE-2019-20811 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>linux-stable-rtv4.1.33</b></p></summary>
<p>
<p>Julia Cartwright's fork of linux-stable-rt.git</p>
<p>Library home page: <a href=https://git.kernel.org/pub/scm/linux/kernel/git/julia/linux-stable-rt.git>https://git.kernel.org/pub/scm/linux/kernel/git/julia/linux-stable-rt.git</a></p>
<p>Found in base branch: <b>master</b></p></p>
</details>
</p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Source Files (2)</summary>
<p></p>
<p>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/net/core/net-sysfs.c</b>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/net/core/net-sysfs.c</b>
</p>
</details>
<p></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png?' width=19 height=20> Vulnerability Details</summary>
<p>
An issue was discovered in the Linux kernel before 5.0.6. In rx_queue_add_kobject() and netdev_queue_add_kobject() in net/core/net-sysfs.c, a reference count is mishandled, aka CID-a3e23f719f5c.
<p>Publish Date: 2020-06-03
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2019-20811>CVE-2019-20811</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: Low
- Privileges Required: Low
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: High
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-20811">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-20811</a></p>
<p>Release Date: 2020-06-03</p>
<p>Fix Resolution: v5.1-rc3</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
True
|
CVE-2019-20811 (Medium) detected in linux-stable-rtv4.1.33 - ## CVE-2019-20811 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>linux-stable-rtv4.1.33</b></p></summary>
<p>
<p>Julia Cartwright's fork of linux-stable-rt.git</p>
<p>Library home page: <a href=https://git.kernel.org/pub/scm/linux/kernel/git/julia/linux-stable-rt.git>https://git.kernel.org/pub/scm/linux/kernel/git/julia/linux-stable-rt.git</a></p>
<p>Found in base branch: <b>master</b></p></p>
</details>
</p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Source Files (2)</summary>
<p></p>
<p>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/net/core/net-sysfs.c</b>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/net/core/net-sysfs.c</b>
</p>
</details>
<p></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png?' width=19 height=20> Vulnerability Details</summary>
<p>
An issue was discovered in the Linux kernel before 5.0.6. In rx_queue_add_kobject() and netdev_queue_add_kobject() in net/core/net-sysfs.c, a reference count is mishandled, aka CID-a3e23f719f5c.
<p>Publish Date: 2020-06-03
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2019-20811>CVE-2019-20811</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: Low
- Privileges Required: Low
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: High
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-20811">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-20811</a></p>
<p>Release Date: 2020-06-03</p>
<p>Fix Resolution: v5.1-rc3</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
non_defect
|
cve medium detected in linux stable cve medium severity vulnerability vulnerable library linux stable julia cartwright s fork of linux stable rt git library home page a href found in base branch master vulnerable source files net core net sysfs c net core net sysfs c vulnerability details an issue was discovered in the linux kernel before in rx queue add kobject and netdev queue add kobject in net core net sysfs c a reference count is mishandled aka cid publish date url a href cvss score details base score metrics exploitability metrics attack vector local attack complexity low privileges required low user interaction none scope unchanged impact metrics confidentiality impact none integrity impact high availability impact none for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution step up your open source security game with mend
| 0
|
78,299
| 27,418,688,682
|
IssuesEvent
|
2023-03-01 15:21:12
|
vector-im/element-web
|
https://api.github.com/repos/vector-im/element-web
|
opened
|
Hi, when I try to deactivate my account, I get the following message: "An error occurred while communicating with the server. Please try again."
|
T-Defect
|
### Steps to reproduce
Quoting [this rageshake](https://github.com/matrix-org/element-web-rageshakes/issues/20381):
When I try to deactivate my account, I get the following message: "An error occurred while communicating with the server. Please try again.
### Outcome
.
### Operating system
.
### Browser information
.
### URL for webapp
.
### Application version
.
### Homeserver
.
### Will you send logs?
No
|
1.0
|
Hi, when I try to deactivate my account, I get the following message: "An error occurred while communicating with the server. Please try again." - ### Steps to reproduce
Quoting [this rageshake](https://github.com/matrix-org/element-web-rageshakes/issues/20381):
When I try to deactivate my account, I get the following message: "An error occurred while communicating with the server. Please try again.
### Outcome
.
### Operating system
.
### Browser information
.
### URL for webapp
.
### Application version
.
### Homeserver
.
### Will you send logs?
No
|
defect
|
hi when i try to deactivate my account i get the following message an error occurred while communicating with the server please try again steps to reproduce quoting when i try to deactivate my account i get the following message an error occurred while communicating with the server please try again outcome operating system browser information url for webapp application version homeserver will you send logs no
| 1
|
133,101
| 5,197,111,132
|
IssuesEvent
|
2017-01-23 14:50:41
|
minishift/minishift
|
https://api.github.com/repos/minishift/minishift
|
closed
|
"minishift logs" doesn't work
|
kind/bug priority/critical
|
"minishift logs" with 1.0.0-beta-2 always returns a single empty line.
This worked with "minishift 0.9.0" prefectly, also with setting the log level.
I'm running minishift on OS X with xhyve.
|
1.0
|
"minishift logs" doesn't work - "minishift logs" with 1.0.0-beta-2 always returns a single empty line.
This worked with "minishift 0.9.0" prefectly, also with setting the log level.
I'm running minishift on OS X with xhyve.
|
non_defect
|
minishift logs doesn t work minishift logs with beta always returns a single empty line this worked with minishift prefectly also with setting the log level i m running minishift on os x with xhyve
| 0
|
15,085
| 5,877,722,857
|
IssuesEvent
|
2017-05-16 01:00:06
|
eventbrite/britecharts
|
https://api.github.com/repos/eventbrite/britecharts
|
closed
|
Britecharts complete build doesn't provide the right structure
|
bug build help wanted
|
When loading /dist/bundled/britecharts.min.js, either by ES6 imports, commonJS modules or AMD modules we are not getting the expected results.
## Expected Behavior
We should get an object with this structure:
```
britecharts: {
bar: function...
brush: function...
...
}
```
## Current Behavior
Either getting just the first object or nothing being imported to the global scope (in script mode)
## Possible Solution
Right now, our individual charts are exporting the right object (AMD and CommonJS) or a global object (script load) with this shape: window.britecharts.chartName
We want the whole bundle to do the same, aggregating all the charts in an object named britecharts.
This way we should be able to do:
```
import britehcarts from 'britecharts';
import {bar} from 'britecharts';
```
or
```
<script src="/dist/bundle/britecharts.min.js"></script>
...
var barChart = window.britecharts.bar;
```
I hope the solution would be something we haven't tried yet on our webpack configuration.
prodUMD poduces our individual bundles all right
prod should produce the desired outcome. Right now looks like:
```
prod: {
entry: {
britecharts: currentChartsArray
},
devtool: 'source-map',
output: {
path: 'dist/bundled',
filename: projectName + '.min.js',
library: ['britecharts'],
libraryTarget: 'umd'
},
externals: {
d3: 'd3',
underscore: 'underscore'
},
module: {
loaders: [ defaultJSLoader ],
// Tell Webpack not to parse certain modules.
noParse: [
new RegExp(vendorsPath + '/d3/d3.js')
]
},
resolve: {
alias: {
d3: vendorsPath + '/d3'
}
},
plugins
}
```
## Steps to Reproduce (for bugs)
1. Load `/dist/bundle/britecharts.min.js`
2. Check what gives back
## Context
This actually makes our bundle unusable.
|
1.0
|
Britecharts complete build doesn't provide the right structure - When loading /dist/bundled/britecharts.min.js, either by ES6 imports, commonJS modules or AMD modules we are not getting the expected results.
## Expected Behavior
We should get an object with this structure:
```
britecharts: {
bar: function...
brush: function...
...
}
```
## Current Behavior
Either getting just the first object or nothing being imported to the global scope (in script mode)
## Possible Solution
Right now, our individual charts are exporting the right object (AMD and CommonJS) or a global object (script load) with this shape: window.britecharts.chartName
We want the whole bundle to do the same, aggregating all the charts in an object named britecharts.
This way we should be able to do:
```
import britehcarts from 'britecharts';
import {bar} from 'britecharts';
```
or
```
<script src="/dist/bundle/britecharts.min.js"></script>
...
var barChart = window.britecharts.bar;
```
I hope the solution would be something we haven't tried yet on our webpack configuration.
prodUMD poduces our individual bundles all right
prod should produce the desired outcome. Right now looks like:
```
prod: {
entry: {
britecharts: currentChartsArray
},
devtool: 'source-map',
output: {
path: 'dist/bundled',
filename: projectName + '.min.js',
library: ['britecharts'],
libraryTarget: 'umd'
},
externals: {
d3: 'd3',
underscore: 'underscore'
},
module: {
loaders: [ defaultJSLoader ],
// Tell Webpack not to parse certain modules.
noParse: [
new RegExp(vendorsPath + '/d3/d3.js')
]
},
resolve: {
alias: {
d3: vendorsPath + '/d3'
}
},
plugins
}
```
## Steps to Reproduce (for bugs)
1. Load `/dist/bundle/britecharts.min.js`
2. Check what gives back
## Context
This actually makes our bundle unusable.
|
non_defect
|
britecharts complete build doesn t provide the right structure when loading dist bundled britecharts min js either by imports commonjs modules or amd modules we are not getting the expected results expected behavior we should get an object with this structure britecharts bar function brush function current behavior either getting just the first object or nothing being imported to the global scope in script mode possible solution right now our individual charts are exporting the right object amd and commonjs or a global object script load with this shape window britecharts chartname we want the whole bundle to do the same aggregating all the charts in an object named britecharts this way we should be able to do import britehcarts from britecharts import bar from britecharts or var barchart window britecharts bar i hope the solution would be something we haven t tried yet on our webpack configuration produmd poduces our individual bundles all right prod should produce the desired outcome right now looks like prod entry britecharts currentchartsarray devtool source map output path dist bundled filename projectname min js library librarytarget umd externals underscore underscore module loaders tell webpack not to parse certain modules noparse new regexp vendorspath js resolve alias vendorspath plugins steps to reproduce for bugs load dist bundle britecharts min js check what gives back context this actually makes our bundle unusable
| 0
|
33,725
| 7,202,513,218
|
IssuesEvent
|
2018-02-06 04:27:40
|
DivinumOfficium/divinum-officium
|
https://api.github.com/repos/DivinumOfficium/divinum-officium
|
closed
|
Sancta Missa - Forma Brevior in Pentecost Ember Day
|
Priority-Medium Type-Defect auto-migrated
|
```
When using the Forma Brevior for the Pentecost Ember Saturday Mass (as seen on
06-14-2014), the first oration should appear after the Kyrie, followed by the
Lesson and Alleluia. The Gloria should appear subsequent to that.
```
Original issue reported on code.google.com by `APMarcel...@gmail.com` on 16 Jun 2014 at 3:13
|
1.0
|
Sancta Missa - Forma Brevior in Pentecost Ember Day - ```
When using the Forma Brevior for the Pentecost Ember Saturday Mass (as seen on
06-14-2014), the first oration should appear after the Kyrie, followed by the
Lesson and Alleluia. The Gloria should appear subsequent to that.
```
Original issue reported on code.google.com by `APMarcel...@gmail.com` on 16 Jun 2014 at 3:13
|
defect
|
sancta missa forma brevior in pentecost ember day when using the forma brevior for the pentecost ember saturday mass as seen on the first oration should appear after the kyrie followed by the lesson and alleluia the gloria should appear subsequent to that original issue reported on code google com by apmarcel gmail com on jun at
| 1
|
71,496
| 23,656,812,711
|
IssuesEvent
|
2022-08-26 12:04:54
|
vector-im/element-android
|
https://api.github.com/repos/vector-im/element-android
|
closed
|
Space Switching: Spaces Bottom Sheet Should Have Expandable Subspaces
|
T-Defect Z-AppLayout
|
Initially we aimed for an approach where clicking on a space closed the bottom sheet and you'd have to open the sheet again if you want to navigate even deeper.
Following internal and community feedback, we want to revert to the old style of having expandable subspaces.
<img width="500" src="https://user-images.githubusercontent.com/20701752/185088518-c65ee4fc-4516-4b9b-8728-125c253a5332.png" />
Figma link:
https://www.figma.com/file/Hw4gjP3pknMZUir8jQqmWH/%5BMobile%5D-IA-early-proposals
|
1.0
|
Space Switching: Spaces Bottom Sheet Should Have Expandable Subspaces - Initially we aimed for an approach where clicking on a space closed the bottom sheet and you'd have to open the sheet again if you want to navigate even deeper.
Following internal and community feedback, we want to revert to the old style of having expandable subspaces.
<img width="500" src="https://user-images.githubusercontent.com/20701752/185088518-c65ee4fc-4516-4b9b-8728-125c253a5332.png" />
Figma link:
https://www.figma.com/file/Hw4gjP3pknMZUir8jQqmWH/%5BMobile%5D-IA-early-proposals
|
defect
|
space switching spaces bottom sheet should have expandable subspaces initially we aimed for an approach where clicking on a space closed the bottom sheet and you d have to open the sheet again if you want to navigate even deeper following internal and community feedback we want to revert to the old style of having expandable subspaces figma link
| 1
|
250,526
| 27,098,808,432
|
IssuesEvent
|
2023-02-15 06:40:47
|
RG4421/simplivity-python
|
https://api.github.com/repos/RG4421/simplivity-python
|
opened
|
CVE-2023-0286 (High) detected in cryptography-3.3.2-cp27-cp27mu-manylinux2010_x86_64.whl
|
security vulnerability
|
## CVE-2023-0286 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>cryptography-3.3.2-cp27-cp27mu-manylinux2010_x86_64.whl</b></p></summary>
<p>cryptography is a package which provides cryptographic recipes and primitives to Python developers.</p>
<p>Library home page: <a href="https://files.pythonhosted.org/packages/bf/a0/c630e9e3b7e7ea2492db1ca47ef7f741ef1a09f19c6642ef1a16ce996d9b/cryptography-3.3.2-cp27-cp27mu-manylinux2010_x86_64.whl">https://files.pythonhosted.org/packages/bf/a0/c630e9e3b7e7ea2492db1ca47ef7f741ef1a09f19c6642ef1a16ce996d9b/cryptography-3.3.2-cp27-cp27mu-manylinux2010_x86_64.whl</a></p>
<p>Path to dependency file: /tmp/ws-scm/simplivity-python</p>
<p>Path to vulnerable library: /simplivity-python,/test_requirements.txt</p>
<p>
Dependency Hierarchy:
- :x: **cryptography-3.3.2-cp27-cp27mu-manylinux2010_x86_64.whl** (Vulnerable Library)
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
There is a type confusion vulnerability relating to X.400 address processing inside an X.509 GeneralName. X.400 addresses were parsed as an ASN1_STRING but the public structure definition for GENERAL_NAME incorrectly specified the type of the x400Address field as ASN1_TYPE. This field is subsequently interpreted by the OpenSSL function GENERAL_NAME_cmp as an ASN1_TYPE rather than an ASN1_STRING. When CRL checking is enabled (i.e. the application sets the X509_V_FLAG_CRL_CHECK flag), this vulnerability may allow an attacker to pass arbitrary pointers to a memcmp call, enabling them to read memory contents or enact a denial of service. In most cases, the attack requires the attacker to provide both the certificate chain and CRL, neither of which need to have a valid signature. If the attacker only controls one of these inputs, the other input must already contain an X.400 address as a CRL distribution point, which is uncommon. As such, this vulnerability is most likely to only affect applications which have implemented their own functionality for retrieving CRLs over a network.
<p>Publish Date: 2023-02-08
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2023-0286>CVE-2023-0286</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.4</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: High
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://www.openssl.org/news/vulnerabilities.html">https://www.openssl.org/news/vulnerabilities.html</a></p>
<p>Release Date: 2023-02-08</p>
<p>Fix Resolution: openssl-3.0.8</p>
</p>
</details>
<p></p>
***
:rescue_worker_helmet: Automatic Remediation is available for this issue
|
True
|
CVE-2023-0286 (High) detected in cryptography-3.3.2-cp27-cp27mu-manylinux2010_x86_64.whl - ## CVE-2023-0286 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>cryptography-3.3.2-cp27-cp27mu-manylinux2010_x86_64.whl</b></p></summary>
<p>cryptography is a package which provides cryptographic recipes and primitives to Python developers.</p>
<p>Library home page: <a href="https://files.pythonhosted.org/packages/bf/a0/c630e9e3b7e7ea2492db1ca47ef7f741ef1a09f19c6642ef1a16ce996d9b/cryptography-3.3.2-cp27-cp27mu-manylinux2010_x86_64.whl">https://files.pythonhosted.org/packages/bf/a0/c630e9e3b7e7ea2492db1ca47ef7f741ef1a09f19c6642ef1a16ce996d9b/cryptography-3.3.2-cp27-cp27mu-manylinux2010_x86_64.whl</a></p>
<p>Path to dependency file: /tmp/ws-scm/simplivity-python</p>
<p>Path to vulnerable library: /simplivity-python,/test_requirements.txt</p>
<p>
Dependency Hierarchy:
- :x: **cryptography-3.3.2-cp27-cp27mu-manylinux2010_x86_64.whl** (Vulnerable Library)
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
There is a type confusion vulnerability relating to X.400 address processing inside an X.509 GeneralName. X.400 addresses were parsed as an ASN1_STRING but the public structure definition for GENERAL_NAME incorrectly specified the type of the x400Address field as ASN1_TYPE. This field is subsequently interpreted by the OpenSSL function GENERAL_NAME_cmp as an ASN1_TYPE rather than an ASN1_STRING. When CRL checking is enabled (i.e. the application sets the X509_V_FLAG_CRL_CHECK flag), this vulnerability may allow an attacker to pass arbitrary pointers to a memcmp call, enabling them to read memory contents or enact a denial of service. In most cases, the attack requires the attacker to provide both the certificate chain and CRL, neither of which need to have a valid signature. If the attacker only controls one of these inputs, the other input must already contain an X.400 address as a CRL distribution point, which is uncommon. As such, this vulnerability is most likely to only affect applications which have implemented their own functionality for retrieving CRLs over a network.
<p>Publish Date: 2023-02-08
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2023-0286>CVE-2023-0286</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.4</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: High
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://www.openssl.org/news/vulnerabilities.html">https://www.openssl.org/news/vulnerabilities.html</a></p>
<p>Release Date: 2023-02-08</p>
<p>Fix Resolution: openssl-3.0.8</p>
</p>
</details>
<p></p>
***
:rescue_worker_helmet: Automatic Remediation is available for this issue
|
non_defect
|
cve high detected in cryptography whl cve high severity vulnerability vulnerable library cryptography whl cryptography is a package which provides cryptographic recipes and primitives to python developers library home page a href path to dependency file tmp ws scm simplivity python path to vulnerable library simplivity python test requirements txt dependency hierarchy x cryptography whl vulnerable library found in base branch master vulnerability details there is a type confusion vulnerability relating to x address processing inside an x generalname x addresses were parsed as an string but the public structure definition for general name incorrectly specified the type of the field as type this field is subsequently interpreted by the openssl function general name cmp as an type rather than an string when crl checking is enabled i e the application sets the v flag crl check flag this vulnerability may allow an attacker to pass arbitrary pointers to a memcmp call enabling them to read memory contents or enact a denial of service in most cases the attack requires the attacker to provide both the certificate chain and crl neither of which need to have a valid signature if the attacker only controls one of these inputs the other input must already contain an x address as a crl distribution point which is uncommon as such this vulnerability is most likely to only affect applications which have implemented their own functionality for retrieving crls over a network publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity high privileges required none user interaction none scope unchanged impact metrics confidentiality impact high integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution openssl rescue worker helmet automatic remediation is available for this issue
| 0
|
4,730
| 2,610,153,741
|
IssuesEvent
|
2015-02-26 18:48:56
|
chrsmith/republic-at-war
|
https://api.github.com/repos/chrsmith/republic-at-war
|
closed
|
Mining facilities
|
auto-migrated Priority-Medium Type-Defect
|
```
Mining facilities need to be more than limit of 1
```
-----
Original issue reported on code.google.com by `z3r0...@gmail.com` on 30 Jan 2011 at 2:06
|
1.0
|
Mining facilities - ```
Mining facilities need to be more than limit of 1
```
-----
Original issue reported on code.google.com by `z3r0...@gmail.com` on 30 Jan 2011 at 2:06
|
defect
|
mining facilities mining facilities need to be more than limit of original issue reported on code google com by gmail com on jan at
| 1
|
66,112
| 19,986,885,070
|
IssuesEvent
|
2022-01-30 19:56:06
|
akarnokd/open-ig
|
https://api.github.com/repos/akarnokd/open-ig
|
closed
|
Small graphical bug on confirmation dialog over the options screen
|
Type-Defect Priority-Low Component-UI Milestone-0.95.200
|
```
http://img205.imageshack.us/img205/4819/java2012061918250683.jpg
```
Original issue reported on code.google.com by `norbert....@gmail.com` on 19 Jun 2012 at 4:29
|
1.0
|
Small graphical bug on confirmation dialog over the options screen - ```
http://img205.imageshack.us/img205/4819/java2012061918250683.jpg
```
Original issue reported on code.google.com by `norbert....@gmail.com` on 19 Jun 2012 at 4:29
|
defect
|
small graphical bug on confirmation dialog over the options screen original issue reported on code google com by norbert gmail com on jun at
| 1
|
3,569
| 2,610,064,929
|
IssuesEvent
|
2015-02-26 18:19:06
|
chrsmith/jsjsj122
|
https://api.github.com/repos/chrsmith/jsjsj122
|
opened
|
临海治前列腺炎哪家专业
|
auto-migrated Priority-Medium Type-Defect
|
```
临海治前列腺炎哪家专业【台州五洲生殖医院】24小时健康咨
询热线:0576-88066933-(扣扣800080609)-(微信号tzwzszyy)医院地址:台州
市椒江区枫南路229号(枫南大转盘旁)乘车线路:乘坐104、108�
��118、198及椒江一金清公交车直达枫南小区,乘坐107、105、109
、112、901、 902公交车到星星广场下车,步行即可到院。
诊疗项目:阳痿,早泄,前列腺炎,前列腺增生,龟头炎,��
�精,无精。包皮包茎,精索静脉曲张,淋病等。
台州五洲生殖医院是台州最大的男科医院,权威专家在线免��
�咨询,拥有专业完善的男科检查治疗设备,严格按照国家标�
��收费。尖端医疗设备,与世界同步。权威专家,成就专业典
范。人性化服务,一切以患者为中心。
看男科就选台州五洲生殖医院,专业男科为男人。
```
-----
Original issue reported on code.google.com by `poweragr...@gmail.com` on 30 May 2014 at 8:14
|
1.0
|
临海治前列腺炎哪家专业 - ```
临海治前列腺炎哪家专业【台州五洲生殖医院】24小时健康咨
询热线:0576-88066933-(扣扣800080609)-(微信号tzwzszyy)医院地址:台州
市椒江区枫南路229号(枫南大转盘旁)乘车线路:乘坐104、108�
��118、198及椒江一金清公交车直达枫南小区,乘坐107、105、109
、112、901、 902公交车到星星广场下车,步行即可到院。
诊疗项目:阳痿,早泄,前列腺炎,前列腺增生,龟头炎,��
�精,无精。包皮包茎,精索静脉曲张,淋病等。
台州五洲生殖医院是台州最大的男科医院,权威专家在线免��
�咨询,拥有专业完善的男科检查治疗设备,严格按照国家标�
��收费。尖端医疗设备,与世界同步。权威专家,成就专业典
范。人性化服务,一切以患者为中心。
看男科就选台州五洲生殖医院,专业男科为男人。
```
-----
Original issue reported on code.google.com by `poweragr...@gmail.com` on 30 May 2014 at 8:14
|
defect
|
临海治前列腺炎哪家专业 临海治前列腺炎哪家专业【台州五洲生殖医院】 询热线 微信号tzwzszyy 医院地址 台州 (枫南大转盘旁)乘车线路 、 � �� 、 , 、 、 、 、 、 ,步行即可到院。 诊疗项目:阳痿,早泄,前列腺炎,前列腺增生,龟头炎,�� �精,无精。包皮包茎,精索静脉曲张,淋病等。 台州五洲生殖医院是台州最大的男科医院,权威专家在线免�� �咨询,拥有专业完善的男科检查治疗设备,严格按照国家标� ��收费。尖端医疗设备,与世界同步。权威专家,成就专业典 范。人性化服务,一切以患者为中心。 看男科就选台州五洲生殖医院,专业男科为男人。 original issue reported on code google com by poweragr gmail com on may at
| 1
|
81,883
| 31,797,375,254
|
IssuesEvent
|
2023-09-13 08:53:38
|
openzfs/zfs
|
https://api.github.com/repos/openzfs/zfs
|
opened
|
OpenZFS for Linux interaction problem with libata NCQ - potential data loss
|
Type: Defect
|
<!-- Please fill out the following template, which will help other contributors address your issue. -->
<!--
Thank you for reporting an issue.
*IMPORTANT* - Please check our issue tracker before opening a new issue.
Additional valuable information can be found in the OpenZFS documentation
and mailing list archives.
Please fill in as much of the template as possible.
-->
### System information
<!-- add version after "|" character -->
Linux x64 Box
--- | ---
Proxmox |
8.04 |
6.2.16-12-pve |
x64 |
OpenZFS zfs-2.1.12-pve1 |
8x WDC
### Describe the problem you're observing
There is an old issue which partly relates to this, but I think it is not classified as a bug - and what is worse, one that leads to data destruction.
Just to reiterate on what I wrote about this here: (https://github.com/openzfs/zfs/issues/10094#issuecomment-1707993156), I have a Linux box with 8 WDC 18 TByte SATA drives, 4 of which are connected through the mainboard controllers (AMD FCH variants) and 4 through an ASMEDIA ASM1166. They build a raidz2 running under Proxmox with a 6.2 kernel. During my nightly backups, the drives would regularly fail and errors showed up in the logs, more often than not "unaligned write errors".
First thing to note is that one poster in the thread mentioned that the "Unaligned write" is a bug in libata, in that "other" errors are mapped to this one in the scsi translation code (https://lore.kernel.org/all/20230623181908.2032764-1-lorenz@brun.one/). Thus, the actual error itself is meaningless.
In the issue, several possible remedies were offered, such as:
1. Faulty SATA cables (I replaced them all, no change, but I admit this could be the problem in some cases)
2. Faulty disks (Mine were known to be good, and also, errors were randomly distributed among them)
3. Power saving in the SATA link or the PCI bus (disabling this did not help)
4. Problematic controllers (Both the FCH and the ASM1166 chips as well as a JMB585 showed the same behaviour)
5. Limiting SATA speed to SATA 3.0 Gbps or even to 1.5 Gbps (3.0 Gbps did not help, and was not even possible with the ASM1166 as the speed was always reset to 6.0 Gbps, but I could check with FCH and JMB585 controllers)
6. Disabling NCQ (guess what, this helped!)
7. Replacing the SATA controllers with an LSI 9211-8i (I guess this would have helped, as others have reported, because it probably does not use NCQ)
I am 99% sure that it boils down to a bad interaction between OpenZFS and libata with NCQ enabled and I have a theory why this is so:
When you look at how NCQ works, it is a queue of up to 32 (or, to be exact, 31 for implementation reasons) tasks that can be given to the disk drive. Those tasks can be handled in any order by the drive hardware, e.g. in order to minimize seek times. Thus, when you give the drive 3 tasks, like "read sectors 1, 42 and 2", the drive might decide to reorder them and read sector 42 last, thus saving one seek in the process.
Now imagine a time of high I/O pressure, like when I do my nightly backups. OpenZFS has some queues of its own which are then given to the drives and for each task started, OpenZFS expects a result (but in no particular order). However, when a task returns, it opens up a slot in the NCQ queue, which is immediately filled with another task because of the high I/O pressure. That means that the sector 42 could potentially never be read at all, provided that other tasks are prioritized higher by the drive hardware.
I believe this is exactly what is happening, and if one task result is not received within the expected time frame, a timeout or an unspecific error occurs which is then reflected as "unaligned write".
IMHO, this is the result of putting one (or more) queues within OpenZFS in front of a smaller hardware queue (i.e. NCQ).
It explains why both solutions 6 and probably 7 from my list above cure the problem: Without NCQ, every task must first be finished before the next one can be started. It also explains why this problem is not as evident with other filesystems - were this a general problem with libata, it would have been fixed long ago.
I would even guess reducing SATA speed to 1.5 Gbps would help (one guy reported this) - I bet this is simply because the resulting speed of ~150 MByte/s is somewhat lower than modern hard disks, such that the disk can always finish tasks before the next one is started, whereas 3 Gbps is still faster than modern spinning rust.
If I am right, two things should be considered:
a. The problem should be analysed and fixed in a better way than just disabling NCQ, like throttling the libata NCQ queue if pressure gets too high, just before errors are thrown. This would give the drive time to finish existing tasks.
b. There should be a warning or some kind of automatism to disable NCQ for OpenZFS for the time being.
I also think that the performance impact of disabling NCQ with OpenZFS is probably negligible, because OpenZFS has prioritized queues for different operations anyway.
### Describe how to reproduce the problem
Create a raidz2, copy a large number of files to it, preferably from a fast source like an NVMe disk.
### Include any warning/errors/backtraces from the system logs
Irrelevant because of another bug in the libata/scsi abstraction layer, see: https://lore.kernel.org/all/20230623181908.2032764-1-lorenz@brun.one/
|
1.0
|
OpenZFS for Linux interaction problem with libata NCQ - potential data loss - <!-- Please fill out the following template, which will help other contributors address your issue. -->
<!--
Thank you for reporting an issue.
*IMPORTANT* - Please check our issue tracker before opening a new issue.
Additional valuable information can be found in the OpenZFS documentation
and mailing list archives.
Please fill in as much of the template as possible.
-->
### System information
<!-- add version after "|" character -->
Linux x64 Box
--- | ---
Proxmox |
8.04 |
6.2.16-12-pve |
x64 |
OpenZFS zfs-2.1.12-pve1 |
8x WDC
### Describe the problem you're observing
There is an old issue which partly relates to this, but I think it is not classified as a bug - and what is worse, one that leads to data destruction.
Just to reiterate on what I wrote about this here: (https://github.com/openzfs/zfs/issues/10094#issuecomment-1707993156), I have a Linux box with 8 WDC 18 TByte SATA drives, 4 of which are connected through the mainboard controllers (AMD FCH variants) and 4 through an ASMEDIA ASM1166. They build a raidz2 running under Proxmox with a 6.2 kernel. During my nightly backups, the drives would regularly fail and errors showed up in the logs, more often than not "unaligned write errors".
First thing to note is that one poster in the thread mentioned that the "Unaligned write" is a bug in libata, in that "other" errors are mapped to this one in the scsi translation code (https://lore.kernel.org/all/20230623181908.2032764-1-lorenz@brun.one/). Thus, the actual error itself is meaningless.
In the issue, several possible remedies were offered, such as:
1. Faulty SATA cables (I replaced them all, no change, but I admit this could be the problem in some cases)
2. Faulty disks (Mine were known to be good, and also, errors were randomly distributed among them)
3. Power saving in the SATA link or the PCI bus (disabling this did not help)
4. Problematic controllers (Both the FCH and the ASM1166 chips as well as a JMB585 showed the same behaviour)
5. Limiting SATA speed to SATA 3.0 Gbps or even to 1.5 Gbps (3.0 Gbps did not help, and was not even possible with the ASM1166 as the speed was always reset to 6.0 Gbps, but I could check with FCH and JMB585 controllers)
6. Disabling NCQ (guess what, this helped!)
7. Replacing the SATA controllers with an LSI 9211-8i (I guess this would have helped, as others have reported, because it probably does not use NCQ)
I am 99% sure that it boils down to a bad interaction between OpenZFS and libata with NCQ enabled and I have a theory why this is so:
When you look at how NCQ works, it is a queue of up to 32 (or, to be exact, 31 for implementation reasons) tasks that can be given to the disk drive. Those tasks can be handled in any order by the drive hardware, e.g. in order to minimize seek times. Thus, when you give the drive 3 tasks, like "read sectors 1, 42 and 2", the drive might decide to reorder them and read sector 42 last, thus saving one seek in the process.
Now imagine a time of high I/O pressure, like when I do my nightly backups. OpenZFS has some queues of its own which are then given to the drives and for each task started, OpenZFS expects a result (but in no particular order). However, when a task returns, it opens up a slot in the NCQ queue, which is immediately filled with another task because of the high I/O pressure. That means that the sector 42 could potentially never be read at all, provided that other tasks are prioritized higher by the drive hardware.
I believe this is exactly what is happening, and if one task result is not received within the expected time frame, a timeout or an unspecific error occurs which is then reflected as "unaligned write".
IMHO, this is the result of putting one (or more) queues within OpenZFS in front of a smaller hardware queue (i.e. NCQ).
It explains why both solutions 6 and probably 7 from my list above cure the problem: Without NCQ, every task must first be finished before the next one can be started. It also explains why this problem is not as evident with other filesystems - were this a general problem with libata, it would have been fixed long ago.
I would even guess reducing SATA speed to 1.5 Gbps would help (one guy reported this) - I bet this is simply because the resulting speed of ~150 MByte/s is somewhat lower than modern hard disks, such that the disk can always finish tasks before the next one is started, whereas 3 Gbps is still faster than modern spinning rust.
If I am right, two things should be considered:
a. The problem should be analysed and fixed in a better way than just disabling NCQ, like throttling the libata NCQ queue if pressure gets too high, just before errors are thrown. This would give the drive time to finish existing tasks.
b. There should be a warning or some kind of automatism to disable NCQ for OpenZFS for the time being.
I also think that the performance impact of disabling NCQ with OpenZFS is probably negligible, because OpenZFS has prioritized queues for different operations anyway.
### Describe how to reproduce the problem
Create a raidz2, copy a large number of files to it, preferably from a fast source like an NVMe disk.
### Include any warning/errors/backtraces from the system logs
Irrelevant because of another bug in the libata/scsi abstraction layer, see: https://lore.kernel.org/all/20230623181908.2032764-1-lorenz@brun.one/
|
defect
|
openzfs for linux interaction problem with libata ncq potential data loss thank you for reporting an issue important please check our issue tracker before opening a new issue additional valuable information can be found in the openzfs documentation and mailing list archives please fill in as much of the template as possible system information linux box proxmox pve openzfs zfs wdc describe the problem you re observing there is an old issue which partly relates to this but i think it is not classified as a bug and what is worse one that leads to data destruction just to reiterate on what i wrote about this here i have a linux box with wdc tbyte sata drives of which are connected through the mainboard controllers amd fch variants and through an asmedia they build a running under proxmox with a kernel during my nightly backups the drives would regularly fail and errors showed up in the logs more often than not unaligned write errors first thing to note is that one poster in the thread mentioned that the unaligned write is a bug in libata in that other errors are mapped to this one in the scsi translation code thus the actual error itself is meaningless in the issue several possible remedies were offered such as faulty sata cables i replaced them all no change but i admit this could be the problem in some cases faulty disks mine were known to be good and also errors were randomly distributed among them power saving in the sata link or the pci bus disabling this did not help problematic controllers both the fch and the chips as well as a showed the same behaviour limiting sata speed to sata gbps or even to gbps gbps did not help and was not even possible with the as the speed was always reset to gbps but i could check with fch and controllers disabling ncq guess what this helped replacing the sata controllers with an lsi i guess this would have helped as others have reported because it probably does not use ncq i am sure that it boils down to a bad interaction between 
openzfs and libata with ncq enabled and i have a theory why this is so when you look at how ncq works it is a queue of up to or to be exact for implementation reasons tasks that can be given to the disk drive those tasks can be handled in any order by the drive hardware e g in order to minimize seek times this when you give the drive tasks like read sectors and the drive might decide to reorder them and read sector last thus saving one seek in the process now imagine a time of high i o pressure like when i do my nightly backups openzfs has some queues of its own which are then given to the drives and for each task started openzfs expects a result but in no particular order however when a task returns it opens up a slot in the ncq queue which is immediately filled with another task because of the high i o pressure that means that the sector could potentially never be read at all provided that other tasks are prioritized higher by the drive hardware i believe this is exactly what is happening and if one task result is not received within the expected time frame a timeout or an unspecific error occurs which is then reflected as unaligned write imho this is the result of putting one or more queues within openzfs in front of a smaller hardware queue i e ncq it explains why both solutions and probably from my list above cure the problem without ncq every task must first be finished before the next one can be started it also explains why this problem is not as evident with other filesystems were this a general problem with libata it would have been fixed long ago i would even guess reducing sata speed to gbps would help one guy reported this i bet this is simply because the resulting speed of mbyte s is somewhat lower than modern hard disks such that the disk can always finish tasks before the next one is started whereas gpbs is still faster than modern spinning rust if i am right two things should be considered a the problem should be analysed and fixed in a better way 
than just disabling ncq like throttling the libata ncq queue if pressure gets too high just before errors are thrown this would give the drive time to finish existing tasks b there should be a warning or some kind of automatism to disable ncq for openzfs for the time being i also think that the performance impact of disabling ncq with openzfs is probably neglible because openzfs has prioritized queues for different operations anyway describe how to reproduce the problem create a copy a large number of files to it preferably from a fast source like an nvme disk include any warning errors backtraces from the system logs irrelevant because of another bug in the libata scsi abstraction layer see
| 1
|
9,720
| 2,615,166,455
|
IssuesEvent
|
2015-03-01 06:46:59
|
chrsmith/reaver-wps
|
https://api.github.com/repos/chrsmith/reaver-wps
|
opened
|
AP router disappeared from wash list after 3 attempts and received "detected ap rate limiting waiting 60 seconds before re-checking
|
auto-migrated Priority-Triage Type-Defect
|
```
I had two Arris routers available in wash with lock status as NO initially. I
tried the basic command of reaver to do them, and after 3 attempts, it kept on
showing "detected ap rate limiting waiting 60 seconds before re-checking". I
stopped the attack and when I tried to resume it, I couldn't even associate with
them; association-fail messages and timeout messages kept on showing. I checked the
wash list, and they had both disappeared from it. I don't know if my initial
attack was discovered by their security software and they turned off the WPS
feature, or the two routers turned off WPS by themselves due to my attack.
Later I got familiar with reaver and tried to play with those arguments, but
none of them were working, which showed their WPS seemed already turned off
permanently for sure.
So what should I do?
I have some plans in mind.
1. Try to force these APs to reboot or reset, either by crashing them via DDoS or
anything else that will bring similar results, and then they will reboot or
reset by themselves or their holders will find their network are not working so
manually reset their routers. So by that I can use reaver again, yet the
new problem is how to prevent them from locking the WPS again.
2. I also tried aircrack, but since their default passwords are up to 16
characters with a combination of numbers and letters, the dictionary would be
extremely large. So after a try, I gave up.
3. As far as I know, I can somehow use the MAC address or the manufacturer of the router
to search for their default PIN and WEP online (I believe they are still
default).
4. I don't know if I can log in to the routers' gateway page (192.168.100.1)
without actually being successfully connected to the router (I mean just type a random
password when I try to connect them and it will still show the status as
"connected" in my connection panel). (And again, I still believe that username
and password are still default, which are admin and 1234).
Above are all the ways I can think of, could you please give me some
suggestions or how did you successfully crack those self-locked-permanently
routers?
Many many thanks for your help in advance.
```
Original issue reported on code.google.com by `fdsavv...@gmail.com` on 4 Aug 2013 at 8:42
|
1.0
|
AP router disappeared from wash list after 3 attempts and received "detected ap rate limiting waiting 60 seconds before re-checking - ```
I had two Arris routers available in wash with lock status as NO initially. I
tried the basic command of reaver to do them, and after 3 attempts, it kept on
showing "detected ap rate limiting waiting 60 seconds before re-checking". I
stopped the attack and when I tried to resume it, I couldn't even associate with
them; association-fail messages and timeout messages kept on showing. I checked the
wash list, and they had both disappeared from it. I don't know if my initial
attack was discovered by their security software and they turned off the WPS
feature, or the two routers turned off WPS by themselves due to my attack.
Later I got familiar with reaver and tried to play with those arguments, but
none of them were working, which showed their WPS seemed already turned off
permanently for sure.
So what should I do?
I have some plans in mind.
1. Try to force these APs to reboot or reset, either by crashing them via DDoS or
anything else that will bring similar results, and then they will reboot or
reset by themselves or their holders will find their network are not working so
manually reset their routers. So by that I can use reaver again, yet the
new problem is how to prevent them from locking the WPS again.
2. I also tried aircrack, but since their default passwords are up to 16
characters with a combination of numbers and letters, the dictionary would be
extremely large. So after a try, I gave up.
3. As far as I know, I can somehow use the MAC address or the manufacturer of the router
to search for their default PIN and WEP online (I believe they are still
default).
4. I don't know if I can log in to the routers' gateway page (192.168.100.1)
without actually successfully connected to the router(I mean just type a random
password when I try to connect them and it will still show the status as
"connected" in my connection panel). (And again, I still believe that username
and password are still default, which are admin and 1234).
Above are all the ways I can think of, could you please give me some
suggestions or how did you successfully crack those self-locked-permanently
routers?
Many many thanks for your help in advance.
```
Original issue reported on code.google.com by `fdsavv...@gmail.com` on 4 Aug 2013 at 8:42
|
defect
|
ap router disappeared from wash list after attempts and received detected ap rate limiting waiting seconds before re checking i had two arris routers available in wash with lock status as no initially i tried the basic command of reaver to do them and after attempts it kept on showing detected ap rate limiting waiting seconds before re checking i stopped the attack and when i try to resume it i couldn t even accociate with them accosiate fail message and timeout message kept on showing i checked wash list they both disappeared from the wash list i don t know if my initial attack was discovered by their security software and they turned off the wps feature or the two routers turned off the wps by temselves due to my attack later i got familiar with reaver and tried to play with those arguements but none of them were working which showed their wps seemed already turned off permanently for sure so what should i do i have some plans in mind try to force these aps to reboot or reset either by crash them via ddos or anything else that will bring the similar results and then they will reboot or reset by themselves or their holders will find their network are not working so manually reset their routers so by that i can use reaver again but yet the new problem is how to prevent they lock the wps again i also tried the aircrack but since their default passwords are up to chacacters with combination of numbers and letters the dictionary would be extremely large so after a try i gave up as i know i can somehow use the mac address or the manufactuer of the router to search for their default pin and wep online i believe they are still defalut i don t know if i can log in to the routers gateway page without actually successfully connected to the router i mean just type a random password when i try to connect them and it will still show the status as connected in my connection panel and again i still believe that username and password are still default which are admin and above 
are all the ways i can think of could you please give me some suggestions or how did you successfully crack those self locked permanently routers many many thanks for your help in advance original issue reported on code google com by fdsavv gmail com on aug at
| 1
|
43,078
| 5,575,066,377
|
IssuesEvent
|
2017-03-28 00:21:37
|
18F/fec-style
|
https://api.github.com/repos/18F/fec-style
|
opened
|
Finalize dropdown menus
|
Work: Design Work: Front-end
|
We kind of left the [last site nav](https://github.com/18F/fec-cms/issues/827) thread hanging, so I wanted to make a fresh issue so we can resolve all remaining issues, both in the content and development.
In addition to finalizing language (now that we have a new name for the compliance section) we should also decide if we're keeping the search bar in the campaign finance data one as well.
**Completion criteria**
- [ ] Finalize language
- [ ] Implement desktop views with correct links
- [ ] Implement mobile views with correct links
|
1.0
|
Finalize dropdown menus - We kind of left the [last site nav](https://github.com/18F/fec-cms/issues/827) thread hanging, so I wanted to make a fresh issue so we can resolve all remaining issues, both in the content and development.
In addition to finalizing language (now that we have a new name for the compliance section) we should also decide if we're keeping the search bar in the campaign finance data one as well.
**Completion criteria**
- [ ] Finalize language
- [ ] Implement desktop views with correct links
- [ ] Implement mobile views with correct links
|
non_defect
|
finalize dropdown menus we kind of left the thread hanging so i wanted to make a fresh issue so we can resolve all remaining issues both in the content and development in addition to finalizing language now that we have a new name for the compliance section we should also decide if we re keeping the search bar in the campaign finance data one as well completion criteria finalize language implement desktop views with correct links implement mobile views with correct links
| 0
|
26,618
| 4,773,056,251
|
IssuesEvent
|
2016-10-26 22:51:55
|
wheeler-microfluidics/microdrop
|
https://api.github.com/repos/wheeler-microfluidics/microdrop
|
opened
|
Convert all on_app_init calls to on_plugin_enable (Trac #25)
|
defect Incomplete Migration microdrop Migrated from Trac
|
Migrated from http://microfluidics.utoronto.ca/ticket/25
```json
{
"status": "closed",
"changetime": "2014-04-17T19:39:01",
"description": "",
"reporter": "cfobel",
"cc": "",
"resolution": "fixed",
"_ts": "1397763541728826",
"component": "microdrop",
"summary": "Convert all on_app_init calls to on_plugin_enable",
"priority": "major",
"keywords": "",
"version": "0.1",
"time": "2012-01-04T03:26:29",
"milestone": "Microdrop 1.0",
"owner": "cfobel",
"type": "defect"
}
```
|
1.0
|
Convert all on_app_init calls to on_plugin_enable (Trac #25) - Migrated from http://microfluidics.utoronto.ca/ticket/25
```json
{
"status": "closed",
"changetime": "2014-04-17T19:39:01",
"description": "",
"reporter": "cfobel",
"cc": "",
"resolution": "fixed",
"_ts": "1397763541728826",
"component": "microdrop",
"summary": "Convert all on_app_init calls to on_plugin_enable",
"priority": "major",
"keywords": "",
"version": "0.1",
"time": "2012-01-04T03:26:29",
"milestone": "Microdrop 1.0",
"owner": "cfobel",
"type": "defect"
}
```
|
defect
|
convert all on app init calls to on plugin enable trac migrated from json status closed changetime description reporter cfobel cc resolution fixed ts component microdrop summary convert all on app init calls to on plugin enable priority major keywords version time milestone microdrop owner cfobel type defect
| 1
|
64,723
| 3,214,521,006
|
IssuesEvent
|
2015-10-07 02:44:21
|
cs2103aug2015-w09-3j/main
|
https://api.github.com/repos/cs2103aug2015-w09-3j/main
|
closed
|
A user can read current tasks by using the 'view' command
|
priority.high type.story
|
... so that the user can view the current outstanding tasks
|
1.0
|
A user can read current tasks by using the 'view' command - ... so that the user can view the current outstanding tasks
|
non_defect
|
a user can read current tasks by using the view command so that the user can view the current outstanding tasks
| 0
|
68,120
| 7,088,160,425
|
IssuesEvent
|
2018-01-11 20:27:06
|
sass/libsass
|
https://api.github.com/repos/sass/libsass
|
closed
|
Case-insensitive attribute selectors support
|
Bug - Confirmed Dev - Test Written
|
Hi there,
I'm trying to use case-insensitive attribute selector (e.g. [charset="utf-8" i]) and am currently unable to compile [my project](https://github.com/ffoodd/a11y.css/issues/265). I get:
`Error: unterminated attribute selector for charset` when trying to compile.
[Reduced test case](http://libsass.ocbnet.ch/srcmap/#QGltcG9ydCAiaHR0cDovL2Nkbi5yYXdnaXQuY29tL3Rob3VnaHRib3QvYm91cmJvbi9tYXN0ZXIvYXBwL2Fzc2V0cy9zdHlsZXNoZWV0cy9ib3VyYm9uIjsKCltjaGFyc2V0PSJ1dGYtOCIgaV0gewogIGRpc3BsYXk6IGJsb2NrOwp9)
It's supported by [sass](https://github.com/sass/sass/issues/2405); but at the time of writing I'm using gulp-sass. I don't think gulp-sass nor node-sass are related to the issue since as shown in the test case, compiling with libsass only doesn't work.
|
1.0
|
Case-insensitive attribute selectors support - Hi there,
I'm trying to use case-insensitive attribute selector (e.g. [charset="utf-8" i]) and am currently unable to compile [my project](https://github.com/ffoodd/a11y.css/issues/265). I get:
`Error: unterminated attribute selector for charset` when trying to compile.
[Reduced test case](http://libsass.ocbnet.ch/srcmap/#QGltcG9ydCAiaHR0cDovL2Nkbi5yYXdnaXQuY29tL3Rob3VnaHRib3QvYm91cmJvbi9tYXN0ZXIvYXBwL2Fzc2V0cy9zdHlsZXNoZWV0cy9ib3VyYm9uIjsKCltjaGFyc2V0PSJ1dGYtOCIgaV0gewogIGRpc3BsYXk6IGJsb2NrOwp9)
It's supported by [sass](https://github.com/sass/sass/issues/2405); but at the time of writing I'm using gulp-sass. I don't think gulp-sass nor node-sass are related to the issue since as shown in the test case, compiling with libsass only doesn't work.
|
non_defect
|
case insensitive attribute selectors support hi there i m trying to use case insensitive attribute selector e g and am currently unable to compile i get error unterminated attribute selector for charset when trying to compile it s supported by but at the time of writing i m using gulp sass i don t think gulp sass nor node sass are related to the issue since as shown in the test case compiling with libsass only doesn t work
| 0
|
126,411
| 12,291,827,956
|
IssuesEvent
|
2020-05-10 11:57:39
|
mtg-to/obs-key-listener
|
https://api.github.com/repos/mtg-to/obs-key-listener
|
opened
|
Create documentation for the bindings file format
|
documentation enhancement
|
As a new user I want to read documentation for the bindings file so that I can learn how to customize it.
|
1.0
|
Create documentation for the bindings file format - As a new user I want to read documentation for the bindings file so that I can learn how to customize it.
|
non_defect
|
create documentation for the bindings file format as a new user i want to read documentation for the bindings file so that i can learn how to customize it
| 0
|
18,830
| 3,089,691,836
|
IssuesEvent
|
2015-08-25 23:03:04
|
google/googletest
|
https://api.github.com/repos/google/googletest
|
opened
|
FTBFS in gtest:
|
auto-migrated Priority-Medium Type-Defect
|
_From @GoogleCodeExporter on August 24, 2015 22:40_
```
On Debian 1.7.0 FTBFS as follows:
~~~~
make[4] Entering directory '/mnt/tmpssd/src/libgmock/gmock-1.7.0/gtest'
echo "'make install' is dangerous and not supported. Instead, see README for
how to integrate Google Test into your build system."
'make install' is dangerous and not supported. Instead, see README for how to
integrate Google Test into your build system.
false
~~~~
```
Original issue reported on code.google.com by `only...@gmail.com` on 12 Jul 2015 at 2:14
_Copied from original issue: google/googlemock#177_
|
1.0
|
FTBFS in gtest: - _From @GoogleCodeExporter on August 24, 2015 22:40_
```
On Debian 1.7.0 FTBFS as follows:
~~~~
make[4] Entering directory '/mnt/tmpssd/src/libgmock/gmock-1.7.0/gtest'
echo "'make install' is dangerous and not supported. Instead, see README for
how to integrate Google Test into your build system."
'make install' is dangerous and not supported. Instead, see README for how to
integrate Google Test into your build system.
false
~~~~
```
Original issue reported on code.google.com by `only...@gmail.com` on 12 Jul 2015 at 2:14
_Copied from original issue: google/googlemock#177_
|
defect
|
ftbfs in gtest from googlecodeexporter on august on debian ftbfs as follows make entering directory mnt tmpssd src libgmock gmock gtest echo make install is dangerous and not supported instead see readme for how to integrate google test into your build system make install is dangerous and not supported instead see readme for how to integrate google test into your build system false original issue reported on code google com by only gmail com on jul at copied from original issue google googlemock
| 1
|
50,326
| 13,187,446,112
|
IssuesEvent
|
2020-08-13 03:26:24
|
icecube-trac/tix3
|
https://api.github.com/repos/icecube-trac/tix3
|
closed
|
funky redirect msg (Trac #532)
|
Migrated from Trac defect tools/ports
|
14:02 <@straszhm> % svn ls http://code.icecube.wisc.edu/tools
14:02 <@straszhm> svn: Repository moved permanently to 'http://code.icecube.wisc.edu/tools/'; please
relocate
<details>
<summary><em>Migrated from https://code.icecube.wisc.edu/ticket/532
, reported by troy and owned by cgils</em></summary>
<p>
```json
{
"status": "closed",
"changetime": "2009-02-09T21:08:24",
"description": "14:02 <@straszhm> % svn ls http://code.icecube.wisc.edu/tools\n14:02 <@straszhm> svn: Repository moved permanently to 'http://code.icecube.wisc.edu/tools/'; please \n relocate\n\n",
"reporter": "troy",
"cc": "",
"resolution": "wont or cant fix",
"_ts": "1234213704000000",
"component": "tools/ports",
"summary": "funky redirect msg",
"priority": "normal",
"keywords": "",
"time": "2009-02-09T19:04:26",
"milestone": "",
"owner": "cgils",
"type": "defect"
}
```
</p>
</details>
|
1.0
|
funky redirect msg (Trac #532) - 14:02 <@straszhm> % svn ls http://code.icecube.wisc.edu/tools
14:02 <@straszhm> svn: Repository moved permanently to 'http://code.icecube.wisc.edu/tools/'; please
relocate
<details>
<summary><em>Migrated from https://code.icecube.wisc.edu/ticket/532
, reported by troy and owned by cgils</em></summary>
<p>
```json
{
"status": "closed",
"changetime": "2009-02-09T21:08:24",
"description": "14:02 <@straszhm> % svn ls http://code.icecube.wisc.edu/tools\n14:02 <@straszhm> svn: Repository moved permanently to 'http://code.icecube.wisc.edu/tools/'; please \n relocate\n\n",
"reporter": "troy",
"cc": "",
"resolution": "wont or cant fix",
"_ts": "1234213704000000",
"component": "tools/ports",
"summary": "funky redirect msg",
"priority": "normal",
"keywords": "",
"time": "2009-02-09T19:04:26",
"milestone": "",
"owner": "cgils",
"type": "defect"
}
```
</p>
</details>
|
defect
|
funky redirect msg trac svn ls svn repository moved permanently to please relocate migrated from reported by troy and owned by cgils json status closed changetime description svn ls svn repository moved permanently to please n relocate n n reporter troy cc resolution wont or cant fix ts component tools ports summary funky redirect msg priority normal keywords time milestone owner cgils type defect
| 1
|
428,660
| 12,414,895,530
|
IssuesEvent
|
2020-05-22 15:16:55
|
metabase/metabase
|
https://api.github.com/repos/metabase/metabase
|
closed
|
New users see whole feed even if they don't have permissions to something
|
Administration/Permissions Priority:P2 Type:Bug
|
When a new user signs up, they see the full activity feed even with questions that they don't have permissions to. When they click on one of the activity links, they get:
Sorry, you don’t have permission to see that.
|
1.0
|
New users see whole feed even if they don't have permissions to something - When a new user signs up, they see the full activity feed even with questions that they don't have permissions to. When they click on one of the activity links, they get:
Sorry, you don’t have permission to see that.
|
non_defect
|
new users see whole feed even if they don t have permissions to something when a new user signs up they see the full activity feed even with questions that they don t have permissions to when they click on one of the activity links they get sorry you don’t have permission to see that
| 0
|
448,639
| 12,955,010,523
|
IssuesEvent
|
2020-07-20 05:18:59
|
kubesphere/kubesphere
|
https://api.github.com/repos/kubesphere/kubesphere
|
closed
|
The suggested rule is inconsistent with the actual checksum
|
area/console kind/bug kind/need-to-verify priority/low
|
create service,set name is 123, invalid format prompt appears
<img width="584" alt="validation failed" src="https://user-images.githubusercontent.com/36271543/87416848-e22ba700-c601-11ea-9e06-27bde905b7cc.png">
/kind bug
/area console
/assign @leoendless
/milestone 3.0.0
/priority low
|
1.0
|
The suggested rule is inconsistent with the actual checksum - create service,set name is 123, invalid format prompt appears
<img width="584" alt="validation failed" src="https://user-images.githubusercontent.com/36271543/87416848-e22ba700-c601-11ea-9e06-27bde905b7cc.png">
/kind bug
/area console
/assign @leoendless
/milestone 3.0.0
/priority low
|
non_defect
|
the suggested rule is inconsistent with the actual checksum create service set name is invalid format prompt appears img width alt 校验不通过 src kind bug area console assign leoendless milestone priority low
| 0
|
75,044
| 25,497,524,349
|
IssuesEvent
|
2022-11-27 21:19:56
|
scipy/scipy
|
https://api.github.com/repos/scipy/scipy
|
closed
|
BUG: DOC: editing docstring example in svds
|
defect scipy.sparse.linalg Documentation
|
### Is your feature request related to a problem? Please describe.
Changed this issue to BUG, since coincidently every PR now fails on exactly this docstring example in refguide_asv_check , e.g., see https://github.com/scipy/scipy/pull/14563#issuecomment-1200290619
The only docstring example in svds is a bit strange. It's essentially a dense problem forced into sparse by using the sparse format of singular vectors although in practical applications the singular vectors are typically dense - only the matrix may be sparse.
The second issue that the example computes the accuracy of individual vectors, which only makes sense in the absence of clustered singular values.
### Describe the solution you'd like.
Turn the existing example into dense, e.g.,
```
import numpy as np
from scipy.stats import ortho_group
from scipy.sparse.linalg import svds
rng = np.random.default_rng()
orthogonal = ortho_group.rvs(10, random_state=rng)
s = [0.0001, 0.001, 3, 4, 5] # singular values
u = orthogonal[:, :5] # left singular vectors
vT = orthogonal[:, 5:].T # right singular vectors
A = u @ diags(s) @ vT
u5, s5, vT5 = svds(A, k=5)
A5 = u5 @ np.diag(s5) @ vT5
np.allclose(A5, A.toarray())
np.allclose(s5, s)
np.allclose(np.abs(u5), np.abs(u))
np.allclose(np.abs(vT5), np.abs(vT))
```
And a new example with clustered singular vectors and use something like
```
from scipy.linalg import subspace_angles
subspace_angles(u5[:, :2], u[:, :2])
subspace_angles(vT5[:2, :], vT[:2, :])
```
to check their accuracy
Add yet another truly sparse example that resembles a practical scenario.
|
1.0
|
BUG: DOC: editing docstring example in svds - ### Is your feature request related to a problem? Please describe.
Changed this issue to BUG, since coincidently every PR now fails on exactly this docstring example in refguide_asv_check , e.g., see https://github.com/scipy/scipy/pull/14563#issuecomment-1200290619
The only docstring example in svds is a bit strange. It's essentially a dense problem forced into sparse by using the sparse format of singular vectors although in practical applications the singular vectors are typically dense - only the matrix may be sparse.
The second issue that the example computes the accuracy of individual vectors, which only makes sense in the absence of clustered singular values.
### Describe the solution you'd like.
Turn the existing example into dense, e.g.,
```
import numpy as np
from scipy.stats import ortho_group
from scipy.sparse.linalg import svds
rng = np.random.default_rng()
orthogonal = ortho_group.rvs(10, random_state=rng)
s = [0.0001, 0.001, 3, 4, 5] # singular values
u = orthogonal[:, :5] # left singular vectors
vT = orthogonal[:, 5:].T # right singular vectors
A = u @ diags(s) @ vT
u5, s5, vT5 = svds(A, k=5)
A5 = u5 @ np.diag(s5) @ vT5
np.allclose(A5, A.toarray())
np.allclose(s5, s)
np.allclose(np.abs(u5), np.abs(u))
np.allclose(np.abs(vT5), np.abs(vT))
```
And a new example with clustered singular vectors and use something like
```
from scipy.linalg import subspace_angles
subspace_angles(u5[:, :2], u[:, :2])
subspace_angles(vT5[:2, :], vT[:2, :])
```
to check their accuracy
Add yet another truly sparse example that resembles a practical scenario.
|
defect
|
bug doc editing docstring example in svds is your feature request related to a problem please describe changed this issue to bug since coincidently every pr now fails on exactly this docstring example in refguide asv check e g see the only docstring example in svds is a bit strange it s essentially a dense problem forced into sparse by using the sparse format of singular vectors although in practical applications the singular vectors are typically dense only the matrix may be sparse the second issue that the example computes the accuracy of individual vectors which only makes sense in the absence of clustered singular values describe the solution you d like turn the existing example into dense e g import numpy as np from scipy stats import ortho group from scipy sparse linalg import svds rng np random default rng orthogonal ortho group rvs random state rng s singular values u orthogonal left singular vectors vt orthogonal t right singular vectors a u diags s vt svds a k np diag np allclose a toarray np allclose s np allclose np abs np abs u np allclose np abs np abs vt and a new example with clustered singular vectors and use something like from scipy linalg import subspace angles subspace angles u subspace angles vt to check their accuracy add yet another truly sparse example that resembles a practical scenario
| 1
|
147,020
| 23,156,687,356
|
IssuesEvent
|
2022-07-29 13:39:55
|
nextcloud/android
|
https://api.github.com/repos/nextcloud/android
|
closed
|
Fast scroll list view
|
enhancement design approved hacktoberfest
|
Enable a handler that allows fast scrolling to easily scroll to the bottom, e.g. when navigating to the oldest file.
Bug:
- [ ] fast scroll vanishes after switching to subfolder
|
1.0
|
Fast scroll list view - Enable a handler that allows fast scrolling to easily scroll to the bottom, e.g. when navigating to the oldest file.
Bug:
- [ ] fast scroll vanishes after switching to subfolder
|
non_defect
|
fast scroll list view enable a handler that allows fast scrolling to easily scroll to the bottom e g when navigate to the oldest file bug fast scroll vanishes after switching to subfolder
| 0
|
18,786
| 10,228,950,887
|
IssuesEvent
|
2019-08-17 08:00:19
|
goharbor/harbor
|
https://api.github.com/repos/goharbor/harbor
|
closed
|
Make DB connections limits configurable
|
area/performance
|
Hello,
We currently reach our DB max connections limit.
We would like to configure the max number of opened and idled connection on the DB.
We found 2 functions that seems to do the job on the harbor orm: [SetMaxIdleConns](https://beego.me/docs/mvc/model/orm.md#setmaxidleconns) and [SetMaxOpenConns](https://beego.me/docs/mvc/model/orm.md#setmaxopenconns).
Is that possible to expose the 2 values in configuration ?
Thomas,
OVH
|
True
|
Make DB connections limits configurable - Hello,
We currently reach our DB max connections limit.
We would like to configure the max number of opened and idled connection on the DB.
We found 2 functions that seems to do the job on the harbor orm: [SetMaxIdleConns](https://beego.me/docs/mvc/model/orm.md#setmaxidleconns) and [SetMaxOpenConns](https://beego.me/docs/mvc/model/orm.md#setmaxopenconns).
Is that possible to expose the 2 values in configuration ?
Thomas,
OVH
|
non_defect
|
make db connections limits configurable hello we currently reach our db max connections limit we would like to configure the max number of opened and idled connection on the db we found functions that seems to do the job on the harbor orm and is that possible to expose the values in configuration thomas ovh
| 0
|
42,990
| 11,415,873,961
|
IssuesEvent
|
2020-02-02 13:57:17
|
nanopb/nanopb
|
https://api.github.com/repos/nanopb/nanopb
|
closed
|
Generator error on python-protobuf 3.6 and older
|
Component-Generator FixedInGit Priority-Medium Type-Defect
|
When doing the following steps:
- git clone https://github.com/nanopb/nanopb.git
- cd nanopb
- cmake .
- make
- cd generator/proto
- make
- cd ../../tests
- scons
The tests fail somewhere in nanopb_generator.py:1200:
`Traceback (most recent call last):
File "/home/bo.lind/nanopb/generator/nanopb_generator.py", line 2017, in <module>
main_plugin()
File "/home/bo.lind/nanopb/generator/nanopb_generator.py", line 2002, in main_plugin
results = process_file(filename, fdesc, options, other_files)
File "/home/bo.lind/nanopb/generator/nanopb_generator.py", line 1856, in process_file
headerdata = ''.join(f.generate_header(includes, headerbasename, options))
File "/home/bo.lind/nanopb/generator/nanopb_generator.py", line 1513, in generate_header
yield msg.fields_declaration(self.dependencies) + '\n'
File "/home/bo.lind/nanopb/generator/nanopb_generator.py", line 1092, in fields_declaration
defval = self.default_value(dependencies)
File "/home/bo.lind/nanopb/generator/nanopb_generator.py", line 1202, in default_value
field.ClearField('default_value')
TypeError: field name must be a string`
This is on Ubuntu 18.04.3 LTS, x86_64.
|
1.0
|
Generator error on python-protobuf 3.6 and older - When doing the following steps:
- git clone https://github.com/nanopb/nanopb.git
- cd nanopb
- cmake .
- make
- cd generator/proto
- make
- cd ../../tests
- scons
The tests fail somewhere in nanopb_generator.py:1200:
`Traceback (most recent call last):
File "/home/bo.lind/nanopb/generator/nanopb_generator.py", line 2017, in <module>
main_plugin()
File "/home/bo.lind/nanopb/generator/nanopb_generator.py", line 2002, in main_plugin
results = process_file(filename, fdesc, options, other_files)
File "/home/bo.lind/nanopb/generator/nanopb_generator.py", line 1856, in process_file
headerdata = ''.join(f.generate_header(includes, headerbasename, options))
File "/home/bo.lind/nanopb/generator/nanopb_generator.py", line 1513, in generate_header
yield msg.fields_declaration(self.dependencies) + '\n'
File "/home/bo.lind/nanopb/generator/nanopb_generator.py", line 1092, in fields_declaration
defval = self.default_value(dependencies)
File "/home/bo.lind/nanopb/generator/nanopb_generator.py", line 1202, in default_value
field.ClearField('default_value')
TypeError: field name must be a string`
This is on Ubuntu 18.04.3 LTS, x86_64.
|
defect
|
generator error on python protobuf and older when doing the following steps git clone cd nanopb cmake make cd generator proto make cd tests scons the tests fail somewhere in nanopb generator py traceback most recent call last file home bo lind nanopb generator nanopb generator py line in main plugin file home bo lind nanopb generator nanopb generator py line in main plugin results process file filename fdesc options other files file home bo lind nanopb generator nanopb generator py line in process file headerdata join f generate header includes headerbasename options file home bo lind nanopb generator nanopb generator py line in generate header yield msg fields declaration self dependencies n file home bo lind nanopb generator nanopb generator py line in fields declaration defval self default value dependencies file home bo lind nanopb generator nanopb generator py line in default value field clearfield default value typeerror field name must be a string this in on ubuntu lts
| 1
|
76,309
| 26,356,466,564
|
IssuesEvent
|
2023-01-11 10:05:02
|
jOOQ/jOOQ
|
https://api.github.com/repos/jOOQ/jOOQ
|
opened
|
Wrong SQL generated in SQL Server RETURNING emulation, when generated aliases conflict
|
T: Defect C: Functionality C: DB: SQL Server P: Medium E: Professional Edition E: Enterprise Edition
|
Somewhat related to recent issues that were fixed in the context of https://github.com/jOOQ/jOOQ/issues/14477
In SQL Server's `RETURNING` emulation, it can happen that generated aliases of the `@result` table create a conflict, such as in this instance:
```sql
declare @result table (
alias_63 int,
alias_22869988 int,
alias_128594141 int,
alias_63 int,
ID int
);
insert into t_comp_client_virtual_1 (ID, VAL)
output
1,
(inserted.VAL * 2),
(1 + (inserted.VAL * 2) + 1),
1,
inserted.ID
into @result
values (
1,
1
);
select alias_63, alias_22869988, alias_128594141, alias_63
from @result r;
```
This produces an error:
```
org.jooq.exception.DataAccessException: SQL [declare @result table (alias_63 int, alias_22869988 int, alias_128594141 int, alias_63 int, ID int); insert into t_comp_client_virtual_1 (ID, VAL) output ?, (inserted.VAL * ?), (? + (inserted.VAL * ?) + ?), ?, inserted.ID into @result values (?, ?); select alias_63, alias_22869988, alias_128594141, alias_63 from @result r;]; Column names in each table must be unique. Column name 'alias_63' in table '@result' is specified more than once.
at org.jooq_3.18.0-SNAPSHOT.SQLSERVER2022.debug(Unknown Source)
at org.jooq.impl.Tools.translate(Tools.java:3406)
at org.jooq.impl.DefaultExecuteContext.sqlException(DefaultExecuteContext.java:746)
at org.jooq.impl.AbstractQuery.execute(AbstractQuery.java:371)
at org.jooq.impl.AbstractDMLQueryAsResultQuery.fetch(AbstractDMLQueryAsResultQuery.java:140)
at org.jooq.impl.ResultQueryTrait.fetchLazy(ResultQueryTrait.java:281)
at org.jooq.impl.ResultQueryTrait.fetchLazyNonAutoClosing(ResultQueryTrait.java:290)
at org.jooq.impl.ResultQueryTrait.fetchOne(ResultQueryTrait.java:509)
at org.jooq.test.all.testcases.ComputedClientSideVirtualTests.testComputedClientSideVirtualDMLInsertReturningProjectionExpressions(ComputedClientSideVirtualTests.java:579)
at org.jooq.test.jOOQAbstractTest.testComputedClientSideVirtualDMLInsertReturningProjectionExpressions(jOOQAbstractTest.java:11872)
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:77)
at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:568)
at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
```
|
1.0
|
Wrong SQL generated in SQL Server RETURNING emulation, when generated aliases conflict - Somewhat related to recent issues that were fixed in the context of https://github.com/jOOQ/jOOQ/issues/14477
In SQL Server's `RETURNING` emulation, it can happen that generated aliases of the `@result` table create a conflict, such as in this instance:
```sql
declare @result table (
alias_63 int,
alias_22869988 int,
alias_128594141 int,
alias_63 int,
ID int
);
insert into t_comp_client_virtual_1 (ID, VAL)
output
1,
(inserted.VAL * 2),
(1 + (inserted.VAL * 2) + 1),
1,
inserted.ID
into @result
values (
1,
1
);
select alias_63, alias_22869988, alias_128594141, alias_63
from @result r;
```
This produces an error:
```
org.jooq.exception.DataAccessException: SQL [declare @result table (alias_63 int, alias_22869988 int, alias_128594141 int, alias_63 int, ID int); insert into t_comp_client_virtual_1 (ID, VAL) output ?, (inserted.VAL * ?), (? + (inserted.VAL * ?) + ?), ?, inserted.ID into @result values (?, ?); select alias_63, alias_22869988, alias_128594141, alias_63 from @result r;]; Column names in each table must be unique. Column name 'alias_63' in table '@result' is specified more than once.
at org.jooq_3.18.0-SNAPSHOT.SQLSERVER2022.debug(Unknown Source)
at org.jooq.impl.Tools.translate(Tools.java:3406)
at org.jooq.impl.DefaultExecuteContext.sqlException(DefaultExecuteContext.java:746)
at org.jooq.impl.AbstractQuery.execute(AbstractQuery.java:371)
at org.jooq.impl.AbstractDMLQueryAsResultQuery.fetch(AbstractDMLQueryAsResultQuery.java:140)
at org.jooq.impl.ResultQueryTrait.fetchLazy(ResultQueryTrait.java:281)
at org.jooq.impl.ResultQueryTrait.fetchLazyNonAutoClosing(ResultQueryTrait.java:290)
at org.jooq.impl.ResultQueryTrait.fetchOne(ResultQueryTrait.java:509)
at org.jooq.test.all.testcases.ComputedClientSideVirtualTests.testComputedClientSideVirtualDMLInsertReturningProjectionExpressions(ComputedClientSideVirtualTests.java:579)
at org.jooq.test.jOOQAbstractTest.testComputedClientSideVirtualDMLInsertReturningProjectionExpressions(jOOQAbstractTest.java:11872)
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:77)
at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:568)
at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61)
at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61)
at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100)
at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366)
at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103)
at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63)
at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331)
at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79)
at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329)
at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66)
at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293)
at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
at org.junit.runners.ParentRunner.run(ParentRunner.java:413)
at org.eclipse.jdt.internal.junit4.runner.JUnit4TestReference.run(JUnit4TestReference.java:93)
at org.eclipse.jdt.internal.junit.runner.TestExecution.run(TestExecution.java:40)
at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.runTests(RemoteTestRunner.java:529)
at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.runTests(RemoteTestRunner.java:756)
at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.run(RemoteTestRunner.java:452)
at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.main(RemoteTestRunner.java:210)
Caused by: com.microsoft.sqlserver.jdbc.SQLServerException: Column names in each table must be unique. Column name 'alias_63' in table '@result' is specified more than once.
at com.microsoft.sqlserver.jdbc.SQLServerException.makeFromDatabaseError(SQLServerException.java:265)
at com.microsoft.sqlserver.jdbc.SQLServerStatement.getNextResult(SQLServerStatement.java:1673)
at com.microsoft.sqlserver.jdbc.SQLServerPreparedStatement.doExecutePreparedStatement(SQLServerPreparedStatement.java:620)
at com.microsoft.sqlserver.jdbc.SQLServerPreparedStatement$PrepStmtExecCmd.doExecute(SQLServerPreparedStatement.java:540)
at com.microsoft.sqlserver.jdbc.TDSCommand.execute(IOBuffer.java:7627)
at com.microsoft.sqlserver.jdbc.SQLServerConnection.executeCommand(SQLServerConnection.java:3912)
at com.microsoft.sqlserver.jdbc.SQLServerStatement.executeCommand(SQLServerStatement.java:268)
at com.microsoft.sqlserver.jdbc.SQLServerStatement.executeStatement(SQLServerStatement.java:242)
at com.microsoft.sqlserver.jdbc.SQLServerPreparedStatement.executeQuery(SQLServerPreparedStatement.java:459)
at org.jooq.tools.jdbc.DefaultPreparedStatement.executeQuery(DefaultPreparedStatement.java:104)
at org.jooq.tools.jdbc.DefaultPreparedStatement.executeQuery(DefaultPreparedStatement.java:104)
at org.jooq.impl.AbstractDMLQuery.executeReturningQuery(AbstractDMLQuery.java:1313)
at org.jooq.impl.AbstractDMLQuery.execute(AbstractDMLQuery.java:1081)
at org.jooq.impl.AbstractQuery.execute(AbstractQuery.java:357)
... 38 more
```
We don't have to generate aliases based on some `hashCode` in this case, which is guaranteed to create conflicts for identical expressions (which is an edge case, but can still happen often enough). It's probably better to enumerate the aliases, instead.
In this particular case, because there's no `MERGE` statement due to the fixes in https://github.com/jOOQ/jOOQ/issues/14479, we might also work around the edge case by removing the `@result` table entirely. Without `MERGE`, we can revert to running only the `INSERT` statement without `INTO`.
But this shows that we need another integration test that combines `MERGE` with duplicate aliases.
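The enumeration idea suggested above can be sketched as follows. This is a hypothetical illustration, not jOOQ's actual alias-generation code: the class and method names are made up. The point is that handing out a sequence number per distinct expression can never produce two columns with the same generated name, whereas `hashCode`-derived names collide whenever two expressions are identical (or merely hash-collide).

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Hypothetical alias allocator: instead of deriving aliases from an
// expression's hashCode() (which yields the same name for identical
// expressions), hand out a running sequence number per distinct
// expression string. Identical expressions share one alias; distinct
// expressions can never clash.
class AliasAllocator {
    private final Map<String, String> aliases = new LinkedHashMap<>();
    private int counter = 0;

    // Returns a stable, unique alias for each distinct expression.
    String aliasFor(String expression) {
        return aliases.computeIfAbsent(expression, e -> "alias_" + (counter++));
    }

    public static void main(String[] args) {
        AliasAllocator a = new AliasAllocator();
        // The projection from the report: the literal 1 appears twice,
        // but now maps to a single alias instead of two conflicting
        // alias_63 columns in the @result table declaration.
        System.out.println(a.aliasFor("1"));                  // alias_0
        System.out.println(a.aliasFor("(inserted.VAL * 2)")); // alias_1
        System.out.println(a.aliasFor("1"));                  // alias_0 again
    }
}
```

With this scheme, the duplicate `1` in the `OUTPUT` clause would reuse `alias_0` rather than declare `alias_63` twice in `@result`.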
|
defect
|
| 1
|
7,346
| 2,610,364,516
|
IssuesEvent
|
2015-02-26 19:57:46
|
chrsmith/scribefire-chrome
|
https://api.github.com/repos/chrsmith/scribefire-chrome
|
closed
|
can't add blogger and wordpress
|
auto-migrated Priority-Medium Type-Defect
|
```
the problem:
after enter blogger address and next under login credentiols 'click to securly
authorize Scribedfire' are showing loading circle only after allowed google
access. same problem also in wp id and password.
--------------------------------
ScribeFire is requesting permission to:
Manage your photos and videos
Manage your Blogger account
Allow Access , no thanks
---------------------------------
Used Allow Access
**************************************************************
What browser are you using?
firefox 16.0.2
version of ScribeFire are you running:
Scribe fire Next
```
-----
Original issue reported on code.google.com by `2266...@gmail.com` on 2 Nov 2012 at 3:27
Attachments:
* [scibe.jpg](https://storage.googleapis.com/google-code-attachments/scribefire-chrome/issue-698/comment-0/scibe.jpg)
|
1.0
|
|
defect
|
| 1
|
30,605
| 6,192,416,128
|
IssuesEvent
|
2017-07-05 01:38:18
|
cakephp/cakephp
|
https://api.github.com/repos/cakephp/cakephp
|
closed
|
AuthComponent throwing RuntimeException when some data is already stored in the session
|
auth Defect On hold
|
This is a (multiple allowed):
* [x] bug
* [ ] enhancement
* [ ] feature-discussion (RFC)
* CakePHP Version: v3.2.14 and v3.4.6
* Platform and Target: Apache web-server, PHP 5.6 and 7.1, Chrome 57
### What you did
I used a component that read the session looking for some key and when I try to use some of the **AuthComponent** methods, an Exception is thrown.
This issue can be replicated easily with the following code:
```php
class UsersController extends AppController
{
public function login()
{
// loaded in some component
$key = $this->request->session()->read('Auth.User.some_key');
// ...
// loaded in the app
$this->Auth->setUser([
'id' => 'userId',
'username' => 'user name'
]);
exit('INDEX');
}
}
```
I tried to recreate this issue in a unit test for the AuthComponent but it did not work in a CLI environment.
The AuthComponent is configured through the UsersAuthComponent from the **[CakeDC/users](https://github.com/CakeDC/users)** plugin, so my custom component that read the session is always loaded before the AuthComponent.
### What happened
this throws a RuntimeException with the message "Session was already started"
The stack trace shows that the \Cake\Network\Session::renew() method is called when is writing the session data but do not detect the previously started session.
### What you expected to happen
The session storage should be able to detect if the session was already started and not try to create another.
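The expected guard can be sketched as follows. This is a toy model, not CakePHP's actual `\Cake\Network\Session` implementation: the class and method names are hypothetical. It only illustrates the behavior the reporter asks for, namely that `renew()` should reuse a session that is already live instead of unconditionally starting a second one and throwing.

```java
// Toy model of the requested guard: start() fails on a double start
// (mirroring the "Session was already started" error), while renew()
// checks the flag first and so never triggers that failure.
class GuardedSession {
    private boolean started = false;

    void start() {
        if (started) {
            throw new IllegalStateException("Session was already started");
        }
        started = true;
    }

    // Renew only starts a session when none is live yet; otherwise it
    // keeps the existing session instead of throwing.
    void renew() {
        if (!started) {
            start();
        }
    }

    boolean isStarted() {
        return started;
    }
}
```

Under this scheme, a component reading `Auth.User.some_key` early (which starts the session) would not make a later `setUser()` call fail.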
|
1.0
|
|
defect
|
| 1
|
14,343
| 2,799,313,431
|
IssuesEvent
|
2015-05-12 23:37:07
|
FIX94/Nintendont
|
https://api.github.com/repos/FIX94/Nintendont
|
closed
|
The Legend of Zelda: Collector's Edition PAL (europe)
|
auto-migrated Priority-Medium Type-Defect
|
```
What steps will reproduce the problem?
1. Open the game iso though USB LOADER GX (DiscEx'ed it)
2. Load up the Zelda Collector's Edition (EUROPE)
3. Select one game from the possible options.
What is the expected output? What do you see instead?
Expected: Load the selected game from the collection list.
What happens Instead: Black screen and a humming sound coming from speakers.
What revision of Nintendont are you using? On what system Wii/Wii U?
v2.216 (12.11.2014)
Please provide any additional information below.
Games like Zelda Collector's Edition, Zelda Ocarina of Time/Master Quest, Sonic
Adventure DX, Sonic Gems Collection, Sonic Mega Collection, Midway Arcade
Collection and many other games would benefit from this.
```
Original issue reported on code.google.com by `Xido...@gmail.com` on 13 Nov 2014 at 3:57
|
1.0
|
|
defect
|
| 1
|