Unnamed: 0
int64 0
832k
| id
float64 2.49B
32.1B
| type
stringclasses 1
value | created_at
stringlengths 19
19
| repo
stringlengths 5
112
| repo_url
stringlengths 34
141
| action
stringclasses 3
values | title
stringlengths 1
757
| labels
stringlengths 4
664
| body
stringlengths 3
261k
| index
stringclasses 10
values | text_combine
stringlengths 96
261k
| label
stringclasses 2
values | text
stringlengths 96
232k
| binary_label
int64 0
1
|
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
201,086
| 15,173,147,647
|
IssuesEvent
|
2021-02-13 12:47:17
|
apisofshit/apisofshit.github.io
|
https://api.github.com/repos/apisofshit/apisofshit.github.io
|
opened
|
test | 飞翔小站
|
/post/test/ Gitalk
|
https://freene.tk/post/test/
test
test
test
test
test
www.github.com
test
test test
#test#
##test#3
test
test
test
 detected in netty-codec-http-4.1.39.Final.jar - autoclosed
|
security vulnerability
|
## CVE-2021-21295 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>netty-codec-http-4.1.39.Final.jar</b></p></summary>
<p>Netty is an asynchronous event-driven network application framework for
rapid development of maintainable high performance protocol servers and
clients.</p>
<p>Library home page: <a href="https://netty.io/">https://netty.io/</a></p>
<p>Path to dependency file: /build.gradle</p>
<p>Path to vulnerable library: /e/caches/modules-2/files-2.1/io.netty/netty-codec-http/4.1.39.Final/732d06961162e27fa3ae5989541c4460853745d3/netty-codec-http-4.1.39.Final.jar</p>
<p>
Dependency Hierarchy:
- :x: **netty-codec-http-4.1.39.Final.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/ghc-dev/Tammy-Hudson/commit/6ec480502d2ab606c0fe76d935971e50d9d630a0">6ec480502d2ab606c0fe76d935971e50d9d630a0</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Netty is an open-source, asynchronous event-driven network application framework for rapid development of maintainable high performance protocol servers & clients. In Netty (io.netty:netty-codec-http2) before version 4.1.60.Final there is a vulnerability that enables request smuggling. If a Content-Length header is present in the original HTTP/2 request, the field is not validated by `Http2MultiplexHandler` as it is propagated up. This is fine as long as the request is not proxied through as HTTP/1.1. If the request comes in as an HTTP/2 stream, gets converted into the HTTP/1.1 domain objects (`HttpRequest`, `HttpContent`, etc.) via `Http2StreamFrameToHttpObjectCodec `and then sent up to the child channel's pipeline and proxied through a remote peer as HTTP/1.1 this may result in request smuggling. In a proxy case, users may assume the content-length is validated somehow, which is not the case. If the request is forwarded to a backend channel that is a HTTP/1.1 connection, the Content-Length now has meaning and needs to be checked. An attacker can smuggle requests inside the body as it gets downgraded from HTTP/2 to HTTP/1.1. For an example attack refer to the linked GitHub Advisory. Users are only affected if all of this is true: `HTTP2MultiplexCodec` or `Http2FrameCodec` is used, `Http2StreamFrameToHttpObjectCodec` is used to convert to HTTP/1.1 objects, and these HTTP/1.1 objects are forwarded to another remote peer. This has been patched in 4.1.60.Final As a workaround, the user can do the validation by themselves by implementing a custom `ChannelInboundHandler` that is put in the `ChannelPipeline` behind `Http2StreamFrameToHttpObjectCodec`.
<p>Publish Date: 2021-03-09
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-21295>CVE-2021-21295</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.9</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: High
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: High
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/advisories/GHSA-wm47-8v5p-wjpj">https://github.com/advisories/GHSA-wm47-8v5p-wjpj</a></p>
<p>Release Date: 2021-03-09</p>
<p>Fix Resolution: io.netty:netty-all:4.1.60;io.netty:netty-codec-http:4.1.60;io.netty:netty-codec-http2:4.1.60</p>
</p>
</details>
<p></p>
<!-- <REMEDIATE>{"isOpenPROnVulnerability":false,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"Java","groupId":"io.netty","packageName":"netty-codec-http","packageVersion":"4.1.39.Final","packageFilePaths":["/build.gradle"],"isTransitiveDependency":false,"dependencyTree":"io.netty:netty-codec-http:4.1.39.Final","isMinimumFixVersionAvailable":true,"minimumFixVersion":"io.netty:netty-all:4.1.60;io.netty:netty-codec-http:4.1.60;io.netty:netty-codec-http2:4.1.60","isBinary":false}],"baseBranches":["master"],"vulnerabilityIdentifier":"CVE-2021-21295","vulnerabilityDetails":"Netty is an open-source, asynchronous event-driven network application framework for rapid development of maintainable high performance protocol servers \u0026 clients. In Netty (io.netty:netty-codec-http2) before version 4.1.60.Final there is a vulnerability that enables request smuggling. If a Content-Length header is present in the original HTTP/2 request, the field is not validated by `Http2MultiplexHandler` as it is propagated up. This is fine as long as the request is not proxied through as HTTP/1.1. If the request comes in as an HTTP/2 stream, gets converted into the HTTP/1.1 domain objects (`HttpRequest`, `HttpContent`, etc.) via `Http2StreamFrameToHttpObjectCodec `and then sent up to the child channel\u0027s pipeline and proxied through a remote peer as HTTP/1.1 this may result in request smuggling. In a proxy case, users may assume the content-length is validated somehow, which is not the case. If the request is forwarded to a backend channel that is a HTTP/1.1 connection, the Content-Length now has meaning and needs to be checked. An attacker can smuggle requests inside the body as it gets downgraded from HTTP/2 to HTTP/1.1. For an example attack refer to the linked GitHub Advisory. Users are only affected if all of this is true: `HTTP2MultiplexCodec` or `Http2FrameCodec` is used, `Http2StreamFrameToHttpObjectCodec` is used to convert to HTTP/1.1 objects, and these HTTP/1.1 objects are forwarded to another remote peer. This has been patched in 4.1.60.Final As a workaround, the user can do the validation by themselves by implementing a custom `ChannelInboundHandler` that is put in the `ChannelPipeline` behind `Http2StreamFrameToHttpObjectCodec`.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-21295","cvss3Severity":"medium","cvss3Score":"5.9","cvss3Metrics":{"A":"None","AC":"High","PR":"None","S":"Unchanged","C":"None","UI":"None","AV":"Network","I":"High"},"extraData":{}}</REMEDIATE> -->
|
True
|
CVE-2021-21295 (Medium) detected in netty-codec-http-4.1.39.Final.jar - autoclosed - ## CVE-2021-21295 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>netty-codec-http-4.1.39.Final.jar</b></p></summary>
<p>Netty is an asynchronous event-driven network application framework for
rapid development of maintainable high performance protocol servers and
clients.</p>
<p>Library home page: <a href="https://netty.io/">https://netty.io/</a></p>
<p>Path to dependency file: /build.gradle</p>
<p>Path to vulnerable library: /e/caches/modules-2/files-2.1/io.netty/netty-codec-http/4.1.39.Final/732d06961162e27fa3ae5989541c4460853745d3/netty-codec-http-4.1.39.Final.jar</p>
<p>
Dependency Hierarchy:
- :x: **netty-codec-http-4.1.39.Final.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/ghc-dev/Tammy-Hudson/commit/6ec480502d2ab606c0fe76d935971e50d9d630a0">6ec480502d2ab606c0fe76d935971e50d9d630a0</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Netty is an open-source, asynchronous event-driven network application framework for rapid development of maintainable high performance protocol servers & clients. In Netty (io.netty:netty-codec-http2) before version 4.1.60.Final there is a vulnerability that enables request smuggling. If a Content-Length header is present in the original HTTP/2 request, the field is not validated by `Http2MultiplexHandler` as it is propagated up. This is fine as long as the request is not proxied through as HTTP/1.1. If the request comes in as an HTTP/2 stream, gets converted into the HTTP/1.1 domain objects (`HttpRequest`, `HttpContent`, etc.) via `Http2StreamFrameToHttpObjectCodec `and then sent up to the child channel's pipeline and proxied through a remote peer as HTTP/1.1 this may result in request smuggling. In a proxy case, users may assume the content-length is validated somehow, which is not the case. If the request is forwarded to a backend channel that is a HTTP/1.1 connection, the Content-Length now has meaning and needs to be checked. An attacker can smuggle requests inside the body as it gets downgraded from HTTP/2 to HTTP/1.1. For an example attack refer to the linked GitHub Advisory. Users are only affected if all of this is true: `HTTP2MultiplexCodec` or `Http2FrameCodec` is used, `Http2StreamFrameToHttpObjectCodec` is used to convert to HTTP/1.1 objects, and these HTTP/1.1 objects are forwarded to another remote peer. This has been patched in 4.1.60.Final As a workaround, the user can do the validation by themselves by implementing a custom `ChannelInboundHandler` that is put in the `ChannelPipeline` behind `Http2StreamFrameToHttpObjectCodec`.
<p>Publish Date: 2021-03-09
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-21295>CVE-2021-21295</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.9</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: High
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: High
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/advisories/GHSA-wm47-8v5p-wjpj">https://github.com/advisories/GHSA-wm47-8v5p-wjpj</a></p>
<p>Release Date: 2021-03-09</p>
<p>Fix Resolution: io.netty:netty-all:4.1.60;io.netty:netty-codec-http:4.1.60;io.netty:netty-codec-http2:4.1.60</p>
</p>
</details>
<p></p>
<!-- <REMEDIATE>{"isOpenPROnVulnerability":false,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"Java","groupId":"io.netty","packageName":"netty-codec-http","packageVersion":"4.1.39.Final","packageFilePaths":["/build.gradle"],"isTransitiveDependency":false,"dependencyTree":"io.netty:netty-codec-http:4.1.39.Final","isMinimumFixVersionAvailable":true,"minimumFixVersion":"io.netty:netty-all:4.1.60;io.netty:netty-codec-http:4.1.60;io.netty:netty-codec-http2:4.1.60","isBinary":false}],"baseBranches":["master"],"vulnerabilityIdentifier":"CVE-2021-21295","vulnerabilityDetails":"Netty is an open-source, asynchronous event-driven network application framework for rapid development of maintainable high performance protocol servers \u0026 clients. In Netty (io.netty:netty-codec-http2) before version 4.1.60.Final there is a vulnerability that enables request smuggling. If a Content-Length header is present in the original HTTP/2 request, the field is not validated by `Http2MultiplexHandler` as it is propagated up. This is fine as long as the request is not proxied through as HTTP/1.1. If the request comes in as an HTTP/2 stream, gets converted into the HTTP/1.1 domain objects (`HttpRequest`, `HttpContent`, etc.) via `Http2StreamFrameToHttpObjectCodec `and then sent up to the child channel\u0027s pipeline and proxied through a remote peer as HTTP/1.1 this may result in request smuggling. In a proxy case, users may assume the content-length is validated somehow, which is not the case. If the request is forwarded to a backend channel that is a HTTP/1.1 connection, the Content-Length now has meaning and needs to be checked. An attacker can smuggle requests inside the body as it gets downgraded from HTTP/2 to HTTP/1.1. For an example attack refer to the linked GitHub Advisory. Users are only affected if all of this is true: `HTTP2MultiplexCodec` or `Http2FrameCodec` is used, `Http2StreamFrameToHttpObjectCodec` is used to convert to HTTP/1.1 objects, and these HTTP/1.1 objects are forwarded to another remote peer. This has been patched in 4.1.60.Final As a workaround, the user can do the validation by themselves by implementing a custom `ChannelInboundHandler` that is put in the `ChannelPipeline` behind `Http2StreamFrameToHttpObjectCodec`.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-21295","cvss3Severity":"medium","cvss3Score":"5.9","cvss3Metrics":{"A":"None","AC":"High","PR":"None","S":"Unchanged","C":"None","UI":"None","AV":"Network","I":"High"},"extraData":{}}</REMEDIATE> -->
|
non_defect
|
cve medium detected in netty codec http final jar autoclosed cve medium severity vulnerability vulnerable library netty codec http final jar netty is an asynchronous event driven network application framework for rapid development of maintainable high performance protocol servers and clients library home page a href path to dependency file build gradle path to vulnerable library e caches modules files io netty netty codec http final netty codec http final jar dependency hierarchy x netty codec http final jar vulnerable library found in head commit a href found in base branch master vulnerability details netty is an open source asynchronous event driven network application framework for rapid development of maintainable high performance protocol servers clients in netty io netty netty codec before version final there is a vulnerability that enables request smuggling if a content length header is present in the original http request the field is not validated by as it is propagated up this is fine as long as the request is not proxied through as http if the request comes in as an http stream gets converted into the http domain objects httprequest httpcontent etc via and then sent up to the child channel s pipeline and proxied through a remote peer as http this may result in request smuggling in a proxy case users may assume the content length is validated somehow which is not the case if the request is forwarded to a backend channel that is a http connection the content length now has meaning and needs to be checked an attacker can smuggle requests inside the body as it gets downgraded from http to http for an example attack refer to the linked github advisory users are only affected if all of this is true or is used is used to convert to http objects and these http objects are forwarded to another remote peer this has been patched in final as a workaround the user can do the validation by themselves by implementing a custom channelinboundhandler that is put in the channelpipeline behind publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity high privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact high availability impact none for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution io netty netty all io netty netty codec http io netty netty codec isopenpronvulnerability false ispackagebased true isdefaultbranch true packages istransitivedependency false dependencytree io netty netty codec http final isminimumfixversionavailable true minimumfixversion io netty netty all io netty netty codec http io netty netty codec isbinary false basebranches vulnerabilityidentifier cve vulnerabilitydetails netty is an open source asynchronous event driven network application framework for rapid development of maintainable high performance protocol servers clients in netty io netty netty codec before version final there is a vulnerability that enables request smuggling if a content length header is present in the original http request the field is not validated by as it is propagated up this is fine as long as the request is not proxied through as http if the request comes in as an http stream gets converted into the http domain objects httprequest httpcontent etc via and then sent up to the child channel pipeline and proxied through a remote peer as http this may result in request smuggling in a proxy case users may assume the content length is validated somehow which is not the case if the request is forwarded to a backend channel that is a http connection the content length now has meaning and needs to be checked an attacker can smuggle requests inside the body as it gets downgraded from http to http for an example attack refer to the linked github advisory users are only affected if all of this is true or is used is used to convert to http objects and these http objects are forwarded to another remote peer this has been patched in final as a workaround the user can do the validation by themselves by implementing a custom channelinboundhandler that is put in the channelpipeline behind vulnerabilityurl
| 0
|
6,939
| 24,042,198,796
|
IssuesEvent
|
2022-09-16 03:42:51
|
AdamXweb/awesome-aussie
|
https://api.github.com/repos/AdamXweb/awesome-aussie
|
closed
|
[ADDITION] Deputy
|
Awaiting Review Added to Airtable Automation from Airtable
|
### Category
HR
### Software to be added
Deputy
### Supporting Material
URL: https://www.deputy.com/
Description: Deputy is an employee management tool, simplifying scheduling, timesheets, tasks and workplace communication.
Size:
HQ: Sydney
LinkedIn: https://www.linkedin.com/company/deputyapp/
#### See Record on Airtable:
https://airtable.com/app0Ox7pXdrBUIn23/tblYbuZoILuVA0X3L/rec6Vz46dZwfT1POG
|
1.0
|
[ADDITION] Deputy - ### Category
HR
### Software to be added
Deputy
### Supporting Material
URL: https://www.deputy.com/
Description: Deputy is an employee management tool, simplifying scheduling, timesheets, tasks and workplace communication.
Size:
HQ: Sydney
LinkedIn: https://www.linkedin.com/company/deputyapp/
#### See Record on Airtable:
https://airtable.com/app0Ox7pXdrBUIn23/tblYbuZoILuVA0X3L/rec6Vz46dZwfT1POG
|
non_defect
|
deputy category hr software to be added deputy supporting material url description deputy is an employee management tool simplifying scheduling timesheets tasks and workplace communication size hq sydney linkedin see record on airtable
| 0
|
87,172
| 10,881,108,997
|
IssuesEvent
|
2019-11-17 15:40:01
|
bounswe/bounswe2019group5
|
https://api.github.com/repos/bounswe/bounswe2019group5
|
opened
|
Removing listening question body limit
|
Backend Status: Available Type: Design
|
While I am adding listening questions from admin panel, I have encountered string character limit as 1001 chars. Since we are adding base64 converted version of any audio files, their string lengths are much longer than 1001 chars. In our database, we should be able to keep longer strings in order to fetch the audio files. So, I think the backend team should remove this limit or suggest another solution.
|
1.0
|
Removing listening question body limit - While I am adding listening questions from admin panel, I have encountered string character limit as 1001 chars. Since we are adding base64 converted version of any audio files, their string lengths are much longer than 1001 chars. In our database, we should be able to keep longer strings in order to fetch the audio files. So, I think the backend team should remove this limit or suggest another solution.
|
non_defect
|
removing listening question body limit while i am adding listening questions from admin panel i have encountered string character limit as chars since we are adding converted version of any audio files their string lengths are much longer than chars in our database we should be able to keep longer strings in order to fetch the audio files so i think the backend team should remove this limit or suggest another solution
| 0
|
8,607
| 6,587,234,960
|
IssuesEvent
|
2017-09-13 20:17:25
|
dotnet/roslyn
|
https://api.github.com/repos/dotnet/roslyn
|
closed
|
Proposed: C# compiler could use flow analysis to track types in variables permitting the use of a `constrained` instruction on virtual calls.
|
Area-Compilers Feature Request Tenet-Performance
|
Boxing for `isinst` aside https://github.com/dotnet/coreclr/issues/12877 it would be nice if Pattern Matching issued a `constrained` instruction prior to `callvirt` where a generic type was matched to an interface
**Version Used**:
> Microsoft Visual Studio Enterprise 2017
Version 15.3.4
VisualStudio.15.Release/15.3.4+26730.15
C#7
**Steps to Reproduce**:
```csharp
int key0, key1;
PatternMatchEquality<int>.Equals(key0, key1)
```
Where
```csharp
public class PatternMatchEquality<TKey>
{
private static readonly EqualityComparer<TKey> _defaultComparer = EqualityComparer<TKey>.Default;
public static bool Equals(TKey key0, TKey key1)
{
switch (key0)
{
case IEquatable<TKey> key:
return key.Equals(key1);
default:
//return _defaultComparer.Equals(key0, key1);
throw new Exception();
}
}
}
```
**Expected Behavior**:
il output
```il
ldarg.1 // key1
constrained. !0/*TKey*/ <--------
callvirt instance bool class [System.Runtime]System.IEquatable`1<!0/*TKey*/>::Equals(!0/*TKey*/)
ret
```
**Actual Behavior**:
il output
```il
.method public hidebysig static bool
Equals(
!0/*TKey*/ key0,
!0/*TKey*/ key1
) cil managed
{
.maxstack 2
.locals init (
[0] !0/*TKey*/ V_0,
[1] class [System.Runtime]System.IEquatable`1<!0/*TKey*/> V_1
)
// [185 13 - 185 26]
IL_0000: ldarg.0 // key0
IL_0001: stloc.0 // V_0
IL_0002: ldloc.0 // V_0
IL_0003: box !0/*TKey*/
IL_0008: brfalse.s IL_0021
IL_000a: ldloc.0 // V_0
IL_000b: box !0/*TKey*/
IL_0010: isinst class [System.Runtime]System.IEquatable`1<!0/*TKey*/>
IL_0015: dup
IL_0016: stloc.1 // V_1
IL_0017: brfalse.s IL_0021
IL_0019: ldloc.1 // V_1
// [188 21 - 188 45]
IL_001a: ldarg.1 // key1
// <------- No constrained. !0
IL_001b: callvirt instance bool class [System.Runtime]System.IEquatable`1<!0/*TKey*/>::Equals(!0/*TKey*/)
IL_0020: ret
// [190 22 - 190 44]
IL_0021: newobj instance void [System.Runtime]System.Exception::.ctor()
IL_0026: throw
} // end of method PatternMatchEquality`1::Equals
```
/cc @gafter @alekseyts @agocke
|
True
|
Proposed: C# compiler could use flow analysis to track types in variables permitting the use of a `constrained` instruction on virtual calls. - Boxing for `isinst` aside https://github.com/dotnet/coreclr/issues/12877 it would be nice if Pattern Matching issued a `constrained` instruction prior to `callvirt` where a generic type was matched to an interface
**Version Used**:
> Microsoft Visual Studio Enterprise 2017
Version 15.3.4
VisualStudio.15.Release/15.3.4+26730.15
C#7
**Steps to Reproduce**:
```csharp
int key0, key1;
PatternMatchEquality<int>.Equals(key0, key1)
```
Where
```csharp
public class PatternMatchEquality<TKey>
{
private static readonly EqualityComparer<TKey> _defaultComparer = EqualityComparer<TKey>.Default;
public static bool Equals(TKey key0, TKey key1)
{
switch (key0)
{
case IEquatable<TKey> key:
return key.Equals(key1);
default:
//return _defaultComparer.Equals(key0, key1);
throw new Exception();
}
}
}
```
**Expected Behavior**:
il output
```il
ldarg.1 // key1
constrained. !0/*TKey*/ <--------
callvirt instance bool class [System.Runtime]System.IEquatable`1<!0/*TKey*/>::Equals(!0/*TKey*/)
ret
```
**Actual Behavior**:
il output
```il
.method public hidebysig static bool
Equals(
!0/*TKey*/ key0,
!0/*TKey*/ key1
) cil managed
{
.maxstack 2
.locals init (
[0] !0/*TKey*/ V_0,
[1] class [System.Runtime]System.IEquatable`1<!0/*TKey*/> V_1
)
// [185 13 - 185 26]
IL_0000: ldarg.0 // key0
IL_0001: stloc.0 // V_0
IL_0002: ldloc.0 // V_0
IL_0003: box !0/*TKey*/
IL_0008: brfalse.s IL_0021
IL_000a: ldloc.0 // V_0
IL_000b: box !0/*TKey*/
IL_0010: isinst class [System.Runtime]System.IEquatable`1<!0/*TKey*/>
IL_0015: dup
IL_0016: stloc.1 // V_1
IL_0017: brfalse.s IL_0021
IL_0019: ldloc.1 // V_1
// [188 21 - 188 45]
IL_001a: ldarg.1 // key1
// <------- No constrained. !0
IL_001b: callvirt instance bool class [System.Runtime]System.IEquatable`1<!0/*TKey*/>::Equals(!0/*TKey*/)
IL_0020: ret
// [190 22 - 190 44]
IL_0021: newobj instance void [System.Runtime]System.Exception::.ctor()
IL_0026: throw
} // end of method PatternMatchEquality`1::Equals
```
/cc @gafter @alekseyts @agocke
|
non_defect
|
proposed c compiler could use flow analysis to track types in variables permitting the use of a constrained instruction on virtual calls boxing for isinst aside it would be nice if pattern matching issued a constrained instruction prior to callvirt where a generic type was matched to an interface version used microsoft visual studio enterprise version visualstudio release c steps to reproduce csharp int patternmatchequality equals where csharp public class patternmatchequality private static readonly equalitycomparer defaultcomparer equalitycomparer default public static bool equals tkey tkey switch case iequatable key return key equals default return defaultcomparer equals throw new exception expected behavior il output il ldarg constrained tkey callvirt instance bool class system iequatable equals tkey ret actual behavior il output il method public hidebysig static bool equals tkey tkey cil managed maxstack locals init tkey v class system iequatable v il ldarg il stloc v il ldloc v il box tkey il brfalse s il il ldloc v il box tkey il isinst class system iequatable il dup il stloc v il brfalse s il il ldloc v il ldarg no constrained il callvirt instance bool class system iequatable equals tkey il ret il newobj instance void system exception ctor il throw end of method patternmatchequality equals cc gafter alekseyts agocke
| 0
|
42,661
| 11,205,188,491
|
IssuesEvent
|
2020-01-05 12:32:35
|
scipy/scipy
|
https://api.github.com/repos/scipy/scipy
|
closed
|
CI failing: Travis CI Py36 refguide and Linux_Python_36_32bit_full scipy.{optimize,special}
|
defect scipy.sparse
|
CI testing unexpectedly fails for some recent PRs gh-9719 and gh-11267.
#### Error message:
<s>
* Travis-CI fails building the refguide using Py3.6 with 4 `scipy.sparse` errors involving int64 and longlong such as
```
Expected:
<2x3 sparse matrix of type '<class 'numpy.int64'>'
with 2 stored elements in Compressed Sparse Column format>
Got:
<2x3 sparse matrix of type '<class 'numpy.longlong'>'
with 2 stored elements in Compressed Sparse Column format>
```
</s>
* Linux_Python_36_32bit_full has 11 failures in `scipy.optimize` (`scipy.optimize.tests.test_linprog.TestLinprogIPSparse` and `scipy.optimize.tests.test_trustregion_krylov`) and `scipy.special` (comparing with `mpmath`).
https://dev.azure.com/scipy-org/SciPy/_build/results?buildId=4378&view=logs&j=dca3794b-c3dc-52e6-709a-3545ea017448&t=42f16a10-8379-5482-623d-4749ae3e519a
The 4+7=11 errors are:
```
TestLinprogIPSparsePresolve.test_bug_6139
TestLinprogIPSparse.test_bug_6139
TestKrylovQuadraticSubproblem.test_for_the_easy_case
TestKrylovQuadraticSubproblem.test_for_very_close_to_zero
TestEllip.test_ellipkinc
TestEllip.test_ellipkinc_singular
TestSystematic.test_e1
TestSystematic.test_e1_complex
TestSystematic.test_ei
TestSystematic.test_ei_complex
TestSystematic.test_ker
```
gh-11267 is rebased on master 76da841e4 Mon Dec 23 02:15:40 2019 -0800 so is current.
The two PRs are in `stats` and shouldn't affect units tests in `optimize` or `special`. Are these tests known to fail with Py3.6?
|
1.0
|
CI failing: Travis CI Py36 refguide and Linux_Python_36_32bit_full scipy.{optimize,special} - CI testing unexpectedly fails for some recent PRs gh-9719 and gh-11267.
#### Error message:
<s>
* Travis-CI fails building the refguide using Py3.6 with 4 `scipy.sparse` errors involving int64 and longlong such as
```
Expected:
<2x3 sparse matrix of type '<class 'numpy.int64'>'
with 2 stored elements in Compressed Sparse Column format>
Got:
<2x3 sparse matrix of type '<class 'numpy.longlong'>'
with 2 stored elements in Compressed Sparse Column format>
```
</s>
* Linux_Python_36_32bit_full has 11 failures in `scipy.optimize` (`scipy.optimize.tests.test_linprog.TestLinprogIPSparse` and `scipy.optimize.tests.test_trustregion_krylov`) and `scipy.special` (comparing with `mpmath`).
https://dev.azure.com/scipy-org/SciPy/_build/results?buildId=4378&view=logs&j=dca3794b-c3dc-52e6-709a-3545ea017448&t=42f16a10-8379-5482-623d-4749ae3e519a
The 4+7=11 errors are:
```
TestLinprogIPSparsePresolve.test_bug_6139
TestLinprogIPSparse.test_bug_6139
TestKrylovQuadraticSubproblem.test_for_the_easy_case
TestKrylovQuadraticSubproblem.test_for_very_close_to_zero
TestEllip.test_ellipkinc
TestEllip.test_ellipkinc_singular
TestSystematic.test_e1
TestSystematic.test_e1_complex
TestSystematic.test_ei
TestSystematic.test_ei_complex
TestSystematic.test_ker
```
gh-11267 is rebased on master 76da841e4 Mon Dec 23 02:15:40 2019 -0800 so is current.
The two PRs are in `stats` and shouldn't affect units tests in `optimize` or `special`. Are these tests known to fail with Py3.6?
|
defect
|
ci failing travis ci refguide and linux python full scipy optimize special ci testing unexpectedly fails for some recent prs gh and gh error message travis ci fails building the refguide using with scipy sparse errors involving and longlong such as expected with stored elements in compressed sparse column format got with stored elements in compressed sparse column format linux python full has failures in scipy optimize scipy optimize tests test linprog testlinprogipsparse and scipy optimize tests test trustregion krylov and scipy special comparing with mpmath the errors are testlinprogipsparsepresolve test bug testlinprogipsparse test bug testkrylovquadraticsubproblem test for the easy case testkrylovquadraticsubproblem test for very close to zero testellip test ellipkinc testellip test ellipkinc singular testsystematic test testsystematic test complex testsystematic test ei testsystematic test ei complex testsystematic test ker gh is rebased on master mon dec so is current the two prs are in stats and shouldn t affect units tests in optimize or special are these tests known to fail with
| 1
|
54,442
| 13,688,742,364
|
IssuesEvent
|
2020-09-30 12:11:55
|
NREL/EnergyPlus
|
https://api.github.com/repos/NREL/EnergyPlus
|
closed
|
GitHub Actions don't perform documentation tests and show submodule error
|
Defect DoNotPublish NotIDDChange
|
Issue overview
--------------
Two issues need to be resolved when using GitHub actions:
1. They were not set up to test the documentation (or create annotations from the LaTeX warnings/issues)
2. There was a submodule error issued post-checkout that was triggered by submodules in Penumbra
|
1.0
|
GitHub Actions don't perform documentation tests and show submodule error - Issue overview
--------------
Two issues need to be resolved when using GitHub actions:
1. They were not set up to test the documentation (or create annotations from the LaTeX warnings/issues)
2. There was a submodule error issued post-checkout that was triggered by submodules in Penumbra
|
defect
|
github actions don t perform documentation tests and show submodule error issue overview two issues need to be resolved when using github actions they were not set up to test the documentation or create annotations from the latex warnings issues there was a submodule error issued post checkout that was triggered by submodules in penumbra
| 1
|
63,576
| 17,773,894,551
|
IssuesEvent
|
2021-08-30 16:38:57
|
idaholab/raven
|
https://api.github.com/repos/idaholab/raven
|
closed
|
[DEFECT] Dymola interface does not show error
|
priority_normal defect
|
--------
Defect Description
--------
I tried to run dymola from RAVEN using the Code Models
##### What did you expect to see happen?
An explanation about why the Dymola executable could not be launched
##### What did you see instead?
The job failed without any explanation. The dslog is printed in one of the subfolders but not easily accessible
##### Do you have a suggested fix for the development team?
Print the location or the content of the Dymola dslog in the RAVEN out file
**Describe how to Reproduce**
Steps to reproduce the behavior:
1.
2.
3.
4.
**Screenshots and Input Files**
Please attach the input file(s) that generate this error. The simpler the input, the faster we can find the issue.
**Platform (please complete the following information):**
- OS: [e.g. iOS] Windows
- Version: [e.g. 22]
- Dependencies Installation: [CONDA or PIP] CONDA
----------------
For Change Control Board: Issue Review
----------------
This review should occur before any development is performed as a response to this issue.
- [x] 1. Is it tagged with a type: defect or task?
- [x] 2. Is it tagged with a priority: critical, normal or minor?
- [x] 3. If it will impact requirements or requirements tests, is it tagged with requirements?
- [x] 4. If it is a defect, can it cause wrong results for users? If so an email needs to be sent to the users.
- [x] 5. Is a rationale provided? (Such as explaining why the improvement is needed or why current code is wrong.)
-------
For Change Control Board: Issue Closure
-------
This review should occur when the issue is imminently going to be closed.
- [x] 1. If the issue is a defect, is the defect fixed?
- [x] 2. If the issue is a defect, is the defect tested for in the regression test system? (If not explain why not.)
- [x] 3. If the issue can impact users, has an email to the users group been written (the email should specify if the defect impacts stable or master)?
- [x] 4. If the issue is a defect, does it impact the latest release branch? If yes, is there any issue tagged with release (create if needed)?
- [x] 5. If the issue is being closed without a pull request, has an explanation of why it is being closed been provided?
|
1.0
|
[DEFECT] Dymola interface does not show error - --------
Defect Description
--------
I tried to run dymola from RAVEN using the Code Models
##### What did you expect to see happen?
An explanation about why the Dymola executable could not be launched
##### What did you see instead?
The job failed without any explanation. The dslog is printed in one of the subfolders but not easily accessible
##### Do you have a suggested fix for the development team?
Print the location or the content of the Dymola dslog in the RAVEN out file
**Describe how to Reproduce**
Steps to reproduce the behavior:
1.
2.
3.
4.
**Screenshots and Input Files**
Please attach the input file(s) that generate this error. The simpler the input, the faster we can find the issue.
**Platform (please complete the following information):**
- OS: [e.g. iOS] Windows
- Version: [e.g. 22]
- Dependencies Installation: [CONDA or PIP] CONDA
----------------
For Change Control Board: Issue Review
----------------
This review should occur before any development is performed as a response to this issue.
- [x] 1. Is it tagged with a type: defect or task?
- [x] 2. Is it tagged with a priority: critical, normal or minor?
- [x] 3. If it will impact requirements or requirements tests, is it tagged with requirements?
- [x] 4. If it is a defect, can it cause wrong results for users? If so an email needs to be sent to the users.
- [x] 5. Is a rationale provided? (Such as explaining why the improvement is needed or why current code is wrong.)
-------
For Change Control Board: Issue Closure
-------
This review should occur when the issue is imminently going to be closed.
- [x] 1. If the issue is a defect, is the defect fixed?
- [x] 2. If the issue is a defect, is the defect tested for in the regression test system? (If not explain why not.)
- [x] 3. If the issue can impact users, has an email to the users group been written (the email should specify if the defect impacts stable or master)?
- [x] 4. If the issue is a defect, does it impact the latest release branch? If yes, is there any issue tagged with release (create if needed)?
- [x] 5. If the issue is being closed without a pull request, has an explanation of why it is being closed been provided?
|
defect
|
dymola interface does not show error defect description i tried to run dymola from raven using the code models what did you expect to see happen an explanation about why the dymola executable could not be launched what did you see instead the job failed without any explanation the dslog is printed in one of the subfolders but not easily accessible do you have a suggested fix for the development team print the location or the content of the dymola dslog in the raven out file describe how to reproduce steps to reproduce the behavior screenshots and input files please attach the input file s that generate this error the simpler the input the faster we can find the issue platform please complete the following information os windows version dependencies installation conda for change control board issue review this review should occur before any development is performed as a response to this issue is it tagged with a type defect or task is it tagged with a priority critical normal or minor if it will impact requirements or requirements tests is it tagged with requirements if it is a defect can it cause wrong results for users if so an email needs to be sent to the users is a rationale provided such as explaining why the improvement is needed or why current code is wrong for change control board issue closure this review should occur when the issue is imminently going to be closed if the issue is a defect is the defect fixed if the issue is a defect is the defect tested for in the regression test system if not explain why not if the issue can impact users has an email to the users group been written the email should specify if the defect impacts stable or master if the issue is a defect does it impact the latest release branch if yes is there any issue tagged with release create if needed if the issue is being closed without a pull request has an explanation of why it is being closed been provided
| 1
|
33,436
| 7,122,987,147
|
IssuesEvent
|
2018-01-19 13:58:35
|
p5n/archlinux-stuff
|
https://api.github.com/repos/p5n/archlinux-stuff
|
closed
|
xdg_menu license
|
Priority-Medium Type-Defect auto-migrated
|
```
Arch Linux xdg_menu package states "GPL", but source code contains only obscure
"All rights reserved". Is it GPL-licensed after all? Was author permission to
freely use the code ever received or do I have yet contact him?
```
Original issue reported on code.google.com by `Rvach...@nxt.ru` on 24 Oct 2012 at 7:51
|
1.0
|
xdg_menu license - ```
Arch Linux xdg_menu package states "GPL", but source code contains only obscure
"All rights reserved". Is it GPL-licensed after all? Was author permission to
freely use the code ever received or do I have yet contact him?
```
Original issue reported on code.google.com by `Rvach...@nxt.ru` on 24 Oct 2012 at 7:51
|
defect
|
xdg menu license arch linux xdg menu package states gpl but source code contains only obscure all rights reserved is it gpl licensed after all was author permission to freely use the code ever received or do i have yet contact him original issue reported on code google com by rvach nxt ru on oct at
| 1
|
40,445
| 9,998,864,478
|
IssuesEvent
|
2019-07-12 09:14:06
|
contao/contao
|
https://api.github.com/repos/contao/contao
|
closed
|
Visible fieldset and empty legend with template member_grouped on captcha field
|
defect
|
**Affected version(s)**
4.4.40
**Description**
Das Template member_grouped erzeugt folgende Ausgabe bei dem Captcha
```html
<fieldset>
<legend></legend>
<div class="widget widget-captcha mandatory" style="display: none;">
<label for="ctrl_registration">
<span class="invisible">Mandatory field </span>Security question<span class="mandatory">*</span>
</label>
<input type="text" name="captcha_registration" id="ctrl_registration" class="captcha mandatory" value="" aria-describedby="captcha_text_registration" maxlength="2" required="">
<span id="captcha_text_registration" class="captcha_text">Please calculate 1 plus 2.</span>
<input type="hidden" name="captcha_registration_hash" value="7a2028c128d50d7e5641ba176effff3bcf5df818645baf5c988c242dd8971ff7">
<div style="display:none">
<label for="ctrl_registration_hp">Do not fill in this field</label>
<input type="text" name="captcha_registration_name" id="ctrl_registration_hp" value="">
</div>
<script>
document.getElementById('ctrl_registration').parentNode.style.display = 'none';
document.getElementById('ctrl_registration').value = '3';
</script>
</div>
</fieldset>
```
Das oberste `DIV` ist `display:none`, aber die Elemente `fieldset` und `legend` nicht. Das sieht schlecht aus, wenn man `fieldset` und `legend` formatiert hat, wie z.B. in der Online-Demo.
Eine Änderung des Core-Templates https://github.com/contao/contao/blob/master/core-bundle/src/Resources/contao/templates/member/member_grouped.html5#L18-L25 könnte Abhilfe schaffen, z.B.
```php
...
<?php foreach ($this->categories as $legend=>$category): ?>
<?php if (!empty($category)): ?>
<?php if (!$legend): ?>
<?= implode('', $category) ?>
<?php else: ?>
<fieldset>
<legend><?= $legend ?></legend>
<?= implode('', $category) ?>
</fieldset>
<?php endif; ?>
<?php endif; ?>
<?php endforeach; ?>
...
```
In diesem Beispiel wird das `fieldset` und `legend` für das Captcha nicht gerendert.
Zudem ist die Legend leer, was auch nicht gut aussieht.
Eine Frage habe ich noch: Warum wird das Inline-Style `display:none` und das `value` per JS aufgesetzt? Könnte dies nicht direkt in der HTML-Variable mit enthalten sein? So sieht die Variable aus:
```html
[captcha] =>
<div class="widget widget-captcha mandatory">
<label for="ctrl_registration">
<span class="invisible">Pflichtfeld</span>Sicherheitsfrage<span class="mandatory">*</span>
</label>
<input type="text" name="c599e354e3dbb156567ecab7b0a015dd6" id="ctrl_registration" class="captcha mandatory" value="" aria-describedby="captcha_text_registration" maxlength="2" required>
<span id="captcha_text_registration" class="captcha_text">Was ist die Summe aus 6 und 2?</span>
<div style="display:none">
<label for="ctrl_registration_hp">Do not fill in this field</label>
<input type="text" name="c599e354e3dbb156567ecab7b0a015dd6_name" id="ctrl_registration_hp" value="">
</div>
<script>
document.getElementById('ctrl_registration').parentNode.style.display = 'none';
document.getElementById('ctrl_registration').value = '8';
</script>
</div>
```
**How to reproduce**
Online Demo:
Erstelle ein Modul vom Typ Registrierung.
Template member_grouped auswählen.
Modul auf eine Seite platzieren.
Im Frontend ansehen.
|
1.0
|
Visible fieldset and empty legend with template member_grouped on captcha field - **Affected version(s)**
4.4.40
**Description**
Das Template member_grouped erzeugt folgende Ausgabe bei dem Captcha
```html
<fieldset>
<legend></legend>
<div class="widget widget-captcha mandatory" style="display: none;">
<label for="ctrl_registration">
<span class="invisible">Mandatory field </span>Security question<span class="mandatory">*</span>
</label>
<input type="text" name="captcha_registration" id="ctrl_registration" class="captcha mandatory" value="" aria-describedby="captcha_text_registration" maxlength="2" required="">
<span id="captcha_text_registration" class="captcha_text">Please calculate 1 plus 2.</span>
<input type="hidden" name="captcha_registration_hash" value="7a2028c128d50d7e5641ba176effff3bcf5df818645baf5c988c242dd8971ff7">
<div style="display:none">
<label for="ctrl_registration_hp">Do not fill in this field</label>
<input type="text" name="captcha_registration_name" id="ctrl_registration_hp" value="">
</div>
<script>
document.getElementById('ctrl_registration').parentNode.style.display = 'none';
document.getElementById('ctrl_registration').value = '3';
</script>
</div>
</fieldset>
```
Das oberste `DIV` ist `display:none`, aber die Elemente `fieldset` und `legend` nicht. Das sieht schlecht aus, wenn man `fieldset` und `legend` formatiert hat, wie z.B. in der Online-Demo.
Eine Änderung des Core-Templates https://github.com/contao/contao/blob/master/core-bundle/src/Resources/contao/templates/member/member_grouped.html5#L18-L25 könnte Abhilfe schaffen, z.B.
```php
...
<?php foreach ($this->categories as $legend=>$category): ?>
<?php if (!empty($category)): ?>
<?php if (!$legend): ?>
<?= implode('', $category) ?>
<?php else: ?>
<fieldset>
<legend><?= $legend ?></legend>
<?= implode('', $category) ?>
</fieldset>
<?php endif; ?>
<?php endif; ?>
<?php endforeach; ?>
...
```
In diesem Beispiel wird das `fieldset` und `legend` für das Captcha nicht gerendert.
Zudem ist die Legend leer, was auch nicht gut aussieht.
Eine Frage habe ich noch: Warum wird das Inline-Style `display:none` und das `value` per JS aufgesetzt? Könnte dies nicht direkt in der HTML-Variable mit enthalten sein? So sieht die Variable aus:
```html
[captcha] =>
<div class="widget widget-captcha mandatory">
<label for="ctrl_registration">
<span class="invisible">Pflichtfeld</span>Sicherheitsfrage<span class="mandatory">*</span>
</label>
<input type="text" name="c599e354e3dbb156567ecab7b0a015dd6" id="ctrl_registration" class="captcha mandatory" value="" aria-describedby="captcha_text_registration" maxlength="2" required>
<span id="captcha_text_registration" class="captcha_text">Was ist die Summe aus 6 und 2?</span>
<div style="display:none">
<label for="ctrl_registration_hp">Do not fill in this field</label>
<input type="text" name="c599e354e3dbb156567ecab7b0a015dd6_name" id="ctrl_registration_hp" value="">
</div>
<script>
document.getElementById('ctrl_registration').parentNode.style.display = 'none';
document.getElementById('ctrl_registration').value = '8';
</script>
</div>
```
**How to reproduce**
Online Demo:
Erstelle ein Modul vom Typ Registrierung.
Template member_grouped auswählen.
Modul auf eine Seite platzieren.
Im Frontend ansehen.
|
defect
|
visible fieldset and empty legend with template member grouped on captcha field affected version s description das template member grouped erzeugt folgende ausgabe bei dem captcha html mandatory field security question please calculate plus do not fill in this field document getelementbyid ctrl registration parentnode style display none document getelementbyid ctrl registration value das oberste div ist display none aber die elemente fieldset und legend nicht das sieht schlecht aus wenn man fieldset und legend formatiert hat wie z b in der online demo eine änderung des core templates könnte abhilfe schaffen z b php categories as legend category in diesem beispiel wird das fieldset und legend für das captcha nicht gerendert zudem ist die legend leer was auch nicht gut aussieht eine frage habe ich noch warum wird das inline style display none und das value per js aufgesetzt könnte dies nicht direkt in der html variable mit enthalten sein so sieht die variable aus html pflichtfeld sicherheitsfrage do not fill in this field document getelementbyid ctrl registration parentnode style display none document getelementbyid ctrl registration value how to reproduce online demo erstelle ein modul vom typ registrierung template member grouped auswählen modul auf eine seite platzieren im frontend ansehen
| 1
|
508,086
| 14,689,536,786
|
IssuesEvent
|
2021-01-02 10:22:22
|
ChrisNZL/Tallowmere2
|
https://api.github.com/repos/ChrisNZL/Tallowmere2
|
opened
|
Create Nintendo Switch™ version
|
⚠ priority++
|
My Switch devkit is sitting pretty. Need to get the Switch version going.
|
1.0
|
Create Nintendo Switch™ version - My Switch devkit is sitting pretty. Need to get the Switch version going.
|
non_defect
|
create nintendo switch™ version my switch devkit is sitting pretty need to get the switch version going
| 0
|
138,976
| 20,751,572,212
|
IssuesEvent
|
2022-03-15 08:10:41
|
brave/brave-browser
|
https://api.github.com/repos/brave/brave-browser
|
closed
|
Remove unnecessary scroll bar on shields v2 for details view
|
bug feature/shields design QA/Yes release-notes/exclude feature/shields/panel OS/Desktop
|
<!-- Have you searched for similar issues? Before submitting this issue, please check the open issues and add a note before logging a new issue.
PLEASE USE THE TEMPLATE BELOW TO PROVIDE INFORMATION ABOUT THE ISSUE.
INSUFFICIENT INFO WILL GET THE ISSUE CLOSED. IT WILL ONLY BE REOPENED AFTER SUFFICIENT INFO IS PROVIDED-->
## Description
<!--Provide a brief description of the issue-->
Remove unnecessary scroll bar on shields v2 for details view
## Steps to Reproduce
<!--Please add a series of steps to reproduce the issue-->
1. Enable Shields v2 flag
2. Visit any site and open shields
3. Expand to advance view and click on blocked items, shows unnecessary scrollbars
## Actual result:
<!--Please add screenshots if needed-->
<img width="396" alt="image" src="https://user-images.githubusercontent.com/17010094/157500672-f763c41e-0962-4fbf-9f70-d34a069f7cd4.png">
## Expected result:
Only show scrollbar when there is an overflow from visible box
## Reproduces how often:
<!--[Easily reproduced/Intermittent issue/No steps to reproduce]-->
Easy
## Brave version (brave://version info)
<!--For installed build, please copy Brave, Revision and OS from brave://version and paste here. If building from source please mention it along with brave://version details-->
With flag enabled
## Version/Channel Information:
<!--Does this issue happen on any other channels? Or is it specific to a certain channel?-->
- Can you reproduce this issue with the current release?
- Can you reproduce this issue with the beta channel?
- Can you reproduce this issue with the nightly channel?
## Other Additional Information:
- Does the issue resolve itself when disabling Brave Shields?
- Does the issue resolve itself when disabling Brave Rewards?
- Is the issue reproducible on the latest version of Chrome?
## Miscellaneous Information:
<!--Any additional information, related issues, extra QA steps, configuration or data that might be necessary to reproduce the issue-->
cc: @nullhook @aguscruiz
|
1.0
|
Remove unnecessary scroll bar on shields v2 for details view - <!-- Have you searched for similar issues? Before submitting this issue, please check the open issues and add a note before logging a new issue.
PLEASE USE THE TEMPLATE BELOW TO PROVIDE INFORMATION ABOUT THE ISSUE.
INSUFFICIENT INFO WILL GET THE ISSUE CLOSED. IT WILL ONLY BE REOPENED AFTER SUFFICIENT INFO IS PROVIDED-->
## Description
<!--Provide a brief description of the issue-->
Remove unnecessary scroll bar on shields v2 for details view
## Steps to Reproduce
<!--Please add a series of steps to reproduce the issue-->
1. Enable Shields v2 flag
2. Visit any site and open shields
3. Expand to advance view and click on blocked items, shows unnecessary scrollbars
## Actual result:
<!--Please add screenshots if needed-->
<img width="396" alt="image" src="https://user-images.githubusercontent.com/17010094/157500672-f763c41e-0962-4fbf-9f70-d34a069f7cd4.png">
## Expected result:
Only show scrollbar when there is an overflow from visible box
## Reproduces how often:
<!--[Easily reproduced/Intermittent issue/No steps to reproduce]-->
Easy
## Brave version (brave://version info)
<!--For installed build, please copy Brave, Revision and OS from brave://version and paste here. If building from source please mention it along with brave://version details-->
With flag enabled
## Version/Channel Information:
<!--Does this issue happen on any other channels? Or is it specific to a certain channel?-->
- Can you reproduce this issue with the current release?
- Can you reproduce this issue with the beta channel?
- Can you reproduce this issue with the nightly channel?
## Other Additional Information:
- Does the issue resolve itself when disabling Brave Shields?
- Does the issue resolve itself when disabling Brave Rewards?
- Is the issue reproducible on the latest version of Chrome?
## Miscellaneous Information:
<!--Any additional information, related issues, extra QA steps, configuration or data that might be necessary to reproduce the issue-->
cc: @nullhook @aguscruiz
|
non_defect
|
remove unnecessary scroll bar on shields for details view have you searched for similar issues before submitting this issue please check the open issues and add a note before logging a new issue please use the template below to provide information about the issue insufficient info will get the issue closed it will only be reopened after sufficient info is provided description remove unnecessary scroll bar on shields for details view steps to reproduce enable shields flag visit any site and open shields expand to advance view and click on blocked items shows unnecessary scrollbars actual result img width alt image src expected result only show scrollbar when there is an overflow from visible box reproduces how often easy brave version brave version info with flag enabled version channel information can you reproduce this issue with the current release can you reproduce this issue with the beta channel can you reproduce this issue with the nightly channel other additional information does the issue resolve itself when disabling brave shields does the issue resolve itself when disabling brave rewards is the issue reproducible on the latest version of chrome miscellaneous information cc nullhook aguscruiz
| 0
|
17,057
| 2,974,405,920
|
IssuesEvent
|
2015-07-15 00:18:02
|
davidhabib/Volunteers-for-Salesforce
|
https://api.github.com/repos/davidhabib/Volunteers-for-Salesforce
|
closed
|
Avoid job signups while one in progress, to avoid duplicate contacts being created
|
Defect High Priority
|
hub thread: https://powerofus.force.com/0D580000028Q8KH
I was finally able to reproduce this by simply clicking Signup on the popup dialog, and then quickly clicking another signup link and ok'ing the dialog. Solution is to probably disable the page until it refreshes from the server, to avoid this race condition.
|
1.0
|
Avoid job signups while one in progress, to avoid duplicate contacts being created - hub thread: https://powerofus.force.com/0D580000028Q8KH
I was finally able to reproduce this by simply clicking Signup on the popup dialog, and then quickly clicking another signup link and ok'ing the dialog. Solution is to probably disable the page until it refreshes from the server, to avoid this race condition.
|
defect
|
avoid job signups while one in progress to avoid duplicate contacts being created hub thread i was finally able to reproduce this by simply clicking signup on the popup dialog and then quickly clicking another signup link and ok ing the dialog solution is to probably disable the page until it refreshes from the server to avoid this race condition
| 1
|
28,019
| 4,077,241,694
|
IssuesEvent
|
2016-05-30 07:03:29
|
V-Squared/v2-Production
|
https://api.github.com/repos/V-Squared/v2-Production
|
closed
|
E: Design ViPanel & ViSurge 05.05.
|
m.size.epic m.stage.2.design m.type.E.pcb m.type.E.schematic
|
- [x] Fix → [Safety Issue of Fuse Holder](#issuecomment-219592890)
- [x] Create snap shot of new design, send to HC to publish in Issue for review by BC
- [x] @bcaswelch Review new design
- [x] Complete Design
- [x] Trigger review to HC
|
1.0
|
E: Design ViPanel & ViSurge 05.05. - - [x] Fix → [Safety Issue of Fuse Holder](#issuecomment-219592890)
- [x] Create snap shot of new design, send to HC to publish in Issue for review by BC
- [x] @bcaswelch Review new design
- [x] Complete Design
- [x] Trigger review to HC
|
non_defect
|
e design vipanel visurge fix → issuecomment create snap shot of new design send to hc to publish in issue for review by bc bcaswelch review new design complete design trigger review to hc
| 0
|
58,401
| 16,525,826,585
|
IssuesEvent
|
2021-05-26 19:58:14
|
NREL/EnergyPlus
|
https://api.github.com/repos/NREL/EnergyPlus
|
opened
|
Component Sizing Summary Report table headers are redundant in using the word "Design"
|
Defect NotIDDChange
|
Issue overview
--------------
The column headers for a number of tables in the Component Sizing Summary Report is redundant in using the word "Design". Examples of this are shown below:



### Details
Some additional details for this issue (if relevant):
- EnergyPlus v9.5
### Checklist
Add to this list or remove from it as applicable. This is a simple templated set of guidelines.
- [ ] Defect file added (list location of defect file here)
- [ ] Ticket added to Pivotal for defect (development team task)
- [ ] Pull request created (the pull request will have additional tasks related to reviewing changes that fix this defect)
|
1.0
|
Component Sizing Summary Report table headers are redundant in using the word "Design" - Issue overview
--------------
The column headers for a number of tables in the Component Sizing Summary Report is redundant in using the word "Design". Examples of this are shown below:



### Details
Some additional details for this issue (if relevant):
- EnergyPlus v9.5
### Checklist
Add to this list or remove from it as applicable. This is a simple templated set of guidelines.
- [ ] Defect file added (list location of defect file here)
- [ ] Ticket added to Pivotal for defect (development team task)
- [ ] Pull request created (the pull request will have additional tasks related to reviewing changes that fix this defect)
|
defect
|
component sizing summary report table headers are redundant in using the word design issue overview the column headers for a number of tables in the component sizing summary report is redundant in using the word design examples of this are shown below details some additional details for this issue if relevant energyplus checklist add to this list or remove from it as applicable this is a simple templated set of guidelines defect file added list location of defect file here ticket added to pivotal for defect development team task pull request created the pull request will have additional tasks related to reviewing changes that fix this defect
| 1
|
43,093
| 11,464,165,246
|
IssuesEvent
|
2020-02-07 17:27:29
|
snowplow/snowplow-javascript-tracker
|
https://api.github.com/repos/snowplow/snowplow-javascript-tracker
|
closed
|
Investigate intermittent Travis CI failures for `safari 7 on OS X 10.9`
|
type:defect
|
Failing build: https://travis-ci.org/snowplow/snowplow-javascript-tracker/builds/112592093
Every so often this test will fail on SauceLabs.
|
1.0
|
Investigate intermittent Travis CI failures for `safari 7 on OS X 10.9` - Failing build: https://travis-ci.org/snowplow/snowplow-javascript-tracker/builds/112592093
Every so often this test will fail on SauceLabs.
|
defect
|
investigate intermittent travis ci failures for safari on os x failing build every so often this test will fail on saucelabs
| 1
|
53,736
| 13,262,208,513
|
IssuesEvent
|
2020-08-20 21:18:52
|
icecube-trac/tix4
|
https://api.github.com/repos/icecube-trac/tix4
|
closed
|
make tarball fails when libarchive is missing xv-devel package (Trac #1977)
|
Migrated from Trac cmake defect
|
In testing PnF for pole deployment (using make tarball to create a deployable object):
We found 'make tarball' was failing on spts-access:
LZMA_LIBRARIES-NOTFOUND => jeb.trunk.r154481.Linux-x86_64.gcc-4.4.6/lib/tools
realpath: /scratch/tschmidt/new/icecube/icetray/work/jeb/build/LZMA_LIBRARIES-NOTFOUND
Traceback (most recent call last):
File "/home/tschmidt/jeb/trunk/cmake/install_shlib.py", line 16, in <module>
for f in os.listdir(srcdir):
OSError: [Errno 2] No such file or directory: ''
the libarchive.cmake tool does separately check for xv-devel (lzma) installations, and failed to find it, but it was still added to the libs to install in the tarball list, and caused the fail.
To work around, we got xv-devel installed on spts-access.
Likely a minor edge case, but best documented....
<details>
<summary><em>Migrated from <a href="https://code.icecube.wisc.edu/projects/icecube/ticket/1977">https://code.icecube.wisc.edu/projects/icecube/ticket/1977</a>, reported by blaufussand owned by nega</em></summary>
<p>
```json
{
"status": "closed",
"changetime": "2019-02-13T14:14:03",
"_ts": "1550067243750755",
"description": "In testing PnF for pole deployment (using make tarball to create a deployable object):\n\nWe found 'make tarball' was failing on spts-access:\nLZMA_LIBRARIES-NOTFOUND => jeb.trunk.r154481.Linux-x86_64.gcc-4.4.6/lib/tools\nrealpath: /scratch/tschmidt/new/icecube/icetray/work/jeb/build/LZMA_LIBRARIES-NOTFOUND\nTraceback (most recent call last):\n File \"/home/tschmidt/jeb/trunk/cmake/install_shlib.py\", line 16, in <module>\n for f in os.listdir(srcdir):\nOSError: [Errno 2] No such file or directory: ''\n\n\nthe libarchive.cmake tool does separately check for xv-devel (lzma) installations, and failed to find it, but it was still added to the libs to install in the tarball list, and caused the fail. \n\nTo work around, we got xv-devel installed on spts-access. \n\nLikely a minor edge case, but best documented....",
"reporter": "blaufuss",
"cc": "",
"resolution": "fixed",
"time": "2017-03-31T14:17:02",
"component": "cmake",
"summary": "make tarball fails when libarchive is missing xv-devel package",
"priority": "minor",
"keywords": "",
"milestone": "",
"owner": "nega",
"type": "defect"
}
```
</p>
</details>
|
1.0
|
make tarball fails when libarchive is missing xv-devel package (Trac #1977) - In testing PnF for pole deployment (using make tarball to create a deployable object):
We found 'make tarball' was failing on spts-access:
LZMA_LIBRARIES-NOTFOUND => jeb.trunk.r154481.Linux-x86_64.gcc-4.4.6/lib/tools
realpath: /scratch/tschmidt/new/icecube/icetray/work/jeb/build/LZMA_LIBRARIES-NOTFOUND
Traceback (most recent call last):
File "/home/tschmidt/jeb/trunk/cmake/install_shlib.py", line 16, in <module>
for f in os.listdir(srcdir):
OSError: [Errno 2] No such file or directory: ''
the libarchive.cmake tool does separately check for xv-devel (lzma) installations, and failed to find it, but it was still added to the libs to install in the tarball list, and caused the fail.
To work around, we got xv-devel installed on spts-access.
Likely a minor edge case, but best documented....
<details>
<summary><em>Migrated from <a href="https://code.icecube.wisc.edu/projects/icecube/ticket/1977">https://code.icecube.wisc.edu/projects/icecube/ticket/1977</a>, reported by blaufussand owned by nega</em></summary>
<p>
```json
{
"status": "closed",
"changetime": "2019-02-13T14:14:03",
"_ts": "1550067243750755",
"description": "In testing PnF for pole deployment (using make tarball to create a deployable object):\n\nWe found 'make tarball' was failing on spts-access:\nLZMA_LIBRARIES-NOTFOUND => jeb.trunk.r154481.Linux-x86_64.gcc-4.4.6/lib/tools\nrealpath: /scratch/tschmidt/new/icecube/icetray/work/jeb/build/LZMA_LIBRARIES-NOTFOUND\nTraceback (most recent call last):\n File \"/home/tschmidt/jeb/trunk/cmake/install_shlib.py\", line 16, in <module>\n for f in os.listdir(srcdir):\nOSError: [Errno 2] No such file or directory: ''\n\n\nthe libarchive.cmake tool does separately check for xv-devel (lzma) installations, and failed to find it, but it was still added to the libs to install in the tarball list, and caused the fail. \n\nTo work around, we got xv-devel installed on spts-access. \n\nLikely a minor edge case, but best documented....",
"reporter": "blaufuss",
"cc": "",
"resolution": "fixed",
"time": "2017-03-31T14:17:02",
"component": "cmake",
"summary": "make tarball fails when libarchive is missing xv-devel package",
"priority": "minor",
"keywords": "",
"milestone": "",
"owner": "nega",
"type": "defect"
}
```
</p>
</details>
|
defect
|
make tarball fails when libarchive is missing xv devel package trac in testing pnf for pole deployment using make tarball to create a deployable object we found make tarball was failing on spts access lzma libraries notfound jeb trunk linux gcc lib tools realpath scratch tschmidt new icecube icetray work jeb build lzma libraries notfound traceback most recent call last file home tschmidt jeb trunk cmake install shlib py line in for f in os listdir srcdir oserror no such file or directory the libarchive cmake tool does separately check for xv devel lzma installations and failed to find it but it was still added to the libs to install in the tarball list and caused the fail to work around we got xv devel installed on spts access likely a minor edge case but best documented migrated from json status closed changetime ts description in testing pnf for pole deployment using make tarball to create a deployable object n nwe found make tarball was failing on spts access nlzma libraries notfound jeb trunk linux gcc lib tools nrealpath scratch tschmidt new icecube icetray work jeb build lzma libraries notfound ntraceback most recent call last n file home tschmidt jeb trunk cmake install shlib py line in n for f in os listdir srcdir noserror no such file or directory n n nthe libarchive cmake tool does separately check for xv devel lzma installations and failed to find it but it was still added to the libs to install in the tarball list and caused the fail n nto work around we got xv devel installed on spts access n nlikely a minor edge case but best documented reporter blaufuss cc resolution fixed time component cmake summary make tarball fails when libarchive is missing xv devel package priority minor keywords milestone owner nega type defect
| 1
|
11,156
| 16,532,723,157
|
IssuesEvent
|
2021-05-27 08:12:22
|
renovatebot/renovate
|
https://api.github.com/repos/renovatebot/renovate
|
opened
|
`separateMajorMinor` flag in package rule doesn't override the global one for grouped dependencies
|
priority-5-triage status:requirements type:bug
|
**How are you running Renovate?**
- [x] WhiteSource Renovate hosted app on github.com
- [ ] Self hosted
Renovate version: 25.31.2
**Describe the bug**
`separateMajorMinor` flag in package rules doesn't seem to be overridden the global value for grouped dependencies.
For instance,
```
"separateMajorMinor": false,
"packageRules": [
{
"matchPackagePatterns": [
"^org\\.eclipse\\.jetty:jetty-bom$",
"^version\\.hibernate-validator$"
],
"separateMajorMinor": true,
"separateMinorPatch": true
}
]
```
We should have two PRs for Hibernate Validator `6.1.7.Final` given that `6.2.0.Final` and `7.0.1.Final` are available. However, there is only one open PR: https://github.com/hisener/renovate-tests/pull/65
Seems like it only affects grouped dependencies as we have two separate PRs for Jetty. See https://github.com/hisener/renovate-tests/pull/63 and https://github.com/hisener/renovate-tests/pull/64.
Renovate config: https://github.com/hisener/renovate-tests/blob/master/renovate.json
See also https://github.com/renovatebot/renovate/pull/9746#issuecomment-838016944.
**Relevant debug logs**
<details><summary>Click me to see logs</summary>
```
DEBUG: packageFiles with updates
{
"config": {
"maven": [
{
"datasource": "maven",
"packageFile": "pom.xml",
"deps": [
{
"datasource": "maven",
"depName": "org.hibernate.validator:hibernate-validator",
"currentValue": "6.1.7.Final",
"fileReplacePosition": 536,
"registryUrls": [
"https://repo.maven.apache.org/maven2"
],
"groupName": "version.hibernate-validator",
"depIndex": 0,
"warnings": [],
"currentVersion": "6.1.7.Final",
"isSingleVersion": true,
"fixedVersion": "6.1.7.Final",
"updates": [
{
"bucket": "latest",
"newVersion": "7.0.1.Final",
"newValue": "7.0.1.Final",
"releaseTimestamp": "2021-02-06T18:26:43.000Z",
"newMajor": 7,
"newMinor": 0,
"updateType": "major",
"branchName": "renovate/hibernate-validator"
}
]
},
{
"datasource": "maven",
"depName": "org.hibernate.validator:hibernate-validator-test-utils",
"currentValue": "6.1.7.Final",
"fileReplacePosition": 536,
"registryUrls": [
"https://repo.maven.apache.org/maven2"
],
"groupName": "version.hibernate-validator",
"depIndex": 1,
"warnings": [],
"currentVersion": "6.1.7.Final",
"isSingleVersion": true,
"fixedVersion": "6.1.7.Final",
"updates": [
{
"bucket": "latest",
"newVersion": "7.0.1.Final",
"newValue": "7.0.1.Final",
"releaseTimestamp": "2021-02-06T18:26:28.000Z",
"newMajor": 7,
"newMinor": 0,
"updateType": "major",
"branchName": "renovate/hibernate-validator"
}
]
},
{
"datasource": "maven",
"depName": "org.eclipse.jetty:jetty-bom",
"currentValue": "9.4.40.v20210413",
"fileReplacePosition": 1286,
"registryUrls": [
"https://repo.maven.apache.org/maven2"
],
"depType": "import",
"depIndex": 2,
"warnings": [],
"sourceUrl": "https://github.com/eclipse/jetty.project",
"homepage": "https://eclipse.org/jetty",
"currentVersion": "9.4.40.v20210413",
"isSingleVersion": true,
"fixedVersion": "9.4.40.v20210413",
"updates": [
{
"bucket": "patch",
"newVersion": "9.4.41.v20210516",
"newValue": "9.4.41.v20210516",
"releaseTimestamp": "2021-05-17T00:08:06.000Z",
"newMajor": 9,
"newMinor": 4,
"updateType": "patch",
"branchName": "renovate/org.eclipse.jetty-jetty-bom-9.4.x"
},
{
"bucket": "major",
"newVersion": "11.0.3",
"newValue": "11.0.3",
"releaseTimestamp": "2021-05-20T22:05:15.000Z",
"newMajor": 11,
"newMinor": 0,
"updateType": "major",
"branchName": "renovate/org.eclipse.jetty-jetty-bom-11.x"
}
]
},
{
"datasource": "maven",
"depName": "org.apache.maven.plugins:maven-compiler-plugin",
"currentValue": "3.8.1",
"fileReplacePosition": 1709,
"registryUrls": [
"https://repo.maven.apache.org/maven2"
],
"depIndex": 3,
"warnings": [],
"currentVersion": "3.8.1",
"fixedVersion": "3.8.1",
"updates": []
},
{
"datasource": "maven",
"depName": "org.hibernate.validator:hibernate-validator-annotation-processor",
"currentValue": "6.1.7.Final",
"fileReplacePosition": 536,
"registryUrls": [
"https://repo.maven.apache.org/maven2"
],
"groupName": "version.hibernate-validator",
"depIndex": 4,
"warnings": [],
"currentVersion": "6.1.7.Final",
"isSingleVersion": true,
"fixedVersion": "6.1.7.Final",
"updates": [
{
"bucket": "latest",
"newVersion": "7.0.1.Final",
"newValue": "7.0.1.Final",
"releaseTimestamp": "2021-02-06T18:26:48.000Z",
"newMajor": 7,
"newMinor": 0,
"updateType": "major",
"branchName": "renovate/hibernate-validator"
}
]
}
]
}
]
}
}
DEBUG: processRepo()
DEBUG: Processing 3 branches: renovate/hibernate-validator, renovate/org.eclipse.jetty-jetty-bom-11.x, renovate/org.eclipse.jetty-jetty-bom-9.4.x
```
</details>
**Have you created a minimal reproduction repository?**
- [x] I have provided a minimal reproduction repository
- [ ] I don't have time for that, but it happens in a public repository I have linked to
- [ ] I don't have time for that, and cannot share my private repository
- [ ] The nature of this bug means it's impossible to reproduce publicly
|
1.0
|
`separateMajorMinor` flag in package rule doesn't override the global one for grouped dependencies - **How are you running Renovate?**
- [x] WhiteSource Renovate hosted app on github.com
- [ ] Self hosted
Renovate version: 25.31.2
**Describe the bug**
`separateMajorMinor` flag in package rules doesn't seem to be overridden the global value for grouped dependencies.
For instance,
```
"separateMajorMinor": false,
"packageRules": [
{
"matchPackagePatterns": [
"^org\\.eclipse\\.jetty:jetty-bom$",
"^version\\.hibernate-validator$"
],
"separateMajorMinor": true,
"separateMinorPatch": true
}
]
```
We should have two PRs for Hibernate Validator `6.1.7.Final` given that `6.2.0.Final` and `7.0.1.Final` are available. However, there is only one open PR: https://github.com/hisener/renovate-tests/pull/65
Seems like it only affects grouped dependencies as we have two separate PRs for Jetty. See https://github.com/hisener/renovate-tests/pull/63 and https://github.com/hisener/renovate-tests/pull/64.
Renovate config: https://github.com/hisener/renovate-tests/blob/master/renovate.json
See also https://github.com/renovatebot/renovate/pull/9746#issuecomment-838016944.
**Relevant debug logs**
<details><summary>Click me to see logs</summary>
```
DEBUG: packageFiles with updates
{
"config": {
"maven": [
{
"datasource": "maven",
"packageFile": "pom.xml",
"deps": [
{
"datasource": "maven",
"depName": "org.hibernate.validator:hibernate-validator",
"currentValue": "6.1.7.Final",
"fileReplacePosition": 536,
"registryUrls": [
"https://repo.maven.apache.org/maven2"
],
"groupName": "version.hibernate-validator",
"depIndex": 0,
"warnings": [],
"currentVersion": "6.1.7.Final",
"isSingleVersion": true,
"fixedVersion": "6.1.7.Final",
"updates": [
{
"bucket": "latest",
"newVersion": "7.0.1.Final",
"newValue": "7.0.1.Final",
"releaseTimestamp": "2021-02-06T18:26:43.000Z",
"newMajor": 7,
"newMinor": 0,
"updateType": "major",
"branchName": "renovate/hibernate-validator"
}
]
},
{
"datasource": "maven",
"depName": "org.hibernate.validator:hibernate-validator-test-utils",
"currentValue": "6.1.7.Final",
"fileReplacePosition": 536,
"registryUrls": [
"https://repo.maven.apache.org/maven2"
],
"groupName": "version.hibernate-validator",
"depIndex": 1,
"warnings": [],
"currentVersion": "6.1.7.Final",
"isSingleVersion": true,
"fixedVersion": "6.1.7.Final",
"updates": [
{
"bucket": "latest",
"newVersion": "7.0.1.Final",
"newValue": "7.0.1.Final",
"releaseTimestamp": "2021-02-06T18:26:28.000Z",
"newMajor": 7,
"newMinor": 0,
"updateType": "major",
"branchName": "renovate/hibernate-validator"
}
]
},
{
"datasource": "maven",
"depName": "org.eclipse.jetty:jetty-bom",
"currentValue": "9.4.40.v20210413",
"fileReplacePosition": 1286,
"registryUrls": [
"https://repo.maven.apache.org/maven2"
],
"depType": "import",
"depIndex": 2,
"warnings": [],
"sourceUrl": "https://github.com/eclipse/jetty.project",
"homepage": "https://eclipse.org/jetty",
"currentVersion": "9.4.40.v20210413",
"isSingleVersion": true,
"fixedVersion": "9.4.40.v20210413",
"updates": [
{
"bucket": "patch",
"newVersion": "9.4.41.v20210516",
"newValue": "9.4.41.v20210516",
"releaseTimestamp": "2021-05-17T00:08:06.000Z",
"newMajor": 9,
"newMinor": 4,
"updateType": "patch",
"branchName": "renovate/org.eclipse.jetty-jetty-bom-9.4.x"
},
{
"bucket": "major",
"newVersion": "11.0.3",
"newValue": "11.0.3",
"releaseTimestamp": "2021-05-20T22:05:15.000Z",
"newMajor": 11,
"newMinor": 0,
"updateType": "major",
"branchName": "renovate/org.eclipse.jetty-jetty-bom-11.x"
}
]
},
{
"datasource": "maven",
"depName": "org.apache.maven.plugins:maven-compiler-plugin",
"currentValue": "3.8.1",
"fileReplacePosition": 1709,
"registryUrls": [
"https://repo.maven.apache.org/maven2"
],
"depIndex": 3,
"warnings": [],
"currentVersion": "3.8.1",
"fixedVersion": "3.8.1",
"updates": []
},
{
"datasource": "maven",
"depName": "org.hibernate.validator:hibernate-validator-annotation-processor",
"currentValue": "6.1.7.Final",
"fileReplacePosition": 536,
"registryUrls": [
"https://repo.maven.apache.org/maven2"
],
"groupName": "version.hibernate-validator",
"depIndex": 4,
"warnings": [],
"currentVersion": "6.1.7.Final",
"isSingleVersion": true,
"fixedVersion": "6.1.7.Final",
"updates": [
{
"bucket": "latest",
"newVersion": "7.0.1.Final",
"newValue": "7.0.1.Final",
"releaseTimestamp": "2021-02-06T18:26:48.000Z",
"newMajor": 7,
"newMinor": 0,
"updateType": "major",
"branchName": "renovate/hibernate-validator"
}
]
}
]
}
]
}
}
DEBUG: processRepo()
DEBUG: Processing 3 branches: renovate/hibernate-validator, renovate/org.eclipse.jetty-jetty-bom-11.x, renovate/org.eclipse.jetty-jetty-bom-9.4.x
```
</details>
**Have you created a minimal reproduction repository?**
- [x] I have provided a minimal reproduction repository
- [ ] I don't have time for that, but it happens in a public repository I have linked to
- [ ] I don't have time for that, and cannot share my private repository
- [ ] The nature of this bug means it's impossible to reproduce publicly
|
non_defect
|
separatemajorminor flag in package rule doesn t override the global one for grouped dependencies how are you running renovate whitesource renovate hosted app on github com self hosted renovate version describe the bug separatemajorminor flag in package rules doesn t seem to be overridden the global value for grouped dependencies for instance separatemajorminor false packagerules matchpackagepatterns org eclipse jetty jetty bom version hibernate validator separatemajorminor true separateminorpatch true we should have two prs for hibernate validator final given that final and final are available however there is only one open pr seems like it only affects grouped dependencies as we have two separate prs for jetty see and renovate config see also relevant debug logs click me to see logs debug packagefiles with updates config maven datasource maven packagefile pom xml deps datasource maven depname org hibernate validator hibernate validator currentvalue final filereplaceposition registryurls groupname version hibernate validator depindex warnings currentversion final issingleversion true fixedversion final updates bucket latest newversion final newvalue final releasetimestamp newmajor newminor updatetype major branchname renovate hibernate validator datasource maven depname org hibernate validator hibernate validator test utils currentvalue final filereplaceposition registryurls groupname version hibernate validator depindex warnings currentversion final issingleversion true fixedversion final updates bucket latest newversion final newvalue final releasetimestamp newmajor newminor updatetype major branchname renovate hibernate validator datasource maven depname org eclipse jetty jetty bom currentvalue filereplaceposition registryurls deptype import depindex warnings sourceurl homepage currentversion issingleversion true fixedversion updates bucket patch newversion newvalue releasetimestamp newmajor newminor updatetype patch branchname renovate org eclipse jetty jetty bom x bucket major newversion newvalue releasetimestamp newmajor newminor updatetype major branchname renovate org eclipse jetty jetty bom x datasource maven depname org apache maven plugins maven compiler plugin currentvalue filereplaceposition registryurls depindex warnings currentversion fixedversion updates datasource maven depname org hibernate validator hibernate validator annotation processor currentvalue final filereplaceposition registryurls groupname version hibernate validator depindex warnings currentversion final issingleversion true fixedversion final updates bucket latest newversion final newvalue final releasetimestamp newmajor newminor updatetype major branchname renovate hibernate validator debug processrepo debug processing branches renovate hibernate validator renovate org eclipse jetty jetty bom x renovate org eclipse jetty jetty bom x have you created a minimal reproduction repository i have provided a minimal reproduction repository i don t have time for that but it happens in a public repository i have linked to i don t have time for that and cannot share my private repository the nature of this bug means it s impossible to reproduce publicly
| 0
|
46,623
| 2,963,550,492
|
IssuesEvent
|
2015-07-10 11:13:35
|
thexerteproject/xerteonlinetoolkits
|
https://api.github.com/repos/thexerteproject/xerteonlinetoolkits
|
closed
|
Workspace doesn't refresh if popups are blocked when creating new LO
|
bug high priority v3 Release
|
It often results in multiple clicks and then when Workspace is refreshed there are loads of LOs..
We can fix this by clearing name after click and refreshing Workspace differently...
|
1.0
|
Workspace doesn't refresh if popups are blocked when creating new LO - It often results in multiple clicks and then when Workspace is refreshed there are loads of LOs..
We can fix this by clearing name after click and refreshing Workspace differently...
|
non_defect
|
workspace doesn t refresh if popups are blocked when creating new lo it often results in multiple clicks and then when workspace is refreshed there are loads of los we can fix this by clearing name after click and refreshing workspace differently
| 0
|
59,659
| 17,023,194,169
|
IssuesEvent
|
2021-07-03 00:48:08
|
tomhughes/trac-tickets
|
https://api.github.com/repos/tomhughes/trac-tickets
|
closed
|
Order of layers added causes old permalinks to show maplint layer incorrectly
|
Component: mapnik Priority: minor Resolution: wontfix Type: defect
|
**[Submitted to the original trac issue database at 5.07pm, Wednesday, 2nd January 2008]**
When the maplint layer was added, maplint replaced the position of marker in the permalink layers parameter.
Old permalinks that had the markers layer set to show, will cause the maplint layer to show.
One solution is to swap the order in which the maplint and marker layers are added to the map. (see patch)
Old permalink examples: http://www.openstreetmap.org/?lat=48.21162&lon=16.33916&zoom=17&layers=0BT
http://osm.ge.pythonmoo.co.uk/?lat=48.21162&lon=16.33916&zoom=17&layers=0BT functions as expected on my patched rails port.
|
1.0
|
Order of layers added causes old permalinks to show maplint layer incorrectly - **[Submitted to the original trac issue database at 5.07pm, Wednesday, 2nd January 2008]**
When the maplint layer was added, maplint replaced the position of marker in the permalink layers parameter.
Old permalinks that had the markers layer set to show, will cause the maplint layer to show.
One solution is to swap the order in which the maplint and marker layers are added to the map. (see patch)
Old permalink examples: http://www.openstreetmap.org/?lat=48.21162&lon=16.33916&zoom=17&layers=0BT
http://osm.ge.pythonmoo.co.uk/?lat=48.21162&lon=16.33916&zoom=17&layers=0BT functions as expected on my patched rails port.
|
defect
|
order of layers added causes old permalinks to show maplint layer incorrectly when the maplint layer was added maplint replaced the position of marker in the permalink layers parameter old permalinks that had the markers layer set to show will cause the maplint layer to show one solution is to swap the order in which the maplint and marker layers are added to the map see patch old permalink examples functions as expected on my patched rails port
| 1
|
144,757
| 19,298,878,833
|
IssuesEvent
|
2021-12-13 01:02:19
|
snowdensb/job-dsl-plugin
|
https://api.github.com/repos/snowdensb/job-dsl-plugin
|
closed
|
CVE-2021-44228 (High) detected in log4j-1.2.9.jar, log4j-1.2.12.jar - autoclosed
|
security vulnerability
|
## CVE-2021-44228 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>log4j-1.2.9.jar</b>, <b>log4j-1.2.12.jar</b></p></summary>
<p>
<details><summary><b>log4j-1.2.9.jar</b></p></summary>
<p></p>
<p>Path to dependency file: job-dsl-plugin/job-dsl-plugin/build.gradle</p>
<p>Path to vulnerable library: /home/wss-scanner/.gradle/caches/modules-2/files-2.1/log4j/log4j/1.2.9/55856d711ab8b88f8c7b04fd85ff1643ffbfde7c/log4j-1.2.9.jar</p>
<p>
Dependency Hierarchy:
- jenkins-core-2.176.jar (Root Library)
- acegi-security-1.0.7.jar
- :x: **log4j-1.2.9.jar** (Vulnerable Library)
</details>
<details><summary><b>log4j-1.2.12.jar</b></p></summary>
<p></p>
<p>Path to dependency file: job-dsl-plugin/job-dsl-plugin/build.gradle</p>
<p>Path to vulnerable library: /tmp/ws-ua_20210927190226_TRFFHW/downloadResource_SQXZWB/20210927190451/log4j-1.2.12.jar</p>
<p>
Dependency Hierarchy:
- vsphere-cloud-1.1.11.jar (Root Library)
- json-lib-2.1-rev7.jar
- commons-beanutils-1.7.0.jar
- commons-logging-1.1.jar
- :x: **log4j-1.2.12.jar** (Vulnerable Library)
</details>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Log4j versions prior to 2.15.0 are subject to a remote code execution vulnerability via the ldap JNDI parser.
<p>Publish Date: 2021-11-27
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-44228>CVE-2021-44228</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>10.0</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Changed
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/advisories/GHSA-jfh8-c2jp-5v3q">https://github.com/advisories/GHSA-jfh8-c2jp-5v3q</a></p>
<p>Release Date: 2021-12-10</p>
<p>Fix Resolution: org.apache.logging.log4j:log4j-core:2.15.0</p>
</p>
</details>
<p></p>
<!-- <REMEDIATE>{"isOpenPROnVulnerability":true,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"Java","groupId":"log4j","packageName":"log4j","packageVersion":"1.2.9","packageFilePaths":["/job-dsl-plugin/build.gradle"],"isTransitiveDependency":true,"dependencyTree":"org.jenkins-ci.main:jenkins-core:2.176;org.acegisecurity:acegi-security:1.0.7;log4j:log4j:1.2.9","isMinimumFixVersionAvailable":true,"minimumFixVersion":"org.apache.logging.log4j:log4j-core:2.15.0","isBinary":false},{"packageType":"Java","groupId":"log4j","packageName":"log4j","packageVersion":"1.2.12","packageFilePaths":["/job-dsl-plugin/build.gradle"],"isTransitiveDependency":true,"dependencyTree":"org.jenkins-ci.plugins:vsphere-cloud:1.1.11;org.kohsuke.stapler:json-lib:2.1-rev7;commons-beanutils:commons-beanutils:1.7.0;commons-logging:commons-logging:1.1;log4j:log4j:1.2.12","isMinimumFixVersionAvailable":true,"minimumFixVersion":"org.apache.logging.log4j:log4j-core:2.15.0","isBinary":false}],"baseBranches":["master"],"vulnerabilityIdentifier":"CVE-2021-44228","vulnerabilityDetails":"Log4j versions prior to 2.15.0 are subject to a remote code execution vulnerability via the ldap JNDI parser.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-44228","cvss3Severity":"high","cvss3Score":"10.0","cvss3Metrics":{"A":"High","AC":"Low","PR":"None","S":"Changed","C":"High","UI":"None","AV":"Network","I":"High"},"extraData":{}}</REMEDIATE> -->
|
True
|
CVE-2021-44228 (High) detected in log4j-1.2.9.jar, log4j-1.2.12.jar - autoclosed - ## CVE-2021-44228 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>log4j-1.2.9.jar</b>, <b>log4j-1.2.12.jar</b></p></summary>
<p>
<details><summary><b>log4j-1.2.9.jar</b></p></summary>
<p></p>
<p>Path to dependency file: job-dsl-plugin/job-dsl-plugin/build.gradle</p>
<p>Path to vulnerable library: /home/wss-scanner/.gradle/caches/modules-2/files-2.1/log4j/log4j/1.2.9/55856d711ab8b88f8c7b04fd85ff1643ffbfde7c/log4j-1.2.9.jar</p>
<p>
Dependency Hierarchy:
- jenkins-core-2.176.jar (Root Library)
- acegi-security-1.0.7.jar
- :x: **log4j-1.2.9.jar** (Vulnerable Library)
</details>
<details><summary><b>log4j-1.2.12.jar</b></p></summary>
<p></p>
<p>Path to dependency file: job-dsl-plugin/job-dsl-plugin/build.gradle</p>
<p>Path to vulnerable library: /tmp/ws-ua_20210927190226_TRFFHW/downloadResource_SQXZWB/20210927190451/log4j-1.2.12.jar</p>
<p>
Dependency Hierarchy:
- vsphere-cloud-1.1.11.jar (Root Library)
- json-lib-2.1-rev7.jar
- commons-beanutils-1.7.0.jar
- commons-logging-1.1.jar
- :x: **log4j-1.2.12.jar** (Vulnerable Library)
</details>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Log4j versions prior to 2.15.0 are subject to a remote code execution vulnerability via the ldap JNDI parser.
<p>Publish Date: 2021-11-27
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-44228>CVE-2021-44228</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>10.0</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Changed
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/advisories/GHSA-jfh8-c2jp-5v3q">https://github.com/advisories/GHSA-jfh8-c2jp-5v3q</a></p>
<p>Release Date: 2021-12-10</p>
<p>Fix Resolution: org.apache.logging.log4j:log4j-core:2.15.0</p>
</p>
</details>
<p></p>
<!-- <REMEDIATE>{"isOpenPROnVulnerability":true,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"Java","groupId":"log4j","packageName":"log4j","packageVersion":"1.2.9","packageFilePaths":["/job-dsl-plugin/build.gradle"],"isTransitiveDependency":true,"dependencyTree":"org.jenkins-ci.main:jenkins-core:2.176;org.acegisecurity:acegi-security:1.0.7;log4j:log4j:1.2.9","isMinimumFixVersionAvailable":true,"minimumFixVersion":"org.apache.logging.log4j:log4j-core:2.15.0","isBinary":false},{"packageType":"Java","groupId":"log4j","packageName":"log4j","packageVersion":"1.2.12","packageFilePaths":["/job-dsl-plugin/build.gradle"],"isTransitiveDependency":true,"dependencyTree":"org.jenkins-ci.plugins:vsphere-cloud:1.1.11;org.kohsuke.stapler:json-lib:2.1-rev7;commons-beanutils:commons-beanutils:1.7.0;commons-logging:commons-logging:1.1;log4j:log4j:1.2.12","isMinimumFixVersionAvailable":true,"minimumFixVersion":"org.apache.logging.log4j:log4j-core:2.15.0","isBinary":false}],"baseBranches":["master"],"vulnerabilityIdentifier":"CVE-2021-44228","vulnerabilityDetails":"Log4j versions prior to 2.15.0 are subject to a remote code execution vulnerability via the ldap JNDI parser.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-44228","cvss3Severity":"high","cvss3Score":"10.0","cvss3Metrics":{"A":"High","AC":"Low","PR":"None","S":"Changed","C":"High","UI":"None","AV":"Network","I":"High"},"extraData":{}}</REMEDIATE> -->
|
non_defect
|
cve high detected in jar jar autoclosed cve high severity vulnerability vulnerable libraries jar jar jar path to dependency file job dsl plugin job dsl plugin build gradle path to vulnerable library home wss scanner gradle caches modules files jar dependency hierarchy jenkins core jar root library acegi security jar x jar vulnerable library jar path to dependency file job dsl plugin job dsl plugin build gradle path to vulnerable library tmp ws ua trffhw downloadresource sqxzwb jar dependency hierarchy vsphere cloud jar root library json lib jar commons beanutils jar commons logging jar x jar vulnerable library found in base branch master vulnerability details versions prior to are subject to a remote code execution vulnerability via the ldap jndi parser publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope changed impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution org apache logging core isopenpronvulnerability true ispackagebased true isdefaultbranch true packages istransitivedependency true dependencytree org jenkins ci main jenkins core org acegisecurity acegi security isminimumfixversionavailable true minimumfixversion org apache logging core isbinary false packagetype java groupid packagename packageversion packagefilepaths istransitivedependency true dependencytree org jenkins ci plugins vsphere cloud org kohsuke stapler json lib commons beanutils commons beanutils commons logging commons logging isminimumfixversionavailable true minimumfixversion org apache logging core isbinary false basebranches vulnerabilityidentifier cve vulnerabilitydetails versions prior to are subject to a remote code execution vulnerability via the ldap jndi parser vulnerabilityurl
| 0
|
33,835
| 7,267,154,752
|
IssuesEvent
|
2018-02-20 02:52:40
|
jccastillo0007/eFacturaT
|
https://api.github.com/repos/jccastillo0007/eFacturaT
|
closed
|
Navision Nota de crédito - Marca error en número de pedimento, y pareciera que está OK
|
bug defect
|
PARA LA NOTA DE CRÉDITO: 18AVMX005
SI VEO EN EL LOG, MANDA QUE EL PEDIMENTO ES, ES DECIR CON 1 ESPACIO:
17 43 4325 7003170
PERO SI CONSULTO EN LA TABLA CORRESPONDIENTE, SI TIENE LOS 2 ESPACIOS REQUERIDOS POR EL SAT:
select
[Document No_] as "no. factura"
,[No_] as "referencia"
,[Nº pedimento]
from [IPNavision400SP2].[dbo].[MEXICO - IP - PEDIMENTO$Sales Cr_Memo Line]
where [Document No_]
in ( '18AVMX005' );
TONS YA NO ENTENDÍ…
|
1.0
|
Navision Nota de crédito - Marca error en número de pedimento, y pareciera que está OK - PARA LA NOTA DE CRÉDITO: 18AVMX005
SI VEO EN EL LOG, MANDA QUE EL PEDIMENTO ES, ES DECIR CON 1 ESPACIO:
17 43 4325 7003170
PERO SI CONSULTO EN LA TABLA CORRESPONDIENTE, SI TIENE LOS 2 ESPACIOS REQUERIDOS POR EL SAT:
select
[Document No_] as "no. factura"
,[No_] as "referencia"
,[Nº pedimento]
from [IPNavision400SP2].[dbo].[MEXICO - IP - PEDIMENTO$Sales Cr_Memo Line]
where [Document No_]
in ( '18AVMX005' );
TONS YA NO ENTENDÍ…
|
defect
|
navision nota de crédito marca error en número de pedimento y pareciera que está ok para la nota de crédito si veo en el log manda que el pedimento es es decir con espacio pero si consulto en la tabla correspondiente si tiene los espacios requeridos por el sat select as no factura as referencia from where in tons ya no entendí…
| 1
|
328,971
| 10,010,582,965
|
IssuesEvent
|
2019-07-15 08:25:37
|
webcompat/web-bugs
|
https://api.github.com/repos/webcompat/web-bugs
|
closed
|
mail.google.com - site is not usable
|
browser-fenix engine-gecko priority-critical
|
<!-- @browser: Firefox Mobile 68.0 -->
<!-- @ua_header: Mozilla/5.0 (Android 5.1.1; Mobile; rv:68.0) Gecko/68.0 Firefox/68.0 -->
<!-- @reported_with: -->
<!-- @extra_labels: browser-fenix -->
**URL**: https://mail.google.com
**Browser / Version**: Firefox Mobile 68.0
**Operating System**: Android 5.1.1
**Tested Another Browser**: Yes
**Problem type**: Site is not usable
**Description**: Could not attach file
**Steps to Reproduce**:
Was composing mail, clicked on attach file, selected file and then nothing happened whereas it was supposed to attach the file to the mail.
<details>
<summary>Browser Configuration</summary>
<ul>
<li>None</li>
</ul>
</details>
Submitted in the name of `@kuruv`
_From [webcompat.com](https://webcompat.com/) with ❤️_
|
1.0
|
mail.google.com - site is not usable - <!-- @browser: Firefox Mobile 68.0 -->
<!-- @ua_header: Mozilla/5.0 (Android 5.1.1; Mobile; rv:68.0) Gecko/68.0 Firefox/68.0 -->
<!-- @reported_with: -->
<!-- @extra_labels: browser-fenix -->
**URL**: https://mail.google.com
**Browser / Version**: Firefox Mobile 68.0
**Operating System**: Android 5.1.1
**Tested Another Browser**: Yes
**Problem type**: Site is not usable
**Description**: Could not attach file
**Steps to Reproduce**:
Was composing mail, clicked on attach file, selected file and then nothing happened whereas it was supposed to attach the file to the mail.
<details>
<summary>Browser Configuration</summary>
<ul>
<li>None</li>
</ul>
</details>
Submitted in the name of `@kuruv`
_From [webcompat.com](https://webcompat.com/) with ❤️_
|
non_defect
|
mail google com site is not usable url browser version firefox mobile operating system android tested another browser yes problem type site is not usable description could not attach file steps to reproduce was composing mail clicked on attach file selected file and then nothing happened whereas it was supposed to attach the file to the mail browser configuration none submitted in the name of kuruv from with ❤️
| 0
|
14,663
| 17,786,556,680
|
IssuesEvent
|
2021-08-31 11:48:39
|
bazelbuild/bazel
|
https://api.github.com/repos/bazelbuild/bazel
|
closed
|
Parcel V2 sandbox issue
|
type: support / not a bug (process) untriaged team-Local-Exec
|
### Description of the problem / feature request:
Bazel sandbox on Linux is causing problems with parcel V2 RC
### Bugs: what's the simplest, easiest way to reproduce this bug? Please provide a minimal example if possible.
I trying to run parcel V2 similiar to
the example for parcel V1 at
https://github.com/bazelbuild/rules_nodejs/tree/stable/examples/parcel
replacing
parcel.bzl
with
https://gist.github.com/kohlerm/b41c6b14f63db757341a618ed9bdb5de
(still not completely working)
it fails with
**Error: Bad file descriptor
Error: Bad file descriptor**
when running with the optoin
--spawn_strategy=standalone
the message disappears.
I suspect this is because parcel V2 is using memory mapped files for their cache see https://v2.parceljs.org/blog/rc0/
### What operating system are you running Bazel on?
Distributor ID: Kali
Description: Kali GNU/Linux Rolling
Release: 2020.3
Codename: kali-rolling
WSL2 on Windows 10
### What's the output of `bazel info release`?
> Replace this line with your answer.
release 3.7.2- (@non-git)
installed via nix package manager
> Replace this line with your answer.
### Have you found anything relevant by searching the web?
No
### Any other information, logs, or outputs that you want to share?
```
bazel build --subcommands bundle
INFO: Analyzed target //:bundle (0 packages loaded, 0 targets configured).
INFO: Found 1 target...
SUBCOMMAND: # //:bundle [action 'Bundling JavaScript bundle.js [parcel]', configuration: de16027a7878784780b32c21a23af2a4b61e42de13ca1bc3615d29f2d1aacb01, execution platform: @local_config_platform//:host]
(cd /home/kohlerm/.cache/bazel/_bazel_kohlerm/bbfa98f54bdb6f11a7cff30094948c96/execroot/examples_parcel && \
exec env - \
BAZEL_NODE_MODULES_ROOTS='' \
COMPILATION_MODE=fastbuild \
bazel-out/host/bin/external/npm/parcel/bin/parcel.sh build foo.js --dist-dir bazel-out/k8-fastbuild/bin --cache-dir /tmp/cache '--bazel_node_modules_manifest=bazel-out/k8-fastbuild/bin/_bundle.module_mappings.json')
ERROR: /home/kohlerm/bazel_parcel/BUILD.bazel:4:7: Bundling JavaScript bundle.js [parcel] failed (Exit 1): parcel.sh failed: error executing command bazel-out/host/bin/external/npm/parcel/bin/parcel.sh build foo.js --dist-dir bazel-out/k8-fastbuild/bin --cache-dir /tmp/cache ... (remaining 1 argument(s) skipped)
Use --sandbox_debug to see verbose messages from the sandbox parcel.sh failed: error executing command bazel-out/host/bin/external/npm/parcel/bin/parcel.sh build foo.js --dist-dir bazel-out/k8-fastbuild/bin --cache-dir /tmp/cache ... (remaining 1 argument(s) skipped)
Use --sandbox_debug to see verbose messages from the sandbox
**Error: Bad file descriptor
Error: Bad file descriptor**
Building...
Bundling...
Packaging & Optimizing...
✨ Built in 783ms
bazel-out/k8-fastbuild/bin/foo.js 60 B 157ms
Target //:bundle failed to build
Use --verbose_failures to see the command lines of failed build steps.
INFO: Elapsed time: 6.375s, Critical Path: 6.26s
INFO: 2 processes: 2 internal.
FAILED: Build did NOT complete successfully
```
|
1.0
|
Parcel V2 sandbox issue -
### Description of the problem / feature request:
Bazel sandbox on Linux is causing problems with parcel V2 RC
### Bugs: what's the simplest, easiest way to reproduce this bug? Please provide a minimal example if possible.
I trying to run parcel V2 similiar to
the example for parcel V1 at
https://github.com/bazelbuild/rules_nodejs/tree/stable/examples/parcel
replacing
parcel.bzl
with
https://gist.github.com/kohlerm/b41c6b14f63db757341a618ed9bdb5de
(still not completely working)
it fails with
**Error: Bad file descriptor
Error: Bad file descriptor**
when running with the optoin
--spawn_strategy=standalone
the message disappears.
I suspect this is because parcel V2 is using memory mapped files for their cache see https://v2.parceljs.org/blog/rc0/
### What operating system are you running Bazel on?
Distributor ID: Kali
Description: Kali GNU/Linux Rolling
Release: 2020.3
Codename: kali-rolling
WSL2 on Windows 10
### What's the output of `bazel info release`?
> Replace this line with your answer.
release 3.7.2- (@non-git)
installed via nix package manager
> Replace this line with your answer.
### Have you found anything relevant by searching the web?
No
### Any other information, logs, or outputs that you want to share?
```
bazel build --subcommands bundle
INFO: Analyzed target //:bundle (0 packages loaded, 0 targets configured).
INFO: Found 1 target...
SUBCOMMAND: # //:bundle [action 'Bundling JavaScript bundle.js [parcel]', configuration: de16027a7878784780b32c21a23af2a4b61e42de13ca1bc3615d29f2d1aacb01, execution platform: @local_config_platform//:host]
(cd /home/kohlerm/.cache/bazel/_bazel_kohlerm/bbfa98f54bdb6f11a7cff30094948c96/execroot/examples_parcel && \
exec env - \
BAZEL_NODE_MODULES_ROOTS='' \
COMPILATION_MODE=fastbuild \
bazel-out/host/bin/external/npm/parcel/bin/parcel.sh build foo.js --dist-dir bazel-out/k8-fastbuild/bin --cache-dir /tmp/cache '--bazel_node_modules_manifest=bazel-out/k8-fastbuild/bin/_bundle.module_mappings.json')
ERROR: /home/kohlerm/bazel_parcel/BUILD.bazel:4:7: Bundling JavaScript bundle.js [parcel] failed (Exit 1): parcel.sh failed: error executing command bazel-out/host/bin/external/npm/parcel/bin/parcel.sh build foo.js --dist-dir bazel-out/k8-fastbuild/bin --cache-dir /tmp/cache ... (remaining 1 argument(s) skipped)
Use --sandbox_debug to see verbose messages from the sandbox parcel.sh failed: error executing command bazel-out/host/bin/external/npm/parcel/bin/parcel.sh build foo.js --dist-dir bazel-out/k8-fastbuild/bin --cache-dir /tmp/cache ... (remaining 1 argument(s) skipped)
Use --sandbox_debug to see verbose messages from the sandbox
**Error: Bad file descriptor
Error: Bad file descriptor**
Building...
Bundling...
Packaging & Optimizing...
✨ Built in 783ms
bazel-out/k8-fastbuild/bin/foo.js 60 B 157ms
Target //:bundle failed to build
Use --verbose_failures to see the command lines of failed build steps.
INFO: Elapsed time: 6.375s, Critical Path: 6.26s
INFO: 2 processes: 2 internal.
FAILED: Build did NOT complete successfully
```
|
non_defect
|
parcel sandbox issue description of the problem feature request bazel sandbox on linux is causing problems with parcel rc bugs what s the simplest easiest way to reproduce this bug please provide a minimal example if possible i trying to run parcel similiar to the example for parcel at replacing parcel bzl with still not completely working it fails with error bad file descriptor error bad file descriptor when running with the optoin spawn strategy standalone the message disappears i suspect this is because parcel is using memory mapped files for their cache see what operating system are you running bazel on distributor id kali description kali gnu linux rolling release codename kali rolling on windows what s the output of bazel info release replace this line with your answer release non git installed via nix package manager replace this line with your answer have you found anything relevant by searching the web no any other information logs or outputs that you want to share bazel build subcommands bundle info analyzed target bundle packages loaded targets configured info found target subcommand bundle configuration execution platform local config platform host cd home kohlerm cache bazel bazel kohlerm execroot examples parcel exec env bazel node modules roots compilation mode fastbuild bazel out host bin external npm parcel bin parcel sh build foo js dist dir bazel out fastbuild bin cache dir tmp cache bazel node modules manifest bazel out fastbuild bin bundle module mappings json error home kohlerm bazel parcel build bazel bundling javascript bundle js failed exit parcel sh failed error executing command bazel out host bin external npm parcel bin parcel sh build foo js dist dir bazel out fastbuild bin cache dir tmp cache remaining argument s skipped use sandbox debug to see verbose messages from the sandbox parcel sh failed error executing command bazel out host bin external npm parcel bin parcel sh build foo js dist dir bazel out fastbuild bin cache dir tmp cache remaining argument s skipped use sandbox debug to see verbose messages from the sandbox error bad file descriptor error bad file descriptor building bundling packaging optimizing ✨ built in bazel out fastbuild bin foo js b target bundle failed to build use verbose failures to see the command lines of failed build steps info elapsed time critical path info processes internal failed build did not complete successfully
| 0
|
61,195
| 14,943,463,768
|
IssuesEvent
|
2021-01-25 23:06:55
|
NVIDIA/TensorRT
|
https://api.github.com/repos/NVIDIA/TensorRT
|
closed
|
NvInfer.h not found
|
Component: OSS Build Component: Plugins triaged
|
Dear all,
I manage to install TensorRT using a tar file by referring to the [official NVIDIA site](https://docs.nvidia.com/deeplearning/tensorrt/install-guide/index.html#installing-tar) and try to convert PyTorch model to TensorRT model. The example codes for the TensorRT are [TensorRT-RetinaFace](https://github.com/wang-xinyu/tensorrtx/tree/master/retinaface).
However, I'm having a problem with a Nvinfer.h.
I think the Nvinfer.h is not existed in my machine.
I installed below packages such as GPU driver, CUDA toolkit, cuDNN and TensorRT by referring to the [DL installation](https://github.com/vujadeyoon/DL-UbuntuMATE18.04LTS-Installation) and [TensorRT-Torch2TRT](https://github.com/vujadeyoon/TensorRT-Torch2TRT).
My machine environments are as follows:
- Operating System (OS): Ubuntu MATE 18.04.3 LTS (Bionic)
- Graphics Processing Unit (GPU): NVIDIA TITAN RTX, 4ea
- GPU driver: Nvidia-440.100
- CUDA toolkit: 10.1 (default), 10.2
- cuDNN: cuDNN v7.6.5
- PyTorch: 1.3.0
- TensorRT: 7.0.0.11
- Torch2TRT: 0.1.0
The debug information is as below.
```bash
/home/vujadeyoon/Desktop/tensorrtx/example/decode.h:6:10: fatal error: NvInfer.h: No such file or directory
#include "NvInfer.h"
```
My questions are below:
1. How to install the Nvinfer.h.
2. Could not the Nvinfer.h be installed when installing TensorRT using a tar file?
3. Please give me any other advice.
Best regards,
Vujadeyoon.
|
1.0
|
NvInfer.h not found - Dear all,
I manage to install TensorRT using a tar file by referring to the [official NVIDIA site](https://docs.nvidia.com/deeplearning/tensorrt/install-guide/index.html#installing-tar) and try to convert PyTorch model to TensorRT model. The example codes for the TensorRT are [TensorRT-RetinaFace](https://github.com/wang-xinyu/tensorrtx/tree/master/retinaface).
However, I'm having a problem with a Nvinfer.h.
I think the Nvinfer.h is not existed in my machine.
I installed below packages such as GPU driver, CUDA toolkit, cuDNN and TensorRT by referring to the [DL installation](https://github.com/vujadeyoon/DL-UbuntuMATE18.04LTS-Installation) and [TensorRT-Torch2TRT](https://github.com/vujadeyoon/TensorRT-Torch2TRT).
My machine environments are as follows:
- Operating System (OS): Ubuntu MATE 18.04.3 LTS (Bionic)
- Graphics Processing Unit (GPU): NVIDIA TITAN RTX, 4ea
- GPU driver: Nvidia-440.100
- CUDA toolkit: 10.1 (default), 10.2
- cuDNN: cuDNN v7.6.5
- PyTorch: 1.3.0
- TensorRT: 7.0.0.11
- Torch2TRT: 0.1.0
The debug information is as below.
```bash
/home/vujadeyoon/Desktop/tensorrtx/example/decode.h:6:10: fatal error: NvInfer.h: No such file or directory
#include "NvInfer.h"
```
My questions are below:
1. How to install the Nvinfer.h.
2. Could not the Nvinfer.h be installed when installing TensorRT using a tar file?
3. Please give me any other advice.
Best regards,
Vujadeyoon.
|
non_defect
|
nvinfer h not found dear all i manage to install tensorrt using a tar file by referring to the and try to convert pytorch model to tensorrt model the example codes for the tensorrt are however i m having a problem with a nvinfer h i think the nvinfer h is not existed in my machine i installed below packages such as gpu driver cuda toolkit cudnn and tensorrt by referring to the and my machine environments are as follows operating system os ubuntu mate lts bionic graphics processing unit gpu nvidia titan rtx gpu driver nvidia cuda toolkit default cudnn cudnn pytorch tensorrt the debug information is as below bash home vujadeyoon desktop tensorrtx example decode h fatal error nvinfer h no such file or directory include nvinfer h my questions are below how to install the nvinfer h could not the nvinfer h be installed when installing tensorrt using a tar file please give me any other advice best regards vujadeyoon
| 0
|
170,370
| 14,257,538,521
|
IssuesEvent
|
2020-11-20 03:56:03
|
post-grad-beta-test/remote-startups
|
https://api.github.com/repos/post-grad-beta-test/remote-startups
|
opened
|
Project needs a README
|
documentation
|
As a developer who is new to the project, I want to see an informative README, so that I can get started with the code right away.
# Acceptance Criteria
* README includes the following information:
* Overview / description of the project.
* How to install and configure.
* Includes instructions to create `.env` file.
* How to run the server.
* How to run tests.
|
1.0
|
Project needs a README - As a developer who is new to the project, I want to see an informative README, so that I can get started with the code right away.
# Acceptance Criteria
* README includes the following information:
* Overview / description of the project.
* How to install and configure.
* Includes instructions to create `.env` file.
* How to run the server.
* How to run tests.
|
non_defect
|
project needs a readme as a developer who is new to the project i want to see an informative readme so that i can get started with the code right away acceptance criteria readme includes the following information overview description of the project how to install and configure includes instructions to create env file how to run the server how to run tests
| 0
|
333,258
| 10,119,590,919
|
IssuesEvent
|
2019-07-31 11:53:13
|
JuliaMV/culture-portal
|
https://api.github.com/repos/JuliaMV/culture-portal
|
closed
|
Create Author page template
|
priority: mediocre
|
Create author page template with fields for the following information (no content or draft content if desired):
- [x] name and photo
- [x] years of life
- [x] timeline
- [x] list of artist's works with the date of creation
- [x] video
- [x] geotag
- [x] photo gallery with author's picture and pictures of his/her works
Should look good in mobile devices.
|
1.0
|
Create Author page template - Create author page template with fields for the following information (no content or draft content if desired):
- [x] name and photo
- [x] years of life
- [x] timeline
- [x] list of artist's works with the date of creation
- [x] video
- [x] geotag
- [x] photo gallery with author's picture and pictures of his/her works
Should look good in mobile devices.
|
non_defect
|
create author page template create author page template with fields for the following information no content or draft content if desired name and photo years of life timeline list of artist s works with the date of creation video geotag photo gallery with author s picture and pictures of his her works should look good in mobile devices
| 0
|
119,316
| 15,498,334,127
|
IssuesEvent
|
2021-03-11 06:17:46
|
purnima143/Kurakoo
|
https://api.github.com/repos/purnima143/Kurakoo
|
closed
|
Design: Signup & login page on figma
|
design good first issue gssoc21 help wanted level2 medium ui/ux
|
- **UI Prototyping** with figma tool [figma design](https://www.figma.com/file/1gYZlafa8bUZu61ji10unF/Kurakoo?node-id=0%3A1).
- **Please follow the 3 primary colours - #FFBB00 ,#FF6F00 ,#411818 for designing the pages**
## Content for signup
- Name
- Email id
- Password
- confirm password
- Year/sem
- College
- Signup button
## Content for login
- Email id
- Password
- Login button
## Any queries?
You can comment it here or in Figma file
|
1.0
|
Design: Signup & login page on figma - - **UI Prototyping** with figma tool [figma design](https://www.figma.com/file/1gYZlafa8bUZu61ji10unF/Kurakoo?node-id=0%3A1).
- **Please follow the 3 primary colours - #FFBB00 ,#FF6F00 ,#411818 for designing the pages**
## Content for signup
- Name
- Email id
- Password
- confirm password
- Year/sem
- College
- Signup button
## Content for login
- Email id
- Password
- Login button
## Any queries?
You can comment it here or in Figma file
|
non_defect
|
design signup login page on figma ui prototyping with figma tool please follow the primary colours for designing the pages content for signup name email id password confirm password year sem college signup button content for login email id password login button any queries you can comment it here or in figma file
| 0
|
3,390
| 2,610,061,818
|
IssuesEvent
|
2015-02-26 18:18:10
|
chrsmith/jsjsj122
|
https://api.github.com/repos/chrsmith/jsjsj122
|
opened
|
路桥检查不育哪里正规
|
auto-migrated Priority-Medium Type-Defect
|
```
路桥检查不育哪里正规【台州五洲生殖医院】24小时健康咨询
热线:0576-88066933-(扣扣800080609)-(微信号tzwzszyy)医院地址:台州市
椒江区枫南路229号(枫南大转盘旁)乘车线路:乘坐104、108、1
18、198及椒江一金清公交车直达枫南小区,乘坐107、105、109、
112、901、 902公交车到星星广场下车,步行即可到院。
诊疗项目:阳痿,早泄,前列腺炎,前列腺增生,龟头炎,��
�精,无精。包皮包茎,精索静脉曲张,淋病等。
台州五洲生殖医院是台州最大的男科医院,权威专家在线免��
�咨询,拥有专业完善的男科检查治疗设备,严格按照国家标�
��收费。尖端医疗设备,与世界同步。权威专家,成就专业典
范。人性化服务,一切以患者为中心。
看男科就选台州五洲生殖医院,专业男科为男人。
```
-----
Original issue reported on code.google.com by `poweragr...@gmail.com` on 30 May 2014 at 7:28
|
1.0
|
路桥检查不育哪里正规 - ```
路桥检查不育哪里正规【台州五洲生殖医院】24小时健康咨询
热线:0576-88066933-(扣扣800080609)-(微信号tzwzszyy)医院地址:台州市
椒江区枫南路229号(枫南大转盘旁)乘车线路:乘坐104、108、1
18、198及椒江一金清公交车直达枫南小区,乘坐107、105、109、
112、901、 902公交车到星星广场下车,步行即可到院。
诊疗项目:阳痿,早泄,前列腺炎,前列腺增生,龟头炎,��
�精,无精。包皮包茎,精索静脉曲张,淋病等。
台州五洲生殖医院是台州最大的男科医院,权威专家在线免��
�咨询,拥有专业完善的男科检查治疗设备,严格按照国家标�
��收费。尖端医疗设备,与世界同步。权威专家,成就专业典
范。人性化服务,一切以患者为中心。
看男科就选台州五洲生殖医院,专业男科为男人。
```
-----
Original issue reported on code.google.com by `poweragr...@gmail.com` on 30 May 2014 at 7:28
|
defect
|
路桥检查不育哪里正规 路桥检查不育哪里正规【台州五洲生殖医院】 热线 微信号tzwzszyy 医院地址 台州市 (枫南大转盘旁)乘车线路 、 、 、 , 、 、 、 、 、 ,步行即可到院。 诊疗项目:阳痿,早泄,前列腺炎,前列腺增生,龟头炎,�� �精,无精。包皮包茎,精索静脉曲张,淋病等。 台州五洲生殖医院是台州最大的男科医院,权威专家在线免�� �咨询,拥有专业完善的男科检查治疗设备,严格按照国家标� ��收费。尖端医疗设备,与世界同步。权威专家,成就专业典 范。人性化服务,一切以患者为中心。 看男科就选台州五洲生殖医院,专业男科为男人。 original issue reported on code google com by poweragr gmail com on may at
| 1
|
27,794
| 5,104,700,225
|
IssuesEvent
|
2017-01-05 02:39:05
|
STEllAR-GROUP/hpx
|
https://api.github.com/repos/STEllAR-GROUP/hpx
|
closed
|
Mismatch between #if/#endif and namespace scope brackets in this_thread_executers.hpp
|
type: defect
|
https://github.com/STEllAR-GROUP/hpx/blob/master/hpx/runtime/threads/executors/this_thread_executors.hpp
Note that the namespaces are declared inside a #if/#endif construct but the closing brackets are outside any such preprocessor scope.
|
1.0
|
Mismatch between #if/#endif and namespace scope brackets in this_thread_executers.hpp - https://github.com/STEllAR-GROUP/hpx/blob/master/hpx/runtime/threads/executors/this_thread_executors.hpp
Note that the namespaces are declared inside a #if/#endif construct but the closing brackets are outside any such preprocessor scope.
|
defect
|
mismatch between if endif and namespace scope brackets in this thread executers hpp note that the namespaces are declared inside a if endif construct but the closing brackets are outside any such preprocessor scope
| 1
|
1,860
| 2,603,972,592
|
IssuesEvent
|
2015-02-24 19:00:38
|
chrsmith/nishazi6
|
https://api.github.com/repos/chrsmith/nishazi6
|
opened
|
沈阳病毒性疣能治好吗
|
auto-migrated Priority-Medium Type-Defect
|
```
沈阳病毒性疣能治好吗〓沈陽軍區政治部醫院性病〓TEL:024-3
1023308〓成立于1946年,68年專注于性傳播疾病的研究和治療。�
��于沈陽市沈河區二緯路32號。是一所與新中國同建立共輝煌�
��歷史悠久、設備精良、技術權威、專家云集,是預防、保健
、醫療、科研康復為一體的綜合性醫院。是國家首批公立甲��
�部隊醫院、全國首批醫療規范定點單位,是第四軍醫大學、�
��南大學等知名高等院校的教學醫院。曾被中國人民解放軍空
軍后勤部衛生部評為衛生工作先進單位,先后兩次榮立集體��
�等功。
```
-----
Original issue reported on code.google.com by `q964105...@gmail.com` on 4 Jun 2014 at 8:01
|
1.0
|
沈阳病毒性疣能治好吗 - ```
沈阳病毒性疣能治好吗〓沈陽軍區政治部醫院性病〓TEL:024-3
1023308〓成立于1946年,68年專注于性傳播疾病的研究和治療。�
��于沈陽市沈河區二緯路32號。是一所與新中國同建立共輝煌�
��歷史悠久、設備精良、技術權威、專家云集,是預防、保健
、醫療、科研康復為一體的綜合性醫院。是國家首批公立甲��
�部隊醫院、全國首批醫療規范定點單位,是第四軍醫大學、�
��南大學等知名高等院校的教學醫院。曾被中國人民解放軍空
軍后勤部衛生部評為衛生工作先進單位,先后兩次榮立集體��
�等功。
```
-----
Original issue reported on code.google.com by `q964105...@gmail.com` on 4 Jun 2014 at 8:01
|
defect
|
沈阳病毒性疣能治好吗 沈阳病毒性疣能治好吗〓沈陽軍區政治部醫院性病〓tel: 〓 , 。� �� 。是一所與新中國同建立共輝煌� ��歷史悠久、設備精良、技術權威、專家云集,是預防、保健 、醫療、科研康復為一體的綜合性醫院。是國家首批公立甲�� �部隊醫院、全國首批醫療規范定點單位,是第四軍醫大學、� ��南大學等知名高等院校的教學醫院。曾被中國人民解放軍空 軍后勤部衛生部評為衛生工作先進單位,先后兩次榮立集體�� �等功。 original issue reported on code google com by gmail com on jun at
| 1
|
34,057
| 7,781,665,688
|
IssuesEvent
|
2018-06-06 01:43:02
|
BBAD-Furniture/BBAD-furniture
|
https://api.github.com/repos/BBAD-Furniture/BBAD-furniture
|
closed
|
View the full list of all products
|
Code Review
|
The product catalog, so that I can see everything that's available
API Routes needed:
Reviews, Products, Users, Cart
and Axios, in Redux Store.
|
1.0
|
View the full list of all products - The product catalog, so that I can see everything that's available
API Routes needed:
Reviews, Products, Users, Cart
and Axios, in Redux Store.
|
non_defect
|
view the full list of all products the product catalog so that i can see everything that s available api routes needed reviews products users cart and axios in redux store
| 0
|
60,606
| 17,023,469,878
|
IssuesEvent
|
2021-07-03 02:11:40
|
tomhughes/trac-tickets
|
https://api.github.com/repos/tomhughes/trac-tickets
|
closed
|
Last commit to es.yml broke all utf-8 non-ascii characters
|
Component: website Priority: major Resolution: fixed Type: defect
|
**[Submitted to the original trac issue database at 9.36pm, Thursday, 27th August 2009]**
The latest commit http://trac.openstreetmap.org/changeset/17284 to file http://trac.openstreetmap.org/browser/sites/rails_port/config/locales/es.yml?rev=17284 replaced all UTF-8 characters with some other encoding (probably broken or misconfigured text editor used)
|
1.0
|
Last commit to es.yml broke all utf-8 non-ascii characters - **[Submitted to the original trac issue database at 9.36pm, Thursday, 27th August 2009]**
The latest commit http://trac.openstreetmap.org/changeset/17284 to file http://trac.openstreetmap.org/browser/sites/rails_port/config/locales/es.yml?rev=17284 replaced all UTF-8 characters with some other encoding (probably broken or misconfigured text editor used)
|
defect
|
last commit to es yml broke all utf non ascii characters the latest commit to file replaced all utf characters with some other encoding probably broken or misconfigured text editor used
| 1
|
25,759
| 4,440,865,261
|
IssuesEvent
|
2016-08-19 06:40:18
|
pcolby/bipolar
|
https://api.github.com/repos/pcolby/bipolar
|
closed
|
windows Bipolar-0.5.2.297.exe fails to install / start - MSVCP140.dll missing
|
defect
|
Hello,
just tried to install version Bipolar-0.5.2.297.exe on Win7 64Bit, Polar Flow Sync 2.6.2
During the install provess a message appeared that the hook could not be installed.
By trying to do that step manually "bipolar.exe -install-hook" am message box stated that "MSVCP140.dll" is missing
MSVCP120.dll is present in the Bipolar path, but no MSVCP140.dll
Regards
Andreas
|
1.0
|
windows Bipolar-0.5.2.297.exe fails to install / start - MSVCP140.dll missing - Hello,
just tried to install version Bipolar-0.5.2.297.exe on Win7 64Bit, Polar Flow Sync 2.6.2
During the install provess a message appeared that the hook could not be installed.
By trying to do that step manually "bipolar.exe -install-hook" am message box stated that "MSVCP140.dll" is missing
MSVCP120.dll is present in the Bipolar path, but no MSVCP140.dll
Regards
Andreas
|
defect
|
windows bipolar exe fails to install start dll missing hello just tried to install version bipolar exe on polar flow sync during the install provess a message appeared that the hook could not be installed by trying to do that step manually bipolar exe install hook am message box stated that dll is missing dll is present in the bipolar path but no dll regards andreas
| 1
|
223,566
| 24,711,923,095
|
IssuesEvent
|
2022-10-20 02:00:00
|
alpersonalwebsite/react-mobx-redux
|
https://api.github.com/repos/alpersonalwebsite/react-mobx-redux
|
closed
|
WS-2020-0042 (High) detected in acorn-5.7.4.tgz - autoclosed
|
security vulnerability
|
## WS-2020-0042 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>acorn-5.7.4.tgz</b></p></summary>
<p>ECMAScript parser</p>
<p>Library home page: <a href="https://registry.npmjs.org/acorn/-/acorn-5.7.4.tgz">https://registry.npmjs.org/acorn/-/acorn-5.7.4.tgz</a></p>
<p>Path to dependency file: /package.json</p>
<p>Path to vulnerable library: /node_modules/jsdom/node_modules/acorn/package.json</p>
<p>
Dependency Hierarchy:
- react-scripts-3.0.1.tgz (Root Library)
- jest-24.7.1.tgz
- jest-cli-24.9.0.tgz
- jest-config-24.9.0.tgz
- jest-environment-jsdom-24.9.0.tgz
- jsdom-11.12.0.tgz
- :x: **acorn-5.7.4.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/alpersonalwebsite/react-mobx-redux/commit/9d4d3ea35cd8910e06bb2d4c7863b9f6ebd313d9">9d4d3ea35cd8910e06bb2d4c7863b9f6ebd313d9</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
acorn is vulnerable to REGEX DoS. A regex of the form /[x-\ud800]/u causes the parser to enter an infinite loop. attackers may leverage the vulnerability leading to a Denial of Service since the string is not valid UTF16 and it results in it being sanitized before reaching the parser.
<p>Publish Date: 2020-03-01
<p>URL: <a href=https://github.com/acornjs/acorn/commit/b5c17877ac0511e31579ea31e7650ba1a5871e51>WS-2020-0042</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://www.npmjs.com/advisories/1488">https://www.npmjs.com/advisories/1488</a></p>
<p>Release Date: 2020-03-01</p>
<p>Fix Resolution (acorn): 6.4.1</p>
<p>Direct dependency fix Resolution (react-scripts): 4.0.0</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
True
|
WS-2020-0042 (High) detected in acorn-5.7.4.tgz - autoclosed - ## WS-2020-0042 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>acorn-5.7.4.tgz</b></p></summary>
<p>ECMAScript parser</p>
<p>Library home page: <a href="https://registry.npmjs.org/acorn/-/acorn-5.7.4.tgz">https://registry.npmjs.org/acorn/-/acorn-5.7.4.tgz</a></p>
<p>Path to dependency file: /package.json</p>
<p>Path to vulnerable library: /node_modules/jsdom/node_modules/acorn/package.json</p>
<p>
Dependency Hierarchy:
- react-scripts-3.0.1.tgz (Root Library)
- jest-24.7.1.tgz
- jest-cli-24.9.0.tgz
- jest-config-24.9.0.tgz
- jest-environment-jsdom-24.9.0.tgz
- jsdom-11.12.0.tgz
- :x: **acorn-5.7.4.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/alpersonalwebsite/react-mobx-redux/commit/9d4d3ea35cd8910e06bb2d4c7863b9f6ebd313d9">9d4d3ea35cd8910e06bb2d4c7863b9f6ebd313d9</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
acorn is vulnerable to REGEX DoS. A regex of the form /[x-\ud800]/u causes the parser to enter an infinite loop. attackers may leverage the vulnerability leading to a Denial of Service since the string is not valid UTF16 and it results in it being sanitized before reaching the parser.
<p>Publish Date: 2020-03-01
<p>URL: <a href=https://github.com/acornjs/acorn/commit/b5c17877ac0511e31579ea31e7650ba1a5871e51>WS-2020-0042</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://www.npmjs.com/advisories/1488">https://www.npmjs.com/advisories/1488</a></p>
<p>Release Date: 2020-03-01</p>
<p>Fix Resolution (acorn): 6.4.1</p>
<p>Direct dependency fix Resolution (react-scripts): 4.0.0</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
non_defect
|
ws high detected in acorn tgz autoclosed ws high severity vulnerability vulnerable library acorn tgz ecmascript parser library home page a href path to dependency file package json path to vulnerable library node modules jsdom node modules acorn package json dependency hierarchy react scripts tgz root library jest tgz jest cli tgz jest config tgz jest environment jsdom tgz jsdom tgz x acorn tgz vulnerable library found in head commit a href found in base branch master vulnerability details acorn is vulnerable to regex dos a regex of the form u causes the parser to enter an infinite loop attackers may leverage the vulnerability leading to a denial of service since the string is not valid and it results in it being sanitized before reaching the parser publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution acorn direct dependency fix resolution react scripts step up your open source security game with mend
| 0
|
66,275
| 20,110,484,411
|
IssuesEvent
|
2022-02-07 14:40:20
|
hazelcast/hazelcast
|
https://api.github.com/repos/hazelcast/hazelcast
|
opened
|
Compact Serialization / invalid field names ?
|
Type: Defect
|
**Context**
Running server 5.1-SNAPSHOT as downloaded from Maven on Feb. 7th.
**Describe the bug**
Create a compact schema with one field named `value` and push the schema to the cluster with `ClientSendSchemaCodec`. Works fine, no error is reported. Try to fetch the schema back with `ClientFetchSchemaCodec`, the cluster returns `null`. Server-side operations that would need to schema fail with a "schema unknown" error.
**Expected behavior**
Ideally, it should just work with the a named `value`. However, if some field names are indeed reserved, then I would expect (a) it to be documented somewhere with a list of reserved names and (b) the `ClientSendSchemaCodec` to fail and report a meaningful error when using a reserved name.
Note: haven't tried the `ClientSendAllSchemasCodec` but I assume the issue is there too. Don't know if it silently reject only the invalid schemas, or all of them, or all schemas *after* the invalid one...
|
1.0
|
Compact Serialization / invalid field names ? - **Context**
Running server 5.1-SNAPSHOT as downloaded from Maven on Feb. 7th.
**Describe the bug**
Create a compact schema with one field named `value` and push the schema to the cluster with `ClientSendSchemaCodec`. Works fine, no error is reported. Try to fetch the schema back with `ClientFetchSchemaCodec`, the cluster returns `null`. Server-side operations that would need to schema fail with a "schema unknown" error.
**Expected behavior**
Ideally, it should just work with the a named `value`. However, if some field names are indeed reserved, then I would expect (a) it to be documented somewhere with a list of reserved names and (b) the `ClientSendSchemaCodec` to fail and report a meaningful error when using a reserved name.
Note: haven't tried the `ClientSendAllSchemasCodec` but I assume the issue is there too. Don't know if it silently reject only the invalid schemas, or all of them, or all schemas *after* the invalid one...
|
defect
|
compact serialization invalid field names context running server snapshot as downloaded from maven on feb describe the bug create a compact schema with one field named value and push the schema to the cluster with clientsendschemacodec works fine no error is reported try to fetch the schema back with clientfetchschemacodec the cluster returns null server side operations that would need to schema fail with a schema unknown error expected behavior ideally it should just work with the a named value however if some field names are indeed reserved then i would expect a it to be documented somewhere with a list of reserved names and b the clientsendschemacodec to fail and report a meaningful error when using a reserved name note haven t tried the clientsendallschemascodec but i assume the issue is there too don t know if it silently reject only the invalid schemas or all of them or all schemas after the invalid one
| 1
|
319,956
| 27,410,717,570
|
IssuesEvent
|
2023-03-01 10:20:16
|
openPMD/openPMD-api
|
https://api.github.com/repos/openPMD/openPMD-api
|
opened
|
Examples: Fix Windows PDB
|
bug tests machine/system
|
With the `dev` branch pre 0.15.0:
```
C:\Users\runneradmin\AppData\Local\Temp\ci-pU5qdA784w\src\examples\10_streaming_write.cpp : fatal error C1041: cannot open program database 'C:\Users\runneradmin\AppData\Local\Temp\ci-pU5qdA784w\build\bin\RelWithDebInfo\vc143.pdb'; if multiple CL.EXE write to the same .PDB file, please use /FS [C:\Users\runneradmin\AppData\Local\Temp\ci-pU5qdA784w\build\10_streaming_write.vcxproj]
Building Custom Rule C:/Users/runneradmin/AppData/Local/Temp/ci-pU5qdA784w/src/CMakeLists.txt
Building Custom Rule C:/Users/runneradmin/AppData/Local/Temp/ci-pU5qdA784w/src/CMakeLists.txt
12_span_write.cpp
C:\Users\runneradmin\AppData\Local\Temp\ci-pU5qdA784w\src\examples\12_span_write.cpp : fatal error C1041: cannot open program database 'C:\Users\runneradmin\AppData\Local\Temp\ci-pU5qdA784w\build\bin\RelWithDebInfo\vc143.pdb'; if multiple CL.EXE write to the same .PDB file, please use /FS [C:\Users\runneradmin\AppData\Local\Temp\ci-pU5qdA784w\build\12_span_write.vcxproj]
```
First seen in https://github.com/ax3l/cmake-easyinstall/pull/12
|
1.0
|
Examples: Fix Windows PDB - With the `dev` branch pre 0.15.0:
```
C:\Users\runneradmin\AppData\Local\Temp\ci-pU5qdA784w\src\examples\10_streaming_write.cpp : fatal error C1041: cannot open program database 'C:\Users\runneradmin\AppData\Local\Temp\ci-pU5qdA784w\build\bin\RelWithDebInfo\vc143.pdb'; if multiple CL.EXE write to the same .PDB file, please use /FS [C:\Users\runneradmin\AppData\Local\Temp\ci-pU5qdA784w\build\10_streaming_write.vcxproj]
Building Custom Rule C:/Users/runneradmin/AppData/Local/Temp/ci-pU5qdA784w/src/CMakeLists.txt
Building Custom Rule C:/Users/runneradmin/AppData/Local/Temp/ci-pU5qdA784w/src/CMakeLists.txt
12_span_write.cpp
C:\Users\runneradmin\AppData\Local\Temp\ci-pU5qdA784w\src\examples\12_span_write.cpp : fatal error C1041: cannot open program database 'C:\Users\runneradmin\AppData\Local\Temp\ci-pU5qdA784w\build\bin\RelWithDebInfo\vc143.pdb'; if multiple CL.EXE write to the same .PDB file, please use /FS [C:\Users\runneradmin\AppData\Local\Temp\ci-pU5qdA784w\build\12_span_write.vcxproj]
```
First seen in https://github.com/ax3l/cmake-easyinstall/pull/12
|
non_defect
|
examples fix windows pdb with the dev branch pre c users runneradmin appdata local temp ci src examples streaming write cpp fatal error cannot open program database c users runneradmin appdata local temp ci build bin relwithdebinfo pdb if multiple cl exe write to the same pdb file please use fs building custom rule c users runneradmin appdata local temp ci src cmakelists txt building custom rule c users runneradmin appdata local temp ci src cmakelists txt span write cpp c users runneradmin appdata local temp ci src examples span write cpp fatal error cannot open program database c users runneradmin appdata local temp ci build bin relwithdebinfo pdb if multiple cl exe write to the same pdb file please use fs first seen in
| 0
|
177,769
| 21,509,178,221
|
IssuesEvent
|
2022-04-28 01:12:59
|
amccool/AngularASPNETCore2WebApiAuth
|
https://api.github.com/repos/amccool/AngularASPNETCore2WebApiAuth
|
closed
|
WS-2019-0291 (High) detected in handlebars-4.0.11.tgz - autoclosed
|
security vulnerability
|
## WS-2019-0291 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>handlebars-4.0.11.tgz</b></p></summary>
<p>Handlebars provides the power necessary to let you build semantic templates effectively with no frustration</p>
<p>Library home page: <a href="https://registry.npmjs.org/handlebars/-/handlebars-4.0.11.tgz">https://registry.npmjs.org/handlebars/-/handlebars-4.0.11.tgz</a></p>
<p>Path to dependency file: /tmp/ws-scm/AngularASPNETCore2WebApiAuth/src/package.json</p>
<p>Path to vulnerable library: /tmp/ws-scm/AngularASPNETCore2WebApiAuth/src/node_modules/handlebars/package.json</p>
<p>
Dependency Hierarchy:
- karma-coverage-istanbul-reporter-1.3.3.tgz (Root Library)
- istanbul-api-1.2.1.tgz
- istanbul-reports-1.1.3.tgz
- :x: **handlebars-4.0.11.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/amccool/AngularASPNETCore2WebApiAuth/commit/aaa0eeb0237c3fe61b491b5e658b404be0a3a83f">aaa0eeb0237c3fe61b491b5e658b404be0a3a83f</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
handlebars before 4.3.0 is vulnerable to Prototype Pollution leading to Remote Code Execution. Templates may alter an Objects' __proto__ and __defineGetter__ properties, which may allow an attacker to execute arbitrary code through crafted payloads.
<p>Publish Date: 2019-10-06
<p>URL: <a href=https://github.com/wycats/handlebars.js/issues/1558>WS-2019-0291</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 2 Score Details (<b>7.3</b>)</summary>
<p>
Base Score Metrics not available</p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://www.npmjs.com/advisories/1164">https://www.npmjs.com/advisories/1164</a></p>
<p>Release Date: 2019-10-06</p>
<p>Fix Resolution: 4.3.0</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
True
|
WS-2019-0291 (High) detected in handlebars-4.0.11.tgz - autoclosed - ## WS-2019-0291 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>handlebars-4.0.11.tgz</b></p></summary>
<p>Handlebars provides the power necessary to let you build semantic templates effectively with no frustration</p>
<p>Library home page: <a href="https://registry.npmjs.org/handlebars/-/handlebars-4.0.11.tgz">https://registry.npmjs.org/handlebars/-/handlebars-4.0.11.tgz</a></p>
<p>Path to dependency file: /tmp/ws-scm/AngularASPNETCore2WebApiAuth/src/package.json</p>
<p>Path to vulnerable library: /tmp/ws-scm/AngularASPNETCore2WebApiAuth/src/node_modules/handlebars/package.json</p>
<p>
Dependency Hierarchy:
- karma-coverage-istanbul-reporter-1.3.3.tgz (Root Library)
- istanbul-api-1.2.1.tgz
- istanbul-reports-1.1.3.tgz
- :x: **handlebars-4.0.11.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/amccool/AngularASPNETCore2WebApiAuth/commit/aaa0eeb0237c3fe61b491b5e658b404be0a3a83f">aaa0eeb0237c3fe61b491b5e658b404be0a3a83f</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
handlebars before 4.3.0 is vulnerable to Prototype Pollution leading to Remote Code Execution. Templates may alter an Objects' __proto__ and __defineGetter__ properties, which may allow an attacker to execute arbitrary code through crafted payloads.
<p>Publish Date: 2019-10-06
<p>URL: <a href=https://github.com/wycats/handlebars.js/issues/1558>WS-2019-0291</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 2 Score Details (<b>7.3</b>)</summary>
<p>
Base Score Metrics not available</p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://www.npmjs.com/advisories/1164">https://www.npmjs.com/advisories/1164</a></p>
<p>Release Date: 2019-10-06</p>
<p>Fix Resolution: 4.3.0</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
non_defect
|
ws high detected in handlebars tgz autoclosed ws high severity vulnerability vulnerable library handlebars tgz handlebars provides the power necessary to let you build semantic templates effectively with no frustration library home page a href path to dependency file tmp ws scm src package json path to vulnerable library tmp ws scm src node modules handlebars package json dependency hierarchy karma coverage istanbul reporter tgz root library istanbul api tgz istanbul reports tgz x handlebars tgz vulnerable library found in head commit a href vulnerability details handlebars before is vulnerable to prototype pollution leading to remote code execution templates may alter an objects proto and definegetter properties which may allow an attacker to execute arbitrary code through crafted payloads publish date url a href cvss score details base score metrics not available suggested fix type upgrade version origin a href release date fix resolution step up your open source security game with whitesource
| 0
|
20,206
| 3,315,219,189
|
IssuesEvent
|
2015-11-06 10:46:05
|
akvo/akvo-flow-mobile
|
https://api.github.com/repos/akvo/akvo-flow-mobile
|
opened
|
Question Group translations
|
Defect
|
For forms that have translated question groups, the question group translations do not show up when languages are switched.
|
1.0
|
Question Group translations - For forms that have translated question groups, the question group translations do not show up when languages are switched.
|
defect
|
question group translations for forms that have translated question groups the question group translations do not show up when languages are switched
| 1
|
12,469
| 2,700,656,343
|
IssuesEvent
|
2015-04-04 12:14:07
|
MKergall/osmbonuspack
|
https://api.github.com/repos/MKergall/osmbonuspack
|
closed
|
git migration
|
auto-migrated Priority-Medium Type-Defect
|
```
Hello, thanks for this library.
Do you want to migrate on git? Seems, it's much better to develop in it.
Also, I already fork osmbonuspack on github and add my implementation for
clickable/closeable bubbles. You can see it here, if interesting.
https://github.com/Sash0k/osmbonuspack/tree/clickable-bubbles
```
Original issue reported on code.google.com by `voz...@gmail.com` on 6 Jun 2014 at 8:40
|
1.0
|
git migration - ```
Hello, thanks for this library.
Do you want to migrate on git? Seems, it's much better to develop in it.
Also, I already fork osmbonuspack on github and add my implementation for
clickable/closeable bubbles. You can see it here, if interesting.
https://github.com/Sash0k/osmbonuspack/tree/clickable-bubbles
```
Original issue reported on code.google.com by `voz...@gmail.com` on 6 Jun 2014 at 8:40
|
defect
|
git migration hello thanks for this library do you want to migrate on git seems it s much better to develop in it also i already fork osmbonuspack on github and add my implementation for clickable closeable bubbles you can see it here if interesting original issue reported on code google com by voz gmail com on jun at
| 1
|
232,270
| 18,855,030,680
|
IssuesEvent
|
2021-11-12 04:23:30
|
Tencent/bk-ci
|
https://api.github.com/repos/Tencent/bk-ci
|
closed
|
feat: 对BatchScript脚本执行过程加锁,防止被系统清理掉
|
stage/uat stage/test kind/enhancement area/ci/agent test/passed uat/passed priority/critical-urgent
|
<!-- Please only use this template for submitting enhancement requests -->
**What would you like to be added**:
windows 执行batch脚本的特点
假设batch插件内容为
```batch
set extScriptName=a.bat
call :LOG_INFO Start call %extScriptName%
call %extScriptName%
if errorlevel 1 (
call :LOG_INFO Run "%extScriptName%" Error!
goto :FAILURE
)
:LOG_INFO
echo [%date:~0,10% %time% %~n0] INFO %*
goto :EOF
```
上述调用的a.bat脚本内容假设为:
```batch
call :LOG_INFO start ping
ping 127.0.0.1 -n 60
GOTO :SUCCESS
goto :SUCCESS
:SUCCESS
call :LOG_INFO Good Luck.
endlocal
popd
exit /b 0
:FAILURE
call :LOG_ERROR %*
endlocal
popd
exit /b 1
:LOG_INFO
echo [%date:~0,10% %time% %~n0] INFO %*
goto :EOF
```
此时启动流水线,然后跑到C:/Users/userxxx/AppData/Local/Temp/下删除 paas_build_script_xxxx.bat 脚本,等60秒后,会发现报错了, 原因是该脚本被删除后, a.bat脚本在执行完后返回,会发现上述脚本已经被删除导致失败
**Why is this needed**:
对paas_build_script_xxxx.bat脚本进行加锁, 如果加锁失败则也继续运行,不应该失败(保持现有逻辑)
|
2.0
|
feat: 对BatchScript脚本执行过程加锁,防止被系统清理掉 - <!-- Please only use this template for submitting enhancement requests -->
**What would you like to be added**:
windows 执行batch脚本的特点
假设batch插件内容为
```batch
set extScriptName=a.bat
call :LOG_INFO Start call %extScriptName%
call %extScriptName%
if errorlevel 1 (
call :LOG_INFO Run "%extScriptName%" Error!
goto :FAILURE
)
:LOG_INFO
echo [%date:~0,10% %time% %~n0] INFO %*
goto :EOF
```
上述调用的a.bat脚本内容假设为:
```batch
call :LOG_INFO start ping
ping 127.0.0.1 -n 60
GOTO :SUCCESS
goto :SUCCESS
:SUCCESS
call :LOG_INFO Good Luck.
endlocal
popd
exit /b 0
:FAILURE
call :LOG_ERROR %*
endlocal
popd
exit /b 1
:LOG_INFO
echo [%date:~0,10% %time% %~n0] INFO %*
goto :EOF
```
此时启动流水线,然后跑到C:/Users/userxxx/AppData/Local/Temp/下删除 paas_build_script_xxxx.bat 脚本,等60秒后,会发现报错了, 原因是该脚本被删除后, a.bat脚本在执行完后返回,会发现上述脚本已经被删除导致失败
**Why is this needed**:
对paas_build_script_xxxx.bat脚本进行加锁, 如果加锁失败则也继续运行,不应该失败(保持现有逻辑)
|
non_defect
|
feat 对batchscript脚本执行过程加锁,防止被系统清理掉 what would you like to be added windows 执行batch脚本的特点 假设batch插件内容为 batch set extscriptname a bat call log info start call extscriptname call extscriptname if errorlevel call log info run extscriptname error goto failure log info echo info goto eof 上述调用的a bat脚本内容假设为 batch call log info start ping ping n goto success goto success success call log info good luck endlocal popd exit b failure call log error endlocal popd exit b log info echo info goto eof 此时启动流水线,然后跑到c users userxxx appdata local temp 下删除 paas build script xxxx bat 脚本, ,会发现报错了, 原因是该脚本被删除后, a bat脚本在执行完后返回,会发现上述脚本已经被删除导致失败 why is this needed 对paas build script xxxx bat脚本进行加锁, 如果加锁失败则也继续运行,不应该失败(保持现有逻辑)
| 0
|
20,218
| 3,317,272,531
|
IssuesEvent
|
2015-11-06 20:50:06
|
spockframework/spock
|
https://api.github.com/repos/spockframework/spock
|
closed
|
IllegalArgumentException when mocking java.io.PrintStream
|
Module-Core Status-New Type-Defect
|
Originally reported on Google Code with ID 384
```
When I attempt to run a spec that mocks java.io.PrintStream I get an IllegalArgumentException
thrown:
class PrintSpec extends Specification {
private PrintStream printStream = Mock()
}
spock-core version - 0.7-groovy-2.0
cglib-nodep version - 3.1
objenesis version - 2.1
groovy-all version - 2.3.9
Here is the stacktrace:
java.lang.IllegalArgumentException
at net.sf.cglib.proxy.BridgeMethodResolver.resolveAll(BridgeMethodResolver.java:61)
at net.sf.cglib.proxy.Enhancer.emitMethods(Enhancer.java:911)
at net.sf.cglib.proxy.Enhancer.generateClass(Enhancer.java:498)
at net.sf.cglib.core.DefaultGeneratorStrategy.generate(DefaultGeneratorStrategy.java:25)
at net.sf.cglib.core.AbstractClassGenerator.create(AbstractClassGenerator.java:216)
at net.sf.cglib.proxy.Enhancer.createHelper(Enhancer.java:377)
at net.sf.cglib.proxy.Enhancer.createClass(Enhancer.java:317)
at org.spockframework.mock.runtime.ProxyBasedMockFactory$CglibMockFactory.createMock(ProxyBasedMockFactory.java:91)
at org.spockframework.mock.runtime.ProxyBasedMockFactory.create(ProxyBasedMockFactory.java:49)
at org.spockframework.mock.runtime.JavaMockFactory.create(JavaMockFactory.java:51)
at org.spockframework.mock.runtime.CompositeMockFactory.create(CompositeMockFactory.java:44)
at org.spockframework.lang.SpecInternals.createMock(SpecInternals.java:47)
at org.spockframework.lang.SpecInternals.createMockImpl(SpecInternals.java:282)
at org.spockframework.lang.SpecInternals.MockImpl(SpecInternals.java:83)
```
Reported by `jameslorenzen` on 2015-01-21 05:38:49
|
1.0
|
IllegalArgumentException when mocking java.io.PrintStream - Originally reported on Google Code with ID 384
```
When I attempt to run a spec that mocks java.io.PrintStream I get an IllegalArgumentException
thrown:
class PrintSpec extends Specification {
private PrintStream printStream = Mock()
}
spock-core version - 0.7-groovy-2.0
cglib-nodep version - 3.1
objenesis version - 2.1
groovy-all version - 2.3.9
Here is the stacktrace:
java.lang.IllegalArgumentException
at net.sf.cglib.proxy.BridgeMethodResolver.resolveAll(BridgeMethodResolver.java:61)
at net.sf.cglib.proxy.Enhancer.emitMethods(Enhancer.java:911)
at net.sf.cglib.proxy.Enhancer.generateClass(Enhancer.java:498)
at net.sf.cglib.core.DefaultGeneratorStrategy.generate(DefaultGeneratorStrategy.java:25)
at net.sf.cglib.core.AbstractClassGenerator.create(AbstractClassGenerator.java:216)
at net.sf.cglib.proxy.Enhancer.createHelper(Enhancer.java:377)
at net.sf.cglib.proxy.Enhancer.createClass(Enhancer.java:317)
at org.spockframework.mock.runtime.ProxyBasedMockFactory$CglibMockFactory.createMock(ProxyBasedMockFactory.java:91)
at org.spockframework.mock.runtime.ProxyBasedMockFactory.create(ProxyBasedMockFactory.java:49)
at org.spockframework.mock.runtime.JavaMockFactory.create(JavaMockFactory.java:51)
at org.spockframework.mock.runtime.CompositeMockFactory.create(CompositeMockFactory.java:44)
at org.spockframework.lang.SpecInternals.createMock(SpecInternals.java:47)
at org.spockframework.lang.SpecInternals.createMockImpl(SpecInternals.java:282)
at org.spockframework.lang.SpecInternals.MockImpl(SpecInternals.java:83)
```
Reported by `jameslorenzen` on 2015-01-21 05:38:49
|
defect
|
illegalargumentexception when mocking java io printstream originally reported on google code with id when i attempt to run a spec that mocks java io printstream i get an illegalargumentexception thrown class printspec extends specification private printstream printstream mock spock core version groovy cglib nodep version objenesis version groovy all version here is the stacktrace java lang illegalargumentexception at net sf cglib proxy bridgemethodresolver resolveall bridgemethodresolver java at net sf cglib proxy enhancer emitmethods enhancer java at net sf cglib proxy enhancer generateclass enhancer java at net sf cglib core defaultgeneratorstrategy generate defaultgeneratorstrategy java at net sf cglib core abstractclassgenerator create abstractclassgenerator java at net sf cglib proxy enhancer createhelper enhancer java at net sf cglib proxy enhancer createclass enhancer java at org spockframework mock runtime proxybasedmockfactory cglibmockfactory createmock proxybasedmockfactory java at org spockframework mock runtime proxybasedmockfactory create proxybasedmockfactory java at org spockframework mock runtime javamockfactory create javamockfactory java at org spockframework mock runtime compositemockfactory create compositemockfactory java at org spockframework lang specinternals createmock specinternals java at org spockframework lang specinternals createmockimpl specinternals java at org spockframework lang specinternals mockimpl specinternals java reported by jameslorenzen on
| 1
|
251,305
| 21,468,764,163
|
IssuesEvent
|
2022-04-26 07:35:00
|
redhat-developer/odo
|
https://api.github.com/repos/redhat-developer/odo
|
closed
|
ibmcloud command failing on windows
|
area/testing
|
/area testing
I've noticed that `ibmcloud` command that is being executed as part of our test suite on windows tests are failing
https://cloud.ibm.com/devops/pipelines/929da3ed-6cad-4026-a847-7a8485f20fa4/582614a3-b910-4659-ac27-6288668c5fe6/a1d74ffa-3d39-4dbb-8583-b0a34f87f3bd?env_id=ibm:yp:eu-de
This is probably due to the use of POSIX style path and not using windows path
```
+ ibmcloud cos upload --bucket odo-tests-openshift-logs --key pr-5675-windows-tests-651.html --file /tmp/pr-5675-windows-tests-651.html
FAILED
The value in flag '--file' is invalid
NAME:
ibmcloud cos upload - Upload objects from S3 concurrently.
USAGE:
ibmcloud cos upload --bucket BUCKET_NAME --key KEY --file PATH [--concurrency value] [--max-upload-parts PARTS] [--part-size SIZE] [--leave-parts-on-errors] [--cache-control CACHING_DIRECTIVES] [--content-disposition DIRECTIVES] [--content-encoding CONTENT_ENCODING] [--content-language LANGUAGE] [--content-length SIZE] [--content-md5 MD5] [--content-type MIME] [--metadata STRUCTURE] [--region REGION] [--output FORMAT] [--json]
OPTIONS:
--bucket BUCKET_NAME The name (BUCKET_NAME) of the bucket.
--key KEY The KEY of the object.
--file PATH The PATH to the file to upload.
--concurrency value The number of goroutines to spin up in parallel per call to Upload when sending parts. Default value is 5.
--max-upload-parts PARTS Max number of PARTS which will be uploaded to S3 that calculates the part size of the object to be uploaded. Limit is 10,000 parts.
--part-size SIZE The buffer SIZE (in bytes) to use when buffering data into chunks and ending them as parts to S3. The minimum allowed part size is 5MB.
--leave-parts-on-errors Setting this value to true will cause the SDK to avoid calling AbortMultipartUpload on a failure, leaving all successfully uploaded parts on S3 for manual recovery.
--cache-control CACHING_DIRECTIVES Specifies CACHING_DIRECTIVES for the request/reply chain.
--content-disposition DIRECTIVES Specifies presentational information (DIRECTIVES).
--content-encoding CONTENT_ENCODING Specifies what content encodings (CONTENT_ENCODING) have been applied to the object and thus what decoding mechanisms must be applied to obtain the media-type referenced by the Content-Type header field.
--content-language LANGUAGE The LANGUAGE the content is in.
--content-length SIZE SIZE of the body in bytes. This parameter is useful when the size of the body cannot be determined automatically.
--content-md5 MD5 The base64-encoded 128-bit MD5 digest of the data.
--content-type MIME A standard MIME type describing the format of the object data.
--metadata STRUCTURE A STRUCTURE using JSON syntax. See IBM Cloud Documentation.
--region REGION The REGION where the bucket is present. If this flag is not provided, the program will use the default option specified in config.
--output FORMAT Output FORMAT can be only json or text.
--json [Deprecated] Output returned in raw JSON format.
```
|
1.0
|
ibmcloud command failing on windows - /area testing
I've noticed that `ibmcloud` command that is being executed as part of our test suite on windows tests are failing
https://cloud.ibm.com/devops/pipelines/929da3ed-6cad-4026-a847-7a8485f20fa4/582614a3-b910-4659-ac27-6288668c5fe6/a1d74ffa-3d39-4dbb-8583-b0a34f87f3bd?env_id=ibm:yp:eu-de
This is probably due to the use of POSIX style path and not using windows path
```
+ ibmcloud cos upload --bucket odo-tests-openshift-logs --key pr-5675-windows-tests-651.html --file /tmp/pr-5675-windows-tests-651.html
FAILED
The value in flag '--file' is invalid
NAME:
ibmcloud cos upload - Upload objects from S3 concurrently.
USAGE:
ibmcloud cos upload --bucket BUCKET_NAME --key KEY --file PATH [--concurrency value] [--max-upload-parts PARTS] [--part-size SIZE] [--leave-parts-on-errors] [--cache-control CACHING_DIRECTIVES] [--content-disposition DIRECTIVES] [--content-encoding CONTENT_ENCODING] [--content-language LANGUAGE] [--content-length SIZE] [--content-md5 MD5] [--content-type MIME] [--metadata STRUCTURE] [--region REGION] [--output FORMAT] [--json]
OPTIONS:
--bucket BUCKET_NAME The name (BUCKET_NAME) of the bucket.
--key KEY The KEY of the object.
--file PATH The PATH to the file to upload.
--concurrency value The number of goroutines to spin up in parallel per call to Upload when sending parts. Default value is 5.
--max-upload-parts PARTS Max number of PARTS which will be uploaded to S3 that calculates the part size of the object to be uploaded. Limit is 10,000 parts.
--part-size SIZE The buffer SIZE (in bytes) to use when buffering data into chunks and ending them as parts to S3. The minimum allowed part size is 5MB.
--leave-parts-on-errors Setting this value to true will cause the SDK to avoid calling AbortMultipartUpload on a failure, leaving all successfully uploaded parts on S3 for manual recovery.
--cache-control CACHING_DIRECTIVES Specifies CACHING_DIRECTIVES for the request/reply chain.
--content-disposition DIRECTIVES Specifies presentational information (DIRECTIVES).
--content-encoding CONTENT_ENCODING Specifies what content encodings (CONTENT_ENCODING) have been applied to the object and thus what decoding mechanisms must be applied to obtain the media-type referenced by the Content-Type header field.
--content-language LANGUAGE The LANGUAGE the content is in.
--content-length SIZE SIZE of the body in bytes. This parameter is useful when the size of the body cannot be determined automatically.
--content-md5 MD5 The base64-encoded 128-bit MD5 digest of the data.
--content-type MIME A standard MIME type describing the format of the object data.
--metadata STRUCTURE A STRUCTURE using JSON syntax. See IBM Cloud Documentation.
--region REGION The REGION where the bucket is present. If this flag is not provided, the program will use the default option specified in config.
--output FORMAT Output FORMAT can be only json or text.
--json [Deprecated] Output returned in raw JSON format.
```
|
non_defect
|
ibmcloud command failing on windows area testing i ve noticed that ibmcloud command that is being executed as part of our test suite on windows tests are failing this is probably due to the use of posix style path and not using windows path ibmcloud cos upload bucket odo tests openshift logs key pr windows tests html file tmp pr windows tests html failed the value in flag file is invalid name ibmcloud cos upload upload objects from concurrently usage ibmcloud cos upload bucket bucket name key key file path options bucket bucket name the name bucket name of the bucket key key the key of the object file path the path to the file to upload concurrency value the number of goroutines to spin up in parallel per call to upload when sending parts default value is max upload parts parts max number of parts which will be uploaded to that calculates the part size of the object to be uploaded limit is parts part size size the buffer size in bytes to use when buffering data into chunks and ending them as parts to the minimum allowed part size is leave parts on errors setting this value to true will cause the sdk to avoid calling abortmultipartupload on a failure leaving all successfully uploaded parts on for manual recovery cache control caching directives specifies caching directives for the request reply chain content disposition directives specifies presentational information directives content encoding content encoding specifies what content encodings content encoding have been applied to the object and thus what decoding mechanisms must be applied to obtain the media type referenced by the content type header field content language language the language the content is in content length size size of the body in bytes this parameter is useful when the size of the body cannot be determined automatically content the encoded bit digest of the data content type mime a standard mime type describing the format of the object data metadata structure a structure using json syntax see ibm cloud documentation region region the region where the bucket is present if this flag is not provided the program will use the default option specified in config output format output format can be only json or text json output returned in raw json format
| 0
|
375,407
| 26,161,276,590
|
IssuesEvent
|
2022-12-31 15:22:40
|
ClickHouse/ClickHouse
|
https://api.github.com/repos/ClickHouse/ClickHouse
|
closed
|
Roadmap 2022 (discussion)
|
feature comp-documentation
|
This is ClickHouse open-source roadmap 2022.
Descriptions and links to be filled.
This roadmap does not cover the tasks related to infrastructure, orchestration, documentation, marketing, integrations, SaaS, drivers, etc.
See also:
Roadmap 2021: [#17623](https://github.com/ClickHouse/ClickHouse/issues/17623)
Roadmap 2020: [in Russian](https://github.com/ClickHouse/ClickHouse/blob/be29057de1835f6f4a17e03a422b45b81efe6833/docs/ru/whats-new/extended-roadmap.md)
# Main Tasks
### ✔️ Make clickhouse-keeper Production Ready
✔️ It is already feature-complete and being used in production.
✔️ Update documentation to replace ZooKeeper with clickhouse-keeper everywhere.
### ✔️ Support for Backup and Restore
✔️ Backup of tables, databases, servers and clusters.
✔️ Incremental backups. Support for partial restore.
✔️ Support for pluggable backup storage options.
### Semistructured Data
✔️ JSON data type with automatic type inference and dynamic subcolumns.
✔️ Sparse column format and optimization of functions for sparse columns. #22535
Dynamic selection of column format - full, const, sparse, low cardinality.
Hybrid wide/compact data part format for huge number of columns.
### ✔️ Type Inference for Data Import
✔️ Allow to skip column names and types if data format already contains schema (e.g. Parquet, Avro).
✔️ Allow to infer types for text formats (e.g. CSV, TSV, JSONEachRow).
#32455
### Support for Transactions
Atomic insert of more than one block or to more than one partition into MergeTree and ReplicatedMergeTree tables.
Atomic insert into table and dependent materialized views. Atomic insert into multiple tables.
Multiple SELECTs from one consistent snapshot.
Atomic insert into distributed table.
### ✔️ Lightweight DELETE
✔️ Make mutations more lightweight by using delete-masks.
✔️ It won't enable frequent UPDATE/DELETE like in OLTP databases, but will make it more close.
### SQL Compatibility Improvements
Untangle name resolution and query analysis.
Initial support for correlated subqueries.
✔️ Allow using window functions inside expressions.
Add compatibility aliases for some window functions, etc.
✔️ Support for GROUPING SETS.
### JOIN Improvements
Support for join reordering.
Extend the cases when condition pushdown is applicable.
Convert anti-join to NOT IN.
Use table sorting for DISTINCT optimization.
Use table sorting for merge JOIN.
✔️ Grace hash join algorithm.
### Resource Management
✔️ Memory overcommit (sort and hard memory limits).
Enable external GROUP BY and ORDER BY by default.
IO operations scheduler with priorities.
✔️ Make scalar subqueries accountable.
CPU and network priorities.
### Separation of Storage and Compute
✔️ Parallel reading from replicas.
✔️ Dynamic cluster configuration with service discovery.
✔️ Caching of data from object storage.
Simplification of ReplicatedMergeTree.
Shared metadata storage.
# Experimental and Intern Tasks
### Streaming Queries
Fix POPULATE for materialized views.
Unification of materialized views, live views and window views.
Allow to set up subscriptions on top of all tables including Merge, Distributed.
Normalization of Kafka tables with storing offsets in ClickHouse.
Support for exactly once consumption from Kafka, non-consuming reads and multiple consumers.
Streaming queries with GROUP BY, ORDER BY with windowing criterias.
Persistent queues on top of ClickHouse tables.
### Integration with ML/AI
:wastebasket: Integration with Tensorflow
:wastebasket: Integration with MADLib
### GPU Support
Compile expressions to GPU
### Unique Key Constraint
### User Defined Data Types
### Incremental aggregation in memory
### Key-value data marts
### Text Classification
### Graph Processing
### Foreign SQL Dialects in ClickHouse
Support for MySQL dialect or Apache Calcite as an option.
### Batch Jobs and Refreshable Materialized Views
### Embedded ClickHouse Engine
### Data Hub
# Build And Testing Improvements
### Testing
✔️ Add tests for AArch64 builds.
✔️ Automated tests for backward compatibility.
Server-side query fuzzer for all kind of tests.
✔️ Fuzzing of query settings in functional tests.
SQL function based fuzzer.
Fuzzer of data formats.
Integrate with SQLogicTest.
Import obfuscated queries from Yandex Metrica.
### Builds
✔️ Docker images for AArch64.
Enable missing libraries for AArch64 builds.
Add and explore Musl builds.
Build all libraries with our own CMake files.
Embed root certificates to the binary.
Embed DNS resolver to the binary.
Add ClickHouse to Snap, so people will not install obsolete versions by accident.
|
1.0
|
Roadmap 2022 (discussion) - This is ClickHouse open-source roadmap 2022.
Descriptions and links to be filled.
This roadmap does not cover the tasks related to infrastructure, orchestration, documentation, marketing, integrations, SaaS, drivers, etc.
See also:
Roadmap 2021: [#17623](https://github.com/ClickHouse/ClickHouse/issues/17623)
Roadmap 2020: [in Russian](https://github.com/ClickHouse/ClickHouse/blob/be29057de1835f6f4a17e03a422b45b81efe6833/docs/ru/whats-new/extended-roadmap.md)
# Main Tasks
### ✔️ Make clickhouse-keeper Production Ready
✔️ It is already feature-complete and being used in production.
✔️ Update documentation to replace ZooKeeper with clickhouse-keeper everywhere.
### ✔️ Support for Backup and Restore
✔️ Backup of tables, databases, servers and clusters.
✔️ Incremental backups. Support for partial restore.
✔️ Support for pluggable backup storage options.
### Semistructured Data
✔️ JSON data type with automatic type inference and dynamic subcolumns.
✔️ Sparse column format and optimization of functions for sparse columns. #22535
Dynamic selection of column format - full, const, sparse, low cardinality.
Hybrid wide/compact data part format for huge number of columns.
### ✔️ Type Inference for Data Import
✔️ Allow to skip column names and types if data format already contains schema (e.g. Parquet, Avro).
✔️ Allow to infer types for text formats (e.g. CSV, TSV, JSONEachRow).
#32455
### Support for Transactions
Atomic insert of more than one block or to more than one partition into MergeTree and ReplicatedMergeTree tables.
Atomic insert into table and dependent materialized views. Atomic insert into multiple tables.
Multiple SELECTs from one consistent snapshot.
Atomic insert into distributed table.
### ✔️ Lightweight DELETE
✔️ Make mutations more lightweight by using delete-masks.
✔️ It won't enable frequent UPDATE/DELETE like in OLTP databases, but will make it more close.
### SQL Compatibility Improvements
Untangle name resolution and query analysis.
Initial support for correlated subqueries.
✔️ Allow using window functions inside expressions.
Add compatibility aliases for some window functions, etc.
✔️ Support for GROUPING SETS.
### JOIN Improvements
Support for join reordering.
Extend the cases when condition pushdown is applicable.
Convert anti-join to NOT IN.
Use table sorting for DISTINCT optimization.
Use table sorting for merge JOIN.
✔️ Grace hash join algorithm.
### Resource Management
✔️ Memory overcommit (sort and hard memory limits).
Enable external GROUP BY and ORDER BY by default.
IO operations scheduler with priorities.
✔️ Make scalar subqueries accountable.
CPU and network priorities.
### Separation of Storage and Compute
✔️ Parallel reading from replicas.
✔️ Dynamic cluster configuration with service discovery.
✔️ Caching of data from object storage.
Simplification of ReplicatedMergeTree.
Shared metadata storage.
# Experimental and Intern Tasks
### Streaming Queries
Fix POPULATE for materialized views.
Unification of materialized views, live views and window views.
Allow to set up subscriptions on top of all tables including Merge, Distributed.
Normalization of Kafka tables with storing offsets in ClickHouse.
Support for exactly once consumption from Kafka, non-consuming reads and multiple consumers.
Streaming queries with GROUP BY, ORDER BY with windowing criterias.
Persistent queues on top of ClickHouse tables.
### Integration with ML/AI
:wastebasket: Integration with Tensorflow
:wastebasket: Integration with MADLib
### GPU Support
Compile expressions to GPU
### Unique Key Constraint
### User Defined Data Types
### Incremental aggregation in memory
### Key-value data marts
### Text Classification
### Graph Processing
### Foreign SQL Dialects in ClickHouse
Support for MySQL dialect or Apache Calcite as an option.
### Batch Jobs and Refreshable Materialized Views
### Embedded ClickHouse Engine
### Data Hub
# Build And Testing Improvements
### Testing
✔️ Add tests for AArch64 builds.
✔️ Automated tests for backward compatibility.
Server-side query fuzzer for all kind of tests.
✔️ Fuzzing of query settings in functional tests.
SQL function based fuzzer.
Fuzzer of data formats.
Integrate with SQLogicTest.
Import obfuscated queries from Yandex Metrica.
### Builds
✔️ Docker images for AArch64.
Enable missing libraries for AArch64 builds.
Add and explore Musl builds.
Build all libraries with our own CMake files.
Embed root certificates to the binary.
Embed DNS resolver to the binary.
Add ClickHouse to Snap, so people will not install obsolete versions by accident.
|
non_defect
|
roadmap discussion this is clickhouse open source roadmap descriptions and links to be filled this roadmap does not cover the tasks related to infrastructure orchestration documentation marketing integrations saas drivers etc see also roadmap roadmap main tasks ✔️ make clickhouse keeper production ready ✔️ it is already feature complete and being used in production ✔️ update documentation to replace zookeeper with clickhouse keeper everywhere ✔️ support for backup and restore ✔️ backup of tables databases servers and clusters ✔️ incremental backups support for partial restore ✔️ support for pluggable backup storage options semistructured data ✔️ json data type with automatic type inference and dynamic subcolumns ✔️ sparse column format and optimization of functions for sparse columns dynamic selection of column format full const sparse low cardinality hybrid wide compact data part format for huge number of columns ✔️ type inference for data import ✔️ allow to skip column names and types if data format already contains schema e g parquet avro ✔️ allow to infer types for text formats e g csv tsv jsoneachrow support for transactions atomic insert of more than one block or to more than one partition into mergetree and replicatedmergetree tables atomic insert into table and dependent materialized views atomic insert into multiple tables multiple selects from one consistent snapshot atomic insert into distributed table ✔️ lightweight delete ✔️ make mutations more lightweight by using delete masks ✔️ it won t enable frequent update delete like in oltp databases but will make it more close sql compatibility improvements untangle name resolution and query analysis initial support for correlated subqueries ✔️ allow using window functions inside expressions add compatibility aliases for some window functions etc ✔️ support for grouping sets join improvements support for join reordering extend the cases when condition pushdown is applicable convert anti join to not in use table sorting for distinct optimization use table sorting for merge join ✔️ grace hash join algorithm resource management ✔️ memory overcommit sort and hard memory limits enable external group by and order by by default io operations scheduler with priorities ✔️ make scalar subqueries accountable cpu and network priorities separation of storage and compute ✔️ parallel reading from replicas ✔️ dynamic cluster configuration with service discovery ✔️ caching of data from object storage simplification of replicatedmergetree shared metadata storage experimental and intern tasks streaming queries fix populate for materialized views unification of materialized views live views and window views allow to set up subscriptions on top of all tables including merge distributed normalization of kafka tables with storing offsets in clickhouse support for exactly once consumption from kafka non consuming reads and multiple consumers streaming queries with group by order by with windowing criterias persistent queues on top of clickhouse tables integration with ml ai wastebasket integration with tensorflow wastebasket integration with madlib gpu support compile expressions to gpu unique key constraint user defined data types incremental aggregation in memory key value data marts text classification graph processing foreign sql dialects in clickhouse support for mysql dialect or apache calcite as an option batch jobs and refreshable materialized views embedded clickhouse engine data hub build and testing improvements testing ✔️ add tests for builds ✔️ automated tests for backward compatibility server side query fuzzer for all kind of tests ✔️ fuzzing of query settings in functional tests sql function based fuzzer fuzzer of data formats integrate with sqlogictest import obfuscated queries from yandex metrica builds ✔️ docker images for enable missing libraries for builds add and explore musl builds build all libraries with our own cmake files embed root certificates to the binary embed dns resolver to the binary add clickhouse to snap so people will not install obsolete versions by accident
| 0
|
3,497
| 2,610,063,777
|
IssuesEvent
|
2015-02-26 18:18:45
|
chrsmith/jsjsj122
|
https://api.github.com/repos/chrsmith/jsjsj122
|
opened
|
黄岩看不育哪家效果好
|
auto-migrated Priority-Medium Type-Defect
|
```
黄岩看不育哪家效果好【台州五洲生殖医院】24小时健康咨询
热线:0576-88066933-(扣扣800080609)-(微信号tzwzszyy)医院地址:台州市
椒江区枫南路229号(枫南大转盘旁)乘车线路:乘坐104、108、1
18、198及椒江一金清公交车直达枫南小区,乘坐107、105、109、
112、901、 902公交车到星星广场下车,步行即可到院。
诊疗项目:阳痿,早泄,前列腺炎,前列腺增生,龟头炎,��
�精,无精。包皮包茎,精索静脉曲张,淋病等。
台州五洲生殖医院是台州最大的男科医院,权威专家在线免��
�咨询,拥有专业完善的男科检查治疗设备,严格按照国家标�
��收费。尖端医疗设备,与世界同步。权威专家,成就专业典
范。人性化服务,一切以患者为中心。
看男科就选台州五洲生殖医院,专业男科为男人。
```
-----
Original issue reported on code.google.com by `poweragr...@gmail.com` on 30 May 2014 at 7:57
|
1.0
|
黄岩看不育哪家效果好 - ```
黄岩看不育哪家效果好【台州五洲生殖医院】24小时健康咨询
热线:0576-88066933-(扣扣800080609)-(微信号tzwzszyy)医院地址:台州市
椒江区枫南路229号(枫南大转盘旁)乘车线路:乘坐104、108、1
18、198及椒江一金清公交车直达枫南小区,乘坐107、105、109、
112、901、 902公交车到星星广场下车,步行即可到院。
诊疗项目:阳痿,早泄,前列腺炎,前列腺增生,龟头炎,��
�精,无精。包皮包茎,精索静脉曲张,淋病等。
台州五洲生殖医院是台州最大的男科医院,权威专家在线免��
�咨询,拥有专业完善的男科检查治疗设备,严格按照国家标�
��收费。尖端医疗设备,与世界同步。权威专家,成就专业典
范。人性化服务,一切以患者为中心。
看男科就选台州五洲生殖医院,专业男科为男人。
```
-----
Original issue reported on code.google.com by `poweragr...@gmail.com` on 30 May 2014 at 7:57
|
defect
|
黄岩看不育哪家效果好 黄岩看不育哪家效果好【台州五洲生殖医院】 热线 微信号tzwzszyy 医院地址 台州市 (枫南大转盘旁)乘车线路 、 、 、 , 、 、 、 、 、 ,步行即可到院。 诊疗项目:阳痿,早泄,前列腺炎,前列腺增生,龟头炎,�� �精,无精。包皮包茎,精索静脉曲张,淋病等。 台州五洲生殖医院是台州最大的男科医院,权威专家在线免�� �咨询,拥有专业完善的男科检查治疗设备,严格按照国家标� ��收费。尖端医疗设备,与世界同步。权威专家,成就专业典 范。人性化服务,一切以患者为中心。 看男科就选台州五洲生殖医院,专业男科为男人。 original issue reported on code google com by poweragr gmail com on may at
| 1
|
52,674
| 13,224,896,720
|
IssuesEvent
|
2020-08-17 20:04:07
|
icecube-trac/tix4
|
https://api.github.com/repos/icecube-trac/tix4
|
closed
|
lazy frame needs docs (Trac #136)
|
Migrated from Trac defect documentation
|
<details>
<summary><em>Migrated from <a href="https://code.icecube.wisc.edu/projects/icecube/ticket/136">https://code.icecube.wisc.edu/projects/icecube/ticket/136</a>, reported by troyand owned by blaufuss</em></summary>
<p>
```json
{
"status": "closed",
"changetime": "2014-11-23T03:37:56",
"_ts": "1416713876900096",
"description": "",
"reporter": "troy",
"cc": "",
"resolution": "wont or cant fix",
"time": "2008-09-30T04:42:37",
"component": "documentation",
"summary": "lazy frame needs docs",
"priority": "normal",
"keywords": "",
"milestone": "",
"owner": "blaufuss",
"type": "defect"
}
```
</p>
</details>
|
1.0
|
lazy frame needs docs (Trac #136) -
<details>
<summary><em>Migrated from <a href="https://code.icecube.wisc.edu/projects/icecube/ticket/136">https://code.icecube.wisc.edu/projects/icecube/ticket/136</a>, reported by troyand owned by blaufuss</em></summary>
<p>
```json
{
"status": "closed",
"changetime": "2014-11-23T03:37:56",
"_ts": "1416713876900096",
"description": "",
"reporter": "troy",
"cc": "",
"resolution": "wont or cant fix",
"time": "2008-09-30T04:42:37",
"component": "documentation",
"summary": "lazy frame needs docs",
"priority": "normal",
"keywords": "",
"milestone": "",
"owner": "blaufuss",
"type": "defect"
}
```
</p>
</details>
|
defect
|
lazy frame needs docs trac migrated from json status closed changetime ts description reporter troy cc resolution wont or cant fix time component documentation summary lazy frame needs docs priority normal keywords milestone owner blaufuss type defect
| 1
|
440,229
| 12,696,140,680
|
IssuesEvent
|
2020-06-22 09:33:51
|
threefoldtech/js-sdk
|
https://api.github.com/repos/threefoldtech/js-sdk
|
closed
|
can't start threebot server
|
priority_critical
|
```python3
server = j.servers.threebot.get("default")
server.start()
```

|
1.0
|
can't start threebot server - ```python3
server = j.servers.threebot.get("default")
server.start()
```

|
non_defect
|
can t start threebot server server j servers threebot get default server start
| 0
|
523,525
| 15,184,321,586
|
IssuesEvent
|
2021-02-15 09:25:05
|
online-judge-tools/oj
|
https://api.github.com/repos/online-judge-tools/oj
|
closed
|
Show a hint message when the cookie.jar is broken
|
difficulty:low enhancement good first issue priority:low
|
## Description / 説明
The cookie.jar may be broken, and the workaround for such cases is just removing the cookie.jar. Showing a hint message for this help users.
## Motivation / 動機
There is a user who is confused by the broken cookie.jar.
<blockquote class="twitter-tweet"><p lang="ja" dir="ltr">online judge toolsでテストケースダウンロードしてる最中に手が滑ってターミナル閉じちゃったら以降http.cookiejar.loaderrorみたいなの出てきてダウンロードできなくなって頭抱えてる</p>— みやもり (@_rniya_) <a href="https://twitter.com/_rniya_/status/1361227763982639108?ref_src=twsrc%5Etfw">February 15, 2021</a></blockquote>
<blockquote class="twitter-tweet"><p lang="ja" dir="ltr">解決法みたいなののコメントないしcookiejarみたいなやつ削除しても大丈夫なのかとかよくわからん</p>— みやもり (@_rniya_) <a href="https://twitter.com/_rniya_/status/1361228098042089472?ref_src=twsrc%5Etfw">February 15, 2021</a></blockquote>
|
1.0
|
Show a hint message when the cookie.jar is broken - ## Description / 説明
The cookie.jar may be broken, and the workaround for such cases is just removing the cookie.jar. Showing a hint message for this help users.
## Motivation / 動機
There is a user who is confused by the broken cookie.jar.
<blockquote class="twitter-tweet"><p lang="ja" dir="ltr">online judge toolsでテストケースダウンロードしてる最中に手が滑ってターミナル閉じちゃったら以降http.cookiejar.loaderrorみたいなの出てきてダウンロードできなくなって頭抱えてる</p>— みやもり (@_rniya_) <a href="https://twitter.com/_rniya_/status/1361227763982639108?ref_src=twsrc%5Etfw">February 15, 2021</a></blockquote>
<blockquote class="twitter-tweet"><p lang="ja" dir="ltr">解決法みたいなののコメントないしcookiejarみたいなやつ削除しても大丈夫なのかとかよくわからん</p>— みやもり (@_rniya_) <a href="https://twitter.com/_rniya_/status/1361228098042089472?ref_src=twsrc%5Etfw">February 15, 2021</a></blockquote>
|
non_defect
|
show a hint message when the cookie jar is broken description 説明 the cookie jar may be broken and the workaround for such cases is just removing the cookie jar showing a hint message for this help users motivation 動機 there is a user who is confused by the broken cookie jar online judge toolsでテストケースダウンロードしてる最中に手が滑ってターミナル閉じちゃったら以降http cookiejar loaderrorみたいなの出てきてダウンロードできなくなって頭抱えてる mdash みやもり rniya 解決法みたいなののコメントないしcookiejarみたいなやつ削除しても大丈夫なのかとかよくわからん mdash みやもり rniya
| 0
|
42,753
| 11,254,366,457
|
IssuesEvent
|
2020-01-11 23:05:10
|
hasse69/rar2fs
|
https://api.github.com/repos/hasse69/rar2fs
|
closed
|
Regression: Reduce memory footprint during archive scan
|
Defect Priority-High
|
rar2fs currently does not list archive files anymore. Prior to this commit is good - this is the commit that breaks my archives.
```
rar2fs#/mnt/Web /mnt/WebU fuse ro,allow_other,uid=kyle,gid=storage,umask=0222,kernel_cache,--seek-length=2,--date-rar 0 0
```
```
FileServer /tmp/rar2fs # git bisect bad
4bc904fd182f0365e41f9130408ec6be437f2423 is the first bad commit
commit 4bc904fd182f0365e41f9130408ec6be437f2423
Author: Hans Beckerus <hans.beckerus at gmail.com>
Date: Sat Nov 23 15:39:39 2019 +0100
Reduce memory footprint during archive scan
When scanning an archive for files a linked list it created with all
files and properties before being processed by file system functions
such as readdir. This cause some memory overhead since a lot of data
is required to be kept resident for a longer period of time. Since the
lifetime of the data collected is relatively short there is not need
to pre-fetch all information like this. Instead handle file by file
and use only a single temporary object to hold whatever meta data is
necessary. The performance is also expected to be improved by a change
like this since less dynamic heap allocations are required but it also
results in a loop unwind that will increase number of functions calls.
Measurements of some common use-cases indicated a performance increase
of approximately 15%-20% but there are also reports of no improvement
at all or even the opposite. The latter should however be considered a
rare and exceptional case.
This change was triggered by issue #122 for which a very huge archive
was mounted with more than 100k files.
Signed-off-by: Hans Beckerus <hans.beckerus at gmail.com>
src/dllext.cpp | 211 +++++++++++++++++++++++++++------------------------------
src/dllext.hpp | 14 ++--
src/rar2fs.c | 146 +++++++++++++++++++--------------------
3 files changed, 174 insertions(+), 197 deletions(-)
```
|
1.0
|
Regression: Reduce memory footprint during archive scan - rar2fs currently does not list archive files anymore. Prior to this commit is good - this is the commit that breaks my archives.
```
rar2fs#/mnt/Web /mnt/WebU fuse ro,allow_other,uid=kyle,gid=storage,umask=0222,kernel_cache,--seek-length=2,--date-rar 0 0
```
```
FileServer /tmp/rar2fs # git bisect bad
4bc904fd182f0365e41f9130408ec6be437f2423 is the first bad commit
commit 4bc904fd182f0365e41f9130408ec6be437f2423
Author: Hans Beckerus <hans.beckerus at gmail.com>
Date: Sat Nov 23 15:39:39 2019 +0100
Reduce memory footprint during archive scan
When scanning an archive for files a linked list it created with all
files and properties before being processed by file system functions
such as readdir. This cause some memory overhead since a lot of data
is required to be kept resident for a longer period of time. Since the
lifetime of the data collected is relatively short there is not need
to pre-fetch all information like this. Instead handle file by file
and use only a single temporary object to hold whatever meta data is
necessary. The performance is also expected to be improved by a change
like this since less dynamic heap allocations are required but it also
results in a loop unwind that will increase number of functions calls.
Measurements of some common use-cases indicated a performance increase
of approximately 15%-20% but there are also reports of no improvement
at all or even the opposite. The latter should however be considered a
rare and exceptional case.
This change was triggered by issue #122 for which a very huge archive
was mounted with more than 100k files.
Signed-off-by: Hans Beckerus <hans.beckerus at gmail.com>
src/dllext.cpp | 211 +++++++++++++++++++++++++++------------------------------
src/dllext.hpp | 14 ++--
src/rar2fs.c | 146 +++++++++++++++++++--------------------
3 files changed, 174 insertions(+), 197 deletions(-)
```
|
defect
|
regression reduce memory footprint during archive scan currently does not list archive files anymore prior to this commit is good this is the commit that breaks my archives mnt web mnt webu fuse ro allow other uid kyle gid storage umask kernel cache seek length date rar fileserver tmp git bisect bad is the first bad commit commit author hans beckerus date sat nov reduce memory footprint during archive scan when scanning an archive for files a linked list it created with all files and properties before being processed by file system functions such as readdir this cause some memory overhead since a lot of data is required to be kept resident for a longer period of time since the lifetime of the data collected is relatively short there is not need to pre fetch all information like this instead handle file by file and use only a single temporary object to hold whatever meta data is necessary the performance is also expected to be improved by a change like this since less dynamic heap allocations are required but it also results in a loop unwind that will increase number of functions calls measurements of some common use cases indicated a performance increase of approximately but there are also reports of no improvement at all or even the opposite the latter should however be considered a rare and exceptional case this change was triggered by issue for which a very huge archive was mounted with more than files signed off by hans beckerus src dllext cpp src dllext hpp src c files changed insertions deletions
| 1
|
37,835
| 8,530,278,363
|
IssuesEvent
|
2018-11-03 20:48:59
|
ralsina/devicenzo
|
https://api.github.com/repos/ralsina/devicenzo
|
closed
|
seg fault
|
Priority-Medium Type-Defect auto-migrated
|
```
i downloaded
http://code.google.com/p/devicenzo/source/browse/trunk/devicenzo.py and when i
run it i get a seg fault... might wanna look into to it.
patx@patx-desktop:~/Desktop$ python devicenzo.py.py
Traceback (most recent call last):
File "devicenzo.py.py", line 168, in <module>
wb.addTab(QtCore.QUrl('http://devicenzo.googlecode.com'))
File "devicenzo.py.py", line 79, in addTab
self.tabs.setCurrentIndex(self.tabs.addTab(Tab(url, self), ""))
File "devicenzo.py.py", line 151, in __init__
self.previewer = QtGui.QPrintPreviewDialog(paintRequested=self.wb.print_)
TypeError: 'print_()' has no overload that is compatible with
'paintRequested(QPrinter*)'
Segmentation fault
```
Original issue reported on code.google.com by `patx44` on 19 Jul 2011 at 9:05
|
1.0
|
seg fault - ```
i downloaded
http://code.google.com/p/devicenzo/source/browse/trunk/devicenzo.py and when i
run it i get a seg fault... might wanna look into to it.
patx@patx-desktop:~/Desktop$ python devicenzo.py.py
Traceback (most recent call last):
File "devicenzo.py.py", line 168, in <module>
wb.addTab(QtCore.QUrl('http://devicenzo.googlecode.com'))
File "devicenzo.py.py", line 79, in addTab
self.tabs.setCurrentIndex(self.tabs.addTab(Tab(url, self), ""))
File "devicenzo.py.py", line 151, in __init__
self.previewer = QtGui.QPrintPreviewDialog(paintRequested=self.wb.print_)
TypeError: 'print_()' has no overload that is compatible with
'paintRequested(QPrinter*)'
Segmentation fault
```
Original issue reported on code.google.com by `patx44` on 19 Jul 2011 at 9:05
|
defect
|
seg fault i downloaded and when i run it i get a seg fault might wanna look into to it patx patx desktop desktop python devicenzo py py traceback most recent call last file devicenzo py py line in wb addtab qtcore qurl file devicenzo py py line in addtab self tabs setcurrentindex self tabs addtab tab url self file devicenzo py py line in init self previewer qtgui qprintpreviewdialog paintrequested self wb print typeerror print has no overload that is compatible with paintrequested qprinter segmentation fault original issue reported on code google com by on jul at
| 1
|
21,225
| 28,311,099,999
|
IssuesEvent
|
2023-04-10 15:27:50
|
cse442-at-ub/project_s23-cinco
|
https://api.github.com/repos/cse442-at-ub/project_s23-cinco
|
closed
|
Retrieve and load events from database to feed
|
Processing Task Sprint 3
|
Task Tests
Test 1:
1. Go to https://www-student.cse.buffalo.edu/CSE442-542/2023-Spring/cse-442b/build
2. Verify you can see events that are different and unique.

3. Click on an event and verify you can see the post info such as poster, description, thumbnail, image, etc.
4. Go to the database -> Posts https://www-student.cse.buffalo.edu/tools/db/phpmyadmin/sql.php?server=1&db=cse442_2023_spring_team_b_db&table=Posts&pos=0
and verify that you can see the thumbnail image name corresponding to the poster.

6. Access the directory within the cheshire server via WinSCP or CyberDuck
7. Go into the uploads folder and make sure you can see the exact file name in the folder.

8. Download it to make sure it is the exact same image.
|
1.0
|
Retrieve and load events from database to feed - Task Tests
Test 1:
1. Go to https://www-student.cse.buffalo.edu/CSE442-542/2023-Spring/cse-442b/build
2. Verify you can see events that are different and unique.

3. Click on an event and verify you can see the post info such as poster, description, thumbnail, image, etc.
4. Go to the database -> Posts https://www-student.cse.buffalo.edu/tools/db/phpmyadmin/sql.php?server=1&db=cse442_2023_spring_team_b_db&table=Posts&pos=0
and verify that you can see the thumbnail image name corresponding to the poster.

6. Access the directory within the cheshire server via WinSCP or CyberDuck
7. Go into the uploads folder and make sure you can see the exact file name in the folder.

8. Download it to make sure it is the exact same image.
|
non_defect
|
retrieve and load events from database to feed task tests test go to verify you can see events that are different and unique click on an event and verify you can see the post info such as poster description thumbnail image etc go to the database posts and verify that you can see the thumbnail image name corresponding to the poster access the directory within the cheshire server via winscp or cyberduck go into the uploads folder and make sure you can see the exact file name in the folder download it to make sure it is the exact same image
| 0
|
110,707
| 13,930,920,632
|
IssuesEvent
|
2020-10-22 03:47:18
|
flutter/flutter
|
https://api.github.com/repos/flutter/flutter
|
closed
|
Flutter nested navigators - please provide an official working example of Flutter nested navigators between various Scaffold pages and a BottomNavigationBar
|
d: api docs f: material design f: routes framework
|
Hi Flutter team,
I was researching nested navigators, but there is nowhere a working example with Scaffolds. Could we have one in addition to the snippet posted here?
https://api.flutter.dev/flutter/widgets/Navigator-class.html#nesting-navigators
I am seeing this question all over the place:
- https://stackoverflow.com/questions/60515525/bottom-navigation-bar-with-sub-navigators-for-each-tab
- https://stackoverflow.com/questions/48098085/nesting-routes-with-flutter
- https://stackoverflow.com/questions/55716230/how-to-do-nested-navigation-in-flutter
- https://stackoverflow.com/questions/55213680/in-a-nested-navigator-structure-of-flutter-how-do-you-get-the-a-specific-naviga
- https://stackoverflow.com/questions/56890424/use-nested-navigator-with-willpopscope-in-flutter
Even this attempt has lots of issues filed: https://github.com/bizz84/nested-navigation-demo-flutter
<!-- Thank you for using Flutter!
If you are looking for support, please check out our documentation
or consider asking a question on Stack Overflow:
* https://flutter.dev/
* https://api.flutter.dev/
* https://stackoverflow.com/questions/tagged/flutter?sort=frequent
If you have found a bug or if our documentation doesn't have an answer
to what you're looking for, then fill out the template below. Please read
our guide to filing a bug first: https://flutter.dev/docs/resources/bug-reports
-->
## Steps to Reproduce
<!-- Please tell us exactly how to reproduce the problem you are running into. -->
1. ...
2. ...
3. ...
## Logs
<!--
Include the full logs of the commands you are running between the lines
with the backticks below. If you are running any "flutter" commands,
please include the output of running them with "--verbose"; for example,
the output of running "flutter --verbose create foo".
-->
```
```
<!-- If possible, paste the output of running `flutter doctor -v` here. -->
```
```
|
1.0
|
Flutter nested navigators - please provide an official working example of Flutter nested navigators between various Scaffold pages and a BottomNavigationBar - Hi Flutter team,
I was researching nested navigators, but there is nowhere a working example with Scaffolds. Could we have one in addition to the snippet posted here?
https://api.flutter.dev/flutter/widgets/Navigator-class.html#nesting-navigators
I am seeing this question all over the place:
- https://stackoverflow.com/questions/60515525/bottom-navigation-bar-with-sub-navigators-for-each-tab
- https://stackoverflow.com/questions/48098085/nesting-routes-with-flutter
- https://stackoverflow.com/questions/55716230/how-to-do-nested-navigation-in-flutter
- https://stackoverflow.com/questions/55213680/in-a-nested-navigator-structure-of-flutter-how-do-you-get-the-a-specific-naviga
- https://stackoverflow.com/questions/56890424/use-nested-navigator-with-willpopscope-in-flutter
Even this attempt has lots of issues filed: https://github.com/bizz84/nested-navigation-demo-flutter
<!-- Thank you for using Flutter!
If you are looking for support, please check out our documentation
or consider asking a question on Stack Overflow:
* https://flutter.dev/
* https://api.flutter.dev/
* https://stackoverflow.com/questions/tagged/flutter?sort=frequent
If you have found a bug or if our documentation doesn't have an answer
to what you're looking for, then fill out the template below. Please read
our guide to filing a bug first: https://flutter.dev/docs/resources/bug-reports
-->
## Steps to Reproduce
<!-- Please tell us exactly how to reproduce the problem you are running into. -->
1. ...
2. ...
3. ...
## Logs
<!--
Include the full logs of the commands you are running between the lines
with the backticks below. If you are running any "flutter" commands,
please include the output of running them with "--verbose"; for example,
the output of running "flutter --verbose create foo".
-->
```
```
<!-- If possible, paste the output of running `flutter doctor -v` here. -->
```
```
|
non_defect
|
flutter nested navigators please provide an official working example of flutter nested navigators between various scaffold pages and a bottomnavigationbar hi flutter team i was researching nested navigators but there is nowhere a working example with scaffolds could we have one in addition to the snippet posted here i am seeing this question all over the place even this attempt has lots of issues filed thank you for using flutter if you are looking for support please check out our documentation or consider asking a question on stack overflow if you have found a bug or if our documentation doesn t have an answer to what you re looking for then fill out the template below please read our guide to filing a bug first steps to reproduce logs include the full logs of the commands you are running between the lines with the backticks below if you are running any flutter commands please include the output of running them with verbose for example the output of running flutter verbose create foo
| 0
|
17,716
| 3,012,940,764
|
IssuesEvent
|
2015-07-29 04:25:57
|
yawlfoundation/yawl
|
https://api.github.com/repos/yawlfoundation/yawl
|
closed
|
Set custom forms - once a form is set it is not possible to go back to automatic form generation
|
auto-migrated Priority-Medium Type-Defect
|
```
What steps will reproduce the problem?
1. Open the attached file in the editor
2. Select "Set custom form" for the task "Call for papers"
3. Remove the http reference (leave "http://")
4. Select OK
5. Save file
What is the expected output?
http://
What do you see instead?
http://localhost:8080/forms/call.html (i.e. the reference to the form has
not been removed)
What version of the product are you using?
update 3
On what operating system?
Windows XP
Please provide any additional information below.
```
Original issue reported on code.google.com by `petia.wo...@gmail.com` on 24 Nov 2008 at 8:35
Attachments:
* [Conference-Example-Editor-v2.0-Example-v6.0.yawl](https://storage.googleapis.com/google-code-attachments/yawl/issue-230/comment-0/Conference-Example-Editor-v2.0-Example-v6.0.yawl)
|
1.0
|
Set custom forms - once a form is set it is not possible to go back to automatic form generation - ```
What steps will reproduce the problem?
1. Open the attached file in the editor
2. Select "Set custom form" for the task "Call for papers"
3. Remove the http reference (leave "http://")
4. Select OK
5. Save file
What is the expected output?
http://
What do you see instead?
http://localhost:8080/forms/call.html (i.e. the reference to the form has
not been removed)
What version of the product are you using?
update 3
On what operating system?
Windows XP
Please provide any additional information below.
```
Original issue reported on code.google.com by `petia.wo...@gmail.com` on 24 Nov 2008 at 8:35
Attachments:
* [Conference-Example-Editor-v2.0-Example-v6.0.yawl](https://storage.googleapis.com/google-code-attachments/yawl/issue-230/comment-0/Conference-Example-Editor-v2.0-Example-v6.0.yawl)
|
defect
|
set custom forms once a form is set it is not possible to go back to automatic form generation what steps will reproduce the problem open the attached file in the editor select set custom form for the task call for papers remove the http reference leave select ok save file what is the expected output http what do you see instead i e the reference to the form has not been removed what version of the product are you using update on what operating system windows xp please provide any additional information below original issue reported on code google com by petia wo gmail com on nov at attachments
| 1
|
244,582
| 7,876,998,893
|
IssuesEvent
|
2018-06-26 04:37:46
|
Dallas-Makerspace/tracker
|
https://api.github.com/repos/Dallas-Makerspace/tracker
|
closed
|
[Feature] Members directory / Jobboard
|
CR/enhancement Committee/Public Relations Priority/LOW Volunteer/help wanted wontfix
|
Profiles:
- name
- user portraits
- bios
- telephone ( optional )
- email (optional )
- talk link / discord talk
- Areas of interest
- SME tags
Integration with talk and discord apis
Skill sets
- contact mailing list per skill tag ( notifices on new posting )
- AD groups?
- notifies "found a hire"
Jobs posting
Rating system ( comments / karma)
Jobs life cycle and community flagging
Statistic tracking (per job, per poster, per member, etc..)
should follow 12 factor && pwa standards
|
1.0
|
[Feature] Members directory / Jobboard - Profiles:
- name
- user portraits
- bios
- telephone ( optional )
- email (optional )
- talk link / discord talk
- Areas of interest
- SME tags
Integration with talk and discord apis
Skill sets
- contact mailing list per skill tag ( notifices on new posting )
- AD groups?
- notifies "found a hire"
Jobs posting
Rating system ( comments / karma)
Jobs life cycle and community flagging
Statistic tracking (per job, per poster, per member, etc..)
should follow 12 factor && pwa standards
|
non_defect
|
members directory jobboard profiles name user portraits bios telephone optional email optional talk link discord talk areas of interest sme tags integration with talk and discord apis skill sets contact mailing list per skill tag notifices on new posting ad groups notifies found a hire jobs posting rating system comments karma jobs life cycle and community flagging statistic tracking per job per poster per member etc should follow factor pwa standards
| 0
|
19,224
| 5,825,945,598
|
IssuesEvent
|
2017-05-08 01:43:03
|
houtan1/PortOpt-WebApp
|
https://api.github.com/repos/houtan1/PortOpt-WebApp
|
opened
|
Expand and update hard-coded capabilities
|
V1(hard-coded)
|
Add 2 more stocks and update all number for stocks, allowing the user to have more than 1 choice for pairing.
Error checking and alerting if choosing non-negatively correlating stocks.
|
1.0
|
Expand and update hard-coded capabilities - Add 2 more stocks and update all number for stocks, allowing the user to have more than 1 choice for pairing.
Error checking and alerting if choosing non-negatively correlating stocks.
|
non_defect
|
expand and update hard coded capabilities add more stocks and update all number for stocks allowing the user to have more than choice for pairing error checking and alerting if choosing non negatively correlating stocks
| 0
|
16,258
| 2,886,043,599
|
IssuesEvent
|
2015-06-12 04:10:24
|
cakephp/cakephp
|
https://api.github.com/repos/cakephp/cakephp
|
closed
|
3.0.6 - Subquery w/hasMany creating bad SQL
|
Defect ORM
|
Hi guys. When defining a `hasMany` relationship with a `strategy` of `subquery` it tries to use an undefined column as the `foreignKey` (ignoring the `foreignKey` I'm trying to use). As I understand it, either way should work without having to change anything. I'd prefer to use `subquery` since I'm dealing with a lot of data.
I have this..
```
$ft = TableRegistry::get('FeaturedTags');
$ft->belongsTo('Tags');
$t = TableRegistry::get('Tags');
$t->hasMany('_i18n', [
'foreignKey' => 'id',
'className' => 'TagsTranslations',
'strategy' => 'subquery'
]);
$featuredTags = $ft
->find()
->contain(['Tags' => ['_i18n']])
->toArray();
```
This produces..
```
SELECT `_i18n`.`id` AS `_i18n__id`, `_i18n`.`locale` AS `_i18n__locale`, `_i18n`.`name` AS `_i18n__name` FROM `tags_translations` `_i18n`
INNER JOIN (
SELECT (`Tags`.`tag_id`) FROM `featured_tags` `FeaturedTags`
LEFT JOIN `tags` `Tags` ON `Tags`.`id` = (`FeaturedTags`.`tag_id`)
GROUP BY `Tags`.`tag_id`
) `Tags` ON `_i18n`.`id` = (`Tags`.`tag_id`)
```
However, ``Tags.tag_id`` doesn't exist. It should be ``Tags.id``. I think in the past something like this came up. I checked out everything back to 3.0.0 and it didn't fix it. Changing it to `select` gets me the data I need.
Schema..
```
CREATE TABLE `featured_tags` (
`tag_id` int(11) NOT NULL,
`priority` int(11) NOT NULL,
PRIMARY KEY (`tag_id`)
) ENGINE=InnoDB DEFAULT CHARSET=latin1;
CREATE TABLE `tags` (
`id` int(10) unsigned NOT NULL AUTO_INCREMENT,
`family` int(11) NOT NULL DEFAULT '0',
`priority` int(11) NOT NULL DEFAULT '0',
PRIMARY KEY (`id`)
) ENGINE=InnoDB AUTO_INCREMENT=711 DEFAULT CHARSET=latin1;
CREATE TABLE `tags_translations` (
`id` int(10) unsigned NOT NULL DEFAULT '0',
`locale` varchar(5) CHARACTER SET utf8 NOT NULL DEFAULT '',
`name` varchar(255) CHARACTER SET utf8 NOT NULL DEFAULT '',
PRIMARY KEY (`id`,`locale`),
FULLTEXT KEY `title` (`name`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8 COLLATE=utf8_bin;
```
|
1.0
|
3.0.6 - Subquery w/hasMany creating bad SQL - Hi guys. When defining a `hasMany` relationship with a `strategy` of `subquery` it tries to use an undefined column as the `foreignKey` (ignoring the `foreignKey` I'm trying to use). As I understand it, either way should work without having to change anything. I'd prefer to use `subquery` since I'm dealing with a lot of data.
I have this..
```
$ft = TableRegistry::get('FeaturedTags');
$ft->belongsTo('Tags');
$t = TableRegistry::get('Tags');
$t->hasMany('_i18n', [
'foreignKey' => 'id',
'className' => 'TagsTranslations',
'strategy' => 'subquery'
]);
$featuredTags = $ft
->find()
->contain(['Tags' => ['_i18n']])
->toArray();
```
This produces..
```
SELECT `_i18n`.`id` AS `_i18n__id`, `_i18n`.`locale` AS `_i18n__locale`, `_i18n`.`name` AS `_i18n__name` FROM `tags_translations` `_i18n`
INNER JOIN (
SELECT (`Tags`.`tag_id`) FROM `featured_tags` `FeaturedTags`
LEFT JOIN `tags` `Tags` ON `Tags`.`id` = (`FeaturedTags`.`tag_id`)
GROUP BY `Tags`.`tag_id`
) `Tags` ON `_i18n`.`id` = (`Tags`.`tag_id`)
```
However, ``Tags.tag_id`` doesn't exist. It should be ``Tags.id``. I think in the past something like this came up. I checked out everything back to 3.0.0 and it didn't fix it. Changing it to `select` gets me the data I need.
Schema..
```
CREATE TABLE `featured_tags` (
`tag_id` int(11) NOT NULL,
`priority` int(11) NOT NULL,
PRIMARY KEY (`tag_id`)
) ENGINE=InnoDB DEFAULT CHARSET=latin1;
CREATE TABLE `tags` (
`id` int(10) unsigned NOT NULL AUTO_INCREMENT,
`family` int(11) NOT NULL DEFAULT '0',
`priority` int(11) NOT NULL DEFAULT '0',
PRIMARY KEY (`id`)
) ENGINE=InnoDB AUTO_INCREMENT=711 DEFAULT CHARSET=latin1;
CREATE TABLE `tags_translations` (
`id` int(10) unsigned NOT NULL DEFAULT '0',
`locale` varchar(5) CHARACTER SET utf8 NOT NULL DEFAULT '',
`name` varchar(255) CHARACTER SET utf8 NOT NULL DEFAULT '',
PRIMARY KEY (`id`,`locale`),
FULLTEXT KEY `title` (`name`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8 COLLATE=utf8_bin;
```
|
defect
|
subquery w hasmany creating bad sql hi guys when defining a hasmany relationship with a strategy of subquery it tries to use an undefined column as the foreignkey ignoring the foreignkey i m trying to use as i understand it either way should work without having to change anything i d prefer to use subquery since i m dealing with a lot of data i have this ft tableregistry get featuredtags ft belongsto tags t tableregistry get tags t hasmany foreignkey id classname tagstranslations strategy subquery featuredtags ft find contain toarray this produces select id as id locale as locale name as name from tags translations inner join select tags tag id from featured tags featuredtags left join tags tags on tags id featuredtags tag id group by tags tag id tags on id tags tag id however tags tag id doesn t exist it should be tags id i think in the past something like this came up i checked out everything back to and it didn t fix it changing it to select gets me the data i need schema create table featured tags tag id int not null priority int not null primary key tag id engine innodb default charset create table tags id int unsigned not null auto increment family int not null default priority int not null default primary key id engine innodb auto increment default charset create table tags translations id int unsigned not null default locale varchar character set not null default name varchar character set not null default primary key id locale fulltext key title name engine innodb default charset collate bin
| 1
|
37,356
| 4,804,338,843
|
IssuesEvent
|
2016-11-02 13:16:53
|
fgpv-vpgf/fgpv-vpgf
|
https://api.github.com/repos/fgpv-vpgf/fgpv-vpgf
|
closed
|
Create UI for GeoSearch Component
|
addition: feature experience: design priority: medium
|
Create the UI component to implement GeoSearch based on mockup created by https://github.com/fgpv-vpgf/fgpv-vpgf/issues/1163.
|
1.0
|
Create UI for GeoSearch Component - Create the UI component to implement GeoSearch based on mockup created by https://github.com/fgpv-vpgf/fgpv-vpgf/issues/1163.
|
non_defect
|
create ui for geosearch component create the ui component to implement geosearch based on mockup created by
| 0
|
17,973
| 3,013,836,656
|
IssuesEvent
|
2015-07-29 11:35:31
|
yawlfoundation/yawl
|
https://api.github.com/repos/yawlfoundation/yawl
|
closed
|
resourceService does not communicate properly with data source (postgresql service)
|
auto-migrated Priority-Medium Type-Defect
|
```
What steps will reproduce the problem?
1.Download and install PostgreSQL 9.3.5 using the
postgresql-9.3.5-3-windows-x64 installer and setting user postgres with passwd
yawl and creating DB yawl with pgAdmin III
2.Use tomcat 7.0.56
3.Try to login to the resourceService
(http://localhost:8080/resourceService/faces/Login.jsp)
What is the expected output? What do you see instead?
Expected is a login with an appropriate resourceService form. Instead, I get
following Error Message:
"Missing or invalid organisational data source. The resource service requires a
connection to a valid data source that contains organisational data. Please
check the settings of the 'OrgDataSource' parameter in the service's web.xml to
ensure a valid data source is set, and/or check the configuration properties
set for the data source."
My own investigation shows, that some part of yawls has succeeded in the DB
connection, as the tables are created and thus existent.
Setting hibernate logging to DEBUG provided me with a logfile that does not
contain any obvious errors.
What version of the product are you using? On what operating system?
I manually installed the YAWL_CoreServices_3.0.zip package on Windows 8.1 Pro
Please provide any additional information below.
I attach the hibernate logfile
```
Original issue reported on code.google.com by `avh.ba.h...@gmail.com` on 13 Nov 2014 at 11:22
Attachments:
* [yawl_hibernate.log](https://storage.googleapis.com/google-code-attachments/yawl/issue-540/comment-0/yawl_hibernate.log)
|
1.0
|
resourceService does not communicate properly with data source (postgresql service) - ```
What steps will reproduce the problem?
1.Download and install PostgreSQL 9.3.5 using the
postgresql-9.3.5-3-windows-x64 installer and setting user postgres with passwd
yawl and creating DB yawl with pgAdmin III
2.Use tomcat 7.0.56
3.Try to login to the resourceService
(http://localhost:8080/resourceService/faces/Login.jsp)
What is the expected output? What do you see instead?
Expected is a login with an appropriate resourceService form. Instead, I get
following Error Message:
"Missing or invalid organisational data source. The resource service requires a
connection to a valid data source that contains organisational data. Please
check the settings of the 'OrgDataSource' parameter in the service's web.xml to
ensure a valid data source is set, and/or check the configuration properties
set for the data source."
My own investigation shows, that some part of yawls has succeeded in the DB
connection, as the tables are created and thus existent.
Setting hibernate logging to DEBUG provided me with a logfile that does not
contain any obvious errors.
What version of the product are you using? On what operating system?
I manually installed the YAWL_CoreServices_3.0.zip package on Windows 8.1 Pro
Please provide any additional information below.
I attach the hibernate logfile
```
Original issue reported on code.google.com by `avh.ba.h...@gmail.com` on 13 Nov 2014 at 11:22
Attachments:
* [yawl_hibernate.log](https://storage.googleapis.com/google-code-attachments/yawl/issue-540/comment-0/yawl_hibernate.log)
|
defect
|
resourceservice does not communicate properly with data source postgresql service what steps will reproduce the problem download and install postgresql using the postgresql windows installer and setting user postgres with passwd yawl and creating db yawl with pgadmin iii use tomcat try to login to the resourceservice what is the expected output what do you see instead expected is a login with an appropriate resourceservice form instead i get following error message missing or invalid organisational data source the resource service requires a connection to a valid data source that contains organisational data please check the settings of the orgdatasource parameter in the service s web xml to ensure a valid data source is set and or check the configuration properties set for the data source my own investigation shows that some part of yawls has succeeded in the db connection as the tables are created and thus existent setting hibernate logging to debug provided me with a logfile that does not contain any obvious errors what version of the product are you using on what operating system i manually installed the yawl coreservices zip package on windows pro please provide any additional information below i attach the hibernate logfile original issue reported on code google com by avh ba h gmail com on nov at attachments
| 1
|
233,977
| 25,793,357,534
|
IssuesEvent
|
2022-12-10 09:31:41
|
turkdevops/karma-jasmine
|
https://api.github.com/repos/turkdevops/karma-jasmine
|
closed
|
CVE-2021-27292 (High) detected in ua-parser-js-0.7.21.tgz - autoclosed
|
security vulnerability
|
## CVE-2021-27292 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>ua-parser-js-0.7.21.tgz</b></p></summary>
<p>Lightweight JavaScript-based user-agent string parser</p>
<p>Library home page: <a href="https://registry.npmjs.org/ua-parser-js/-/ua-parser-js-0.7.21.tgz">https://registry.npmjs.org/ua-parser-js/-/ua-parser-js-0.7.21.tgz</a></p>
<p>Path to dependency file: /package.json</p>
<p>Path to vulnerable library: /node_modules/ua-parser-js/package.json</p>
<p>
Dependency Hierarchy:
- karma-5.0.8.tgz (Root Library)
- :x: **ua-parser-js-0.7.21.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/turkdevops/karma-jasmine/commit/c7d3ef39e2adf00f4ceec1ffae2d36ce56c19cc0">c7d3ef39e2adf00f4ceec1ffae2d36ce56c19cc0</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
ua-parser-js >= 0.7.14, fixed in 0.7.24, uses a regular expression which is vulnerable to denial of service. If an attacker sends a malicious User-Agent header, ua-parser-js will get stuck processing it for an extended period of time.
<p>Publish Date: 2021-03-17
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2021-27292>CVE-2021-27292</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Release Date: 2021-03-17</p>
<p>Fix Resolution (ua-parser-js): 0.7.24</p>
<p>Direct dependency fix Resolution (karma): 6.0.0</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
True
|
CVE-2021-27292 (High) detected in ua-parser-js-0.7.21.tgz - autoclosed - ## CVE-2021-27292 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>ua-parser-js-0.7.21.tgz</b></p></summary>
<p>Lightweight JavaScript-based user-agent string parser</p>
<p>Library home page: <a href="https://registry.npmjs.org/ua-parser-js/-/ua-parser-js-0.7.21.tgz">https://registry.npmjs.org/ua-parser-js/-/ua-parser-js-0.7.21.tgz</a></p>
<p>Path to dependency file: /package.json</p>
<p>Path to vulnerable library: /node_modules/ua-parser-js/package.json</p>
<p>
Dependency Hierarchy:
- karma-5.0.8.tgz (Root Library)
- :x: **ua-parser-js-0.7.21.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/turkdevops/karma-jasmine/commit/c7d3ef39e2adf00f4ceec1ffae2d36ce56c19cc0">c7d3ef39e2adf00f4ceec1ffae2d36ce56c19cc0</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
ua-parser-js >= 0.7.14, fixed in 0.7.24, uses a regular expression which is vulnerable to denial of service. If an attacker sends a malicious User-Agent header, ua-parser-js will get stuck processing it for an extended period of time.
<p>Publish Date: 2021-03-17
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2021-27292>CVE-2021-27292</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Release Date: 2021-03-17</p>
<p>Fix Resolution (ua-parser-js): 0.7.24</p>
<p>Direct dependency fix Resolution (karma): 6.0.0</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
non_defect
|
cve high detected in ua parser js tgz autoclosed cve high severity vulnerability vulnerable library ua parser js tgz lightweight javascript based user agent string parser library home page a href path to dependency file package json path to vulnerable library node modules ua parser js package json dependency hierarchy karma tgz root library x ua parser js tgz vulnerable library found in head commit a href found in base branch master vulnerability details ua parser js fixed in uses a regular expression which is vulnerable to denial of service if an attacker sends a malicious user agent header ua parser js will get stuck processing it for an extended period of time publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version release date fix resolution ua parser js direct dependency fix resolution karma step up your open source security game with mend
| 0
|
13,095
| 2,732,897,005
|
IssuesEvent
|
2015-04-17 10:04:07
|
tiku01/oryx-editor
|
https://api.github.com/repos/tiku01/oryx-editor
|
closed
|
repo2: fields in Modelinfo should resize automatically (and must be resizable manually)
|
auto-migrated Priority-Medium Type-Defect
|
```
What steps will reproduce the problem?
1. select model in repo
2. move modelinfo frame on the right
3.
What is the expected output?
- the fields should resize when moving the frame
What do you see instead?
- fields have fixed size
Please provide any additional information below.
- ideally the editing field could also be resized, just like the one I am
typing in now
```
Original issue reported on code.google.com by `alexande...@googlemail.com` on 23 Nov 2008 at 12:48
Attachments:
* [editing fields fixed.tiff](https://storage.googleapis.com/google-code-attachments/oryx-editor/issue-312/comment-0/editing fields fixed.tiff)
|
1.0
|
repo2: fields in Modelinfo should resize automatically (and must be resizable manually) - ```
What steps will reproduce the problem?
1. select model in repo
2. move modelinfo frame on the right
3.
What is the expected output?
- the fields should resize when moving the frame
What do you see instead?
- fields have fixed size
Please provide any additional information below.
- ideally the editing field could also be resized, just like the one I am
typing in now
```
Original issue reported on code.google.com by `alexande...@googlemail.com` on 23 Nov 2008 at 12:48
Attachments:
* [editing fields fixed.tiff](https://storage.googleapis.com/google-code-attachments/oryx-editor/issue-312/comment-0/editing fields fixed.tiff)
|
defect
|
fields in modelinfo should resize automatically and must be resizable manually what steps will reproduce the problem select model in repo move modelinfo frame on the right what is the expected output the fields should resize when moving the frame what do you see instead fields have fixed size please provide any additional information below ideally the editing field could also be resized just like the one i am typing in now original issue reported on code google com by alexande googlemail com on nov at attachments fields fixed tiff
| 1
|
161,653
| 25,378,411,925
|
IssuesEvent
|
2022-11-21 15:42:16
|
metatablecat/Tabby
|
https://api.github.com/repos/metatablecat/Tabby
|
opened
|
Projects
|
enhancement status: needs design
|
Projects would be a new way of Tabby loading in plugin runtimes. Right now, Tabby does all of this in the backend with no developer input into the loading order (minus `Priority`) and configuration of the loader.
The solution?
## Projects
Projects would provide developers with the ability to load in fragments of different workspaces in a single plugin.
|
1.0
|
Projects - Projects would be a new way of Tabby loading in plugin runtimes. Right now, Tabby does all of this in the backend with no developer input into the loading order (minus `Priority`) and configuration of the loader.
The solution?
## Projects
Projects would provide developers with the ability to load in fragments of different workspaces in a single plugin.
|
non_defect
|
projects projects would be a new way of tabby loading in plugin runtimes right now tabby does all of this in the backend with no developer input into the loading order minus priority and configuration of the loader the solution projects projects would provide developers with the ability to load in fragments of different workspaces in a single plugin
| 0
|
699,145
| 24,006,499,044
|
IssuesEvent
|
2022-09-14 15:10:15
|
yugabyte/yugabyte-db
|
https://api.github.com/repos/yugabyte/yugabyte-db
|
closed
|
[DocDB] Log spew on rocksdb init
|
kind/enhancement good first issue area/docdb priority/medium
|
Jira Link: [DB-581](https://yugabyte.atlassian.net/browse/DB-581)
### Description
As part of https://phabricator.dev.yugabyte.com/D10932 / c9aee058dde24a08d669a98e996b0f8368c6cd86, a probably unintended side effect is that we are no longer actually updating the default values of some auto-tune flags, such as compaction threads. This leads to some log spew on rocksdb init (eg: tablet creation / truncate / etc)
|
1.0
|
[DocDB] Log spew on rocksdb init - Jira Link: [DB-581](https://yugabyte.atlassian.net/browse/DB-581)
### Description
As part of https://phabricator.dev.yugabyte.com/D10932 / c9aee058dde24a08d669a98e996b0f8368c6cd86, a probably unintended side effect is that we are no longer actually updating the default values of some auto-tune flags, such as compaction threads. This leads to some log spew on rocksdb init (eg: tablet creation / truncate / etc)
|
non_defect
|
log spew on rocksdb init jira link description as part of a probably unintended side effect is that we are no longer actually updating the default values of some auto tune flags such as compaction threads this leads to some log spew on rocksdb init eg tablet creation truncate etc
| 0
|
411,248
| 27,816,678,547
|
IssuesEvent
|
2023-03-18 19:13:05
|
Gobidev/pfetch-rs
|
https://api.github.com/repos/Gobidev/pfetch-rs
|
closed
|
Metrics wrong in readme
|
documentation
|
In the first line of the benchmark table, the mean is larger than the min and the max, and the max is smaller than the min.
https://github.com/Gobidev/pfetch-rs/blame/main/README.md#L62
|
1.0
|
Metrics wrong in readme - In the first line of the benchmark table, the mean is larger than the min and the max, and the max is smaller than the min.
https://github.com/Gobidev/pfetch-rs/blame/main/README.md#L62
|
non_defect
|
metrics wrong in readme in the first line of the benchmark table the mean is larger than the min and the max and the max is smaller than the min
| 0
|
26,948
| 4,839,659,068
|
IssuesEvent
|
2016-11-09 10:15:39
|
google/google-authenticator
|
https://api.github.com/repos/google/google-authenticator
|
closed
|
Specific google_authenticator key generation options do not allow auth
|
bug libpam Priority-Medium Type-Defect
|
Original [issue 394](https://code.google.com/p/google-authenticator/issues/detail?id=394) created by acesmythe on 2014-06-27T00:31:33.000Z:
<b>What steps will reproduce the problem?</b>
1. (CentOS 6.5) Install via the steps listed at http://www.techrepublic.com/blog/linux-and-open-source/two-factor-ssh-authentication-via-google-secures-linux-logins/ (Short version; get the git, compile, edit the necessary pam and sshd configs, restart sshd)
2. Generate new time-based codes with google_authenticator enabling either rate-limiting or disallowing multiple uses
<b>What is the expected output? What do you see instead?</b>
If either of those options are selected, the generated codes do not allow me to authenticate. If they are both (and only both) disabled, authentication works fine (showing that this is not a time-related issue).
An interesting note is that even the backup codes fail to function with this issue. With both items disabled, backup codes work fine.
<b>What version of the product are you using? On what operating system?</b>
- CentOS 6.5
- Google Authenticator PAM module (1d0bf2e6cff7) from https://code.google.com/p/google-authenticator/source/browse/?r=1d0bf2e6cff7a5e503580d29ca33634ce09386ca
- OpenSSH_5.3p1
<b>Please provide any additional information below.</b>
|
1.0
|
Specific google_authenticator key generation options do not allow auth - Original [issue 394](https://code.google.com/p/google-authenticator/issues/detail?id=394) created by acesmythe on 2014-06-27T00:31:33.000Z:
<b>What steps will reproduce the problem?</b>
1. (CentOS 6.5) Install via the steps listed at http://www.techrepublic.com/blog/linux-and-open-source/two-factor-ssh-authentication-via-google-secures-linux-logins/ (Short version; get the git, compile, edit the necessary pam and sshd configs, restart sshd)
2. Generate new time-based codes with google_authenticator enabling either rate-limiting or disallowing multiple uses
<b>What is the expected output? What do you see instead?</b>
If either of those options are selected, the generated codes do not allow me to authenticate. If they are both (and only both) disabled, authentication works fine (showing that this is not a time-related issue).
An interesting note is that even the backup codes fail to function with this issue. With both items disabled, backup codes work fine.
<b>What version of the product are you using? On what operating system?</b>
- CentOS 6.5
- Google Authenticator PAM module (1d0bf2e6cff7) from https://code.google.com/p/google-authenticator/source/browse/?r=1d0bf2e6cff7a5e503580d29ca33634ce09386ca
- OpenSSH_5.3p1
<b>Please provide any additional information below.</b>
|
defect
|
specific google authenticator key generation options do not allow auth original created by acesmythe on what steps will reproduce the problem centos install via the steps listed at short version get the git compile edit the necessary pam and sshd configs restart sshd generate new time based codes with google authenticator enabling either rate limiting or disallowing multiple uses what is the expected output what do you see instead if either of those options are selected the generated codes do not allow me to authenticate if they are both and only both disabled authentication works fine showing that this is not a time related issue an interesting note is that even the backup codes fail to function with this issue with both items disabled backup codes work fine what version of the product are you using on what operating system centos google authenticator pam module from openssh please provide any additional information below
| 1
|
6,299
| 2,610,239,994
|
IssuesEvent
|
2015-02-26 19:16:32
|
chrsmith/jsjsj122
|
https://api.github.com/repos/chrsmith/jsjsj122
|
opened
|
台州割包茎手术到哪家医院比较好
|
auto-migrated Priority-Medium Type-Defect
|
```
台州割包茎手术到哪家医院比较好【台州五洲生殖医院】24小
时健康咨询热线:0576-88066933-(扣扣800080609)-(微信号tzwzszyy)医院�
��址:台州市椒江区枫南路229号(枫南大转盘旁)乘车线路:乘�
��104、108、118、198及椒江一金清公交车直达枫南小区,乘坐107
、105、109、112、901、
902公交车到星星广场下车,步行即可到院。
诊疗项目:阳痿,早泄,前列腺炎,前列腺增生,龟头炎,��
�精,无精。包皮包茎,精索静脉曲张,淋病等。
台州五洲生殖医院是台州最大的男科医院,权威专家在线免��
�咨询,拥有专业完善的男科检查治疗设备,严格按照国家标�
��收费。尖端医疗设备,与世界同步。权威专家,成就专业典
范。人性化服务,一切以患者为中心。
看男科就选台州五洲生殖医院,专业男科为男人。
```
-----
Original issue reported on code.google.com by `poweragr...@gmail.com` on 30 May 2014 at 11:55
|
1.0
|
台州割包茎手术到哪家医院比较好 - ```
台州割包茎手术到哪家医院比较好【台州五洲生殖医院】24小
时健康咨询热线:0576-88066933-(扣扣800080609)-(微信号tzwzszyy)医院�
��址:台州市椒江区枫南路229号(枫南大转盘旁)乘车线路:乘�
��104、108、118、198及椒江一金清公交车直达枫南小区,乘坐107
、105、109、112、901、
902公交车到星星广场下车,步行即可到院。
诊疗项目:阳痿,早泄,前列腺炎,前列腺增生,龟头炎,��
�精,无精。包皮包茎,精索静脉曲张,淋病等。
台州五洲生殖医院是台州最大的男科医院,权威专家在线免��
�咨询,拥有专业完善的男科检查治疗设备,严格按照国家标�
��收费。尖端医疗设备,与世界同步。权威专家,成就专业典
范。人性化服务,一切以患者为中心。
看男科就选台州五洲生殖医院,专业男科为男人。
```
-----
Original issue reported on code.google.com by `poweragr...@gmail.com` on 30 May 2014 at 11:55
|
defect
|
台州割包茎手术到哪家医院比较好 台州割包茎手术到哪家医院比较好【台州五洲生殖医院】 时健康咨询热线 微信号tzwzszyy 医院� ��址 (枫南大转盘旁)乘车线路 乘� �� 、 、 、 , 、 、 、 、 、 ,步行即可到院。 诊疗项目:阳痿,早泄,前列腺炎,前列腺增生,龟头炎,�� �精,无精。包皮包茎,精索静脉曲张,淋病等。 台州五洲生殖医院是台州最大的男科医院,权威专家在线免�� �咨询,拥有专业完善的男科检查治疗设备,严格按照国家标� ��收费。尖端医疗设备,与世界同步。权威专家,成就专业典 范。人性化服务,一切以患者为中心。 看男科就选台州五洲生殖医院,专业男科为男人。 original issue reported on code google com by poweragr gmail com on may at
| 1
|
208,336
| 15,886,398,581
|
IssuesEvent
|
2021-04-09 22:32:07
|
microsoft/AzureStorageExplorer
|
https://api.github.com/repos/microsoft/AzureStorageExplorer
|
closed
|
The translations of the string 'Blob Container' are inconsistent between strings 'Blob Container' and 'Blob Container (ADLS Gen2)' on the 'Attach with Azure AD' dialog
|
🌐 localization 🧪 testing
|
**Storage Explorer Version:** 1.15.1
**Build**: 20200904.2
**Branch**: main
**Platform/OS:** Windows 10/ Linux Ubuntu 16.04 / MacOS Catalina
**Architecture**: ia32/x64
**Language:** German
**Regression From:** Not a regression
**Steps to reproduce:**
1. Launch Storage Explorer.
2. Open 'Settings' -> Application (Regional Settings) -> Select 'Deutsch' -> Restart Storage Explorer.
3. Open connect dialog -> Select the localized option 'Add a resource via Azure Active Directory (Azure AD)' -> Click 'Next'.
4. Select one account -> Click 'Next' -> Expand the dropdown list of the 'Resource type' on the 'Attach with Azure AD' dialog.
5. Check the localized string.
**Expect Experience:**
The translations of the string 'Blob Container' are consistent between strings 'Blob Container' and 'Blob Container (ADLS Gen2)'.
**Actual Experience:**
The translations of the string 'Blob Container' are inconsistent between strings 'Blob Container' and 'Blob Container (ADLS Gen2)'

**More Info:**
1. This issue doesn't reproduce on other languages.
2. The screenshot in ENU.

|
1.0
|
The translations of the string 'Blob Container' are inconsistent between strings 'Blob Container' and 'Blob Container (ADLS Gen2)' on the 'Attach with Azure AD' dialog - **Storage Explorer Version:** 1.15.1
**Build**: 20200904.2
**Branch**: main
**Platform/OS:** Windows 10/ Linux Ubuntu 16.04 / MacOS Catalina
**Architecture**: ia32/x64
**Language:** German
**Regression From:** Not a regression
**Steps to reproduce:**
1. Launch Storage Explorer.
2. Open 'Settings' -> Application (Regional Settings) -> Select 'Deutsch' -> Restart Storage Explorer.
3. Open connect dialog -> Select the localized option 'Add a resource via Azure Active Directory (Azure AD)' -> Click 'Next'.
4. Select one account -> Click 'Next' -> Expand the dropdown list of the 'Resource type' on the 'Attach with Azure AD' dialog.
5. Check the localized string.
**Expect Experience:**
The translations of the string 'Blob Container' are consistent between strings 'Blob Container' and 'Blob Container (ADLS Gen2)'.
**Actual Experience:**
The translations of the string 'Blob Container' are inconsistent between strings 'Blob Container' and 'Blob Container (ADLS Gen2)'

**More Info:**
1. This issue doesn't reproduce on other languages.
2. The screenshot in ENU.

|
non_defect
|
the translations of the string blob container are inconsistent between strings blob container and blob container adls on the attach with azure ad dialog storage explorer version build branch main platform os windows linux ubuntu macos catalina architecture language german regression from not a regression steps to reproduce launch storage explorer open settings application regional settings select deutsch restart storage explorer open connect dialog select the localized option add a resource via azure active directory azure ad click next select one account click next expand the dropdown list of the resource type on the attach with azure ad dialog check the localized string expect experience the translations of the string blob container are consistent between strings blob container and blob container adls actual experience the translations of the string blob container are inconsistent between strings blob container and blob container adls more info this issue doesn t reproduce on other languages the screenshot in enu
| 0
|
22,546
| 3,665,357,520
|
IssuesEvent
|
2016-02-19 15:47:13
|
dart-lang/sdk
|
https://api.github.com/repos/dart-lang/sdk
|
closed
|
Exception if dart:async not specified in _embedder.yaml
|
analyzer-stability area-analyzer priority-high Type-Defect
|
Given the following ```_embedder.yaml``` file
```
embedder_libs:
'dart:core': '/absolute/path/to/fletch-sdk/internal/dart_lib/lib/core/core.dart'
# 'dart:async': '/absolute/path/to/any/dart-sdk/async/async.dart'
'dart:fletch': '/absolute/path/to/fletch-sdk/internal/fletch_lib/lib/fletch/fletch.dart'
analyzer:
language:
enableAsync: false
```
the analyzer throws the exception shown below. If the ```dart:async``` line in the ```_embedder.yaml``` file above is uncommented, then the exception does not occur.
```
Invalid input descriptor (null, LIBRARY_ELEMENT3) for Run BuildTypeProviderTask on Instance of 'AnalysisContextTarget' Path: Run LibraryErrorsReadyTask on /Users/danrubel/work/git/dartino/projects/fletch-test/bin/hello.dart| Run DartErrorsTask on /Users/danrubel/work/git/dartino/projects/fletch-test/bin/hello.dart| Run LibraryUnitErrorsTask on /Users/danrubel/work/git/dartino/projects/fletch-test/bin/hello.dart in /Users/danrubel/work/git/dartino/projects/fletch-test/bin/hello.dart| Run GenerateHintsTask on /Users/danrubel/work/git/dartino/projects/fletch-test/bin/hello.dart in /Users/danrubel/work/git/dartino/projects/fletch-test/bin/hello.dart| Run StrongModeVerifyUnitTask on /Users/danrubel/work/git/dartino/projects/fletch-test/bin/hello.dart in /Users/danrubel/work/git/dartino/projects/fletch-test/bin/hello.dart| Run EvaluateUnitConstantsTask on /Users/danrubel/work/git/dartino/projects/fletch-test/bin/hello.dart in /Users/danrubel/work/git/dartino/projects/fletch-test/bin/hello.dart| Run ResolveLibraryReferencesTask on /Users/danrubel/work/git/dartino/projects/fletch-test/bin/hello.dart| Run PropagateVariableTypesInLibraryClosureTask on /Users/danrubel/work/git/dartino/projects/fletch-test/bin/hello.dart| Run ReadyLibraryElement6Task on /Users/danrubel/work/git/dartino/projects/fletch-test/bin/hello.dart| Run PropagateVariableTypesInLibraryTask on /Users/danrubel/work/git/dartino/projects/fletch-test/bin/hello.dart| Run PropagateVariableTypesInUnitTask on /Users/danrubel/work/git/dartino/projects/fletch-test/bin/hello.dart in /Users/danrubel/work/git/dartino/projects/fletch-test/bin/hello.dart| Run PartiallyResolveUnitReferencesTask on /Users/danrubel/work/git/dartino/projects/fletch-test/bin/hello.dart in /Users/danrubel/work/git/dartino/projects/fletch-test/bin/hello.dart| Run ReadyLibraryElement5Task on /Users/danrubel/work/git/dartino/projects/fletch-test/bin/hello.dart| Run ResolveLibraryTypeNamesTask on /Users/danrubel/work/git/dartino/projects/fletch-test/bin/hello.dart| Run ResolveUnitTypeNamesTask on /Users/danrubel/work/git/dartino/projects/fletch-test/bin/hello.dart in /Users/danrubel/work/git/dartino/projects/fletch-test/bin/hello.dart| Run BuildEnumMemberElementsTask on /Users/danrubel/work/git/dartino/projects/fletch-test/bin/hello.dart in /Users/danrubel/work/git/dartino/projects/fletch-test/bin/hello.dart| Run BuildTypeProviderTask on Instance of 'AnalysisContextTarget'
#0 WorkItem.gatherInputs (package:analyzer/src/task/driver.dart:671:11)
#1 _WorkOrderDependencyWalker.getNextInput (package:analyzer/src/task/driver.dart:821:17)
#2 CycleAwareDependencyWalker.getNextStronglyConnectedComponent (package:analyzer/src/task/driver.dart:395:35)
#3 WorkOrder.moveNext. (package:analyzer/src/task/driver.dart:787:31)
#4 _PerformanceTagImpl.makeCurrentWhile (package:analyzer/src/generated/utilities_general.dart:194:15)
#5 WorkOrder.moveNext (package:analyzer/src/task/driver.dart:779:44)
#6 AnalysisDriver.performAnalysisTask (package:analyzer/src/task/driver.dart:245:35)
#7 AnalysisContextImpl.performAnalysisTask. (package:analyzer/src/context/context.dart:1122:27)
#8 _PerformanceTagImpl.makeCurrentWhile (package:analyzer/src/generated/utilities_general.dart:194:15)
#9 AnalysisContextImpl.performAnalysisTask (package:analyzer/src/context/context.dart:1120:50)
#10 PerformAnalysisOperation.perform (package:analysis_server/src/operation/operation_analysis.dart:371:37)
#11 AnalysisServer.performOperation (package:analysis_server/src/analysis_server.dart:807:17)
#12 Future.Future. (dart:async/future.dart:118)
#13 _rootRun (dart:async/zone.dart:903)
#14 _CustomZone.run (dart:async/zone.dart:802)
#15 _CustomZone.runGuarded (dart:async/zone.dart:708)
#16 _CustomZone.bindCallback. (dart:async/zone.dart:733)
#17 _rootRun (dart:async/zone.dart:907)
#18 _CustomZone.run (dart:async/zone.dart:802)
#19 _CustomZone.runGuarded (dart:async/zone.dart:708)
#20 _CustomZone.bindCallback. (dart:async/zone.dart:733)
#21 Timer._createTimer. (dart:async-patch/timer_patch.dart:16)
#22 _Timer._runTimers (dart:isolate-patch/timer_impl.dart:385)
#23 _Timer._handleMessage (dart:isolate-patch/timer_impl.dart:414)
#24 _RawReceivePortImpl._handleMessage (dart:isolate-patch/isolate_patch.dart:150)
```
|
1.0
|
Exception if dart:async not specified in _embedder.yaml - Given the following ```_embedder.yaml``` file
```
embedder_libs:
'dart:core': '/absolute/path/to/fletch-sdk/internal/dart_lib/lib/core/core.dart'
# 'dart:async': '/absolute/path/to/any/dart-sdk/async/async.dart'
'dart:fletch': '/absolute/path/to/fletch-sdk/internal/fletch_lib/lib/fletch/fletch.dart'
analyzer:
language:
enableAsync: false
```
the analyzer throws the exception shown below. If the ```dart:async``` line in the ```_embedder.yaml``` file above is uncommented, then the exception does not occur.
```
Invalid input descriptor (null, LIBRARY_ELEMENT3) for Run BuildTypeProviderTask on Instance of 'AnalysisContextTarget' Path: Run LibraryErrorsReadyTask on /Users/danrubel/work/git/dartino/projects/fletch-test/bin/hello.dart| Run DartErrorsTask on /Users/danrubel/work/git/dartino/projects/fletch-test/bin/hello.dart| Run LibraryUnitErrorsTask on /Users/danrubel/work/git/dartino/projects/fletch-test/bin/hello.dart in /Users/danrubel/work/git/dartino/projects/fletch-test/bin/hello.dart| Run GenerateHintsTask on /Users/danrubel/work/git/dartino/projects/fletch-test/bin/hello.dart in /Users/danrubel/work/git/dartino/projects/fletch-test/bin/hello.dart| Run StrongModeVerifyUnitTask on /Users/danrubel/work/git/dartino/projects/fletch-test/bin/hello.dart in /Users/danrubel/work/git/dartino/projects/fletch-test/bin/hello.dart| Run EvaluateUnitConstantsTask on /Users/danrubel/work/git/dartino/projects/fletch-test/bin/hello.dart in /Users/danrubel/work/git/dartino/projects/fletch-test/bin/hello.dart| Run ResolveLibraryReferencesTask on /Users/danrubel/work/git/dartino/projects/fletch-test/bin/hello.dart| Run PropagateVariableTypesInLibraryClosureTask on /Users/danrubel/work/git/dartino/projects/fletch-test/bin/hello.dart| Run ReadyLibraryElement6Task on /Users/danrubel/work/git/dartino/projects/fletch-test/bin/hello.dart| Run PropagateVariableTypesInLibraryTask on /Users/danrubel/work/git/dartino/projects/fletch-test/bin/hello.dart| Run PropagateVariableTypesInUnitTask on /Users/danrubel/work/git/dartino/projects/fletch-test/bin/hello.dart in /Users/danrubel/work/git/dartino/projects/fletch-test/bin/hello.dart| Run PartiallyResolveUnitReferencesTask on /Users/danrubel/work/git/dartino/projects/fletch-test/bin/hello.dart in /Users/danrubel/work/git/dartino/projects/fletch-test/bin/hello.dart| Run ReadyLibraryElement5Task on /Users/danrubel/work/git/dartino/projects/fletch-test/bin/hello.dart| Run ResolveLibraryTypeNamesTask on /Users/danrubel/work/git/dartino/projects/fletch-test/bin/hello.dart| Run ResolveUnitTypeNamesTask on /Users/danrubel/work/git/dartino/projects/fletch-test/bin/hello.dart in /Users/danrubel/work/git/dartino/projects/fletch-test/bin/hello.dart| Run BuildEnumMemberElementsTask on /Users/danrubel/work/git/dartino/projects/fletch-test/bin/hello.dart in /Users/danrubel/work/git/dartino/projects/fletch-test/bin/hello.dart| Run BuildTypeProviderTask on Instance of 'AnalysisContextTarget'
#0 WorkItem.gatherInputs (package:analyzer/src/task/driver.dart:671:11)
#1 _WorkOrderDependencyWalker.getNextInput (package:analyzer/src/task/driver.dart:821:17)
#2 CycleAwareDependencyWalker.getNextStronglyConnectedComponent (package:analyzer/src/task/driver.dart:395:35)
#3 WorkOrder.moveNext. (package:analyzer/src/task/driver.dart:787:31)
#4 _PerformanceTagImpl.makeCurrentWhile (package:analyzer/src/generated/utilities_general.dart:194:15)
#5 WorkOrder.moveNext (package:analyzer/src/task/driver.dart:779:44)
#6 AnalysisDriver.performAnalysisTask (package:analyzer/src/task/driver.dart:245:35)
#7 AnalysisContextImpl.performAnalysisTask. (package:analyzer/src/context/context.dart:1122:27)
#8 _PerformanceTagImpl.makeCurrentWhile (package:analyzer/src/generated/utilities_general.dart:194:15)
#9 AnalysisContextImpl.performAnalysisTask (package:analyzer/src/context/context.dart:1120:50)
#10 PerformAnalysisOperation.perform (package:analysis_server/src/operation/operation_analysis.dart:371:37)
#11 AnalysisServer.performOperation (package:analysis_server/src/analysis_server.dart:807:17)
#12 Future.Future. (dart:async/future.dart:118)
#13 _rootRun (dart:async/zone.dart:903)
#14 _CustomZone.run (dart:async/zone.dart:802)
#15 _CustomZone.runGuarded (dart:async/zone.dart:708)
#16 _CustomZone.bindCallback. (dart:async/zone.dart:733)
#17 _rootRun (dart:async/zone.dart:907)
#18 _CustomZone.run (dart:async/zone.dart:802)
#19 _CustomZone.runGuarded (dart:async/zone.dart:708)
#20 _CustomZone.bindCallback. (dart:async/zone.dart:733)
#21 Timer._createTimer. (dart:async-patch/timer_patch.dart:16)
#22 _Timer._runTimers (dart:isolate-patch/timer_impl.dart:385)
#23 _Timer._handleMessage (dart:isolate-patch/timer_impl.dart:414)
#24 _RawReceivePortImpl._handleMessage (dart:isolate-patch/isolate_patch.dart:150)
```
|
defect
|
exception if dart async not specified in embedder yaml given the following embedder yaml file embedder libs dart core absolute path to fletch sdk internal dart lib lib core core dart dart async absolute path to any dart sdk async async dart dart fletch absolute path to fletch sdk internal fletch lib lib fletch fletch dart analyzer language enableasync false the analyzer throws the exception shown below if the dart async line in the embedder yaml file above is uncommented then the exception does not occur invalid input descriptor null library for run buildtypeprovidertask on instance of analysiscontexttarget path run libraryerrorsreadytask on users danrubel work git dartino projects fletch test bin hello dart run darterrorstask on users danrubel work git dartino projects fletch test bin hello dart run libraryuniterrorstask on users danrubel work git dartino projects fletch test bin hello dart in users danrubel work git dartino projects fletch test bin hello dart run generatehintstask on users danrubel work git dartino projects fletch test bin hello dart in users danrubel work git dartino projects fletch test bin hello dart run strongmodeverifyunittask on users danrubel work git dartino projects fletch test bin hello dart in users danrubel work git dartino projects fletch test bin hello dart run evaluateunitconstantstask on users danrubel work git dartino projects fletch test bin hello dart in users danrubel work git dartino projects fletch test bin hello dart run resolvelibraryreferencestask on users danrubel work git dartino projects fletch test bin hello dart run propagatevariabletypesinlibraryclosuretask on users danrubel work git dartino projects fletch test bin hello dart run on users danrubel work git dartino projects fletch test bin hello dart run propagatevariabletypesinlibrarytask on users danrubel work git dartino projects fletch test bin hello dart run propagatevariabletypesinunittask on users danrubel work git dartino projects fletch test bin hello dart in users danrubel work git dartino projects fletch test bin hello dart run partiallyresolveunitreferencestask on users danrubel work git dartino projects fletch test bin hello dart in users danrubel work git dartino projects fletch test bin hello dart run on users danrubel work git dartino projects fletch test bin hello dart run resolvelibrarytypenamestask on users danrubel work git dartino projects fletch test bin hello dart run resolveunittypenamestask on users danrubel work git dartino projects fletch test bin hello dart in users danrubel work git dartino projects fletch test bin hello dart run buildenummemberelementstask on users danrubel work git dartino projects fletch test bin hello dart in users danrubel work git dartino projects fletch test bin hello dart run buildtypeprovidertask on instance of analysiscontexttarget workitem gatherinputs package analyzer src task driver dart workorderdependencywalker getnextinput package analyzer src task driver dart cycleawaredependencywalker getnextstronglyconnectedcomponent package analyzer src task driver dart workorder movenext package analyzer src task driver dart performancetagimpl makecurrentwhile package analyzer src generated utilities general dart workorder movenext package analyzer src task driver dart analysisdriver performanalysistask package analyzer src task driver dart analysiscontextimpl performanalysistask package analyzer src context context dart performancetagimpl makecurrentwhile package analyzer src generated utilities general dart analysiscontextimpl performanalysistask package analyzer src context context dart performanalysisoperation perform package analysis server src operation operation analysis dart analysisserver performoperation package analysis server src analysis server dart future future dart async future dart rootrun dart async zone dart customzone run dart async zone dart customzone runguarded dart async zone dart customzone bindcallback dart async zone dart rootrun dart async zone dart customzone run dart async zone dart customzone runguarded dart async zone dart customzone bindcallback dart async zone dart timer createtimer dart async patch timer patch dart timer runtimers dart isolate patch timer impl dart timer handlemessage dart isolate patch timer impl dart rawreceiveportimpl handlemessage dart isolate patch isolate patch dart
| 1
|
799,555
| 28,309,281,615
|
IssuesEvent
|
2023-04-10 14:03:04
|
status-im/status-mobile
|
https://api.github.com/repos/status-im/status-mobile
|
opened
|
The message can't be edited/replied/removed on the 'Pinned messages' section on the 'group detailed' page
|
bug low-priority group-chat
|
#### Steps to reproduce:
1. Create a group chat
2. Send the message inside the current chat
3. Pin the message
4. Go to the message list
5. Long tap on the group chat -> tap the 'Group details' option
6. Open the 'Pinned message' section
7. Long tap on the message
8. Select the following options:
- Edit message
- Reply
- Delete for everyone
#### Actual result:
- 'null is not an object' error is shown when 'edit message' is tapped
- Reply creation flow is not opened when 'reply' is tapped
- Message is not removed when 'delete for everyone' option is tapped
https://user-images.githubusercontent.com/52490791/230916573-9d4d3b07-762e-4912-b66d-3784b6800de3.mp4
#### Expected result:
- User is navigated to edit message flow when 'edit' option is tapped

- User is navigated to reply message flow when 'reply' option is tapped

- Message is removed when 'delete for everyone' option is tapped
|
1.0
|
The message can't be edited/replied/removed on the 'Pinned messages' section on the 'group detailed' page - #### Steps to reproduce:
1. Create a group chat
2. Send the message inside the current chat
3. Pin the message
4. Go to the message list
5. Long tap on the group chat -> tap the 'Group details' option
6. Open the 'Pinned message' section
7. Long tap on the message
8. Select the following options:
- Edit message
- Reply
- Delete for everyone
#### Actual result:
- 'null is not an object' error is shown when 'edit message' is tapped
- Reply creation flow is not opened when 'reply' is tapped
- Message is not removed when 'delete for everyone' option is tapped
https://user-images.githubusercontent.com/52490791/230916573-9d4d3b07-762e-4912-b66d-3784b6800de3.mp4
#### Expected result:
- User is navigated to edit message flow when 'edit' option is tapped

- User is navigated to reply message flow when 'reply' option is tapped

- Message is removed when 'delete for everyone' option is tapped
|
non_defect
|
the message can t be edited replied removed on the pinned messages section on the group detailed page steps to reproduce create a group chat send the message inside the current chat pin the message go to the message list long tap on the group chat tap the group details option open the pinned message section long tap on the message select the following options edit message reply delete for everyone actual result null is not an object error is shown when edit message is tapped reply creation flow is not opened when reply is tapped message is not removed when delete for everyone option is tapped expected result user is navigated to edit message flow when edit option is tapped user is navigated to reply message flow when reply option is tapped message is removed when delete for everyone option is tapped
| 0
|
48,693
| 13,184,719,448
|
IssuesEvent
|
2020-08-12 19:58:12
|
icecube-trac/tix3
|
https://api.github.com/repos/icecube-trac/tix3
|
opened
|
cmake goodies from trunk for V01-11-02 (Trac #66)
|
Incomplete Migration Migrated from Trac defect offline-software
|
<details>
<summary>_Migrated from https://code.icecube.wisc.edu/ticket/66
, reported by blaufuss and owned by _</summary>
<p>
```json
{
"status": "closed",
"changetime": "2007-11-11T03:51:18",
"description": "esp. stuff from rev #33235 (svn info in ENGLISH, por favor)",
"reporter": "blaufuss",
"cc": "",
"resolution": "fixed",
"_ts": "1194753078000000",
"component": "offline-software",
"summary": "cmake goodies from trunk for V01-11-02",
"priority": "normal",
"keywords": "",
"time": "2007-06-15T03:01:29",
"milestone": "",
"owner": "",
"type": "defect"
}
```
</p>
</details>
|
1.0
|
cmake goodies from trunk for V01-11-02 (Trac #66) - <details>
<summary>_Migrated from https://code.icecube.wisc.edu/ticket/66
, reported by blaufuss and owned by _</summary>
<p>
```json
{
"status": "closed",
"changetime": "2007-11-11T03:51:18",
"description": "esp. stuff from rev #33235 (svn info in ENGLISH, por favor)",
"reporter": "blaufuss",
"cc": "",
"resolution": "fixed",
"_ts": "1194753078000000",
"component": "offline-software",
"summary": "cmake goodies from trunk for V01-11-02",
"priority": "normal",
"keywords": "",
"time": "2007-06-15T03:01:29",
"milestone": "",
"owner": "",
"type": "defect"
}
```
</p>
</details>
|
defect
|
cmake goodies from trunk for trac migrated from reported by blaufuss and owned by json status closed changetime description esp stuff from rev svn info in english por favor reporter blaufuss cc resolution fixed ts component offline software summary cmake goodies from trunk for priority normal keywords time milestone owner type defect
| 1
|
38,032
| 8,638,487,236
|
IssuesEvent
|
2018-11-23 14:55:46
|
contao/contao
|
https://api.github.com/repos/contao/contao
|
closed
|
Wartungsmodus führt zu "Es ist ein Fehler aufgetreten"
|
defect
|
Ich habe das Problem von dem geschlossenen Issue https://github.com/contao/core-bundle/issues/1307 auch in Contao 4.6:
Bei mir unter Contao 4.6.7, PHP7.2 tritt das Problem auch auf.
Der letzte Eintrag aus der log-Datei ist folgender:
`[2018-11-08 21:13:32] request.CRITICAL: Uncaught PHP Exception RuntimeException: "Error when rendering "http://www.DOMAIN.de/_fragment?_hash=%2FqBeYf5LY3DkpKxI06SLYZn9I13vTAQeKy5ZMNA%2FvzQ%3D&_path=insertTag%3D%257B%257Bua%253A%253Aclass%257D%257D%26_format%3Dhtml%26_locale%3Dde%26_controller%3Dcontao.controller.insert_tags%253ArenderAction&clientCache=0&pageId=17&request=netzwerk-junge-wissenschaft.html" (Status code is 503)." at /var/www/xxx/hostingxxx/httpdocs/contao/vendor/symfony/http-kernel/HttpCache/AbstractSurrogate.php line 99 {"exception":"[object] (RuntimeException(code: 0): Error when rendering \"http://www.DOMAIN.de/_fragment?_hash=%2FqBeYf5LY3DkpKxI06SLYZn9I13vTAQeKy5ZMNA%2FvzQ%3D&_path=insertTag%3D%257B%257Bua%253A%253Aclass%257D%257D%26_format%3Dhtml%26_locale%3Dde%26_controller%3Dcontao.controller.insert_tags%253ArenderAction&clientCache=0&pageId=17&request=netzwerk-junge-wissenschaft.html\" (Status code is 503). at /var/www/xxx/hostingxxx/httpdocs/contao/vendor/symfony/http-kernel/HttpCache/AbstractSurrogate.php:99)"} []`
Auch der Quelltext der 500-Fehlerseite sieht noch sehr komisch aus (er fängt in der Klasse an den Fehlercode-Template zu schreiben):
```
<body id="top" class="<!DOCTYPE html>
<html>
<head>
<meta charset="UTF-8" />
<title>An Error Occurred: Internal Server Error</title>
</head>
<body>
<h1>Oops! An Error Occurred</h1>
<h2>The server returned a "500 Internal Server Error".</h2>
<div>
Something is broken. Please let us know what you were doing when this error occurred.
We will fix it as soon as possible. Sorry for any inconvenience caused.
</div>
</body>
```
|
1.0
|
Wartungsmodus führt zu "Es ist ein Fehler aufgetreten" - Ich habe das Problem von dem geschlossenen Issue https://github.com/contao/core-bundle/issues/1307 auch in Contao 4.6:
Bei mir unter Contao 4.6.7, PHP7.2 tritt das Problem auch auf.
Der letzte Eintrag aus der log-Datei ist folgender:
`[2018-11-08 21:13:32] request.CRITICAL: Uncaught PHP Exception RuntimeException: "Error when rendering "http://www.DOMAIN.de/_fragment?_hash=%2FqBeYf5LY3DkpKxI06SLYZn9I13vTAQeKy5ZMNA%2FvzQ%3D&_path=insertTag%3D%257B%257Bua%253A%253Aclass%257D%257D%26_format%3Dhtml%26_locale%3Dde%26_controller%3Dcontao.controller.insert_tags%253ArenderAction&clientCache=0&pageId=17&request=netzwerk-junge-wissenschaft.html" (Status code is 503)." at /var/www/xxx/hostingxxx/httpdocs/contao/vendor/symfony/http-kernel/HttpCache/AbstractSurrogate.php line 99 {"exception":"[object] (RuntimeException(code: 0): Error when rendering \"http://www.DOMAIN.de/_fragment?_hash=%2FqBeYf5LY3DkpKxI06SLYZn9I13vTAQeKy5ZMNA%2FvzQ%3D&_path=insertTag%3D%257B%257Bua%253A%253Aclass%257D%257D%26_format%3Dhtml%26_locale%3Dde%26_controller%3Dcontao.controller.insert_tags%253ArenderAction&clientCache=0&pageId=17&request=netzwerk-junge-wissenschaft.html\" (Status code is 503). at /var/www/xxx/hostingxxx/httpdocs/contao/vendor/symfony/http-kernel/HttpCache/AbstractSurrogate.php:99)"} []`
Auch der Quelltext der 500-Fehlerseite sieht noch sehr komisch aus (er fängt in der Klasse an den Fehlercode-Template zu schreiben):
```
<body id="top" class="<!DOCTYPE html>
<html>
<head>
<meta charset="UTF-8" />
<title>An Error Occurred: Internal Server Error</title>
</head>
<body>
<h1>Oops! An Error Occurred</h1>
<h2>The server returned a "500 Internal Server Error".</h2>
<div>
Something is broken. Please let us know what you were doing when this error occurred.
We will fix it as soon as possible. Sorry for any inconvenience caused.
</div>
</body>
```
|
defect
|
wartungsmodus führt zu es ist ein fehler aufgetreten ich habe das problem von dem geschlossenen issue auch in contao bei mir unter contao tritt das problem auch auf der letzte eintrag aus der log datei ist folgender request critical uncaught php exception runtimeexception error when rendering status code is at var www xxx hostingxxx httpdocs contao vendor symfony http kernel httpcache abstractsurrogate php line exception runtimeexception code error when rendering status code is at var www xxx hostingxxx httpdocs contao vendor symfony http kernel httpcache abstractsurrogate php auch der quelltext der fehlerseite sieht noch sehr komisch aus er fängt in der klasse an den fehlercode template zu schreiben an error occurred internal server error oops an error occurred the server returned a internal server error something is broken please let us know what you were doing when this error occurred we will fix it as soon as possible sorry for any inconvenience caused
| 1
|
186,806
| 6,742,777,974
|
IssuesEvent
|
2017-10-20 09:12:48
|
HabitRPG/habitica
|
https://api.github.com/repos/HabitRPG/habitica
|
closed
|
Post Redesign Cleanup
|
priority: minor status: issue: on hold type: website improvement
|
The redesign will happen in the `develop` branch so the old client will stay as long as it's not complete, after it's we'll have to do a bit of cleanup.
This list is not complete, we'll probably add new items:
- [ ] Remove Bower (and relative code, post install scripts, ...)
- [ ] Remove unused assets
- [ ] Remove common code that was only used on the client (and relative tests)
- [ ] Remove `/dist`
- [ ] Remove old Grunt tasks no longer used
- [ ] Remove old client tests (integration and unit)
- [ ] Remove test/server_side, test/migrations
- [ ] Remove all exceptions to /tests and /common from .eslintignore
- [ ] /test/content -> test/common/content
- [ ] Remove browserify's code and dependency
- [ ] Remove /common/dist if we manage to put spritesmith into the build step (anyway remove the generated js)
- [ ] Remove /common/public
- [ ] Remove unused npm packages (https://github.com/depcheck/depcheck)
- [ ] update npm deps
- [ ] devDeps vs deps in package.json
|
1.0
|
Post Redesign Cleanup - The redesign will happen in the `develop` branch so the old client will stay as long as it's not complete, after it's we'll have to do a bit of cleanup.
This list is not complete, we'll probably add new items:
- [ ] Remove Bower (and relative code, post install scripts, ...)
- [ ] Remove unused assets
- [ ] Remove common code that was only used on the client (and relative tests)
- [ ] Remove `/dist`
- [ ] Remove old Grunt tasks no longer used
- [ ] Remove old client tests (integration and unit)
- [ ] Remove test/server_side, test/migrations
- [ ] Remove all exceptions to /tests and /common from .eslintignore
- [ ] /test/content -> test/common/content
- [ ] Remove browserify's code and dependency
- [ ] Remove /common/dist if we manage to put spritesmith into the build step (anyway remove the generated js)
- [ ] Remove /common/public
- [ ] Remove unused npm packages (https://github.com/depcheck/depcheck)
- [ ] update npm deps
- [ ] devDeps vs deps in package.json
|
non_defect
|
post redesign cleanup the redesign will happen in the develop branch so the old client will stay as long as it s not complete after it s we ll have to do a bit of cleanup this list is not complete we ll probably add new items remove bower and relative code post install scripts remove unused assets remove common code that was only used on the client and relative tests remove dist remove old grunt tasks no longer used remove old client tests integration and unit remove test server side test migrations remove all exceptions to tests and common from eslintignore test content test common content remove browserify s code and dependency remove common dist if we manage to put spritesmith into the build step anyway remove the generated js remove common public remove unused npm packages update npm deps devdeps vs deps in package json
| 0
|
357,537
| 10,608,258,940
|
IssuesEvent
|
2019-10-11 07:02:51
|
canonical-web-and-design/maas-ui
|
https://api.github.com/repos/canonical-web-and-design/maas-ui
|
closed
|
Client stuck on loading if issue parsing csrf token on server
|
Bug 🐛 Priority: High
|
If maas server has csrf authentication enabled, the client will sit in a "loading" state indefinitely without an error.
The client should be raising a PROTOCOL_ERROR with the error "Invalid CSRF token", but this is not surfaced to the client.
|
1.0
|
Client stuck on loading if issue parsing csrf token on server - If maas server has csrf authentication enabled, the client will sit in a "loading" state indefinitely without an error.
The client should be raising a PROTOCOL_ERROR with the error "Invalid CSRF token", but this is not surfaced to the client.
|
non_defect
|
client stuck on loading if issue parsing csrf token on server if maas server has csrf authentication enabled the client will sit in a loading state indefinitely without an error the client should be raising a protocol error with the error invalid csrf token but this is not surfaced to the client
| 0
|
155,932
| 13,637,154,449
|
IssuesEvent
|
2020-09-25 07:20:44
|
daejo/team-profile-generator
|
https://api.github.com/repos/daejo/team-profile-generator
|
closed
|
Create initial files
|
documentation
|
- Create directory for lib, src, and test.
- Create "Employee.js", "Engineer.js", "Intern.js", "Manager.js" files inside the "lib" directory.
- Create test files for "Employee.js", "Engineer.js", "Intern.js", "Manager.js" inside the "__test__" directory.
- Install required packages for file to work (inquirer, jest, ...).
- Create ReadMe and .gitignore files.
|
1.0
|
Create initial files - - Create directory for lib, src, and test.
- Create "Employee.js", "Engineer.js", "Intern.js", "Manager.js" files inside the "lib" directory.
- Create test files for "Employee.js", "Engineer.js", "Intern.js", "Manager.js" inside the "__test__" directory.
- Install required packages for file to work (inquirer, jest, ...).
- Create ReadMe and .gitignore files.
|
non_defect
|
create initial files create directory for lib src and test create employee js engineer js intern js manager js files inside the lib directory create test files for employee js engineer js intern js manager js inside the test directory install required packages for file to work inquirer jest create readme and gitignore files
| 0
|
13,430
| 3,332,728,710
|
IssuesEvent
|
2015-11-11 21:28:41
|
jojobear99/PopTrayU
|
https://api.github.com/repos/jojobear99/PopTrayU
|
closed
|
Keyboard Lights Broken?
|
bug Needs Testing
|
Robo [reports](https://sourceforge.net/p/poptrayu/discussion/bugs/thread/5b4fa9eb/):
> 3) Plugin NotifyKeyboardLights not work, it should be removed from the installation or repaired plugin interface.
|
1.0
|
Keyboard Lights Broken? - Robo [reports](https://sourceforge.net/p/poptrayu/discussion/bugs/thread/5b4fa9eb/):
> 3) Plugin NotifyKeyboardLights not work, it should be removed from the installation or repaired plugin interface.
|
non_defect
|
keyboard lights broken robo plugin notifykeyboardlights not work it should be removed from the installation or repaired plugin interface
| 0
|
38,973
| 9,099,311,723
|
IssuesEvent
|
2019-02-20 03:50:27
|
extnet/Ext.NET
|
https://api.github.com/repos/extnet/Ext.NET
|
closed
|
Ext.menu.Item override relies on non-existing getIconUrl() method
|
4.x defect
|
Found: 4.7.1
Ext.NET issue: [Button overflow menu - set icon class returns error](https://forums.ext.net/showthread.php?62479)
The `Ext.menu.Item` override from Ext.NET relies on the class' `getIconCls()`, which does not exist and is not expected to, [according to Sencha Ext JS documentation on the class](https://docs.sencha.com/extjs/6.6.0/classic/Ext.menu.Item.html#cfg-iconCls).
So, Ext.NET override simply needs to rely in `this.iconCls` instead of `this.getIconCls()`.
|
1.0
|
Ext.menu.Item override relies on non-existing getIconUrl() method - Found: 4.7.1
Ext.NET issue: [Button overflow menu - set icon class returns error](https://forums.ext.net/showthread.php?62479)
The `Ext.menu.Item` override from Ext.NET relies on the class' `getIconCls()`, which does not exist and is not expected to, [according to Sencha Ext JS documentation on the class](https://docs.sencha.com/extjs/6.6.0/classic/Ext.menu.Item.html#cfg-iconCls).
So, Ext.NET override simply needs to rely in `this.iconCls` instead of `this.getIconCls()`.
|
defect
|
ext menu item override relies on non existing geticonurl method found ext net issue the ext menu item override from ext net relies on the class geticoncls which does not exist and is not expected to so ext net override simply needs to rely in this iconcls instead of this geticoncls
| 1
|
29,315
| 5,651,157,078
|
IssuesEvent
|
2017-04-08 02:04:10
|
obophenotype/porifera-ontology
|
https://api.github.com/repos/obophenotype/porifera-ontology
|
closed
|
Migrate to github
|
auto-migrated Priority-Medium Type-Defect
|
```
See:
https://docs.google.com/document/d/1NfaeHWdZ7BcnaGFfh09X63L4LL7VG_8RNCrLyNdZ8wk/
edit#
```
Original issue reported on code.google.com by `cmung...@gmail.com` on 12 Mar 2015 at 11:11
|
1.0
|
Migrate to github - ```
See:
https://docs.google.com/document/d/1NfaeHWdZ7BcnaGFfh09X63L4LL7VG_8RNCrLyNdZ8wk/
edit#
```
Original issue reported on code.google.com by `cmung...@gmail.com` on 12 Mar 2015 at 11:11
|
defect
|
migrate to github see edit original issue reported on code google com by cmung gmail com on mar at
| 1
|
716,746
| 24,646,562,485
|
IssuesEvent
|
2022-10-17 15:14:41
|
authzed/spicedb
|
https://api.github.com/repos/authzed/spicedb
|
opened
|
Add custom linter to catch zerolog builders without Send
|
priority/2 medium area/tooling
|
zerolog requires that logs are sent with `.Send` or `.Msg`; if one is missing, then we should have a custom linter catch this case and report it
|
1.0
|
Add custom linter to catch zerolog builders without Send - zerolog requires that logs are sent with `.Send` or `.Msg`; if one is missing, then we should have a custom linter catch this case and report it
|
non_defect
|
add custom linter to catch zerolog builders without send zerolog requires that logs are sent with send or msg if one is missing then we should have a custom linter catch this case and report it
| 0
|
13,965
| 2,789,804,219
|
IssuesEvent
|
2015-05-08 21:35:52
|
google/google-visualization-api-issues
|
https://api.github.com/repos/google/google-visualization-api-issues
|
opened
|
Query handled differently when generated server side than using javascript
|
Priority-Medium Type-Defect
|
Original [issue 159](https://code.google.com/p/google-visualization-api-issues/issues/detail?id=159) created by orwant on 2010-01-07T21:21:57.000Z:
The same query generates two different looking bar charts when done on
the serverside (java) versus the browser (javascript). See the results
attached.
Serverside: Used QueryBuilder
Javascript: Used google.visualization.Query
Query: "select dbname, sum(ttl_acts) group by dbname pivot taxname"
<b>What component is this issue related to (PieChart, LineChart, DataTable,</b>
<b>Query, etc)?</b>
QueryBuilder?
<b>Are you using the test environment (version 1.1)?</b>
Don't know.
<b>What operating system and browser are you using?</b>
Mac OS X, Chrome for Mac
<b>*********************************************************</b>
<b>For developers viewing this issue: please click the 'star' icon to be</b>
<b>notified of future changes, and to let us know how many of you are</b>
<b>interested in seeing it resolved.</b>
<b>*********************************************************</b>
|
1.0
|
Query handled differently when generated server side than using javascript - Original [issue 159](https://code.google.com/p/google-visualization-api-issues/issues/detail?id=159) created by orwant on 2010-01-07T21:21:57.000Z:
The same query generates two different looking bar charts when done on
the serverside (java) versus the browser (javascript). See the results
attached.
Serverside: Used QueryBuilder
Javascript: Used google.visualization.Query
Query: "select dbname, sum(ttl_acts) group by dbname pivot taxname"
<b>What component is this issue related to (PieChart, LineChart, DataTable,</b>
<b>Query, etc)?</b>
QueryBuilder?
<b>Are you using the test environment (version 1.1)?</b>
Don't know.
<b>What operating system and browser are you using?</b>
Mac OS X, Chrome for Mac
<b>*********************************************************</b>
<b>For developers viewing this issue: please click the 'star' icon to be</b>
<b>notified of future changes, and to let us know how many of you are</b>
<b>interested in seeing it resolved.</b>
<b>*********************************************************</b>
|
defect
|
query handled differently when generated server side than using javascript original created by orwant on the same query generates two different looking bar charts when done on the serverside java versus the browser javascript see the results attached serverside used querybuilder javascript used google visualization query query quot select dbname sum ttl acts group by dbname pivot taxname quot what component is this issue related to piechart linechart datatable query etc querybuilder are you using the test environment version don t know what operating system and browser are you using mac os x chrome for mac for developers viewing this issue please click the star icon to be notified of future changes and to let us know how many of you are interested in seeing it resolved
| 1
|
55,461
| 14,498,045,489
|
IssuesEvent
|
2020-12-11 15:00:56
|
jOOQ/jOOQ
|
https://api.github.com/repos/jOOQ/jOOQ
|
closed
|
Improve performance of implicit join algorithm
|
C: Performance E: All Editions P: High T: Defect
|
Implicit joins, introduced in jOOQ 3.11 (#1502) are a very useful feature. But they come at a price. Their current "cost" of calculation in a benchmark can be seen in this profiling session.

Compared to the SQL generation (`toSQLReferenceLimitDefault()`, 19s) the `registerTable()`, `scopeMarkStart()`, `scopeEnd()` and `scopeMarkEnd()` take 5s in this benchmark, even if the benchmark query does not use any implicit joins:
```java
ctx.select(T_PERFORMANCE_JOOQ.ID, T_PERFORMANCE_JOOQ.VALUE_INT, T_PERFORMANCE_JOOQ.VALUE_STRING)
.from(T_PERFORMANCE_JOOQ)
.limit(one())
.fetchLazy()
.forEach(r -> {});
```
|
1.0
|
Improve performance of implicit join algorithm - Implicit joins, introduced in jOOQ 3.11 (#1502) are a very useful feature. But they come at a price. Their current "cost" of calculation in a benchmark can be seen in this profiling session.

Compared to the SQL generation (`toSQLReferenceLimitDefault()`, 19s) the `registerTable()`, `scopeMarkStart()`, `scopeEnd()` and `scopeMarkEnd()` take 5s in this benchmark, even if the benchmark query does not use any implicit joins:
```java
ctx.select(T_PERFORMANCE_JOOQ.ID, T_PERFORMANCE_JOOQ.VALUE_INT, T_PERFORMANCE_JOOQ.VALUE_STRING)
.from(T_PERFORMANCE_JOOQ)
.limit(one())
.fetchLazy()
.forEach(r -> {});
```
|
defect
|
improve performance of implicit join algorithm implicit joins introduced in jooq are a very useful feature but they come at a price their current cost of calculation in a benchmark can be seen in this profiling session compared to the sql generation tosqlreferencelimitdefault the registertable scopemarkstart scopeend and scopemarkend take in this benchmark even if the benchmark query does not use any implicit joins java ctx select t performance jooq id t performance jooq value int t performance jooq value string from t performance jooq limit one fetchlazy foreach r
| 1
|
440,940
| 12,706,526,986
|
IssuesEvent
|
2020-06-23 07:21:24
|
AGROFIMS/hagrofims
|
https://api.github.com/repos/AGROFIMS/hagrofims
|
opened
|
Add more soil info in Site description
|
medium priority site information
|
Add a section to give info about soil tests done prior to the experiment start, in Site description.
|
1.0
|
Add more soil info in Site description - Add a section to give info about soil tests done prior to the experiment start, in Site description.
|
non_defect
|
add more soil info in site description add a section to give info about soil tests done prior to the experiment start in site description
| 0
|
68,092
| 14,900,463,751
|
IssuesEvent
|
2021-01-21 15:26:33
|
doc-ai/tensorio-webinar
|
https://api.github.com/repos/doc-ai/tensorio-webinar
|
closed
|
CVE-2020-15210 (Medium) detected in TensorIOTensorFlow-2.0.7
|
security vulnerability
|
## CVE-2020-15210 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>TensorIOTensorFlow-2.0.7</b></p></summary>
<p>An unofficial build of TensorFlow for iOS used by TensorIO, supporting inference, evaluation, and training.</p>
<p>Library home page: <a href="https://storage.googleapis.com/tensorio-build/ios/release/2.0/xcodebuild/12B45b/tag/2.0.7/pod/TensorIO-TensorFlow-2.0_7.tar.gz">https://storage.googleapis.com/tensorio-build/ios/release/2.0/xcodebuild/12B45b/tag/2.0.7/pod/TensorIO-TensorFlow-2.0_7.tar.gz</a></p>
<p>Path to dependency file: tensorio-webinar/ios/Podfile.lock</p>
<p>Path to vulnerable library: tensorio-webinar/ios/Podfile.lock</p>
<p>
Dependency Hierarchy:
- TensorIO/TensorFlow-1.2.3 (Root Library)
- :x: **TensorIOTensorFlow-2.0.7** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/doc-ai/tensorio-webinar/commit/941f35bff0d8faa262e6aab872f29d5c55955b92">941f35bff0d8faa262e6aab872f29d5c55955b92</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
In tensorflow-lite before versions 1.15.4, 2.0.3, 2.1.2, 2.2.1 and 2.3.1, if a TFLite saved model uses the same tensor as both input and output of an operator, then, depending on the operator, we can observe a segmentation fault or just memory corruption. We have patched the issue in d58c96946b and will release patch releases for all versions between 1.15 and 2.3. We recommend users to upgrade to TensorFlow 1.15.4, 2.0.3, 2.1.2, 2.2.1, or 2.3.1.
<p>Publish Date: 2020-09-25
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-15210>CVE-2020-15210</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>6.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: High
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: Low
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/tensorflow/tensorflow/security/advisories/GHSA-x9j7-x98r-r4w2">https://github.com/tensorflow/tensorflow/security/advisories/GHSA-x9j7-x98r-r4w2</a></p>
<p>Release Date: 2020-07-21</p>
<p>Fix Resolution: 1.15.4, 2.0.3, 2.1.2, 2.2.1, 2.3.1</p>
</p>
</details>
<p></p>
<!-- <REMEDIATE>{"isOpenPROnVulnerability":false,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"CocoaPods","packageName":"TensorIOTensorFlow","packageVersion":"2.0.7","isTransitiveDependency":true,"dependencyTree":"TensorIO/TensorFlow:1.2.3;TensorIOTensorFlow:2.0.7","isMinimumFixVersionAvailable":true,"minimumFixVersion":"1.15.4, 2.0.3, 2.1.2, 2.2.1, 2.3.1"}],"vulnerabilityIdentifier":"CVE-2020-15210","vulnerabilityDetails":"In tensorflow-lite before versions 1.15.4, 2.0.3, 2.1.2, 2.2.1 and 2.3.1, if a TFLite saved model uses the same tensor as both input and output of an operator, then, depending on the operator, we can observe a segmentation fault or just memory corruption. We have patched the issue in d58c96946b and will release patch releases for all versions between 1.15 and 2.3. We recommend users to upgrade to TensorFlow 1.15.4, 2.0.3, 2.1.2, 2.2.1, or 2.3.1.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-15210","cvss3Severity":"medium","cvss3Score":"6.5","cvss3Metrics":{"A":"High","AC":"High","PR":"None","S":"Unchanged","C":"None","UI":"None","AV":"Network","I":"Low"},"extraData":{}}</REMEDIATE> -->
|
True
|
CVE-2020-15210 (Medium) detected in TensorIOTensorFlow-2.0.7 - ## CVE-2020-15210 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>TensorIOTensorFlow-2.0.7</b></p></summary>
<p>An unofficial build of TensorFlow for iOS used by TensorIO, supporting inference, evaluation, and training.</p>
<p>Library home page: <a href="https://storage.googleapis.com/tensorio-build/ios/release/2.0/xcodebuild/12B45b/tag/2.0.7/pod/TensorIO-TensorFlow-2.0_7.tar.gz">https://storage.googleapis.com/tensorio-build/ios/release/2.0/xcodebuild/12B45b/tag/2.0.7/pod/TensorIO-TensorFlow-2.0_7.tar.gz</a></p>
<p>Path to dependency file: tensorio-webinar/ios/Podfile.lock</p>
<p>Path to vulnerable library: tensorio-webinar/ios/Podfile.lock</p>
<p>
Dependency Hierarchy:
- TensorIO/TensorFlow-1.2.3 (Root Library)
- :x: **TensorIOTensorFlow-2.0.7** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/doc-ai/tensorio-webinar/commit/941f35bff0d8faa262e6aab872f29d5c55955b92">941f35bff0d8faa262e6aab872f29d5c55955b92</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
In tensorflow-lite before versions 1.15.4, 2.0.3, 2.1.2, 2.2.1 and 2.3.1, if a TFLite saved model uses the same tensor as both input and output of an operator, then, depending on the operator, we can observe a segmentation fault or just memory corruption. We have patched the issue in d58c96946b and will release patch releases for all versions between 1.15 and 2.3. We recommend users to upgrade to TensorFlow 1.15.4, 2.0.3, 2.1.2, 2.2.1, or 2.3.1.
<p>Publish Date: 2020-09-25
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-15210>CVE-2020-15210</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>6.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: High
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: Low
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/tensorflow/tensorflow/security/advisories/GHSA-x9j7-x98r-r4w2">https://github.com/tensorflow/tensorflow/security/advisories/GHSA-x9j7-x98r-r4w2</a></p>
<p>Release Date: 2020-07-21</p>
<p>Fix Resolution: 1.15.4, 2.0.3, 2.1.2, 2.2.1, 2.3.1</p>
</p>
</details>
<p></p>
<!-- <REMEDIATE>{"isOpenPROnVulnerability":false,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"CocoaPods","packageName":"TensorIOTensorFlow","packageVersion":"2.0.7","isTransitiveDependency":true,"dependencyTree":"TensorIO/TensorFlow:1.2.3;TensorIOTensorFlow:2.0.7","isMinimumFixVersionAvailable":true,"minimumFixVersion":"1.15.4, 2.0.3, 2.1.2, 2.2.1, 2.3.1"}],"vulnerabilityIdentifier":"CVE-2020-15210","vulnerabilityDetails":"In tensorflow-lite before versions 1.15.4, 2.0.3, 2.1.2, 2.2.1 and 2.3.1, if a TFLite saved model uses the same tensor as both input and output of an operator, then, depending on the operator, we can observe a segmentation fault or just memory corruption. We have patched the issue in d58c96946b and will release patch releases for all versions between 1.15 and 2.3. We recommend users to upgrade to TensorFlow 1.15.4, 2.0.3, 2.1.2, 2.2.1, or 2.3.1.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-15210","cvss3Severity":"medium","cvss3Score":"6.5","cvss3Metrics":{"A":"High","AC":"High","PR":"None","S":"Unchanged","C":"None","UI":"None","AV":"Network","I":"Low"},"extraData":{}}</REMEDIATE> -->
|
non_defect
|
cve medium detected in tensoriotensorflow cve medium severity vulnerability vulnerable library tensoriotensorflow an unofficial build of tensorflow for ios used by tensorio supporting inference evaluation and training library home page a href path to dependency file tensorio webinar ios podfile lock path to vulnerable library tensorio webinar ios podfile lock dependency hierarchy tensorio tensorflow root library x tensoriotensorflow vulnerable library found in head commit a href found in base branch master vulnerability details in tensorflow lite before versions and if a tflite saved model uses the same tensor as both input and output of an operator then depending on the operator we can observe a segmentation fault or just memory corruption we have patched the issue in and will release patch releases for all versions between and we recommend users to upgrade to tensorflow or publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity high privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact low availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution isopenpronvulnerability false ispackagebased true isdefaultbranch true packages vulnerabilityidentifier cve vulnerabilitydetails in tensorflow lite before versions and if a tflite saved model uses the same tensor as both input and output of an operator then depending on the operator we can observe a segmentation fault or just memory corruption we have patched the issue in and will release patch releases for all versions between and we recommend users to upgrade to tensorflow or vulnerabilityurl
| 0
|
13,709
| 2,776,594,838
|
IssuesEvent
|
2015-05-04 22:42:56
|
umutafacan/bounswe2015group3
|
https://api.github.com/repos/umutafacan/bounswe2015group3
|
reopened
|
SequenceDiagram
|
auto-migrated Priority-Medium Type-Defect
|
```
We will update sequence diagram according to customer meeting 3
```
Original issue reported on code.google.com by `bunyamin...@gmail.com` on 28 Mar 2015 at 4:00
|
1.0
|
SequenceDiagram - ```
We will update sequence diagram according to customer meeting 3
```
Original issue reported on code.google.com by `bunyamin...@gmail.com` on 28 Mar 2015 at 4:00
|
defect
|
sequencediagram we will update sequence diagram according to customer meeting original issue reported on code google com by bunyamin gmail com on mar at
| 1
|
23,479
| 3,830,181,890
|
IssuesEvent
|
2016-03-31 13:46:51
|
jOOQ/jOOQ
|
https://api.github.com/repos/jOOQ/jOOQ
|
closed
|
NullPointerException when calling Query.getSQL() on an unattached Query
|
C: Functionality P: High R: Fixed T: Defect
|
```
SelectJoinStep<Record> wat = with("a").as(select(
val(1).as("x"),
val("a").as("y")))
.select()
.from(table(name("a")));
wat.getSQL();
```
The CTE above is taken from the docs. If I try to `getSQL` on it, it throws a NPE.
```
Caused by: java.lang.NullPointerException
at org.jooq.impl.AbstractQuery.getSQL(AbstractQuery.java:513)
at org.jooq.impl.AbstractDelegatingQuery.getSQL(AbstractDelegatingQuery.java:107)
at mycode.BaseJooqMapperLookup.getCoreQuery(BaseJooqMapperLookup.java:129)
```
Perhaps this is a usage error but I can use `getSQL` on my various other steps.
|
1.0
|
NullPointerException when calling Query.getSQL() on an unattached Query - ```
SelectJoinStep<Record> wat = with("a").as(select(
val(1).as("x"),
val("a").as("y")))
.select()
.from(table(name("a")));
wat.getSQL();
```
The CTE above is taken from the docs. If I try to `getSQL` on it, it throws a NPE.
```
Caused by: java.lang.NullPointerException
at org.jooq.impl.AbstractQuery.getSQL(AbstractQuery.java:513)
at org.jooq.impl.AbstractDelegatingQuery.getSQL(AbstractDelegatingQuery.java:107)
at mycode.BaseJooqMapperLookup.getCoreQuery(BaseJooqMapperLookup.java:129)
```
Perhaps this is a usage error but I can use `getSQL` on my various other steps.
|
defect
|
nullpointerexception when calling query getsql on an unattached query selectjoinstep wat with a as select val as x val a as y select from table name a wat getsql the cte above is taken from the docs if i try to getsql on it it throws a npe caused by java lang nullpointerexception at org jooq impl abstractquery getsql abstractquery java at org jooq impl abstractdelegatingquery getsql abstractdelegatingquery java at mycode basejooqmapperlookup getcorequery basejooqmapperlookup java perhaps this is a usage error but i can use getsql on my various other steps
| 1
|
45,567
| 12,877,847,078
|
IssuesEvent
|
2020-07-11 13:29:59
|
msofficesvn/msofficesvn
|
https://api.github.com/repos/msofficesvn/msofficesvn
|
closed
|
Not working with MS Office 2010 64 bits
|
Priority-Medium Type-Defect auto-migrated
|
```
What steps will reproduce the problem?
1. Install files as directed
What is the expected output? What do you see instead?
Subversion ribbon in any open documents, or at least in those already
registered with TortoiseSVN.
Subversion ribbon appears only when the template itself is open.
What version of the product are you using? On what operating system?
1.4 on MS Windows 7.
Please provide any additional information below.
Could not check the registry as regedit is restricted by network administrators.
```
Original issue reported on code.google.com by `l...@dutras.org` on 22 May 2013 at 6:35
|
1.0
|
Not working with MS Office 2010 64 bits - ```
What steps will reproduce the problem?
1. Install files as directed
What is the expected output? What do you see instead?
Subversion ribbon in any open documents, or at least in those already
registered with TortoiseSVN.
Subversion ribbon appears only when the template itself is open.
What version of the product are you using? On what operating system?
1.4 on MS Windows 7.
Please provide any additional information below.
Could not check the registry as regedit is restricted by network administrators.
```
Original issue reported on code.google.com by `l...@dutras.org` on 22 May 2013 at 6:35
|
defect
|
not working with ms office bits what steps will reproduce the problem install files as directed what is the expected output what do you see instead subversion ribbon in any open documents or at least in those already registered with tortoisesvn subversion ribbon appears only when the template itself is open what version of the product are you using on what operating system on ms windows please provide any additional information below could not check the registry as regedit is restricted by network administrators original issue reported on code google com by l dutras org on may at
| 1
|
138,045
| 20,270,412,074
|
IssuesEvent
|
2022-02-15 15:44:19
|
USDA-FSA/fsa-style
|
https://api.github.com/repos/USDA-FSA/fsa-style
|
opened
|
Tabs: New Vertical Variants
|
type: feature request type: design P3 source: internal FBCSS type: exploration kind: component
|
### Proposal
Add **Vertical** variants of the [Content Tabs](https://usda-fsa.github.io/fsa-design-system/components/content-tabs/) component, which are currently limited to a **horizontal** listing of tabs.
### Sub-tasks
- [x] Design
- [ ] Prototype (Initially on `feature/tabs-vertical` branch.)
- [ ] Responsive
- [ ] Production Build
- [ ] Responsive
- [ ] Test
### Presumed naming
* `fsa-content-tabs` (current default)
* `fsa-content-tabs--horizontal[@BP]` (default/override)
* `fsa-content-tabs--vertical-left[@BP]`
* `fsa-content-tabs--vertical-right[@BP]`
### Speculative Design
e.g. `fsa-content-tabs--vertical-left`

|
1.0
|
Tabs: New Vertical Variants - ### Proposal
Add **Vertical** variants of the [Content Tabs](https://usda-fsa.github.io/fsa-design-system/components/content-tabs/) component, which are currently limited to a **horizontal** listing of tabs.
### Sub-tasks
- [x] Design
- [ ] Prototype (Initially on `feature/tabs-vertical` branch.)
- [ ] Responsive
- [ ] Production Build
- [ ] Responsive
- [ ] Test
### Presumed naming
* `fsa-content-tabs` (current default)
* `fsa-content-tabs--horizontal[@BP]` (default/override)
* `fsa-content-tabs--vertical-left[@BP]`
* `fsa-content-tabs--vertical-right[@BP]`
### Speculative Design
e.g. `fsa-content-tabs--vertical-left`

|
non_defect
|
tabs new vertical variants proposal add vertical variants of the component which are currently limited to a horizontal listing of tabs sub tasks design prototype initially on feature tabs vertical branch responsive production build responsive test presumed naming fsa content tabs current default fsa content tabs horizontal default override fsa content tabs vertical left fsa content tabs vertical right speculative design e g fsa content tabs vertical left
| 0
|
94,933
| 10,861,964,672
|
IssuesEvent
|
2019-11-14 12:16:40
|
rpm1003/GESPRO_GESTION_DE_VERSIONES
|
https://api.github.com/repos/rpm1003/GESPRO_GESTION_DE_VERSIONES
|
closed
|
Conclusiones y Lineas de trabajo futuras
|
documentation
|
- Plataforma cloud para la sincronización.
- Mejoras del algoritmo
|
1.0
|
Conclusiones y Lineas de trabajo futuras - - Plataforma cloud para la sincronización.
- Mejoras del algoritmo
|
non_defect
|
conclusiones y lineas de trabajo futuras plataforma cloud para la sincronización mejoras del algoritmo
| 0
|
19,522
| 3,218,759,167
|
IssuesEvent
|
2015-10-08 04:35:21
|
pellcorp/tcpmon
|
https://api.github.com/repos/pellcorp/tcpmon
|
closed
|
request or response windows should be independently resizeable
|
auto-migrated Priority-Medium Type-Defect
|
```
should be possible to increase the size of either of them. Best done with a
slider. The table should be smaller than the other two. Check out what happens
when you maximize tcpmon.
```
Original issue reported on code.google.com by `inder123` on 16 May 2011 at 8:14
|
1.0
|
request or response windows should be independently resizeable - ```
should be possible to increase the size of either of them. Best done with a
slider. The table should be smaller than the other two. Check out what happens
when you maximize tcpmon.
```
Original issue reported on code.google.com by `inder123` on 16 May 2011 at 8:14
|
defect
|
request or response windows should be independently resizeable should be possible to increase the size of either of them best done with a slider the table should be smaller than the other two check out what happens when you maximize tcpmon original issue reported on code google com by on may at
| 1
|
45,275
| 12,699,646,205
|
IssuesEvent
|
2020-06-22 15:09:01
|
vim/vim
|
https://api.github.com/repos/vim/vim
|
closed
|
Malformed YAML hogs CPU for a long time
|
Priority-Medium auto-migrated defect runtime
|
```
What steps will reproduce the problem?
1. take puppet (ruby) source code (*) and either paste it into open YAML file,
insert it there with :r or rename the puppet file to *.yml and open it with vi
2. wait for a **long** time while vi spins CPU
......
3. Finally "pkill -9 vi" from another terminal because it's still unresponsive
(*) The problem might be specific to a code where YAML tries to parse regexps
or st. like that. This is a file that causes several second hangup, by adding
characters to 'baz' string the lag increases dramatically:
> cat test.yml
class foo {
bar {
baz => "aaaaaaaaaaaaaaaa+bbbbbbbbbbbbbb+cccccccccccccc/dddddddddddddd+/eeeeeeeeeeeeee+ff/ggggggggggggg/hhhhhhhhhhhhhh/jjjj+kk+lllllllllllll/mmmmmmmmmmmmmm//nnnnnnnnnnnn+ooooooooooooooo=",
}
}
What version of the product are you using? On what operating system?
Ubuntu 14.04.1 LTS - 2:7.4.052-1ubuntu3
Debian GNU/Linux jessie/sid - 2:7.4.430-1
Ubuntu 12.04.5 LTS - 2:7.3.429-2ubuntu2.1 - this one shows much less lag, but
this might be caused by the fact that it's a desktop system unlike previous two.
```
Original issue reported on code.google.com by `krystof1...@gmail.com` on 7 Oct 2014 at 1:42
Attachments:
- [test2.yml](https://storage.googleapis.com/google-code-attachments/vim/issue-263/comment-0/test2.yml)
|
1.0
|
Malformed YAML hogs CPU for a long time - ```
What steps will reproduce the problem?
1. take puppet (ruby) source code (*) and either paste it into open YAML file,
insert it there with :r or rename the puppet file to *.yml and open it with vi
2. wait for a **long** time while vi spins CPU
......
3. Finally "pkill -9 vi" from another terminal because it's still unresponsive
(*) The problem might be specific to a code where YAML tries to parse regexps
or st. like that. This is a file that causes several second hangup, by adding
characters to 'baz' string the lag increases dramatically:
> cat test.yml
class foo {
bar {
baz => "aaaaaaaaaaaaaaaa+bbbbbbbbbbbbbb+cccccccccccccc/dddddddddddddd+/eeeeeeeeeeeeee+ff/ggggggggggggg/hhhhhhhhhhhhhh/jjjj+kk+lllllllllllll/mmmmmmmmmmmmmm//nnnnnnnnnnnn+ooooooooooooooo=",
}
}
What version of the product are you using? On what operating system?
Ubuntu 14.04.1 LTS - 2:7.4.052-1ubuntu3
Debian GNU/Linux jessie/sid - 2:7.4.430-1
Ubuntu 12.04.5 LTS - 2:7.3.429-2ubuntu2.1 - this one shows much less lag, but
this might be caused by the fact that it's a desktop system unlike previous two.
```
Original issue reported on code.google.com by `krystof1...@gmail.com` on 7 Oct 2014 at 1:42
Attachments:
- [test2.yml](https://storage.googleapis.com/google-code-attachments/vim/issue-263/comment-0/test2.yml)
|
defect
|
malformed yaml hogs cpu for a long time what steps will reproduce the problem take puppet ruby source code and either paste it into open yaml file insert it there with r or rename the puppet file to yml and open it with vi wait for a long time while vi spins cpu finally pkill vi from another terminal because it s still unresponsive the problem might be specific to a code where yaml tries to parse regexps or st like that this is a file that causes several second hangup by adding characters to baz string the lag increases dramatically cat test yml class foo bar baz aaaaaaaaaaaaaaaa bbbbbbbbbbbbbb cccccccccccccc dddddddddddddd eeeeeeeeeeeeee ff ggggggggggggg hhhhhhhhhhhhhh jjjj kk lllllllllllll mmmmmmmmmmmmmm nnnnnnnnnnnn ooooooooooooooo what version of the product are you using on what operating system ubuntu lts debian gnu linux jessie sid ubuntu lts this one shows much less lag but this might be caused by the fact that it s a desktop system unlike previous two original issue reported on code google com by gmail com on oct at attachments
| 1
|
40,746
| 5,314,943,119
|
IssuesEvent
|
2017-02-13 16:11:32
|
LLK/scratch-blocks
|
https://api.github.com/repos/LLK/scratch-blocks
|
closed
|
Internet Explorer and Edge testing
|
testing
|
We need to define what version of IE we want to support (IE11? Unfortunately the rest of Scratch probably won't support less than that because of WebGL, WebWorkers, etc.) and start doing some testing for IE and Edge.
Up to this point we've pretty much neglected that. Blockly has very good compatibility, though, so hopefully we won't have too bad of a time.
cc: @jwzimmer @thisandagain
|
1.0
|
Internet Explorer and Edge testing - We need to define what version of IE we want to support (IE11? Unfortunately the rest of Scratch probably won't support less than that because of WebGL, WebWorkers, etc.) and start doing some testing for IE and Edge.
Up to this point we've pretty much neglected that. Blockly has very good compatibility, though, so hopefully we won't have too bad of a time.
cc: @jwzimmer @thisandagain
|
non_defect
|
internet explorer and edge testing we need to define what version of ie we want to support unfortunately the rest of scratch probably won t support less than that because of webgl webworkers etc and start doing some testing for ie and edge up to this point we ve pretty much neglected that blockly has very good compatibility though so hopefully we won t have too bad of a time cc jwzimmer thisandagain
| 0
|
21,147
| 3,462,614,400
|
IssuesEvent
|
2015-12-21 01:50:00
|
donald-w/pojosr
|
https://api.github.com/repos/donald-w/pojosr
|
closed
|
PojoSRBundle does not return ALL resource listings in BundleWiring mode
|
auto-migrated Priority-Medium Type-Defect
|
```
Any reason why de.kalpatec.pojosr.framework.PojoSRBundle.findEntries is only
using the current classpath and not the entire classpath to resolve a resource
listing?
I am doing something like below and rely on the listResources function of the
bundleWiring to return all resource URLs that are included in the project under
test and also its dependent jar files.
bundleWiring.listResources("/com/bla/model", "*.class",
BundleWiring.FINDENTRIES_RECURSE);
Have not looked into the pojosr internals in too much detail but seems to be a
major fail that it does not work as any normal OSGi container does.
Cheers,
Niels
```
Original issue reported on code.google.com by `niels...@gmail.com` on 8 May 2014 at 3:01
|
1.0
|
PojoSRBundle does not return ALL resource listings in BundleWiring mode - ```
Any reason why de.kalpatec.pojosr.framework.PojoSRBundle.findEntries is only
using the current classpath and not the entire classpath to resolve a resource
listing?
I am doing something like below and rely on the listResources function of the
bundleWiring to return all resource URLs that are included in the project under
test and also its dependent jar files.
bundleWiring.listResources("/com/bla/model", "*.class",
BundleWiring.FINDENTRIES_RECURSE);
Have not looked into the pojosr internals in too much detail but seems to be a
major fail that it does not work as any normal OSGi container does.
Cheers,
Niels
```
Original issue reported on code.google.com by `niels...@gmail.com` on 8 May 2014 at 3:01
|
defect
|
pojosrbundle does not return all resource listings in bundlewiring mode any reason why de kalpatec pojosr framework pojosrbundle findentries is only using the current classpath and not the entire classpath to resolve a resource listing i am doing something like below and rely on the listresources function of the bundlewiring to return all resource urls that are included in the project under test and also its dependent jar files bundlewiring listresources com bla model class bundlewiring findentries recurse have not looked into the pojosr internals in too much detail but seems to be a major fail that it does not work as any normal osgi container does cheers niels original issue reported on code google com by niels gmail com on may at
| 1
|
607,354
| 18,780,511,801
|
IssuesEvent
|
2021-11-08 05:43:12
|
TestCentric/testcentric-engine
|
https://api.github.com/repos/TestCentric/testcentric-engine
|
opened
|
Remove .NET Core 2.1 Agent from the engine
|
Breaking Change Feature High Priority
|
.NET Core 2.1 tests will run under .Net Core 3.1 or .NET 5.0 unless the user installs an the .NET Core 2.1 agent as an extension.
|
1.0
|
Remove .NET Core 2.1 Agent from the engine - .NET Core 2.1 tests will run under .Net Core 3.1 or .NET 5.0 unless the user installs an the .NET Core 2.1 agent as an extension.
|
non_defect
|
remove net core agent from the engine net core tests will run under net core or net unless the user installs an the net core agent as an extension
| 0
|
67,379
| 20,961,608,650
|
IssuesEvent
|
2022-03-27 21:48:44
|
abedmaatalla/sipdroid
|
https://api.github.com/repos/abedmaatalla/sipdroid
|
closed
|
Extend TCP Keep Alive intervals
|
Priority-Medium Type-Defect auto-migrated
|
```
What steps will reproduce the problem?
1. Nature of the app
What is the expected output? What do you see instead?
Referencing http://code.google.com/p/sipdroid/wiki/NewStandbyTechnique, typical
TCP timeout values are 30-1440 minutes. The assumption that that sipdroid will
have a long interval between sending keep alive packets for TCP. However, I'm
seeing that it's fixed at 60 seconds still. I searched other issues and found
it mentioned but not posted as an issue. I had assumed that sipdroid would be
using a longer tcp keep alive interval.
What version of the product are you using? On what device/operating system?
Latest 2.9 on a Nexus 4 running 4.2.1.
Which SIP server are you using? What happens with PBXes?
pbxes.com
Which type of network are you using?
tcp connection to pbxes over wifi (nat)
Please provide any additional information below.
I have confirmed with BetterBatteryStats that there is a wakeup event triggered
by sipdroid every 60 seconds.
```
Original issue reported on code.google.com by `zid...@ijib.com` on 24 Jan 2013 at 8:43
|
1.0
|
Extend TCP Keep Alive intervals - ```
What steps will reproduce the problem?
1. Nature of the app
What is the expected output? What do you see instead?
Referencing http://code.google.com/p/sipdroid/wiki/NewStandbyTechnique, typical
TCP timeout values are 30-1440 minutes. The assumption that that sipdroid will
have a long interval between sending keep alive packets for TCP. However, I'm
seeing that it's fixed at 60 seconds still. I searched other issues and found
it mentioned but not posted as an issue. I had assumed that sipdroid would be
using a longer tcp keep alive interval.
What version of the product are you using? On what device/operating system?
Latest 2.9 on a Nexus 4 running 4.2.1.
Which SIP server are you using? What happens with PBXes?
pbxes.com
Which type of network are you using?
tcp connection to pbxes over wifi (nat)
Please provide any additional information below.
I have confirmed with BetterBatteryStats that there is a wakeup event triggered
by sipdroid every 60 seconds.
```
Original issue reported on code.google.com by `zid...@ijib.com` on 24 Jan 2013 at 8:43
|
defect
|
extend tcp keep alive intervals what steps will reproduce the problem nature of the app what is the expected output what do you see instead referencing typical tcp timeout values are minutes the assumption that that sipdroid will have a long interval between sending keep alive packets for tcp however i m seeing that it s fixed at seconds still i searched other issues and found it mentioned but not posted as an issue i had assumed that sipdroid would be using a longer tcp keep alive interval what version of the product are you using on what device operating system latest on a nexus running which sip server are you using what happens with pbxes pbxes com which type of network are you using tcp connection to pbxes over wifi nat please provide any additional information below i have confirmed with betterbatterystats that there is a wakeup event triggered by sipdroid every seconds original issue reported on code google com by zid ijib com on jan at
| 1
|
89,220
| 25,706,620,946
|
IssuesEvent
|
2022-12-07 01:24:15
|
aws/aws-cdk
|
https://api.github.com/repos/aws/aws-cdk
|
closed
|
CodeBuild: when linking existing IAM roles with codebuild steps/stages no error is returned for existing policies with the same name
|
bug @aws-cdk/aws-codebuild needs-triage
|
### Describe the bug
I have two cloudformation stacks that create codepipeline pipelines defining codebuild actions that are performed by the same IAM role.
My stacks are created through the python cdk 2.X
the one stack links an existing IAM role declared like this to a aws_codebuild.PipelineProject
`
code_build_role = iam.Role.from_role_arn(self, "CodeBuildRole", "arn:aws:iam::"+account_id+":role/MyRoleWhatever")
`
And the other stack similarly links the same role to a aws_cdk.pipeline.CodeBuildStep
`
cicd_role = iam.Role.from_role_arn(self, "CodeBuildRole",
"arn:aws:iam::" + account_id + ":role/MyRoleWhatever")`
So the cloudformation template file created creates for the first stack something like this
`
{
"Resources": {
"CodeBuildRolePolicy0442214A": {
"Type": "AWS::IAM::Policy",
...
`
and for the second one again
`
{
"Resources": {
"CodeBuildRolePolicy0442214A": {
"Type": "AWS::IAM::Policy",
....
`
This had as a result that the existing IAM role had attached the same policy by the two stacks so the latest generated stack creation would replace the first applied policy.
**Ofc** I can pick different names which is a solution I could do when I found this issue(didn't figure out the exact issue early enough so I have just created a copy role for the latter stack)
The issue is that for example if you create different stacks that define the same name for pipelines you have an error saying that the pipeline exists so you know you have to change it.
The same issue would have happen for this case: The `cdk deploy` should tell you `this IAM role has already this policy defined.`
### Expected Behavior
When trying to create resources with `cdk deploy` and attaching a policy with the same name to an existing IAM role it should warn you or throw an error saying `this policy with this name already exists` so that you know and change it/fix it.
### Current Behavior
IAM roles declared in the cdk code with the same name in different stacks produce the same policy name and when the resources are created with a cdk deploy the last policy is the one that remains in the role.
### Reproduction Steps
code_build_role = iam.Role.from_role_arn(self, "CodeBuildRole", "arn:aws:iam::"+account_id+":role/MyRoleWhatever")
project_dev_deploy = aws_codebuild.PipelineProject(self, "job_dev",
project_name="job_dev",
build_spec=codebuild.BuildSpec.from_source_filename("buildspec.yml"),
role=code_build_role,
environment={
"build_image": codebuild.LinuxBuildImage.STANDARD_5_0,
"privileged": True
}
)
deploy_dev_action = codepipeline_actions.CodeBuildAction(
action_name="BuildCopyJar",
project=project_dev_deploy,
input=source_artifact,
execute_batch_build=False,
variables_namespace = 'DevVariables'
)
And in another stack
cicd_role = iam.Role.from_role_arn(self, "CodeBuildRole",
"arn:aws:iam::" + account_id + ":role/MyRoleWhatever")
step= CodeBuildStep("Update with latest version production",
env={"SYNTH_BUCKET_NAME": bucket_name,
"GIT_VERSION": git_version,
"TYPE": pipeline_type,
"BUILD_NO": build_no
},
commands=[
"echo $GIT_VERSION $TYPE $BUILD_NO > live_status",
"aws s3 cp live_status s3://${SYNTH_BUCKET_NAME}/versions/live.status.PRODUCTION.txt --acl bucket-owner-full-control"],
role=cicd_role
)
### Possible Solution
I would throw an error if an IAM role has an existing policy with the same name as the one a cdk deployment tries to attach.
### Additional Information/Context
_No response_
### CDK CLI Version
2.49.0 (build 793dd76)
### Framework Version
_No response_
### Node.js Version
v16.15.0
### OS
macos
### Language
Python
### Language Version
_No response_
### Other information
_No response_
|
1.0
|
CodeBuild: when linking existing IAM roles with codebuild steps/stages no error is returned for existing policies with the same name - ### Describe the bug
I have two cloudformation stacks that create codepipeline pipelines defining codebuild actions that are performed by the same IAM role.
My stacks are created through the python cdk 2.X
the one stack links an existing IAM role declared like this to a aws_codebuild.PipelineProject
`
code_build_role = iam.Role.from_role_arn(self, "CodeBuildRole", "arn:aws:iam::"+account_id+":role/MyRoleWhatever")
`
And the other stack similarly links the same role to a aws_cdk.pipeline.CodeBuildStep
`
cicd_role = iam.Role.from_role_arn(self, "CodeBuildRole",
"arn:aws:iam::" + account_id + ":role/MyRoleWhatever")`
So the cloudformation template file created creates for the first stack something like this
`
{
"Resources": {
"CodeBuildRolePolicy0442214A": {
"Type": "AWS::IAM::Policy",
...
`
and for the second one again
`
{
"Resources": {
"CodeBuildRolePolicy0442214A": {
"Type": "AWS::IAM::Policy",
....
`
This had as a result that the existing IAM role had attached the same policy by the two stacks so the latest generated stack creation would replace the first applied policy.
**Ofc** I can pick different names which is a solution I could do when I found this issue(didn't figure out the exact issue early enough so I have just created a copy role for the latter stack)
The issue is that for example if you create different stacks that define the same name for pipelines you have an error saying that the pipeline exists so you know you have to change it.
The same issue would have happen for this case: The `cdk deploy` should tell you `this IAM role has already this policy defined.`
### Expected Behavior
When trying to create resources with `cdk deploy` and attaching a policy with the same name to an existing IAM role it should warn you or throw an error saying `this policy with this name already exists` so that you know and change it/fix it.
### Current Behavior
IAM roles declared in the cdk code with the same name in different stacks produce the same policy name and when the resources are created with a cdk deploy the last policy is the one that remains in the role.
### Reproduction Steps
code_build_role = iam.Role.from_role_arn(self, "CodeBuildRole", "arn:aws:iam::"+account_id+":role/MyRoleWhatever")
project_dev_deploy = aws_codebuild.PipelineProject(self, "job_dev",
project_name="job_dev",
build_spec=codebuild.BuildSpec.from_source_filename("buildspec.yml"),
role=code_build_role,
environment={
"build_image": codebuild.LinuxBuildImage.STANDARD_5_0,
"privileged": True
}
)
deploy_dev_action = codepipeline_actions.CodeBuildAction(
action_name="BuildCopyJar",
project=project_dev_deploy,
input=source_artifact,
execute_batch_build=False,
variables_namespace = 'DevVariables'
)
And in another stack
cicd_role = iam.Role.from_role_arn(self, "CodeBuildRole",
"arn:aws:iam::" + account_id + ":role/MyRoleWhatever")
step= CodeBuildStep("Update with latest version production",
env={"SYNTH_BUCKET_NAME": bucket_name,
"GIT_VERSION": git_version,
"TYPE": pipeline_type,
"BUILD_NO": build_no
},
commands=[
"echo $GIT_VERSION $TYPE $BUILD_NO > live_status",
"aws s3 cp live_status s3://${SYNTH_BUCKET_NAME}/versions/live.status.PRODUCTION.txt --acl bucket-owner-full-control"],
role=cicd_role
)
### Possible Solution
I would throw an error if an IAM role has an existing policy with the same name as the one a cdk deployment tries to attach.
### Additional Information/Context
_No response_
### CDK CLI Version
2.49.0 (build 793dd76)
### Framework Version
_No response_
### Node.js Version
v16.15.0
### OS
macos
### Language
Python
### Language Version
_No response_
### Other information
_No response_
|
non_defect
|
codebuild when linking existing iam roles with codebuild steps stages no error is returned for existing policies with the same name describe the bug i have two cloudformation stacks that create codepipeline pipelines defining codebuild actions that are performed by the same iam role my stacks are created through the python cdk x the one stack links an existing iam role declared like this to a aws codebuild pipelineproject code build role iam role from role arn self codebuildrole arn aws iam account id role myrolewhatever and the other stack similarly links the same role to a aws cdk pipeline codebuildstep cicd role iam role from role arn self codebuildrole arn aws iam account id role myrolewhatever so the cloudformation template file created creates for the first stack something like this resources type aws iam policy and for the second one again resources type aws iam policy this had as a result that the existing iam role had attached the same policy by the two stacks so the latest generated stack creation would replace the first applied policy ofc i can pick different names which is a solution i could do when i found this issue didn t figure out the exact issue early enough so i have just created a copy role for the latter stack the issue is that for example if you create different stacks that define the same name for pipelines you have an error saying that the pipeline exists so you know you have to change it the same issue would have happen for this case the cdk deploy should tell you this iam role has already this policy defined expected behavior when trying to create resources with cdk deploy and attaching a policy with the same name to an existing iam role it should warn you or throw an error saying this policy with this name already exists so that you know and change it fix it current behavior iam roles declared in the cdk code with the same name in different stacks produce the same policy name and when the resources are created with a cdk deploy the last policy is the one that remains in the role reproduction steps code build role iam role from role arn self codebuildrole arn aws iam account id role myrolewhatever project dev deploy aws codebuild pipelineproject self job dev project name job dev build spec codebuild buildspec from source filename buildspec yml role code build role environment build image codebuild linuxbuildimage standard privileged true deploy dev action codepipeline actions codebuildaction action name buildcopyjar project project dev deploy input source artifact execute batch build false variables namespace devvariables and in another stack cicd role iam role from role arn self codebuildrole arn aws iam account id role myrolewhatever step codebuildstep update with latest version production env synth bucket name bucket name git version git version type pipeline type build no build no commands echo git version type build no live status aws cp live status synth bucket name versions live status production txt acl bucket owner full control role cicd role possible solution i would throw an error if an iam role has an existing policy with the same name as the one a cdk deployment tries to attach additional information context no response cdk cli version build framework version no response node js version os macos language python language version no response other information no response
| 0
|
121,936
| 17,672,627,900
|
IssuesEvent
|
2021-08-23 08:23:10
|
AlexRogalskiy/charts
|
https://api.github.com/repos/AlexRogalskiy/charts
|
opened
|
CVE-2019-1010266 (Medium) detected in lodash-2.4.2.tgz
|
security vulnerability
|
## CVE-2019-1010266 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>lodash-2.4.2.tgz</b></p></summary>
<p>A utility library delivering consistency, customization, performance, & extras.</p>
<p>Library home page: <a href="https://registry.npmjs.org/lodash/-/lodash-2.4.2.tgz">https://registry.npmjs.org/lodash/-/lodash-2.4.2.tgz</a></p>
<p>Path to dependency file: charts/package.json</p>
<p>Path to vulnerable library: charts/node_modules/dockerfile_lint/node_modules/lodash/package.json</p>
<p>
Dependency Hierarchy:
- dockerfile_lint-0.3.4.tgz (Root Library)
- :x: **lodash-2.4.2.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/AlexRogalskiy/charts/commit/8f914dc948e060cb67870c029b330d65dba91ae1">8f914dc948e060cb67870c029b330d65dba91ae1</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
lodash prior to 4.17.11 is affected by: CWE-400: Uncontrolled Resource Consumption. The impact is: Denial of service. The component is: Date handler. The attack vector is: Attacker provides very long strings, which the library attempts to match using a regular expression. The fixed version is: 4.17.11.
<p>Publish Date: 2019-07-17
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2019-1010266>CVE-2019-1010266</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>6.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: Low
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-1010266">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-1010266</a></p>
<p>Release Date: 2019-07-17</p>
<p>Fix Resolution: 4.17.11</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
True
|
CVE-2019-1010266 (Medium) detected in lodash-2.4.2.tgz - ## CVE-2019-1010266 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>lodash-2.4.2.tgz</b></p></summary>
<p>A utility library delivering consistency, customization, performance, & extras.</p>
<p>Library home page: <a href="https://registry.npmjs.org/lodash/-/lodash-2.4.2.tgz">https://registry.npmjs.org/lodash/-/lodash-2.4.2.tgz</a></p>
<p>Path to dependency file: charts/package.json</p>
<p>Path to vulnerable library: charts/node_modules/dockerfile_lint/node_modules/lodash/package.json</p>
<p>
Dependency Hierarchy:
- dockerfile_lint-0.3.4.tgz (Root Library)
- :x: **lodash-2.4.2.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/AlexRogalskiy/charts/commit/8f914dc948e060cb67870c029b330d65dba91ae1">8f914dc948e060cb67870c029b330d65dba91ae1</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
lodash prior to 4.17.11 is affected by: CWE-400: Uncontrolled Resource Consumption. The impact is: Denial of service. The component is: Date handler. The attack vector is: Attacker provides very long strings, which the library attempts to match using a regular expression. The fixed version is: 4.17.11.
<p>Publish Date: 2019-07-17
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2019-1010266>CVE-2019-1010266</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>6.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: Low
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-1010266">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-1010266</a></p>
<p>Release Date: 2019-07-17</p>
<p>Fix Resolution: 4.17.11</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
non_defect
|
cve medium detected in lodash tgz cve medium severity vulnerability vulnerable library lodash tgz a utility library delivering consistency customization performance extras library home page a href path to dependency file charts package json path to vulnerable library charts node modules dockerfile lint node modules lodash package json dependency hierarchy dockerfile lint tgz root library x lodash tgz vulnerable library found in head commit a href vulnerability details lodash prior to is affected by cwe uncontrolled resource consumption the impact is denial of service the component is date handler the attack vector is attacker provides very long strings which the library attempts to match using a regular expression the fixed version is publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required low user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution step up your open source security game with whitesource
| 0
|
265,699
| 23,190,819,258
|
IssuesEvent
|
2022-08-01 12:31:02
|
elastic/kibana
|
https://api.github.com/repos/elastic/kibana
|
closed
|
Failing test: Chrome X-Pack UI Functional Tests.x-pack/test/functional/apps/discover/feature_controls/discover_security·ts - discover feature controls discover feature controls security global discover all privileges "before all" hook for "shows discover navlink"
|
failed-test needs-team
|
A test failed on a tracked branch
```
Error: retry.try timeout: TimeoutError: Waiting for element to be located By(css selector, [data-test-subj="discover-dataView-switch-link"])
Wait timed out after 10016ms
at /var/lib/buildkite-agent/builds/kb-n2-4-spot-11235eea4be5e425/elastic/kibana-on-merge/kibana/node_modules/selenium-webdriver/lib/webdriver.js:906:17
at processTicksAndRejections (node:internal/process/task_queues:96:5)
at onFailure (test/common/services/retry/retry_for_success.ts:17:9)
at retryForSuccess (test/common/services/retry/retry_for_success.ts:59:13)
at RetryService.try (test/common/services/retry/retry.ts:31:12)
at Proxy.clickByCssSelector (test/functional/services/common/find.ts:368:5)
at TestSubjects.click (test/functional/services/common/test_subjects.ts:104:5)
at DiscoverPageObject.selectIndexPattern (test/functional/page_objects/discover_page.ts:502:5)
at Context.<anonymous> (x-pack/test/functional/apps/discover/feature_controls/discover_security.ts:93:9)
at Object.apply (node_modules/@kbn/test/target_node/functional_test_runner/lib/mocha/wrap_function.js:87:16)
```
First failure: [CI Build - main](https://buildkite.com/elastic/kibana-on-merge/builds/19009#0182432e-acc6-4f76-993e-b9a24dc48f82)
<!-- kibanaCiData = {"failed-test":{"test.class":"Chrome X-Pack UI Functional Tests.x-pack/test/functional/apps/discover/feature_controls/discover_security·ts","test.name":"discover feature controls discover feature controls security global discover all privileges \"before all\" hook for \"shows discover navlink\"","test.failCount":4}} -->
|
1.0
|
Failing test: Chrome X-Pack UI Functional Tests.x-pack/test/functional/apps/discover/feature_controls/discover_security·ts - discover feature controls discover feature controls security global discover all privileges "before all" hook for "shows discover navlink" - A test failed on a tracked branch
```
Error: retry.try timeout: TimeoutError: Waiting for element to be located By(css selector, [data-test-subj="discover-dataView-switch-link"])
Wait timed out after 10016ms
at /var/lib/buildkite-agent/builds/kb-n2-4-spot-11235eea4be5e425/elastic/kibana-on-merge/kibana/node_modules/selenium-webdriver/lib/webdriver.js:906:17
at processTicksAndRejections (node:internal/process/task_queues:96:5)
at onFailure (test/common/services/retry/retry_for_success.ts:17:9)
at retryForSuccess (test/common/services/retry/retry_for_success.ts:59:13)
at RetryService.try (test/common/services/retry/retry.ts:31:12)
at Proxy.clickByCssSelector (test/functional/services/common/find.ts:368:5)
at TestSubjects.click (test/functional/services/common/test_subjects.ts:104:5)
at DiscoverPageObject.selectIndexPattern (test/functional/page_objects/discover_page.ts:502:5)
at Context.<anonymous> (x-pack/test/functional/apps/discover/feature_controls/discover_security.ts:93:9)
at Object.apply (node_modules/@kbn/test/target_node/functional_test_runner/lib/mocha/wrap_function.js:87:16)
```
First failure: [CI Build - main](https://buildkite.com/elastic/kibana-on-merge/builds/19009#0182432e-acc6-4f76-993e-b9a24dc48f82)
<!-- kibanaCiData = {"failed-test":{"test.class":"Chrome X-Pack UI Functional Tests.x-pack/test/functional/apps/discover/feature_controls/discover_security·ts","test.name":"discover feature controls discover feature controls security global discover all privileges \"before all\" hook for \"shows discover navlink\"","test.failCount":4}} -->
|
non_defect
|
failing test chrome x pack ui functional tests x pack test functional apps discover feature controls discover security·ts discover feature controls discover feature controls security global discover all privileges before all hook for shows discover navlink a test failed on a tracked branch error retry try timeout timeouterror waiting for element to be located by css selector wait timed out after at var lib buildkite agent builds kb spot elastic kibana on merge kibana node modules selenium webdriver lib webdriver js at processticksandrejections node internal process task queues at onfailure test common services retry retry for success ts at retryforsuccess test common services retry retry for success ts at retryservice try test common services retry retry ts at proxy clickbycssselector test functional services common find ts at testsubjects click test functional services common test subjects ts at discoverpageobject selectindexpattern test functional page objects discover page ts at context x pack test functional apps discover feature controls discover security ts at object apply node modules kbn test target node functional test runner lib mocha wrap function js first failure
| 0
|
61,688
| 14,633,220,562
|
IssuesEvent
|
2020-12-24 01:07:31
|
ioana-nicolae/first
|
https://api.github.com/repos/ioana-nicolae/first
|
opened
|
WS-2020-0218 (High) detected in merge-1.2.0.tgz
|
security vulnerability
|
## WS-2020-0218 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>merge-1.2.0.tgz</b></p></summary>
<p>Merge multiple objects into one, optionally creating a new cloned object. Similar to the jQuery.extend but more flexible. Works in Node.js and the browser.</p>
<p>Library home page: <a href="https://registry.npmjs.org/merge/-/merge-1.2.0.tgz">https://registry.npmjs.org/merge/-/merge-1.2.0.tgz</a></p>
<p>Path to dependency file: first/angular.js-master/angular.js-master/yarn.lock</p>
<p>Path to vulnerable library: first/angular.js-master/angular.js-master/yarn.lock</p>
<p>
Dependency Hierarchy:
- commitizen-2.9.5.tgz (Root Library)
- find-node-modules-1.0.4.tgz
- :x: **merge-1.2.0.tgz** (Vulnerable Library)
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
A Prototype Pollution vulnerability was found in merge before 2.1.0 via the merge.recursive function. It can be tricked into adding or modifying properties of the Object prototype. These properties will be present on all objects.
<p>Publish Date: 2020-10-09
<p>URL: <a href=https://github.com/yeikos/js.merge/pull/38>WS-2020-0218</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/yeikos/js.merge/pull/38">https://github.com/yeikos/js.merge/pull/38</a></p>
<p>Release Date: 2020-10-09</p>
<p>Fix Resolution: merge - 2.1.0</p>
</p>
</details>
<p></p>
<!-- <REMEDIATE>{"isOpenPROnVulnerability":false,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"javascript/Node.js","packageName":"merge","packageVersion":"1.2.0","isTransitiveDependency":true,"dependencyTree":"commitizen:2.9.5;find-node-modules:1.0.4;merge:1.2.0","isMinimumFixVersionAvailable":true,"minimumFixVersion":"merge - 2.1.0"}],"vulnerabilityIdentifier":"WS-2020-0218","vulnerabilityDetails":"A Prototype Pollution vulnerability was found in merge before 2.1.0 via the merge.recursive function. It can be tricked into adding or modifying properties of the Object prototype. These properties will be present on all objects.","vulnerabilityUrl":"https://github.com/yeikos/js.merge/pull/38","cvss3Severity":"high","cvss3Score":"7.5","cvss3Metrics":{"A":"High","AC":"Low","PR":"None","S":"Unchanged","C":"None","UI":"None","AV":"Network","I":"None"},"extraData":{}}</REMEDIATE> -->
|
True
|
WS-2020-0218 (High) detected in merge-1.2.0.tgz - ## WS-2020-0218 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>merge-1.2.0.tgz</b></p></summary>
<p>Merge multiple objects into one, optionally creating a new cloned object. Similar to the jQuery.extend but more flexible. Works in Node.js and the browser.</p>
<p>Library home page: <a href="https://registry.npmjs.org/merge/-/merge-1.2.0.tgz">https://registry.npmjs.org/merge/-/merge-1.2.0.tgz</a></p>
<p>Path to dependency file: first/angular.js-master/angular.js-master/yarn.lock</p>
<p>Path to vulnerable library: first/angular.js-master/angular.js-master/yarn.lock</p>
<p>
Dependency Hierarchy:
- commitizen-2.9.5.tgz (Root Library)
- find-node-modules-1.0.4.tgz
- :x: **merge-1.2.0.tgz** (Vulnerable Library)
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
A Prototype Pollution vulnerability was found in merge before 2.1.0 via the merge.recursive function. It can be tricked into adding or modifying properties of the Object prototype. These properties will be present on all objects.
<p>Publish Date: 2020-10-09
<p>URL: <a href=https://github.com/yeikos/js.merge/pull/38>WS-2020-0218</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/yeikos/js.merge/pull/38">https://github.com/yeikos/js.merge/pull/38</a></p>
<p>Release Date: 2020-10-09</p>
<p>Fix Resolution: merge - 2.1.0</p>
</p>
</details>
<p></p>
<!-- <REMEDIATE>{"isOpenPROnVulnerability":false,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"javascript/Node.js","packageName":"merge","packageVersion":"1.2.0","isTransitiveDependency":true,"dependencyTree":"commitizen:2.9.5;find-node-modules:1.0.4;merge:1.2.0","isMinimumFixVersionAvailable":true,"minimumFixVersion":"merge - 2.1.0"}],"vulnerabilityIdentifier":"WS-2020-0218","vulnerabilityDetails":"A Prototype Pollution vulnerability was found in merge before 2.1.0 via the merge.recursive function. It can be tricked into adding or modifying properties of the Object prototype. These properties will be present on all objects.","vulnerabilityUrl":"https://github.com/yeikos/js.merge/pull/38","cvss3Severity":"high","cvss3Score":"7.5","cvss3Metrics":{"A":"High","AC":"Low","PR":"None","S":"Unchanged","C":"None","UI":"None","AV":"Network","I":"None"},"extraData":{}}</REMEDIATE> -->
|
non_defect
|
ws high detected in merge tgz ws high severity vulnerability vulnerable library merge tgz merge multiple objects into one optionally creating a new cloned object similar to the jquery extend but more flexible works in node js and the browser library home page a href path to dependency file first angular js master angular js master yarn lock path to vulnerable library first angular js master angular js master yarn lock dependency hierarchy commitizen tgz root library find node modules tgz x merge tgz vulnerable library found in base branch master vulnerability details a prototype pollution vulnerability was found in merge before via the merge recursive function it can be tricked into adding or modifying properties of the object prototype these properties will be present on all objects publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution merge isopenpronvulnerability false ispackagebased true isdefaultbranch true packages vulnerabilityidentifier ws vulnerabilitydetails a prototype pollution vulnerability was found in merge before via the merge recursive function it can be tricked into adding or modifying properties of the object prototype these properties will be present on all objects vulnerabilityurl
| 0
|
434,172
| 12,515,276,078
|
IssuesEvent
|
2020-06-03 07:21:58
|
canonical-web-and-design/build.snapcraft.io
|
https://api.github.com/repos/canonical-web-and-design/build.snapcraft.io
|
closed
|
[Bug] “Add repos” > “Cancel” > needless table reloading
|
Add repos Priority: Low
|
1\. Go to the Dashboard.
2\. Choose “Add repos”.
3\. Choose “Cancel”.
What happens:
1\. There is a spinner while the table loads.
3\. There is a spinner while the table reloads.
What should happen:
3\. Since you haven’t changed the contents of the table, it shouldn’t need to reload.
Comments:
---
Creator: Robin Winslow
Assignees:
Copied from Trello card: https://trello.com/c/Ja8ZN2Z3/306-bug-add-repos-cancel-needless-table-reloading
Labels: copy to github
|
1.0
|
[Bug] “Add repos” > “Cancel” > needless table reloading - 1\. Go to the Dashboard.
2\. Choose “Add repos”.
3\. Choose “Cancel”.
What happens:
1\. There is a spinner while the table loads.
3\. There is a spinner while the table reloads.
What should happen:
3\. Since you haven’t changed the contents of the table, it shouldn’t need to reload.
Comments:
---
Creator: Robin Winslow
Assignees:
Copied from Trello card: https://trello.com/c/Ja8ZN2Z3/306-bug-add-repos-cancel-needless-table-reloading
Labels: copy to github
|
non_defect
|
“add repos” “cancel” needless table reloading go to the dashboard choose “add repos” choose “cancel” what happens there is a spinner while the table loads there is a spinner while the table reloads what should happen since you haven’t changed the contents of the table it shouldn’t need to reload comments creator robin winslow assignees copied from trello card labels copy to github
| 0
|
61,160
| 17,023,621,207
|
IssuesEvent
|
2021-07-03 02:58:06
|
tomhughes/trac-tickets
|
https://api.github.com/repos/tomhughes/trac-tickets
|
closed
|
Cannot reload background
|
Component: potlatch (flash editor) Priority: major Resolution: wontfix Type: defect
|
**[Submitted to the original trac issue database at 2.20am, Wednesday, 4th August 2010]**
Potlatch got stuck loading a pair of background tiles. Annoyingly, they were the two particular tiles I needed to trace from. No matter how far I panned in either direction before returning to the locus, the tiles would not load. Even after reloading Potlatch the tiles were still missing.
There needs to be some way of forcing a reload of the background tiles without resorting to extreme measures such as "restart the browser, clear cache, etc."
|
1.0
|
Cannot reload background - **[Submitted to the original trac issue database at 2.20am, Wednesday, 4th August 2010]**
Potlatch got stuck loading a pair of background tiles. Annoyingly, they were the two particular tiles I needed to trace from. No matter how far I panned in either direction before returning to the locus, the tiles would not load. Even after reloading Potlatch the tiles were still missing.
There needs to be some way of forcing a reload of the background tiles without resorting to extreme measures such as "restart the browser, clear cache, etc."
|
defect
|
cannot reload background potlatch got stuck loading a pair of background tiles annoyingly they were the two particular tiles i needed to trace from no matter how far i panned in either direction before returning to the locus the tiles would not load even after reloading potlatch the tiles were still missing there needs to be some way of forcing a reload of the background tiles without resorting to extreme measures such as restart the browser clear cache etc
| 1
|
277,601
| 30,659,858,602
|
IssuesEvent
|
2023-07-25 14:22:40
|
pazhanivel07/openssl_1_0_2
|
https://api.github.com/repos/pazhanivel07/openssl_1_0_2
|
opened
|
CVE-2009-4355 (Medium) detected in opensslOpenSSL_1_0_2
|
Mend: dependency security vulnerability
|
## CVE-2009-4355 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>opensslOpenSSL_1_0_2</b></p></summary>
<p>
<p>TLS/SSL and crypto library</p>
<p>Library home page: <a href=https://github.com/openssl/openssl.git>https://github.com/openssl/openssl.git</a></p>
<p>Found in HEAD commit: <a href="https://github.com/pazhanivel07/openssl_1_0_2/commit/324810317981b91bee177f96efc4d7b59e34525c">324810317981b91bee177f96efc4d7b59e34525c</a></p>
<p>Found in base branch: <b>main</b></p></p>
</details>
</p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Source Files (1)</summary>
<p></p>
<p>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/crypto/comp/c_zlib.c</b>
</p>
</details>
<p></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png?' width=19 height=20> Vulnerability Details</summary>
<p>
Memory leak in the zlib_stateful_finish function in crypto/comp/c_zlib.c in OpenSSL 0.9.8l and earlier and 1.0.0 Beta through Beta 4 allows remote attackers to cause a denial of service (memory consumption) via vectors that trigger incorrect calls to the CRYPTO_cleanup_all_ex_data function, as demonstrated by use of SSLv3 and PHP with the Apache HTTP Server, a related issue to CVE-2008-1678.
<p>Publish Date: 2010-01-14
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2009-4355>CVE-2009-4355</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.3</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: Low
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2009-4355">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2009-4355</a></p>
<p>Release Date: 2010-01-14</p>
<p>Fix Resolution: 0.9.8m, 1.0.0beta5</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
True
|
CVE-2009-4355 (Medium) detected in opensslOpenSSL_1_0_2 - ## CVE-2009-4355 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>opensslOpenSSL_1_0_2</b></p></summary>
<p>
<p>TLS/SSL and crypto library</p>
<p>Library home page: <a href=https://github.com/openssl/openssl.git>https://github.com/openssl/openssl.git</a></p>
<p>Found in HEAD commit: <a href="https://github.com/pazhanivel07/openssl_1_0_2/commit/324810317981b91bee177f96efc4d7b59e34525c">324810317981b91bee177f96efc4d7b59e34525c</a></p>
<p>Found in base branch: <b>main</b></p></p>
</details>
</p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Source Files (1)</summary>
<p></p>
<p>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/crypto/comp/c_zlib.c</b>
</p>
</details>
<p></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png?' width=19 height=20> Vulnerability Details</summary>
<p>
Memory leak in the zlib_stateful_finish function in crypto/comp/c_zlib.c in OpenSSL 0.9.8l and earlier and 1.0.0 Beta through Beta 4 allows remote attackers to cause a denial of service (memory consumption) via vectors that trigger incorrect calls to the CRYPTO_cleanup_all_ex_data function, as demonstrated by use of SSLv3 and PHP with the Apache HTTP Server, a related issue to CVE-2008-1678.
<p>Publish Date: 2010-01-14
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2009-4355>CVE-2009-4355</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.3</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: Low
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2009-4355">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2009-4355</a></p>
<p>Release Date: 2010-01-14</p>
<p>Fix Resolution: 0.9.8m, 1.0.0beta5</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
non_defect
|
cve medium detected in opensslopenssl cve medium severity vulnerability vulnerable library opensslopenssl tls ssl and crypto library library home page a href found in head commit a href found in base branch main vulnerable source files crypto comp c zlib c vulnerability details memory leak in the zlib stateful finish function in crypto comp c zlib c in openssl and earlier and beta through beta allows remote attackers to cause a denial of service memory consumption via vectors that trigger incorrect calls to the crypto cleanup all ex data function as demonstrated by use of and php with the apache http server a related issue to cve publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact low for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution step up your open source security game with mend
| 0
|
81,478
| 30,870,014,080
|
IssuesEvent
|
2023-08-03 10:38:33
|
vector-im/element-x-ios
|
https://api.github.com/repos/vector-im/element-x-ios
|
closed
|
Opening invites locks the app solid
|
T-Defect S-Critical O-Occasional Team: Element X Feature
|
### Steps to reproduce
1. open invites on an account with ~100 pending invites
2. app ui wedges entirely solid for 30s or more
3. invite details get displayed
4. app ui wedges entirely for another 30s
### Outcome
#### What did you expect?
invite ui which does’t wedge
#### What happened instead?
total ui lockup for 30s +
### Your phone model
_No response_
### Operating system version
_No response_
### Application version
254
### Homeserver
_No response_
### Will you send logs?
Yes
|
1.0
|
Opening invites locks the app solid - ### Steps to reproduce
1. open invites on an account with ~100 pending invites
2. app ui wedges entirely solid for 30s or more
3. invite details get displayed
4. app ui wedges entirely for another 30s
### Outcome
#### What did you expect?
invite ui which does’t wedge
#### What happened instead?
total ui lockup for 30s +
### Your phone model
_No response_
### Operating system version
_No response_
### Application version
254
### Homeserver
_No response_
### Will you send logs?
Yes
|
defect
|
opening invites locks the app solid steps to reproduce open invites on an account with pending invites app ui wedges entirely solid for or more invite details get displayed app ui wedges entirely for another outcome what did you expect invite ui which does’t wedge what happened instead total ui lockup for your phone model no response operating system version no response application version homeserver no response will you send logs yes
| 1
|
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.