| column | dtype | values |
|---|---|---|
| Unnamed: 0 | int64 | 0 to 832k |
| id | float64 | 2.49B to 32.1B |
| type | string | 1 class |
| created_at | string | length 19 |
| repo | string | lengths 7 to 112 |
| repo_url | string | lengths 36 to 141 |
| action | string | 3 classes |
| title | string | lengths 1 to 744 |
| labels | string | lengths 4 to 574 |
| body | string | lengths 9 to 211k |
| index | string | 10 classes |
| text_combine | string | lengths 96 to 211k |
| label | string | 2 classes |
| text | string | lengths 96 to 188k |
| binary_label | int64 | 0 to 1 |

Sample rows:
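Given the schema above, a quick sanity check on such a dump can be scripted with pandas. The miniature DataFrame below is a stand-in built from two of the sample rows (the dump's file location is not given here, so nothing is read from disk):

```python
import pandas as pd

# Two-row stand-in for the dump described above, using values from the sample rows.
df = pd.DataFrame({
    "repo": ["bitPogo/kmock", "JukkaL/mypy"],
    "action": ["closed", "closed"],
    "label": ["process", "non_process"],
    "binary_label": [1, 0],
})

# `label` appears to be the string form of `binary_label`; verify they agree.
assert ((df["label"] == "process") == (df["binary_label"] == 1)).all()

# Class balance of the target column.
print(df["binary_label"].value_counts().to_dict())
```

The same two checks scale unchanged to the full 832k-row dump once it is loaded with `pd.read_csv` or similar.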

Unnamed: 0: 16,695
id: 21,793,706,881
type: IssuesEvent
created_at: 2022-05-15 09:56:02
repo: bitPogo/kmock
repo_url: https://api.github.com/repos/bitPogo/kmock
action: closed
title: Consider a DummyFactory for data classes
labels: enhancement kmock kmock-processor kmock-gradle
body:
## Description
<!--- Provide a detailed introduction to the issue itself, and why you consider it to be a bug -->
Once KFixture has been progressed further and is ready to release a much easier support for relaxing will be possible.
However to prepare slowly for a probable plugin a first step should be to provide an entry point for data class dummies.
Acceptance Criteria
1. The extension allows via feature flag to opt-in this behaviour but is disabled by default
2. The extension or via Annotation picks up declared dummies
3. The processor will generate a factory which supports project internal data classes in a limited scope (no generics for now)
index: 1.0
text_combine:
Consider a DummyFactory for data classes - ## Description
<!--- Provide a detailed introduction to the issue itself, and why you consider it to be a bug -->
Once KFixture has been progressed further and is ready to release a much easier support for relaxing will be possible.
However to prepare slowly for a probable plugin a first step should be to provide an entry point for data class dummies.
Acceptance Criteria
1. The extension allows via feature flag to opt-in this behaviour but is disabled by default
2. The extension or via Annotation picks up declared dummies
3. The processor will generate a factory which supports project internal data classes in a limited scope (no generics for now)
label: process
text:
consider a dummyfactory for data classes description once kfixture has been progressed further and is ready to release a much easier support for relaxing will be possible however to prepare slowly for a probable plugin a first step should be to provide an entry point for data class dummies acceptance criteria the extension allows via feature flag to opt in this behaviour but is disabled by default the extension or via annotation picks up declared dummies the processor will generate a factory which supports project internal data classes in a limited scope no generics for now
binary_label: 1
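Comparing `text_combine` with `text` in the row above suggests the `text` column is a normalized form: lowercased, with HTML comments, URLs, punctuation, and digits stripped, and whitespace collapsed. A sketch of such a normalizer follows; this is an inference from the rows, not a documented pipeline:

```python
import re

def normalize(text: str) -> str:
    """Rough reconstruction of how `text` appears to be derived from
    `text_combine`. The exact original pipeline is unknown."""
    text = re.sub(r"<!--.*?-->", " ", text, flags=re.DOTALL)  # HTML comments
    text = re.sub(r"https?://\S+", " ", text)                 # bare URLs
    text = text.lower()
    text = re.sub(r"[^a-z\s]", " ", text)                     # punctuation and digits
    return re.sub(r"\s+", " ", text).strip()                  # collapse whitespace

print(normalize("Consider a DummyFactory for data classes - ## Description"))
# prints "consider a dummyfactory for data classes description",
# matching the start of the `text` field in the row above
```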

Unnamed: 0: 10,085
id: 13,044,161,989
type: IssuesEvent
created_at: 2020-07-29 03:47:28
repo: tikv/tikv
repo_url: https://api.github.com/repos/tikv/tikv
action: closed
title: UCP: Migrate scalar function `SubTimeStringNull` from TiDB
labels: challenge-program-2 component/coprocessor difficulty/easy sig/coprocessor
body:
## Description
Port the scalar function `SubTimeStringNull` from TiDB to coprocessor.
## Score
* 50
## Mentor(s)
* @andylokandy
## Recommended Skills
* Rust programming
## Learning Materials
Already implemented expressions ported from TiDB
- https://github.com/tikv/tikv/tree/master/components/tidb_query/src/rpn_expr)
- https://github.com/tikv/tikv/tree/master/components/tidb_query/src/expr)
index: 2.0
text_combine:
UCP: Migrate scalar function `SubTimeStringNull` from TiDB -
## Description
Port the scalar function `SubTimeStringNull` from TiDB to coprocessor.
## Score
* 50
## Mentor(s)
* @andylokandy
## Recommended Skills
* Rust programming
## Learning Materials
Already implemented expressions ported from TiDB
- https://github.com/tikv/tikv/tree/master/components/tidb_query/src/rpn_expr)
- https://github.com/tikv/tikv/tree/master/components/tidb_query/src/expr)
label: process
text:
ucp migrate scalar function subtimestringnull from tidb description port the scalar function subtimestringnull from tidb to coprocessor score mentor s andylokandy recommended skills rust programming learning materials already implemented expressions ported from tidb
binary_label: 1

Unnamed: 0: 25,189
id: 2,677,853,447
type: IssuesEvent
created_at: 2015-03-26 04:43:01
repo: JukkaL/mypy
repo_url: https://api.github.com/repos/JukkaL/mypy
action: closed
title: Allow __init__ with signature but no return type
labels: bug priority
body:
Code:
```
class Visitor:
def __init__(self, a: int):
pass
```
Error:
```
x.py: In member "__init__" of class "Visitor":
x.py, line 2: Cannot define return type for "__init__"
```
The return type is `Any` (implicitly), and `Any` should be a valid return type for `__init__`.
This was reported by Guido.
index: 1.0
text_combine:
Allow __init__ with signature but no return type - Code:
```
class Visitor:
def __init__(self, a: int):
pass
```
Error:
```
x.py: In member "__init__" of class "Visitor":
x.py, line 2: Cannot define return type for "__init__"
```
The return type is `Any` (implicitly), and `Any` should be a valid return type for `__init__`.
This was reported by Guido.
label: non_process
text:
allow init with signature but no return type code class visitor def init self a int pass error x py in member init of class visitor x py line cannot define return type for init the return type is any implicitly and any should be a valid return type for init this was reported by guido
binary_label: 0
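The mypy record above argues that an annotated `__init__` with no return annotation should be accepted. The convention that later settled in PEP 484 tooling is sketched below; this reflects current type-checker behaviour, not the 2015 code in the issue:

```python
class Visitor:
    # Parameters may be annotated while the return type is omitted;
    # checkers infer it. Writing `-> None` explicitly is also accepted.
    def __init__(self, a: int) -> None:
        self.a = a

v = Visitor(3)
assert v.a == 3
```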

Unnamed: 0: 770,758
id: 27,055,236,027
type: IssuesEvent
created_at: 2023-02-13 15:45:34
repo: googleapis/python-iam
repo_url: https://api.github.com/repos/googleapis/python-iam
action: closed
title: Samples tests are failing with `PermissionDenied`
labels: api: iam type: bug priority: p2 samples
body:
The sample test for `create_deny_policy` is failing with `PermissionDenied`. See build log [here](https://source.cloud.google.com/results/invocations/57ea4d08-0e28-4a9f-b472-ef3db77dc055/log) and stack trace below:
```
create_deny_policy.py:103: in create_deny_policy
result = policies_client.create_policy(request=request).result()
../../google/cloud/iam_v2/services/policies/client.py:787: in create_policy
metadata=metadata,
.nox/py-3-7/lib/python3.7/site-packages/google/api_core/gapic_v1/method.py:154: in __call__
return wrapped_func(*args, **kwargs)
.nox/py-3-7/lib/python3.7/site-packages/google/api_core/retry.py:288: in retry_wrapped_func
on_error=on_error,
.nox/py-3-7/lib/python3.7/site-packages/google/api_core/retry.py:190: in retry_target
return target()
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
args = (parent: "policies/cloudresourcemanager.googleapis.com%2Fprojects%2Fpython-docs-samples-tests/denypolicies"
policy {
...eing deleted has a tag with the value test"
}
}
policy_id: "test-deny-policy-e49f57dd-603b-48b5-bcde-fd81ab1d17f6"
,)
kwargs = {'metadata': [('x-goog-request-params', 'parent=policies/cloudresourcemanager.googleapis.com%252Fprojects%252Fpython-docs-samples-tests/denypolicies'), ('x-goog-api-client', 'gl-python/3.7.12 grpc/1.50.0 gax/2.10.2 gapic/2.8.2')]}
@functools.wraps(callable_)
def error_remapped_callable(*args, **kwargs):
try:
return callable_(*args, **kwargs)
except grpc.RpcError as exc:
> raise exceptions.from_grpc_error(exc) from exc
E google.api_core.exceptions.PermissionDenied: 403 Permission iam.googleapis.com/denypolicies.create denied on resource cloudresourcemanager.googleapis.com/projects/python-docs-samples-tests
.nox/py-3-7/lib/python3.7/site-packages/google/api_core/grpc_helpers.py:74: PermissionDenied
```
index: 1.0
text_combine:
Samples tests are failing with `PermissionDenied` - The sample test for `create_deny_policy` is failing with `PermissionDenied`. See build log [here](https://source.cloud.google.com/results/invocations/57ea4d08-0e28-4a9f-b472-ef3db77dc055/log) and stack trace below:
```
create_deny_policy.py:103: in create_deny_policy
result = policies_client.create_policy(request=request).result()
../../google/cloud/iam_v2/services/policies/client.py:787: in create_policy
metadata=metadata,
.nox/py-3-7/lib/python3.7/site-packages/google/api_core/gapic_v1/method.py:154: in __call__
return wrapped_func(*args, **kwargs)
.nox/py-3-7/lib/python3.7/site-packages/google/api_core/retry.py:288: in retry_wrapped_func
on_error=on_error,
.nox/py-3-7/lib/python3.7/site-packages/google/api_core/retry.py:190: in retry_target
return target()
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
args = (parent: "policies/cloudresourcemanager.googleapis.com%2Fprojects%2Fpython-docs-samples-tests/denypolicies"
policy {
...eing deleted has a tag with the value test"
}
}
policy_id: "test-deny-policy-e49f57dd-603b-48b5-bcde-fd81ab1d17f6"
,)
kwargs = {'metadata': [('x-goog-request-params', 'parent=policies/cloudresourcemanager.googleapis.com%252Fprojects%252Fpython-docs-samples-tests/denypolicies'), ('x-goog-api-client', 'gl-python/3.7.12 grpc/1.50.0 gax/2.10.2 gapic/2.8.2')]}
@functools.wraps(callable_)
def error_remapped_callable(*args, **kwargs):
try:
return callable_(*args, **kwargs)
except grpc.RpcError as exc:
> raise exceptions.from_grpc_error(exc) from exc
E google.api_core.exceptions.PermissionDenied: 403 Permission iam.googleapis.com/denypolicies.create denied on resource cloudresourcemanager.googleapis.com/projects/python-docs-samples-tests
.nox/py-3-7/lib/python3.7/site-packages/google/api_core/grpc_helpers.py:74: PermissionDenied
```
label: non_process
text:
samples tests are failing with permissiondenied the sample test for create deny policy is failing with permissiondenied see build log and stack trace below create deny policy py in create deny policy result policies client create policy request request result google cloud iam services policies client py in create policy metadata metadata nox py lib site packages google api core gapic method py in call return wrapped func args kwargs nox py lib site packages google api core retry py in retry wrapped func on error on error nox py lib site packages google api core retry py in retry target return target args parent policies cloudresourcemanager googleapis com docs samples tests denypolicies policy eing deleted has a tag with the value test policy id test deny policy bcde kwargs metadata functools wraps callable def error remapped callable args kwargs try return callable args kwargs except grpc rpcerror as exc raise exceptions from grpc error exc from exc e google api core exceptions permissiondenied permission iam googleapis com denypolicies create denied on resource cloudresourcemanager googleapis com projects python docs samples tests nox py lib site packages google api core grpc helpers py permissiondenied
binary_label: 0
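The stack trace in the row above passes through an `error_remapped_callable` wrapper that converts transport-level gRPC errors into typed API exceptions. The pattern can be sketched in isolation; the class and function names below are stand-ins, not the real google-api-core or grpc symbols:

```python
import functools

class TransportError(Exception):
    """Stand-in for a low-level transport error such as grpc.RpcError."""

class PermissionDenied(Exception):
    """Stand-in for a typed API exception like the 403 in the trace above."""

def error_remapping(func):
    """Re-raise transport errors as typed API exceptions, chaining the
    original with `from` so the full cause survives, as in the trace."""
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        try:
            return func(*args, **kwargs)
        except TransportError as exc:
            raise PermissionDenied(f"403 {exc}") from exc
    return wrapper

@error_remapping
def create_policy(request):
    # Simulates the server rejecting the call, as in the failing sample test.
    raise TransportError("denied on resource")

try:
    create_policy({})
except PermissionDenied as exc:
    assert isinstance(exc.__cause__, TransportError)
```

Chaining with `raise ... from exc` is what makes the original gRPC error visible at the bottom of tracebacks like the one in this record.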

Unnamed: 0: 773,562
id: 27,161,881,069
type: IssuesEvent
created_at: 2023-02-17 12:33:29
repo: DataverseNO/local.dataverse.no
repo_url: https://api.github.com/repos/DataverseNO/local.dataverse.no
action: closed
title: Add Data Privacy Statement to dataverse.no
labels: enhancement PRIORITY
body:
A link to the DataverseNO Privacy Statement should be added to dataverse.no. See how to in the Dataverse guide: [https://guides.dataverse.org/en/latest/installation/config.html?highlight=footer#applicationprivacypolicyurl](https://guides.dataverse.org/en/latest/installation/config.html?highlight=footer#applicationprivacypolicyurl).
In the command, replace
https://dataverse.org/best-practices/harvard-dataverse-privacy-policy
with
https://site.uit.no/dataverseno/about/policy-framework/access-and-use-policy/
This should also be reflected in the new cloud-based instance of DataverseNO.
Sorry for the late notice!
Thanks!
index: 1.0
text_combine:
Add Data Privacy Statement to dataverse.no - A link to the DataverseNO Privacy Statement should be added to dataverse.no. See how to in the Dataverse guide: [https://guides.dataverse.org/en/latest/installation/config.html?highlight=footer#applicationprivacypolicyurl](https://guides.dataverse.org/en/latest/installation/config.html?highlight=footer#applicationprivacypolicyurl).
In the command, replace
https://dataverse.org/best-practices/harvard-dataverse-privacy-policy
with
https://site.uit.no/dataverseno/about/policy-framework/access-and-use-policy/
This should also be reflected in the new cloud-based instance of DataverseNO.
Sorry for the late notice!
Thanks!
label: non_process
text:
add data privacy statement to dataverse no a link to the dataverseno privacy statement should be added to dataverse no see how to in the dataverse guide in the command replace with this should also be reflected in the new cloud based instance of dataverseno sorry for the late notice thanks
binary_label: 0

Unnamed: 0: 191,362
id: 14,594,037,069
type: IssuesEvent
created_at: 2020-12-20 02:51:18
repo: github-vet/rangeloop-pointer-findings
repo_url: https://api.github.com/repos/github-vet/rangeloop-pointer-findings
action: closed
title: rootfs/node-fencing: vendor/k8s.io/kubernetes/pkg/controller/job/jobcontroller_test.go; 3 LoC
labels: fresh test tiny vendored
body:
Found a possible issue in [rootfs/node-fencing](https://www.github.com/rootfs/node-fencing) at [vendor/k8s.io/kubernetes/pkg/controller/job/jobcontroller_test.go](https://github.com/rootfs/node-fencing/blob/b78deb66758bdffcf65efe25d2894b6a6343543c/vendor/k8s.io/kubernetes/pkg/controller/job/jobcontroller_test.go#L259-L261)
Below is the message reported by the analyzer for this snippet of code. Beware that the analyzer only reports the first
issue it finds, so please do not limit your consideration to the contents of the below message.
> function call which takes a reference to pod at line 260 may start a goroutine
[Click here to see the code in its original context.](https://github.com/rootfs/node-fencing/blob/b78deb66758bdffcf65efe25d2894b6a6343543c/vendor/k8s.io/kubernetes/pkg/controller/job/jobcontroller_test.go#L259-L261)
<details>
<summary>Click here to show the 3 line(s) of Go which triggered the analyzer.</summary>
```go
for _, pod := range newPodList(tc.activePods, v1.PodRunning, job) {
podIndexer.Add(&pod)
}
```
</details>
Leave a reaction on this issue to contribute to the project by classifying this instance as a **Bug** :-1:, **Mitigated** :+1:, or **Desirable Behavior** :rocket:
See the descriptions of the classifications [here](https://github.com/github-vet/rangeclosure-findings#how-can-i-help) for more information.
commit ID: b78deb66758bdffcf65efe25d2894b6a6343543c
index: 1.0
text_combine:
rootfs/node-fencing: vendor/k8s.io/kubernetes/pkg/controller/job/jobcontroller_test.go; 3 LoC -
Found a possible issue in [rootfs/node-fencing](https://www.github.com/rootfs/node-fencing) at [vendor/k8s.io/kubernetes/pkg/controller/job/jobcontroller_test.go](https://github.com/rootfs/node-fencing/blob/b78deb66758bdffcf65efe25d2894b6a6343543c/vendor/k8s.io/kubernetes/pkg/controller/job/jobcontroller_test.go#L259-L261)
Below is the message reported by the analyzer for this snippet of code. Beware that the analyzer only reports the first
issue it finds, so please do not limit your consideration to the contents of the below message.
> function call which takes a reference to pod at line 260 may start a goroutine
[Click here to see the code in its original context.](https://github.com/rootfs/node-fencing/blob/b78deb66758bdffcf65efe25d2894b6a6343543c/vendor/k8s.io/kubernetes/pkg/controller/job/jobcontroller_test.go#L259-L261)
<details>
<summary>Click here to show the 3 line(s) of Go which triggered the analyzer.</summary>
```go
for _, pod := range newPodList(tc.activePods, v1.PodRunning, job) {
podIndexer.Add(&pod)
}
```
</details>
Leave a reaction on this issue to contribute to the project by classifying this instance as a **Bug** :-1:, **Mitigated** :+1:, or **Desirable Behavior** :rocket:
See the descriptions of the classifications [here](https://github.com/github-vet/rangeclosure-findings#how-can-i-help) for more information.
commit ID: b78deb66758bdffcf65efe25d2894b6a6343543c
label: non_process
text:
rootfs node fencing vendor io kubernetes pkg controller job jobcontroller test go loc found a possible issue in at below is the message reported by the analyzer for this snippet of code beware that the analyzer only reports the first issue it finds so please do not limit your consideration to the contents of the below message function call which takes a reference to pod at line may start a goroutine click here to show the line s of go which triggered the analyzer go for pod range newpodlist tc activepods podrunning job podindexer add pod leave a reaction on this issue to contribute to the project by classifying this instance as a bug mitigated or desirable behavior rocket see the descriptions of the classifications for more information commit id
binary_label: 0
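The analyzer finding above is Go's loop-variable aliasing pitfall: `&pod` takes the address of the single variable the range loop reuses on every iteration. Python has a close analogue with closures created in a loop, shown here purely as an illustration of the same class of bug, not as a rendering of the Go semantics:

```python
# Each lambda closes over the *variable* i, not its value at definition time,
# just as &pod aliases the one reused loop variable in the Go snippet above.
funcs = [lambda: i for i in range(3)]
assert [f() for f in funcs] == [2, 2, 2]  # all see the final value

# Fix: bind the current value per iteration (akin to a per-iteration copy
# such as `pod := pod` inside the Go loop body).
funcs = [lambda i=i: i for i in range(3)]
assert [f() for f in funcs] == [0, 1, 2]
```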

Unnamed: 0: 21,054
id: 28,001,304,207
type: IssuesEvent
created_at: 2023-03-27 11:58:29
repo: Ultimate-Hosts-Blacklist/whitelist
repo_url: https://api.github.com/repos/Ultimate-Hosts-Blacklist/whitelist
action: opened
title: [FALSE-POSITIVE?]
labels: whitelisting process
body:
**Domains or links**
<!-- Please list below any domains and links listed here which you believe are a false positive. -->
1. www.rt.com
**More Information**
<!-- How did you discover your web site or domain was listed here? -->
1. tried to go to the website, it doesn't belong to me.
**Have you requested removal from other sources?**
<!-- Please include all relevant links to your existing removals / whitelistings. -->
none
...
**Additional context**
<!-- Add any other context about the problem here. -->
I copied your host file into my host file under the assumption that this list is not a political censorship list and not some kind of "truth" list. If this is not the case but it's also a political list and truth list, then that's perfectly fine too, but please make it clear that ideology and censorship is one of the purposes of this block list.
<!--
❗
We understand being listed on a list like this can be frustrating and embarrassing for many web site owners. The first step is to remain calm. The second step is to rest assured one of our maintainers will address your issue as soon as possible. Please make sure you have provided as much information as possible to help speed up the process.
-->
index: 1.0
text_combine:
[FALSE-POSITIVE?] - **Domains or links**
<!-- Please list below any domains and links listed here which you believe are a false positive. -->
1. www.rt.com
**More Information**
<!-- How did you discover your web site or domain was listed here? -->
1. tried to go to the website, it doesn't belong to me.
**Have you requested removal from other sources?**
<!-- Please include all relevant links to your existing removals / whitelistings. -->
none
...
**Additional context**
<!-- Add any other context about the problem here. -->
I copied your host file into my host file under the assumption that this list is not a political censorship list and not some kind of "truth" list. If this is not the case but it's also a political list and truth list, then that's perfectly fine too, but please make it clear that ideology and censorship is one of the purposes of this block list.
<!--
❗
We understand being listed on a list like this can be frustrating and embarrassing for many web site owners. The first step is to remain calm. The second step is to rest assured one of our maintainers will address your issue as soon as possible. Please make sure you have provided as much information as possible to help speed up the process.
-->
label: process
text:
domains or links more information tried to go to the website it doesn t belong to me have you requested removal from other sources none additional context i copied your host file into my host file under the assumption that this list is not a political censorship list and not some kind of truth list if this is not the case but it s also a political list and truth list then that s perfectly fine too but please make it clear that ideology and censorship is one of the purposes of this block list ❗ we understand being listed on a list like this can be frustrating and embarrassing for many web site owners the first step is to remain calm the second step is to rest assured one of our maintainers will address your issue as soon as possible please make sure you have provided as much information as possible to help speed up the process
binary_label: 1

Unnamed: 0: 11,442
id: 14,261,856,663
type: IssuesEvent
created_at: 2020-11-20 12:01:15
repo: geneontology/go-ontology
repo_url: https://api.github.com/repos/geneontology/go-ontology
action: opened
title: ntr: replication fork relocation to nuclear pore complex
labels: New term request PomBase cell cycle and DNA processes community curation
body:
We've received a request from community curator Sarah Lambert, to use for PMID:33159083:
name: replication fork relocation to nuclear pore complex
suggested def (cribbed from the paper intro): A cellular process in which a DNA replication fork that has stalled relocates and anchors to a nuclear pore complex, in a poly-SUMO and STUbL-dependent manner, for the time necessary to complete recombination-dependent replication.
exact synonyms: replication fork relocation to NPC; stalled replication fork relocation to nuclear pore complex
I think this should be linked to GO:0031297 ! replication fork processing by is_a or part_of. Part_of might be slightly more biologically accurate, but then I would struggle to suggest a superclass, so I won't complain if the new term is is_a GO:0031297.
Please let me know if you have any problems or questions (which I'll probably have to relay to Sarah).
index: 1.0
text_combine:
ntr: replication fork relocation to nuclear pore complex - We've received a request from community curator Sarah Lambert, to use for PMID:33159083:
name: replication fork relocation to nuclear pore complex
suggested def (cribbed from the paper intro): A cellular process in which a DNA replication fork that has stalled relocates and anchors to a nuclear pore complex, in a poly-SUMO and STUbL-dependent manner, for the time necessary to complete recombination-dependent replication.
exact synonyms: replication fork relocation to NPC; stalled replication fork relocation to nuclear pore complex
I think this should be linked to GO:0031297 ! replication fork processing by is_a or part_of. Part_of might be slightly more biologically accurate, but then I would struggle to suggest a superclass, so I won't complain if the new term is is_a GO:0031297.
Please let me know if you have any problems or questions (which I'll probably have to relay to Sarah).
label: process
text:
ntr replication fork relocation to nuclear pore complex we ve received a request from community curator sarah lambert to use for pmid name replication fork relocation to nuclear pore complex suggested def cribbed from the paper intro a cellular process in which a dna replication fork that has stalled relocates and anchors to a nuclear pore complex in a poly sumo and stubl dependent manner for the time necessary to complete recombination dependent replication exact synonyms replication fork relocation to npc stalled replication fork relocation to nuclear pore complex i think this should be linked to go replication fork processing by is a or part of part of might be slightly more biologically accurate but then i would struggle to suggest a superclass so i won t complain if the new term is is a go please let me know if you have any problems or questions which i ll probably have to relay to sarah
binary_label: 1

Unnamed: 0: 14,746
id: 18,017,074,234
type: IssuesEvent
created_at: 2021-09-16 14:57:27
repo: geneontology/go-ontology
repo_url: https://api.github.com/repos/geneontology/go-ontology
action: closed
title: Change label GO:0061765 modulation by virus of host NIK/NF-kappaB signaling & children
labels: multi-species process
body:
* GO:0061765 modulation by virus of host NIK/NF-kappaB signaling
-> change to 'modulation by virus of host NIK/NF-kappaB cascade'
* 'GO:0039644 suppression by virus of host NF-kappaB transcription factor activity'
-> change to 'suppression by virus of host NF-kappaB cascade'
* GO:0039652 activation by virus of host NF-kappaB transcription factor activity
-> change to: induction by virus of host NF-kappaB cascade
----
Other issues:
* labels inconsistent:
** NIK/NF-kappaB **signaling** vs NIK/NF-kappaB **cascade**,
** I-kappaB kinase/NF-kappaB vs NIK/NF-kappaB vs NF-kappaB
* induction by virus of host NF-kappaB cascade seem to describe evasion of apoptosis - we should probably change the term label and position to describe this, to avoid inconsistent annotations
@pmasson55
index: 1.0
text_combine:
Change label GO:0061765 modulation by virus of host NIK/NF-kappaB signaling & children - * GO:0061765 modulation by virus of host NIK/NF-kappaB signaling
-> change to 'modulation by virus of host NIK/NF-kappaB cascade'
* 'GO:0039644 suppression by virus of host NF-kappaB transcription factor activity'
-> change to 'suppression by virus of host NF-kappaB cascade'
* GO:0039652 activation by virus of host NF-kappaB transcription factor activity
-> change to: induction by virus of host NF-kappaB cascade
----
Other issues:
* labels inconsistent:
** NIK/NF-kappaB **signaling** vs NIK/NF-kappaB **cascade**,
** I-kappaB kinase/NF-kappaB vs NIK/NF-kappaB vs NF-kappaB
* induction by virus of host NF-kappaB cascade seem to describe evasion of apoptosis - we should probably change the term label and position to describe this, to avoid inconsistent annotations
@pmasson55
label: process
text:
change label go modulation by virus of host nik nf kappab signaling children go modulation by virus of host nik nf kappab signaling change to modulation by virus of host nik nf kappab cascade go suppression by virus of host nf kappab transcription factor activity change to suppression by virus of host nf kappab cascade go activation by virus of host nf kappab transcription factor activity change to induction by virus of host nf kappab cascade other issues labels inconsistent nik nf kappab signaling vs nik nf kappab cascade i kappab kinase nf kappab vs nik nf kappab vs nf kappab induction by virus of host nf kappab cascade seem to describe evasion of apoptosis we should probably change the term label and position to describe this to avoid inconsistent annotations
binary_label: 1

Unnamed: 0: 153,826
id: 19,708,617,916
type: IssuesEvent
created_at: 2022-01-13 01:45:41
repo: artsking/linux-4.19.72_CVE-2020-14386
repo_url: https://api.github.com/repos/artsking/linux-4.19.72_CVE-2020-14386
action: opened
title: CVE-2020-25220 (High) detected in linux-yoctov5.4.51
labels: security vulnerability
body:
## CVE-2020-25220 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>linux-yoctov5.4.51</b></p></summary>
<p>
<p>Yocto Linux Embedded kernel</p>
<p>Library home page: <a href=https://git.yoctoproject.org/git/linux-yocto>https://git.yoctoproject.org/git/linux-yocto</a></p>
<p>Found in base branch: <b>master</b></p></p>
</details>
</p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Source Files (1)</summary>
<p></p>
<p>
</p>
</details>
<p></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
The Linux kernel 4.9.x before 4.9.233, 4.14.x before 4.14.194, and 4.19.x before 4.19.140 has a use-after-free because skcd->no_refcnt was not considered during a backport of a CVE-2020-14356 patch. This is related to the cgroups feature.
<p>Publish Date: 2020-09-10
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-25220>CVE-2020-25220</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.8</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: Low
- Privileges Required: Low
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-25220">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-25220</a></p>
<p>Release Date: 2020-09-10</p>
<p>Fix Resolution: v4.9.223,v4.14.194,v4.19.140</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
index: True
text_combine:
CVE-2020-25220 (High) detected in linux-yoctov5.4.51 - ## CVE-2020-25220 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>linux-yoctov5.4.51</b></p></summary>
<p>
<p>Yocto Linux Embedded kernel</p>
<p>Library home page: <a href=https://git.yoctoproject.org/git/linux-yocto>https://git.yoctoproject.org/git/linux-yocto</a></p>
<p>Found in base branch: <b>master</b></p></p>
</details>
</p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Source Files (1)</summary>
<p></p>
<p>
</p>
</details>
<p></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
The Linux kernel 4.9.x before 4.9.233, 4.14.x before 4.14.194, and 4.19.x before 4.19.140 has a use-after-free because skcd->no_refcnt was not considered during a backport of a CVE-2020-14356 patch. This is related to the cgroups feature.
<p>Publish Date: 2020-09-10
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-25220>CVE-2020-25220</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.8</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: Low
- Privileges Required: Low
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-25220">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-25220</a></p>
<p>Release Date: 2020-09-10</p>
<p>Fix Resolution: v4.9.223,v4.14.194,v4.19.140</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
label: non_process
text:
cve high detected in linux cve high severity vulnerability vulnerable library linux yocto linux embedded kernel library home page a href found in base branch master vulnerable source files vulnerability details the linux kernel x before x before and x before has a use after free because skcd no refcnt was not considered during a backport of a cve patch this is related to the cgroups feature publish date url a href cvss score details base score metrics exploitability metrics attack vector local attack complexity low privileges required low user interaction none scope unchanged impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution step up your open source security game with whitesource
binary_label: 0
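The CVSS 3 section of the row above lists the base metrics alongside the score 7.8. The v3.0 base-score arithmetic reproduces that number from those metrics; the coefficients below are the standard ones from the CVSS v3.0 specification:

```python
import math

# Metric weights for AV:L / AC:L / PR:L (scope unchanged) / UI:N.
AV, AC, PR, UI = 0.55, 0.77, 0.62, 0.85
# High impact on confidentiality, integrity, and availability.
C = I = A = 0.56

iss = 1 - (1 - C) * (1 - I) * (1 - A)      # impact sub-score
impact = 6.42 * iss                        # scope unchanged form
exploitability = 8.22 * AV * AC * PR * UI

def roundup(x: float) -> float:
    """CVSS 'round up to one decimal place'."""
    return math.ceil(x * 10) / 10

base = roundup(min(impact + exploitability, 10)) if impact > 0 else 0.0
print(base)  # 7.8, matching the score in the record
```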

Unnamed: 0: 965
id: 3,422,103,198
type: IssuesEvent
created_at: 2015-12-08 21:34:43
repo: darnir/wget
repo_url: https://api.github.com/repos/darnir/wget
action: closed
title: warc.c:612:5: 'HAVE_UUID_CREATE' is not defined, evaluates to 0
labels: Lexical or Preprocessor Issue Wundef
body:
warc.c:612:5: warning: 'HAVE_UUID_CREATE' is not defined, evaluates to 0 [-Wundef,Lexical or Preprocessor Issue]
index: 1.0
text_combine:
warc.c:612:5: 'HAVE_UUID_CREATE' is not defined, evaluates to 0 - warc.c:612:5: warning: 'HAVE_UUID_CREATE' is not defined, evaluates to 0 [-Wundef,Lexical or Preprocessor Issue]
|
process
|
warc c have uuid create is not defined evaluates to warc c warning have uuid create is not defined evaluates to
| 1
|
67,378
| 20,961,608,618
|
IssuesEvent
|
2022-03-27 21:48:43
|
abedmaatalla/sipdroid
|
https://api.github.com/repos/abedmaatalla/sipdroid
|
closed
|
new option for Static wifi IP address
|
Priority-Medium Type-Defect auto-migrated
|
```
According this article, "Sipdroid sends probe packets every minute over WLANs
to detect such changes.". This makes stand time is short under wifi than under
3G.
http://code.google.com/p/sipdroid/wiki/NewStandbyTechnique
but all the time I use wifi at home and my home wifi is static IP address. Can
you add a option in Sipdroid for static IP address, disable "sends probe
packets every minute"?
thanks.
What steps will reproduce the problem?
1.
2.
3.
What is the expected output? What do you see instead?
The stand time should much longer than under 3G when using static IP address
under wifi.
What version of the product are you using? On what device/operating system?
Sipdroid 2.9 and Android 4.0.4
Which SIP server are you using? What happens with PBXes?
PBXes.org
Which type of network are you using?
Home Wifi and 3G
Please provide any additional information below.
```
Original issue reported on code.google.com by `Sherman....@gmail.com` on 22 Jan 2013 at 9:42
|
1.0
|
new option for Static wifi IP address - ```
According this article, "Sipdroid sends probe packets every minute over WLANs
to detect such changes.". This makes stand time is short under wifi than under
3G.
http://code.google.com/p/sipdroid/wiki/NewStandbyTechnique
but all the time I use wifi at home and my home wifi is static IP address. Can
you add a option in Sipdroid for static IP address, disable "sends probe
packets every minute"?
thanks.
What steps will reproduce the problem?
1.
2.
3.
What is the expected output? What do you see instead?
The stand time should much longer than under 3G when using static IP address
under wifi.
What version of the product are you using? On what device/operating system?
Sipdroid 2.9 and Android 4.0.4
Which SIP server are you using? What happens with PBXes?
PBXes.org
Which type of network are you using?
Home Wifi and 3G
Please provide any additional information below.
```
Original issue reported on code.google.com by `Sherman....@gmail.com` on 22 Jan 2013 at 9:42
|
non_process
|
new option for static wifi ip address according this article sipdroid sends probe packets every minute over wlans to detect such changes this makes stand time is short under wifi than under but all the time i use wifi at home and my home wifi is static ip address can you add a option in sipdroid for static ip address disable sends probe packets every minute thanks what steps will reproduce the problem what is the expected output what do you see instead the stand time should much longer than under when using static ip address under wifi what version of the product are you using on what device operating system sipdroid and android which sip server are you using what happens with pbxes pbxes org which type of network are you using home wifi and please provide any additional information below original issue reported on code google com by sherman gmail com on jan at
| 0
|
1,299
| 3,838,363,391
|
IssuesEvent
|
2016-04-02 10:37:00
|
dita-ot/dita-ot
|
https://api.github.com/repos/dita-ot/dita-ot
|
closed
|
Topichead with @copy-to on it throws NPE
|
bug obsolete P2 preprocess
|
If I have in the DITA Map something like:
<topichead navtitle="THEAD" copy-to="topics/copyright.dita"/>
The DITA OT throws an error when processing it:
D:\projects\eXml\frameworks\dita\DITA-OT\plugins\org.dita.base\build_preprocess.xml:35: Failed to run pipeline: [DOTJ012F][FATAL] Failed to parse the input file 'flowers.ditamap'. The XML parser reported the following error: :null
at org.dita.dost.invoker.ExtensibleAntInvoker.execute(ExtensibleAntInvoker.java:269)
at org.apache.tools.ant.UnknownElement.execute(UnknownElement.java:291)
at sun.reflect.GeneratedMethodAccessor4.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source)
at java.lang.reflect.Method.invoke(Unknown Source)
at org.apache.tools.ant.dispatch.DispatchUtils.execute(DispatchUtils.java:106)
at org.apache.tools.ant.Task.perform(Task.java:348)
at org.apache.tools.ant.Target.execute(Target.java:390)
at org.apache.tools.ant.Target.performTasks(Target.java:411)
at org.apache.tools.ant.Project.executeSortedTargets(Project.java:1399)
at org.apache.tools.ant.helper.SingleCheckExecutor.executeTargets(SingleCheckExecutor.java:38)
at org.apache.tools.ant.Project.executeTargets(Project.java:1251)
at org.apache.tools.ant.taskdefs.Ant.execute(Ant.java:442)
at org.apache.tools.ant.taskdefs.CallTarget.execute(CallTarget.java:105)
at org.apache.tools.ant.UnknownElement.execute(UnknownElement.java:291)
at sun.reflect.GeneratedMethodAccessor4.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source)
at java.lang.reflect.Method.invoke(Unknown Source)
at org.apache.tools.ant.dispatch.DispatchUtils.execute(DispatchUtils.java:106)
at org.apache.tools.ant.Task.perform(Task.java:348)
at org.apache.tools.ant.Target.execute(Target.java:390)
at org.apache.tools.ant.Target.performTasks(Target.java:411)
at org.apache.tools.ant.Project.executeSortedTargets(Project.java:1399)
at org.apache.tools.ant.Project.executeTarget(Project.java:1368)
at org.apache.tools.ant.helper.DefaultExecutor.executeTargets(DefaultExecutor.java:41)
at org.apache.tools.ant.Project.executeTargets(Project.java:1251)
at org.apache.tools.ant.Main.runBuild(Main.java:809)
at org.apache.tools.ant.Main.startAnt(Main.java:217)
at org.apache.tools.ant.launch.Launcher.run(Launcher.java:280)
at org.apache.tools.ant.launch.Launcher.main(Launcher.java:109)
Caused by: org.dita.dost.exception.DITAOTException: [DOTJ012F][FATAL] Failed to parse the input file 'flowers.ditamap'. The XML parser reported the following error: :null
at org.dita.dost.module.GenMapAndTopicListModule.processFile(GenMapAndTopicListModule.java:495)
at org.dita.dost.module.GenMapAndTopicListModule.processWaitList(GenMapAndTopicListModule.java:418)
at org.dita.dost.module.GenMapAndTopicListModule.execute(GenMapAndTopicListModule.java:288)
at org.dita.dost.pipeline.PipelineFacade.execute(PipelineFacade.java:63)
at org.dita.dost.invoker.ExtensibleAntInvoker.execute(ExtensibleAntInvoker.java:266)
... 29 more
Caused by: java.lang.NullPointerException
at org.dita.dost.util.FileUtils.stripFragment(FileUtils.java:698)
at org.dita.dost.util.FileUtils.normalizeDirectory(FileUtils.java:411)
at org.dita.dost.reader.GenListModuleReader.parseAttribute(GenListModuleReader.java:1513)
at org.dita.dost.reader.GenListModuleReader.startElement(GenListModuleReader.java:991)
at org.apache.xerces.parsers.AbstractSAXParser.startElement(Unknown Source)
at org.apache.xerces.parsers.AbstractXMLDocumentParser.emptyElement(Unknown Source)
at org.ditang.relaxng.defaults.RelaxNGDefaultsComponent.emptyElement(RelaxNGDefaultsComponent.java:639)
at org.apache.xerces.impl.dtd.XMLDTDValidatorXerces.emptyElement(XMLDTDValidatorXerces.java:852)
at org.apache.xerces.impl.XMLNSDocumentScannerImpl.scanStartElement(XMLNSDocumentScannerImpl.java:260)
at org.apache.xerces.impl.XMLDocumentFragmentScannerImpl$FragmentContentDispatcher.dispatch(XMLDocumentFragmentScannerImpl.java:1655)
at org.apache.xerces.impl.XMLDocumentFragmentScannerImpl.scanDocument(XMLDocumentFragmentScannerImpl.java:325)
at org.apache.xerces.parsers.XML11Configuration.parse(Unknown Source)
at org.ditang.relaxng.defaults.RelaxDefaultsParserConfiguration.parse(RelaxDefaultsParserConfiguration.java:108)
at org.apache.xerces.parsers.XML11Configuration.parse(Unknown Source)
at org.apache.xerces.parsers.XMLParser.parse(Unknown Source)
at org.apache.xerces.parsers.AbstractSAXParser.parse(Unknown Source)
at org.dita.dost.reader.GenListModuleReader.parse(GenListModuleReader.java:617)
at org.dita.dost.module.GenMapAndTopicListModule.processFile(GenMapAndTopicListModule.java:449)
I started a question here about why the DITA 1.2 specs allows @copy-to on topichead:
https://lists.oasis-open.org/archives/dita-comment/201211/msg00002.html
|
1.0
|
Topichead with @copy-to on it throws NPE - If I have in the DITA Map something like:
<topichead navtitle="THEAD" copy-to="topics/copyright.dita"/>
The DITA OT throws an error when processing it:
D:\projects\eXml\frameworks\dita\DITA-OT\plugins\org.dita.base\build_preprocess.xml:35: Failed to run pipeline: [DOTJ012F][FATAL] Failed to parse the input file 'flowers.ditamap'. The XML parser reported the following error: :null
at org.dita.dost.invoker.ExtensibleAntInvoker.execute(ExtensibleAntInvoker.java:269)
at org.apache.tools.ant.UnknownElement.execute(UnknownElement.java:291)
at sun.reflect.GeneratedMethodAccessor4.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source)
at java.lang.reflect.Method.invoke(Unknown Source)
at org.apache.tools.ant.dispatch.DispatchUtils.execute(DispatchUtils.java:106)
at org.apache.tools.ant.Task.perform(Task.java:348)
at org.apache.tools.ant.Target.execute(Target.java:390)
at org.apache.tools.ant.Target.performTasks(Target.java:411)
at org.apache.tools.ant.Project.executeSortedTargets(Project.java:1399)
at org.apache.tools.ant.helper.SingleCheckExecutor.executeTargets(SingleCheckExecutor.java:38)
at org.apache.tools.ant.Project.executeTargets(Project.java:1251)
at org.apache.tools.ant.taskdefs.Ant.execute(Ant.java:442)
at org.apache.tools.ant.taskdefs.CallTarget.execute(CallTarget.java:105)
at org.apache.tools.ant.UnknownElement.execute(UnknownElement.java:291)
at sun.reflect.GeneratedMethodAccessor4.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source)
at java.lang.reflect.Method.invoke(Unknown Source)
at org.apache.tools.ant.dispatch.DispatchUtils.execute(DispatchUtils.java:106)
at org.apache.tools.ant.Task.perform(Task.java:348)
at org.apache.tools.ant.Target.execute(Target.java:390)
at org.apache.tools.ant.Target.performTasks(Target.java:411)
at org.apache.tools.ant.Project.executeSortedTargets(Project.java:1399)
at org.apache.tools.ant.Project.executeTarget(Project.java:1368)
at org.apache.tools.ant.helper.DefaultExecutor.executeTargets(DefaultExecutor.java:41)
at org.apache.tools.ant.Project.executeTargets(Project.java:1251)
at org.apache.tools.ant.Main.runBuild(Main.java:809)
at org.apache.tools.ant.Main.startAnt(Main.java:217)
at org.apache.tools.ant.launch.Launcher.run(Launcher.java:280)
at org.apache.tools.ant.launch.Launcher.main(Launcher.java:109)
Caused by: org.dita.dost.exception.DITAOTException: [DOTJ012F][FATAL] Failed to parse the input file 'flowers.ditamap'. The XML parser reported the following error: :null
at org.dita.dost.module.GenMapAndTopicListModule.processFile(GenMapAndTopicListModule.java:495)
at org.dita.dost.module.GenMapAndTopicListModule.processWaitList(GenMapAndTopicListModule.java:418)
at org.dita.dost.module.GenMapAndTopicListModule.execute(GenMapAndTopicListModule.java:288)
at org.dita.dost.pipeline.PipelineFacade.execute(PipelineFacade.java:63)
at org.dita.dost.invoker.ExtensibleAntInvoker.execute(ExtensibleAntInvoker.java:266)
... 29 more
Caused by: java.lang.NullPointerException
at org.dita.dost.util.FileUtils.stripFragment(FileUtils.java:698)
at org.dita.dost.util.FileUtils.normalizeDirectory(FileUtils.java:411)
at org.dita.dost.reader.GenListModuleReader.parseAttribute(GenListModuleReader.java:1513)
at org.dita.dost.reader.GenListModuleReader.startElement(GenListModuleReader.java:991)
at org.apache.xerces.parsers.AbstractSAXParser.startElement(Unknown Source)
at org.apache.xerces.parsers.AbstractXMLDocumentParser.emptyElement(Unknown Source)
at org.ditang.relaxng.defaults.RelaxNGDefaultsComponent.emptyElement(RelaxNGDefaultsComponent.java:639)
at org.apache.xerces.impl.dtd.XMLDTDValidatorXerces.emptyElement(XMLDTDValidatorXerces.java:852)
at org.apache.xerces.impl.XMLNSDocumentScannerImpl.scanStartElement(XMLNSDocumentScannerImpl.java:260)
at org.apache.xerces.impl.XMLDocumentFragmentScannerImpl$FragmentContentDispatcher.dispatch(XMLDocumentFragmentScannerImpl.java:1655)
at org.apache.xerces.impl.XMLDocumentFragmentScannerImpl.scanDocument(XMLDocumentFragmentScannerImpl.java:325)
at org.apache.xerces.parsers.XML11Configuration.parse(Unknown Source)
at org.ditang.relaxng.defaults.RelaxDefaultsParserConfiguration.parse(RelaxDefaultsParserConfiguration.java:108)
at org.apache.xerces.parsers.XML11Configuration.parse(Unknown Source)
at org.apache.xerces.parsers.XMLParser.parse(Unknown Source)
at org.apache.xerces.parsers.AbstractSAXParser.parse(Unknown Source)
at org.dita.dost.reader.GenListModuleReader.parse(GenListModuleReader.java:617)
at org.dita.dost.module.GenMapAndTopicListModule.processFile(GenMapAndTopicListModule.java:449)
I started a question here about why the DITA 1.2 specs allows @copy-to on topichead:
https://lists.oasis-open.org/archives/dita-comment/201211/msg00002.html
|
process
|
topichead with copy to on it throws npe if i have in the dita map something like the dita ot throws an error when processing it d projects exml frameworks dita dita ot plugins org dita base build preprocess xml failed to run pipeline failed to parse the input file flowers ditamap the xml parser reported the following error null at org dita dost invoker extensibleantinvoker execute extensibleantinvoker java at org apache tools ant unknownelement execute unknownelement java at sun reflect invoke unknown source at sun reflect delegatingmethodaccessorimpl invoke unknown source at java lang reflect method invoke unknown source at org apache tools ant dispatch dispatchutils execute dispatchutils java at org apache tools ant task perform task java at org apache tools ant target execute target java at org apache tools ant target performtasks target java at org apache tools ant project executesortedtargets project java at org apache tools ant helper singlecheckexecutor executetargets singlecheckexecutor java at org apache tools ant project executetargets project java at org apache tools ant taskdefs ant execute ant java at org apache tools ant taskdefs calltarget execute calltarget java at org apache tools ant unknownelement execute unknownelement java at sun reflect invoke unknown source at sun reflect delegatingmethodaccessorimpl invoke unknown source at java lang reflect method invoke unknown source at org apache tools ant dispatch dispatchutils execute dispatchutils java at org apache tools ant task perform task java at org apache tools ant target execute target java at org apache tools ant target performtasks target java at org apache tools ant project executesortedtargets project java at org apache tools ant project executetarget project java at org apache tools ant helper defaultexecutor executetargets defaultexecutor java at org apache tools ant project executetargets project java at org apache tools ant main runbuild main java at org apache tools ant main startant 
main java at org apache tools ant launch launcher run launcher java at org apache tools ant launch launcher main launcher java caused by org dita dost exception ditaotexception failed to parse the input file flowers ditamap the xml parser reported the following error null at org dita dost module genmapandtopiclistmodule processfile genmapandtopiclistmodule java at org dita dost module genmapandtopiclistmodule processwaitlist genmapandtopiclistmodule java at org dita dost module genmapandtopiclistmodule execute genmapandtopiclistmodule java at org dita dost pipeline pipelinefacade execute pipelinefacade java at org dita dost invoker extensibleantinvoker execute extensibleantinvoker java more caused by java lang nullpointerexception at org dita dost util fileutils stripfragment fileutils java at org dita dost util fileutils normalizedirectory fileutils java at org dita dost reader genlistmodulereader parseattribute genlistmodulereader java at org dita dost reader genlistmodulereader startelement genlistmodulereader java at org apache xerces parsers abstractsaxparser startelement unknown source at org apache xerces parsers abstractxmldocumentparser emptyelement unknown source at org ditang relaxng defaults relaxngdefaultscomponent emptyelement relaxngdefaultscomponent java at org apache xerces impl dtd xmldtdvalidatorxerces emptyelement xmldtdvalidatorxerces java at org apache xerces impl xmlnsdocumentscannerimpl scanstartelement xmlnsdocumentscannerimpl java at org apache xerces impl xmldocumentfragmentscannerimpl fragmentcontentdispatcher dispatch xmldocumentfragmentscannerimpl java at org apache xerces impl xmldocumentfragmentscannerimpl scandocument xmldocumentfragmentscannerimpl java at org apache xerces parsers parse unknown source at org ditang relaxng defaults relaxdefaultsparserconfiguration parse relaxdefaultsparserconfiguration java at org apache xerces parsers parse unknown source at org apache xerces parsers xmlparser parse unknown source at org apache 
xerces parsers abstractsaxparser parse unknown source at org dita dost reader genlistmodulereader parse genlistmodulereader java at org dita dost module genmapandtopiclistmodule processfile genmapandtopiclistmodule java i started a question here about why the dita specs allows copy to on topichead
| 1
|
15,599
| 19,723,317,898
|
IssuesEvent
|
2022-01-13 17:21:47
|
open-telemetry/opentelemetry-collector
|
https://api.github.com/repos/open-telemetry/opentelemetry-collector
|
closed
|
Access headers in processor
|
enhancement priority:p3 spec:trace spec:metrics release:after-ga area:processor
|
**Is your feature request related to a problem? Please describe.**
I would like to access HTTP headers in the processor pipeline and add attribute to span/resource - e.g. tenant ID.
**Describe the solution you'd like**
Read the headers from the context object in a processor. The headers which should be injected into the context could be specified in the receiver config.
**Describe alternatives you've considered**
A custom implementation of Zipkin/OTLP receiver that does that.
**Additional context**
Related issue https://github.com/open-telemetry/opentelemetry-collector/issues/2101
|
1.0
|
Access headers in processor - **Is your feature request related to a problem? Please describe.**
I would like to access HTTP headers in the processor pipeline and add attribute to span/resource - e.g. tenant ID.
**Describe the solution you'd like**
Read the headers from the context object in a processor. The headers which should be injected into the context could be specified in the receiver config.
**Describe alternatives you've considered**
A custom implementation of Zipkin/OTLP receiver that does that.
**Additional context**
Related issue https://github.com/open-telemetry/opentelemetry-collector/issues/2101
|
process
|
access headers in processor is your feature request related to a problem please describe i would like to access http headers in the processor pipeline and add attribute to span resource e g tenant id describe the solution you d like read the headers from the context object in a processor the headers which should be injected into the context could be specified in the receiver config describe alternatives you ve considered a custom implementation of zipkin otlp receiver that does that additional context related issue
| 1
|
355,069
| 25,175,518,664
|
IssuesEvent
|
2022-11-11 08:54:27
|
P0tatoChips/pe
|
https://api.github.com/repos/P0tatoChips/pe
|
opened
|
Unneeded Spacing in heading
|
severity.VeryLow type.DocumentationBug
|

As you can see in the pic, the design heading is indented and is at a weird position.
<!--session: 1668152709074-ffebd4e7-c492-47e8-8e19-ad25fc3b6a31-->
<!--Version: Web v3.4.4-->
|
1.0
|
Unneeded Spacing in heading - 
As you can see in the pic, the design heading is indented and is at a weird position.
<!--session: 1668152709074-ffebd4e7-c492-47e8-8e19-ad25fc3b6a31-->
<!--Version: Web v3.4.4-->
|
non_process
|
unneeded spacing in heading as you can see in the pic the design heading is indented and is at a weird position
| 0
|
334,950
| 30,000,199,123
|
IssuesEvent
|
2023-06-26 08:49:01
|
elastic/elasticsearch
|
https://api.github.com/repos/elastic/elasticsearch
|
opened
|
[CI] RoleReferenceIntersectionTests testBuildRoleForListOfRoleReferences failing
|
:Core/Infra/Core >test-failure Team:Core/Infra
|
Seems that that problem from #93395 came back again. (mockito version needs to be updated?)
Fails regularly since June 20th so I will mute the test for now.
**Build scan:**
https://gradle-enterprise.elastic.co/s/pgqrjtsjap53o/tests/:x-pack:plugin:core:test/org.elasticsearch.xpack.core.security.authz.store.RoleReferenceIntersectionTests/testBuildRoleForListOfRoleReferences
**Reproduction line:**
```
./gradlew ':x-pack:plugin:core:test' --tests "org.elasticsearch.xpack.core.security.authz.store.RoleReferenceIntersectionTests.testBuildRoleForListOfRoleReferences" -Dtests.seed=E48653C010F39548 -Dtests.locale=no-NO -Dtests.timezone=Brazil/West -Druntime.java=21
```
**Applicable branches:**
main, 8.9
**Reproduces locally?:**
Didn't try
**Failure history:**
https://gradle-enterprise.elastic.co/scans/tests?tests.container=org.elasticsearch.xpack.core.security.authz.store.RoleReferenceIntersectionTests&tests.test=testBuildRoleForListOfRoleReferences
**Failure excerpt:**
```
org.mockito.exceptions.base.MockitoException:
Cannot call abstract real method on java object!
Calling real methods is only possible when mocking non abstract method.
//correct example:
when(mockOfConcreteClass.nonAbstractMethod()).thenCallRealMethod();
at __randomizedtesting.SeedInfo.seed([E48653C010F39548:DB4390F4402AE4F6]:0)
at org.elasticsearch.xpack.core.security.authz.store.RoleReferenceIntersection.lambda$buildRole$0(RoleReferenceIntersection.java:47)
at org.elasticsearch.action.ActionListener$2.onResponse(ActionListener.java:169)
at org.elasticsearch.action.support.GroupedActionListener.onResponse(GroupedActionListener.java:56)
at org.elasticsearch.xpack.core.security.authz.store.RoleReferenceIntersectionTests.lambda$testBuildRoleForListOfRoleReferences$1(RoleReferenceIntersectionTests.java:62)
at org.elasticsearch.xpack.core.security.authz.store.RoleReferenceIntersection.lambda$buildRole$1(RoleReferenceIntersection.java:53)
at java.util.ArrayList.forEach(ArrayList.java:1593)
at org.elasticsearch.xpack.core.security.authz.store.RoleReferenceIntersection.buildRole(RoleReferenceIntersection.java:53)
at org.elasticsearch.xpack.core.security.authz.store.RoleReferenceIntersectionTests.testBuildRoleForListOfRoleReferences(RoleReferenceIntersectionTests.java:66)
at jdk.internal.reflect.DirectMethodHandleAccessor.invoke(DirectMethodHandleAccessor.java:103)
at java.lang.reflect.Method.invoke(Method.java:580)
at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1758)
at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:946)
at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:982)
at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:996)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at org.apache.lucene.tests.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:48)
at org.apache.lucene.tests.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:43)
at org.apache.lucene.tests.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:45)
at org.apache.lucene.tests.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:60)
at org.apache.lucene.tests.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:44)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:390)
at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:843)
at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:490)
at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:955)
at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:840)
at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:891)
at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:902)
at org.apache.lucene.tests.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:43)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at org.apache.lucene.tests.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:38)
at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at org.apache.lucene.tests.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at org.apache.lucene.tests.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:43)
at org.apache.lucene.tests.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:44)
at org.apache.lucene.tests.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:60)
at org.apache.lucene.tests.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:47)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:390)
at com.carrotsearch.randomizedtesting.ThreadLeakControl.lambda$forkTimeoutingTask$0(ThreadLeakControl.java:850)
at java.lang.Thread.run(Thread.java:1583)
```
|
1.0
|
[CI] RoleReferenceIntersectionTests testBuildRoleForListOfRoleReferences failing - Seems that that problem from #93395 came back again. (mockito version needs to be updated?)
Fails regularly since June 20th so I will mute the test for now.
**Build scan:**
https://gradle-enterprise.elastic.co/s/pgqrjtsjap53o/tests/:x-pack:plugin:core:test/org.elasticsearch.xpack.core.security.authz.store.RoleReferenceIntersectionTests/testBuildRoleForListOfRoleReferences
**Reproduction line:**
```
./gradlew ':x-pack:plugin:core:test' --tests "org.elasticsearch.xpack.core.security.authz.store.RoleReferenceIntersectionTests.testBuildRoleForListOfRoleReferences" -Dtests.seed=E48653C010F39548 -Dtests.locale=no-NO -Dtests.timezone=Brazil/West -Druntime.java=21
```
**Applicable branches:**
main, 8.9
**Reproduces locally?:**
Didn't try
**Failure history:**
https://gradle-enterprise.elastic.co/scans/tests?tests.container=org.elasticsearch.xpack.core.security.authz.store.RoleReferenceIntersectionTests&tests.test=testBuildRoleForListOfRoleReferences
**Failure excerpt:**
```
org.mockito.exceptions.base.MockitoException:
Cannot call abstract real method on java object!
Calling real methods is only possible when mocking non abstract method.
//correct example:
when(mockOfConcreteClass.nonAbstractMethod()).thenCallRealMethod();
at __randomizedtesting.SeedInfo.seed([E48653C010F39548:DB4390F4402AE4F6]:0)
at org.elasticsearch.xpack.core.security.authz.store.RoleReferenceIntersection.lambda$buildRole$0(RoleReferenceIntersection.java:47)
at org.elasticsearch.action.ActionListener$2.onResponse(ActionListener.java:169)
at org.elasticsearch.action.support.GroupedActionListener.onResponse(GroupedActionListener.java:56)
at org.elasticsearch.xpack.core.security.authz.store.RoleReferenceIntersectionTests.lambda$testBuildRoleForListOfRoleReferences$1(RoleReferenceIntersectionTests.java:62)
at org.elasticsearch.xpack.core.security.authz.store.RoleReferenceIntersection.lambda$buildRole$1(RoleReferenceIntersection.java:53)
at java.util.ArrayList.forEach(ArrayList.java:1593)
at org.elasticsearch.xpack.core.security.authz.store.RoleReferenceIntersection.buildRole(RoleReferenceIntersection.java:53)
at org.elasticsearch.xpack.core.security.authz.store.RoleReferenceIntersectionTests.testBuildRoleForListOfRoleReferences(RoleReferenceIntersectionTests.java:66)
at jdk.internal.reflect.DirectMethodHandleAccessor.invoke(DirectMethodHandleAccessor.java:103)
at java.lang.reflect.Method.invoke(Method.java:580)
at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1758)
at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:946)
at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:982)
at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:996)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at org.apache.lucene.tests.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:48)
at org.apache.lucene.tests.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:43)
at org.apache.lucene.tests.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:45)
at org.apache.lucene.tests.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:60)
at org.apache.lucene.tests.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:44)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:390)
at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:843)
at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:490)
at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:955)
at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:840)
at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:891)
at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:902)
at org.apache.lucene.tests.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:43)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at org.apache.lucene.tests.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:38)
at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at org.apache.lucene.tests.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at org.apache.lucene.tests.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:43)
at org.apache.lucene.tests.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:44)
at org.apache.lucene.tests.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:60)
at org.apache.lucene.tests.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:47)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:390)
at com.carrotsearch.randomizedtesting.ThreadLeakControl.lambda$forkTimeoutingTask$0(ThreadLeakControl.java:850)
at java.lang.Thread.run(Thread.java:1583)
```
|
non_process
|
rolereferenceintersectiontests testbuildroleforlistofrolereferences failing seems that that problem from came back again mockito version needs to be updated fails regularly since june so i will mute the test for now build scan reproduction line gradlew x pack plugin core test tests org elasticsearch xpack core security authz store rolereferenceintersectiontests testbuildroleforlistofrolereferences dtests seed dtests locale no no dtests timezone brazil west druntime java applicable branches main reproduces locally didn t try failure history failure excerpt org mockito exceptions base mockitoexception cannot call abstract real method on java object calling real methods is only possible when mocking non abstract method correct example when mockofconcreteclass nonabstractmethod thencallrealmethod at randomizedtesting seedinfo seed at org elasticsearch xpack core security authz store rolereferenceintersection lambda buildrole rolereferenceintersection java at org elasticsearch action actionlistener onresponse actionlistener java at org elasticsearch action support groupedactionlistener onresponse groupedactionlistener java at org elasticsearch xpack core security authz store rolereferenceintersectiontests lambda testbuildroleforlistofrolereferences rolereferenceintersectiontests java at org elasticsearch xpack core security authz store rolereferenceintersection lambda buildrole rolereferenceintersection java at java util arraylist foreach arraylist java at org elasticsearch xpack core security authz store rolereferenceintersection buildrole rolereferenceintersection java at org elasticsearch xpack core security authz store rolereferenceintersectiontests testbuildroleforlistofrolereferences rolereferenceintersectiontests java at jdk internal reflect directmethodhandleaccessor invoke directmethodhandleaccessor java at java lang reflect method invoke method java at com carrotsearch randomizedtesting randomizedrunner invoke randomizedrunner java at com carrotsearch 
randomizedtesting randomizedrunner evaluate randomizedrunner java at com carrotsearch randomizedtesting randomizedrunner evaluate randomizedrunner java at com carrotsearch randomizedtesting randomizedrunner evaluate randomizedrunner java at com carrotsearch randomizedtesting rules statementadapter evaluate statementadapter java at org apache lucene tests util testrulesetupteardownchained evaluate testrulesetupteardownchained java at org apache lucene tests util abstractbeforeafterrule evaluate abstractbeforeafterrule java at org apache lucene tests util testrulethreadandtestname evaluate testrulethreadandtestname java at org apache lucene tests util testruleignoreaftermaxfailures evaluate testruleignoreaftermaxfailures java at org apache lucene tests util testrulemarkfailure evaluate testrulemarkfailure java at com carrotsearch randomizedtesting rules statementadapter evaluate statementadapter java at com carrotsearch randomizedtesting threadleakcontrol statementrunner run threadleakcontrol java at com carrotsearch randomizedtesting threadleakcontrol forktimeoutingtask threadleakcontrol java at com carrotsearch randomizedtesting threadleakcontrol evaluate threadleakcontrol java at com carrotsearch randomizedtesting randomizedrunner runsingletest randomizedrunner java at com carrotsearch randomizedtesting randomizedrunner evaluate randomizedrunner java at com carrotsearch randomizedtesting randomizedrunner evaluate randomizedrunner java at com carrotsearch randomizedtesting randomizedrunner evaluate randomizedrunner java at org apache lucene tests util abstractbeforeafterrule evaluate abstractbeforeafterrule java at com carrotsearch randomizedtesting rules statementadapter evaluate statementadapter java at org apache lucene tests util testrulestoreclassname evaluate testrulestoreclassname java at com carrotsearch randomizedtesting rules noshadowingoroverridesonmethodsrule evaluate noshadowingoroverridesonmethodsrule java at com carrotsearch randomizedtesting rules 
noshadowingoroverridesonmethodsrule evaluate noshadowingoroverridesonmethodsrule java at com carrotsearch randomizedtesting rules statementadapter evaluate statementadapter java at com carrotsearch randomizedtesting rules statementadapter evaluate statementadapter java at org apache lucene tests util testruleassertionsrequired evaluate testruleassertionsrequired java at org apache lucene tests util abstractbeforeafterrule evaluate abstractbeforeafterrule java at org apache lucene tests util testrulemarkfailure evaluate testrulemarkfailure java at org apache lucene tests util testruleignoreaftermaxfailures evaluate testruleignoreaftermaxfailures java at org apache lucene tests util testruleignoretestsuites evaluate testruleignoretestsuites java at com carrotsearch randomizedtesting rules statementadapter evaluate statementadapter java at com carrotsearch randomizedtesting threadleakcontrol statementrunner run threadleakcontrol java at com carrotsearch randomizedtesting threadleakcontrol lambda forktimeoutingtask threadleakcontrol java at java lang thread run thread java
| 0
|
5,721
| 30,249,262,233
|
IssuesEvent
|
2023-07-06 19:04:48
|
carbon-design-system/carbon
|
https://api.github.com/repos/carbon-design-system/carbon
|
closed
|
[Question]: SideNav not closing when clicking links
|
type: question ❓ status: needs triage 🕵️♀️ status: waiting for maintainer response 💬 status: needs reproduction
|
### Question for Carbon
**Package**
@carbon/react
**Browser**
Chrome
**Package version**
@carbon/react: 1.19.0
**React version**
18.2.0
Description
As pointed out in [#3666](https://github.com/carbon-design-system/carbon/issues/3666), the SideNav menu does not collapse when a link element is clicked. Closing the menu by clicking anywhere on the overlay was solved in [#8296](https://github.com/carbon-design-system/carbon/pull/8296), but I can't find any way to achieve the same behaviour as on https://carbondesignsystem.com/ and automatically close the side menu. Has this issue been solved?
```jsx
const MainHeader = () => {
  return (
    <HeaderContainer
      render={({ isSideNavExpanded, onClickSideNavExpand }) => (
        <Header aria-label="Header navigation">
          <SkipToContent />
          <HeaderMenuButton
            aria-label={isSideNavExpanded ? "Close menu" : "Open menu"}
            onClick={onClickSideNavExpand}
            isActive={isSideNavExpanded}
          />
          ...
          <SideNav
            aria-label="Side navigation"
            expanded={isSideNavExpanded}
            isPersistent={false}
            onOverlayClick={onClickSideNavExpand}
          >
            <SideNavItems>
              <HeaderSideNavItems>
                <HeaderMenuItem element={NavLink} to={`/link1`}>
                  Link1
                </HeaderMenuItem>...
```
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/carbon-design-system/carbon/blob/f555616971a03fd454c0f4daea184adf41fff05b/.github/CODE_OF_CONDUCT.md)
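A common workaround (not an official Carbon API) is to collapse the SideNav from the nav item's own click handler, alongside navigation. The `makeNavClickHandler` helper below is hypothetical, shown only to illustrate the wiring; the resulting handler would be passed as `onClick` on `HeaderMenuItem`, with `toggleSideNav` being the `onClickSideNavExpand` render prop and `navigate` coming from your router (e.g. react-router's `useNavigate()`).

```javascript
// Hypothetical helper: returns a click handler that navigates and then
// collapses the SideNav. Both callbacks are supplied by the caller:
// `toggleSideNav` toggles the expanded state, `navigate` changes the route.
function makeNavClickHandler(toggleSideNav, navigate, to) {
  return () => {
    navigate(to);     // perform the route change
    toggleSideNav();  // collapse the expanded SideNav
  };
}
```

In the component this would look like `<HeaderMenuItem onClick={makeNavClickHandler(onClickSideNavExpand, navigate, "/link1")}>`.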
|
True
|
[Question]: SideNav not closing when clicking links - ### Question for Carbon
**Package**
@carbon/react
**Browser**
Chrome
**Package version**
@carbon/react: 1.19.0
**React version**
18.2.0
Description
As pointed out in [#3666](https://github.com/carbon-design-system/carbon/issues/3666), the SideNav menu does not collapse when a link element is clicked. Closing the menu by clicking anywhere on the overlay was solved in [#8296](https://github.com/carbon-design-system/carbon/pull/8296), but I can't find any way to achieve the same behaviour as on https://carbondesignsystem.com/ and automatically close the side menu. Has this issue been solved?
```jsx
const MainHeader = () => {
  return (
    <HeaderContainer
      render={({ isSideNavExpanded, onClickSideNavExpand }) => (
        <Header aria-label="Header navigation">
          <SkipToContent />
          <HeaderMenuButton
            aria-label={isSideNavExpanded ? "Close menu" : "Open menu"}
            onClick={onClickSideNavExpand}
            isActive={isSideNavExpanded}
          />
          ...
          <SideNav
            aria-label="Side navigation"
            expanded={isSideNavExpanded}
            isPersistent={false}
            onOverlayClick={onClickSideNavExpand}
          >
            <SideNavItems>
              <HeaderSideNavItems>
                <HeaderMenuItem element={NavLink} to={`/link1`}>
                  Link1
                </HeaderMenuItem>...
```
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/carbon-design-system/carbon/blob/f555616971a03fd454c0f4daea184adf41fff05b/.github/CODE_OF_CONDUCT.md)
|
non_process
|
sidenav not closing when clicking links question for carbon package carbon react browser chrome package version carbon react react version description as pointed out in the sidenav menu is not collapsing when clicking a link element the issue to close the menu by clicking anywhere on the overlay was solved in but can t find any way to achieve the same behaviour as in the and automatically close the side menu has this issue been solved const mainheader return headercontainer render issidenavexpanded onclicksidenavexpand headermenubutton aria label issidenavexpanded close menu open menu onclick onclicksidenavexpand isactive issidenavexpanded sidenav aria label side navigation expanded issidenavexpanded ispersistent false onoverlayclick onclicksidenavexpand code of conduct i agree to follow this project s
| 0
|
404
| 2,848,095,451
|
IssuesEvent
|
2015-05-29 20:44:48
|
mitchellh/packer
|
https://api.github.com/repos/mitchellh/packer
|
closed
|
Atlas post-processor fails with "unexpected EOF" when trying to upload a Vagrant box
|
bug post-processor/atlas
|
Crash log here:
https://gist.github.com/KFishner/613d17334b96f6967fe4
Packer template here:
https://gist.github.com/KFishner/1084057e7eb970c513d8
|
1.0
|
Atlas post-processor fails with "unexpected EOF" when trying to upload a Vagrant box - Crash log here:
https://gist.github.com/KFishner/613d17334b96f6967fe4
Packer template here:
https://gist.github.com/KFishner/1084057e7eb970c513d8
|
process
|
atlas post processor fails with unexpected eof when trying to upload a vagrant box crash log here packer template here
| 1
|
38,290
| 19,090,281,905
|
IssuesEvent
|
2021-11-29 11:17:43
|
ChainSafe/lodestar
|
https://api.github.com/repos/ChainSafe/lodestar
|
closed
|
Optimize fork choice methods that iterate
|
prio5-medium scope-performance
|
<!--NOTE: -->
<!--- General questions should go to the discord chat instead of the issue tracker.-->
**Describe the bug**
Some forkchoice methods are far too inefficient, iterating all nodes unnecessarily
**Expected behavior**
Be more optimal
|
True
|
Optimize fork choice methods that iterate - <!--NOTE: -->
<!--- General questions should go to the discord chat instead of the issue tracker.-->
**Describe the bug**
Some forkchoice methods are far too inefficient, iterating all nodes unnecessarily
**Expected behavior**
Be more optimal
|
non_process
|
optimize fork choice methods that iterate describe the bug some forkchoice methods are way too inneficient iterating all nodes unnecessarily expected behavior be more optimal
| 0
|
66,892
| 27,618,837,515
|
IssuesEvent
|
2023-03-09 21:47:06
|
cityofaustin/atd-data-tech
|
https://api.github.com/repos/cityofaustin/atd-data-tech
|
closed
|
Check-in with SMO on Moped first impressions 💓
|
Type: Meeting Service: Dev Service: Product Workgroup: SMO Product: Moped Project: Moped v2.0
|
> As a follow-up to our introductory meeting, let's take this time to review Smart Mobility's first impressions of Moped.
>
> We hope to learn about how Moped might be of value to your work, and what changes we could bring to the app to make it best fit your needs.
>
> We'll be prepared to give you an update on our roadmap and talk through what a proper onboarding might look like.
|
2.0
|
Check-in with SMO on Moped first impressions 💓 - > As a follow-up to our introductory meeting, let's take this time to review Smart Mobility's first impressions of Moped.
>
> We hope to learn about how Moped might be of value to your work, and what changes we could bring to the app to make it best fit your needs.
>
> We'll be prepared to give you an update on our roadmap and talk through what a proper onboarding might look like.
|
non_process
|
check in with smo on moped first impressions 💓 as a follow up to our introductory meeting let s take this time to review smart mobility s first impressions of moped we hope to learn about how moped might be of value to your work and what changes we could bring to the app to make it best fit your needs we ll be prepared to give you an update on our roadmap and talk through what a proper onboarding might look like
| 0
|
381,853
| 11,296,802,854
|
IssuesEvent
|
2020-01-17 03:17:59
|
Novusphere/discussions-app
|
https://api.github.com/repos/Novusphere/discussions-app
|
closed
|
Unified ID Wallet Integration
|
enhancement feature high priority
|
In `nsuid.js` the following is provided:
1) How to go from token symbol & contract --> chain id, see getchains
2) How to go from chainid and wallet key --> balance, see getbalance
3) How to do wallet key --> wallet key (pubk to pubk), see transfer
4) How to do wallet key --> eos account (pubk to eos acc) see withdraw
You can ignore create for now, as this is an admin function that normal users won't need ever.
Feel free to try out the CLI by using the provided private keys, or your own, you can easily generate here:
https://nadejde.github.io/eos-token-sale/
-----
# Depositing
If the user isn't logged in / connected with their EOS account, make a "Connect Wallet" button show first. If they are logged in, provide logout and deposit button.
The deposit button is simply a transfer of whatever token is being deposited with the user's wallet (public) key as the memo, i.e.
```js
await eos.transact({
contract: "novusphereio",
name: "transfer",
data: {
from: eos.accountName,
to: "nsuidcntract",
quantity: "5.000 ATMOS",
memo: "EOS8XF6v1SStBMik6DgvvSxo2ZAtytbzqSVwb8zXuTtA11J4v9xWk"
}
})
```
# Transfering / Withdrawing
refer to `fee.flat` and `fee.percent` in https://atmosdb.novusphere.io/unifiedid/p2k
so, `totalSent = sent + fee`
You should give the user either the option to enter the total being sent (default) or the amount they would like to send (and then add the fee on top of it).
How you calculate the fee should be pretty obvious,
`fee = (totalSent * percent) + flat`
`sent = totalSent - fee`
It's possible you end up with `fee = 0` currently depending on the amount being sent. Server side I will utilize min to ensure `totalSent >= min` and if not I'll throw back an error in `result.message` and `result.error` will be true (as usual) when you attempt to relay the transaction (`/relay` endpoint as used in `nsuid.js`)
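The fee arithmetic above can be sketched directly. The numeric values for `flat` and `percent` below are placeholders; the real ones come from the `/p2k` endpoint at https://atmosdb.novusphere.io/unifiedid/p2k.

```javascript
// Placeholder fee parameters; real values are fetched from the /p2k endpoint.
const fee = { flat: 0.05, percent: 0.01 };

// fee = (totalSent * percent) + flat
function computeFee(totalSent) {
  return totalSent * fee.percent + fee.flat;
}

// sent = totalSent - fee
function computeSent(totalSent) {
  return totalSent - computeFee(totalSent);
}
```

For the default UI flow the user enters `totalSent` and the app derives `sent`; for the other option, invert the relation to add the fee on top of the amount the user wants to send.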
|
1.0
|
Unified ID Wallet Integration - In `nsuid.js` the following is provided:
1) How to go from token symbol & contract --> chain id, see getchains
2) How to go from chainid and wallet key --> balance, see getbalance
3) How to do wallet key --> wallet key (pubk to pubk), see transfer
4) How to do wallet key --> eos account (pubk to eos acc) see withdraw
You can ignore create for now, as this is an admin function that normal users won't need ever.
Feel free to try out the CLI by using the provided private keys, or your own, you can easily generate here:
https://nadejde.github.io/eos-token-sale/
-----
# Depositing
If the user isn't logged in / connected with their EOS account, make a "Connect Wallet" button show first. If they are logged in, provide logout and deposit button.
The deposit button is simply a transfer of whatever token is being deposited with the user's wallet (public) key as the memo, i.e.
```js
await eos.transact({
contract: "novusphereio",
name: "transfer",
data: {
from: eos.accountName,
to: "nsuidcntract",
quantity: "5.000 ATMOS",
memo: "EOS8XF6v1SStBMik6DgvvSxo2ZAtytbzqSVwb8zXuTtA11J4v9xWk"
}
})
```
# Transfering / Withdrawing
refer to `fee.flat` and `fee.percent` in https://atmosdb.novusphere.io/unifiedid/p2k
so, `totalSent = sent + fee`
You should give the user either the option to enter the total being sent (default) or the amount they would like to send (and then add the fee on top of it).
How you calculate the fee should be pretty obvious,
`fee = (totalSent * percent) + flat`
`sent = totalSent - fee`
It's possible you end up with `fee = 0` currently depending on the amount being sent. Server side I will utilize min to ensure `totalSent >= min` and if not I'll throw back an error in `result.message` and `result.error` will be true (as usual) when you attempt to relay the transaction (`/relay` endpoint as used in `nsuid.js`)
|
non_process
|
unified id wallet integation in nsuid js is provided how to go from token symbol contract chain id see getchains how to go from chainid and wallet key balance see getbalance how to do wallet key wallet key pubk to pubk see transfer how to do wallet key eos account pubk to eos acc see withdraw you can ignore create for now as this is an admin function that normal users won t need ever feel free to try out the cli by using the provided private keys or your own you can easily generate here depositing if the user isn t logged in connected with their eos account make a connect wallet button show first if they are logged in provide logout and deposit button the deposit button is simply a transfer of whatever token is being deposited with the user s wallet public key as the memo i e js await eos transact contract novusphereio name transfer data from eos accountname to nsuidcntract quantity atmos memo transfering withdrawing refer to fee flat and fee percent in so totalsent sent fee you should give the user either the option to enter the total being sent default or the amount they would like to send and then add the fee on top of it how you calculate the fee should be pretty obvious fee totalsent percent flat sent totalsent fee it s possible you end up with fee currently depending on the amount being sent server side i will utilize min to ensure totalsent min and if not i ll throw back an error in result message and result error will be true as usual when you attempt to relay the transaction relay endpoint as used in nsuid js
| 0
|
250,185
| 27,051,864,980
|
IssuesEvent
|
2023-02-13 13:48:22
|
elastic/cloudbeat
|
https://api.github.com/repos/elastic/cloudbeat
|
closed
|
[CI] CloudFormation templates linter
|
Team:Cloud Security 8.8 candidate Vulnerability Management
|
**Motivation**
As decided on a separate ticket (see below), Cloudbeat repository will be in charge of managing CloudFormation templates.
Before we publish the templates to S3, we should ensure that they are in the correct form. As part of Cloudbeat's CI, we should validate the structure of the template. Some tools that might help:
- https://github.com/marketplace/actions/cfn-lint-action
- https://github.com/aws-samples/aws-cloudformation-validator
- https://github.com/badsyntax/github-action-aws-cloudformation
**Definition of done**
- [ ] Add CI step to cloudbeat to verify CloudFormation templates
**Out of scope**
- https://github.com/elastic/cloudbeat/issues/698
**Related tasks/epics**
- https://github.com/elastic/security-team/issues/5700
|
True
|
[CI] CloudFormation templates linter - **Motivation**
As decided on a separate ticket (see below), Cloudbeat repository will be in charge of managing CloudFormation templates.
Before we publish the templates to S3, we should ensure that they are in the correct form. As part of Cloudbeat's CI, we should validate the structure of the template. Some tools that might help:
- https://github.com/marketplace/actions/cfn-lint-action
- https://github.com/aws-samples/aws-cloudformation-validator
- https://github.com/badsyntax/github-action-aws-cloudformation
**Definition of done**
- [ ] Add CI step to cloudbeat to verify CloudFormation templates
**Out of scope**
- https://github.com/elastic/cloudbeat/issues/698
**Related tasks/epics**
- https://github.com/elastic/security-team/issues/5700
|
non_process
|
cloudformation templates linter motivation as decided on a separate ticket see below cloudbeat repository will be in charge of managing cloudformation templates before we publish the templates to we should assure that they are in the correct form as part of cloudbeat s ci we should validate the structure of the template some tools that might help definition of done add ci step to cloudbeat to verify cloudformation templates out of scope related tasks epics
| 0
|
12,700
| 15,077,883,382
|
IssuesEvent
|
2021-02-05 07:48:10
|
cypress-io/cypress
|
https://api.github.com/repos/cypress-io/cypress
|
closed
|
Add Cypress to GitHub package registry
|
process: release stage: ready for work type: chore
|
Apparently it's a thing. How many people will use it 🤷♀
Instructions: https://help.github.com/en/articles/configuring-npm-for-use-with-github-package-registry
Add a 👍 here if you use Cypress and would use this.
|
1.0
|
Add Cypress to GitHub package registry - Apparently it's a thing. How many people will use it 🤷♀
Instructions: https://help.github.com/en/articles/configuring-npm-for-use-with-github-package-registry
Add a 👍 here if you use Cypress and would use this.
|
process
|
add cypress to github package registry apparently it s a thing how many people will use it 🤷♀ instructions add a 👍 here if you use cypress and would use this
| 1
|
20,377
| 27,031,431,806
|
IssuesEvent
|
2023-02-12 08:38:48
|
open-telemetry/opentelemetry-collector-contrib
|
https://api.github.com/repos/open-telemetry/opentelemetry-collector-contrib
|
closed
|
New component: timestamp processor
|
processor/transform
|
### The purpose and use-cases of the new component
The purpose of this processor is to change the timestamp of all data that it processes (logs, traces, metrics) by adding or removing a static time duration.
### Example configuration for the component
```
processors:
timestamp:
offset: "0h"
timestamp/add2h:
offset: "2h"
timestamp/remove3h:
offset: "-3h"
receivers:
nop:
exporters:
nop:
service:
pipelines:
metrics:
receivers: [nop]
processors: [timestamp, timestamp/add2h, timestamp/remove3h]
exporters: [nop]
```
### Telemetry data types supported
all
### Is this a vendor-specific component?
- [ ] This is a vendor-specific component
- [ ] If this is a vendor-specific component, I am proposing to contribute this as a representative of the vendor.
### Sponsor (optional)
_No response_
### Additional context
This was needed at one point as a stop-gap solution for one of our PoCs, and it's possible the transformprocessor and OTTL are a better solution long term. See the original discussion https://github.com/open-telemetry/opentelemetry-collector-contrib/issues/14142
If others find this processor has merit, I can help contribute it to contrib and help maintain it.
The processor might instead be better named "clockskew" - please suggest.
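The offset semantics in the example configuration can be sketched generically. The real processor would be written in Go against the collector's pdata API; the snippet below is only an illustration of adding a signed duration to a timestamp, restricted to whole-hour offsets like "2h", "-3h", and "0h" for brevity.

```javascript
// Parse a whole-hour offset string ("2h", "-3h", "0h") into milliseconds.
// This is a simplified sketch, not the collector's duration parser.
function parseOffsetMs(offset) {
  const m = /^(-?\d+)h$/.exec(offset);
  if (!m) throw new Error("unsupported offset: " + offset);
  return Number(m[1]) * 60 * 60 * 1000;
}

// Shift a Unix-millisecond timestamp by the configured offset.
function applyOffset(tsMs, offset) {
  return tsMs + parseOffsetMs(offset);
}
```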
|
1.0
|
New component: timestamp processor - ### The purpose and use-cases of the new component
The purpose of this processor is to change the timestamp of all data that it processes (logs, traces, metrics) by adding or removing a static time duration.
### Example configuration for the component
```
processors:
timestamp:
offset: "0h"
timestamp/add2h:
offset: "2h"
timestamp/remove3h:
offset: "-3h"
receivers:
nop:
exporters:
nop:
service:
pipelines:
metrics:
receivers: [nop]
processors: [timestamp, timestamp/add2h, timestamp/remove3h]
exporters: [nop]
```
### Telemetry data types supported
all
### Is this a vendor-specific component?
- [ ] This is a vendor-specific component
- [ ] If this is a vendor-specific component, I am proposing to contribute this as a representative of the vendor.
### Sponsor (optional)
_No response_
### Additional context
This was needed at one point as a stop-gap solution for one of our PoCs, and it's possible the transformprocessor and OTTL are a better solution long term. See the original discussion https://github.com/open-telemetry/opentelemetry-collector-contrib/issues/14142
If others find this processor has merit, I can help contribute it to contrib and help maintain it.
The processor might instead be better named "clockskew" - please suggest.
|
process
|
new component timestamp processor the purpose and use cases of the new component the purpose of this processor is to change the timestamp of all data that it processes logs traces metrics by adding or removing a static time duration example configuration for the component processors timestamp offset timestamp offset timestamp offset receivers nop exporters nop service pipelines metrics receivers processors exporters telemetry data types supported all is this a vendor specific component this is a vendor specific component if this is a vendor specific component i am proposing to contribute this as a representative of the vendor sponsor optional no response additional context this was needed at one point as a stop gap solution for one of our pocs and it s possible the transformprocessor and ottl is a better solution long term see original discussion if others find this processor has merit i can help contribute it to contrib and help maintain it the name of the processor might be instead better named clockskew please suggest
| 1
|
99,596
| 8,705,233,950
|
IssuesEvent
|
2018-12-05 21:46:22
|
Microsoft/vscode
|
https://api.github.com/repos/Microsoft/vscode
|
closed
|
Test automatic extension host profiling
|
testplan-item
|
Refs: #60332
- [ ] any @mjbvz
- [x] any @isidorn - will test on os x
Complexity: 4
When an extension takes over the extension host process we now start to profile and to point out those extensions. This is how it works:
* profiling starts when the extension host is unresponsive for 3 seconds already
* profiling lasts for 5 seconds or shorter when the extension host is responsive again
* when an extension can be identified as a heavy hitter, a log message is printed
* also a telemetry event is sent
* lastly, when an extension stole 5 seconds or more, a silent notification is shown - a minified notification that shows as a number in the statusbar
* selecting that notification will ask you to file an issue (only when the extension has an issue-url in its package.json)
With an extension doing heavy work, like a dumb Fibonacci number computation, test the following:
* Trigger the expensive code via a command and listener
* Check the log and confirm that your extension get blamed correctly
* Add a repo-url and check that the editor asks you to file an issue
* Test that the silent notification only shows when your extension took 5 seconds or more
* Check that the file-issue command stores the cpu-profile in your home-dir and that the issue mentions that
* Check that the cpu-profile contains no PII
|
1.0
|
Test automatic extension host profiling - Refs: #60332
- [ ] any @mjbvz
- [x] any @isidorn - will test on os x
Complexity: 4
When an extension takes over the extension host process we now start to profile and to point out those extensions. This is how it works:
* profiling starts when the extension host is unresponsive for 3 seconds already
* profiling lasts for 5 seconds or shorter when the extension host is responsive again
* when an extension can be identified as a heavy hitter, a log message is printed
* also a telemetry event is sent
* lastly, when an extension stole 5 seconds or more, a silent notification is shown - a minified notification that shows as a number in the statusbar
* selecting that notification will ask you to file an issue (only when the extension has an issue-url in its package.json)
With an extension doing heavy work, like a dumb Fibonacci number computation, test the following:
* Trigger the expensive code via a command and listener
* Check the log and confirm that your extension get blamed correctly
* Add a repo-url and check that the editor asks you to file an issue
* Test that the silent notification only shows when your extension took 5 seconds or more
* Check that the file-issue command stores the cpu-profile in your home-dir and that the issue mentions that
* Check that the cpu-profile contains no PII
|
non_process
|
test automatic extension host profiling refs any mjbvz any isidorn will test on os x complexity when an extension takes over the extension host process we now start to profile and to point out those extensions this is how it works profiling starts when the extension host is unresponsive for seconds already profiling lasts for seconds or shorter when the extension host responsive again when an extension can be identified as heavy hitter a log message is printed also a telemetry event is send last when an extension stole seconds or more a silent notification is shown a minified notification that shows as number in the statusbar selecting that notification will ask you file an issue only when the extension has a issue url in its package json with an extension doing heavy work like dump fibonacci number computation test the following trigger the expensive code via a command and listener check the log and confirm that your extension get blamed correctly add a repo url and check that the editor asks you to file an issue test that the silent notification only shows when your extension took seconds or more check that the file issue command stores the cpu profile in your home dir and that the issue mentions that check that the cpu profile contains no pii
| 0
|
14,577
| 17,702,948,545
|
IssuesEvent
|
2021-08-25 01:57:11
|
tdwg/dwc
|
https://api.github.com/repos/tdwg/dwc
|
closed
|
Change term - associatedReferences
|
Term - change Class - Occurrence Class - ResourceRelationship non-normative Process - complete
|
## Change term
* Submitter: John Wieczorek
* Justification (why is this change necessary?): Consistency and clarity
* Proponents (who needs this change): Everyone
Current Term definition: https://dwc.tdwg.org/terms/#dwc:associatedReferences
Proposed new attributes of the term:
* Term name (in lowerCamelCase): associatedReferences
* Organized in Class (e.g. Location, Taxon): Occurrence
* Definition of the term: (unchanged): A list (concatenated and separated) of identifiers (publication, bibliographic reference, global unique identifier, URI) of literature associated with the Occurrence.
* Usage comments (recommendations regarding content, etc.): **Recommended best practice is to separate the values in a list with space vertical bar space ( | ). Note that the ResourceRelationship class is an alternative means of representing associations, and with more detail. Note also that the intended usage of the term dcterms:references in Darwin Core when applied to an Occurrence is to point to the definitive source representation of that Occurrence if one is available. Note also that the intended usage of dcterms:bibliographicCitation in Darwin Core when applied to an Occurrence is to provide the preferred way to cite the Occurrence itself.**
* Examples: `http://www.sciencemag.org/cgi/content/abstract/322/5899/261`, `Christopher J. Conroy, Jennifer L. Neuwald. 2008. Phylogeographic study of the California vole, Microtus californicus Journal of Mammalogy, 89(3):755-767.`, `Steven R. Hoofer and Ronald A. Van Den Bussche. 2001. Phylogenetic Relationships of Plecotine Bats and Allies Based on Mitochondrial Ribosomal Sequences. Journal of Mammalogy 82(1):131-137. | Walker, Faith M., Jeffrey T. Foster, Kevin P. Drees, Carol L. Chambers. 2014. Spotted bat (Euderma maculatum) microsatellite discovery using illumina sequencing. Conservation Genetics Resources.`
* Refines (identifier of the broader term this term refines, if applicable): None
* Replaces (identifier of the existing term that would be deprecated and replaced by this term, if applicable): http://rs.tdwg.org/dwc/terms/version/associatedReferences-2017-10-06
* ABCD 2.06 (XPATH of the equivalent term in ABCD or EFG, if applicable): DataSets/DataSet/Units/Unit/UnitReferences
Discussions around changes to relationshipOfResource (#194), around a new term relationshipOfResourceID (#186, #283), and changes to associatedOccurrences (Issue #324) suggest that a clarification should also be made in the associatedReferences usage notes. Specifically, the convention on list item separation and the reference to ResourceRelationship as an alternative means of capturing these data are recommended. In this case also I think it useful to differentiate the intended uses of other similar terms in Darwin Core.
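The " | " separator convention in the proposed usage comments can be illustrated with a small hypothetical helper (not part of any Darwin Core library) that splits a concatenated associatedReferences value back into its individual reference items:

```javascript
// Split a "concatenated and separated" associatedReferences value on the
// recommended " | " (space vertical bar space) separator, trimming each item.
function splitAssociatedReferences(value) {
  return value
    .split(" | ")
    .map((item) => item.trim())
    .filter((item) => item.length > 0);
}
```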
|
1.0
|
Change term - associatedReferences - ## Change term
* Submitter: John Wieczorek
* Justification (why is this change necessary?): Consistency and clarity
* Proponents (who needs this change): Everyone
Current Term definition: https://dwc.tdwg.org/terms/#dwc:associatedReferences
Proposed new attributes of the term:
* Term name (in lowerCamelCase): associatedReferences
* Organized in Class (e.g. Location, Taxon): Occurrence
* Definition of the term: (unchanged): A list (concatenated and separated) of identifiers (publication, bibliographic reference, global unique identifier, URI) of literature associated with the Occurrence.
* Usage comments (recommendations regarding content, etc.): **Recommended best practice is to separate the values in a list with space vertical bar space ( | ). Note that the ResourceRelationship class is an alternative means of representing associations, and with more detail. Note also that the intended usage of the term dcterms:references in Darwin Core when applied to an Occurrence is to point to the definitive source representation of that Occurrence if one is available. Note also that the intended usage of dcterms:bibliographicCitation in Darwin Core when applied to an Occurrence is to provide the preferred way to cite the Occurrence itself.**
* Examples: `http://www.sciencemag.org/cgi/content/abstract/322/5899/261`, `Christopher J. Conroy, Jennifer L. Neuwald. 2008. Phylogeographic study of the California vole, Microtus californicus Journal of Mammalogy, 89(3):755-767.`, `Steven R. Hoofer and Ronald A. Van Den Bussche. 2001. Phylogenetic Relationships of Plecotine Bats and Allies Based on Mitochondrial Ribosomal Sequences. Journal of Mammalogy 82(1):131-137. | Walker, Faith M., Jeffrey T. Foster, Kevin P. Drees, Carol L. Chambers. 2014. Spotted bat (Euderma maculatum) microsatellite discovery using illumina sequencing. Conservation Genetics Resources.`
* Refines (identifier of the broader term this term refines, if applicable): None
* Replaces (identifier of the existing term that would be deprecated and replaced by this term, if applicable): http://rs.tdwg.org/dwc/terms/version/associatedReferences-2017-10-06
* ABCD 2.06 (XPATH of the equivalent term in ABCD or EFG, if applicable): DataSets/DataSet/Units/Unit/UnitReferences
Discussions around changes to relationshipOfResource (#194), around a new term relationshipOfResourceID (#186, #283), and changes to associatedOccurrences (Issue #324) suggest that a clarification should also be made in the associatedReferences usage notes. Specifically, the convention on list item separation and the reference to ResourceRelationship as an alternative means of capturing these data are recommended. In this case also I think it useful to differentiate the intended uses of other similar terms in Darwin Core.
|
process
|
change term associatedreferences change term submitter john wieczorek justification why is this change necessary consistency and clarity proponents who needs this change everyone current term definition proposed new attributes of the term term name in lowercamelcase associatedreferences organized in class e g location taxon occurrence definition of the term unchanged a list concatenated and separated of identifiers publication bibliographic reference global unique identifier uri of literature associated with the occurrence usage comments recommendations regarding content etc recommended best practice is to separate the values in a list with space vertical bar space note that the resourcerelationship class is an alternative means of representing associations and with more detail note also that the intended usage of the term dcterms references in darwin core when applied to an occurrence is to point to the definitive source representation of that occurrence if one is available note also that the intended usage of dcterms bibliographiccitation in darwin core when applied to an occurrence is to provide the preferred way to cite the occurrence itself examples christopher j conroy jennifer l neuwald phylogeographic study of the california vole microtus californicus journal of mammalogy steven r hoofer and ronald a van den bussche phylogenetic relationships of plecotine bats and allies based on mitochondrial ribosomal sequences journal of mammalogy walker faith m jeffrey t foster kevin p drees carol l chambers spotted bat euderma maculatum microsatellite discovery using illumina sequencing conservation genetics resources refines identifier of the broader term this term refines if applicable none replaces identifier of the existing term that would be deprecated and replaced by this term if applicable abcd xpath of the equivalent term in abcd or efg if applicable datasets dataset units unit unitreferences discussions around changes to relationshipofresource around a new 
term relationshipofresourceid and changes to associatedoccurrences issue suggest that a clarification should also be made in the associatedreferences usage notes specifically the convention on list item separation and the reference to resourcerelationship as an alternative means of capturing these data are recommended in this case also i think it useful to differentiate the intended uses of other similar terms in darwin core
| 1
|
16,087
| 20,255,890,389
|
IssuesEvent
|
2022-02-14 23:10:35
|
MicrosoftDocs/azure-devops-docs
|
https://api.github.com/repos/MicrosoftDocs/azure-devops-docs
|
closed
|
non-intuitive pipeline trigger behaviour
|
devops/prod Pri2 devops-cicd-process/tech
|
"Pipeline completion triggers use the [Default branch for manual and scheduled builds](https://docs.microsoft.com/en-us/azure/devops/pipelines/process/pipeline-default-branch?view=azure-devops) setting to determine which branch's version of a YAML pipeline's branch filters to evaluate when determining whether to run a pipeline as the result of another pipeline completing. By default this setting points to the default branch of the repository."
... this is really annoying and non-intuitive behaviour for pipelines in the same repo. Is there any way this can be changed to evaluate pipeline yaml from the current branch rather than the default branch? (especially given that it's the pipeline yaml in the current branch that will actually be run)
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: 86285f72-9e28-da97-59bb-c29eb60f627d
* Version Independent ID: 18d5a591-a7d3-c261-6bff-8808ae433f54
* Content: [Configure pipeline triggers - Azure Pipelines](https://docs.microsoft.com/en-us/azure/devops/pipelines/process/pipeline-triggers?view=azure-devops&tabs=yaml#branch-considerations-for-pipeline-completion-triggers)
* Content Source: [docs/pipelines/process/pipeline-triggers.md](https://github.com/MicrosoftDocs/azure-devops-docs/blob/main/docs/pipelines/process/pipeline-triggers.md)
* Product: **devops**
* Technology: **devops-cicd-process**
* GitHub Login: @steved0x
* Microsoft Alias: **sdanie**
|
1.0
|
non-intuitive pipeline trigger behaviour -
"Pipeline completion triggers use the [Default branch for manual and scheduled builds](https://docs.microsoft.com/en-us/azure/devops/pipelines/process/pipeline-default-branch?view=azure-devops) setting to determine which branch's version of a YAML pipeline's branch filters to evaluate when determining whether to run a pipeline as the result of another pipeline completing. By default this setting points to the default branch of the repository."
... this is really annoying and non-intuitive behaviour for pipelines in the same repo. Is there any way this can be changed to evaluate pipeline yaml from the current branch rather than the default branch? (especially given that it's the pipeline yaml in the current branch that will actually be run)
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: 86285f72-9e28-da97-59bb-c29eb60f627d
* Version Independent ID: 18d5a591-a7d3-c261-6bff-8808ae433f54
* Content: [Configure pipeline triggers - Azure Pipelines](https://docs.microsoft.com/en-us/azure/devops/pipelines/process/pipeline-triggers?view=azure-devops&tabs=yaml#branch-considerations-for-pipeline-completion-triggers)
* Content Source: [docs/pipelines/process/pipeline-triggers.md](https://github.com/MicrosoftDocs/azure-devops-docs/blob/main/docs/pipelines/process/pipeline-triggers.md)
* Product: **devops**
* Technology: **devops-cicd-process**
* GitHub Login: @steved0x
* Microsoft Alias: **sdanie**
|
process
|
non intuitive pipeline trigger behaviour pipeline completion triggers use the setting to determine which branch s version of a yaml pipeline s branch filters to evaluate when determining whether to run a pipeline as the result of another pipeline completing by default this setting points to the default branch of the repository this is really annoying and non intuitive behaviour for pipelines in the same repo is there any way this can be changed to evaluate pipeline yaml from the current branch rather than the default branch especially given that it s the pipeline yaml in the current branch that will actually be run document details ⚠ do not edit this section it is required for docs microsoft com ➟ github issue linking id version independent id content content source product devops technology devops cicd process github login microsoft alias sdanie
| 1
|
2,333
| 5,142,704,898
|
IssuesEvent
|
2017-01-12 14:07:29
|
jimbrown75/Permit-Vision-Enhancements
|
https://api.github.com/repos/jimbrown75/Permit-Vision-Enhancements
|
opened
|
Make it mandatory to have a RA for low low risk permits
|
Further discussion (Shell) Must Fix Process Related
|
Since we now have the "Permit Exempted Task", i don't think there is a use case for a Low Low risk permit without any Hazards and Controls, so we should make it mandatory for even low low risk permits to have an RA.
There is nothing in the 8 step process that says we don't need a risk assessment for low low risk permits.
@DANMPV ?
|
1.0
|
Make it mandatory to have a RA for low low risk permits - Since we now have the "Permit Exempted Task", i don't think there is a use case for a Low Low risk permit without any Hazards and Controls, so we should make it mandatory for even low low risk permits to have an RA.
There is nothing in the 8 step process that says we don't need a risk assessment for low low risk permits.
@DANMPV ?
|
process
|
make it mandatory to have a ra for low low risk permits since we now have the permit exempted task i don t think there is a use case for a low low risk permit without any hazards and controls so we should make it mandatory for even low low risk permits to have an ra there is nothing in the step process that says we don t need a risk assessment for low low risk permits danmpv
| 1
|
5,745
| 8,582,915,831
|
IssuesEvent
|
2018-11-13 18:15:26
|
hashicorp/packer
|
https://api.github.com/repos/hashicorp/packer
|
closed
|
vSphere post-processor "\" escaping issue
|
bug post-processor/vsphere
|
vSphere username follows the following convention: `domain\username`. When creating a VM and exporting it as an OVA using the ovftool through packer, the following error is received:
`Error exporting virtual machine: exit status 1`
...
`Could not lookup host: domain`
After looking at the vSphere post-processor code I think this may be due to the QueryEscape function (in `func escapeWithSpaces(stringToEscape string){...}`)taking the username in as a string surrounded in double quotes rather than a string surrounded in back ticks (correct me if I'm wrong). After playing around with golang myself and the QueryEscape function, I have realised that with double quotes it is unable to escape the backslash, but with the backticks it escapes it just fine.
Can this be altered to make it work?
Packer version = 1.2.3
Host platform = Windows 10
Relevant section of debug output:
**(IN PARTICULAR LINE 9 AND 14)**
```
1. 2018/10/31 15:03:13 packer.exe: 2018/10/31 15:03:13 Writing VMX to: C:\Users\sarah\AppData\Local\Temp\packer-vmx924776817\packer-vmware-iso.vmx
2. 2018/10/31 15:03:13 packer.exe: 2018/10/31 15:03:13 [DEBUG] Opening new ssh session
3. 2018/10/31 15:03:13 packer.exe: 2018/10/31 15:03:13 [DEBUG] Starting remote scp process: scp -vt /vmfs/volumes/DATASTORE/packer-vmware-iso
4. 2018/10/31 15:03:13 packer.exe: 2018/10/31 15:03:13 [DEBUG] Started SCP session, beginning transfers...
5. 2018/10/31 15:03:13 packer.exe: 2018/10/31 15:03:13 [DEBUG] Copying input data into temporary file so we can read the length
6. 2018/10/31 15:03:13 packer.exe: 2018/10/31 15:03:13 [DEBUG] scp: Uploading packer-vmware-iso.vmx: perms=C0644 size=2903
7. 2018/10/31 15:03:13 packer.exe: 2018/10/31 15:03:13 [DEBUG] SCP session complete, closing stdin pipe.
8. 2018/10/31 15:03:13 packer.exe: 2018/10/31 15:03:13 [DEBUG] Waiting for SSH session to complete.
9. 2018/10/31 15:03:13 packer.exe: 2018/10/31 15:03:13 [DEBUG] scp stderr (length 158): Could not chdir to home directory /home/local/DOMAIN/user: No such file or directory
10. 2018/10/31 15:03:13 packer.exe: Sink: C0644 2903 packer-vmware-iso.vmx
11. 2018/10/31 15:03:13 packer.exe: 2018/10/31 15:03:13 [DEBUG] Opening new ssh session
12. 2018/10/31 15:03:13 packer.exe: 2018/10/31 15:03:13 [DEBUG] starting remote command: vim-cmd vmsvc/reload 156
13. 2018/10/31 15:03:13 ui: ==> vmware-iso: Exporting virtual machine...
14. 2018/10/31 15:03:13 ui: vmware-iso: Executing: ovftool.exe --shaAlgorithm=sha1 --machineOutput --noSSLVerify=true --skipManifestCheck -tt=ova vi://domain\user:****@X.X.X.X/packer-vmware-iso output-vmware-iso
15. 2018/10/31 15:03:15 ui error: ==> vmware-iso: Error exporting virtual machine: exit status 1
16. ==> vmware-iso: ERROR
17. ==> vmware-iso: + <Errors>
18. ==> vmware-iso: + <Error>
19. ==> vmware-iso: + <Type>ovftool.net.lookup</Type>
20. ==> vmware-iso: + <LocalizedMsg>
21. ==> vmware-iso: + Could not lookup host: domain
22. ==> vmware-iso: + </LocalizedMsg>
23. ==> vmware-iso: + <Arg>
24. ==> vmware-iso: + domain
25. ==> vmware-iso: + </Arg>
26. ==> vmware-iso: + </Error>
27. ==> vmware-iso: + </Errors>
28. ==> vmware-iso:
29. ==> vmware-iso: RESULT
30. ==> vmware-iso: + ERROR
```
**Line 9 shows that packer is taking `DOMAIN` and `user` as two separate directories when this should not be the case. `DOMAIN/user` should be a single directory.**
**Line 14 shows the ovftool execution. If you look at `vi://domain\user:****@X.X.X.X/packer-vmware-iso output-vmware-iso` you can see that it has not escaped the backslash in the username as %5C as it should.**
Template to reproduce error:
```
{
"builders": [
{
"type": "vmware-iso",
"iso_url": "http://some/sort/of/path.ISO",
"iso_checksum": "xxxxxxxx",
"iso_checksum_type": "md5",
"boot_wait": "60s",
"http_directory": "http",
"remote_type": "esx5",
"remote_host": "X.X.X.X",
"remote_username": "DOMAIN/username",
"remote_password": "Password-123%",
"remote_datastore": "DATASTORE",
"shutdown_command": "shutdown /s /t 10 /d p:2:4 /c \"Packer Builder\"",
"guest_os_type": "windows8srv-64",
"disk_size": 102400,
"disk_type_id": "thin",
"format": "ova",
"ovftool_options": [
"--shaAlgorithm=sha1",
"--machineOutput"
],
"vmx_data": {
"scsi0.virtualDev": "lsisas1068",
"ethernet0.networkName": "VM Network",
"memSize": "16384",
"numvcpus": "4"
}
}
],
"provisioners": [
{
"type": "windows-restart",
"only": [
"vmware-iso"
]
}
]
}
```
|
1.0
|
vSphere post-processor "\" escaping issue - vSphere username follows the following convention: `domain\username`. When creating a VM and exporting it as an OVA using the ovftool through packer, the following error is received:
`Error exporting virtual machine: exit status 1`
...
`Could not lookup host: domain`
After looking at the vSphere post-processor code I think this may be due to the QueryEscape function (in `func escapeWithSpaces(stringToEscape string){...}`)taking the username in as a string surrounded in double quotes rather than a string surrounded in back ticks (correct me if I'm wrong). After playing around with golang myself and the QueryEscape function, I have realised that with double quotes it is unable to escape the backslash, but with the backticks it escapes it just fine.
Can this be altered to make it work?
Packer version = 1.2.3
Host platform = Windows 10
Relevant section of debug output:
**(IN PARTICULAR LINE 9 AND 14)**
```
1. 2018/10/31 15:03:13 packer.exe: 2018/10/31 15:03:13 Writing VMX to: C:\Users\sarah\AppData\Local\Temp\packer-vmx924776817\packer-vmware-iso.vmx
2. 2018/10/31 15:03:13 packer.exe: 2018/10/31 15:03:13 [DEBUG] Opening new ssh session
3. 2018/10/31 15:03:13 packer.exe: 2018/10/31 15:03:13 [DEBUG] Starting remote scp process: scp -vt /vmfs/volumes/DATASTORE/packer-vmware-iso
4. 2018/10/31 15:03:13 packer.exe: 2018/10/31 15:03:13 [DEBUG] Started SCP session, beginning transfers...
5. 2018/10/31 15:03:13 packer.exe: 2018/10/31 15:03:13 [DEBUG] Copying input data into temporary file so we can read the length
6. 2018/10/31 15:03:13 packer.exe: 2018/10/31 15:03:13 [DEBUG] scp: Uploading packer-vmware-iso.vmx: perms=C0644 size=2903
7. 2018/10/31 15:03:13 packer.exe: 2018/10/31 15:03:13 [DEBUG] SCP session complete, closing stdin pipe.
8. 2018/10/31 15:03:13 packer.exe: 2018/10/31 15:03:13 [DEBUG] Waiting for SSH session to complete.
9. 2018/10/31 15:03:13 packer.exe: 2018/10/31 15:03:13 [DEBUG] scp stderr (length 158): Could not chdir to home directory /home/local/DOMAIN/user: No such file or directory
10. 2018/10/31 15:03:13 packer.exe: Sink: C0644 2903 packer-vmware-iso.vmx
11. 2018/10/31 15:03:13 packer.exe: 2018/10/31 15:03:13 [DEBUG] Opening new ssh session
12. 2018/10/31 15:03:13 packer.exe: 2018/10/31 15:03:13 [DEBUG] starting remote command: vim-cmd vmsvc/reload 156
13. 2018/10/31 15:03:13 ui: ==> vmware-iso: Exporting virtual machine...
14. 2018/10/31 15:03:13 ui: vmware-iso: Executing: ovftool.exe --shaAlgorithm=sha1 --machineOutput --noSSLVerify=true --skipManifestCheck -tt=ova vi://domain\user:****@X.X.X.X/packer-vmware-iso output-vmware-iso
15. 2018/10/31 15:03:15 ui error: ==> vmware-iso: Error exporting virtual machine: exit status 1
16. ==> vmware-iso: ERROR
17. ==> vmware-iso: + <Errors>
18. ==> vmware-iso: + <Error>
19. ==> vmware-iso: + <Type>ovftool.net.lookup</Type>
20. ==> vmware-iso: + <LocalizedMsg>
21. ==> vmware-iso: + Could not lookup host: domain
22. ==> vmware-iso: + </LocalizedMsg>
23. ==> vmware-iso: + <Arg>
24. ==> vmware-iso: + domain
25. ==> vmware-iso: + </Arg>
26. ==> vmware-iso: + </Error>
27. ==> vmware-iso: + </Errors>
28. ==> vmware-iso:
29. ==> vmware-iso: RESULT
30. ==> vmware-iso: + ERROR
```
**Line 9 shows that packer is taking `DOMAIN` and `user` as two separate directories when this should not be the case. `DOMAIN/user` should be a single directory.**
**Line 14 shows the ovftool execution. If you look at `vi://domain\user:****@X.X.X.X/packer-vmware-iso output-vmware-iso` you can see that it has not escaped the backslash in the username as %5C as it should.**
Template to reproduce error:
```
{
"builders": [
{
"type": "vmware-iso",
"iso_url": "http://some/sort/of/path.ISO",
"iso_checksum": "xxxxxxxx",
"iso_checksum_type": "md5",
"boot_wait": "60s",
"http_directory": "http",
"remote_type": "esx5",
"remote_host": "X.X.X.X",
"remote_username": "DOMAIN/username",
"remote_password": "Password-123%",
"remote_datastore": "DATASTORE",
"shutdown_command": "shutdown /s /t 10 /d p:2:4 /c \"Packer Builder\"",
"guest_os_type": "windows8srv-64",
"disk_size": 102400,
"disk_type_id": "thin",
"format": "ova",
"ovftool_options": [
"--shaAlgorithm=sha1",
"--machineOutput"
],
"vmx_data": {
"scsi0.virtualDev": "lsisas1068",
"ethernet0.networkName": "VM Network",
"memSize": "16384",
"numvcpus": "4"
}
}
],
"provisioners": [
{
"type": "windows-restart",
"only": [
"vmware-iso"
]
}
]
}
```
|
process
|
vsphere post processor escaping issue vsphere username follows the following convention domain username when creating a vm and exporting it as an ova using the ovftool through packer the following error is received error exporting virtual machine exit status could not lookup host domain after looking at the vsphere post processor code i think this may be due to the queryescape function in func escapewithspaces stringtoescape string taking the username in as a string surrounded in double quotes rather than a string surrounded in back ticks correct me if i m wrong after playing around with golang myself and the queryescape function i have realised that with double quotes it is unable to escape the backslash but with the backticks it escapes it just fine can this be altered to make it work packer version host platform windows relevant section of debug output in particular line and packer exe writing vmx to c users sarah appdata local temp packer packer vmware iso vmx packer exe opening new ssh session packer exe starting remote scp process scp vt vmfs volumes datastore packer vmware iso packer exe started scp session beginning transfers packer exe copying input data into temporary file so we can read the length packer exe scp uploading packer vmware iso vmx perms size packer exe scp session complete closing stdin pipe packer exe waiting for ssh session to complete packer exe scp stderr length could not chdir to home directory home local domain user no such file or directory packer exe sink packer vmware iso vmx packer exe opening new ssh session packer exe starting remote command vim cmd vmsvc reload ui vmware iso exporting virtual machine ui vmware iso executing ovftool exe shaalgorithm machineoutput nosslverify true skipmanifestcheck tt ova vi domain user x x x x packer vmware iso output vmware iso ui error vmware iso error exporting virtual machine exit status vmware iso error vmware iso vmware iso vmware iso ovftool net lookup vmware iso vmware iso could not 
lookup host domain vmware iso vmware iso vmware iso domain vmware iso vmware iso vmware iso vmware iso vmware iso result vmware iso error line shows that packer is taking domain and user as two separate directories when this should not be the case domain user should be a single directory line shows the ovftool execution if you look at vi domain user x x x x packer vmware iso output vmware iso you can see that it has not escaped the backslash in the username as as it should template to reproduce error builders type vmware iso iso url iso checksum xxxxxxxx iso checksum type boot wait http directory http remote type remote host x x x x remote username domain username remote password password remote datastore datastore shutdown command shutdown s t d p c packer builder guest os type disk size disk type id thin format ova ovftool options shaalgorithm machineoutput vmx data virtualdev networkname vm network memsize numvcpus provisioners type windows restart only vmware iso
| 1
|
57,634
| 6,551,943,616
|
IssuesEvent
|
2017-09-05 16:23:28
|
NetsBlox/NetsBlox
|
https://api.github.com/repos/NetsBlox/NetsBlox
|
closed
|
Client animation test failing
|
bug minor testing
|
The client test making sure it only animates if the stage is selected is failing. It should make sure the stage is not selected initially.
|
1.0
|
Client animation test failing - The client test making sure it only animates if the stage is selected is failing. It should make sure the stage is not selected initially.
|
non_process
|
client animation test failing the client test making sure it only animates if the stage is selected is failing it should make sure the stage is not selected initially
| 0
|
6,930
| 6,676,480,240
|
IssuesEvent
|
2017-10-05 06:00:29
|
jorgegil96/All-NBA
|
https://api.github.com/repos/jorgegil96/All-NBA
|
closed
|
Change login WebView to open browser
|
enhancement security
|
Currently, when going to _profile_, an activity with a webview is opened where the user enters his Reddit credentials to log in.
A WebView should not be used because the user cannot know if the site being displayed is actually Reddit and not a fake one. Instead, the user should be redirected to the actual web browser (chrome) and after logging to Reddit redirected back to the application.
|
True
|
Change login WebView to open browser - Currently, when going to _profile_, an activity with a webview is opened where the user enters his Reddit credentials to log in.
A WebView should not be used because the user cannot know if the site being displayed is actually Reddit and not a fake one. Instead, the user should be redirected to the actual web browser (chrome) and after logging to Reddit redirected back to the application.
|
non_process
|
change login webview to open browser currently when going to profile an activity with a webview is opened where the user enters his reddit credentials to log in a webview should not be used because the user cannot know if the site being displayed is actually reddit and not a fake one instead the user should be redirected to the actual web browser chrome and after logging to reddit redirected back to the application
| 0
|
456,054
| 13,136,311,772
|
IssuesEvent
|
2020-08-07 05:44:31
|
teamforus/general
|
https://api.github.com/repos/teamforus/general
|
reopened
|
CMS: editing texts, adding images/videos to webshop
|
Approval: Granted Epic Priority: Must have Type: Improvement Proposal project-100
|
Learn more about improvement proposals: https://bit.ly/2xLJT3R
## Current situation
Every webshop implementation requires customisation. This has different perspectives: technical, and content-wise. This CMS is about trying to solve implementation bottlenecks while being able to fulfil customers need more quickly and flexible by allowing them to edit content themselves.
### Texts and images:
Right now, texts and images are not adjustable by the customer. We need to do a release for each textual update, requiring a lot of coordination.
### Configuration:
Right now, configuration needs to happen manually in the database, therefore it needs to always happen on a monday morning, requiring a lot of coordination and creating some risks.
## Desired situation
The most important elements of the implementation; text, images, video's and configuration, should be editable from a dashboard. We will start with a small and desired scope, and work from there to make the CMS fit the customer needs, starting with the possibility to add their own texts images and video's
## Plan
We will implement the CMS in two itterations.
### First itteration
[View the explainer video](https://drive.google.com/a/forus.io/file/d/1QHlK88VMi9mkk4AygRArdCQEPW9aIceD/view?usp=sharing) | [ View the figma prototype](https://www.figma.com/file/N3p59HvuJoKr7RgT5AYAVW/CMS?node-id=1731%3A67856)
### First iteration:
[View the explainer video](https://drive.google.com/a/forus.io/file/d/1ANYsr_Ma2ps_DRb59kXeUTgKlLJ1y7z5/view?usp=sharing) | [ View the figma prototype](https://www.figma.com/file/N3p59HvuJoKr7RgT5AYAVW/CMS?node-id=1732%3A71922)
Dependancy: #87
|
1.0
|
CMS: editing texts, adding images/videos to webshop - Learn more about improvement proposals: https://bit.ly/2xLJT3R
## Current situation
Every webshop implementation requires customisation. This has different perspectives: technical, and content-wise. This CMS is about trying to solve implementation bottlenecks while being able to fulfil customers need more quickly and flexible by allowing them to edit content themselves.
### Texts and images:
Right now, texts and images are not adjustable by the customer. We need to do a release for each textual update, requiring a lot of coordination.
### Configuration:
Right now, configuration needs to happen manually in the database, therefore it needs to always happen on a monday morning, requiring a lot of coordination and creating some risks.
## Desired situation
The most important elements of the implementation; text, images, video's and configuration, should be editable from a dashboard. We will start with a small and desired scope, and work from there to make the CMS fit the customer needs, starting with the possibility to add their own texts images and video's
## Plan
We will implement the CMS in two itterations.
### First itteration
[View the explainer video](https://drive.google.com/a/forus.io/file/d/1QHlK88VMi9mkk4AygRArdCQEPW9aIceD/view?usp=sharing) | [ View the figma prototype](https://www.figma.com/file/N3p59HvuJoKr7RgT5AYAVW/CMS?node-id=1731%3A67856)
### First iteration:
[View the explainer video](https://drive.google.com/a/forus.io/file/d/1ANYsr_Ma2ps_DRb59kXeUTgKlLJ1y7z5/view?usp=sharing) | [ View the figma prototype](https://www.figma.com/file/N3p59HvuJoKr7RgT5AYAVW/CMS?node-id=1732%3A71922)
Dependancy: #87
|
non_process
|
cms editing texts adding images videos to webshop learn more about improvement proposals current situation every webshop implementation requires customisation this has different perspectives technical and content wise this cms is about trying to solve implementation bottlenecks while being able to fulfil customers need more quickly and flexible by allowing them to edit content themselves texts and images right now texts and images are not adjustable by the customer we need to do a release for each textual update requiring a lot of coordination configuration right now configuration needs to happen manually in the database therefore it needs to always happen on a monday morning requiring a lot of coordination and creating some risks desired situation the most important elements of the implementation text images video s and configuration should be editable from a dashboard we will start with a small and desired scope and work from there to make the cms fit the customer needs starting with the possibility to add their own texts images and video s plan we will implement the cms in two itterations first itteration first iteration dependancy
| 0
|
387,038
| 26,711,002,850
|
IssuesEvent
|
2023-01-27 23:58:32
|
ClickHouse/ClickHouse
|
https://api.github.com/repos/ClickHouse/ClickHouse
|
opened
|
Docs: S3Cluster Table Engine
|
comp-documentation
|
**Describe the issue**
The S3 Table Engine also has an S3Cluster Table Engine option, which is not currently documented. We please need to add a S3Cluster Table Engine page under Engines->Table Engines->Integrations->S3Cluster (just under the S3 Table Engine)
(alternately, we could include the S3Cluster Table Engine on the S3 Table Engine page, if that approach better adheres to our docs standard https://clickhouse.com/docs/en/engines/table-engines/integrations/s3 )
**Additional context**
Note: the s3Cluster Table Function has its own page https://clickhouse.com/docs/en/sql-reference/table-functions/s3Cluster/ , so again not sure if S3Cluster Table Engine deserves its own page or alternately if should be included on the S3 Table Engine page
|
1.0
|
Docs: S3Cluster Table Engine - **Describe the issue**
The S3 Table Engine also has an S3Cluster Table Engine option, which is not currently documented. We please need to add a S3Cluster Table Engine page under Engines->Table Engines->Integrations->S3Cluster (just under the S3 Table Engine)
(alternately, we could include the S3Cluster Table Engine on the S3 Table Engine page, if that approach better adheres to our docs standard https://clickhouse.com/docs/en/engines/table-engines/integrations/s3 )
**Additional context**
Note: the s3Cluster Table Function has its own page https://clickhouse.com/docs/en/sql-reference/table-functions/s3Cluster/ , so again not sure if S3Cluster Table Engine deserves its own page or alternately if should be included on the S3 Table Engine page
|
non_process
|
docs table engine describe the issue the table engine also has an table engine option which is not currently documented we please need to add a table engine page under engines table engines integrations just under the table engine alternately we could include the table engine on the table engine page if that approach better adheres to our docs standard additional context note the table function has its own page so again not sure if table engine deserves its own page or alternately if should be included on the table engine page
| 0
|
11,748
| 14,583,386,220
|
IssuesEvent
|
2020-12-18 13:55:35
|
GoogleCloudPlatform/fda-mystudies
|
https://api.github.com/repos/GoogleCloudPlatform/fda-mystudies
|
closed
|
Make display of "no records found" consistent across the participant manager
|
Bug P2 Participant manager Process: Tested QA Process: Tested dev UI
|
Sometimes the 'no records found' text is left justified, other times it is centered, and here it is right justified. This should be consistent across the application.

|
2.0
|
Make display of "no records found" consistent across the participant manager - Sometimes the 'no records found' text is left justified, other times it is centered, and here it is right justified. This should be consistent across the application.

|
process
|
make display of no records found consistent across the participant manager sometimes the no records found text is left justified other times it is centered and here it is right justified this should be consistent across the application
| 1
|
207,799
| 23,495,764,277
|
IssuesEvent
|
2022-08-18 01:04:48
|
LingalaShalini/openjpeg-2.3.0_before_fix
|
https://api.github.com/repos/LingalaShalini/openjpeg-2.3.0_before_fix
|
closed
|
CVE-2022-34266 (Medium) detected in openjpegv2.3.0 - autoclosed
|
security vulnerability
|
## CVE-2022-34266 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>openjpegv2.3.0</b></p></summary>
<p>
<p>Official repository of the OpenJPEG project</p>
<p>Library home page: <a href=https://github.com/uclouvain/openjpeg.git>https://github.com/uclouvain/openjpeg.git</a></p>
<p>Found in HEAD commit: <a href="https://github.com/LingalaShalini/openjpeg-2.3.0_before_fix/commit/3501163dd1d68645efcce586f29683574a46c95f">3501163dd1d68645efcce586f29683574a46c95f</a></p>
<p>Found in base branch: <b>master</b></p></p>
</details>
</p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Source Files (1)</summary>
<p></p>
<p>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/thirdparty/libtiff/tif_dirread.c</b>
</p>
</details>
<p></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
The libtiff-4.0.3-35.amzn2.0.1 package for LibTIFF on Amazon Linux 2 allows attackers to cause a denial of service (application crash), a different vulnerability than CVE-2022-0562. When processing a malicious TIFF file, an invalid range may be passed as an argument to the memset() function within TIFFFetchStripThing() in tif_dirread.c. This will cause TIFFFetchStripThing() to segfault after use of an uninitialized resource.
<p>Publish Date: 2022-07-19
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2022-34266>CVE-2022-34266</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: Required
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://alas.aws.amazon.com/AL2/ALAS-2022-1814.html">https://alas.aws.amazon.com/AL2/ALAS-2022-1814.html</a></p>
<p>Release Date: 2022-07-19</p>
<p>Fix Resolution: v4.4.0</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
True
|
CVE-2022-34266 (Medium) detected in openjpegv2.3.0 - autoclosed - ## CVE-2022-34266 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>openjpegv2.3.0</b></p></summary>
<p>
<p>Official repository of the OpenJPEG project</p>
<p>Library home page: <a href=https://github.com/uclouvain/openjpeg.git>https://github.com/uclouvain/openjpeg.git</a></p>
<p>Found in HEAD commit: <a href="https://github.com/LingalaShalini/openjpeg-2.3.0_before_fix/commit/3501163dd1d68645efcce586f29683574a46c95f">3501163dd1d68645efcce586f29683574a46c95f</a></p>
<p>Found in base branch: <b>master</b></p></p>
</details>
</p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Source Files (1)</summary>
<p></p>
<p>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/thirdparty/libtiff/tif_dirread.c</b>
</p>
</details>
<p></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
The libtiff-4.0.3-35.amzn2.0.1 package for LibTIFF on Amazon Linux 2 allows attackers to cause a denial of service (application crash), a different vulnerability than CVE-2022-0562. When processing a malicious TIFF file, an invalid range may be passed as an argument to the memset() function within TIFFFetchStripThing() in tif_dirread.c. This will cause TIFFFetchStripThing() to segfault after use of an uninitialized resource.
<p>Publish Date: 2022-07-19
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2022-34266>CVE-2022-34266</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: Required
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://alas.aws.amazon.com/AL2/ALAS-2022-1814.html">https://alas.aws.amazon.com/AL2/ALAS-2022-1814.html</a></p>
<p>Release Date: 2022-07-19</p>
<p>Fix Resolution: v4.4.0</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
non_process
|
cve medium detected in autoclosed cve medium severity vulnerability vulnerable library official repository of the openjpeg project library home page a href found in head commit a href found in base branch master vulnerable source files thirdparty libtiff tif dirread c vulnerability details the libtiff package for libtiff on amazon linux allows attackers to cause a denial of service application crash a different vulnerability than cve when processing a malicious tiff file an invalid range may be passed as an argument to the memset function within tifffetchstripthing in tif dirread c this will cause tifffetchstripthing to segfault after use of an uninitialized resource publish date url a href cvss score details base score metrics exploitability metrics attack vector local attack complexity low privileges required none user interaction required scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution step up your open source security game with mend
| 0
|
389,574
| 26,821,503,208
|
IssuesEvent
|
2023-02-02 09:48:37
|
iti-ict/wakamiti
|
https://api.github.com/repos/iti-ict/wakamiti
|
opened
|
Incluir documentación por versión
|
documentation enhancement
|
Sería interesante incluir que la documentación de wakamiti, tanto en el core como en los plugins, estuviera disponible por versión. Un desplegable con las versiones disponibles y que se pueda seleccionar, ya que actualmente solo incluye la última versión.
|
1.0
|
Incluir documentación por versión - Sería interesante incluir que la documentación de wakamiti, tanto en el core como en los plugins, estuviera disponible por versión. Un desplegable con las versiones disponibles y que se pueda seleccionar, ya que actualmente solo incluye la última versión.
|
non_process
|
incluir documentación por versión sería interesante incluir que la documentación de wakamiti tanto en el core como en los plugins estuviera disponible por versión un desplegable con las versiones disponibles y que se pueda seleccionar ya que actualmente solo incluye la última versión
| 0
|
244,723
| 20,692,541,963
|
IssuesEvent
|
2022-03-11 02:55:11
|
PalisadoesFoundation/talawa
|
https://api.github.com/repos/PalisadoesFoundation/talawa
|
closed
|
Views: Create tests for edit_profile_page.dart
|
good first issue test points 01
|
The Talawa code base needs to be 100% reliable. This means we need to have 100% test code coverage.
Tests need to be written for file `lib/views/after_auth_screens/profile/edit_profile_page.dart`
- When complete, all methods, classes and/or functions in the file will need to be tested.
- These tests must be placed in a single file with the name `test/views/after_auth_screens/profile_test/edit_profile_page_test.dart`. You may need to create the appropriate directory structure to do this.
### IMPORTANT:
Please refer to the parent issue on how to implement these tests correctly:
- https://github.com/PalisadoesFoundation/talawa/issues/1217
### PR Acceptance Criteria
- When complete this file must show **100%** coverage when merged into the code base. This will be clearly visible when you submit your PR.
- [The current code coverage for the file can be found here](https://codecov.io/gh/PalisadoesFoundation/talawa/tree/develop/lib/views/after_auth_screens/profile/). If the file isn't found in this directory, or there is a 404 error, then tests have not been created.
- The PR will show a report for the code coverage for the file you have added. You can use that as a guide.
- You can verify your own code coverage by creating an account at [Codecov.io](https://codecov.io)
|
1.0
|
Views: Create tests for edit_profile_page.dart - The Talawa code base needs to be 100% reliable. This means we need to have 100% test code coverage.
Tests need to be written for file `lib/views/after_auth_screens/profile/edit_profile_page.dart`
- When complete, all methods, classes and/or functions in the file will need to be tested.
- These tests must be placed in a single file with the name `test/views/after_auth_screens/profile_test/edit_profile_page_test.dart`. You may need to create the appropriate directory structure to do this.
### IMPORTANT:
Please refer to the parent issue on how to implement these tests correctly:
- https://github.com/PalisadoesFoundation/talawa/issues/1217
### PR Acceptance Criteria
- When complete this file must show **100%** coverage when merged into the code base. This will be clearly visible when you submit your PR.
- [The current code coverage for the file can be found here](https://codecov.io/gh/PalisadoesFoundation/talawa/tree/develop/lib/views/after_auth_screens/profile/). If the file isn't found in this directory, or there is a 404 error, then tests have not been created.
- The PR will show a report for the code coverage for the file you have added. You can use that as a guide.
- You can verify your own code coverage by creating an account at [Codecov.io](https://codecov.io)
|
non_process
|
views create tests for edit profile page dart the talawa code base needs to be reliable this means we need to have test code coverage tests need to be written for file lib views after auth screens profile edit profile page dart when complete all methods classes and or functions in the file will need to be tested these tests must be placed in a single file with the name test views after auth screens profile test edit profile page test dart you may need to create the appropriate directory structure to do this important please refer to the parent issue on how to implement these tests correctly pr acceptance criteria when complete this file must show coverage when merged into the code base this will be clearly visible when you submit your pr if the file isn t found in this directory or there is a error then tests have not been created the pr will show a report for the code coverage for the file you have added you can use that as a guide you can verify your own code coverage by creating an account at
| 0
|
38,882
| 10,261,137,166
|
IssuesEvent
|
2019-08-22 09:08:42
|
ShaikASK/Testing
|
https://api.github.com/repos/ShaikASK/Testing
|
closed
|
QA /Production : Safari : New Hire : Emails tab /Activities tab : Extra white space is being displayed against email tab and activities tab
|
Activities Beta Release #5 Build#4 Defect Emails Initiate On boarding New Hire P3
|
Steps To Replicate
1.Launch the URL
2.Sign in as HR admin
3.Create a New Hire and Save it
4.Initiate the above created New Hire
5.Complete the onboarding process from candidate side
6.Check the Emails tab and Activities tab for above initiated New Hire
Experienced Behavior : Observed that Extra white space is being displayed against email tab and activities tab (Refer Screen Shot)
Expected Behavior :Ensure that Extra white space should not be displayed against email tab and activities tab
Emails Tab :

Activities Tab :

|
1.0
|
QA /Production : Safari : New Hire : Emails tab /Activities tab : Extra white space is being displayed against email tab and activities tab - Steps To Replicate
1.Launch the URL
2.Sign in as HR admin
3.Create a New Hire and Save it
4.Initiate the above created New Hire
5.Complete the onboarding process from candidate side
6.Check the Emails tab and Activities tab for above initiated New Hire
Experienced Behavior : Observed that Extra white space is being displayed against email tab and activities tab (Refer Screen Shot)
Expected Behavior :Ensure that Extra white space should not be displayed against email tab and activities tab
Emails Tab :

Activities Tab :

|
non_process
|
qa production safari new hire emails tab activities tab extra white space is being displayed against email tab and activities tab steps to replicate launch the url sign in as hr admin create a new hire and save it initiate the above created new hire complete the onboarding process from candidate side check the emails tab and activities tab for above initiated new hire experienced behavior observed that extra white space is being displayed against email tab and activities tab refer screen shot expected behavior ensure that extra white space should not be displayed against email tab and activities tab emails tab activities tab
| 0
|
233,892
| 19,084,048,487
|
IssuesEvent
|
2021-11-29 01:56:39
|
DnD-Montreal/session-tome
|
https://api.github.com/repos/DnD-Montreal/session-tome
|
closed
|
Add cypress test for user registration and login
|
task test
|
## Description
Write E2E Cypress tests for user registration and login.
## Possible Implementation
- Create a user through the registration form
- Log into a user's account through the login form
|
1.0
|
Add cypress test for user registration and login - ## Description
Write E2E Cypress tests for user registration and login.
## Possible Implementation
- Create a user through the registration form
- Log into a user's account through the login form
|
non_process
|
add cypress test for user registration and login description write cypress tests for user registration and login possible implementation create a user through the registration form log into a user s account through the login form
| 0
|
4,320
| 10,917,612,138
|
IssuesEvent
|
2019-11-21 15:27:47
|
maSchoeller/JimnyTainment
|
https://api.github.com/repos/maSchoeller/JimnyTainment
|
closed
|
Project distribution and Namespace naming
|
V0.1 milestone program architecture
|
# Project distribution
General distribution of the application into 4 projects:
- **JimnyTainment.UI.View**
_The project contains the actual user interface._
- **JimnyTainment.UI.ViewModel**
_Contains the control logic for the user interface._
- **JimnyTainment.Lib**
_Contains various services that are used by the control logic, e.g. analysis classes of autometadata or configuration management, etc..._
- JimnyTainment.Lib.Analysis
- JimnyTainment.Lib.Storage
- JimnyTainment.Lib.Logging
- **JimnyTainment.Drivers**
_Contains various drivers used by the services and ViewModels_
- JimnyTainment.Drivers.OBD2
- JimnyTainment.Drivers.Audio
- JimnyTainment.Drivers.IO
- JimnyTainment.Drivers.Camera

In general this would be an Arichtetcure proposal for project. This proposal would not blow up the application too much, but still divide it up well.
@langmario @pafinkbeiner
If you still have namespaces you want to add you can simply write them into the issue or what you think about it/how you would do it;)
|
1.0
|
Project distribution and Namespace naming - # Project distribution
General distribution of the application into 4 projects:
- **JimnyTainment.UI.View**
_The project contains the actual user interface._
- **JimnyTainment.UI.ViewModel**
_Contains the control logic for the user interface._
- **JimnyTainment.Lib**
_Contains various services that are used by the control logic, e.g. analysis classes of autometadata or configuration management, etc..._
- JimnyTainment.Lib.Analysis
- JimnyTainment.Lib.Storage
- JimnyTainment.Lib.Logging
- **JimnyTainment.Drivers**
_Contains various drivers used by the services and ViewModels_
- JimnyTainment.Drivers.OBD2
- JimnyTainment.Drivers.Audio
- JimnyTainment.Drivers.IO
- JimnyTainment.Drivers.Camera

In general this would be an Arichtetcure proposal for project. This proposal would not blow up the application too much, but still divide it up well.
@langmario @pafinkbeiner
If you still have namespaces you want to add you can simply write them into the issue or what you think about it/how you would do it;)
|
non_process
|
project distribution and namespace naming project distribution general distribution of the application into projects jimnytainment ui view the project contains the actual user interface jimnytainment ui viewmodel contains the control logic for the user interface jimnytainment lib contains various services that are used by the control logic e g analysis classes of autometadata or configuration management etc jimnytainment lib analysis jimnytainment lib storage jimnytainment lib logging jimnytainment drivers contains various drivers used by the services and viewmodels jimnytainment drivers jimnytainment drivers audio jimnytainment drivers io jimnytainment drivers camera in general this would be an arichtetcure proposal for project this proposal would not blow up the application too much but still divide it up well langmario pafinkbeiner if you still have namespaces you want to add you can simply write them into the issue or what you think about it how you would do it
| 0
|
15,637
| 19,808,603,178
|
IssuesEvent
|
2022-01-19 09:46:41
|
Blazebit/blaze-persistence
|
https://api.github.com/repos/Blazebit/blaze-persistence
|
closed
|
NPE in entity view annotation processor when @PostLoad is used
|
kind: bug worth: high component: entity-view-annotation-processor
|
According to a user report, `ImplementationClassWriter#1877` throws a NPE with version 1.6.4. when having a `@PostLoad` annotated method in an entity view. It seems we used `getPostCreate` accidently whereas it should have been `getPostLoad` on that line.
|
1.0
|
NPE in entity view annotation processor when @PostLoad is used - According to a user report, `ImplementationClassWriter#1877` throws a NPE with version 1.6.4. when having a `@PostLoad` annotated method in an entity view. It seems we used `getPostCreate` accidently whereas it should have been `getPostLoad` on that line.
|
process
|
npe in entity view annotation processor when postload is used according to a user report implementationclasswriter throws a npe with version when having a postload annotated method in an entity view it seems we used getpostcreate accidently whereas it should have been getpostload on that line
| 1
|
18,792
| 24,698,096,784
|
IssuesEvent
|
2022-10-19 13:32:46
|
km4ack/pi-build
|
https://api.github.com/repos/km4ack/pi-build
|
closed
|
revert pat to install pkg instead of build from source
|
enhancement in process
|
Building from source was done when 64bit Pi OS was released. Now that there are 64bit versions of Pat, there's really no need to build from source and installing the package from the Pat site will speed the install. There may also be an issue with building from source on 64bit machines. See [this post](https://groups.io/g/KM4ACK-Pi/topic/93699667)
|
1.0
|
revert pat to install pkg instead of build from source - Building from source was done when 64bit Pi OS was released. Now that there are 64bit versions of Pat, there's really no need to build from source and installing the package from the Pat site will speed the install. There may also be an issue with building from source on 64bit machines. See [this post](https://groups.io/g/KM4ACK-Pi/topic/93699667)
|
process
|
revert pat to install pkg instead of build from source building from source was done when pi os was released now that there are versions of pat there s really no need to build from source and installing the package from the pat site will speed the install there may also be an issue with building from source on machines see
| 1
|
19,223
| 25,358,617,032
|
IssuesEvent
|
2022-11-20 16:34:33
|
streamnative/pulsar-spark
|
https://api.github.com/repos/streamnative/pulsar-spark
|
closed
|
[BUG] Spark can't start read stream- NullPointerException in pulsar-spark-connector V3.1.1.2
|
type/bug compute/data-processing
|
**Describe the bug**
While trying to start spark structured streaming read stream, connector throws NullPointerException.
```
pulsar-connector jar - “io.streamnative.connectors:pulsar-spark-connector_2.12:3.1.1.2”
PySpark version = 3.1.2
python version = 3.7
```
Code snippet throwing error is-
```
eventsDF = spark_session.readStream \
.format("pulsar") \
.option("service.url", service_url) \
.option("admin.url", admin_url) \
.option("topics", topic) \
.option("subscriptionprefix", "nlu-test") \
.load() \
.selectExpr("CAST(value AS STRING)") \
.select(from_json("value",schema_item).alias("event")) \
.select("event.*").repartition(partitions)
```
Error Trace is -
```
Traceback (most recent call last):
File "/Users/ashi/MyDev/workspace/src/post_call_etl_job/main_local.py", line 361, in <module>
compute_df = pci.events_queueDF(spark, topic_compute, schema, partition_count)
File "/Users/ashi/MyDev/workspace/src/post_call_etl_job/main_local.py", line 102, in events_queueDF
.option("subscriptionprefix", "nlu-test") \
File "/Users/ashi/MyDev/workspace/.venv/lib/python3.7/site-packages/pyspark/sql/streaming.py", line 482, in load
return self._df(self._jreader.load())
File "/Users/ashi/MyDev/workspace/.venv/lib/python3.7/site-packages/py4j/java_gateway.py", line 1305, in __call__
answer, self.gateway_client, self.target_id, self.name)
File "/Users/ashi/MyDev/workspace/.venv/lib/python3.7/site-packages/pyspark/sql/utils.py", line 111, in deco
return f(*a, **kw)
File "/Users/ashi/MyDev/workspace/.venv/lib/python3.7/site-packages/py4j/protocol.py", line 328, in get_return_value
format(target_id, ".", name), value)
py4j.protocol.Py4JJavaError: An error occurred while calling o63.load.
: java.lang.NullPointerException
at org.apache.spark.sql.pulsar.PulsarMetadataReader.getPulsarSchema(PulsarMetadataReader.scala:170)
at org.apache.spark.sql.pulsar.PulsarMetadataReader.getSchema(PulsarMetadataReader.scala:164)
at org.apache.spark.sql.pulsar.PulsarMetadataReader.getAndCheckCompatible(PulsarMetadataReader.scala:148)
at org.apache.spark.sql.pulsar.PulsarProvider.$anonfun$sourceSchema$2(PulsarProvider.scala:71)
at org.apache.spark.util.Utils$.tryWithResource(Utils.scala:2622)
at org.apache.spark.sql.pulsar.PulsarProvider.sourceSchema(PulsarProvider.scala:70)
at org.apache.spark.sql.execution.datasources.DataSource.sourceSchema(DataSource.scala:236)
at org.apache.spark.sql.execution.datasources.DataSource.sourceInfo$lzycompute(DataSource.scala:117)
at org.apache.spark.sql.execution.datasources.DataSource.sourceInfo(DataSource.scala:117)
at org.apache.spark.sql.execution.streaming.StreamingRelation$.apply(StreamingRelation.scala:33)
at org.apache.spark.sql.streaming.DataStreamReader.loadInternal(DataStreamReader.scala:219)
at org.apache.spark.sql.streaming.DataStreamReader.load(DataStreamReader.scala:194)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244)
at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357)
at py4j.Gateway.invoke(Gateway.java:282)
at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132)
at py4j.commands.CallCommand.execute(CallCommand.java:79)
at py4j.GatewayConnection.run(GatewayConnection.java:238)
at java.lang.Thread.run(Thread.java:748)
```
**To Reproduce**
Steps to reproduce the behavior:
1. Run apache pulsar docker container (apachepulsar/pulsar:2.6.1)
2. Use pulsar python client to send message to persistent topic.
3. Start spark streaming app. I load pulsar connector library in spark app code.
4. As soon as it tries to parse options and create data steam, I get error.
**Expected behavior**
Spark should be able to create read stream.
@nlu90 - for your kind attention.
|
1.0
|
[BUG] Spark can't start read stream- NullPointerException in pulsar-spark-connector V3.1.1.2 - **Describe the bug**
While trying to start spark structured streaming read stream, connector throws NullPointerException.
```
pulsar-connector jar - “io.streamnative.connectors:pulsar-spark-connector_2.12:3.1.1.2”
PySpark version = 3.1.2
python version = 3.7
```
Code snippet throwing error is-
```
eventsDF = spark_session.readStream \
.format("pulsar") \
.option("service.url", service_url) \
.option("admin.url", admin_url) \
.option("topics", topic) \
.option("subscriptionprefix", "nlu-test") \
.load() \
.selectExpr("CAST(value AS STRING)") \
.select(from_json("value",schema_item).alias("event")) \
.select("event.*").repartition(partitions)
```
Error Trace is -
```
Traceback (most recent call last):
File "/Users/ashi/MyDev/workspace/src/post_call_etl_job/main_local.py", line 361, in <module>
compute_df = pci.events_queueDF(spark, topic_compute, schema, partition_count)
File "/Users/ashi/MyDev/workspace/src/post_call_etl_job/main_local.py", line 102, in events_queueDF
.option("subscriptionprefix", "nlu-test") \
File "/Users/ashi/MyDev/workspace/.venv/lib/python3.7/site-packages/pyspark/sql/streaming.py", line 482, in load
return self._df(self._jreader.load())
File "/Users/ashi/MyDev/workspace/.venv/lib/python3.7/site-packages/py4j/java_gateway.py", line 1305, in __call__
answer, self.gateway_client, self.target_id, self.name)
File "/Users/ashi/MyDev/workspace/.venv/lib/python3.7/site-packages/pyspark/sql/utils.py", line 111, in deco
return f(*a, **kw)
File "/Users/ashi/MyDev/workspace/.venv/lib/python3.7/site-packages/py4j/protocol.py", line 328, in get_return_value
format(target_id, ".", name), value)
py4j.protocol.Py4JJavaError: An error occurred while calling o63.load.
: java.lang.NullPointerException
at org.apache.spark.sql.pulsar.PulsarMetadataReader.getPulsarSchema(PulsarMetadataReader.scala:170)
at org.apache.spark.sql.pulsar.PulsarMetadataReader.getSchema(PulsarMetadataReader.scala:164)
at org.apache.spark.sql.pulsar.PulsarMetadataReader.getAndCheckCompatible(PulsarMetadataReader.scala:148)
at org.apache.spark.sql.pulsar.PulsarProvider.$anonfun$sourceSchema$2(PulsarProvider.scala:71)
at org.apache.spark.util.Utils$.tryWithResource(Utils.scala:2622)
at org.apache.spark.sql.pulsar.PulsarProvider.sourceSchema(PulsarProvider.scala:70)
at org.apache.spark.sql.execution.datasources.DataSource.sourceSchema(DataSource.scala:236)
at org.apache.spark.sql.execution.datasources.DataSource.sourceInfo$lzycompute(DataSource.scala:117)
at org.apache.spark.sql.execution.datasources.DataSource.sourceInfo(DataSource.scala:117)
at org.apache.spark.sql.execution.streaming.StreamingRelation$.apply(StreamingRelation.scala:33)
at org.apache.spark.sql.streaming.DataStreamReader.loadInternal(DataStreamReader.scala:219)
at org.apache.spark.sql.streaming.DataStreamReader.load(DataStreamReader.scala:194)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244)
at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357)
at py4j.Gateway.invoke(Gateway.java:282)
at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132)
at py4j.commands.CallCommand.execute(CallCommand.java:79)
at py4j.GatewayConnection.run(GatewayConnection.java:238)
at java.lang.Thread.run(Thread.java:748)
```
**To Reproduce**
Steps to reproduce the behavior:
1. Run apache pulsar docker container (apachepulsar/pulsar:2.6.1)
2. Use pulsar python client to send message to persistent topic.
3. Start spark streaming app. I load pulsar connector library in spark app code.
4. As soon as it tries to parse options and create data steam, I get error.
**Expected behavior**
Spark should be able to create read stream.
@nlu90 - for your kind attention.
|
process
|
spark can t start read stream nullpointerexception in pulsar spark connector describe the bug while trying to start spark structured streaming read stream connector throws nullpointerexception pulsar connector jar “io streamnative connectors pulsar spark connector ” pyspark version python version code snippet throwing error is eventsdf spark session readstream format pulsar option service url service url option admin url admin url option topics topic option subscriptionprefix nlu test load selectexpr cast value as string select from json value schema item alias event select event repartition partitions error trace is traceback most recent call last file users ashi mydev workspace src post call etl job main local py line in compute df pci events queuedf spark topic compute schema partition count file users ashi mydev workspace src post call etl job main local py line in events queuedf option subscriptionprefix nlu test file users ashi mydev workspace venv lib site packages pyspark sql streaming py line in load return self df self jreader load file users ashi mydev workspace venv lib site packages java gateway py line in call answer self gateway client self target id self name file users ashi mydev workspace venv lib site packages pyspark sql utils py line in deco return f a kw file users ashi mydev workspace venv lib site packages protocol py line in get return value format target id name value protocol an error occurred while calling load java lang nullpointerexception at org apache spark sql pulsar pulsarmetadatareader getpulsarschema pulsarmetadatareader scala at org apache spark sql pulsar pulsarmetadatareader getschema pulsarmetadatareader scala at org apache spark sql pulsar pulsarmetadatareader getandcheckcompatible pulsarmetadatareader scala at org apache spark sql pulsar pulsarprovider anonfun sourceschema pulsarprovider scala at org apache spark util utils trywithresource utils scala at org apache spark sql pulsar pulsarprovider sourceschema pulsarprovider 
scala at org apache spark sql execution datasources datasource sourceschema datasource scala at org apache spark sql execution datasources datasource sourceinfo lzycompute datasource scala at org apache spark sql execution datasources datasource sourceinfo datasource scala at org apache spark sql execution streaming streamingrelation apply streamingrelation scala at org apache spark sql streaming datastreamreader loadinternal datastreamreader scala at org apache spark sql streaming datastreamreader load datastreamreader scala at sun reflect nativemethodaccessorimpl native method at sun reflect nativemethodaccessorimpl invoke nativemethodaccessorimpl java at sun reflect delegatingmethodaccessorimpl invoke delegatingmethodaccessorimpl java at java lang reflect method invoke method java at reflection methodinvoker invoke methodinvoker java at reflection reflectionengine invoke reflectionengine java at gateway invoke gateway java at commands abstractcommand invokemethod abstractcommand java at commands callcommand execute callcommand java at gatewayconnection run gatewayconnection java at java lang thread run thread java to reproduce steps to reproduce the behavior run apache pulsar docker container apachepulsar pulsar use pulsar python client to send message to persistent topic start spark streaming app i load pulsar connector library in spark app code as soon as it tries to parse options and create data steam i get error expected behavior spark should be able to create read stream for your kind attention
| 1
|
16,580
| 21,625,404,041
|
IssuesEvent
|
2022-05-05 00:57:36
|
pytorch/pytorch
|
https://api.github.com/repos/pytorch/pytorch
|
opened
|
DISABLED test_fs (__main__.TestMultiprocessing)
|
module: multiprocessing module: flaky-tests skipped
|
Platforms: asan, linux
This test was disabled because it is failing in CI. See [recent examples](http://torch-ci.com/failure/test_fs%2C%20TestMultiprocessing) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/6297446390).
Over the past 3 hours, it has been determined flaky in 1 workflow(s) with 1 red and 3 green.
|
1.0
|
DISABLED test_fs (__main__.TestMultiprocessing) - Platforms: asan, linux
This test was disabled because it is failing in CI. See [recent examples](http://torch-ci.com/failure/test_fs%2C%20TestMultiprocessing) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/6297446390).
Over the past 3 hours, it has been determined flaky in 1 workflow(s) with 1 red and 3 green.
|
process
|
disabled test fs main testmultiprocessing platforms asan linux this test was disabled because it is failing in ci see and the most recent trunk over the past hours it has been determined flaky in workflow s with red and green
| 1
|
2,266
| 2,589,945,020
|
IssuesEvent
|
2015-02-18 16:03:41
|
klassebe/klasse-wp-poll-survey
|
https://api.github.com/repos/klassebe/klasse-wp-poll-survey
|
closed
|
Add php to handle sort order changes
|
All testmodi feature guiRevision
|
These changes must be passed on to all other versions that are part of the same test collection (parent)
|
1.0
|
Add php to handle sort order changes - These changes must be passed on to all other versions that are part of the same test collection (parent)
|
non_process
|
add php to handle sort order changes these changes must be passed on to all other versions that are part of the same test collection parent
| 0
|
102,511
| 12,805,739,227
|
IssuesEvent
|
2020-07-03 08:07:31
|
amalto/platform6-ui-components
|
https://api.github.com/repos/amalto/platform6-ui-components
|
closed
|
Create a form component
|
Design: Atom Web component enhancement
|
The component must :
* add the attribute "Novalidate" to prevent automatic validation by the browser.
* find all the "inputs" of the form, native or not.
* validate the form data when submitting it.
|
1.0
|
Create a form component - The component must :
* add the attribute "Novalidate" to prevent automatic validation by the browser.
* find all the "inputs" of the form, native or not.
* validate the form data when submitting it.
|
non_process
|
create a form component the component must add the attribute novalidate to prevent automatic validation by the browser find all the inputs of the form native or not validate the form data when submitting it
| 0
|
6,468
| 2,848,025,911
|
IssuesEvent
|
2015-05-29 20:20:57
|
isenseDev/rSENSE
|
https://api.github.com/repos/isenseDev/rSENSE
|
closed
|
Add Indication on Map Markers of Data Sets that Contain Photos
|
In Testing UI
|
**General description:** When a data set includes a picture that is visible on the map, it'd be nice to add some type of indication it exists on the markers. Maybe a font awesome photo icon of some sort? Like a camera.
**live/dev/localhost:** live
**iSENSE Version:** v6.3
**Logged in (Y or N):** N
**Admin (Y or N):** N
**OS:** Macintosh
**Browser/Version:** Chrome 41.0.2272.89
**Steps to Reproduce:**
|
1.0
|
Add Indication on Map Markers of Data Sets that Contain Photos - **General description:** When a data set includes a picture that is visible on the map, it'd be nice to add some type of indication it exists on the markers. Maybe a font awesome photo icon of some sort? Like a camera.
**live/dev/localhost:** live
**iSENSE Version:** v6.3
**Logged in (Y or N):** N
**Admin (Y or N):** N
**OS:** Macintosh
**Browser/Version:** Chrome 41.0.2272.89
**Steps to Reproduce:**
|
non_process
|
add indication on map markers of data sets that contain photos general description when a data set includes a picture that is visible on the map it d be nice to add some type of indication it exists on the markers maybe a font awesome photo icon of some sort like a camera live dev localhost live isense version logged in y or n n admin y or n n os macintosh browser version chrome steps to reproduce
| 0
|
145,413
| 19,339,414,694
|
IssuesEvent
|
2021-12-15 01:29:01
|
hydrogen-dev/molecule-quickstart-app
|
https://api.github.com/repos/hydrogen-dev/molecule-quickstart-app
|
opened
|
CVE-2021-23424 (High) detected in ansi-html-0.0.7.tgz
|
security vulnerability
|
## CVE-2021-23424 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>ansi-html-0.0.7.tgz</b></p></summary>
<p>An elegant lib that converts the chalked (ANSI) text to HTML.</p>
<p>Library home page: <a href="https://registry.npmjs.org/ansi-html/-/ansi-html-0.0.7.tgz">https://registry.npmjs.org/ansi-html/-/ansi-html-0.0.7.tgz</a></p>
<p>Path to dependency file: molecule-quickstart-app/package.json</p>
<p>Path to vulnerable library: molecule-quickstart-app/node_modules/ansi-html/package.json</p>
<p>
Dependency Hierarchy:
- react-scripts-3.0.1.tgz (Root Library)
- webpack-dev-server-3.2.1.tgz
- :x: **ansi-html-0.0.7.tgz** (Vulnerable Library)
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
This affects all versions of package ansi-html. If an attacker provides a malicious string, it will get stuck processing the input for an extremely long time.
<p>Publish Date: 2021-08-18
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-23424>CVE-2021-23424</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<!-- <REMEDIATE>{"isOpenPROnVulnerability":true,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"javascript/Node.js","packageName":"ansi-html","packageVersion":"0.0.7","packageFilePaths":["/package.json"],"isTransitiveDependency":true,"dependencyTree":"react-scripts:3.0.1;webpack-dev-server:3.2.1;ansi-html:0.0.7","isMinimumFixVersionAvailable":false,"isBinary":false}],"baseBranches":["master"],"vulnerabilityIdentifier":"CVE-2021-23424","vulnerabilityDetails":"This affects all versions of package ansi-html. If an attacker provides a malicious string, it will get stuck processing the input for an extremely long time.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-23424","cvss3Severity":"high","cvss3Score":"7.5","cvss3Metrics":{"A":"High","AC":"Low","PR":"None","S":"Unchanged","C":"None","UI":"None","AV":"Network","I":"None"},"extraData":{}}</REMEDIATE> -->
|
True
|
CVE-2021-23424 (High) detected in ansi-html-0.0.7.tgz - ## CVE-2021-23424 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>ansi-html-0.0.7.tgz</b></p></summary>
<p>An elegant lib that converts the chalked (ANSI) text to HTML.</p>
<p>Library home page: <a href="https://registry.npmjs.org/ansi-html/-/ansi-html-0.0.7.tgz">https://registry.npmjs.org/ansi-html/-/ansi-html-0.0.7.tgz</a></p>
<p>Path to dependency file: molecule-quickstart-app/package.json</p>
<p>Path to vulnerable library: molecule-quickstart-app/node_modules/ansi-html/package.json</p>
<p>
Dependency Hierarchy:
- react-scripts-3.0.1.tgz (Root Library)
- webpack-dev-server-3.2.1.tgz
- :x: **ansi-html-0.0.7.tgz** (Vulnerable Library)
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
This affects all versions of package ansi-html. If an attacker provides a malicious string, it will get stuck processing the input for an extremely long time.
<p>Publish Date: 2021-08-18
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-23424>CVE-2021-23424</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<!-- <REMEDIATE>{"isOpenPROnVulnerability":true,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"javascript/Node.js","packageName":"ansi-html","packageVersion":"0.0.7","packageFilePaths":["/package.json"],"isTransitiveDependency":true,"dependencyTree":"react-scripts:3.0.1;webpack-dev-server:3.2.1;ansi-html:0.0.7","isMinimumFixVersionAvailable":false,"isBinary":false}],"baseBranches":["master"],"vulnerabilityIdentifier":"CVE-2021-23424","vulnerabilityDetails":"This affects all versions of package ansi-html. If an attacker provides a malicious string, it will get stuck processing the input for an extremely long time.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-23424","cvss3Severity":"high","cvss3Score":"7.5","cvss3Metrics":{"A":"High","AC":"Low","PR":"None","S":"Unchanged","C":"None","UI":"None","AV":"Network","I":"None"},"extraData":{}}</REMEDIATE> -->
|
non_process
|
cve high detected in ansi html tgz cve high severity vulnerability vulnerable library ansi html tgz an elegant lib that converts the chalked ansi text to html library home page a href path to dependency file molecule quickstart app package json path to vulnerable library molecule quickstart app node modules ansi html package json dependency hierarchy react scripts tgz root library webpack dev server tgz x ansi html tgz vulnerable library found in base branch master vulnerability details this affects all versions of package ansi html if an attacker provides a malicious string it will get stuck processing the input for an extremely long time publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href isopenpronvulnerability true ispackagebased true isdefaultbranch true packages istransitivedependency true dependencytree react scripts webpack dev server ansi html isminimumfixversionavailable false isbinary false basebranches vulnerabilityidentifier cve vulnerabilitydetails this affects all versions of package ansi html if an attacker provides a malicious string it will get stuck processing the input for an extremely long time vulnerabilityurl
| 0
|
18,396
| 24,532,612,834
|
IssuesEvent
|
2022-10-11 17:46:28
|
bridgetownrb/bridgetown
|
https://api.github.com/repos/bridgetownrb/bridgetown
|
closed
|
Future: extract Tailwind support to a community maintained repo
|
process
|
With the rapid developments in new and future CSS and seeing how Tailwind has struggled mightily to maintain relevance in the face of it (the `:has` pseudo-class alone has the potential to revolutionize styling, as does cascade layers, new color functions, container queries, and so forth)—even going so far as to invent a new fake-CSS-language which is [some of the nastiest mumbo-jumbo I've ever seen](https://tailwindcss.com/docs/adding-custom-styles#using-arbitrary-values)—I have come to the conclusion that Tailwind is a net negative for the web in general and should be avoided pretty much entirely except for very specific "rapid prototyping" use cases. I [once wrote a viral article](https://www.spicyweb.dev/why-tailwind-isnt-for-me/) about the dangers of Tailwind, and rather than the new JIT making it more appealing, it gave TW creators license to give into their worst impulses. (Again, it boggles my mind that the bizarro arbitrary values language linked to from above ever made it a production release.)
I've also found supporting the Tailwind JIT to be difficult in Bridgetown, and while v1.1 will support it better than v1.0, I have zero interest in maintaining this, and it's increasingly hard to stomach promoting Tailwind on the Bridgetown website at all.
Thus I'm proposing to extract Tailwind support out to a separate community maintained repo for Bridgetown 2.0 when that's released end of 2022 or early 2023. By then, evergreen browsers should have universally shipped a whole slew of new CSS features which I'll be quite happy to promote. I also think focusing on Open Props, Shoelace, and other projects which utilize "Use the Platform™️" methodologies—rather than attempting to replace them—will service Bridgetown users much better. Heck, at this point I'd rather you kick it old-school and use Bootstrap and Sass, rather than use Tailwind which will isolate you from the world of modern/vanilla CSS.
I'm open to feedback that this is simply a terrible idea and that I'm using my bully-pulpit in an adverse way, but I feel fairly strongly about this point so it'll have to be pretty compelling feedback. 🙂 I'd also love to compile a list of articles/courses/etc which help people learn how to write vanilla CSS and how to create design systems and use web component libraries, so that we have a rich and attractive set of recommendations of what to use _instead_ of Tailwind.
|
1.0
|
Future: extract Tailwind support to a community maintained repo - With the rapid developments in new and future CSS and seeing how Tailwind has struggled mightily to maintain relevance in the face of it (the `:has` pseudo-class alone has the potential to revolutionize styling, as does cascade layers, new color functions, container queries, and so forth)—even going so far as to invent a new fake-CSS-language which is [some of the nastiest mumbo-jumbo I've ever seen](https://tailwindcss.com/docs/adding-custom-styles#using-arbitrary-values)—I have come to the conclusion that Tailwind is a net negative for the web in general and should be avoided pretty much entirely except for very specific "rapid prototyping" use cases. I [once wrote a viral article](https://www.spicyweb.dev/why-tailwind-isnt-for-me/) about the dangers of Tailwind, and rather than the new JIT making it more appealing, it gave TW creators license to give into their worst impulses. (Again, it boggles my mind that the bizarro arbitrary values language linked to from above ever made it a production release.)
I've also found supporting the Tailwind JIT to be difficult in Bridgetown, and while v1.1 will support it better than v1.0, I have zero interest in maintaining this, and it's increasingly hard to stomach promoting Tailwind on the Bridgetown website at all.
Thus I'm proposing to extract Tailwind support out to a separate community maintained repo for Bridgetown 2.0 when that's released end of 2022 or early 2023. By then, evergreen browsers should have universally shipped a whole slew of new CSS features which I'll be quite happy to promote. I also think focusing on Open Props, Shoelace, and other projects which utilize "Use the Platform™️" methodologies—rather than attempting to replace them—will service Bridgetown users much better. Heck, at this point I'd rather you kick it old-school and use Bootstrap and Sass, rather than use Tailwind which will isolate you from the world of modern/vanilla CSS.
I'm open to feedback that this is simply a terrible idea and that I'm using my bully-pulpit in an adverse way, but I feel fairly strongly about this point so it'll have to be pretty compelling feedback. 🙂 I'd also love to compile a list of articles/courses/etc which help people learn how to write vanilla CSS and how to create design systems and use web component libraries, so that we have a rich and attractive set of recommendations of what to use _instead_ of Tailwind.
|
process
|
future extract tailwind support to a community maintained repo with the rapid developments in new and future css and seeing how tailwind has struggled mightily to maintain relevance in the face of it the has pseudo class alone has the potential to revolutionize styling as does cascade layers new color functions container queries and so forth —even going so far as to invent a new fake css language which is have come to the conclusion that tailwind is a net negative for the web in general and should be avoided pretty much entirely except for very specific rapid prototyping use cases i about the dangers of tailwind and rather than the new jit making it more appealing it gave tw creators license to give into their worst impulses again it boggles my mind that the bizarro arbitrary values language linked to from above ever made it a production release i ve also found supporting the tailwind jit to be difficult in bridgetown and while will support it better than i have zero interest in maintaining this and it s increasingly hard to stomach promoting tailwind on the bridgetown website at all thus i m proposing to extract tailwind support out to a separate community maintained repo for bridgetown when that s released end of or early by then evergreen browsers should have universally shipped a whole slew of new css features which i ll be quite happy to promote i also think focusing on open props shoelace and other projects which utilize use the platform™️ methodologies—rather than attempting to replace them—will service bridgetown users much better heck at this point i d rather you kick it old school and use bootstrap and sass rather than use tailwind which will isolate you from the world of modern vanilla css i m open to feedback that this is simply a terrible idea and that i m using my bully pulpit in an adverse way but i feel fairly strongly about this point so it ll have to be pretty compelling feedback 🙂 i d also love to compile a list of articles courses etc which help 
people learn how to write vanilla css and how to create design systems and use web component libraries so that we have a rich and attractive set of recommendations of what to use instead of tailwind
| 1
|
181,651
| 14,072,794,870
|
IssuesEvent
|
2020-11-04 02:52:06
|
hiltonjp/journey
|
https://api.github.com/repos/hiltonjp/journey
|
closed
|
Unit Testing: Boss battle classes
|
testing
|
- [x] HealthBelowCondition
- [x] AttackEngine
- [x] ~~Boss~~ All the complex interactions are delegated to the attack engine anyway.
- [x] BossWeakSpot
- [x] ~~MiniEye~~ Specific to a single boss battle. Probably not worth the effort.
- [x] ~~MegaEye~~ Specific to a single boss battle. Probably not worth the effort.
|
1.0
|
Unit Testing: Boss battle classes - - [x] HealthBelowCondition
- [x] AttackEngine
- [x] ~~Boss~~ All the complex interactions are delegated to the attack engine anyway.
- [x] BossWeakSpot
- [x] ~~MiniEye~~ Specific to a single boss battle. Probably not worth the effort.
- [x] ~~MegaEye~~ Specific to a single boss battle. Probably not worth the effort.
|
non_process
|
unit testing boss battle classes healthbelowcondition attackengine boss all the complex interactions are delegated to the attack engine anyway bossweakspot minieye specific to a single boss battle probably not worth the effort megaeye specific to a single boss battle probably not worth the effort
| 0
|
37,168
| 5,104,891,123
|
IssuesEvent
|
2017-01-05 03:48:31
|
AeroScripts/QuestieDev
|
https://api.github.com/repos/AeroScripts/QuestieDev
|
closed
|
Quest Item tooltip error.
|
bug resolved test again
|
Questie raise error on tooltip on quest item when item withdraw from bank.
How to make this error,
1. Store quest item in bank.
2. withdraw from bank and mouse over quest item.
3. Get error in QuestieNotes.lua line 396.
I changed in QuestieNotes.lua line 395,
> if QuestieHandledQuests[k] then --< CHANGED
local logid = Questie:GetQuestIdFromHash(k);
QSelect_QuestLogEntry(logid);
local desc, typ, done = QGet_QuestLogLeaderBoard(m[1]['objectiveid']);
local indx = findLast(desc, ":");
local countstr = string.sub(desc, indx+2);
--GameTooltip:AddLine(" " .. name .. ": " .. countstr, 1, 1, 0.2)
Questie_TooltipCache[cacheKey]['lines'][lineIndex+1] = {
['color'] = {1, 1, 0.2},
['data'] = " " .. name .. ": " .. countstr
};
Questie_TooltipCache[cacheKey]['lineCount'] = lineIndex + 2;
p = true;
mi = true;
end --< CHANGED
|
1.0
|
Quest Item tooltip error. - Questie raise error on tooltip on quest item when item withdraw from bank.
How to make this error,
1. Store quest item in bank.
2. withdraw from bank and mouse over quest item.
3. Get error in QuestieNotes.lua line 396.
I changed in QuestieNotes.lua line 395,
> if QuestieHandledQuests[k] then --< CHANGED
local logid = Questie:GetQuestIdFromHash(k);
QSelect_QuestLogEntry(logid);
local desc, typ, done = QGet_QuestLogLeaderBoard(m[1]['objectiveid']);
local indx = findLast(desc, ":");
local countstr = string.sub(desc, indx+2);
--GameTooltip:AddLine(" " .. name .. ": " .. countstr, 1, 1, 0.2)
Questie_TooltipCache[cacheKey]['lines'][lineIndex+1] = {
['color'] = {1, 1, 0.2},
['data'] = " " .. name .. ": " .. countstr
};
Questie_TooltipCache[cacheKey]['lineCount'] = lineIndex + 2;
p = true;
mi = true;
end --< CHANGED
|
non_process
|
quest item tooltip error questie raise error on tooltip on quest item when item withdraw from bank how to make this error store quest item in bank withdraw from bank and mouse over quest item get error in questienotes lua line i changed in questienotes lua line if questiehandledquests then changed local logid questie getquestidfromhash k qselect questlogentry logid local desc typ done qget questlogleaderboard m local indx findlast desc local countstr string sub desc indx gametooltip addline name countstr questie tooltipcache name countstr questie tooltipcache lineindex p true mi true end changed
| 0
|
18,672
| 24,590,723,883
|
IssuesEvent
|
2022-10-14 01:45:50
|
benthosdev/benthos
|
https://api.github.com/repos/benthosdev/benthos
|
closed
|
awk processor decode and assign
|
question processors
|
I want to decode the string into base64 in awk processor and assign it to a variable, just like this:
```
#! /bin/bash
astr=MjAyMi0xMC0xMiAxMDoyMTowNg==
bstr=`echo -n $astr|base64 -d`
echo "$bstr"
```
Decode astr with base64 and assign it to bstr
How to implement the above function in awk processor?
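As a sanity check of what the shell snippet is expected to produce (outside Benthos entirely), the same decode-and-assign can be sketched in Python — this illustrates the base64 step only, not the Benthos awk processor itself:

```python
import base64

# The same base64 string used in the shell example above.
astr = "MjAyMi0xMC0xMiAxMDoyMTowNg=="

# Decode and turn the raw bytes back into text, mirroring `base64 -d`.
bstr = base64.b64decode(astr).decode("utf-8")
print(bstr)  # 2022-10-12 10:21:06
```

In Benthos this kind of transformation is more commonly done with a Bloblang mapping (e.g. something like `root = content().decode("base64")`, assuming current Bloblang syntax) rather than inside the awk processor.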
|
1.0
|
awk processor decode and assign - I want to decode the string into base64 in awk processor and assign it to a variable, just like this:
```
#! /bin/bash
astr=MjAyMi0xMC0xMiAxMDoyMTowNg==
bstr=`echo -n $astr|base64 -d`
echo "$bstr"
```
Decode astr with base64 and assign it to bstr
How to implement the above function in awk processor?
|
process
|
awk processor decode and assign i want to decode the string into in awk processor and assign it to a variable, just like this bin bash astr bstr echo n astr d echo bstr decode astr with and assign it to bstr how to implement the above function in awk processor?
| 1
|
21,014
| 27,958,013,238
|
IssuesEvent
|
2023-03-24 13:47:24
|
qgis/QGIS
|
https://api.github.com/repos/qgis/QGIS
|
closed
|
Unable to start editing of mesh layer: Face is flat
|
Feedback Processing Bug Mesh
|
### What is the bug or the crash?
I want to edit a mesh layer but get the error message
>Unable to start editing of mesh layer "meshlayername": Face XXX is flat
where XXX is the number of the face.
### Steps to reproduce the issue
First, create a mesh (2dm file) using the "TIN Mesh Creation" tool. Input vector layers are some points.
I then want to edit this mesh but get the error message
>Unable to start editing of mesh layer "meshlayername": Face XXX is flat
I am not sure, what "face is flat" actually means? They are all triangular faces so should all be flat by definition?
Anyway, I tried manually deleting the offending face from the mesh by deleting the relevant line from the 2dm file, and this does seem to work in principle, but is very cumbersome because
* face numbers in the 2dm file are different from those used by QGIS
* QGIS only outputs the first offending face it finds. After editing the 2dm file manually, loading it again in QGIS and trying to edit again, I get the number of the next offending face from QGIS. I gave up after manually deleting about a dozen faces.
Possible fixes I can think of, in order of preference:
* The "TIN Mesh Creation" should not create meshes that are not editable in the first place
* QGIS could provide a tool to "fix" a mesh in order to make it editable
* QGIS should output the complete list of all offending faces at once, so that manually fixing the 2dm file is less cumbersome
### Versions
QGIS version | 3.22.12-Białowieża | QGIS code revision | b8534cb1
-- | -- | -- | --
Qt version | 5.15.3
Python version | 3.9.5
GDAL/OGR version | 3.5.2
PROJ version | 9.1.0
EPSG Registry database version | v10.074 (2022-08-01)
GEOS version | 3.10.3-CAPI-1.16.1
SQLite version | 3.39.4
PDAL version | 2.4.3
PostgreSQL client version | 14.3
SpatiaLite version | 5.0.1
QWT version | 6.1.6
QScintilla2 version | 2.13.1
OS version | Windows 10 Version 2009
| | |
Active Python plugins
DataPlotly | 3.9.2
profiletool | 4.2.2
quick_map_services | 0.19.33
wbt_for_qgis | 1.0.7
db_manager | 0.1.20
grassprovider | 2.12.99
MetaSearch | 0.3.5
processing | 2.12.99
sagaprovider | 2.12.99
### Supported QGIS version
- [X] I'm running a supported QGIS version according to the roadmap.
### New profile
- [ ] I tried with a new QGIS profile
### Additional context
_No response_
|
1.0
|
Unable to start editing of mesh layer: Face is flat - ### What is the bug or the crash?
I want to edit a mesh layer but get the error message
>Unable to start editing of mesh layer "meshlayername": Face XXX is flat
where XXX is the number of the face.
### Steps to reproduce the issue
First, create a mesh (2dm file) using the "TIN Mesh Creation" tool. Input vector layers are some points.
I then want to edit this mesh but get the error message
>Unable to start editing of mesh layer "meshlayername": Face XXX is flat
I am not sure, what "face is flat" actually means? They are all triangular faces so should all be flat by definition?
Anyway, I tried manually deleting the offending face from the mesh by deleting the relevant line from the 2dm file, and this does seem to work in principle, but is very cumbersome because
* face numbers in the 2dm file are different from those used by QGIS
* QGIS only outputs the first offending face it finds. After editing the 2dm file manually, loading it again in QGIS and trying to edit again, I get the number of the next offending face from QGIS. I gave up after manually deleting about a dozen faces.
Possible fixes I can think of, in order of preference:
* The "TIN Mesh Creation" should not create meshes that are not editable in the first place
* QGIS could provide a tool to "fix" a mesh in order to make it editable
* QGIS should output the complete list of all offending faces at once, so that manually fixing the 2dm file is less cumbersome
### Versions
QGIS version | 3.22.12-Białowieża | QGIS code revision | b8534cb1
-- | -- | -- | --
Qt version | 5.15.3
Python version | 3.9.5
GDAL/OGR version | 3.5.2
PROJ version | 9.1.0
EPSG Registry database version | v10.074 (2022-08-01)
GEOS version | 3.10.3-CAPI-1.16.1
SQLite version | 3.39.4
PDAL version | 2.4.3
PostgreSQL client version | 14.3
SpatiaLite version | 5.0.1
QWT version | 6.1.6
QScintilla2 version | 2.13.1
OS version | Windows 10 Version 2009
| | |
Active Python plugins
DataPlotly | 3.9.2
profiletool | 4.2.2
quick_map_services | 0.19.33
wbt_for_qgis | 1.0.7
db_manager | 0.1.20
grassprovider | 2.12.99
MetaSearch | 0.3.5
processing | 2.12.99
sagaprovider | 2.12.99
### Supported QGIS version
- [X] I'm running a supported QGIS version according to the roadmap.
### New profile
- [ ] I tried with a new QGIS profile
### Additional context
_No response_
|
process
|
unable to start editing of mesh layer face is flat what is the bug or the crash i want to edit a mesh layer but get the error message unable to start editing of mesh layer meshlayername face xxx is flat where xxx is the number of the face steps to reproduce the issue first create a mesh file using the tin mesh creation tool input vector layers are some points i then want to edit this mesh but get the error message unable to start editing of mesh layer meshlayername face xxx is flat i am not sure what face is flat actually means they are all triangular faces so should all be flat by definition anyway i tried manually deleting the offending face from the mesh by deleting the relevant line from the file and this does seem to work in principle but is very cumbersome because face numbers in the file are different from those used by qgis qgis only outputs the first offending face it finds after editing the file manually loading it again in qgis and trying to edit again i get the number of the next offending face from qgis i gave up after manually deleting about a dozen faces possible fixes i can think of in order of preference the tin mesh creation should not create meshes that are not editable in the first place qgis could provide a tool to fix a mesh in order to make it editable qgis should output the complete list of all offending faces at once so that manually fixing the file is less cumbersome versions qgis version białowieża qgis code revision qt version python version gdal ogr version proj version epsg registry database version geos version capi sqlite version pdal version postgresql client version spatialite version qwt version version os version windows version active python plugins dataplotly profiletool quick map services wbt for qgis db manager grassprovider metasearch processing sagaprovider supported qgis version i m running a supported qgis version according to the roadmap new profile i tried with a new qgis profile additional context no response
| 1
|
9,378
| 12,375,056,763
|
IssuesEvent
|
2020-05-19 03:29:18
|
tikv/tikv
|
https://api.github.com/repos/tikv/tikv
|
closed
|
UCP: Migrate scalar function `GreatestReal` from TiDB
|
challenge-program-2 component/coprocessor difficulty/easy sig/coprocessor
|
## Description
Port the function `GreatestReal` from TiDB to coprocessor.
## Score
* 50
## Mentor(s)
* @breeswish
## Recommended Skills
* Rust programming
## Learning Materials
Already implemented expressions ported from TiDB
- https://github.com/tikv/tikv/tree/master/components/tidb_query/src/rpn_expr)
- https://github.com/tikv/tikv/tree/master/components/tidb_query/src/expr)
|
2.0
|
UCP: Migrate scalar function `GreatestReal` from TiDB -
## Description
Port the function `GreatestReal` from TiDB to coprocessor.
## Score
* 50
## Mentor(s)
* @breeswish
## Recommended Skills
* Rust programming
## Learning Materials
Already implemented expressions ported from TiDB
- https://github.com/tikv/tikv/tree/master/components/tidb_query/src/rpn_expr)
- https://github.com/tikv/tikv/tree/master/components/tidb_query/src/expr)
|
process
|
ucp migrate scalar function greatestreal from tidb description port the function greatestreal from tidb to coprocessor score mentor s breeswish recommended skills rust programming learning materials already implemented expressions ported from tidb
| 1
|
252,741
| 8,041,138,463
|
IssuesEvent
|
2018-07-31 01:09:17
|
Verseghy/website_backend
|
https://api.github.com/repos/Verseghy/website_backend
|
closed
|
Add checkCaching() function to TestsBase
|
Priority: Low Type: Maintenance
|
This function should execute a query twice with two dates and check for response codes `200 ok` and `304 not modified`
Usage: `$this->checkCaching($endpoint, $params)`
It should use who dates:
```php
$farDate = 'Mon, 4 Jan 2100 00:00:00';
$oldDate = 'Mon, 5 Jan 1970 00:00:00';
```
|
1.0
|
Add checkCaching() function to TestsBase - This function should execute a query twice with two dates and check for response codes `200 ok` and `304 not modified`
Usage: `$this->checkCaching($endpoint, $params)`
It should use who dates:
```php
$farDate = 'Mon, 4 Jan 2100 00:00:00';
$oldDate = 'Mon, 5 Jan 1970 00:00:00';
```
|
non_process
|
add checkcaching function to testsbase this function should execute a query twice with two dates and check for response codes ok and not modified usage this checkcaching endpoint params it should use who dates php fardate mon jan olddate mon jan
| 0
|
15,527
| 19,703,290,628
|
IssuesEvent
|
2022-01-12 18:53:54
|
googleapis/nodejs-service-control
|
https://api.github.com/repos/googleapis/nodejs-service-control
|
opened
|
Your .repo-metadata.json file has a problem 🤒
|
type: process repo-metadata: lint
|
You have a problem with your .repo-metadata.json file:
Result of scan 📈:
* api_shortname 'service-control' invalid in .repo-metadata.json
☝️ Once you correct these problems, you can close this issue.
Reach out to **go/github-automation** if you have any questions.
|
1.0
|
Your .repo-metadata.json file has a problem 🤒 - You have a problem with your .repo-metadata.json file:
Result of scan 📈:
* api_shortname 'service-control' invalid in .repo-metadata.json
☝️ Once you correct these problems, you can close this issue.
Reach out to **go/github-automation** if you have any questions.
|
process
|
your repo metadata json file has a problem 🤒 you have a problem with your repo metadata json file result of scan 📈 api shortname service control invalid in repo metadata json ☝️ once you correct these problems you can close this issue reach out to go github automation if you have any questions
| 1
|
21,987
| 30,483,343,143
|
IssuesEvent
|
2023-07-17 22:31:55
|
h4sh5/pypi-auto-scanner
|
https://api.github.com/repos/h4sh5/pypi-auto-scanner
|
opened
|
roblox-pyc 1.16.35 has 2 GuardDog issues
|
guarddog silent-process-execution
|
https://pypi.org/project/roblox-pyc
https://inspector.pypi.io/project/roblox-pyc
```{
"dependency": "roblox-pyc",
"version": "1.16.35",
"result": {
"issues": 2,
"errors": {},
"results": {
"silent-process-execution": [
{
"location": "roblox-pyc-1.16.35/src/robloxpy.py:106",
"code": " subprocess.call([\"luarocks\", \"--version\"], stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL, stdin=subprocess.DEVNULL)",
"message": "This package is silently executing an external binary, redirecting stdout, stderr and stdin to /dev/null"
},
{
"location": "roblox-pyc-1.16.35/src/robloxpy.py:113",
"code": " subprocess.call([\"moonc\", \"--version\"], stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL, stdin=subprocess.DEVNULL)",
"message": "This package is silently executing an external binary, redirecting stdout, stderr and stdin to /dev/null"
}
]
},
"path": "/tmp/tmphjrhlxml/roblox-pyc"
}
}```
|
1.0
|
roblox-pyc 1.16.35 has 2 GuardDog issues - https://pypi.org/project/roblox-pyc
https://inspector.pypi.io/project/roblox-pyc
```{
"dependency": "roblox-pyc",
"version": "1.16.35",
"result": {
"issues": 2,
"errors": {},
"results": {
"silent-process-execution": [
{
"location": "roblox-pyc-1.16.35/src/robloxpy.py:106",
"code": " subprocess.call([\"luarocks\", \"--version\"], stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL, stdin=subprocess.DEVNULL)",
"message": "This package is silently executing an external binary, redirecting stdout, stderr and stdin to /dev/null"
},
{
"location": "roblox-pyc-1.16.35/src/robloxpy.py:113",
"code": " subprocess.call([\"moonc\", \"--version\"], stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL, stdin=subprocess.DEVNULL)",
"message": "This package is silently executing an external binary, redirecting stdout, stderr and stdin to /dev/null"
}
]
},
"path": "/tmp/tmphjrhlxml/roblox-pyc"
}
}```
|
process
|
roblox pyc has guarddog issues dependency roblox pyc version result issues errors results silent process execution location roblox pyc src robloxpy py code subprocess call stdout subprocess devnull stderr subprocess devnull stdin subprocess devnull message this package is silently executing an external binary redirecting stdout stderr and stdin to dev null location roblox pyc src robloxpy py code subprocess call stdout subprocess devnull stderr subprocess devnull stdin subprocess devnull message this package is silently executing an external binary redirecting stdout stderr and stdin to dev null path tmp tmphjrhlxml roblox pyc
| 1
|
9,273
| 12,302,181,633
|
IssuesEvent
|
2020-05-11 16:32:46
|
hashgraph/hedera-mirror-node
|
https://api.github.com/repos/hashgraph/hedera-mirror-node
|
closed
|
Include 3rd party licenses and source in images
|
P1 enhancement process
|
**Problem**
Requirements for Google Cloud Marketplace inclusion mandate that Docker images include the licenses for all 3rd party dependencies (including transitive dependencies). Additionally, for CDDL, GPL and LGPL we may have to include the source code of dependencies. For CDDL/GPL, all our dependencies have the classpath exception so we're confirming if we still have to.
Above applies to Java. Since NPM is an approved package manager and already includes license text in node_modules downloaded by default, we don't have to do either.
**Solution**
We can either do it manually or automatically:
- Manual: Use maven-license-plugin to download licenses, check into a well known path and configure jib to include as a resource
- Automated: Hook maven-license-plugin to appropriate maven phase to download license before building image, configure jib to include build time path as resource
Either approach has to meet the following requirement:
```
There are two common practices for meeting this requirement.
You may use a single concatenated document that lists every direct and transitive third-party dependency, and provides the copyright notices and license text for each. If following this approach, it is fine to omit purely-duplicate licenses provided that the components subject to the license are clearly identified with their individual copyright notices included where the component is listed.
The second common practice is to bundle together all the individual LICENSE (and NOTICE, where applicable) text files from the upstream distributions into a THIRD_PARTY_NOTICES folder.
```
**Alternatives**
None
**Additional Context**
|
1.0
|
Include 3rd party licenses and source in images - **Problem**
Requirements for Google Cloud Marketplace inclusion mandate that Docker images include the licenses for all 3rd party dependencies (including transitive dependencies). Additionally, for CDDL, GPL and LGPL we may have to include the source code of dependencies. For CDDL/GPL, all our dependencies have the classpath exception so we're confirming if we still have to.
Above applies to Java. Since NPM is an approved package manager and already includes license text in node_modules downloaded by default, we don't have to do either.
**Solution**
We can either do it manually or automatically:
- Manual: Use maven-license-plugin to download licenses, check into a well known path and configure jib to include as a resource
- Automated: Hook maven-license-plugin to appropriate maven phase to download license before building image, configure jib to include build time path as resource
Either approach has to meet the following requirement:
```
There are two common practices for meeting this requirement.
You may use a single concatenated document that lists every direct and transitive third-party dependency, and provides the copyright notices and license text for each. If following this approach, it is fine to omit purely-duplicate licenses provided that the components subject to the license are clearly identified with their individual copyright notices included where the component is listed.
The second common practice is to bundle together all the individual LICENSE (and NOTICE, where applicable) text files from the upstream distributions into a THIRD_PARTY_NOTICES folder.
```
**Alternatives**
None
**Additional Context**
|
process
|
include party licenses and source in images problem requirements for google cloud marketplace inclusion mandate that docker images include the licenses for all party dependencies including transitive dependencies additionally for cddl gpl and lgpl we may have to include the source code of dependencies for cddl gpl all our dependencies have the classpath exception so we re confirming if we still have to above applies to java since npm is an approved package manager and already includes license text in node modules downloaded by default we don t have to do either solution we can either do it manually or automatically manual use maven license plugin to download licenses check into a well known path and configure jib to include as a resource automated hook maven license plugin to appropriate maven phase to download license before building image configure jib to include build time path as resource either approach has to meet the following requirement there are two common practices for meeting this requirement you may use a single concatenated document that lists every direct and transitive third party dependency and provides the copyright notices and license text for each if following this approach it is fine to omit purely duplicate licenses provided that the components subject to the license are clearly identified with their individual copyright notices included where the component is listed the second common practice is to bundle together all the individual license and notice where applicable text files from the upstream distributions into a third party notices folder alternatives none additional context
| 1
|
3,458
| 6,544,149,754
|
IssuesEvent
|
2017-09-03 12:22:50
|
Great-Hill-Corporation/quickBlocks
|
https://api.github.com/repos/Great-Hill-Corporation/quickBlocks
|
opened
|
ethprice initialization
|
apps-ethPrice status-inprocess type-bug
|
Remove ~/.quickBlocks folder (i.e. brand new user)
Run test cases for ethprice (they all fail)
|
1.0
|
ethprice initialization - Remove ~/.quickBlocks folder (i.e. brand new user)
Run test cases for ethprice (they all fail)
|
process
|
ethprice initialization remove quickblocks folder i e brand new user run test cases for ethprice they all fail
| 1
|
3,364
| 6,493,490,941
|
IssuesEvent
|
2017-08-21 17:19:57
|
geneontology/go-ontology
|
https://api.github.com/repos/geneontology/go-ontology
|
closed
|
add F-P link between GO:0016926 protein desumoylation and GO:0016929 SUMO-specific protease activity
|
cellular processes missing parentage
|
GO:0016926 protein desumoylation
--part_of SUMO-specific protease activity
|
1.0
|
add F-P link between GO:0016926 protein desumoylation and GO:0016929 SUMO-specific protease activity -
GO:0016926 protein desumoylation
--part_of SUMO-specific protease activity
|
process
|
add f p link between go protein desumoylation and go sumo specific protease activity go protein desumoylation part of sumo specific protease activity
| 1
|
138,343
| 11,199,315,002
|
IssuesEvent
|
2020-01-03 18:21:58
|
sharkwouter/minigalaxy
|
https://api.github.com/repos/sharkwouter/minigalaxy
|
closed
|
Games not detected if their installation folder isn't their exact title
|
bug needs testing
|
The GOG installers that you download off their website omit certain characters when creating the installation folder. The characters I have found that have been omitted are : and -. There are quite possibly more characters but I don't have any examples in my library. As these characters don't exist in the folder but do in the game title they aren't detected by Minigalaxy on startup.
|
1.0
|
Games not detected if their installation folder isn't their exact title - The GOG installers that you download off their website omit certain characters when creating the installation folder. The characters I have found that have been omitted are : and -. There are quite possibly more characters but I don't have any examples in my library. As these characters don't exist in the folder but do in the game title they aren't detected by Minigalaxy on startup.
|
non_process
|
games not detected if their installation folder isn t their exact title the gog installers that you download off their website omit certain characters when creating the installation folder the characters i have found that have been omitted are and there are quite possibly more characters but i don t have any examples in my library as these characters don t exist in the folder but do in the game title they aren t detected by minigalaxy on startup
| 0
|
28,193
| 11,598,804,034
|
IssuesEvent
|
2020-02-25 00:11:06
|
fufunoyu/WebGoat-Legacy
|
https://api.github.com/repos/fufunoyu/WebGoat-Legacy
|
opened
|
CVE-2019-20330 (High) detected in jackson-databind-2.0.4.jar
|
security vulnerability
|
## CVE-2019-20330 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>jackson-databind-2.0.4.jar</b></p></summary>
<p>General data-binding functionality for Jackson: works on core streaming API</p>
<p>Path to dependency file: /tmp/ws-scm/WebGoat-Legacy/pom.xml</p>
<p>Path to vulnerable library: downloadResource_ca8595bc-07eb-448a-8047-41739f7ca848/20200225000922/jackson-databind-2.0.4.jar</p>
<p>
Dependency Hierarchy:
- :x: **jackson-databind-2.0.4.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/fufunoyu/WebGoat-Legacy/commit/b4beca3f8389336252da3405dedeb0bc0523e51f">b4beca3f8389336252da3405dedeb0bc0523e51f</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
FasterXML jackson-databind 2.x before 2.9.10.2 lacks certain net.sf.ehcache blocking.
<p>Publish Date: 2020-01-05
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2019-20330>CVE-2019-20330</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>9.8</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/FasterXML/jackson-databind/tree/jackson-databind-2.9.10.2">https://github.com/FasterXML/jackson-databind/tree/jackson-databind-2.9.10.2</a></p>
<p>Release Date: 2020-01-03</p>
<p>Fix Resolution: com.fasterxml.jackson.core:jackson-databind:2.9.10.2</p>
</p>
</details>
<p></p>
|
True
|
CVE-2019-20330 (High) detected in jackson-databind-2.0.4.jar - ## CVE-2019-20330 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>jackson-databind-2.0.4.jar</b></p></summary>
<p>General data-binding functionality for Jackson: works on core streaming API</p>
<p>Path to dependency file: /tmp/ws-scm/WebGoat-Legacy/pom.xml</p>
<p>Path to vulnerable library: downloadResource_ca8595bc-07eb-448a-8047-41739f7ca848/20200225000922/jackson-databind-2.0.4.jar</p>
<p>
Dependency Hierarchy:
- :x: **jackson-databind-2.0.4.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/fufunoyu/WebGoat-Legacy/commit/b4beca3f8389336252da3405dedeb0bc0523e51f">b4beca3f8389336252da3405dedeb0bc0523e51f</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
FasterXML jackson-databind 2.x before 2.9.10.2 lacks certain net.sf.ehcache blocking.
<p>Publish Date: 2020-01-05
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2019-20330>CVE-2019-20330</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>9.8</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/FasterXML/jackson-databind/tree/jackson-databind-2.9.10.2">https://github.com/FasterXML/jackson-databind/tree/jackson-databind-2.9.10.2</a></p>
<p>Release Date: 2020-01-03</p>
<p>Fix Resolution: com.fasterxml.jackson.core:jackson-databind:2.9.10.2</p>
</p>
</details>
<p></p>
|
non_process
|
cve high detected in jackson databind jar cve high severity vulnerability vulnerable library jackson databind jar general data binding functionality for jackson works on core streaming api path to dependency file tmp ws scm webgoat legacy pom xml path to vulnerable library downloadresource jackson databind jar dependency hierarchy x jackson databind jar vulnerable library found in head commit a href vulnerability details fasterxml jackson databind x before lacks certain net sf ehcache blocking publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution com fasterxml jackson core jackson databind
| 0
|
7,134
| 16,659,847,456
|
IssuesEvent
|
2021-06-06 06:16:29
|
OndrejSzekely/metron
|
https://api.github.com/repos/OndrejSzekely/metron
|
opened
|
Add Hydra config framework
|
Metron architecture
|
Moving from [Dynaconf](https://dynaconf.readthedocs.io/en/docs_223/) configuration framework to Facebook's [Hydra](https://hydra.cc/) config tool. This allows to have much broader configuration options, which more fit into our scenario.
|
1.0
|
Add Hydra config framework - Moving from [Dynaconf](https://dynaconf.readthedocs.io/en/docs_223/) configuration framework to Facebook's [Hydra](https://hydra.cc/) config tool. This allows to have much broader configuration options, which more fit into our scenario.
|
non_process
|
add hydra config framework moving from configuration framework to facebook s config tool this allows to have much broader configuration options which more fit into our scenario
| 0
|
17,605
| 23,427,746,542
|
IssuesEvent
|
2022-08-14 16:44:44
|
vortexntnu/Vortex-CV
|
https://api.github.com/repos/vortexntnu/Vortex-CV
|
closed
|
Create an image processing "core" ROS node (cv_utils)
|
Future feature Image Processing Utility
|
**Time estimate:** 5-7 hours
**Description of task:**
Write a template ROS node for image processing that is a clonable code module. It should be a base 'stripped-down' version of a ROS program meant for image processing testing and development. The main purpose is to have an easy way to set up image processing nodes/modules for testing with rosbags and live camera feed (via ROS). A part of the package should be a quick-start guide on how to use ROS for image manipulation with rosbags/ros-wrappers.
|
1.0
|
Create an image processing "core" ROS node (cv_utils) - **Time estimate:** 5-7 hours
**Description of task:**
Write a template ROS node for image processing that is a clonable code module. It should be a base 'stripped-down' version of a ROS program meant for image processing testing and development. The main purpose is to have an easy way to set up image processing nodes/modules for testing with rosbags and live camera feed (via ROS). A part of the package should be a quick-start guide on how to use ROS for image manipulation with rosbags/ros-wrappers.
|
process
|
create an image processing core ros node cv utils time estimate hours description of task write a template ros node for image processing that is a clonable code module it should be a base stripped down version of a ros program meant for image processing testing and development the main purpose is to have an easy way to set up image processing nodes modules for testing with rosbags and live camera feed via ros a part of the package should be a quick start guide on how to use ros for image manipulation with rosbags ros wrappers
| 1
|
13,662
| 16,383,867,970
|
IssuesEvent
|
2021-05-17 07:58:52
|
trpo2021/cw-ip-011_keyboardninja
|
https://api.github.com/repos/trpo2021/cw-ip-011_keyboardninja
|
opened
|
Transfer the program to windows
|
bug in process
|
Due to the incorrect operation of the program under Linux, it was decided to create an application for the Windows platform.
|
1.0
|
Transfer the program to windows - Due to the incorrect operation of the program under Linux, it was decided to create an application for the Windows platform.
|
process
|
transfer the program to windows due to the incorrect operation of the program under linux it was decided to create an application for the windows platform
| 1
|
4,209
| 7,176,092,690
|
IssuesEvent
|
2018-01-31 08:51:06
|
nuclio/nuclio
|
https://api.github.com/repos/nuclio/nuclio
|
closed
|
Support platform configuration, inject to processors
|
area/processor
|
A platform level configuration mechanism should be introduced to allow central configuration of:
Logging
Metrics
All processors should be injected this configuration
|
1.0
|
Support platform configuration, inject to processors - A platform level configuration mechanism should be introduced to allow central configuration of:
Logging
Metrics
All processors should be injected this configuration
|
process
|
support platform configuration inject to processors a platform level configuration mechanism should be introduced to allow central configuration of logging metrics all processors should be injected this configuration
| 1
|
8,372
| 3,164,468,351
|
IssuesEvent
|
2015-09-21 04:21:41
|
commercialhaskell/stack
|
https://api.github.com/repos/commercialhaskell/stack
|
closed
|
LGPL licensing restrictions on Windows because of integer-gmp
|
component: documentation
|
Default version of GHC on Windows produces binaries with integer-gmp linked statically. As integer-gmp's license is LGPL, it means that resulting binaries [should be provided with source code or object files](http://www.gnu.org/licenses/gpl-faq.html#LGPLStaticVsDynamic). I guess that's not a thing everyone would want.
On UNIXes integer-gmp is linked dynamically, so these restrictions don't apply.
So, in order to fix that we can either:
* Provide a version of GHC which builds Windows executables with integer-gmp linked dynamically (haven't tried, but seems like [it's possible](http://haskell.forkio.com/gmpwindows)).
* Provide a version of GHC [with integer-simple wired in](https://code.google.com/p/hmpfr/wiki/GHCWithoutGMP) instead of integer-gmp (some packages have problems, but overall works good for me).
* Or at least write a big red warning on stack's homepage that your Haskell software on Windows has to be essentially a free software.
Additional Links:
[ReplacingGMPNotes](https://ghc.haskell.org/trac/ghc/wiki/ReplacingGMPNotes)
UPDATE:
* More up-to-date link: [Design/IntegerGmp2](https://ghc.haskell.org/trac/ghc/wiki/Design/IntegerGmp2)
* Clarification, just to avoid confusion: license of integer-gmp (the Haskell library) is BSD3, but it uses C library [The GNU Multiple Precision Arithmetic Library](https://gmplib.org/) which is LGPL.
|
1.0
|
LGPL licensing restrictions on Windows because of integer-gmp - Default version of GHC on Windows produces binaries with integer-gmp linked statically. As integer-gmp's license is LGPL, it means that resulting binaries [should be provided with source code or object files](http://www.gnu.org/licenses/gpl-faq.html#LGPLStaticVsDynamic). I guess that's not a thing everyone would want.
On UNIXes integer-gmp is linked dynamically, so these restrictions don't apply.
So, in order to fix that we can either:
* Provide a version of GHC which builds Windows executables with integer-gmp linked dynamically (haven't tried, but seems like [it's possible](http://haskell.forkio.com/gmpwindows)).
* Provide a version of GHC [with integer-simple wired in](https://code.google.com/p/hmpfr/wiki/GHCWithoutGMP) instead of integer-gmp (some packages have problems, but overall works good for me).
* Or at least write a big red warning on stack's homepage that your Haskell software on Windows has to be essentially a free software.
Additional Links:
[ReplacingGMPNotes](https://ghc.haskell.org/trac/ghc/wiki/ReplacingGMPNotes)
UPDATE:
* More up-to-date link: [Design/IntegerGmp2](https://ghc.haskell.org/trac/ghc/wiki/Design/IntegerGmp2)
* Clarification, just to avoid confusion: license of integer-gmp (the Haskell library) is BSD3, but it uses C library [The GNU Multiple Precision Arithmetic Library](https://gmplib.org/) which is LGPL.
|
non_process
|
lgpl licensing restrictions on windows because of integer gmp default version of ghc on windows produces binaries with integer gmp linked statically as integer gmp s license is lgpl it means that resulting binaries i guess that s not a thing everyone would want on unixes integer gmp is linked dynamically so these restrictions don t apply so in order to fix that we can either provide a version of ghc which builds windows executables with integer gmp linked dynamically haven t tried but seems like provide a version of ghc instead of integer gmp some packages have problems but overall works good for me or at least write a big red warning on stack s homepage that your haskell software on windows has to be essentially a free software additional links update more up to date link clarification just to avoid confusion license of integer gmp the haskell library is but it uses c library which is lgpl
| 0
|
14,500
| 17,604,292,690
|
IssuesEvent
|
2021-08-17 15:13:32
|
qgis/QGIS-Documentation
|
https://api.github.com/repos/qgis/QGIS-Documentation
|
closed
|
[Processing][needs-docs] Add incremental field: add modulo option (Request in QGIS)
|
Processing Alg 3.22
|
### Request for documentation
From pull request QGIS/qgis#44354
Author: @lbartoletti
QGIS version: 3.22
** [Processing][needs-docs] Add incremental field: add modulo option**
### PR Description:
## Description
This algorithm allows to add a column with an integer that will be incremented
from START to the limit, with the possibility of grouping to resume at the
value of START following the group.
FME also offers another option (well, an other transformer) which is called
modulo counter.
This option will reset the counter to the starting value if the modulo value is
reached... 0 indicates that we don't use the modulo option.
### Commits tagged with [need-docs] or [FEATURE]
"[Processing]\n\n\n\n Add incremental field: add modulo option\n\nThis algorithm allows to add a column with an integer that will be incremented\nfrom START to the limit, with the possibility of grouping to resume at the\nvalue of START following the group.\n\nFME also offers another option (actually another transformer) which is called\nmodulo counter.\n\nThis option will reset the counter to the starting value if the modulo value is\nreached...\n\n0 indicates that we don't use the modulo option."
|
1.0
|
[Processing][needs-docs] Add incremental field: add modulo option (Request in QGIS) - ### Request for documentation
From pull request QGIS/qgis#44354
Author: @lbartoletti
QGIS version: 3.22
** [Processing][needs-docs] Add incremental field: add modulo option**
### PR Description:
## Description
This algorithm allows to add a column with an integer that will be incremented
from START to the limit, with the possibility of grouping to resume at the
value of START following the group.
FME also offers another option (well, an other transformer) which is called
modulo counter.
This option will reset the counter to the starting value if the modulo value is
reached... 0 indicates that we don't use the modulo option.
### Commits tagged with [need-docs] or [FEATURE]
"[Processing]\n\n\n\n Add incremental field: add modulo option\n\nThis algorithm allows to add a column with an integer that will be incremented\nfrom START to the limit, with the possibility of grouping to resume at the\nvalue of START following the group.\n\nFME also offers another option (actually another transformer) which is called\nmodulo counter.\n\nThis option will reset the counter to the starting value if the modulo value is\nreached...\n\n0 indicates that we don't use the modulo option."
|
process
|
add incremental field add modulo option request in qgis request for documentation from pull request qgis qgis author lbartoletti qgis version add incremental field add modulo option pr description description this algorithm allows to add a column with an integer that will be incremented from start to the limit with the possibility of grouping to resume at the value of start following the group fme also offers another option well an other transformer which is called modulo counter this option will reset the counter to the starting value if the modulo value is reached indicates that we don t use the modulo option commits tagged with or n n n n add incremental field add modulo option n nthis algorithm allows to add a column with an integer that will be incremented nfrom start to the limit with the possibility of grouping to resume at the nvalue of start following the group n nfme also offers another option actually another transformer which is called nmodulo counter n nthis option will reset the counter to the starting value if the modulo value is nreached n indicates that we don t use the modulo option
| 1
|
22,755
| 32,075,639,229
|
IssuesEvent
|
2023-09-25 10:48:41
|
h4sh5/pypi-auto-scanner
|
https://api.github.com/repos/h4sh5/pypi-auto-scanner
|
opened
|
reprepbuild 1.2.0 has 3 GuardDog issues
|
guarddog silent-process-execution
|
https://pypi.org/project/reprepbuild
https://inspector.pypi.io/project/reprepbuild
```{
"dependency": "reprepbuild",
"version": "1.2.0",
"result": {
"issues": 3,
"errors": {},
"results": {
"silent-process-execution": [
{
"location": "RepRepBuild-1.2.0/src/reprepbuild/scripts/latex.py:77",
"code": " cp = subprocess.run(\n [f\"{args.latex}\", \"-recorder\", \"-interaction=batchmode\", \"-draftmode\", stem],\n stdin=subprocess.DEVNULL,\n stdout=subprocess.DEVNULL,\n stderr=subprocess.DEVNULL,\n ...\n )",
"message": "This package is silently executing an external binary, redirecting stdout, stderr and stdin to /dev/null"
},
{
"location": "RepRepBuild-1.2.0/src/reprepbuild/scripts/latex.py:94",
"code": " cp = subprocess.run(\n [f\"{args.bibtex}\", stem],\n stdin=subprocess.DEVNULL,\n stdout=subprocess.DEVNULL,\n stderr=subprocess.DEVNULL,\n check=False,\n cwd=workdir,\n ...\n )",
"message": "This package is silently executing an external binary, redirecting stdout, stderr and stdin to /dev/null"
},
{
"location": "RepRepBuild-1.2.0/src/reprepbuild/scripts/latex.py:130",
"code": " cp = subprocess.run(\n [f\"{args.latex}\", \"-recorder\", \"-interaction=batchmode\", stem],\n stdin=subprocess.DEVNULL,\n stdout=subprocess.DEVNULL,\n stderr=subprocess.DEVNULL,\n check=F...\n )",
"message": "This package is silently executing an external binary, redirecting stdout, stderr and stdin to /dev/null"
}
]
},
"path": "/tmp/tmph383_wwi/reprepbuild"
}
}```
|
1.0
|
reprepbuild 1.2.0 has 3 GuardDog issues - https://pypi.org/project/reprepbuild
https://inspector.pypi.io/project/reprepbuild
```{
"dependency": "reprepbuild",
"version": "1.2.0",
"result": {
"issues": 3,
"errors": {},
"results": {
"silent-process-execution": [
{
"location": "RepRepBuild-1.2.0/src/reprepbuild/scripts/latex.py:77",
"code": " cp = subprocess.run(\n [f\"{args.latex}\", \"-recorder\", \"-interaction=batchmode\", \"-draftmode\", stem],\n stdin=subprocess.DEVNULL,\n stdout=subprocess.DEVNULL,\n stderr=subprocess.DEVNULL,\n ...\n )",
"message": "This package is silently executing an external binary, redirecting stdout, stderr and stdin to /dev/null"
},
{
"location": "RepRepBuild-1.2.0/src/reprepbuild/scripts/latex.py:94",
"code": " cp = subprocess.run(\n [f\"{args.bibtex}\", stem],\n stdin=subprocess.DEVNULL,\n stdout=subprocess.DEVNULL,\n stderr=subprocess.DEVNULL,\n check=False,\n cwd=workdir,\n ...\n )",
"message": "This package is silently executing an external binary, redirecting stdout, stderr and stdin to /dev/null"
},
{
"location": "RepRepBuild-1.2.0/src/reprepbuild/scripts/latex.py:130",
"code": " cp = subprocess.run(\n [f\"{args.latex}\", \"-recorder\", \"-interaction=batchmode\", stem],\n stdin=subprocess.DEVNULL,\n stdout=subprocess.DEVNULL,\n stderr=subprocess.DEVNULL,\n check=F...\n )",
"message": "This package is silently executing an external binary, redirecting stdout, stderr and stdin to /dev/null"
}
]
},
"path": "/tmp/tmph383_wwi/reprepbuild"
}
}```
|
process
|
reprepbuild has guarddog issues dependency reprepbuild version result issues errors results silent process execution location reprepbuild src reprepbuild scripts latex py code cp subprocess run n n stdin subprocess devnull n stdout subprocess devnull n stderr subprocess devnull n n message this package is silently executing an external binary redirecting stdout stderr and stdin to dev null location reprepbuild src reprepbuild scripts latex py code cp subprocess run n n stdin subprocess devnull n stdout subprocess devnull n stderr subprocess devnull n check false n cwd workdir n n message this package is silently executing an external binary redirecting stdout stderr and stdin to dev null location reprepbuild src reprepbuild scripts latex py code cp subprocess run n n stdin subprocess devnull n stdout subprocess devnull n stderr subprocess devnull n check f n message this package is silently executing an external binary redirecting stdout stderr and stdin to dev null path tmp wwi reprepbuild
| 1
|
6,260
| 9,218,632,320
|
IssuesEvent
|
2019-03-11 13:50:07
|
geneontology/go-ontology
|
https://api.github.com/repos/geneontology/go-ontology
|
closed
|
NTR and ontology revisions for plant immunity terms
|
PomBase multi-species process viruses and symbionts
|
Attention
@lreiser (TAIR)
@ebakker2 (TAIR)
@tberardini (TAIR)
@CuzickA (PHI_Base)
As part of the PHI-Base project we are building and ontology to represent and annotate pathogen host interaction phenotypes. These terms will be logically defined using GO terms to describe 'normal' processes . We will also provide GO annotations for pathogen species and occasionaly for host species.
As part of the tool development I was curating this paper:
PMID:20601497 Activation of an Arabidopsis resistance protein is specified by the in planta association of its leucine-rich repeat domain with the cognate oomycete effector.
Authors | Krasileva KV, Dahlbeck D, Staskawicz BJ
And we encounterd a few problems with the GO terms we would to use to describe the Arabidopsis protein.
Background for GO editors
Plants have evolved a multilevel innate immune system to protect them against infection by a diverse range of pathogens, including viruses, bacteria, fungi, oomycetes, and nematodes. Despite the great evolutionary distance among phytopathogens, the out- come of the plant–pathogen interactions is controlled by the same principles: the ability of the pathogen to suppress the plant immune system to establish infection, and the ability of plants to recognize the presence of a pathogen and to induce immune responses that restrict pathogen growth.
Plants innate immune systems have 2 main components:
1. First line of defense: PAMP- triggered immunity (PTI)
Upon association with PAMPs, the pattern-recognition receptors activate a downstream mitogen-activated protein kinase signaling cascade that culminates in transcriptional activation and generation of the innate immune responses
To interfere with PTI effector proteins that are delivered into and function within the host plant cells
2. Second line of defense: Effector-triggered immunity
https://en.wikipedia.org/wiki/Effector-triggered_immunity
The second layer of plant immunity depends on the ability of the plant to recognize these pathogen- derived effectors and trigger a robust resistance response that normally culminates in a hypersensitive cell death response (HR)
Effector-triggered immunity is mediated by a large group of structurally related intracellular innate immune receptors encoded by resistance (R) genes.
RPP1, MOLECULAR FUNCTION
So RPP1 is an "intracellular signalling receptor" for the "pathogen-derived effectors"
We couldn't find anything suitable for this under "signaling receptor activity"
https://www.ebi.ac.uk/QuickGO/term/GO:0038023
SUGGEST
NTR:
intracellular innate receptor activity
synonym related resistance gene
synonym related receptor recognition
Combining with an effector entity produced by a pathogen in order to activate effector-triggered immunity in a host.
PMID: 21490299 is one paper that uses this term to describe such receptors, however the term might not be specific enough to describe effector-triggered immunity.
It would be OK to make the activity definition more general if necessary (we can link the activity to the process to capture this detail).
Notes/Comments:
- "intracellular innate receptor activity" may exist in other contexts, in which case this would need to be revised?
- It seems reasonable to describe receptors based on their ligands (i.e specifically a 'pathogen effector' , but note that pathogen 'effectors are fairly heterogeneous).
- we tried to make this generic for effector-triggered immunity
- "intracellular innate receptor activity" most slosely matched the community descriptions we could find. Generally these types of receptors are just called "resistance genes" which is not informative about the function.
RPP1 BIOLOGICAL PROCESS
~We looked for the term to describe the process
"Effector-triggered immunity" and eventually located
GO:0080185 effector dependent induction by symbiont of host immune response
which is *very nicely defined*
Any process that involves recognition of an effector, and by which an organism activates, maintains or increases the frequency, rate or extent of the immune response of the host organism; the immune response is any immune system process that functions in the calibrated response of an organism to a potential internal or invasive threat. The host is defined as the larger of the organisms involved in a symbiotic interaction. Effectors are proteins secreted into the host cell by pathogenic microbes, presumably to alter host immune response signaling. The best characterized effectors are bacterial effectors delivered into the host cell by type III secretion system (TTSS). Effector-triggered immunity (ETI) involves the direct or indirect recognition of an effector protein by the host (for example through plant resistance or R proteins) and subsequent activation of host immune response. PMID:16497589
but is there any reason why the primary term name can't be
"Effector-triggered immunity"? 2,440,000 results
"effector dependent induction by symbiont of host immune response" About 223 results
presumably all from GO.
I very nearly missed this term. And it only has 5 annotations which is no surprise!~
transferred to
https://github.com/geneontology/go-ontology/issues/17014
|
1.0
|
NTR and ontology revisions for plant immunity terms - Attention
@lreiser (TAIR)
@ebakker2 (TAIR)
@tberardini (TAIR)
@CuzickA (PHI_Base)
As part of the PHI-Base project we are building and ontology to represent and annotate pathogen host interaction phenotypes. These terms will be logically defined using GO terms to describe 'normal' processes . We will also provide GO annotations for pathogen species and occasionaly for host species.
As part of the tool development I was curating this paper:
PMID:20601497 Activation of an Arabidopsis resistance protein is specified by the in planta association of its leucine-rich repeat domain with the cognate oomycete effector.
Authors | Krasileva KV, Dahlbeck D, Staskawicz BJ
And we encounterd a few problems with the GO terms we would to use to describe the Arabidopsis protein.
Background for GO editors
Plants have evolved a multilevel innate immune system to protect them against infection by a diverse range of pathogens, including viruses, bacteria, fungi, oomycetes, and nematodes. Despite the great evolutionary distance among phytopathogens, the out- come of the plant–pathogen interactions is controlled by the same principles: the ability of the pathogen to suppress the plant immune system to establish infection, and the ability of plants to recognize the presence of a pathogen and to induce immune responses that restrict pathogen growth.
Plants innate immune systems have 2 main components:
1. First line of defense: PAMP- triggered immunity (PTI)
Upon association with PAMPs, the pattern-recognition receptors activate a downstream mitogen-activated protein kinase signaling cascade that culminates in transcriptional activation and generation of the innate immune responses
To interfere with PTI effector proteins that are delivered into and function within the host plant cells
2. Second line of defense: Effector-triggered immunity
https://en.wikipedia.org/wiki/Effector-triggered_immunity
The second layer of plant immunity depends on the ability of the plant to recognize these pathogen- derived effectors and trigger a robust resistance response that normally culminates in a hypersensitive cell death response (HR)
Effector-triggered immunity is mediated by a large group of structurally related intracellular innate immune receptors encoded by resistance (R) genes.
RPP1, MOLECULAR FUNCTION
So RPP1 is an "intracellular signalling receptor" for the "pathogen-derived effectors"
We couldn't find anything suitable for this under "signaling receptor activity"
https://www.ebi.ac.uk/QuickGO/term/GO:0038023
SUGGEST
NTR:
intracellular innate receptor activity
synonym related resistance gene
synonym related receptor recognition
Combining with an effector entity produced by a pathogen in order to activate effector-triggered immunity in a host.
PMID: 21490299 is one paper that uses this term to describe such receptors, however the term might not be specific enough to describe effector-triggered immunity.
It would be OK to make the activity definition more general if necessary (we can link the activity to the process to capture this detail).
Notes/Comments:
- "intracellular innate receptor activity" may exist in other contexts, in which case this would need to be revised?
- It seems reasonable to describe receptors based on their ligands (i.e specifically a 'pathogen effector' , but note that pathogen 'effectors are fairly heterogeneous).
- we tried to make this generic for effector-triggered immunity
- "intracellular innate receptor activity" most slosely matched the community descriptions we could find. Generally these types of receptors are just called "resistance genes" which is not informative about the function.
RPP1 BIOLOGICAL PROCESS
~We looked for the term to describe the process
"Effector-triggered immunity" and eventually located
GO:0080185 effector dependent induction by symbiont of host immune response
which is *very nicely defined*
Any process that involves recognition of an effector, and by which an organism activates, maintains or increases the frequency, rate or extent of the immune response of the host organism; the immune response is any immune system process that functions in the calibrated response of an organism to a potential internal or invasive threat. The host is defined as the larger of the organisms involved in a symbiotic interaction. Effectors are proteins secreted into the host cell by pathogenic microbes, presumably to alter host immune response signaling. The best characterized effectors are bacterial effectors delivered into the host cell by type III secretion system (TTSS). Effector-triggered immunity (ETI) involves the direct or indirect recognition of an effector protein by the host (for example through plant resistance or R proteins) and subsequent activation of host immune response. PMID:16497589
but is there any reason why the primary term name can't be
"Effector-triggered immunity"? 2,440,000 results
"effector dependent induction by symbiont of host immune response" About 223 results
presumably all from GO.
I very nearly missed this term. And it only has 5 annotations which is no surprise!~
transferred to
https://github.com/geneontology/go-ontology/issues/17014
|
process
|
ntr and ontology revisions for plant immunity terms attention lreiser tair tair tberardini tair cuzicka phi base as part of the phi base project we are building and ontology to represent and annotate pathogen host interaction phenotypes these terms will be logically defined using go terms to describe normal processes we will also provide go annotations for pathogen species and occasionaly for host species as part of the tool development i was curating this paper pmid activation of an arabidopsis resistance protein is specified by the in planta association of its leucine rich repeat domain with the cognate oomycete effector authors krasileva kv dahlbeck d staskawicz bj and we encounterd a few problems with the go terms we would to use to describe the arabidopsis protein background for go editors plants have evolved a multilevel innate immune system to protect them against infection by a diverse range of pathogens including viruses bacteria fungi oomycetes and nematodes despite the great evolutionary distance among phytopathogens the out come of the plant–pathogen interactions is controlled by the same principles the ability of the pathogen to suppress the plant immune system to establish infection and the ability of plants to recognize the presence of a pathogen and to induce immune responses that restrict pathogen growth plants innate immune systems have main components first line of defense pamp triggered immunity pti upon association with pamps the pattern recognition receptors activate a downstream mitogen activated protein kinase signaling cascade that culminates in transcriptional activation and generation of the innate immune responses to interfere with pti effector proteins that are delivered into and function within the host plant cells second line of defense effector triggered immunity the second layer of plant immunity depends on the ability of the plant to recognize these pathogen derived effectors and trigger a robust resistance response that normally 
culminates in a hypersensitive cell death response hr effector triggered immunity is mediated by a large group of structurally related intracellular innate immune receptors encoded by resistance r genes molecular function so is an intracellular signalling receptor for the pathogen derived effectors we couldn t find anything suitable for this under signaling receptor activity suggest ntr intracellular innate receptor activity synonym related resistance gene synonym related receptor recognition combining with an effector entity produced by a pathogen in order to activate effector triggered immunity in a host pmid is one paper that uses this term to describe such receptors however the term might not be specific enough to describe effector triggered immunity it would be ok to make the activity definition more general if necessary we can link the activity to the process to capture this detail notes comments intracellular innate receptor activity may exist in other contexts in which case this would need to be revised it seems reasonable to describe receptors based on their ligands i e specifically a pathogen effector but note that pathogen effectors are fairly heterogeneous we tried to make this generic for effector triggered immunity intracellular innate receptor activity most slosely matched the community descriptions we could find generally these types of receptors are just called resistance genes which is not informative about the function biological process we looked for the term to describe the process effector triggered immunity and eventually located go effector dependent induction by symbiont of host immune response which is very nicely defined any process that involves recognition of an effector and by which an organism activates maintains or increases the frequency rate or extent of the immune response of the host organism the immune response is any immune system process that functions in the calibrated response of an organism to a potential internal or 
invasive threat the host is defined as the larger of the organisms involved in a symbiotic interaction effectors are proteins secreted into the host cell by pathogenic microbes presumably to alter host immune response signaling the best characterized effectors are bacterial effectors delivered into the host cell by type iii secretion system ttss effector triggered immunity eti involves the direct or indirect recognition of an effector protein by the host for example through plant resistance or r proteins and subsequent activation of host immune response pmid but is there any reason why the primary term name can t be effector triggered immunity results effector dependent induction by symbiont of host immune response about results presumably all from go i very nearly missed this term and it only has annotations which is no surprise transferred to
| 1
|
5,528
| 8,387,502,850
|
IssuesEvent
|
2018-10-09 00:49:00
|
w3c/transitions
|
https://api.github.com/repos/w3c/transitions
|
opened
|
Joint publication and obsoleting/superseding
|
Process Issue
|
When proposing to obsolete/supersede a specification that is joint publication, we need to check with the other organization before moving forward, e.g. #87
|
1.0
|
Joint publication and obsoleting/superseding - When proposing to obsolete/supersede a specification that is joint publication, we need to check with the other organization before moving forward, e.g. #87
|
process
|
joint publication and obsoleting superseding when proposing to obsolete supersede a specification that is joint publication we need to check with the other organization before moving forward e g
| 1
|
7,363
| 10,509,472,642
|
IssuesEvent
|
2019-09-27 11:05:01
|
Remosy/DropTheGame
|
https://api.github.com/repos/Remosy/DropTheGame
|
closed
|
Ah...Illposted IRL...Bless me
|
Inprocessing
|
1. Implement MLS magnitude least square
2. Test Grad-CAM
3. Implement GAIL
4. Test GAIL
|
1.0
|
Ah...Illposted IRL...Bless me - 1. Implement MLS magnitude least square
2. Test Grad-CAM
3. Implement GAIL
4. Test GAIL
|
process
|
ah illposted irl bless me implement mls magnitude least square test grad cam implement gail test gail
| 1
|
101,584
| 16,516,333,191
|
IssuesEvent
|
2021-05-26 10:04:47
|
kijunb33/a
|
https://api.github.com/repos/kijunb33/a
|
opened
|
CVE-2020-9484 (High) detected in tomcat-embed-core-7.0.90.jar
|
security vulnerability
|
## CVE-2020-9484 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>tomcat-embed-core-7.0.90.jar</b></p></summary>
<p>Core Tomcat implementation</p>
<p>Library home page: <a href="https://tomcat.apache.org/">https://tomcat.apache.org/</a></p>
<p>Path to vulnerable library: a/tomcat-embed-core-7.0.90.jar</p>
<p>
Dependency Hierarchy:
- :x: **tomcat-embed-core-7.0.90.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://api.github.com/repos/kijunb33/a/commits/229cd2769ee7af8875f909c851082d25a15c6701">229cd2769ee7af8875f909c851082d25a15c6701</a></p>
<p>Found in base branch: <b>main</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
When using Apache Tomcat versions 10.0.0-M1 to 10.0.0-M4, 9.0.0.M1 to 9.0.34, 8.5.0 to 8.5.54 and 7.0.0 to 7.0.103 if a) an attacker is able to control the contents and name of a file on the server; and b) the server is configured to use the PersistenceManager with a FileStore; and c) the PersistenceManager is configured with sessionAttributeValueClassNameFilter="null" (the default unless a SecurityManager is used) or a sufficiently lax filter to allow the attacker provided object to be deserialized; and d) the attacker knows the relative file path from the storage location used by FileStore to the file the attacker has control over; then, using a specifically crafted request, the attacker will be able to trigger remote code execution via deserialization of the file under their control. Note that all of conditions a) to d) must be true for the attack to succeed.
<p>Publish Date: 2020-05-20
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-9484>CVE-2020-9484</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.0</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: High
- Privileges Required: Low
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-9484">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-9484</a></p>
<p>Release Date: 2020-05-20</p>
<p>Fix Resolution: org.apache.tomcat.embed:tomcat-embed-core:7.0.104,8.5.55,9.0.35,10.0.0-M5,org.apache.tomcat:tomcat-catalina:7.0.104,8.5.55,9.0.35,10.0.0-M5</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
True
|
CVE-2020-9484 (High) detected in tomcat-embed-core-7.0.90.jar - ## CVE-2020-9484 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>tomcat-embed-core-7.0.90.jar</b></p></summary>
<p>Core Tomcat implementation</p>
<p>Library home page: <a href="https://tomcat.apache.org/">https://tomcat.apache.org/</a></p>
<p>Path to vulnerable library: a/tomcat-embed-core-7.0.90.jar</p>
<p>
Dependency Hierarchy:
- :x: **tomcat-embed-core-7.0.90.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://api.github.com/repos/kijunb33/a/commits/229cd2769ee7af8875f909c851082d25a15c6701">229cd2769ee7af8875f909c851082d25a15c6701</a></p>
<p>Found in base branch: <b>main</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
When using Apache Tomcat versions 10.0.0-M1 to 10.0.0-M4, 9.0.0.M1 to 9.0.34, 8.5.0 to 8.5.54 and 7.0.0 to 7.0.103 if a) an attacker is able to control the contents and name of a file on the server; and b) the server is configured to use the PersistenceManager with a FileStore; and c) the PersistenceManager is configured with sessionAttributeValueClassNameFilter="null" (the default unless a SecurityManager is used) or a sufficiently lax filter to allow the attacker provided object to be deserialized; and d) the attacker knows the relative file path from the storage location used by FileStore to the file the attacker has control over; then, using a specifically crafted request, the attacker will be able to trigger remote code execution via deserialization of the file under their control. Note that all of conditions a) to d) must be true for the attack to succeed.
<p>Publish Date: 2020-05-20
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-9484>CVE-2020-9484</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.0</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: High
- Privileges Required: Low
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-9484">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-9484</a></p>
<p>Release Date: 2020-05-20</p>
<p>Fix Resolution: org.apache.tomcat.embed:tomcat-embed-core:7.0.104,8.5.55,9.0.35,10.0.0-M5,org.apache.tomcat:tomcat-catalina:7.0.104,8.5.55,9.0.35,10.0.0-M5</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
non_process
|
cve high detected in tomcat embed core jar cve high severity vulnerability vulnerable library tomcat embed core jar core tomcat implementation library home page a href path to vulnerable library a tomcat embed core jar dependency hierarchy x tomcat embed core jar vulnerable library found in head commit a href found in base branch main vulnerability details when using apache tomcat versions to to to and to if a an attacker is able to control the contents and name of a file on the server and b the server is configured to use the persistencemanager with a filestore and c the persistencemanager is configured with sessionattributevalueclassnamefilter null the default unless a securitymanager is used or a sufficiently lax filter to allow the attacker provided object to be deserialized and d the attacker knows the relative file path from the storage location used by filestore to the file the attacker has control over then using a specifically crafted request the attacker will be able to trigger remote code execution via deserialization of the file under their control note that all of conditions a to d must be true for the attack to succeed publish date url a href cvss score details base score metrics exploitability metrics attack vector local attack complexity high privileges required low user interaction none scope unchanged impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution org apache tomcat embed tomcat embed core org apache tomcat tomcat catalina step up your open source security game with whitesource
| 0
|
12,118
| 14,740,669,904
|
IssuesEvent
|
2021-01-07 09:27:08
|
kdjstudios/SABillingGitlab
|
https://api.github.com/repos/kdjstudios/SABillingGitlab
|
closed
|
URGENT! SAB can't add account under the customer site 071 billings
|
anc-process anp-urgent ant-bug
|
In GitLab by @kdjstudios on Nov 26, 2018, 15:18
**Submitted by:** <sarah.king@answernet.com>
**Helpdesk:** http://www.servicedesk.answernet.com/profiles/ticket/6151073
**Server:** Internal
**Client/Site:** Billings
**Account:** URGENT! SAB can't add account under the customer site 071 billings
**Issue:**
I am trying to add an account under a customer in SAB and it is not letting me all it says when I click on the correct button is “we’re sorry, but something went wrong” can this be fixed asap so I can add my missing accounts so I can have my billing started please?
|
1.0
|
URGENT! SAB can't add account under the customer site 071 billings - In GitLab by @kdjstudios on Nov 26, 2018, 15:18
**Submitted by:** <sarah.king@answernet.com>
**Helpdesk:** http://www.servicedesk.answernet.com/profiles/ticket/6151073
**Server:** Internal
**Client/Site:** Billings
**Account:** URGENT! SAB can't add account under the customer site 071 billings
**Issue:**
I am trying to add an account under a customer in SAB and it is not letting me all it says when I click on the correct button is “we’re sorry, but something went wrong” can this be fixed asap so I can add my missing accounts so I can have my billing started please?
|
process
|
urgent sab can t add account under the customer site billings in gitlab by kdjstudios on nov submitted by helpdesk server internal client site billings account urgent sab can t add account under the customer site billings issue i am trying to add an account under a customer in sab and it is not letting me all it says when i click on the correct button is “we’re sorry but something went wrong” can this be fixed asap so i can add my missing accounts so i can have my billing started please
| 1
|
623,801
| 19,679,798,644
|
IssuesEvent
|
2022-01-11 15:45:10
|
godaddy-wordpress/coblocks
|
https://api.github.com/repos/godaddy-wordpress/coblocks
|
closed
|
ISBAT use the Map block with React hooks
|
[Type] Bug [Priority] High
|
### Describe the bug:
<!-- A clear and concise description of what the bug is, using ISBAT format "I should be able to..." -->
Recent [refactor of Map block to React hooks](https://github.com/godaddy-wordpress/coblocks/pull/2010) did not cover the case of API key use. Standard block behavior is to use a simple Iframe and source the Google API. The API key method is to use a 3rd party package `react-google-maps` for its Google Maps Implementation. The way in which the component functions is not compatible with React Hooks and only functions in this context within class-based components. Part of the issue has to do with the fact that `react-google-maps` was written using a now legacy version of React and has features that are now deprecated, specifically context.
With the [Performance Metrics PR, the Map block has been reverted back to a Class-based component](https://github.com/godaddy-wordpress/coblocks/pull/2031/commits/aebbc78e21b081edb9260b870b5bb03a0c9615a9). We must investigate what it will take to refactor the Google Maps API key functionality into a state where we have functionality with hooks. I posit that perhaps we should consider looking for an alternate (more up-to-date) or write our own solution to be housed within CoBlocks codebase.
- [The block should be able to deprecate from the following block markup](https://github.com/godaddy-wordpress/coblocks/issues/1596)
```
<!-- wp:coblocks/map {"address":"Scottsdale Arizona","pinned":true,"height":534,"align":"full"} -->
<div style="min-height:534px" data-map-attr="/qaddress/q:/scottsdale arizona/q||/qlat/q:/qundefined/q||/qlng/q:/qundefined/q||/qskin/q:/qstandard/q||/qzoom/q:/q12/q||/qiconSize/q:/q36/q||/qmapTypeControl/q:/qtrue/q||/qzoomControl/q:/qtrue/q||/qstreetViewControl/q:/qtrue/q||/qfullscreenControl/q:/qtrue/q" class="wp-block-coblocks-map alignfull"><iframe title="Google Map" frameborder="0" style="width:100%;min-height:534px" src="https://www.google.com/maps?q=scottsdale%20arizona&language=ja&output=embed&hl=%s&z=12"></iframe></div>
<!-- /wp:coblocks/map -->
```
- [IS not use jQuery in the Map block refactor](https://github.com/godaddy-wordpress/coblocks/issues/1365)
- [The Map block is affected by specific conditional user permissions that can affect UX. This is a bit of an outstanding issue that also affects the Gist block but it should be resolved for the Map block here.](https://github.com/godaddy-wordpress/coblocks/issues/175) - We might be able to resolve this issue by using an `oembed` as suggested in the linked thread. We might also need to go more of an SSR route but I am not positive either way. That research is part of the refactors.
|
1.0
|
ISBAT use the Map block with React hooks - ### Describe the bug:
<!-- A clear and concise description of what the bug is, using ISBAT format "I should be able to..." -->
Recent [refactor of Map block to React hooks](https://github.com/godaddy-wordpress/coblocks/pull/2010) did not cover the case of API key use. Standard block behavior is to use a simple Iframe and source the Google API. The API key method is to use a 3rd party package `react-google-maps` for its Google Maps Implementation. The way in which the component functions is not compatible with React Hooks and only functions in this context within class-based components. Part of the issue has to do with the fact that `react-google-maps` was written using a now legacy version of React and has features that are now deprecated, specifically context.
With the [Performance Metrics PR, the Map block has been reverted back to a Class-based component](https://github.com/godaddy-wordpress/coblocks/pull/2031/commits/aebbc78e21b081edb9260b870b5bb03a0c9615a9). We must investigate what it will take to refactor the Google Maps API key functionality into a state where we have functionality with hooks. I posit that perhaps we should consider looking for an alternate (more up-to-date) or write our own solution to be housed within CoBlocks codebase.
- [The block should be able to deprecate from the following block markup](https://github.com/godaddy-wordpress/coblocks/issues/1596)
```
<!-- wp:coblocks/map {"address":"Scottsdale Arizona","pinned":true,"height":534,"align":"full"} -->
<div style="min-height:534px" data-map-attr="/qaddress/q:/scottsdale arizona/q||/qlat/q:/qundefined/q||/qlng/q:/qundefined/q||/qskin/q:/qstandard/q||/qzoom/q:/q12/q||/qiconSize/q:/q36/q||/qmapTypeControl/q:/qtrue/q||/qzoomControl/q:/qtrue/q||/qstreetViewControl/q:/qtrue/q||/qfullscreenControl/q:/qtrue/q" class="wp-block-coblocks-map alignfull"><iframe title="Google Map" frameborder="0" style="width:100%;min-height:534px" src="https://www.google.com/maps?q=scottsdale%20arizona&language=ja&output=embed&hl=%s&z=12"></iframe></div>
<!-- /wp:coblocks/map -->
```
- [IS not use jQuery in the Map block refactor](https://github.com/godaddy-wordpress/coblocks/issues/1365)
- [The Map block is affected by specific conditional user permissions that can affect UX. This is a bit of an outstanding issue that also affects the Gist block but it should be resolved for the Map block here.](https://github.com/godaddy-wordpress/coblocks/issues/175) - We might be able to resolve this issue by using an `oembed` as suggested in the linked thread. We might also need to go more of an SSR route but I am not positive either way. That research is part of the refactors.
|
non_process
|
isbat use the map block with react hooks describe the bug recent did not cover the case of api key use standard block behavior is to use a simple iframe and source the google api the api key method is to use a party package react google maps for its google maps implementation the way in which the component functions is not compatible with react hooks and only functions in this context within class based components part of the issue has to do with the fact that react google maps was written using a now legacy version of react and has features that are now deprecated specifically context with the we must investigate what it will take to refactor the google maps api key functionality into a state where we have functionality with hooks i posit that perhaps we should consider looking for an alternate more up to date or write our own solution to be housed within coblocks codebase iframe title google map frameborder style width min height src we might be able to resolve this issue by using an oembed as suggested in the linked thread we might also need to go more of a ssr route but i am not positive either way that research is part of the refactors
| 0
|
291,236
| 25,131,709,465
|
IssuesEvent
|
2022-11-09 15:34:52
|
yesidc/toolbox
|
https://api.github.com/repos/yesidc/toolbox
|
closed
|
Overview page: "contact us" in prominent place?
|
question GUI 1st Prototype usability test
|
Should we add a structure element to the overview page for reaching out to us / DLL?
|
1.0
|
Overview page: "contact us" in prominent place? - Should we add a structure element to the overview page for reaching out to us / DLL?
|
non_process
|
overview page contact us in prominent place should we add a structure element to the overview page for reaching out to us dll
| 0
|
15,772
| 19,915,730,983
|
IssuesEvent
|
2022-01-25 22:22:44
|
swig/swig
|
https://api.github.com/repos/swig/swig
|
opened
|
-includeall replacement?
|
preprocessor
|
The current `-includeall` seems of very limited use in practice.
It seems to be intended to handle the case where a library has multiple headers which are essentially an implementation detail with one public header including the others. C/C++ code using the library should explicitly include only the one public header, but to wrap it with SWIG you either need to explicitly `%include` all the headers which are implementation details (which is brittle to such headers getting added, removed and renamed in new releases) or use `-includeall`.
Its big flaw, though, is that it will try to follow any `#include`, and library headers are likely to include at least one system header somewhere (that's especially true for C++ libraries). Even if the current version of the library headers don't, it's brittle to a new version of the library adding `#include <stdio.h>` or similar somewhere.
A tempting idea is to discriminate `<...>` and `"..."` but usage isn't consistent enough for that to actually help - a quick grep in the headers installed on my system finds counterexamples in both directions - here's a random example of each:
```
minizip/ioapi.h:#include "stdint.h"
eigen3/Eigen/KLUSupport:#include <Eigen/SparseCore>
```
Taking a step back, there are a couple of common patterns here:
* public header `/usr/include/foo.h` and the implementation headers in `/usr/include/foo/`
* public header `/usr/include/foo/foo.h` (or e.g. `foo/main.h`) and the implementation headers in `/usr/include/foo/` (or all in a subdirectory of that)
So a way to say `-includeallunder=/usr/include/foo` or perhaps better as an attribute to `%include` like `%include(followunder=/usr/include/foo) "foo.h"` would be a lot more useful.
I think this would probably need to quietly ignore a header that can't be found relative to the directory of the including header. SWIG could be taught all the standard header names perhaps but that seems a maintenance nightmare once you get into OS-specific system headers.
Thoughts?
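The proposed `followunder` behaviour can be sketched outside SWIG. The helper below is a hypothetical Python illustration (not SWIG code, and the names are made up): it follows `#include` directives only when they resolve to a file under a given root directory, and quietly ignores everything else, system headers included.

```python
import re
from pathlib import Path

INCLUDE_RE = re.compile(r'^\s*#\s*include\s*[<"]([^>"]+)[>"]')

def collect_headers(entry, follow_root):
    """Collect `entry` plus every header it transitively includes,
    following only includes that resolve to a file under `follow_root`.
    Includes that cannot be found there (system headers, say) are
    quietly ignored, as proposed above."""
    follow_root = Path(follow_root).resolve()
    seen, stack = [], [Path(entry).resolve()]
    while stack:
        header = stack.pop()
        if header in seen or not header.is_file():
            continue
        seen.append(header)
        for line in header.read_text(errors="replace").splitlines():
            match = INCLUDE_RE.match(line)
            if not match:
                continue
            # Try relative to the including header first, then the root.
            for base in (header.parent, follow_root):
                candidate = (base / match.group(1)).resolve()
                if candidate.is_file() and follow_root in candidate.parents:
                    stack.append(candidate)
                    break
    return seen
```

Resolving first against the including header's directory and then against the root covers both the `/usr/include/foo.h` + `/usr/include/foo/` layout and the `/usr/include/foo/foo.h` layout described above.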
|
1.0
|
-includeall replacement? - The current `-includeall` seems of very limited use in practice.
It seems to be intended to handle the case where a library has multiple headers which are essentially an implementation detail with one public header including the others. C/C++ code using the library should explicitly include only the one public header, but to wrap it with SWIG you either need to explicitly `%include` all the headers which are implementation details (which is brittle to such headers getting added, removed and renamed in new releases) or use `-includeall`.
Its big flaw, though, is that it will try to follow any `#include`, and library headers are likely to include at least one system header somewhere (that's especially true for C++ libraries). Even if the current version of the library headers don't, it's brittle to a new version of the library adding `#include <stdio.h>` or similar somewhere.
A tempting idea is to discriminate `<...>` and `"..."` but usage isn't consistent enough for that to actually help - a quick grep in the headers installed on my system finds counterexamples in both directions - here's a random example of each:
```
minizip/ioapi.h:#include "stdint.h"
eigen3/Eigen/KLUSupport:#include <Eigen/SparseCore>
```
Taking a step back, there are a couple of common patterns here:
* public header `/usr/include/foo.h` and the implementation headers in `/usr/include/foo/`
* public header `/usr/include/foo/foo.h` (or e.g. `foo/main.h`) and the implementation headers in `/usr/include/foo/` (or all in a subdirectory of that)
So a way to say `-includeallunder=/usr/include/foo` or perhaps better as an attribute to `%include` like `%include(followunder=/usr/include/foo) "foo.h"` would be a lot more useful.
I think this would probably need to quietly ignore a header that can't be found relative to the directory of the including header. SWIG could be taught all the standard header names perhaps but that seems a maintenance nightmare once you get into OS-specific system headers.
Thoughts?
|
process
|
includeall replacement the current includeall seems of very limited use in practice it seems to be intended to handle the case where a library has multiple headers which are essentially an implementation detail with one public header including the others c c code using the library should explicitly include only the one public header but to wrap it with swig you either need to explicitly include all the headers which are implementation details which is brittle to such headers getting added removed and renamed in new releases or use includeall it s big flaw though is that it will try to follow any include and library headers are likely to include at least one system header somewhere that s especially true for c libraries even if the current version of the library headers don t it s brittle to a new version of the library adding include or similar somewhere a tempting idea is to discriminate and but usage isn t consistent enough for that to actually help a quick grep in the headers installed on my system finds counterexamples in both directions here s a random example of each minizip ioapi h include stdint h eigen klusupport include taking a step back there are a couple of common patterns here public header usr include foo h and the implementation headers in usr include foo public header usr include foo foo h or e g foo main h and the implementation headers in usr include foo or all in a subdirectory of that so a way to say includeallunder usr include foo or perhaps better as an attribute to include like include followunder usr include foo foo h would be a lot more useful i think this would probably need to quietly ignore a header that can t be found relative to the directory of the including header swig could be taught all the standard header names perhaps but that seems a maintenance nightmare once you get into os specific system headers thoughts
| 1
|
444,311
| 31,032,228,967
|
IssuesEvent
|
2023-08-10 13:11:28
|
risc0/risc0
|
https://api.github.com/repos/risc0/risc0
|
opened
|
Clarify expectations for build/run times
|
documentation
|
Initial build times are quite long, and users currently don't have enough information available to set expectations about how long things should take.
We should add some notes to help users set appropriate expectations.
|
1.0
|
Clarify expectations for build/run times - Initial build times are quite long, and users currently don't have enough information available to set expectations about how long things should take.
We should add some notes to help users set appropriate expectations.
|
non_process
|
clarify expectations for build run times initial build times are quite long and users currently don t have enough information available to set expectations about how long things should take we should add some notes to help users set appropriate expectations
| 0
|
11,622
| 14,484,344,417
|
IssuesEvent
|
2020-12-10 16:13:32
|
GoogleCloudPlatform/java-docs-samples
|
https://api.github.com/repos/GoogleCloudPlatform/java-docs-samples
|
closed
|
hBase Bigtable uses Unsafe, eventually will be deprecated.
|
api: bigtable priority: p2 samples type: process
|
Found in #4036
```
------------------------------------------------------------
- testing bigtable/beam/helloworld
------------------------------------------------------------
[ERROR] WARNING: An illegal reflective access operation has occurred
[ERROR] WARNING: Illegal reflective access by org.apache.hadoop.hbase.util.UnsafeAvailChecker (file:/root/.m2/repository/org/apache/hbase/hbase-shaded-client/1.4.12/hbase-shaded-client-1.4.12.jar) to method java.nio.Bits.unaligned()
[ERROR] WARNING: Please consider reporting this to the maintainers of org.apache.hadoop.hbase.util.UnsafeAvailChecker
[ERROR] WARNING: Use --illegal-access=warn to enable warnings of further illegal reflective access operations
[ERROR] WARNING: All illegal access operations will be denied in a future release
Testing completed.
------------------------------------------------------------
- testing bigtable/beam/keyviz-art
------------------------------------------------------------
[ERROR] WARNING: An illegal reflective access operation has occurred
[ERROR] WARNING: Illegal reflective access by org.apache.hadoop.hbase.util.UnsafeAvailChecker (file:/root/.m2/repository/org/apache/hbase/hbase-shaded-client/1.4.12/hbase-shaded-client-1.4.12.jar) to method java.nio.Bits.unaligned()
[ERROR] WARNING: Please consider reporting this to the maintainers of org.apache.hadoop.hbase.util.UnsafeAvailChecker
[ERROR] WARNING: Use --illegal-access=warn to enable warnings of further illegal reflective access operations
[ERROR] WARNING: All illegal access operations will be denied in a future release
Testing completed.
------------------------------------------------------------
- testing bigtable/hbase/snippets
------------------------------------------------------------
[ERROR] WARNING: An illegal reflective access operation has occurred
[ERROR] WARNING: Illegal reflective access by org.apache.hadoop.hbase.util.UnsafeAvailChecker (file:/root/.m2/repository/org/apache/hbase/hbase-shaded-client/1.4.12/hbase-shaded-client-1.4.12.jar) to method java.nio.Bits.unaligned()
[ERROR] WARNING: Please consider reporting this to the maintainers of org.apache.hadoop.hbase.util.UnsafeAvailChecker
[ERROR] WARNING: Use --illegal-access=warn to enable warnings of further illegal reflective access operations
[ERROR] WARNING: All illegal access operations will be denied in a future release
Testing completed.
```
|
1.0
|
hBase Bigtable uses Unsafe, eventually will be deprecated. - Found in #4036
```
------------------------------------------------------------
- testing bigtable/beam/helloworld
------------------------------------------------------------
[ERROR] WARNING: An illegal reflective access operation has occurred
[ERROR] WARNING: Illegal reflective access by org.apache.hadoop.hbase.util.UnsafeAvailChecker (file:/root/.m2/repository/org/apache/hbase/hbase-shaded-client/1.4.12/hbase-shaded-client-1.4.12.jar) to method java.nio.Bits.unaligned()
[ERROR] WARNING: Please consider reporting this to the maintainers of org.apache.hadoop.hbase.util.UnsafeAvailChecker
[ERROR] WARNING: Use --illegal-access=warn to enable warnings of further illegal reflective access operations
[ERROR] WARNING: All illegal access operations will be denied in a future release
Testing completed.
------------------------------------------------------------
- testing bigtable/beam/keyviz-art
------------------------------------------------------------
[ERROR] WARNING: An illegal reflective access operation has occurred
[ERROR] WARNING: Illegal reflective access by org.apache.hadoop.hbase.util.UnsafeAvailChecker (file:/root/.m2/repository/org/apache/hbase/hbase-shaded-client/1.4.12/hbase-shaded-client-1.4.12.jar) to method java.nio.Bits.unaligned()
[ERROR] WARNING: Please consider reporting this to the maintainers of org.apache.hadoop.hbase.util.UnsafeAvailChecker
[ERROR] WARNING: Use --illegal-access=warn to enable warnings of further illegal reflective access operations
[ERROR] WARNING: All illegal access operations will be denied in a future release
Testing completed.
------------------------------------------------------------
- testing bigtable/hbase/snippets
------------------------------------------------------------
[ERROR] WARNING: An illegal reflective access operation has occurred
[ERROR] WARNING: Illegal reflective access by org.apache.hadoop.hbase.util.UnsafeAvailChecker (file:/root/.m2/repository/org/apache/hbase/hbase-shaded-client/1.4.12/hbase-shaded-client-1.4.12.jar) to method java.nio.Bits.unaligned()
[ERROR] WARNING: Please consider reporting this to the maintainers of org.apache.hadoop.hbase.util.UnsafeAvailChecker
[ERROR] WARNING: Use --illegal-access=warn to enable warnings of further illegal reflective access operations
[ERROR] WARNING: All illegal access operations will be denied in a future release
Testing completed.
```
|
process
|
hbase bigtable uses unsafe eventually will be deprecated found in testing bigtable beam helloworld warning an illegal reflective access operation has occurred warning illegal reflective access by org apache hadoop hbase util unsafeavailchecker file root repository org apache hbase hbase shaded client hbase shaded client jar to method java nio bits unaligned warning please consider reporting this to the maintainers of org apache hadoop hbase util unsafeavailchecker warning use illegal access warn to enable warnings of further illegal reflective access operations warning all illegal access operations will be denied in a future release testing completed testing bigtable beam keyviz art warning an illegal reflective access operation has occurred warning illegal reflective access by org apache hadoop hbase util unsafeavailchecker file root repository org apache hbase hbase shaded client hbase shaded client jar to method java nio bits unaligned warning please consider reporting this to the maintainers of org apache hadoop hbase util unsafeavailchecker warning use illegal access warn to enable warnings of further illegal reflective access operations warning all illegal access operations will be denied in a future release testing completed testing bigtable hbase snippets warning an illegal reflective access operation has occurred warning illegal reflective access by org apache hadoop hbase util unsafeavailchecker file root repository org apache hbase hbase shaded client hbase shaded client jar to method java nio bits unaligned warning please consider reporting this to the maintainers of org apache hadoop hbase util unsafeavailchecker warning use illegal access warn to enable warnings of further illegal reflective access operations warning all illegal access operations will be denied in a future release testing completed
| 1
|
9,933
| 3,984,703,931
|
IssuesEvent
|
2016-05-07 10:56:46
|
joomla/joomla-cms
|
https://api.github.com/repos/joomla/joomla-cms
|
closed
|
Failure to encode Joomla base path
|
No Code Attached Yet
|
This was tested on a CentOS 7 server (apache 2.4.6, php 5.4.16) using a recent (post 3.5.0-beta) Joomla version from the master branch (specifically 11a1462). Really, though, the version shouldn't matter much.
To test:
With the web server's document root being '/var/www/html', create the necessary directories and install Joomla at '/var/www/html/foo/b a%20 \r2'. In other words:
```
# mkdir -p '/var/www/html/foo/b a%20 \r2'
# cd '/var/www/html/foo/b a%20 \r2'
# wget https://github.com/joomla/joomla-cms/archive/11a14629fce671670399ec7775caed4e7b5b92c1.zip
# unzip *.zip && rm *.zip -f
#
```
Now attempt to access Joomla at `http://example.com/foo/b%20a%2520%20%5Cr2/installation/index.php`.
The page loads, but none of the related page resources are found or loaded.
Here's a look at some of the links to those resources in the HTML source for the page:
```
<link href="/foo/b a%20 /r2/installation/favicon.ico" rel="shortcut icon" type="image/vnd.microsoft.icon" />
<link rel="stylesheet" href="/foo/b a%20 \r2/media/jui/css/bootstrap.min.css" type="text/css" />
<link rel="stylesheet" href="/foo/b a%20 \r2/media/jui/css/bootstrap-responsive.min.css" type="text/css" />
<link rel="stylesheet" href="/foo/b a%20 \r2/media/jui/css/bootstrap-extended.css" type="text/css" />
<link rel="stylesheet" href="/foo/b a%20 \r2/installation/template/css/template.css" type="text/css" />
<link rel="stylesheet" href="/foo/b a%20 \r2/media/jui/css/chosen.css" type="text/css" />
<script src="/foo/b a%20 \r2/media/jui/js/jquery.min.js" type="text/javascript"></script>
<script src="/foo/b a%20 \r2/media/jui/js/jquery-noconflict.js" type="text/javascript"></script>
<script src="/foo/b a%20 \r2/media/jui/js/jquery-migrate.min.js" type="text/javascript"></script>
<script src="/foo/b a%20 \r2/media/system/js/html5fallback.js" type="text/javascript"></script>
<script src="/foo/b a%20 \r2/media/jui/js/bootstrap.min.js" type="text/javascript"></script>
<script src="/foo/b a%20 \r2/media/jui/js/chosen.jquery.min.js" type="text/javascript"></script>
<script src="/foo/b a%20 \r2/media/system/js/mootools-core.js" type="text/javascript"></script>
<script src="/foo/b a%20 \r2/media/system/js/core.js" type="text/javascript"></script>
<script src="/foo/b a%20 \r2/media/system/js/mootools-more.js" type="text/javascript"></script>
<script src="/foo/b a%20 \r2/media/system/js/punycode.js" type="text/javascript"></script>
<script src="/foo/b a%20 \r2/media/system/js/validate.js" type="text/javascript"></script>
<script src="/foo/b a%20 \r2/installation/template/js/installation.js" type="text/javascript"></script>
```
The problem is that Joomla's base path in those links has not been URL %-encoded. They contain `/foo/b a%20 \r2`, whereas they should contain `/foo/b%20a%2520%20%5Cr2`.
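For reference, the expected links can be produced by percent-encoding each path segment while leaving the `/` separators intact. A minimal Python sketch (Joomla itself is PHP; this only demonstrates the encoding rule, not the fix in Joomla's code):

```python
from urllib.parse import quote

def encode_base_path(path):
    """Percent-encode each path segment; '/' separators stay intact.
    quote() with safe='' escapes spaces, '%', and backslashes,
    which is exactly what the generated links above are missing."""
    return "/".join(quote(segment, safe="") for segment in path.split("/"))

# The unencoded base path from the report:
print(encode_base_path(r"/foo/b a%20 \r2"))
# → /foo/b%20a%2520%20%5Cr2
```

Note that the literal `%` in the directory name must itself be escaped to `%25`, which is why `%20` in the raw path becomes `%2520` in the encoded one.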
|
1.0
|
Failure to encode Joomla base path - This was tested on a CentOS 7 server (apache 2.4.6, php 5.4.16) using a recent (post 3.5.0-beta) Joomla version from the master branch (specifically 11a1462). Really, though, the version shouldn't matter much.
To test:
With the web server's document root being '/var/www/html', create the necessary directories and install Joomla at '/var/www/html/foo/b a%20 \r2'. In other words:
```
# mkdir -p '/var/www/html/foo/b a%20 \r2'
# cd '/var/www/html/foo/b a%20 \r2'
# wget https://github.com/joomla/joomla-cms/archive/11a14629fce671670399ec7775caed4e7b5b92c1.zip
# unzip *.zip && rm *.zip -f
#
```
Now attempt to access Joomla at `http://example.com/foo/b%20a%2520%20%5Cr2/installation/index.php`.
The page loads, but none of the related page resources are found or loaded.
Here's a look at some of the links to those resources in the HTML source for the page:
```
<link href="/foo/b a%20 /r2/installation/favicon.ico" rel="shortcut icon" type="image/vnd.microsoft.icon" />
<link rel="stylesheet" href="/foo/b a%20 \r2/media/jui/css/bootstrap.min.css" type="text/css" />
<link rel="stylesheet" href="/foo/b a%20 \r2/media/jui/css/bootstrap-responsive.min.css" type="text/css" />
<link rel="stylesheet" href="/foo/b a%20 \r2/media/jui/css/bootstrap-extended.css" type="text/css" />
<link rel="stylesheet" href="/foo/b a%20 \r2/installation/template/css/template.css" type="text/css" />
<link rel="stylesheet" href="/foo/b a%20 \r2/media/jui/css/chosen.css" type="text/css" />
<script src="/foo/b a%20 \r2/media/jui/js/jquery.min.js" type="text/javascript"></script>
<script src="/foo/b a%20 \r2/media/jui/js/jquery-noconflict.js" type="text/javascript"></script>
<script src="/foo/b a%20 \r2/media/jui/js/jquery-migrate.min.js" type="text/javascript"></script>
<script src="/foo/b a%20 \r2/media/system/js/html5fallback.js" type="text/javascript"></script>
<script src="/foo/b a%20 \r2/media/jui/js/bootstrap.min.js" type="text/javascript"></script>
<script src="/foo/b a%20 \r2/media/jui/js/chosen.jquery.min.js" type="text/javascript"></script>
<script src="/foo/b a%20 \r2/media/system/js/mootools-core.js" type="text/javascript"></script>
<script src="/foo/b a%20 \r2/media/system/js/core.js" type="text/javascript"></script>
<script src="/foo/b a%20 \r2/media/system/js/mootools-more.js" type="text/javascript"></script>
<script src="/foo/b a%20 \r2/media/system/js/punycode.js" type="text/javascript"></script>
<script src="/foo/b a%20 \r2/media/system/js/validate.js" type="text/javascript"></script>
<script src="/foo/b a%20 \r2/installation/template/js/installation.js" type="text/javascript"></script>
```
The problem is that Joomla's base path in those links has not been URL %-encoded. They contain `/foo/b a%20 \r2`, whereas they should contain `/foo/b%20a%2520%20%5Cr2`.
|
non_process
|
failure to encode joomla base path this was tested on a centos server apache php using a recent post beta joomla version from the master branch specifically really though the version shouldn t matter much to test with the web server s document root being var www html create the necessary directories and install joomla at var www html foo b a in other words mkdir p var www html foo b a cd var www html foo b a wget unzip zip rm zip f now attempt to access joomla at the page loads but none of the related page resources are found or loaded here s a look at some of the links to those resources in the html source for the page the problem is that joomla s base path in those links has not been url encoded they contain foo b a whereas they should contain foo b
| 0
|
57,157
| 11,714,392,573
|
IssuesEvent
|
2020-03-09 12:15:36
|
github/vscode-codeql
|
https://api.github.com/repos/github/vscode-codeql
|
opened
|
Fix dependency of queryserver client on compiled code
|
code cleanup
|
Make a version of the interface `DisposableObject` that doesn't pull in arbitrary extra runtime code from `vscode`, to eliminate dependency of `queryserver-client.ts` on `'semmle-vscode-utils/out/disposable-object'`, while allowing unit tests to pass. See conversation on https://github.com/github/vscode-codeql/pull/173 for more details.
|
1.0
|
Fix dependency of queryserver client on compiled code - Make a version of the interface `DisposableObject` that doesn't pull in arbitrary extra runtime code from `vscode`, to eliminate dependency of `queryserver-client.ts` on `'semmle-vscode-utils/out/disposable-object'`, while allowing unit tests to pass. See conversation on https://github.com/github/vscode-codeql/pull/173 for more details.
|
non_process
|
fix dependency of queryserver client on compiled code make a version of the interface disposableobject that doesn t pull in arbitrary extra runtime code from vscode to eliminate dependency of queryserver client ts on semmle vscode utils out disposable object while allowing unit tests to pass see conversation on for more details
| 0
|
16,763
| 21,937,010,254
|
IssuesEvent
|
2022-05-23 14:34:23
|
aiidateam/aiida-core
|
https://api.github.com/repos/aiidateam/aiida-core
|
opened
|
Remove setting of `CalcJob` specific inputs from `Process` base class
|
priority/quality-of-life type/refactoring topic/calc-jobs topic/processes
|
Implementation details of the `CalcJob` are implemented directly in the `Process` base class. Specifically, the `metadata.options` and `metadata.computer` are dealt with in `_setup_metadata`. Also `_setup_inputs` is guilty of this.
|
1.0
|
Remove setting of `CalcJob` specific inputs from `Process` base class - Implementation details of the `CalcJob` are implemented directly in the `Process` base class. Specifically, the `metadata.options` and `metadata.computer` are dealt with in `_setup_metadata`. Also `_setup_inputs` is guilty of this.
|
process
|
remove setting of calcjob specific inputs from process base class implementation details of the calcjob are implemented directly in the process base class specifically the metadata options and metadata computer are dealt with in setup metadata also setup inputs is guilty of this
| 1
|
9,344
| 12,345,836,710
|
IssuesEvent
|
2020-05-15 09:39:26
|
cypress-io/cypress
|
https://api.github.com/repos/cypress-io/cypress
|
closed
|
Flaky internal test: e2e spec_isolation fails [firefox] - fails [firefox]
|
browser: firefox process: flaky test stage: needs investigating type: chore
|
### Current behavior:
There's a test that occasionally fails in our internal tests in Firefox.
**Circle Failure**: https://circleci.com/gh/cypress-io/cypress/323487
**Test Code**: https://github.com/cypress-io/cypress/blob/develop/packages/server/test/e2e/5_spec_isolation_spec.coffee#L216:L216
<img width="860" alt="Screen Shot 2020-05-13 at 5 38 33 PM" src="https://user-images.githubusercontent.com/1271364/81805335-aceed400-9540-11ea-8064-c68922bb38ff.png">
```
1) e2e spec_isolation fails [firefox]:
AssertionError: expected 700 to be within 543..693
at expectDurationWithin (test/e2e/5_spec_isolation_spec.coffee:42:19)
at /root/cypress/packages/server/test/e2e/5_spec_isolation_spec.coffee:88:5
at Array.forEach (<anonymous>:null:null)
at expectRunsToHaveCorrectStats (test/e2e/5_spec_isolation_spec.coffee:78:8)
at /root/cypress/packages/server/test/e2e/5_spec_isolation_spec.coffee:227:11
at /root/cypress/node_modules/jsonfile/index.js:43:5
at /root/cypress/node_modules/graceful-fs/graceful-fs.js:123:16
at FSReqCallback.readFileAfterClose [as oncomplete] (internal/fs/read_file_context.js:61:3)
From previous event:
at /root/cypress/packages/server/test/e2e/5_spec_isolation_spec.coffee:178:10
at copyDirItems (/root/cypress/node_modules/fs-extra/lib/copy/copy.js:151:21)
at /root/cypress/node_modules/fs-extra/lib/copy/copy.js:163:14
at /root/cypress/node_modules/graceful-fs/polyfills.js:243:20
at FSReqCallback.oncomplete (fs.js:146:23)
From previous event:
at Object.onRun (test/e2e/5_spec_isolation_spec.coffee:174:8)
at Context.<anonymous> (test/support/helpers/e2e.js:344:22)
```
### Versions
4.5.0
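The reported window `543..693` corresponds to an expected duration of 618 ms with ±75 ms of padding, which the measured 700 ms missed by 7 ms. A hypothetical Python stand-in for such a check (the real `expectDurationWithin` lives in the CoffeeScript suite; this only illustrates the trade-off):

```python
def expect_duration_within(actual_ms, expected_ms, padding_ms=75):
    """Hypothetical stand-in for expectDurationWithin: assert a measured
    duration lies inside expected_ms +/- padding_ms. With the default
    padding, expect_duration_within(700, 618) reproduces the reported
    failure message 'expected 700 to be within 543..693'."""
    low, high = expected_ms - padding_ms, expected_ms + padding_ms
    assert low <= actual_ms <= high, (
        f"expected {actual_ms} to be within {low}..{high}")
```

Widening the padding trades assertion precision for stability on slow CI machines, which is the usual mitigation for duration-based flake.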
|
1.0
|
Flaky internal test: e2e spec_isolation fails [firefox] - fails [firefox] - ### Current behavior:
There's a test that occasionally fails in our internal tests in Firefox.
**Circle Failure**: https://circleci.com/gh/cypress-io/cypress/323487
**Test Code**: https://github.com/cypress-io/cypress/blob/develop/packages/server/test/e2e/5_spec_isolation_spec.coffee#L216:L216
<img width="860" alt="Screen Shot 2020-05-13 at 5 38 33 PM" src="https://user-images.githubusercontent.com/1271364/81805335-aceed400-9540-11ea-8064-c68922bb38ff.png">
```
1) e2e spec_isolation fails [firefox]:
AssertionError: expected 700 to be within 543..693
at expectDurationWithin (test/e2e/5_spec_isolation_spec.coffee:42:19)
at /root/cypress/packages/server/test/e2e/5_spec_isolation_spec.coffee:88:5
at Array.forEach (<anonymous>:null:null)
at expectRunsToHaveCorrectStats (test/e2e/5_spec_isolation_spec.coffee:78:8)
at /root/cypress/packages/server/test/e2e/5_spec_isolation_spec.coffee:227:11
at /root/cypress/node_modules/jsonfile/index.js:43:5
at /root/cypress/node_modules/graceful-fs/graceful-fs.js:123:16
at FSReqCallback.readFileAfterClose [as oncomplete] (internal/fs/read_file_context.js:61:3)
From previous event:
at /root/cypress/packages/server/test/e2e/5_spec_isolation_spec.coffee:178:10
at copyDirItems (/root/cypress/node_modules/fs-extra/lib/copy/copy.js:151:21)
at /root/cypress/node_modules/fs-extra/lib/copy/copy.js:163:14
at /root/cypress/node_modules/graceful-fs/polyfills.js:243:20
at FSReqCallback.oncomplete (fs.js:146:23)
From previous event:
at Object.onRun (test/e2e/5_spec_isolation_spec.coffee:174:8)
at Context.<anonymous> (test/support/helpers/e2e.js:344:22)
```
### Versions
4.5.0
|
process
|
flaky internal test spec isolation fails fails current behavior there s a test that occasionally fails in our internal tests in firefox circle failure test code img width alt screen shot at pm src spec isolation fails assertionerror expected to be within at expectdurationwithin test spec isolation spec coffee at root cypress packages server test spec isolation spec coffee at array foreach null null at expectrunstohavecorrectstats test spec isolation spec coffee at root cypress packages server test spec isolation spec coffee at root cypress node modules jsonfile index js at root cypress node modules graceful fs graceful fs js at fsreqcallback readfileafterclose internal fs read file context js from previous event at root cypress packages server test spec isolation spec coffee at copydiritems root cypress node modules fs extra lib copy copy js at root cypress node modules fs extra lib copy copy js at root cypress node modules graceful fs polyfills js at fsreqcallback oncomplete fs js from previous event at object onrun test spec isolation spec coffee at context test support helpers js versions
| 1
|
445,111
| 12,826,335,845
|
IssuesEvent
|
2020-07-06 16:22:30
|
Niall7459/KiteBoard-Documentation
|
https://api.github.com/repos/Niall7459/KiteBoard-Documentation
|
closed
|
Method to set player's scoreboard group and/or custom triggers
|
Active Developer API Priority: Medium Suggestion
|
<!-- Please don't touch them -->
[Wiki]: https://github.com/Niall7459/KiteBoard-Documentation/wiki
[download]: https://www.spigotmc.org/resources/13694/
[issue-page]: https://github.com/Niall7459/KiteBoard-Documentation/issues
[Bug Report]: https://github.com/Niall7459/KiteBoard-Documentation/issues/new?template=bug_report.md
# Feature request
This template is for suggesting changes or new features to [KiteBoard][download].
For reporting issues and/or bugs, use the [Bug Report] template!
## Confirmation
I confirm, that I made the following steps:
<!-- Replace the [ ] with [X] to "check" them -->
- [X] I checked the [wiki] and/or [plugin-page][download] for the feature.
- [X] I checked the [issue-page] for already existing suggestions.
## Feature
> What feature should be added to KiteBoard? What should be improved?
> Describe it as good as possible.
<!-- Please write below this line -->
It would be great to have at least a method to set a player's active scoreboard. If you're motivated, a custom trigger system would also be great, but that's quite a bit of work haha
## Reasons to add/improve
> Why should it be added or changed?
> Reasons like "It's cool." don't count! Give *real* reasons.
<!-- Please write below this line -->
I want to be able to switch scoreboards for our server based on our own conditions (such as a player having a faction). It should be simple really given that it's already a command, should just be a couple of lines.
## Images/Links
> Provide links/images of the feature/improvement (if possible).
> Code examples are also accepted.
<!-- Please write below this line. Post images from your clipboard with Ctrl + V -->
Not an image but I would imagine it as something like this in the KiteboardAPI class.
```
/**
* Set a player's current scoreboard group.
* @param player The player to set
* @param group The name of the group
*/
public static void setScoreboardGroup(Player player, String group) {
// Your implementation
}
```
|
1.0
|
Method to set player's scoreboard group and/or custom triggers - <!-- Please don't touch them -->
[Wiki]: https://github.com/Niall7459/KiteBoard-Documentation/wiki
[download]: https://www.spigotmc.org/resources/13694/
[issue-page]: https://github.com/Niall7459/KiteBoard-Documentation/issues
[Bug Report]: https://github.com/Niall7459/KiteBoard-Documentation/issues/new?template=bug_report.md
# Feature request
This template is for suggesting changes or new features to [KiteBoard][download].
For reporting issues and/or bugs, use the [Bug Report] template!
## Confirmation
I confirm, that I made the following steps:
<!-- Replace the [ ] with [X] to "check" them -->
- [X] I checked the [wiki] and/or [plugin-page][download] for the feature.
- [X] I checked the [issue-page] for already existing suggestions.
## Feature
> What feature should be added to KiteBoard? What should be improved?
> Describe it as good as possible.
<!-- Please write below this line -->
It would be great to have at least a method to set a player's active scoreboard. If you're motivated, a custom trigger system would also be great, but that's quite a bit of work haha
## Reasons to add/improve
> Why should it be added or changed?
> Reasons like "It's cool." don't count! Give *real* reasons.
<!-- Please write below this line -->
I want to be able to switch scoreboards for our server based on our own conditions (such as a player having a faction). It should be simple really given that it's already a command, should just be a couple of lines.
## Images/Links
> Provide links/images of the feature/improvement (if possible).
> Code examples are also accepted.
<!-- Please write below this line. Post images from your clipboard with Ctrl + V -->
Not an image but I would imagine it as something like this in the KiteboardAPI class.
```
/**
* Set a player's current scoreboard group.
* @param player The player to set
* @param group The name of the group
*/
public static void setScoreboardGroup(Player player, String group) {
// Your implementation
}
```
|
non_process
|
method to set player s scoreboard group and or custom triggers feature request this template is for suggesting changes or new features to for reporting issues and or bugs use the template confirmation i confirm that i made the following steps i checked the and or for the feature i checked the for already existing suggestions feature what feature should be added to kiteboard what should be improved describe it as good as possible it would be great to have at least a method to set a player s active scoreboard if you re motivated a custom trigger system would also be great but that s quite a bit of work haha reasons to add improve why should it be added or changed reasons like it s cool don t count give real reasons i want to be able to switch scoreboards for our server based on our own conditions such as a player having a faction it should be simple really given that it s already a command should just be a couple of lines images links provide links images of the feature improvement if possible code examples are also accepted not an image but i would imagine it as something like this in the kiteboardapi class set a player s current scoreboard group param player the player to set param group the name of the group public static void setscoreboardgroup player player string group your implementation
| 0
|
2,294
| 5,114,828,606
|
IssuesEvent
|
2017-01-06 19:47:01
|
NicoletFEAR/Steamworks-2017
|
https://api.github.com/repos/NicoletFEAR/Steamworks-2017
|
opened
|
Make master a Protected Branch and set up review policies
|
process
|
As we talked about in a meeting, we want to make master a protected branch after we initialize the repository and set the pull request policies to require 2 approved reviews to be merged.
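For reference, branch protection like this can also be applied programmatically through GitHub's REST API (`PUT /repos/{owner}/{repo}/branches/{branch}/protection`). A minimal sketch that only builds the request payload; the endpoint and key names come from the public API, the review count of 2 matches the policy above, and `enforce_admins` set to true is an assumption:

```python
import json

def protection_payload(required_reviews=2):
    """Build the body for GitHub's branch-protection endpoint:
    PUT /repos/{owner}/{repo}/branches/{branch}/protection.
    All four top-level keys are required by the endpoint; null disables a rule.
    """
    return {
        "required_status_checks": None,
        "enforce_admins": True,  # assumption: admins also bound by the rules
        "required_pull_request_reviews": {
            "required_approving_review_count": required_reviews,
        },
        "restrictions": None,
    }

print(json.dumps(protection_payload(), indent=2))
```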
|
1.0
|
Make master a Protected Branch and set up review policies - As we talked about in a meeting, we want to make master a protected branch after we initialize the repository and set the pull request policies to require 2 approved reviews to be merged.
|
process
|
make master a protected branch and set up review policies as we talked about in a meeting we want to make master a protected branch after we initialize the repository and set the pull request policies to require approved reviews to be merged
| 1
|
179
| 2,587,612,307
|
IssuesEvent
|
2015-02-17 19:33:59
|
GsDevKit/gsDevKitHome
|
https://api.github.com/repos/GsDevKit/gsDevKitHome
|
closed
|
/sys/default/client/scripts/installServerTode2/ should be split out
|
in process
|
for folks with existing stone structures, the code between the backups in /sys/default/client/scripts/installServerTode2/ needs to be run ... would be convenient to split that out ahead of time so that a single command can be documented as the conversion method:
Just for grins, here's the code of interest:
```Shell
# Set up /sys node structure
mount --todeRoot sys/default /sys default
mount --todeRoot sys/local /sys local
mount --todeRoot sys/stones /sys stones
# ensure that --stoneRoot directory structure is present
/sys/default/bin/validateStoneSysNodes --files --repair
mount --stoneRoot / /sys stone
# Define /home and /projects based on a composition of the /sys nodes
mount --stoneRoot homeComposition.ston / home
mount --stoneRoot projectComposition.ston / projects
```
|
1.0
|
/sys/default/client/scripts/installServerTode2/ should be split out - for folks with existing stone structures, the code between the backups in /sys/default/client/scripts/installServerTode2/ needs to be run ... would be convenient to split that out ahead of time so that a single command can be documented as the conversion method:
Just for grins, here's the code of interest:
```Shell
# Set up /sys node structure
mount --todeRoot sys/default /sys default
mount --todeRoot sys/local /sys local
mount --todeRoot sys/stones /sys stones
# ensure that --stoneRoot directory structure is present
/sys/default/bin/validateStoneSysNodes --files --repair
mount --stoneRoot / /sys stone
# Define /home and /projects based on a composition of the /sys nodes
mount --stoneRoot homeComposition.ston / home
mount --stoneRoot projectComposition.ston / projects
```
|
process
|
sys default client scripts should be split out for folks with existing stone structures the code between the backups in sys default client scripts needs to be run would be convenient to split that out ahead of time so that a single command can be documented as the conversion method just for grins here s the code of interest shell set up sys node structure mount toderoot sys default sys default mount toderoot sys local sys local mount toderoot sys stones sys stones ensure that stoneroot directory structure is present sys default bin validatestonesysnodes files repair mount stoneroot sys stone define home and projects based on a composition of the sys nodes mount stoneroot homecomposition ston home mount stoneroot projectcomposition ston projects
| 1
|
33,087
| 7,656,257,607
|
IssuesEvent
|
2018-05-10 15:43:23
|
GetDKAN/dkan_dash
|
https://api.github.com/repos/GetDKAN/dkan_dash
|
closed
|
Bootstrap is already available so lets not add it again for dkan dashboards
|
code review
|
connects to NuCivic/ga_gbpw#156
Is react-dash adding bootstrap? Normally we have radix (which adds bootstrap css), then nuboot_radix on top to modify it, then client/dashboard css on top of that, but it looks like we have bootstrap being added on top of nuboot_radix
|
1.0
|
Bootstrap is already available so lets not add it again for dkan dashboards - connects to NuCivic/ga_gbpw#156
Is react-dash adding bootstrap? Normally we have radix (which adds bootstrap css), then nuboot_radix on top to modify it, then client/dashboard css on top of that, but it looks like we have bootstrap being added on top of nuboot_radix
|
non_process
|
bootstrap is already available so lets not add it again for dkan dashboards connects to nucivic ga gbpw is react dash adding bootstrap normally we have radix which adds bootstrap css then nuboot radix on top to modify then client dashboard css on top of that but looks like we have bootstrap being added on top of nuboot radix
| 0
|
325,918
| 24,066,109,933
|
IssuesEvent
|
2022-09-17 14:08:29
|
GioF71/squeezelite-docker
|
https://api.github.com/repos/GioF71/squeezelite-docker
|
closed
|
PulseAudio volume: clarify
|
documentation
|
It is not very clear. The right part can be changed according to PUID and PGID.
The left part should be set according to the user that runs the container.
|
1.0
|
PulseAudio volume: clarify - It is not very clear. The right part can be changed according to PUID and PGID.
The left part should be set according to the user that runs the container.
|
non_process
|
pulseaudio volume clarify it is not very clear the right part can be changed according to puid and pgid the left part should be set according to the user which runs the container
| 0
|
351,964
| 25,044,540,029
|
IssuesEvent
|
2022-11-05 04:13:12
|
HausDAO/Grant-Seekersv2
|
https://api.github.com/repos/HausDAO/Grant-Seekersv2
|
opened
|
Gnosis DAO Application Retro
|
documentation help wanted
|
Retro and lessons learned. What went well, where can we improve. Can we pick this up again? What would need to change?
|
1.0
|
Gnosis DAO Application Retro - Retro and lessons learned. What went well, where can we improve. Can we pick this up again? What would need to change?
|
non_process
|
gnosis dao application retro retro and lessons learned what went well where can we improve can we pick this up again what would need to change
| 0
|
13,070
| 15,397,140,457
|
IssuesEvent
|
2021-03-03 21:41:13
|
retaildevcrews/ngsa
|
https://api.github.com/repos/retaildevcrews/ngsa
|
closed
|
Update Working Agreement
|
EngPrac FC:Onboarding Process
|
- [x] - include "see something, say something", etc from Joe L.
- [x] - work "your" schedule
- [x] - flexible based on your life situation - especially during COVID
- [x] - many of us work "split" schedules
- [x] - should we have "core hours" where we're usually available
- [x] - the auto crew is based in PDT
- [x] - ARCANZADO is based in PDT and will join in Mar / April
- [x] - email and Teams are async communication
- [x] - use Outlook delay send
- [x] - Teams doesn't have delay (but is async)
- [ ]
- [x] - #SharingIsCaring
- [x] - #ShareEarlyShareOften
- [x] - #ShareAsWeGo (don't wait until the end)
- [x] - prefer public repos when possible
- [ ]
- [x] - Accountability, Integrity, Respect
- [x] - Model, Coach, Care
- [x] - Team owns quality
- [x] - Two-in-a-box
- [x] - TPM / DL
- [x] - Dev / Bart
- [x] - Neil / Drew
- [x] - permission to hold accountable - see something, say something
|
1.0
|
Update Working Agreement - - [x] - include "see something, say something", etc from Joe L.
- [x] - work "your" schedule
- [x] - flexible based on your life situation - especially during COVID
- [x] - many of us work "split" schedules
- [x] - should we have "core hours" where we're usually available
- [x] - the auto crew is based in PDT
- [x] - ARCANZADO is based in PDT and will join in Mar / April
- [x] - email and Teams are async communication
- [x] - use Outlook delay send
- [x] - Teams doesn't have delay (but is async)
- [ ]
- [x] - #SharingIsCaring
- [x] - #ShareEarlyShareOften
- [x] - #ShareAsWeGo (don't wait until the end)
- [x] - prefer public repos when possible
- [ ]
- [x] - Accountability, Integrity, Respect
- [x] - Model, Coach, Care
- [x] - Team owns quality
- [x] - Two-in-a-box
- [x] - TPM / DL
- [x] - Dev / Bart
- [x] - Neil / Drew
- [x] - permission to hold accountable - see something, say something
|
process
|
update working agreement include see something say something etc from joe l work your schedule flexible based on your life situation especially during covid many of us work split schedules should we have core hours where we re usually available the auto crew is based in pdt arcanzado is based in pdt and will join in mar april email and teams are async communication use outlook delay send teams doesn t have delay but is async sharingiscaring shareearlyshareoften shareaswego don t wait until the end prefer public repos when possible accountability integrity respect model coach care team owns quality two in a box tpm dl dev bart neil drew permission to hold accountable see something say something
| 1
|
62,607
| 7,612,504,193
|
IssuesEvent
|
2018-05-01 17:47:49
|
Opentrons/opentrons
|
https://api.github.com/repos/Opentrons/opentrons
|
closed
|
PD Error Handling: Track Errors per Step
|
feature large protocol designer
|
As a protocol designer, I would like to be able to see all the errors that apply to any given step.
## Acceptance Criteria
- [ ] Command creator accumulates errors per step
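One common shape for "accumulate errors per step" is to collect failures keyed by step index instead of raising on the first one. A minimal Python sketch of the idea (illustrative only, not Protocol Designer's actual command-creator API):

```python
def run_steps(steps):
    """Run step callables in order, accumulating errors per step index
    instead of aborting on the first failure."""
    errors = {}
    results = []
    for i, step in enumerate(steps):
        try:
            results.append(step())
        except Exception as exc:
            errors[i] = str(exc)
    return results, errors

def ok():
    return "done"

def bad():
    raise ValueError("missing labware")

results, errors = run_steps([ok, bad, ok])
print(errors)  # {1: 'missing labware'}
```

The UI can then show every step's errors at once rather than one at a time.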
|
1.0
|
PD Error Handling: Track Errors per Step - As a protocol designer, I would like to be able to see all the errors that apply to any given step.
## Acceptance Criteria
- [ ] Command creator accumulates errors per step
|
non_process
|
pd error handling track errors per step as a protocol designer i would like to be able to see all the errors that apply to any given step acceptance criteria command creator accumulates errors per step
| 0
|
6,984
| 10,131,474,546
|
IssuesEvent
|
2019-08-01 19:38:29
|
toggl/mobileapp
|
https://api.github.com/repos/toggl/mobileapp
|
closed
|
Create a manual PR template (to copy and paste) for translations fixes/improvements
|
process
|
This won't be much different from whatever is defined on #4836, except that it will touch existing translations.
The PR should explain why the new translation is better/right and why what is already there was worse/wrong.
|
1.0
|
Create a manual PR template (to copy and paste) for translations fixes/improvements - This won't be much different from whatever is defined on #4836, except that it will touch existing translations.
The PR should explain why the new translation is better/right and why what is already there was worse/wrong.
|
process
|
create a manual pr template to copy and paste for translations fixes improvements this won t be much different from whatever is defined on except that it will touch existing translations the pr should explain why the new translation is better right and why what is already there was worse wrong
| 1
|
175,074
| 21,300,746,045
|
IssuesEvent
|
2022-04-15 02:32:02
|
vwasthename/iaf
|
https://api.github.com/repos/vwasthename/iaf
|
opened
|
CVE-2022-22968 (Low) detected in spring-context-5.3.8.jar
|
security vulnerability
|
## CVE-2022-22968 - Low Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>spring-context-5.3.8.jar</b></p></summary>
<p>Spring Context</p>
<p>Library home page: <a href="https://github.com/spring-projects/spring-framework">https://github.com/spring-projects/spring-framework</a></p>
<p>Path to dependency file: /core/pom.xml</p>
<p>Path to vulnerable library: /1210151327_OFXRIV/downloadResource_STZRKX/20211210151355/spring-context-5.3.8.jar</p>
<p>
Dependency Hierarchy:
- :x: **spring-context-5.3.8.jar** (Vulnerable Library)
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/low_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
In Spring Framework versions 5.3.0 - 5.3.18, 5.2.0 - 5.2.20, and older unsupported versions, the patterns for disallowedFields on a DataBinder are case sensitive which means a field is not effectively protected unless it is listed with both upper and lower case for the first character of the field, including upper and lower case for the first character of all nested fields within the property path
<p>Publish Date: 2022-01-11
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2022-22968>CVE-2022-22968</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>3.7</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: High
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: Low
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://tanzu.vmware.com/security/cve-2022-22968">https://tanzu.vmware.com/security/cve-2022-22968</a></p>
<p>Release Date: 2022-01-11</p>
<p>Fix Resolution: org.springframework:spring-context:5.2.21,5.3.19</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
True
|
CVE-2022-22968 (Low) detected in spring-context-5.3.8.jar - ## CVE-2022-22968 - Low Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>spring-context-5.3.8.jar</b></p></summary>
<p>Spring Context</p>
<p>Library home page: <a href="https://github.com/spring-projects/spring-framework">https://github.com/spring-projects/spring-framework</a></p>
<p>Path to dependency file: /core/pom.xml</p>
<p>Path to vulnerable library: /1210151327_OFXRIV/downloadResource_STZRKX/20211210151355/spring-context-5.3.8.jar</p>
<p>
Dependency Hierarchy:
- :x: **spring-context-5.3.8.jar** (Vulnerable Library)
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/low_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
In Spring Framework versions 5.3.0 - 5.3.18, 5.2.0 - 5.2.20, and older unsupported versions, the patterns for disallowedFields on a DataBinder are case sensitive which means a field is not effectively protected unless it is listed with both upper and lower case for the first character of the field, including upper and lower case for the first character of all nested fields within the property path
<p>Publish Date: 2022-01-11
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2022-22968>CVE-2022-22968</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>3.7</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: High
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: Low
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://tanzu.vmware.com/security/cve-2022-22968">https://tanzu.vmware.com/security/cve-2022-22968</a></p>
<p>Release Date: 2022-01-11</p>
<p>Fix Resolution: org.springframework:spring-context:5.2.21,5.3.19</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
non_process
|
cve low detected in spring context jar cve low severity vulnerability vulnerable library spring context jar spring context library home page a href path to dependency file core pom xml path to vulnerable library ofxriv downloadresource stzrkx spring context jar dependency hierarchy x spring context jar vulnerable library found in base branch master vulnerability details in spring framework versions and older unsupported versions the patterns for disallowedfields on a databinder are case sensitive which means a field is not effectively protected unless it is listed with both upper and lower case for the first character of the field including upper and lower case for the first character of all nested fields within the property path publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity high privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact low availability impact none for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution org springframework spring context step up your open source security game with whitesource
| 0
|
19,045
| 25,046,590,729
|
IssuesEvent
|
2022-11-05 10:27:42
|
MicrosoftDocs/azure-devops-docs
|
https://api.github.com/repos/MicrosoftDocs/azure-devops-docs
|
closed
|
Grammatical error on page should be fixed
|
doc-bug Pri1 azure-devops-pipelines/svc azure-devops-pipelines-process/subsvc
|
This sentence: "A user with stakeholder access level cannot create the environment as stakeholders do not access to repository."
s.b.
"A user with stakeholder access level cannot create the environment as stakeholders do not **have** access to **the** repository."
---
#### Document Details
⚠ *Do not edit this section. It is required for learn.microsoft.com ➟ GitHub issue linking.*
* ID: 77d95db6-9983-7346-d0eb-4b7443e4e252
* Version Independent ID: 0a22cccc-318d-592f-d1ab-09ec01d88087
* Content: [Create target environment - Azure Pipelines](https://learn.microsoft.com/en-us/azure/devops/pipelines/process/environments?view=azure-devops#q-why-am-i-getting-error-job-xxxx-environment-xxxx-could-not-be-found-the-environment-does-not-exist-or-has-not-been-authorized-for-use)
* Content Source: [docs/pipelines/process/environments.md](https://github.com/MicrosoftDocs/azure-devops-docs/blob/main/docs/pipelines/process/environments.md)
* Service: **azure-devops-pipelines**
* Sub-service: **azure-devops-pipelines-process**
* GitHub Login: @juliakm
* Microsoft Alias: **jukullam**
|
1.0
|
Grammatical error on page should be fixed -
This sentence: "A user with stakeholder access level cannot create the environment as stakeholders do not access to repository."
s.b.
"A user with stakeholder access level cannot create the environment as stakeholders do not **have** access to **the** repository."
---
#### Document Details
⚠ *Do not edit this section. It is required for learn.microsoft.com ➟ GitHub issue linking.*
* ID: 77d95db6-9983-7346-d0eb-4b7443e4e252
* Version Independent ID: 0a22cccc-318d-592f-d1ab-09ec01d88087
* Content: [Create target environment - Azure Pipelines](https://learn.microsoft.com/en-us/azure/devops/pipelines/process/environments?view=azure-devops#q-why-am-i-getting-error-job-xxxx-environment-xxxx-could-not-be-found-the-environment-does-not-exist-or-has-not-been-authorized-for-use)
* Content Source: [docs/pipelines/process/environments.md](https://github.com/MicrosoftDocs/azure-devops-docs/blob/main/docs/pipelines/process/environments.md)
* Service: **azure-devops-pipelines**
* Sub-service: **azure-devops-pipelines-process**
* GitHub Login: @juliakm
* Microsoft Alias: **jukullam**
|
process
|
grammatical error on page should be fixed this sentence a user with stakeholder access level cannot create the environment as stakeholders do not access to repository s b a user with stakeholder access level cannot create the environment as stakeholders do not have access to the repository document details ⚠ do not edit this section it is required for learn microsoft com ➟ github issue linking id version independent id content content source service azure devops pipelines sub service azure devops pipelines process github login juliakm microsoft alias jukullam
| 1
|
234,166
| 17,935,816,275
|
IssuesEvent
|
2021-09-10 15:10:56
|
SAP/fundamental-styles
|
https://api.github.com/repos/SAP/fundamental-styles
|
opened
|
Create style guide for the repo
|
documentation
|
Create a style guide that includes naming conventions, best practices, standards, etc. that are used in the library and that will help with onboarding new developers or external contributors. This will also help the PR review process by automating some of these checks.
Some examples of good style guides:
- https://github.com/airbnb/javascript/tree/master/react
- https://angular.io/guide/styleguide
- https://vuejs.org/v2/style-guide/
|
1.0
|
Create style guide for the repo - Create a style guide that includes naming conventions, best practices, standards, etc. that are used in the library and that will help with onboarding new developers or external contributors. This will also help the PR review process by automating some of these checks.
Some examples of good style guides:
- https://github.com/airbnb/javascript/tree/master/react
- https://angular.io/guide/styleguide
- https://vuejs.org/v2/style-guide/
|
non_process
|
create style guide for the repo create a style guide that includes naming conventions best practices standards etc that are used in the library and that will help with onboarding new developers or external contributors this will also help the pr review process by automating some of these checks some examples of good style guides
| 0
|
2,137
| 4,974,636,111
|
IssuesEvent
|
2016-12-06 07:34:45
|
opentrials/opentrials
|
https://api.github.com/repos/opentrials/opentrials
|
opened
|
Extract and expose last verified date from NCT
|
0. Ready for Analysis API Collectors Explorer Processors
|
For example, on https://clinicaltrials.gov/ct2/show/NCT00564096:

See https://github.com/opentrials/opentrials/issues/515 for more information on this issue.
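ClinicalTrials.gov typically renders this field as a "Month Year" string (e.g. "November 2007"); a minimal parsing sketch, with the exact source field left unspecified:

```python
from datetime import datetime

def parse_verified_date(text):
    """Parse a 'Month YYYY' verification string; the day defaults to the 1st."""
    return datetime.strptime(text.strip(), "%B %Y").date()

print(parse_verified_date("November 2007"))  # 2007-11-01
```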
|
1.0
|
Extract and expose last verified date from NCT - For example, on https://clinicaltrials.gov/ct2/show/NCT00564096:

See https://github.com/opentrials/opentrials/issues/515 for more information on this issue.
|
process
|
extract and expose last verified date from nct for example on see for more information on this issue
| 1
|
7,523
| 10,597,767,216
|
IssuesEvent
|
2019-10-10 02:02:54
|
AcademySoftwareFoundation/OpenCue
|
https://api.github.com/repos/AcademySoftwareFoundation/OpenCue
|
closed
|
Release description body should be a changelist
|
process
|
We can construct the changelist from the list of commits in master since the last release.
|
1.0
|
Release description body should be a changelist - We can construct the changelist from the list of commits in master since the last release.
|
process
|
release description body should be a changelist we can construct the changelist from the list of commits in master since the last release
| 1
|
23,745
| 4,044,648,242
|
IssuesEvent
|
2016-05-21 13:17:27
|
Joklost/P4-Code
|
https://api.github.com/repos/Joklost/P4-Code
|
opened
|
Test control structures in Simulations
|
solved - needs testing
|
It should hopefully work after commit #dda3687, but requires testing
|
1.0
|
Test control structures in Simulations - It should hopefully work after commit #dda3687, but requires testing
|
non_process
|
test control structures in simulations it should hopefully work after commit but requires testing
| 0
|
19,643
| 14,368,023,432
|
IssuesEvent
|
2020-12-01 07:44:03
|
argoproj/argo-cd
|
https://api.github.com/repos/argoproj/argo-cd
|
closed
|
Provide tooling to test & debug RBAC policies
|
component:rbac enhancement type:supportability type:usability
|
# Summary
We should provide a mechanism/tool so that users can test & validate RBAC policies before configuring them in live Argo CD instances.
# Motivation
RBAC policies currently can be hard to understand, test and troubleshoot. There is no convenient way for users to check whether the RBAC policies they configure actually do what they expect them to do. This can lead to frustration, as well as insecure or unexpected configurations.
# Proposal
We should provide an easy way for users to test & troubleshoot RBAC policies. Currently, what we provide is the `argocd account can-i` command that users can use to check whether they are authorized for certain operations on given resources in the live system. The limit with that is a) that it's working on the live system and b) checks permissions for the logged in user only.
What we need is a similar tooling that is not running against the API, but work on local policy files.
I can imagine this built into `argocd-util`, i.e. to check whether a group `somegroup` can perform `get` actions for all `applications` resources from a policy in `~/policy.csv`:
```
argocd-util rbac can somegroup get applications '*' --policy-file ~/policy.csv
```
Additionally, if the user has connection to K8s cluster and appropriate permissions, tooling could read-out the current live policy and test against it to answer certain questions about live state (i.e. user calling saying "I cannot create application, I get permission denied")
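The offline `argocd-util rbac can` check proposed above could look roughly like the sketch below. Note this is a deliberately simplified illustration, not Argo CD's implementation (which is casbin-based): it handles only `p, subject, resource, action, object, effect` and `g, member, role` rows from a local policy CSV, with glob matching:

```python
import csv
import fnmatch
import io

def can(policy_csv, subject, action, resource, obj="*"):
    """Simplified offline check of an Argo CD-style RBAC policy.

    Parses `p` (permission) and `g` (group membership) rows, expands
    group membership to a fixed point, then glob-matches the request
    against each permission row. `deny` rows win over `allow` rows.
    """
    edges, perms = [], []
    for row in csv.reader(io.StringIO(policy_csv)):
        row = [c.strip() for c in row]
        if not row or not row[0] or row[0].startswith("#"):
            continue
        if row[0] == "g":
            edges.append((row[1], row[2]))      # member -> role
        elif row[0] == "p":
            perms.append(row[1:])               # sub, res, act, obj, effect
    # Expand transitive group/role membership for the subject.
    subjects, changed = {subject}, True
    while changed:
        changed = False
        for member, role in edges:
            if member in subjects and role not in subjects:
                subjects.add(role)
                changed = True
    allowed = False
    for sub, res, act, pobj, *rest in perms:
        effect = rest[0] if rest else "allow"
        if (sub in subjects
                and fnmatch.fnmatch(resource, res)
                and fnmatch.fnmatch(action, act)
                and fnmatch.fnmatch(obj, pobj)):
            if effect == "deny":
                return False
            allowed = True
    return allowed
```

For example, `can(policy, "somegroup", "get", "applications", "default/guestbook")` answers the same question as the proposed `argocd-util rbac can somegroup get applications ...` invocation, against a local `policy.csv` instead of the live API.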
|
True
|
Provide tooling to test & debug RBAC policies - # Summary
We should provide a mechanism/tool so that users can test & validate RBAC policies before configuring them in live Argo CD instances.
# Motivation
RBAC policies currently can be hard to understand, test and troubleshoot. There is no convenient way for users to check whether the RBAC policies they configure actually do what they expect them to do. This can lead to frustration, as well as insecure or unexpected configurations.
# Proposal
We should provide an easy way for users to test & troubleshoot RBAC policies. Currently, what we provide is the `argocd account can-i` command that users can use to check whether they are authorized for certain operations on given resources in the live system. The limit with that is a) that it's working on the live system and b) checks permissions for the logged in user only.
What we need is a similar tooling that is not running against the API, but work on local policy files.
I can imagine this built into `argocd-util`, i.e. to check whether a group `somegroup` can perform `get` actions for all `applications` resources from a policy in `~/policy.csv`:
```
argocd-util rbac can somegroup get applications '*' --policy-file ~/policy.csv
```
Additionally, if the user has connection to K8s cluster and appropriate permissions, tooling could read-out the current live policy and test against it to answer certain questions about live state (i.e. user calling saying "I cannot create application, I get permission denied")
|
non_process
|
provide tooling to test debug rbac policies summary we should provide a mechanism tool so that users can test validate rbac policies before configuring them in live argo cd instances motivation rbac policies currently can be hard to understand test and troubleshoot there is no convenient way for users to check whether the rbac policies they configure actually do what they expect them to do this can lead to frustration as well as insecure or unexpected configurations proposal we should provide an easy way for users to test troubleshoot rbac policies currently what we provide is the argocd account can i command that users can use to check whether they are authorized for certain operations on given resources in the live system the limit with that is a that it s working on the live system and b checks permissions for the logged in user only what we need is a similar tooling that is not running against the api but work on local policy files i can imagine this built into argocd util i e to check whether a group somegroup can perform get actions for all applications resources from a policy in policy csv argocd util rbac can somegroup get applications policy file policy csv additionally if the user has connection to cluster and appropriate permissions tooling could read out the current live policy and test against it to answer certain questions about live state i e user calling saying i cannot create application i get permission denied
| 0
|
787,034
| 27,702,503,042
|
IssuesEvent
|
2023-03-14 09:03:38
|
concretecms/concretecms
|
https://api.github.com/repos/concretecms/concretecms
|
closed
|
Canonical Tag on alias page should show original page url
|
Type:Bug Status:Available Bug Priority:Low Product Areas:Sitemap
|
### Affected Version of Concrete CMS
9.x
### Description
Where aliases are used to show pages in more than one location and canonical tags are enabled, both the original and the aliased pages declare themselves as canonical.
This behaviour is wrong - only one page should be canonical, and it will result in 'Non-canonical URL' errors in SemRush and other SEO optimization tools.
### How to reproduce
Create an alias of a page, turn on 'Add a <meta rel="canonical" href="..."> tag to the site pages.' in URLs and Redirection and compare page sources.
### Possible Solution
_No response_
### Additional Context
_No response_
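The reported behaviour can be detected mechanically. The sketch below (an illustration, not part of Concrete CMS) parses a page's canonical tag with the standard library and flags an alias page that wrongly declares itself as canonical instead of pointing at the original URL:

```python
from html.parser import HTMLParser

class CanonicalFinder(HTMLParser):
    """Collect the href of any <link rel="canonical"> tag in a page."""
    def __init__(self):
        super().__init__()
        self.canonical = None

    def handle_starttag(self, tag, attrs):
        a = dict(attrs)
        if tag == "link" and a.get("rel") == "canonical":
            self.canonical = a.get("href")

def canonical_of(html):
    parser = CanonicalFinder()
    parser.feed(html)
    return parser.canonical

def alias_is_self_canonical(alias_url, alias_html):
    """True if an alias page declares its own URL as canonical --
    the buggy behaviour described in this issue."""
    return canonical_of(alias_html) == alias_url
```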
|
1.0
|
Canonical Tag on alias page should show original page url - ### Affected Version of Concrete CMS
9.x
### Description
Where aliases are used to show pages in more than one location and canonical tags are enabled, both the original and the aliased pages declare themselves as canonical.
This behaviour is wrong - only one page should be canonical, and it will result in 'Non-canonical URL' errors in SemRush and other SEO optimization tools.
### How to reproduce
Create an alias of a page, turn on 'Add a <meta rel="canonical" href="..."> tag to the site pages.' in URLs and Redirection and compare page sources.
### Possible Solution
_No response_
### Additional Context
_No response_
|
non_process
|
canonical tag on alias page should show original page url affected version of concrete cms x description where aliases are used to show pages in more than one location and canonical tags are on the original and the aliased pages declare themselves as canonical this behaviour is wrong only one page should be canonical and will result in non canonical url errors in semrush and other seo optimization tools how to reproduce create an alias of a page turn on add a tag to the site pages in urls and redirection and compare page sources possible solution no response additional context no response
| 0
|