Dataset preview. Column summary (dtype, with the observed value range, string-length range, or distinct-class count), followed by sample rows:

| column | dtype | observed stats |
|---|---|---|
| `Unnamed: 0` | int64 | 0 – 832k |
| `id` | float64 | 2.49B – 32.1B |
| `type` | string | 1 class (`IssuesEvent`) |
| `created_at` | string | fixed length 19 |
| `repo` | string | length 7 – 112 |
| `repo_url` | string | length 36 – 141 |
| `action` | string | 3 classes (`closed` and `opened` appear below) |
| `title` | string | length 1 – 744 |
| `labels` | string | length 4 – 574 |
| `body` | string | length 9 – 211k |
| `index` | string | 10 classes |
| `text_combine` | string | length 96 – 211k |
| `label` | string | 2 classes (`process`, `non_process`) |
| `text` | string | length 96 – 188k |
| `binary_label` | int64 | 0 – 1 |
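The sample rows suggest that `binary_label` is simply `label` encoded as an integer. A minimal sketch of that apparent rule, checked against two rows transcribed (with fields abbreviated) from the preview; this is an inference from the samples, not documented dataset behaviour:

```python
# Two rows transcribed from the preview (fields abbreviated).
rows = [
    {"id": 8_972_798_251, "label": "process", "binary_label": 1},
    {"id": 8_526_472_217, "label": "non_process", "binary_label": 0},
]

def binary_from_label(label: str) -> int:
    # Apparent encoding in the preview: "process" -> 1, "non_process" -> 0.
    return 1 if label == "process" else 0

for row in rows:
    assert binary_from_label(row["label"]) == row["binary_label"]
```

Every sample row shown below is consistent with this mapping.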
**Row 6,114**
**id:** 8,972,798,251
**type:** IssuesEvent
**created_at:** 2019-01-29 19:15:28
**repo:** material-components/material-components-ios
**repo_url:** https://api.github.com/repos/material-components/material-components-ios
**action:** closed
**title:** [ActionSheet] Theme examples using Theming Extensions
**labels:** [ActionSheet] type:Process
**body:**
This was filed as an internal issue. If you are a Googler, please visit [b/123234713](http://b/123234713) for more details.
<!-- Auto-generated content below, do not modify -->
---
#### Internal data
- Associated internal bug: [b/123234713](http://b/123234713)
**index:** 1.0
**text_combine:** title and body concatenated (verbatim duplicate of the `title` and `body` fields above)
**label:** process
**text:**
theme examples using theming extensions this was filed as an internal issue if you are a googler please visit for more details internal data associated internal bug
**binary_label:** 1

---
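Comparing each row's `text` value with its `title` and `body` suggests the `text` column was produced by stripping bracketed tags, URLs, digits, and punctuation, then lowercasing. A sketch of that inferred pipeline; the dataset's actual preprocessing code is not shown in this preview, so treat the regexes as assumptions:

```python
import re

def clean_text(raw: str) -> str:
    """Approximate the dataset's `text` column (inferred, not documented)."""
    s = re.sub(r"\[[^\]]*\]", " ", raw)   # drop [bracketed] tags and link text
    s = re.sub(r"https?://\S+", " ", s)   # drop URLs
    s = re.sub(r"[^A-Za-z]+", " ", s)     # keep letters only
    return re.sub(r"\s+", " ", s).strip().lower()

print(clean_text("[ActionSheet] Theme examples using Theming Extensions"))
# -> theme examples using theming extensions
```

The output above matches the start of Row 6,114's `text` value, which is what motivates the bracket-stripping step.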
**Row 94,873**
**id:** 8,526,472,217
**type:** IssuesEvent
**created_at:** 2018-11-02 16:20:59
**repo:** SME-Issues/issues
**repo_url:** https://api.github.com/repos/SME-Issues/issues
**action:** closed
**title:** Query Balance Tests Comprehension Partial - 01/11/2018 - 5004
**labels:** NLP Api pulse_tests
**body:**
**Query Balance Tests Comprehension Partial**
- Total: 11
- Passed: 6
- **Full Pass: 6 (55%)**
- Not Understood: 0
- Failed but Understood: 5 (45%)
**index:** 1.0
**text_combine:** title and body concatenated (verbatim duplicate of the `title` and `body` fields above)
**label:** non_process
**text:**
query balance tests comprehension partial query balance tests comprehension partial total passed full pass not understood failed but understood
**binary_label:** 0

---
**Row 8,321**
**id:** 11,487,748,330
**type:** IssuesEvent
**created_at:** 2020-02-11 12:37:08
**repo:** darktable-org/darktable
**repo_url:** https://api.github.com/repos/darktable-org/darktable
**action:** closed
**title:** [mask refinement] feathering radius weird artifacts
**labels:** bug: pending priority: high reproduce: confirmed scope: image processing understood: clear
**body:**
**Describe the bug**
When mask opacity has been set to 1,00 in a drawn mask, feathering radius creates strange lines, and they are actually in the mask than
I know it makes no sense in a drawn mask only, but drawn and parametric, it does, typically L-Mask). Just the error also appears with drawn only, so we can concentrate on that.
**To Reproduce**
1. Go to any module, e.g. exposer
2. create a drawn mask
3. change opacity to 1,00
4. change feathering radius to e.g. 2,4
**Expected behavior**
Mask should be handled propperly, as it is, when mask opacity is not changed
**Screenshots**

**Platform (please complete the following information):**
- Darktable Version: **3.0.0** and **current master**
- OS: **Gentoo Linux**
- OpenCL activated or no? **on and off**
- Which graphics card and driver version **2x nvidia GTX1060; nvidia-drivers 440.44-r1**
**Additional context**
- Can you reproduce with another Darktable version? **Yes**
- Can you reproduce with a RAW or Jpeg or both? **RAW**
- Are the steps above reproduce with a fresh edit (removing history)? **Yes**
- Attach an XMP if this is necessary
[2019-12-14_124134__EM50100.ORF.xmp.tar.gz](https://github.com/darktable-org/darktable/files/4021953/2019-12-14_124134__EM50100.ORF.xmp.tar.gz)
- Did you compile Darktable yourself? If so which compiler was used, with what options? **x86_64-pc-linux-gnu-9.2.0 CMAKE_BUILD_TYPE="Release" CFLAGS="-march=native -O3 -mtune=native -pipe"**
- Is the issue still present using an empty/new config-dir **Yes**
**index:** 1.0
**text_combine:** title and body concatenated (verbatim duplicate of the `title` and `body` fields above)
**label:** process
**text:**
feathering radius weird artifacts describe the bug when mask opacity has been set to in a drawn mask feathering radius creates strange lines and they are actually in the mask than i know it makes no sense in a drawn mask only but drawn and parametric it does typically l mask just the error also appears with drawn only so we can concentrate on that to reproduce go to any module e g exposer create a drawn mask change opacity to change feathering radius to e g expected behavior mask should be handled propperly as it is when mask opacity is not changed screenshots platform please complete the following information darktable version and current master os gentoo linux opencl activated or no on and off which graphics card and driver version nvidia nvidia drivers additional context can you reproduce with another darktable version yes can you reproduce with a raw or jpeg or both raw are the steps above reproduce with a fresh edit removing history yes attach an xmp if this is necessary did you compile darktable yourself if so which compiler was used with what options pc linux gnu cmake build type release cflags march native mtune native pipe is the issue still present using an empty new config dir yes
**binary_label:** 1

---
**Row 431,161**
**id:** 12,475,906,727
**type:** IssuesEvent
**created_at:** 2020-05-29 12:32:31
**repo:** cilium/cilium
**repo_url:** https://api.github.com/repos/cilium/cilium
**action:** closed
**title:** Cilium drops pod traffic that should be allowed by policy (due to CIDR / FQDN identity)
**labels:** kind/bug kind/community-report kind/regression needs-backport/1.7 priority/release-blocker
**body:**
## Summary
Cilium 1.7.3. Issue reported from the community during upgrade testing from v1.6.x.
Kernel 4.15.
Connectivity from pod to pod is rejected by policy despite the policy allowing that traffic.
## Symptoms
Cilium monitor reports drops for traffic that should be allowed by policy:
```
# cilium monitor --type=drop
Listening for events on 16 CPUs with 64x4096 of shared memory
Press Ctrl-C to quit
level=info msg="Initializing dissection cache..." subsys=monitor
xx drop (Policy denied) flow 0xb6b3b6b3 to endpoint 1585, identity 16777388->33623: 10.0.x.y:55454 -> 10.0.a.b:c tcp SYN
```
Source ip `10.0.x.y` Userspace report of ipcache reports that the IP is mapped to a pod identity:
```
# cilium map get cilium_ipcache | grep 10.0.x.y
10.0.x.y/32 6464 0 10.0.x.y sync
```
BPF map dump fails due to lack of kernel support:
```
# cilium bpf ipcache list
error dumping contents of map: Unable to get next key from map with file descriptor 5: errno 524
```
The IP also appears in the identity list as a CIDR identity:
```
cilium-identity-list.md:16777388 cidr:10.0.x.y/32
```
Furthermore it is listed in `cilium fqdn cache list`:
```
cilium-fqdn-cache-list.md:3906 lookup foo.namespace.svc.cluster.local. 3600 2020-05-12T19:02:17.413Z 10.0.x.y
```
Cilium status reports that the ipcache-bpf-garbage-collection controller has recently run successfully on the node, so userspace should be in sync with the datapath:
```
ipcache-bpf-garbage-collection 1m47s ago never 0 no error
```
Endpoint was regenerated regularly per expected ipcache GC controller behaviour:
```
$ grep -e "Endpoint Log" -e "ipcache" cilium-debuginfo-20200512-180805.689+0000-UTC.md | grep -A 200 1585 | head -n+8
#### Endpoint Log 1585
2020-05-12T18:06:16Z OK waiting-to-regenerate Successfully regenerated endpoint program (Reason: datapath ipcache)
2020-05-12T18:06:13Z OK regenerating Regenerating endpoint: datapath ipcache
2020-05-12T18:06:13Z OK waiting-to-regenerate Triggering endpoint regeneration due to datapath ipcache
2020-05-12T18:01:12Z OK waiting-to-regenerate Successfully regenerated endpoint program (Reason: datapath ipcache)
2020-05-12T18:01:10Z OK regenerating Regenerating endpoint: datapath ipcache
2020-05-12T18:01:08Z OK waiting-to-regenerate Triggering endpoint regeneration due to datapath ipcache
2020-05-12T17:56:06Z OK ready Successfully regenerated endpoint program (Reason: datapath ipcache)
```
If we look more broadly at the regenerations occurring around the time of the ipcache-triggered regenerations, we also see other reasons and there is not a 1-1 correlation between triggers and moving the endpoint into "ready" state, for example:
```
2020-05-12T18:06:27Z OK waiting-to-regenerate Triggering endpoint regeneration due to one or more identities created or deleted
2020-05-12T18:06:16Z OK ready Successfully regenerated endpoint program (Reason: one or more identities created or deleted)
2020-05-12T18:06:16Z OK ready Completed endpoint regeneration with no pending regeneration requests
2020-05-12T18:06:16Z OK regenerating Regenerating endpoint: one or more identities created or deleted
2020-05-12T18:06:16Z OK waiting-to-regenerate Successfully regenerated endpoint program (Reason: datapath ipcache)
2020-05-12T18:06:16Z OK waiting-to-regenerate Triggering endpoint regeneration due to one or more identities created or deleted
2020-05-12T18:06:13Z OK regenerating Regenerating endpoint: datapath ipcache
2020-05-12T18:06:13Z OK waiting-to-regenerate Triggering endpoint regeneration due to datapath ipcache
2020-05-12T18:05:33Z OK ready Successfully regenerated endpoint program (Reason: one or more identities created or deleted)
```
**index:** 1.0
**text_combine:** title and body concatenated (verbatim duplicate of the `title` and `body` fields above)
**label:** non_process
**text:**
cilium drops pod traffic that should be allowed by policy due to cidr fqdn identity summary cilium issue reported from the community during upgrade testing from x kernel connectivity from pod to pod is rejected by policy despite the policy allowing that traffic symptoms cilium monitor reports drops for traffic that should be allowed by policy cilium monitor type drop listening for events on cpus with of shared memory press ctrl c to quit level info msg initializing dissection cache subsys monitor xx drop policy denied flow to endpoint identity x y a b c tcp syn source ip x y userspace report of ipcache reports that the ip is mapped to a pod identity cilium map get cilium ipcache grep x y x y x y sync bpf map dump fails due to lack of kernel support cilium bpf ipcache list error dumping contents of map unable to get next key from map with file descriptor errno the ip also appears in the identity list as a cidr identity cilium identity list md cidr x y furthermore it is listed in cilium fqdn cache list cilium fqdn cache list md lookup foo namespace svc cluster local x y cilium status reports that the ipcache bpf garbage collection controller has recently run successfully on the node so userspace should be in sync with the datapath ipcache bpf garbage collection ago never no error endpoint was regenerated regularly per expected ipcache gc controller behaviour grep e endpoint log e ipcache cilium debuginfo utc md grep a head n endpoint log ok waiting to regenerate successfully regenerated endpoint program reason datapath ipcache ok regenerating regenerating endpoint datapath ipcache ok waiting to regenerate triggering endpoint regeneration due to datapath ipcache ok waiting to regenerate successfully regenerated endpoint program reason datapath ipcache ok regenerating regenerating endpoint datapath ipcache ok waiting to regenerate triggering endpoint regeneration due to datapath ipcache ok ready successfully regenerated endpoint program reason datapath ipcache if we 
look more broadly at the regenerations occurring around the time of the ipcache triggered regenerations we also see other reasons and there is not a correlation between triggers and moving the endpoint into ready state for example ok waiting to regenerate triggering endpoint regeneration due to one or more identities created or deleted ok ready successfully regenerated endpoint program reason one or more identities created or deleted ok ready completed endpoint regeneration with no pending regeneration requests ok regenerating regenerating endpoint one or more identities created or deleted ok waiting to regenerate successfully regenerated endpoint program reason datapath ipcache ok waiting to regenerate triggering endpoint regeneration due to one or more identities created or deleted ok regenerating regenerating endpoint datapath ipcache ok waiting to regenerate triggering endpoint regeneration due to datapath ipcache ok ready successfully regenerated endpoint program reason one or more identities created or deleted
**binary_label:** 0

---
**Row 350,564**
**id:** 10,492,236,720
**type:** IssuesEvent
**created_at:** 2019-09-25 12:54:40
**repo:** conan-community/community
**repo_url:** https://api.github.com/repos/conan-community/community
**action:** closed
**title:** [boost] building boost with python builds boost_numpy
**labels:** complex: high priority: medium stage: queue type: bug
**body:**
### Description of Problem, Request, or Question
building boost with python builds boost_numpy depending on if numpy happens to be installed on the building machine.
boost/1.69.0@conan/stable
**index:** 1.0
**text_combine:** title and body concatenated (verbatim duplicate of the `title` and `body` fields above)
**label:** non_process
**text:**
building boost with python builds boost numpy description of problem request or question building boost with python builds boost numpy depending on if numpy happens to be installed on the building machine boost conan stable
**binary_label:** 0

---
**Row 220,624**
**id:** 24,565,341,057
**type:** IssuesEvent
**created_at:** 2022-10-13 02:06:22
**repo:** phunware/react-select
**repo_url:** https://api.github.com/repos/phunware/react-select
**action:** opened
**title:** CVE-2022-37617 (Medium) detected in browserify-shim-3.8.14.tgz
**labels:** security vulnerability
**body:**
## CVE-2022-37617 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>browserify-shim-3.8.14.tgz</b></p></summary>
<p>Makes CommonJS-incompatible modules browserifyable.</p>
<p>Library home page: <a href="https://registry.npmjs.org/browserify-shim/-/browserify-shim-3.8.14.tgz">https://registry.npmjs.org/browserify-shim/-/browserify-shim-3.8.14.tgz</a></p>
<p>Path to dependency file: /package.json</p>
<p>Path to vulnerable library: /node_modules/browserify-shim/package.json</p>
<p>
Dependency Hierarchy:
- react-component-gulp-tasks-0.7.7.tgz (Root Library)
- :x: **browserify-shim-3.8.14.tgz** (Vulnerable Library)
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Prototype pollution vulnerability in function resolveShims in resolve-shims.js in thlorenz browserify-shim 3.8.15 via the k variable in resolve-shims.js.
<p>Publish Date: 2022-10-11
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2022-37617>CVE-2022-37617</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: Required
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
**index:** True
**text_combine:** title and body concatenated (verbatim duplicate of the `title` and `body` fields above)
**label:** non_process
**text:**
cve medium detected in browserify shim tgz cve medium severity vulnerability vulnerable library browserify shim tgz makes commonjs incompatible modules browserifyable library home page a href path to dependency file package json path to vulnerable library node modules browserify shim package json dependency hierarchy react component gulp tasks tgz root library x browserify shim tgz vulnerable library vulnerability details prototype pollution vulnerability in function resolveshims in resolve shims js in thlorenz browserify shim via the k variable in resolve shims js publish date url a href cvss score details base score metrics exploitability metrics attack vector local attack complexity low privileges required none user interaction required scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href step up your open source security game with mend
**binary_label:** 0

---
**Row 16,667**
**id:** 21,771,397,044
**type:** IssuesEvent
**created_at:** 2022-05-13 09:25:34
**repo:** camunda/feel-scala
**repo_url:** https://api.github.com/repos/camunda/feel-scala
**action:** opened
**title:** Learning material for the FEEL language
**labels:** type: documentation team/process-automation
**body:**
## Which documentation is missing/incorrect?
* Learning material that leads the reader through exploring the FEEL language.
This could be inspired by other great language guides (https://doc.rust-lang.org/book/, http://www.learnyouahaskell.com/, etc)
Currently, the documentation is more reference-based:
* Getting Started: https://camunda.github.io/feel-scala/docs/reference
* What is FEEL: https://camunda.github.io/feel-scala/docs/reference/what-is-feel
Affected versions:
* 1.15
**index:** 1.0
**text_combine:** title and body concatenated (verbatim duplicate of the `title` and `body` fields above)
**label:** process
**text:**
learning material for the feel language which documentation is missing incorrect learning material that leads the reader through exploring the feel language this could be inspired by other great language guides etc currently the documentation is more reference based getting started what is feel affected versions
**binary_label:** 1

---
**Row 297,838**
**id:** 9,182,295,528
**type:** IssuesEvent
**created_at:** 2019-03-05 12:29:13
**repo:** servicemesher/istio-official-translation
**repo_url:** https://api.github.com/repos/servicemesher/istio-official-translation
**action:** closed
**title:** content/about/contribute/writing-a-new-topic/index.md
**labels:** lang/zh pending priority/P0 sync/update version/1.1
**body:**
File path: content/about/contribute/writing-a-new-topic/index.md
[Source](https://github.com/istio/istio.github.io/tree/master/content/about/contribute/writing-a-new-topic/index.md)
[Page](https://istio.io//about/contribute/writing-a-new-topic/index.htm)
```diff
diff --git a/content/about/contribute/writing-a-new-topic/index.md b/content/about/contribute/writing-a-new-topic/index.md
index 03836859..7e92ac78 100644
--- a/content/about/contribute/writing-a-new-topic/index.md
+++ b/content/about/contribute/writing-a-new-topic/index.md
@@ -65,11 +65,6 @@ is the best fit for your content:
</tr>
</table>
-### About blog posts
-
-The Istio blog is intended to contain authoritative posts regarding Istio and technologies or products related to
-Istio. We generally do not publish user or enthusiast posts about using Istio.
-
## Naming a topic
Choose a title for your topic that has the keywords you want search engines to find.
@@ -128,25 +123,21 @@ Within markdown, use the following sequence to add the image:
{{< text html >}}
{{</* image width="75%"
link="./myfile.svg"
- alt="Alternate text to display when the image can't be loaded"
+ alt="Alternate text to display when the image is not available"
title="A tooltip displayed when hovering over the image"
caption="A caption displayed under the image"
*/>}}
{{< /text >}}
-The `link` and `caption` values are required, all other values are optional.
-
-If the `title` value isn't supplied, it'll default to the same as `caption`. If the `alt` value is not supplied, it'll
+The `width`, `link` and `caption` values are always required. If the image is a PNG or JPG file, then the
+`ratio` value is required. If the `title` value isn't
+supplied, it'll default to the same as `caption`. If the `alt` value is not supplied, it'll
default to `title` or if that's not defined, to `caption`.
`width` represents the percentage of space used by the image
-relative to the surrounding text. If the value is not specified, it
-defaults to 100$.
+relative to the surrounding text.
-`ratio` represents the ratio of the image height compared to the image width. This
-value is calculated automatically for any local image content, but must be calculated
-manually when referencing external image content.
-In that case, `ratio` must be manually calculated using (image height / image width) * 100.
+For PNG and JPG images, `ratio` must be manually calculated using (image height / image width) * 100.
## Adding icons & emojis
@@ -442,22 +433,6 @@ will use when the user chooses to download the file. For example:
If you don't specify the `downloadas` attribute, then the download name is taken from the `url`
attribute instead.
-## Embedding boilerplate text
-
-You can embed common boilerplate text into any markdown output using the `boilerplate` sequence:
-
-{{< text markdown >}}
-{{</* boilerplate example */>}}
-{{< /text >}}
-
-which results in:
-
-{{< boilerplate example >}}
-
-You supply the name of a boilerplate file to insert at the current location. Available boilerplates are
-located in the `boilerplates` directory. Boilerplates are just
-normal markdown files.
-
## Using tabs
If you have some content to display in a variety of formats, it is convenient to use a tab set and display each
```
**index:** 1.0
**text_combine:** title and body concatenated (verbatim duplicate of the `title` and `body` fields above)
non_process
|
content about contribute writing a new topic index md 文件路径:content about contribute writing a new topic index md diff diff git a content about contribute writing a new topic index md b content about contribute writing a new topic index md index a content about contribute writing a new topic index md b content about contribute writing a new topic index md is the best fit for your content about blog posts the istio blog is intended to contain authoritative posts regarding istio and technologies or products related to istio we generally do not publish user or enthusiast posts about using istio naming a topic choose a title for your topic that has the keywords you want search engines to find within markdown use the following sequence to add the image image width link myfile svg alt alternate text to display when the image can t be loaded alt alternate text to display when the image is not available title a tooltip displayed when hovering over the image caption a caption displayed under the image the link and caption values are required all other values are optional if the title value isn t supplied it ll default to the same as caption if the alt value is not supplied it ll the width link and caption values are always required if the image is a png or jpg file then the ratio value is required if the title value isn t supplied it ll default to the same as caption if the alt value is not supplied it ll default to title or if that s not defined to caption width represents the percentage of space used by the image relative to the surrounding text if the value is not specified it defaults to relative to the surrounding text ratio represents the ratio of the image height compared to the image width this value is calculated automatically for any local image content but must be calculated manually when referencing external image content in that case ratio must be manually calculated using image height image width for png and jpg images ratio must be manually calculated using 
image height image width adding icons emojis will use when the user chooses to download the file for example if you don t specify the downloadas attribute then the download name is taken from the url attribute instead embedding boilerplate text you can embed common boilerplate text into any markdown output using the boilerplate sequence which results in you supply the name of a boilerplate file to insert at the current location available boilerplates are located in the boilerplates directory boilerplates are just normal markdown files using tabs if you have some content to display in a variety of formats it is convenient to use a tab set and display each
| 0
|
143,301
| 21,995,887,603
|
IssuesEvent
|
2022-05-26 06:16:02
|
stores-cedcommerce/Anthony-Store-Design
|
https://api.github.com/repos/stores-cedcommerce/Anthony-Store-Design
|
opened
|
The spacing from the left and right side is not equal.
|
Header section Mobile Design / UI / UX
|
**Actual result:**
The spacing from the left and right side is not equal.

**Expected result:**
The spacing can be equal from the left and right side.
|
1.0
|
The spacing from the left and right side is not equal. - **Actual result:**
The spacing from the left and right side is not equal.

**Expected result:**
The spacing can be equal from the left and right side.
|
non_process
|
the spacing from the left and right side is not equal actual result the spacing from the left and right side is not equal expected result the spacing can be equal from the left and right side
| 0
|
722,814
| 24,874,771,351
|
IssuesEvent
|
2022-10-27 18:05:43
|
CarnegieLearningWeb/UpGrade
|
https://api.github.com/repos/CarnegieLearningWeb/UpGrade
|
closed
|
Batch import/export of experiment files
|
enhancement priority: low
|
It would be helpful to enable batch import/export of experiment json files to facilitate faster setup. Low priority though.
|
1.0
|
Batch import/export of experiment files - It would be helpful to enable batch import/export of experiment json files to facilitate faster setup. Low priority though.
|
non_process
|
batch import export of experiment files it would be helpful to enable batch import export of experiment json files to facilitate faster setup low priority though
| 0
|
433,244
| 30,320,197,013
|
IssuesEvent
|
2023-07-10 18:37:07
|
VerisimilitudeX/DNAnalyzer
|
https://api.github.com/repos/VerisimilitudeX/DNAnalyzer
|
closed
|
Design `Current Version` Documentation
|
documentation help wanted hacktoberfest-accepted no-issue-activity
|
**Is your feature request related to a problem? Please describe.**
We need a documentation file to display current version of the application. This will also be used with the CLI argument for version identification.
**Describe the solution you'd like**
Either some form of version control [like this](https://rebelsguidetopm.com/how-to-do-document-version-control/) or a basic file with the current version.
|
1.0
|
Design `Current Version` Documentation - **Is your feature request related to a problem? Please describe.**
We need a documentation file to display current version of the application. This will also be used with the CLI argument for version identification.
**Describe the solution you'd like**
Either some form of version control [like this](https://rebelsguidetopm.com/how-to-do-document-version-control/) or a basic file with the current version.
|
non_process
|
design current version documentation is your feature request related to a problem please describe we need a documentation file to display current version of the application this will also be used with the cli argument for version identification describe the solution you d like either some form of version control or a basic file with the current version
| 0
|
273,763
| 20,815,136,847
|
IssuesEvent
|
2022-03-18 09:26:03
|
rtsoft-gmbh/up2date-cpp
|
https://api.github.com/repos/rtsoft-gmbh/up2date-cpp
|
opened
|
Add docs for public api
|
documentation
|
Add usage commemnts for public api in ddi and dps modules
May be use docs-generator: https://github.com/pseudomuto/protoc-gen-doc ?
|
1.0
|
Add docs for public api - Add usage commemnts for public api in ddi and dps modules
May be use docs-generator: https://github.com/pseudomuto/protoc-gen-doc ?
|
non_process
|
add docs for public api add usage commemnts for public api in ddi and dps modules may be use docs generator
| 0
|
179,871
| 6,630,774,789
|
IssuesEvent
|
2017-09-25 02:21:46
|
Citadel-Station-13/Citadel-Station-13
|
https://api.github.com/repos/Citadel-Station-13/Citadel-Station-13
|
closed
|
Examining breaks when a ` is present in your flavortext
|
Bug Priority: CRITICAL
|
Basically, if a ` (backtick, NOT an apostrophe) is present in the flavortext, your character will be entirely unable to be examined
|
1.0
|
Examining breaks when a ` is present in your flavortext - Basically, if a ` (backtick, NOT an apostrophe) is present in the flavortext, your character will be entirely unable to be examined
|
non_process
|
examining breaks when a is present in your flavortext basically if a backtick not an apostrophe is present in the flavortext your character will be entirely unable to be examined
| 0
|
17,189
| 22,769,985,518
|
IssuesEvent
|
2022-07-08 09:05:05
|
geneontology/go-ontology
|
https://api.github.com/repos/geneontology/go-ontology
|
closed
|
Obsoletion notice: GO:0036472 suppression by virus of host protein-protein interaction
|
multi-species process
|
Dear all,
The proposal has been made to obsolete GO:0036472 suppression by virus of host protein-protein interaction.
The reason for obsoletion is that this term represents a molecular function (GO:0140311 protein sequestering activity or other type of GO:0098772 molecular function regulator activity).
There is a single annotation to this term (disputed in P2GO). There are no mappings; this term is not present in any subsets.
You can comment on the ticket: https://github.com/geneontology/go-ontology/issues/23645
Thanks, Pascale
|
1.0
|
Obsoletion notice: GO:0036472 suppression by virus of host protein-protein interaction - Dear all,
The proposal has been made to obsolete GO:0036472 suppression by virus of host protein-protein interaction.
The reason for obsoletion is that this term represents a molecular function (GO:0140311 protein sequestering activity or other type of GO:0098772 molecular function regulator activity).
There is a single annotation to this term (disputed in P2GO). There are no mappings; this term is not present in any subsets.
You can comment on the ticket: https://github.com/geneontology/go-ontology/issues/23645
Thanks, Pascale
|
process
|
obsoletion notice go suppression by virus of host protein protein interaction dear all the proposal has been made to obsolete go suppression by virus of host protein protein interaction the reason for obsoletion is that this term represents a molecular function go protein sequestering activity or other type of go molecular function regulator activity there is a single annotation to this term disputed in there are no mappings this term is not present in any subsets you can comment on the ticket thanks pascale
| 1
|
602,748
| 18,502,810,594
|
IssuesEvent
|
2021-10-19 15:17:57
|
tinkerbell/hook
|
https://api.github.com/repos/tinkerbell/hook
|
closed
|
Push-based publish job is failing
|
kind/bug priority/important-soon
|
## Expected Behaviour
Current publish job is referencing a non-existing s3 bucket and does not have credentials, example failure: https://github.com/tinkerbell/hook/actions/runs/559080308
|
1.0
|
Push-based publish job is failing - ## Expected Behaviour
Current publish job is referencing a non-existing s3 bucket and does not have credentials, example failure: https://github.com/tinkerbell/hook/actions/runs/559080308
|
non_process
|
push based publish job is failing expected behaviour current publish job is referencing a non existing bucket and does not have credentials example failure
| 0
|
17,735
| 23,651,255,527
|
IssuesEvent
|
2022-08-26 06:53:53
|
openxla/stablehlo
|
https://api.github.com/repos/openxla/stablehlo
|
closed
|
Prefix includes with "stablehlo"
|
Process
|
At the moment, our includes are not prefixed with the project name, e.g. `#include dialect/ChloOps.h`. Now that StableHLO is starting getting used, we should change that to `#include stablehlo/dialect/ChloOps.h`.
|
1.0
|
Prefix includes with "stablehlo" - At the moment, our includes are not prefixed with the project name, e.g. `#include dialect/ChloOps.h`. Now that StableHLO is starting getting used, we should change that to `#include stablehlo/dialect/ChloOps.h`.
|
process
|
prefix includes with stablehlo at the moment our includes are not prefixed with the project name e g include dialect chloops h now that stablehlo is starting getting used we should change that to include stablehlo dialect chloops h
| 1
|
551,350
| 16,166,641,440
|
IssuesEvent
|
2021-05-01 16:29:06
|
sopra-fs21-group-13/Server
|
https://api.github.com/repos/sopra-fs21-group-13/Server
|
closed
|
Functionality to add members to a learnSet
|
high priority task
|
Duration: 6h
Add new dependencies between user and set
Create easy access to memberList and functionality to add members and remove them
#6
|
1.0
|
Functionality to add members to a learnSet - Duration: 6h
Add new dependencies between user and set
Create easy access to memberList and functionality to add members and remove them
#6
|
non_process
|
functionality to add members to a learnset duration add new dependencies between user and set create easy access to memberlist and functionality to add members and remove them
| 0
|
22,135
| 30,679,908,236
|
IssuesEvent
|
2023-07-26 08:28:32
|
MicrosoftDocs/azure-docs
|
https://api.github.com/repos/MicrosoftDocs/azure-docs
|
closed
|
Create Automation PowerShell runbook using managed identity
|
automation/svc triaged cxp product-question process-automation/subsvc Pri2
|
[Enter feedback here]
While creating Automation Powershell runbook using managed identities, this particular solution doesn't enable the System assigned identity, as a result of which there is no ObjectId associated and hence role assignment fails.
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: 8a8470c7-57d1-e2ec-cc70-a43c8dfc42d6
* Version Independent ID: 2da6432e-e642-10ae-199c-9ebb1e19a5d8
* Content: [Create PowerShell runbook using managed identity in Azure Automation](https://docs.microsoft.com/en-us/azure/automation/learn/powershell-runbook-managed-identity)
* Content Source: [articles/automation/learn/powershell-runbook-managed-identity.md](https://github.com/MicrosoftDocs/azure-docs/blob/master/articles/automation/learn/powershell-runbook-managed-identity.md)
* Service: **automation**
* Sub-service: **process-automation**
* GitHub Login: @MGoedtel
* Microsoft Alias: **magoedte**
|
1.0
|
Create Automation PowerShell runbook using managed identity -
[Enter feedback here]
While creating Automation Powershell runbook using managed identities, this particular solution doesn't enable the System assigned identity, as a result of which there is no ObjectId associated and hence role assignment fails.
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: 8a8470c7-57d1-e2ec-cc70-a43c8dfc42d6
* Version Independent ID: 2da6432e-e642-10ae-199c-9ebb1e19a5d8
* Content: [Create PowerShell runbook using managed identity in Azure Automation](https://docs.microsoft.com/en-us/azure/automation/learn/powershell-runbook-managed-identity)
* Content Source: [articles/automation/learn/powershell-runbook-managed-identity.md](https://github.com/MicrosoftDocs/azure-docs/blob/master/articles/automation/learn/powershell-runbook-managed-identity.md)
* Service: **automation**
* Sub-service: **process-automation**
* GitHub Login: @MGoedtel
* Microsoft Alias: **magoedte**
|
process
|
create automation powershell runbook using managed identity while creating automation powershell runbook using managed identities this particular solution doesn t enable the system assigned identity as a result of which there is no objectid associated and hence role assignment fails document details ⚠ do not edit this section it is required for docs microsoft com ➟ github issue linking id version independent id content content source service automation sub service process automation github login mgoedtel microsoft alias magoedte
| 1
|
17,787
| 23,714,837,929
|
IssuesEvent
|
2022-08-30 10:52:26
|
wp-media/wp-rocket
|
https://api.github.com/repos/wp-media/wp-rocket
|
closed
|
Use Action Scheduler for background processing
|
needs: testing module: preload module: database feature request tool: background process priority: medium status: blocked needs: r&d
|
Right now the primary issue with the delicious brains batch processing is it stores everything in wp_options and tries to use admin-ajax to keep the queue going. The preloader is also badly designed in my opinion such that all URLs are stored as a single batch vs individual.
The problem with that is no way to detect for the actual progress of the queue beside the simple options based counter.
the latest action scheduler (https://actionscheduler.org) stores to a dedicated table and has integrated delicious brains async class, but it does not rely on it 100%, and you can still process everything individually.
This overall would provide better scalability and performance in both shared hosting and more advanced setups/use cases.
I have had to write a plugin myself not too long ago that hooks into the WP HTTP API to abort the ajax callbacks from the preloader and force everything on the CLI/cron, and I would like to see that not be needed truthfully.
Thanks :)
|
1.0
|
Use Action Scheduler for background processing - Right now the primary issue with the delicious brains batch processing is it stores everything in wp_options and tries to use admin-ajax to keep the queue going. The preloader is also badly designed in my opinion such that all URLs are stored as a single batch vs individual.
The problem with that is no way to detect for the actual progress of the queue beside the simple options based counter.
the latest action scheduler (https://actionscheduler.org) stores to a dedicated table and has integrated delicious brains async class, but it does not rely on it 100%, and you can still process everything individually.
This overall would provide better scalability and performance in both shared hosting and more advanced setups/use cases.
I have had to write a plugin myself not too long ago that hooks into the WP HTTP API to abort the ajax callbacks from the preloader and force everything on the CLI/cron, and I would like to see that not be needed truthfully.
Thanks :)
|
process
|
use action scheduler for background processing right now the primary issue with the delicious brains batch processing is it stores everything in wp options and tries to use admin ajax to keep the queue going the preloader is also badly designed in my opinion such that all urls are stored as a single batch vs individual the problem with that is no way to detect for the actual progress of the queue beside the simple options based counter the latest action scheduler stores to a dedicated table and has integrated delicious brains async class but it does not rely on it and you can still process everything individually this overall would provide better scalability and performance in both shared hosting and more advanced setups use cases i have had to write a plugin myself not too long ago that hooks into the wp http api to abort the ajax callbacks from the preloader and force everything on the cli cron and i would like to see that not be needed truthfully thanks
| 1
|
620,079
| 19,548,168,647
|
IssuesEvent
|
2022-01-02 08:48:06
|
WA-WF-Bot/WA-WF-Bot-Public
|
https://api.github.com/repos/WA-WF-Bot/WA-WF-Bot-Public
|
opened
|
Change bug report option
|
priority-1
|
The command !bug should create the following message:
You can report bugs to the following mail:connect@toinbox.org
.Thank you for helping us make ToInbox better!
|
1.0
|
Change bug report option - The command !bug should create the following message:
You can report bugs to the following mail:connect@toinbox.org
.Thank you for helping us make ToInbox better!
|
non_process
|
change bug report option the command bug should create the following message you can report bugs to the following mail connect toinbox org thank you for helping us make toinbox better
| 0
|
23,528
| 3,834,753,711
|
IssuesEvent
|
2016-04-01 11:21:31
|
jOOQ/jOOQ
|
https://api.github.com/repos/jOOQ/jOOQ
|
closed
|
Meta Table.getKeys() returns an empty list containing "null", if a table has no primary key
|
C: Functionality P: Medium R: Fixed T: Defect
|
Related issue: #5179
|
1.0
|
Meta Table.getKeys() returns an empty list containing "null", if a table has no primary key - Related issue: #5179
|
non_process
|
meta table getkeys returns an empty list containing null if a table has no primary key related issue
| 0
|
22,933
| 7,247,259,294
|
IssuesEvent
|
2018-02-15 01:36:26
|
jaagr/polybar
|
https://api.github.com/repos/jaagr/polybar
|
closed
|
xpp/proto build error on openSUSE tumbleweed
|
build
|
Hello
When I try to build polybar, I get this error. I have tried playing around with different dependencies and versions of polybar, but it just will not go away.
Error:
```
[ 1%] Generating ../../../lib/xpp/include/xpp/proto/x.hpp
Traceback (most recent call last):
File "/home/james/mysuseconfig/polybar/lib/xpp/generators/cpp_client.py", line 3163, in <module>
from xcbgen.state import Module
File "/usr/lib/python3.6/site-packages/xcbgen/state.py", line 7, in <module>
from xcbgen import matcher
File "/usr/lib/python3.6/site-packages/xcbgen/matcher.py", line 12, in <module>
from xcbgen.xtypes import *
File "/usr/lib/python3.6/site-packages/xcbgen/xtypes.py", line 1201, in <module>
class EventStruct(Union):
File "/usr/lib/python3.6/site-packages/xcbgen/xtypes.py", line 1219, in EventStruct
out = __main__.output['eventstruct']
KeyError: 'eventstruct'
make[2]: *** [lib/xpp/CMakeFiles/xpp.dir/build.make:70: ../lib/xpp/include/xpp/proto/x.hpp] Error 1
make[2]: *** Deleting file '../lib/xpp/include/xpp/proto/x.hpp'
make[1]: *** [CMakeFiles/Makefile2:402: lib/xpp/CMakeFiles/xpp.dir/all] Error 2
make: *** [Makefile:130: all] Error 2
```
James
|
1.0
|
xpp/proto build error on openSUSE tumbleweed - Hello
When I try to build polybar, I get this error. I have tried playing around with different dependencies and versions of polybar, but it just will not go away.
Error:
```
[ 1%] Generating ../../../lib/xpp/include/xpp/proto/x.hpp
Traceback (most recent call last):
File "/home/james/mysuseconfig/polybar/lib/xpp/generators/cpp_client.py", line 3163, in <module>
from xcbgen.state import Module
File "/usr/lib/python3.6/site-packages/xcbgen/state.py", line 7, in <module>
from xcbgen import matcher
File "/usr/lib/python3.6/site-packages/xcbgen/matcher.py", line 12, in <module>
from xcbgen.xtypes import *
File "/usr/lib/python3.6/site-packages/xcbgen/xtypes.py", line 1201, in <module>
class EventStruct(Union):
File "/usr/lib/python3.6/site-packages/xcbgen/xtypes.py", line 1219, in EventStruct
out = __main__.output['eventstruct']
KeyError: 'eventstruct'
make[2]: *** [lib/xpp/CMakeFiles/xpp.dir/build.make:70: ../lib/xpp/include/xpp/proto/x.hpp] Error 1
make[2]: *** Deleting file '../lib/xpp/include/xpp/proto/x.hpp'
make[1]: *** [CMakeFiles/Makefile2:402: lib/xpp/CMakeFiles/xpp.dir/all] Error 2
make: *** [Makefile:130: all] Error 2
```
James
|
non_process
|
xpp proto build error on opensuse tumbleweed hello when i try to build polybar i get this error i have tried playing around with different dependencies and versions of polybar but it just will not go away error generating lib xpp include xpp proto x hpp traceback most recent call last file home james mysuseconfig polybar lib xpp generators cpp client py line in from xcbgen state import module file usr lib site packages xcbgen state py line in from xcbgen import matcher file usr lib site packages xcbgen matcher py line in from xcbgen xtypes import file usr lib site packages xcbgen xtypes py line in class eventstruct union file usr lib site packages xcbgen xtypes py line in eventstruct out main output keyerror eventstruct make error make deleting file lib xpp include xpp proto x hpp make error make error james
| 0
|
441,983
| 12,735,724,040
|
IssuesEvent
|
2020-06-25 15:45:14
|
graknlabs/grakn
|
https://api.github.com/repos/graknlabs/grakn
|
closed
|
Incorrect Graql behaviour in some scenarios (minor issues)
|
priority: low type: bug
|
## Description
A number of minor issues have been found while crafting BDD scenarios, where the actual behaviour of Graql does not match the expected behaviour.
## Environment
1. OS (where Grakn server runs): Mac OS 10
2. Grakn version: Grakn Core 1.7.2
3. Grakn client: client-java
## Scenarios
### Scenario: define attribute subtype throws if you try to override 'value'
#### expected behaviour
given
```
define
name sub attribute, value string;
```
then `define code-name sub name, value long;` should fail with an error message.
#### actual behaviour
It does not throw an error, and defines the type `code-name`. (fixed)
### Scenario: define rule with an attribute value set in `then` that doesn't match the attribute's type throws on commit
#### expected behaviour
given
```
define
name sub attribute, value string;
nickname sub name;
person has nickname;
```
then `define may-has-nickname-5 sub rule, when { $p has name "May"; }, then { $p has nickname 5; };` should throw an error on commit, saying that the rule infers an attribute value that is of the incorrect type.
#### actual behaviour
It defines the rule successfully. It then lets you insert a person with name "May". If you then try to `match $n isa name; get;` this query throws an error, saying that Long cannot be cast to String. (raise issue, do not fix- too hard)
### Scenario: define rule that infers an abstract relation throws on commit
#### expected behaviour
When you define a rule that infers an abstract relation, it should throw on commit.
#### actual behaviour
It doesn't. (raise issue + fix)
### Scenario: define rule that infers an abstract attribute value throws on commit
#### expected behaviour
When you define a rule that infers the value of an abstract attribute, it should throw on commit.
#### actual behaviour
It doesn't. (raise issue + fix)
### Scenario: define a subrule throws on commit
#### expected behaviour
When you define a rule to `sub` another rule, it should throw on commit (or on define)
#### actual behaviour
It lets you define the rule. The rule works and functions normally, as if it was defined normally with `sub rule`. (fixed in 2.0)
### Scenario: assign new supertype with existing data succeeds if the supertypes play the same roles
#### expected behaviour
Given
```
define
bird sub entity, plays flier;
pigeon sub bird;
flying sub relation, relates flier;
insert $p isa pigeon;
define
animal sub entity, plays flier;
```
then `define pigeon sub animal;` should be allowed because `animal` and `bird` both play the same roles and have no other differences.
#### actual behaviour
The following error is thrown:
`Cannot change the super type {bird} to {animal} because {bird} is connected to role {flier} which {animal} is not connected to.` - SUCCEEDS with no data - this should be put into a test! Raise an issue, but don't fix yet; it's complicated.
### Scenario: assign new supertype with existing data succeeds if the supertypes have the same attributes
Similar to the above scenario, but where the two parent types each have an attribute ownership and nothing else, and the attributes they own are the same attribute. - Same notes as above scenario.
### Scenario: write a variable in a 'define' throws
#### expected behaviour
We should throw an exception if the user writes a variable in a 'define'.
#### actual behaviour
We don't. - Raise issue in graql repo, and fix.
|
1.0
|
Incorrect Graql behaviour in some scenarios (minor issues) - ## Description
A number of minor issues have been found while crafting BDD scenarios, where the actual behaviour of Graql does not match the expected behaviour.
## Environment
1. OS (where Grakn server runs): Mac OS 10
2. Grakn version: Grakn Core 1.7.2
3. Grakn client: client-java
## Scenarios
### Scenario: define attribute subtype throws if you try to override 'value'
#### expected behaviour
given
```
define
name sub attribute, value string;
```
then `define code-name sub name, value long;` should fail with an error message.
#### actual behaviour
It does not throw an error, and defines the type `code-name`. (fixed)
### Scenario: define rule with an attribute value set in `then` that doesn't match the attribute's type throws on commit
#### expected behaviour
given
```
define
name sub attribute, value string;
nickname sub name;
person has nickname;
```
then `define may-has-nickname-5 sub rule, when { $p has name "May"; }, then { $p has nickname 5; };` should throw an error on commit, saying that the rule infers an attribute value that is of the incorrect type.
#### actual behaviour
It defines the rule successfully. It then lets you insert a person with name "May". If you then try to `match $n isa name; get;` this query throws an error, saying that Long cannot be cast to String. (raise issue, do not fix- too hard)
### Scenario: define rule that infers an abstract relation throws on commit
#### expected behaviour
When you define a rule that infers an abstract relation, it should throw on commit.
#### actual behaviour
It doesn't. (raise issue + fix)
### Scenario: define rule that infers an abstract attribute value throws on commit
#### expected behaviour
When you define a rule that infers the value of an abstract attribute, it should throw on commit.
#### actual behaviour
It doesn't. (raise issue + fix)
### Scenario: define a subrule throws on commit
#### expected behaviour
When you define a rule to `sub` another rule, it should throw on commit (or on define)
#### actual behaviour
It lets you define the rule. The rule works and functions normally, as if it was defined normally with `sub rule`. (fixed in 2.0)
### Scenario: assign new supertype with existing data succeeds if the supertypes play the same roles
#### expected behaviour
Given
```
define
bird sub entity, plays flier;
pigeon sub bird;
flying sub relation, relates flier;
insert $p isa pigeon;
define
animal sub entity, plays flier;
```
then `define pigeon sub animal;` should be allowed because `animal` and `bird` both play the same roles and have no other differences.
#### actual behaviour
The following error is thrown:
`Cannot change the super type {bird} to {animal} because {bird} is connected to role {flier} which {animal} is not connected to.` - SUCCEEDS with no data - this should be put into a test! Raise an issue, but don't fix yet; it's complicated.
### Scenario: assign new supertype with existing data succeeds if the supertypes have the same attributes
Similar to the above scenario, but where the two parent types each have an attribute ownership and nothing else, and the attributes they own are the same attribute. - Same notes as above scenario.
### Scenario: write a variable in a 'define' throws
#### expected behaviour
We should throw an exception if the user writes a variable in a 'define'.
#### actual behaviour
We don't. - Raise issue in graql repo, and fix.
|
non_process
|
incorrect graql behaviour in some scenarios minor issues description a number of minor issues have been found while crafting bdd scenarios where the actual behaviour of graql does not match the expected behaviour environment os where grakn server runs mac os grakn version grakn core grakn client client java scenarios scenario define attribute subtype throws if you try to override value expected behaviour given define name sub attribute value string then define code name sub name value long should fail with an error message actual behaviour it does not throw an error and defines the type code name fixed scenario define rule with an attribute value set in then that doesn t match the attribute s type throws on commit expected behaviour given define name sub attribute value string nickname sub name person has nickname then define may has nickname sub rule when p has name may then p has nickname should throw an error on commit saying that the rule infers an attribute value that is of the incorrect type actual behaviour it defines the rule successfully it then lets you insert a person with name may if you then try to match n isa name get this query throws an error saying that long cannot be cast to string raise issue do not fix too hard scenario define rule that infers an abstract relation throws on commit expected behaviour when you define a rule that infers an abstract relation it should throw on commit actual behaviour it doesn t raise issue fix scenario define rule that infers an abstract attribute value throws on commit expected behaviour when you define a rule that infers the value of an abstract attribute it should throw on commit actual behaviour it doesn t raise issue fix scenario define a subrule throws on commit expected behaviour when you define a rule to sub another rule it should throw on commit or on define actual behaviour it lets you define the rule the rule works and functions normally as if it was defined normally with sub rule fixed in scenario assign 
new supertype with existing data succeeds if the supertypes play the same roles expected behaviour given define bird sub entity plays flier pigeon sub bird flying sub relation relates flier insert p isa pigeon define animal sub entity plays flier then define pigeon sub animal should be allowed because animal and bird both play the same roles and have no other differences actual behaviour the following error is thrown cannot change the super type bird to animal because bird is connected to role flier which animal is not connected to succeeds with no data this should be put into a test raise an issue but don t fix yet it s complicated scenario assign new supertype with existing data succeeds if the supertypes have the same attributes similar to the above scenario but where the two parent types each have an attribute ownership and nothing else and the attributes they own are the same attribute same notes as above scenario scenario write a variable in a define throws expected behaviour we should throw an exception if the user writes a variable in a define actual behaviour we don t raise issue in graql repo and fix
| 0
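The rule scenario in the record above (a rule inferring a `long` value for a string-valued attribute, which should fail at commit rather than at query time) reduces to a value-type check when the rule is defined. A minimal Python sketch of that check — all names here (`attribute_value_types`, `check_rule_inference`) are hypothetical illustrations, not the grakn codebase:

```python
# Hypothetical sketch of the value-type validation a rule commit could perform.
# The schema maps attribute type labels to their declared value types.
attribute_value_types = {
    "name": str,       # define name sub attribute, value string
    "nickname": str,   # nickname sub name inherits the string value type
}

def check_rule_inference(attribute_label, inferred_value):
    """Raise if a rule would infer a value that doesn't match the
    attribute's declared value type."""
    expected = attribute_value_types[attribute_label]
    if not isinstance(inferred_value, expected):
        raise TypeError(
            f"rule infers {attribute_label} = {inferred_value!r} "
            f"({type(inferred_value).__name__}), expected {expected.__name__}"
        )

check_rule_inference("nickname", "may")  # ok: a string value for a string attribute
```

Running the check at define/commit time, as the scenario expects, surfaces the mismatch before any data is inserted, instead of the cast error appearing later on `match $n isa name; get;`.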
|
306,701
| 23,169,770,066
|
IssuesEvent
|
2022-07-30 14:23:58
|
RippeR37/libbase-example-cmake
|
https://api.github.com/repos/RippeR37/libbase-example-cmake
|
closed
|
GLOG user guide is not visible in online documentation
|
bug documentation external on hold
|
When opening the documentation on GitHub Pages, then navigating to `Logging` page, the `GLOG's user guide` section is empty.
|
1.0
|
GLOG user guide is not visible in online documentation - When opening the documentation on GitHub Pages, then navigating to `Logging` page, the `GLOG's user guide` section is empty.
|
non_process
|
glog user guide is not visible in online documentation when opening the documentation on github pages then navigating to logging page the glog s user guide section is empty
| 0
|
50,149
| 13,187,348,673
|
IssuesEvent
|
2020-08-13 03:07:42
|
icecube-trac/tix3
|
https://api.github.com/repos/icecube-trac/tix3
|
closed
|
PFTriggerFilterMonitoring may flood the logging with "No filter mask present in frame at position ..." (Trac #205)
|
Migrated from Trac defect jeb + pnf
|
the filter + rate monitoring currently (plan-b) runs within the filter clients - all events should include a filter mask.
it will move into the server in future (plan-a), so that events may not include a filter mask, if the filter mode isn't physics filtering.
PFTriggerFilterMonitoring will create a warn logging for such events...
solutions:
a) PFTriggerFilterMonitoring only runs on physics-filtered data changing it into a conditional module using a PFFilterModeFilter (that needs to be implemented) or checking the run summary service for the current filter mode
b) PFTriggerFilterMonitoring only processes the filter mask for physics-filtered data checking the run summary service for the current filter mode
<details>
<summary>_Migrated from https://code.icecube.wisc.edu/ticket/205
, reported by tschmidt and owned by rfranke_</summary>
<p>
```json
{
"status": "closed",
"changetime": "2014-11-23T03:37:57",
"description": "the filter + rate monitoring currently (plan-b) runs within the filter clients - all events should include a filter mask.[[BR]]\nit will move into the server in future (plan-a), so that events may not include a filter mask, if the filter mode isn't physics filtering.[[BR]]\nPFTriggerFilterMonitoring will create a warn logging for such events...\n\nsolutions:[[BR]]\na) PFTriggerFilterMonitoring only runs on physics-filtered data changing it into a conditional module using a PFFilterModeFilter (that needs to be implemented) or checking the run summary service for the current filter mode[[BR]]\nb) PFTriggerFilterMonitoring only processes the filter mask for physics-filtered data checking the run summary service for the current filter mode\n",
"reporter": "tschmidt",
"cc": "",
"resolution": "fixed",
"_ts": "1416713877066511",
"component": "jeb + pnf",
"summary": "PFTriggerFilterMonitoring may flood the logging with \"No filter mask present in frame at position ...\"",
"priority": "normal",
"keywords": "trigger + filter rate monitoring, JEB, JEB server",
"time": "2010-04-14T18:20:20",
"milestone": "",
"owner": "rfranke",
"type": "defect"
}
```
</p>
</details>
|
1.0
|
PFTriggerFilterMonitoring may flood the logging with "No filter mask present in frame at position ..." (Trac #205) - the filter + rate monitoring currently (plan-b) runs within the filter clients - all events should include a filter mask.
it will move into the server in future (plan-a), so that events may not include a filter mask, if the filter mode isn't physics filtering.
PFTriggerFilterMonitoring will create a warn logging for such events...
solutions:
a) PFTriggerFilterMonitoring only runs on physics-filtered data changing it into a conditional module using a PFFilterModeFilter (that needs to be implemented) or checking the run summary service for the current filter mode
b) PFTriggerFilterMonitoring only processes the filter mask for physics-filtered data checking the run summary service for the current filter mode
<details>
<summary>_Migrated from https://code.icecube.wisc.edu/ticket/205
, reported by tschmidt and owned by rfranke_</summary>
<p>
```json
{
"status": "closed",
"changetime": "2014-11-23T03:37:57",
"description": "the filter + rate monitoring currently (plan-b) runs within the filter clients - all events should include a filter mask.[[BR]]\nit will move into the server in future (plan-a), so that events may not include a filter mask, if the filter mode isn't physics filtering.[[BR]]\nPFTriggerFilterMonitoring will create a warn logging for such events...\n\nsolutions:[[BR]]\na) PFTriggerFilterMonitoring only runs on physics-filtered data changing it into a conditional module using a PFFilterModeFilter (that needs to be implemented) or checking the run summary service for the current filter mode[[BR]]\nb) PFTriggerFilterMonitoring only processes the filter mask for physics-filtered data checking the run summary service for the current filter mode\n",
"reporter": "tschmidt",
"cc": "",
"resolution": "fixed",
"_ts": "1416713877066511",
"component": "jeb + pnf",
"summary": "PFTriggerFilterMonitoring may flood the logging with \"No filter mask present in frame at position ...\"",
"priority": "normal",
"keywords": "trigger + filter rate monitoring, JEB, JEB server",
"time": "2010-04-14T18:20:20",
"milestone": "",
"owner": "rfranke",
"type": "defect"
}
```
</p>
</details>
|
non_process
|
pftriggerfiltermonitoring may flood the logging with no filter mask present in frame at position trac the filter rate monitoring currently plan b runs within the filter clients all events should include a filter mask it will move into the server in future plan a so that events may not include a filter mask if the filter mode isn t physics filtering pftriggerfiltermonitoring will create a warn logging for such events solutions a pftriggerfiltermonitoring only runs on physics filtered data changing it into a conditional module using a pffiltermodefilter that needs to be implemented or checking the run summary service for the current filter mode b pftriggerfiltermonitoring only processes the filter mask for physics filtered data checking the run summary service for the current filter mode migrated from reported by tschmidt and owned by rfranke json status closed changetime description the filter rate monitoring currently plan b runs within the filter clients all events should include a filter mask nit will move into the server in future plan a so that events may not include a filter mask if the filter mode isn t physics filtering npftriggerfiltermonitoring will create a warn logging for such events n nsolutions na pftriggerfiltermonitoring only runs on physics filtered data changing it into a conditional module using a pffiltermodefilter that needs to be implemented or checking the run summary service for the current filter mode nb pftriggerfiltermonitoring only processes the filter mask for physics filtered data checking the run summary service for the current filter mode n reporter tschmidt cc resolution fixed ts component jeb pnf summary pftriggerfiltermonitoring may flood the logging with no filter mask present in frame at position priority normal keywords trigger filter rate monitoring jeb jeb server time milestone owner rfranke type defect
| 0
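Both solutions proposed in the ticket above amount to gating the monitoring on the current filter mode so that a missing filter mask only warrants a warning during physics filtering. A hedged Python sketch of solution (b) — `process_frame`, the frame dict, and the `run_summary` mapping are hypothetical stand-ins, not the actual IceTray/JEB API:

```python
# Hypothetical sketch: only process the filter mask when the run summary
# reports physics filtering; otherwise skip silently instead of
# warn-logging "No filter mask present in frame" on every event.
def process_frame(frame, run_summary):
    if run_summary.get("filter_mode") != "physics":
        return None  # not physics filtering: a missing filter mask is expected
    if "FilterMask" not in frame:
        raise KeyError("No filter mask present in frame")  # now a genuine error
    return frame["FilterMask"]

# A non-physics run with no mask is skipped without flooding the log:
assert process_frame({}, {"filter_mode": "calibration"}) is None
```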
|
21,801
| 30,315,299,955
|
IssuesEvent
|
2023-07-10 15:11:52
|
USGS-WiM/StreamStats
|
https://api.github.com/repos/USGS-WiM/StreamStats
|
closed
|
BP: add stream grids for download
|
Batch Processor
|
The current Batch Processor has a list of downloadable stream grids: https://streamstatsags.cr.usgs.gov/StreamGrids/directoryBrowsing.asp. We need to provide these stream grids in the new BP.
Related: https://code.usgs.gov/StreamStats/tools/StreamStats-Tools/-/issues/54
|
1.0
|
BP: add stream grids for download - The current Batch Processor has a list of downloadable stream grids: https://streamstatsags.cr.usgs.gov/StreamGrids/directoryBrowsing.asp. We need to provide these stream grids in the new BP.
Related: https://code.usgs.gov/StreamStats/tools/StreamStats-Tools/-/issues/54
|
process
|
bp add stream grids for download the current batch processor has a list of downloadable stream grids we need to provide these stream grids in the new bp related
| 1
|
12,428
| 14,927,939,958
|
IssuesEvent
|
2021-01-24 17:19:08
|
GoogleCloudPlatform/fda-mystudies
|
https://api.github.com/repos/GoogleCloudPlatform/fda-mystudies
|
closed
|
[iOS] Resources > UI issue in bottom of the screen
|
Bug P3 Process: Fixed Process: Tested dev UI iOS
|
Steps:
1. Enroll into any study
2. Navigate to resources
3. Scroll in the bottom
4. Observe the bottom line overlaps when scrolling
https://user-images.githubusercontent.com/60386291/104912374-30a36f00-59b2-11eb-9550-a68b153acec7.MOV

|
2.0
|
[iOS] Resources > UI issue in bottom of the screen - Steps:
1. Enroll into any study
2. Navigate to resources
3. Scroll in the bottom
4. Observe the bottom line overlaps when scrolling
https://user-images.githubusercontent.com/60386291/104912374-30a36f00-59b2-11eb-9550-a68b153acec7.MOV

|
process
|
resources ui issue in bottom of the screen steps enroll into any study navigate to resources scroll in the bottom observe the bottom line overlaps when scrolling
| 1
|
8,634
| 11,785,554,307
|
IssuesEvent
|
2020-03-17 10:31:35
|
googleapis/python-bigquery
|
https://api.github.com/repos/googleapis/python-bigquery
|
opened
|
A system test for load_table_from_dataframe() consistently fails on master branch
|
priority: p1 testing type: process
|
A system test `test_load_table_from_dataframe_w_explicit_schema()` consistently fails on the latest `master` branch, both under Python 2.7 and Python 3.8 (example [Kokoro run](https://source.cloud.google.com/results/invocations/0db81847-def8-4d23-be0e-446b4855ad7f/targets/cloud-devrel%2Fclient-libraries%2Fpython%2Fgoogleapis%2Fpython-bigquery%2Fpresubmit%2Fpresubmit/log)). It is also consistently reproducible locally.
|
1.0
|
A system test for load_table_from_dataframe() consistently fails on master branch - A system test `test_load_table_from_dataframe_w_explicit_schema()` consistently fails on the latest `master` branch, both under Python 2.7 and Python 3.8 (example [Kokoro run](https://source.cloud.google.com/results/invocations/0db81847-def8-4d23-be0e-446b4855ad7f/targets/cloud-devrel%2Fclient-libraries%2Fpython%2Fgoogleapis%2Fpython-bigquery%2Fpresubmit%2Fpresubmit/log)). It is also consistently reproducible locally.
|
process
|
a system test for load table from dataframe consistently fails on master branch a system test test load table from dataframe w explicit schema consistently fails on the latest master branch both under python and python example it is also consistently reproducible locally
| 1
|
10,469
| 13,245,943,832
|
IssuesEvent
|
2020-08-19 15:02:21
|
MicrosoftDocs/azure-devops-docs
|
https://api.github.com/repos/MicrosoftDocs/azure-devops-docs
|
closed
|
Is `pipeline.startTime` in UTC?
|
Pri1 devops-cicd-process/tech devops/prod doc-enhancement
|
Hi.
It's not clear if `pipeline.startTime` is in UTC or another time zone.
Thank you.
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: 77c58a78-a567-e99a-9eb7-62dddd1b90b6
* Version Independent ID: 680a79bc-11de-39fc-43e3-e07dc762db18
* Content: [Expressions - Azure Pipelines](https://docs.microsoft.com/en-us/azure/devops/pipelines/process/expressions?view=azure-devops)
* Content Source: [docs/pipelines/process/expressions.md](https://github.com/MicrosoftDocs/azure-devops-docs/blob/master/docs/pipelines/process/expressions.md)
* Product: **devops**
* Technology: **devops-cicd-process**
* GitHub Login: @juliakm
* Microsoft Alias: **jukullam**
|
1.0
|
Is `pipeline.startTime` in UTC? - Hi.
It's not clear if `pipeline.startTime` is in UTC or another time zone.
Thank you.
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: 77c58a78-a567-e99a-9eb7-62dddd1b90b6
* Version Independent ID: 680a79bc-11de-39fc-43e3-e07dc762db18
* Content: [Expressions - Azure Pipelines](https://docs.microsoft.com/en-us/azure/devops/pipelines/process/expressions?view=azure-devops)
* Content Source: [docs/pipelines/process/expressions.md](https://github.com/MicrosoftDocs/azure-devops-docs/blob/master/docs/pipelines/process/expressions.md)
* Product: **devops**
* Technology: **devops-cicd-process**
* GitHub Login: @juliakm
* Microsoft Alias: **jukullam**
|
process
|
is pipeline starttime in utc hi it s not clear if pipeline starttime is in utc or another time zone thank you document details ⚠ do not edit this section it is required for docs microsoft com ➟ github issue linking id version independent id content content source product devops technology devops cicd process github login juliakm microsoft alias jukullam
| 1
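The underlying question in the record above — whether a timestamp variable is UTC — can be checked empirically by comparing the value against a known UTC clock. A minimal Python sketch, assuming the value arrives as a plain `YYYY-MM-DD HH:MM:SS` string with no offset (verify the actual format of `pipeline.startTime` in your own run):

```python
from datetime import datetime, timezone

def looks_like_utc(start_time_str, now_utc=None):
    """Parse an offset-less timestamp as UTC and report whether it is
    plausibly close to the current UTC time (within an hour)."""
    start = datetime.strptime(start_time_str, "%Y-%m-%d %H:%M:%S")
    start = start.replace(tzinfo=timezone.utc)
    now = now_utc or datetime.now(timezone.utc)
    return abs((now - start).total_seconds()) < 3600

# With a pinned "now" for reproducibility: a start time minutes earlier passes.
fixed_now = datetime(2020, 8, 19, 15, 5, 0, tzinfo=timezone.utc)
assert looks_like_utc("2020-08-19 15:02:21", now_utc=fixed_now)
```

If the check fails by a whole number of hours, the value is most likely in a fixed local zone rather than UTC.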
|
13,230
| 15,702,388,796
|
IssuesEvent
|
2021-03-26 12:33:21
|
Ultimate-Hosts-Blacklist/whitelist
|
https://api.github.com/repos/Ultimate-Hosts-Blacklist/whitelist
|
opened
|
[FALSE-POSITIVE?] stats.stackexchange.com
|
whitelisting process
|
**Domains or links**
Please list any domains and links listed here which you believe are a false positive.
stats.stackexchange.com
**More Information**
How did you discover your web site or domain was listed here?
I have added `https://hosts.ubuntu101.co.za/domains.list` in my Pi-hole. I found it by using the query feature.
**Have you requested removal from other sources?**
Please include all relevant links to your existing removals / whitelistings.
No, I could only find it in this list as per the output given by my Pi-Hole.
**Additional context**
Add any other context about the problem here.
That's a normal community. Felt it's a false positive.
|
1.0
|
[FALSE-POSITIVE?] stats.stackexchange.com - **Domains or links**
Please list any domains and links listed here which you believe are a false positive.
stats.stackexchange.com
**More Information**
How did you discover your web site or domain was listed here?
I have added `https://hosts.ubuntu101.co.za/domains.list` in my Pi-hole. I found it by using the query feature.
**Have you requested removal from other sources?**
Please include all relevant links to your existing removals / whitelistings.
No, I could only find it in this list as per the output given by my Pi-Hole.
**Additional context**
Add any other context about the problem here.
That's a normal community. Felt it's a false positive.
|
process
|
stats stackexchange com domains or links please list any domains and links listed here which you believe are a false positive stats stackexchange com more information how did you discover your web site or domain was listed here i have added in my pi hole i found it by using the query feature have you requested removal from other sources please include all relevant links to your existing removals whitelistings no i could only find it in this list as per the output given by my pi hole additional context add any other context about the problem here that s a normal community felt it s a false positive
| 1
|
583,597
| 17,393,709,551
|
IssuesEvent
|
2021-08-02 10:43:53
|
architectury/architectury-loom
|
https://api.github.com/repos/architectury/architectury-loom
|
closed
|
Work around the dreaded ignore filter added by Forge
|
bug priority: high
|
The ignore filter declares filters that may ignore certain classpath jars from being handled by Forge, this may cause issues with mod dependencies that coincidentally satisfies the condition, "architectury-forge-2.3.jar" for example matches the "forge-" filter.
|
1.0
|
Work around the dreaded ignore filter added by Forge - The ignore filter declares filters that may ignore certain classpath jars from being handled by Forge, this may cause issues with mod dependencies that coincidentally satisfies the condition, "architectury-forge-2.3.jar" for example matches the "forge-" filter.
|
non_process
|
work around the dreaded ignore filter added by forge the ignore filter declares filters that may ignore certain classpath jars from being handled by forge this may cause issues with mod dependencies that coincidentally satisfies the condition architectury forge jar for example matches the forge filter
| 0
|
45,232
| 18,466,857,622
|
IssuesEvent
|
2021-10-17 02:50:05
|
keepassxreboot/keepassxc
|
https://api.github.com/repos/keepassxreboot/keepassxc
|
closed
|
Secret Service will not be able to return any entries if the exposed group has search disabled
|
bug feature: Secret Service
|
## Overview
If the exposed group has searching disabled, then apps will not be able to look for secrets.
## Steps to Reproduce
1. Create a new group called "Keyring", with "Search" set to "Disabled".
2. Enable Secret Service and expose "Keyring".
3. Run `secret-tool store --label=test attr test` and enter any password.
4. Run `secret-tool lookup attr test`
## Expected Behavior
The entered password is returned.
## Actual Behavior
Nothing is returned, `secret-tool` exit with status code 1 (entry not found).
## Context
KeePassXC - Version 2.6.6
Qt 5.15.2
Debugging mode is disabled.
Operating system: Gentoo/Linux
CPU architecture: x86_64
Kernel: linux 5.14.6-zen1
Enabled extensions:
- Auto-Type
- Browser Integration
- SSH Agent
- Secret Service Integration
Cryptographic libraries:
- libgcrypt 1.9.4-unknown
Operating System: Linux
Desktop Env: Gnome
Windowing System: Wayland
|
1.0
|
Secret Service will not be able to return any entries if the exposed group has search disabled - ## Overview
If the exposed group has searching disabled, then apps will not be able to look for secrets.
## Steps to Reproduce
1. Create a new group called "Keyring", with "Search" set to "Disabled".
2. Enable Secret Service and expose "Keyring".
3. Run `secret-tool store --label=test attr test` and enter any password.
4. Run `secret-tool lookup attr test`
## Expected Behavior
The entered password is returned.
## Actual Behavior
Nothing is returned, `secret-tool` exit with status code 1 (entry not found).
## Context
KeePassXC - Version 2.6.6
Qt 5.15.2
Debugging mode is disabled.
Operating system: Gentoo/Linux
CPU architecture: x86_64
Kernel: linux 5.14.6-zen1
Enabled extensions:
- Auto-Type
- Browser Integration
- SSH Agent
- Secret Service Integration
Cryptographic libraries:
- libgcrypt 1.9.4-unknown
Operating System: Linux
Desktop Env: Gnome
Windowing System: Wayland
|
non_process
|
secret service will not be able to return any entries if the exposed group has search disabled overview if the exposed group has searching disabled then apps will not be able to look for secrets steps to reproduce create a new group called keyring with search set to disabled enable secret service and expose keyring run secret tool store label test attr test and enter any password run secret tool lookup attr test expected behavior the entered password is returned actual behavior nothing is returned secret tool exit with status code entry not found context keepassxc version qt debugging mode is disabled operating system gentoo linux cpu architecture kernel linux enabled extensions auto type browser integration ssh agent secret service integration cryptographic libraries libgcrypt unknown operating system linux desktop env gnome windowing system wayland
| 0
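The bug report above reduces to the lookup walking only groups whose search flag is enabled, so an exposed group with search disabled silently returns nothing. A Python sketch of that behaviour — the data model (`search_enabled`, `entries`, `lookup`) is a hypothetical simplification, not the KeePassXC code:

```python
# Hypothetical model of the bug: lookup honours the group's "Search"
# setting, which makes an exposed-but-search-disabled group invisible
# to Secret Service clients such as secret-tool.
def lookup(groups, attr, value, honour_search_flag=True):
    results = []
    for group in groups:
        if honour_search_flag and not group["search_enabled"]:
            continue  # skipped: this is why `secret-tool lookup` finds nothing
        results += [e for e in group["entries"] if e.get(attr) == value]
    return results

keyring = {"search_enabled": False,
           "entries": [{"attr": "test", "password": "hunter2"}]}
assert lookup([keyring], "attr", "test") == []                            # the bug
assert lookup([keyring], "attr", "test", honour_search_flag=False) != []  # the fix
```

Ignoring the search flag for the exposed Secret Service group (or warning the user when exposing a search-disabled group) would both resolve the reported behaviour.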
|
17,355
| 23,176,764,738
|
IssuesEvent
|
2022-07-31 14:34:11
|
prisma/prisma
|
https://api.github.com/repos/prisma/prisma
|
closed
|
Pretty easy fix on Prisma Studio to view all columns.
|
bug/1-unconfirmed kind/bug process/candidate topic: studio team/developer-productivity
|
## Problem
If we have columns overlapping, they are invisible and aren't scrollable.
## Suggested solution
By changing height in Prisma studio on the div above ag-root-wrapper we can view all columns.
## Alternatives
For now, you can do the following.
Install a stylesheet customizer. One that is recommended on *superuser is Styler Extension. It's minimal and easy to use. Available for [Firefox ](https://addons.mozilla.org/en-US/firefox/addon/styler-pro/) and [Chrome / Brave](https://chrome.google.com/webstore/detail/styler/hbhkfnpodhdcaophahpkiflechaoddoi).
*<https://superuser.com/questions/560539/how-can-i-force-my-css-styles-when-i-visit-a-website>
Install extension and simply add the following
```css
html { background: transparent; }
#root { display: block!important; }
```

end result will be:

## Additional context
<!-- Add any other context or screenshots about the feature request here. -->
It's pretty self-explanatory
"Height: 100%" - No scrollbar available.

"Height 93%" - Scrollbar.

|
1.0
|
Pretty easy fix on Prisma Studio to view all columns. - ## Problem
If we have columns overlapping, they are invisible and aren't scrollable.
## Suggested solution
By changing height in Prisma studio on the div above ag-root-wrapper we can view all columns.
## Alternatives
For now, you can do the following.
Install a stylesheet customizer. One that is recommended on *superuser is Styler Extension. It's minimal and easy to use. Available for [Firefox ](https://addons.mozilla.org/en-US/firefox/addon/styler-pro/) and [Chrome / Brave](https://chrome.google.com/webstore/detail/styler/hbhkfnpodhdcaophahpkiflechaoddoi).
*<https://superuser.com/questions/560539/how-can-i-force-my-css-styles-when-i-visit-a-website>
Install extension and simply add the following
```css
html { background: transparent; }
#root { display: block!important; }
```

end result will be:

## Additional context
<!-- Add any other context or screenshots about the feature request here. -->
It's pretty self-explanatory
"Height: 100%" - No scrollbar available.

"Height 93%" - Scrollbar.

|
process
|
pretty easy fix on prisma studio to view all columns problem if we have columns overlapping they are invisible and isn t scrollable suggested solution by changing height in prisma studio on the div above ag root wrapper we can view all columns alternatives for now you can do the following install a stylesheet customizer one that is recommended on superuser is styler extension it s minimal and easy to use available for and install extension and simply add the following css html background transparent root display block important end result will be additional context it s pretty self explanatory height no scrollbar available height scrollbar
| 1
|
18,626
| 5,661,007,895
|
IssuesEvent
|
2017-04-10 16:19:52
|
mozilla/addons-frontend
|
https://api.github.com/repos/mozilla/addons-frontend
|
opened
|
Rely on Flow types for external libraries
|
component: code quality qa: not needed
|
Once we have [basic Flow support](https://github.com/mozilla/addons-frontend/pull/2196) we should start adding definitions for the external libraries we depend on: https://github.com/flowtype/flow-typed
We don't need to fall down the rabbit hole and define everything. We should just pick some ones that we rely on very heavily and that do not have great test coverage. For example, we rely on using the `redux` interfaces correctly and our tests do not cover this. We should definitely define the `redux` interfaces.
|
1.0
|
Rely on Flow types for external libraries - Once we have [basic Flow support](https://github.com/mozilla/addons-frontend/pull/2196) we should start adding definitions for the external libraries we depend on: https://github.com/flowtype/flow-typed
We don't need to fall down the rabbit hole and define everything. We should just pick some ones that we rely on very heavily and that do not have great test coverage. For example, we rely on using the `redux` interfaces correctly and our tests do not cover this. We should definitely define the `redux` interfaces.
|
non_process
|
rely on flow types for external libraries once we have we should start adding definitions for the external libraries we depend on we don t need to fall down the rabbit hole and define everything we should just pick some ones that we rely on very heavily and that do not have great test coverage for example we rely on using the redux interfaces correctly and our tests do not cover this we should definitely define the redux interfaces
| 0
|
3,769
| 6,737,049,960
|
IssuesEvent
|
2017-10-19 07:55:16
|
openvstorage/framework
|
https://api.github.com/repos/openvstorage/framework
|
reopened
|
Support for ZFS
|
priority_normal process_wontfix type_enhancement
|
Version: Fargo RC2
POC Vorboss
While assigning the roles, I got the following errors in the workers:
```
2017-01-24 10:30:43 72200 +0000 - ov-01 - 9897/140662765754112 - lib/ensure single - 807 - INFO - Ensure single CHAINED mode - ID 1485253843_97J6a9LFyZ - Amount of jobs pending for key ovs_ensure_single_ovs.storagerouter.configure_disk: 1
2017-01-24 10:30:43 72300 +0000 - ov-01 - 9897/140662765754112 - lib/ensure single - 808 - INFO - Ensure single CHAINED mode - ID 1485253843_97J6a9LFyZ - KWARGS: {'partition_guid': u'0c4c0abb-a930-4199-b3be-946753f5bcb8', 'roles': [u'DB', u'WRITE'], 'disk_guid': u'ac732764-0bb7-41ee-baa8-194ea31096d0', 'storagerouter_guid': 'a33be319-ff37-4a60-9ee3-6dc4f370adce', 'offset': 1048576, 'size': 512100401152}
2017-01-24 10:30:43 75500 +0000 - ov-01 - 9897/140662765754112 - lib/storagerouter - 809 - DEBUG - Using existing partition
2017-01-24 10:30:43 75600 +0000 - ov-01 - 9897/140662765754112 - lib/storagerouter - 810 - DEBUG - Configuring mountpoint
2017-01-24 10:30:48 03800 +0000 - ov-01 - 9897/140662765754112 - lib/storagerouter - 811 - DEBUG - Found mountpoint: /mnt/ssd2
2017-01-24 10:30:48 12700 +0000 - ov-01 - 9897/140662765754112 - lib/ensure single - 812 - ERROR - Ensure single CHAINED mode - ID 1485253843_97J6a9LFyZ - Task ovs.storagerouter.configure_disk with params {'partition_guid': u'0c4c0abb-a930-4199-b3be-946753f5bcb8', 'roles': [u'DB', u'WRITE'], 'disk_guid': u'ac732764-0bb7-41ee-baa8-194ea31096d0', 'storagerouter_guid': 'a33be319-ff37-4a60-9ee3-6dc4f370adce', 'offset': 1048576, 'size': 512100401152} failed
Traceback (most recent call last):
File "/opt/OpenvStorage/ovs/lib/helpers/decorators.py", line 305, in new_function
output = function(*args, **kwargs)
File "/opt/OpenvStorage/ovs/lib/storagerouter.py", line 1419, in configure_disk
rem.DiskTools.mount(mountpoint)
File "/usr/lib/python2.7/dist-packages/rpyc/core/netref.py", line 196, in __call__
return syncreq(_self, consts.HANDLE_CALL, args, kwargs)
File "/usr/lib/python2.7/dist-packages/rpyc/core/netref.py", line 71, in syncreq
return conn.sync_request(handler, oid, *args)
File "/usr/lib/python2.7/dist-packages/rpyc/core/protocol.py", line 441, in sync_request
raise obj
CalledProcessError: Command 'mount '/mnt/ssd2'' returned non-zero exit status 32
========= Remote Traceback (1) =========
Traceback (most recent call last):
File "/tmp/tmp.PUZbca6Y60/rpyc/core/protocol.py", line 305, in _dispatch_request
res = self._HANDLERS[handler](self, *args)
File "/tmp/tmp.PUZbca6Y60/rpyc/core/protocol.py", line 535, in _handle_call
return self._local_objects[oid](*args, **dict(kwargs))
File "/opt/OpenvStorage/ovs/extensions/generic/disk.py", line 170, in mount
check_output("mount '{0}'".format(mountpoint), shell=True)
File "/usr/lib/python2.7/subprocess.py", line 574, in check_output
raise CalledProcessError(retcode, cmd, output=output)
CalledProcessError: Command 'mount '/mnt/ssd2'' returned non-zero exit status 32
2017-01-24 10:30:48 13100 +0000 - ov-01 - 9897/140662765754112 - lib/ensure single - 813 - INFO - Ensure single CHAINED mode - ID 1485253843_97J6a9LFyZ - Amount of jobs pending for key ovs_ensure_single_ovs.storagerouter.configure_disk: 0
2017-01-24 10:30:48 24400 +0000 - ov-01 - 9740/140662765754112 - celery/celery.worker.job - 22598 - ERROR - Task ovs.storagerouter.configure_disk[7fca79d8-b016-4a15-94a9-c51923d3e8b3] raised unexpected: CalledProcessError()
Traceback (most recent call last):
File "/usr/lib/python2.7/dist-packages/celery/app/trace.py", line 240, in trace_task
R = retval = fun(*args, **kwargs)
File "/usr/lib/python2.7/dist-packages/celery/app/trace.py", line 438, in __protected_call__
return self.run(*args, **kwargs)
File "/opt/OpenvStorage/ovs/lib/helpers/decorators.py", line 305, in new_function
output = function(*args, **kwargs)
File "/opt/OpenvStorage/ovs/lib/storagerouter.py", line 1419, in configure_disk
rem.DiskTools.mount(mountpoint)
File "/usr/lib/python2.7/dist-packages/rpyc/core/netref.py", line 196, in __call__
return syncreq(_self, consts.HANDLE_CALL, args, kwargs)
File "/usr/lib/python2.7/dist-packages/rpyc/core/netref.py", line 71, in syncreq
return conn.sync_request(handler, oid, *args)
File "/usr/lib/python2.7/dist-packages/rpyc/core/protocol.py", line 441, in sync_request
raise obj
Exception
```
This was caused due to the ZFS filesystem on the disks:
```
[root@ov-01:~]$ lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda 8:0 0 279.4G 0 disk
├─sda1 8:1 0 18.6G 0 part
│ └─md0 9:0 0 18.6G 0 raid1 /boot
└─sda2 8:2 0 260.8G 0 part
└─md1 9:1 0 260.7G 0 raid1
├─3nkn5y1-root 252:0 0 18.6G 0 lvm /
└─3nkn5y1-swap 252:1 0 3.7G 0 lvm [SWAP]
sdb 8:16 0 279.4G 0 disk
├─sdb1 8:17 0 18.6G 0 part
│ └─md0 9:0 0 18.6G 0 raid1 /boot
└─sdb2 8:18 0 260.8G 0 part
└─md1 9:1 0 260.7G 0 raid1
├─3nkn5y1-root 252:0 0 18.6G 0 lvm /
└─3nkn5y1-swap 252:1 0 3.7G 0 lvm [SWAP]
[root@ov-01:~]$ df -h
Filesystem Size Used Avail Use% Mounted on
udev 16G 0 16G 0% /dev
tmpfs 3.2G 1.4M 3.2G 1% /run
/dev/mapper/3nkn5y1-root 19G 4.6G 13G 27% /
tmpfs 16G 12K 16G 1% /dev/shm
tmpfs 5.0M 0 5.0M 0% /run/lock
tmpfs 16G 0 16G 0% /sys/fs/cgroup
/dev/md0 19G 144M 18G 1% /boot
cgmfs 100K 0 100K 0% /run/cgmanager/fs
tmpfs 3.2G 0 3.2G 0% /run/user/0
tmpfs 3.2G 0 3.2G 0% /run/user/1000
[root@ov-01:~]$ cat /proc/mdstat
Personalities : [raid1] [linear] [multipath] [raid0] [raid6] [raid5] [raid4] [raid10]
md0 : active raid1 sda1[0] sdb1[1]
19514240 blocks super 1.2 [2/2] [UU]
md1 : active raid1 sda2[0] sdb2[1]
273305408 blocks super 1.2 [2/2] [UU]
```
Had to remove the partitions first.
|
1.0
|
Support for ZFS - Version: Fargo RC2
POC Vorboss
While assigning the roles, I got the following errors in the workers:
```
2017-01-24 10:30:43 72200 +0000 - ov-01 - 9897/140662765754112 - lib/ensure single - 807 - INFO - Ensure single CHAINED mode - ID 1485253843_97J6a9LFyZ - Amount of jobs pending for key ovs_ensure_single_ovs.storagerouter.configure_disk: 1
2017-01-24 10:30:43 72300 +0000 - ov-01 - 9897/140662765754112 - lib/ensure single - 808 - INFO - Ensure single CHAINED mode - ID 1485253843_97J6a9LFyZ - KWARGS: {'partition_guid': u'0c4c0abb-a930-4199-b3be-946753f5bcb8', 'roles': [u'DB', u'WRITE'], 'disk_guid': u'ac732764-0bb7-41ee-baa8-194ea31096d0', 'storagerouter_guid': 'a33be319-ff37-4a60-9ee3-6dc4f370adce', 'offset': 1048576, 'size': 512100401152}
2017-01-24 10:30:43 75500 +0000 - ov-01 - 9897/140662765754112 - lib/storagerouter - 809 - DEBUG - Using existing partition
2017-01-24 10:30:43 75600 +0000 - ov-01 - 9897/140662765754112 - lib/storagerouter - 810 - DEBUG - Configuring mountpoint
2017-01-24 10:30:48 03800 +0000 - ov-01 - 9897/140662765754112 - lib/storagerouter - 811 - DEBUG - Found mountpoint: /mnt/ssd2
2017-01-24 10:30:48 12700 +0000 - ov-01 - 9897/140662765754112 - lib/ensure single - 812 - ERROR - Ensure single CHAINED mode - ID 1485253843_97J6a9LFyZ - Task ovs.storagerouter.configure_disk with params {'partition_guid': u'0c4c0abb-a930-4199-b3be-946753f5bcb8', 'roles': [u'DB', u'WRITE'], 'disk_guid': u'ac732764-0bb7-41ee-baa8-194ea31096d0', 'storagerouter_guid': 'a33be319-ff37-4a60-9ee3-6dc4f370adce', 'offset': 1048576, 'size': 512100401152} failed
Traceback (most recent call last):
File "/opt/OpenvStorage/ovs/lib/helpers/decorators.py", line 305, in new_function
output = function(*args, **kwargs)
File "/opt/OpenvStorage/ovs/lib/storagerouter.py", line 1419, in configure_disk
rem.DiskTools.mount(mountpoint)
File "/usr/lib/python2.7/dist-packages/rpyc/core/netref.py", line 196, in __call__
return syncreq(_self, consts.HANDLE_CALL, args, kwargs)
File "/usr/lib/python2.7/dist-packages/rpyc/core/netref.py", line 71, in syncreq
return conn.sync_request(handler, oid, *args)
File "/usr/lib/python2.7/dist-packages/rpyc/core/protocol.py", line 441, in sync_request
raise obj
CalledProcessError: Command 'mount '/mnt/ssd2'' returned non-zero exit status 32
========= Remote Traceback (1) =========
Traceback (most recent call last):
File "/tmp/tmp.PUZbca6Y60/rpyc/core/protocol.py", line 305, in _dispatch_request
res = self._HANDLERS[handler](self, *args)
File "/tmp/tmp.PUZbca6Y60/rpyc/core/protocol.py", line 535, in _handle_call
return self._local_objects[oid](*args, **dict(kwargs))
File "/opt/OpenvStorage/ovs/extensions/generic/disk.py", line 170, in mount
check_output("mount '{0}'".format(mountpoint), shell=True)
File "/usr/lib/python2.7/subprocess.py", line 574, in check_output
raise CalledProcessError(retcode, cmd, output=output)
CalledProcessError: Command 'mount '/mnt/ssd2'' returned non-zero exit status 32
2017-01-24 10:30:48 13100 +0000 - ov-01 - 9897/140662765754112 - lib/ensure single - 813 - INFO - Ensure single CHAINED mode - ID 1485253843_97J6a9LFyZ - Amount of jobs pending for key ovs_ensure_single_ovs.storagerouter.configure_disk: 0
2017-01-24 10:30:48 24400 +0000 - ov-01 - 9740/140662765754112 - celery/celery.worker.job - 22598 - ERROR - Task ovs.storagerouter.configure_disk[7fca79d8-b016-4a15-94a9-c51923d3e8b3] raised unexpected: CalledProcessError()
Traceback (most recent call last):
File "/usr/lib/python2.7/dist-packages/celery/app/trace.py", line 240, in trace_task
R = retval = fun(*args, **kwargs)
File "/usr/lib/python2.7/dist-packages/celery/app/trace.py", line 438, in __protected_call__
return self.run(*args, **kwargs)
File "/opt/OpenvStorage/ovs/lib/helpers/decorators.py", line 305, in new_function
output = function(*args, **kwargs)
File "/opt/OpenvStorage/ovs/lib/storagerouter.py", line 1419, in configure_disk
rem.DiskTools.mount(mountpoint)
File "/usr/lib/python2.7/dist-packages/rpyc/core/netref.py", line 196, in __call__
return syncreq(_self, consts.HANDLE_CALL, args, kwargs)
File "/usr/lib/python2.7/dist-packages/rpyc/core/netref.py", line 71, in syncreq
return conn.sync_request(handler, oid, *args)
File "/usr/lib/python2.7/dist-packages/rpyc/core/protocol.py", line 441, in sync_request
raise obj
Exception
```
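The `exit status 32` in the traceback above is a standard mount(8) code meaning "mount failure"; the kernel log (`dmesg`) usually carries the actual reason, such as an unrecognised on-disk filesystem. A small decoder for mount's bit-flag exit codes (codes taken from the mount(8) man page; the helper name is ours):

```python
# Decoder for mount(8) exit codes. The codes are bit flags that may be OR-ed
# together; status 32 -- as seen in the traceback above -- means "mount failure",
# i.e. the mount syscall itself was refused.
MOUNT_EXIT_BITS = {
    1: "incorrect invocation or permissions",
    2: "system error (out of memory, cannot fork, no more loop devices)",
    4: "internal mount bug",
    8: "user interrupt",
    16: "problems writing or locking /etc/mtab",
    32: "mount failure",
    64: "some mount succeeded",
}

def explain_mount_exit(status):
    """Return the human-readable meanings encoded in a mount exit status."""
    if status == 0:
        return ["success"]
    return [msg for bit, msg in MOUNT_EXIT_BITS.items() if status & bit]
```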
This was caused by the ZFS filesystem left on the disks:
```
[root@ov-01:~]$ lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda 8:0 0 279.4G 0 disk
├─sda1 8:1 0 18.6G 0 part
│ └─md0 9:0 0 18.6G 0 raid1 /boot
└─sda2 8:2 0 260.8G 0 part
└─md1 9:1 0 260.7G 0 raid1
├─3nkn5y1-root 252:0 0 18.6G 0 lvm /
└─3nkn5y1-swap 252:1 0 3.7G 0 lvm [SWAP]
sdb 8:16 0 279.4G 0 disk
├─sdb1 8:17 0 18.6G 0 part
│ └─md0 9:0 0 18.6G 0 raid1 /boot
└─sdb2 8:18 0 260.8G 0 part
└─md1 9:1 0 260.7G 0 raid1
├─3nkn5y1-root 252:0 0 18.6G 0 lvm /
└─3nkn5y1-swap 252:1 0 3.7G 0 lvm [SWAP]
[root@ov-01:~]$ df -h
Filesystem Size Used Avail Use% Mounted on
udev 16G 0 16G 0% /dev
tmpfs 3.2G 1.4M 3.2G 1% /run
/dev/mapper/3nkn5y1-root 19G 4.6G 13G 27% /
tmpfs 16G 12K 16G 1% /dev/shm
tmpfs 5.0M 0 5.0M 0% /run/lock
tmpfs 16G 0 16G 0% /sys/fs/cgroup
/dev/md0 19G 144M 18G 1% /boot
cgmfs 100K 0 100K 0% /run/cgmanager/fs
tmpfs 3.2G 0 3.2G 0% /run/user/0
tmpfs 3.2G 0 3.2G 0% /run/user/1000
[root@ov-01:~]$ cat /proc/mdstat
Personalities : [raid1] [linear] [multipath] [raid0] [raid6] [raid5] [raid4] [raid10]
md0 : active raid1 sda1[0] sdb1[1]
19514240 blocks super 1.2 [2/2] [UU]
md1 : active raid1 sda2[0] sdb2[1]
273305408 blocks super 1.2 [2/2] [UU]
```
Had to remove the partitions first.
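Concretely, clearing old ZFS signatures before re-partitioning can be done with `wipefs` followed by writing a fresh partition label. A sketch that only builds the commands (the device path and the `wipefs`/`parted` invocations are illustrative, and destructive — adapt before running on real disks):

```python
# Sketch: build the shell commands that clear leftover ZFS (and other)
# filesystem signatures from a disk so a fresh partition can be created
# and mounted. Commands are illustrative and DESTROY DATA on the device.
def zfs_cleanup_commands(device):
    return [
        f"wipefs --all {device}",           # erase all filesystem signatures
        f"parted -s {device} mklabel gpt",  # write a fresh empty GPT label
    ]

for cmd in zfs_cleanup_commands("/dev/sdc"):
    print(cmd)
```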
|
process
|
support for zfs version fargo poc vorboss while assigning the roles i got the following errors in the workers ov lib ensure single info ensure single chained mode id amount of jobs pending for key ovs ensure single ovs storagerouter configure disk ov lib ensure single info ensure single chained mode id kwargs partition guid u roles disk guid u storagerouter guid offset size ov lib storagerouter debug using existing partition ov lib storagerouter debug configuring mountpoint ov lib storagerouter debug found mountpoint mnt ov lib ensure single error ensure single chained mode id task ovs storagerouter configure disk with params partition guid u roles disk guid u storagerouter guid offset size failed traceback most recent call last file opt openvstorage ovs lib helpers decorators py line in new function output function args kwargs file opt openvstorage ovs lib storagerouter py line in configure disk rem disktools mount mountpoint file usr lib dist packages rpyc core netref py line in call return syncreq self consts handle call args kwargs file usr lib dist packages rpyc core netref py line in syncreq return conn sync request handler oid args file usr lib dist packages rpyc core protocol py line in sync request raise obj calledprocesserror command mount mnt returned non zero exit status remote traceback traceback most recent call last file tmp tmp rpyc core protocol py line in dispatch request res self handlers self args file tmp tmp rpyc core protocol py line in handle call return self local objects args dict kwargs file opt openvstorage ovs extensions generic disk py line in mount check output mount format mountpoint shell true file usr lib subprocess py line in check output raise calledprocesserror retcode cmd output output calledprocesserror command mount mnt returned non zero exit status ov lib ensure single info ensure single chained mode id amount of jobs pending for key ovs ensure single ovs storagerouter configure disk ov celery celery worker job error task 
ovs storagerouter configure disk raised unexpected calledprocesserror traceback most recent call last file usr lib dist packages celery app trace py line in trace task r retval fun args kwargs file usr lib dist packages celery app trace py line in protected call return self run args kwargs file opt openvstorage ovs lib helpers decorators py line in new function output function args kwargs file opt openvstorage ovs lib storagerouter py line in configure disk rem disktools mount mountpoint file usr lib dist packages rpyc core netref py line in call return syncreq self consts handle call args kwargs file usr lib dist packages rpyc core netref py line in syncreq return conn sync request handler oid args file usr lib dist packages rpyc core protocol py line in sync request raise obj exception this was caused due to the zfs filesystem on the disks lsblk name maj min rm size ro type mountpoint sda disk ├─ part │ └─ boot └─ part └─ ├─ root lvm └─ swap lvm sdb disk ├─ part │ └─ boot └─ part └─ ├─ root lvm └─ swap lvm df h filesystem size used avail use mounted on udev dev tmpfs run dev mapper root tmpfs dev shm tmpfs run lock tmpfs sys fs cgroup dev boot cgmfs run cgmanager fs tmpfs run user tmpfs run user cat proc mdstat personalities active blocks super active blocks super had to remove the partitions first
| 1
|
101,939
| 31,766,720,827
|
IssuesEvent
|
2023-09-12 09:13:17
|
redhat-developer/intellij-quarkus
|
https://api.github.com/repos/redhat-developer/intellij-quarkus
|
opened
|
Deploy to JetBrains Marketplace via Github Actions
|
enhancement build
|
Let's get rid of Jenkins and perform releases through GHA entirely
|
1.0
|
Deploy to JetBrains Marketplace via Github Actions - Let's get rid of Jenkins and perform releases through GHA entirely
|
non_process
|
deploy to jetbrains marketplace via github actions let s get rid of jenkins and perform releases through gha entirely
| 0
|
14,196
| 17,082,726,310
|
IssuesEvent
|
2021-07-08 07:56:37
|
piroor/treestyletab
|
https://api.github.com/repos/piroor/treestyletab
|
closed
|
Photon theme does not work correctly with TST Colored Tabs
|
extension-compatibility
|
## Short description
When I change to the Photon theme in Tree Style Tab Preferences, the colours from the TST Colored Tabs extension only show up in the indent area of tabs, instead of colouring in the whole tab. All other theme options (Proton, Sidebar, High Contrast, No Decoration) colour the tab correctly.
## Steps to reproduce
1. Start Firefox with clean profile.
2. Install TST.
3. Install TST Colored Tabs
4. Open a website, and a couple of pages under it (x3 for a nice representative sample)
5. Toggle between Photon and non-Photon themes in TST Preferences under Appearance
## Expected result
The whole tab is coloured for the same domains, in the Photon theme
## Actual result
Only the indent portion in the TST sidebar is coloured
Screenshot of buggy behaviour under Photon:

Screenshot of correct/expected behaviour under Proton:

## Environment
* Platform (OS): Linux, Ubuntu 21.04, KDE Plasma 5.22.3
* Version of Firefox: 90.0b12 (64-bit)
* Version (or revision) of Tree Style Tab: 3.8.6 (I started noticing this behaviour with 3.8.5 as well, but only got around to filing this issue now!)
|
True
|
Photon theme does not work correctly with TST Colored Tabs - ## Short description
When I change to the Photon theme in Tree Style Tab Preferences, the colours from the TST Colored Tabs extension only show up in the indent area of tabs, instead of colouring in the whole tab. All other theme options (Proton, Sidebar, High Contrast, No Decoration) colour the tab correctly.
## Steps to reproduce
1. Start Firefox with clean profile.
2. Install TST.
3. Install TST Colored Tabs
4. Open a website, and a couple of pages under it (x3 for a nice representative sample)
5. Toggle between Photon and non-Photon themes in TST Preferences under Appearance
## Expected result
The whole tab is coloured for the same domains, in the Photon theme
## Actual result
Only the indent portion in the TST sidebar is coloured
Screenshot of buggy behaviour under Photon:

Screenshot of correct/expected behaviour under Proton:

## Environment
* Platform (OS): Linux, Ubuntu 21.04, KDE Plasma 5.22.3
* Version of Firefox: 90.0b12 (64-bit)
* Version (or revision) of Tree Style Tab: 3.8.6 (I started noticing this behaviour with 3.8.5 as well, but only got around to filing this issue now!)
|
non_process
|
photon theme does not work correctly with tst colored tabs short description when i change to the photon theme in tree style tab preferences the colours from the tst colored tabs extension only show up in the indent area of tabs instead of colouring in the whole tab all other theme options proton sidebar high contrast no decoration colour the tab correctly steps to reproduce start firefox with clean profile install tst install tst colored tabs open a website and a couple of pages under it for a nice representative sample toggle between photon and non photon themes in tst preferences under appearance expected result the whole tab is coloured for the same domains in the photon theme actual result only the indent portion in the tst sidebar is coloured screenshot of buggy behaviour under photon screenshot of correct expected behaviour under proton environment platform os linux ubuntu kde plasma version of firefox bit version or revision of tree style tab i started noticing this behaviour with as well but only got around to filing this issue now
| 0
|
18,713
| 24,604,236,868
|
IssuesEvent
|
2022-10-14 14:51:51
|
GoogleCloudPlatform/fda-mystudies
|
https://api.github.com/repos/GoogleCloudPlatform/fda-mystudies
|
closed
|
[iOS] [Standalone build] [QA] Blank screen is getting displayed to the mobile participants in the following scenario
|
Bug P1 iOS Process: Fixed Process: Tested dev
|
Steps:
1. Install the standalone build
2. Sign up or Sign in to the mobile app
3. Enroll to the study
4. After navigating to the review screen, click on 'Disagree' and observe
AR: Blank screen is getting displayed
ER: Study overview screen should get displayed
[Note: Issue not observed in Dev instance)
|
2.0
|
[iOS] [Standalone build] [QA] Blank screen is getting displayed to the mobile participants in the following scenario - Steps:
1. Install the standalone build
2. Sign up or Sign in to the mobile app
3. Enroll to the study
4. After navigating to the review screen, click on 'Disagree' and observe
AR: Blank screen is getting displayed
ER: Study overview screen should get displayed
[Note: Issue not observed in Dev instance)
|
process
|
blank screen is getting displayed to the mobile participants in the following scenario steps install the standalone build sign up or sign in to the mobile app enroll to the study after navigating to the review screen click on disagree and observe ar blank screen is getting displayed er study overview screen should get displayed note issue not observed in dev instance
| 1
|
392,920
| 26,965,079,322
|
IssuesEvent
|
2023-02-08 21:34:22
|
DD2480-group14/Assignment-2
|
https://api.github.com/repos/DD2480-group14/Assignment-2
|
closed
|
Add a paragraph in README to evaluating the TEAM checklist in Essence standard
|
documentation
|
We just need this to pass.
|
1.0
|
Add a paragraph in README to evaluating the TEAM checklist in Essence standard - We just need this to pass.
|
non_process
|
add a paragraph in readme to evaluating the team checklist in essence standard we just need this to pass
| 0
|
531,930
| 15,527,588,385
|
IssuesEvent
|
2021-03-13 06:48:25
|
worldanvil/worldanvil-bug-tracker
|
https://api.github.com/repos/worldanvil/worldanvil-bug-tracker
|
opened
|
Tooltip issue in Calendar editor
|
Feature: Calendars Priority: Optional Severity: Trivial Type: Typo
|
**World Anvil Username**: SoulLink
**Feature**: Calendar
**Describe the Issue**
The tooltip on the Shift property of the celestials is missing some words.
**Expected behavior**
`Shifts the first phase on the first day of the first year for this celestial by a positive or negative amount.`
**Screenshots**

|
1.0
|
Tooltip issue in Calendar editor - **World Anvil Username**: SoulLink
**Feature**: Calendar
**Describe the Issue**
The tooltip on the Shift property of the celestials is missing some words.
**Expected behavior**
`Shifts the first phase on the first day of the first year for this celestial by a positive or negative amount.`
**Screenshots**

|
non_process
|
tooltip issue in calendar editor world anvil username soullink feature calendar describe the issue the tooltip on the shift property of the celestials is missing some words expected behavior shifts the first phase on the first day of the first year for this celestial by a positive or negative amount screenshots
| 0
|
398,811
| 11,742,369,591
|
IssuesEvent
|
2020-03-12 00:32:10
|
thaliawww/concrexit
|
https://api.github.com/repos/thaliawww/concrexit
|
closed
|
Functionality to export active members
|
feature priority: low
|
In GitLab by @se-bastiaan on Mar 12, 2018, 13:01
### One-sentence description
Functionality to export active members
### Desired behaviour
Have a button or management command to export all active members within a specified academic year.
|
1.0
|
Functionality to export active members - In GitLab by @se-bastiaan on Mar 12, 2018, 13:01
### One-sentence description
Functionality to export active members
### Desired behaviour
Have a button or management command to export all active members within a specified academic year.
|
non_process
|
functionality to export active members in gitlab by se bastiaan on mar one sentence description functionality to export active members desired behaviour have a button or management command to export all active members within a specified academic year
| 0
|
122,173
| 10,216,039,017
|
IssuesEvent
|
2019-08-15 09:24:19
|
ballerina-platform/ballerina-lang
|
https://api.github.com/repos/ballerina-platform/ballerina-lang
|
opened
|
ballerinax java documentation issues
|
BetaTesting Type/Docs
|
**Description:**
- The package of `LinkedLists` is wrong in the example. It shows `java.util.UUID.LinkedList` which should be `java.util.LinkedList`
- The ballerina import statement is wrong. It states `import ballerina/java`; it should be `import ballerinax/java`
**Steps to reproduce:**
**Affected Versions:**
**OS, DB, other environment details and versions:**
**Related Issues (optional):**
<!-- Any related issues such as sub tasks, issues reported in other repositories (e.g component repositories), similar problems, etc. -->
**Suggested Labels (optional):**
<!-- Optional comma separated list of suggested labels. Non committers can’t assign labels to issues, so this will help issue creators who are not a committer to suggest possible labels-->
**Suggested Assignees (optional):**
<!--Optional comma separated list of suggested team members who should attend the issue. Non committers can’t assign issues to assignees, so this will help issue creators who are not a committer to suggest possible assignees-->
|
1.0
|
ballerinax java documentation issues - **Description:**
- The package of `LinkedLists` is wrong in the example. It shows `java.util.UUID.LinkedList` which should be `java.util.LinkedList`
- The ballerina import statement is wrong. It states `import ballerina/java`; it should be `import ballerinax/java`
**Steps to reproduce:**
**Affected Versions:**
**OS, DB, other environment details and versions:**
**Related Issues (optional):**
<!-- Any related issues such as sub tasks, issues reported in other repositories (e.g component repositories), similar problems, etc. -->
**Suggested Labels (optional):**
<!-- Optional comma separated list of suggested labels. Non committers can’t assign labels to issues, so this will help issue creators who are not a committer to suggest possible labels-->
**Suggested Assignees (optional):**
<!--Optional comma separated list of suggested team members who should attend the issue. Non committers can’t assign issues to assignees, so this will help issue creators who are not a committer to suggest possible assignees-->
|
non_process
|
ballerinax java documentation issues description the package of linkedlists is wrong in the example it shows java util uuid linkedlist which should be java util linkedlist ballerina import statement is wrong it states import ballerina java it should be ballerinax java steps to reproduce affected versions os db other environment details and versions related issues optional suggested labels optional suggested assignees optional
| 0
|
3,776
| 6,748,310,251
|
IssuesEvent
|
2017-10-22 04:37:45
|
macattackftw/IntergalacticLifelineI
|
https://api.github.com/repos/macattackftw/IntergalacticLifelineI
|
closed
|
Data Points Required
|
Markov Decision Process
|
## Description
Identify data points required for your implementation. Close this out by midnight, 21 October with details of the data points required.
## Task Estimate
| Task | Estimated Time |
| :-: | :-: |
| Identify Datapoints | 01:00 |
|||
|
1.0
|
Data Points Required - ## Description
Identify data points required for your implementation. Close this out by midnight, 21 October with details of the data points required.
## Task Estimate
| Task | Estimated Time |
| :-: | :-: |
| Identify Datapoints | 01:00 |
|||
|
process
|
data points required description identify data points required for your implementation close this out by midnight october with details of the data points required task estimate task estimated time identify datapoints
| 1
|
13,189
| 15,613,359,481
|
IssuesEvent
|
2021-03-19 16:21:46
|
Psychoanalytic-Electronic-Publishing/OpenPubArchive-Content-Server
|
https://api.github.com/repos/Psychoanalytic-Electronic-Publishing/OpenPubArchive-Content-Server
|
opened
|
ANIJP-CHI needs means to provide PDF
|
feature in process priority-high
|
ANIJP-CHI is a PDF only journal.
Currently though, there's no way to get the PDFs.
|
1.0
|
ANIJP-CHI needs means to provide PDF - ANIJP-CHI is a PDF only journal.
Currently though, there's no way to get the PDFs.
|
process
|
anijp chi needs means to provide pdf anijp chi is a pdf only journal currently though there s no way to get the pdfs
| 1
|
5,237
| 8,035,159,852
|
IssuesEvent
|
2018-07-30 02:31:39
|
rubberduck-vba/Rubberduck
|
https://api.github.com/repos/rubberduck-vba/Rubberduck
|
closed
|
COM reflection should pick up constant values
|
enhancement parse-tree-processing typeinfo-processing
|
Instead of a `Declaration` for `DeclarationType.Constant` and `DeclarationType.EnumMember`, the code that inspects the types of COM libraries should generate a `ValuedDeclaration` to store the value.
The value could be displayed in the "current selection" status in the `RubberduckCommandBar`, and in other places such as the _Code Explorer_.
|
2.0
|
COM reflection should pick up constant values - Instead of a `Declaration` for `DeclarationType.Constant` and `DeclarationType.EnumMember`, the code that inspects the types of COM libraries should generate a `ValuedDeclaration` to store the value.
The value could be displayed in the "current selection" status in the `RubberduckCommandBar`, and in other places such as the _Code Explorer_.
|
process
|
com reflection should pick up constant values instead of a declaration for declarationtype constant and declarationtype enummember the code that inspects the types of com libraries should generate a valueddeclaration to store the value the value could be displayed in the current selection status in the rubberduckcommandbar and in other places such as the code explorer
| 1
|
12,343
| 14,883,408,672
|
IssuesEvent
|
2021-01-20 13:17:55
|
DevExpress/testcafe-hammerhead
|
https://api.github.com/repos/DevExpress/testcafe-hammerhead
|
closed
|
We broke the css selector that is invalid
|
AREA: client FREQUENCY: level 1 SYSTEM: client side processing TYPE: bug health-monitor
|
Example:
```js
document.querySelector('button:not([class="sm"]')
^
no closing parenthesis
```
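A cheap way to catch such malformed selectors before they reach `querySelector` is a bracket-balance check (a sketch, not a full CSS selector parser):

```python
def balanced(selector):
    """Return True if the (), [] pairs in a selector string are balanced."""
    pairs = {")": "(", "]": "["}
    stack = []
    for ch in selector:
        if ch in "([":
            stack.append(ch)
        elif ch in pairs:
            # a closer must match the most recent opener
            if not stack or stack.pop() != pairs[ch]:
                return False
    return not stack  # any leftover opener means an unclosed bracket
```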
|
1.0
|
We broke the css selector that is invalid - Example:
```js
document.querySelector('button:not([class="sm"]')
^
no closing parenthesis
```
|
process
|
we broke the css selector that is invalid example js document queryselector button not no closing parenthesis
| 1
|
3,041
| 6,040,232,095
|
IssuesEvent
|
2017-06-10 12:19:08
|
rubberduck-vba/Rubberduck
|
https://api.github.com/repos/rubberduck-vba/Rubberduck
|
closed
|
Removing a module causes unrecoverable Parse Error.
|
bug critical parse-tree-processing
|
Steps to reproduce.
1. New Workbook
2. Open VBE, allow auto-parse
3. Add module (either Standard or Class)
4. Allow the auto parse to complete
5. Remove the module without saving, using the PE context menu
6. Next parse results in Parse Error. The VBE isn't frozen, but the workbook must be closed and reopened for RD to recover.
|
1.0
|
Removing a module causes unrecoverable Parse Error. - Steps to reproduce.
1. New Workbook
2. Open VBE, allow auto-parse
3. Add module (either Standard or Class)
4. Allow the auto parse to complete
5. Remove the module without saving, using the PE context menu
6. Next parse results in Parse Error. The VBE isn't frozen, but the workbook must be closed and reopened for RD to recover.
|
process
|
removing a module causes unrecoverable parse error steps to reproduce new workbook open vbe allow auto parse add module either standard or class allow the auto parse to complete remove the module without saving using the pe context menu next parse results in parse error the vbe isn t frozen but the workbook must be closed and reopened for rd to recover
| 1
|
17,280
| 23,077,976,594
|
IssuesEvent
|
2022-07-26 02:52:54
|
maticnetwork/miden
|
https://api.github.com/repos/maticnetwork/miden
|
closed
|
Remove `EQW` instruction set from the VM and emulate it using other instructions in assembly
|
assembly instruction set processor
|
Theoretically, we can afford operations which are of degree 5, but @bobbinth feels it's not worth it for `EQW`.
Therefore this issue can be broken down into 3 separate tasks:
- Remove `EQW` instruction from the VM.
- Emulate `EQW` operation using other VM instruction in the assembly.
- Update the docs/ cycle count of the impacted operations.
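Semantically the emulation is simple: word (4-element) equality is four field-element equalities AND-ed together. A Python sketch of the intended semantics (illustrative only — not Miden assembly, and the real instruction sequence may differ):

```python
def eqw(word_a, word_b):
    """Semantics of EQW: 1 if two 4-element words are equal, else 0.
    Emulated as elementwise EQ results combined with AND, as the
    replacement assembly sequence would do."""
    acc = 1
    for a, b in zip(word_a, word_b):
        acc &= int(a == b)  # EQ on one pair of elements, then AND into acc
    return acc
```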
|
1.0
|
Remove `EQW` instruction set from the VM and emulate it using other instructions in assembly - Theoretically, we can afford operations which are of degree 5, but @bobbinth feels it's not worth it for `EQW`.
Therefore this issue can be broken down into 3 separate tasks:
- Remove `EQW` instruction from the VM.
- Emulate `EQW` operation using other VM instruction in the assembly.
- Update the docs/ cycle count of the impacted operations.
|
process
|
remove eqw instruction set from the vm and emulate it using other instruction in assembly theoretically we can afford operations which is of degree but bobbinth feels it s not worth it for eqw therefore this issue can be broken down into separate task remove eqw instruction from the vm emulate eqw operation using other vm instruction in the assembly update the docs cycle count of the impacted operations
| 1
|
20,744
| 27,449,799,654
|
IssuesEvent
|
2023-03-02 16:34:34
|
camunda/issues
|
https://api.github.com/repos/camunda/issues
|
opened
|
Support OIDC for Elasticsearch in Self-Managed
|
component:distribution component:operate component:optimize component:tasklist component:zeebe component:zeebe-process-automation public kind:epic potential:8.3
|
### Value Proposition Statement
Secure connections to Elasticsearch using OpenIDConnect in Self-Managed
### User Problem
Today connection between Webapps & Zeebe Elastic Exporter can only use basic authentication.
Nowadays organizations often have policies that forbid using Basic Authentication and instead rely on token-based authentication mechanisms. They expect to be able to use SAML and/or OpenIDConnect.
Currently this is not supported and prevents adoption of our Platform for some customers.
### User Stories
I can use OpenIDConnect for connecting to Elasticsearch in Zeebe Elastic Exporter.
I can use OpenIDConnect for connecting to Elasticsearch in Operate.
I can use OpenIDConnect for connecting to Elasticsearch in Tasklist.
I can use OpenIDConnect for connecting to Elasticsearch in Optimize.
### Implementation Notes
From user perspective the best would be if this is just a configuration and not something I have to implement.
### Breakdown
> This section links to various sub-issues / -tasks contributing to respective epic phase or phase results where appropriate.
#### Discovery phase ##
<!-- Example: link to "Conduct customer interview with xyz" -->
#### Define phase ##
<!-- Consider: UI, UX, technical design, documentation design -->
<!-- Example: link to "Define User-Journey Flow" or "Define target architecture" -->
Design Planning
* Designer assigned: No Design Necessary
Design Deliverables
* {Deliverable Name} {Link to GH Issue}
Documentation Planning
<!-- Complex changes must be reviewed during the Define phase by the DRI of Documentation or technical writer. -->
<!-- Briefly describe the anticipated impact to documentation. -->
<!-- Example: "Creates structural changes in docs as UX is reworked." _Add docs reviewer to Epic for feedback._ -->
Risk Management <!-- add link to risk management issue -->
* Risk Class: <!-- e.g. very low | low | medium | high | very high -->
* Risk Treatment: <!-- e.g. avoid | mitigate | transfer | accept -->
#### Implement phase ##
<!-- Example: link to "Implement User Story xyz". Should not only include core implementation, but also documentation. -->
#### Validate phase ##
<!-- Example: link to "Evaluate usage data of last quarter" -->
### Links to additional collateral
- https://jira.camunda.com/browse/SUPPORT-14818
|
1.0
|
Support OIDC for Elasticsearch in Self-Managed - ### Value Proposition Statement
Secure connections to Elasticsearch using OpenIDConnect in Self-Managed
### User Problem
Today connection between Webapps & Zeebe Elastic Exporter can only use basic authentication.
Nowadays organizations often have policies that forbid using Basic Authentication and instead rely on token-based authentication mechanisms. They expect to be able to use SAML and/or OpenIDConnect.
Currently this is not supported and prevents adoption of our Platform for some customers.
### User Stories
I can use OpenIDConnect for connecting to Elasticsearch in Zeebe Elastic Exporter.
I can use OpenIDConnect for connecting to Elasticsearch in Operate.
I can use OpenIDConnect for connecting to Elasticsearch in Tasklist.
I can use OpenIDConnect for connecting to Elasticsearch in Optimize.
### Implementation Notes
From user perspective the best would be if this is just a configuration and not something I have to implement.
### Breakdown
> This section links to various sub-issues / -tasks contributing to respective epic phase or phase results where appropriate.
#### Discovery phase ##
<!-- Example: link to "Conduct customer interview with xyz" -->
#### Define phase ##
<!-- Consider: UI, UX, technical design, documentation design -->
<!-- Example: link to "Define User-Journey Flow" or "Define target architecture" -->
Design Planning
* Designer assigned: No Design Necessary
Design Deliverables
* {Deliverable Name} {Link to GH Issue}
Documentation Planning
<!-- Complex changes must be reviewed during the Define phase by the DRI of Documentation or technical writer. -->
<!-- Briefly describe the anticipated impact to documentation. -->
<!-- Example: "Creates structural changes in docs as UX is reworked." _Add docs reviewer to Epic for feedback._ -->
Risk Management <!-- add link to risk management issue -->
* Risk Class: <!-- e.g. very low | low | medium | high | very high -->
* Risk Treatment: <!-- e.g. avoid | mitigate | transfer | accept -->
#### Implement phase ##
<!-- Example: link to "Implement User Story xyz". Should not only include core implementation, but also documentation. -->
#### Validate phase ##
<!-- Example: link to "Evaluate usage data of last quarter" -->
### Links to additional collateral
- https://jira.camunda.com/browse/SUPPORT-14818
|
process
|
support oidc for elasticsearch in self managed value proposition statement secure connections to elasticsearch using openidconnect in self managed user problem today connection between webapps zeebe elastic exporter can only use basic authentication nowadays organizations have often policies that forbid using basic authentication and that rely on token based authentication mechanisms they expect to be able to use saml and or openidconnect currently this is not supported and prevents adoption of our platform for some customers user stories i can use openidconnect for connecting to elasticsearch in zeebe elastic exporter i can use openidconnect for connecting to elasticsearch in operate i can use openidconnect for connecting to elasticsearch in tasklist i can use openidconnect for connecting to elasticsearch in optimize implementation notes from user perspective the best would be if this is just a configuration and not something i have to implement breakdown this section links to various sub issues tasks contributing to respective epic phase or phase results where appropriate discovery phase define phase design planning designer assigned no design necessary design deliverables deliverable name link to gh issue documentation planning risk management risk class risk treatment implement phase validate phase links to additional collateral
| 1
|
7,152
| 10,294,680,785
|
IssuesEvent
|
2019-08-28 05:20:03
|
qgis/QGIS
|
https://api.github.com/repos/qgis/QGIS
|
closed
|
Using the format() function with no arguments crashes QGIS
|
Bug Crash/Data Corruption Expressions Processing
|
**Describe the bug**
Since QGIS 3.8 it is possible to use expressions when using Processing in batch mode. Unfortunately using the format() function in an expression crashes QGIS.
**How to reproduce**
1. Open the buffer processing tool in batch mode
2. For the input layer, click on "Autofill..." and choose "Calculate by Expression..."
3. Write format(), QGIS will crash after entering the second parenthesis
**QGIS and OS versions**
QGIS version | 3.8.2-Zanzibar | QGIS code revision | 4470baa1a3
-- | -- | -- | --
Compiled against Qt | 5.11.2 | Running against Qt | 5.11.2
Compiled against GDAL/OGR | 2.4.1 | Running against GDAL/OGR | 2.4.1
Compiled against GEOS | 3.7.2-CAPI-1.11.0 | Running against GEOS | 3.7.2-CAPI-1.11.0 b55d2125
PostgreSQL Client Version | 10.8 | SpatiaLite Version | 4.3.0
QWT Version | 6.1.3 | QScintilla2 Version | 2.10.8
Compiled against PROJ | 5.2.0 | Running against PROJ | Rel. 5.2.0, September 15th, 2018
OS Version | Windows 7 SP 1 (6.1) | |
|
1.0
|
Using the format() function with no arguments crashes QGIS - **Describe the bug**
Since QGIS 3.8 it is possible to use expressions when using Processing in batch mode. Unfortunately using the format() function in an expression crashes QGIS.
**How to reproduce**
1. Open the buffer processing tool in batch mode
2. For the input layer, click on "Autofill..." and choose "Calculate by Expression..."
3. Write format(), QGIS will crash after entering the second parenthesis
**QGIS and OS versions**
QGIS version | 3.8.2-Zanzibar | QGIS code revision | 4470baa1a3
-- | -- | -- | --
Compiled against Qt | 5.11.2 | Running against Qt | 5.11.2
Compiled against GDAL/OGR | 2.4.1 | Running against GDAL/OGR | 2.4.1
Compiled against GEOS | 3.7.2-CAPI-1.11.0 | Running against GEOS | 3.7.2-CAPI-1.11.0 b55d2125
PostgreSQL Client Version | 10.8 | SpatiaLite Version | 4.3.0
QWT Version | 6.1.3 | QScintilla2 Version | 2.10.8
Compiled against PROJ | 5.2.0 | Running against PROJ | Rel. 5.2.0, September 15th, 2018
OS Version | Windows 7 SP 1 (6.1) | |
|
process
|
using the format function with no arguments crashes qgis describe the bug since qgis it is possible to use expressions when using processing in batch mode unfortunately using the format function in an expression crashes qgis how to reproduce open the buffer processing tool in batch mode for the input layer click on autofill and choose calculate by expression write format qgis will crash after entering the second parenthesis qgis and os versions qgis version zanzibar qgis code revision compiled against qt running against qt compiled against gdal ogr running against gdal ogr compiled against geos capi running against geos capi postgresql client version spatialite version qwt version version compiled against proj running against proj rel september os version windows sp
| 1
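The rows in this dump follow the schema given in the header: per-issue metadata (`repo`, `action`, `title`, `labels`, body text) plus a string `label` (`process`/`non_process`) and a matching `binary_label` (1/0). A minimal pandas sketch of that row shape, using two abbreviated rows copied from the records above (the actual file name and loading step are not shown in the dump, so they are omitted here):

```python
import pandas as pd

# Two rows mirroring the dump (values abbreviated); column names follow
# the schema in the header. In practice these would come from the file.
rows = [
    {"repo": "qgis/QGIS", "action": "closed",
     "label": "process", "binary_label": 1},
    {"repo": "chakra-ui/chakra-ui-docs", "action": "opened",
     "label": "non_process", "binary_label": 0},
]
df = pd.DataFrame(rows)

# The string label and the binary label should agree row-by-row.
assert (df["binary_label"] == (df["label"] == "process").astype(int)).all()

process_share = df["binary_label"].mean()
print(process_share)  # 0.5 for this two-row sample
```

This is only a sketch of the column layout, not a claim about how the dataset was produced.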
|
333,162
| 24,365,590,613
|
IssuesEvent
|
2022-10-03 14:56:16
|
chakra-ui/chakra-ui-docs
|
https://api.github.com/repos/chakra-ui/chakra-ui-docs
|
opened
|
Theming: FormControl
|
good first issue created-by: Chakra UI team Topic: Documentation 📚 hacktoberfest
|
### Subject
FormControl
### Description
Create a basic theming documentation for the `FormControl` component that showcases how you can style it.
Theming file: [Form Control theming file](https://github.com/chakra-ui/chakra-ui-docs/blob/main/content/docs/components/form-control/theming.mdx)
|
1.0
|
Theming: FormControl - ### Subject
FormControl
### Description
Create a basic theming documentation for the `FormControl` component that showcases how you can style it.
Theming file: [Form Control theming file](https://github.com/chakra-ui/chakra-ui-docs/blob/main/content/docs/components/form-control/theming.mdx)
|
non_process
|
theming formcontrol subject formcontrol description create a basic theming documentation for the formcontrol component that showcases how you can style it theming file
| 0
|
14,863
| 18,272,072,655
|
IssuesEvent
|
2021-10-04 14:44:46
|
prisma/prisma
|
https://api.github.com/repos/prisma/prisma
|
opened
|
MongoDB re-introspection
|
process/candidate topic: re-introspection team/migrations topic: mongodb
|
When we run `db pull` after doing manual changes to a data model with MongoDB, these changes are gone. We should consider which parts of re-introspection we port from the SQL connector. Currently we do the following merging between the schema file and introspected data model:
## Relations
MongoDB doesn't store relations, so user writes them manually to the PSL. We should keep this feature for both: MongoDB and SQL databases without foreign keys.
## Model `@@map` attributes
Consider a model:
```prisma
model A {
id Int @id
@@map("data_base_A_table")
}
```
We should move the `@@map` from the schema file to the new introspected data model.
## Pre-3.0 index `name` parameters
Before 3.0, an `@@index` attribute could have `name` parameter that would translate as the name of the method in the client. This is now handled by the `map` parameter, but we should transfer the `name` params to the introspected data model.
## Custom index names
`@index(map: "foo")` and `@@index(map: "foo")` should be mapped.
## Changed primary key names
MongoDB doesn't have custom primary key names. We might not need this one outside of the SQL databases.
## Changed scalar key names
`field Int @map("customField")` should be needed in MongoDB.
## Changed relation key names
This is not mandatory due to we do not introspect relations on MongoDB.
## Changed relation names
Likewise not needed for MongoDB due to not introspecting relations.
## Changed enum names
MongoDB doesn't support enums. Not needed.
## Changed enum values
Likewise not needed.
## MySQL enum names
MongoDB is not MySQL (thank god).
## Prisma level defaults
User might want to use `@default(cuid())`, `default(uuid())` or `@updated_at` with MongoDB. We should port this.
## Ignores
A model can be `@@ignore`d and a field can be `@ignore`d. It is not super important due to us not automatically ignore that many models/fields than we do on SQL databases. SQL databases can have tables with no primary/unique keys, MongoDB always has an implicit primary key.
The user might want to ignore though, and we should keep these changes.
## Comments
Anything that is commented by the user should follow between `db pull`s. The MongoDB introspection adds special comments for fields with multiple types. These should be merged correctly with the previous comments.
|
1.0
|
MongoDB re-introspection - When we run `db pull` after doing manual changes to a data model with MongoDB, these changes are gone. We should consider which parts of re-introspection we port from the SQL connector. Currently we do the following merging between the schema file and introspected data model:
## Relations
MongoDB doesn't store relations, so user writes them manually to the PSL. We should keep this feature for both: MongoDB and SQL databases without foreign keys.
## Model `@@map` attributes
Consider a model:
```prisma
model A {
id Int @id
@@map("data_base_A_table")
}
```
We should move the `@@map` from the schema file to the new introspected data model.
## Pre-3.0 index `name` parameters
Before 3.0, an `@@index` attribute could have `name` parameter that would translate as the name of the method in the client. This is now handled by the `map` parameter, but we should transfer the `name` params to the introspected data model.
## Custom index names
`@index(map: "foo")` and `@@index(map: "foo")` should be mapped.
## Changed primary key names
MongoDB doesn't have custom primary key names. We might not need this one outside of the SQL databases.
## Changed scalar key names
`field Int @map("customField")` should be needed in MongoDB.
## Changed relation key names
This is not mandatory due to we do not introspect relations on MongoDB.
## Changed relation names
Likewise not needed for MongoDB due to not introspecting relations.
## Changed enum names
MongoDB doesn't support enums. Not needed.
## Changed enum values
Likewise not needed.
## MySQL enum names
MongoDB is not MySQL (thank god).
## Prisma level defaults
User might want to use `@default(cuid())`, `default(uuid())` or `@updated_at` with MongoDB. We should port this.
## Ignores
A model can be `@@ignore`d and a field can be `@ignore`d. It is not super important due to us not automatically ignore that many models/fields than we do on SQL databases. SQL databases can have tables with no primary/unique keys, MongoDB always has an implicit primary key.
The user might want to ignore though, and we should keep these changes.
## Comments
Anything that is commented by the user should follow between `db pull`s. The MongoDB introspection adds special comments for fields with multiple types. These should be merged correctly with the previous comments.
|
process
|
mongodb re introspection when we run db pull after doing manual changes to a data model with mongodb these changes are gone we should consider which parts of re introspection we port from the sql connector currently we do the following merging between the schema file and introspected data model relations mongodb doesn t store relations so user writes them manually to the psl we should keep this feature for both mongodb and sql databases without foreign keys model map attributes consider a model prisma model a id int id map data base a table we should move the map from the schema file to the new introspected data model pre index name parameters before an index attribute could have name parameter that would translate as the name of the method in the client this is now handled by the map parameter but we should transfer the name params to the introspected data model custom index names index map foo and index map foo should be mapped changed primary key names mongodb doesn t have custom primary key names we might not need this one outside of the sql databases changed scalar key names field int map customfield should be needed in mongodb changed relation key names this is not mandatory due to we do not introspect relations on mongodb changed relation names likewise not needed for mongodb due to not introspecting relations changed enum names mongodb doesn t support enums not needed changed enum values likewise not needed mysql enum names mongodb is not mysql thank god prisma level defaults user might want to use default cuid default uuid or updated at with mongodb we should port this ignores a model can be ignore d and a field can be ignore d it is not super important due to us not automatically ignore that many models fields than we do on sql databases sql databases can have tables with no primary unique keys mongodb always has an implicit primary key the user might want to ignore though and we should keep these changes comments anything that is commented by the user should follow between db pull s the mongodb introspection adds special comments for fields with multiple types these should be merged correctly with the previous comments
| 1
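Each record carries a cleaned `text` column that is visibly lowercased, with digits and punctuation stripped (e.g. "Windows 7 SP 1 (6.1)" becomes "windows sp", "GEOS 3.7.2-CAPI-1.11.0" becomes "geos capi"). The actual preprocessing pipeline is not shown in the dump; the following is a guessed reconstruction that reproduces the visible rows:

```python
import re

def normalize(text: str) -> str:
    """Approximate the cleaned `text` column: lowercase, drop anything
    that is not a letter or whitespace, collapse whitespace runs.
    This is a reconstruction, not the dataset's documented pipeline."""
    text = text.lower()
    text = re.sub(r"[^a-z\s]", " ", text)  # strip digits and punctuation
    return " ".join(text.split())          # collapse runs of whitespace

print(normalize("Compiled against GEOS | 3.7.2-CAPI-1.11.0"))
# compiled against geos capi
```

The output matches the corresponding fragment of the cleaned `text` cells above, which suggests the real pipeline is at least close to this.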
|
288,913
| 31,931,002,073
|
IssuesEvent
|
2023-09-19 07:25:12
|
Trinadh465/linux-4.1.15_CVE-2023-4128
|
https://api.github.com/repos/Trinadh465/linux-4.1.15_CVE-2023-4128
|
opened
|
CVE-2023-3390 (High) detected in linuxlinux-4.6
|
Mend: dependency security vulnerability
|
## CVE-2023-3390 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>linuxlinux-4.6</b></p></summary>
<p>
<p>The Linux Kernel</p>
<p>Library home page: <a href=https://mirrors.edge.kernel.org/pub/linux/kernel/v4.x/?wsslib=linux>https://mirrors.edge.kernel.org/pub/linux/kernel/v4.x/?wsslib=linux</a></p>
<p>Found in HEAD commit: <a href="https://github.com/Trinadh465/linux-4.1.15_CVE-2023-4128/commit/0c6c8d8c809f697cd5fc581c6c08e9ad646c55a8">0c6c8d8c809f697cd5fc581c6c08e9ad646c55a8</a></p>
<p>Found in base branch: <b>main</b></p></p>
</details>
</p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Source Files (2)</summary>
<p></p>
<p>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/net/netfilter/nf_tables_api.c</b>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/net/netfilter/nf_tables_api.c</b>
</p>
</details>
<p></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png?' width=19 height=20> Vulnerability Details</summary>
<p>
A use-after-free vulnerability was found in the Linux kernel's netfilter subsystem in net/netfilter/nf_tables_api.c.
Mishandled error handling with NFT_MSG_NEWRULE makes it possible to use a dangling pointer in the same transaction causing a use-after-free vulnerability. This flaw allows a local attacker with user access to cause a privilege escalation issue.
We recommend upgrading past commit 1240eb93f0616b21c675416516ff3d74798fdc97.
<p>Publish Date: 2023-06-28
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2023-3390>CVE-2023-3390</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.8</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: Low
- Privileges Required: Low
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://www.linuxkernelcves.com/cves/CVE-2023-3390">https://www.linuxkernelcves.com/cves/CVE-2023-3390</a></p>
<p>Release Date: 2023-06-28</p>
<p>Fix Resolution: v5.15.118,v6.1.35,v6.3.9,v6.4-rc7</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
True
|
CVE-2023-3390 (High) detected in linuxlinux-4.6 - ## CVE-2023-3390 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>linuxlinux-4.6</b></p></summary>
<p>
<p>The Linux Kernel</p>
<p>Library home page: <a href=https://mirrors.edge.kernel.org/pub/linux/kernel/v4.x/?wsslib=linux>https://mirrors.edge.kernel.org/pub/linux/kernel/v4.x/?wsslib=linux</a></p>
<p>Found in HEAD commit: <a href="https://github.com/Trinadh465/linux-4.1.15_CVE-2023-4128/commit/0c6c8d8c809f697cd5fc581c6c08e9ad646c55a8">0c6c8d8c809f697cd5fc581c6c08e9ad646c55a8</a></p>
<p>Found in base branch: <b>main</b></p></p>
</details>
</p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Source Files (2)</summary>
<p></p>
<p>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/net/netfilter/nf_tables_api.c</b>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/net/netfilter/nf_tables_api.c</b>
</p>
</details>
<p></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png?' width=19 height=20> Vulnerability Details</summary>
<p>
A use-after-free vulnerability was found in the Linux kernel's netfilter subsystem in net/netfilter/nf_tables_api.c.
Mishandled error handling with NFT_MSG_NEWRULE makes it possible to use a dangling pointer in the same transaction causing a use-after-free vulnerability. This flaw allows a local attacker with user access to cause a privilege escalation issue.
We recommend upgrading past commit 1240eb93f0616b21c675416516ff3d74798fdc97.
<p>Publish Date: 2023-06-28
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2023-3390>CVE-2023-3390</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.8</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: Low
- Privileges Required: Low
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://www.linuxkernelcves.com/cves/CVE-2023-3390">https://www.linuxkernelcves.com/cves/CVE-2023-3390</a></p>
<p>Release Date: 2023-06-28</p>
<p>Fix Resolution: v5.15.118,v6.1.35,v6.3.9,v6.4-rc7</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
non_process
|
cve high detected in linuxlinux cve high severity vulnerability vulnerable library linuxlinux the linux kernel library home page a href found in head commit a href found in base branch main vulnerable source files net netfilter nf tables api c net netfilter nf tables api c vulnerability details a use after free vulnerability was found in the linux kernel s netfilter subsystem in net netfilter nf tables api c mishandled error handling with nft msg newrule makes it possible to use a dangling pointer in the same transaction causing a use after free vulnerability this flaw allows a local attacker with user access to cause a privilege escalation issue we recommend upgrading past commit publish date url a href cvss score details base score metrics exploitability metrics attack vector local attack complexity low privileges required low user interaction none scope unchanged impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution step up your open source security game with mend
| 0
|
345,809
| 24,875,476,748
|
IssuesEvent
|
2022-10-27 18:41:37
|
Uniba-dev-projects/iot-image-transfer
|
https://api.github.com/repos/Uniba-dev-projects/iot-image-transfer
|
opened
|
Initial Infrastructure
|
documentation enhancement
|
Initial Infrastructure of PHP Http Server.
This is a first version of the microservices NDVI_Provider.
|
1.0
|
Initial Infrastructure - Initial Infrastructure of PHP Http Server.
This is a first version of the microservices NDVI_Provider.
|
non_process
|
initial infrastructure initial infrastructure of php http server this is a first version of the microservices ndvi provider
| 0
|
756,651
| 26,479,959,536
|
IssuesEvent
|
2023-01-17 14:02:16
|
mi6/ic-design-system
|
https://api.github.com/repos/mi6/ic-design-system
|
closed
|
Explore contrast of disabled button text and background
|
development priority: medium a11y
|
## Summary
Explore the contrast of labels on disabled buttons.
Note: Disabled buttons do not need to pass normal accessibility guidelines around colour contrast, however we want to try to make disabled button labels still readable.
## 💬 Description
Research and make the changes to improve the readability.
## 💰 Use value
So that all users can quickly identify disabled buttons.
## Additional info
- Changing colours to up contrast without making button seem 'enabled'.
- Introduce non-colour element to indicate disabled state (dashed line similar to text fields)
|
1.0
|
Explore contrast of disabled button text and background - ## Summary
Explore the contrast of labels on disabled buttons.
Note: Disabled buttons do not need to pass normal accessibility guidelines around colour contrast, however we want to try to make disabled button labels still readable.
## 💬 Description
Research and make the changes to improve the readability.
## 💰 Use value
So that all users can quickly identify disabled buttons.
## Additional info
- Changing colours to up contrast without making button seem 'enabled'.
- Introduce non-colour element to indicate disabled state (dashed line similar to text fields)
|
non_process
|
explore contrast of disabled button text and background summary explore the contrast of labels on disabled buttons note disabled buttons do not need to pass normal accessibility guidelines around colour contrast however we want to try to make disabled button labels still readable 💬 description research and make the changes to improve the readability 💰 use value so that all users can quickly identify disabled buttons additional info changing colours to up contrast without making button seem enabled introduce non colour element to indicate disabled state dashed line similar to text fields
| 0
|
6,791
| 9,922,252,904
|
IssuesEvent
|
2019-07-01 01:35:31
|
bisq-network/bisq
|
https://api.github.com/repos/bisq-network/bisq
|
closed
|
Show the security deposit in the available offers view
|
in:gui in:trade-process was:dropped
|
Since the buyers security deposit is defined when a maker publishes their offer, it would be nice to see the security deposit for each offer in the available offers view. That way a buyer can see at a glance which offer they want to avoid due to a high security deposit, or a seller which offers provide more of a security deposit.
|
1.0
|
Show the security deposit in the available offers view - Since the buyers security deposit is defined when a maker publishes their offer, it would be nice to see the security deposit for each offer in the available offers view. That way a buyer can see at a glance which offer they want to avoid due to a high security deposit, or a seller which offers provide more of a security deposit.
|
process
|
show the security deposit in the available offers view since the buyers security deposit is defined when a maker publishes their offer it would be nice to see the security deposit for each offer in the available offers view that way a buyer can see at a glance which offer they want to avoid due to a high security deposit or a seller which offers provide more of a security deposit
| 1
|
22,633
| 3,670,966,529
|
IssuesEvent
|
2016-02-22 03:02:20
|
gperftools/gperftools
|
https://api.github.com/repos/gperftools/gperftools
|
closed
|
object name conflicts in archive in Cygwin on windows.
|
Priority-Medium Status-Started Type-Defect
|
Originally reported on Google Code with ID 469
```
What steps will reproduce the problem?
1.Downloaded gperftools-2.0.zip file.
2.Went to the shell in Cygwin.
3.Executed the commands "./configure" and then tried "make".
What is the expected output? What do you see instead?
Expected is to generate a shared object file.
But compilation is failing.
What version of the product are you using? On what operating system?
Cygwin on Windows7 - x64
Please provide any additional information below.
Attached the files
```
Reported by `suman.aluvala` on 2012-09-26 09:41:34
<hr>
* *Attachment: [ConfigLog.txt](https://storage.googleapis.com/google-code-attachments/gperftools/issue-469/comment-0/ConfigLog.txt)*
* *Attachment: [MakeLog.txt](https://storage.googleapis.com/google-code-attachments/gperftools/issue-469/comment-0/MakeLog.txt)*
|
1.0
|
object name conflicts in archive in Cygwin on windows. - Originally reported on Google Code with ID 469
```
What steps will reproduce the problem?
1.Downloaded gperftools-2.0.zip file.
2.Went to the shell in Cygwin.
3.Executed the commands "./configure" and then tried "make".
What is the expected output? What do you see instead?
Expected is to generate a shared object file.
But compilation is failing.
What version of the product are you using? On what operating system?
Cygwin on Windows7 - x64
Please provide any additional information below.
Attached the files
```
Reported by `suman.aluvala` on 2012-09-26 09:41:34
<hr>
* *Attachment: [ConfigLog.txt](https://storage.googleapis.com/google-code-attachments/gperftools/issue-469/comment-0/ConfigLog.txt)*
* *Attachment: [MakeLog.txt](https://storage.googleapis.com/google-code-attachments/gperftools/issue-469/comment-0/MakeLog.txt)*
|
non_process
|
object name conflicts in archive in cygwin on windows originally reported on google code with id what steps will reproduce the problem downloaded gperftools zip file went to the shell in cygwin executed the commands configure and then tried make what is the expected output what do you see instead expected is to generate a shared object file but compilation is failing what version of the product are you using on what operating system cygwin on please provide any additional information below attached the files reported by suman aluvala on attachment attachment
| 0
|
94,320
| 3,924,509,205
|
IssuesEvent
|
2016-04-22 15:26:24
|
NREL/EnergyPlus
|
https://api.github.com/repos/NREL/EnergyPlus
|
closed
|
Fix gross wall area for underground zones in IVRS report
|
EnergyPlus Priority S2 - Medium
|
See Issue #4065 for further details. The gross wall area for underground zones is still showing as 0 in IVRS report.
|
1.0
|
Fix gross wall area for underground zones in IVRS report - See Issue #4065 for further details. The gross wall area for underground zones is still showing as 0 in IVRS report.
|
non_process
|
fix gross wall area for underground zones in ivrs report see issue for further details the gross wall area for underground zones is still showing as in ivrs report
| 0
|
21,522
| 11,660,431,700
|
IssuesEvent
|
2020-03-03 03:19:12
|
cityofaustin/atd-data-tech
|
https://api.github.com/repos/cityofaustin/atd-data-tech
|
opened
|
Import historical LPB records
|
Product: Banners Service: Apps Type: Enhancement Workgroup: SMB migrated
|
Joey sent me cleaned up spreadsheet to import back in the system :crossed_fingers:
*Migrated from [atd-knack-street-banner #9](https://github.com/cityofaustin/atd-knack-street-banner/issues/9)*
|
1.0
|
Import historical LPB records - Joey sent me cleaned up spreadsheet to import back in the system :crossed_fingers:
*Migrated from [atd-knack-street-banner #9](https://github.com/cityofaustin/atd-knack-street-banner/issues/9)*
|
non_process
|
import historical lpb records joey sent me cleaned up spreadsheet to import back in the system crossed fingers migrated from
| 0
|
9,129
| 12,200,394,414
|
IssuesEvent
|
2020-04-30 04:35:04
|
nkumar115/Data-Science
|
https://api.github.com/repos/nkumar115/Data-Science
|
opened
|
Working with Missing Data in Pandas
|
Data - PreProcessing
|
**TL;DR**
Working with Missing Data in Pandas
**Link**
https://www.geeksforgeeks.org/working-with-missing-data-in-pandas/
|
1.0
|
Working with Missing Data in Pandas - **TL;DR**
Working with Missing Data in Pandas
**Link**
https://www.geeksforgeeks.org/working-with-missing-data-in-pandas/
|
process
|
working with missing data in pandas tl dr working with missing data in pandas link
| 1
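The record above points to a tutorial on missing data in pandas. In the spirit of that link, a self-contained sketch of the two basic operations it covers — detecting gaps with `isna` and filling them with `fillna` — on a hypothetical single-column frame:

```python
import numpy as np
import pandas as pd

# Hypothetical frame with one gap; the column name is illustrative.
df = pd.DataFrame({"score": [1.0, np.nan, 3.0]})

print(df["score"].isna().sum())          # 1 -- count of missing values
print(df["score"].fillna(0.0).tolist())  # [1.0, 0.0, 3.0]
```

`fillna(0.0)` is just one common strategy; interpolation or dropping rows (`dropna`) are equally standard, depending on the data.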
|
14,321
| 17,350,747,676
|
IssuesEvent
|
2021-07-29 08:25:26
|
alphagov/govuk-design-system
|
https://api.github.com/repos/alphagov/govuk-design-system
|
closed
|
Create repository for the GOV.UK Prototype team
|
refining team processes
|
<!--
This is a template for any issues that aren’t bug reports or new feature requests. The headings in this section provide examples of the information you might want to include, but feel free to add/delete sections where appropriate.
-->
## What
The GOV.UK Prototype team are creating a separate GitHub project, and would like to have a repository for private issues.
## Why
Part of the GitHub set up.
## Who needs to know about this
GOV.UK Prototype team, @36degrees
## Done when
- [x] private repository is created
- [x] create a GitHub team (Trang, Izabela, Rosie, Joe, Nora, Sara)
|
1.0
|
Create repository for the GOV.UK Prototype team - <!--
This is a template for any issues that aren’t bug reports or new feature requests. The headings in this section provide examples of the information you might want to include, but feel free to add/delete sections where appropriate.
-->
## What
The GOV.UK Prototype team are creating a separate GitHub project, and would like to have a repository for private issues.
## Why
Part of the GitHub set up.
## Who needs to know about this
GOV.UK Prototype team, @36degrees
## Done when
- [x] private repository is created
- [x] create a GitHub team (Trang, Izabela, Rosie, Joe, Nora, Sara)
|
process
|
create repository for the gov uk prototype team this is a template for any issues that aren’t bug reports or new feature requests the headings in this section provide examples of the information you might want to include but feel free to add delete sections where appropriate what the gov uk prototype team are creating a separate github project and would like to have a repository for private issues why part of the github set up who needs to know about this gov uk prototype team done when private repository is created create a github team trang izabela rosie joe nora sara
| 1
|
4,183
| 7,114,540,322
|
IssuesEvent
|
2018-01-18 01:18:11
|
sysown/proxysql
|
https://api.github.com/repos/sysown/proxysql
|
closed
|
MySQL_Logger
|
ADMIN CONNECTION POOL MYSQL PROTOCOL QUERY PROCESSOR ROUTING
|
This new module will be responsible for logging all MySQL activities.
Specs still to be defined
|
1.0
|
MySQL_Logger - This new module will be responsible for logging all MySQL activities.
Specs still to be defined
|
process
|
mysql logger this new module will be responsible for logging all mysql activities specs still to be defined
| 1
|
172,650
| 13,326,061,463
|
IssuesEvent
|
2020-08-27 10:59:37
|
cockroachdb/cockroach
|
https://api.github.com/repos/cockroachdb/cockroach
|
closed
|
roachtest: tpccbench/nodes=9/cpu=4/chaos/partition failed
|
C-test-failure O-roachtest O-robot branch-provisional_202007071743_v20.2.0-alpha.2 release-blocker
|
[(roachtest).tpccbench/nodes=9/cpu=4/chaos/partition failed](https://teamcity.cockroachdb.com/viewLog.html?buildId=2071705&tab=buildLog) on [provisional_202007071743_v20.2.0-alpha.2@0b6e118bc1bcba4cfb4fc6c660153ec5be3989e8](https://github.com/cockroachdb/cockroach/commits/0b6e118bc1bcba4cfb4fc6c660153ec5be3989e8):
```
test_runner.go:804: test timed out (10h0m0s)
cluster.go:2471,tpcc.go:731,tpcc.go:572,test_runner.go:757: monitor failure: monitor task failed: output in run_014957.172_n10_workload_fixtures_load_tpcc: /home/agent/work/.go/src/github.com/cockroachdb/cockroach/bin/roachprod run teamcity-2071705-1594163619-28-n10cpu4:10 -- ./workload fixtures load tpcc --warehouses=2000 --scatter --checks=false --partitions=3 {pgurl:1} returned: context canceled
(1) attached stack trace
| main.(*monitor).WaitE
| /home/agent/work/.go/src/github.com/cockroachdb/cockroach/pkg/cmd/roachtest/cluster.go:2459
| main.(*monitor).Wait
| /home/agent/work/.go/src/github.com/cockroachdb/cockroach/pkg/cmd/roachtest/cluster.go:2467
| main.runTPCCBench
| /home/agent/work/.go/src/github.com/cockroachdb/cockroach/pkg/cmd/roachtest/tpcc.go:731
| main.registerTPCCBenchSpec.func1
| /home/agent/work/.go/src/github.com/cockroachdb/cockroach/pkg/cmd/roachtest/tpcc.go:572
| main.(*testRunner).runTest.func2
| /home/agent/work/.go/src/github.com/cockroachdb/cockroach/pkg/cmd/roachtest/test_runner.go:757
Wraps: (2) monitor failure
Wraps: (3) attached stack trace
| main.(*monitor).wait.func2
| /home/agent/work/.go/src/github.com/cockroachdb/cockroach/pkg/cmd/roachtest/cluster.go:2515
Wraps: (4) monitor task failed
Wraps: (5) attached stack trace
| main.(*cluster).RunE
| /home/agent/work/.go/src/github.com/cockroachdb/cockroach/pkg/cmd/roachtest/cluster.go:2119
| main.loadTPCCBench
| /home/agent/work/.go/src/github.com/cockroachdb/cockroach/pkg/cmd/roachtest/tpcc.go:639
| main.runTPCCBench.func1
| /home/agent/work/.go/src/github.com/cockroachdb/cockroach/pkg/cmd/roachtest/tpcc.go:729
| main.(*monitor).Go.func1
| /home/agent/work/.go/src/github.com/cockroachdb/cockroach/pkg/cmd/roachtest/cluster.go:2449
| golang.org/x/sync/errgroup.(*Group).Go.func1
| /home/agent/work/.go/src/github.com/cockroachdb/cockroach/vendor/golang.org/x/sync/errgroup/errgroup.go:57
| runtime.goexit
| /usr/local/go/src/runtime/asm_amd64.s:1373
Wraps: (6) 2 safe details enclosed
Wraps: (7) output in run_014957.172_n10_workload_fixtures_load_tpcc
Wraps: (8) /home/agent/work/.go/src/github.com/cockroachdb/cockroach/bin/roachprod run teamcity-2071705-1594163619-28-n10cpu4:10 -- ./workload fixtures load tpcc --warehouses=2000 --scatter --checks=false --partitions=3 {pgurl:1} returned
| stderr:
| I200708 01:49:59.276217 1 ccl/workloadccl/cliccl/fixtures.go:279 starting restore of 9 tables
| I200708 01:50:00.064663 76 ccl/workloadccl/fixture.go:586 loaded 2.0 MiB table district in 774.420535ms (20000 rows, 0 index entries, 2.5 MiB)
| I200708 01:50:03.020172 75 ccl/workloadccl/fixture.go:586 loaded 106 KiB table warehouse in 3.730053965s (2000 rows, 0 index entries, 28 KiB)
| I200708 01:50:05.234023 81 ccl/workloadccl/fixture.go:586 loaded 7.8 MiB table item in 5.943847486s (100000 rows, 0 index entries, 1.3 MiB)
| I200708 01:55:06.867829 80 ccl/workloadccl/fixture.go:586 loaded 254 MiB table new_order in 5m7.577686735s (18000000 rows, 0 index entries, 847 KiB)
| I200708 01:58:59.977865 78 ccl/workloadccl/fixture.go:586 loaded 8.5 GiB table history in 9m0.687557281s (60000000 rows, 120000000 index entries, 16 MiB)
|
| stdout:
Wraps: (9) secondary error attachment
| signal: killed
| (1) signal: killed
| Error types: (1) *exec.ExitError
Wraps: (10) context canceled
Error types: (1) *withstack.withStack (2) *errutil.withMessage (3) *withstack.withStack (4) *errutil.withMessage (5) *withstack.withStack (6) *safedetails.withSafeDetails (7) *errutil.withMessage (8) *main.withCommandDetails (9) *secondary.withSecondaryError (10) *errors.errorString
```
<details><summary>More</summary><p>
Artifacts: [/tpccbench/nodes=9/cpu=4/chaos/partition](https://teamcity.cockroachdb.com/viewLog.html?buildId=2071705&tab=artifacts#/tpccbench/nodes=9/cpu=4/chaos/partition)
Related:
- #50910 roachtest: tpccbench/nodes=9/cpu=4/chaos/partition failed [C-test-failure](https://api.github.com/repos/cockroachdb/cockroach/labels/C-test-failure) [O-roachtest](https://api.github.com/repos/cockroachdb/cockroach/labels/O-roachtest) [O-robot](https://api.github.com/repos/cockroachdb/cockroach/labels/O-robot) [branch-master](https://api.github.com/repos/cockroachdb/cockroach/labels/branch-master) [release-blocker](https://api.github.com/repos/cockroachdb/cockroach/labels/release-blocker)
- #50814 roachtest: tpccbench/nodes=9/cpu=4/chaos/partition failed [C-test-failure](https://api.github.com/repos/cockroachdb/cockroach/labels/C-test-failure) [O-roachtest](https://api.github.com/repos/cockroachdb/cockroach/labels/O-roachtest) [O-robot](https://api.github.com/repos/cockroachdb/cockroach/labels/O-robot) [branch-provisional_202006292135_v19.2.9](https://api.github.com/repos/cockroachdb/cockroach/labels/branch-provisional_202006292135_v19.2.9) [release-blocker](https://api.github.com/repos/cockroachdb/cockroach/labels/release-blocker)
- #50590 roachtest: tpccbench/nodes=9/cpu=4/chaos/partition failed [C-test-failure](https://api.github.com/repos/cockroachdb/cockroach/labels/C-test-failure) [O-roachtest](https://api.github.com/repos/cockroachdb/cockroach/labels/O-roachtest) [O-robot](https://api.github.com/repos/cockroachdb/cockroach/labels/O-robot) [branch-provisional_202006230817_v19.2.8](https://api.github.com/repos/cockroachdb/cockroach/labels/branch-provisional_202006230817_v19.2.8) [release-blocker](https://api.github.com/repos/cockroachdb/cockroach/labels/release-blocker)
- #50512 roachtest: tpccbench/nodes=9/cpu=4/chaos/partition failed [C-test-failure](https://api.github.com/repos/cockroachdb/cockroach/labels/C-test-failure) [O-roachtest](https://api.github.com/repos/cockroachdb/cockroach/labels/O-roachtest) [O-robot](https://api.github.com/repos/cockroachdb/cockroach/labels/O-robot) [branch-provisional_202006220937_v19.2.8](https://api.github.com/repos/cockroachdb/cockroach/labels/branch-provisional_202006220937_v19.2.8) [release-blocker](https://api.github.com/repos/cockroachdb/cockroach/labels/release-blocker)
- #50062 roachtest: tpccbench/nodes=9/cpu=4/chaos/partition failed [C-test-failure](https://api.github.com/repos/cockroachdb/cockroach/labels/C-test-failure) [O-roachtest](https://api.github.com/repos/cockroachdb/cockroach/labels/O-roachtest) [O-robot](https://api.github.com/repos/cockroachdb/cockroach/labels/O-robot) [branch-release-19.2](https://api.github.com/repos/cockroachdb/cockroach/labels/branch-release-19.2) [release-blocker](https://api.github.com/repos/cockroachdb/cockroach/labels/release-blocker)
- #45722 roachtest: tpccbench/nodes=9/cpu=4/chaos/partition failed [C-test-failure](https://api.github.com/repos/cockroachdb/cockroach/labels/C-test-failure) [O-roachtest](https://api.github.com/repos/cockroachdb/cockroach/labels/O-roachtest) [O-robot](https://api.github.com/repos/cockroachdb/cockroach/labels/O-robot) [branch-release-19.1](https://api.github.com/repos/cockroachdb/cockroach/labels/branch-release-19.1) [release-blocker](https://api.github.com/repos/cockroachdb/cockroach/labels/release-blocker)
[See this test on roachdash](https://roachdash.crdb.dev/?filter=status%3Aopen+t%3A.%2Atpccbench%2Fnodes%3D9%2Fcpu%3D4%2Fchaos%2Fpartition.%2A&sort=title&restgroup=false&display=lastcommented+project)
<sub>powered by [pkg/cmd/internal/issues](https://github.com/cockroachdb/cockroach/tree/master/pkg/cmd/internal/issues)</sub></p></details>
|
2.0
|
roachtest: tpccbench/nodes=9/cpu=4/chaos/partition failed - [(roachtest).tpccbench/nodes=9/cpu=4/chaos/partition failed](https://teamcity.cockroachdb.com/viewLog.html?buildId=2071705&tab=buildLog) on [provisional_202007071743_v20.2.0-alpha.2@0b6e118bc1bcba4cfb4fc6c660153ec5be3989e8](https://github.com/cockroachdb/cockroach/commits/0b6e118bc1bcba4cfb4fc6c660153ec5be3989e8):
```
test_runner.go:804: test timed out (10h0m0s)
cluster.go:2471,tpcc.go:731,tpcc.go:572,test_runner.go:757: monitor failure: monitor task failed: output in run_014957.172_n10_workload_fixtures_load_tpcc: /home/agent/work/.go/src/github.com/cockroachdb/cockroach/bin/roachprod run teamcity-2071705-1594163619-28-n10cpu4:10 -- ./workload fixtures load tpcc --warehouses=2000 --scatter --checks=false --partitions=3 {pgurl:1} returned: context canceled
(1) attached stack trace
| main.(*monitor).WaitE
| /home/agent/work/.go/src/github.com/cockroachdb/cockroach/pkg/cmd/roachtest/cluster.go:2459
| main.(*monitor).Wait
| /home/agent/work/.go/src/github.com/cockroachdb/cockroach/pkg/cmd/roachtest/cluster.go:2467
| main.runTPCCBench
| /home/agent/work/.go/src/github.com/cockroachdb/cockroach/pkg/cmd/roachtest/tpcc.go:731
| main.registerTPCCBenchSpec.func1
| /home/agent/work/.go/src/github.com/cockroachdb/cockroach/pkg/cmd/roachtest/tpcc.go:572
| main.(*testRunner).runTest.func2
| /home/agent/work/.go/src/github.com/cockroachdb/cockroach/pkg/cmd/roachtest/test_runner.go:757
Wraps: (2) monitor failure
Wraps: (3) attached stack trace
| main.(*monitor).wait.func2
| /home/agent/work/.go/src/github.com/cockroachdb/cockroach/pkg/cmd/roachtest/cluster.go:2515
Wraps: (4) monitor task failed
Wraps: (5) attached stack trace
| main.(*cluster).RunE
| /home/agent/work/.go/src/github.com/cockroachdb/cockroach/pkg/cmd/roachtest/cluster.go:2119
| main.loadTPCCBench
| /home/agent/work/.go/src/github.com/cockroachdb/cockroach/pkg/cmd/roachtest/tpcc.go:639
| main.runTPCCBench.func1
| /home/agent/work/.go/src/github.com/cockroachdb/cockroach/pkg/cmd/roachtest/tpcc.go:729
| main.(*monitor).Go.func1
| /home/agent/work/.go/src/github.com/cockroachdb/cockroach/pkg/cmd/roachtest/cluster.go:2449
| golang.org/x/sync/errgroup.(*Group).Go.func1
| /home/agent/work/.go/src/github.com/cockroachdb/cockroach/vendor/golang.org/x/sync/errgroup/errgroup.go:57
| runtime.goexit
| /usr/local/go/src/runtime/asm_amd64.s:1373
Wraps: (6) 2 safe details enclosed
Wraps: (7) output in run_014957.172_n10_workload_fixtures_load_tpcc
Wraps: (8) /home/agent/work/.go/src/github.com/cockroachdb/cockroach/bin/roachprod run teamcity-2071705-1594163619-28-n10cpu4:10 -- ./workload fixtures load tpcc --warehouses=2000 --scatter --checks=false --partitions=3 {pgurl:1} returned
| stderr:
| I200708 01:49:59.276217 1 ccl/workloadccl/cliccl/fixtures.go:279 starting restore of 9 tables
| I200708 01:50:00.064663 76 ccl/workloadccl/fixture.go:586 loaded 2.0 MiB table district in 774.420535ms (20000 rows, 0 index entries, 2.5 MiB)
| I200708 01:50:03.020172 75 ccl/workloadccl/fixture.go:586 loaded 106 KiB table warehouse in 3.730053965s (2000 rows, 0 index entries, 28 KiB)
| I200708 01:50:05.234023 81 ccl/workloadccl/fixture.go:586 loaded 7.8 MiB table item in 5.943847486s (100000 rows, 0 index entries, 1.3 MiB)
| I200708 01:55:06.867829 80 ccl/workloadccl/fixture.go:586 loaded 254 MiB table new_order in 5m7.577686735s (18000000 rows, 0 index entries, 847 KiB)
| I200708 01:58:59.977865 78 ccl/workloadccl/fixture.go:586 loaded 8.5 GiB table history in 9m0.687557281s (60000000 rows, 120000000 index entries, 16 MiB)
|
| stdout:
Wraps: (9) secondary error attachment
| signal: killed
| (1) signal: killed
| Error types: (1) *exec.ExitError
Wraps: (10) context canceled
Error types: (1) *withstack.withStack (2) *errutil.withMessage (3) *withstack.withStack (4) *errutil.withMessage (5) *withstack.withStack (6) *safedetails.withSafeDetails (7) *errutil.withMessage (8) *main.withCommandDetails (9) *secondary.withSecondaryError (10) *errors.errorString
```
<details><summary>More</summary><p>
Artifacts: [/tpccbench/nodes=9/cpu=4/chaos/partition](https://teamcity.cockroachdb.com/viewLog.html?buildId=2071705&tab=artifacts#/tpccbench/nodes=9/cpu=4/chaos/partition)
Related:
- #50910 roachtest: tpccbench/nodes=9/cpu=4/chaos/partition failed [C-test-failure](https://api.github.com/repos/cockroachdb/cockroach/labels/C-test-failure) [O-roachtest](https://api.github.com/repos/cockroachdb/cockroach/labels/O-roachtest) [O-robot](https://api.github.com/repos/cockroachdb/cockroach/labels/O-robot) [branch-master](https://api.github.com/repos/cockroachdb/cockroach/labels/branch-master) [release-blocker](https://api.github.com/repos/cockroachdb/cockroach/labels/release-blocker)
- #50814 roachtest: tpccbench/nodes=9/cpu=4/chaos/partition failed [C-test-failure](https://api.github.com/repos/cockroachdb/cockroach/labels/C-test-failure) [O-roachtest](https://api.github.com/repos/cockroachdb/cockroach/labels/O-roachtest) [O-robot](https://api.github.com/repos/cockroachdb/cockroach/labels/O-robot) [branch-provisional_202006292135_v19.2.9](https://api.github.com/repos/cockroachdb/cockroach/labels/branch-provisional_202006292135_v19.2.9) [release-blocker](https://api.github.com/repos/cockroachdb/cockroach/labels/release-blocker)
- #50590 roachtest: tpccbench/nodes=9/cpu=4/chaos/partition failed [C-test-failure](https://api.github.com/repos/cockroachdb/cockroach/labels/C-test-failure) [O-roachtest](https://api.github.com/repos/cockroachdb/cockroach/labels/O-roachtest) [O-robot](https://api.github.com/repos/cockroachdb/cockroach/labels/O-robot) [branch-provisional_202006230817_v19.2.8](https://api.github.com/repos/cockroachdb/cockroach/labels/branch-provisional_202006230817_v19.2.8) [release-blocker](https://api.github.com/repos/cockroachdb/cockroach/labels/release-blocker)
- #50512 roachtest: tpccbench/nodes=9/cpu=4/chaos/partition failed [C-test-failure](https://api.github.com/repos/cockroachdb/cockroach/labels/C-test-failure) [O-roachtest](https://api.github.com/repos/cockroachdb/cockroach/labels/O-roachtest) [O-robot](https://api.github.com/repos/cockroachdb/cockroach/labels/O-robot) [branch-provisional_202006220937_v19.2.8](https://api.github.com/repos/cockroachdb/cockroach/labels/branch-provisional_202006220937_v19.2.8) [release-blocker](https://api.github.com/repos/cockroachdb/cockroach/labels/release-blocker)
- #50062 roachtest: tpccbench/nodes=9/cpu=4/chaos/partition failed [C-test-failure](https://api.github.com/repos/cockroachdb/cockroach/labels/C-test-failure) [O-roachtest](https://api.github.com/repos/cockroachdb/cockroach/labels/O-roachtest) [O-robot](https://api.github.com/repos/cockroachdb/cockroach/labels/O-robot) [branch-release-19.2](https://api.github.com/repos/cockroachdb/cockroach/labels/branch-release-19.2) [release-blocker](https://api.github.com/repos/cockroachdb/cockroach/labels/release-blocker)
- #45722 roachtest: tpccbench/nodes=9/cpu=4/chaos/partition failed [C-test-failure](https://api.github.com/repos/cockroachdb/cockroach/labels/C-test-failure) [O-roachtest](https://api.github.com/repos/cockroachdb/cockroach/labels/O-roachtest) [O-robot](https://api.github.com/repos/cockroachdb/cockroach/labels/O-robot) [branch-release-19.1](https://api.github.com/repos/cockroachdb/cockroach/labels/branch-release-19.1) [release-blocker](https://api.github.com/repos/cockroachdb/cockroach/labels/release-blocker)
[See this test on roachdash](https://roachdash.crdb.dev/?filter=status%3Aopen+t%3A.%2Atpccbench%2Fnodes%3D9%2Fcpu%3D4%2Fchaos%2Fpartition.%2A&sort=title&restgroup=false&display=lastcommented+project)
<sub>powered by [pkg/cmd/internal/issues](https://github.com/cockroachdb/cockroach/tree/master/pkg/cmd/internal/issues)</sub></p></details>
|
non_process
|
roachtest tpccbench nodes cpu chaos partition failed on test runner go test timed out cluster go tpcc go tpcc go test runner go monitor failure monitor task failed output in run workload fixtures load tpcc home agent work go src github com cockroachdb cockroach bin roachprod run teamcity workload fixtures load tpcc warehouses scatter checks false partitions pgurl returned context canceled attached stack trace main monitor waite home agent work go src github com cockroachdb cockroach pkg cmd roachtest cluster go main monitor wait home agent work go src github com cockroachdb cockroach pkg cmd roachtest cluster go main runtpccbench home agent work go src github com cockroachdb cockroach pkg cmd roachtest tpcc go main registertpccbenchspec home agent work go src github com cockroachdb cockroach pkg cmd roachtest tpcc go main testrunner runtest home agent work go src github com cockroachdb cockroach pkg cmd roachtest test runner go wraps monitor failure wraps attached stack trace main monitor wait home agent work go src github com cockroachdb cockroach pkg cmd roachtest cluster go wraps monitor task failed wraps attached stack trace main cluster rune home agent work go src github com cockroachdb cockroach pkg cmd roachtest cluster go main loadtpccbench home agent work go src github com cockroachdb cockroach pkg cmd roachtest tpcc go main runtpccbench home agent work go src github com cockroachdb cockroach pkg cmd roachtest tpcc go main monitor go home agent work go src github com cockroachdb cockroach pkg cmd roachtest cluster go golang org x sync errgroup group go home agent work go src github com cockroachdb cockroach vendor golang org x sync errgroup errgroup go runtime goexit usr local go src runtime asm s wraps safe details enclosed wraps output in run workload fixtures load tpcc wraps home agent work go src github com cockroachdb cockroach bin roachprod run teamcity workload fixtures load tpcc warehouses scatter checks false partitions pgurl returned stderr ccl 
workloadccl cliccl fixtures go starting restore of tables ccl workloadccl fixture go loaded mib table district in rows index entries mib ccl workloadccl fixture go loaded kib table warehouse in rows index entries kib ccl workloadccl fixture go loaded mib table item in rows index entries mib ccl workloadccl fixture go loaded mib table new order in rows index entries kib ccl workloadccl fixture go loaded gib table history in rows index entries mib stdout wraps secondary error attachment signal killed signal killed error types exec exiterror wraps context canceled error types withstack withstack errutil withmessage withstack withstack errutil withmessage withstack withstack safedetails withsafedetails errutil withmessage main withcommanddetails secondary withsecondaryerror errors errorstring more artifacts related roachtest tpccbench nodes cpu chaos partition failed roachtest tpccbench nodes cpu chaos partition failed roachtest tpccbench nodes cpu chaos partition failed roachtest tpccbench nodes cpu chaos partition failed roachtest tpccbench nodes cpu chaos partition failed roachtest tpccbench nodes cpu chaos partition failed powered by
| 0
|
105,703
| 9,100,221,229
|
IssuesEvent
|
2019-02-20 07:52:10
|
humera987/FXLabs-Test-Automation
|
https://api.github.com/repos/humera987/FXLabs-Test-Automation
|
opened
|
Test : ApiV1AbacProjectRequestidAddAbac2positiveRulesGetBatchcategoryabaclevel1useraAllowAbact2positive
|
test
|
Project : Test
Job : Default
Env : Default
Category : ABAC_Level5
Tags : [FX Top 10 - API Vulnerability, Data_Access_Control]
Severity : Major
Region : US_WEST
Result : fail
Status Code : 401
Headers : {WWW-Authenticate=[Basic realm="Realm", Basic realm="Realm"], X-Content-Type-Options=[nosniff], X-XSS-Protection=[1; mode=block], Cache-Control=[no-cache, no-store, max-age=0, must-revalidate], Pragma=[no-cache], Expires=[0], X-Frame-Options=[DENY], Content-Length=[0], Date=[Wed, 20 Feb 2019 07:52:09 GMT]}
Endpoint : http://13.56.210.25/api/v1/api/v1/abac/project//add-abac2positive-rules
Request :
Response :
Logs :
2019-02-20 07:52:09 DEBUG [BatchCategoryAbacLevel1CreateUserAInitAbact2Positive] : URL [http://13.56.210.25/api/v1/api/v1/abac/batch]
2019-02-20 07:52:09 DEBUG [BatchCategoryAbacLevel1CreateUserAInitAbact2Positive] : Method [POST]
2019-02-20 07:52:09 DEBUG [BatchCategoryAbacLevel1CreateUserAInitAbact2Positive] : Request [[ {
"assertions" : [ "lVOeDjhL" ],
"assertionsText" : "lVOeDjhL",
"auth" : "lVOeDjhL",
"authors" : [ "lVOeDjhL" ],
"authorsText" : "lVOeDjhL",
"autoGenerated" : false,
"category" : "NoSQL_Injection",
"cleanup" : [ "lVOeDjhL" ],
"cleanupText" : "lVOeDjhL",
"createdBy" : "",
"createdDate" : "",
"description" : "lVOeDjhL",
"endpoint" : "lVOeDjhL",
"headers" : [ "lVOeDjhL" ],
"headersText" : "lVOeDjhL",
"id" : "",
"inactive" : false,
"init" : [ "lVOeDjhL" ],
"initText" : "lVOeDjhL",
"method" : "HEAD",
"modifiedBy" : "",
"modifiedDate" : "",
"name" : "lVOeDjhL",
"parent" : "lVOeDjhL",
"path" : "lVOeDjhL",
"policies" : {
"cleanupExec" : "lVOeDjhL",
"initExec" : "lVOeDjhL",
"logger" : "lVOeDjhL",
"repeat" : "1679005416",
"repeatDelay" : "1679005416",
"repeatModule" : "lVOeDjhL",
"repeatOnFailure" : "1679005416",
"timeoutSeconds" : "1679005416"
},
"project" : {
"createdBy" : "",
"createdDate" : "",
"id" : "",
"inactive" : false,
"modifiedBy" : "",
"modifiedDate" : "",
"name" : "lVOeDjhL",
"org" : {
"createdBy" : "",
"createdDate" : "",
"id" : "",
"inactive" : false,
"modifiedBy" : "",
"modifiedDate" : "",
"name" : "lVOeDjhL",
"version" : ""
},
"version" : ""
},
"props" : null,
"publishToMarketplace" : false,
"severity" : "Trivial",
"tags" : [ "lVOeDjhL" ],
"tagsText" : "lVOeDjhL",
"testCases" : [ {
"body" : "lVOeDjhL",
"id" : "",
"inactive" : false
} ],
"type" : "Suite",
"version" : "",
"yaml" : "lVOeDjhL"
} ]]
2019-02-20 07:52:09 DEBUG [BatchCategoryAbacLevel1CreateUserAInitAbact2Positive] : Request-Headers [{Content-Type=[application/json], Accept=[application/json], Authorization=[Basic dXNlckFAZnhsYWJzLmlvOnVzZXJB]}]
2019-02-20 07:52:09 DEBUG [BatchCategoryAbacLevel1CreateUserAInitAbact2Positive] : Response []
2019-02-20 07:52:09 DEBUG [BatchCategoryAbacLevel1CreateUserAInitAbact2Positive] : Response-Headers [{WWW-Authenticate=[Basic realm="Realm", Basic realm="Realm"], X-Content-Type-Options=[nosniff], X-XSS-Protection=[1; mode=block], Cache-Control=[no-cache, no-store, max-age=0, must-revalidate], Pragma=[no-cache], Expires=[0], X-Frame-Options=[DENY], Content-Length=[0], Date=[Wed, 20 Feb 2019 07:52:09 GMT]}]
2019-02-20 07:52:09 DEBUG [BatchCategoryAbacLevel1CreateUserAInitAbact2Positive] : StatusCode [401]
2019-02-20 07:52:09 DEBUG [BatchCategoryAbacLevel1CreateUserAInitAbact2Positive] : Time [511]
2019-02-20 07:52:09 DEBUG [BatchCategoryAbacLevel1CreateUserAInitAbact2Positive] : Size [0]
2019-02-20 07:52:09 ERROR [null] : Assertion [@StatusCode == 200 OR @StatusCode == 201] resolved-to [401 == 200 OR 401 == 201] result [Failed]
2019-02-20 07:52:09 DEBUG [BatchCategoryAbacLevel1CreateUserAInitAbact2Positive_Headers] : Headers [{WWW-Authenticate=[Basic realm="Realm", Basic realm="Realm"], X-Content-Type-Options=[nosniff], X-XSS-Protection=[1; mode=block], Cache-Control=[no-cache, no-store, max-age=0, must-revalidate], Pragma=[no-cache], Expires=[0], X-Frame-Options=[DENY], Content-Length=[0], Date=[Wed, 20 Feb 2019 07:52:09 GMT]}]
2019-02-20 07:52:09 DEBUG [BatchCategoryAbacLevel1CreateUserAInitAbact2Positive_Headers] : Headers [{WWW-Authenticate=[Basic realm="Realm", Basic realm="Realm"], X-Content-Type-Options=[nosniff], X-XSS-Protection=[1; mode=block], Cache-Control=[no-cache, no-store, max-age=0, must-revalidate], Pragma=[no-cache], Expires=[0], X-Frame-Options=[DENY], Content-Length=[0], Date=[Wed, 20 Feb 2019 07:52:09 GMT]}]
2019-02-20 07:52:09 DEBUG [BatchCategoryAbacLevel1CreateUserAInitAbact2Positive_Headers[2]] : Headers [{WWW-Authenticate=[Basic realm="Realm", Basic realm="Realm"], X-Content-Type-Options=[nosniff], X-XSS-Protection=[1; mode=block], Cache-Control=[no-cache, no-store, max-age=0, must-revalidate], Pragma=[no-cache], Expires=[0], X-Frame-Options=[DENY], Content-Length=[0], Date=[Wed, 20 Feb 2019 07:52:09 GMT]}]
2019-02-20 07:52:09 DEBUG [BatchCategoryAbacLevel1CreateUserAInitAbact2Positive_Headers[2]] : Headers [{WWW-Authenticate=[Basic realm="Realm", Basic realm="Realm"], X-Content-Type-Options=[nosniff], X-XSS-Protection=[1; mode=block], Cache-Control=[no-cache, no-store, max-age=0, must-revalidate], Pragma=[no-cache], Expires=[0], X-Frame-Options=[DENY], Content-Length=[0], Date=[Wed, 20 Feb 2019 07:52:09 GMT]}]
2019-02-20 07:52:09 DEBUG [ApiV1AbacProjectRequestidAddAbac2positiveRulesGetBatchcategoryabaclevel1useraAllowAbact2positive] : URL [http://13.56.210.25/api/v1/api/v1/abac/project//add-abac2positive-rules]
2019-02-20 07:52:09 DEBUG [ApiV1AbacProjectRequestidAddAbac2positiveRulesGetBatchcategoryabaclevel1useraAllowAbact2positive] : Method [GET]
2019-02-20 07:52:09 DEBUG [ApiV1AbacProjectRequestidAddAbac2positiveRulesGetBatchcategoryabaclevel1useraAllowAbact2positive] : Request []
2019-02-20 07:52:09 DEBUG [ApiV1AbacProjectRequestidAddAbac2positiveRulesGetBatchcategoryabaclevel1useraAllowAbact2positive] : Request-Headers [{Content-Type=[application/json], Accept=[application/json], Authorization=[Basic dXNlckFAZnhsYWJzLmlvOnVzZXJB]}]
2019-02-20 07:52:09 DEBUG [ApiV1AbacProjectRequestidAddAbac2positiveRulesGetBatchcategoryabaclevel1useraAllowAbact2positive] : Response []
2019-02-20 07:52:09 DEBUG [ApiV1AbacProjectRequestidAddAbac2positiveRulesGetBatchcategoryabaclevel1useraAllowAbact2positive] : Response-Headers [{WWW-Authenticate=[Basic realm="Realm", Basic realm="Realm"], X-Content-Type-Options=[nosniff], X-XSS-Protection=[1; mode=block], Cache-Control=[no-cache, no-store, max-age=0, must-revalidate], Pragma=[no-cache], Expires=[0], X-Frame-Options=[DENY], Content-Length=[0], Date=[Wed, 20 Feb 2019 07:52:09 GMT]}]
2019-02-20 07:52:09 DEBUG [ApiV1AbacProjectRequestidAddAbac2positiveRulesGetBatchcategoryabaclevel1useraAllowAbact2positive] : StatusCode [401]
2019-02-20 07:52:09 DEBUG [ApiV1AbacProjectRequestidAddAbac2positiveRulesGetBatchcategoryabaclevel1useraAllowAbact2positive] : Time [574]
2019-02-20 07:52:09 DEBUG [ApiV1AbacProjectRequestidAddAbac2positiveRulesGetBatchcategoryabaclevel1useraAllowAbact2positive] : Size [0]
2019-02-20 07:52:09 ERROR [ApiV1AbacProjectRequestidAddAbac2positiveRulesGetBatchcategoryabaclevel1useraAllowAbact2positive] : Assertion [@StatusCode == 200 AND @Response.errors == false] resolved-to [401 == 200 AND == false] result [Failed]
2019-02-20 07:52:10 DEBUG [ApiV1AbacIdDeleteBatchcategoryabaclevel1AbstractAbact2positive] : URL [http://13.56.210.25/api/v1/api/v1/abac/]
2019-02-20 07:52:10 DEBUG [ApiV1AbacIdDeleteBatchcategoryabaclevel1AbstractAbact2positive] : Method [DELETE]
2019-02-20 07:52:10 DEBUG [ApiV1AbacIdDeleteBatchcategoryabaclevel1AbstractAbact2positive] : Request [null]
2019-02-20 07:52:10 DEBUG [ApiV1AbacIdDeleteBatchcategoryabaclevel1AbstractAbact2positive] : Request-Headers [{Content-Type=[application/json], Accept=[application/json], Authorization=[Basic dXNlckFAZnhsYWJzLmlvOnVzZXJB]}]
2019-02-20 07:52:10 DEBUG [ApiV1AbacIdDeleteBatchcategoryabaclevel1AbstractAbact2positive] : Response []
2019-02-20 07:52:10 DEBUG [ApiV1AbacIdDeleteBatchcategoryabaclevel1AbstractAbact2positive] : Response-Headers [{WWW-Authenticate=[Basic realm="Realm", Basic realm="Realm"], X-Content-Type-Options=[nosniff], X-XSS-Protection=[1; mode=block], Cache-Control=[no-cache, no-store, max-age=0, must-revalidate], Pragma=[no-cache], Expires=[0], X-Frame-Options=[DENY], Content-Length=[0], Date=[Wed, 20 Feb 2019 07:52:10 GMT]}]
2019-02-20 07:52:10 DEBUG [ApiV1AbacIdDeleteBatchcategoryabaclevel1AbstractAbact2positive] : StatusCode [401]
2019-02-20 07:52:10 DEBUG [ApiV1AbacIdDeleteBatchcategoryabaclevel1AbstractAbact2positive] : Time [290]
2019-02-20 07:52:10 DEBUG [ApiV1AbacIdDeleteBatchcategoryabaclevel1AbstractAbact2positive] : Size [0]
2019-02-20 07:52:10 ERROR [null] : Assertion [@StatusCode == 200] resolved-to [401 == 200] result [Failed]
--- FX Bot ---
|
1.0
|
Test : ApiV1AbacProjectRequestidAddAbac2positiveRulesGetBatchcategoryabaclevel1useraAllowAbact2positive - Project : Test
Job : Default
Env : Default
Category : ABAC_Level5
Tags : [FX Top 10 - API Vulnerability, Data_Access_Control]
Severity : Major
Region : US_WEST
Result : fail
Status Code : 401
Headers : {WWW-Authenticate=[Basic realm="Realm", Basic realm="Realm"], X-Content-Type-Options=[nosniff], X-XSS-Protection=[1; mode=block], Cache-Control=[no-cache, no-store, max-age=0, must-revalidate], Pragma=[no-cache], Expires=[0], X-Frame-Options=[DENY], Content-Length=[0], Date=[Wed, 20 Feb 2019 07:52:09 GMT]}
Endpoint : http://13.56.210.25/api/v1/api/v1/abac/project//add-abac2positive-rules
Request :
Response :
Logs :
2019-02-20 07:52:09 DEBUG [BatchCategoryAbacLevel1CreateUserAInitAbact2Positive] : URL [http://13.56.210.25/api/v1/api/v1/abac/batch]
2019-02-20 07:52:09 DEBUG [BatchCategoryAbacLevel1CreateUserAInitAbact2Positive] : Method [POST]
2019-02-20 07:52:09 DEBUG [BatchCategoryAbacLevel1CreateUserAInitAbact2Positive] : Request [[ {
"assertions" : [ "lVOeDjhL" ],
"assertionsText" : "lVOeDjhL",
"auth" : "lVOeDjhL",
"authors" : [ "lVOeDjhL" ],
"authorsText" : "lVOeDjhL",
"autoGenerated" : false,
"category" : "NoSQL_Injection",
"cleanup" : [ "lVOeDjhL" ],
"cleanupText" : "lVOeDjhL",
"createdBy" : "",
"createdDate" : "",
"description" : "lVOeDjhL",
"endpoint" : "lVOeDjhL",
"headers" : [ "lVOeDjhL" ],
"headersText" : "lVOeDjhL",
"id" : "",
"inactive" : false,
"init" : [ "lVOeDjhL" ],
"initText" : "lVOeDjhL",
"method" : "HEAD",
"modifiedBy" : "",
"modifiedDate" : "",
"name" : "lVOeDjhL",
"parent" : "lVOeDjhL",
"path" : "lVOeDjhL",
"policies" : {
"cleanupExec" : "lVOeDjhL",
"initExec" : "lVOeDjhL",
"logger" : "lVOeDjhL",
"repeat" : "1679005416",
"repeatDelay" : "1679005416",
"repeatModule" : "lVOeDjhL",
"repeatOnFailure" : "1679005416",
"timeoutSeconds" : "1679005416"
},
"project" : {
"createdBy" : "",
"createdDate" : "",
"id" : "",
"inactive" : false,
"modifiedBy" : "",
"modifiedDate" : "",
"name" : "lVOeDjhL",
"org" : {
"createdBy" : "",
"createdDate" : "",
"id" : "",
"inactive" : false,
"modifiedBy" : "",
"modifiedDate" : "",
"name" : "lVOeDjhL",
"version" : ""
},
"version" : ""
},
"props" : null,
"publishToMarketplace" : false,
"severity" : "Trivial",
"tags" : [ "lVOeDjhL" ],
"tagsText" : "lVOeDjhL",
"testCases" : [ {
"body" : "lVOeDjhL",
"id" : "",
"inactive" : false
} ],
"type" : "Suite",
"version" : "",
"yaml" : "lVOeDjhL"
} ]]
2019-02-20 07:52:09 DEBUG [BatchCategoryAbacLevel1CreateUserAInitAbact2Positive] : Request-Headers [{Content-Type=[application/json], Accept=[application/json], Authorization=[Basic dXNlckFAZnhsYWJzLmlvOnVzZXJB]}]
2019-02-20 07:52:09 DEBUG [BatchCategoryAbacLevel1CreateUserAInitAbact2Positive] : Response []
2019-02-20 07:52:09 DEBUG [BatchCategoryAbacLevel1CreateUserAInitAbact2Positive] : Response-Headers [{WWW-Authenticate=[Basic realm="Realm", Basic realm="Realm"], X-Content-Type-Options=[nosniff], X-XSS-Protection=[1; mode=block], Cache-Control=[no-cache, no-store, max-age=0, must-revalidate], Pragma=[no-cache], Expires=[0], X-Frame-Options=[DENY], Content-Length=[0], Date=[Wed, 20 Feb 2019 07:52:09 GMT]}]
2019-02-20 07:52:09 DEBUG [BatchCategoryAbacLevel1CreateUserAInitAbact2Positive] : StatusCode [401]
2019-02-20 07:52:09 DEBUG [BatchCategoryAbacLevel1CreateUserAInitAbact2Positive] : Time [511]
2019-02-20 07:52:09 DEBUG [BatchCategoryAbacLevel1CreateUserAInitAbact2Positive] : Size [0]
2019-02-20 07:52:09 ERROR [null] : Assertion [@StatusCode == 200 OR @StatusCode == 201] resolved-to [401 == 200 OR 401 == 201] result [Failed]
2019-02-20 07:52:09 DEBUG [BatchCategoryAbacLevel1CreateUserAInitAbact2Positive_Headers] : Headers [{WWW-Authenticate=[Basic realm="Realm", Basic realm="Realm"], X-Content-Type-Options=[nosniff], X-XSS-Protection=[1; mode=block], Cache-Control=[no-cache, no-store, max-age=0, must-revalidate], Pragma=[no-cache], Expires=[0], X-Frame-Options=[DENY], Content-Length=[0], Date=[Wed, 20 Feb 2019 07:52:09 GMT]}]
2019-02-20 07:52:09 DEBUG [BatchCategoryAbacLevel1CreateUserAInitAbact2Positive_Headers] : Headers [{WWW-Authenticate=[Basic realm="Realm", Basic realm="Realm"], X-Content-Type-Options=[nosniff], X-XSS-Protection=[1; mode=block], Cache-Control=[no-cache, no-store, max-age=0, must-revalidate], Pragma=[no-cache], Expires=[0], X-Frame-Options=[DENY], Content-Length=[0], Date=[Wed, 20 Feb 2019 07:52:09 GMT]}]
2019-02-20 07:52:09 DEBUG [BatchCategoryAbacLevel1CreateUserAInitAbact2Positive_Headers[2]] : Headers [{WWW-Authenticate=[Basic realm="Realm", Basic realm="Realm"], X-Content-Type-Options=[nosniff], X-XSS-Protection=[1; mode=block], Cache-Control=[no-cache, no-store, max-age=0, must-revalidate], Pragma=[no-cache], Expires=[0], X-Frame-Options=[DENY], Content-Length=[0], Date=[Wed, 20 Feb 2019 07:52:09 GMT]}]
2019-02-20 07:52:09 DEBUG [BatchCategoryAbacLevel1CreateUserAInitAbact2Positive_Headers[2]] : Headers [{WWW-Authenticate=[Basic realm="Realm", Basic realm="Realm"], X-Content-Type-Options=[nosniff], X-XSS-Protection=[1; mode=block], Cache-Control=[no-cache, no-store, max-age=0, must-revalidate], Pragma=[no-cache], Expires=[0], X-Frame-Options=[DENY], Content-Length=[0], Date=[Wed, 20 Feb 2019 07:52:09 GMT]}]
2019-02-20 07:52:09 DEBUG [ApiV1AbacProjectRequestidAddAbac2positiveRulesGetBatchcategoryabaclevel1useraAllowAbact2positive] : URL [http://13.56.210.25/api/v1/api/v1/abac/project//add-abac2positive-rules]
2019-02-20 07:52:09 DEBUG [ApiV1AbacProjectRequestidAddAbac2positiveRulesGetBatchcategoryabaclevel1useraAllowAbact2positive] : Method [GET]
2019-02-20 07:52:09 DEBUG [ApiV1AbacProjectRequestidAddAbac2positiveRulesGetBatchcategoryabaclevel1useraAllowAbact2positive] : Request []
2019-02-20 07:52:09 DEBUG [ApiV1AbacProjectRequestidAddAbac2positiveRulesGetBatchcategoryabaclevel1useraAllowAbact2positive] : Request-Headers [{Content-Type=[application/json], Accept=[application/json], Authorization=[Basic dXNlckFAZnhsYWJzLmlvOnVzZXJB]}]
2019-02-20 07:52:09 DEBUG [ApiV1AbacProjectRequestidAddAbac2positiveRulesGetBatchcategoryabaclevel1useraAllowAbact2positive] : Response []
2019-02-20 07:52:09 DEBUG [ApiV1AbacProjectRequestidAddAbac2positiveRulesGetBatchcategoryabaclevel1useraAllowAbact2positive] : Response-Headers [{WWW-Authenticate=[Basic realm="Realm", Basic realm="Realm"], X-Content-Type-Options=[nosniff], X-XSS-Protection=[1; mode=block], Cache-Control=[no-cache, no-store, max-age=0, must-revalidate], Pragma=[no-cache], Expires=[0], X-Frame-Options=[DENY], Content-Length=[0], Date=[Wed, 20 Feb 2019 07:52:09 GMT]}]
2019-02-20 07:52:09 DEBUG [ApiV1AbacProjectRequestidAddAbac2positiveRulesGetBatchcategoryabaclevel1useraAllowAbact2positive] : StatusCode [401]
2019-02-20 07:52:09 DEBUG [ApiV1AbacProjectRequestidAddAbac2positiveRulesGetBatchcategoryabaclevel1useraAllowAbact2positive] : Time [574]
2019-02-20 07:52:09 DEBUG [ApiV1AbacProjectRequestidAddAbac2positiveRulesGetBatchcategoryabaclevel1useraAllowAbact2positive] : Size [0]
2019-02-20 07:52:09 ERROR [ApiV1AbacProjectRequestidAddAbac2positiveRulesGetBatchcategoryabaclevel1useraAllowAbact2positive] : Assertion [@StatusCode == 200 AND @Response.errors == false] resolved-to [401 == 200 AND == false] result [Failed]
2019-02-20 07:52:10 DEBUG [ApiV1AbacIdDeleteBatchcategoryabaclevel1AbstractAbact2positive] : URL [http://13.56.210.25/api/v1/api/v1/abac/]
2019-02-20 07:52:10 DEBUG [ApiV1AbacIdDeleteBatchcategoryabaclevel1AbstractAbact2positive] : Method [DELETE]
2019-02-20 07:52:10 DEBUG [ApiV1AbacIdDeleteBatchcategoryabaclevel1AbstractAbact2positive] : Request [null]
2019-02-20 07:52:10 DEBUG [ApiV1AbacIdDeleteBatchcategoryabaclevel1AbstractAbact2positive] : Request-Headers [{Content-Type=[application/json], Accept=[application/json], Authorization=[Basic dXNlckFAZnhsYWJzLmlvOnVzZXJB]}]
2019-02-20 07:52:10 DEBUG [ApiV1AbacIdDeleteBatchcategoryabaclevel1AbstractAbact2positive] : Response []
2019-02-20 07:52:10 DEBUG [ApiV1AbacIdDeleteBatchcategoryabaclevel1AbstractAbact2positive] : Response-Headers [{WWW-Authenticate=[Basic realm="Realm", Basic realm="Realm"], X-Content-Type-Options=[nosniff], X-XSS-Protection=[1; mode=block], Cache-Control=[no-cache, no-store, max-age=0, must-revalidate], Pragma=[no-cache], Expires=[0], X-Frame-Options=[DENY], Content-Length=[0], Date=[Wed, 20 Feb 2019 07:52:10 GMT]}]
2019-02-20 07:52:10 DEBUG [ApiV1AbacIdDeleteBatchcategoryabaclevel1AbstractAbact2positive] : StatusCode [401]
2019-02-20 07:52:10 DEBUG [ApiV1AbacIdDeleteBatchcategoryabaclevel1AbstractAbact2positive] : Time [290]
2019-02-20 07:52:10 DEBUG [ApiV1AbacIdDeleteBatchcategoryabaclevel1AbstractAbact2positive] : Size [0]
2019-02-20 07:52:10 ERROR [null] : Assertion [@StatusCode == 200] resolved-to [401 == 200] result [Failed]
--- FX Bot ---
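The failed assertions in the log above are templates like `@StatusCode == 200` that get resolved against the captured response before evaluation (e.g. `resolved-to [401 == 200] result [Failed]`). A minimal Python sketch of that substitute-then-evaluate step — the function and field names here are hypothetical, not taken from the actual test runner:

```python
# Hypothetical sketch of how an assertion template such as
# "@StatusCode == 200" is resolved against a captured response.
# Each "@Key" placeholder is replaced with the response's value,
# then the resolved expression is evaluated.
def resolve_assertion(template: str, response: dict) -> str:
    """Substitute @-prefixed placeholders with captured response values."""
    resolved = template
    for key, value in response.items():
        resolved = resolved.replace(f"@{key}", str(value))
    return resolved

response = {"StatusCode": 401}
resolved = resolve_assertion("@StatusCode == 200", response)
assert resolved == "401 == 200"
# eval is acceptable in a sketch; a real runner would use a safe evaluator.
assert eval(resolved) is False  # matches the [Failed] result in the log
```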
|
non_process
|
test project test job default env default category abac tags severity major region us west result fail status code headers www authenticate x content type options x xss protection cache control pragma expires x frame options content length date endpoint request response logs debug url debug method debug request assertions assertionstext lvoedjhl auth lvoedjhl authors authorstext lvoedjhl autogenerated false category nosql injection cleanup cleanuptext lvoedjhl createdby createddate description lvoedjhl endpoint lvoedjhl headers headerstext lvoedjhl id inactive false init inittext lvoedjhl method head modifiedby modifieddate name lvoedjhl parent lvoedjhl path lvoedjhl policies cleanupexec lvoedjhl initexec lvoedjhl logger lvoedjhl repeat repeatdelay repeatmodule lvoedjhl repeatonfailure timeoutseconds project createdby createddate id inactive false modifiedby modifieddate name lvoedjhl org createdby createddate id inactive false modifiedby modifieddate name lvoedjhl version version props null publishtomarketplace false severity trivial tags tagstext lvoedjhl testcases body lvoedjhl id inactive false type suite version yaml lvoedjhl debug request headers accept authorization debug response debug response headers x content type options x xss protection cache control pragma expires x frame options content length date debug statuscode debug time debug size error assertion resolved to result debug headers x content type options x xss protection cache control pragma expires x frame options content length date debug headers x content type options x xss protection cache control pragma expires x frame options content length date debug headers x content type options x xss protection cache control pragma expires x frame options content length date debug headers x content type options x xss protection cache control pragma expires x frame options content length date debug url debug method debug request debug request headers accept authorization debug response debug response 
headers x content type options x xss protection cache control pragma expires x frame options content length date debug statuscode debug time debug size error assertion resolved to result debug url debug method debug request debug request headers accept authorization debug response debug response headers x content type options x xss protection cache control pragma expires x frame options content length date debug statuscode debug time debug size error assertion resolved to result fx bot
| 0
|
5,255
| 8,047,982,687
|
IssuesEvent
|
2018-08-01 04:01:35
|
nodejs/node
|
https://api.github.com/repos/nodejs/node
|
closed
|
Tests failing on Windows 10 for some developers
|
child_process inspector test windows
|
* **Version**: v11.0.0-pre (master)
* **Platform**: Windows 10
* **Subsystem**: test inspector child-process
<!-- Enter your issue details below this comment. -->
Sometimes when helping new developers trying to contribute to Node.js, they report errors along these lines, most recently (and possibly only?) on Windows 10. I'm not sure how to troubleshoot to find what's up. Any ideas? @nodejs/platform-windows
Not sure whether this should go here or in help, but I'm starting with the assumption that it's either a bug in our build/test stuff or else a gap in our docs for building.
```console
assert.js:84
throw new AssertionError(obj);
^
AssertionError [ERR_ASSERTION]: Input A expected to strictly equal input B:
+ expected - actual
- ''
+ 'hellO\r\nnOde\r\nwOrld\r\n'
at Socket.<anonymous> (C:\Users\fhqwhgads\projects\node\test\parallel\test-child-process-double-pipe.js:114:10)
at Socket.emit (events.js:187:15)
at endReadableNT (_stream_readable.js:1081:12)
at process._tickCallback (internal/process/next_tick.js:63:19)
Command: C:\Users\fhqwhgads\projects\node\Release\node.exe C:\Users\fhqwhgads\projects\node\test\parallel\test-child-process-double-pipe.js
assert.js:84
throw new AssertionError(obj);
^
AssertionError [ERR_ASSERTION]: Input A expected to strictly equal input B:
+ expected - actual
- 12
+ 0
at Worker.checkExitCode (C:\Users\fhqwhgads\projects\node\test\sequential\test-inspector-port-cluster.js:339:10)
at Worker.<anonymous> (C:\Users\fhqwhgads\projects\node\test\common\index.js:467:15)
at Worker.emit (events.js:182:13)
at ChildProcess.worker.process.once (internal/cluster/master.js:193:12)
at Object.onceWrapper (events.js:273:13)
at ChildProcess.emit (events.js:182:13)
at Process.ChildProcess._handle.onexit (internal/child_process.js:237:12)
Debugger listening on ws://127.0.0.1:12352/623264cb-2081-43a5-9b4f-7c780b8fe295
For help, see: https://nodejs.org/en/docs/inspector
assert.js:84
throw new AssertionError(obj);
^
AssertionError [ERR_ASSERTION]: Input A expected to strictly equal input B:
+ expected - actual
- 1
+ 0
at checkExitCode (C:\Users\fhqwhgads\projects\node\test\sequential\test-inspector-port-cluster.js:339:10)
at ChildProcess.childProcess.fork.on.common.mustCall (C:\Users\fhqwhgads\projects\node\test\sequential\test-inspector-port-cluster.js:332:7)
at ChildProcess.<anonymous> (C:\Users\fhqwhgads\projects\node\test\common\index.js:467:15)
at ChildProcess.emit (events.js:182:13)
at Process.ChildProcess._handle.onexit (internal/child_process.js:237:12)
Debugger listening on ws://127.0.0.1:62477/d5301ab5-8526-478a-baa9-9bd228a1cc08
```
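The expected string in the first failure (`'hellO\r\nnOde\r\nwOrld\r\n'`) implies the double-pipe test feeds input through a transform that uppercases every lowercase `o`. A Python sketch of that expectation — the transform itself is an assumption for illustration; the real test wires child processes together over pipes:

```python
# Sketch: the double-pipe test's expected output suggests every
# lowercase 'o' is uppercased somewhere in the pipeline. This helper
# is an illustrative assumption, not the test's actual mechanism.
def uppercase_o(text: str) -> str:
    """Replace every lowercase 'o' with 'O'."""
    return text.replace("o", "O")

expected = "hellO\r\nnOde\r\nwOrld\r\n"
actual = uppercase_o("hello\r\nnode\r\nworld\r\n")
# On the failing Windows 10 machines the test instead saw '', i.e. the
# child process produced no output at all before the assertion ran.
assert actual == expected
```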
|
1.0
|
Tests failing on Windows 10 for some developers - * **Version**: v11.0.0-pre (master)
* **Platform**: Windows 10
* **Subsystem**: test inspector child-process
<!-- Enter your issue details below this comment. -->
Sometimes when helping new developers trying to contribute to Node.js, they report errors along these lines, most recently (and possibly only?) on Windows 10. I'm not sure how to troubleshoot to find what's up. Any ideas? @nodejs/platform-windows
Not sure whether this should go here or in help, but I'm starting with the assumption that it's either a bug in our build/test stuff or else a gap in our docs for building.
```console
assert.js:84
throw new AssertionError(obj);
^
AssertionError [ERR_ASSERTION]: Input A expected to strictly equal input B:
+ expected - actual
- ''
+ 'hellO\r\nnOde\r\nwOrld\r\n'
at Socket.<anonymous> (C:\Users\fhqwhgads\projects\node\test\parallel\test-child-process-double-pipe.js:114:10)
at Socket.emit (events.js:187:15)
at endReadableNT (_stream_readable.js:1081:12)
at process._tickCallback (internal/process/next_tick.js:63:19)
Command: C:\Users\fhqwhgads\projects\node\Release\node.exe C:\Users\fhqwhgads\projects\node\test\parallel\test-child-process-double-pipe.js
assert.js:84
throw new AssertionError(obj);
^
AssertionError [ERR_ASSERTION]: Input A expected to strictly equal input B:
+ expected - actual
- 12
+ 0
at Worker.checkExitCode (C:\Users\fhqwhgads\projects\node\test\sequential\test-inspector-port-cluster.js:339:10)
at Worker.<anonymous> (C:\Users\fhqwhgads\projects\node\test\common\index.js:467:15)
at Worker.emit (events.js:182:13)
at ChildProcess.worker.process.once (internal/cluster/master.js:193:12)
at Object.onceWrapper (events.js:273:13)
at ChildProcess.emit (events.js:182:13)
at Process.ChildProcess._handle.onexit (internal/child_process.js:237:12)
Debugger listening on ws://127.0.0.1:12352/623264cb-2081-43a5-9b4f-7c780b8fe295
For help, see: https://nodejs.org/en/docs/inspector
assert.js:84
throw new AssertionError(obj);
^
AssertionError [ERR_ASSERTION]: Input A expected to strictly equal input B:
+ expected - actual
- 1
+ 0
at checkExitCode (C:\Users\fhqwhgads\projects\node\test\sequential\test-inspector-port-cluster.js:339:10)
at ChildProcess.childProcess.fork.on.common.mustCall (C:\Users\fhqwhgads\projects\node\test\sequential\test-inspector-port-cluster.js:332:7)
at ChildProcess.<anonymous> (C:\Users\fhqwhgads\projects\node\test\common\index.js:467:15)
at ChildProcess.emit (events.js:182:13)
at Process.ChildProcess._handle.onexit (internal/child_process.js:237:12)
Debugger listening on ws://127.0.0.1:62477/d5301ab5-8526-478a-baa9-9bd228a1cc08
```
|
process
|
tests failing on windows for some developers version pre master platform windows subsystem test inspector child process sometimes when helping new developers trying to contribute to node js they report errors along these lines most recently and possibly only on windows i m not sure how to troubleshoot to find what s up any ideas nodejs platform windows not sure whether this should go here or in help but i m starting with the assumption that it s either a bug in our build test stuff or else a gap in our docs for building console assert js throw new assertionerror obj assertionerror input a expected to strictly equal input b expected actual hello r nnode r nworld r n at socket c users fhqwhgads projects node test parallel test child process double pipe js at socket emit events js at endreadablent stream readable js at process tickcallback internal process next tick js command c users fhqwhgads projects node release node exe c users fhqwhgads projects node test parallel test child process double pipe js assert js throw new assertionerror obj assertionerror input a expected to strictly equal input b expected actual at worker checkexitcode c users fhqwhgads projects node test sequential test inspector port cluster js at worker c users fhqwhgads projects node test common index js at worker emit events js at childprocess worker process once internal cluster master js at object oncewrapper events js at childprocess emit events js at process childprocess handle onexit internal child process js debugger listening on ws for help see assert js throw new assertionerror obj assertionerror input a expected to strictly equal input b expected actual at checkexitcode c users fhqwhgads projects node test sequential test inspector port cluster js at childprocess childprocess fork on common mustcall c users fhqwhgads projects node test sequential test inspector port cluster js at childprocess c users fhqwhgads projects node test common index js at childprocess emit events js at process 
childprocess handle onexit internal child process js debugger listening on ws
| 1
|
13,035
| 15,383,524,995
|
IssuesEvent
|
2021-03-03 02:54:02
|
aodn/imos-toolbox
|
https://api.github.com/repos/aodn/imos-toolbox
|
opened
|
update IGRF12 to IGRF13
|
Type:bug Unit:Processing
|
For 2021 datasets, we need to update geomag to use IGRF13.
Requirements:
- [ ] Include IGRF13.COF
- [ ] Update geomag70.txt
|
1.0
|
update IGRF12 to IGRF13 - For 2021 datasets, we need to update geomag to use IGRF13.
Requirements:
- [ ] Include IGRF13.COF
- [ ] Update geomag70.txt
|
process
|
update to for datasets we need to update geomag to use requirements include cof update txt
| 1
|
9,375
| 12,374,294,847
|
IssuesEvent
|
2020-05-19 01:07:50
|
allinurl/goaccess
|
https://api.github.com/repos/allinurl/goaccess
|
closed
|
tokyo cabinet and out of disk space notification
|
log-processing on-disk question
|
In continuing onward with the tcb performance tarpitting as described in: #886
To handle some monster logs, I have devised the following procedure to keep the parsing speeds over 1KB/s (late in the month), usually the average is around 2KB/s to 3KB/s until around the 25th day of the month.
The directory structure is simple:
logs
logs/.goa (long term storage for tcb files)
logs/.goaProcess (ephemeral shadow tmpfs for fast goaccess processing)
1. test if .goa is >3GB (if so continue on with a tmpfs shadow)
1. if > 3GB: create a tmpfs (du .goa + 3GB) then rsync all of .goa ==> .goaProcess
1. goaccess --db-path="./goaProcess/" ...
1. for tcb in ./goaProcess/*.tcb; do tcbmgr optimize -df "${tcb}"; done
1. rsync .goaProcess back to disk storage .goa
If I don't use the .goaProcess tmpfs, goaccess parsing drops down to ~100KB/s to ~900KB/s range with many stalls.
Anyhow, I made an error with creating the tmpfs .goaProcess, with du -bs .goa + 3GB, it was created at +1GB instead. During processing, it continued without any problems, however I found that the .goaProcess directory was out of disk space. This at least answered the question as to why the tcbmgr optimize (-df) was failing with invalid metadata errors.
So my question is, is there a way for the tokyo cabinet library to alert goaccess that where it is writing the tcb files to has run out of disk space and thereby the parsing operation can be halted? If it isn't halted, then the postrun operations can rsync back the corrupted tcb files to .goa long term storage.
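The sizing rule in the procedure above (shadow tmpfs = `du -bs .goa` + 3GB) and the failure mode described (accidentally sizing it at +1GB) can be sketched in a few lines of Python; the helper names are hypothetical, for illustration only:

```python
# Sketch of the tmpfs shadow sizing logic described in the procedure.
# The shadow must be sized at du(.goa) + 3 GiB of headroom; the reported
# failure came from accidentally allocating only +1 GiB.
GIB = 1024 ** 3

def needs_shadow(goa_bytes: int) -> bool:
    """Use a tmpfs shadow only once the on-disk .goa store exceeds 3 GiB."""
    return goa_bytes > 3 * GIB

def tmpfs_size(goa_bytes: int, headroom_gib: int = 3) -> int:
    """Size the shadow as the current store plus headroom for growth."""
    return goa_bytes + headroom_gib * GIB

store = 5 * GIB
assert needs_shadow(store)
assert tmpfs_size(store) == 8 * GIB
# The bug in the issue: +1 GiB of headroom instead of +3 GiB left the
# shadow too small, so goaccess/tcbmgr ran out of space mid-run and the
# optimize pass failed with invalid-metadata errors.
assert tmpfs_size(store, headroom_gib=1) < tmpfs_size(store)
```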
|
1.0
|
tokyo cabinet and out of disk space notification - In continuing onward with the tcb performance tarpitting as described in: #886
To handle some monster logs, I have devised the following procedure to keep the parsing speeds over 1KB/s (late in the month), usually the average is around 2KB/s to 3KB/s until around the 25th day of the month.
The directory structure is simple:
logs
logs/.goa (long term storage for tcb files)
logs/.goaProcess (ephemeral shadow tmpfs for fast goaccess processing)
1. test if .goa is >3GB (if so continue on with a tmpfs shadow)
1. if > 3GB: create a tmpfs (du .goa + 3GB) then rsync all of .goa ==> .goaProcess
1. goaccess --db-path="./goaProcess/" ...
1. for tcb in ./goaProcess/*.tcb; do tcbmgr optimize -df "${tcb}"; done
1. rsync .goaProcess back to disk storage .goa
If I don't use the .goaProcess tmpfs, goaccess parsing drops down to ~100KB/s to ~900KB/s range with many stalls.
Anyhow, I made an error with creating the tmpfs .goaProcess, with du -bs .goa + 3GB, it was created at +1GB instead. During processing, it continued without any problems, however I found that the .goaProcess directory was out of disk space. This at least answered the question as to why the tcbmgr optimize (-df) was failing with invalid metadata errors.
So my question is, is there a way for the tokyo cabinet library to alert goaccess that where it is writing the tcb files to has run out of disk space and thereby the parsing operation can be halted? If it isn't halted, then the postrun operations can rsync back the corrupted tcb files to .goa long term storage.
|
process
|
tokyo cabinet and out of disk space notification in continuing onward with the tcb performance tarpitting as described in to handle some monster logs i have devised the following procedure to keep the parsing speeds over s late in the month usually the average is around s to s until around the day of the month the directory structure is simple logs logs goa long term storage for tcb files logs goaprocess ephemerol shadow tmpfs for fast goaccess processing test if goa is if so continue on with a tmpfs shadow if create a tmpfs du goa then rsync all of goa goaprocess goaccess db path goaprocess for tcb in goaprocess tcb do tcbmgr optimize df tcb done rsync goaprocess back to disk storage goa if i don t use the goaprocess tmpfs goaccess parsing drops down to s to s range with many stalls anyhow i made an error with creating the tmpfs goaprocess with du bs goa it was created at instead during processing it continued without any problems however i found that the goaprocess directory was out of disk space this at least answered the question as to why the tcbmgr optimize df was failing with invalid metadata errors so my question is is there a way for the tokyo cabinet library to alert goaccess that where it is writing the tcb files to has run out of disk space and thereby the parsing operation can be halted if it isn t halted then the postrun operations can rsync back the corrupted tcb files to goa long term storage
| 1
|
8,468
| 11,640,786,008
|
IssuesEvent
|
2020-02-29 00:05:04
|
Ultimate-Hosts-Blacklist/whitelist
|
https://api.github.com/repos/Ultimate-Hosts-Blacklist/whitelist
|
closed
|
hairbodysoul.ca
|
whitelisting process
|
*@sjjgd commented on Feb 26, 2020, 10:08 PM UTC:*
false positive [https://hairbodysoul.ca](https://hairbodysoul.ca) blacklist removal request
*This issue was moved by [funilrys](https://github.com/funilrys) from [mitchellkrogza/Ultimate.Hosts.Blacklist#544](https://github.com/mitchellkrogza/Ultimate.Hosts.Blacklist/issues/544).*
|
1.0
|
hairbodysoul.ca - *@sjjgd commented on Feb 26, 2020, 10:08 PM UTC:*
false positive [https://hairbodysoul.ca](https://hairbodysoul.ca) blacklist removal request
*This issue was moved by [funilrys](https://github.com/funilrys) from [mitchellkrogza/Ultimate.Hosts.Blacklist#544](https://github.com/mitchellkrogza/Ultimate.Hosts.Blacklist/issues/544).*
|
process
|
hairbodysoul ca sjjgd commented on feb pm utc false positive blacklist removal request this issue was moved by from
| 1
|
13,893
| 16,656,004,516
|
IssuesEvent
|
2021-06-05 14:37:44
|
laugharn/link
|
https://api.github.com/repos/laugharn/link
|
closed
|
Cleaner API Handling
|
kind/improvement process/selected size/sm team/back
|
Our API handlers are a little messy, we can clean them up.
- [x] Use [next-connect](https://npm.im/next-connect) which has a nice, familiar pattern and works well with [next-iron-session](https://npm.im/next-iron-session)
- [x] Create a default handler with session middleware for easy reuse
- [x] Move route functions to lib files
- [x] Update dependencies
- [x] Refactor error handling to use [http-errors](https://npm.im/http-errors)
In the future we might want to investigate using a destructured single file API instead of having index, [id], etc. But we can revisit that.
|
1.0
|
Cleaner API Handling - Our API handlers are a little messy, we can clean them up.
- [x] Use [next-connect](https://npm.im/next-connect) which has a nice, familiar pattern and works well with [next-iron-session](https://npm.im/next-iron-session)
- [x] Create a default handler with session middleware for easy reuse
- [x] Move route functions to lib files
- [x] Update dependencies
- [x] Refactor error handling to use [http-errors](https://npm.im/http-errors)
In the future we might want to investigate using a destructured single file API instead of having index, [id], etc. But we can revisit that.
|
process
|
cleaner api handling our api handlers are a little messy we can clean them up use which has a nice familiar pattern and works well with create a default handler with session middleware for easy reuse move route functions to lib files update dependencies refactor error handling to use in the future we might want to investigate using a destructured single file api instead of having index etc but we can revisit that
| 1
|
3,973
| 6,904,981,512
|
IssuesEvent
|
2017-11-27 03:46:28
|
Great-Hill-Corporation/quickBlocks
|
https://api.github.com/repos/Great-Hill-Corporation/quickBlocks
|
closed
|
Doubt using getTrans
|
status-inprocess tools-getTrans type-question
|
When I run the following command:
./getTrans --raw 0x0766ec95bf12f9600afc6c33c287f9d2356d85f15f0d7eeeb6647f5d4ddfbce1
I obtain the following output:
getTrans argc: 3 [1:--raw] [2:0x0766ec95bf12f9600afc6c33c287f9d2356d85f15f0d7eeeb6647f5d4ddfbce1]
getTrans --raw 0x0766ec95bf12f9600afc6c33c287f9d2356d85f15f0d7eeeb6647f5d4ddfbce1
{"jsonrpc":"2.0","error":{"code":-32602,"message":"Invalid params: invalid length 1, expected a 0x-prefixed, padded, hex-encoded hash with length 64."},"id":4}
Seems that we receive an error saying, invalid length "1". I do not understand this behaviour, so I open a doubt for the same, because I am not sure if it is a bug or not.
If it is a bug, please go ahead and update the labels, if not just kindly tell me what is happening here and I will close the same.
I expected here a kind of validation to reject the hash if it is invalid, but I obtain the error detailed above.
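The validation the reporter expected — reject anything that is not a 0x-prefixed, padded, 64-character hex hash before it reaches the JSON-RPC node — can be sketched as below. getTrans itself is a C++ tool; this Python check is illustrative only:

```python
import re

# Sketch of client-side hash validation matching the node's error message:
# "expected a 0x-prefixed, padded, hex-encoded hash with length 64".
TX_HASH_RE = re.compile(r"^0x[0-9a-fA-F]{64}$")

def is_valid_tx_hash(value: str) -> bool:
    """True only for a 0x-prefixed, 64-hex-character transaction hash."""
    return bool(TX_HASH_RE.fullmatch(value))

good = "0x0766ec95bf12f9600afc6c33c287f9d2356d85f15f0d7eeeb6647f5d4ddfbce1"
assert is_valid_tx_hash(good)
assert not is_valid_tx_hash("--raw")   # a stray flag should be rejected
assert not is_valid_tx_hash(good[2:])  # missing the 0x prefix
```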
|
1.0
|
Doubt using getTrans - When I run the following command:
./getTrans --raw 0x0766ec95bf12f9600afc6c33c287f9d2356d85f15f0d7eeeb6647f5d4ddfbce1
I obtain the following output:
getTrans argc: 3 [1:--raw] [2:0x0766ec95bf12f9600afc6c33c287f9d2356d85f15f0d7eeeb6647f5d4ddfbce1]
getTrans --raw 0x0766ec95bf12f9600afc6c33c287f9d2356d85f15f0d7eeeb6647f5d4ddfbce1
{"jsonrpc":"2.0","error":{"code":-32602,"message":"Invalid params: invalid length 1, expected a 0x-prefixed, padded, hex-encoded hash with length 64."},"id":4}
Seems that we receive an error saying, invalid length "1". I do not understand this behaviour, so I open a doubt for the same, because I am not sure if it is a bug or not.
If it is a bug, please go ahead and update the labels, if not just kindly tell me what is happening here and I will close the same.
I expected here a kind of validation to reject the hash if it is invalid, but I obtain the error detailed above.
|
process
|
doubt using gettrans when i run the following command gettrans raw i obtain the following output gettrans argc gettrans raw jsonrpc error code message invalid params invalid length expected a prefixed padded hex encoded hash with length id seems that we receive an error saying invalid length i do not understand this behaviour so i open a doubt for the same because i am not sure if it is a bug or not if it is a bug please go ahead and update the labels if not just kindly tell me what is happening here and i will close the same i expected here a kind of validation to reject the hash if it is invalid but i obtain the error detailed above
| 1
|
127,514
| 5,031,650,152
|
IssuesEvent
|
2016-12-16 08:10:42
|
kubernetes-incubator/kompose
|
https://api.github.com/repos/kubernetes-incubator/kompose
|
closed
|
Replace godep with glide
|
priority/P0
|
We had a lot of issues with `godep` it is time to switch to `glide`.
If we use `glide` we can get rid of ugly [godep-restore.sh](https://github.com/kubernetes-incubator/kompose/blob/master/script/godep-restore.sh)
|
1.0
|
Replace godep with glide - We had a lot of issues with `godep` it is time to switch to `glide`.
If we use `glide` we can get rid of ugly [godep-restore.sh](https://github.com/kubernetes-incubator/kompose/blob/master/script/godep-restore.sh)
|
non_process
|
replace godep with glide we had a lot of issues with godep it is time to switch to glide if we use glide we can get rid of ugly
| 0
|
17,129
| 9,968,689,980
|
IssuesEvent
|
2019-07-08 16:07:48
|
AOSC-Dev/aosc-os-abbs
|
https://api.github.com/repos/AOSC-Dev/aosc-os-abbs
|
opened
|
openssl: security update to 1.1.1c
|
security to-stable upgrade
|
<!-- Please remove items do not apply. -->
**CVE IDs:** CVE-2019-1543
**Other security advisory IDs:** openSUSE-SU-2019:1147-1, ASA-201906-7, DSA-4475-1
**Descriptions:**
Joran Dirk Greef discovered that overly long nonces used with
ChaCha20-Poly1305 were incorrectly processed and could result in nonce
reuse. This doesn't affect OpenSSL-internal uses of ChaCha20-Poly1305
such as TLS.
**Architectural progress:**
<!-- Please remove any architecture to which the security vulnerabilities do not apply. -->
- [ ] AMD64 `amd64`
- [ ] 32-bit Optional Environment `optenv32`
- [ ] AArch64 `arm64`
- [ ] ARMv7 `armel`
- [ ] PowerPC 64-bit BE `ppc64`
- [ ] PowerPC 32-bit BE `powerpc`
<!-- If the specified package is `noarch`, please use the stub below. -->
<!-- - [ ] Architecture-independent `noarch` -->
|
True
|
openssl: security update to 1.1.1c - <!-- Please remove items do not apply. -->
**CVE IDs:** CVE-2019-1543
**Other security advisory IDs:** openSUSE-SU-2019:1147-1, ASA-201906-7, DSA-4475-1
**Descriptions:**
Joran Dirk Greef discovered that overly long nonces used with
ChaCha20-Poly1305 were incorrectly processed and could result in nonce
reuse. This doesn't affect OpenSSL-internal uses of ChaCha20-Poly1305
such as TLS.
**Architectural progress:**
<!-- Please remove any architecture to which the security vulnerabilities do not apply. -->
- [ ] AMD64 `amd64`
- [ ] 32-bit Optional Environment `optenv32`
- [ ] AArch64 `arm64`
- [ ] ARMv7 `armel`
- [ ] PowerPC 64-bit BE `ppc64`
- [ ] PowerPC 32-bit BE `powerpc`
<!-- If the specified package is `noarch`, please use the stub below. -->
<!-- - [ ] Architecture-independent `noarch` -->
|
non_process
|
openssl security update to cve ids cve other security advisory ids opensuse su asa dsa descriptions joran dirk greef discovered that overly long nonces used with were incorrectly processed and could result in nonce reuse this doesn t affect openssl internal uses of such as tls architectural progress bit optional environment armel powerpc bit be powerpc bit be powerpc
| 0
|
20,679
| 6,912,186,191
|
IssuesEvent
|
2017-11-28 11:02:20
|
reactor/reactor-core
|
https://api.github.com/repos/reactor/reactor-core
|
closed
|
Upgrade to Kotlin-Gradle-Plugin 1.1.60 when available
|
chores/build enhancement kotlin on-hold
|
Follow-up to #887:
- upgrade to the 1.1.60 version when available
- verify it fixes the duplicate entries in sources jar (` unzip reactor-core-3.1.2.BUILD-SNAPSHOT-sources.jar -d ./temp`)
- verify it doesn't cause any other issue
- remove the "IGNORE_DUPLICATES" workaround from the `build.gradle`
- do the same for `reactor-addons`
|
1.0
|
Upgrade to Kotlin-Gradle-Plugin 1.1.60 when available - Follow-up to #887:
- upgrade to the 1.1.60 version when available
- verify it fixes the duplicate entries in sources jar (` unzip reactor-core-3.1.2.BUILD-SNAPSHOT-sources.jar -d ./temp`)
- verify it doesn't cause any other issue
- remove the "IGNORE_DUPLICATES" workaround from the `build.gradle`
- do the same for `reactor-addons`
|
non_process
|
upgrade to kotlin gradle plugin when available follow up to upgrade to the version when available verify it fixes the duplicate entries in sources jar unzip reactor core build snapshot sources jar d temp verify it doesn t cause any other issue remove the ignore duplicates workaround from the build gradle do the same for reactor addons
| 0
|
168,356
| 20,754,717,732
|
IssuesEvent
|
2022-03-15 11:03:45
|
arngrimur/computersaysno
|
https://api.github.com/repos/arngrimur/computersaysno
|
closed
|
CVE-2021-32760 (Medium) detected in github.com/docker/docker-v20.10.9 - autoclosed
|
security vulnerability
|
## CVE-2021-32760 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>github.com/docker/docker-v20.10.9</b></p></summary>
<p>Moby Project - a collaborative project for the container ecosystem to assemble container-based systems</p>
<p>
Dependency Hierarchy:
- github.com/ory/dockertest/v3-v3.8.1 (Root Library)
- github.com/docker/cli-v20.10.11
- :x: **github.com/docker/docker-v20.10.9** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/arngrimur/computersaysno/commit/c8980a5bef352bb4b9477331dcc940aca400e10b">c8980a5bef352bb4b9477331dcc940aca400e10b</a></p>
<p>Found in base branch: <b>main</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
containerd is a container runtime. A bug was found in containerd versions prior to 1.4.8 and 1.5.4 where pulling and extracting a specially-crafted container image can result in Unix file permission changes for existing files in the host’s filesystem. Changes to file permissions can deny access to the expected owner of the file, widen access to others, or set extended bits like setuid, setgid, and sticky. This bug does not directly allow files to be read, modified, or executed without an additional cooperating process. This bug has been fixed in containerd 1.5.4 and 1.4.8. As a workaround, ensure that users only pull images from trusted sources. Linux security modules (LSMs) like SELinux and AppArmor can limit the files potentially affected by this bug through policies and profiles that prevent containerd from interacting with specific files.
<p>Publish Date: 2021-07-19
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-32760>CVE-2021-32760</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>6.3</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: Required
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: Low
- Integrity Impact: Low
- Availability Impact: Low
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/containerd/containerd/security/advisories/GHSA-c72p-9xmj-rx3w">https://github.com/containerd/containerd/security/advisories/GHSA-c72p-9xmj-rx3w</a></p>
<p>Release Date: 2021-07-19</p>
<p>Fix Resolution: v1.4.8 ,v1.5.4</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
True
|
CVE-2021-32760 (Medium) detected in github.com/docker/docker-v20.10.9 - autoclosed - ## CVE-2021-32760 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>github.com/docker/docker-v20.10.9</b></p></summary>
<p>Moby Project - a collaborative project for the container ecosystem to assemble container-based systems</p>
<p>
Dependency Hierarchy:
- github.com/ory/dockertest/v3-v3.8.1 (Root Library)
- github.com/docker/cli-v20.10.11
- :x: **github.com/docker/docker-v20.10.9** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/arngrimur/computersaysno/commit/c8980a5bef352bb4b9477331dcc940aca400e10b">c8980a5bef352bb4b9477331dcc940aca400e10b</a></p>
<p>Found in base branch: <b>main</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
containerd is a container runtime. A bug was found in containerd versions prior to 1.4.8 and 1.5.4 where pulling and extracting a specially-crafted container image can result in Unix file permission changes for existing files in the host’s filesystem. Changes to file permissions can deny access to the expected owner of the file, widen access to others, or set extended bits like setuid, setgid, and sticky. This bug does not directly allow files to be read, modified, or executed without an additional cooperating process. This bug has been fixed in containerd 1.5.4 and 1.4.8. As a workaround, ensure that users only pull images from trusted sources. Linux security modules (LSMs) like SELinux and AppArmor can limit the files potentially affected by this bug through policies and profiles that prevent containerd from interacting with specific files.
<p>Publish Date: 2021-07-19
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-32760>CVE-2021-32760</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>6.3</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: Required
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: Low
- Integrity Impact: Low
- Availability Impact: Low
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/containerd/containerd/security/advisories/GHSA-c72p-9xmj-rx3w">https://github.com/containerd/containerd/security/advisories/GHSA-c72p-9xmj-rx3w</a></p>
<p>Release Date: 2021-07-19</p>
<p>Fix Resolution: v1.4.8 ,v1.5.4</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
non_process
|
cve medium detected in github com docker docker autoclosed cve medium severity vulnerability vulnerable library github com docker docker moby project a collaborative project for the container ecosystem to assemble container based systems dependency hierarchy github com ory dockertest root library github com docker cli x github com docker docker vulnerable library found in head commit a href found in base branch main vulnerability details containerd is a container runtime a bug was found in containerd versions prior to and where pulling and extracting a specially crafted container image can result in unix file permission changes for existing files in the host’s filesystem changes to file permissions can deny access to the expected owner of the file widen access to others or set extended bits like setuid setgid and sticky this bug does not directly allow files to be read modified or executed without an additional cooperating process this bug has been fixed in containerd and as a workaround ensure that users only pull images from trusted sources linux security modules lsms like selinux and apparmor can limit the files potentially affected by this bug through policies and profiles that prevent containerd from interacting with specific files publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction required scope unchanged impact metrics confidentiality impact low integrity impact low availability impact low for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution step up your open source security game with whitesource
| 0
|
20,173
| 26,727,805,954
|
IssuesEvent
|
2023-01-29 22:50:21
|
evidence-dev/evidence
|
https://api.github.com/repos/evidence-dev/evidence
|
opened
|
Begin adopting typescript
|
dev-process
|
* Permissive TS config in dev workspace
* TS adopted in one component
|
1.0
|
Begin adopting typescript - * Permissive TS config in dev workspace
* TS adopted in one component
|
process
|
begin adopting typescript permissive ts config in dev workspace ts adopted in one component
| 1
|
16,792
| 22,037,249,383
|
IssuesEvent
|
2022-05-28 19:44:17
|
hashgraph/hedera-json-rpc-relay
|
https://api.github.com/repos/hashgraph/hedera-json-rpc-relay
|
closed
|
Add simple Helm chart to support deployment
|
enhancement P1 process
|
Chart should utilize deployed docker image and support simple k8s deployment for varied environments
Can draw inspiration from [Mirror Node charts](https://github.com/hashgraph/hedera-mirror-node/tree/main/charts)
`hedera-mirror-web3` should be exactly what we need to start off with.
`hedera-mirror-rest` may show more mature logic and tie closely since it's also a node app
|
1.0
|
Add simple Helm chart to support deployment - Chart should utilize deployed docker image and support simple k8s deployment for varied environments
Can draw inspiration from [Mirror Node charts](https://github.com/hashgraph/hedera-mirror-node/tree/main/charts)
`hedera-mirror-web3` should be exactly what we need to start off with.
`hedera-mirror-rest` may show more mature logic and tie closely since it's also a node app
|
process
|
add simple helm chart to support deployment chart should utilize deployed docker image and support simple deployment for varied environments can draw inspiration from hedera mirror should be exactly what we need to start off with hedera mirror rest may show more mature logic and tie closely since it s also a node app
| 1
|
23,090
| 20,996,648,447
|
IssuesEvent
|
2022-03-29 14:01:03
|
bgo-bioimagerie/platformmanager
|
https://api.github.com/repos/bgo-bioimagerie/platformmanager
|
closed
|
[space config] when coming from todo list, validating a form should redirect to todo list
|
usability fixed_in_dev
|
all in the title !
|
True
|
[space config] when coming from todo list, validating a form should redirect to todo list - all in the title !
|
non_process
|
when coming from todo list validating a form should redirect to todo list all in the title
| 0
|
107,650
| 11,567,368,390
|
IssuesEvent
|
2020-02-20 14:12:15
|
ibm-garage-cloud/planning
|
https://api.github.com/repos/ibm-garage-cloud/planning
|
opened
|
Chore: Update the instructions for Artifactory setup to include a step to expose protected url
|
documentation
|
When Artifactory is installed, we expose the url with https. The Artifactory tool internally still refers to the http endpoint when you walk through the instructions to complete the manual setup steps.
However, with the change to routes in 3.11 and 4.3 http traffic will automatically redirect to https which causes problems in the pipeline when publishing the helm chart. The fix for this is to update Artifactory to use the https endpoint.
The new step should be added after https://ibm-garage-cloud.github.io/ibm-garage-developer-guide/admin/artifactory-setup#allow-anonymous-access-to-artifactory but prior to https://ibm-garage-cloud.github.io/ibm-garage-developer-guide/admin/artifactory-setup#obtain-encrypted-password
Instructions:
1. Click on the ‘Admin’ icon at the bottom of the left nav then select ‘General Configuration’
2. In the `Custom Base Url` field, add the https url to the Artifactory (including /artifactory at the end) - e.g. https://artifactory.garage-dev-iks-cl-143931-0143c5dd31acd8e030a1d6e0ab1380e3-0000.us-east.containers.appdomain.cloud/artifactory
3. Click `Save`
|
1.0
|
Chore: Update the instructions for Artifactory setup to include a step to expose protected url - When Artifactory is installed, we expose the url with https. The Artifactory tool internally still refers to the http endpoint when you walk through the instructions to complete the manual setup steps.
However, with the change to routes in 3.11 and 4.3 http traffic will automatically redirect to https which causes problems in the pipeline when publishing the helm chart. The fix for this is to update Artifactory to use the https endpoint.
The new step should be added after https://ibm-garage-cloud.github.io/ibm-garage-developer-guide/admin/artifactory-setup#allow-anonymous-access-to-artifactory but prior to https://ibm-garage-cloud.github.io/ibm-garage-developer-guide/admin/artifactory-setup#obtain-encrypted-password
Instructions:
1. Click on the ‘Admin’ icon at the bottom of the left nav then select ‘General Configuration’
2. In the `Custom Base Url` field, add the https url to the Artifactory (including /artifactory at the end) - e.g. https://artifactory.garage-dev-iks-cl-143931-0143c5dd31acd8e030a1d6e0ab1380e3-0000.us-east.containers.appdomain.cloud/artifactory
3. Click `Save`
|
non_process
|
chore update the instructions for artifactory setup to include a step to expose protected url when artifactory is installed we expose the url with https the artifactory tool internally still refers to the http endpoint when you walk through the instructions to complete the manual setup steps however with the change to routes in and http traffic will automatically redirect to https which causes problems in the pipeline when publishing the helm chart the fix for this is to update artifactory to use the https endpoint the new step should be added after but prior to instructions click on the ‘admin’ icon at the bottom of the left nav then select ‘general configuration’ in the custom base url field add the https url to the artifactory including artifactory at the end e g click save
| 0
|
28,254
| 23,114,233,787
|
IssuesEvent
|
2022-07-27 15:18:47
|
MinaProtocol/mina
|
https://api.github.com/repos/MinaProtocol/mina
|
closed
|
Make the (or another) Dockerfile that supports a live network w/ Rosetta
|
Size: XL ~ 1.5-2 wks infrastructure rosetta stale backlog
|
This entails at least, but likely not exclusively:
1. Bake runtime genesis with the proper config file
2. Ensure snark keys can be downloaded at runtime or bake them into the container
3. Update docker-start.sh to support the latest set of flags
4. Make sure `rosetta-cli check:data` passes on a live network where this is deployed. Open issues as required for fixes and repeat.
|
1.0
|
Make the (or another) Dockerfile that supports a live network w/ Rosetta - This entails at least, but likely not exclusively:
1. Bake runtime genesis with the proper config file
2. Ensure snark keys can be downloaded at runtime or bake them into the container
3. Update docker-start.sh to support the latest set of flags
4. Make sure `rosetta-cli check:data` passes on a live network where this is deployed. Open issues as required for fixes and repeat.
|
non_process
|
make the or another dockerfile that supports a live network w rosetta this entails at least but likely not exclusively bake runtime genesis with the proper config file ensure snark keys can be downloaded at runtime or bake them into the container update docker start sh to support the latest set of flags make sure rosetta cli check data passes on a live network where this is deployed open issues as required for fixes and repeat
| 0
|
500,748
| 14,513,890,869
|
IssuesEvent
|
2020-12-13 06:26:21
|
Stooberton/ACF-3
|
https://api.github.com/repos/Stooberton/ACF-3
|
closed
|
[BUG] Menu tool stage and operation reset on player death
|
bug dev branch high priority
|
**Short Description**
Whenever the player is killed when doing something with the menu tool, the stage and operation of it will be entirely reset. This leads to the tool being unusable until you select another menu item.
**How to Reproduce (Optional)**
Select the menu tool and just kill yourself, then try to perform the action the current menu item is supposed to perform.
|
1.0
|
[BUG] Menu tool stage and operation reset on player death - **Short Description**
Whenever the player is killed when doing something with the menu tool, the stage and operation of it will be entirely reset. This leads to the tool being unusable until you select another menu item.
**How to Reproduce (Optional)**
Select the menu tool and just kill yourself, then try to perform the action the current menu item is supposed to perform.
|
non_process
|
menu tool stage and operation reset on player death short description whenever the player is killed when doing something with the menu tool the stage and operation of it will be entirely reset this leads to the tool being unusable until you select another menu item how to reproduce optional select the menu tool and just kill yourself then try to perform the action the current menu item is supposed to perform
| 0
|
19,197
| 25,328,138,219
|
IssuesEvent
|
2022-11-18 10:59:34
|
ESMValGroup/ESMValCore
|
https://api.github.com/repos/ESMValGroup/ESMValCore
|
closed
|
`multi_model_statistics` fail when only a single dataset is given
|
preprocessor
|
(Apologies for posting this again in case this was already discussed, but I didn't find an issue about this.)
While testing #673 I realized that `multi_model_statistics` fails when only a single input dataset is given:
https://github.com/ESMValGroup/ESMValCore/blob/9c008eeaa574642358e9f9555552577ab51f588b/esmvalcore/preprocessor/_multimodel.py#L294-L296
Especially with the introduction of `groupy` and `ensemble_statistics`, I don't think this is good behavior. For most statistics we support there are reasonable default values (i.e. the formulas used to calculate these values work perfectly fine for just one value), e.g. simply return the input data for `mean`, `gmean`, `hmean`, `min`, `max`, `sum`, `rms`; or return 0 for `std`, `var`. I'm not entirely sure about others like percentiles.
Would it make sense to just "hardcode" these defaults for these special cases? Unfortunately `iris` also raises an error in this case:
`iris.exceptions.CoordinateCollapseError: Cannot collapse a dimension which does not describe any data`. @ESMValGroup/esmvaltool-coreteam
Note: It is possible to avoid this by using the `exclude` key, but since this only takes dataset names into account it might fail for more complex ensembles and `groupby` settings.
|
1.0
|
`multi_model_statistics` fail when only a single dataset is given - (Apologies for posting this again in case this was already discussed, but I didn't find an issue about this.)
While testing #673 I realized that `multi_model_statistics` fails when only a single input dataset is given:
https://github.com/ESMValGroup/ESMValCore/blob/9c008eeaa574642358e9f9555552577ab51f588b/esmvalcore/preprocessor/_multimodel.py#L294-L296
Especially with the introduction of `groupy` and `ensemble_statistics`, I don't think this is good behavior. For most statistics we support there are reasonable default values (i.e. the formulas used to calculate these values work perfectly fine for just one value), e.g. simply return the input data for `mean`, `gmean`, `hmean`, `min`, `max`, `sum`, `rms`; or return 0 for `std`, `var`. I'm not entirely sure about others like percentiles.
Would it make sense to just "hardcode" these defaults for these special cases? Unfortunately `iris` also raises an error in this case:
`iris.exceptions.CoordinateCollapseError: Cannot collapse a dimension which does not describe any data`. @ESMValGroup/esmvaltool-coreteam
Note: It is possible to avoid this by using the `exclude` key, but since this only takes dataset names into account it might fail for more complex ensembles and `groupby` settings.
|
process
|
multi model statistics fail when only a single dataset is given apologies for posting this again in case this was already discussed but i didn t find an issue about this while testing i realized that multi model statistics fails when only a single input dataset is given especially with the introduction of groupy and ensemble statistics i don t think this is good behavior for most statistics we support there are reasonable default values i e the formulas used to calculate these values work perfectly fine for just one value e g simply return the input data for mean gmean hmean min max sum rms or return for std var i m not entirely sure about others like percentiles would it make sense to just hardcode these defaults for these special cases unfortunately iris also raises an error in this case iris exceptions coordinatecollapseerror cannot collapse a dimension which does not describe any data esmvalgroup esmvaltool coreteam note it is possible to avoid this by using the exclude key but since this only takes dataset names into account it might fail for more complex ensembles and groupby settings
| 1
|
6,315
| 9,320,856,468
|
IssuesEvent
|
2019-03-27 01:10:46
|
Poohblah/gymnastics_conditioning_buddy
|
https://api.github.com/repos/Poohblah/gymnastics_conditioning_buddy
|
opened
|
Add license to project
|
process
|
Decide which license(s) this project should be licensed under and add them where appropriate.
|
1.0
|
Add license to project - Decide which license(s) this project should be licensed under and add them where appropriate.
|
process
|
add license to project decide which license s this project should be licensed under and add them where appropriate
| 1
|
212,258
| 23,880,852,019
|
IssuesEvent
|
2022-09-08 01:04:06
|
LalithK90/phonesAndAccessories
|
https://api.github.com/repos/LalithK90/phonesAndAccessories
|
opened
|
CVE-2022-38749 (Medium) detected in snakeyaml-1.25.jar
|
security vulnerability
|
## CVE-2022-38749 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>snakeyaml-1.25.jar</b></p></summary>
<p>YAML 1.1 parser and emitter for Java</p>
<p>Library home page: <a href="http://www.snakeyaml.org">http://www.snakeyaml.org</a></p>
<p>Path to dependency file: /build.gradle</p>
<p>Path to vulnerable library: /tmp/ws-ua_20210310154435_MRQHFX/downloadResource_KZZRJE/20210310154513/snakeyaml-1.25.jar</p>
<p>
Dependency Hierarchy:
- spring-boot-starter-aop-2.2.4.RELEASE.jar (Root Library)
- spring-boot-starter-2.2.4.RELEASE.jar
- :x: **snakeyaml-1.25.jar** (Vulnerable Library)
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Using snakeYAML to parse untrusted YAML files may be vulnerable to Denial of Service attacks (DOS). If the parser is running on user supplied input, an attacker may supply content that causes the parser to crash by stackoverflow.
<p>Publish Date: 2022-09-05
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2022-38749>CVE-2022-38749</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>6.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: Low
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://bitbucket.org/snakeyaml/snakeyaml/issues/525/got-stackoverflowerror-for-many-open">https://bitbucket.org/snakeyaml/snakeyaml/issues/525/got-stackoverflowerror-for-many-open</a></p>
<p>Release Date: 2022-09-05</p>
<p>Fix Resolution: org.yaml:snakeyaml:1.31</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
True
|
CVE-2022-38749 (Medium) detected in snakeyaml-1.25.jar - ## CVE-2022-38749 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>snakeyaml-1.25.jar</b></p></summary>
<p>YAML 1.1 parser and emitter for Java</p>
<p>Library home page: <a href="http://www.snakeyaml.org">http://www.snakeyaml.org</a></p>
<p>Path to dependency file: /build.gradle</p>
<p>Path to vulnerable library: /tmp/ws-ua_20210310154435_MRQHFX/downloadResource_KZZRJE/20210310154513/snakeyaml-1.25.jar</p>
<p>
Dependency Hierarchy:
- spring-boot-starter-aop-2.2.4.RELEASE.jar (Root Library)
- spring-boot-starter-2.2.4.RELEASE.jar
- :x: **snakeyaml-1.25.jar** (Vulnerable Library)
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Using snakeYAML to parse untrusted YAML files may be vulnerable to Denial of Service attacks (DOS). If the parser is running on user supplied input, an attacker may supply content that causes the parser to crash by stackoverflow.
<p>Publish Date: 2022-09-05
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2022-38749>CVE-2022-38749</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>6.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: Low
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://bitbucket.org/snakeyaml/snakeyaml/issues/525/got-stackoverflowerror-for-many-open">https://bitbucket.org/snakeyaml/snakeyaml/issues/525/got-stackoverflowerror-for-many-open</a></p>
<p>Release Date: 2022-09-05</p>
<p>Fix Resolution: org.yaml:snakeyaml:1.31</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
non_process
|
cve medium detected in snakeyaml jar cve medium severity vulnerability vulnerable library snakeyaml jar yaml parser and emitter for java library home page a href path to dependency file build gradle path to vulnerable library tmp ws ua mrqhfx downloadresource kzzrje snakeyaml jar dependency hierarchy spring boot starter aop release jar root library spring boot starter release jar x snakeyaml jar vulnerable library found in base branch master vulnerability details using snakeyaml to parse untrusted yaml files may be vulnerable to denial of service attacks dos if the parser is running on user supplied input an attacker may supply content that causes the parser to crash by stackoverflow publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required low user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution org yaml snakeyaml step up your open source security game with mend
| 0
|
15,434
| 5,961,947,747
|
IssuesEvent
|
2017-05-29 19:47:49
|
DynamoRIO/dynamorio
|
https://api.github.com/repos/DynamoRIO/dynamorio
|
closed
|
add lint rules for automated code style checks
|
Component-Build Maintainability Type-Feature
|
Today we have some simple hand-written checks in runsuite such as for tabs,
but we could use a linter for automated checks of more code style
conventions.
|
1.0
|
add lint rules for automated code style checks - Today we have some simple hand-written checks in runsuite such as for tabs,
but we could use a linter for automated checks of more code style
conventions.
|
non_process
|
add lint rules for automated code style checks today we have some simple hand written checks in runsuite such as for tabs but we could use a linter for automated checks of more code style conventions
| 0
|
4,561
| 7,390,064,196
|
IssuesEvent
|
2018-03-16 10:56:42
|
MicrosoftDocs/azure-docs
|
https://api.github.com/repos/MicrosoftDocs/azure-docs
|
closed
|
Does the App Client Key expire?
|
active-directory-b2c cxp in-process product-question triaged
|
The docs talk about how to create an App Client key.
> Create an application secret by going to the Keys blade and clicking the Generate Key button. Make note of the App key value. You use the value as the application secret in your application's code.
There seems to be no way to set the expiration of the key. Is there a default expiration or do these keys never expire?
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: b1c2ea43-4468-6ce2-c963-6940f5cb1d34
* Version Independent ID: e91ad7a8-0dbd-79b4-0f1b-b3c5a66d6d78
* Content: [Application registration - Azure Active Directory B2C | Microsoft Docs](https://docs.microsoft.com/en-us/azure/active-directory-b2c/active-directory-b2c-app-registration#navigate-to-b2c-settings)
* Content Source: [articles/active-directory-b2c/active-directory-b2c-app-registration.md](https://github.com/Microsoft/azure-docs/blob/master/articles/active-directory-b2c/active-directory-b2c-app-registration.md)
* Service: **active-directory-b2c**
* GitHub Login: @PatAltimore
* Microsoft Alias: **parakhj**
|
1.0
|
Does the App Client Key expire? - The docs talk about how to create an App Client key.
> Create an application secret by going to the Keys blade and clicking the Generate Key button. Make note of the App key value. You use the value as the application secret in your application's code.
There seems to be no way to set the expiration of the key. Is there a default expiration or do these keys never expire?
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: b1c2ea43-4468-6ce2-c963-6940f5cb1d34
* Version Independent ID: e91ad7a8-0dbd-79b4-0f1b-b3c5a66d6d78
* Content: [Application registration - Azure Active Directory B2C | Microsoft Docs](https://docs.microsoft.com/en-us/azure/active-directory-b2c/active-directory-b2c-app-registration#navigate-to-b2c-settings)
* Content Source: [articles/active-directory-b2c/active-directory-b2c-app-registration.md](https://github.com/Microsoft/azure-docs/blob/master/articles/active-directory-b2c/active-directory-b2c-app-registration.md)
* Service: **active-directory-b2c**
* GitHub Login: @PatAltimore
* Microsoft Alias: **parakhj**
|
process
|
does the app client key expire the docs talk about how to create an app client key create an application secret by going to the keys blade and clicking the generate key button make note of the app key value you use the value as the application secret in your application s code there seems to be no way to set the expiration of the key is there a default expiration or do these keys never expire document details ⚠ do not edit this section it is required for docs microsoft com ➟ github issue linking id version independent id content content source service active directory github login pataltimore microsoft alias parakhj
| 1
|
11,394
| 3,202,365,655
|
IssuesEvent
|
2015-10-02 13:43:08
|
RocketChat/Rocket.Chat
|
https://api.github.com/repos/RocketChat/Rocket.Chat
|
closed
|
Private group doesn't display in room list
|
stat: needs testing type: bug
|
Steps to reproduce the problem.
1. Add a user to a private group Group1.
2. User can see a line "More private groups" in Private Groups div
3. User leaves room Group1
4. User readded to Group1
5. Now he can't reenter this room any more via the interface, because there is no line "More private groups" anymore. But if the User enters http://host:3000/group/Group1 in a web browser he reenters this group chat.
Can you fix this problem, so a User can reenter this group via web interface?
|
1.0
|
Private group doesn't display in room list - Steps to reproduce the problem.
1. Add a user to a private group Group1.
2. User can see a line "More private groups" in Private Groups div
3. User leaves room Group1
4. User readded to Group1
5. Now he can't reenter this room any more via the interface, because there is no line "More private groups" anymore. But if the User enters http://host:3000/group/Group1 in a web browser he reenters this group chat.
Can you fix this problem, so a User can reenter this group via web interface?
|
non_process
|
private group doesn t display in room list steps to repeat an problem add a user to a private group user can see a line more private groups in private groups div user leaves room user readded to now he can t reenter this room any more via interface because there is no line more private groups anymore but if user enter in web browser he reenters this group chat can you fix this problem so a user can reenter this group via web interface
| 0
|
22,047
| 30,570,381,993
|
IssuesEvent
|
2023-07-20 21:33:03
|
cohenlabUNC/clpipe
|
https://api.github.com/repos/cohenlabUNC/clpipe
|
opened
|
Remove use of fMRIPrep Masks as default due to Poor Quality
|
postprocess2 1.9 Req
|
- Remove masks pass through to individual postprocessing steps
- Add ability to use template mask instead
|
1.0
|
Remove use of fMRIPrep Masks as default due to Poor Quality - - Remove masks pass through to individual postprocessing steps
- Add ability to use template mask instead
|
process
|
remove use of fmriprep masks as default due to poor quality remove masks pass through to individual postprocessing steps add ability to use template mask instead
| 1
|
138,090
| 18,770,653,035
|
IssuesEvent
|
2021-11-06 19:28:33
|
samqws-marketing/box_mojito
|
https://api.github.com/repos/samqws-marketing/box_mojito
|
opened
|
CVE-2020-15366 (Medium) detected in multiple libraries
|
security vulnerability
|
## CVE-2020-15366 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>ajv-5.5.2.tgz</b>, <b>ajv-4.11.8.tgz</b>, <b>ajv-6.12.0.tgz</b>, <b>ajv-6.10.0.tgz</b></p></summary>
<p>
<details><summary><b>ajv-5.5.2.tgz</b></p></summary>
<p>Another JSON Schema Validator</p>
<p>Library home page: <a href="https://registry.npmjs.org/ajv/-/ajv-5.5.2.tgz">https://registry.npmjs.org/ajv/-/ajv-5.5.2.tgz</a></p>
<p>Path to dependency file: box_mojito/webapp/package.json</p>
<p>Path to vulnerable library: box_mojito/webapp/node_modules/ajv/package.json</p>
<p>
Dependency Hierarchy:
- style-loader-0.18.2.tgz (Root Library)
- schema-utils-0.3.0.tgz
- :x: **ajv-5.5.2.tgz** (Vulnerable Library)
</details>
<details><summary><b>ajv-4.11.8.tgz</b></p></summary>
<p>Another JSON Schema Validator</p>
<p>Library home page: <a href="https://registry.npmjs.org/ajv/-/ajv-4.11.8.tgz">https://registry.npmjs.org/ajv/-/ajv-4.11.8.tgz</a></p>
<p>Path to dependency file: box_mojito/webapp/package.json</p>
<p>Path to vulnerable library: box_mojito/webapp/node_modules/webpack/node_modules/ajv/package.json</p>
<p>
Dependency Hierarchy:
- webpack-2.7.0.tgz (Root Library)
- :x: **ajv-4.11.8.tgz** (Vulnerable Library)
</details>
<details><summary><b>ajv-6.12.0.tgz</b></p></summary>
<p>Another JSON Schema Validator</p>
<p>Library home page: <a href="https://registry.npmjs.org/ajv/-/ajv-6.12.0.tgz">https://registry.npmjs.org/ajv/-/ajv-6.12.0.tgz</a></p>
<p>Path to dependency file: box_mojito/webapp/package.json</p>
<p>Path to vulnerable library: box_mojito/webapp/node_modules/node-sass/node_modules/ajv/package.json</p>
<p>
Dependency Hierarchy:
- node-sass-4.13.1.tgz (Root Library)
- request-2.88.2.tgz
- har-validator-5.1.3.tgz
- :x: **ajv-6.12.0.tgz** (Vulnerable Library)
</details>
<details><summary><b>ajv-6.10.0.tgz</b></p></summary>
<p>Another JSON Schema Validator</p>
<p>Library home page: <a href="https://registry.npmjs.org/ajv/-/ajv-6.10.0.tgz">https://registry.npmjs.org/ajv/-/ajv-6.10.0.tgz</a></p>
<p>Path to dependency file: box_mojito/webapp/package.json</p>
<p>Path to vulnerable library: box_mojito/webapp/node_modules/css-loader/node_modules/ajv/package.json</p>
<p>
Dependency Hierarchy:
- css-loader-3.0.0.tgz (Root Library)
- schema-utils-1.0.0.tgz
- :x: **ajv-6.10.0.tgz** (Vulnerable Library)
</details>
<p>Found in HEAD commit: <a href="https://github.com/samqws-marketing/box_mojito/commit/3c2e2cd902af2e1370eccd53d260a4a3ca2da9a7">3c2e2cd902af2e1370eccd53d260a4a3ca2da9a7</a></p>
<p>Found in base branch: <b>0.110</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
An issue was discovered in ajv.validate() in Ajv (aka Another JSON Schema Validator) 6.12.2. A carefully crafted JSON schema could be provided that allows execution of other code by prototype pollution. (While untrusted schemas are recommended against, the worst case of an untrusted schema should be a denial of service, not execution of code.)
<p>Publish Date: 2020-07-15
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-15366>CVE-2020-15366</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.6</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: High
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: Low
- Integrity Impact: Low
- Availability Impact: Low
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/ajv-validator/ajv/releases/tag/v6.12.3">https://github.com/ajv-validator/ajv/releases/tag/v6.12.3</a></p>
<p>Release Date: 2020-07-15</p>
<p>Fix Resolution: ajv - 6.12.3</p>
</p>
</details>
<p></p>
<!-- <REMEDIATE>{"isOpenPROnVulnerability":false,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"javascript/Node.js","packageName":"ajv","packageVersion":"5.5.2","packageFilePaths":["/webapp/package.json"],"isTransitiveDependency":true,"dependencyTree":"style-loader:0.18.2;schema-utils:0.3.0;ajv:5.5.2","isMinimumFixVersionAvailable":true,"minimumFixVersion":"ajv - 6.12.3"},{"packageType":"javascript/Node.js","packageName":"ajv","packageVersion":"4.11.8","packageFilePaths":["/webapp/package.json"],"isTransitiveDependency":true,"dependencyTree":"webpack:2.7.0;ajv:4.11.8","isMinimumFixVersionAvailable":true,"minimumFixVersion":"ajv - 6.12.3"},{"packageType":"javascript/Node.js","packageName":"ajv","packageVersion":"6.12.0","packageFilePaths":["/webapp/package.json"],"isTransitiveDependency":true,"dependencyTree":"node-sass:4.13.1;request:2.88.2;har-validator:5.1.3;ajv:6.12.0","isMinimumFixVersionAvailable":true,"minimumFixVersion":"ajv - 6.12.3"},{"packageType":"javascript/Node.js","packageName":"ajv","packageVersion":"6.10.0","packageFilePaths":["/webapp/package.json"],"isTransitiveDependency":true,"dependencyTree":"css-loader:3.0.0;schema-utils:1.0.0;ajv:6.10.0","isMinimumFixVersionAvailable":true,"minimumFixVersion":"ajv - 6.12.3"}],"baseBranches":["0.110"],"vulnerabilityIdentifier":"CVE-2020-15366","vulnerabilityDetails":"An issue was discovered in ajv.validate() in Ajv (aka Another JSON Schema Validator) 6.12.2. A carefully crafted JSON schema could be provided that allows execution of other code by prototype pollution. (While untrusted schemas are recommended against, the worst case of an untrusted schema should be a denial of service, not execution of code.)","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-15366","cvss3Severity":"medium","cvss3Score":"5.6","cvss3Metrics":{"A":"Low","AC":"High","PR":"None","S":"Unchanged","C":"Low","UI":"None","AV":"Network","I":"Low"},"extraData":{}}</REMEDIATE> -->
|
True
|
CVE-2020-15366 (Medium) detected in multiple libraries - ## CVE-2020-15366 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>ajv-5.5.2.tgz</b>, <b>ajv-4.11.8.tgz</b>, <b>ajv-6.12.0.tgz</b>, <b>ajv-6.10.0.tgz</b></p></summary>
<p>
<details><summary><b>ajv-5.5.2.tgz</b></p></summary>
<p>Another JSON Schema Validator</p>
<p>Library home page: <a href="https://registry.npmjs.org/ajv/-/ajv-5.5.2.tgz">https://registry.npmjs.org/ajv/-/ajv-5.5.2.tgz</a></p>
<p>Path to dependency file: box_mojito/webapp/package.json</p>
<p>Path to vulnerable library: box_mojito/webapp/node_modules/ajv/package.json</p>
<p>
Dependency Hierarchy:
- style-loader-0.18.2.tgz (Root Library)
- schema-utils-0.3.0.tgz
- :x: **ajv-5.5.2.tgz** (Vulnerable Library)
</details>
<details><summary><b>ajv-4.11.8.tgz</b></p></summary>
<p>Another JSON Schema Validator</p>
<p>Library home page: <a href="https://registry.npmjs.org/ajv/-/ajv-4.11.8.tgz">https://registry.npmjs.org/ajv/-/ajv-4.11.8.tgz</a></p>
<p>Path to dependency file: box_mojito/webapp/package.json</p>
<p>Path to vulnerable library: box_mojito/webapp/node_modules/webpack/node_modules/ajv/package.json</p>
<p>
Dependency Hierarchy:
- webpack-2.7.0.tgz (Root Library)
- :x: **ajv-4.11.8.tgz** (Vulnerable Library)
</details>
<details><summary><b>ajv-6.12.0.tgz</b></p></summary>
<p>Another JSON Schema Validator</p>
<p>Library home page: <a href="https://registry.npmjs.org/ajv/-/ajv-6.12.0.tgz">https://registry.npmjs.org/ajv/-/ajv-6.12.0.tgz</a></p>
<p>Path to dependency file: box_mojito/webapp/package.json</p>
<p>Path to vulnerable library: box_mojito/webapp/node_modules/node-sass/node_modules/ajv/package.json</p>
<p>
Dependency Hierarchy:
- node-sass-4.13.1.tgz (Root Library)
- request-2.88.2.tgz
- har-validator-5.1.3.tgz
- :x: **ajv-6.12.0.tgz** (Vulnerable Library)
</details>
<details><summary><b>ajv-6.10.0.tgz</b></p></summary>
<p>Another JSON Schema Validator</p>
<p>Library home page: <a href="https://registry.npmjs.org/ajv/-/ajv-6.10.0.tgz">https://registry.npmjs.org/ajv/-/ajv-6.10.0.tgz</a></p>
<p>Path to dependency file: box_mojito/webapp/package.json</p>
<p>Path to vulnerable library: box_mojito/webapp/node_modules/css-loader/node_modules/ajv/package.json</p>
<p>
Dependency Hierarchy:
- css-loader-3.0.0.tgz (Root Library)
- schema-utils-1.0.0.tgz
- :x: **ajv-6.10.0.tgz** (Vulnerable Library)
</details>
<p>Found in HEAD commit: <a href="https://github.com/samqws-marketing/box_mojito/commit/3c2e2cd902af2e1370eccd53d260a4a3ca2da9a7">3c2e2cd902af2e1370eccd53d260a4a3ca2da9a7</a></p>
<p>Found in base branch: <b>0.110</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
An issue was discovered in ajv.validate() in Ajv (aka Another JSON Schema Validator) 6.12.2. A carefully crafted JSON schema could be provided that allows execution of other code by prototype pollution. (While untrusted schemas are recommended against, the worst case of an untrusted schema should be a denial of service, not execution of code.)
<p>Publish Date: 2020-07-15
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-15366>CVE-2020-15366</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.6</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: High
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: Low
- Integrity Impact: Low
- Availability Impact: Low
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/ajv-validator/ajv/releases/tag/v6.12.3">https://github.com/ajv-validator/ajv/releases/tag/v6.12.3</a></p>
<p>Release Date: 2020-07-15</p>
<p>Fix Resolution: ajv - 6.12.3</p>
</p>
</details>
<p></p>
<!-- <REMEDIATE>{"isOpenPROnVulnerability":false,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"javascript/Node.js","packageName":"ajv","packageVersion":"5.5.2","packageFilePaths":["/webapp/package.json"],"isTransitiveDependency":true,"dependencyTree":"style-loader:0.18.2;schema-utils:0.3.0;ajv:5.5.2","isMinimumFixVersionAvailable":true,"minimumFixVersion":"ajv - 6.12.3"},{"packageType":"javascript/Node.js","packageName":"ajv","packageVersion":"4.11.8","packageFilePaths":["/webapp/package.json"],"isTransitiveDependency":true,"dependencyTree":"webpack:2.7.0;ajv:4.11.8","isMinimumFixVersionAvailable":true,"minimumFixVersion":"ajv - 6.12.3"},{"packageType":"javascript/Node.js","packageName":"ajv","packageVersion":"6.12.0","packageFilePaths":["/webapp/package.json"],"isTransitiveDependency":true,"dependencyTree":"node-sass:4.13.1;request:2.88.2;har-validator:5.1.3;ajv:6.12.0","isMinimumFixVersionAvailable":true,"minimumFixVersion":"ajv - 6.12.3"},{"packageType":"javascript/Node.js","packageName":"ajv","packageVersion":"6.10.0","packageFilePaths":["/webapp/package.json"],"isTransitiveDependency":true,"dependencyTree":"css-loader:3.0.0;schema-utils:1.0.0;ajv:6.10.0","isMinimumFixVersionAvailable":true,"minimumFixVersion":"ajv - 6.12.3"}],"baseBranches":["0.110"],"vulnerabilityIdentifier":"CVE-2020-15366","vulnerabilityDetails":"An issue was discovered in ajv.validate() in Ajv (aka Another JSON Schema Validator) 6.12.2. A carefully crafted JSON schema could be provided that allows execution of other code by prototype pollution. (While untrusted schemas are recommended against, the worst case of an untrusted schema should be a denial of service, not execution of code.)","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-15366","cvss3Severity":"medium","cvss3Score":"5.6","cvss3Metrics":{"A":"Low","AC":"High","PR":"None","S":"Unchanged","C":"Low","UI":"None","AV":"Network","I":"Low"},"extraData":{}}</REMEDIATE> -->
|
non_process
|
cve medium detected in multiple libraries cve medium severity vulnerability vulnerable libraries ajv tgz ajv tgz ajv tgz ajv tgz ajv tgz another json schema validator library home page a href path to dependency file box mojito webapp package json path to vulnerable library box mojito webapp node modules ajv package json dependency hierarchy style loader tgz root library schema utils tgz x ajv tgz vulnerable library ajv tgz another json schema validator library home page a href path to dependency file box mojito webapp package json path to vulnerable library box mojito webapp node modules webpack node modules ajv package json dependency hierarchy webpack tgz root library x ajv tgz vulnerable library ajv tgz another json schema validator library home page a href path to dependency file box mojito webapp package json path to vulnerable library box mojito webapp node modules node sass node modules ajv package json dependency hierarchy node sass tgz root library request tgz har validator tgz x ajv tgz vulnerable library ajv tgz another json schema validator library home page a href path to dependency file box mojito webapp package json path to vulnerable library box mojito webapp node modules css loader node modules ajv package json dependency hierarchy css loader tgz root library schema utils tgz x ajv tgz vulnerable library found in head commit a href found in base branch vulnerability details an issue was discovered in ajv validate in ajv aka another json schema validator a carefully crafted json schema could be provided that allows execution of other code by prototype pollution while untrusted schemas are recommended against the worst case of an untrusted schema should be a denial of service not execution of code publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity high privileges required none user interaction none scope unchanged impact metrics confidentiality impact low integrity impact low 
availability impact low for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution ajv isopenpronvulnerability false ispackagebased true isdefaultbranch true packages istransitivedependency true dependencytree style loader schema utils ajv isminimumfixversionavailable true minimumfixversion ajv packagetype javascript node js packagename ajv packageversion packagefilepaths istransitivedependency true dependencytree webpack ajv isminimumfixversionavailable true minimumfixversion ajv packagetype javascript node js packagename ajv packageversion packagefilepaths istransitivedependency true dependencytree node sass request har validator ajv isminimumfixversionavailable true minimumfixversion ajv packagetype javascript node js packagename ajv packageversion packagefilepaths istransitivedependency true dependencytree css loader schema utils ajv isminimumfixversionavailable true minimumfixversion ajv basebranches vulnerabilityidentifier cve vulnerabilitydetails an issue was discovered in ajv validate in ajv aka another json schema validator a carefully crafted json schema could be provided that allows execution of other code by prototype pollution while untrusted schemas are recommended against the worst case of an untrusted schema should be a denial of service not execution of code vulnerabilityurl
| 0
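The CVSS 3 block in the record above lists the individual metrics (AV:N/AC:H/PR:N/UI:N/S:U/C:L/I:L/A:L) alongside the score 5.6. As a sanity check, the base score can be reproduced from those metrics with the published CVSS v3.0 equations; this is a minimal sketch, with the numeric weights taken from the CVSS specification:

```python
import math

# CVSS v3.0 numeric weights for the metrics listed in the record
# (AV:N / AC:H / PR:N / UI:N / S:U / C:L / I:L / A:L).
AV, AC, PR, UI = 0.85, 0.44, 0.85, 0.85  # Network, High, None, None
C = I = A = 0.22                          # Low impact on C, I, and A

def roundup(x: float) -> float:
    # CVSS "round up to one decimal place" helper
    return math.ceil(x * 10) / 10

iss = 1 - (1 - C) * (1 - I) * (1 - A)      # impact sub-score
impact = 6.42 * iss                        # Scope: Unchanged variant
exploitability = 8.22 * AV * AC * PR * UI

base = 0.0 if impact <= 0 else roundup(min(impact + exploitability, 10))
print(base)  # 5.6, matching the score in the report
```

The computed value agrees with the 5.6 shown in the record, which is reassuring given that the "Attack Complexity: High" weight (0.44) is what keeps this prototype-pollution issue in the medium band.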
|
132,022
| 18,470,539,512
|
IssuesEvent
|
2021-10-17 17:04:07
|
openfaas/faasd
|
https://api.github.com/repos/openfaas/faasd
|
closed
|
Support raw binary secrets
|
design/approved
|
<!--- Provide a general summary of the issue in the Title above -->
Raw binary data was introduced into the faas-provider secrets model in https://github.com/openfaas/faas-provider/pull/60
While expanding the certifier secrets test to include support for Raw values, I discovered that this feature has not been ported to faasd. We need this to maintain compatibility with faas-netes.
## Expected Behaviour
<!--- If you're describing a bug, tell us what should happen -->
<!--- If you're suggesting a change/improvement, tell us how it should work -->
The new certifier tests should pass and create a secret from raw bytes
## Current Behaviour
<!--- If describing a bug, tell us what happens instead of the expected behavior -->
<!--- If suggesting a change/improvement, explain the difference from current behavior -->
The certifier tests are failing and it is creating an empty secret

## Are you a GitHub Sponsor (Yes/No?)
Check at: https://github.com/sponsors/openfaas
- [x] Yes
- [ ] No
## List all Possible Solutions
<!--- Not obligatory, but suggest a fix/reason for the bug, -->
<!--- or ideas how to implement the addition or change -->
Update the provider version and then add the required `if`-block to handle the two fields.
## List the one solution that you would recommend
<!--- If you were to be on the hook for this change. -->
## Steps to Reproduce (for bugs)
<!--- Provide a link to a live example, or an unambiguous set of steps to -->
<!--- reproduce this bug. Include code to reproduce, if relevant -->
1. Run this certifier branch https://github.com/openfaas/certifier/pull/79
## Context
<!--- How has this issue affected you? What are you trying to accomplish? -->
<!--- Providing context helps us come up with a solution that is most useful in the real world -->
Can't finish the certifier
|
1.0
|
Support raw binary secrets - <!--- Provide a general summary of the issue in the Title above -->
Raw binary data was introduced into the faas-provider secrets model in https://github.com/openfaas/faas-provider/pull/60
While expanding the certifier secrets test to include support for Raw values, I discovered that this feature has not been ported to faasd. We need this to maintain compatibility with faas-netes.
## Expected Behaviour
<!--- If you're describing a bug, tell us what should happen -->
<!--- If you're suggesting a change/improvement, tell us how it should work -->
The new certifier tests should pass and create a secret from raw bytes
## Current Behaviour
<!--- If describing a bug, tell us what happens instead of the expected behavior -->
<!--- If suggesting a change/improvement, explain the difference from current behavior -->
The certifier tests are failing and it is creating an empty secret

## Are you a GitHub Sponsor (Yes/No?)
Check at: https://github.com/sponsors/openfaas
- [x] Yes
- [ ] No
## List all Possible Solutions
<!--- Not obligatory, but suggest a fix/reason for the bug, -->
<!--- or ideas how to implement the addition or change -->
Update the provider version and then add the required `if`-block to handle the two fields.
## List the one solution that you would recommend
<!--- If you were to be on the hook for this change. -->
## Steps to Reproduce (for bugs)
<!--- Provide a link to a live example, or an unambiguous set of steps to -->
<!--- reproduce this bug. Include code to reproduce, if relevant -->
1. Run this certifier branch https://github.com/openfaas/certifier/pull/79
## Context
<!--- How has this issue affected you? What are you trying to accomplish? -->
<!--- Providing context helps us come up with a solution that is most useful in the real world -->
Can't finish the certifier
|
non_process
|
support raw binary secrets raw binary data was introduced into the faas provider secrets model in while expanding the certifier secrets test to include support for raw values i discovered that this feature has not been ported to faasd we need this to maintain compatibility wtih faas netes expected behaviour the new certifier tests should pass and create a secret from raw bytes current behaviour the certifier tests are failing and it is creating an empty secret are you a github sponsor yes no check at yes no list all possible solutions update the provider version and then add the requied if block to handle the two fields list the one solution that you would recommend steps to reproduce for bugs run this certifier branch context can t finish the certifier
| 0
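The faasd record above describes the fix as handling "the two fields" of the faas-provider secret model: a plain string value and a raw binary value. The actual change is Go code in faasd, but the branching logic can be sketched in Python; the field names (`value`, `rawValue`) and the base64 wire encoding are assumptions for illustration, not the provider's exact schema:

```python
import base64

def secret_bytes(secret: dict) -> bytes:
    """Return the payload to write to disk for a secret.

    Mirrors the two-field model described in the issue: when a raw
    binary value is present it wins, so binary secrets are not
    coerced through a text field; otherwise the string value is used.
    """
    raw = secret.get("rawValue")
    if raw:
        # raw values arrive base64-encoded on the wire (assumed here)
        return base64.b64decode(raw)
    return secret.get("value", "").encode("utf-8")

print(secret_bytes({"value": "s3cr3t"}))   # b's3cr3t'
print(secret_bytes({"rawValue": "AAEC"}))  # b'\x00\x01\x02'
```

The point of the `if`-block is exactly the bug in the record: without it, a raw-only secret falls through to the empty string path and an empty secret is created.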
|
155,235
| 12,244,344,429
|
IssuesEvent
|
2020-05-05 10:56:54
|
WoWManiaUK/Redemption
|
https://api.github.com/repos/WoWManiaUK/Redemption
|
closed
|
Taunt - Diminishing Returns
|
Fix - Tester Confirmed
|
>3.3.0
Taunt Diminishing Returns: We've revised the system for diminishing returns on Taunt so that creatures do not become immune to Taunt until after 5 Taunts have landed. The duration of the Taunt effect will be reduced by 35% instead of 50% for each taunt landed. In addition, most creatures in the world will not be affected by Taunt diminishing returns at all. Creatures will only have Taunt diminishing returns if they have been specifically flagged for that behavior based on the design of a given encounter.
I am positive that there are encounters that are missing this mechanic. However, I don't know which ones.
Please help me out if you know more about it! :D
|
1.0
|
Taunt - Diminishing Returns - >3.3.0
Taunt Diminishing Returns: We've revised the system for diminishing returns on Taunt so that creatures do not become immune to Taunt until after 5 Taunts have landed. The duration of the Taunt effect will be reduced by 35% instead of 50% for each taunt landed. In addition, most creatures in the world will not be affected by Taunt diminishing returns at all. Creatures will only have Taunt diminishing returns if they have been specifically flagged for that behavior based on the design of a given encounter.
I am positive that there are encounters that are missing this mechanic. However, I don't know which ones.
Please help me out if you know more about it! :D
|
non_process
|
taunt diminishing returns taunt diminishing returns we ve revised the system for diminishing returns on taunt so that creatures do not become immune to taunt until after taunts have landed the duration of the taunt effect will be reduced by instead of for each taunt landed in addition most creatures in the world will not be affected by taunt diminishing returns at all creatures will only have taunt diminishing returns if they have been specifically flagged for that behavior based on the design of a given encounter i am positive that there are encounters that are missing this mechanic however i don t know which ones please help me out if you know more about it d
| 0
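The 3.3.0 patch note quoted above gives concrete numbers: each landed Taunt reduces the duration by 35% instead of 50%, and the target only becomes immune after 5 Taunts have landed. A small sketch of that rule, assuming the reductions stack multiplicatively (a 0.65 factor per landed taunt; the note does not spell out the stacking, so treat that as an assumption):

```python
def taunt_duration(base_duration: float, taunts_landed: int) -> float:
    """Duration of the next Taunt after `taunts_landed` prior taunts.

    Per the 3.3.0 note: each landed Taunt cuts the next one's duration
    by 35% (multiplier 0.65), and the target becomes immune once
    5 Taunts have landed.
    """
    if taunts_landed >= 5:
        return 0.0  # immune
    return base_duration * (0.65 ** taunts_landed)

# e.g. a 3-second taunt shrinks 3.0 -> 1.95 -> 1.2675 -> ... -> immune
for n in range(6):
    print(n, round(taunt_duration(3.0, n), 4))
```

Encounters flagged for taunt diminishing returns should follow this curve; per the note, most creatures are not flagged at all and take full-duration taunts indefinitely.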
|
8,478
| 11,643,052,141
|
IssuesEvent
|
2020-02-29 11:05:24
|
tikv/tikv
|
https://api.github.com/repos/tikv/tikv
|
opened
|
UCP: Migrate scalar function `UnHex` from TiDB
|
challenge-program-2 component/coprocessor difficulty/easy sig/coprocessor
|
## Description
Port the scalar function `UnHex` from TiDB to coprocessor.
## Score
* 50
## Mentor(s)
* @lonng
## Recommended Skills
* Rust programming
## Learning Materials
Already implemented expressions ported from TiDB
- https://github.com/tikv/tikv/tree/master/components/tidb_query/src/rpn_expr)
- https://github.com/tikv/tikv/tree/master/components/tidb_query/src/expr)
|
2.0
|
UCP: Migrate scalar function `UnHex` from TiDB -
## Description
Port the scalar function `UnHex` from TiDB to coprocessor.
## Score
* 50
## Mentor(s)
* @lonng
## Recommended Skills
* Rust programming
## Learning Materials
Already implemented expressions ported from TiDB
- https://github.com/tikv/tikv/tree/master/components/tidb_query/src/rpn_expr)
- https://github.com/tikv/tikv/tree/master/components/tidb_query/src/expr)
|
process
|
ucp migrate scalar function unhex from tidb description port the scalar function unhex from tidb to coprocessor score mentor s lonng recommended skills rust programming learning materials already implemented expressions ported from tidb
| 1
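The UCP record above asks for `UnHex` to be ported from TiDB to the coprocessor, where the real implementation is Rust under `tidb_query`. As a reference for the expected semantics, here is a Python sketch of MySQL-style `UNHEX`: pairs of hex digits become bytes, and a non-hex digit yields NULL. The odd-length handling (left-pad with one `'0'`) is an assumption here; the authoritative behavior is whatever TiDB's builtin does:

```python
def unhex(s):
    """MySQL-style UNHEX: interpret pairs of hex digits as bytes.

    Returns None (SQL NULL) for a NULL input or when the input
    contains a non-hexadecimal digit.
    """
    if s is None:
        return None
    if len(s) % 2:
        s = "0" + s  # assumed odd-length padding, see note above
    try:
        return bytes.fromhex(s)
    except ValueError:
        return None

print(unhex("4D7953514C"))  # b'MySQL'
print(unhex("GG"))          # None
```

Ported scalar functions like this are expected to match the TiDB behavior exactly, including the NULL cases, which is why the existing `rpn_expr` implementations linked in the record are the best starting point.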
|
19,705
| 26,053,191,957
|
IssuesEvent
|
2022-12-22 21:08:51
|
MPMG-DCC-UFMG/C01
|
https://api.github.com/repos/MPMG-DCC-UFMG/C01
|
opened
|
Step interface with Vue.js - Filtering the list of steps
|
[1] Bug [0] Desenvolvimento [2] Média Prioridade [3] Processamento Dinâmico
|
## Expected Behavior
The step-selection select elements in the step interface should limit the choices available to the user depending on the context. Some contexts to consider are:
- General steps
- Steps inside a new tab
- Steps inside an iframe
- Steps used as iterators
- Steps used as conditions
- Steps used as "sources" in an assignment
## Current Behavior
Regardless of context, the step-definition select elements allow selecting any existing step.
## System
Branch `issue-882`.
|
1.0
|
Step interface with Vue.js - Filtering the list of steps - ## Expected Behavior
The step-selection select elements in the step interface should limit the choices available to the user depending on the context. Some contexts to consider are:
- General steps
- Steps inside a new tab
- Steps inside an iframe
- Steps used as iterators
- Steps used as conditions
- Steps used as "sources" in an assignment
## Current Behavior
Regardless of context, the step-definition select elements allow selecting any existing step.
## System
Branch `issue-882`.
|
process
|
interface de passos com vue js filtragem de lista de passos comportamento esperado os selects de escolha de passo na interface de passos devem limitar as escolhas disponíveis para o usuário dependendo do contexto alguns contextos a considerar são passos gerais passos dentro de uma nova aba passos dentro de um iframe passos usados como iteradores passos usados como condições passos usados como sources na atribuição comportamento atual independentemente do contexto os selects de definição de passo permitem selecionar qualquer passo existente sistema branch issue
| 1
|
6,240
| 9,196,798,958
|
IssuesEvent
|
2019-03-07 08:18:37
|
eobermuhlner/big-math
|
https://api.github.com/repos/eobermuhlner/big-math
|
opened
|
Prepare release 2.1.0
|
development process
|
- [ ] add release number header to release note
- [ ] rename release note
- [ ] change version in `build.gradle`
- [ ] upload artifacts to maven central
- [ ] uncomment task `uploadArtifacts` in `build.gradle`
- [ ] run `./gradlew :ch.obermuhlner.math.big:uploadArchives`
- [ ] go to https://oss.sonatype.org/
- [ ] in tab 'Staging Repositories' locate own Repository (typically at the end of the list)
- [ ] verify content of own Repository (version number!)
- [ ] `Close` own Repository
- [ ] `Refresh`
- [ ] `Release` own Repository
- [ ] create github release from same artifacts
- [ ] Create new draft release
- [ ] Copy content of release note into draft release
- [ ] Add artefacts from gradle build to draft release
- [ ] big-math-*.jar
- [ ] big-math-*-javadoc.jar
- [ ] big-math-*-sources.jar
- [ ] Publish release
- [ ] update readme
- [ ] add generated javadoc to `docs/javadoc`
- [ ] update `docs/index.md`
- [ ] update dependent projects
- [ ] create empty release note for next release
|
1.0
|
Prepare release 2.1.0 - - [ ] add release number header to release note
- [ ] rename release note
- [ ] change version in `build.gradle`
- [ ] upload artifacts to maven central
- [ ] uncomment task `uploadArtifacts` in `build.gradle`
- [ ] run `./gradlew :ch.obermuhlner.math.big:uploadArchives`
- [ ] go to https://oss.sonatype.org/
- [ ] in tab 'Staging Repositories' locate own Repository (typically at the end of the list)
- [ ] verify content of own Repository (version number!)
- [ ] `Close` own Repository
- [ ] `Refresh`
- [ ] `Release` own Repository
- [ ] create github release from same artifacts
- [ ] Create new draft release
- [ ] Copy content of release note into draft release
- [ ] Add artefacts from gradle build to draft release
- [ ] big-math-*.jar
- [ ] big-math-*-javadoc.jar
- [ ] big-math-*-sources.jar
- [ ] Publish release
- [ ] update readme
- [ ] add generated javadoc to `docs/javadoc`
- [ ] update `docs/index.md`
- [ ] update dependent projects
- [ ] create empty release note for next release
|
process
|
prepare release add release number header to release note rename release note change version in build gradle upload artifacts to maven central uncomment task uploadartifacts in build gradle run gradlew ch obermuhlner math big uploadarchives go to in tab staging repositories locate own repository typically at the end of the list verify content of own repository version number close own repository refresh release own repository create github release from same artifacts create new draft release copy content of release note into draft release add artefacts from gradle build to draft release big math jar big math javadoc jar big math sources jar publish release update readme add generated javadoc to docs javadoc update docs index md update dependent projects create empty release note for next release
| 1
|
13,776
| 16,532,315,308
|
IssuesEvent
|
2021-05-27 07:44:36
|
openservicemesh/osm-docs
|
https://api.github.com/repos/openservicemesh/osm-docs
|
closed
|
clarify banner regarding v0.8 docs
|
process
|
Release v0.8.4 of OSM was released on 5/18/2021. The docs for release-v0.8 should not be marked as archived. We should only mark a version of documentation as archived when we are no longer supporting it.
The banner reads as follows, but a user using `release-v0.8.x` of OSM should not use the `latest` docs pointing to the `main` branch because `main` docs might not be compatible with the code in `release-v0.8.x`.

|
1.0
|
clarify banner regarding v0.8 docs - Release v0.8.4 of OSM was released on 5/18/2021. The docs for release-v0.8 should not be marked as archived. We should only mark a version of documentation as archived when we are no longer supporting it.
The banner reads as follows, but a user using `release-v0.8.x` of OSM should not use the `latest` docs pointing to the `main` branch because `main` docs might not be compatible with the code in `release-v0.8.x`.

|
process
|
clarify banner regarding docs release of osm was released on the docs for release should not be marked as archived we should only mark a version of documentation as archived when we are no longer supporting it the banner reads as follows but a user using release x of osm should not use the latest docs pointing to the main branch because main docs might not be compatible with the code in release x
| 1
|
16,582
| 21,625,475,201
|
IssuesEvent
|
2022-05-05 01:06:54
|
googleapis/java-bigquery
|
https://api.github.com/repos/googleapis/java-bigquery
|
reopened
|
Investigate GraalVM IT failure with Apache Arrow
|
type: process api: bigquery
|
We are observing GraalVM IT failure on [this PR](https://github.com/googleapis/java-bigquery/pull/1374).
Some initial investigation was done: https://github.com/googleapis/java-bigquery/pull/1374#issuecomment-1105342700
[Full stack trace](https://source.cloud.google.com/results/invocations/24a2cba4-9627-48c5-8dd6-cb880dfb0797/targets/cloud-devrel%2Fclient-libraries%2Fjava%2Fjava-bigquery%2Fpresubmit%2Fgraalvm-native/log)
cc @suztomo
|
1.0
|
Investigate GraalVM IT failure with Apache Arrow - We are observing GraalVM IT failure on [this PR](https://github.com/googleapis/java-bigquery/pull/1374).
Some initial investigation was done: https://github.com/googleapis/java-bigquery/pull/1374#issuecomment-1105342700
[Full stack trace](https://source.cloud.google.com/results/invocations/24a2cba4-9627-48c5-8dd6-cb880dfb0797/targets/cloud-devrel%2Fclient-libraries%2Fjava%2Fjava-bigquery%2Fpresubmit%2Fgraalvm-native/log)
cc @suztomo
|
process
|
investigate graalvm it failure with apache arrow we are observing graalvm it failure on some initial investigation was done cc suztomo
| 1
|
12,601
| 15,001,667,030
|
IssuesEvent
|
2021-01-30 00:53:47
|
MicrosoftDocs/azure-docs
|
https://api.github.com/repos/MicrosoftDocs/azure-docs
|
closed
|
Import-PfxCertificate access denied
|
automation/svc cxp process-automation/subsvc product-question triaged
|
I guess that's not an issue, but I can't find a forum where I can ask for help. So, running Export-RunAsCertificateToHybridWorker gives me Access Denied while trying to Import-PfxCertificate. The credentials the runbook uses are those of a domain admin, but somewhere on the web I've read that this cmdlet needs an elevated prompt. Anyway, I imported the certificate manually with mmc, but I was wondering why this happened.
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: a21ca143-2f33-5cea-94a8-ace7e9de5f9c
* Version Independent ID: d7f2ef01-8c25-770e-dfd9-37b98dc7ba29
* Content: [Run Azure Automation runbooks on a Hybrid Runbook Worker](https://docs.microsoft.com/en-us/azure/automation/automation-hrw-run-runbooks)
* Content Source: [articles/automation/automation-hrw-run-runbooks.md](https://github.com/MicrosoftDocs/azure-docs/blob/master/articles/automation/automation-hrw-run-runbooks.md)
* Service: **automation**
* Sub-service: **process-automation**
* GitHub Login: @MGoedtel
* Microsoft Alias: **magoedte**
|
1.0
|
Import-PfxCertificate access denied - I guess that's not an issue, but I can't find a forum where I can ask for help. So, running Export-RunAsCertificateToHybridWorker gives me Access Denied while trying to Import-PfxCertificate. The credentials the runbook uses are those of a domain admin, but somewhere on the web I've read that this cmdlet needs an elevated prompt. Anyway, I imported the certificate manually with mmc, but I was wondering why this happened.
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: a21ca143-2f33-5cea-94a8-ace7e9de5f9c
* Version Independent ID: d7f2ef01-8c25-770e-dfd9-37b98dc7ba29
* Content: [Run Azure Automation runbooks on a Hybrid Runbook Worker](https://docs.microsoft.com/en-us/azure/automation/automation-hrw-run-runbooks)
* Content Source: [articles/automation/automation-hrw-run-runbooks.md](https://github.com/MicrosoftDocs/azure-docs/blob/master/articles/automation/automation-hrw-run-runbooks.md)
* Service: **automation**
* Sub-service: **process-automation**
* GitHub Login: @MGoedtel
* Microsoft Alias: **magoedte**
|
process
|
import pfxcertificate access denied i guess that s not an issue but i can t find a forum where i can ask for help so running export runascertificatetohybridworker gives me access denied while trying to import pfxcertificate the credentials the runbook uses are of a domain admin but somewhere on the web i ve read this cmdlet needs an elevated prompt anyway i imported the certificate manually with mmc but i was wondering why this happened document details ⚠ do not edit this section it is required for docs microsoft com ➟ github issue linking id version independent id content content source service automation sub service process automation github login mgoedtel microsoft alias magoedte
| 1
|
246,544
| 20,871,429,520
|
IssuesEvent
|
2022-03-22 12:20:50
|
byemc/status
|
https://api.github.com/repos/byemc/status
|
closed
|
🛑 Bye's Site (testing) is down
|
status bye-s-site-testing
|
In [`6c42a1a`](https://github.com/byemc/status/commit/6c42a1ae76ea8bafac7d39ffb837ce8dbbd87d0a), Bye's Site (testing) (https://byemc-xyz-testing.pages.dev) was **down**:
- HTTP code: 0
- Response time: 0 ms
|
1.0
|
🛑 Bye's Site (testing) is down - In [`6c42a1a`](https://github.com/byemc/status/commit/6c42a1ae76ea8bafac7d39ffb837ce8dbbd87d0a), Bye's Site (testing) (https://byemc-xyz-testing.pages.dev) was **down**:
- HTTP code: 0
- Response time: 0 ms
|
non_process
|
🛑 bye s site testing is down in bye s site testing was down http code response time ms
| 0
|
14,214
| 5,583,555,724
|
IssuesEvent
|
2017-03-29 00:52:19
|
google/instrumentation-java
|
https://api.github.com/repos/google/instrumentation-java
|
closed
|
Remove Bazel build files, and only build with Maven.
|
build
|
Maintaining two builds is time-consuming. Since Bazel currently doesn't support deploying to Maven Central or running different static analysis tools, we should use Maven for now. We will need to fix #147 and possibly add Error Prone to the Maven build first.
|
1.0
|
Remove Bazel build files, and only build with Maven. - Maintaining two builds is time-consuming. Since Bazel currently doesn't support deploying to Maven Central or running different static analysis tools, we should use Maven for now. We will need to fix #147 and possibly add Error Prone to the Maven build first.
|
non_process
|
remove bazel build files and only build with maven maintaining two builds is time consuming since bazel currently doesn t support deploying to maven central or running different static analysis tools we should use maven for now we will need to fix and possibly add error prone to the maven build first
| 0
|
22,492
| 31,465,955,242
|
IssuesEvent
|
2023-08-30 02:00:07
|
lizhihao6/get-daily-arxiv-noti
|
https://api.github.com/repos/lizhihao6/get-daily-arxiv-noti
|
opened
|
New submissions for Wed, 30 Aug 23
|
event camera white balance isp compression image signal processing image signal process raw raw image events camera color contrast events AWB
|
## Keyword: events
There is no result
## Keyword: event camera
There is no result
## Keyword: events camera
There is no result
## Keyword: white balance
There is no result
## Keyword: color contrast
There is no result
## Keyword: AWB
There is no result
## Keyword: ISP
### NSF: Neural Surface Fields for Human Modeling from Monocular Depth
- **Authors:** Yuxuan Xue, Bharat Lal Bhatnagar, Riccardo Marin, Nikolaos Sarafianos, Yuanlu Xu, Gerard Pons-Moll, Tony Tung
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV)
- **Arxiv link:** https://arxiv.org/abs/2308.14847
- **Pdf link:** https://arxiv.org/pdf/2308.14847
- **Abstract**
Obtaining personalized 3D animatable avatars from a monocular camera has several real world applications in gaming, virtual try-on, animation, and VR/XR, etc. However, it is very challenging to model dynamic and fine-grained clothing deformations from such sparse data. Existing methods for modeling 3D humans from depth data have limitations in terms of computational efficiency, mesh coherency, and flexibility in resolution and topology. For instance, reconstructing shapes using implicit functions and extracting explicit meshes per frame is computationally expensive and cannot ensure coherent meshes across frames. Moreover, predicting per-vertex deformations on a pre-designed human template with a discrete surface lacks flexibility in resolution and topology. To overcome these limitations, we propose a novel method, 'Neural Surface Fields', for modeling 3D clothed humans from monocular depth. NSF defines a neural field solely on the base surface which models a continuous and flexible displacement field. NSF can be adapted to the base surface with different resolution and topology without retraining at inference time. Compared to existing approaches, our method eliminates the expensive per-frame surface extraction while maintaining mesh coherency, and is capable of reconstructing meshes with arbitrary resolution without retraining. To foster research in this direction, we release our code in project page at: https://yuxuan-xue.com/nsf.
### Auto-Prompting SAM for Mobile Friendly 3D Medical Image Segmentation
- **Authors:** Chengyin Li, Prashant Khanduri, Yao Qiang, Rafi Ibn Sultan, Indrin Chetty, Dongxiao Zhu
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Artificial Intelligence (cs.AI)
- **Arxiv link:** https://arxiv.org/abs/2308.14936
- **Pdf link:** https://arxiv.org/pdf/2308.14936
- **Abstract**
The Segment Anything Model (SAM) has rapidly been adopted for segmenting a wide range of natural images. However, recent studies have indicated that SAM exhibits subpar performance on 3D medical image segmentation tasks. In addition to the domain gaps between natural and medical images, disparities in the spatial arrangement between 2D and 3D images, the substantial computational burden imposed by powerful GPU servers, and the time-consuming manual prompt generation impede the extension of SAM to a broader spectrum of medical image segmentation applications. To address these challenges, in this work, we introduce a novel method, AutoSAM Adapter, designed specifically for 3D multi-organ CT-based segmentation. We employ parameter-efficient adaptation techniques in developing an automatic prompt learning paradigm to facilitate the transformation of the SAM model's capabilities to 3D medical image segmentation, eliminating the need for manually generated prompts. Furthermore, we effectively transfer the acquired knowledge of the AutoSAM Adapter to other lightweight models specifically tailored for 3D medical image analysis, achieving state-of-the-art (SOTA) performance on medical image segmentation tasks. Through extensive experimental evaluation, we demonstrate the AutoSAM Adapter as a critical foundation for effectively leveraging the emerging ability of foundation models in 2D natural image segmentation for 3D medical image segmentation.
### Read-only Prompt Optimization for Vision-Language Few-shot Learning
- **Authors:** Dongjun Lee, Seokwon Song, Jihee Suh, Joonmyeong Choi, Sanghyeok Lee, Hyunwoo J.Kim
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV)
- **Arxiv link:** https://arxiv.org/abs/2308.14960
- **Pdf link:** https://arxiv.org/pdf/2308.14960
- **Abstract**
In recent years, prompt tuning has proven effective in adapting pre-trained vision-language models to downstream tasks. These methods aim to adapt the pre-trained models by introducing learnable prompts while keeping pre-trained weights frozen. However, learnable prompts can affect the internal representation within the self-attention module, which may negatively impact performance variance and generalization, especially in data-deficient settings. To address these issues, we propose a novel approach, Read-only Prompt Optimization (RPO). RPO leverages masked attention to prevent the internal representation shift in the pre-trained model. Further, to facilitate the optimization of RPO, the read-only prompts are initialized based on special tokens of the pre-trained model. Our extensive experiments demonstrate that RPO outperforms CLIP and CoCoOp in base-to-new generalization and domain generalization while displaying better robustness. Also, the proposed method achieves better generalization on extremely data-deficient settings, while improving parameter efficiency and computational overhead. Code is available at https://github.com/mlvlab/RPO.
### A Multimodal Visual Encoding Model Aided by Introducing Verbal Semantic Information
- **Authors:** Shuxiao Ma, Linyuan Wang, Bin Yan
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Artificial Intelligence (cs.AI); Neurons and Cognition (q-bio.NC)
- **Arxiv link:** https://arxiv.org/abs/2308.15142
- **Pdf link:** https://arxiv.org/pdf/2308.15142
- **Abstract**
Biological research has revealed that the verbal semantic information in the brain cortex, as an additional source, participates in nonverbal semantic tasks, such as visual encoding. However, previous visual encoding models did not incorporate verbal semantic information, contradicting this biological finding. This paper proposes a multimodal visual information encoding network model based on stimulus images and associated textual information in response to this issue. Our visual information encoding network model takes stimulus images as input and leverages textual information generated by a text-image generation model as verbal semantic information. This approach injects new information into the visual encoding model. Subsequently, a Transformer network aligns image and text feature information, creating a multimodal feature space. A convolutional network then maps from this multimodal feature space to voxel space, constructing the multimodal visual information encoding network model. Experimental results demonstrate that the proposed multimodal visual information encoding network model outperforms previous models under the exact training cost. In voxel prediction of the left hemisphere of subject 1's brain, the performance improves by approximately 15.87%, while in the right hemisphere, the performance improves by about 4.6%. The multimodal visual encoding network model exhibits superior encoding performance. Additionally, ablation experiments indicate that our proposed model better simulates the brain's visual information processing.
## Keyword: image signal processing
There is no result
## Keyword: image signal process
There is no result
## Keyword: compression
### Entropy-based Guidance of Deep Neural Networks for Accelerated Convergence and Improved Performance
- **Authors:** Mackenzie J. Meni, Ryan T. White, Michael Mayo, Kevin Pilkiewicz
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Machine Learning (cs.LG); Image and Video Processing (eess.IV)
- **Arxiv link:** https://arxiv.org/abs/2308.14938
- **Pdf link:** https://arxiv.org/pdf/2308.14938
- **Abstract**
Neural networks have dramatically increased our capacity to learn from large, high-dimensional datasets across innumerable disciplines. However, their decisions are not easily interpretable, their computational costs are high, and building and training them are uncertain processes. To add structure to these efforts, we derive new mathematical results to efficiently measure the changes in entropy as fully-connected and convolutional neural networks process data, and introduce entropy-based loss terms. Experiments in image compression and image classification on benchmark datasets demonstrate these losses guide neural networks to learn rich latent data representations in fewer dimensions, converge in fewer training epochs, and achieve better test metrics.
### Is it an i or an l: Test-time Adaptation of Text Line Recognition Models
- **Authors:** Debapriya Tula, Sujoy Paul, Gagan Madan, Peter Garst, Reeve Ingle, Gaurav Aggarwal
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV)
- **Arxiv link:** https://arxiv.org/abs/2308.15037
- **Pdf link:** https://arxiv.org/pdf/2308.15037
- **Abstract**
Recognizing text lines from images is a challenging problem, especially for handwritten documents due to large variations in writing styles. While text line recognition models are generally trained on large corpora of real and synthetic data, such models can still make frequent mistakes if the handwriting is inscrutable or the image acquisition process adds corruptions, such as noise, blur, compression, etc. Writing style is generally quite consistent for an individual, which can be leveraged to correct mistakes made by such models. Motivated by this, we introduce the problem of adapting text line recognition models during test time. We focus on a challenging and realistic setting where, given only a single test image consisting of multiple text lines, the task is to adapt the model such that it performs better on the image, without any labels. We propose an iterative self-training approach that uses feedback from the language model to update the optical model, with confident self-labels in each iteration. The confidence measure is based on an augmentation mechanism that evaluates the divergence of the prediction of the model in a local region. We perform rigorous evaluation of our method on several benchmark datasets as well as their corrupted versions. Experimental results on multiple datasets spanning multiple scripts show that the proposed adaptation method offers an absolute improvement of up to 8% in character error rate with just a few iterations of self-training at test time.
## Keyword: RAW
### CEFHRI: A Communication Efficient Federated Learning Framework for Recognizing Industrial Human-Robot Interaction
- **Authors:** Umar Khalid, Hasan Iqbal, Saeed Vahidian, Jing Hua, Chen Chen
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV)
- **Arxiv link:** https://arxiv.org/abs/2308.14965
- **Pdf link:** https://arxiv.org/pdf/2308.14965
- **Abstract**
Human-robot interaction (HRI) is a rapidly growing field that encompasses social and industrial applications. Machine learning plays a vital role in industrial HRI by enhancing the adaptability and autonomy of robots in complex environments. However, data privacy is a crucial concern in the interaction between humans and robots, as companies need to protect sensitive data while machine learning algorithms require access to large datasets. Federated Learning (FL) offers a solution by enabling the distributed training of models without sharing raw data. Despite extensive research on Federated learning (FL) for tasks such as natural language processing (NLP) and image classification, the question of how to use FL for HRI remains an open research problem. The traditional FL approach involves transmitting large neural network parameter matrices between the server and clients, which can lead to high communication costs and often becomes a bottleneck in FL. This paper proposes a communication-efficient FL framework for human-robot interaction (CEFHRI) to address the challenges of data heterogeneity and communication costs. The framework leverages pre-trained models and introduces a trainable spatiotemporal adapter for video understanding tasks in HRI. Experimental results on three human-robot interaction benchmark datasets: HRI30, InHARD, and COIN demonstrate the superiority of CEFHRI over full fine-tuning in terms of communication costs. The proposed methodology provides a secure and efficient approach to HRI federated learning, particularly in industrial environments with data privacy concerns and limited communication bandwidth. Our code is available at https://github.com/umarkhalidAI/CEFHRI-Efficient-Federated-Learning.
## Keyword: raw image
There is no result
|
2.0
|
New submissions for Wed, 30 Aug 23 - ## Keyword: events
There is no result
## Keyword: event camera
There is no result
## Keyword: events camera
There is no result
## Keyword: white balance
There is no result
## Keyword: color contrast
There is no result
## Keyword: AWB
There is no result
## Keyword: ISP
### NSF: Neural Surface Fields for Human Modeling from Monocular Depth
- **Authors:** Yuxuan Xue, Bharat Lal Bhatnagar, Riccardo Marin, Nikolaos Sarafianos, Yuanlu Xu, Gerard Pons-Moll, Tony Tung
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV)
- **Arxiv link:** https://arxiv.org/abs/2308.14847
- **Pdf link:** https://arxiv.org/pdf/2308.14847
- **Abstract**
Obtaining personalized 3D animatable avatars from a monocular camera has several real world applications in gaming, virtual try-on, animation, and VR/XR, etc. However, it is very challenging to model dynamic and fine-grained clothing deformations from such sparse data. Existing methods for modeling 3D humans from depth data have limitations in terms of computational efficiency, mesh coherency, and flexibility in resolution and topology. For instance, reconstructing shapes using implicit functions and extracting explicit meshes per frame is computationally expensive and cannot ensure coherent meshes across frames. Moreover, predicting per-vertex deformations on a pre-designed human template with a discrete surface lacks flexibility in resolution and topology. To overcome these limitations, we propose a novel method, 'Neural Surface Fields', for modeling 3D clothed humans from monocular depth. NSF defines a neural field solely on the base surface which models a continuous and flexible displacement field. NSF can be adapted to the base surface with different resolution and topology without retraining at inference time. Compared to existing approaches, our method eliminates the expensive per-frame surface extraction while maintaining mesh coherency, and is capable of reconstructing meshes with arbitrary resolution without retraining. To foster research in this direction, we release our code in project page at: https://yuxuan-xue.com/nsf.
### Auto-Prompting SAM for Mobile Friendly 3D Medical Image Segmentation
- **Authors:** Chengyin Li, Prashant Khanduri, Yao Qiang, Rafi Ibn Sultan, Indrin Chetty, Dongxiao Zhu
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Artificial Intelligence (cs.AI)
- **Arxiv link:** https://arxiv.org/abs/2308.14936
- **Pdf link:** https://arxiv.org/pdf/2308.14936
- **Abstract**
The Segment Anything Model (SAM) has rapidly been adopted for segmenting a wide range of natural images. However, recent studies have indicated that SAM exhibits subpar performance on 3D medical image segmentation tasks. In addition to the domain gaps between natural and medical images, disparities in the spatial arrangement between 2D and 3D images, the substantial computational burden imposed by powerful GPU servers, and the time-consuming manual prompt generation impede the extension of SAM to a broader spectrum of medical image segmentation applications. To address these challenges, in this work, we introduce a novel method, AutoSAM Adapter, designed specifically for 3D multi-organ CT-based segmentation. We employ parameter-efficient adaptation techniques in developing an automatic prompt learning paradigm to facilitate the transformation of the SAM model's capabilities to 3D medical image segmentation, eliminating the need for manually generated prompts. Furthermore, we effectively transfer the acquired knowledge of the AutoSAM Adapter to other lightweight models specifically tailored for 3D medical image analysis, achieving state-of-the-art (SOTA) performance on medical image segmentation tasks. Through extensive experimental evaluation, we demonstrate the AutoSAM Adapter as a critical foundation for effectively leveraging the emerging ability of foundation models in 2D natural image segmentation for 3D medical image segmentation.
### Read-only Prompt Optimization for Vision-Language Few-shot Learning
- **Authors:** Dongjun Lee, Seokwon Song, Jihee Suh, Joonmyeong Choi, Sanghyeok Lee, Hyunwoo J.Kim
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV)
- **Arxiv link:** https://arxiv.org/abs/2308.14960
- **Pdf link:** https://arxiv.org/pdf/2308.14960
- **Abstract**
In recent years, prompt tuning has proven effective in adapting pre-trained vision-language models to downstream tasks. These methods aim to adapt the pre-trained models by introducing learnable prompts while keeping pre-trained weights frozen. However, learnable prompts can affect the internal representation within the self-attention module, which may negatively impact performance variance and generalization, especially in data-deficient settings. To address these issues, we propose a novel approach, Read-only Prompt Optimization (RPO). RPO leverages masked attention to prevent the internal representation shift in the pre-trained model. Further, to facilitate the optimization of RPO, the read-only prompts are initialized based on special tokens of the pre-trained model. Our extensive experiments demonstrate that RPO outperforms CLIP and CoCoOp in base-to-new generalization and domain generalization while displaying better robustness. Also, the proposed method achieves better generalization on extremely data-deficient settings, while improving parameter efficiency and computational overhead. Code is available at https://github.com/mlvlab/RPO.
### A Multimodal Visual Encoding Model Aided by Introducing Verbal Semantic Information
- **Authors:** Shuxiao Ma, Linyuan Wang, Bin Yan
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Artificial Intelligence (cs.AI); Neurons and Cognition (q-bio.NC)
- **Arxiv link:** https://arxiv.org/abs/2308.15142
- **Pdf link:** https://arxiv.org/pdf/2308.15142
- **Abstract**
Biological research has revealed that the verbal semantic information in the brain cortex, as an additional source, participates in nonverbal semantic tasks, such as visual encoding. However, previous visual encoding models did not incorporate verbal semantic information, contradicting this biological finding. This paper proposes a multimodal visual information encoding network model based on stimulus images and associated textual information in response to this issue. Our visual information encoding network model takes stimulus images as input and leverages textual information generated by a text-image generation model as verbal semantic information. This approach injects new information into the visual encoding model. Subsequently, a Transformer network aligns image and text feature information, creating a multimodal feature space. A convolutional network then maps from this multimodal feature space to voxel space, constructing the multimodal visual information encoding network model. Experimental results demonstrate that the proposed multimodal visual information encoding network model outperforms previous models under the exact training cost. In voxel prediction of the left hemisphere of subject 1's brain, the performance improves by approximately 15.87%, while in the right hemisphere, the performance improves by about 4.6%. The multimodal visual encoding network model exhibits superior encoding performance. Additionally, ablation experiments indicate that our proposed model better simulates the brain's visual information processing.
## Keyword: image signal processing
There is no result
## Keyword: image signal process
There is no result
## Keyword: compression
### Entropy-based Guidance of Deep Neural Networks for Accelerated Convergence and Improved Performance
- **Authors:** Mackenzie J. Meni, Ryan T. White, Michael Mayo, Kevin Pilkiewicz
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Machine Learning (cs.LG); Image and Video Processing (eess.IV)
- **Arxiv link:** https://arxiv.org/abs/2308.14938
- **Pdf link:** https://arxiv.org/pdf/2308.14938
- **Abstract**
Neural networks have dramatically increased our capacity to learn from large, high-dimensional datasets across innumerable disciplines. However, their decisions are not easily interpretable, their computational costs are high, and building and training them are uncertain processes. To add structure to these efforts, we derive new mathematical results to efficiently measure the changes in entropy as fully-connected and convolutional neural networks process data, and introduce entropy-based loss terms. Experiments in image compression and image classification on benchmark datasets demonstrate these losses guide neural networks to learn rich latent data representations in fewer dimensions, converge in fewer training epochs, and achieve better test metrics.
### Is it an i or an l: Test-time Adaptation of Text Line Recognition Models
- **Authors:** Debapriya Tula, Sujoy Paul, Gagan Madan, Peter Garst, Reeve Ingle, Gaurav Aggarwal
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV)
- **Arxiv link:** https://arxiv.org/abs/2308.15037
- **Pdf link:** https://arxiv.org/pdf/2308.15037
- **Abstract**
Recognizing text lines from images is a challenging problem, especially for handwritten documents due to large variations in writing styles. While text line recognition models are generally trained on large corpora of real and synthetic data, such models can still make frequent mistakes if the handwriting is inscrutable or the image acquisition process adds corruptions, such as noise, blur, compression, etc. Writing style is generally quite consistent for an individual, which can be leveraged to correct mistakes made by such models. Motivated by this, we introduce the problem of adapting text line recognition models during test time. We focus on a challenging and realistic setting where, given only a single test image consisting of multiple text lines, the task is to adapt the model such that it performs better on the image, without any labels. We propose an iterative self-training approach that uses feedback from the language model to update the optical model, with confident self-labels in each iteration. The confidence measure is based on an augmentation mechanism that evaluates the divergence of the prediction of the model in a local region. We perform rigorous evaluation of our method on several benchmark datasets as well as their corrupted versions. Experimental results on multiple datasets spanning multiple scripts show that the proposed adaptation method offers an absolute improvement of up to 8% in character error rate with just a few iterations of self-training at test time.
## Keyword: RAW
### CEFHRI: A Communication Efficient Federated Learning Framework for Recognizing Industrial Human-Robot Interaction
- **Authors:** Umar Khalid, Hasan Iqbal, Saeed Vahidian, Jing Hua, Chen Chen
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV)
- **Arxiv link:** https://arxiv.org/abs/2308.14965
- **Pdf link:** https://arxiv.org/pdf/2308.14965
- **Abstract**
Human-robot interaction (HRI) is a rapidly growing field that encompasses social and industrial applications. Machine learning plays a vital role in industrial HRI by enhancing the adaptability and autonomy of robots in complex environments. However, data privacy is a crucial concern in the interaction between humans and robots, as companies need to protect sensitive data while machine learning algorithms require access to large datasets. Federated Learning (FL) offers a solution by enabling the distributed training of models without sharing raw data. Despite extensive research on Federated learning (FL) for tasks such as natural language processing (NLP) and image classification, the question of how to use FL for HRI remains an open research problem. The traditional FL approach involves transmitting large neural network parameter matrices between the server and clients, which can lead to high communication costs and often becomes a bottleneck in FL. This paper proposes a communication-efficient FL framework for human-robot interaction (CEFHRI) to address the challenges of data heterogeneity and communication costs. The framework leverages pre-trained models and introduces a trainable spatiotemporal adapter for video understanding tasks in HRI. Experimental results on three human-robot interaction benchmark datasets: HRI30, InHARD, and COIN demonstrate the superiority of CEFHRI over full fine-tuning in terms of communication costs. The proposed methodology provides a secure and efficient approach to HRI federated learning, particularly in industrial environments with data privacy concerns and limited communication bandwidth. Our code is available at https://github.com/umarkhalidAI/CEFHRI-Efficient-Federated-Learning.
## Keyword: raw image
There is no result
|
process
|
New submissions for Wed, Aug […]

Keyword "events": there is no result. Keyword "event camera": there is no result. Keyword "events camera": there is no result. Keyword "white balance": there is no result. Keyword "color contrast": there is no result. Keyword "awb": there is no result.

Keyword "isp":

1. **NSF: Neural Surface Fields for Human Modeling From Monocular Depth**
   - Authors: Yuxuan Xue, Bharat Lal Bhatnagar, Riccardo Marin, Nikolaos Sarafianos, Yuanlu Xu, Gerard Pons-Moll, Tony Tung
   - Subjects: Computer Vision and Pattern Recognition (cs.CV)
   - Links: arXiv, PDF [URLs elided]
   - Abstract: Obtaining personalized animatable avatars from a monocular camera has several real-world applications in gaming, virtual try-on, animation and VR/XR, etc. However, it is very challenging to model dynamic and fine-grained clothing deformations from such sparse data. Existing methods for modeling humans from depth data have limitations in terms of computational efficiency, mesh coherency, and flexibility in resolution and topology. For instance, reconstructing shapes using implicit functions and extracting explicit meshes per frame is computationally expensive and cannot ensure coherent meshes across frames. Moreover, predicting per-vertex deformations on a pre-designed human template with a discrete surface lacks flexibility in resolution and topology. To overcome these limitations, we propose a novel method, NSF (Neural Surface Fields), for modeling clothed humans from monocular depth. NSF defines a neural field solely on the base surface, which models a continuous and flexible displacement field. NSF can be adapted to the base surface with different resolution and topology without retraining at inference time. Compared to existing approaches, our method eliminates the expensive per-frame surface extraction while maintaining mesh coherency, and is capable of reconstructing meshes with arbitrary resolution without retraining. To foster research in this direction, we release our code on the project page [URL elided].

2. **Auto-Prompting SAM for Mobile-Friendly Medical Image Segmentation**
   - Authors: Chengyin Li, Prashant Khanduri, Yao Qiang, Rafi Ibn Sultan, Indrin Chetty, Dongxiao Zhu
   - Subjects: Computer Vision and Pattern Recognition (cs.CV); Artificial Intelligence (cs.AI)
   - Links: arXiv, PDF [URLs elided]
   - Abstract: The Segment Anything Model (SAM) has rapidly been adopted for segmenting a wide range of natural images. However, recent studies have indicated that SAM exhibits subpar performance on medical image segmentation tasks. In addition to the domain gaps between natural and medical images, disparities in the spatial arrangement between […] and […] images, the substantial computational burden imposed by powerful GPU servers, and the time-consuming manual prompt generation impede the extension of SAM to a broader spectrum of medical image segmentation applications. To address these challenges, in this work we introduce a novel method, AutoSAM Adapter, designed specifically for multi-organ CT-based segmentation. We employ parameter-efficient adaptation techniques in developing an automatic prompt learning paradigm to facilitate the transformation of the SAM model's capabilities to medical image segmentation, eliminating the need for manually generated prompts. Furthermore, we effectively transfer the acquired knowledge of the AutoSAM Adapter to other lightweight models specifically tailored for medical image analysis, achieving state-of-the-art (SOTA) performance on medical image segmentation tasks. Through extensive experimental evaluation, we demonstrate the AutoSAM Adapter as a critical foundation for effectively leveraging the emerging ability of foundation models in natural image segmentation for medical image segmentation.

3. **Read-only Prompt Optimization for Vision-Language Few-shot Learning**
   - Authors: Dongjun Lee, Seokwon Song, Jihee Suh, Joonmyeong Choi, Sanghyeok Lee, Hyunwoo J. Kim
   - Subjects: Computer Vision and Pattern Recognition (cs.CV)
   - Links: arXiv, PDF [URLs elided]
   - Abstract: In recent years, prompt tuning has proven effective in adapting pre-trained vision-language models to downstream tasks. These methods aim to adapt the pre-trained models by introducing learnable prompts while keeping pre-trained weights frozen. However, learnable prompts can affect the internal representation within the self-attention module, which may negatively impact performance variance and generalization, especially in data-deficient settings. To address these issues, we propose a novel approach, Read-only Prompt Optimization (RPO). RPO leverages masked attention to prevent the internal representation shift in the pre-trained model. Further, to facilitate the optimization of RPO, the read-only prompts are initialized based on special tokens of the pre-trained model. Our extensive experiments demonstrate that RPO outperforms CLIP and CoCoOp in base-to-new generalization and domain generalization while displaying better robustness. Also, the proposed method achieves better generalization in extremely data-deficient settings, while improving parameter efficiency and computational overhead. Code is available at [URL elided].

4. **A Multimodal Visual Encoding Model Aided by Introducing Verbal Semantic Information**
   - Authors: Shuxiao Ma, Linyuan Wang, Bin Yan
   - Subjects: Computer Vision and Pattern Recognition (cs.CV); Artificial Intelligence (cs.AI); Neurons and Cognition (q-bio.NC)
   - Links: arXiv, PDF [URLs elided]
   - Abstract: Biological research has revealed that verbal semantic information in the brain cortex, as an additional source, participates in nonverbal semantic tasks such as visual encoding. However, previous visual encoding models did not incorporate verbal semantic information, contradicting this biological finding. This paper proposes a multimodal visual information encoding network model based on stimulus images and associated textual information in response to this issue. Our visual information encoding network model takes stimulus images as input and leverages textual information generated by a text-image generation model as verbal semantic information. This approach injects new information into the visual encoding model. Subsequently, a transformer network aligns image and text feature information, creating a multimodal feature space. A convolutional network then maps from this multimodal feature space to voxel space, constructing the multimodal visual information encoding network model. Experimental results demonstrate that the proposed multimodal visual information encoding network model outperforms previous models under the exact training cost. In voxel prediction of the left hemisphere of subject […]'s brain, the performance improves by approximately […], while in the right hemisphere the performance improves by about […]. The multimodal visual encoding network model exhibits superior encoding performance. Additionally, ablation experiments indicate that our proposed model better simulates the brain's visual information processing.

Keyword "image signal processing": there is no result. Keyword "image signal process": there is no result.

Keyword "compression":

5. **Entropy-Based Guidance of Deep Neural Networks for Accelerated Convergence and Improved Performance**
   - Authors: Mackenzie J. Meni, Ryan T. White, Michael Mayo, Kevin Pilkiewicz
   - Subjects: Computer Vision and Pattern Recognition (cs.CV); Machine Learning (cs.LG); Image and Video Processing (eess.IV)
   - Links: arXiv, PDF [URLs elided]
   - Abstract: Neural networks have dramatically increased our capacity to learn from large, high-dimensional datasets across innumerable disciplines. However, their decisions are not easily interpretable, their computational costs are high, and building and training them are uncertain processes. To add structure to these efforts, we derive new mathematical results to efficiently measure the changes in entropy as fully-connected and convolutional neural networks process data, and introduce entropy-based loss terms. Experiments in image compression and image classification on benchmark datasets demonstrate these losses guide neural networks to learn rich latent data representations in fewer dimensions, converge in fewer training epochs, and achieve better test metrics.

6. **Is it an i or an l: Test-Time Adaptation of Text Line Recognition Models**
   - Authors: Debapriya Tula, Sujoy Paul, Gagan Madan, Peter Garst, Reeve Ingle, Gaurav Aggarwal
   - Subjects: Computer Vision and Pattern Recognition (cs.CV)
   - Links: arXiv, PDF [URLs elided]
   - Abstract: Recognizing text lines from images is a challenging problem, especially for handwritten documents, due to large variations in writing styles. While text line recognition models are generally trained on large corpora of real and synthetic data, such models can still make frequent mistakes if the handwriting is inscrutable or the image acquisition process adds corruptions such as noise, blur, compression, etc. Writing style is generally quite consistent for an individual, which can be leveraged to correct mistakes made by such models. Motivated by this, we introduce the problem of adapting text line recognition models during test time. We focus on a challenging and realistic setting where, given only a single test image consisting of multiple text lines, the task is to adapt the model such that it performs better on the image, without any labels. We propose an iterative self-training approach that uses feedback from the language model to update the optical model, with confident self-labels in each iteration. The confidence measure is based on an augmentation mechanism that evaluates the divergence of the prediction of the model in a local region. We perform rigorous evaluation of our method on several benchmark datasets as well as their corrupted versions. Experimental results on multiple datasets spanning multiple scripts show that the proposed adaptation method offers an absolute improvement of up to […] in character error rate with just a few iterations of self-training at test time.

Keyword "raw":

7. **CEFHRI: A Communication-Efficient Federated Learning Framework for Recognizing Industrial Human-Robot Interaction**
   - Authors: Umar Khalid, Hasan Iqbal, Saeed Vahidian, Jing Hua, Chen Chen
   - Subjects: Computer Vision and Pattern Recognition (cs.CV)
   - Links: arXiv, PDF [URLs elided]
   - Abstract: Human-robot interaction (HRI) is a rapidly growing field that encompasses social and industrial applications. Machine learning plays a vital role in industrial HRI by enhancing the adaptability and autonomy of robots in complex environments. However, data privacy is a crucial concern in the interaction between humans and robots, as companies need to protect sensitive data while machine learning algorithms require access to large datasets. Federated learning (FL) offers a solution by enabling the distributed training of models without sharing raw data. Despite extensive research on federated learning (FL) for tasks such as natural language processing (NLP) and image classification, the question of how to use FL for HRI remains an open research problem. The traditional FL approach involves transmitting large neural network parameter matrices between the server and clients, which can lead to high communication costs and often becomes a bottleneck in FL. This paper proposes a communication-efficient FL framework for human-robot interaction (CEFHRI) to address the challenges of data heterogeneity and communication costs. The framework leverages pre-trained models and introduces a trainable spatiotemporal adapter for video understanding tasks in HRI. Experimental results on three human-robot interaction benchmark datasets ([…], InHARD, and COIN) demonstrate the superiority of CEFHRI over full fine-tuning in terms of communication costs. The proposed methodology provides a secure and efficient approach to HRI federated learning, particularly in industrial environments with data privacy concerns and limited communication bandwidth. Our code is available at [URL elided].

Keyword "raw image": there is no result.
binary_label: 1
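The entropy-guidance abstract above introduces entropy-based loss terms for neural networks. As a generic illustration only — the function name, weight value, and toy logits below are all made up for this sketch, and this is not the specific loss derived in that paper — a cross-entropy objective can be augmented with a penalty on the mean predictive entropy:

```python
import numpy as np

def softmax(logits):
    """Numerically stable softmax over the last axis."""
    z = logits - logits.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def entropy(probs, eps=1e-12):
    """Shannon entropy (in nats) of each row of a probability matrix."""
    return -(probs * np.log(probs + eps)).sum(axis=-1)

def entropy_regularized_loss(logits, labels, weight=0.1):
    """Mean cross-entropy plus a penalty on the mean predictive entropy.

    Pushing predictive entropy down is one generic way to encourage
    confident, compact representations; `weight` trades off the two terms.
    """
    probs = softmax(logits)
    n = logits.shape[0]
    ce = -np.log(probs[np.arange(n), labels] + 1e-12).mean()
    return ce + weight * entropy(probs).mean()
```

With `weight=0` this reduces to plain cross-entropy; any scheme that instead measures entropy changes layer by layer (as the abstract describes) would hook into the network internals rather than the output distribution.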
Record 78,630
- id: 3,511,934,488
- type: IssuesEvent
- created_at: 2016-01-10 17:19:30
- repo: kubernetes/kubernetes
- repo_url: https://api.github.com/repos/kubernetes/kubernetes
- action: closed
- title: MESOS: Possible to define Service port on Node
- labels: priority/P2 team/mesosphere
- body: Sorry if the title is vague. I'm running Kubernetes v1 on Mesos (0.24), and I'm having some trouble getting service ports mapped correctly. Whenever I create a service to serve as an endpoint for a pod, it always chooses the first available node port on it's scheduled node. I have my range defined to start at port 1000, so it just chooses sequentially from there. Shouldn't it be using the value I provide? Or is this a limitation in the Mesos platform?
- index: 1.0
- text_combine: (title and body concatenated, verbatim)
- label: non_process
- text: (lowercased copy of text_combine with punctuation and digits stripped)
- binary_label: 0
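For context on the issue body in the record above: in standard Kubernetes, a Service of type `NodePort` accepts an explicit `nodePort` in the spec, which must fall within the apiserver's configured node-port range (`--service-node-port-range`); whether the Mesos integration honored that field is exactly what the issue asks. A minimal sketch, with illustrative names and port numbers:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: example-svc          # illustrative name
spec:
  type: NodePort
  selector:
    app: example             # must match the pod's labels
  ports:
    - port: 80               # cluster-internal service port
      targetPort: 8080       # container port on the pod
      nodePort: 30080        # explicit node port; must lie in the node-port range
```

If `nodePort` is omitted, the control plane picks a free port from the range itself — consistent with the sequential allocation the reporter observed.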