| column | dtype | range / length / classes |
|---|---|---|
| Unnamed: 0 | int64 | 0 – 832k |
| id | float64 | 2.49B – 32.1B |
| type | string | 1 class |
| created_at | string | length 19 |
| repo | string | length 7 – 112 |
| repo_url | string | length 36 – 141 |
| action | string | 3 classes |
| title | string | length 1 – 744 |
| labels | string | length 4 – 574 |
| body | string | length 9 – 211k |
| index | string | 10 classes |
| text_combine | string | length 96 – 211k |
| label | string | 2 classes |
| text | string | length 96 – 188k |
| binary_label | int64 | 0 – 1 |
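The schema above pairs a string `label` column (`process` / `non_process`) with an integer `binary_label` column. Judging from the rows below, `binary_label` is simply the binary encoding of `label` (`process` → 1, `non_process` → 0). A minimal pandas sketch of that encoding, using a toy two-row frame (the column values are copied from the rows below; the real dataset has on the order of 832k rows):

```python
import pandas as pd

# Toy rows mirroring a few columns of the schema above.
df = pd.DataFrame({
    "type": ["IssuesEvent", "IssuesEvent"],
    "repo": ["GoogleCloudPlatform/fda-mystudies", "zilahir/teleprompter"],
    "action": ["closed", "closed"],
    "label": ["process", "non_process"],
})

# binary_label encodes label: "process" -> 1, "non_process" -> 0,
# matching the (label, binary_label) pairs visible in the data rows.
df["binary_label"] = (df["label"] == "process").astype(int)
print(df["binary_label"].tolist())  # -> [1, 0]
```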
18,498
| 24,551,122,177
|
IssuesEvent
|
2022-10-12 12:41:34
|
GoogleCloudPlatform/fda-mystudies
|
https://api.github.com/repos/GoogleCloudPlatform/fda-mystudies
|
closed
|
[iOS] [Offline indicator] Toast message is displayed instead of pop-up message in below scenario
|
Bug P2 iOS Process: Fixed Process: Tested dev
|
**Steps:**
1. Install the app
2. Don't sign in/signup
3. Click on any study
4. Switch off the internet
5. Click on 'Participate'
6. Observe
**Actual:** Toast message 'You are offline' is displayed on the sign-in screen
**Expected:** Pop-up message 'You seem to be offline. Please connect to a network to proceed with this action.' should be displayed
**Issue not observed for logged in users**
Actual:

Expected pop-up screen:

|
2.0
|
[iOS] [Offline indicator] Toast message is displayed instead of pop-up message in below scenario - **Steps:**
1. Install the app
2. Don't sign in/signup
3. Click on any study
4. Switch off the internet
5. Click on 'Participate'
6. Observe
**Actual:** Toast message 'You are offline' is displayed on the sign-in screen
**Expected:** Pop-up message 'You seem to be offline. Please connect to a network to proceed with this action.' should be displayed
**Issue not observed for logged in users**
Actual:

Expected pop-up screen:

|
process
|
toast message is displayed instead of pop up message in below scenario steps install the app don t sign in signup click on any study switch off the internet click on participate observe actual toast message you are offline is displayed in sign in screen expected pop up message you seem to be offline please connect to a network to proceed with this action pop up should be displayed issue not observed for logged in users actual expected pop up screen
| 1
|
94,072
| 11,846,407,413
|
IssuesEvent
|
2020-03-24 10:08:36
|
zilahir/teleprompter
|
https://api.github.com/repos/zilahir/teleprompter
|
closed
|
need design for error states
|
:nail_care: design :rocket: waiting for deploy
|
both of them in the header modals:
1) login error (error message: `email or password is not correct`)
2) registration error (error messages: a) `password mismatch`, b) `email is in use`)
|
1.0
|
need design for error states - both of them in the header modals:
1) login error (error message: `email or password is not correct`)
2) registration error (error messages: a) `password mismatch`, b) `email is in use`)
|
non_process
|
need design for error states both of them in the header modals login error error mesage email or password is not correct registration error error message a password mismatch b email is in use
| 0
|
120,342
| 4,788,151,579
|
IssuesEvent
|
2016-10-30 12:12:02
|
cyclejs/cyclejs
|
https://api.github.com/repos/cyclejs/cyclejs
|
closed
|
cycle/dom: cannot use the same isolate scope for parent and child components
|
issue is bug priority 4 (must) scope: dom
|
**Code to reproduce the issue:**
```js
import Cycle from '@cycle/xstream-run';
import {div, button, makeDOMDriver} from '@cycle/dom';
import isolate from '@cycle/isolate';
import xs from 'xstream';
function Item(sources, count) {
const childVdom$ = count > 0 ?
isolate(Item, '0')(sources, count-1).DOM : // REMOVE SCOPE '0' AND IT WILL WORK AS INTENDED
xs.of(undefined);
const highlight$ = sources.DOM.select('button').events('click')
.fold((x, _) => !x, false);
const vdom$ = xs.combine(childVdom$, highlight$)
.map(([childVdom, highlight]) =>
div({style: {'margin': '10px', border: '1px solid #999'}}, [
button({style: {'background-color': highlight ? '#f00' : '#fff'}}, 'click me'),
childVdom
])
);
return { DOM: vdom$ };
}
function main(sources) {
const vdom$ = Item(sources, 3).DOM;
return { DOM: vdom$ };
}
Cycle.run(main, {
DOM: makeDOMDriver('#main')
});
```
**Expected behavior:**
Each button is toggled red/white when clicked.
**Actual behavior:**
The inner `Item`s are not isolated from each other. Clicking any button toggles all the others.
**Versions of packages used:**
@cycle/dom: 12.2.5
@cycle/xstream-run: 3.1.0
@cycle/isolate: 1.4.0
xstream: 6.1.0
**Why anyone would want to do this:**
Because of cycle-onionify. In onionify, components in a list are isolated from each other by their index (0, 1, 2, ...). Thus if you make a list of lists of lists... (i.e. a tree) of the same component, you'll end up with many parent-child-grandchild-... chains sharing the same numerical isolate scope, like in the example above.
|
1.0
|
cycle/dom: cannot use the same isolate scope for parent and child components - **Code to reproduce the issue:**
```js
import Cycle from '@cycle/xstream-run';
import {div, button, makeDOMDriver} from '@cycle/dom';
import isolate from '@cycle/isolate';
import xs from 'xstream';
function Item(sources, count) {
const childVdom$ = count > 0 ?
isolate(Item, '0')(sources, count-1).DOM : // REMOVE SCOPE '0' AND IT WILL WORK AS INTENDED
xs.of(undefined);
const highlight$ = sources.DOM.select('button').events('click')
.fold((x, _) => !x, false);
const vdom$ = xs.combine(childVdom$, highlight$)
.map(([childVdom, highlight]) =>
div({style: {'margin': '10px', border: '1px solid #999'}}, [
button({style: {'background-color': highlight ? '#f00' : '#fff'}}, 'click me'),
childVdom
])
);
return { DOM: vdom$ };
}
function main(sources) {
const vdom$ = Item(sources, 3).DOM;
return { DOM: vdom$ };
}
Cycle.run(main, {
DOM: makeDOMDriver('#main')
});
```
**Expected behavior:**
Each button is toggled red/white when clicked.
**Actual behavior:**
The inner `Item`s are not isolated from each other. Clicking any button toggles all the others.
**Versions of packages used:**
@cycle/dom: 12.2.5
@cycle/xstream-run: 3.1.0
@cycle/isolate: 1.4.0
xstream: 6.1.0
**Why anyone would want to do this:**
Because of cycle-onionify. In onionify, components in a list are isolated from each other by their index (0, 1, 2, ...). Thus if you make a list of lists of lists... (i.e. a tree) of the same component, you'll end up with many parent-child-grandchild-... chains sharing the same numerical isolate scope, like in the example above.
|
non_process
|
cycle dom cannot use the same isolate scope for parent and child components code to reproduce the issue js import cycle from cycle xstream run import div button makedomdriver from cycle dom import isolate from cycle isolate import xs from xstream function item sources count const childvdom count isolate item sources count dom remove scope and it will work as intended xs of undefined const highlight sources dom select button events click fold x x false const vdom xs combine childvdom highlight map div style margin border solid button style background color highlight fff click me childvdom return dom vdom function main sources const vdom item sources dom return dom vdom cycle run main dom makedomdriver main expected behavior each button is toggled red white when clicked actual behavior the inner item s are not isolated from each other clicking any button toggles all the others versions of packages used cycle dom cycle xstream run cycle isolate xstream why anyone would want to do this because of cycle onionify in onionify components in a list are isolated from each other by their index thus if you make a list of lists of lists i e a tree of the same component you ll end up with many parent child grandchild with the same numerical isolate scope like in the example above
| 0
|
199,564
| 6,991,686,464
|
IssuesEvent
|
2017-12-15 01:28:58
|
kubernetes/kubernetes
|
https://api.github.com/repos/kubernetes/kubernetes
|
closed
|
Problem with replacing nodes
|
lifecycle/stale priority/backlog sig/api-machinery
|
When I run the kubectl replace command following [here](http://kubernetes.io/v1.0/docs/admin/cluster-management.html), I get the error below
```
kubectl replace nodes 10.162.64.136 --patch='{"apiVersion": "v1", "spec": {"unschedulable": true}}'
Error: unknown flag: --patch
Run 'kubectl help' for usage.
```
|
1.0
|
Problem with replacing nodes - When I run the kubectl replace command following [here](http://kubernetes.io/v1.0/docs/admin/cluster-management.html), I get the error below
```
kubectl replace nodes 10.162.64.136 --patch='{"apiVersion": "v1", "spec": {"unschedulable": true}}'
Error: unknown flag: --patch
Run 'kubectl help' for usage.
```
|
non_process
|
problem with replacing nodes when i run kubectl replace command follow i got the below error kubectl replace nodes patch apiversion spec unschedulable true error unknown flag patch run kubectl help for usage
| 0
|
225,537
| 7,482,406,736
|
IssuesEvent
|
2018-04-05 01:10:51
|
SETI/pds-opus
|
https://api.github.com/repos/SETI/pds-opus
|
opened
|
Handling of ert_sec is totally broken
|
Bug Effort-Easy Priority 3
|
See apps/metadata/views.py around line 288. The original version works only for obs_general TIME fields (normal observation time), not for ert, which occurs in BOTH Cassini and Voyager mission tables.
|
1.0
|
Handling of ert_sec is totally broken - See apps/metadata/views.py around line 288. The original version works only for obs_general TIME fields (normal observation time), not for ert, which occurs in BOTH Cassini and Voyager mission tables.
|
non_process
|
handling of ert sec is totally broken see apps metadata views py around line the original version works only for obs general time fields normal observation time not for ert which occurs in both cassini and voyage mission tables
| 0
|
6,188
| 9,102,663,928
|
IssuesEvent
|
2019-02-20 14:17:50
|
googleapis/google-cloud-cpp
|
https://api.github.com/repos/googleapis/google-cloud-cpp
|
closed
|
Document how to `make install`.
|
type: process
|
Our README file just describes how to compile the code, but not how to install it. Installing it requires having the dependencies installed themselves, and some of them do not document well how to do this.
|
1.0
|
Document how to `make install`. - Our README file just describes how to compile the code, but not how to install it. Installing it requires having the dependencies installed themselves, and some of them do not document well how to do this.
|
process
|
document how to make install our readme file just describes how to compile the code but not how to install it installing it requires having the dependencies installed themselves and some of them do not document well how to do this
| 1
|
11,121
| 13,957,685,612
|
IssuesEvent
|
2020-10-24 08:08:41
|
alexanderkotsev/geoportal
|
https://api.github.com/repos/alexanderkotsev/geoportal
|
opened
|
MT: Harvesting Process
|
Geoportal Harvesting process MT - Malta
|
Dear Geoportal Team,
On Friday 8th November 2019 I started a harvesting request but it seems that this request failed along the process. Is something wrong with the process or is it an issue from our side?
Regards,
Rene
|
1.0
|
MT: Harvesting Process - Dear Geoportal Team,
On Friday 8th November 2019 I started a harvesting request but it seems that this request failed along the process. Is something wrong with the process or is it an issue from our side?
Regards,
Rene
|
process
|
mt harvesting process dear geoportal team on friday november i started a harvesting request but it seems that this request failed along the process is something wrong with the process or is it an issue from our side regards rene
| 1
|
59,223
| 14,369,086,787
|
IssuesEvent
|
2020-12-01 09:18:44
|
ignatandrei/stankins
|
https://api.github.com/repos/ignatandrei/stankins
|
closed
|
CVE-2019-6283 (Medium) detected in multiple libraries
|
security vulnerability
|
## CVE-2019-6283 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>node-sass-4.10.0.tgz</b>, <b>node-sass-4.9.3.tgz</b></p></summary>
<p>
<details><summary><b>node-sass-4.10.0.tgz</b></p></summary>
<p>Wrapper around libsass</p>
<p>Library home page: <a href="https://registry.npmjs.org/node-sass/-/node-sass-4.10.0.tgz">https://registry.npmjs.org/node-sass/-/node-sass-4.10.0.tgz</a></p>
<p>Path to dependency file: /tmp/ws-scm/stankins/stankinsv2/solution/StankinsV2/StankinsDataWebAngular/package.json</p>
<p>Path to vulnerable library: /tmp/ws-scm/stankins/stankinsv2/solution/StankinsV2/StankinsDataWebAngular/node_modules/node-sass/package.json</p>
<p>
Dependency Hierarchy:
- build-angular-0.11.4.tgz (Root Library)
- :x: **node-sass-4.10.0.tgz** (Vulnerable Library)
</details>
<details><summary><b>node-sass-4.9.3.tgz</b></p></summary>
<p>Wrapper around libsass</p>
<p>Library home page: <a href="https://registry.npmjs.org/node-sass/-/node-sass-4.9.3.tgz">https://registry.npmjs.org/node-sass/-/node-sass-4.9.3.tgz</a></p>
<p>Path to dependency file: /tmp/ws-scm/stankins/stankinsv2/solution/StankinsV2/StankinsAliveAngular/package.json</p>
<p>Path to vulnerable library: /tmp/ws-scm/stankins/stankinsv2/solution/StankinsV2/StankinsAliveAngular/node_modules/node-sass/package.json</p>
<p>
Dependency Hierarchy:
- build-angular-0.10.7.tgz (Root Library)
- :x: **node-sass-4.9.3.tgz** (Vulnerable Library)
</details>
<p>Found in HEAD commit: <a href="https://github.com/ignatandrei/stankins/commit/525550ef1e023c62d5d53d2f2bce03d5d168d46e">525550ef1e023c62d5d53d2f2bce03d5d168d46e</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
In LibSass 3.5.5, a heap-based buffer over-read exists in Sass::Prelexer::parenthese_scope in prelexer.hpp.
<p>Publish Date: 2019-01-14
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2019-6283>CVE-2019-6283</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>6.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: Required
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-6284">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-6284</a></p>
<p>Release Date: 2019-08-06</p>
<p>Fix Resolution: LibSass - 3.6.0</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
True
|
CVE-2019-6283 (Medium) detected in multiple libraries - ## CVE-2019-6283 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>node-sass-4.10.0.tgz</b>, <b>node-sass-4.9.3.tgz</b></p></summary>
<p>
<details><summary><b>node-sass-4.10.0.tgz</b></p></summary>
<p>Wrapper around libsass</p>
<p>Library home page: <a href="https://registry.npmjs.org/node-sass/-/node-sass-4.10.0.tgz">https://registry.npmjs.org/node-sass/-/node-sass-4.10.0.tgz</a></p>
<p>Path to dependency file: /tmp/ws-scm/stankins/stankinsv2/solution/StankinsV2/StankinsDataWebAngular/package.json</p>
<p>Path to vulnerable library: /tmp/ws-scm/stankins/stankinsv2/solution/StankinsV2/StankinsDataWebAngular/node_modules/node-sass/package.json</p>
<p>
Dependency Hierarchy:
- build-angular-0.11.4.tgz (Root Library)
- :x: **node-sass-4.10.0.tgz** (Vulnerable Library)
</details>
<details><summary><b>node-sass-4.9.3.tgz</b></p></summary>
<p>Wrapper around libsass</p>
<p>Library home page: <a href="https://registry.npmjs.org/node-sass/-/node-sass-4.9.3.tgz">https://registry.npmjs.org/node-sass/-/node-sass-4.9.3.tgz</a></p>
<p>Path to dependency file: /tmp/ws-scm/stankins/stankinsv2/solution/StankinsV2/StankinsAliveAngular/package.json</p>
<p>Path to vulnerable library: /tmp/ws-scm/stankins/stankinsv2/solution/StankinsV2/StankinsAliveAngular/node_modules/node-sass/package.json</p>
<p>
Dependency Hierarchy:
- build-angular-0.10.7.tgz (Root Library)
- :x: **node-sass-4.9.3.tgz** (Vulnerable Library)
</details>
<p>Found in HEAD commit: <a href="https://github.com/ignatandrei/stankins/commit/525550ef1e023c62d5d53d2f2bce03d5d168d46e">525550ef1e023c62d5d53d2f2bce03d5d168d46e</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
In LibSass 3.5.5, a heap-based buffer over-read exists in Sass::Prelexer::parenthese_scope in prelexer.hpp.
<p>Publish Date: 2019-01-14
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2019-6283>CVE-2019-6283</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>6.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: Required
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-6284">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-6284</a></p>
<p>Release Date: 2019-08-06</p>
<p>Fix Resolution: LibSass - 3.6.0</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
non_process
|
cve medium detected in multiple libraries cve medium severity vulnerability vulnerable libraries node sass tgz node sass tgz node sass tgz wrapper around libsass library home page a href path to dependency file tmp ws scm stankins solution stankinsdatawebangular package json path to vulnerable library tmp ws scm stankins solution stankinsdatawebangular node modules node sass package json dependency hierarchy build angular tgz root library x node sass tgz vulnerable library node sass tgz wrapper around libsass library home page a href path to dependency file tmp ws scm stankins solution stankinsaliveangular package json path to vulnerable library tmp ws scm stankins solution stankinsaliveangular node modules node sass package json dependency hierarchy build angular tgz root library x node sass tgz vulnerable library found in head commit a href vulnerability details in libsass a heap based buffer over read exists in sass prelexer parenthese scope in prelexer hpp publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction required scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution libsass step up your open source security game with whitesource
| 0
|
54,064
| 29,502,783,405
|
IssuesEvent
|
2023-06-03 01:04:42
|
jqlang/jq
|
https://api.github.com/repos/jqlang/jq
|
closed
|
`first` definition is extremely complicated
|
performance
|
In the file [builtin.jq](https://github.com/stedolan/jq/blob/master/src/builtin.jq) I find the definition of `first` extremely complicated:
```
def first(g): label $out | foreach g as $item ([false, null]; if .[0]==true then break $out else [true, $item] end; .[1]) ;
```
I think this one will do the job, or am I missing something?
```
def first(g): label $pipe | g | ., break $pipe ;
```
|
True
|
`first` definition is extremely complicated - In the file [builtin.jq](https://github.com/stedolan/jq/blob/master/src/builtin.jq) I find the definition of `first` extremely complicated:
```
def first(g): label $out | foreach g as $item ([false, null]; if .[0]==true then break $out else [true, $item] end; .[1]) ;
```
I think this one will do the job, or am I missing something?
```
def first(g): label $pipe | g | ., break $pipe ;
```
|
non_process
|
first definition is extremely complicated in the file i find the definition of limit extremely complicated def first g label out foreach g as item if true then break out else end i thik this one will do the work or i m missing something def first g label pipe g break pipe
| 0
|
603,663
| 18,670,399,361
|
IssuesEvent
|
2021-10-30 15:49:08
|
AY2122S1-CS2103-T14-1/tp
|
https://api.github.com/repos/AY2122S1-CS2103-T14-1/tp
|
closed
|
[PE-D] Formatting of Input Parameter details
|
priority.High type.Document type.Duplicate
|

The annotated section is slightly difficult to read and follow (especially from point 7 - `FREQUENCY` onwards). Perhaps it could be formatted into a table?
Nit - there is a line break between points 1 and 2.
<!--session: 1635494469100-6c9cee70-f4f6-40a9-8e5b-7874850be29d-->
<!--Version: Web v3.4.1-->
-------------
Labels: `severity.Low` `type.DocumentationBug`
original: shaliniseshadri/ped#5
|
1.0
|
[PE-D] Formatting of Input Parameter details - 
The annotated section is slightly difficult to read and follow (especially from point 7 - `FREQUENCY` onwards). Perhaps it could be formatted into a table?
Nit - there is a line break between points 1 and 2.
<!--session: 1635494469100-6c9cee70-f4f6-40a9-8e5b-7874850be29d-->
<!--Version: Web v3.4.1-->
-------------
Labels: `severity.Low` `type.DocumentationBug`
original: shaliniseshadri/ped#5
|
non_process
|
formatting of input parameter details the annotated section is a slightly difficult to read and follow especially from point frequency onwards perhaps it could be formatted into a table nit there is a line break between points and labels severity low type documentationbug original shaliniseshadri ped
| 0
|
7,570
| 10,684,603,698
|
IssuesEvent
|
2019-10-22 10:51:08
|
aiidateam/aiida-core
|
https://api.github.com/repos/aiidateam/aiida-core
|
closed
|
Except submitted processes whose class cannot be loaded
|
priority/nice-to-have topic/engine topic/processes type/accepted feature
|
Currently the task is just acknowledged if the node cannot be loaded. This is necessary because often the reason for the node failure is that it no longer exists. Since the node cannot be loaded, its state can also not be changed. However, in the case where it is the loading of the class that fails, the node _can_ be loaded. In this case it is better to set the process state to excepted instead of leaving it in created. This will depend on [this issue in plumpy](https://github.com/aiidateam/plumpy/issues/123) to be fixed such that the specific import error can be caught.
|
1.0
|
Except submitted processes whose class cannot be loaded - Currently the task is just acknowledged if the node cannot be loaded. This is necessary because often the reason for the node failure is that it no longer exists. Since the node cannot be loaded, its state can also not be changed. However, in the case where it is the loading of the class that fails, the node _can_ be loaded. In this case it is better to set the process state to excepted instead of leaving it in created. This will depend on [this issue in plumpy](https://github.com/aiidateam/plumpy/issues/123) to be fixed such that the specific import error can be caught.
|
process
|
except submitted processes whose class cannot be loaded currently the task is just acknowledged if the node cannot be loaded this is necessary because often the reason for the node failure is that it no longer exists since the node cannot be loaded its state can also not be changed however in the case where it is the loading of the class that fails the node can be loaded in this case it is better to set the process state to excepted instead of leaving it in created this will depend on to be fixed such that the specific import error can be caught
| 1
|
149,994
| 13,307,049,285
|
IssuesEvent
|
2020-08-25 21:21:57
|
clastix/capsule
|
https://api.github.com/repos/clastix/capsule
|
closed
|
Fix typos in the docs
|
documentation good first issue help wanted
|
Pretty straightforward: there are several typos in the documentation that sound terrible in English.
Help is wanted, so please don't hesitate to refer to this issue while contributing to fixing small typos on Capsule! :tada:
|
1.0
|
Fix typos in the docs - Pretty straightforward: there are several typos in the documentation that sound terrible in English.
Help is wanted, so please don't hesitate to refer to this issue while contributing to fixing small typos on Capsule! :tada:
|
non_process
|
fix typos in the docs pretty straightforward there are several typos in the documentation that sounds terrible in the english language help is wanted so please don t hesitate to refer to this issue while contributing to fixing small typos on capsule tada
| 0
|
119,968
| 17,644,002,799
|
IssuesEvent
|
2021-08-20 01:26:10
|
AkshayMukkavilli/Analyzing-the-Significance-of-Structure-in-Amazon-Review-Data-Using-Machine-Learning-Approaches
|
https://api.github.com/repos/AkshayMukkavilli/Analyzing-the-Significance-of-Structure-in-Amazon-Review-Data-Using-Machine-Learning-Approaches
|
opened
|
CVE-2021-29528 (Medium) detected in tensorflow-1.13.1-cp27-cp27mu-manylinux1_x86_64.whl
|
security vulnerability
|
## CVE-2021-29528 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>tensorflow-1.13.1-cp27-cp27mu-manylinux1_x86_64.whl</b></p></summary>
<p>TensorFlow is an open source machine learning framework for everyone.</p>
<p>Library home page: <a href="https://files.pythonhosted.org/packages/d2/ea/ab2c8c0e81bd051cc1180b104c75a865ab0fc66c89be992c4b20bbf6d624/tensorflow-1.13.1-cp27-cp27mu-manylinux1_x86_64.whl">https://files.pythonhosted.org/packages/d2/ea/ab2c8c0e81bd051cc1180b104c75a865ab0fc66c89be992c4b20bbf6d624/tensorflow-1.13.1-cp27-cp27mu-manylinux1_x86_64.whl</a></p>
<p>Path to dependency file: /FinalProject/requirements.txt</p>
<p>Path to vulnerable library: teSource-ArchiveExtractor_8b9e071c-3b11-4aa9-ba60-cdeb60d053b7/20190525011350_65403/20190525011256_depth_0/9/tensorflow-1.13.1-cp27-cp27mu-manylinux1_x86_64/tensorflow-1.13.1.data/purelib/tensorflow</p>
<p>
Dependency Hierarchy:
- :x: **tensorflow-1.13.1-cp27-cp27mu-manylinux1_x86_64.whl** (Vulnerable Library)
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
TensorFlow is an end-to-end open source platform for machine learning. An attacker can trigger a division by 0 in `tf.raw_ops.QuantizedMul`. This is because the implementation(https://github.com/tensorflow/tensorflow/blob/55900e961ed4a23b438392024912154a2c2f5e85/tensorflow/core/kernels/quantized_mul_op.cc#L188-L198) does a division by a quantity that is controlled by the caller. The fix will be included in TensorFlow 2.5.0. We will also cherrypick this commit on TensorFlow 2.4.2, TensorFlow 2.3.3, TensorFlow 2.2.3 and TensorFlow 2.1.4, as these are also affected and still in supported range.
<p>Publish Date: 2021-05-14
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-29528>CVE-2021-29528</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: Low
- Privileges Required: Low
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/tensorflow/tensorflow/security/advisories/GHSA-6f84-42vf-ppwp">https://github.com/tensorflow/tensorflow/security/advisories/GHSA-6f84-42vf-ppwp</a></p>
<p>Release Date: 2021-05-14</p>
<p>Fix Resolution: tensorflow - 2.5.0, tensorflow-cpu - 2.5.0, tensorflow-gpu - 2.5.0</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
True
|
CVE-2021-29528 (Medium) detected in tensorflow-1.13.1-cp27-cp27mu-manylinux1_x86_64.whl - ## CVE-2021-29528 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>tensorflow-1.13.1-cp27-cp27mu-manylinux1_x86_64.whl</b></p></summary>
<p>TensorFlow is an open source machine learning framework for everyone.</p>
<p>Library home page: <a href="https://files.pythonhosted.org/packages/d2/ea/ab2c8c0e81bd051cc1180b104c75a865ab0fc66c89be992c4b20bbf6d624/tensorflow-1.13.1-cp27-cp27mu-manylinux1_x86_64.whl">https://files.pythonhosted.org/packages/d2/ea/ab2c8c0e81bd051cc1180b104c75a865ab0fc66c89be992c4b20bbf6d624/tensorflow-1.13.1-cp27-cp27mu-manylinux1_x86_64.whl</a></p>
<p>Path to dependency file: /FinalProject/requirements.txt</p>
<p>Path to vulnerable library: teSource-ArchiveExtractor_8b9e071c-3b11-4aa9-ba60-cdeb60d053b7/20190525011350_65403/20190525011256_depth_0/9/tensorflow-1.13.1-cp27-cp27mu-manylinux1_x86_64/tensorflow-1.13.1.data/purelib/tensorflow</p>
<p>
Dependency Hierarchy:
- :x: **tensorflow-1.13.1-cp27-cp27mu-manylinux1_x86_64.whl** (Vulnerable Library)
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
TensorFlow is an end-to-end open source platform for machine learning. An attacker can trigger a division by 0 in `tf.raw_ops.QuantizedMul`. This is because the implementation(https://github.com/tensorflow/tensorflow/blob/55900e961ed4a23b438392024912154a2c2f5e85/tensorflow/core/kernels/quantized_mul_op.cc#L188-L198) does a division by a quantity that is controlled by the caller. The fix will be included in TensorFlow 2.5.0. We will also cherrypick this commit on TensorFlow 2.4.2, TensorFlow 2.3.3, TensorFlow 2.2.3 and TensorFlow 2.1.4, as these are also affected and still in supported range.
<p>Publish Date: 2021-05-14
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-29528>CVE-2021-29528</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: Low
- Privileges Required: Low
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/tensorflow/tensorflow/security/advisories/GHSA-6f84-42vf-ppwp">https://github.com/tensorflow/tensorflow/security/advisories/GHSA-6f84-42vf-ppwp</a></p>
<p>Release Date: 2021-05-14</p>
<p>Fix Resolution: tensorflow - 2.5.0, tensorflow-cpu - 2.5.0, tensorflow-gpu - 2.5.0</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
non_process
|
cve medium detected in tensorflow whl cve medium severity vulnerability vulnerable library tensorflow whl tensorflow is an open source machine learning framework for everyone library home page a href path to dependency file finalproject requirements txt path to vulnerable library tesource archiveextractor depth tensorflow tensorflow data purelib tensorflow dependency hierarchy x tensorflow whl vulnerable library vulnerability details tensorflow is an end to end open source platform for machine learning an attacker can trigger a division by in tf raw ops quantizedmul this is because the implementation does a division by a quantity that is controlled by the caller the fix will be included in tensorflow we will also cherrypick this commit on tensorflow tensorflow tensorflow and tensorflow as these are also affected and still in supported range publish date url a href cvss score details base score metrics exploitability metrics attack vector local attack complexity low privileges required low user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution tensorflow tensorflow cpu tensorflow gpu step up your open source security game with whitesource
| 0
|
20,269
| 26,897,828,892
|
IssuesEvent
|
2023-02-06 13:43:26
|
microsoft/vscode
|
https://api.github.com/repos/microsoft/vscode
|
closed
|
VSCode integrated terminal no longer loads bash .profile on startup
|
confirmation-pending terminal-process new release
|
Type: <b>Bug</b>
When using a custom ~/.profile in WSL, the initial load of the integrated terminal won't load (It seems to be ignoring `terminal.integrated.profiles.linux` settings, since the custom icon I set is not working either) but subsequent terminal creation (not with split/add but with the dropdown clicking on the profile directly) loads the profile fine.
VS Code version: Code 1.75.0 (e2816fe719a4026ffa1ee0189dc89bdfdbafb164, 2023-02-01T15:23:45.584Z)
OS version: Windows_NT x64 10.0.19044
Modes:
Sandboxed: Yes
Remote OS version: Linux x64 5.15.79.1-microsoft-standard-WSL2
<details>
<summary>System Info</summary>
|Item|Value|
|---|---|
|CPUs|AMD Ryzen 9 5900X 12-Core Processor (24 x 3700)|
|GPU Status|2d_canvas: enabled<br>canvas_oop_rasterization: disabled_off<br>direct_rendering_display_compositor: disabled_off_ok<br>gpu_compositing: enabled<br>multiple_raster_threads: enabled_on<br>opengl: enabled_on<br>rasterization: enabled<br>raw_draw: disabled_off_ok<br>skia_renderer: enabled_on<br>video_decode: enabled<br>video_encode: enabled<br>vulkan: disabled_off<br>webgl: enabled<br>webgl2: enabled<br>webgpu: disabled_off|
|Load (avg)|undefined|
|Memory (System)|63.92GB (39.72GB free)|
|Process Argv|--log=trace --folder-uri=vscode-remote://wsl+Ubuntu/home/wunder/repos/Farm-Smart --remote=wsl+Ubuntu --crash-reporter-id 0fe24daf-6a84-45c5-8bd5-d3682756f872|
|Screen Reader|no|
|VM|0%|
|Item|Value|
|---|---|
|Remote|WSL: Ubuntu|
|OS|Linux x64 5.15.79.1-microsoft-standard-WSL2|
|CPUs|AMD Ryzen 9 5900X 12-Core Processor (24 x 3700)|
|Memory (System)|31.30GB (28.66GB free)|
|VM|0%|
</details><details><summary>Extensions (62)</summary>
Extension|Author (truncated)|Version
---|---|---
vsc-material-theme|Equ|33.6.0
vsc-material-theme-icons|equ|2.5.0
remotehub|Git|0.50.0
vscode-graphql-syntax|Gra|1.0.6
vscode-cloak|joh|0.5.0
vscode-peacock|joh|4.2.2
edgedb|mag|0.1.5
theme-1|Mar|1.2.6
dotenv|mik|1.0.1
jupyter-keymap|ms-|1.0.0
remote-containers|ms-|0.275.0
remote-ssh|ms-|0.96.0
remote-ssh-edit|ms-|0.84.0
remote-wsl|ms-|0.75.1
azure-repos|ms-|0.26.0
remote-explorer|ms-|0.2.0
remote-repositories|ms-|0.28.0
material-icon-theme|PKi|4.23.1
vscode-caniuse|aga|0.5.0
vscode-color-picker|Ant|0.0.4
vscode-tailwindcss|bra|0.9.7
turbo-console-log|Cha|2.6.2
hide-node-modules|chr|1.1.4
regex|chr|0.4.0
vscode-eslint|dba|2.2.6
vscode-html-css|ecm|1.13.1
prettier-vscode|esb|9.10.4
vscode-highlight|fab|1.7.2
vscode-todo-plus|fab|4.19.1
copilot|Git|1.71.8269
vscode-pull-request-github|Git|0.58.0
go|gol|0.37.1
vscode-graphql|Gra|0.8.5
vscode-graphql-syntax|Gra|1.0.6
vscode-sort|hen|0.2.5
vscode-edit-csv|jan|0.7.2
svg|joc|1.5.0
launchdarkly|lau|3.0.6
gitless|maa|11.7.2
rainbow-csv|mec|3.5.0
template-string-converter|meg|0.6.0
inline-fold|moa|0.2.2
vscode-docker|ms-|1.23.3
isort|ms-|2022.8.0
python|ms-|2023.2.0
vscode-pylance|ms-|2023.2.10
jupyter|ms-|2023.1.2000312134
jupyter-keymap|ms-|1.0.0
jupyter-renderers|ms-|1.0.14
vscode-jupyter-cell-tags|ms-|0.1.6
vscode-jupyter-slideshow|ms-|0.1.5
powershell|ms-|2023.1.0
vscode-typescript-next|ms-|5.0.202302010
vetur|oct|0.36.1
heroku-command|pko|0.0.8
prisma|Pri|4.9.0
rust-analyzer|rus|0.3.1386
vs-code-prettier-eslint|rve|5.0.4
vscode-javascript-booster|sbu|14.0.1
even-better-toml|tam|0.19.0
tauri-vscode|tau|0.2.1
vscode-js-console-utils|wht|0.7.0
(8 theme extensions excluded)
</details><details>
<summary>A/B Experiments</summary>
```
vsliv368cf:30146710
vsreu685:30147344
python383cf:30185419
vspor879:30202332
vspor708:30202333
vspor363:30204092
vslsvsres303:30308271
pythonvspyl392:30443607
vserr242:30382549
pythontb:30283811
vsjup518:30340749
pythonptprofiler:30281270
vsdfh931cf:30280410
vshan820:30294714
vstes263:30335439
pythondataviewer:30285071
vscod805cf:30301675
binariesv615:30325510
bridge0708:30335490
bridge0723:30353136
cmake_vspar411:30581797
vsaa593:30376534
pythonvs932:30410667
cppdebug:30492333
vsclangdc:30486549
c4g48928:30535728
dsvsc012cf:30540253
azure-dev_surveyone:30548225
vscccc:30610679
pyindex848:30577860
nodejswelcome1cf:30587006
282f8724:30602487
pyind779:30657576
89544117:30613380
pythonsymbol12:30657548
vscsb:30628654
```
</details>
<!-- generated by issue reporter -->
|
1.0
|
VSCode integrated terminal no longer loads bash .profile on startup - Type: <b>Bug</b>
When using a custom ~/.profile in WSL, the initial load of the integrated terminal won't load (It seems to be ignoring `terminal.integrated.profiles.linux` settings, since the custom icon I set is not working either) but subsequent terminal creation (not with split/add but with the dropdown clicking on the profile directly) loads the profile fine.
VS Code version: Code 1.75.0 (e2816fe719a4026ffa1ee0189dc89bdfdbafb164, 2023-02-01T15:23:45.584Z)
OS version: Windows_NT x64 10.0.19044
Modes:
Sandboxed: Yes
Remote OS version: Linux x64 5.15.79.1-microsoft-standard-WSL2
<details>
<summary>System Info</summary>
|Item|Value|
|---|---|
|CPUs|AMD Ryzen 9 5900X 12-Core Processor (24 x 3700)|
|GPU Status|2d_canvas: enabled<br>canvas_oop_rasterization: disabled_off<br>direct_rendering_display_compositor: disabled_off_ok<br>gpu_compositing: enabled<br>multiple_raster_threads: enabled_on<br>opengl: enabled_on<br>rasterization: enabled<br>raw_draw: disabled_off_ok<br>skia_renderer: enabled_on<br>video_decode: enabled<br>video_encode: enabled<br>vulkan: disabled_off<br>webgl: enabled<br>webgl2: enabled<br>webgpu: disabled_off|
|Load (avg)|undefined|
|Memory (System)|63.92GB (39.72GB free)|
|Process Argv|--log=trace --folder-uri=vscode-remote://wsl+Ubuntu/home/wunder/repos/Farm-Smart --remote=wsl+Ubuntu --crash-reporter-id 0fe24daf-6a84-45c5-8bd5-d3682756f872|
|Screen Reader|no|
|VM|0%|
|Item|Value|
|---|---|
|Remote|WSL: Ubuntu|
|OS|Linux x64 5.15.79.1-microsoft-standard-WSL2|
|CPUs|AMD Ryzen 9 5900X 12-Core Processor (24 x 3700)|
|Memory (System)|31.30GB (28.66GB free)|
|VM|0%|
</details><details><summary>Extensions (62)</summary>
Extension|Author (truncated)|Version
---|---|---
vsc-material-theme|Equ|33.6.0
vsc-material-theme-icons|equ|2.5.0
remotehub|Git|0.50.0
vscode-graphql-syntax|Gra|1.0.6
vscode-cloak|joh|0.5.0
vscode-peacock|joh|4.2.2
edgedb|mag|0.1.5
theme-1|Mar|1.2.6
dotenv|mik|1.0.1
jupyter-keymap|ms-|1.0.0
remote-containers|ms-|0.275.0
remote-ssh|ms-|0.96.0
remote-ssh-edit|ms-|0.84.0
remote-wsl|ms-|0.75.1
azure-repos|ms-|0.26.0
remote-explorer|ms-|0.2.0
remote-repositories|ms-|0.28.0
material-icon-theme|PKi|4.23.1
vscode-caniuse|aga|0.5.0
vscode-color-picker|Ant|0.0.4
vscode-tailwindcss|bra|0.9.7
turbo-console-log|Cha|2.6.2
hide-node-modules|chr|1.1.4
regex|chr|0.4.0
vscode-eslint|dba|2.2.6
vscode-html-css|ecm|1.13.1
prettier-vscode|esb|9.10.4
vscode-highlight|fab|1.7.2
vscode-todo-plus|fab|4.19.1
copilot|Git|1.71.8269
vscode-pull-request-github|Git|0.58.0
go|gol|0.37.1
vscode-graphql|Gra|0.8.5
vscode-graphql-syntax|Gra|1.0.6
vscode-sort|hen|0.2.5
vscode-edit-csv|jan|0.7.2
svg|joc|1.5.0
launchdarkly|lau|3.0.6
gitless|maa|11.7.2
rainbow-csv|mec|3.5.0
template-string-converter|meg|0.6.0
inline-fold|moa|0.2.2
vscode-docker|ms-|1.23.3
isort|ms-|2022.8.0
python|ms-|2023.2.0
vscode-pylance|ms-|2023.2.10
jupyter|ms-|2023.1.2000312134
jupyter-keymap|ms-|1.0.0
jupyter-renderers|ms-|1.0.14
vscode-jupyter-cell-tags|ms-|0.1.6
vscode-jupyter-slideshow|ms-|0.1.5
powershell|ms-|2023.1.0
vscode-typescript-next|ms-|5.0.202302010
vetur|oct|0.36.1
heroku-command|pko|0.0.8
prisma|Pri|4.9.0
rust-analyzer|rus|0.3.1386
vs-code-prettier-eslint|rve|5.0.4
vscode-javascript-booster|sbu|14.0.1
even-better-toml|tam|0.19.0
tauri-vscode|tau|0.2.1
vscode-js-console-utils|wht|0.7.0
(8 theme extensions excluded)
</details><details>
<summary>A/B Experiments</summary>
```
vsliv368cf:30146710
vsreu685:30147344
python383cf:30185419
vspor879:30202332
vspor708:30202333
vspor363:30204092
vslsvsres303:30308271
pythonvspyl392:30443607
vserr242:30382549
pythontb:30283811
vsjup518:30340749
pythonptprofiler:30281270
vsdfh931cf:30280410
vshan820:30294714
vstes263:30335439
pythondataviewer:30285071
vscod805cf:30301675
binariesv615:30325510
bridge0708:30335490
bridge0723:30353136
cmake_vspar411:30581797
vsaa593:30376534
pythonvs932:30410667
cppdebug:30492333
vsclangdc:30486549
c4g48928:30535728
dsvsc012cf:30540253
azure-dev_surveyone:30548225
vscccc:30610679
pyindex848:30577860
nodejswelcome1cf:30587006
282f8724:30602487
pyind779:30657576
89544117:30613380
pythonsymbol12:30657548
vscsb:30628654
```
</details>
<!-- generated by issue reporter -->
|
process
|
vscode integrated terminal no longer loads bash profile on startup type bug when using a custom profile in wsl the initial load of the integrated terminal won t load it seems to be ignoring terminal integrated profiles linux settings since the custom icon i set is not working either but subsequent terminal creation not with split add but with the dropdown clicking on the profile directly loads the profile fine vs code version code os version windows nt modes sandboxed yes remote os version linux microsoft standard system info item value cpus amd ryzen core processor x gpu status canvas enabled canvas oop rasterization disabled off direct rendering display compositor disabled off ok gpu compositing enabled multiple raster threads enabled on opengl enabled on rasterization enabled raw draw disabled off ok skia renderer enabled on video decode enabled video encode enabled vulkan disabled off webgl enabled enabled webgpu disabled off load avg undefined memory system free process argv log trace folder uri vscode remote wsl ubuntu home wunder repos farm smart remote wsl ubuntu crash reporter id screen reader no vm item value remote wsl ubuntu os linux microsoft standard cpus amd ryzen core processor x memory system free vm extensions extension author truncated version vsc material theme equ vsc material theme icons equ remotehub git vscode graphql syntax gra vscode cloak joh vscode peacock joh edgedb mag theme mar dotenv mik jupyter keymap ms remote containers ms remote ssh ms remote ssh edit ms remote wsl ms azure repos ms remote explorer ms remote repositories ms material icon theme pki vscode caniuse aga vscode color picker ant vscode tailwindcss bra turbo console log cha hide node modules chr regex chr vscode eslint dba vscode html css ecm prettier vscode esb vscode highlight fab vscode todo plus fab copilot git vscode pull request github git go gol vscode graphql gra vscode graphql syntax gra vscode sort hen vscode edit csv jan svg joc launchdarkly lau gitless maa 
rainbow csv mec template string converter meg inline fold moa vscode docker ms isort ms python ms vscode pylance ms jupyter ms jupyter keymap ms jupyter renderers ms vscode jupyter cell tags ms vscode jupyter slideshow ms powershell ms vscode typescript next ms vetur oct heroku command pko prisma pri rust analyzer rus vs code prettier eslint rve vscode javascript booster sbu even better toml tam tauri vscode tau vscode js console utils wht theme extensions excluded a b experiments pythontb pythonptprofiler pythondataviewer cmake cppdebug vsclangdc azure dev surveyone vscccc vscsb
| 1
|
79,442
| 15,586,152,136
|
IssuesEvent
|
2021-03-18 01:17:26
|
jrshutske/unit-conversion-api
|
https://api.github.com/repos/jrshutske/unit-conversion-api
|
opened
|
CVE-2017-18640 (High) detected in snakeyaml-1.23.jar
|
security vulnerability
|
## CVE-2017-18640 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>snakeyaml-1.23.jar</b></p></summary>
<p>YAML 1.1 parser and emitter for Java</p>
<p>Library home page: <a href="http://www.snakeyaml.org">http://www.snakeyaml.org</a></p>
<p>Path to dependency file: /unit-conversion-api/pom.xml</p>
<p>Path to vulnerable library: /root/.m2/repository/org/yaml/snakeyaml/1.23/snakeyaml-1.23.jar</p>
<p>
Dependency Hierarchy:
- spring-boot-starter-actuator-2.1.2.RELEASE.jar (Root Library)
- spring-boot-starter-2.1.2.RELEASE.jar
- :x: **snakeyaml-1.23.jar** (Vulnerable Library)
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
The Alias feature in SnakeYAML 1.18 allows entity expansion during a load operation, a related issue to CVE-2003-1564.
<p>Publish Date: 2019-12-12
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2017-18640>CVE-2017-18640</a></p>
</p>
</details>
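The alias-expansion problem behind this CVE can be illustrated without any YAML library: each nesting level aliases the previous anchor several times, so a tiny document materializes exponentially many nodes when a naive loader expands it. The helper below is purely illustrative (it only builds the document text and counts the expansion; it does not parse YAML).

```python
def billion_laughs_yaml(depth: int, width: int = 10) -> str:
    """Build a small YAML document whose alias expansion is width**depth leaves."""
    lines = ["a0: &a0 [lol]"]
    for i in range(1, depth + 1):
        # Each level is a list of `width` aliases to the previous anchor.
        refs = ", ".join(f"*a{i - 1}" for _ in range(width))
        lines.append(f"a{i}: &a{i} [{refs}]")
    return "\n".join(lines)


def expanded_nodes(depth: int, width: int = 10) -> int:
    """Leaf count a naive loader would materialize for the document above."""
    return width ** depth
```

The document itself stays a few hundred bytes while `expanded_nodes(3)` is already 1000 — which is why parsers that load untrusted input need a cap on alias expansion.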
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2017-18640">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2017-18640</a></p>
<p>Release Date: 2019-12-12</p>
<p>Fix Resolution: org.yaml:snakeyaml:1.26</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
True
|
CVE-2017-18640 (High) detected in snakeyaml-1.23.jar - ## CVE-2017-18640 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>snakeyaml-1.23.jar</b></p></summary>
<p>YAML 1.1 parser and emitter for Java</p>
<p>Library home page: <a href="http://www.snakeyaml.org">http://www.snakeyaml.org</a></p>
<p>Path to dependency file: /unit-conversion-api/pom.xml</p>
<p>Path to vulnerable library: /root/.m2/repository/org/yaml/snakeyaml/1.23/snakeyaml-1.23.jar</p>
<p>
Dependency Hierarchy:
- spring-boot-starter-actuator-2.1.2.RELEASE.jar (Root Library)
- spring-boot-starter-2.1.2.RELEASE.jar
- :x: **snakeyaml-1.23.jar** (Vulnerable Library)
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
The Alias feature in SnakeYAML 1.18 allows entity expansion during a load operation, a related issue to CVE-2003-1564.
<p>Publish Date: 2019-12-12
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2017-18640>CVE-2017-18640</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2017-18640">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2017-18640</a></p>
<p>Release Date: 2019-12-12</p>
<p>Fix Resolution: org.yaml:snakeyaml:1.26</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
non_process
|
cve high detected in snakeyaml jar cve high severity vulnerability vulnerable library snakeyaml jar yaml parser and emitter for java library home page a href path to dependency file unit conversion api pom xml path to vulnerable library root repository org yaml snakeyaml snakeyaml jar dependency hierarchy spring boot starter actuator release jar root library spring boot starter release jar x snakeyaml jar vulnerable library vulnerability details the alias feature in snakeyaml allows entity expansion during a load operation a related issue to cve publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution org yaml snakeyaml step up your open source security game with whitesource
| 0
|
62,569
| 3,188,927,689
|
IssuesEvent
|
2015-09-29 01:01:53
|
ddurieux/redminetest
|
https://api.github.com/repos/ddurieux/redminetest
|
closed
|
there should be no dependencies from this plugin on the snmp plugin
|
Component: For junior contributor Component: Found in version Priority: Normal Status: Closed Tracker: Bug
|
---
Author Name: **Fabrice Flore-Thebault** (@themr0c)
Original Redmine Issue: 482, http://forge.fusioninventory.org/issues/482
Original Date: 2010-11-10
Original Assignee: David Durieux
---
when installing @fusinvinventory@ plugin without the @fusinvsnmp@ plugin, the whole plugins page is blocked in glpi and i get this error :
```
PHP Warning: require_once(../plugins/fusinvsnmp/inc/communicationsnmp.class.php) [function.require-once]: failed to open stream: No such file or directory in /var/www/glpi/plugins/fusinvinventory/inc/import_controller.class.php at line 45
Fatal error: require_once() [function.require]: Failed opening required '../plugins/fusinvsnmp/inc/communicationsnmp.class.php' (include_path='.:/usr/share/php:/usr/share/pear') in /var/www/glpi/plugins/fusinvinventory/inc/import_controller.class.php on line 45
```
there should be no such dependencies between plugins, and such an error should not block the listing of the plugins in glpi ...
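The underlying mistake — one plugin unconditionally `require`-ing a file from another, optional plugin — has a standard remedy: probe for the dependency and degrade gracefully instead of crashing the host. A hedged Python sketch of the same pattern (the module name is made up for illustration):

```python
import importlib
import importlib.util


def load_optional(module_name: str):
    """Return the module if it is installed, or None instead of crashing.

    This mirrors guarding a PHP require_once with file_exists(): the
    hosting application keeps working even when the optional plugin
    (here, a hypothetical 'fusinvsnmp' equivalent) is absent.
    """
    if importlib.util.find_spec(module_name) is None:
        return None
    return importlib.import_module(module_name)


snmp = load_optional("fusinvsnmp_equivalent")  # not installed -> None
if snmp is None:
    pass  # skip SNMP features; do not block the whole plugin listing
```

With this shape, a missing optional dependency disables one feature rather than taking down the entire plugins page.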
|
1.0
|
there should be no dependencies from this plugin on the snmp plugin - ---
Author Name: **Fabrice Flore-Thebault** (@themr0c)
Original Redmine Issue: 482, http://forge.fusioninventory.org/issues/482
Original Date: 2010-11-10
Original Assignee: David Durieux
---
when installing @fusinvinventory@ plugin without the @fusinvsnmp@ plugin, the whole plugins page is blocked in glpi and i get this error :
```
PHP Warning: require_once(../plugins/fusinvsnmp/inc/communicationsnmp.class.php) [function.require-once]: failed to open stream: No such file or directory in /var/www/glpi/plugins/fusinvinventory/inc/import_controller.class.php at line 45
Fatal error: require_once() [function.require]: Failed opening required '../plugins/fusinvsnmp/inc/communicationsnmp.class.php' (include_path='.:/usr/share/php:/usr/share/pear') in /var/www/glpi/plugins/fusinvinventory/inc/import_controller.class.php on line 45
```
there should be no such dependencies between plugins, and such an error should not block the listing of the plugins in glpi ...
|
non_process
|
there should be no dependencies from this plugin on the snmp plugin author name fabrice flore thebault original redmine issue original date original assignee david durieux when installing fusinvinventory plugin without the fusinvsnmp plugin the whole plugins page is blocked in glpi and i get this error php warning require once plugins fusinvsnmp inc communicationsnmp class php failed to open stream no such file or directory in var www glpi plugins fusinvinventory inc import controller class php at line fatal error require once failed opening required plugins fusinvsnmp inc communicationsnmp class php include path usr share php usr share pear in var www glpi plugins fusinvinventory inc import controller class php on line there should be no such dependencies between plugins and such an error should not block the listing of the plugins in glpi
| 0
|
106,223
| 13,256,369,094
|
IssuesEvent
|
2020-08-20 12:35:07
|
raiden-network/light-client
|
https://api.github.com/repos/raiden-network/light-client
|
closed
|
Change the header according to the new design throughout the dApp
|
Design 🎨 dApp 📱
|
## Description
Within #1460 @sashseurat made a nice proposal to change the header of the dApp.

https://www.figma.com/file/3UX5wudtGolsdh5QVyuoTS/LC-Header?node-id=0%3A1
## Acceptance criteria
-
## Tasks
- [ ]
|
1.0
|
Change the header according to the new design throughout the dApp - ## Description
Within #1460 @sashseurat made a nice proposal to change the header of the dApp.

https://www.figma.com/file/3UX5wudtGolsdh5QVyuoTS/LC-Header?node-id=0%3A1
## Acceptance criteria
-
## Tasks
- [ ]
|
non_process
|
change the header according to the new design throughout the dapp description within sashseurat made a nice proposal to change the header of the dapp acceptance criteria tasks
| 0
|
16,006
| 20,188,222,097
|
IssuesEvent
|
2022-02-11 01:19:18
|
savitamittalmsft/WAS-SEC-TEST
|
https://api.github.com/repos/savitamittalmsft/WAS-SEC-TEST
|
opened
|
Establish an incident response plan and perform periodically a simulated execution
|
WARP-Import WAF FEB 2021 Security Performance and Scalability Capacity Management Processes Operational Procedures Incident Response
|
<a href="https://info.microsoft.com/rs/157-GQE-382/images/EN-US-CNTNT-emergency-doc-digital.pdf">Establish an incident response plan and perform periodically a simulated execution</a>
<p><b>Why Consider This?</b></p>
Actions executed during an incident and response investigation could impact application availability or performance. It is recommended to define these processes and align them with the responsible (and in most cases central) SecOps team. The impact of such an investigation on the application has to be analyzed.
<p><b>Context</b></p>
<p><span>It is important for an organization to accept the fact that compromises occur, and being prepared is a necessity. Preparation categories include the technical, operational, legal, and communication aspects of a major cybersecurity incident.</p><p>Lastly, simulating the execution of the incident response plan is paramount in ensuring the organization can be confident that the plan contains all necessary components, and that the intended outcomes are achieved.</span></p>
<p><b>Suggested Actions</b></p>
<p><span>Develop or enhance existing incident response plan to ensure it is comprehensive, tested, and receives continual updates based on the current threat landscape.</span></p>
<p><b>Learn More</b></p>
<p><a href="https://github.com/MarkSimos/MicrosoftSecurity/blob/master/IR%20Reference%20Guide.pdf" target="_blank"><span>Incident response reference guide</span></a><span /></p>
|
1.0
|
Establish an incident response plan and perform periodically a simulated execution - <a href="https://info.microsoft.com/rs/157-GQE-382/images/EN-US-CNTNT-emergency-doc-digital.pdf">Establish an incident response plan and perform periodically a simulated execution</a>
<p><b>Why Consider This?</b></p>
Actions executed during an incident and response investigation could impact application availability or performance. It is recommended to define these processes and align them with the responsible (and in most cases central) SecOps team. The impact of such an investigation on the application has to be analyzed.
<p><b>Context</b></p>
<p><span>It is important for an organization to accept the fact that compromises occur, and being prepared is a necessity. Preparation categories include the technical, operational, legal, and communication aspects of a major cybersecurity incident.</p><p>Lastly, simulating the execution of the incident response plan is paramount in ensuring the organization can be confident that the plan contains all necessary components, and that the intended outcomes are achieved.</span></p>
<p><b>Suggested Actions</b></p>
<p><span>Develop or enhance existing incident response plan to ensure it is comprehensive, tested, and receives continual updates based on the current threat landscape.</span></p>
<p><b>Learn More</b></p>
<p><a href="https://github.com/MarkSimos/MicrosoftSecurity/blob/master/IR%20Reference%20Guide.pdf" target="_blank"><span>Incident response reference guide</span></a><span /></p>
|
process
|
establish an incident response plan and perform periodically a simulated execution why consider this actions executed during an incident and response investigation could impact application availability or performance it is recommended to define these processes and align them with the responsible and in most cases central secops team the impact of such an investigation on the application has to be analyzed context it is important for an organization to accept the fact that compromises occur and being prepared is a necessity nbsp preparation categories include the technical operational legal and communication aspects of a major cybersecurity incident lastly simulating the execution of the incident response plan is paramount in ensuring the organization can be confident that the plan contains all necessary components and that the intended outcomes are achieved nbsp suggested actions develop or enhance existing incident response plan to ensure it is comprehensive tested and receives continual updates based on the current threat landscape learn more incident response reference guide
| 1
|
15,851
| 20,031,989,637
|
IssuesEvent
|
2022-02-02 07:36:14
|
googleapis/gax-dotnet
|
https://api.github.com/repos/googleapis/gax-dotnet
|
closed
|
Check all TODOs in Google.Api.Gax.Grpc.Rest before GA
|
type: process
|
There are currently 14 TODO comments in Google.Api.Gax.Grpc.Rest.
We don't need to necessarily implement everything, but we should check each one.
|
1.0
|
Check all TODOs in Google.Api.Gax.Grpc.Rest before GA - There are currently 14 TODO comments in Google.Api.Gax.Grpc.Rest.
We don't need to necessarily implement everything, but we should check each one.
|
process
|
check all todos in google api gax grpc rest before ga there are currently todo comments in google api gax grpc rest we don t need to necessarily implement everything but we should check each one
| 1
|
13,823
| 16,587,868,162
|
IssuesEvent
|
2021-06-01 01:24:09
|
lsmacedo/spotifyt-back-end
|
https://api.github.com/repos/lsmacedo/spotifyt-back-end
|
opened
|
Rodar scripts de pre-commit e pre-push
|
process
|
Pre-commit: rodar lint
Pre-push: rodar testes unitários e E2E
|
1.0
|
Rodar scripts de pre-commit e pre-push - Pre-commit: rodar lint
Pre-push: rodar testes unitários e E2E
|
process
|
rodar scripts de pre commit e pre push pre commit rodar lint pre push rodar testes unitários e
| 1
|
201,565
| 7,033,658,585
|
IssuesEvent
|
2017-12-27 12:13:22
|
meetalva/alva
|
https://api.github.com/repos/meetalva/alva
|
closed
|
App is rendered useless when attempting to add component to playground
|
priority: high type: bug
|
Every time I try to add a component to the Playground the screen blanks to white, the component isn't editable and I can no longer add any elements. This is just using the newly created space. I'm on Mac OSX High Sierra. Turning on developer tools yields this error stack:
```
/Applications/Alva.app/Contents/Resources/app.asar/node_modules/fbjs/lib/warning.js:33 Warning: Each child in an array or iterator should have a unique "key" prop.
Check the render method of `PatternListContainer`. See https://fb.me/react-warning-keys for more information.
in PatternList (created by PatternListContainer)
in PatternListContainer (created by App)
in div (created by styled.div)
in styled.div (created by PatternsPane)
in PatternsPane (created by App)
in div (created by styled.div)
in styled.div (created by Styled(styled.div))
in Styled(styled.div)
in Unknown (created by App)
in div (created by styled.div)
in styled.div (created by Styled(styled.div))
in Styled(styled.div)
in Unknown (created by App)
in div (created by styled.div)
in styled.div (created by Layout)
in Layout (created by App)
in App```
|
1.0
|
App is rendered useless when attempting to add component to playground - Every time I try to add a component to the Playground the screen blanks to white, the component isn't editable and I can no longer add any elements. This is just using the newly created space. I'm on Mac OSX High Sierra. Turning on developer tools yields this error stack:
```
/Applications/Alva.app/Contents/Resources/app.asar/node_modules/fbjs/lib/warning.js:33 Warning: Each child in an array or iterator should have a unique "key" prop.
Check the render method of `PatternListContainer`. See https://fb.me/react-warning-keys for more information.
in PatternList (created by PatternListContainer)
in PatternListContainer (created by App)
in div (created by styled.div)
in styled.div (created by PatternsPane)
in PatternsPane (created by App)
in div (created by styled.div)
in styled.div (created by Styled(styled.div))
in Styled(styled.div)
in Unknown (created by App)
in div (created by styled.div)
in styled.div (created by Styled(styled.div))
in Styled(styled.div)
in Unknown (created by App)
in div (created by styled.div)
in styled.div (created by Layout)
in Layout (created by App)
in App```
|
non_process
|
app is rendered useless when attempting to add component to playground every time i try to add a component to the playground the screen blanks to white the component isn t editable and i can no longer add any elements this is just using the newly created space i m on mac osx high sierra turning on developer tools yields this error stack applications alva app contents resources app asar node modules fbjs lib warning js warning each child in an array or iterator should have a unique key prop check the render method of patternlistcontainer see for more information in patternlist created by patternlistcontainer in patternlistcontainer created by app in div created by styled div in styled div created by patternspane in patternspane created by app in div created by styled div in styled div created by styled styled div in styled styled div in unknown created by app in div created by styled div in styled div created by styled styled div in styled styled div in unknown created by app in div created by styled div in styled div created by layout in layout created by app in app
| 0
|
78,799
| 10,089,376,014
|
IssuesEvent
|
2019-07-26 08:46:20
|
JeongChaeEun/2019study
|
https://api.github.com/repos/JeongChaeEun/2019study
|
closed
|
Golang program using windows command 'tasklist'
|
documentation
|
# Tasklist on Windows's cmd prompt
- If I type `tasklist` as a command in the cmd prompt, I can see many tasks, as in this screenshot.

- Using the Go language, I made a program that does the same.
## Result _ TaskListPath/TT/tasklist.go
- code
~~~
package main

import (
	"bytes"
	"fmt"
	"log"
	"os"
	"os/exec"
	"path/filepath"
)

// This function is the same as the 'tasklist' command on the cmd prompt
func tasklistCommand() []byte {
	cmd := exec.Command("tasklist.exe")
	stdoutStderr, err := cmd.CombinedOutput() // CombinedOutput runs the command and returns its combined standard output and standard error
	if err != nil {
		log.Fatal(err)
	}
	return stdoutStderr
}

func printOutput(outs []byte) {
	output := byteSplitbyNewLine(outs)
	// get each task's line using a for loop
	for index, element := range output {
		fmt.Printf("%d, %s\n", index, string(element))
	}
}

// split the []byte parameter at every newline character
func byteSplitbyNewLine(outs []byte) [][]byte {
	join := []byte{'\n'}
	output := bytes.Split(outs, join)
	return output
}

func main() {
	// command input variable initialized
	args := []string{"0", "0", "0"}
	// for printing the path on the prompt
	for {
		path, _ := os.Getwd()
		_, file := filepath.Split(path)
		fmt.Print("[ Jeong @ ", file, "] $ ")
		fmt.Scanln(&args[0], &args[1], &args[2])
		switch args[0] {
		case "tasklist":
			stdout := tasklistCommand()
			printOutput(stdout)
		case "exit":
			os.Exit(1)
		default:
			fmt.Println("not command")
		}
	}
}
~~~
- result screenshot

|
1.0
|
Golang program using windows command 'tasklist' - # Tasklist on Windows's cmd prompt
- If I type `tasklist` as a command in the cmd prompt, I can see many tasks, as in this screenshot.

- Using the Go language, I made a program that does the same.
## Result _ TaskListPath/TT/tasklist.go
- code
~~~
package main

import (
	"bytes"
	"fmt"
	"log"
	"os"
	"os/exec"
	"path/filepath"
)

// This function is the same as the 'tasklist' command on the cmd prompt
func tasklistCommand() []byte {
	cmd := exec.Command("tasklist.exe")
	stdoutStderr, err := cmd.CombinedOutput() // CombinedOutput runs the command and returns its combined standard output and standard error
	if err != nil {
		log.Fatal(err)
	}
	return stdoutStderr
}

func printOutput(outs []byte) {
	output := byteSplitbyNewLine(outs)
	// get each task's line using a for loop
	for index, element := range output {
		fmt.Printf("%d, %s\n", index, string(element))
	}
}

// split the []byte parameter at every newline character
func byteSplitbyNewLine(outs []byte) [][]byte {
	join := []byte{'\n'}
	output := bytes.Split(outs, join)
	return output
}

func main() {
	// command input variable initialized
	args := []string{"0", "0", "0"}
	// for printing the path on the prompt
	for {
		path, _ := os.Getwd()
		_, file := filepath.Split(path)
		fmt.Print("[ Jeong @ ", file, "] $ ")
		fmt.Scanln(&args[0], &args[1], &args[2])
		switch args[0] {
		case "tasklist":
			stdout := tasklistCommand()
			printOutput(stdout)
		case "exit":
			os.Exit(1)
		default:
			fmt.Println("not command")
		}
	}
}
~~~
- result screenshot

|
non_process
|
golang program using windows command tasklist tasklist on windows s cmd prompt if i typed the tasklist as command on cmd prompt can see many tasks like this screenshot using go language i made a program doing like this result tasklistpath tt tasklist go code package main import bytes fmt log os os exec path filepath this function is same as tasklist command on cmd prompt func tasklistcommand byte cmd exec command tasklist exe stdoutstderr err cmd combinedoutput combinedoutput runs the command and returns its combined standard output and standard error if err nil log fatal err return stdoutstderr func printoutput outs byte output bytesplitbynewline outs get each task s line using for loop for index element range output fmt printf d s n index string element split parameter type of byte every newlines character func bytesplitbynewline outs byte byte join byte n output bytes split outs join return output func main command input variable initialized args string for printing path on prompt for path os getwd file filepath split path fmt print fmt scanln args args args switch args case tasklist stdout tasklistcommand printoutput stdout case exit os exit default fmt println not command result screenshot
| 0
|
610,522
| 18,910,638,930
|
IssuesEvent
|
2021-11-16 13:48:17
|
gazprom-neft/consta-uikit
|
https://api.github.com/repos/gazprom-neft/consta-uikit
|
closed
|
DatePicker: date-range — move the component's icon in Storybook
|
🔥🔥 priority improvement
|
**Bug description**
In Storybook, in the date-range variation, the left part's icon ended up on the left (see screenshot)
<img width="646" alt="Screenshot 2021-10-19 at 15 21 46" src="https://user-images.githubusercontent.com/51019093/138077353-16be6748-47f6-4271-b916-b6d0c4bffbf9.png">
It should be on the **right** =) see the mockup in Figma
https://www.figma.com/file/v9Jkm2GrymD277dIGpRBSH/Consta-UI-Kit?node-id=11302%3A58
<img width="487" alt="Screenshot 2021-10-20 at 13 33 54" src="https://user-images.githubusercontent.com/51019093/138077704-1075fb21-42b4-4ccd-a596-2ae2a31b94d4.png">
|
1.0
|
DatePicker: date-range — move the component's icon in Storybook -
**Bug description**
In Storybook, in the date-range variation, the left part's icon ended up on the left (see screenshot)
<img width="646" alt="Screenshot 2021-10-19 at 15 21 46" src="https://user-images.githubusercontent.com/51019093/138077353-16be6748-47f6-4271-b916-b6d0c4bffbf9.png">
It should be on the **right** =) see the mockup in Figma
https://www.figma.com/file/v9Jkm2GrymD277dIGpRBSH/Consta-UI-Kit?node-id=11302%3A58
<img width="487" alt="Screenshot 2021-10-20 at 13 33 54" src="https://user-images.githubusercontent.com/51019093/138077704-1075fb21-42b4-4ccd-a596-2ae2a31b94d4.png">
|
non_process
|
datepicker date range — move the component icon in storybook bug description in storybook in the date range variation the left part icon ended up on the left see screenshot img width alt screenshot at src it should be on the right see the mockup in figma img width alt screenshot at src
| 0
|
18,809
| 24,708,071,010
|
IssuesEvent
|
2022-10-19 20:57:41
|
googleapis/google-cloud-go
|
https://api.github.com/repos/googleapis/google-cloud-go
|
opened
|
storage: refactor RetentionPolicy.EffectiveTime handling
|
api: storage type: process
|
While working on #6828, we discussed removing unnecessary client-side validations. Revisit how we're handling the read-only [timestamp](https://github.com/googleapis/google-cloud-go/blob/main/storage/bucket.go#L1371-L1374) for RetentionPolicy. There's a wider surface to cover with this change; filing an issue to track this.
|
1.0
|
storage: refactor RetentionPolicy.EffectiveTime handling - While working on #6828, we discussed removing unnecessary client-side validations. Revisit how we're handling the read-only [timestamp](https://github.com/googleapis/google-cloud-go/blob/main/storage/bucket.go#L1371-L1374) for RetentionPolicy. There's a wider surface to cover with this change; filing an issue to track this.
|
process
|
storage refactor retentionpolicy effectivetime handling while working on we discussed about removing unnecessary client side validations revisit how we re handling the read only for retentionpolicy there s a wider surface to cover with this change filing an issue to track this
| 1
|
231,399
| 18,765,069,177
|
IssuesEvent
|
2021-11-05 22:05:58
|
MohistMC/Mohist
|
https://api.github.com/repos/MohistMC/Mohist
|
closed
|
[1.16.5] Bug of player teleportation between worlds through plugins
|
Wait Needs Testing
|
<!-- ISSUE_TEMPLATE_3 -> IMPORTANT: DO NOT DELETE THIS LINE.-->
<!-- Thank you for reporting ! Please note that issues can take a lot of time to be fixed and there is no eta.-->
<!-- If you don't know where to upload your logs and crash reports, you can use these websites : -->
<!-- https://paste.ubuntu.com/ (recommended) -->
<!-- https://mclo.gs -->
<!-- https://haste.mohistmc.com -->
<!-- https://pastebin.com -->
<!-- TO FILL THIS TEMPLATE, YOU NEED TO REPLACE THE {} BY WHAT YOU WANT -->
**Minecraft Version :** 1.16.5
**Mohist Version :** 842
**Operating System :** Linux
**Logs :** None
**Description of issue :** After the 786 update, it stopped correctly teleporting the player between worlds through the plugin using the "teleport" method, noticed that in build 791 the teleportation method was changed from "changeDimension" to "moveToWorld" and I suspect that this was the cause of this error
Checked: Returned as it was in the previous version (teleport via "changeDimension") from the one indicated above, teleportation began to work correctly
|
1.0
|
[1.16.5] Bug of player teleportation between worlds through plugins - <!-- ISSUE_TEMPLATE_3 -> IMPORTANT: DO NOT DELETE THIS LINE.-->
<!-- Thank you for reporting ! Please note that issues can take a lot of time to be fixed and there is no eta.-->
<!-- If you don't know where to upload your logs and crash reports, you can use these websites : -->
<!-- https://paste.ubuntu.com/ (recommended) -->
<!-- https://mclo.gs -->
<!-- https://haste.mohistmc.com -->
<!-- https://pastebin.com -->
<!-- TO FILL THIS TEMPLATE, YOU NEED TO REPLACE THE {} BY WHAT YOU WANT -->
**Minecraft Version :** 1.16.5
**Mohist Version :** 842
**Operating System :** Linux
**Logs :** None
**Description of issue :** After the 786 update, it stopped correctly teleporting the player between worlds through the plugin using the "teleport" method, noticed that in build 791 the teleportation method was changed from "changeDimension" to "moveToWorld" and I suspect that this was the cause of this error
Checked: Returned as it was in the previous version (teleport via "changeDimension") from the one indicated above, teleportation began to work correctly
|
non_process
|
bug of player teleportation between worlds through plugins important do not delete this line minecraft version mohist version operating system linux logs none description of issue after the update it stopped correctly teleporting the player between worlds through the plugin using the teleport method noticed that in build the teleportation method was changed from changedimension to movetoworld and i suspect that this was the cause of this error checked returned as it was in the previous version teleport via changedimension from the one indicated above teleportation began to work correctly
| 0
|
7,910
| 11,091,185,050
|
IssuesEvent
|
2019-12-15 10:29:57
|
prisma/prisma2
|
https://api.github.com/repos/prisma/prisma2
|
closed
|
[Introspection] Schema parser should collect all errors before failing
|
kind/improvement process/candidate process/product topic: introspection
|
Schema parser fails fast in some scenarios. See [these](https://www.notion.so/prismaio/shopware-da730f7228294f108ba65d22677659fe) introspection notes for example.
It should collect all parsing errors and report them together.
This might need some spec work.
|
2.0
|
[Introspection] Schema parser should collect all errors before failing - Schema parser fails fast in some scenarios. See [these](https://www.notion.so/prismaio/shopware-da730f7228294f108ba65d22677659fe) introspection notes for example.
It should collect all parsing errors and report them together.
This might need some spec work.
|
process
|
schema parser should collect all errors before failing schema parser fails fast in some scenarios see introspection notes for example it should collect all parsing errors and report them together this might need some spec work
| 1
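The Prisma record above asks the schema parser to collect all parsing errors and report them together instead of failing fast. A minimal sketch of that collect-then-report pattern, assuming a hypothetical rule ("every field line must contain '='") and a hypothetical error format — this is not Prisma's actual schema grammar:

```python
# Collect-then-report pattern: validate everything, then raise once with
# every problem found. The '=' rule and message format are hypothetical.
from dataclasses import dataclass


@dataclass
class ParseError:
    line: int
    message: str


def parse_schema(lines):
    """Validate every line, collecting all errors instead of failing fast."""
    errors = []
    for lineno, line in enumerate(lines, start=1):
        if "=" not in line:
            errors.append(ParseError(lineno, "missing '=' in field definition"))
    if errors:
        # Report every problem together in one exception.
        raise ValueError("\n".join(f"line {e.line}: {e.message}" for e in errors))
    return len(lines)
```

With this shape, a file with several bad lines produces one message listing all of them, which is the behavior the issue requests.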
|
36,162
| 8,056,580,591
|
IssuesEvent
|
2018-08-02 13:09:35
|
joomla/joomla-cms
|
https://api.github.com/repos/joomla/joomla-cms
|
closed
|
[4.0] Side menu does not work anymore
|
No Code Attached Yet
|
Since the update of this night (02/08/2018) Joomla! 4.0 alpha 5, the left side menu no longer works for "content, menus, components, users". On all browsers.
|
1.0
|
[4.0] Side menu does not work anymore - Since the update of this night (02/08/2018) Joomla! 4.0 alpha 5, the left side menu no longer works for "content, menus, components, users". On all browsers.
|
non_process
|
side menu does not work anymore since the update of this night joomla alpha the left side menu no longer works for content menus components users on all browsers
| 0
|
16,438
| 21,316,759,001
|
IssuesEvent
|
2022-04-16 12:16:42
|
dita-ot/dita-ot
|
https://api.github.com/repos/dita-ot/dita-ot
|
closed
|
Getting DOTX061W when chunking map with keydefs setting hrefs to topic files
|
bug priority/medium preprocess stale
|
## Expected Behavior
The publishing process should run without warnings.
## Actual Behavior
When publishing a ditamap with `<keydef>` elements referencing topics via the `@href` attribute and setting `@chunk=to-content` on the `<map>` element warnings of type
`[DOTX061W][WARN] ID '%1' was used in topicref tag but did not reference a topic element. The href attribute on a topicref element should only reference topic level elements.`
are shown in the command line.
The output looks okay, but the warning is wrong. There are topics referenced. See code example at **Steps to Reproduce**
## Steps to Reproduce
<!-- Test case, Gist, set of files or steps required to reproduce the issue. -->
[example.zip](https://github.com/dita-ot/dita-ot/files/2836238/example.zip)
Remove the `@chunk` attribute in the `<map>` in `root.ditamap` and everything runs fine without warnings
<!-- Create a Gist via <https://gist.github.com/> to upload your test files. -->
<!-- Link to the Gist from the issue or attach a .zip archive of your files. -->
## Copy of the error message, log file or stack trace
<!-- Long logs should be attached or in linked Gists, not in the issue body. -->
## Environment
<!-- Include relevant details about the environment you experienced this in. -->
* DITA-OT version: Tested with dita-ot-3.2.1 and dita-ot-2.5.4 and dita-ot-3.3.0-SNAPSHOT+07cf66a
* Operating system and version:
Windows 10

* How did you run DITA-OT?
Warnings occur:
dita-ot-3.3.0\bin\dita --input=root.ditamap --format=html5 (SNAPSHOT+07cf66a)
dita-ot-3.2.1\bin\dita --input=root.ditamap --format=html5
dita-ot-2.5.4\bin\dita --input=root.ditamap --format=html5
dita-ot-2.5.4\bin\dita --input=root.ditamap --format=pdf2
No warnings occur:
dita-ot-3.3.0\bin\dita --input=root.ditamap --format=pdf2 (SNAPSHOT+07cf66a)
dita-ot-3.2.1\bin\dita --input=root.ditamap --format=pdf2
* Transformation type:
HTML5
PDF
<!--
Before submitting, check the Preview tab above to verify the XML markup appears
correctly and remember you can edit the description later to add information.
-->
|
1.0
|
Getting DOTX061W when chunking map with keydefs setting hrefs to topic files - ## Expected Behavior
The publishing process should run without warnings.
## Actual Behavior
When publishing a ditamap with `<keydef>` elements referencing topics via the `@href` attribute and setting `@chunk=to-content` on the `<map>` element warnings of type
`[DOTX061W][WARN] ID '%1' was used in topicref tag but did not reference a topic element. The href attribute on a topicref element should only reference topic level elements.`
are shown in the command line.
The output looks okay, but the warning is wrong. There are topics referenced. See code example at **Steps to Reproduce**
## Steps to Reproduce
<!-- Test case, Gist, set of files or steps required to reproduce the issue. -->
[example.zip](https://github.com/dita-ot/dita-ot/files/2836238/example.zip)
Remove the `@chunk` attribute in the `<map>` in `root.ditamap` and everything runs fine without warnings
<!-- Create a Gist via <https://gist.github.com/> to upload your test files. -->
<!-- Link to the Gist from the issue or attach a .zip archive of your files. -->
## Copy of the error message, log file or stack trace
<!-- Long logs should be attached or in linked Gists, not in the issue body. -->
## Environment
<!-- Include relevant details about the environment you experienced this in. -->
* DITA-OT version: Tested with dita-ot-3.2.1 and dita-ot-2.5.4 and dita-ot-3.3.0-SNAPSHOT+07cf66a
* Operating system and version:
Windows 10

* How did you run DITA-OT?
Warnings occur:
dita-ot-3.3.0\bin\dita --input=root.ditamap --format=html5 (SNAPSHOT+07cf66a)
dita-ot-3.2.1\bin\dita --input=root.ditamap --format=html5
dita-ot-2.5.4\bin\dita --input=root.ditamap --format=html5
dita-ot-2.5.4\bin\dita --input=root.ditamap --format=pdf2
No warnings occur:
dita-ot-3.3.0\bin\dita --input=root.ditamap --format=pdf2 (SNAPSHOT+07cf66a)
dita-ot-3.2.1\bin\dita --input=root.ditamap --format=pdf2
* Transformation type:
HTML5
PDF
<!--
Before submitting, check the Preview tab above to verify the XML markup appears
correctly and remember you can edit the description later to add information.
-->
|
process
|
getting when chunking map with keydefs setting hrefs to topic files expected behavior the publishing process should run without warnings actual behavior when publishing a ditamap with elements referencing topics via the href attribute and setting chunk to content on the element warnings of type id was used in topicref tag but did not reference a topic element the href attribute on a topicref element should only reference topic level elements are shown in the command line the output looks okay but the warning is wrong there are topics referenced see code example at steps to reproduce steps to reproduce remove the chunk attribute in the in root ditamap and everything runs fine without warnings copy of the error message log file or stack trace environment dita ot version tested with dita ot and dita ot and dita ot snapshot operating system and version windows how did you run dita ot warnings occur dita ot bin dita input root ditamap format snapshot dita ot bin dita input root ditamap format dita ot bin dita input root ditamap format dita ot bin dita input root ditamap format no warnings occur dita ot bin dita input root ditamap format snapshot dita ot bin dita input root ditamap format transformation type pdf before submitting check the preview tab above to verify the xml markup appears correctly and remember you can edit the description later to add information
| 1
|
20,015
| 26,486,899,441
|
IssuesEvent
|
2023-01-17 18:49:56
|
bitfocus/companion-module-requests
|
https://api.github.com/repos/bitfocus/companion-module-requests
|
opened
|
VEX TM Control
|
NOT YET PROCESSED
|
- [ ] **I have researched the list of existing Companion modules and requests and have determined this has not yet been requested**
The name of the device, hardware, or software you would like to control:
VEX Tournament Manager
What you would like to be able to make it do from Companion:
Control the Audience Display functionality and the Match Status. It is already possible using a NodeJS server if configured, but i do not know how that would be of any help to the BitFocus team.
Direct links or attachments to the ethernet control protocol or API:
https://vextm.dwabtech.com/
https://www.npmjs.com/package/vex-tm-client?activeTab=readme
|
1.0
|
VEX TM Control - - [ ] **I have researched the list of existing Companion modules and requests and have determined this has not yet been requested**
The name of the device, hardware, or software you would like to control:
VEX Tournament Manager
What you would like to be able to make it do from Companion:
Control the Audience Display functionality and the Match Status. It is already possible using a NodeJS server if configured, but i do not know how that would be of any help to the BitFocus team.
Direct links or attachments to the ethernet control protocol or API:
https://vextm.dwabtech.com/
https://www.npmjs.com/package/vex-tm-client?activeTab=readme
|
process
|
vex tm control i have researched the list of existing companion modules and requests and have determined this has not yet been requested the name of the device hardware or software you would like to control vex tournament manager what you would like to be able to make it do from companion control the audience display functionality and the match status it is already possible using a nodejs server if configured but i do not know how that would be of any help to the bitfocus team direct links or attachments to the ethernet control protocol or api
| 1
|
20,177
| 26,732,899,166
|
IssuesEvent
|
2023-01-30 06:55:53
|
OpenEnergyPlatform/open-MaStR
|
https://api.github.com/repos/OpenEnergyPlatform/open-MaStR
|
closed
|
Convert plain SQL to SQLAlchemy ORM where duplicated code exists
|
:scissors: post processing
|
In postprocessing the same queries are explicitly written for each technology including hard-coded table names
- Apply DRY by converting these SQL parts to SQLAlchemy ORM code
- Make table names (table objects) parametrizable
|
1.0
|
Convert plain SQL to SQLAlchemy ORM where duplicated code exists - In postprocessing the same queries are explicitly written for each technology including hard-coded table names
- Apply DRY by converting these SQL parts to SQLAlchemy ORM code
- Make table names (table objects) parametrizable
|
process
|
convert plain sql to sqlalchemy orm where duplicated code exists in postprocessing the same queries are explicitly written for each technology including hard coded table names apply dry by converting these sql parts to sqlalchemy orm code make table names table objects parametrizable
| 1
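The open-MaStR record above asks to replace duplicated raw SQL with hard-coded table names by parametrizable SQLAlchemy table objects. A minimal sketch of that refactor, assuming hypothetical table and column names (`wind_units`, `solar_units`, `plant`) — not the project's real schema:

```python
# DRY refactor sketch: one schema definition and one query function,
# parametrized by a Table object instead of a hard-coded table name per
# technology. Table/column names here are hypothetical.
from sqlalchemy import (Column, Integer, MetaData, String, Table,
                        create_engine, select)

metadata = MetaData()


def make_unit_table(name: str) -> Table:
    # One definition serves every technology; only the name varies.
    return Table(
        name, metadata,
        Column("id", Integer, primary_key=True),
        Column("plant", String),
    )


def count_rows(engine, table: Table) -> int:
    # The same query runs against any technology's table -- no duplicated SQL.
    with engine.connect() as conn:
        return len(conn.execute(select(table)).fetchall())


engine = create_engine("sqlite:///:memory:")
wind = make_unit_table("wind_units")
solar = make_unit_table("solar_units")
metadata.create_all(engine)
```

Passing `Table` objects around rather than formatting table names into SQL strings also keeps queries safe from injection and lets SQLAlchemy quote identifiers correctly.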
|
260,144
| 22,595,539,794
|
IssuesEvent
|
2022-06-29 02:18:13
|
microsoft/AzureStorageExplorer
|
https://api.github.com/repos/microsoft/AzureStorageExplorer
|
closed
|
There is an extra option 'Timestamp' in the 'Column Options' dialog for one empty table
|
🧪 testing :gear: tables :beetle: regression
|
**Storage Explorer Version**: 1.25.0-dev
**Build Number**: 20220622.1
**Branch**: main
**Platform/OS**: Windows 10/Linux Ubuntu 20.04/MacOS Monterey 12.4 (Apple M1 Pro)
**Architecture** ia32/x64
**How Found**: From running test cases
**Regression From**: Previous release (1.24.3)
## Steps to Reproduce ##
1. Expand one storage account -> Tables.
2. Create a new table -> Click 'Column Options'.
3. Check there is no extra option 'Timestamp' in the 'Column Options' dialog.
## Expected Experience ##
There is no extra option 'Timestamp' in the 'Column Options' dialog.

## Actual Experience ##
There is an extra option 'Timestamp' in the 'Column Options' dialog.

## Additional Context ##
This issue also reproduces for 'Select Columns' dialog.

|
1.0
|
There is an extra option 'Timestamp' in the 'Column Options' dialog for one empty table - **Storage Explorer Version**: 1.25.0-dev
**Build Number**: 20220622.1
**Branch**: main
**Platform/OS**: Windows 10/Linux Ubuntu 20.04/MacOS Monterey 12.4 (Apple M1 Pro)
**Architecture** ia32/x64
**How Found**: From running test cases
**Regression From**: Previous release (1.24.3)
## Steps to Reproduce ##
1. Expand one storage account -> Tables.
2. Create a new table -> Click 'Column Options'.
3. Check there is no extra option 'Timestamp' in the 'Column Options' dialog.
## Expected Experience ##
There is no extra option 'Timestamp' in the 'Column Options' dialog.

## Actual Experience ##
There is an extra option 'Timestamp' in the 'Column Options' dialog.

## Additional Context ##
This issue also reproduces for 'Select Columns' dialog.

|
non_process
|
there is an extra option timestamp in the column options dialog for one empty table storage explorer version dev build number branch main platform os windows linux ubuntu macos monterey apple pro architecture how found from running test cases regression from previous release steps to reproduce expand one storage account tables create a new table click column options check there is no extra option timestamp in the column options dialog expected experience there is no extra option timestamp in the column options dialog actual experience there is an extra option timestamp in the column options dialog additional context this issue also reproduces for select columns dialog
| 0
|
15,689
| 19,847,979,200
|
IssuesEvent
|
2022-01-21 09:05:10
|
ooi-data/RS01SBPD-DP01A-01-CTDPFL104-recovered_wfp-dpc_ctd_instrument_recovered
|
https://api.github.com/repos/ooi-data/RS01SBPD-DP01A-01-CTDPFL104-recovered_wfp-dpc_ctd_instrument_recovered
|
opened
|
🛑 Processing failed: ValueError
|
process
|
## Overview
`ValueError` found in `processing_task` task during run ended on 2022-01-21T09:05:09.665225.
## Details
Flow name: `RS01SBPD-DP01A-01-CTDPFL104-recovered_wfp-dpc_ctd_instrument_recovered`
Task name: `processing_task`
Error type: `ValueError`
Error message: not enough values to unpack (expected 3, got 0)
<details>
<summary>Traceback</summary>
```
Traceback (most recent call last):
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/ooi_harvester/processor/pipeline.py", line 165, in processing
final_path = finalize_data_stream(
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/ooi_harvester/processor/__init__.py", line 84, in finalize_data_stream
append_to_zarr(mod_ds, final_store, enc, logger=logger)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/ooi_harvester/processor/__init__.py", line 357, in append_to_zarr
_append_zarr(store, mod_ds)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/ooi_harvester/processor/utils.py", line 187, in _append_zarr
existing_arr.append(var_data.values)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/xarray/core/variable.py", line 519, in values
return _as_array_or_item(self._data)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/xarray/core/variable.py", line 259, in _as_array_or_item
data = np.asarray(data)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/dask/array/core.py", line 1541, in __array__
x = self.compute()
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/dask/base.py", line 288, in compute
(result,) = compute(self, traverse=False, **kwargs)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/dask/base.py", line 571, in compute
results = schedule(dsk, keys, **kwargs)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/dask/threaded.py", line 79, in get
results = get_async(
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/dask/local.py", line 507, in get_async
raise_exception(exc, tb)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/dask/local.py", line 315, in reraise
raise exc
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/dask/local.py", line 220, in execute_task
result = _execute_task(task, data)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/dask/core.py", line 119, in _execute_task
return func(*(_execute_task(a, cache) for a in args))
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/dask/array/core.py", line 116, in getter
c = np.asarray(c)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/xarray/core/indexing.py", line 357, in __array__
return np.asarray(self.array, dtype=dtype)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/xarray/core/indexing.py", line 551, in __array__
self._ensure_cached()
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/xarray/core/indexing.py", line 548, in _ensure_cached
self.array = NumpyIndexingAdapter(np.asarray(self.array))
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/xarray/core/indexing.py", line 521, in __array__
return np.asarray(self.array, dtype=dtype)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/xarray/core/indexing.py", line 422, in __array__
return np.asarray(array[self.key], dtype=None)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/xarray/coding/variables.py", line 70, in __array__
return self.func(self.array)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/xarray/coding/variables.py", line 137, in _apply_mask
data = np.asarray(data, dtype=dtype)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/xarray/core/indexing.py", line 422, in __array__
return np.asarray(array[self.key], dtype=None)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/xarray/backends/zarr.py", line 73, in __getitem__
return array[key.tuple]
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/zarr/core.py", line 673, in __getitem__
return self.get_basic_selection(selection, fields=fields)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/zarr/core.py", line 798, in get_basic_selection
return self._get_basic_selection_nd(selection=selection, out=out,
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/zarr/core.py", line 841, in _get_basic_selection_nd
return self._get_selection(indexer=indexer, out=out, fields=fields)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/zarr/core.py", line 1135, in _get_selection
lchunk_coords, lchunk_selection, lout_selection = zip(*indexer)
ValueError: not enough values to unpack (expected 3, got 0)
```
</details>
|
1.0
|
🛑 Processing failed: ValueError - ## Overview
`ValueError` found in `processing_task` task during run ended on 2022-01-21T09:05:09.665225.
## Details
Flow name: `RS01SBPD-DP01A-01-CTDPFL104-recovered_wfp-dpc_ctd_instrument_recovered`
Task name: `processing_task`
Error type: `ValueError`
Error message: not enough values to unpack (expected 3, got 0)
<details>
<summary>Traceback</summary>
```
Traceback (most recent call last):
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/ooi_harvester/processor/pipeline.py", line 165, in processing
final_path = finalize_data_stream(
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/ooi_harvester/processor/__init__.py", line 84, in finalize_data_stream
append_to_zarr(mod_ds, final_store, enc, logger=logger)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/ooi_harvester/processor/__init__.py", line 357, in append_to_zarr
_append_zarr(store, mod_ds)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/ooi_harvester/processor/utils.py", line 187, in _append_zarr
existing_arr.append(var_data.values)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/xarray/core/variable.py", line 519, in values
return _as_array_or_item(self._data)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/xarray/core/variable.py", line 259, in _as_array_or_item
data = np.asarray(data)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/dask/array/core.py", line 1541, in __array__
x = self.compute()
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/dask/base.py", line 288, in compute
(result,) = compute(self, traverse=False, **kwargs)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/dask/base.py", line 571, in compute
results = schedule(dsk, keys, **kwargs)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/dask/threaded.py", line 79, in get
results = get_async(
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/dask/local.py", line 507, in get_async
raise_exception(exc, tb)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/dask/local.py", line 315, in reraise
raise exc
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/dask/local.py", line 220, in execute_task
result = _execute_task(task, data)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/dask/core.py", line 119, in _execute_task
return func(*(_execute_task(a, cache) for a in args))
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/dask/array/core.py", line 116, in getter
c = np.asarray(c)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/xarray/core/indexing.py", line 357, in __array__
return np.asarray(self.array, dtype=dtype)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/xarray/core/indexing.py", line 551, in __array__
self._ensure_cached()
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/xarray/core/indexing.py", line 548, in _ensure_cached
self.array = NumpyIndexingAdapter(np.asarray(self.array))
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/xarray/core/indexing.py", line 521, in __array__
return np.asarray(self.array, dtype=dtype)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/xarray/core/indexing.py", line 422, in __array__
return np.asarray(array[self.key], dtype=None)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/xarray/coding/variables.py", line 70, in __array__
return self.func(self.array)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/xarray/coding/variables.py", line 137, in _apply_mask
data = np.asarray(data, dtype=dtype)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/xarray/core/indexing.py", line 422, in __array__
return np.asarray(array[self.key], dtype=None)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/xarray/backends/zarr.py", line 73, in __getitem__
return array[key.tuple]
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/zarr/core.py", line 673, in __getitem__
return self.get_basic_selection(selection, fields=fields)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/zarr/core.py", line 798, in get_basic_selection
return self._get_basic_selection_nd(selection=selection, out=out,
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/zarr/core.py", line 841, in _get_basic_selection_nd
return self._get_selection(indexer=indexer, out=out, fields=fields)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/zarr/core.py", line 1135, in _get_selection
lchunk_coords, lchunk_selection, lout_selection = zip(*indexer)
ValueError: not enough values to unpack (expected 3, got 0)
```
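The unpacking failure on the last traceback line is reproducible in isolation. A minimal sketch (the helper name is made up; the unpacking line mirrors `zarr.core.Array._get_selection` from the traceback, whose indexer yields `(chunk_coords, chunk_selection, out_selection)` triples — a zero-length append selects no chunks, leaving the iterator empty):

```python
def get_selection(indexer):
    # Same unpacking as zarr.core.Array._get_selection in the traceback above;
    # zip(*[]) is an empty iterator, so unpacking into three names fails.
    lchunk_coords, lchunk_selection, lout_selection = zip(*indexer)
    return lchunk_coords, lchunk_selection, lout_selection

# A selection touching one chunk unpacks fine:
triples = [((0,), slice(0, 5), slice(0, 5))]
assert get_selection(triples)[0] == ((0,),)

# An empty indexer -- e.g. appending zero-length data -- reproduces the error:
try:
    get_selection([])
except ValueError as err:
    print(err)  # not enough values to unpack (expected 3, got 0)
```

So the `ValueError` is likely a symptom rather than the root cause: the selection handed to zarr covered zero chunks (for example, an empty array being appended), which may be worth checking before the `append` call.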
</details>
|
process
|
🛑 processing failed valueerror overview valueerror found in processing task task during run ended on details flow name recovered wfp dpc ctd instrument recovered task name processing task error type valueerror error message not enough values to unpack expected got traceback traceback most recent call last file srv conda envs notebook lib site packages ooi harvester processor pipeline py line in processing final path finalize data stream file srv conda envs notebook lib site packages ooi harvester processor init py line in finalize data stream append to zarr mod ds final store enc logger logger file srv conda envs notebook lib site packages ooi harvester processor init py line in append to zarr append zarr store mod ds file srv conda envs notebook lib site packages ooi harvester processor utils py line in append zarr existing arr append var data values file srv conda envs notebook lib site packages xarray core variable py line in values return as array or item self data file srv conda envs notebook lib site packages xarray core variable py line in as array or item data np asarray data file srv conda envs notebook lib site packages dask array core py line in array x self compute file srv conda envs notebook lib site packages dask base py line in compute result compute self traverse false kwargs file srv conda envs notebook lib site packages dask base py line in compute results schedule dsk keys kwargs file srv conda envs notebook lib site packages dask threaded py line in get results get async file srv conda envs notebook lib site packages dask local py line in get async raise exception exc tb file srv conda envs notebook lib site packages dask local py line in reraise raise exc file srv conda envs notebook lib site packages dask local py line in execute task result execute task task data file srv conda envs notebook lib site packages dask core py line in execute task return func execute task a cache for a in args file srv conda envs notebook lib site packages dask 
array core py line in getter c np asarray c file srv conda envs notebook lib site packages xarray core indexing py line in array return np asarray self array dtype dtype file srv conda envs notebook lib site packages xarray core indexing py line in array self ensure cached file srv conda envs notebook lib site packages xarray core indexing py line in ensure cached self array numpyindexingadapter np asarray self array file srv conda envs notebook lib site packages xarray core indexing py line in array return np asarray self array dtype dtype file srv conda envs notebook lib site packages xarray core indexing py line in array return np asarray array dtype none file srv conda envs notebook lib site packages xarray coding variables py line in array return self func self array file srv conda envs notebook lib site packages xarray coding variables py line in apply mask data np asarray data dtype dtype file srv conda envs notebook lib site packages xarray core indexing py line in array return np asarray array dtype none file srv conda envs notebook lib site packages xarray backends zarr py line in getitem return array file srv conda envs notebook lib site packages zarr core py line in getitem return self get basic selection selection fields fields file srv conda envs notebook lib site packages zarr core py line in get basic selection return self get basic selection nd selection selection out out file srv conda envs notebook lib site packages zarr core py line in get basic selection nd return self get selection indexer indexer out out fields fields file srv conda envs notebook lib site packages zarr core py line in get selection lchunk coords lchunk selection lout selection zip indexer valueerror not enough values to unpack expected got
| 1
|
22,068
| 30,592,376,490
|
IssuesEvent
|
2023-07-21 18:16:19
|
0xPolygonMiden/miden-vm
|
https://api.github.com/repos/0xPolygonMiden/miden-vm
|
closed
|
Programmatically control when the VM should stop its current execution
|
processor
|
# Feature request
The Miden assembly is Turing complete and it is trivial to write an infinite loop with it:
```
push.1 while.true push.1 end
```
This is a problem for transactions executed by the network. An operator must be able to stop execution of the VM under some policy, for example the current number of cycles, or if a fee schedule similar to gas is implemented, once the gas is depleted.
|
1.0
|
Programmatically control when the VM should stop its current execution - # Feature request
The Miden assembly is Turing complete and it is trivial to write an infinite loop with it:
```
push.1 while.true push.1 end
```
This is a problem for transactions executed by the network. An operator must be able to stop execution of the VM under some policy, for example the current number of cycles, or if a fee schedule similar to gas is implemented, once the gas is depleted.
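A cycle budget enforced in the dispatch loop is one common way to implement such a policy. A toy sketch in Python (hypothetical opcodes and API, not the Miden VM's actual interface — it only shows the budget halting an infinite loop like `push.1 while.true push.1 end` deterministically):

```python
class CycleLimitExceeded(Exception):
    """Raised when the cycle budget is spent before the program halts."""

def execute(program, max_cycles):
    """Run a toy stack VM; stop once the cycle budget is exhausted."""
    cycles, pc, stack = 0, 0, []
    while pc < len(program):
        cycles += 1
        if cycles > max_cycles:
            raise CycleLimitExceeded(f"halted after {max_cycles} cycles")
        op, *args = program[pc]
        if op == "push":
            stack.append(args[0])
            pc += 1
        elif op == "jmp_if_true":
            # Pop the condition; jump back if truthy (models while.true).
            pc = args[0] if stack.pop() else pc + 1
    return stack

# push 1, then loop back to pc 0 while the popped top is truthy -- never halts
# on its own, but the budget stops it deterministically:
loop = [("push", 1), ("jmp_if_true", 0)]
try:
    execute(loop, max_cycles=1000)
except CycleLimitExceeded as e:
    print(e)  # halted after 1000 cycles
```

The same hook generalizes to a gas-style fee schedule: charge a per-opcode cost instead of incrementing `cycles` by one.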
|
process
|
programmatically control when the vm should stop its current execution feature request the miden assembly is turing complete and it is trivial to write an infinite loop with it push while true push end this is a problem for transactions executed by the network an operator must be able to stop execution of the vm under some policy for example the current number of cycles or if a fee schedule similar to gas is implemented once the gas is depleted
| 1
|
14,416
| 17,466,403,168
|
IssuesEvent
|
2021-08-06 17:31:13
|
ORNL-AMO/AMO-Tools-Desktop
|
https://api.github.com/repos/ORNL-AMO/AMO-Tools-Desktop
|
opened
|
Add Clear Bins button
|
enhancement Process Cooling
|
User should be able to keep current weather data in memory but clear all bins
|
1.0
|
Add Clear Bins button - User should be able to keep current weather data in memory but clear all bins
|
process
|
add clear bins button user should be able to keep current weather data in memory but clear all bins
| 1
|
79,125
| 10,112,807,900
|
IssuesEvent
|
2019-07-30 15:23:24
|
ShailenNaidoo/hydrogen
|
https://api.github.com/repos/ShailenNaidoo/hydrogen
|
closed
|
Docs: Restructure and split up documentation
|
documentation enhancement
|
Separate instructions into Installation and Usage sections. I usually have `docs/installation.md` and `docs/usage.md` which I link to on Docs section of `README.md` to keep it light.
You can have configuration of package.json in Installation, website / project layout in a Web Dev or Static Site Development section and move the build commands to a Build or Run section.
|
1.0
|
Docs: Restructure and split up documentation - Separate instructions into Installation and Usage sections. I usually have `docs/installation.md` and `docs/usage.md` which I link to on Docs section of `README.md` to keep it light.
You can have configuration of package.json in Installation, website / project layout in a Web Dev or Static Site Development section and move the build commands to a Build or Run section.
|
non_process
|
docs restructure and split up documentation separate instructions into installation and usage sections i usually have docs installation md and docs usage md which i link to on docs section of readme md to keep it light you can have configuration of package json in installation website project layout in a web dev or static site development section and move the build commands to a build or run section
| 0
|
168,283
| 26,626,675,213
|
IssuesEvent
|
2023-01-24 15:00:34
|
patternfly/patternfly-design
|
https://api.github.com/repos/patternfly/patternfly-design
|
closed
|
Consider adjusting the visual design of PF's left nav menu.
|
Visual Design PF website
|
Consider better visually distinguishing the PF site left nav menu from the site's background.
- This is an enhancement suggestion based on the [design system competitive analysis.](https://docs.google.com/presentation/d/161XsZ6NrUSgnCkaA1e_96oxWY72VsJAa2SVmtIEhWJg/edit#slide=id.g547716335e_0_260)
- The idea is to have more distinction, inspired by [Carbon](https://carbondesignsystem.com/components/button/usage/), [Clarity](https://clarity.design/documentation/buttons), and [Pajamas](https://design.gitlab.com/components/button/).
- In its current implementation, the contrast between the menu and the site's background is low. There is not significant separation between the menu and the rest of the site.
|
1.0
|
Consider adjusting the visual design of PF's left nav menu. - Consider better visually distinguishing the PF site left nav menu from the site's background.
- This is an enhancement suggestion based on the [design system competitive analysis.](https://docs.google.com/presentation/d/161XsZ6NrUSgnCkaA1e_96oxWY72VsJAa2SVmtIEhWJg/edit#slide=id.g547716335e_0_260)
- The idea is to have more distinction, inspired by [Carbon](https://carbondesignsystem.com/components/button/usage/), [Clarity](https://clarity.design/documentation/buttons), and [Pajamas](https://design.gitlab.com/components/button/).
- In its current implementation, the contrast between the menu and the site's background is low. There is not significant separation between the menu and the rest of the site.
|
non_process
|
consider adjusting the visual design of pf s left nav menu consider better visually distinguishing the pf site left nav menu from the site s background this is an enhancement suggestion based on the the idea is to have more distinction inspired by and in its current implementation the contrast between the menu and the site s background is low there is not significant separation between the menu and the rest of the site
| 0
|
156,498
| 24,624,327,799
|
IssuesEvent
|
2022-10-16 10:12:11
|
roeszler/reabook
|
https://api.github.com/repos/roeszler/reabook
|
closed
|
User Story: Create index.html
|
feature ux design
|
As an **admin**, I can **produce an HTML template** so that **all users can land at the initial home page**.
|
1.0
|
User Story: Create index.html - As an **admin**, I can **produce an HTML template** so that **all users can land at the initial home page**.
|
non_process
|
user story create index html as an admin i can produce an html template so that all users can land at the initial home page
| 0
|
56,407
| 6,518,786,076
|
IssuesEvent
|
2017-08-28 09:38:31
|
hardware/mailserver
|
https://api.github.com/repos/hardware/mailserver
|
closed
|
Postfix / OpenSSL 1.1.0 "no shared cipher" and "alert handshake failure" errors with ECDSA certificates
|
1.1-latest Bug / issue Fixed Upstream
|
Well, the title exaggerates a bit, since the mail server is now working.
That said, I had a TLS error that was blocking the SMTP server; one of them, for example:
```
2017-08-24T19:23:25.515590+00:00 drogon postfix/submission/smtpd[714]: warning: TLS library problem: error:14094410:SSL routines:ssl3_read_bytes:sslv3 alert handshake failure:ssl/record/rec_layer_s3.c:1382:SSL alert number 40
```
The first thing I thought of was my LE certificate, which uses P-384, OCSP Must-Staple, and so on, so I figured the problem might come from there. I tried switching to self-signed by removing the volume, and it does indeed work now. That said, I think we should look into the problem rather than work around it?
Among other things, I have more errors in the logs, such as:
```2017-08-24T19:31:16.475679+00:00 drogon rspamd[532]: <455c3c>; lua; rbl.lua:93: error looking up 106.81.4.46.zen.spamhaus.org: server fail```
The same thing with `rspamd.com`.
Oh, and also my OSSEC notifications are sometimes rejected and/or flagged as spam, sometimes not; I think it's due to rate limiting, but I'm not sure.
|
1.0
|
Postfix / OpenSSL 1.1.0 "no shared cipher" and "alert handshake failure" errors with ECDSA certificates - Well, the title exaggerates a bit, since the mail server is now working.
That said, I had a TLS error that was blocking the SMTP server; one of them, for example:
```
2017-08-24T19:23:25.515590+00:00 drogon postfix/submission/smtpd[714]: warning: TLS library problem: error:14094410:SSL routines:ssl3_read_bytes:sslv3 alert handshake failure:ssl/record/rec_layer_s3.c:1382:SSL alert number 40
```
The first thing I thought of was my LE certificate, which uses P-384, OCSP Must-Staple, and so on, so I figured the problem might come from there. I tried switching to self-signed by removing the volume, and it does indeed work now. That said, I think we should look into the problem rather than work around it?
Among other things, I have more errors in the logs, such as:
```2017-08-24T19:31:16.475679+00:00 drogon rspamd[532]: <455c3c>; lua; rbl.lua:93: error looking up 106.81.4.46.zen.spamhaus.org: server fail```
The same thing with `rspamd.com`.
Oh, and also my OSSEC notifications are sometimes rejected and/or flagged as spam, sometimes not; I think it's due to rate limiting, but I'm not sure.
|
non_process
|
postfix openssl no shared cipher and alert handshake failure errors with ecdsa certificates bon le titre exagère un peu puisque maintenant le serveur mail est fonctionnel cela dit j ai eu une erreur avec tls qui bloquait le serveur smtp l une d elle est par exemple drogon postfix submission smtpd warning tls library problem error ssl routines read bytes alert handshake failure ssl record rec layer c ssl alert number la chose à laquelle j ai tout de suite pensé c est mon certificat le qui utilise p ocsp must staple tout ça tout ça donc je me suis dit que ça venait peut être de là j ai essayé de passer au self signed en virant le volume et ça marche effectivement désormais cela dit je pense qu on peut s intéresser au problème plutôt que de le contourner en vrac j ai d autres erreurs dans les logs telles que drogon rspamd lua rbl lua error looking up zen spamhaus org server fail la même chose avec rspamd com ah et aussi mes notifications de ossec sont parfois rejetées et ou marquées en spam parfois non je pense que c est dû au rate limiting mais pas sûr
| 0
|
57,439
| 3,082,008,160
|
IssuesEvent
|
2015-08-23 09:42:42
|
pavel-pimenov/flylinkdc-r5xx
|
https://api.github.com/repos/pavel-pimenov/flylinkdc-r5xx
|
closed
|
Remove CustomLocations.* and GeoIPCountryWhois.csv from the installers
|
bug imported Priority-Low
|
_From [Pavel.Pimenov@gmail.com](https://code.google.com/u/Pavel.Pimenov@gmail.com/) on March 23, 2013 09:03:21_
Rationale:
1. They become outdated quickly, and users of stable versions re-download them from the internet after installation.
2. The installer size will be smaller
_Original issue: http://code.google.com/p/flylinkdc/issues/detail?id=957_
|
1.0
|
Remove CustomLocations.* and GeoIPCountryWhois.csv from the installers - _From [Pavel.Pimenov@gmail.com](https://code.google.com/u/Pavel.Pimenov@gmail.com/) on March 23, 2013 09:03:21_
Rationale:
1. They become outdated quickly, and users of stable versions re-download them from the internet after installation.
2. The installer size will be smaller
_Original issue: http://code.google.com/p/flylinkdc/issues/detail?id=957_
|
non_process
|
убрать customlocations и geoipcountrywhois csv из инсталляторов from on march обоснование они устаревают быстро и пользователи стабильных версий после инсталляции докачивают их и интернета размер инсталлера будет меньше original issue
| 0
|
339,473
| 30,449,451,607
|
IssuesEvent
|
2023-07-16 05:07:36
|
CATcher-org/WATcher
|
https://api.github.com/repos/CATcher-org/WATcher
|
opened
|
Type error when running ng test
|
aspect-Testing category.Bug
|
**Describe the bug**
```
Subsequent variable declarations must have the same type.
Variable 'window' must be of type 'Window & typeof globalThis', but here has type 'Window'.ts(2403)
lib.dom.d.ts(27343, 13): 'window' was also declared here.
```
|
1.0
|
Type error when running ng test - **Describe the bug**
```
Subsequent variable declarations must have the same type.
Variable 'window' must be of type 'Window & typeof globalThis', but here has type 'Window'.ts(2403)
lib.dom.d.ts(27343, 13): 'window' was also declared here.
```
|
non_process
|
type error when running ng test describe the bug subsequent variable declarations must have the same type variable window must be of type window typeof globalthis but here has type window ts lib dom d ts window was also declared here
| 0
|
220,694
| 24,565,381,646
|
IssuesEvent
|
2022-10-13 02:10:19
|
dgee2/dgee2.github.io
|
https://api.github.com/repos/dgee2/dgee2.github.io
|
closed
|
CVE-2022-0722 (High) detected in parse-url-5.0.2.tgz - autoclosed
|
security vulnerability
|
## CVE-2022-0722 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>parse-url-5.0.2.tgz</b></p></summary>
<p>An advanced url parser supporting git urls too.</p>
<p>Library home page: <a href="https://registry.npmjs.org/parse-url/-/parse-url-5.0.2.tgz">https://registry.npmjs.org/parse-url/-/parse-url-5.0.2.tgz</a></p>
<p>Path to dependency file: /package.json</p>
<p>Path to vulnerable library: /node_modules/parse-url/package.json</p>
<p>
Dependency Hierarchy:
- gatsby-3.2.1.tgz (Root Library)
- gatsby-telemetry-2.2.0.tgz
- git-up-4.0.2.tgz
- :x: **parse-url-5.0.2.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/dgee2/dgee2.github.io/commit/d34d2613e60e2c1800648027985cd960c769bd0d">d34d2613e60e2c1800648027985cd960c769bd0d</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Exposure of Sensitive Information to an Unauthorized Actor in GitHub repository ionicabizau/parse-url prior to 7.0.0.
<p>Publish Date: 2022-06-27
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2022-0722>CVE-2022-0722</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: None
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://huntr.dev/bounties/2490ef6d-5577-4714-a4dd-9608251b4226">https://huntr.dev/bounties/2490ef6d-5577-4714-a4dd-9608251b4226</a></p>
<p>Release Date: 2022-06-27</p>
<p>Fix Resolution (parse-url): 6.0.3</p>
<p>Direct dependency fix Resolution (gatsby): 3.3.0-telemetry-test.33</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
True
|
CVE-2022-0722 (High) detected in parse-url-5.0.2.tgz - autoclosed - ## CVE-2022-0722 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>parse-url-5.0.2.tgz</b></p></summary>
<p>An advanced url parser supporting git urls too.</p>
<p>Library home page: <a href="https://registry.npmjs.org/parse-url/-/parse-url-5.0.2.tgz">https://registry.npmjs.org/parse-url/-/parse-url-5.0.2.tgz</a></p>
<p>Path to dependency file: /package.json</p>
<p>Path to vulnerable library: /node_modules/parse-url/package.json</p>
<p>
Dependency Hierarchy:
- gatsby-3.2.1.tgz (Root Library)
- gatsby-telemetry-2.2.0.tgz
- git-up-4.0.2.tgz
- :x: **parse-url-5.0.2.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/dgee2/dgee2.github.io/commit/d34d2613e60e2c1800648027985cd960c769bd0d">d34d2613e60e2c1800648027985cd960c769bd0d</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Exposure of Sensitive Information to an Unauthorized Actor in GitHub repository ionicabizau/parse-url prior to 7.0.0.
<p>Publish Date: 2022-06-27
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2022-0722>CVE-2022-0722</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: None
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://huntr.dev/bounties/2490ef6d-5577-4714-a4dd-9608251b4226">https://huntr.dev/bounties/2490ef6d-5577-4714-a4dd-9608251b4226</a></p>
<p>Release Date: 2022-06-27</p>
<p>Fix Resolution (parse-url): 6.0.3</p>
<p>Direct dependency fix Resolution (gatsby): 3.3.0-telemetry-test.33</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
non_process
|
cve high detected in parse url tgz autoclosed cve high severity vulnerability vulnerable library parse url tgz an advanced url parser supporting git urls too library home page a href path to dependency file package json path to vulnerable library node modules parse url package json dependency hierarchy gatsby tgz root library gatsby telemetry tgz git up tgz x parse url tgz vulnerable library found in head commit a href found in base branch master vulnerability details exposure of sensitive information to an unauthorized actor in github repository ionicabizau parse url prior to publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact high integrity impact none availability impact none for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution parse url direct dependency fix resolution gatsby telemetry test step up your open source security game with mend
| 0
|
16,469
| 21,391,945,732
|
IssuesEvent
|
2022-04-21 08:01:46
|
jgraley/inferno-cpp2v
|
https://api.github.com/repos/jgraley/inferno-cpp2v
|
closed
|
Be more like proper forward checking
|
Constraint Processing
|
After #422, move evaluation of solved expressions from an action after fail, to an action after success. Rather than determine what assignment we _should have_ used for Xi, instead use it to determine what assignment we _could_ use for Xi+1. Initially, there will only be one possibility, until #424.
|
1.0
|
Be more like proper forward checking - After #422, move evaluation of solved expressions from an action after fail, to an action after success. Rather than determine what assignment we _should have_ used for Xi, instead use it to determine what assignment we _could_ use for Xi+1. Initially, there will only be one possibility, until #424.
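For reference, textbook forward checking prunes the domains of the not-yet-assigned variables right after each successful assignment, failing early on a domain wipeout rather than after the next variable fails. A generic sketch (hypothetical helper, not inferno's actual constraint engine):

```python
def forward_check(domains, assigned_var, value, consistent):
    """After assigning `value` to `assigned_var`, prune every other
    variable's domain to the values consistent with that assignment.
    Returns the pruned domains, or None on a wipeout (early failure)."""
    pruned = {}
    for var, dom in domains.items():
        if var == assigned_var:
            pruned[var] = [value]
            continue
        keep = [v for v in dom if consistent(assigned_var, value, var, v)]
        if not keep:
            return None  # wipeout: fail before ever trying the next variable
        pruned[var] = keep
    return pruned

# All-different over two variables: assigning X1=1 prunes 1 from X2's domain.
neq = lambda var_a, a, var_b, b: a != b
print(forward_check({"X1": [1, 2], "X2": [1, 2]}, "X1", 1, neq))
```

This matches the issue's framing: the solved-expression evaluation runs as a success action on Xi to narrow the candidate assignments for Xi+1, instead of as a post-mortem after a failure.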
|
process
|
be more like proper forward checking after move evaluation of solved expressions from an action after fail to an action after success rather than determine what assignment we should have used for xi instead use it to determine what assignment we could use for xi initially there will only be one possibility until
| 1
|
3,619
| 6,657,713,599
|
IssuesEvent
|
2017-09-30 09:32:41
|
pwittchen/ReactiveNetwork
|
https://api.github.com/repos/pwittchen/ReactiveNetwork
|
opened
|
Release 0.12.2
|
release process
|
**Initial release notes**:
- updated API of `MarshmallowNetworkObservingStrategy` (TODO: write more details)
- ...
**Things to do**:
- ...
|
1.0
|
Release 0.12.2 - **Initial release notes**:
- updated API of `MarshmallowNetworkObservingStrategy` (TODO: write more details)
- ...
**Things to do**:
- ...
|
process
|
release initial release notes updated api of marshmallownetworkobservingstrategy todo write more details things to do
| 1
|
259,242
| 8,195,749,459
|
IssuesEvent
|
2018-08-31 07:26:25
|
alpheios-project/webextension
|
https://api.github.com/repos/alpheios-project/webextension
|
reopened
|
can't be reactivated after page reload on Chrome
|
bug priority_high waiting_verification
|
@monzug reports that with the issue-inflections-59 build after page reload it's no longer possible to activate Alpheios in Chrome. It still works fine in Firefox.
I suspect this might have been introduced by #102
Assigning to @irina060981 to investigate.
|
1.0
|
can't be reactivated after page reload on Chrome - @monzug reports that with the issue-inflections-59 build after page reload it's no longer possible to activate Alpheios in Chrome. It still works fine in Firefox.
I suspect this might have been introduced by #102
Assigning to @irina060981 to investigate.
|
non_process
|
can t be reactivated after page reload on chrome monzug reports that with the issue inflections build after page reload it s no longer possible to activate alpheios in chrome it still works fine in firefox i suspect this might have been introduced by assigning to to investigate
| 0
|
2,470
| 5,245,818,770
|
IssuesEvent
|
2017-02-01 06:47:39
|
AllenFang/react-bootstrap-table
|
https://api.github.com/repos/AllenFang/react-bootstrap-table
|
closed
|
onRowClick fires only when I click on a cell padding.
|
bug inprocess
|
Actually, the title says it all.
I am using onRowClick event to fire a function that will display an overlay.
Everything was working as expected until recently. I noticed that when I click on a row the onRowClick event doesn't fire.
After some investigation, I discovered that it's firing only when a cell padding is clicked but not its content.
Did anyone encounter the same issue?
Thanks for your time.
|
1.0
|
onRowClick fires only when I click on a cell padding. - Actually, the title says it all.
I am using onRowClick event to fire a function that will display an overlay.
Everything was working as expected until recently. I noticed that when I click on a row the onRowClick event doesn't fire.
After some investigation, I discovered that it's firing only when a cell padding is clicked but not its content.
Did anyone encounter the same issue?
Thanks for your time.
|
process
|
onrowclick fires only when i click on a cell padding actually the title says it all i am using onrowclick event to fire a function that will display an overlay everything was working as expected until recently i noticed that when i click on a row the onrowclick event doesn t fire after some investigation i discovered that it s firing only when a cell padding is clicked but not its content did anyone encounter the same issue thanks for your time
| 1
|
132,739
| 28,312,709,008
|
IssuesEvent
|
2023-04-10 16:50:17
|
joomla/joomla-cms
|
https://api.github.com/repos/joomla/joomla-cms
|
closed
|
[4.0] Feature Idea: Showon for filling Custom Fields
|
New Feature No Code Attached Yet J4 Issue
|
This is an idea I had for a while.
The ability to show the input form of a custom field based on the value of another custom field in the same input form.
For example, say you have a category of "Sports".
You have a custom field of type "List" called "Sport Type" that can be: football, basket, tennis, swimming, water-polo, etc...
You have a custom field of type "Text" called "Pool size". You want this custom field only show up for being filled in only if the sport type is an aquatic sport.
So when creating the custom field "Pool size", in the field options you could have something like:

You could add up to 3 showon conditions.
Then when editing an article inside the "Sports" category, the ability to enter a value for the "Pool size" field would only show up if you previously set the "Sport Type" to either "swimming" or "water-polo" as the other custom field value.

If the sport is set to "Football" for example, the "Pool size" field input would be collapsed.

This is just an example. But I'm sure you get the purpose of the showon thingy.
|
1.0
|
[4.0] Feature Idea: Showon for filling Custom Fields - This is an idea I had for a while.
The ability to show the input form of a custom field based on the value of another custom field in the same input form.
For example, say you have a category of "Sports".
You have a custom field of type "List" called "Sport Type" that can be: football, basket, tennis, swimming, water-polo, etc...
You have a custom field of type "Text" called "Pool size". You want this custom field only show up for being filled in only if the sport type is an aquatic sport.
So when creating the custom field "Pool size", in the field options you could have something like:

You could add up to 3 showon conditions.
Then when editing an article inside the "Sports" category, the ability to enter a value for the "Pool size" field would only show up if you previously set the "Sport Type" to either "swimming" or "water-polo" as the other custom field value.

If the sport is set to "Football" for example, the "Pool size" field input would be collapsed.

This is just an example. But I'm sure you get the purpose of the showon thingy.
|
non_process
|
feature idea showon for filling custom fields this is an idea i had for a while the ability to show the input form of a custom field based on the value of another custom field in the same input form for example say you have a category of sports you have a custom field of type list called sport type that can be football basket tennis swimming water polo etc you have a custom field of type text called pool size you want this custom field only show up for being filled in only if the sport type is an aquatic sport so when creating the custom field pool size in the field options you could have something like you could add up to showon conditions then when editing an article inside the sports category the ability to enter a value for the pool size field would only show up if you previously set the sport type to either swimming or water polo as the other custom field value if the sport is set to football for example the pool size field input would be collapsed this is just an example but i m sure you get the purpose of the showon thingy
| 0
|
5,568
| 5,814,535,651
|
IssuesEvent
|
2017-05-05 04:14:15
|
dotnet/corefx
|
https://api.github.com/repos/dotnet/corefx
|
closed
|
Failed assert in NegotiateStream_StreamToStream_Authentication_TargetName_Success
|
area-System.Net.Security bug os-windows
|
I hit this assert when I try to run the System.Net.Security tests (outerloop) locally:
```
System.Net.Security.Tests.NegotiateStreamStreamToStreamTest.NegotiateStream_StreamToStream_Authentication_TargetName_Success [FAIL]
System.Diagnostics.Debug+DebugAssertException : Unknown Interop.SecurityStatus value: -2146892976
   at System.Environment.GetStackTrace(Exception e, Boolean needFileInfo)
   at System.Environment.get_StackTrace()
   at System.Diagnostics.Debug.Assert(Boolean condition, String message, String detailMessage)
   at System.Net.SecurityStatusAdapterPal.GetSecurityStatusPalFromInterop(SecurityStatus win32SecurityStatus) in c:\Users\stoub\Source\repos\corefx\src\System.Net.Security\src\System\Net\SecurityStatusAdapterPal.Windows.cs:line 74
   at System.Net.Security.NegotiateStreamPal.InitializeSecurityContext(SafeFreeCredentials credentialsHandle, SafeDeleteContext& securityContext, String spn, ContextFlagsPal requestedContextFlags, SecurityBuffer[] inSecurityBufferArray, SecurityBuffer outSecurityBuffer, ContextFlagsPal& contextFlags) in c:\Users\stoub\Source\repos\corefx\src\System.Net.Security\src\System\Net\NegotiateStreamPal.Windows.cs:line 166
   at System.Net.NTAuthentication.GetOutgoingBlob(Byte[] incomingBlob, Boolean throwOnError, SecurityStatusPal& statusCode) in c:\Users\stoub\Source\repos\corefx\src\System.Net.Security\src\System\Net\NTAuthentication.cs:line 344
   at System.Net.Security.NegoState.GetOutgoingBlob(Byte[] incomingBlob, Exception& e) in c:\Users\stoub\Source\repos\corefx\src\System.Net.Security\src\System\Net\SecureProtocols\NegoState.cs:line 787
   at System.Net.Security.NegoState.StartSendBlob(Byte[] message, LazyAsyncResult lazyResult) in c:\Users\stoub\Source\repos\corefx\src\System.Net.Security\src\System\Net\SecureProtocols\NegoState.cs:line 443
   at System.Net.Security.NegoState.ProcessAuthentication(LazyAsyncResult lazyResult) in c:\Users\stoub\Source\repos\corefx\src\System.Net.Security\src\System\Net\SecureProtocols\NegoState.cs:line 356
   at System.Net.Security.NegotiateStream.BeginAuthenticateAsClient(NetworkCredential credential, ChannelBinding binding, String targetName, ProtectionLevel requiredProtectionLevel, TokenImpersonationLevel allowedImpersonationLevel, AsyncCallback asyncCallback, Object asyncState) in c:\Users\stoub\Source\repos\corefx\src\System.Net.Security\src\System\Net\SecureProtocols\NegotiateStream.cs:line 101
   at System.Net.Security.NegotiateStream.BeginAuthenticateAsClient(NetworkCredential credential, String targetName, AsyncCallback asyncCallback, Object asyncState) in c:\Users\stoub\Source\repos\corefx\src\System.Net.Security\src\System\Net\SecureProtocols\NegotiateStream.cs:line 60
   at System.Threading.Tasks.TaskFactory`1.FromAsyncImpl[TArg1,TArg2](Func`5 beginMethod, Func`2 endFunction, Action`1 endAction, TArg1 arg1, TArg2 arg2, Object state, TaskCreationOptions creationOptions)
   at System.Threading.Tasks.TaskFactory.FromAsync[TArg1,TArg2](Func`5 beginMethod, Action`1 endMethod, TArg1 arg1, TArg2 arg2, Object state)
   at System.Net.Security.NegotiateStream.AuthenticateAsClientAsync(NetworkCredential credential, String targetName) in c:\Users\stoub\Source\repos\corefx\src\System.Net.Security\src\System\Net\SecureProtocols\NegotiateStream.cs:line 183
   at System.Net.Security.Tests.NegotiateStreamStreamToStreamTest.NegotiateStream_StreamToStream_Authentication_TargetName_Success() in c:\Users\stoub\Source\repos\corefx\src\System.Net.Security\tests\FunctionalTests\NegotiateStreamStreamToStreamTest.cs:line 90
```
According to https://msdn.microsoft.com/en-us/library/windows/desktop/dd542646(v=vs.85).aspx, that status code is SEC_E_DOWNGRADE_DETECTED ("The system cannot contact a domain controller to service the authentication request. Please try again later."). Do we need to add this error and others to the table in https://github.com/dotnet/corefx/blob/2ff9b2a1e367a9694af6bdaf9856ea12f9ae13cd/src/System.Net.Security/src/System/Net/SecurityStatusAdapterPal.Windows.cs?
|
True
|
Failed assert in NegotiateStream_StreamToStream_Authentication_TargetName_Success - I hit this assert when I try to run the System.Net.Security tests (outerloop) locally:
```
System.Net.Security.Tests.NegotiateStreamStreamToStreamTest.NegotiateStream_StreamToStream_Authentication_TargetName_Success [FAIL]
System.Diagnostics.Debug+DebugAssertException : Unknown Interop.SecurityStatus value: -2146892976
   at System.Environment.GetStackTrace(Exception e, Boolean needFileInfo)
   at System.Environment.get_StackTrace()
   at System.Diagnostics.Debug.Assert(Boolean condition, String message, String detailMessage)
   at System.Net.SecurityStatusAdapterPal.GetSecurityStatusPalFromInterop(SecurityStatus win32SecurityStatus) in c:\Users\stoub\Source\repos\corefx\src\System.Net.Security\src\System\Net\SecurityStatusAdapterPal.Windows.cs:line 74
   at System.Net.Security.NegotiateStreamPal.InitializeSecurityContext(SafeFreeCredentials credentialsHandle, SafeDeleteContext& securityContext, String spn, ContextFlagsPal requestedContextFlags, SecurityBuffer[] inSecurityBufferArray, SecurityBuffer outSecurityBuffer, ContextFlagsPal& contextFlags) in c:\Users\stoub\Source\repos\corefx\src\System.Net.Security\src\System\Net\NegotiateStreamPal.Windows.cs:line 166
   at System.Net.NTAuthentication.GetOutgoingBlob(Byte[] incomingBlob, Boolean throwOnError, SecurityStatusPal& statusCode) in c:\Users\stoub\Source\repos\corefx\src\System.Net.Security\src\System\Net\NTAuthentication.cs:line 344
   at System.Net.Security.NegoState.GetOutgoingBlob(Byte[] incomingBlob, Exception& e) in c:\Users\stoub\Source\repos\corefx\src\System.Net.Security\src\System\Net\SecureProtocols\NegoState.cs:line 787
   at System.Net.Security.NegoState.StartSendBlob(Byte[] message, LazyAsyncResult lazyResult) in c:\Users\stoub\Source\repos\corefx\src\System.Net.Security\src\System\Net\SecureProtocols\NegoState.cs:line 443
   at System.Net.Security.NegoState.ProcessAuthentication(LazyAsyncResult lazyResult) in c:\Users\stoub\Source\repos\corefx\src\System.Net.Security\src\System\Net\SecureProtocols\NegoState.cs:line 356
   at System.Net.Security.NegotiateStream.BeginAuthenticateAsClient(NetworkCredential credential, ChannelBinding binding, String targetName, ProtectionLevel requiredProtectionLevel, TokenImpersonationLevel allowedImpersonationLevel, AsyncCallback asyncCallback, Object asyncState) in c:\Users\stoub\Source\repos\corefx\src\System.Net.Security\src\System\Net\SecureProtocols\NegotiateStream.cs:line 101
   at System.Net.Security.NegotiateStream.BeginAuthenticateAsClient(NetworkCredential credential, String targetName, AsyncCallback asyncCallback, Object asyncState) in c:\Users\stoub\Source\repos\corefx\src\System.Net.Security\src\System\Net\SecureProtocols\NegotiateStream.cs:line 60
   at System.Threading.Tasks.TaskFactory`1.FromAsyncImpl[TArg1,TArg2](Func`5 beginMethod, Func`2 endFunction, Action`1 endAction, TArg1 arg1, TArg2 arg2, Object state, TaskCreationOptions creationOptions)
   at System.Threading.Tasks.TaskFactory.FromAsync[TArg1,TArg2](Func`5 beginMethod, Action`1 endMethod, TArg1 arg1, TArg2 arg2, Object state)
   at System.Net.Security.NegotiateStream.AuthenticateAsClientAsync(NetworkCredential credential, String targetName) in c:\Users\stoub\Source\repos\corefx\src\System.Net.Security\src\System\Net\SecureProtocols\NegotiateStream.cs:line 183
   at System.Net.Security.Tests.NegotiateStreamStreamToStreamTest.NegotiateStream_StreamToStream_Authentication_TargetName_Success() in c:\Users\stoub\Source\repos\corefx\src\System.Net.Security\tests\FunctionalTests\NegotiateStreamStreamToStreamTest.cs:line 90
```
According to https://msdn.microsoft.com/en-us/library/windows/desktop/dd542646(v=vs.85).aspx, that status code is SEC_E_DOWNGRADE_DETECTED ("The system cannot contact a domain controller to service the authentication request. Please try again later."). Do we need to add this error and others to the table in https://github.com/dotnet/corefx/blob/2ff9b2a1e367a9694af6bdaf9856ea12f9ae13cd/src/System.Net.Security/src/System/Net/SecurityStatusAdapterPal.Windows.cs?
|
non_process
|
failed assert in negotiatestream streamtostream authentication targetname success i hit this assert when i try to run the system net security tests outerloop locally system net security tests negotiatestreamstreamtostreamtest negotiatestream streamtostream authentication targetname success system diagnostics debug debugassertexception unknown interop securitystatus value at system environment getstacktrace exception e boolean needfileinfo at system environment get stacktrace at system diagnostics debug assert boolean condition string message string detailmessage at system net securitystatusadapterpal getsecuritystatuspalfrominterop securitystatus in c users stoub source repos corefx src system net security src system net securitystatusadapterpal windows cs line at system net security negotiatestreampal initializesecuritycontext safefreecredentials credentialshandle safedeletecontext securitycontext strin g spn contextflagspal requestedcontextflags securitybuffer insecuritybufferarray securitybuffer outsecuritybuffer contextflagspal contextflags in c users stoub source repos corefx src system net security src system net negotiatestreampal windows cs line at system net ntauthentication getoutgoingblob byte incomingblob boolean throwonerror securitystatuspal statuscode in c users stoub source re pos corefx src system net security src system net ntauthentication cs line at system net security negostate getoutgoingblob byte incomingblob exception e in c users stoub source repos corefx src system net security sr c system net secureprotocols negostate cs line at system net security negostate startsendblob byte message lazyasyncresult lazyresult in c users stoub source repos corefx src system net secu rity src system net secureprotocols negostate cs line at system net security negostate processauthentication lazyasyncresult lazyresult in c users stoub source repos corefx src system net security src system net secureprotocols negostate cs line at system net security 
negotiatestream beginauthenticateasclient networkcredential credential channelbinding binding string targetname protection level requiredprotectionlevel tokenimpersonationlevel allowedimpersonationlevel asynccallback asynccallback object asyncstate in c users stoub source re pos corefx src system net security src system net secureprotocols negotiatestream cs line at system net security negotiatestream beginauthenticateasclient networkcredential credential string targetname asynccallback asynccallback objec t asyncstate in c users stoub source repos corefx src system net security src system net secureprotocols negotiatestream cs line at system threading tasks taskfactory fromasyncimpl func beginmethod func endfunction action endaction arg object state taskcreationoptions creationoptions at system threading tasks taskfactory fromasync func beginmethod action endmethod object state at system net security negotiatestream authenticateasclientasync networkcredential credential string targetname in c users stoub source repos cor efx src system net security src system net secureprotocols negotiatestream cs line at system net security tests negotiatestreamstreamtostreamtest negotiatestream streamtostream authentication targetname success in c users stoub source repos corefx src system net security tests functionaltests negotiatestreamstreamtostreamtest cs line according to that status code is sec e downgrade detected the system cannot contact a domain controller to service the authentication request please try again later do we need to add this error and others to the table in
| 0
|
48,548
| 12,215,502,185
|
IssuesEvent
|
2020-05-01 13:04:45
|
Catfriend1/syncthing-android
|
https://api.github.com/repos/Catfriend1/syncthing-android
|
opened
|
Lint warnings
|
build
|
```
Use FragmentContainerView instead of the <fragment> tag
../../src/main/res/layout/fragment_simple_versioning.xml:29: Replace the <fragment> tag with FragmentContainerView.
26 android:textSize="18sp"
27 android:textStyle="bold" />
28
29 <fragment
30 android:id="@+id/fragment"
31 android:name="com.nutomic.syncthingandroid.fragments.NumberPickerFragment"
32 android:layout_width="match_parent"
../../src/main/res/layout/fragment_staggered_versioning.xml:25: Replace the <fragment> tag with FragmentContainerView.
22 android:layout_margin="10dp"
23 android:layout_marginBottom="0dp" />
24
25 <fragment
26 android:id="@+id/fragment"
27 android:name="com.nutomic.syncthingandroid.fragments.NumberPickerFragment"
28 android:layout_width="match_parent"
../../src/main/res/layout/fragment_trashcan_versioning.xml:30: Replace the <fragment> tag with FragmentContainerView.
27 android:layout_margin="10dp"
28 android:textStyle="bold"/>
29
30 <fragment
31 android:id="@+id/fragment"
32 android:name="com.nutomic.syncthingandroid.fragments.NumberPickerFragment"
33 android:layout_width="match_parent"
```
|
1.0
|
Lint warnings - ```
Use FragmentContainerView instead of the <fragment> tag
../../src/main/res/layout/fragment_simple_versioning.xml:29: Replace the <fragment> tag with FragmentContainerView.
26 android:textSize="18sp"
27 android:textStyle="bold" />
28
29 <fragment
30 android:id="@+id/fragment"
31 android:name="com.nutomic.syncthingandroid.fragments.NumberPickerFragment"
32 android:layout_width="match_parent"
../../src/main/res/layout/fragment_staggered_versioning.xml:25: Replace the <fragment> tag with FragmentContainerView.
22 android:layout_margin="10dp"
23 android:layout_marginBottom="0dp" />
24
25 <fragment
26 android:id="@+id/fragment"
27 android:name="com.nutomic.syncthingandroid.fragments.NumberPickerFragment"
28 android:layout_width="match_parent"
../../src/main/res/layout/fragment_trashcan_versioning.xml:30: Replace the <fragment> tag with FragmentContainerView.
27 android:layout_margin="10dp"
28 android:textStyle="bold"/>
29
30 <fragment
31 android:id="@+id/fragment"
32 android:name="com.nutomic.syncthingandroid.fragments.NumberPickerFragment"
33 android:layout_width="match_parent"
```
|
non_process
|
lint warnings use fragmentcontainerview instead of the tag src main res layout fragment simple versioning xml replace the tag with fragmentcontainerview android textsize android textstyle bold fragment android id id fragment android name com nutomic syncthingandroid fragments numberpickerfragment android layout width match parent src main res layout fragment staggered versioning xml replace the tag with fragmentcontainerview android layout margin android layout marginbottom fragment android id id fragment android name com nutomic syncthingandroid fragments numberpickerfragment android layout width match parent src main res layout fragment trashcan versioning xml replace the tag with fragmentcontainerview android layout margin android textstyle bold fragment android id id fragment android name com nutomic syncthingandroid fragments numberpickerfragment android layout width match parent
| 0
|
166,209
| 6,299,959,535
|
IssuesEvent
|
2017-07-21 01:22:27
|
xcat2/xcat-core
|
https://api.github.com/repos/xcat2/xcat-core
|
closed
|
OpenBMC debug not available when we are configured for HIERARCHY
|
component: openbmc priority:high sprint2
|
For OpenBMC, we have used XCATBYPASS=1 to print out debug information and the rest calls... with Hierarchy we can't use it.
```
[root@fs3 xcat]# XCATBYPASS=1 rpower mid05tor12cn02 state
Warning: XCATBYPASS is set, skipping hierarchy call to sn01
```
Can we convert the openbmc code to use the `site.xcatdebugmode=1`, similar to how we are doing things for `[ipmi_debug]` calls?
Need a way to get some information to debug the openbmc code in a hierarchy environment.
|
1.0
|
OpenBMC debug not available when we are configured for HIERARCHY - For OpenBMC, we have used XCATBYPASS=1 to print out debug information and the rest calls... with Hierarchy we can't use it.
```
[root@fs3 xcat]# XCATBYPASS=1 rpower mid05tor12cn02 state
Warning: XCATBYPASS is set, skipping hierarchy call to sn01
```
Can we convert the openbmc code to use the `site.xcatdebugmode=1`, similar to how we are doing things for `[ipmi_debug]` calls?
Need a way to get some information to debug the openbmc code in a hierarchy environment.
|
non_process
|
openbmc debug not available when we are in configured for hierarchy for openbmc we have used xcatbypass to print out debug information and the rest calls with hierarchy we can t use it xcatbypass rpower state warning xcatbypass is set skipping hierarchy call to can we convert the openbmc code to use the site xcatdebugmode similar to how we are doing things for calls need a way to get some information about to deubg the openbmc code in a hierarchy environment
| 0
|
9,958
| 12,991,265,648
|
IssuesEvent
|
2020-07-23 03:00:18
|
chavarera/python-mini-projects
|
https://api.github.com/repos/chavarera/python-mini-projects
|
opened
|
find the most dominant color/tone in an image
|
Image-processing boring-stuffs
|
**problem statement**
find the most dominant color/tone in an image
|
1.0
|
find the most dominant color/tone in an image - **problem statement**
find the most dominant color/tone in an image
|
process
|
find the most dominant color tone in an image problem statement find the most dominant color tone in an image
| 1
|
1,326
| 3,875,038,080
|
IssuesEvent
|
2016-04-11 22:48:13
|
Lever-age/Leverage
|
https://api.github.com/repos/Lever-age/Leverage
|
closed
|
Set up organization
|
process/administration
|
On Wednesday we discussed setting up a GitHub organization and having the repo live there (instead of keeping it under @BayoAdejare's name).
|
1.0
|
Set up organization - On Wednesday we discussed setting up a GitHub organization and having the repo live there (instead of keeping it under @BayoAdejare's name).
|
process
|
set up organization on wednesday we discussed setting up a github organization and having the repo live there instead of keeping it under bayoadejare s name
| 1
|
59,565
| 3,114,353,107
|
IssuesEvent
|
2015-09-03 08:14:20
|
ceylon/ceylon.language
|
https://api.github.com/repos/ceylon/ceylon.language
|
closed
|
infinity.powerOfInteger(0) on JS
|
BUG high priority IN PROGRESS JS runtime
|
`infinity.powerOfInteger(0)` currently blows up. It should evaluate to `0.0`. Same with `(-infinity).powerOfInteger(0)`.
|
1.0
|
infinity.powerOfInteger(0) on JS - `infinity.powerOfInteger(0)` currently blows up. It should evaluate to `0.0`. Same with `(-infinity).powerOfInteger(0)`.
|
non_process
|
infinity powerofinteger on js infinity powerofinteger currently blows up it should evaluate to same with infinity powerofinteger
| 0
|
827,055
| 31,723,395,806
|
IssuesEvent
|
2023-09-10 17:13:19
|
rangav/thunder-client-support
|
https://api.github.com/repos/rangav/thunder-client-support
|
closed
|
Test for text responseData
|
bug Priority
|
Is it possible to write tests for ResponseBody in text format?
I can parse it with a JS custom function, but functions do not work for the ResponseBody field in tests.
|
1.0
|
Test for text responseData - Is it possible to write tests for ResponseBody in text format?
I can parse it with a JS custom function, but functions do not work for the ResponseBody field in tests.
|
non_process
|
test for text responsedata is it possible to writing tests for responsebody in text format i can parse it with js custom function but functions does not work for responsebody field in tests
| 0
|
521,486
| 15,109,993,490
|
IssuesEvent
|
2021-02-08 18:35:47
|
microsoft/responsible-ai-widgets
|
https://api.github.com/repos/microsoft/responsible-ai-widgets
|
closed
|
Error Analysis: Legend does not update for the heatmap
|
Error Analysis High Priority Pending verification to close bug
|
Seems to be happening only for the 1-dim case. Not sure:

|
1.0
|
Error Analysis: Legend does not update for the heatmap - Seems to be happening only for the 1-dim case. Not sure:

|
non_process
|
error analysis legend does not update for the heatmap seems to be happening only for the dim case not sure
| 0
|
700,944
| 24,079,669,111
|
IssuesEvent
|
2022-09-19 04:34:24
|
Faithful-Resource-Pack/Discord-Bot
|
https://api.github.com/repos/Faithful-Resource-Pack/Discord-Bot
|
closed
|
[Feature] Add support for Classic Faithful 32x PA Bedrock into /texture.
|
feature high priority easy
|
### Is your feature request related to a problem?
No.
### Describe the feature you'd like
Pretty much what it sounds like: when running /texture on CF 32x PA for a Bedrock-exclusive texture, it currently does not show up.
This should ideally function exactly as the main Faithful packs, and Classic Faithful 32x Jappa do currently.
### Notes
Pretty high priority but not an absolute must.
|
1.0
|
[Feature] Add support for Classic Faithful 32x PA Bedrock into /texture. - ### Is your feature request related to a problem?
No.
### Describe the feature you'd like
Pretty much what it sounds like: when running /texture on CF 32x PA for a Bedrock-exclusive texture, it currently does not show up.
This should ideally function exactly as the main Faithful packs, and Classic Faithful 32x Jappa do currently.
### Notes
Pretty high priority but not an absolute must.
|
non_process
|
add support for classic faithful pa bedrock into texture is your feature request related to a problem no describe the feature you d like pretty much what it sounds like when running texture on cf pa for a bedrock exclusive texture currently it does not show up this should ideally function exactly as the main faithful packs and classic faithful jappa do currently notes pretty high priority but not an absolute must
| 0
|
305,281
| 26,375,661,625
|
IssuesEvent
|
2023-01-12 02:04:07
|
longhorn/longhorn
|
https://api.github.com/repos/longhorn/longhorn
|
opened
|
[TEST-INFRA] Implement separate pipeline on Jenkins for negative test cases
|
kind/test
|
## What's the test to develop? Please describe
Implement separate pipeline on Jenkins to run negative test cases. The negative test cases can run weekly once or twice
## Describe the items of the test development (DoD, definition of done) you'd like
> Please use a task list for items on a separate line with a clickable checkbox https://docs.github.com/en/issues/tracking-your-work-with-issues/about-task-lists
- [ ] Implement separate pipeline on Jenkins for negative test cases
## Additional context
|
1.0
|
[TEST-INFRA] Implement separate pipeline on Jenkins for negative test cases - ## What's the test to develop? Please describe
Implement separate pipeline on Jenkins to run negative test cases. The negative test cases can run weekly once or twice
## Describe the items of the test development (DoD, definition of done) you'd like
> Please use a task list for items on a separate line with a clickable checkbox https://docs.github.com/en/issues/tracking-your-work-with-issues/about-task-lists
- [ ] Implement separate pipeline on Jenkins for negative test cases
## Additional context
|
non_process
|
implement separate pipeline on jenkins for negative test cases what s the test to develop please describe implement separate pipeline on jenkins to run negative test cases the negative test cases can run weekly once or twice describe the items of the test development dod definition of done you d like please use a task list for items on a separate line with a clickable checkbox implement separate pipeline on jenkins for negative test cases additional context
| 0
|
11,985
| 14,737,129,545
|
IssuesEvent
|
2021-01-07 00:57:20
|
kdjstudios/SABillingGitlab
|
https://api.github.com/repos/kdjstudios/SABillingGitlab
|
closed
|
Keener - trying to generate an invoice between billing cycles
|
anc-external anc-ops anc-process anp-important ant-bug ant-support has attachment
|
In GitLab by @kdjstudios on Apr 18, 2018, 09:33
**Submitted by:** Gaylan Garrett <gaylan@keenercom.net>
**Helpdesk:** http://www.servicedesk.answernet.com/profiles/ticket/2018-04-17-83231
**Server:** Hosted
**Client/Site:** Keener
**Account:** All
**Issue:**
I was trying to generate an invoice between billing cycles and I get this error. We were able to create invoices between billing cycles before the merge but now we cannot. This was a feature that we absolutely loved and needed.

|
1.0
|
Keener - trying to generate an invoice between billing cycles - In GitLab by @kdjstudios on Apr 18, 2018, 09:33
**Submitted by:** Gaylan Garrett <gaylan@keenercom.net>
**Helpdesk:** http://www.servicedesk.answernet.com/profiles/ticket/2018-04-17-83231
**Server:** Hosted
**Client/Site:** Keener
**Account:** All
**Issue:**
I was trying to generate an invoice between billing cycles and I get this error. We were able to create invoices between billing cycles before the merge but now we cannot. This was a feature that we absolutely loved and needed.

|
process
|
keener trying to generate an invoice between billing cycles in gitlab by kdjstudios on apr submitted by gaylan garrett helpdesk server hosted client site keener account all issue i was trying to generate an invoice between billing cycles and i get this error we were able to create invoices between billing cycles before the merge but now we cannot this was a feature that we absolutely loved and needed uploads image png
| 1
|
14,248
| 17,185,235,254
|
IssuesEvent
|
2021-07-16 00:06:28
|
filecoin-project/lotus
|
https://api.github.com/repos/filecoin-project/lotus
|
closed
|
Deal retrieval failed
|
area/markets epic/seal-unseal-process hint/needs-author-input kind/bug kind/stale
|
> Note: For security-related bugs/issues, please follow the [security policy](https://github.com/filecoin-project/lotus/security/policy).
**Describe the bug**
Client failed to retrieve deal from miner
**Version (run `lotus version`):**
```
$ lotus version
Daemon: 1.9.0-rc3+mainnet+git.0398e556d+api1.2.0
Local: lotus version 1.9.0-rc3+mainnet+git.0398e556d
```
**To Reproduce**
Steps to reproduce the behavior:
```
lotus client retrieve --miner f0447183 bafykbzacedhlaek5flnggsdojwwv2v4nfk3jq2ek5vlxujpz2dupq6dd44ohw test0511
```
**Expected behavior**
Deal retrieval complete
**Logs**
Client stuck at here for a very long time:
```
> Recv: 0 B, Paid 0 FIL, ClientEventOpen (DealStatusNew)
> Recv: 0 B, Paid 0 FIL, ClientEventDealProposed (DealStatusWaitForAcceptance)
> Recv: 0 B, Paid 0 FIL, ClientEventDealAccepted (DealStatusAccepted)
> Recv: 0 B, Paid 0 FIL, ClientEventPaymentChannelSkip (DealStatusOngoing)
```
Miner logs:
```
May 11 21:23:19 lotus-miner[8387]: 2021-05-11T21:23:19.464Z INFO markets loggers/loggers.go:30 retrieval provider event {"name": "ProviderEventDataTransferError", "deal ID": "1620766516060596709", "receiver": "12D3KooWQC7pYowGqCodbtx5mQPVVwyQQxrXfXbCjsuLtfeZ2W1g", "state": "DealStatusErrored", "message": "deal data transfer failed: data transfer channel 12D3KooWQC7pYowGqCodbtx5mQPVVwyQQxrXfXbCjsuLtfeZ2W1g-12D3KooWLHJ46AAKHWaWzs2SMizgqWcgGfiuZphMthUeitgHFNd3-1620766516060126252 failed to transfer data: graphsync response to peer 12D3KooWQC7pYowGqCodbtx5mQPVVwyQQxrXfXbCjsuLtfeZ2W1g did not complete: response status code RequestFailedUnknown"}
```
List retrieval-deals on miner:
```
$ lotus-miner retrieval-deals list
tx5mQPVVwyQQxrXfXbCjsuLtfeZ2W1g-12D3KooWLHJ46AAKHWaWzs2SMizgqWcgGfiuZphMthUeitgHFNd3-1620766516060126251 failed to transfer data: graphsync response to peer 12D3KooWQC7pYowGqCodbtx5mQPVVwyQQxrXfXbCjsuLtfeZ2W1g did not complete: response status code RequestFailedUnknown
```
|
1.0
|
Deal retrieval failed - > Note: For security-related bugs/issues, please follow the [security policy](https://github.com/filecoin-project/lotus/security/policy).
**Describe the bug**
Client failed to retrieve deal from miner
**Version (run `lotus version`):**
```
$ lotus version
Daemon: 1.9.0-rc3+mainnet+git.0398e556d+api1.2.0
Local: lotus version 1.9.0-rc3+mainnet+git.0398e556d
```
**To Reproduce**
Steps to reproduce the behavior:
```
lotus client retrieve --miner f0447183 bafykbzacedhlaek5flnggsdojwwv2v4nfk3jq2ek5vlxujpz2dupq6dd44ohw test0511
```
**Expected behavior**
Deal retrieval complete
**Logs**
Client stuck at here for a very long time:
```
> Recv: 0 B, Paid 0 FIL, ClientEventOpen (DealStatusNew)
> Recv: 0 B, Paid 0 FIL, ClientEventDealProposed (DealStatusWaitForAcceptance)
> Recv: 0 B, Paid 0 FIL, ClientEventDealAccepted (DealStatusAccepted)
> Recv: 0 B, Paid 0 FIL, ClientEventPaymentChannelSkip (DealStatusOngoing)
```
Miner logs:
```
May 11 21:23:19 lotus-miner[8387]: 2021-05-11T21:23:19.464Z INFO markets loggers/loggers.go:30 retrieval provider event {"name": "ProviderEventDataTransferError", "deal ID": "1620766516060596709", "receiver": "12D3KooWQC7pYowGqCodbtx5mQPVVwyQQxrXfXbCjsuLtfeZ2W1g", "state": "DealStatusErrored", "message": "deal data transfer failed: data transfer channel 12D3KooWQC7pYowGqCodbtx5mQPVVwyQQxrXfXbCjsuLtfeZ2W1g-12D3KooWLHJ46AAKHWaWzs2SMizgqWcgGfiuZphMthUeitgHFNd3-1620766516060126252 failed to transfer data: graphsync response to peer 12D3KooWQC7pYowGqCodbtx5mQPVVwyQQxrXfXbCjsuLtfeZ2W1g did not complete: response status code RequestFailedUnknown"}
```
List retrieval-deals on miner:
```
$ lotus-miner retrieval-deals list
tx5mQPVVwyQQxrXfXbCjsuLtfeZ2W1g-12D3KooWLHJ46AAKHWaWzs2SMizgqWcgGfiuZphMthUeitgHFNd3-1620766516060126251 failed to transfer data: graphsync response to peer 12D3KooWQC7pYowGqCodbtx5mQPVVwyQQxrXfXbCjsuLtfeZ2W1g did not complete: response status code RequestFailedUnknown
```
|
process
|
deal retrieval failed note for security related bugs issues please follow the describe the bug client failed to retrieve deal from miner version run lotus version lotus version daemon mainnet git local lotus version mainnet git to reproduce steps to reproduce the behavior lotus client retrieve miner expected behavior deal retrieval complete logs client stuck at here for a very long time recv b paid fil clienteventopen dealstatusnew recv b paid fil clienteventdealproposed dealstatuswaitforacceptance recv b paid fil clienteventdealaccepted dealstatusaccepted recv b paid fil clienteventpaymentchannelskip dealstatusongoing miner logs may lotus miner info markets loggers loggers go retrieval provider event name providereventdatatransfererror deal id receiver state dealstatuserrored message deal data transfer failed data transfer channel failed to transfer data graphsync response to peer did not complete response status code requestfailedunknown list retrieval deals on miner lotus miner retrieval deals list failed to transfer data graphsync response to peer did not complete response status code requestfailedunknown
| 1
|
15,323
| 19,433,139,019
|
IssuesEvent
|
2021-12-21 14:15:59
|
threefoldtech/tfchain
|
https://api.github.com/repos/threefoldtech/tfchain
|
closed
|
Staking: fork frame V3 staking pallet and implement threefold staking
|
process_wontfix
|
We need to fork the frame V3 staking pallet and implement our own staking rules. Currently the staking pallet relies on an inflation curve; we can modify the payout code to reward validators with funds from a central account (staking pool account)
|
1.0
|
Staking: fork frame V3 staking pallet and implement threefold staking - We need to fork the frame V3 staking pallet and implement our own staking rules. Currently the staking pallet relies on an inflation curve; we can modify the payout code to reward validators with funds from a central account (staking pool account)
|
process
|
staking fork frame staking pallet and implement threefold staking we need to fork frame staking pallet and implement our own staking rules currently the staking pallet relies on a inflation curve we can modify the payout code to reward validators with funds from a central account staking pool account
| 1
|
264,670
| 28,212,202,386
|
IssuesEvent
|
2023-04-05 05:55:53
|
hshivhare67/platform_frameworks_av_AOSP10_r33
|
https://api.github.com/repos/hshivhare67/platform_frameworks_av_AOSP10_r33
|
closed
|
CVE-2022-20228 (Medium) detected in avandroid-10.0.0_r33 - autoclosed
|
Mend: dependency security vulnerability
|
## CVE-2022-20228 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>avandroid-10.0.0_r33</b></p></summary>
<p>
<p>Library home page: <a href=https://android.googlesource.com/platform/frameworks/av>https://android.googlesource.com/platform/frameworks/av</a></p>
<p>Found in base branch: <b>main</b></p></p>
</details>
</p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Source Files (1)</summary>
<p></p>
<p>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/media/codec2/vndk/C2AllocatorIon.cpp</b>
</p>
</details>
<p></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
In various functions of C2DmaBufAllocator.cpp, there is a possible memory corruption due to a use after free. This could lead to remote information disclosure with no additional execution privileges needed. User interaction is needed for exploitation. Product: Android. Versions: Android-12, Android-12L. Android ID: A-213850092
<p>Publish Date: 2022-07-13
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2022-20228>CVE-2022-20228</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>6.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: Required
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: None
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://android.googlesource.com/platform/frameworks/av/+/a12f8a065c081b7aa2d7aaa1df79498c282c53d2">https://android.googlesource.com/platform/frameworks/av/+/a12f8a065c081b7aa2d7aaa1df79498c282c53d2</a></p>
<p>Release Date: 2022-07-13</p>
<p>Fix Resolution: android-12.1.0_r9</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
True
|
CVE-2022-20228 (Medium) detected in avandroid-10.0.0_r33 - autoclosed - ## CVE-2022-20228 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>avandroid-10.0.0_r33</b></p></summary>
<p>
<p>Library home page: <a href=https://android.googlesource.com/platform/frameworks/av>https://android.googlesource.com/platform/frameworks/av</a></p>
<p>Found in base branch: <b>main</b></p></p>
</details>
</p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Source Files (1)</summary>
<p></p>
<p>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/media/codec2/vndk/C2AllocatorIon.cpp</b>
</p>
</details>
<p></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
In various functions of C2DmaBufAllocator.cpp, there is a possible memory corruption due to a use after free. This could lead to remote information disclosure with no additional execution privileges needed. User interaction is needed for exploitation. Product: Android. Versions: Android-12, Android-12L. Android ID: A-213850092
<p>Publish Date: 2022-07-13
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2022-20228>CVE-2022-20228</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>6.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: Required
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: None
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://android.googlesource.com/platform/frameworks/av/+/a12f8a065c081b7aa2d7aaa1df79498c282c53d2">https://android.googlesource.com/platform/frameworks/av/+/a12f8a065c081b7aa2d7aaa1df79498c282c53d2</a></p>
<p>Release Date: 2022-07-13</p>
<p>Fix Resolution: android-12.1.0_r9</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
non_process
|
cve medium detected in avandroid autoclosed cve medium severity vulnerability vulnerable library avandroid library home page a href found in base branch main vulnerable source files media vndk cpp vulnerability details in various functions of cpp there is a possible memory corruption due to a use after free this could lead to remote information disclosure with no additional execution privileges needed user interaction is needed for exploitation product androidversions android android id a publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction required scope unchanged impact metrics confidentiality impact high integrity impact none availability impact none for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution android step up your open source security game with mend
| 0
|
3,704
| 15,115,668,787
|
IssuesEvent
|
2021-02-09 05:04:30
|
geolexica/geolexica-server
|
https://api.github.com/repos/geolexica/geolexica-server
|
opened
|
Extract Jbuilder tag to a separate gem
|
maintainability
|
The Liquid tag which has been introduced in #157 is generally useful and should be extracted to a separate gem.
|
True
|
Extract Jbuilder tag to a separate gem - The Liquid tag which has been introduced in #157 is generally useful and should be extracted to a separate gem.
|
non_process
|
extract jbuilder tag to a separate gem the liquid tag which has been introduced in is generally useful and should be extracted to a separate gem
| 0
|
17,915
| 23,905,908,605
|
IssuesEvent
|
2022-09-09 00:43:51
|
maticnetwork/miden
|
https://api.github.com/repos/maticnetwork/miden
|
closed
|
Potential stack overflow table refactoring
|
processor air v0.3
|
Currently, we limit the initial and the final stack depth to exactly 16. The latter part is especially annoying because we need to drop items deep in the stack before a program finishes executing. This can lead to user confusion (as in #342) but also is really annoying for tests (where we need to add a bunch of extra operations at the end to ensure the stack is in the right state).
We can relax these limitations by setting initial/final values of the running product column controlling stack overflow table to values which are not always $1$. This would mean two things:
1. We could initialize the stack with an arbitrary number of values. We'll need to get a little creative on how to build the state of the overflow table because we use `clk` values for backlinks - but it shouldn't be too difficult (we could just use "negative" values).
2. Stack depth at the end of execution could be arbitrary as well. This would get rid of the annoying issues with tests etc.
The downsides of this approach are:
1. It may encourage people to use a lot of public inputs, which are expensive for the verifier.
2. Not cleaning up the stack at the end may lead to some information leaking (relevant only for private transactions). Though, this could still be mitigated by calling `finalize_stack` at the end, if needed.
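The running-product mechanics described above can be sketched concretely. The following is a hypothetical illustration only, not Miden's actual constraint system: a running product over a small prime field, where relaxing the boundary values from 1 to arbitrary products is what would permit a non-empty initial overflow table and a non-standard final stack depth.

```python
# Hypothetical sketch (invented for illustration; not Miden's real
# overflow-table constraints). Rows inserted into the table multiply a
# running product; rows removed divide it back out via a modular inverse.
P = 2**31 - 1  # a small Mersenne prime, chosen only for readability

def running_product(inserts, removes, init=1):
    """Fold overflow-table insertions/removals into one field element.

    With init == 1 and matching inserts/removes, the product returns to 1
    (the classic boundary condition). Allowing init (and the final value)
    to differ from 1 encodes a stack that starts or ends at another depth.
    """
    p = init % P
    for row in inserts:
        p = (p * row) % P            # row added to the overflow table
    for row in removes:
        p = (p * pow(row, P - 2, P)) % P  # row removed (Fermat inverse)
    return p
```

Here the "negative clk" trick from point 1 would simply show up as extra factors folded into the initial value of the column.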
|
1.0
|
Potential stack overflow table refactoring - Currently, we limit the initial and the final stack depth to exactly 16. The latter part is especially annoying because we need to drop items deep in the stack before a program finishes executing. This can lead to user confusion (as in #342) but also is really annoying for tests (where we need to add a bunch of extra operations at the end to ensure the stack is in the right state).
We can relax these limitations by setting initial/final values of the running product column controlling stack overflow table to values which are not always $1$. This would mean two things:
1. We could initialize the stack with an arbitrary number of values. We'll need to get a little creative on how to build the state of the overflow table because we use `clk` values for backlinks - but it shouldn't be too difficult (we could just use "negative" values).
2. Stack depth at the end of execution could be arbitrary as well. This would get rid of the annoying issues with tests etc.
The downsides of this approach are:
1. It may encourage people to use a lot of public inputs, which are expensive for the verifier.
2. Not cleaning up the stack at the end may lead to some information leaking (relevant only for private transactions). Though, this could still be mitigated by calling `finalize_stack` at the end, if needed.
|
process
|
potential stack overflow table refactoring currently we limit the initial and the final stack depth to exactly the latter part is especially annoying because we need to drop items deep in the stack before a program finishes executing this can lead to user confusion as in but also is really annoying for tests where we need to add a bunch of extra operations at the end to ensure the stack is in the right state we can relax these limitations by setting initial final values of the running product column controlling stack overflow table to values which are not always this would mean two things we could initialize the stack with an arbitrary number of values we ll need to get a little creative on how to build the state of the overflow table because we use clk values for backlinks but it shouldn t be too difficult we could just use negative values stack depth at the end of execution could be arbitrary as well this would get rid of the annoying issues with tests etc the downside of this approach are it may encourage people to use a lot of public inputs which are expensive for the verifier not cleaning up the stack at the end may lead to some information leaking relevant only for private transactions though this could still be mitigated by calling finalize stack at the end if needed
| 1
|
6,033
| 8,839,277,369
|
IssuesEvent
|
2019-01-06 04:00:53
|
PennyDreadfulMTG/perf-reports
|
https://api.github.com/repos/PennyDreadfulMTG/perf-reports
|
closed
|
500 error at /api/gitpull
|
CalledProcessError decksite wontfix
|
Command '['git', 'fetch']' returned non-zero exit status 1.
Reported on decksite by logged_out```
--------------------------------------------------------------------------------
Request Method: POST
Path: /api/gitpull?
Cookies: {}
Endpoint: process_github_webhook
View Args: {}
Person: logged_out
Referrer: None
Request Data: {}
Host: pennydreadfulmagic.com
Accept-Encoding: gzip
Cf-Ipcountry: US
X-Forwarded-For: 192.30.252.39, 172.69.62.126
Cf-Ray: 4722702dcaa356c9-IAD
X-Forwarded-Proto: https
Cf-Visitor: {"scheme":"https"}
Accept: */*
User-Agent: GitHub-Hookshot/c48e715
X-Github-Event: push
X-Github-Delivery: 8a8b737e-dcae-11e8-9f71-029a3b0162a4
Content-Type: application/json
Cf-Connecting-Ip: 192.30.252.39
X-Forwarded-Host: pennydreadfulmagic.com
X-Forwarded-Server: pennydreadfulmagic.com
Connection: Keep-Alive
Content-Length: 9899
```
--------------------------------------------------------------------------------
CalledProcessError
Command '['git', 'fetch']' returned non-zero exit status 1.
Stack Trace:
```
File "/home/discord/.local/lib/python3.6/site-packages/flask/app.py", line 2309, in __call__
return self.wsgi_app(environ, start_response)
File "/home/discord/.local/lib/python3.6/site-packages/flask/app.py", line 2295, in wsgi_app
response = self.handle_exception(e)
File "/home/discord/.local/lib/python3.6/site-packages/flask/app.py", line 2292, in wsgi_app
response = self.full_dispatch_request()
File "/home/discord/.local/lib/python3.6/site-packages/flask/app.py", line 1815, in full_dispatch_request
rv = self.handle_user_exception(e)
File "/home/discord/.local/lib/python3.6/site-packages/flask/app.py", line 1718, in handle_user_exception
reraise(exc_type, exc_value, tb)
File "/home/discord/.local/lib/python3.6/site-packages/flask/_compat.py", line 35, in reraise
raise value
File "/home/discord/.local/lib/python3.6/site-packages/flask/app.py", line 1813, in full_dispatch_request
rv = self.dispatch_request()
File "/home/discord/.local/lib/python3.6/site-packages/flask/app.py", line 1799, in dispatch_request
return self.view_functions[rule.endpoint](**req.view_args)
File "./shared_web/api.py", line 17, in process_github_webhook
subprocess.check_output(['git', 'fetch'])
File "/usr/lib64/python3.6/subprocess.py", line 336, in check_output
**kwargs).stdout
File "/usr/lib64/python3.6/subprocess.py", line 418, in run
output=stdout, stderr=stderr)
```
|
1.0
|
500 error at /api/gitpull - Command '['git', 'fetch']' returned non-zero exit status 1.
Reported on decksite by logged_out```
--------------------------------------------------------------------------------
Request Method: POST
Path: /api/gitpull?
Cookies: {}
Endpoint: process_github_webhook
View Args: {}
Person: logged_out
Referrer: None
Request Data: {}
Host: pennydreadfulmagic.com
Accept-Encoding: gzip
Cf-Ipcountry: US
X-Forwarded-For: 192.30.252.39, 172.69.62.126
Cf-Ray: 4722702dcaa356c9-IAD
X-Forwarded-Proto: https
Cf-Visitor: {"scheme":"https"}
Accept: */*
User-Agent: GitHub-Hookshot/c48e715
X-Github-Event: push
X-Github-Delivery: 8a8b737e-dcae-11e8-9f71-029a3b0162a4
Content-Type: application/json
Cf-Connecting-Ip: 192.30.252.39
X-Forwarded-Host: pennydreadfulmagic.com
X-Forwarded-Server: pennydreadfulmagic.com
Connection: Keep-Alive
Content-Length: 9899
```
--------------------------------------------------------------------------------
CalledProcessError
Command '['git', 'fetch']' returned non-zero exit status 1.
Stack Trace:
```
File "/home/discord/.local/lib/python3.6/site-packages/flask/app.py", line 2309, in __call__
return self.wsgi_app(environ, start_response)
File "/home/discord/.local/lib/python3.6/site-packages/flask/app.py", line 2295, in wsgi_app
response = self.handle_exception(e)
File "/home/discord/.local/lib/python3.6/site-packages/flask/app.py", line 2292, in wsgi_app
response = self.full_dispatch_request()
File "/home/discord/.local/lib/python3.6/site-packages/flask/app.py", line 1815, in full_dispatch_request
rv = self.handle_user_exception(e)
File "/home/discord/.local/lib/python3.6/site-packages/flask/app.py", line 1718, in handle_user_exception
reraise(exc_type, exc_value, tb)
File "/home/discord/.local/lib/python3.6/site-packages/flask/_compat.py", line 35, in reraise
raise value
File "/home/discord/.local/lib/python3.6/site-packages/flask/app.py", line 1813, in full_dispatch_request
rv = self.dispatch_request()
File "/home/discord/.local/lib/python3.6/site-packages/flask/app.py", line 1799, in dispatch_request
return self.view_functions[rule.endpoint](**req.view_args)
File "./shared_web/api.py", line 17, in process_github_webhook
subprocess.check_output(['git', 'fetch'])
File "/usr/lib64/python3.6/subprocess.py", line 336, in check_output
**kwargs).stdout
File "/usr/lib64/python3.6/subprocess.py", line 418, in run
output=stdout, stderr=stderr)
```
|
process
|
error at api gitpull command returned non zero exit status reported on decksite by logged out request method post path api gitpull cookies endpoint process github webhook view args person logged out referrer none request data host pennydreadfulmagic com accept encoding gzip cf ipcountry us x forwarded for cf ray iad x forwarded proto https cf visitor scheme https accept user agent github hookshot x github event push x github delivery dcae content type application json cf connecting ip x forwarded host pennydreadfulmagic com x forwarded server pennydreadfulmagic com connection keep alive content length calledprocesserror command returned non zero exit status stack trace file home discord local lib site packages flask app py line in call return self wsgi app environ start response file home discord local lib site packages flask app py line in wsgi app response self handle exception e file home discord local lib site packages flask app py line in wsgi app response self full dispatch request file home discord local lib site packages flask app py line in full dispatch request rv self handle user exception e file home discord local lib site packages flask app py line in handle user exception reraise exc type exc value tb file home discord local lib site packages flask compat py line in reraise raise value file home discord local lib site packages flask app py line in full dispatch request rv self dispatch request file home discord local lib site packages flask app py line in dispatch request return self view functions req view args file shared web api py line in process github webhook subprocess check output file usr subprocess py line in check output kwargs stdout file usr subprocess py line in run output stdout stderr stderr
| 1
|
699,390
| 24,015,315,688
|
IssuesEvent
|
2022-09-14 23:38:45
|
yugabyte/yugabyte-db
|
https://api.github.com/repos/yugabyte/yugabyte-db
|
closed
|
[YSQL] Import commit to report the progress of copy command
|
kind/bug area/ysql priority/medium
|
Jira Link: [[DB-355]](https://yugabyte.atlassian.net/browse/DB-355)
### Description
PG14 added the following view to track the progress of the copy command
[COPY-PROGRESS-REPORTING](https://www.postgresql.org/docs/14/progress-reporting.html#COPY-PROGRESS-REPORTING)
**PG commit -**
- [PG git commit](https://git.postgresql.org/gitweb/?p=postgresql.git;a=commit;h=8a4f618e7ae3cb11b0b37d0f06f05c8ff905833f)
- [postgres-14-monitoring-copy](https://paquier.xyz/postgresql-2/postgres-14-monitoring-copy/)
We should import this commit in YB to provide visibility into the progress of the copy command.
**YugabyteDB Specific enhancements:**
- **tuples_processed** are the tuples which are already persisted in a transaction during the copy command.
- Retains copy command information in the view after the copy has finished.
- Add a **status** column to indicate the status of the copy command.
ColumnName | Possible Values | Description
-- | -- | --
Status | Completed / In-progress / Failed | status of the copy command |
[DB-355]: https://yugabyte.atlassian.net/browse/DB-355?atlOrigin=eyJpIjoiNWRkNTljNzYxNjVmNDY3MDlhMDU5Y2ZhYzA5YTRkZjUiLCJwIjoiZ2l0aHViLWNvbS1KU1cifQ
|
1.0
|
[YSQL] Import commit to report the progress of copy command - Jira Link: [[DB-355]](https://yugabyte.atlassian.net/browse/DB-355)
### Description
PG14 added the following view to track the progress of the copy command
[COPY-PROGRESS-REPORTING](https://www.postgresql.org/docs/14/progress-reporting.html#COPY-PROGRESS-REPORTING)
**PG commit -**
- [PG git commit](https://git.postgresql.org/gitweb/?p=postgresql.git;a=commit;h=8a4f618e7ae3cb11b0b37d0f06f05c8ff905833f)
- [postgres-14-monitoring-copy](https://paquier.xyz/postgresql-2/postgres-14-monitoring-copy/)
We should import this commit in YB to provide visibility into the progress of the copy command.
**YugabyteDB Specific enhancements:**
- **tuples_processed** are the tuples which are already persisted in a transaction during the copy command.
- Retains copy command information in the view after the copy has finished.
- Add a **status** column to indicate the status of the copy command.
ColumnName | Possible Values | Description
-- | -- | --
Status | Completed / In-progress / Failed | status of the copy command |
[DB-355]: https://yugabyte.atlassian.net/browse/DB-355?atlOrigin=eyJpIjoiNWRkNTljNzYxNjVmNDY3MDlhMDU5Y2ZhYzA5YTRkZjUiLCJwIjoiZ2l0aHViLWNvbS1KU1cifQ
|
non_process
|
import commit to report the progress of copy command jira link description added the following view to track the progress of the copy command pg commit we should import this commit in yb as to provide visibility for progress of the copy command yugabytedb specific enhancements tuples processed are the tuples which are already persisted in a transaction during the copy command retains copy command information in the view after the copy has finished add a status column to indicate the status of the copy command columnname possible values description status completed in progress failed status of the copy command
| 0
|
157
| 2,581,542,450
|
IssuesEvent
|
2015-02-14 04:46:08
|
tinkerpop/tinkerpop3
|
https://api.github.com/repos/tinkerpop/tinkerpop3
|
opened
|
[Proposal] Provide a way to process arbitrary objects with GraphComputer
|
enhancement process
|
I want to be able to do this in OLAP:
```groovy
__(12,21,3,4,75).is(gt,10).sum()
```
An idea is this... For any `Iterator<Object>`, we need to be able to generate a Graph where:
```groovy
g.addVertex(id,12)
g.addVertex(id,21)
g.addVertex(id,3)
g.addVertex(id,4)
g.addVertex(id,75)
```
...and then the start of the OLAP traversal has a "hidden prefix" of `g.V.id` thus, behind the scenes, what is being executed is:
```groovy
g.V.id.is(gt,10).sum()
```
If we can make object processing just as natural as graph processing, then I don't see why Gremlin is not the craziest functional language to date:
* single machine or distributed
* supports non-terminal sideEffects
* has natural branching constructs -- not just serial stream processing.
* supports an execution structure that is a graph, not a DAG (e.g. `repeat()`, `back()`, ...)
* fundamentally an arbitrary execution graph with `jump()` (low-level, but there)
* can be represented in any arbitrary host language (even outside the JVM)
* can be used as a database query language (and/)or an arbitrary data flow language (given this proposal).
* has numerous execution engines (Giraph,MapReduce,Spark,Fulgora,TinkerGraph,...) with different time/space complexities
* has remote execution functionality via GremlinServer.
It's mind-boggling, actually... I can't think of anything else like this.
@mbroecheler @dkuppitz @spmallette @joshsh
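A hypothetical Python sketch of the idea above (TinkerPop itself is JVM-based; these names are invented for illustration): each object becomes a vertex whose id is the object itself, so the object pipeline `__(12,21,3,4,75).is(gt,10).sum()` reduces to the hidden-prefix traversal `g.V.id.is(gt,10).sum()`.

```python
# Illustration only — not TinkerPop's API. Building the graph mirrors the
# g.addVertex(id, obj) calls in the proposal; iterating its ids mirrors
# the hidden "g.V.id" prefix.
def as_vertex_ids(objects):
    graph = {obj: {"id": obj} for obj in objects}  # g.addVertex(id, obj)
    return (v["id"] for v in graph.values())

def pipeline(objects):
    # equivalent of g.V.id.is(gt, 10).sum()
    return sum(x for x in as_vertex_ids(objects) if x > 10)
```

The point of the detour through a graph, rather than filtering the iterator directly, is that the same OLAP machinery (GraphComputer, distributed execution) then applies unchanged to plain object streams.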
|
1.0
|
[Proposal] Provide a way to process arbitrary objects with GraphComputer - I want to be able to do this in OLAP:
```groovy
__(12,21,3,4,75).is(gt,10).sum()
```
An idea is this... For any `Iterator<Object>`, we need to be able to generate a Graph where:
```groovy
g.addVertex(id,12)
g.addVertex(id,21)
g.addVertex(id,3)
g.addVertex(id,4)
g.addVertex(id,75)
```
...and then the start of the OLAP traversal has a "hidden prefix" of `g.V.id` thus, behind the scenes, what is being executed is:
```groovy
g.V.id.is(gt,10).sum()
```
If we can make object processing just as natural as graph processing, then I don't see why Gremlin is not the craziest functional language to date:
* single machine or distributed
* supports non-terminal sideEffects
* has natural branching constructs -- not just serial stream processing.
* supports an execution structure that is a graph, not a DAG (e.g. `repeat()`, `back()`, ...)
* fundamentally an arbitrary execution graph with `jump()` (low-level, but there)
* can be represented in any arbitrary host language (even outside the JVM)
* can be used as a database query language (and/)or an arbitrary data flow language (given this proposal).
* has numerous execution engines (Giraph,MapReduce,Spark,Fulgora,TinkerGraph,...) with different time/space complexities
* has remote execution functionality via GremlinServer.
It's mind-boggling, actually... I can't think of anything else like this.
@mbroecheler @dkuppitz @spmallette @joshsh
|
process
|
provide a way to process arbitrary objects with graphcomputer i want to be able to do this in olap groovy is gt sum an idea is this for any iterator we need to be able to generate a graph where groovy g addvertex id g addvertex id g addvertex id g addvertex id g addvertex id and then the start of the olap traversal has a hidden prefix of g v id thus behind the scenes what is being executed is groovy g v id is gt sum if we can make object processing just as natural as graph processing then i don t see why gremlin is not the craziest functional language to date single machine or distributed supports non terminal sideeffects has natural branching constructs not just serial stream processing supports an execution structure that is a graph not a dag e g repeat back fundamentally an arbitrary execution graph with jump low level but there can be represented in any arbitrary host language even outside the jvm can be used as a database query language and or an arbitrary data flow language given this proposal has numerous execution engines giraph mapreduce spark fulgora tinkergraph with different time space complexities has remote execution functionality via gremlinserver its mind boggling actually i can t think of anything else like this mbroecheler dkuppitz spmallette joshsh
| 1
|
18,331
| 24,446,953,486
|
IssuesEvent
|
2022-10-06 18:50:36
|
Azure/bicep
|
https://api.github.com/repos/Azure/bicep
|
closed
|
Linter rule process workflow
|
process devdiv story: linter
|
DONE AND TESTED:
State | Rule | Expected release | State | Bug or PR | Doc bug | Redirect link (when enabled)
---- | ---- | ---- | ---- | ---- | ---- | ----
DONE | adminusername-should-not-be-literal | (previous) | (previous) | (previous) | | https://aka.ms/bicep/linter/adminusername-should-not-be-literal
DONE | outputs-should-not-contain-secrets| (previous) | (previous) | (previous)||https://aka.ms/bicep/linter/outputs-should-not-contain-secrets
DONE | protect-commandtoexecute-secrets| (previous) | (previous) | (previous)||https://aka.ms/bicep/linter/protect-commandtoexecute-secrets
DONE | secure-parameter-default| (previous) | (previous) | (previous) |(previous)|https://aka.ms/bicep/linter/secure-parameter-default
DONE | max-outputs| (previous) | (previous) | ||https://aka.ms/bicep/linter/max-outputs
DONE | max-params| (previous) | (previous) | (previous)||https://aka.ms/bicep/linter/max-params
DONE | max-resources| (previous) | (previous) | (previous) ||https://aka.ms/bicep/linter/max-resources
DONE | max-variables| (previous) | (previous) | (previous) ||https://aka.ms/bicep/linter/max-variables
DONE | no-hardcoded-location| (previous) | (previous) | (previous) | | https://aka.ms/bicep/linter/no-hardcoded-location
DONE | no-loc-expr-outside-params| (previous) | (previous) | (previous) | |https://aka.ms/bicep/linter/no-loc-expr-outside-params
DONE | no-unused-params| (previous) | (previous) | (previous)||https://aka.ms/bicep/linter/no-unused-params
DONE | no-unused-vars| (previous) | (previous) | (previous)||https://aka.ms/bicep/linter/no-unused-vars
DONE | no-unnecessary-dependson| (previous) | (previous) | (previous) ||https://aka.ms/bicep/linter/no-unnecessary-dependson
DONE | prefer-interpolation| (previous) | (previous) | (previous) ||https://aka.ms/bicep/linter/prefer-interpolation
DONE | simplify-interpolation| (previous) | (previous) | (previous)||https://aka.ms/bicep/linter/simplify-interpolation
DONE | no-hardcoded-env-urls| (previous) | (previous) | (previous) || https://aka.ms/bicep/linter/no-hardcoded-env-urls
DONE | use-stable-vm-image| (previous) | (previous) | (previous) ||https://aka.ms/bicep/linter/use-stable-vm-image
DONE | explicit-values-for-loc-params| (previous) | (previous) | (previous) | | https://aka.ms/bicep/linter/explicit-values-for-loc-params
DONE| artifacts-parameters|0.9||https://github.com/Azure/bicep/issues/7716|PR: https://github.com/MicrosoftDocs/azure-docs-pr/pull/206413, https://docs.microsoft.com/en-us/azure/azure-resource-manager/bicep/linter-rule-artifacts-parameters|https://aka.ms/bicep/linter/artifacts-parameters
DONE| no-unused-existing-resources | 0.9 |||https://docs.microsoft.com/en-us/azure/azure-resource-manager/bicep/linter-rule-no-unused-existing-resources|https://aka.ms/bicep/linter/no-unused-existing-resources
DONE | use-stable-resource-identifiers|0.9||https://github.com/Azure/bicep/issues/7564|https://docs.microsoft.com/en-us/azure/azure-resource-manager/bicep/linter-rule-use-stable-resource-identifier|https://aka.ms/bicep/linter/use-stable-resource-identifiers
DONE| prefer-unquoted-property-names|0.9|https://github.com/Azure/bicep/issues/5542|https://github.com/Azure/bicep/issues/7003|https://docs.microsoft.com/en-us/azure/azure-resource-manager/bicep/linter-rule-prefer-unquoted-property-names|https://aka.ms/bicep/linter/prefer-unquoted-property-names
DONE | secure-secrets-in-params|0.9|https://github.com/Azure/bicep/issues/7224|https://github.com/Azure/bicep/issues/7718|PR approved: https://github.com/MicrosoftDocs/azure-docs-pr/pull/206413, https://docs.microsoft.com/azure/azure-resource-manager/bicep/linter-rule-secure-secrets-in-params, https://docs.microsoft.com/azure/azure-resource-manager/bicep/linter-rule-secure-secrets-in-parameters|https://aka.ms/bicep/linter/secure-secrets-in-params
DONE: MERGED WITH use-recent-api-version| apiVersions-Should-Be-Recent-In-Reference-Functions|0.10|TODO|https://github.com/Azure/bicep/issues/7227|-|-|-
DONE | secure-params-in-nested-deploy|0.10|https://github.com/Azure/bicep/issues/7223|https://github.com/Azure/bicep/issues/7746 | https://github.com/Azure/bicep/issues/DONE:7746 | https://docs.microsoft.com/azure/azure-resource-manager/bicep/linter-rule-secure-params-in-nested-deploy|TODO: https://aka.ms/bicep/linter/secure-params-in-nested-deploy
DONE | IDs-Should-Be-Derived-From-ResourceIDs.test (use-resource-id-functions)|0.10|PR: https://github.com/Azure/bicep/pull/7907| https://github.com/Azure/bicep/issues/7228|doc bug: https://github.com/Azure/bicep/issues/8441|https://docs.microsoft.com/azure/azure-resource-manager/bicep/linter-rule-use-resource-id-functions|https://aka.ms/bicep/linter/use-resource-id-functions
DONE | use-recent-api-version|0.10|||https://github.com/Azure/bicep/issues/8455|https://docs.microsoft.com/azure/azure-resource-manager/bicep/linter-rule-use-recent-api-versions|TODO: https://aka.ms/bicep/linter/use-recent-api-versions
IN PROGRESS:
State | Rule | Expected release | State | Bug or PR | Doc bug | Doc link | Redirect link (when enabled)
---- | ---- | ---- | ---- | ---- | ---- | ---- | ----
|
1.0
|
Linter rule process workflow - DONE AND TESTED:
State | Rule | Expected release | State | Bug or PR | Doc bug | Redirect link (when enabled)
---- | ---- | ---- | ---- | ---- | ---- | ----
DONE | adminusername-should-not-be-literal | (previous) | (previous) | (previous) | | https://aka.ms/bicep/linter/adminusername-should-not-be-literal
DONE | outputs-should-not-contain-secrets| (previous) | (previous) | (previous)||https://aka.ms/bicep/linter/outputs-should-not-contain-secrets
DONE | protect-commandtoexecute-secrets| (previous) | (previous) | (previous)||https://aka.ms/bicep/linter/protect-commandtoexecute-secrets
DONE | secure-parameter-default| (previous) | (previous) | (previous) |(previous)|https://aka.ms/bicep/linter/secure-parameter-default
DONE | max-outputs| (previous) | (previous) | ||https://aka.ms/bicep/linter/max-outputs
DONE | max-params| (previous) | (previous) | (previous)||https://aka.ms/bicep/linter/max-params
DONE | max-resources| (previous) | (previous) | (previous) ||https://aka.ms/bicep/linter/max-resources
DONE | max-variables| (previous) | (previous) | (previous) ||https://aka.ms/bicep/linter/max-variables
DONE | no-hardcoded-location| (previous) | (previous) | (previous) | | https://aka.ms/bicep/linter/no-hardcoded-location
DONE | no-loc-expr-outside-params| (previous) | (previous) | (previous) | |https://aka.ms/bicep/linter/no-loc-expr-outside-params
DONE | no-unused-params| (previous) | (previous) | (previous)||https://aka.ms/bicep/linter/no-unused-params
DONE | no-unused-vars| (previous) | (previous) | (previous)||https://aka.ms/bicep/linter/no-unused-vars
DONE | no-unnecessary-dependson| (previous) | (previous) | (previous) ||https://aka.ms/bicep/linter/no-unnecessary-dependson
DONE | prefer-interpolation| (previous) | (previous) | (previous) ||https://aka.ms/bicep/linter/prefer-interpolation
DONE | simplify-interpolation| (previous) | (previous) | (previous)||https://aka.ms/bicep/linter/simplify-interpolation
DONE | no-hardcoded-env-urls| (previous) | (previous) | (previous) || https://aka.ms/bicep/linter/no-hardcoded-env-urls
DONE | use-stable-vm-image| (previous) | (previous) | (previous) ||https://aka.ms/bicep/linter/use-stable-vm-image
DONE | explicit-values-for-loc-params| (previous) | (previous) | (previous) | | https://aka.ms/bicep/linter/explicit-values-for-loc-params
DONE| artifacts-parameters|0.9||https://github.com/Azure/bicep/issues/7716|PR: https://github.com/MicrosoftDocs/azure-docs-pr/pull/206413, https://docs.microsoft.com/en-us/azure/azure-resource-manager/bicep/linter-rule-artifacts-parameters|https://aka.ms/bicep/linter/artifacts-parameters
DONE| no-unused-existing-resources | 0.9 |||https://docs.microsoft.com/en-us/azure/azure-resource-manager/bicep/linter-rule-no-unused-existing-resources|https://aka.ms/bicep/linter/no-unused-existing-resources
DONE | use-stable-resource-identifiers|0.9||https://github.com/Azure/bicep/issues/7564|https://docs.microsoft.com/en-us/azure/azure-resource-manager/bicep/linter-rule-use-stable-resource-identifier|https://aka.ms/bicep/linter/use-stable-resource-identifiers
DONE| prefer-unquoted-property-names|0.9|https://github.com/Azure/bicep/issues/5542|https://github.com/Azure/bicep/issues/7003|https://docs.microsoft.com/en-us/azure/azure-resource-manager/bicep/linter-rule-prefer-unquoted-property-names|https://aka.ms/bicep/linter/prefer-unquoted-property-names
DONE | secure-secrets-in-params|0.9|https://github.com/Azure/bicep/issues/7224|https://github.com/Azure/bicep/issues/7718|PR approved: https://github.com/MicrosoftDocs/azure-docs-pr/pull/206413, https://docs.microsoft.com/azure/azure-resource-manager/bicep/linter-rule-secure-secrets-in-params, https://docs.microsoft.com/azure/azure-resource-manager/bicep/linter-rule-secure-secrets-in-parameters|https://aka.ms/bicep/linter/secure-secrets-in-params
DONE: MERGED WITH use-recent-api-version| apiVersions-Should-Be-Recent-In-Reference-Functions|0.10|TODO|https://github.com/Azure/bicep/issues/7227|-|-|-
DONE | secure-params-in-nested-deploy|0.10|https://github.com/Azure/bicep/issues/7223|https://github.com/Azure/bicep/issues/7746 | https://github.com/Azure/bicep/issues/DONE:7746 | https://docs.microsoft.com/azure/azure-resource-manager/bicep/linter-rule-secure-params-in-nested-deploy|TODO: https://aka.ms/bicep/linter/secure-params-in-nested-deploy
DONE | IDs-Should-Be-Derived-From-ResourceIDs.test (use-resource-id-functions)|0.10|PR: https://github.com/Azure/bicep/pull/7907| https://github.com/Azure/bicep/issues/7228|doc bug: https://github.com/Azure/bicep/issues/8441|https://docs.microsoft.com/azure/azure-resource-manager/bicep/linter-rule-use-resource-id-functions|https://aka.ms/bicep/linter/use-resource-id-functions
DONE | use-recent-api-version|0.10|||https://github.com/Azure/bicep/issues/8455|https://docs.microsoft.com/azure/azure-resource-manager/bicep/linter-rule-use-recent-api-versions|TODO: https://aka.ms/bicep/linter/use-recent-api-versions
IN PROGRESS:
State | Rule | Expected release | State | Bug or PR | Doc bug | Doc link | Redirect link (when enabled)
---- | ---- | ---- | ---- | ---- | ---- | ---- | ----
|
process
|
linter rule process workflow done and tested state rule expected release state bug or pr doc bug redirect link when enabled done adminusername should not be literal previous previous previous done outputs should not contain secrets previous previous previous done protect commandtoexecute secrets previous previous previous done secure parameter default previous previous previous previous done max outputs previous previous done max params previous previous previous done max resources previous previous previous done max variables previous previous previous done no hardcoded location previous previous previous done no loc expr outside params previous previous previous done no unused params previous previous previous done no unused vars previous previous previous done no unnecessary dependson previous previous previous done prefer interpolation previous previous previous done simplify interpolation previous previous previous done no hardcoded env urls previous previous previous done use stable vm image previous previous previous done explicit values for loc params previous previous previous done artifacts parameters done no unused existing resources done use stable resource identifiers done prefer unquoted property names done secure secrets in params approved done merged with use recent api version apiversions should be recent in reference functions todo done secure params in nested deploy done ids should be derived from resourceids test use resource id functions pr bug done use recent api version in progress state rule expected release state bug or pr doc bug doc link redirect link when enabled
| 1
|
27,314
| 7,934,024,090
|
IssuesEvent
|
2018-07-08 14:20:51
|
openMVG/openMVG
|
https://api.github.com/repos/openMVG/openMVG
|
closed
|
There are some errors when using openmvg-develop as a library
|
build enhancement question
|
**I wrote this test code below.**
```cpp
#include <iostream>
#include "openMVG/sfm/sfm.hpp"
using namespace std;
int main(int argc, char **argv) {
  std::cout << "Hello, OpenMVG" << std::endl;
  return 0;
}
```
```cmake
cmake_minimum_required(VERSION 3.0)
project(helloopenmvg)
set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -std=c++11")
set(CMAKE_PREFIX_PATH "/home/owen/Libs/openMVG_develop_install")
include_directories("/usr/include/eigen3")
include_directories("/home/owen/Libs/openMVG_develop_install/include/openMVG")
find_package(OpenMVG REQUIRED)
include_directories(${OPENMVG_INCLUDE_DIRS})
add_executable(helloopenmvg main.cpp)
target_link_libraries(helloopenmvg ${OPENMVG_LIBRARIES})
```
**But there are some errors when I build and execute the code. Please help me to handle this problem that has puzzled me for several days. Thank you very much in advance.**
/home/owen/SfM/useOpenMVG/Hello_1/HelloOpenMVG/build> make -j2 helloopenmvg
-- OpenMVG Find_Package
-- Found OpenMVG version: 1.3.0
-- Installed in: /home/owen/Libs/openMVG_develop_install
-- Used OpenMVG libraries: openMVG_camera;openMVG_exif;openMVG_features;openMVG_geodesy;openMVG_geometry;openMVG_graph;openMVG_image;openMVG_kvld;openMVG_lInftyComputerVision;openMVG_matching;openMVG_matching_image_collection;openMVG_multiview;openMVG_sfm;openMVG_system
-- Configuring done
-- Generating done
-- Build files have been written to: /home/owen/SfM/useOpenMVG/Hello_1/HelloOpenMVG/build
Scanning dependencies of target helloopenmvg
[ 50%] Building CXX object CMakeFiles/helloopenmvg.dir/main.cpp.o
[100%] Linking CXX executable helloopenmvg
/usr/bin/ld: cannot find -lopenMVG_camera
/usr/bin/ld: cannot find -lopenMVG_exif
/usr/bin/ld: cannot find -lopenMVG_features
/usr/bin/ld: cannot find -lopenMVG_geodesy
/usr/bin/ld: cannot find -lopenMVG_geometry
/usr/bin/ld: cannot find -lopenMVG_graph
/usr/bin/ld: cannot find -lopenMVG_image
/usr/bin/ld: cannot find -lopenMVG_kvld
/usr/bin/ld: cannot find -lopenMVG_lInftyComputerVision
/usr/bin/ld: cannot find -lopenMVG_matching
/usr/bin/ld: cannot find -lopenMVG_matching_image_collection
/usr/bin/ld: cannot find -lopenMVG_multiview
/usr/bin/ld: cannot find -lopenMVG_sfm
/usr/bin/ld: cannot find -lopenMVG_system
collect2: error: ld returned 1 exit status
CMakeFiles/helloopenmvg.dir/build.make:94: recipe for target 'helloopenmvg' failed
make[3]: *** [helloopenmvg] Error 1
CMakeFiles/Makefile2:67: recipe for target 'CMakeFiles/helloopenmvg.dir/all' failed
make[2]: *** [CMakeFiles/helloopenmvg.dir/all] Error 2
CMakeFiles/Makefile2:79: recipe for target 'CMakeFiles/helloopenmvg.dir/rule' failed
make[1]: *** [CMakeFiles/helloopenmvg.dir/rule] Error 2
Makefile:162: recipe for target 'helloopenmvg' failed
make: *** [helloopenmvg] Error 2
*** Failure: Exit code 2 ***
|
1.0
|
There are some errors when using openmvg-develop as a library - **I wrote this test code below.**
```cpp
#include <iostream>
#include "openMVG/sfm/sfm.hpp"
using namespace std;
int main(int argc, char **argv) {
  std::cout << "Hello, OpenMVG" << std::endl;
  return 0;
}
```
```cmake
cmake_minimum_required(VERSION 3.0)
project(helloopenmvg)
set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -std=c++11")
set(CMAKE_PREFIX_PATH "/home/owen/Libs/openMVG_develop_install")
include_directories("/usr/include/eigen3")
include_directories("/home/owen/Libs/openMVG_develop_install/include/openMVG")
find_package(OpenMVG REQUIRED)
include_directories(${OPENMVG_INCLUDE_DIRS})
add_executable(helloopenmvg main.cpp)
target_link_libraries(helloopenmvg ${OPENMVG_LIBRARIES})
```
**But there are some errors when I build and execute the code. Please help me to handle this problem that has puzzled me for several days. Thank you very much in advance.**
/home/owen/SfM/useOpenMVG/Hello_1/HelloOpenMVG/build> make -j2 helloopenmvg
-- OpenMVG Find_Package
-- Found OpenMVG version: 1.3.0
-- Installed in: /home/owen/Libs/openMVG_develop_install
-- Used OpenMVG libraries: openMVG_camera;openMVG_exif;openMVG_features;openMVG_geodesy;openMVG_geometry;openMVG_graph;openMVG_image;openMVG_kvld;openMVG_lInftyComputerVision;openMVG_matching;openMVG_matching_image_collection;openMVG_multiview;openMVG_sfm;openMVG_system
-- Configuring done
-- Generating done
-- Build files have been written to: /home/owen/SfM/useOpenMVG/Hello_1/HelloOpenMVG/build
Scanning dependencies of target helloopenmvg
[ 50%] Building CXX object CMakeFiles/helloopenmvg.dir/main.cpp.o
[100%] Linking CXX executable helloopenmvg
/usr/bin/ld: cannot find -lopenMVG_camera
/usr/bin/ld: cannot find -lopenMVG_exif
/usr/bin/ld: cannot find -lopenMVG_features
/usr/bin/ld: cannot find -lopenMVG_geodesy
/usr/bin/ld: cannot find -lopenMVG_geometry
/usr/bin/ld: cannot find -lopenMVG_graph
/usr/bin/ld: cannot find -lopenMVG_image
/usr/bin/ld: cannot find -lopenMVG_kvld
/usr/bin/ld: cannot find -lopenMVG_lInftyComputerVision
/usr/bin/ld: cannot find -lopenMVG_matching
/usr/bin/ld: cannot find -lopenMVG_matching_image_collection
/usr/bin/ld: cannot find -lopenMVG_multiview
/usr/bin/ld: cannot find -lopenMVG_sfm
/usr/bin/ld: cannot find -lopenMVG_system
collect2: error: ld returned 1 exit status
CMakeFiles/helloopenmvg.dir/build.make:94: recipe for target 'helloopenmvg' failed
make[3]: *** [helloopenmvg] Error 1
CMakeFiles/Makefile2:67: recipe for target 'CMakeFiles/helloopenmvg.dir/all' failed
make[2]: *** [CMakeFiles/helloopenmvg.dir/all] Error 2
CMakeFiles/Makefile2:79: recipe for target 'CMakeFiles/helloopenmvg.dir/rule' failed
make[1]: *** [CMakeFiles/helloopenmvg.dir/rule] Error 2
Makefile:162: recipe for target 'helloopenmvg' failed
make: *** [helloopenmvg] Error 2
*** Failure: Exit code 2 ***
|
non_process
|
there are some errors when using openmvg develop as a library i wrote this test code below include include openmvg sfm sfm hpp using namespace std int main int argc char argv std cout hello openmvg std endl return cmake minimum required version project helloopenmvg set cmake cxx flags cmake cxx flags std c set cmake prefix path home owen libs openmvg develop install include directories usr include include directories home owen libs openmvg develop install include openmvg find package openmvg required include directories openmvg include dirs add executable helloopenmvg main cpp target link libraries helloopenmvg openmvg libraries but there are some errors when i build and execute the code please help me to handle this problem that had puzzled me for several days thank you very much in advance home owen sfm useopenmvg hello helloopenmvg build make helloopenmvg openmvg find package found openmvg version installed in home owen libs openmvg develop install used openmvg libraries openmvg camera openmvg exif openmvg features openmvg geodesy openmvg geometry openmvg graph openmvg image openmvg kvld openmvg linftycomputervision openmvg matching openmvg matching image collection openmvg multiview openmvg sfm openmvg system configuring done generating done build files have been written to home owen sfm useopenmvg hello helloopenmvg build scanning dependencies of target helloopenmvg building cxx object cmakefiles helloopenmvg dir main cpp o linking cxx executable helloopenmvg usr bin ld cannot find lopenmvg camera usr bin ld cannot find lopenmvg exif usr bin ld cannot find lopenmvg features usr bin ld cannot find lopenmvg geodesy usr bin ld cannot find lopenmvg geometry usr bin ld cannot find lopenmvg graph usr bin ld cannot find lopenmvg image usr bin ld cannot find lopenmvg kvld usr bin ld cannot find lopenmvg linftycomputervision usr bin ld cannot find lopenmvg matching usr bin ld cannot find lopenmvg matching image collection usr bin ld cannot find lopenmvg multiview usr 
bin ld cannot find lopenmvg sfm usr bin ld cannot find lopenmvg system error ld returned exit status cmakefiles helloopenmvg dir build make recipe for target helloopenmvg failed make error cmakefiles recipe for target cmakefiles helloopenmvg dir all failed make error cmakefiles recipe for target cmakefiles helloopenmvg dir rule failed make error makefile recipe for target helloopenmvg failed make error failure exit code
| 0
|
13,860
| 16,618,433,935
|
IssuesEvent
|
2021-06-02 20:04:07
|
lalitpagaria/obsei
|
https://api.github.com/repos/lalitpagaria/obsei
|
closed
|
Add text cleaner node
|
medium priority preprocessor
|
Idea to have a configurable text-cleaning node.
This node should also have predefined templates to clean tweets, Facebook feeds, app reviews, etc.
For detail refer https://github.com/lalitpagaria/obsei/issues/75#issuecomment-849307243
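Such a predefined tweet template could look like the sketch below (an illustrative example only, not obsei's actual preprocessor API): strip URLs and @mentions, keep the hashtag word, and collapse whitespace.

```python
import re

def clean_tweet(text):
    """A minimal, illustrative tweet cleaner: strip URLs and @mentions,
    drop the '#' marker while keeping the hashtag word, then collapse
    whitespace."""
    text = re.sub(r"https?://\S+", "", text)   # remove URLs
    text = re.sub(r"@\w+", "", text)           # remove mentions
    text = re.sub(r"#(\w+)", r"\1", text)      # keep hashtag word
    return re.sub(r"\s+", " ", text).strip()   # collapse whitespace

print(clean_tweet("Loving #obsei! cc @lalitpagaria https://t.co/xyz"))
# Loving obsei! cc
```

Templates for Facebook feeds or app reviews would swap in their own regex lists while reusing the same pipeline shape.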
|
1.0
|
Add text cleaner node - Idea to have a configurable text-cleaning node.
This node should also have predefined templates to clean tweets, Facebook feeds, app reviews, etc.
For detail refer https://github.com/lalitpagaria/obsei/issues/75#issuecomment-849307243
|
process
|
add text cleaner node idea to have configurable text cleaning node this node also have predefined template to clean tweets facebook feed app reviews etc for detail refer
| 1
|
22,656
| 31,895,828,124
|
IssuesEvent
|
2023-09-18 01:32:00
|
tdwg/dwc
|
https://api.github.com/repos/tdwg/dwc
|
closed
|
Change term - latestPeriodOrHighestSystem
|
Term - change Class - GeologicalContext normative Task Group - Material Sample Process - complete
|
## Term change
* Submitter: [Material Sample Task Group](https://www.tdwg.org/community/osr/material-sample/)
* Efficacy Justification (why is this change necessary?): Create consistency of terms for material in Darwin Core.
* Demand Justification (if the change is semantic in nature, name at least two organizations that independently need this term): [Material Sample Task Group](https://www.tdwg.org/community/osr/material-sample/), which includes representatives of over 10 organizations.
* Stability Justification (what concerns are there that this might affect existing implementations?): None
* Implications for dwciri: namespace (does this change affect a dwciri term version)?: No
Current Term definition: https://dwc.tdwg.org/list/#dwc_latestPeriodOrHighestSystem
Proposed attributes of the new term version (Please put actual changes to be implemented in **bold** and ~strikethrough~):
* Term name (in lowerCamelCase for properties, UpperCamelCase for classes): latestPeriodOrHighestSystem
* Term label (English, not normative): Latest Period Or Highest System
* Organized in Class (e.g., Occurrence, Event, Location, Taxon): Geological Context
* Definition of the term (normative): The full name of the latest possible geochronologic period or highest chronostratigraphic system attributable to the stratigraphic horizon from which the ~~cataloged item~~**dwc:MaterialEntity** was collected.
* Usage comments (recommendations regarding content, etc., not normative):
* Examples (not normative): Neogene, Tertiary, Quaternary
* Refines (identifier of the broader term this term refines; normative): None
* Replaces (identifier of the existing term that would be deprecated and replaced by this term; normative): None
* ABCD 2.06 (XPATH of the equivalent term in ABCD or EFG; not normative): not in ABCD
|
1.0
|
Change term - latestPeriodOrHighestSystem - ## Term change
* Submitter: [Material Sample Task Group](https://www.tdwg.org/community/osr/material-sample/)
* Efficacy Justification (why is this change necessary?): Create consistency of terms for material in Darwin Core.
* Demand Justification (if the change is semantic in nature, name at least two organizations that independently need this term): [Material Sample Task Group](https://www.tdwg.org/community/osr/material-sample/), which includes representatives of over 10 organizations.
* Stability Justification (what concerns are there that this might affect existing implementations?): None
* Implications for dwciri: namespace (does this change affect a dwciri term version)?: No
Current Term definition: https://dwc.tdwg.org/list/#dwc_latestPeriodOrHighestSystem
Proposed attributes of the new term version (Please put actual changes to be implemented in **bold** and ~strikethrough~):
* Term name (in lowerCamelCase for properties, UpperCamelCase for classes): latestPeriodOrHighestSystem
* Term label (English, not normative): Latest Period Or Highest System
* Organized in Class (e.g., Occurrence, Event, Location, Taxon): Geological Context
* Definition of the term (normative): The full name of the latest possible geochronologic period or highest chronostratigraphic system attributable to the stratigraphic horizon from which the ~~cataloged item~~**dwc:MaterialEntity** was collected.
* Usage comments (recommendations regarding content, etc., not normative):
* Examples (not normative): Neogene, Tertiary, Quaternary
* Refines (identifier of the broader term this term refines; normative): None
* Replaces (identifier of the existing term that would be deprecated and replaced by this term; normative): None
* ABCD 2.06 (XPATH of the equivalent term in ABCD or EFG; not normative): not in ABCD
|
process
|
change term latestperiodorhighestsystem term change submitter efficacy justification why is this change necessary create consistency of terms for material in darwin core demand justification if the change is semantic in nature name at least two organizations that independently need this term which includes representatives of over organizations stability justification what concerns are there that this might affect existing implementations none implications for dwciri namespace does this change affect a dwciri term version no current term definition proposed attributes of the new term version please put actual changes to be implemented in bold and strikethrough term name in lowercamelcase for properties uppercamelcase for classes latestperiodorhighestsystem term label english not normative latest period or highest system organized in class e g occurrence event location taxon geological context definition of the term normative the full name of the latest possible geochronologic period or highest chronostratigraphic system attributable to the stratigraphic horizon from which the cataloged item dwc materialentity was collected usage comments recommendations regarding content etc not normative examples not normative neogene tertiary quaternary refines identifier of the broader term this term refines normative none replaces identifier of the existing term that would be deprecated and replaced by this term normative none abcd xpath of the equivalent term in abcd or efg not normative not in abcd
| 1
|
4,425
| 7,302,885,117
|
IssuesEvent
|
2018-02-27 11:09:09
|
Rokid/ShadowNode
|
https://api.github.com/repos/Rokid/ShadowNode
|
closed
|
`process.hrtime([time])` does not return a diff while `time` is given
|
console process
|
> `time` is an optional parameter that must be the result of a previous process.hrtime() call to diff with the current time.
demo script:
```js
var NS_PER_SEC = 1e9;
var time = process.hrtime();
// [ 16264, 282919475 ]
setTimeout(() => {
var diff = process.hrtime(time);
// expecting [ 1, 2476940 ]
// actual [ 16265, 285396415 ]
console.log(`Benchmark took ${diff[0] * NS_PER_SEC + diff[1]} nanoseconds`);
}, 1000);
```
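For reference, the documented diff semantics can be sketched as a plain subtraction with a nanosecond borrow (a hypothetical helper, not the engine's actual implementation):

```python
NS_PER_SEC = 1_000_000_000

def hrtime_diff(prev, now):
    """Return now - prev as a [seconds, nanoseconds] pair, borrowing
    from the seconds field when the nanoseconds go negative."""
    sec = now[0] - prev[0]
    nsec = now[1] - prev[1]
    if nsec < 0:
        sec -= 1
        nsec += NS_PER_SEC
    return [sec, nsec]

# Using the values from the report above, the expected diff is small:
print(hrtime_diff([16264, 282919475], [16265, 285396415]))  # [1, 2476940]
```

The buggy behavior returns the absolute reading instead of this difference.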
|
1.0
|
`process.hrtime([time])` does not return a diff while `time` is given - > `time` is an optional parameter that must be the result of a previous process.hrtime() call to diff with the current time.
demo script:
```js
var NS_PER_SEC = 1e9;
var time = process.hrtime();
// [ 16264, 282919475 ]
setTimeout(() => {
var diff = process.hrtime(time);
// expecting [ 1, 2476940 ]
// actual [ 16265, 285396415 ]
console.log(`Benchmark took ${diff[0] * NS_PER_SEC + diff[1]} nanoseconds`);
}, 1000);
```
|
process
|
process hrtime do not return diffs while time is given time is an optional parameter that must be the result of a previous process hrtime call to diff with the current time demo script js var ns per sec var time process hrtime settimeout var diff process hrtime time expecting actual console log benchmark took diff ns per sec diff nanoseconds
| 1
|
538
| 3,000,677,265
|
IssuesEvent
|
2015-07-24 04:32:52
|
HazyResearch/dd-genomics
|
https://api.github.com/repos/HazyResearch/dd-genomics
|
opened
|
Refactor onto/make_all.sh- is 248 lines & scary!!
|
Data source Low priority Preprocessing
|
Not a priority exactly but should do at some point; delete unused lines & comment a few things better...
|
1.0
|
Refactor onto/make_all.sh- is 248 lines & scary!! - Not a priority exactly but should do at some point; delete unused lines & comment a few things better...
|
process
|
refactor onto make all sh is lines scary not a priority exactly but should do at some point delete unused lines comment a few things better
| 1
|
112,910
| 11,782,215,352
|
IssuesEvent
|
2020-03-17 01:06:23
|
kubernetes-sigs/cluster-api-provider-azure
|
https://api.github.com/repos/kubernetes-sigs/cluster-api-provider-azure
|
closed
|
Update the README with supported versions
|
kind/documentation
|
The README is outdated and needs to be updated before the 0.4.0 release.
|
1.0
|
Update the README with supported versions - The README is outdated and needs to be updated before the 0.4.0 release.
|
non_process
|
update the readme with supported versions the readme is outdated and needs to be updated before the release
| 0
|
13,886
| 16,654,860,976
|
IssuesEvent
|
2021-06-05 10:35:23
|
GoogleCloudPlatform/fda-mystudies
|
https://api.github.com/repos/GoogleCloudPlatform/fda-mystudies
|
closed
|
[PM] Responsive issues in Locations
|
Bug P2 Participant manager Process: Fixed Process: Tested QA Process: Tested dev
|
Responsive issues in Locations
1. Add new location > UI issue

2. UI issues in table

|
3.0
|
[PM] Responsive issues in Locations - Responsive issues in Locations
1. Add new location > UI issue

2. UI issues in table

|
process
|
responsive issues in locations responsive issues in locations add new location ui issue ui issues in table
| 1
|
94,675
| 19,573,757,765
|
IssuesEvent
|
2022-01-04 13:11:50
|
Onelinerhub/onelinerhub
|
https://api.github.com/repos/Onelinerhub/onelinerhub
|
closed
|
Short solution needed: "Setting datetime in Redis" (python-redis)
|
help wanted good first issue code python-redis
|
Please help us write the most modern and shortest code solution for this issue:
**Setting datetime in Redis** (technology: [python-redis](https://onelinerhub.com/python-redis))
### Fast way
Just write the code solution in the comments.
### Preferred way
1. Create pull request with a new code file inside [inbox folder](https://github.com/Onelinerhub/onelinerhub/tree/main/inbox).
2. Don't forget to use comments to make solution explained.
3. Link to this issue in comments of pull request.
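One common approach (a sketch, not the accepted answer for this issue): serialize the datetime to an ISO-8601 string before `SET` and parse it back after `GET`. The helpers below are pure, so the redis-py calls stay one-liners:

```python
from datetime import datetime, timezone

def encode_dt(dt):
    """Serialize a datetime to an ISO-8601 string for storage in Redis."""
    return dt.isoformat()

def decode_dt(raw):
    """Parse the stored ISO-8601 string back into a datetime."""
    return datetime.fromisoformat(raw)

# With a real client this would be (connection details assumed):
#   r = redis.Redis()
#   r.set("last_seen", encode_dt(now))
#   decode_dt(r.get("last_seen").decode())
now = datetime(2022, 1, 4, 13, 11, 50, tzinfo=timezone.utc)
assert decode_dt(encode_dt(now)) == now
```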
|
1.0
|
Short solution needed: "Setting datetime in Redis" (python-redis) - Please help us write the most modern and shortest code solution for this issue:
**Setting datetime in Redis** (technology: [python-redis](https://onelinerhub.com/python-redis))
### Fast way
Just write the code solution in the comments.
### Preferred way
1. Create pull request with a new code file inside [inbox folder](https://github.com/Onelinerhub/onelinerhub/tree/main/inbox).
2. Don't forget to use comments to make solution explained.
3. Link to this issue in comments of pull request.
|
non_process
|
short solution needed setting datetime in redis python redis please help us write most modern and shortest code solution for this issue setting datetime in redis technology fast way just write the code solution in the comments prefered way create pull request with a new code file inside don t forget to use comments to make solution explained link to this issue in comments of pull request
| 0
|
9,871
| 12,882,600,964
|
IssuesEvent
|
2020-07-12 17:42:01
|
ttcoder404/blog-comments
|
https://api.github.com/repos/ttcoder404/blog-comments
|
opened
|
A Brief Look at Browser Multi-Process Architecture and the JS Thread — Balding Programmer — Front-End Knowledge Sharing
|
/2020/07/13/Thread-and-process/ gitment
|
https://ttcoder.com/2020/07/13/Thread-and-process/
I had always known nothing about how browser processes and threads run; after binge-reading related blog posts, I now have a preliminary understanding, so it is time to write up a summary.
|
1.0
|
A Brief Look at Browser Multi-Process Architecture and the JS Thread — Balding Programmer — Front-End Knowledge Sharing - https://ttcoder.com/2020/07/13/Thread-and-process/
I had always known nothing about how browser processes and threads run; after binge-reading related blog posts, I now have a preliminary understanding, so it is time to write up a summary.
|
process
|
a brief look at browser multi process architecture and the js thread balding programmer front end knowledge sharing i had always known nothing about how browser processes and threads run after binge reading related blog posts i now have a preliminary understanding it is time to write up a summary
| 1
|
969
| 3,423,078,705
|
IssuesEvent
|
2015-12-09 03:16:19
|
martensonbj/traffic-spy-skeleton
|
https://api.github.com/repos/martensonbj/traffic-spy-skeleton
|
closed
|
processing_requests_happy_path
|
processing requests user story
|
As a user
When I send a POST request to 'http://yourapplication:port/sources/IDENTIFIER/data'
And I send all payload parameters
Then I get a response of 'Success - 200 OK'
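The happy path above boils down to a presence check on the payload before responding (field names below are illustrative, not the project's actual schema):

```python
# Hypothetical required payload parameters for the happy path.
REQUIRED = ("requestedAt", "respondedIn", "referredBy", "requestType",
            "eventName", "userAgent", "resolutionWidth",
            "resolutionHeight", "ip")

def process_payload(payload):
    """Return the happy-path response when every required parameter is
    present, otherwise a 400 naming the missing keys."""
    missing = [k for k in REQUIRED if k not in payload]
    if missing:
        return 400, "Bad Request - missing: " + ", ".join(missing)
    return 200, "Success - 200 OK"
```

A real endpoint would wrap this in the route handler for `POST /sources/IDENTIFIER/data`.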
|
1.0
|
processing_requests_happy_path - As a user
When I send a POST request to 'http://yourapplication:port/sources/IDENTIFIER/data'
And I send all payload parameters
Then I get a response of 'Success - 200 OK'
|
process
|
processing requests happy path as a user when i send a post request to and i send all payload parameters then i get a response of success ok
| 1
|
18,875
| 24,805,484,399
|
IssuesEvent
|
2022-10-25 03:50:59
|
open-telemetry/opentelemetry-collector-contrib
|
https://api.github.com/repos/open-telemetry/opentelemetry-collector-contrib
|
opened
|
Introduce e2e tests for kubernetes story
|
enhancement ci-cd processor/k8sattributes receiver/k8scluster needs triage
|
### Is your feature request related to a problem? Please describe.
Introduce e2e tests for kubernetes story
### Describe the solution you'd like
Introduce e2e tests for kubernetes story
### Describe alternatives you've considered
_No response_
### Additional context
_No response_
|
1.0
|
Introduce e2e tests for kubernetes story - ### Is your feature request related to a problem? Please describe.
Introduce e2e tests for kubernetes story
### Describe the solution you'd like
Introduce e2e tests for kubernetes story
### Describe alternatives you've considered
_No response_
### Additional context
_No response_
|
process
|
introduce tests for kubernetes story is your feature request related to a problem please describe introduce tests for kubernetes story describe the solution you d like introduce tests for kubernetes story describe alternatives you ve considered no response additional context no response
| 1
|
19,068
| 25,089,683,803
|
IssuesEvent
|
2022-11-08 04:32:46
|
johndpjr/AgTern
|
https://api.github.com/repos/johndpjr/AgTern
|
opened
|
Make the web scraper more robust
|
scraping discussion topic data processing
|
Currently, if the web scraper runs into certain errors, we lose all of the data that we just scraped! This obviously isn't desirable, but there are certain errors that would make it impossible to write to the database (such as a database error itself). We probably shouldn't automatically try to commit this data since it could contain errors, so what do we do with it?
- Should we dump this data to a file?
- To the console?
- Would it just be easier to run the scraper again to collect more data?
At least during development, it could be useful to have some sort of data dumping mechanism to help figure out what went wrong.
- How should this work?
- How would we view this data?
- Do we need to create a script that would import this data into the database?
- Would this even be worth our time?
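One way to avoid losing a scrape on a database error (a sketch under the options above, with a hypothetical `commit` callable): try to commit, and on failure dump the raw records to a timestamped JSON file for inspection or a later re-import:

```python
import json
import time

def commit_or_dump(records, commit, dump_dir="."):
    """Try to commit scraped records; on a database error, dump them to
    a JSON file so the scrape is not lost, and return the dump path
    (None on a successful commit)."""
    try:
        commit(records)
        return None
    except Exception:
        path = f"{dump_dir}/scrape-dump-{int(time.time())}.json"
        with open(path, "w") as f:
            json.dump(records, f, indent=2)
        return path

def failing_commit(records):
    raise RuntimeError("database unavailable")

path = commit_or_dump([{"company": "AgTern", "title": "SWE Intern"}],
                      failing_commit)
```

An import script would then just `json.load` the dump and retry the commit.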
|
1.0
|
Make the web scraper more robust - Currently, if the web scraper runs into certain errors, we lose all of the data that we just scraped! This obviously isn't desirable, but there are certain errors that would make it impossible to write to the database (such as a database error itself). We probably shouldn't automatically try to commit this data since it could contain errors, so what do we do with it?
- Should we dump this data to a file?
- To the console?
- Would it just be easier to run the scraper again to collect more data?
At least during development, it could be useful to have some sort of data dumping mechanism to help figure out what went wrong.
- How should this work?
- How would we view this data?
- Do we need to create a script that would import this data into the database?
- Would this even be worth our time?
|
process
|
make the web scraper more robust currently if the web scraper runs into certain errors we lose all of the data that we just scraped this obviously isn t desirable but there are certain errors that would make it impossible to write to the database such as a database error itself we probably shouldn t automatically try to commit this data since it could contain errors so what do we do with it should we dump this data to a file to the console would it just be easier to run the scraper again to collect more data at least during development it could be useful to have some sort of data dumping mechanism to help figure out what went wrong how should this work how would we view this data do we need to create a script that would import this data into the database would this even be worth our time
| 1
|
13,734
| 16,489,409,450
|
IssuesEvent
|
2021-05-25 00:01:04
|
googleapis/repo-automation-bots
|
https://api.github.com/repos/googleapis/repo-automation-bots
|
closed
|
cron-utils: should be able to update the payload
|
type: process
|
Currently, cron-utils does not update the payload of Cloud Scheduler jobs. This is because we were supporting the legacy `cron` file that wasn't able to specify any extra payload options.
We should:
1. drop support for `cron`
2. update all the possible scheduled job parameters so that it is truly createOrUpdate.
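The intended create-or-update semantics can be sketched with plain dicts (hypothetical field names; the real Cloud Scheduler API calls are omitted):

```python
def create_or_update_job(existing_jobs, name, params):
    """Create the named scheduler job if missing; otherwise overwrite
    every supplied parameter (schedule, payload, time zone, ...), so the
    stored job always matches the desired spec."""
    job = existing_jobs.setdefault(name, {})
    job.update(params)
    return job

jobs = {"nightly": {"schedule": "0 2 * * *", "body": "{}"}}
create_or_update_job(jobs, "nightly", {"body": '{"installation": 123}'})
print(jobs["nightly"]["body"])  # {"installation": 123}
```

The key point is the unconditional `update`: existing jobs get their payload refreshed instead of being left as first created.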
|
1.0
|
cron-utils: should be able to update the payload - Currently, cron-utils does not update the payload of Cloud Scheduler jobs. This is because we were supporting the legacy `cron` file that wasn't able to specify any extra payload options.
We should:
1. drop support for `cron`
2. update all the possible scheduled job parameters so that it is truly createOrUpdate.
|
process
|
cron utils should be able to update the payload currently cron utils does not update the payload of a cloud scheduler jobs this is because we were supporting the legacy cron file that wasn t able to specify any extra payload options we should drop support for cron update all the possible scheduled job parameters so that it is truly createorupdate
| 1
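The cron-utils record above wants scheduled-job handling to be "truly createOrUpdate", payload included. The usual shape of that pattern — attempt the full-spec update, fall back to create when the job is absent — can be sketched against an in-memory stand-in. The `FakeScheduler` here is hypothetical; the real Cloud Scheduler client has its own create/update calls and error types.

```python
class JobNotFound(Exception):
    pass

class FakeScheduler:
    """In-memory stand-in for a scheduler API (hypothetical)."""
    def __init__(self):
        self.jobs = {}

    def update_job(self, name, spec):
        if name not in self.jobs:
            raise JobNotFound(name)
        self.jobs[name] = spec  # replaces the whole spec, payload included

    def create_job(self, name, spec):
        self.jobs[name] = spec

def create_or_update(sched, name, spec):
    """True createOrUpdate: try the update first, create on miss."""
    try:
        sched.update_job(name, spec)
        return "updated"
    except JobNotFound:
        sched.create_job(name, spec)
        return "created"
```

Because the update path replaces the entire spec, any changed payload options land on existing jobs too — the behavior the legacy `cron` file couldn't express.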
|
27,962
| 30,779,485,541
|
IssuesEvent
|
2023-07-31 09:02:02
|
epam/hub-extensions
|
https://api.github.com/repos/epam/hub-extensions
|
closed
|
Rename env INTERACTIVE to HUB_INTERACTIVE
|
usability
|
Improve usability of `ask`
- [x] `INTERACTIVE` rename to `HUB_INTERACTIVE`
- [x] init should set `HUB_INTERACTIVE ` env defaults to `1` so user can change it in .env file
|
True
|
Rename env INTERACTIVE to HUB_INTERACTIVE - Improve usability of `ask`
- [x] `INTERACTIVE` rename to `HUB_INTERACTIVE`
- [x] init should set `HUB_INTERACTIVE ` env defaults to `1` so user can change it in .env file
|
non_process
|
rename env interactive to hub interactive improve usability of ask interactive rename to hub interactive init should set hub interactive env defaults to so user can change it in env file
| 0
|
4,506
| 7,349,918,537
|
IssuesEvent
|
2018-03-08 12:31:30
|
scieloorg/opac_proc
|
https://api.github.com/repos/scieloorg/opac_proc
|
opened
|
Criar uma forma fácil para executar comandos no stack do opac_proc
|
Processamento
|
Através do ranchercli é possível realizar as solicitações de processamentos necessários para o SciELO SP.
Estou em conjunto com a infra estudando uma forma fácil de executar esses processamento!
Após a automatização desse fluxo/processo irei documentar e passar essa atividade para a infra-estrutura.
|
1.0
|
Criar uma forma fácil para executar comandos no stack do opac_proc - Através do ranchercli é possível realizar as solicitações de processamentos necessários para o SciELO SP.
Estou em conjunto com a infra estudando uma forma fácil de executar esses processamento!
Após a automatização desse fluxo/processo irei documentar e passar essa atividade para a infra-estrutura.
|
process
|
criar uma forma fácil para executar comandos no stack do opac proc através do ranchercli é possível realizar as solicitações de processamentos necessários para o scielo sp estou em conjunto com a infra estudando uma forma fácil de executar esses processamento após a automatização desse fluxo processo irei documentar e passar essa atividade para a infra estrutura
| 1
|
14,330
| 17,362,633,260
|
IssuesEvent
|
2021-07-29 23:44:09
|
googleapis/python-spanner
|
https://api.github.com/repos/googleapis/python-spanner
|
closed
|
tests.system.test_system.TestDatabaseAPI: many tests failed
|
api: spanner flakybot: issue type: process
|
Many tests failed at the same time in this package.
* I will close this issue when there are no more failures in this package _and_
there is at least one pass.
* No new issues will be filed for this package until this issue is closed.
* If there are already issues for individual test cases, I will close them when
the corresponding test passes. You can close them earlier, if you prefer, and
I won't reopen them while this issue is still open.
Here are the tests that failed:
* test_create_database
* test_create_database_pitr_invalid_retention_period
* test_create_database_pitr_success
* test_create_database_with_default_leader_success
* test_db_batch_insert_then_db_snapshot_read
* test_db_run_in_transaction_then_snapshot_execute_sql
* test_db_run_in_transaction_twice
* test_db_run_in_transaction_twice_4181
* test_list_databases
* test_table_not_found
* test_update_database_ddl_default_leader_success
* test_update_database_ddl_pitr_invalid
* test_update_database_ddl_pitr_success
* test_update_database_ddl_with_operation_id
-----
commit: 2487800e31842a44dcc37937c325e130c8c926b0
buildURL: [Build Status](https://source.cloud.google.com/results/invocations/306a2e02-87cb-4be9-be31-37456ec7a8a2), [Sponge](http://sponge2/306a2e02-87cb-4be9-be31-37456ec7a8a2)
status: failed
|
1.0
|
tests.system.test_system.TestDatabaseAPI: many tests failed - Many tests failed at the same time in this package.
* I will close this issue when there are no more failures in this package _and_
there is at least one pass.
* No new issues will be filed for this package until this issue is closed.
* If there are already issues for individual test cases, I will close them when
the corresponding test passes. You can close them earlier, if you prefer, and
I won't reopen them while this issue is still open.
Here are the tests that failed:
* test_create_database
* test_create_database_pitr_invalid_retention_period
* test_create_database_pitr_success
* test_create_database_with_default_leader_success
* test_db_batch_insert_then_db_snapshot_read
* test_db_run_in_transaction_then_snapshot_execute_sql
* test_db_run_in_transaction_twice
* test_db_run_in_transaction_twice_4181
* test_list_databases
* test_table_not_found
* test_update_database_ddl_default_leader_success
* test_update_database_ddl_pitr_invalid
* test_update_database_ddl_pitr_success
* test_update_database_ddl_with_operation_id
-----
commit: 2487800e31842a44dcc37937c325e130c8c926b0
buildURL: [Build Status](https://source.cloud.google.com/results/invocations/306a2e02-87cb-4be9-be31-37456ec7a8a2), [Sponge](http://sponge2/306a2e02-87cb-4be9-be31-37456ec7a8a2)
status: failed
|
process
|
tests system test system testdatabaseapi many tests failed many tests failed at the same time in this package i will close this issue when there are no more failures in this package and there is at least one pass no new issues will be filed for this package until this issue is closed if there are already issues for individual test cases i will close them when the corresponding test passes you can close them earlier if you prefer and i won t reopen them while this issue is still open here are the tests that failed test create database test create database pitr invalid retention period test create database pitr success test create database with default leader success test db batch insert then db snapshot read test db run in transaction then snapshot execute sql test db run in transaction twice test db run in transaction twice test list databases test table not found test update database ddl default leader success test update database ddl pitr invalid test update database ddl pitr success test update database ddl with operation id commit buildurl status failed
| 1
|
690,673
| 23,669,039,533
|
IssuesEvent
|
2022-08-27 03:53:28
|
ecotiya/ecotiya-portfolio-site
|
https://api.github.com/repos/ecotiya/ecotiya-portfolio-site
|
opened
|
プロフィール欄は、自分の画像をS3に保存しといて、それを参照させる。
|
enhancement Priority: Low
|
<!-- 要望のテンプレート -->
## 概要
プロフィール欄は、自分の画像をS3に保存しといて、それを参照させる。
テーブルにプロフィール欄のURLを保存しており、そちらを参照して表示させているだけなので、簡単にできるはず。
## 目的
プロフィール画像の差し替えする際、いちいちデプロイするのが面倒なので、S3変更により画像差し替え可能にしておきたい。
## タスク
- [ ] S3にsample画像を配備する。
- [ ] セクションタイトルマスタのカラムを変更して、S3の画像を参照可能か確認する。
- [ ] 実現不可能な場合、構成の見直しと調査。
## 補足
S3へのVPCエンドポイントを用意しているので、すぐにできるはず…
|
1.0
|
プロフィール欄は、自分の画像をS3に保存しといて、それを参照させる。 - <!-- 要望のテンプレート -->
## 概要
プロフィール欄は、自分の画像をS3に保存しといて、それを参照させる。
テーブルにプロフィール欄のURLを保存しており、そちらを参照して表示させているだけなので、簡単にできるはず。
## 目的
プロフィール画像の差し替えする際、いちいちデプロイするのが面倒なので、S3変更により画像差し替え可能にしておきたい。
## タスク
- [ ] S3にsample画像を配備する。
- [ ] セクションタイトルマスタのカラムを変更して、S3の画像を参照可能か確認する。
- [ ] 実現不可能な場合、構成の見直しと調査。
## 補足
S3へのVPCエンドポイントを用意しているので、すぐにできるはず…
|
non_process
|
プロフィール欄は、 、それを参照させる。 概要 プロフィール欄は、 、それを参照させる。 テーブルにプロフィール欄のurlを保存しており、そちらを参照して表示させているだけなので、簡単にできるはず。 目的 プロフィール画像の差し替えする際、いちいちデプロイするのが面倒なので、 。 タスク 。 セクションタイトルマスタのカラムを変更して、 。 実現不可能な場合、構成の見直しと調査。 補足 、すぐにできるはず…
| 0
|
60,199
| 25,029,030,808
|
IssuesEvent
|
2022-11-04 10:39:24
|
docintelapp/DocIntel
|
https://api.github.com/repos/docintelapp/DocIntel
|
closed
|
Avoid hard commit when adding/updating SolR indexes
|
enhancement good first issue service
|
The current code is forcing hard commit at every update or addition to the index. This is unnecessary and slowing down operations.
- [ ] Remove call to `commit`
- [ ] Force hard/soft commit after a period (see https://solr.apache.org/guide/6_6/updatehandlers-in-solrconfig.html)
|
1.0
|
Avoid hard commit when adding/updating SolR indexes - The current code is forcing hard commit at every update or addition to the index. This is unnecessary and slowing down operations.
- [ ] Remove call to `commit`
- [ ] Force hard/soft commit after a period (see https://solr.apache.org/guide/6_6/updatehandlers-in-solrconfig.html)
|
non_process
|
avoid hard commit when adding updating solr indexes the current code is forcing hard commit at every update or addition to the index this is unnecessary and slowing down operations remove call to commit force hard soft commit after a period see
| 0
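The Solr record above drops per-add hard commits in favor of commits after a period. In a deployment that is normally configured server-side via `autoCommit`/`autoSoftCommit` in solrconfig.xml (per the guide the record links); the client-side equivalent of the idea — buffer adds and commit at most once per interval — can be sketched with a hypothetical `client` exposing `add()`/`commit()`.

```python
import time

class BatchedIndexer:
    """Send adds without committing, and commit at most once per
    `interval` seconds, instead of hard-committing on every update.
    `client` is any object with add(doc) and commit() (hypothetical);
    `clock` is injectable for testing."""
    def __init__(self, client, interval=15.0, clock=time.monotonic):
        self.client = client
        self.interval = interval
        self.clock = clock
        self._last_commit = clock()

    def add(self, doc):
        self.client.add(doc)  # note: no commit on the hot path
        now = self.clock()
        if now - self._last_commit >= self.interval:
            self.client.commit()
            self._last_commit = now
```

The interval trades index freshness for throughput, which is exactly the knob the server-side autoCommit settings expose.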
|
10,964
| 13,768,673,670
|
IssuesEvent
|
2020-10-07 17:26:22
|
googleapis/python-dataproc
|
https://api.github.com/repos/googleapis/python-dataproc
|
closed
|
GA Release
|
api: dataproc type: process
|
**RELEASE AFTER 2020-06-16**
Package name: **google-cloud-dataproc**
Current release: **alpha**
Proposed release: **GA**
## Instructions
Check the lists below, adding tests / documentation as required. Once all the "required" boxes are ticked, please create a release and close this issue.
## Required
- [x] 28 days elapsed since last **alpha**~~beta~~ release with new API surface
- [x] Server API is GA
- [x] Package API is stable, and we can commit to backward compatibility
- [x] All dependencies are GA
## Optional
- [ ] Most common / important scenarios have descriptive samples
- [ ] Public manual methods have at least one usage sample each (excluding overloads)
- [ ] Per-API README includes a full description of the API
- [ ] Per-API README contains at least one “getting started” sample using the most common API scenario
- [ ] Manual code has been reviewed by API producer
- [ ] Manual code has been reviewed by a DPE responsible for samples
- [ ] 'Client Libraries' page is added to the product documentation in 'APIs & Reference' section of the product's documentation on Cloud Site
|
1.0
|
GA Release - **RELEASE AFTER 2020-06-16**
Package name: **google-cloud-dataproc**
Current release: **alpha**
Proposed release: **GA**
## Instructions
Check the lists below, adding tests / documentation as required. Once all the "required" boxes are ticked, please create a release and close this issue.
## Required
- [x] 28 days elapsed since last **alpha**~~beta~~ release with new API surface
- [x] Server API is GA
- [x] Package API is stable, and we can commit to backward compatibility
- [x] All dependencies are GA
## Optional
- [ ] Most common / important scenarios have descriptive samples
- [ ] Public manual methods have at least one usage sample each (excluding overloads)
- [ ] Per-API README includes a full description of the API
- [ ] Per-API README contains at least one “getting started” sample using the most common API scenario
- [ ] Manual code has been reviewed by API producer
- [ ] Manual code has been reviewed by a DPE responsible for samples
- [ ] 'Client Libraries' page is added to the product documentation in 'APIs & Reference' section of the product's documentation on Cloud Site
|
process
|
ga release release after package name google cloud dataproc current release alpha proposed release ga instructions check the lists below adding tests documentation as required once all the required boxes are ticked please create a release and close this issue required days elapsed since last alpha beta release with new api surface server api is ga package api is stable and we can commit to backward compatibility all dependencies are ga optional most common important scenarios have descriptive samples public manual methods have at least one usage sample each excluding overloads per api readme includes a full description of the api per api readme contains at least one “getting started” sample using the most common api scenario manual code has been reviewed by api producer manual code has been reviewed by a dpe responsible for samples client libraries page is added to the product documentation in apis reference section of the product s documentation on cloud site
| 1
|
395
| 2,842,537,771
|
IssuesEvent
|
2015-05-28 09:53:58
|
DynareTeam/dynare
|
https://api.github.com/repos/DynareTeam/dynare
|
closed
|
Add new preprocessor option minimal_workspace
|
preprocessor
|
Matlab has a limit on the number of variables allowed in the workspace. To prevent problems with this limit, introduce new preprocessor option ```minimal_workspace``` that when used, suppresses writing the parameters to a variable with their name in the workspace. That is, instead of
```
M_.params( 1 ) = 0.999;
beta = M_.params( 1 );
```
only write
```M_.params( 1 ) = 0.999;```
Also set ```options_.minimal_workspace``` to 1 (default: 0) so that we can suppress the call to ```dyn2vec``` in stoch_simul to prevent duplicating simulated variables in the workspace. See the email of Stéphane from 26/05/15
|
1.0
|
Add new preprocessor option minimal_workspace - Matlab has a limit on the number of variables allowed in the workspace. To prevent problems with this limit, introduce new preprocessor option ```minimal_workspace``` that when used, suppresses writing the parameters to a variable with their name in the workspace. That is, instead of
```
M_.params( 1 ) = 0.999;
beta = M_.params( 1 );
```
only write
```M_.params( 1 ) = 0.999;```
Also set ```options_.minimal_workspace``` to 1 (default: 0) so that we can suppress the call to ```dyn2vec``` in stoch_simul to prevent duplicating simulated variables in the workspace. See the email of Stéphane from 26/05/15
|
process
|
add new preprocessor option minimal workspace matlab has a limit on the number of variables allowed in the workspace to prevent problems with this limit introduce new preprocessor option minimal workspace that when used suppresses writing the parameters to a variable with their name in the workspace that is instead of m params beta m params only write m params also set options minimal workspace to default so that we can suppress the call to in stoch simul to prevent duplicating simulated variables in the workspace see the email of stéphane from
| 1
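The Dynare record above suppresses the second, named-workspace assignment line when `minimal_workspace` is on. The emitter toggle it describes is easy to picture as a small code generator; this sketch is illustrative (the function and parameter names are hypothetical, not Dynare's actual preprocessor internals).

```python
def emit_param_assignments(params, minimal_workspace=False):
    """Generate MATLAB assignment lines for (name, index, value)
    triples. With minimal_workspace=True, skip the per-parameter
    workspace copy (`beta = M_.params( 1 );`) to stay under
    MATLAB's workspace-variable limit."""
    lines = []
    for name, idx, value in params:
        lines.append(f"M_.params( {idx} ) = {value};")
        if not minimal_workspace:
            lines.append(f"{name} = M_.params( {idx} );")
    return "\n".join(lines)
```

With the flag set, only the `M_.params(...)` line survives, matching the record's intended output.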
|
693,205
| 23,766,907,160
|
IssuesEvent
|
2022-09-01 13:28:47
|
thoth-station/user-api
|
https://api.github.com/repos/thoth-station/user-api
|
closed
|
GET /python/package/versions gives 404 Error not found
|
priority/critical-urgent kind/bug sig/devsecops
|
**Describe the bug**
Get API gives 404 Error not found with default value , expecting to list available python versions
"error": "Package 'tensorflow' not found",
"parameters": {
"name": "tensorflow",
"order_by": null,
"os_name": "ubi",
"os_version": "9",
"python_version": "3.9"
}
}
**To Reproduce**
Steps to reproduce the behavior:
1. Try https://khemenu.thoth-station.ninja/api/v1/ui/#/PythonPackages/list_python_package_versions
**Expected behavior**
Expecting to list available python versions
|
1.0
|
GET /python/package/versions gives 404 Error not found - **Describe the bug**
Get API gives 404 Error not found with default value , expecting to list available python versions
"error": "Package 'tensorflow' not found",
"parameters": {
"name": "tensorflow",
"order_by": null,
"os_name": "ubi",
"os_version": "9",
"python_version": "3.9"
}
}
**To Reproduce**
Steps to reproduce the behavior:
1. Try https://khemenu.thoth-station.ninja/api/v1/ui/#/PythonPackages/list_python_package_versions
**Expected behavior**
Expecting to list available python versions
|
non_process
|
get python package versions gives error not found describe the bug get api gives error not found with default value expecting to list available python versions error package tensorflow not found parameters name tensorflow order by null os name ubi os version python version to reproduce steps to reproduce the behavior try expected behavior expecting to list available python versions
| 0
|
170
| 2,586,827,577
|
IssuesEvent
|
2015-02-17 14:50:23
|
MozillaFoundation/plan
|
https://api.github.com/repos/MozillaFoundation/plan
|
closed
|
Document how MoFo does program reviews
|
p2 planning process
|
Document our Program Review plan, template and calendar
### RACI
* Owner: Matt
* Decision-maker: Angela
* Consulted / Informed: Ops, TPS, and New Communities leads
(once we are further along with a draft plan and calendar)
* **Context:** As part of standard Mozilla policy and process going forward, we'll need to do program reviews across MoFo on a regular basis going forward.
* This ticket will help us co-ordinate this work this Heartbeat. To be done mostly by Angela and Matt.
### Links
* **Product line review template.** From Dave Slater: http://mzl.la/product_line_review
* **MoFo Program Review plan / notes** https://etherpad.mozilla.org/program-review
|
1.0
|
Document how MoFo does program reviews - Document our Program Review plan, template and calendar
### RACI
* Owner: Matt
* Decision-maker: Angela
* Consulted / Informed: Ops, TPS, and New Communities leads
(once we are further along with a draft plan and calendar)
* **Context:** As part of standard Mozilla policy and process going forward, we'll need to do program reviews across MoFo on a regular basis going forward.
* This ticket will help us co-ordinate this work this Heartbeat. To be done mostly by Angela and Matt.
### Links
* **Product line review template.** From Dave Slater: http://mzl.la/product_line_review
* **MoFo Program Review plan / notes** https://etherpad.mozilla.org/program-review
|
process
|
document how mofo does program reviews document our program review plan template and calendar raci owner matt decision maker angela consulted informed ops tps and new communities leads once we are further along with a draft plan and calendar context as part of standard mozilla policy and process going forward we ll need to do program reviews across mofo on a regular basis going forward this ticket will help us co ordinate this work this heartbeat to be done mostly by angela and matt links product line review template from dave slater mofo program review plan notes
| 1
|
520,901
| 15,096,947,275
|
IssuesEvent
|
2021-02-07 16:54:59
|
CookieJarApps/SmartCookieWeb
|
https://api.github.com/repos/CookieJarApps/SmartCookieWeb
|
closed
|
[Feature] Please change homepage and tab switcher styles
|
P2: Medium priority enhancement
|
:>Hey, I loved your browser smartcookieweb but it would be much better if you can change the tab switcher like chrome and also change the default homepage style.
|
1.0
|
[Feature] Please change homepage and tab switcher styles - :>Hey, I loved your browser smartcookieweb but it would be much better if you can change the tab switcher like chrome and also change the default homepage style.
|
non_process
|
please change homepage and tab switcher styles hey i loved your browser smartcookieweb but it would be much better if you can change the tab switcher like chrome and also change the default homepage style
| 0
|
12,226
| 14,743,333,525
|
IssuesEvent
|
2021-01-07 13:45:47
|
kdjstudios/SABillingGitlab
|
https://api.github.com/repos/kdjstudios/SABillingGitlab
|
closed
|
Invoice issue
|
anc-process anp-1 ant-bug has attachment
|
In GitLab by @kdjstudios on Aug 9, 2019, 08:54
**Submitted by:** "Richard Soltoff" <richard.soltoff@answernet.com>
**Helpdesk:** http://www.servicedesk.answernet.com/profiles/ticket/8982591
**Helpdesk:** http://www.servicedesk.answernet.com/profiles/ticket/2019-09-06-26528
**Server:** Internal (ALL)
**Client/Site:** NA
**Account:** Multiple
**Issue:**
There is a difference in the format of the two invoices attached.
The Alliance Inspection New Charges does not include the payment of $ 73,584.36.
The Langford Fence invoice New Charges does include the payment of $ 92.00.
I believe the correct format should NOT include the payment in the New Charges (since payments are NOT charges).
[0951_190808094755_001.pdf](/uploads/d0ffdb2710e5b2aeb988d7accb2d1e34/0951_190808094755_001.pdf)
Why are the two formats not consistent?
|
1.0
|
Invoice issue - In GitLab by @kdjstudios on Aug 9, 2019, 08:54
**Submitted by:** "Richard Soltoff" <richard.soltoff@answernet.com>
**Helpdesk:** http://www.servicedesk.answernet.com/profiles/ticket/8982591
**Helpdesk:** http://www.servicedesk.answernet.com/profiles/ticket/2019-09-06-26528
**Server:** Internal (ALL)
**Client/Site:** NA
**Account:** Multiple
**Issue:**
There is a difference in the format of the two invoices attached.
The Alliance Inspection New Charges does not include the payment of $ 73,584.36.
The Langford Fence invoice New Charges does include the payment of $ 92.00.
I believe the correct format should NOT include the payment in the New Charges (since payments are NOT charges).
[0951_190808094755_001.pdf](/uploads/d0ffdb2710e5b2aeb988d7accb2d1e34/0951_190808094755_001.pdf)
Why are the two formats not consistent?
|
process
|
invoice issue in gitlab by kdjstudios on aug submitted by richard soltoff helpdesk helpdesk server internal all client site na account multiple issue there is a difference in the format of the two invoices attached the alliance inspection new charges does not include the payment of the langford fence invoice new charges does include the payment of i believe the correct format should not include the payment in the new charges since payments are not charges uploads pdf why are the two formats not consistent
| 1
|
13,023
| 15,378,934,789
|
IssuesEvent
|
2021-03-02 18:58:46
|
Bedrohung-der-Bienen/Transformationsfelder-Digitalisierung
|
https://api.github.com/repos/Bedrohung-der-Bienen/Transformationsfelder-Digitalisierung
|
reopened
|
Benutzer soll sich registrieren können
|
backburner register process
|
__Als__ Benutzer,
__möchte ich__ mich registrieren können
__damit__ ich mich einloggen kann.
__Szenatio 1__: Benutzer kann sich registieren.
__Szenario 2__: Registrierung schlägt fehl.
|
1.0
|
Benutzer soll sich registrieren können - __Als__ Benutzer,
__möchte ich__ mich registrieren können
__damit__ ich mich einloggen kann.
__Szenatio 1__: Benutzer kann sich registieren.
__Szenario 2__: Registrierung schlägt fehl.
|
process
|
benutzer soll sich registrieren können als benutzer möchte ich mich registrieren können damit ich mich einloggen kann szenatio benutzer kann sich registieren szenario registrierung schlägt fehl
| 1
|
25,406
| 6,658,139,032
|
IssuesEvent
|
2017-09-30 15:35:26
|
joomla/joomla-cms
|
https://api.github.com/repos/joomla/joomla-cms
|
closed
|
Ajax download error during j3.8 upgrade from 3.7.5 - install ABEND, site corrupted
|
No Code Attached Yet
|
### Steps to reproduce the issue
Using Joomla upgrade component, to download and write J3.8 files, early in the process, threw an error "Ajax downloads not permitted" and upgrade process stopped.
Site was corrupt 500 error loading home page - no admin menus, etc...
Restored files from backup to recover.
### Expected result
Upgraded site
### Actual result
Corrupt site / partial upgrade
### System information (as much as possible)
PHP Built On Linux box291.bluehost.com 3.10.0-614.10.2.lve1.4.50.el6h.x86_64 #1 SMP Mon May 22 17:31:11 EDT 2017 x86_64
Database Version 5.6.32-78.1-log
Database Collation utf8_general_ci
Database Connection Collation utf8mb4_general_ci
PHP Version 5.6.31
Web Server Apache
WebServer to PHP Interface cgi-fcgi
Joomla! Version Joomla! 3.7.5 Stable [ Amani ] 14-August-2017 12:09 GMT
Joomla! Platform Version Joomla Platform 13.1.0 Stable [ Curiosity ] 24-Apr-2013 00:00 GMT
### Additional comments
Upgrade component has worked without problems up to this point.
|
1.0
|
Ajax download error during j3.8 upgrade from 3.7.5 - install ABEND, site corrupted - ### Steps to reproduce the issue
Using Joomla upgrade component, to download and write J3.8 files, early in the process, threw an error "Ajax downloads not permitted" and upgrade process stopped.
Site was corrupt 500 error loading home page - no admin menus, etc...
Restored files from backup to recover.
### Expected result
Upgraded site
### Actual result
Corrupt site / partial upgrade
### System information (as much as possible)
PHP Built On Linux box291.bluehost.com 3.10.0-614.10.2.lve1.4.50.el6h.x86_64 #1 SMP Mon May 22 17:31:11 EDT 2017 x86_64
Database Version 5.6.32-78.1-log
Database Collation utf8_general_ci
Database Connection Collation utf8mb4_general_ci
PHP Version 5.6.31
Web Server Apache
WebServer to PHP Interface cgi-fcgi
Joomla! Version Joomla! 3.7.5 Stable [ Amani ] 14-August-2017 12:09 GMT
Joomla! Platform Version Joomla Platform 13.1.0 Stable [ Curiosity ] 24-Apr-2013 00:00 GMT
### Additional comments
Upgrade component has worked without problems up to this point.
|
non_process
|
ajax download error during upgrade from install abend site corrupted steps to reproduce the issue using joomla upgrade component to download and write files early in the process threw an error ajax downloads not permitted and upgrade process stopped site was corrupt error loading home page no admin menus etc restored files from backup to recover expected result upgraded site actual result corrupt site partial upgrade system information as much as possible php built on linux bluehost com smp mon may edt database version log database collation general ci database connection collation general ci php version web server apache webserver to php interface cgi fcgi joomla version joomla stable august gmt joomla platform version joomla platform stable apr gmt additional comments upgrade component has worked without problems up to this point
| 0
|
13,788
| 16,550,794,621
|
IssuesEvent
|
2021-05-28 08:21:28
|
prisma/prisma
|
https://api.github.com/repos/prisma/prisma
|
opened
|
The Query Engine will choke the Node.js event loop with logs, if having lots of traffic.
|
kind/bug kind/improvement process/candidate team/client
|
### Bug description
The Node.js event loop is one thread, and the Query Engine is as many threads as you have logical cores. This means if we run lots of parallel queries, and we also log all of them, the 32 threads you have in Query Engine will spam the single-threaded JavaScript event loop.
This leads to:
- Using all the memory.
- Preventing the JavaScript event loop from serving any requests.
We can fix this by not logging in the client at all. The client should tell the Query Engine what needs to be logged, and how the logs should be formatted. We then just utilize the facilities of Tracing, directly writing to stdout/stderr.
### How to reproduce
- Get a fast server/workstation
- Enable query logging
- Execute a single query as fast as you can
- Observe results in RSS growth and event loop getting slower.
### Expected behavior
Logging should not have any effect on RSS or system performance.
### Prisma information
<!-- Do not include your database credentials when sharing your Prisma schema! -->
### Environment & setup
- OS: <!--[e.g. Mac OS, Windows, Debian, CentOS, ...]-->
- Database: <!--[PostgreSQL, MySQL, MariaDB or SQLite]-->
- Node.js version: <!--[Run `node -v` to see your Node.js version]-->
### Prisma Version
```
```
|
1.0
|
The Query Engine will choke the Node.js event loop with logs, if having lots of traffic. - ### Bug description
The Node.js event loop is one thread, and the Query Engine is as many threads as you have logical cores. This means if we run lots of parallel queries, and we also log all of them, the 32 threads you have in Query Engine will spam the single-threaded JavaScript event loop.
This leads to:
- Using all the memory.
- Preventing the JavaScript event loop from serving any requests.
We can fix this by not logging in the client at all. The client should tell the Query Engine what needs to be logged, and how the logs should be formatted. We then just utilize the facilities of Tracing, directly writing to stdout/stderr.
### How to reproduce
- Get a fast server/workstation
- Enable query logging
- Execute a single query as fast as you can
- Observe results in RSS growth and event loop getting slower.
### Expected behavior
Logging should not have any effect on RSS or system performance.
### Prisma information
<!-- Do not include your database credentials when sharing your Prisma schema! -->
### Environment & setup
- OS: <!--[e.g. Mac OS, Windows, Debian, CentOS, ...]-->
- Database: <!--[PostgreSQL, MySQL, MariaDB or SQLite]-->
- Node.js version: <!--[Run `node -v` to see your Node.js version]-->
### Prisma Version
```
```
|
process
|
the query engine will choke the node js event loop with logs if having lots of traffic bug description the node js event loop is one thread and the query engine is as many threads as you have logical cores this means if we run lots of parallel queries and we also log all of them the threads you have in query engine will spam the single threaded javascript event loop this leads to using all the memory preventing the javascript event loop from serving any requests we can fix this by not logging in the client at all the client should tell the query engine what needs to be logged and how the logs should be formatted we then just utilize the facilities of tracing directly writing to stdout stderr how to reproduce get a fast server workstation enable query logging execute a single query as fast as you can observe results in rss growth and event loop getting slower expected behavior logging should not have any effect on rss or system perfomance prisma information environment setup os database node js version prisma version
| 1
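The Prisma record above is about many engine threads spamming one single-threaded consumer with log records. The general mitigation — producers enqueue records cheaply while one dedicated thread does the formatting and IO — is expressible with Python's stdlib logging machinery. This is an illustrative sketch of the pattern, not Prisma's actual fix (which moves logging out of the client entirely).

```python
import logging
import logging.handlers
import queue

def make_nonblocking_logger(name="engine", handler=None):
    """Route a logger through a bounded queue: callers only enqueue
    (cheap), and a single QueueListener thread performs the real
    formatting and output. The bound means a slow consumer surfaces
    as queue.Full errors instead of unbounded RSS growth."""
    q = queue.Queue(maxsize=10_000)
    logger = logging.getLogger(name)
    logger.setLevel(logging.INFO)
    logger.handlers = [logging.handlers.QueueHandler(q)]
    listener = logging.handlers.QueueListener(
        q, handler or logging.StreamHandler())
    listener.start()
    return logger, listener
```

`listener.stop()` drains the queue before the thread exits, so nothing buffered is lost on shutdown.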
|
9,237
| 3,265,771,440
|
IssuesEvent
|
2015-10-22 17:42:46
|
0u812/sbnw
|
https://api.github.com/repos/0u812/sbnw
|
closed
|
What does the method gf_getCanvas actually return?
|
Documentation Error/Enhancement question
|
The docs says that the method gf_canvas gf_getCanvas returns a canvas, but what is this canvas? gf_canvas appears to be a void*. The docs are unclear in this matter.
|
1.0
|
What does the method gf_getCanvas actually return? - The docs says that the method gf_canvas gf_getCanvas returns a canvas, but what is this canvas? gf_canvas appears to be a void*. The docs are unclear in this matter.
|
non_process
|
what does the method gf getcanvas actually return the docs says that the method gf canvas gf getcanvas returns a canvas but what is this canvas gf canvas appears to be a void the docs are unclear in this matter
| 0
|
16,642
| 21,707,265,201
|
IssuesEvent
|
2022-05-10 10:46:12
|
sjmog/smartflix
|
https://api.github.com/repos/sjmog/smartflix
|
opened
|
Handling failures
|
04-background-processing Rails/logging Ruby/Time
|
So far, we’ve implemented creating and updating records, based on the API response. Unfortunately, not every time the response can be successful, so we should be ready for such cases as well.
Notice, that the response contain `response` key defining whether the request succeeded or failed. We want to know about it so we’ll write a proper message into the logs when it failed.
It may also happen, that a movie has been deleted from the API’s database and it no longer exists. In this way, some of our database records won’t be updated, so we’ll delete them as well after 48 hours.
## To complete this challenge, you will need to
- [ ] Store a message of your choice in the [Rails logger](https://guides.rubyonrails.org/debugging_rails_applications.html#the-logger) when a given movie title wasn’t found in API. The message should contain the current timestamp and have the log level as a _warning_.
- [ ] Create a new worker deleting all show records that weren’t updated within the last 48 hours.
- [ ] Run the worker frequently: for instance, every day at midnight.
- [ ] Make sure you've tested the log message and deletion process! (Check out the Tips for more.)
## Tips
- To test this, you'll need to be able to manipulate time – freezing it, and moving forwards and backwards during a single test. Check out both [Rails' built-in methods for this](https://api.rubyonrails.org/classes/ActiveSupport/Testing/TimeHelpers.html) and the popular [Timecop](https://github.com/travisjeffery/timecop). Consider reading this [comparison of the two](https://engineering.freeagent.com/2021/03/25/timecop-vs-rails-timehelpers/).
|
1.0
|
Handling failures
|
process
|
| 1
|
10,804
| 13,609,287,985
|
IssuesEvent
|
2020-09-23 04:50:10
|
googleapis/java-errorreporting
|
https://api.github.com/repos/googleapis/java-errorreporting
|
closed
|
Dependency Dashboard
|
api: clouderrorreporting type: process
|
This issue contains a list of Renovate updates and their statuses.
## Open
These updates have all been created already. Click a checkbox below to force a retry/rebase of any.
- [ ] <!-- rebase-branch=renovate/com.google.cloud-google-cloud-errorreporting-0.x -->chore(deps): update dependency com.google.cloud:google-cloud-errorreporting to v0.120.2-beta
- [ ] <!-- rebase-branch=renovate/com.google.cloud-libraries-bom-10.x -->chore(deps): update dependency com.google.cloud:libraries-bom to v10
---
- [ ] <!-- manual job -->Check this box to trigger a request for Renovate to run again on this repository
|
1.0
|
|
process
|
| 1
|
22,060
| 30,576,584,236
|
IssuesEvent
|
2023-07-21 06:11:00
|
winter-telescope/mirar
|
https://api.github.com/repos/winter-telescope/mirar
|
opened
|
[FEATURE] Core fields for dataframe
|
enhancement skyportal processors WINTER
|
**Is your feature request related to a problem? Please describe.**
I'm always frustrated when I realise the source processors are only really set up to work with wirc, and nothing else.
**Describe the solution you'd like**
We should have a list of core fields, like images. `prv_candidates` should be under that banner. More generally, there is so much overly-specific stuff in the source/alert generation that should not be there. Maybe we need a `make_alert`, like `load_raw_image`, which is pipeline-specific.
|
1.0
|
|
process
|
| 1
|
17,513
| 23,325,676,944
|
IssuesEvent
|
2022-08-08 20:55:56
|
microsoft/vscode
|
https://api.github.com/repos/microsoft/vscode
|
closed
|
Reopening a terminal session to a working directory with äöü in path won't work
|
bug help wanted terminal-process
|
Does this issue occur when all extensions are disabled?: Yes
- VS Code Version: 1.67.2
- OS Version: MacOS Monterey 12.4 (default zsh terminal)
Steps to Reproduce:
1. Create a folder containing an "Umlaut" (äöü)
2. Open folder in VSCode and launch/use a terminal (for testing I used _echo something_ in zsh)
3. Close VSCode
4. When restarting, VSCode fails to restore previous session as "the directory doesn't exist"
However, if there is no "Umlaut" included in the path somewhere, this problem won't occur.
This issue should affect at least all German VSCode users (like me) who want to open a folder on their Mac located in OneDrive. The problem is that the filepath for such files looks like "~/Library/CloudStorage/OneDrive-Persönlich/..." and restoring a session isn't possible for any folder/file in OneDrive, which is a little bit inconvenient.
There might also just be a problem with my terminals encoding (currently set to UTF-8 so should work I guess) that it can't open the working directory with how it spells (in this case) the ö in the filepath:
<img width="390" alt="Bildschirmfoto 2022-05-23 um 00 04 17" src="https://user-images.githubusercontent.com/81378532/169717961-1c3a9dac-ea5c-45ac-8ced-e8993eb6987f.png">
(when I try to cd to that filepath it doesn't recognize the o\xcc\x88 as ö and says the path doesn't exist)
Thanks for help in advance :)
Greetings,
Fussel
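The `o\xcc\x88` the shell prints is the UTF-8 byte sequence for the decomposed (NFD) form that macOS file systems store: a plain `o` followed by a combining diaeresis (U+0308). A path comparison done codepoint-by-codepoint against the precomposed (NFC) `ö` (U+00F6) will then fail even though both render identically. A small illustrative snippet (not part of the issue report, just to show the mismatch):

```python
import unicodedata

nfd = "o\u0308"                          # 'o' + combining diaeresis, as APFS stores it
nfc = unicodedata.normalize("NFC", nfd)  # precomposed U+00F6

print(nfd.encode("utf-8"))  # b'o\xcc\x88' -- the bytes visible in the screenshot
print(nfd == "\u00f6")      # False: the codepoint sequences differ
print(nfc == "\u00f6")      # True after NFC normalization
```

This suggests the restore path is comparing the stored NFD directory name against an NFC string (or vice versa) without normalizing first.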
|
1.0
|
|
process
|
| 1
|
341,717
| 30,597,246,741
|
IssuesEvent
|
2023-07-22 00:36:13
|
unifyai/ivy
|
https://api.github.com/repos/unifyai/ivy
|
reopened
|
Fix pointwise_ops.test_torch_tanh
|
PyTorch Frontend Sub Task Failing Test
|
| | |
|---|---|
|tensorflow|<a href="https://github.com/unifyai/ivy/actions/runs/5627467749/job/15250034494"><img src=https://img.shields.io/badge/-success-success></a>
|torch|<a href="https://github.com/unifyai/ivy/actions/runs/5627467749/job/15250034494"><img src=https://img.shields.io/badge/-success-success></a>
|jax|<a href="https://github.com/unifyai/ivy/actions/runs/5627467749/job/15250034494"><img src=https://img.shields.io/badge/-success-success></a>
|numpy|<a href="https://github.com/unifyai/ivy/actions/runs/5627467749/job/15250034494"><img src=https://img.shields.io/badge/-success-success></a>
|paddle|<a href="https://github.com/unifyai/ivy/actions/runs/5627467749/job/15250034494"><img src=https://img.shields.io/badge/-failure-red></a>
|
1.0
|
|
non_process
|
| 0
|
17,172
| 22,745,624,191
|
IssuesEvent
|
2022-07-07 08:55:08
|
geneontology/go-ontology
|
https://api.github.com/repos/geneontology/go-ontology
|
closed
|
Obsoletion of molecular functions represented as biological processes
|
obsoletion multi-species process
|
Dear all,
The proposal has been made to obsolete:
GO:0044481 envenomation resulting in proteolysis in another organism
7 EXP - represents a MF -> removed, all annotations were redundant with other annotations to 'envenomation resulting in negative regulation of platelet aggregation in another organism' and/or 'envenomation resulting in fibrinogenolysis in another organism'
GO:0099127 envenomation resulting in positive regulation of argininosuccinate synthase activity in another organism
0 annotations
GO:0044543 envenomation resulting in zymogen activation in another organism
0 annotations
The reason for obsoletion is that these represent molecular functions. Annotations have been removed. There are no mappings to these terms; these terms are not present in any subsets.
You can comment on the ticket: https://github.com/geneontology/go-ontology/issues/23634
Thanks, Pascale
|
1.0
|
|
process
|
| 1
|