| Unnamed: 0 (int64) | id (float64) | type (string) | created_at (string) | repo (string) | repo_url (string) | action (string) | title (string) | labels (string) | body (string) | index (string) | text_combine (string) | label (string) | text (string) | binary_label (int64) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
20,692
| 27,364,144,799
|
IssuesEvent
|
2023-02-27 17:50:51
|
open-telemetry/opentelemetry-collector-contrib
|
https://api.github.com/repos/open-telemetry/opentelemetry-collector-contrib
|
closed
|
[doc][filter processor]
|
bug priority:p2 processor/filter
|
### Component(s)
processor/filter
### What happened?
## Description
We created a config that followed the examples on the [README.md](https://github.com/open-telemetry/opentelemetry-collector-contrib/blob/v0.71.0/processor/filterprocessor/README.md) and it started failing on release `v0.71.0`. The issue seems to be due to the upstream change https://github.com/open-telemetry/opentelemetry-collector/pull/6876, which means that keys are now case-sensitive. The examples in the `README.md`, however, have (multiple) blocks like
```yaml
processors:
filter:
spans:
include:
match_type: strict
services:
- app_3
exclude:
match_type: regexp
services:
- app_1
- app_2
span_names:
- hello_world
- hello/world
attributes:
- Key: container.name
Value: (app_container_1|app_container_2)
libraries:
- Name: opentelemetry
Version: 0.0-beta
resources:
- Key: container.host
Value: (localhost|127.0.0.1)
```
where `Key` and `Value` should be `key` and `value` instead. The use of `Key/Value` also differs from the test configs under `/testdata`.
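For reference, a corrected version of the affected entries, using the lowercase keys the decoder now requires (reconstructed from the example above; not the full README config):

```yaml
attributes:
  - key: container.name
    value: (app_container_1|app_container_2)
libraries:
  - name: opentelemetry
    version: 0.0-beta
resources:
  - key: container.host
    value: (localhost|127.0.0.1)
```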
## Steps to Reproduce
We don't need to use a full otel config to test; we can simply adjust the test data to have the incorrect key, i.e. in `processor/filterprocessor/testdata/config_traces.yaml` change `key` and `value` to `Key` and `Value`, then run the test
```shell
[$processor/filterprocessor] go test -v -run TestLoadingSpans
```
which will fail. Running the same test on tag `v0.70.0` passes.
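The capitalization flip described above can be sketched as a self-contained shell snippet (the sample file path and the `sed` invocation are illustrative, assuming GNU sed; the real change targets `processor/filterprocessor/testdata/config_traces.yaml`):

```shell
# Write a sample fragment in the correct lowercase form, then flip
# key/value to the incorrect capitalization that triggers the decode error.
cat > /tmp/attrs.yaml <<'EOF'
attributes:
  - key: container.name
    value: (app_container_1|app_container_2)
EOF
sed -i 's/- key:/- Key:/; s/^\( *\)value:/\1Value:/' /tmp/attrs.yaml
cat /tmp/attrs.yaml
```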
## Expected Result
The tests pass - or maybe they should fail, depending on your POV, but I feel like this is an unintended breaking change.
## Actual Result
The test fails with output
```
config_test.go:610:
Error Trace: /Users/edwintye/github/opentelemetry-collector-contrib/processor/filterprocessor/config_test.go:610
Error: Received unexpected error:
1 error(s) decoding:
* 'spans.include.attributes[0]' has invalid keys: Key, Value
```
### Collector version
v0.71.0
### Environment information
N/A
### OpenTelemetry Collector configuration
_No response_
### Log output
_No response_
### Additional context
I don't know how widespread this issue is or how many people copy the examples from the README.md. I discovered it in the filter processor because it broke our deployment, but this issue may apply everywhere.
|
1.0
|
[doc][filter processor] - ### Component(s)
processor/filter
### What happened?
## Description
We created a config that followed the examples on the [README.md](https://github.com/open-telemetry/opentelemetry-collector-contrib/blob/v0.71.0/processor/filterprocessor/README.md) and it started failing on release `v0.71.0`. The issue seems to be due to the upstream change https://github.com/open-telemetry/opentelemetry-collector/pull/6876, which means that keys are now case-sensitive. The examples in the `README.md`, however, have (multiple) blocks like
```yaml
processors:
filter:
spans:
include:
match_type: strict
services:
- app_3
exclude:
match_type: regexp
services:
- app_1
- app_2
span_names:
- hello_world
- hello/world
attributes:
- Key: container.name
Value: (app_container_1|app_container_2)
libraries:
- Name: opentelemetry
Version: 0.0-beta
resources:
- Key: container.host
Value: (localhost|127.0.0.1)
```
where `Key` and `Value` should be `key` and `value` instead. The use of `Key/Value` also differs from the test configs under `/testdata`.
## Steps to Reproduce
We don't need to use a full otel config to test; we can simply adjust the test data to have the incorrect key, i.e. in `processor/filterprocessor/testdata/config_traces.yaml` change `key` and `value` to `Key` and `Value`, then run the test
```shell
[$processor/filterprocessor] go test -v -run TestLoadingSpans
```
which will fail. Running the same test on tag `v0.70.0` passes.
## Expected Result
The tests pass - or maybe they should fail, depending on your POV, but I feel like this is an unintended breaking change.
## Actual Result
The test fails with output
```
config_test.go:610:
Error Trace: /Users/edwintye/github/opentelemetry-collector-contrib/processor/filterprocessor/config_test.go:610
Error: Received unexpected error:
1 error(s) decoding:
* 'spans.include.attributes[0]' has invalid keys: Key, Value
```
### Collector version
v0.71.0
### Environment information
N/A
### OpenTelemetry Collector configuration
_No response_
### Log output
_No response_
### Additional context
I don't know how widespread this issue is or how many people copy the examples from the README.md. I discovered it in the filter processor because it broke our deployment, but this issue may apply everywhere.
|
process
|
component s processor filter what happened description we created a config that followed the examples on the and it started failing on release the issue seems to be due to the upstream change which means that keys are now case sensitive the examples on the readme md however has multiple blocks like yaml processors filter spans include match type strict services app exclude match type regexp services app app span names hello world hello world attributes key container name value app container app container libraries name opentelemetry version beta resources key container host value localhost where key and value should be key and value instead the use of key value also differs from the test configs under testdata steps to reproduce we don t need to use a full otel config to test we can simply adjust the test data to have the incorrect key i e in processor filterprocessor testdata config traces yaml change key and value to key and value then run the test shel go test v run testloadingspans which will fail running the same test on tag passes expected result the tests passes or maybe fail depending on your pov but i feel like this is an unintended breaking change actual result the tests fails with output config test go error trace users edwintye github opentelemetry collector contrib processor filterprocessor config test go error received unexpected error error s decoding spans include attributes has invalid keys key value collector version environment information n a opentelemetry collector configuration no response log output no response additional context i don t know how wild spread this issue is and how many people copy the examples from the readme md i have discovered in the filter processor because it broke our deployment but this issue may apply everywhere
| 1
|
14,733
| 17,950,348,607
|
IssuesEvent
|
2021-09-12 15:57:32
|
bazelbuild/bazel
|
https://api.github.com/repos/bazelbuild/bazel
|
closed
|
Remove the --all_incompatible_changes flag
|
P2 type: process team-Configurability
|
The `--all_incompatible_changes` flag should be removed.
- Its intent is not clear since the introduction of LTS tracks
- It originally was used to gather all breaking changes for the next major release. The team does not do that any more since changes can be introduced at HEAD at any time
- Sometimes distinct `--incompatible_*` flags can interfere with each other.
- in theory you could have conflicts that make Bazel unusable
- in practice, it interferes with flag deprecation #11193
- The mechanism does not capture Starlark defined flags. As we move flags out of the Bazel core and into Starlark it will become less useful.
|
1.0
|
Remove the --all_incompatible_changes flag - The `--all_incompatible_changes` flag should be removed.
- Its intent is not clear since the introduction of LTS tracks
- It originally was used to gather all breaking changes for the next major release. The team does not do that any more since changes can be introduced at HEAD at any time
- Sometimes distinct `--incompatible_*` flags can interfere with each other.
- in theory you could have conflicts that make Bazel unusable
- in practice, it interferes with flag deprecation #11193
- The mechanism does not capture Starlark defined flags. As we move flags out of the Bazel core and into Starlark it will become less useful.
|
process
|
remove the all incompatible changes flag the all incompatible changes flag should be removed its intent is not clear since the introduction of lts tracks it originally was used to gather all breaking changes for the next major release the team does not do that any more since changes can be introduced at head at any time sometimes distinct incompatible flags can interfere with each other in theory you could have conflicts that make bazel unusable in practice it interferes with flag deprecation the mechanism does not capture starlark defined flags as we move flags out of the bazel core and into starlark it will become less useful
| 1
|
15,306
| 19,347,976,468
|
IssuesEvent
|
2021-12-15 12:55:43
|
sara-sabr/ITStrategy
|
https://api.github.com/repos/sara-sabr/ITStrategy
|
opened
|
CS-02 Process Next Steps
|
initiative:hr-process
|
- [ ] Send invitations for the exam
- [ ] Send communique to Stakeholders for exam review
- [ ] Review exams and send results to HR
- [ ] Book interviews
- [ ] Interview candidates
- [ ] Send results to HR
- [ ] Create pool
- [ ] Send communique to SABR for informal interviews
- [ ] Send communique to IITB for informal interviews
|
1.0
|
CS-02 Process Next Steps - - [ ] Send invitations for the exam
- [ ] Send communique to Stakeholders for exam review
- [ ] Review exams and send results to HR
- [ ] Book interviews
- [ ] Interview candidates
- [ ] Send results to HR
- [ ] Create pool
- [ ] Send communique to SABR for informal interviews
- [ ] Send communique to IITB for informal interviews
|
process
|
cs process next steps send invitations for the exam send communique to stakeholders for exam review review exams and send results to hr book interviews interview candidates send results to hr create pool send communique to sabr for informal interviews send communique to iitb for informal interviews
| 1
|
18,796
| 24,698,135,628
|
IssuesEvent
|
2022-10-19 13:34:11
|
km4ack/pi-build
|
https://api.github.com/repos/km4ack/pi-build
|
closed
|
RepeaterSTART
|
enhancement in process
|
Good morning, I was wondering if you might include RepeaterSTART for Linux, the Linux repeaters app I've been working on -
https://github.com/programmin1/Repeater-START
|
1.0
|
RepeaterSTART - Good morning, I was wondering if you might include RepeaterSTART for Linux, the Linux repeaters app I've been working on -
https://github.com/programmin1/Repeater-START
|
process
|
repeaterstart good morning i was wondering if you might include repeaterstart for linux the linux repeaters app i ve been working on
| 1
|
154,628
| 19,751,313,329
|
IssuesEvent
|
2022-01-15 04:54:39
|
turkdevops/atom
|
https://api.github.com/repos/turkdevops/atom
|
opened
|
CVE-2021-37712 (High) detected in tar-4.4.4.tgz, tar-4.4.13.tgz
|
security vulnerability
|
## CVE-2021-37712 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>tar-4.4.4.tgz</b>, <b>tar-4.4.13.tgz</b></p></summary>
<p>
<details><summary><b>tar-4.4.4.tgz</b></p></summary>
<p>tar for node</p>
<p>Library home page: <a href="https://registry.npmjs.org/tar/-/tar-4.4.4.tgz">https://registry.npmjs.org/tar/-/tar-4.4.4.tgz</a></p>
<p>
Dependency Hierarchy:
- atom-package-manager-2.4.5.tgz (Root Library)
- npm-6.2.0.tgz
- :x: **tar-4.4.4.tgz** (Vulnerable Library)
</details>
<details><summary><b>tar-4.4.13.tgz</b></p></summary>
<p>tar for node</p>
<p>Library home page: <a href="https://registry.npmjs.org/tar/-/tar-4.4.13.tgz">https://registry.npmjs.org/tar/-/tar-4.4.13.tgz</a></p>
<p>
Dependency Hierarchy:
- npm-6.14.6.tgz (Root Library)
- :x: **tar-4.4.13.tgz** (Vulnerable Library)
</details>
<p>Found in HEAD commit: <a href="https://github.com/turkdevops/atom/commit/1eff9f4173420e33aa6739fdce981e7651e8f212">1eff9f4173420e33aa6739fdce981e7651e8f212</a></p>
<p>Found in base branch: <b>electron-upgrade</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
The npm package "tar" (aka node-tar) before versions 4.4.18, 5.0.10, and 6.1.9 has an arbitrary file creation/overwrite and arbitrary code execution vulnerability. node-tar aims to guarantee that any file whose location would be modified by a symbolic link is not extracted. This is, in part, achieved by ensuring that extracted directories are not symlinks. Additionally, in order to prevent unnecessary stat calls to determine whether a given path is a directory, paths are cached when directories are created. This logic was insufficient when extracting tar files that contained both a directory and a symlink with names containing unicode values that normalized to the same value. Additionally, on Windows systems, long path portions would resolve to the same file system entities as their 8.3 "short path" counterparts. A specially crafted tar archive could thus include a directory with one form of the path, followed by a symbolic link with a different string that resolves to the same file system entity, followed by a file using the first form. By first creating a directory, and then replacing that directory with a symlink that had a different apparent name that resolved to the same entry in the filesystem, it was thus possible to bypass node-tar symlink checks on directories, essentially allowing an untrusted tar file to symlink into an arbitrary location and subsequently extracting arbitrary files into that location, thus allowing arbitrary file creation and overwrite. These issues were addressed in releases 4.4.18, 5.0.10 and 6.1.9. The v3 branch of node-tar has been deprecated and did not receive patches for these issues. If you are still using a v3 release we recommend you update to a more recent version of node-tar. If this is not possible, a workaround is available in the referenced GHSA-qq89-hq3f-393p.
<p>Publish Date: 2021-08-31
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-37712>CVE-2021-37712</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>8.6</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: Required
- Scope: Changed
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/npm/node-tar/security/advisories/GHSA-qq89-hq3f-393p">https://github.com/npm/node-tar/security/advisories/GHSA-qq89-hq3f-393p</a></p>
<p>Release Date: 2021-08-31</p>
<p>Fix Resolution: tar - 4.4.18, 5.0.10, 6.1.9</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
True
|
CVE-2021-37712 (High) detected in tar-4.4.4.tgz, tar-4.4.13.tgz - ## CVE-2021-37712 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>tar-4.4.4.tgz</b>, <b>tar-4.4.13.tgz</b></p></summary>
<p>
<details><summary><b>tar-4.4.4.tgz</b></p></summary>
<p>tar for node</p>
<p>Library home page: <a href="https://registry.npmjs.org/tar/-/tar-4.4.4.tgz">https://registry.npmjs.org/tar/-/tar-4.4.4.tgz</a></p>
<p>
Dependency Hierarchy:
- atom-package-manager-2.4.5.tgz (Root Library)
- npm-6.2.0.tgz
- :x: **tar-4.4.4.tgz** (Vulnerable Library)
</details>
<details><summary><b>tar-4.4.13.tgz</b></p></summary>
<p>tar for node</p>
<p>Library home page: <a href="https://registry.npmjs.org/tar/-/tar-4.4.13.tgz">https://registry.npmjs.org/tar/-/tar-4.4.13.tgz</a></p>
<p>
Dependency Hierarchy:
- npm-6.14.6.tgz (Root Library)
- :x: **tar-4.4.13.tgz** (Vulnerable Library)
</details>
<p>Found in HEAD commit: <a href="https://github.com/turkdevops/atom/commit/1eff9f4173420e33aa6739fdce981e7651e8f212">1eff9f4173420e33aa6739fdce981e7651e8f212</a></p>
<p>Found in base branch: <b>electron-upgrade</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
The npm package "tar" (aka node-tar) before versions 4.4.18, 5.0.10, and 6.1.9 has an arbitrary file creation/overwrite and arbitrary code execution vulnerability. node-tar aims to guarantee that any file whose location would be modified by a symbolic link is not extracted. This is, in part, achieved by ensuring that extracted directories are not symlinks. Additionally, in order to prevent unnecessary stat calls to determine whether a given path is a directory, paths are cached when directories are created. This logic was insufficient when extracting tar files that contained both a directory and a symlink with names containing unicode values that normalized to the same value. Additionally, on Windows systems, long path portions would resolve to the same file system entities as their 8.3 "short path" counterparts. A specially crafted tar archive could thus include a directory with one form of the path, followed by a symbolic link with a different string that resolves to the same file system entity, followed by a file using the first form. By first creating a directory, and then replacing that directory with a symlink that had a different apparent name that resolved to the same entry in the filesystem, it was thus possible to bypass node-tar symlink checks on directories, essentially allowing an untrusted tar file to symlink into an arbitrary location and subsequently extracting arbitrary files into that location, thus allowing arbitrary file creation and overwrite. These issues were addressed in releases 4.4.18, 5.0.10 and 6.1.9. The v3 branch of node-tar has been deprecated and did not receive patches for these issues. If you are still using a v3 release we recommend you update to a more recent version of node-tar. If this is not possible, a workaround is available in the referenced GHSA-qq89-hq3f-393p.
<p>Publish Date: 2021-08-31
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-37712>CVE-2021-37712</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>8.6</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: Required
- Scope: Changed
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/npm/node-tar/security/advisories/GHSA-qq89-hq3f-393p">https://github.com/npm/node-tar/security/advisories/GHSA-qq89-hq3f-393p</a></p>
<p>Release Date: 2021-08-31</p>
<p>Fix Resolution: tar - 4.4.18, 5.0.10, 6.1.9</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
non_process
|
cve high detected in tar tgz tar tgz cve high severity vulnerability vulnerable libraries tar tgz tar tgz tar tgz tar for node library home page a href dependency hierarchy atom package manager tgz root library npm tgz x tar tgz vulnerable library tar tgz tar for node library home page a href dependency hierarchy npm tgz root library x tar tgz vulnerable library found in head commit a href found in base branch electron upgrade vulnerability details the npm package tar aka node tar before versions and has an arbitrary file creation overwrite and arbitrary code execution vulnerability node tar aims to guarantee that any file whose location would be modified by a symbolic link is not extracted this is in part achieved by ensuring that extracted directories are not symlinks additionally in order to prevent unnecessary stat calls to determine whether a given path is a directory paths are cached when directories are created this logic was insufficient when extracting tar files that contained both a directory and a symlink with names containing unicode values that normalized to the same value additionally on windows systems long path portions would resolve to the same file system entities as their short path counterparts a specially crafted tar archive could thus include a directory with one form of the path followed by a symbolic link with a different string that resolves to the same file system entity followed by a file using the first form by first creating a directory and then replacing that directory with a symlink that had a different apparent name that resolved to the same entry in the filesystem it was thus possible to bypass node tar symlink checks on directories essentially allowing an untrusted tar file to symlink into an arbitrary location and subsequently extracting arbitrary files into that location thus allowing arbitrary file creation and overwrite these issues were addressed in releases and the branch of node tar has been deprecated and did not receive 
patches for these issues if you are still using a release we recommend you update to a more recent version of node tar if this is not possible a workaround is available in the referenced ghsa publish date url a href cvss score details base score metrics exploitability metrics attack vector local attack complexity low privileges required none user interaction required scope changed impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution tar step up your open source security game with whitesource
| 0
|
502,074
| 14,539,764,862
|
IssuesEvent
|
2020-12-15 12:22:01
|
GSG-G9/todo_or_not_todo
|
https://api.github.com/repos/GSG-G9/todo_or_not_todo
|
closed
|
file structure
|
priority-0
|
- **Notes:**
- set up eslint
- set up travis
- **files structure:**
- public/
- index.html
- signup.html
- main.html
- styles.css
- scripts/
- utlis.js
- login.js
- signup.js
- main.js
- src/
- index.js
- app.js
- router.js
- controllers/
- index.js
- database/
- config/
- build.sql
- build.js
- connection.js
- queries/
- index.js
- \_\_tests\_\_/
- routes/
- index.test.js
- queries/
- index.test.js
- **packages**
- dependencies
- @hapi/joi
- bcryptjs
- compression
- cookie-parser
- env2
- express
- jsonwebtoken
- pg
- devDependencies
- jest
- nodemon
- supertest
|
1.0
|
file structure - - **Notes:**
- set up eslint
- set up travis
- **files structure:**
- public/
- index.html
- signup.html
- main.html
- styles.css
- scripts/
- utlis.js
- login.js
- signup.js
- main.js
- src/
- index.js
- app.js
- router.js
- controllers/
- index.js
- database/
- config/
- build.sql
- build.js
- connection.js
- queries/
- index.js
- \_\_tests\_\_/
- routes/
- index.test.js
- queries/
- index.test.js
- **packages**
- dependencies
- @hapi/joi
- bcryptjs
- compression
- cookie-parser
- env2
- express
- jsonwebtoken
- pg
- devDependencies
- jest
- nodemon
- supertest
|
non_process
|
file structure notes set up eslint set up travis files structure public index html signup html main html styles css scripts utlis js login js signup js main js src index js app js router js controllers index js database config build sql build js connection js queries index js tests routes index test js queries index test js packages dependencies hapi joi bcryptjs compression cookie parser express jsonwebtoken pg devdependencies jest nodemon supertest
| 0
|
49,428
| 7,504,062,517
|
IssuesEvent
|
2018-04-10 01:26:08
|
BabylonJS/Babylon.js
|
https://api.github.com/repos/BabylonJS/Babylon.js
|
closed
|
Generate documentation for glTFLoader class
|
documentation
|
- [x] add docs to gltfLoader
- [x] get typedoc validate to validate against loaders
|
1.0
|
Generate documentation for glTFLoader class - - [x] add docs to gltfLoader
- [x] get typedoc validate to validate against loaders
|
non_process
|
generate documentation for gltfloader class add docs to gltfloader get typedoc validate to validate against loaders
| 0
|
11,491
| 14,365,108,449
|
IssuesEvent
|
2020-12-01 01:00:11
|
qgis/QGIS
|
https://api.github.com/repos/qgis/QGIS
|
closed
|
Enable appending layers to existing geopackage when using package layers algorithm
|
Bug Feedback Processing Regression
|
# Problem
The current implementation of the package layers algorithm does not allow a user to append layers to an existing geopackage.
The default behavior is to overwrite an existing geopackage even when the user hasn't specified the option to overwrite it.


**NB** I think this should be classified as a bug, but I am not sure
|
1.0
|
Enable appending layers to existing geopackage when using package layers algorithm - # Problem
The current implementation of the package layers algorithm does not allow a user to append layers to an existing geopackage.
The default behavior is to overwrite an existing geopackage even when the user hasn't specified the option to overwrite it.


**NB** I think this should be classified as a bug, but I am not sure
|
process
|
enable appending layers to existing geopackage when using package layers algorithm problem the current implementation of package layer algorithm does not allow a user to append layers to an existing geopackage the default behavior is to overwrite an existing geopackage even when a user hasn t specified the option to overwrite existing geopackage nb i think this should be a bug but i am not sure
| 1
|
744,303
| 25,937,738,670
|
IssuesEvent
|
2022-12-16 15:32:14
|
GEOS-ESM/MAPL
|
https://api.github.com/repos/GEOS-ESM/MAPL
|
closed
|
ACG: Apply star-expansion to long name
|
high priority
|
Per @amdasilva, the star-expansion done in the short name seen here:

Should also be done in the long name.
> For the short name the “*” is replaced with the name of the component, thus BCEXTTAU, BRCEXTTAU, etc. A similar device was supposed to have been implemented for the long name but apparently it has not. In this case, we should have something like this:
>
> Which would expand to “BC Extinction AOT”, “BRC Extinction AOT”, etc… Expanding BC to “Black Carbon” cannot be easily done with the ACG without introducing cumbersome syntax. Having ncks perform attribute renaming at the script level may be easier to implement.
|
1.0
|
ACG: Apply star-expansion to long name - Per @amdasilva, the star-expansion done in the short name seen here:

Should also be done in the long name.
> For the short name the “*” is replaced with the name of the component, thus BCEXTTAU, BRCEXTTAU, etc. A similar device was supposed to have been implemented for the long name but apparently it has not. In this case, we should have something like this:
>
> Which would expand to “BC Extinction AOT”, “BRC Extinction AOT”, etc… Expanding BC to “Black Carbon” cannot be easily done with the ACG without introducing cumbersome syntax. Having ncks perform attribute renaming at the script level may be easier to implement.
|
non_process
|
acg apply star expansion to long name per amdasilva the star expansion done in the short name seen here should also be done in the long name for the short name the “ ” is replaced with the name of the component thus bcexttau brcexttau etc a similar device was supposed to have been implemented for the long name but apparently it has not in this case we should have something like this which would expand to “bc extinction aot” “brc extinction aot” etc… expanding bc to “black carbon” cannot be easily done with the acg without introducing cumbersome syntax having ncks perform attribute renaming at the script level may be easier to implement
| 0
|
10,828
| 13,610,057,296
|
IssuesEvent
|
2020-09-23 06:41:39
|
thegooddocsproject/thegooddocsproject.github.io
|
https://api.github.com/repos/thegooddocsproject/thegooddocsproject.github.io
|
closed
|
Add new PSC and committee members
|
process
|
We need to update our webpages as per: https://thegooddocsproject.groups.io/g/main/topic/inviting_people_into_official/76775222?p=,,,20,0,0,0::recentpostdate%2Fsticky,,,20,2,0,76775222
We have the following new PSC/committee members:
Project Steering Committee:
* Morgan Craft
* Alyssa Rock
Committer:
* Lana Brindley
* Daniel Beck
* Viraji Ogodapola
|
1.0
|
Add new PSC and committee members - We need to update our webpages as per: https://thegooddocsproject.groups.io/g/main/topic/inviting_people_into_official/76775222?p=,,,20,0,0,0::recentpostdate%2Fsticky,,,20,2,0,76775222
We have the following new PSC/committee members:
Project Steering Committee:
* Morgan Craft
* Alyssa Rock
Committer:
* Lana Brindley
* Daniel Beck
* Viraji Ogodapola
|
process
|
add new psc and committee members we need to update our webpages as per we have the following new psc committee members project steering committee morgan craft alyssa rock committer lana brindley daniel beck viraji ogodapola
| 1
|
12,064
| 14,739,730,210
|
IssuesEvent
|
2021-01-07 07:48:49
|
kdjstudios/SABillingGitlab
|
https://api.github.com/repos/kdjstudios/SABillingGitlab
|
closed
|
068- portland unable to process credit card payment
|
anc-process anp-0.5 ant-bug ant-child/secondary has attachment
|
In GitLab by @kdjstudios on Sep 14, 2018, 12:18
**Submitted by:** "Lettice Ross" <lettice.ross@answernet.com>
**Helpdesk:** http://www.servicedesk.answernet.com/profiles/ticket/2018-09-14-22562/conversation
**Server:** Internal
**Client/Site:** 068
**Account:** 068-B01200
**Issue:**
I’m sending this email regarding trying to process a credit card payment through sab and receiving a "declined due to communication error" message.
I have attached a copy of the message I received back after trying to run the credit card, however there were other credit cards that I ran this morning with no problems.
I reached out to the client to have him call his bank to make sure it was not on their end, and he called me back stating the bank said everything was fine on their end and that they didn’t show anything trying to come through his account. When I pull up the client’s account in sab it shows three transactions as failed when I only tried once.
If someone can please assist.
[b01200+declined.txt](/uploads/eadfd0ce56365a3c45487633188ec5bc/b01200+declined.txt)
|
1.0
|
068- portland unable to process credit card payment - In GitLab by @kdjstudios on Sep 14, 2018, 12:18
**Submitted by:** "Lettice Ross" <lettice.ross@answernet.com>
**Helpdesk:** http://www.servicedesk.answernet.com/profiles/ticket/2018-09-14-22562/conversation
**Server:** Internal
**Client/Site:** 068
**Account:** 068-B01200
**Issue:**
I’m sending this email in regards to trying to process a credit card payment through sab, and receiving a declined due to communication error message.
I have attached a copy of the message I received back after trying to run the credit card, however there were other credit cards that I ran this morning with no problems.
I reached out to the client to have him call his bank to make sure it was not on their end, and he called me back stating the bank said everything was fine on their end, and that they didn’t show anything trying to come through his account. When I pull up the clients account in sab it show three transactions as failed when I only tried once.
If someone can please assist.
[b01200+declined.txt](/uploads/eadfd0ce56365a3c45487633188ec5bc/b01200+declined.txt)
|
process
|
portland unable to process credit card payment in gitlab by kdjstudios on sep submitted by lettice ross helpdesk server internal client site account issue i’m sending this email in regards to trying to process a credit card payment through sab and receiving a declined due to communication error message i have attached a copy of the message i received back after trying to run the credit card however there were other credit cards that i ran this morning with no problems i reached out to the client to have him call his bank to make sure it was not on their end and he called me back stating the bank said everything was fine on their end and that they didn’t show anything trying to come through his account when i pull up the clients account in sab it show three transactions as failed when i only tried once if someone can please assist uploads declined txt
| 1
|
17,531
| 23,342,319,227
|
IssuesEvent
|
2022-08-09 14:56:03
|
apache/arrow-rs
|
https://api.github.com/repos/apache/arrow-rs
|
closed
|
LocalFileSystem::new_with_prefix fails with relative path
|
bug development-process
|
**Describe the bug**
<!--
A clear and concise description of what the bug is.
-->
Calling `LocalFileSystem::new_with_prefix(".")` no longer works as of #2269
**To Reproduce**
<!--
Steps to reproduce the behavior:
-->
**Expected behavior**
<!--
A clear and concise description of what you expected to happen.
-->
**Additional context**
<!--
Add any other context about the problem here.
-->
|
1.0
|
LocalFileSystem::new_with_prefix fails with relative path - **Describe the bug**
<!--
A clear and concise description of what the bug is.
-->
Calling `LocalFileSystem::new_with_prefix(".")` no longer works as of #2269
**To Reproduce**
<!--
Steps to reproduce the behavior:
-->
**Expected behavior**
<!--
A clear and concise description of what you expected to happen.
-->
**Additional context**
<!--
Add any other context about the problem here.
-->
|
process
|
localfilesystem new with prefix fails with relative path describe the bug a clear and concise description of what the bug is calling localfilesystem new with prefix no longer works as of to reproduce steps to reproduce the behavior expected behavior a clear and concise description of what you expected to happen additional context add any other context about the problem here
| 1
|
18,553
| 24,555,431,498
|
IssuesEvent
|
2022-10-12 15:30:57
|
GoogleCloudPlatform/fda-mystudies
|
https://api.github.com/repos/GoogleCloudPlatform/fda-mystudies
|
closed
|
[Android] [Offline indicator] Offline indicator popup error message should get displayed when clicked on 'Sign up' button
|
Bug P1 Android Process: Fixed Process: Tested QA Process: Tested dev
|
**Steps:**
1. Install the app
2. Open the app
3. Turn off the internet/mobile data
4. Click on 'Sign up' button and Verify
**AR:** Participant's are navigated to sign up screen. as soon as they click on 'Sign up' button , Offline indicator popup error message is not getting displayed
**ER:** Participant's should not navigated to sign up screen. as soon as they click on 'Sign up' button , Offline indicator popup error message should get displayed as similar to 'Sign in' button
|
3.0
|
[Android] [Offline indicator] Offline indicator popup error message should get displayed when clicked on 'Sign up' button - **Steps:**
1. Install the app
2. Open the app
3. Turn off the internet/mobile data
4. Click on 'Sign up' button and Verify
**AR:** Participant's are navigated to sign up screen. as soon as they click on 'Sign up' button , Offline indicator popup error message is not getting displayed
**ER:** Participant's should not navigated to sign up screen. as soon as they click on 'Sign up' button , Offline indicator popup error message should get displayed as similar to 'Sign in' button
|
process
|
offline indicator popup error message should get displayed when clicked on sign up button steps install the app open the app turn off the internet mobile data click on sign up button and verify ar participant s are navigated to sign up screen as soon as they click on sign up button offline indicator popup error message is not getting displayed er participant s should not navigated to sign up screen as soon as they click on sign up button offline indicator popup error message should get displayed as similar to sign in button
| 1
|
306,847
| 26,502,058,264
|
IssuesEvent
|
2023-01-18 11:01:54
|
vehicle-lang/vehicle
|
https://api.github.com/repos/vehicle-lang/vehicle
|
closed
|
Rename golden test files so they have the file extension at the end.
|
test-suite
|
By naming them e.g. `X.json.golden` it prevents JSON syntax highlighting from firing. Maybe we should rename them to `X.golden.json`?
|
1.0
|
Rename golden test files so they have the file extension at the end. - By naming them e.g. `X.json.golden` it prevents JSON syntax highlighting from firing. Maybe we should rename them to `X.golden.json`?
|
non_process
|
rename golden test files so they have the file extension at the end by naming them e g x json golden it prevents json syntax highlighting from firing maybe we should rename them to x golden json
| 0
|
11,285
| 14,079,760,057
|
IssuesEvent
|
2020-11-04 15:16:42
|
TIBCOSoftware/genxdm
|
https://api.github.com/repos/TIBCOSoftware/genxdm
|
closed
|
schema ids and keys are probably not working
|
Component-Processors Priority-Low bug
|
```
IdentityKey and friends (IdentityField, etc.) are no longer working. This
probably means that ids and keys don't work in schema.
The problem is that this is noted, by the original developer, as pretty
over-the-top: it allows atomic values as keys, not just strings, which is what
the spec mandates. Yeah, cool. But once atomic values don't live in the schemas
(so that they can be *shared among different bridges* grrr), we fall back to
strings.
Two possible solutions: refactor so that ids and keys are strings, or do
validation so that we can return the typed values, which means passing the type
of the keys/ids down into the abstractions that implement. I'm in favor of
reverting to strings, personally.
```
Original issue reported on code.google.com by `aale...@gmail.com` on 26 Sep 2012 at 4:08
|
1.0
|
schema ids and keys are probably not working - ```
IdentityKey and friends (IdentityField, etc.) are no longer working. This
probably means that ids and keys don't work in schema.
The problem is that this is noted, by the original developer, as pretty
over-the-top: it allows atomic values as keys, not just strings, which is what
the spec mandates. Yeah, cool. But once atomic values don't live in the schemas
(so that they can be *shared among different bridges* grrr), we fall back to
strings.
Two possible solutions: refactor so that ids and keys are strings, or do
validation so that we can return the typed values, which means passing the type
of the keys/ids down into the abstractions that implement. I'm in favor of
reverting to strings, personally.
```
Original issue reported on code.google.com by `aale...@gmail.com` on 26 Sep 2012 at 4:08
|
process
|
schema ids and keys are probably not working identitykey and friends identityfield etc are no longer working this probably means that ids and keys don t work in schema the problem is that this is noted by the original developer as pretty over the top it allows atomic values as keys not just strings which is what the spec mandates yeah cool but once atomic values don t live in the schemas so that they can be shared among different bridges grrr we fall back to strings two possible solutions refactor so that ids and keys are strings or do validation so that we can return the typed values which means passing the type of the keys ids down into the abstractions that implement i m in favor of reverting to strings personally original issue reported on code google com by aale gmail com on sep at
| 1
|
20,246
| 26,863,869,069
|
IssuesEvent
|
2023-02-03 21:10:47
|
bazelbuild/bazel
|
https://api.github.com/repos/bazelbuild/bazel
|
opened
|
Remove dependency on Velocity from //third_party dir
|
P4 type: process team-Starlark-Integration
|
Once the stardoc renderer binary is migrated to the bazelbuild/stardoc repo instead of the bazelbuild/bazel repo, we shouldn't need to have bazel depend on the velocity templating engine. This issue is a reminder to cull that dep. cc @tetromino
|
1.0
|
Remove dependency on Velocity from //third_party dir - Once the stardoc renderer binary is migrated to the bazelbuild/stardoc repo instead of the bazelbuild/bazel repo, we shouldn't need to have bazel depend on the velocity templating engine. This issue is a reminder to cull that dep. cc @tetromino
|
process
|
remove dependency on velocity from third party dir once the stardoc renderer binary is migrated to the bazelbuild stardoc repo instead of the bazelbuild bazel repo we shouldn t need to have bazel depend on the velocity templating engine this issue is a reminder to cull that dep cc tetromino
| 1
|
8,269
| 11,430,023,475
|
IssuesEvent
|
2020-02-04 09:17:43
|
energy-modelling-toolkit/Dispa-SET
|
https://api.github.com/repos/energy-modelling-toolkit/Dispa-SET
|
closed
|
pytest: 4 failed
|
docs preprocessing
|
I have cloned the `master` branch and created an environment from the yml file. I have then started the `pytest`:
```
platform win32 -- Python 3.7.3, pytest-4.4.1, py-1.8.0, pluggy-0.9.0
```
I don't know where to find the total output of pytest, however this is the error that appears multiple times:
```
____________________________________________________________________ test_solve_gams[MILP] ____________________________________________________________________
config = {'AllowCurtailment': 1.0, 'Clustering': 1.0, 'CostHeatSlack': 'H:\\Code\\Dispa-SET\\', 'CostLoadShedding': 'H:\\Code\\Dispa-SET\\', ...}
@pytest.mark.skipif('TRAVIS' in os.environ,
reason='This test is too long for the demo GAMS license version which is currently installed in Travis')
def test_solve_gams(config):
from dispaset.misc.gdx_handler import get_gams_path
> r = ds.solve_GAMS(config['SimulationDirectory'], get_gams_path())
tests\test_solve.py:30:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
dispaset\misc\gdx_handler.py:424: in get_gams_path
tmp = input('Specify the path to GAMS within quotes (e.g. "C:\\\\GAMS\\\\win64\\\\24.3"): ')
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <_pytest.capture.DontReadFromInput object at 0x000000000389BE48>, args = ()
def read(self, *args):
> raise IOError("reading from stdin while output is captured")
E OSError: reading from stdin while output is captured
C:\PGM\ANACONDA\envs\dispaset\lib\site-packages\_pytest\capture.py:693: OSError
-------------------------------------------------------------------- Captured stdout call ---------------------------------------------------------------------
Specify the path to GAMS within quotes (e.g. "C:\\GAMS\\win64\\24.3"):
_______________________________________________________________________ test_build[LP] ________________________________________________________________________
config = {'AllowCurtailment': 1.0, 'Clustering': 1.0, 'CostHeatSlack': 'H:\\Code\\Dispa-SET\\', 'CostLoadShedding': 'H:\\Code\\Dispa-SET\\', ...}
tmpdir = local('C:\\Users\\felicma\\AppData\\Local\\Temp\\1\\pytest-of-felicma\\pytest-0\\test_build_LP_0')
def test_build(config, tmpdir):
# Using temp dir to ensure that each time a new directory is used
config['SimulationDirectory'] = tmpdir
> SimData = ds.build_simulation(config)
tests\test_solve.py:23:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
dispaset\preprocessing\preprocessing.py:651: in build_simulation
write_variables(config['GAMS_folder'], gdx_out, [sets, parameters])
dispaset\misc\gdx_handler.py:178: in write_variables
gams_dir = get_gams_path(gams_dir=gams_dir.encode())
dispaset\misc\gdx_handler.py:424: in get_gams_path
tmp = input('Specify the path to GAMS within quotes (e.g. "C:\\\\GAMS\\\\win64\\\\24.3"): ')
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <_pytest.capture.DontReadFromInput object at 0x000000000389BE48>, args = ()
def read(self, *args):
> raise IOError("reading from stdin while output is captured")
E OSError: reading from stdin while output is captured
```
I think it is due to the fact that it is looking for the GAMS path interactively, is it possible to improve the automatic discovery of the GAMS path without asking the user for an input?
|
1.0
|
pytest: 4 failed - I have cloned the `master` branch and created an environment from the yml file. I have then started the `pytest`:
```
platform win32 -- Python 3.7.3, pytest-4.4.1, py-1.8.0, pluggy-0.9.0
```
I don't know where to find the total output of pytest, however this is the error that appears multiple times:
```
____________________________________________________________________ test_solve_gams[MILP] ____________________________________________________________________
config = {'AllowCurtailment': 1.0, 'Clustering': 1.0, 'CostHeatSlack': 'H:\\Code\\Dispa-SET\\', 'CostLoadShedding': 'H:\\Code\\Dispa-SET\\', ...}
@pytest.mark.skipif('TRAVIS' in os.environ,
reason='This test is too long for the demo GAMS license version which is currently installed in Travis')
def test_solve_gams(config):
from dispaset.misc.gdx_handler import get_gams_path
> r = ds.solve_GAMS(config['SimulationDirectory'], get_gams_path())
tests\test_solve.py:30:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
dispaset\misc\gdx_handler.py:424: in get_gams_path
tmp = input('Specify the path to GAMS within quotes (e.g. "C:\\\\GAMS\\\\win64\\\\24.3"): ')
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <_pytest.capture.DontReadFromInput object at 0x000000000389BE48>, args = ()
def read(self, *args):
> raise IOError("reading from stdin while output is captured")
E OSError: reading from stdin while output is captured
C:\PGM\ANACONDA\envs\dispaset\lib\site-packages\_pytest\capture.py:693: OSError
-------------------------------------------------------------------- Captured stdout call ---------------------------------------------------------------------
Specify the path to GAMS within quotes (e.g. "C:\\GAMS\\win64\\24.3"):
_______________________________________________________________________ test_build[LP] ________________________________________________________________________
config = {'AllowCurtailment': 1.0, 'Clustering': 1.0, 'CostHeatSlack': 'H:\\Code\\Dispa-SET\\', 'CostLoadShedding': 'H:\\Code\\Dispa-SET\\', ...}
tmpdir = local('C:\\Users\\felicma\\AppData\\Local\\Temp\\1\\pytest-of-felicma\\pytest-0\\test_build_LP_0')
def test_build(config, tmpdir):
# Using temp dir to ensure that each time a new directory is used
config['SimulationDirectory'] = tmpdir
> SimData = ds.build_simulation(config)
tests\test_solve.py:23:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
dispaset\preprocessing\preprocessing.py:651: in build_simulation
write_variables(config['GAMS_folder'], gdx_out, [sets, parameters])
dispaset\misc\gdx_handler.py:178: in write_variables
gams_dir = get_gams_path(gams_dir=gams_dir.encode())
dispaset\misc\gdx_handler.py:424: in get_gams_path
tmp = input('Specify the path to GAMS within quotes (e.g. "C:\\\\GAMS\\\\win64\\\\24.3"): ')
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <_pytest.capture.DontReadFromInput object at 0x000000000389BE48>, args = ()
def read(self, *args):
> raise IOError("reading from stdin while output is captured")
E OSError: reading from stdin while output is captured
```
I think it is due to the fact that it is looking for the GAMS path interactively, is it possible to improve the automatic discovery of the GAMS path without asking the user for an input?
|
process
|
pytest failed i have cloned the master branch and created an environment from the yml file i have then started the pytest platform python pytest py pluggy i don t know where to find the total output of pytest however this is the error that appears multiple times test solve gams config allowcurtailment clustering costheatslack h code dispa set costloadshedding h code dispa set pytest mark skipif travis in os environ reason this test is too long for the demo gams license version which is currently installed in travis def test solve gams config from dispaset misc gdx handler import get gams path r ds solve gams config get gams path tests test solve py dispaset misc gdx handler py in get gams path tmp input specify the path to gams within quotes e g c gams self args def read self args raise ioerror reading from stdin while output is captured e oserror reading from stdin while output is captured c pgm anaconda envs dispaset lib site packages pytest capture py oserror captured stdout call specify the path to gams within quotes e g c gams test build config allowcurtailment clustering costheatslack h code dispa set costloadshedding h code dispa set tmpdir local c users felicma appdata local temp pytest of felicma pytest test build lp def test build config tmpdir using temp dir to ensure that each time a new directory is used config tmpdir simdata ds build simulation config tests test solve py dispaset preprocessing preprocessing py in build simulation write variables config gdx out dispaset misc gdx handler py in write variables gams dir get gams path gams dir gams dir encode dispaset misc gdx handler py in get gams path tmp input specify the path to gams within quotes e g c gams self args def read self args raise ioerror reading from stdin while output is captured e oserror reading from stdin while output is captured i think it is due to the fact that it is looking for the gams path interactively is it possible to improve the automatic discovery of the gams path without asking the user for an input
| 1
|
19,431
| 3,770,464,897
|
IssuesEvent
|
2016-03-16 14:41:38
|
baansconsulting/Wolfe
|
https://api.github.com/repos/baansconsulting/Wolfe
|
closed
|
update coverages
|
Testing
|
few issues noticed when trying to update coverages.
1. something is wrong with the total PCT field, it doesn't recognize the sales percentage boxes until you click on it. and once you hit save, it goes back to 0 and gives an error that should be 100 if we are overriding coverage for an backdate

|
1.0
|
update coverages - few issues noticed when trying to update coverages.
1. something is wrong with the total PCT field, it doesn't recognize the sales percentage boxes until you click on it. and once you hit save, it goes back to 0 and gives an error that should be 100 if we are overriding coverage for an backdate

|
non_process
|
update coverages few issues noticed when trying to update coverages something is wrong with the total pct field it doesn t recognize the sales percentage boxes until you click on it and once you hit save it goes back to and gives an error that should be if we are overriding coverage for an backdate
| 0
|
3,635
| 6,668,407,617
|
IssuesEvent
|
2017-10-03 15:41:55
|
Alfresco/alfresco-ng2-components
|
https://api.github.com/repos/Alfresco/alfresco-ng2-components
|
closed
|
Process List in Safari doesn't show date.
|
browser: safari bug comp: activiti-processList
|
1. Create a new process instance that has no user tasks in it and which will therefore complete straight away and give it a name.
2. Go to Completed in Process Filters.
Process List in Safari doesn't show the date next to the process name.
<img width="313" alt="screen shot 2017-01-24 at 12 01 09" src="https://cloud.githubusercontent.com/assets/24432311/22246428/e13bc3e6-e22c-11e6-8401-8d037697b505.png">
**Note**
This issue depends on the outcome of #1461.
|
1.0
|
Process List in Safari doesn't show date. - 1. Create a new process instance that has no user tasks in it and which will therefore complete straight away and give it a name.
2. Go to Completed in Process Filters.
Process List in Safari doesn't show the date next to the process name.
<img width="313" alt="screen shot 2017-01-24 at 12 01 09" src="https://cloud.githubusercontent.com/assets/24432311/22246428/e13bc3e6-e22c-11e6-8401-8d037697b505.png">
**Note**
This issue depends on the outcome of #1461.
|
process
|
process list in safari doesn t show date create a new process instance that has no user tasks in it and which will therefore complete straight away and give it a name go to completed in process filters process list in safari doesn t show the date next to the process name img width alt screen shot at src note this issue depends on the outcome of
| 1
|
61,993
| 3,163,985,233
|
IssuesEvent
|
2015-09-20 19:56:11
|
StarQuestMinecraft/StarQuestPublic
|
https://api.github.com/repos/StarQuestMinecraft/StarQuestPublic
|
closed
|
World reset bug
|
Priority low Under Review
|
I just logged in for the first time in about a week, and my sheep farm that was in claimed territory on Eratoss has disappeared; I assume this was due to the world reset on Tuesday. It had pretty much everything that I owned on the server, and that was quite a lot. Is there any way that the four chunks the sheep tower consisted of could be world-edited back in from the old map? The farm was located around x:1800 z:100
It also contained about 500 sheep. They long to return to their 32x32m segregated pens; do it for the sheeple!
|
1.0
|
World reset bug - I just logged in for the first time in about a week, and my sheep farm that was in claimed territory on Eratoss has disappeared; I assume this was due to the world reset on Tuesday. It had pretty much everything that I owned on the server, and that was quite a lot. Is there any way that the four chunks the sheep tower consisted of could be world-edited back in from the old map? The farm was located around x:1800 z:100
It also contained about 500 sheep. They long to return to their 32x32m segregated pens; do it for the sheeple!
|
non_process
|
world reset bug i just logged in for the first time in about a week and my sheep farm that was in claimed territory on eratoss has disappeared i assume this was due to the world reset on tuesday it had pretty much everything that i owned on the server and that was quite a lot is there any way that the four chunks the sheep tower consisted of could be world edited back in from the old map the farm was located around x z it also contained about sheep they long to return to their segregated pens do it for the sheeple
| 0
|
15,399
| 5,955,137,522
|
IssuesEvent
|
2017-05-28 01:52:03
|
Linuxbrew/legacy-linuxbrew
|
https://api.github.com/repos/Linuxbrew/legacy-linuxbrew
|
closed
|
sshfs issue
|
build-error
|
I'm trying to install sshfs but I thinκ it tries to install wrong dependency (osxfuse)
```
==> Installing dependencies for sshfs: osxfuse, libffi, glib
==> Installing sshfs dependency: osxfuse
==> Cloning https://github.com/osxfuse/osxfuse.git
Updating /home/billp/.cache/Homebrew/osxfuse--git
==> Checking out tag osxfuse-2.7.5
==> ./build.sh -t homebrew -f /opt/home/billp/.linuxbrew/Cellar/osxfuse/2.7.5
-f
/opt/home/billp/.linuxbrew/Cellar/osxfuse/2.7.5
./build.sh: line 2445: sw_vers: command not found
OSXFUSEBuildTool() failed: no supported version of Xcode found.
READ THIS: https://github.com/Homebrew/linuxbrew/blob/master/share/doc/homebrew/Troubleshooting.md#troubleshooting
```
|
1.0
|
sshfs issue - I'm trying to install sshfs but I thinκ it tries to install wrong dependency (osxfuse)
```
==> Installing dependencies for sshfs: osxfuse, libffi, glib
==> Installing sshfs dependency: osxfuse
==> Cloning https://github.com/osxfuse/osxfuse.git
Updating /home/billp/.cache/Homebrew/osxfuse--git
==> Checking out tag osxfuse-2.7.5
==> ./build.sh -t homebrew -f /opt/home/billp/.linuxbrew/Cellar/osxfuse/2.7.5
-f
/opt/home/billp/.linuxbrew/Cellar/osxfuse/2.7.5
./build.sh: line 2445: sw_vers: command not found
OSXFUSEBuildTool() failed: no supported version of Xcode found.
READ THIS: https://github.com/Homebrew/linuxbrew/blob/master/share/doc/homebrew/Troubleshooting.md#troubleshooting
```
|
non_process
|
sshfs issue i m trying to install sshfs but i thinκ it tries to install wrong dependency osxfuse installing dependencies for sshfs osxfuse libffi glib installing sshfs dependency osxfuse cloning updating home billp cache homebrew osxfuse git checking out tag osxfuse build sh t homebrew f opt home billp linuxbrew cellar osxfuse f opt home billp linuxbrew cellar osxfuse build sh line sw vers command not found osxfusebuildtool failed no supported version of xcode found read this
| 0
|
67,673
| 7,057,128,980
|
IssuesEvent
|
2018-01-04 15:25:13
|
tu-missions-computing/home-church-manager
|
https://api.github.com/repos/tu-missions-computing/home-church-manager
|
opened
|
Test Each Feature
|
Testing
|
Go through each feature on the site and do everything possible to try and break it. Document needed changes
|
1.0
|
Test Each Feature - Go through each feature on the site and do everything possible to try and break it. Document needed changes
|
non_process
|
test each feature go through each feature on the site and do everything possible to try and break it document needed changes
| 0
|
51,681
| 13,211,281,226
|
IssuesEvent
|
2020-08-15 22:01:21
|
icecube-trac/tix4
|
https://api.github.com/repos/icecube-trac/tix4
|
opened
|
[icetray] Adding services directly to the context bypasses I3TrayInfo (Trac #826)
|
Incomplete Migration Migrated from Trac combo core defect
|
<details>
<summary><em>Migrated from <a href="https://code.icecube.wisc.edu/projects/icecube/ticket/826">https://code.icecube.wisc.edu/projects/icecube/ticket/826</a>, reported by cweaverand owned by olivas</em></summary>
<p>
```json
{
"status": "closed",
"changetime": "2019-02-13T13:54:32",
"_ts": "1550066072250335",
"description": "With the reworking of the context, it has become possible to add services without the aid of a service factory, which is convenient in many cases. Unfortunately, it also does an end run around the TrayInfo bookkeeping system, since this discovers services and their parameters by iterating over the list of factories contained in the tray. A relevant example is code like this:\n\n{{{\nfrom I3Tray import I3Tray\nfrom icecube import icetray, dataclasses, dataio, phys_services\ntray = I3Tray()\nrandomService = phys_services.I3GSLRandomService(seed=12345)\ntray.context[\"I3RandomService\"] = randomService\ntray.AddModule(\"I3InfiniteSource\")\ntray.AddModule(\"I3Writer\",\"writer\",filename=\"test.i3\")\ntray.Execute(1)\n}}}\n\nThe seed used by the RNG is a very important piece of information which the user may want to inspect after-the-fact, but using code like the above hides it from the TrayInfoService, as can be seen by examining the 'I' frame in the file generated by running this code. \n\nThere are a few difficulties associated with trying to correct this: First, arbitrary objects may be constructed and placed in the context, which may not conform to any standard interface to extract parameter values. Second, the TrayInfo may be queried at times after the service factories have run, so it would have to somehow avoid double counting the services installed by the factories, but these services' names may be unrelated to the names of their corresponding factories. ",
"reporter": "cweaver",
"cc": "",
"resolution": "insufficient resources",
"time": "2014-12-04T22:37:06",
"component": "combo core",
"summary": "[icetray] Adding services directly to the context bypasses I3TrayInfo",
"priority": "normal",
"keywords": "",
"milestone": "Long-Term Future",
"owner": "olivas",
"type": "defect"
}
```
</p>
</details>
|
1.0
|
[icetray] Adding services directly to the context bypasses I3TrayInfo (Trac #826) - <details>
<summary><em>Migrated from <a href="https://code.icecube.wisc.edu/projects/icecube/ticket/826">https://code.icecube.wisc.edu/projects/icecube/ticket/826</a>, reported by cweaverand owned by olivas</em></summary>
<p>
```json
{
"status": "closed",
"changetime": "2019-02-13T13:54:32",
"_ts": "1550066072250335",
"description": "With the reworking of the context, it has become possible to add services without the aid of a service factory, which is convenient in many cases. Unfortunately, it also does an end run around the TrayInfo bookkeeping system, since this discovers services and their parameters by iterating over the list of factories contained in the tray. A relevant example is code like this:\n\n{{{\nfrom I3Tray import I3Tray\nfrom icecube import icetray, dataclasses, dataio, phys_services\ntray = I3Tray()\nrandomService = phys_services.I3GSLRandomService(seed=12345)\ntray.context[\"I3RandomService\"] = randomService\ntray.AddModule(\"I3InfiniteSource\")\ntray.AddModule(\"I3Writer\",\"writer\",filename=\"test.i3\")\ntray.Execute(1)\n}}}\n\nThe seed used by the RNG is a very important piece of information which the user may want to inspect after-the-fact, but using code like the above hides it from the TrayInfoService, as can be seen by examining the 'I' frame in the file generated by running this code. \n\nThere are a few difficulties associated with trying to correct this: First, arbitrary objects may be constructed and placed in the context, which may not conform to any standard interface to extract parameter values. Second, the TrayInfo may be queried at times after the service factories have run, so it would have to somehow avoid double counting the services installed by the factories, but these services' names may be unrelated to the names of their corresponding factories. ",
"reporter": "cweaver",
"cc": "",
"resolution": "insufficient resources",
"time": "2014-12-04T22:37:06",
"component": "combo core",
"summary": "[icetray] Adding services directly to the context bypasses I3TrayInfo",
"priority": "normal",
"keywords": "",
"milestone": "Long-Term Future",
"owner": "olivas",
"type": "defect"
}
```
</p>
</details>
|
non_process
|
adding services directly to the context bypasses trac migrated from json status closed changetime ts description with the reworking of the context it has become possible to add services without the aid of a service factory which is convenient in many cases unfortunately it also does an end run around the trayinfo bookkeeping system since this discovers services and their parameters by iterating over the list of factories contained in the tray a relevant example is code like this n n nfrom import nfrom icecube import icetray dataclasses dataio phys services ntray nrandomservice phys services seed ntray context randomservice ntray addmodule ntray addmodule writer filename test ntray execute n n nthe seed used by the rng is a very important piece of information which the user may want to inspect after the fact but using code like the above hides it from the trayinfoservice as can be seen by examining the i frame in the file generated by running this code n nthere are a few difficulties associated with trying to correct this first arbitrary objects may be constructed and placed in the context which may not conform to any standard interface to extract parameter values second the trayinfo may be queried at times after the service factories have run so it would have to somehow avoid double counting the services installed by the factories but these services names may be unrelated to the names of their corresponding factories reporter cweaver cc resolution insufficient resources time component combo core summary adding services directly to the context bypasses priority normal keywords milestone long term future owner olivas type defect
| 0
|
22,213
| 30,763,203,604
|
IssuesEvent
|
2023-07-30 00:47:45
|
danrleypereira/verzel-pleno-prova
|
https://api.github.com/repos/danrleypereira/verzel-pleno-prova
|
opened
|
Autenticar usuário - FrontEnd
|
feature Processo Seletivo
|
Como parte do nosso esforço contínuo para melhorar a segurança e a experiência do usuário, precisamos implementar a autenticação de usuário no frontend da aplicação. Isso inclui:
- [ ] Implementar o formulário de registro de usuário (precisamos coletar: nome de usuário, email e senha)
- [ ] Implementar a lógica de autenticação no frontend que irá comunicar-se com o backend
- [ ] Implementar a lógica para salvar o JWT token em um reducer
- [ ] Implementar um método para incluir o token JWT nas futuras solicitações que necessitem de autenticação
- [ ] Proteger rotas no frontend com autenticação (apenas usuários autenticados devem ter acesso)
- [ ] Implementar lógica para lidar com tokens JWT expirados
- [ ] Testar todos os aspectos do fluxo de autenticação
Nota: É importante garantir que todas as informações do usuário sejam transmitidas de forma segura.
|
1.0
|
Autenticar usuário - FrontEnd - Como parte do nosso esforço contínuo para melhorar a segurança e a experiência do usuário, precisamos implementar a autenticação de usuário no frontend da aplicação. Isso inclui:
- [ ] Implementar o formulário de registro de usuário (precisamos coletar: nome de usuário, email e senha)
- [ ] Implementar a lógica de autenticação no frontend que irá comunicar-se com o backend
- [ ] Implementar a lógica para salvar o JWT token em um reducer
- [ ] Implementar um método para incluir o token JWT nas futuras solicitações que necessitem de autenticação
- [ ] Proteger rotas no frontend com autenticação (apenas usuários autenticados devem ter acesso)
- [ ] Implementar lógica para lidar com tokens JWT expirados
- [ ] Testar todos os aspectos do fluxo de autenticação
Nota: É importante garantir que todas as informações do usuário sejam transmitidas de forma segura.
|
process
|
autenticar usuário frontend como parte do nosso esforço contínuo para melhorar a segurança e a experiência do usuário precisamos implementar a autenticação de usuário no frontend da aplicação isso inclui implementar o formulário de registro de usuário precisamos coletar nome de usuário email e senha implementar a lógica de autenticação no frontend que irá comunicar se com o backend implementar a lógica para salvar o jwt token em um reducer implementar um método para incluir o token jwt nas futuras solicitações que necessitem de autenticação proteger rotas no frontend com autenticação apenas usuários autenticados devem ter acesso implementar lógica para lidar com tokens jwt expirados testar todos os aspectos do fluxo de autenticação nota é importante garantir que todas as informações do usuário sejam transmitidas de forma segura
| 1
|
271,190
| 29,351,403,240
|
IssuesEvent
|
2023-05-27 01:05:11
|
snykiotcubedev/arangodb-3.7.6
|
https://api.github.com/repos/snykiotcubedev/arangodb-3.7.6
|
reopened
|
CVE-2020-7598 (Medium) detected in minimist-1.2.0.tgz, minimist-0.0.8.tgz
|
Mend: dependency security vulnerability
|
## CVE-2020-7598 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>minimist-1.2.0.tgz</b>, <b>minimist-0.0.8.tgz</b></p></summary>
<p>
<details><summary><b>minimist-1.2.0.tgz</b></p></summary>
<p>parse argument options</p>
<p>Library home page: <a href="https://registry.npmjs.org/minimist/-/minimist-1.2.0.tgz">https://registry.npmjs.org/minimist/-/minimist-1.2.0.tgz</a></p>
<p>Path to dependency file: /3rdParty/V8/v7.9.317/tools/turbolizer/package.json</p>
<p>Path to vulnerable library: /3rdParty/V8/v7.9.317/tools/turbolizer/node_modules/minimist/package.json</p>
<p>
Dependency Hierarchy:
- ts-mocha-2.0.0.tgz (Root Library)
- ts-node-7.0.0.tgz
- :x: **minimist-1.2.0.tgz** (Vulnerable Library)
</details>
<details><summary><b>minimist-0.0.8.tgz</b></p></summary>
<p>parse argument options</p>
<p>Library home page: <a href="https://registry.npmjs.org/minimist/-/minimist-0.0.8.tgz">https://registry.npmjs.org/minimist/-/minimist-0.0.8.tgz</a></p>
<p>Path to dependency file: /3rdParty/V8/v7.9.317/tools/turbolizer/package.json</p>
<p>Path to vulnerable library: /3rdParty/V8/v7.9.317/tools/turbolizer/node_modules/mkdirp/node_modules/minimist/package.json,/js/node/node_modules/eslint/node_modules/minimist/package.json,/js/node/node_modules/mocha/node_modules/minimist/package.json</p>
<p>
Dependency Hierarchy:
- mocha-6.1.3.tgz (Root Library)
- mkdirp-0.5.1.tgz
- :x: **minimist-0.0.8.tgz** (Vulnerable Library)
</details>
<p>Found in HEAD commit: <a href="https://github.com/snykiotcubedev/arangodb-3.7.6/commit/fce8f85f1c2f070c8e6a8e76d17210a2117d3833">fce8f85f1c2f070c8e6a8e76d17210a2117d3833</a></p>
<p>Found in base branch: <b>main</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
minimist before 1.2.2 could be tricked into adding or modifying properties of Object.prototype using a "constructor" or "__proto__" payload.
<p>Publish Date: 2020-03-11
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2020-7598>CVE-2020-7598</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.6</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: High
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: Low
- Integrity Impact: Low
- Availability Impact: Low
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Release Date: 2020-03-11</p>
<p>Fix Resolution: minimist - 0.2.1,1.2.3</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
True
|
CVE-2020-7598 (Medium) detected in minimist-1.2.0.tgz, minimist-0.0.8.tgz - ## CVE-2020-7598 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>minimist-1.2.0.tgz</b>, <b>minimist-0.0.8.tgz</b></p></summary>
<p>
<details><summary><b>minimist-1.2.0.tgz</b></p></summary>
<p>parse argument options</p>
<p>Library home page: <a href="https://registry.npmjs.org/minimist/-/minimist-1.2.0.tgz">https://registry.npmjs.org/minimist/-/minimist-1.2.0.tgz</a></p>
<p>Path to dependency file: /3rdParty/V8/v7.9.317/tools/turbolizer/package.json</p>
<p>Path to vulnerable library: /3rdParty/V8/v7.9.317/tools/turbolizer/node_modules/minimist/package.json</p>
<p>
Dependency Hierarchy:
- ts-mocha-2.0.0.tgz (Root Library)
- ts-node-7.0.0.tgz
- :x: **minimist-1.2.0.tgz** (Vulnerable Library)
</details>
<details><summary><b>minimist-0.0.8.tgz</b></p></summary>
<p>parse argument options</p>
<p>Library home page: <a href="https://registry.npmjs.org/minimist/-/minimist-0.0.8.tgz">https://registry.npmjs.org/minimist/-/minimist-0.0.8.tgz</a></p>
<p>Path to dependency file: /3rdParty/V8/v7.9.317/tools/turbolizer/package.json</p>
<p>Path to vulnerable library: /3rdParty/V8/v7.9.317/tools/turbolizer/node_modules/mkdirp/node_modules/minimist/package.json,/js/node/node_modules/eslint/node_modules/minimist/package.json,/js/node/node_modules/mocha/node_modules/minimist/package.json</p>
<p>
Dependency Hierarchy:
- mocha-6.1.3.tgz (Root Library)
- mkdirp-0.5.1.tgz
- :x: **minimist-0.0.8.tgz** (Vulnerable Library)
</details>
<p>Found in HEAD commit: <a href="https://github.com/snykiotcubedev/arangodb-3.7.6/commit/fce8f85f1c2f070c8e6a8e76d17210a2117d3833">fce8f85f1c2f070c8e6a8e76d17210a2117d3833</a></p>
<p>Found in base branch: <b>main</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
minimist before 1.2.2 could be tricked into adding or modifying properties of Object.prototype using a "constructor" or "__proto__" payload.
<p>Publish Date: 2020-03-11
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2020-7598>CVE-2020-7598</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.6</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: High
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: Low
- Integrity Impact: Low
- Availability Impact: Low
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Release Date: 2020-03-11</p>
<p>Fix Resolution: minimist - 0.2.1,1.2.3</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
non_process
|
cve medium detected in minimist tgz minimist tgz cve medium severity vulnerability vulnerable libraries minimist tgz minimist tgz minimist tgz parse argument options library home page a href path to dependency file tools turbolizer package json path to vulnerable library tools turbolizer node modules minimist package json dependency hierarchy ts mocha tgz root library ts node tgz x minimist tgz vulnerable library minimist tgz parse argument options library home page a href path to dependency file tools turbolizer package json path to vulnerable library tools turbolizer node modules mkdirp node modules minimist package json js node node modules eslint node modules minimist package json js node node modules mocha node modules minimist package json dependency hierarchy mocha tgz root library mkdirp tgz x minimist tgz vulnerable library found in head commit a href found in base branch main vulnerability details minimist before could be tricked into adding or modifying properties of object prototype using a constructor or proto payload publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity high privileges required none user interaction none scope unchanged impact metrics confidentiality impact low integrity impact low availability impact low for more information on scores click a href suggested fix type upgrade version release date fix resolution minimist step up your open source security game with mend
| 0
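The CVE record above describes minimist being tricked into modifying `Object.prototype` via `__proto__`/`constructor` keys. A Python analogue of the same *shape* of bug — untrusted keys naively deep-merged into shared state — illustrates why it matters; this is an illustrative sketch, not minimist's actual code:

```python
def deep_merge(dst, src):
    """Naively merge src into dst, recursing into nested dicts.

    Nothing filters the keys and `dst` is mutated in place, so merging
    untrusted input into a shared dict lets the caller overwrite global
    defaults -- the same class of flaw as the prototype pollution in
    CVE-2020-7598, transposed from JS prototypes to a shared Python dict.
    """
    for key, value in src.items():
        if isinstance(value, dict) and isinstance(dst.get(key), dict):
            deep_merge(dst[key], value)
        else:
            dst[key] = value
    return dst

# Module-level defaults shared by every parse -- the "prototype" here.
SHARED_DEFAULTS = {"admin": False, "limits": {"rate": 10}}

def parse_args(untrusted):
    # Bug: merging directly into the shared dict pollutes all later callers.
    return deep_merge(SHARED_DEFAULTS, untrusted)
```

The fix mirrors minimist's: either copy the defaults per call or reject/blocklist dangerous keys before merging.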
|
41,080
| 16,618,195,259
|
IssuesEvent
|
2021-06-02 19:43:20
|
cityofaustin/atd-data-tech
|
https://api.github.com/repos/cityofaustin/atd-data-tech
|
closed
|
Update Journal Voucher fields in AMD Data Tracker
|
Product: AMD Data Tracker Project: Warehouse Inventory Service: Apps Type: Data Workgroup: AMD Workgroup: Finance
|
- [ ] Import spreadsheet information to capture Journal Voucher Dates and Numbers
- see [spreadsheet](https://app.zenhub.com/files/140626918/b39ed7cf-1cb0-4a3d-9f95-2e5e80f61f75/download)
### Steps
- [x] remove all columns from the spreadsheet except `TXN ID`, `Journal Voucher Date`, and `Journal Voucher ID`
- [x] import the file into the `inventory_transactions` object. match on `TXN ID` and map the other columns
- [x] the JV status field will update automatically from a field* rule
- [x] confirm that the JV status field is updated
|
1.0
|
Update Journal Voucher fields in AMD Data Tracker - - [ ] Import spreadsheet information to capture Journal Voucher Dates and Numbers
- see [spreadsheet](https://app.zenhub.com/files/140626918/b39ed7cf-1cb0-4a3d-9f95-2e5e80f61f75/download)
### Steps
- [x] remove all columns from the spreadsheet except `TXN ID`, `Journal Voucher Date`, and `Journal Voucher ID`
- [x] import the file into the `inventory_transactions` object. match on `TXN ID` and map the other columns
- [x] the JV status field will update automatically from a field* rule
- [x] confirm that the JV status field is updated
|
non_process
|
update journal voucher fields in amd data tracker import spreadsheet information to capture journal voucher dates and numbers see steps remove all columns from the spreadsheet except txn id journal voucher date and journal voucher id import the file into the inventory transactions object match on txn id and map the other columns the jv status field will update automatically from a field rule confirm that the jv status field is updated
| 0
|
145,098
| 11,648,610,108
|
IssuesEvent
|
2020-03-01 21:42:45
|
filecoin-project/go-filecoin
|
https://api.github.com/repos/filecoin-project/go-filecoin
|
closed
|
TestPaymentChannelLs/Works_with_specified_payer is flaky
|
A-tests C-transient-test-failure
|
### Description
Here is the log output from travis:
```
--- FAIL: TestPaymentChannelLs/Works_with_specified_payer (3.81s)
require.go:794:
Error Trace: payment_channel_daemon_test.go:583
payment_channel_daemon_test.go:109
Error: Received unexpected error:
filecoin command: [go-filecoin paych create t1o4t7xzfykkddl5cyn4rxx2h2iwd4qploy3scblq 1000 20 --from t1tqc4pjlf4q6womlxohc3exl2na23yh7gcbmmskq --gas-price 1 --gas-limit 300 --enc=json], exited with non-zero exitcode: 1
Test: TestPaymentChannelLs/Works_with_specified_payer
```
This sometimes fails and sometimes passes locally
### Acceptance criteria
### Risks + pitfalls
### Where to begin
|
2.0
|
TestPaymentChannelLs/Works_with_specified_payer is flaky - ### Description
Here is the log output from travis:
```
--- FAIL: TestPaymentChannelLs/Works_with_specified_payer (3.81s)
require.go:794:
Error Trace: payment_channel_daemon_test.go:583
payment_channel_daemon_test.go:109
Error: Received unexpected error:
filecoin command: [go-filecoin paych create t1o4t7xzfykkddl5cyn4rxx2h2iwd4qploy3scblq 1000 20 --from t1tqc4pjlf4q6womlxohc3exl2na23yh7gcbmmskq --gas-price 1 --gas-limit 300 --enc=json], exited with non-zero exitcode: 1
Test: TestPaymentChannelLs/Works_with_specified_payer
```
This sometimes fails and sometimes passes locally
### Acceptance criteria
### Risks + pitfalls
### Where to begin
|
non_process
|
testpaymentchannells works with specified payer is flaky description here is the log output from travis fail testpaymentchannells works with specified payer require go error trace payment channel daemon test go payment channel daemon test go error received unexpected error filecoin command exited with non zero exitcode test testpaymentchannells works with specified payer this sometimes fails and sometimes passes locally acceptance criteria risks pitfalls where to begin
| 0
|
10,165
| 13,044,162,683
|
IssuesEvent
|
2020-07-29 03:47:35
|
tikv/tikv
|
https://api.github.com/repos/tikv/tikv
|
closed
|
UCP: Migrate scalar function `AesDecrypt` from TiDB
|
challenge-program-2 component/coprocessor difficulty/easy sig/coprocessor
|
## Description
Port the scalar function `AesDecrypt` from TiDB to coprocessor.
## Score
* 50
## Mentor(s)
* @mapleFU
## Recommended Skills
* Rust programming
## Learning Materials
Already implemented expressions ported from TiDB
- https://github.com/tikv/tikv/tree/master/components/tidb_query/src/rpn_expr)
- https://github.com/tikv/tikv/tree/master/components/tidb_query/src/expr)
|
2.0
|
UCP: Migrate scalar function `AesDecrypt` from TiDB -
## Description
Port the scalar function `AesDecrypt` from TiDB to coprocessor.
## Score
* 50
## Mentor(s)
* @mapleFU
## Recommended Skills
* Rust programming
## Learning Materials
Already implemented expressions ported from TiDB
- https://github.com/tikv/tikv/tree/master/components/tidb_query/src/rpn_expr)
- https://github.com/tikv/tikv/tree/master/components/tidb_query/src/expr)
|
process
|
ucp migrate scalar function aesdecrypt from tidb description port the scalar function aesdecrypt from tidb to coprocessor score mentor s maplefu recommended skills rust programming learning materials already implemented expressions ported from tidb
| 1
|
18,273
| 24,350,965,408
|
IssuesEvent
|
2022-10-02 23:22:36
|
MasterPlayer/adxl345-sv
|
https://api.github.com/repos/MasterPlayer/adxl345-sv
|
closed
|
Add mechanism for output rules
|
enhancement software process
|
Output rules may be next :
1) X, Y, Z as int16 values, readed directly from device
2) X, Y, Z as float value, msmt in gravity values
3) roll & pitch values
add this functional for axi_adxl structure and realize changing procedure on API
|
1.0
|
Add mechanism for output rules - Output rules may be next :
1) X, Y, Z as int16 values, readed directly from device
2) X, Y, Z as float value, msmt in gravity values
3) roll & pitch values
add this functional for axi_adxl structure and realize changing procedure on API
|
process
|
add mechanism for output rules output rules may be next x y z as values readed directly from device x y z as float value msmt in gravity values roll pitch values add this functional for axi adxl structure and realize changing procedure on api
| 1
|
690,755
| 23,671,239,371
|
IssuesEvent
|
2022-08-27 11:41:32
|
fao89/pulp-operator
|
https://api.github.com/repos/fao89/pulp-operator
|
closed
|
As a user, I would like to be able to filter all publications for a given repository
|
Status: NEW Tracker: Story Priority: Normal Groomed: 1
|
Author: wibbit (wibbit)
Redmine Issue: 7036, https://pulp.plan.io/issues/7036
---
As a user, I would like to be able to filter all publications for a given repository.
|
1.0
|
As a user, I would like to be able to filter all publications for a given repository - Author: wibbit (wibbit)
Redmine Issue: 7036, https://pulp.plan.io/issues/7036
---
As a user, I would like to be able to filter all publications for a given repository.
|
non_process
|
as a user i would like to be able to filter all publications for a given repository author wibbit wibbit redmine issue as a user i would like to be able to filter all publications for a given repository
| 0
|
2,488
| 5,266,020,819
|
IssuesEvent
|
2017-02-04 07:32:01
|
mitchellh/packer
|
https://api.github.com/repos/mitchellh/packer
|
closed
|
enhancement: Array of tags for `docker-tag` post processor.
|
easy enhancement post-processor/docker
|
Sorry if this is covered elsewhere, but I wasn't able to find it. The way we are building images we would like to apply more than one tag. It seems applying the `docker-tag` tag twice yields this error:
```
build 'docker' errored: 1 error(s) occurred:
* Post-processor failed: Unknown artifact type: packer.post-processor.docker-tag
Can only tag from Docker builder artifacts.
```
We have the following two use cases:
1) Being able to apply a version number as well as `latest`
2) When pushing to multiple registries (internal in different locations), it requires that the image be tagged with the internal registry name. In this case we need to change the hostname portion of the tag.
The end result for two registries is 4 distinct tags. It would be awesome if we could do this natively with the post-processor.
|
1.0
|
enhancement: Array of tags for `docker-tag` post processor. - Sorry if this is covered elsewhere, but I wasn't able to find it. The way we are building images we would like to apply more than one tag. It seems applying the `docker-tag` tag twice yields this error:
```
build 'docker' errored: 1 error(s) occurred:
* Post-processor failed: Unknown artifact type: packer.post-processor.docker-tag
Can only tag from Docker builder artifacts.
```
We have the following two use cases:
1) Being able to apply a version number as well as `latest`
2) When pushing to multiple registries (internal in different locations), it requires that the image be tagged with the internal registry name. In this case we need to change the hostname portion of the tag.
The end result for two registries is 4 distinct tags. It would be awesome if we could do this natively with the post-processor.
|
process
|
enhancement array of tags for docker tag post processor sorry if this is covered elsewhere but i wasn t able to find it the way we are building images we would like to apply more than one tag it seems applying the docker tag tag twice yields this error build docker errored error s occurred post processor failed unknown artifact type packer post processor docker tag can only tag from docker builder artifacts we have the following two use cases being able to apply a version number as well as latest when pushing to multiple registries internal in different locations it requires that the image be tagged with the internal registry name in this case we need to change the hostname portion of the tag the end result for two registries is distinct tags it would be awesome if we could do this natively with the post processor
| 1
|
19,519
| 25,829,688,124
|
IssuesEvent
|
2022-12-12 15:17:16
|
microsoft/vscode
|
https://api.github.com/repos/microsoft/vscode
|
closed
|
Add new environment variable to VSCode for Multi-Root workspace
|
feature-request *out-of-scope terminal-process
|
<!-- ⚠️⚠️ Do Not Delete This! feature_request_template ⚠️⚠️ -->
<!-- Please read our Rules of Conduct: https://opensource.microsoft.com/codeofconduct/ -->
<!-- Please search existing issues to avoid creating duplicates. -->
<!-- Describe the feature you'd like. -->
I'd like to see new Environment variable for Multi-Root workspace
Example, <b>`workspaceFolderCurrentTerminal`, `workspaceFolder:Terminal`</b> or something like this
## Why
I am trying to add `node_modules/.bin` to new integrated terminal so i can access CLI tools such as `eslint, prettier, etc`
Example
```json
{
"terminal.integrated.env.osx": {
"PATH": "${workspaceFolder}/node_modules/.bin"
}
}
```
But only includes first workspace path, not all.
From https://code.visualstudio.com/docs/editor/variables-reference i'm tried these of variables
- `fileWorkspaceFolder` (not works at all)
- `workspaceFolder` (includes only first workspace path)
- `relativeFileDirname` (not works at all)
## How to use?
If this feature will be implemented, for many of us will be easier to project dev-dependencies tools such as `eslint`, `prettier`, etc.
Steps to use:
- Open workspaces
- Open new terminal
- Choose workspace
- Use terminal as excepted and with bonus of CLI tools included in project (dev-)dependencies
|
1.0
|
Add new environment variable to VSCode for Multi-Root workspace - <!-- ⚠️⚠️ Do Not Delete This! feature_request_template ⚠️⚠️ -->
<!-- Please read our Rules of Conduct: https://opensource.microsoft.com/codeofconduct/ -->
<!-- Please search existing issues to avoid creating duplicates. -->
<!-- Describe the feature you'd like. -->
I'd like to see new Environment variable for Multi-Root workspace
Example, <b>`workspaceFolderCurrentTerminal`, `workspaceFolder:Terminal`</b> or something like this
## Why
I am trying to add `node_modules/.bin` to new integrated terminal so i can access CLI tools such as `eslint, prettier, etc`
Example
```json
{
"terminal.integrated.env.osx": {
"PATH": "${workspaceFolder}/node_modules/.bin"
}
}
```
But only includes first workspace path, not all.
From https://code.visualstudio.com/docs/editor/variables-reference i'm tried these of variables
- `fileWorkspaceFolder` (not works at all)
- `workspaceFolder` (includes only first workspace path)
- `relativeFileDirname` (not works at all)
## How to use?
If this feature will be implemented, for many of us will be easier to project dev-dependencies tools such as `eslint`, `prettier`, etc.
Steps to use:
- Open workspaces
- Open new terminal
- Choose workspace
- Use terminal as excepted and with bonus of CLI tools included in project (dev-)dependencies
|
process
|
add new environment variable to vscode for multi root workspace i d like to see new environment variable for multi root workspace example workspacefoldercurrentterminal workspacefolder terminal or something like this why i am trying to add node modules bin to new integrated terminal so i can access cli tools such as eslint prettier etc example json terminal integrated env osx path workspacefolder node modules bin but only includes first workspace path not all from i m tried these of variables fileworkspacefolder not works at all workspacefolder includes only first workspace path relativefiledirname not works at all how to use if this feature will be implemented for many of us will be easier to project dev dependencies tools such as eslint prettier etc steps to use open workspaces open new terminal choose workspace use terminal as excepted and with bonus of cli tools included in project dev dependencies
| 1
|
25,016
| 7,611,801,880
|
IssuesEvent
|
2018-05-01 15:15:01
|
habitat-sh/habitat
|
https://api.github.com/repos/habitat-sh/habitat
|
opened
|
When uploading dependencies, honor the target of the root package, not the platform
|
A-builder A-cli C-bug E-easy L-rust V-bldr V-devx
|
This issue has popped up in a few guises in the past, but it still exists.
It can be seen when you try to upload a Linux artifact from a MacOS workstation (really, from any workstation that doesn't match the platform of the artifact to be uploaded). This frequently happens when users enter a Linux Studio on their MacOS workstation to build a Linux package, then try to upload that package from _outside_ the Studio.
When the `hab pkg upload` command starts up, it creates a Builder API client, whose pre-configured User-Agent header [includes architecture information _of the platform on which `hab` is running_ ](https://github.com/habitat-sh/core/blob/60720b714769b1aa6c0a6f4fa8e6d022efa71102/components/http-client/src/api_client.rs#L297-L308). This architecture information is used by the Builder API to determine which platform for which to resolve packages for. This is often exactly what you want. After all, when your Supervisors are asking for new versions of `myorigin/myservice`, they need to ensure they're getting it for the right platform. You're going to have a bad time if you're a Linux Supervisor trying to run a Windows package.
This starts to fall apart in this cross-platform upload scenario though, and in a slightly subtle way. Say you've built a Linux package locally; call it `myorigin-myservice-1.0.0-20180501103724-x86_65-linux.hart`. It will have depended on any number of Linux packages; let's say one of them is `core/cacerts/2017.09.20/20171014212239`. If we're uploading our `myservice` hart file from MacOS, we'll first see that we need to upload this `core/cacerts/2017.09.20/20171014212239` package if Builder doesn't already have it. Intuitively, it must; otherwise how would we have gotten it in the first place? However, because [the target platform plays into the Builder API's resolution logic](https://github.com/habitat-sh/builder/blob/d942bee4505c4ae49293ebe4516c089c492bcdb9/components/builder-depot/src/server.rs#L1814-L1819), it will report that it does not have this artifact. This initially sounds broken, but it isn't; the package identifier _does not_ include target platform, and it is theoretically possible (though rather unlikely) for there to be, say, a Linux release and a Windows release of the same software with the exact same fully-qualified package identifier.
Thus, `hab pkg upload` will dutifully look for a `core-cacerts-2017.09.20-20171014212239-x86_64-darwin.hart` in the same directory as our `myorigin-myservice-1.0.0-20180501103724-x86_65-linux.hart` package (under the assumption that this dependency is something you're developing alongside your `myservice` software, and will thus be present in the same `results` directory). It won't find it, and the entire upload process will stop.
The workaround solution would be to always upload packages from the platform for which they are built. On MacOS, this can be done from within the Studio, or from inside a Linux VM. For completeness, collecting all the Linux dependencies of the package in the same directory and trying to upload _is not_ a workaround, because a) it involves a lot more manual work on the user's part, b) results in a lot of extra network traffic as dependencies are uploaded to Builder, only for Builder to recognize that it already has them, and (most importantly!) c) only works for Habitat core team members with write access to the `core` origin, since everything ultimately depends on `core`!
The true fix will require a bit of refactoring of the Builder API Client to allow manual overriding of the User-Agent header to use the target platform _of the package being uploaded_, rather than the platform from which it is being uploaded. Revisiting the decision to encode this information as a header as opposed to a request parameter may also be worthwhile.
Related History:
* https://github.com/habitat-sh/habitat/issues/4097
* https://github.com/habitat-sh/habitat/pull/4194
* https://github.com/habitat-sh/habitat/pull/4978
* https://github.com/habitat-sh/builder/issues/258
|
1.0
|
When uploading dependencies, honor the target of the root package, not the platform - This issue has popped up in a few guises in the past, but it still exists.
It can be seen when you try to upload a Linux artifact from a MacOS workstation (really, from any workstation that doesn't match the platform of the artifact to be uploaded). This frequently happens when users enter a Linux Studio on their MacOS workstation to build a Linux package, then try to upload that package from _outside_ the Studio.
When the `hab pkg upload` command starts up, it creates a Builder API client, whose pre-configured User-Agent header [includes architecture information _of the platform on which `hab` is running_ ](https://github.com/habitat-sh/core/blob/60720b714769b1aa6c0a6f4fa8e6d022efa71102/components/http-client/src/api_client.rs#L297-L308). This architecture information is used by the Builder API to determine which platform for which to resolve packages for. This is often exactly what you want. After all, when your Supervisors are asking for new versions of `myorigin/myservice`, they need to ensure they're getting it for the right platform. You're going to have a bad time if you're a Linux Supervisor trying to run a Windows package.
This starts to fall apart in this cross-platform upload scenario though, and in a slightly subtle way. Say you've built a Linux package locally; call it `myorigin-myservice-1.0.0-20180501103724-x86_65-linux.hart`. It will have depended on any number of Linux packages; let's say one of them is `core/cacerts/2017.09.20/20171014212239`. If we're uploading our `myservice` hart file from MacOS, we'll first see that we need to upload this `core/cacerts/2017.09.20/20171014212239` package if Builder doesn't already have it. Intuitively, it must; otherwise how would we have gotten it in the first place? However, because [the target platform plays into the Builder API's resolution logic](https://github.com/habitat-sh/builder/blob/d942bee4505c4ae49293ebe4516c089c492bcdb9/components/builder-depot/src/server.rs#L1814-L1819), it will report that it does not have this artifact. This initially sounds broken, but it isn't; the package identifier _does not_ include target platform, and it is theoretically possible (though rather unlikely) for there to be, say, a Linux release and a Windows release of the same software with the exact same fully-qualified package identifier.
Thus, `hab pkg upload` will dutifully look for a `core-cacerts-2017.09.20-20171014212239-x86_64-darwin.hart` in the same directory as our `myorigin-myservice-1.0.0-20180501103724-x86_65-linux.hart` package (under the assumption that this dependency is something you're developing alongside your `myservice` software, and will thus be present in the same `results` directory). It won't find it, and the entire upload process will stop.
The workaround solution would be to always upload packages from the platform for which they are built. On MacOS, this can be done from within the Studio, or from inside a Linux VM. For completeness, collecting all the Linux dependencies of the package in the same directory and trying to upload _is not_ a workaround, because a) it involves a lot more manual work on the user's part, b) results in a lot of extra network traffic as dependencies are uploaded to Builder, only for Builder to recognize that it already has them, and (most importantly!) c) only works for Habitat core team members with write access to the `core` origin, since everything ultimately depends on `core`!
The true fix will require a bit of refactoring of the Builder API Client to allow manual overriding of the User-Agent header to use the target platform _of the package being uploaded_, rather than the platform from which it is being uploaded. Revisiting the decision to encode this information as a header as opposed to a request parameter may also be worthwhile.
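To make the idea concrete, the upload client could derive the target from the artifact's own filename instead of from the host it runs on. Here is a minimal, hypothetical sketch of that derivation; the real hart-name parsing lives in the Habitat crates and may differ in detail:

```python
def target_from_hart(filename: str) -> str:
    """Derive the package target (e.g. "x86_64-linux") from a .hart
    filename of the form origin-name-version-release-arch-os.hart.

    Hypothetical illustration of the proposed fix: use the target
    encoded in the artifact being uploaded, not the uploader's platform.
    """
    stem = filename[:-len(".hart")] if filename.endswith(".hart") else filename
    parts = stem.split("-")
    if len(parts) < 6:
        raise ValueError(f"not a fully-qualified hart name: {filename}")
    # The last two dash-separated fields are the architecture and the OS.
    return "-".join(parts[-2:])
```

A client patched along these lines would send `x86_64-linux` for a Linux hart even when invoked from a MacOS workstation, so Builder's dependency resolution would match the artifact's actual platform.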
Related History:
* https://github.com/habitat-sh/habitat/issues/4097
* https://github.com/habitat-sh/habitat/pull/4194
* https://github.com/habitat-sh/habitat/pull/4978
* https://github.com/habitat-sh/builder/issues/258
|
non_process
|
when uploading dependencies honor the target of the root package not the platform this issue has popped up in a few guises in the past but it still exists it can be seen when you try to upload a linux artifact from a macos workstation really from any workstation that doesn t match the platform of the artifact to be uploaded this frequently happens when users enter a linux studio on their macos workstation to build a linux package then try to upload that package from outside the studio when the hab pkg upload command starts up it creates a builder api client whose pre configured user agent header this architecture information is used by the builder api to determine which platform for which to resolve packages for this is often exactly what you want after all when your supervisors are asking for new versions of myorigin myservice they need to ensure they re getting it for the right platform you re going to have a bad time if you re a linux supervisor trying to run a windows package this starts to fall apart in this cross platform upload scenario though and in a slightly subtle way say you ve built a linux package locally call it myorigin myservice linux hart it will have depended on any number of linux packages let s say one of them is core cacerts if we re uploading our myservice hart file from macos we ll first see that we need to upload this core cacerts package if builder doesn t already have it intuitively it must otherwise how would we have gotten it in the first place however because it will report that it does not have this artifact this initially sounds broken but it isn t the package identifier does not include target platform and it is theoretically possible though rather unlikely for there to be say a linux release and a windows release of the same software with the exact same fully qualified package identifier thus hab pkg upload will dutifully look for a core cacerts darwin hart in the same directory as our myorigin myservice linux hart package under 
the assumption that this dependency is something you re developing alongside your myservice software and will thus be present in the same results directory it won t find it and the entire upload process will stop the workaround solution would be to always upload packages from the platform for which they are built on macos this can be done from within the studio or from inside a linux vm for completeness collecting all the linux dependencies of the package in the same directory and trying to upload is not a workaround because a it involves a lot more manual work on the user s part b results in a lot of extra network traffic as dependencies are uploaded to builder only for builder to recognize that it already has them and most importantly c only works for habitat core team members with write access to the core origin since everything ultimately depends on core the true fix will require a bit of refactoring of the builder api client to allow manual overriding of the user agent header to use the target platform of the package being uploaded rather than the platform from which it is being uploaded revisiting the decision to encode this information as a header as opposed to a request parameter may also be worthwhile related history
| 0
|
1,710
| 4,350,787,210
|
IssuesEvent
|
2016-07-31 13:43:59
|
P0cL4bs/WiFi-Pumpkin
|
https://api.github.com/repos/P0cL4bs/WiFi-Pumpkin
|
closed
|
add support Parrot 3 with 2 wifi wireless adapters
|
enhancement in process
|
I wanna use 2 wireless adapters because cable network is not useful how can I do this?
|
1.0
|
add support Parrot 3 with 2 wifi wireless adapters - I wanna use 2 wireless adapters because cable network is not useful how can I do this?
|
process
|
add support parrot with wifi wireless adapters i wanna use wireless adapters because cable network is not useful how can i do this
| 1
|
159,188
| 6,041,813,592
|
IssuesEvent
|
2017-06-11 05:57:53
|
capnkirok/animania
|
https://api.github.com/repos/capnkirok/animania
|
closed
|
Compatibility with Hatchery/Chickens/Harvestcraft/Harvest Festival
|
compat priority: normal suggestion
|
I'd love to see compatibility for [hatchery](https://minecraft.curseforge.com/projects/hatchery) added.
As it is currently while you can capture chickens with the hatchery net, chickens will not go into nest boxes(chickens disappear).
For [Chickens](https://minecraft.curseforge.com/projects/chickens), this ties in with the [Breeding Genetics](https://github.com/capnkirok/animania/issues/7) issue, as chickens adds the stats/genetics component.
Requesting that harvestcraft crops be useable as food for pigs(most crops), and seeds be useable for chicken feed. Would also be handy if the meat provided in this mod was oredictionaried to work with Harvestcraft recipes.
For Harvest Festival, would love if the eggs/cheese/meats could be sold like HF items(ie there is an ingame selling mechanic unique to HF).
|
1.0
|
Compatibility with Hatchery/Chickens/Harvestcraft/Harvest Festival - I'd love to see compatibility for [hatchery](https://minecraft.curseforge.com/projects/hatchery) added.
As it is currently while you can capture chickens with the hatchery net, chickens will not go into nest boxes(chickens disappear).
For [Chickens](https://minecraft.curseforge.com/projects/chickens), this ties in with the [Breeding Genetics](https://github.com/capnkirok/animania/issues/7) issue, as chickens adds the stats/genetics component.
Requesting that harvestcraft crops be useable as food for pigs(most crops), and seeds be useable for chicken feed. Would also be handy if the meat provided in this mod was oredictionaried to work with Harvestcraft recipes.
For Harvest Festival, would love if the eggs/cheese/meats could be sold like HF items(ie there is an ingame selling mechanic unique to HF).
|
non_process
|
compatibility with hatchery chickens harvestcraft harvest festival i d love to see compatibility for added as it is currently while you can capture chickens with the hatchery net chickens will not go into nest boxes chickens disappear for this ties in with the issue as chickens adds the stats genetics component requesting that harvestcraft crops be useable as food for pigs most crops and seeds be useable for chicken feed would also be handy if the meat provided in this mod was oredictionaried to work with harvestcraft recipes for harvest festival would love if the eggs cheese meats could be sold like hf items ie there is an ingame selling mechanic unique to hf
| 0
|
21,306
| 28,499,976,331
|
IssuesEvent
|
2023-04-18 16:35:21
|
metabase/metabase
|
https://api.github.com/repos/metabase/metabase
|
opened
|
[MLv2] Clean up handling of field filters
|
Type:Bug .metabase-lib .Team/QueryProcessor :hammer_and_wrench:
|
Currently the way field filters work is a bit messy, and there are user-visible issues.
Here's an example flow that's broken currently:
1. Define a field filter like `SELECT * FROM Orders WHERE {{tag}}`
2. Set the type to Field Filter
3. Select a particular field whose type has no widgets, perhaps `People.LONGITUDE`
4. the Filter widget type defaults to `None`, meaning that it should be invisible.
5. It doesn't show up at the top like a parameter's widget usually would.
6. Save the question and re-load it.
### Expected
Same state - tag has widget type `None`; it doesn't appear at the top.
### Actual
It appears as a (usually broken) `"category"` type widget.
This is caused by a hacky special case in MBQL normalization which forces a field filter (`type: "dimension"`) with unset `widget-type` to be `"category"`. That fixes an old legacy case, but breaks new ones. The FE expects **unset** `widget-type` for `"None"` but normalization always sets it.
We should move to setting this explicitly to `"none"` and adjust the FE's handling accordingly.
|
1.0
|
[MLv2] Clean up handling of field filters - Currently the way field filters work is a bit messy, and there are user-visible issues.
Here's an example flow that's broken currently:
1. Define a field filter like `SELECT * FROM Orders WHERE {{tag}}`
2. Set the type to Field Filter
3. Select a particular field whose type has no widgets, perhaps `People.LONGITUDE`
4. the Filter widget type defaults to `None`, meaning that it should be invisible.
5. It doesn't show up at the top like a parameter's widget usually would.
6. Save the question and re-load it.
### Expected
Same state - tag has widget type `None`; it doesn't appear at the top.
### Actual
It appears as a (usually broken) `"category"` type widget.
This is caused by a hacky special case in MBQL normalization which forces a field filter (`type: "dimension"`) with unset `widget-type` to be `"category"`. That fixes an old legacy case, but breaks new ones. The FE expects **unset** `widget-type` for `"None"` but normalization always sets it.
We should move to setting this explicitly to `"none"` and adjust the FE's handling accordingly.
|
process
|
clean up handling of field filters currently the way field filters work is a bit messy and there are user visible issues here s an example flow that s broken currently define a field filter like select from orders where tag set the type to field filter select a particular field whose type has no widgets perhaps people longitude the filter widget type defaults to none meaning that it should be invisible it doesn t show up at the top like a parameter s widget usually would save the question and re load it expected same state tag has widget type none it doesn t appear at the top actual it appears as a usually broken category type widget this is caused by a hacky special case in mbql normalization which forces a field filter type dimension with unset widget type to be category that fixes an old legacy case but breaks new ones the fe expects unset widget type for none but normalization always sets it we should move to setting this explicitly to none and adjust the fe s handling accordingly
| 1
|
1,612
| 4,227,025,900
|
IssuesEvent
|
2016-07-02 21:55:56
|
pelias/schema
|
https://api.github.com/repos/pelias/schema
|
closed
|
Create index leads to java.lang.ClassCastException exception in ES logs
|
processed
|
Setup was:
- installed latest develop branch
- new box on EC2: ubuntu 14.04, elastic search 1.6.0, java 1.7
- ran node scripts/create_index.js
Checking logs revealed following stack trace:
[2015-07-16 01:02:46,447][INFO ][gateway ] [Nemesis] recovered [1] indices into cluster_state
[2015-07-16 01:02:47,105][WARN ][index.warmer ] [Nemesis] [pelias][0] failed to warm-up global ordinals for [center_point]
java.lang.ClassCastException: org.elasticsearch.index.fielddata.plain.GeoPointDoubleArrayIndexFieldData cannot be cast to org.elasticsearch.index.fielddata.IndexFieldData$Global
at org.elasticsearch.search.SearchService$FieldDataWarmer$3.run(SearchService.java:953)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
|
1.0
|
Create index leads to java.lang.ClassCastException exception in ES logs - Setup was:
- installed latest develop branch
- new box on EC2: ubuntu 14.04, elastic search 1.6.0, java 1.7
- ran node scripts/create_index.js
Checking logs revealed following stack trace:
[2015-07-16 01:02:46,447][INFO ][gateway ] [Nemesis] recovered [1] indices into cluster_state
[2015-07-16 01:02:47,105][WARN ][index.warmer ] [Nemesis] [pelias][0] failed to warm-up global ordinals for [center_point]
java.lang.ClassCastException: org.elasticsearch.index.fielddata.plain.GeoPointDoubleArrayIndexFieldData cannot be cast to org.elasticsearch.index.fielddata.IndexFieldData$Global
at org.elasticsearch.search.SearchService$FieldDataWarmer$3.run(SearchService.java:953)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
|
process
|
create index leads to java lang classcastexception exception in es logs setup was installed latest develop branch new box on ubuntu elastic search java ran node scripts create index js checking logs revealed following stack trace recovered indices into cluster state failed to warm up global ordinals for java lang classcastexception org elasticsearch index fielddata plain geopointdoublearrayindexfielddata cannot be cast to org elasticsearch index fielddata indexfielddata global at org elasticsearch search searchservice fielddatawarmer run searchservice java at java util concurrent threadpoolexecutor runworker threadpoolexecutor java at java util concurrent threadpoolexecutor worker run threadpoolexecutor java at java lang thread run thread java
| 1
|
16,393
| 21,162,951,338
|
IssuesEvent
|
2022-04-07 11:03:54
|
nodejs/node
|
https://api.github.com/repos/nodejs/node
|
closed
|
child_process stdin is not emitting 'drain' so that piping can resume
|
child_process
|
### Version
v16.14.2
### Platform
Linux mcws 5.4.0-96-generic #109-Ubuntu SMP Wed Jan 12 16:49:16 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
### Subsystem
stream child_process
### What steps will reproduce the bug?
Run:
```js
import childProcess from 'child_process'
import stream from 'stream'
import util from "util";
const setTimeoutPromise = util.promisify(setTimeout);
// Environment variables
const cwd = process.cwd();
const { env } = process;
// Create a stream where the logs will be written
const logThrough = new stream.PassThrough();
// Log to multiple files using a separate process
const child = childProcess.spawn("tee", ['log'], { cwd, env });
logThrough.pipe(child.stdin);
// Writing some test logs
for (let i = 1; i < 1000; i++) {
logThrough.write(`aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa WARNING ${i}\n`);
logThrough.write(`aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa ERROR ${i}\n`);
logThrough.write(`aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa FATAL ${i}\n`);
}
// give some time to flush buffers
await setTimeoutPromise(5000);
```
The `log` file will _not_ contain all the lines written
### How often does it reproduce? Is there a required condition?
All the time
### What is the expected behavior?
All lines should be written
### What do you see instead?
The counter only reaches 353.
### Additional information
_No response_
|
1.0
|
child_process stdin is not emitting 'drain' so that piping can resume - ### Version
v16.14.2
### Platform
Linux mcws 5.4.0-96-generic #109-Ubuntu SMP Wed Jan 12 16:49:16 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
### Subsystem
stream child_process
### What steps will reproduce the bug?
Run:
```js
import childProcess from 'child_process'
import stream from 'stream'
import util from "util";
const setTimeoutPromise = util.promisify(setTimeout);
// Environment variables
const cwd = process.cwd();
const { env } = process;
// Create a stream where the logs will be written
const logThrough = new stream.PassThrough();
// Log to multiple files using a separate process
const child = childProcess.spawn("tee", ['log'], { cwd, env });
logThrough.pipe(child.stdin);
// Writing some test logs
for (let i = 1; i < 1000; i++) {
logThrough.write(`aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa WARNING ${i}\n`);
logThrough.write(`aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa ERROR ${i}\n`);
logThrough.write(`aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa FATAL ${i}\n`);
}
// give some time to flush buffers
await setTimeoutPromise(5000);
```
The `log` file will _not_ contain all the lines written
### How often does it reproduce? Is there a required condition?
All the time
### What is the expected behavior?
All lines should be written
### What do you see instead?
The counter only reaches 353.
### Additional information
_No response_
|
process
|
child process stdin is not emitting drain so that piping can resume version platform linux mcws generic ubuntu smp wed jan utc gnu linux subsystem stream child process what steps will reproduce the bug run js import childprocess from child process import stream from stream import util from util const settimeoutpromise util promisify settimeout environment variables const cwd process cwd const env process create a stream where the logs will be written const logthrough new stream passthrough log to multiple files using a separate process const child childprocess spawn tee cwd env logthrough pipe child stdin writing some test logs for let i i i logthrough write aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa warning i n logthrough write aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa error i n logthrough write aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa fatal i n give some time to flush buffers await settimeoutpromise the log file will not contain all the lines written how often does it reproduce is there a required condition all the time what is the expected behavior all lines should be written what do you see instead the counter only reaches additional information no response
| 1
|
2,813
| 2,639,861,204
|
IssuesEvent
|
2015-03-11 07:18:56
|
NativeScript/cross-platform-modules
|
https://api.github.com/repos/NativeScript/cross-platform-modules
|
closed
|
TextField for iOS does not update its text property with the last character typed.
|
3 - Ready For Test bug ios
|
In other words, if you type "Eggs" and click a button, the text property of the TextField would still be "Egg".
|
1.0
|
TextField for iOS does not update its text property with the last character typed. - In other words, if you type "Eggs" and click a button, the text property of the TextField would still be "Egg".
|
non_process
|
textfield for ios does not update its text property with the last character typed in other words if you type eggs and click a button the text property of the textfield would still be egg
| 0
|
152,671
| 13,464,144,933
|
IssuesEvent
|
2020-09-09 18:45:14
|
hermitcore/libhermit-rs
|
https://api.github.com/repos/hermitcore/libhermit-rs
|
closed
|
Slim down README
|
Documentation
|
Follow up to https://github.com/hermitcore/libhermit-rs/pull/60
As this repository is not the "flagship" rusty-hermit repository, we should reduce the README to the parts which are related to the kernel code and put everything else into https://github.com/hermitcore/rusty-hermit/blob/master/README.md
|
1.0
|
Slim down README - Follow up to https://github.com/hermitcore/libhermit-rs/pull/60
As this repository is not the "flagship" rusty-hermit repository, we should reduce the README to the parts which are related to the kernel code and put everything else into https://github.com/hermitcore/rusty-hermit/blob/master/README.md
|
non_process
|
slim down readme follow up to as this repository is not the flagship rusty hermit repository we should reduce the readme to the parts which are related to the kernel code and put everything else into
| 0
|
1,738
| 4,425,478,108
|
IssuesEvent
|
2016-08-16 15:33:08
|
willdwyer/bcbsmaissuestracker
|
https://api.github.com/repos/willdwyer/bcbsmaissuestracker
|
closed
|
Quick View button on Configurator - BPMS-I-25
|
Environment-Production Priority-High Status- In-Process To_Be_Addressed_In_V1 Type-Enhancement
|
Hello,
This was brought to our attention by a couple of the BEs on our team.
We are requesting an enhancement for the Quick View button in configurator to operate properly. The Quick View button should allow the user to preview a plan before developing.
Upon clicking the Quick View button, the user should see either:
1. a preview of the plan design details
2. a preview of the screens in the configurator
This would be highly beneficial and would save the BEs a great deal of time.
Please provide an LOE and timing estimate for implementing this.

9
|
1.0
|
Quick View button on Configurator - BPMS-I-25 - Hello,
This was brought to our attention by a couple of the BEs on our team.
We are requesting an enhancement for the Quick View button in configurator to operate properly. The Quick View button should allow the user to preview a plan before developing.
Upon clicking the Quick View button, the user should see either:
1. a preview of the plan design details
2. a preview of the screens in the configurator
This would be highly beneficial and would save the BEs a great deal of time.
Please provide an LOE and timing estimate for implementing this.

9
|
process
|
quick view button on configurator bpms i hello this was brought to our attention by a couple of the bes on our team we are requesting an enhancement for the quick view button in configurator to operate properly the quick view button should allow the user to preview a plan before developing upon clicking the quick view button the user should see either a preview of the plan design details a preview of the screens in the configurator this would be highly beneficial and would save the bes a great deal of time please provide an loe and timing estimate for implementing this
| 1
|
5,729
| 8,570,060,542
|
IssuesEvent
|
2018-11-11 16:44:43
|
home-assistant/home-assistant
|
https://api.github.com/repos/home-assistant/home-assistant
|
closed
|
Tensorflow breaks with 'file_out' line in config
|
in progress platform: image_processing.tensorflow
|
**Home Assistant release with the issue:**
0.82.0
**Operating environment (Hass.io/Docker/Windows/etc.):**
Ubuntu 18.04
**Component/platform:**
The tensorflow component breaks when I add the file_out line to my `configuration.yaml`.
Here's the full config:
`
image_processing:
- platform: tensorflow
scan_interval: 300
source:
- entity_id: camera.ip_webcam
file_out:
- /tmp/tensor/flow_ip_webcam.jpg
model:
graph: /home/jesse/.homeassistant/tensorflow/frozen_inference_graph.pb
`
It works fine with this:
`
image_processing:
- platform: tensorflow
scan_interval: 300
source:
- entity_id: camera.ip_webcam
model:
graph: /home/jesse/.homeassistant/tensorflow/frozen_inference_graph.pb
`
And here's the error message
`
Update for image_processing.tensorflow_ip_webcam fails
Traceback (most recent call last):
File "/usr/local/lib/python3.6/dist-packages/homeassistant/helpers/entity.py", line 221, in async_update_ha_state
await self.async_device_update()
File "/usr/local/lib/python3.6/dist-packages/homeassistant/helpers/entity.py", line 347, in async_device_update
await self.async_update()
File "/usr/local/lib/python3.6/dist-packages/homeassistant/components/image_processing/__init__.py", line 134, in async_update
await self.async_process_image(image.content)
File "/usr/lib/python3.6/concurrent/futures/thread.py", line 56, in run
result = self.fn(*self.args, **self.kwargs)
File "/usr/local/lib/python3.6/dist-packages/homeassistant/components/image_processing/tensorflow.py", line 343, in process_image
self._save_image(image, matches, paths)
File "/usr/local/lib/python3.6/dist-packages/homeassistant/components/image_processing/tensorflow.py", line 249, in _save_image
if self._category_areas[category] != [0, 0, 1, 1]:
KeyError: 'potted plant'
`
|
1.0
|
Tensorflow breaks with 'file_out' line in config - **Home Assistant release with the issue:**
0.82.0
**Operating environment (Hass.io/Docker/Windows/etc.):**
Ubuntu 18.04
**Component/platform:**
The tensorflow component breaks when I add the file_out line to my `configuration.yaml`.
Here's the full config:
`
image_processing:
- platform: tensorflow
scan_interval: 300
source:
- entity_id: camera.ip_webcam
file_out:
- /tmp/tensor/flow_ip_webcam.jpg
model:
graph: /home/jesse/.homeassistant/tensorflow/frozen_inference_graph.pb
`
It works fine with this:
`
image_processing:
- platform: tensorflow
scan_interval: 300
source:
- entity_id: camera.ip_webcam
model:
graph: /home/jesse/.homeassistant/tensorflow/frozen_inference_graph.pb
`
And here's the error message
`
Update for image_processing.tensorflow_ip_webcam fails
Traceback (most recent call last):
File "/usr/local/lib/python3.6/dist-packages/homeassistant/helpers/entity.py", line 221, in async_update_ha_state
await self.async_device_update()
File "/usr/local/lib/python3.6/dist-packages/homeassistant/helpers/entity.py", line 347, in async_device_update
await self.async_update()
File "/usr/local/lib/python3.6/dist-packages/homeassistant/components/image_processing/__init__.py", line 134, in async_update
await self.async_process_image(image.content)
File "/usr/lib/python3.6/concurrent/futures/thread.py", line 56, in run
result = self.fn(*self.args, **self.kwargs)
File "/usr/local/lib/python3.6/dist-packages/homeassistant/components/image_processing/tensorflow.py", line 343, in process_image
self._save_image(image, matches, paths)
File "/usr/local/lib/python3.6/dist-packages/homeassistant/components/image_processing/tensorflow.py", line 249, in _save_image
if self._category_areas[category] != [0, 0, 1, 1]:
KeyError: 'potted plant'
`
|
process
|
tensorflow breaks with file out line in config home assistant release with the issue operating environment hass io docker windows etc ubuntu component platform the tensorflow component breaks when i add the file out line to my configuration yaml here s the full config image processing platform tensorflow scan interval source entity id camera ip webcam file out tmp tensor flow ip webcam jpg model graph home jesse homeassistant tensorflow frozen inference graph pb it works fine with this image processing platform tensorflow scan interval source entity id camera ip webcam model graph home jesse homeassistant tensorflow frozen inference graph pb and here s the error message update for image processing tensorflow ip webcam fails traceback most recent call last file usr local lib dist packages homeassistant helpers entity py line in async update ha state await self async device update file usr local lib dist packages homeassistant helpers entity py line in async device update await self async update file usr local lib dist packages homeassistant components image processing init py line in async update await self async process image image content file usr lib concurrent futures thread py line in run result self fn self args self kwargs file usr local lib dist packages homeassistant components image processing tensorflow py line in process image self save image image matches paths file usr local lib dist packages homeassistant components image processing tensorflow py line in save image if self category areas keyerror potted plant
| 1
|
10,396
| 13,198,180,181
|
IssuesEvent
|
2020-08-14 01:36:01
|
segment-oj/segmentoj
|
https://api.github.com/repos/segment-oj/segmentoj
|
closed
|
Captcha font isn't easy to identify
|
Low Priority Need Processing / 需要处理
|
Please change font to Fira Code.
The font now shows 0 & o O or 1 & l almost the same.
|
1.0
|
Captcha font isn't easy to identify - Please change font to Fira Code.
The font now shows 0 & o O or 1 & l almost the same.
|
process
|
captcha font isn t easy to identify please change font to fira code the font now shows o o or l almost the same
| 1
|
59,001
| 8,318,819,180
|
IssuesEvent
|
2018-09-25 15:33:18
|
ryanisaacg/quicksilver
|
https://api.github.com/repos/ryanisaacg/quicksilver
|
opened
|
Some example that demonstrates how to impl Drawable
|
documentation-soft subsystem-graphics
|
The Drawable trait is very flexible but unfortunately not super intuitive when it comes to implementation.
|
1.0
|
Some example that demonstrates how to impl Drawable - The Drawable trait is very flexible but unfortunately not super intuitive when it comes to implementation.
|
non_process
|
some example that demonstrates how to impl drawable the drawable trait is very flexible but unfortunately not super intuitive when it comes to implementation
| 0
|
332,873
| 24,352,504,437
|
IssuesEvent
|
2022-10-03 02:30:11
|
FelipeGarcia01/git_flow_practice
|
https://api.github.com/repos/FelipeGarcia01/git_flow_practice
|
closed
|
Un commit que no sigue la convención de código o arreglo a realizar
|
documentation
|
La convención del mensaje del último commit no es la esperada:
`STEP3: release`
Recuerde que debe tener el siguiente formato: `<Identificador de la corrección>: <Comentario>`
Para realizar la corrección del mensaje de commit ejecute los comandos `git commit --amend` y `git push -f`
Este issue es solo un recordatorio de la convención de comentarios en los commits y puede ser cerrado.
|
1.0
|
Un commit que no sigue la convención de código o arreglo a realizar - La convención del mensaje del último commit no es la esperada:
`STEP3: release`
Recuerde que debe tener el siguiente formato: `<Identificador de la corrección>: <Comentario>`
Para realizar la corrección del mensaje de commit ejecute los comandos `git commit --amend` y `git push -f`
Este issue es solo un recordatorio de la convención de comentarios en los commits y puede ser cerrado.
|
non_process
|
un commit que no sigue la convención de código o arreglo a realizar la convención del mensaje del último commit no es la esperada release recuerde que debe tener el siguiente formato para realizar la corrección del mensaje de commit ejecute los comandos git commit amend y git push f este issue es solo un recordatorio de la convención de comentarios en los commits y puede ser cerrado
| 0
|
163,822
| 25,880,706,140
|
IssuesEvent
|
2022-12-14 11:03:45
|
Regalis11/Barotrauma
|
https://api.github.com/repos/Regalis11/Barotrauma
|
closed
|
Better Than New isn't exclusive to electrical devices
|
Bug Design Unstable
|
### Disclaimers
- [X] I have searched the issue tracker to check if the issue has already been reported.
- [ ] My issue happened while using mods.
### What happened?
The talent Better Than New states repaired electrical devices deteriorate 30% slower but non-electrical devices also benefit from it.
For comparison I used a Battery and an Oxygen Generator as they share the same deterioration speed. They both had the same slower deterioration speed after being repaired.
### Reproduction steps
_No response_
### Bug prevalence
Happens every time I play
### Version
Faction/endgame test branch
### -
_No response_
### Which operating system did you encounter this bug on?
Windows
### Relevant error messages and crash reports
_No response_
|
1.0
|
Better Than New isn't exclusive to electrical devices - ### Disclaimers
- [X] I have searched the issue tracker to check if the issue has already been reported.
- [ ] My issue happened while using mods.
### What happened?
The talent Better Than New states repaired electrical devices deteriorate 30% slower but non-electrical devices also benefit from it.
For comparison I used a Battery and an Oxygen Generator as they share the same deterioration speed. They both had the same slower deterioration speed after being repaired.
### Reproduction steps
_No response_
### Bug prevalence
Happens every time I play
### Version
Faction/endgame test branch
### -
_No response_
### Which operating system did you encounter this bug on?
Windows
### Relevant error messages and crash reports
_No response_
|
non_process
|
better than new isn t exclusive to electrical devices disclaimers i have searched the issue tracker to check if the issue has already been reported my issue happened while using mods what happened the talent better than new states repaired electrical devices deteriorate slower but non electrical devices also benefit from it for comparison i used a battery and an oxygen generator as they share the same deterioration speed they both had the same slower deterioration speed after being repaired reproduction steps no response bug prevalence happens every time i play version faction endgame test branch no response which operating system did you encounter this bug on windows relevant error messages and crash reports no response
| 0
|
233,683
| 17,874,592,795
|
IssuesEvent
|
2021-09-07 00:04:34
|
bruceneco/Source-Code-Inspection
|
https://api.github.com/repos/bruceneco/Source-Code-Inspection
|
opened
|
Omission of information
|
Omissão Documentation
|
- **Source:** Source-Code-Inspection/documentacao/Lab02_Ticket_Machine_InspecaoDocumentacao.doc
**Where:** Line 10
**Description:** the way it is provided is not specified.
- **Source:** Source-Code-Inspection/documentacao/Lab02_Ticket_Machine_InspecaoDocumentacao.doc
**Where:** Line 12
**Description:** the type of currency is not specified.
- **Source:** Source-Code-Inspection/documentacao/Lab02_Ticket_Machine_InspecaoDocumentacao.doc
**Where:** Use case diagram
**Description:** none of the use cases reference a possible action to cancel the purchase, in case the user does not want to proceed with buying the ticket even after having inserted the money.
- **Source:** Source-Code-Inspection/documentacao/Lab02_Ticket_Machine_InspecaoDocumentacao.doc
**Where:** table "1.3.1 CSU01 – Inserir dinheiro"
**Description:** the type of currency is not specified.
- **Source:** Source-Code-Inspection/documentacao/Lab02_Ticket_Machine_InspecaoDocumentacao.doc
**Where:** table "1.3.2 CSU03 – Solicitar troco"
**Description:** the handling in case there are not enough coin multiples for the change is not specified.
|
1.0
|
Omission of information - - **Source:** Source-Code-Inspection/documentacao/Lab02_Ticket_Machine_InspecaoDocumentacao.doc
**Where:** Line 10
**Description:** the way it is provided is not specified.
- **Source:** Source-Code-Inspection/documentacao/Lab02_Ticket_Machine_InspecaoDocumentacao.doc
**Where:** Line 12
**Description:** the type of currency is not specified.
- **Source:** Source-Code-Inspection/documentacao/Lab02_Ticket_Machine_InspecaoDocumentacao.doc
**Where:** Use case diagram
**Description:** none of the use cases reference a possible action to cancel the purchase, in case the user does not want to proceed with buying the ticket even after having inserted the money.
- **Source:** Source-Code-Inspection/documentacao/Lab02_Ticket_Machine_InspecaoDocumentacao.doc
**Where:** table "1.3.1 CSU01 – Inserir dinheiro"
**Description:** the type of currency is not specified.
- **Source:** Source-Code-Inspection/documentacao/Lab02_Ticket_Machine_InspecaoDocumentacao.doc
**Where:** table "1.3.2 CSU03 – Solicitar troco"
**Description:** the handling in case there are not enough coin multiples for the change is not specified.
|
non_process
|
omission of information source source code inspection documentacao ticket machine inspecaodocumentacao doc where line description the way it is provided is not specified source source code inspection documentacao ticket machine inspecaodocumentacao doc where line description the type of currency is not specified source source code inspection documentacao ticket machine inspecaodocumentacao doc where use case diagram description none of the use cases reference a possible action to cancel the purchase in case the user does not want to proceed with buying the ticket even after having inserted the money source source code inspection documentacao ticket machine inspecaodocumentacao doc where table – inserir dinheiro description the type of currency is not specified source source code inspection documentacao ticket machine inspecaodocumentacao doc where table – solicitar troco description the handling in case there are not enough coin multiples for the change is not specified
| 0
|
17,070
| 22,534,988,351
|
IssuesEvent
|
2022-06-25 04:38:38
|
microsoft/vscode
|
https://api.github.com/repos/microsoft/vscode
|
opened
|
The pseudoterminal extension API latency seems much higher than it was before
|
bug api terminal-process
|
Not sure if something happened on the terminal or extension API but latency can easily hit 500ms+ on a keystroke which leads to a pretty bad experience.
|
1.0
|
The pseudoterminal extension API latency seems much higher than it was before - Not sure if something happened on the terminal or extension API but latency can easily hit 500ms+ on a keystroke which leads to a pretty bad experience.
|
process
|
the pseudoterminal extension api latency seems much higher than it was before not sure if something happened on the terminal or extension api but latency can easily hit on a keystroke which leads to a pretty bad experience
| 1
|
7,813
| 10,964,369,907
|
IssuesEvent
|
2019-11-27 22:22:09
|
codeuniversity/smag-mvp
|
https://api.github.com/repos/codeuniversity/smag-mvp
|
closed
|
Create filter to fetch internal_picture_url
|
Image Processing
|
Fetch internal_picture_url from postgres and write into face detection job topic.
|
1.0
|
Create filter to fetch internal_picture_url - Fetch internal_picture_url from postgres and write into face detection job topic.
|
process
|
create filter to fetch internal picture url fetch internal picture url from postgres and write into face detection job topic
| 1
|
360,743
| 10,696,775,655
|
IssuesEvent
|
2019-10-23 15:17:37
|
conan-io/conan
|
https://api.github.com/repos/conan-io/conan
|
closed
|
tools.patch can't create files
|
complex: low priority: low stage: review type: bug
|
To help us debug your issue please explain:
- [x] I've read the [CONTRIBUTING guide](https://github.com/conan-io/conan/blob/develop/.github/CONTRIBUTING.md).
- [x] I've specified the Conan version, operating system version and any tool that can be relevant.
- [x] I've explained the steps to reproduce the error or the motivation/use case of the question/suggestion.
Hello,
I am trying to apply some out-of-tree git patches, some of them create new files.
Some of them contain new files, and the tools.patch utility will then complain that absolute filepaths are not allowed (`/dev/null`).
Lacking this support, it's cumbersome to add bugfixes and git commits in general.
Example patch (created with `git format-patch`):
```patch
From d0807313143bb35da65c2b858a2d9e17fd3fbf9e Mon Sep 17 00:00:00 2001
From: Norbert Lange <nolange79@gmail.com>
Date: Fri, 7 Jun 2019 21:49:19 +0200
Subject: [PATCH] add and remove file
---
newfile | 1 +
oldfile | 1 -
2 files changed, 1 insertion(+), 1 deletion(-)
create mode 100644 newfile
delete mode 100644 oldfile
diff --git a/newfile b/newfile
new file mode 100644
index 0000000..fdedddf
--- /dev/null
+++ b/newfile
@@ -0,0 +1 @@
+Hello mean world
diff --git a/oldfile b/oldfile
deleted file mode 100644
index 32332e1..0000000
--- a/oldfile
+++ /dev/null
@@ -1 +0,0 @@
-Old litter
--
2.20.1
```
My environment is:
```
Debian Buster x64
Conan version 1.16.0
```
|
1.0
|
tools.patch can't create files - To help us debug your issue please explain:
- [x] I've read the [CONTRIBUTING guide](https://github.com/conan-io/conan/blob/develop/.github/CONTRIBUTING.md).
- [x] I've specified the Conan version, operating system version and any tool that can be relevant.
- [x] I've explained the steps to reproduce the error or the motivation/use case of the question/suggestion.
Hello,
I am trying to apply some out-of-tree git patches, some of them create new files.
Some of them contain new files, and the tools.patch utility will then complain that absolute filepaths are not allowed (`/dev/null`).
Lacking this support, it's cumbersome to add bugfixes and git commits in general.
Example patch (created with `git format-patch`):
```patch
From d0807313143bb35da65c2b858a2d9e17fd3fbf9e Mon Sep 17 00:00:00 2001
From: Norbert Lange <nolange79@gmail.com>
Date: Fri, 7 Jun 2019 21:49:19 +0200
Subject: [PATCH] add and remove file
---
newfile | 1 +
oldfile | 1 -
2 files changed, 1 insertion(+), 1 deletion(-)
create mode 100644 newfile
delete mode 100644 oldfile
diff --git a/newfile b/newfile
new file mode 100644
index 0000000..fdedddf
--- /dev/null
+++ b/newfile
@@ -0,0 +1 @@
+Hello mean world
diff --git a/oldfile b/oldfile
deleted file mode 100644
index 32332e1..0000000
--- a/oldfile
+++ /dev/null
@@ -1 +0,0 @@
-Old litter
--
2.20.1
```
My environment is:
```
Debian Buster x64
Conan version 1.16.0
```
|
non_process
|
tools patch cant create files to help us debug your issue please explain i ve read the i ve specified the conan version operating system version and any tool that can be relevant i ve explained the steps to reproduce the error or the motivation use case of the question suggestion hello i am trying to apply some out of tree git patches some of them create new files some of them contain new files and the tools patch utility will then complain that absolute filepaths are not allowed dev null lacking this support its cumbersome to add bugfixes and git commits in general example patch created with git format patch patch from mon sep from norbert lange date fri jun subject add and remove file newfile oldfile files changed insertion deletion create mode newfile delete mode oldfile diff git a newfile b newfile new file mode index fdedddf dev null b newfile hello mean world diff git a oldfile b oldfile deleted file mode index a oldfile dev null old litter my environment is debian buster conan version
| 0
|
3,388
| 6,515,638,899
|
IssuesEvent
|
2017-08-26 18:21:57
|
guyikcgg/mdpc
|
https://api.github.com/repos/guyikcgg/mdpc
|
closed
|
Remove outliers
|
Preprocessing
|
- [x] Univariate outliers
- [x] Multivariate outliers (might require to remove more attributes)
|
1.0
|
Remove outliers - - [x] Univariate outliers
- [x] Multivariate outliers (might require to remove more attributes)
|
process
|
remove outliers univariate outliers multivariate outliers might require to remove more attributes
| 1
|
8,391
| 11,563,289,403
|
IssuesEvent
|
2020-02-20 05:33:01
|
microsoft/LightGBM
|
https://api.github.com/repos/microsoft/LightGBM
|
closed
|
Upper and lower bound of the value a tree can give
|
enhancement feature-request in-process
|
## Summary
Have an upper and lower bound of the value a tree can give
## Motivation
It would be useful if the output needed to be rescaled or given an offset, for instance
## Description
Have an upper and lower bound of the value a tree can give, given by the sum of all the max or min leaf values of each subtree
## Proposal
Proposal PR with potential solution: https://github.com/microsoft/LightGBM/pull/2737
|
1.0
|
Upper and lower bound of the value a tree can give - ## Summary
Have an upper and lower bound of the value a tree can give
## Motivation
It would be useful if the output needed to be rescaled or given an offset, for instance
## Description
Have an upper and lower bound of the value a tree can give, given by the sum of all the max or min leaf values of each subtree
## Proposal
Proposal PR with potential solution: https://github.com/microsoft/LightGBM/pull/2737
|
process
|
upper and lower bound of the value a tree can give summary have an upper and lower bound of the value a tree can give motivation it would be useful if the output needed to be rescaled or given an offset for instance description have an upper and lower bound of the value a tree can give given by the sum of all the max or min leaf values of each subtree proposal proposal pr with potential solution
| 1
|
8,084
| 6,399,473,188
|
IssuesEvent
|
2017-08-05 00:34:29
|
brenoalvs/monk
|
https://api.github.com/repos/brenoalvs/monk
|
closed
|
Create the Acceptance test files
|
enhancement needs review performance
|
We need to create the tests for the following features:
- [x] Activation/Deactivation
- [x] Configuration
- [x] Posts
- [x] Categories
- [x] Medias
- [x] Menus
- [x] Language Switcher widget
|
True
|
Create the Acceptance test files - We need to create the tests for the following features:
- [x] Activation/Deactivation
- [x] Configuration
- [x] Posts
- [x] Categories
- [x] Medias
- [x] Menus
- [x] Language Switcher widget
|
non_process
|
create the acceptance test files we need to create the tests for the following features activation deactivation configuration posts categories medias menus language switcher widget
| 0
|
3,250
| 4,287,552,927
|
IssuesEvent
|
2016-07-16 21:05:33
|
globaleaks/GlobaLeaks
|
https://api.github.com/repos/globaleaks/GlobaLeaks
|
opened
|
Implement specific data retention policy for unread submissions.
|
A: OpenWhistleblowing C: Backend F: Security
|
Most of the real use cases have data retention policies too broad; typical settings enable the submission to survive one year.
With this ticket I propose to add an additional specific configuration for managing automatic deletion of unread contents.
I think, in fact, that while it is OK for an organization to set the data retention policy to one year, it would be important to enforce in the workflow that, in case of irresponsible receivers who do not read submissions, the content is deleted after a shorter timeout (e.g. if the data retention policy is set to 1 year, the deletion of unread submissions could be set to 3 months)
|
True
|
Implement specific data retention policy for unread submissions. - Most of the real use cases have data retention policies too broad; typical settings enable the submission to survive one year.
With this ticket I propose to add an additional specific configuration for managing automatic deletion of unread contents.
I think, in fact, that while it is OK for an organization to set the data retention policy to one year, it would be important to enforce in the workflow that, in case of irresponsible receivers who do not read submissions, the content is deleted after a shorter timeout (e.g. if the data retention policy is set to 1 year, the deletion of unread submissions could be set to 3 months)
|
non_process
|
implement specific data retention policy for unread submissions most of the real use cases have data retention policies too broad typical settings enable the submission to survive one year with this ticket i propose to add an additional specific configuration for managing automatic deletion of unread contents i think in fact that while it is ok for an organization to set the data retention policy to one year it would be important to enforce in the workflow that in case of irresponsible receivers that do not read submissions the content is deleted after a shorter timeout e g if the data retention policy is set to the deletion of unread submissions could be set to
| 0
|
22,527
| 31,626,301,330
|
IssuesEvent
|
2023-09-06 05:45:49
|
threefoldfoundation/tft
|
https://api.github.com/repos/threefoldfoundation/tft
|
closed
|
Extract signer process?
|
type_question process_wontfix
|
It feels to me like the signer process is embedded in the bridge and a lot of code is just initialised whilst it has no purpose. I think it should be extracted and a separate process should run the signing.
|
1.0
|
Extract signer process? - It feels to me like the signer process is embedded in the bridge and a lot of code is just initialised whilst it has no purpose. I think it should be extracted and a separate process should run the signing.
|
process
|
extract signer process it feels to me like the signer process is embedded in the bridge and a lot of code is just initialised whilst it has no purpose i think it should be extracted and a separate process should run the signing
| 1
|
14,273
| 17,226,751,672
|
IssuesEvent
|
2021-07-20 03:37:12
|
gfx-rs/naga
|
https://api.github.com/repos/gfx-rs/naga
|
opened
|
IR optimization
|
area: processing kind: feature
|
Egg seems interesting:
- https://egraphs-good.github.io/
- https://arxiv.org/abs/2004.03082
My primitive take on this is - Egg is a fine instrument that can integrate with anything, including Naga's IR, in a fairly concise way, and providing a way for us to generate optimized IR.
|
1.0
|
IR optimization - Egg seems interesting:
- https://egraphs-good.github.io/
- https://arxiv.org/abs/2004.03082
My primitive take on this is - Egg is a fine instrument that can integrate with anything, including Naga's IR, in a fairly concise way, and providing a way for us to generate optimized IR.
|
process
|
ir optimization egg seems interesting my primitive take on this is egg is a fine instrument that can integrate with anything including naga s ir in a fairly concise way and providing a way for us to generate optimized ir
| 1
|
15,738
| 19,910,405,359
|
IssuesEvent
|
2022-01-25 16:37:05
|
cypress-io/cypress
|
https://api.github.com/repos/cypress-io/cypress
|
closed
|
Unification App: Write E2E tests around "Specs Page" - New e2e testing project
|
process: tests type: chore stage: needs review
|
### What would you like?
Write end-to-end tests to cover the new Unification work in 10.0-release branch for "[Choose a Browser](https://docs.google.com/spreadsheets/d/1iPwi89aW6aYeA0VT1XOhYdAWLuScW0okrlfcL9fzh3s/edit#gid=0)" in the App, around the Specs page when on a new e2e testing project.
### Why is this needed?
_No response_
### Other
_No response_
|
1.0
|
Unification App: Write E2E tests around "Specs Page" - New e2e testing project - ### What would you like?
Write end-to-end tests to cover the new Unification work in 10.0-release branch for "[Choose a Browser](https://docs.google.com/spreadsheets/d/1iPwi89aW6aYeA0VT1XOhYdAWLuScW0okrlfcL9fzh3s/edit#gid=0)" in the App, around the Specs page when on a new e2e testing project.
### Why is this needed?
_No response_
### Other
_No response_
|
process
|
unification app write tests around specs page new testing project what would you like write end to end tests to cover the new unification work in release branch for in the app around the specs page when on a new testing project why is this needed no response other no response
| 1
|
32,045
| 8,789,501,892
|
IssuesEvent
|
2018-12-21 04:00:42
|
juju-solutions/bundle-canonical-kubernetes
|
https://api.github.com/repos/juju-solutions/bundle-canonical-kubernetes
|
closed
|
Bundle builder should support test fragments
|
area/bundle-builder kind/feature
|
There are 2 very distinctly different test suites for the CDK and CDK-CORE bundles. We will need to encapsulate a way to build the bundles, and specify the path to these "test fragments" which are in reality 2 different test suites, to be included in the release bundle.
Perhaps we can just copy the directories into the respective fragment locations, and the bundler can then include them in the output directory of the build process.
|
1.0
|
Bundle builder should support test fragments - There are 2 very distinctly different test suites for the CDK and CDK-CORE bundles. We will need to encapsulate a way to build the bundles, and specify the path to these "test fragments" which are in reality 2 different test suites, to be included in the release bundle.
Perhaps we can just copy the directories into the respective fragment locations, and the bundler can then include them in the output directory of the build process.
|
non_process
|
bundle builder should support test fragments there are very distinctly different test suites for the cdk and cdk core bundles we will need to encapsulate a way to build the bundles and specify the path to these test fragments which are in reality different test suites to be included in the release bundle perhaps we can just copy the directories into the respective fragment locations and the bundler can then include them in the output directory of the build process
| 0
|
16,397
| 21,180,750,998
|
IssuesEvent
|
2022-04-08 07:41:40
|
zotero/zotero
|
https://api.github.com/repos/zotero/zotero
|
closed
|
Incorrect error handling when citeproc-rs is disabled
|
Word Processor Integration Bug
|
https://forums.zotero.org/discussion/96065/unable-to-cite-papers-using-zotero-after-saving-word-document
```
[JavaScript Error: "This command is not available because no document is open. [getDocument:\vboxsvr\adomas\zotero\word-for-windows-integration\build\zoterowinwordintegration\document.cpp]"]
[JavaScript Error: "TypeError: invalid 'instanceof' operand Zotero.CiteprocRs.CiteprocRsDriverError" {file: "chrome://zotero/content/xpcom/integration.js" line: 375}]
```
Not sure if it makes a difference, but I'm guessing this is throwing incorrectly?
|
1.0
|
Incorrect error handling when citeproc-rs is disabled - https://forums.zotero.org/discussion/96065/unable-to-cite-papers-using-zotero-after-saving-word-document
```
[JavaScript Error: "This command is not available because no document is open. [getDocument:\vboxsvr\adomas\zotero\word-for-windows-integration\build\zoterowinwordintegration\document.cpp]"]
[JavaScript Error: "TypeError: invalid 'instanceof' operand Zotero.CiteprocRs.CiteprocRsDriverError" {file: "chrome://zotero/content/xpcom/integration.js" line: 375}]
```
Not sure if it makes a difference, but I'm guessing this is throwing incorrectly?
|
process
|
incorrect error handling when citeproc rs is disabled not sure if it makes a difference but i m guessing this is throwing incorrectly
| 1
|
629,348
| 20,029,819,686
|
IssuesEvent
|
2022-02-02 03:23:45
|
ResonantGeoData/RGD-Vue
|
https://api.github.com/repos/ResonantGeoData/RGD-Vue
|
closed
|
Copy as STAC item in Raster metadata drawer
|
priority
|
Similar to https://github.com/ResonantGeoData/ResonantGeoData/issues/653
Add a copy as STAC button to the metadata drawer for Rasters
|
1.0
|
Copy as STAC item in Raster metadata drawer - Similar to https://github.com/ResonantGeoData/ResonantGeoData/issues/653
Add a copy as STAC button to the metadata drawer for Rasters
|
non_process
|
copy as stac item in raster metadata drawer similar to add a copy as stac button to the metadata drawer for rasters
| 0
|
73,009
| 19,548,090,476
|
IssuesEvent
|
2022-01-02 08:27:23
|
lf-lang/lingua-franca
|
https://api.github.com/repos/lf-lang/lingua-franca
|
closed
|
The gradle build on master breaks after changing ASTUtils.xtend
|
bug build system
|
This happens even in a fresh clone. Steps to reproduce:
```
git clone git@github.com:icyphy/lingua-franca.git
cd lingua-franca
./gradlew generateStandaloneCompiler
echo " " >> org.lflang/src/org/lflang/ASTUtils.xtend
./gradlew generateStandaloneCompiler
```
where the `echo " " >>` line adds an empty line to the file. This yields the following errors:
```
RROR:The method or field GEN_DELAY_CLASS_NAME is undefined for the type Class<GeneratorBase> (file:/home/cmenard/tmp/lingua-franca/org.lflang/src/org/lflang/ASTUtils.xtend line : 303 column : 27)
ERROR:The method or field GEN_DELAY_CLASS_NAME is undefined for the type Class<GeneratorBase> (file:/home/cmenard/tmp/lingua-franca/org.lflang/src/org/lflang/ASTUtils.xtend line : 306 column : 35)
ERROR:The method or field GEN_DELAY_CLASS_NAME is undefined for the type Class<GeneratorBase> (file:/home/cmenard/tmp/lingua-franca/org.lflang/src/org/lflang/ASTUtils.xtend line : 303 column : 27)
ERROR:The method or field GEN_DELAY_CLASS_NAME is undefined for the type Class<GeneratorBase> (file:/home/cmenard/tmp/lingua-franca/org.lflang/src/org/lflang/ASTUtils.xtend line : 306 column : 35)
```
Running `./gradlew clean` does not help. Strangely, this only happens when changing ASTUtils.xtend. I also tried with a few other files, but it seems to work fine.
@oowekyala was also able to reproduce this issue.
|
1.0
|
The gradle build on master breaks after changing ASTUtils.xtend - This happens even in a fresh clone. Steps to reproduce:
```
git clone git@github.com:icyphy/lingua-franca.git
cd lingua-franca
./gradlew generateStandaloneCompiler
echo " " >> org.lflang/src/org/lflang/ASTUtils.xtend
./gradlew generateStandaloneCompiler
```
where the `echo " " >>` line adds an empty line to the file. This yields the following errors:
```
RROR:The method or field GEN_DELAY_CLASS_NAME is undefined for the type Class<GeneratorBase> (file:/home/cmenard/tmp/lingua-franca/org.lflang/src/org/lflang/ASTUtils.xtend line : 303 column : 27)
ERROR:The method or field GEN_DELAY_CLASS_NAME is undefined for the type Class<GeneratorBase> (file:/home/cmenard/tmp/lingua-franca/org.lflang/src/org/lflang/ASTUtils.xtend line : 306 column : 35)
ERROR:The method or field GEN_DELAY_CLASS_NAME is undefined for the type Class<GeneratorBase> (file:/home/cmenard/tmp/lingua-franca/org.lflang/src/org/lflang/ASTUtils.xtend line : 303 column : 27)
ERROR:The method or field GEN_DELAY_CLASS_NAME is undefined for the type Class<GeneratorBase> (file:/home/cmenard/tmp/lingua-franca/org.lflang/src/org/lflang/ASTUtils.xtend line : 306 column : 35)
```
Running `./gradlew clean` does not help. Strangely, this only happens when changing ASTUtils.xtend. I also tried with a few other files, but it seems to work fine.
@oowekyala was also able to reproduce this issue.
|
non_process
|
the gradle build on master breaks after changing astutils xtend this happens even in a fresh clone steps to reproduce git clone git github com icyphy lingua franca git cd lingua franca gradlew generatestandalonecompiler echo org lflang src org lflang astutils xtend gradlew generatestandalonecompiler where the echo line adds an empty line to the file this yields the following errors rror the method or field gen delay class name is undefined for the type class file home cmenard tmp lingua franca org lflang src org lflang astutils xtend line column error the method or field gen delay class name is undefined for the type class file home cmenard tmp lingua franca org lflang src org lflang astutils xtend line column error the method or field gen delay class name is undefined for the type class file home cmenard tmp lingua franca org lflang src org lflang astutils xtend line column error the method or field gen delay class name is undefined for the type class file home cmenard tmp lingua franca org lflang src org lflang astutils xtend line column running gradlew clean does not help strangely this only happens when changing astutils xtend i also tried with a few other files but it seems to work fine oowekyala was also able to reproduce this issue
| 0
|
301,533
| 9,221,400,035
|
IssuesEvent
|
2019-03-11 19:53:15
|
SacredDuckwhale/AltMastery
|
https://api.github.com/repos/SacredDuckwhale/AltMastery
|
opened
|
Add milestone for BFA outposts
|
complexity: low module:core priority:normal status:accepted type:task
|
* Unlock
* Upgrade
* Both factions have different ones
* Tracking via Quest and possibly available mission (?)
|
1.0
|
Add milestone for BFA outposts - * Unlock
* Upgrade
* Both factions have different ones
* Tracking via Quest and possibly available mission (?)
|
non_process
|
add milestone for bfa outposts unlock upgrade both factions have different ones tracking via quest and possibly available mission
| 0
|
10,053
| 13,044,161,670
|
IssuesEvent
|
2020-07-29 03:47:25
|
tikv/tikv
|
https://api.github.com/repos/tikv/tikv
|
closed
|
UCP: Migrate scalar function `SubDateIntString` from TiDB
|
challenge-program-2 component/coprocessor difficulty/easy sig/coprocessor
|
## Description
Port the scalar function `SubDateIntString` from TiDB to coprocessor.
## Score
* 50
## Mentor(s)
* @lonng
## Recommended Skills
* Rust programming
## Learning Materials
Already implemented expressions ported from TiDB
- https://github.com/tikv/tikv/tree/master/components/tidb_query/src/rpn_expr)
- https://github.com/tikv/tikv/tree/master/components/tidb_query/src/expr)
|
2.0
|
UCP: Migrate scalar function `SubDateIntString` from TiDB -
## Description
Port the scalar function `SubDateIntString` from TiDB to coprocessor.
## Score
* 50
## Mentor(s)
* @lonng
## Recommended Skills
* Rust programming
## Learning Materials
Already implemented expressions ported from TiDB
- https://github.com/tikv/tikv/tree/master/components/tidb_query/src/rpn_expr)
- https://github.com/tikv/tikv/tree/master/components/tidb_query/src/expr)
|
process
|
ucp migrate scalar function subdateintstring from tidb description port the scalar function subdateintstring from tidb to coprocessor score mentor s lonng recommended skills rust programming learning materials already implemented expressions ported from tidb
| 1
|
11,347
| 14,169,248,614
|
IssuesEvent
|
2020-11-12 12:58:08
|
panther-labs/panther
|
https://api.github.com/repos/panther-labs/panther
|
closed
|
Custom logs doesn't support schemas with "similar" name
|
bug p0 team:data processing
|
### Describe the bug
Custom logs doesn't support schemas with "similar" names. E.g. specifying two fields, `user-agent` and `user_agent`, causes schema generation to fail
### Steps to reproduce
Steps to reproduce the behavior:
1. Create a new schema locally like the following:
```
version: 0
fields:
- name: user-agent
type: string
- name: user_agent
type: string
```
2. Generate a new log file with this content:
```
{"user-agent": "test", "user_agent": "test"}
```
3. Run the customlogs CLI tool
4. See error
### Expected behavior
We are able to process the above fields.
### Environment
How are you deploying or using Panther?
- Panther version or commit: 1.12
|
1.0
|
Custom logs doesn't support schemas with "similar" name - ### Describe the bug
Custom logs doesn't support schemas with "similar" names. E.g. specifying two fields, `user-agent` and `user_agent`, causes schema generation to fail
### Steps to reproduce
Steps to reproduce the behavior:
1. Create a new schema locally like the following:
```
version: 0
fields:
- name: user-agent
type: string
- name: user_agent
type: string
```
2. Generate a new log file with this content:
```
{"user-agent": "test", "user_agent": "test"}
```
3. Run the customlogs CLI tool
4. See error
### Expected behavior
We are able to process the above fields.
### Environment
How are you deploying or using Panther?
- Panther version or commit: 1.12
|
process
|
custom logs doesn t support schemas with similar name describe the bug custom logs doesn t support schemas with similar names e g specifying two fields user agent and user agent causes schema generation to fail steps to reproduce steps to reproduce the behavior create a new schema locally like the following version fields name user agent type string name user agent type string generate a new log file with this content user agent test user agent test run the customlogs cli tool see error expected behavior we are able to process the above fields environment how are you deploying or using panther panther version or commit
| 1
|
4,956
| 7,801,846,265
|
IssuesEvent
|
2018-06-10 03:51:41
|
udacity/aind-issue-reports
|
https://api.github.com/repos/udacity/aind-issue-reports
|
closed
|
[Neutral]2018-05-28
|
1.0.0 43. Intro to Natural Language Processing
|
I didn't like IBM's "merchandise" approach to sell itself to us\, students. I understand that UDACITY is a private\, for profit company\, but teaching subjects should be agnostic to service providers.
|
1.0
|
[Neutral]2018-05-28 - I didn't like IBM's "merchandise" approach to sell itself to us\, students. I understand that UDACITY is a private\, for profit company\, but teaching subjects should be agnostic to service providers.
|
process
|
i didn t like ibm s merchandise approach to sell itself to us students i understand that udacity is a private for profit company but teaching subjects should be agnostic to service providers
| 1
|
6,620
| 9,725,329,595
|
IssuesEvent
|
2019-05-30 08:23:51
|
qgis/QGIS
|
https://api.github.com/repos/qgis/QGIS
|
closed
|
Processing - Export to PostgreSQL (available connections)
|
Bug Processing
|
Author Name: **Rocco Pispico** (Rocco Pispico)
Original Redmine Issue: [21356](https://issues.qgis.org/issues/21356)
Affected QGIS version: 3.4.4
Redmine category:processing/gdal
Assignee: Giovanni Manghi
---
This script has several problems especially when use _iterate over..._
- doesn't create multiple layers but only a single layer, if overwrite is checked remain only the last feature;
- if the source layer is a postgres layer, the name of new one layer is a concatenation of <new schema>.<source schema>.<table name> instead of <new schema>.<table name>;
- if the source is postgres doesn't check if source layer and new layer are the same.
I think that the names of output layers could be created in a _composer_ in order to have maximum flexibility.
Finally two cosmetic wish:
- the counter goes from 0 to number of feature instead from 1 to number of feature;
- change the iterator icon is in a checkbox widget.
|
1.0
|
Processing - Export to PostgreSQL (available connections) - Author Name: **Rocco Pispico** (Rocco Pispico)
Original Redmine Issue: [21356](https://issues.qgis.org/issues/21356)
Affected QGIS version: 3.4.4
Redmine category:processing/gdal
Assignee: Giovanni Manghi
---
This script has several problems especially when use _iterate over..._
- doesn't create multiple layers but only a single layer, if overwrite is checked remain only the last feature;
- if the source layer is a postgres layer, the name of new one layer is a concatenation of <new schema>.<source schema>.<table name> instead of <new schema>.<table name>;
- if the source is postgres doesn't check if source layer and new layer are the same.
I think that the names of output layers could be created in a _composer_ in order to have maximum flexibility.
Finally two cosmetic wish:
- the counter goes from 0 to number of feature instead from 1 to number of feature;
- change the iterator icon is in a checkbox widget.
|
process
|
processing export to postgresql available connections author name rocco pispico rocco pispico original redmine issue affected qgis version redmine category processing gdal assignee giovanni manghi this script has several problems especially when use iterate over doesn t create multiple layers but only a single layer if overwrite is checked remain only the last feature if the source layer is a postgres layer the name of new one layer is a concatenation of instead of if the source is postgres doesn t check if source layer and new layer are the same i think that the names of output layers could be created in a composer in order to have maximum flexibility finally two cosmetic wish the counter goes from to number of feature instead from to number of feature change the iterator icon is in a checkbox widget
| 1
|
48,298
| 2,997,031,531
|
IssuesEvent
|
2015-07-23 03:01:44
|
theminted/lesswrong-migrated
|
https://api.github.com/repos/theminted/lesswrong-migrated
|
closed
|
Homepage [login, cookies, subdomain etc.]
|
imported Priority-Critical Type-Discussion
|
_From [wjmo...@gmail.com](https://code.google.com/u/117567618910921056910/) on January 29, 2009 14:35:19_
Two main alternatives:
* Separate Domain Solution (eg. lesswrong.com & overcomingbias.com)
** Requires user to log in for each site separately (cookie limitation)
** Can use same account for both sites (however)
** Or, implement 'single sign on' type login behaviour (not insignificant time)
* Subdomain Solution (eg. overcomingbias.com & community.overcomingbias.com)
** One cookie would work for both domains (presuming cookie was set for 'overcomingbias.com')
** Login once, already logged in on both (without significant coding effort)
_Original issue: http://code.google.com/p/lesswrong/issues/detail?id=16_
|
1.0
|
Homepage [login, cookies, subdomain etc.] - _From [wjmo...@gmail.com](https://code.google.com/u/117567618910921056910/) on January 29, 2009 14:35:19_
Two main alternatives:
* Separate Domain Solution (eg. lesswrong.com & overcomingbias.com)
** Requires user to log in for each site separately (cookie limitation)
** Can use same account for both sites (however)
** Or, implement 'single sign on' type login behaviour (not insignificant time)
* Subdomain Solution (eg. overcomingbias.com & community.overcomingbias.com)
** One cookie would work for both domains (presuming cookie was set for 'overcomingbias.com')
** Login once, already logged in on both (without significant coding effort)
_Original issue: http://code.google.com/p/lesswrong/issues/detail?id=16_
|
non_process
|
homepage from on january two main alternatives separate domain solution eg lesswrong com overcomingbias com requires user to log in for each site separately cookie limitation can use same account for both sites however or implement single sign on type login behaviour not insignificant time subdomain solution eg overcomingbias com community overcomingbias com one cookie would work for both domains presuming cookie was set for overcomingbias com login once already logged in on both without significant coding effort original issue
| 0
|
258,638
| 8,178,447,807
|
IssuesEvent
|
2018-08-28 13:51:41
|
zephyrproject-rtos/zephyr
|
https://api.github.com/repos/zephyrproject-rtos/zephyr
|
closed
|
Networking Problems, Size Missmatch 15 vs 13
|
area: Networking bug priority: low
|
Hello,
i am currently working on porting zephyr to the Texas Instruments cc2538 SoC. While doing that, i am trying to get Networking to work over the built-in IEEE802154 radio in the cc2538. I already got that to working as far that the echo_client / echo_server samples can communicate with each other successfully with data beeing transmitted correctly.
My development started on Zephyr 1.7 and i recently have moved my implementation to Zephyr 1.11, when i was running on 1.7 the transmission was very smooth and was only subject to errors very rarely (that is less than 1 lost paket in a minute, mostly due to an error on the physical layer). With Zephyr 1.11 i am experiencing delays which i could not identify the source exaclty yet, but i suspected some network caching mechanism, which led me to disabling "Multicast Listener Discovery support" and "Neighbor cache".
In doing so i loose the functionality with the following error:

I was able to track the issue to some degree, but since my understanding of the layer 2 and layer 3 implementation is limited i had given up on my own. My findings were that there seems to exist a discrepancy with what address type is used (PAN ID, Short Address, Extended Address) when computing the header size in the function ieee802154_compute_header_size.
In this context i also found it interesting that the layer 2 implementation in l2/ieee802154_frame.c seems to make a call to a higher layer, the ipv6 layer, which seems uncommon / incorrect in a layered architecture to me. Below you see the piece of code in question, the function call to "net_ipv6_nbr_lookup", i think the other branch is taken between working/nonworking.

Some one else pointed out to me that it might be related to "joining a network" on a certain level, on which it does not know the mode of addressing yet, but sadly i'm could not do much with that information.
Attached you can find the autoconf.h from the working variant (which still suffers from frequent timeouts, but otherwise works) and the nonworking variant which does not send anything because of the explained error. I hope this provides reasonable information about my configuration, if i should / can provide addition information, please tell me to.
[autoconf_working.h.txt](https://github.com/zephyrproject-rtos/zephyr/files/2108412/autoconf_working.h.txt)
[autoconf_nonworking.h.txt](https://github.com/zephyrproject-rtos/zephyr/files/2108416/autoconf_nonworking.h.txt)
The final question therefore is, is this a bug (in calculating the header sizes) or is my second configuration at fault?
|
1.0
|
Networking Problems, Size Missmatch 15 vs 13 - Hello,
i am currently working on porting zephyr to the Texas Instruments cc2538 SoC. While doing that, i am trying to get Networking to work over the built-in IEEE802154 radio in the cc2538. I already got that to working as far that the echo_client / echo_server samples can communicate with each other successfully with data beeing transmitted correctly.
My development started on Zephyr 1.7 and i recently have moved my implementation to Zephyr 1.11, when i was running on 1.7 the transmission was very smooth and was only subject to errors very rarely (that is less than 1 lost paket in a minute, mostly due to an error on the physical layer). With Zephyr 1.11 i am experiencing delays which i could not identify the source exaclty yet, but i suspected some network caching mechanism, which led me to disabling "Multicast Listener Discovery support" and "Neighbor cache".
In doing so i loose the functionality with the following error:

I was able to track the issue to some degree, but since my understanding of the layer 2 and layer 3 implementation is limited i had given up on my own. My findings were that there seems to exist a discrepancy with what address type is used (PAN ID, Short Address, Extended Address) when computing the header size in the function ieee802154_compute_header_size.
In this context i also found it interesting that the layer 2 implementation in l2/ieee802154_frame.c seems to make a call to a higher layer, the ipv6 layer, which seems uncommon / incorrect in a layered architecture to me. Below you see the piece of code in question, the function call to "net_ipv6_nbr_lookup", i think the other branch is taken between working/nonworking.

Some one else pointed out to me that it might be related to "joining a network" on a certain level, on which it does not know the mode of addressing yet, but sadly i'm could not do much with that information.
Attached you can find the autoconf.h from the working variant (which still suffers from frequent timeouts, but otherwise works) and the nonworking variant which does not send anything because of the explained error. I hope this provides reasonable information about my configuration, if i should / can provide addition information, please tell me to.
[autoconf_working.h.txt](https://github.com/zephyrproject-rtos/zephyr/files/2108412/autoconf_working.h.txt)
[autoconf_nonworking.h.txt](https://github.com/zephyrproject-rtos/zephyr/files/2108416/autoconf_nonworking.h.txt)
The final question therefore is, is this a bug (in calculating the header sizes) or is my second configuration at fault?
|
non_process
|
networking problems size missmatch vs hello i am currently working on porting zephyr to the texas instruments soc while doing that i am trying to get networking to work over the built in radio in the i already got that to working as far that the echo client echo server samples can communicate with each other successfully with data beeing transmitted correctly my development started on zephyr and i recently have moved my implementation to zephyr when i was running on the transmission was very smooth and was only subject to errors very rarely that is less than lost paket in a minute mostly due to an error on the physical layer with zephyr i am experiencing delays which i could not identify the source exaclty yet but i suspected some network caching mechanism which led me to disabling multicast listener discovery support and neighbor cache in doing so i loose the functionality with the following error i was able to track the issue to some degree but since my understanding of the layer and layer implementation is limited i had given up on my own my findings were that there seems to exist a discrepancy with what address type is used pan id short address extended address when computing the header size in the function compute header size in this context i also found it interesting that the layer implementation in frame c seems to make a call to a higher layer the layer which seems uncommon incorrect in a layered architecture to me below you see the piece of code in question the function call to net nbr lookup i think the other branch is taken between working nonworking some one else pointed out to me that it might be related to joining a network on a certain level on which it does not know the mode of addressing yet but sadly i m could not do much with that information attached you can find the autoconf h from the working variant which still suffers from frequent timeouts but otherwise works and the nonworking variant which does not send anything because of the explained error i hope this provides reasonable information about my configuration if i should can provide addition information please tell me to the final question therefore is is this a bug in calculating the header sizes or is my second configuration at fault
| 0
|
23,360
| 4,932,476,777
|
IssuesEvent
|
2016-11-28 13:49:10
|
Jumpscale/jscockpit
|
https://api.github.com/repos/Jumpscale/jscockpit
|
opened
|
cockpit-doc: JWT Walkthrough
|
type_documentation type_feature
|
## GOAL:
Explain what the **JWT** page is all about
## DESCRIPTION:
Placeholder: https://github.com/Jumpscale/jscockpit/blob/master/docs/walkthrough/JWT/JWT.md
|
1.0
|
cockpit-doc: JWT Walkthrough - ## GOAL:
Explain what the **JWT** page is all about
## DESCRIPTION:
Placeholder: https://github.com/Jumpscale/jscockpit/blob/master/docs/walkthrough/JWT/JWT.md
|
non_process
|
cockpit doc jwt walkthrough goal explain what the jwt page is all about description placeholder
| 0
|
11,894
| 14,689,105,545
|
IssuesEvent
|
2021-01-02 07:31:36
|
yuta252/startlens_web_backend
|
https://api.github.com/repos/yuta252/startlens_web_backend
|
closed
|
プロフィールモデルに画像アップロード機能を追加
|
dev process
|
## 概要
フロントエンドからユーザーのプロフィール情報を操作できるようにProfileモデル(ユーザーのプロフィール情報のCRUD)の作成とCarrierWaveを利用した画像のアップロードを実現する。
## 変更点
- [x] Profile modelの作成
- [x] ProfileController(show, update)の作成
- [x] Carrierwaveを利用したProfileThumbnailUploader(設定ファイル)の作成
- 画像のリサイズ
- Exif情報の削除
- アップロードディレクトリ 及びファイル名の変更
- [x] ProfileモデルによるフロントエンドからのBase64画像をでコード処理する
- Model Concernにおけるcarrierwave_base64_uploader.rbの追加
## 課題
- [ ] 住所情報登録時に非同期でlatitude, longitudeを取得する
- [ ] レーティング情報及び画像認識におけるファイル保存機能の実装
- [ ] 本番環境用にプロフィール画像をS3ストレージにアップロードする設定を追加する
## 参照
- [モデルのデータ型](https://qiita.com/s_tatsuki/items/900d662a905c7e36b3d4)
- [MySQLのデータ型とRailsのマイグレーションファイルのデータ定義の対応まとめ](https://qiita.com/vermilionfog/items/816fa7de1d0213979929)
- [CarrierWave を利用して画像をアップロードする form もしくは API を作成する](https://www.d-wood.com/blog/2017/02/10_8801.html)
- [Exif情報の削除](https://qiita.com/goyachanpuru/items/5939dbc1637e5ea4be74)
- [画像のリサイズ](http://noodles-mtb.hatenablog.com/entry/2013/07/08/151316)
- [base64エンコードされた画像をcarrierwaveに保存する](https://blog.hello-world.jp.net/ruby/2281/)
- [Rails Carrierwave Base64イメージのアップロード](https://www.366service.com/jp/qa/a2bbba3322d32bf3babba89a65839877)
## 備考
- 画像をBase64にエンコードするサイト[Baes64 Encoder](https://www.base64-image.de/)
|
1.0
|
プロフィールモデルに画像アップロード機能を追加 - ## 概要
フロントエンドからユーザーのプロフィール情報を操作できるようにProfileモデル(ユーザーのプロフィール情報のCRUD)の作成とCarrierWaveを利用した画像のアップロードを実現する。
## 変更点
- [x] Profile modelの作成
- [x] ProfileController(show, update)の作成
- [x] Carrierwaveを利用したProfileThumbnailUploader(設定ファイル)の作成
- 画像のリサイズ
- Exif情報の削除
- アップロードディレクトリ 及びファイル名の変更
- [x] ProfileモデルによるフロントエンドからのBase64画像をでコード処理する
- Model Concernにおけるcarrierwave_base64_uploader.rbの追加
## 課題
- [ ] 住所情報登録時に非同期でlatitude, longitudeを取得する
- [ ] レーティング情報及び画像認識におけるファイル保存機能の実装
- [ ] 本番環境用にプロフィール画像をS3ストレージにアップロードする設定を追加する
## 参照
- [モデルのデータ型](https://qiita.com/s_tatsuki/items/900d662a905c7e36b3d4)
- [MySQLのデータ型とRailsのマイグレーションファイルのデータ定義の対応まとめ](https://qiita.com/vermilionfog/items/816fa7de1d0213979929)
- [CarrierWave を利用して画像をアップロードする form もしくは API を作成する](https://www.d-wood.com/blog/2017/02/10_8801.html)
- [Exif情報の削除](https://qiita.com/goyachanpuru/items/5939dbc1637e5ea4be74)
- [画像のリサイズ](http://noodles-mtb.hatenablog.com/entry/2013/07/08/151316)
- [base64エンコードされた画像をcarrierwaveに保存する](https://blog.hello-world.jp.net/ruby/2281/)
- [Rails Carrierwave Base64イメージのアップロード](https://www.366service.com/jp/qa/a2bbba3322d32bf3babba89a65839877)
## 備考
- 画像をBase64にエンコードするサイト[Baes64 Encoder](https://www.base64-image.de/)
|
process
|
プロフィールモデルに画像アップロード機能を追加 概要 フロントエンドからユーザーのプロフィール情報を操作できるようにprofileモデル(ユーザーのプロフィール情報のcrud)の作成とcarrierwaveを利用した画像のアップロードを実現する。 変更点 profile modelの作成 profilecontroller show update の作成 carrierwaveを利用したprofilethumbnailuploader 設定ファイル)の作成 画像のリサイズ exif情報の削除 アップロードディレクトリ 及びファイル名の変更 model concernにおけるcarrierwave uploader rbの追加 課題 住所情報登録時に非同期でlatitude longitudeを取得する レーティング情報及び画像認識におけるファイル保存機能の実装 参照 備考
| 1
|
6,049
| 8,871,244,216
|
IssuesEvent
|
2019-01-11 11:58:29
|
scieloorg/opac_proc
|
https://api.github.com/repos/scieloorg/opac_proc
|
opened
|
Problema na criação dos registros de diferenças de ADD
|
Processamento bug
|
A criação de registros de diferença de ADD tem dado resultado constantes timeouts no MongoDB.
Melhorar a lógica de verificação dos registros que devem ser inseridos no fluxo de ETL.
|
1.0
|
Problema na criação dos registros de diferenças de ADD - A criação de registros de diferença de ADD tem dado resultado constantes timeouts no MongoDB.
Melhorar a lógica de verificação dos registros que devem ser inseridos no fluxo de ETL.
|
process
|
problema na criação dos registros de diferenças de add a criação de registros de diferença de add tem dado resultado constantes timeouts no mongodb melhorar a lógica de verificação dos registros que devem ser inseridos no fluxo de etl
| 1
|
357,185
| 10,603,254,781
|
IssuesEvent
|
2019-10-10 15:38:22
|
wevote/WeVoteServer
|
https://api.github.com/repos/wevote/WeVoteServer
|
opened
|
Send text message via Twilio
|
Difficulty: Medium Priority 1
|
Our new "sms" app in WeVoteServer gets all ready to send a text message. This issue is to get actual sending set up with Twilio. (Dale can provide the developer account with credits.)
The sending should happen in this file:
WeVoteServer/sms/models.py
...in the SMSManager function: send_scheduled_sms_via_twilio
Please add variables for Twilio configuration settings to this file (not including the actual credentials):
WeVoteServer/config/environment_variables-template.json
|
1.0
|
Send text message via Twilio - Our new "sms" app in WeVoteServer gets all ready to send a text message. This issue is to get actual sending set up with Twilio. (Dale can provide the developer account with credits.)
The sending should happen in this file:
WeVoteServer/sms/models.py
...in the SMSManager function: send_scheduled_sms_via_twilio
Please add variables for Twilio configuration settings to this file (not including the actual credentials):
WeVoteServer/config/environment_variables-template.json
|
non_process
|
send text message via twilio our new sms app in wevoteserver gets all ready to send a text message this issue is to get actual sending set up with twilio dale can provide the developer account with credits the sending should happen in this file wevoteserver sms models py in the smsmanager function send scheduled sms via twilio please add variables for twilio configuration settings to this file not including the actual credentials wevoteserver config environment variables template json
| 0
|
298
| 2,732,629,657
|
IssuesEvent
|
2015-04-17 08:05:14
|
tomchristie/django-rest-framework
|
https://api.github.com/repos/tomchristie/django-rest-framework
|
opened
|
Update contribution guidelines.
|
Process
|
I think we should probably slim the CONTRIBUTING.md text right down to 2-5 key points.
Something like:
* Usage questions should be directed to the discussion group.
* Narrow issues down to the the most minimal possible reproducing case before raising them.
There's other possible stuff, but those are probably the two most critical points. The less we say, the more weight what we do say should have.
We can also link to the longer form contribution guidelines.
/cc @kevin-brown @jpadilla @carltongibson @xordoquy
|
1.0
|
Update contribution guidelines. - I think we should probably slim the CONTRIBUTING.md text right down to 2-5 key points.
Something like:
* Usage questions should be directed to the discussion group.
* Narrow issues down to the the most minimal possible reproducing case before raising them.
There's other possible stuff, but those are probably the two most critical points. The less we say, the more weight what we do say should have.
We can also link to the longer form contribution guidelines.
/cc @kevin-brown @jpadilla @carltongibson @xordoquy
|
process
|
update contribution guidelines i think we should probably slim the contributing md text right down to key points something like usage questions should be directed to the discussion group narrow issues down to the the most minimal possible reproducing case before raising them there s other possible stuff but those are probably the two most critical points the less we say the more weight what we do say should have we can also link to the longer form contribution guidelines cc kevin brown jpadilla carltongibson xordoquy
| 1
|
19,661
| 26,021,242,090
|
IssuesEvent
|
2022-12-21 12:50:25
|
bazelbuild/bazel
|
https://api.github.com/repos/bazelbuild/bazel
|
closed
|
Release 6.0.0 - December 2022
|
P1 type: process release team-OSS
|
# Status of Bazel 6.0.0
- Target baseline: 2022-10-24
- Expected release date: 2022-12-19
- [List of release blockers](https://github.com/bazelbuild/bazel/milestone/38)
To report a release-blocking bug, please add a comment with the text `@bazel-io flag` to the issue. A release manager will triage it and add it to the milestone.
<!-- uncomment this line when the release branch is open
To cherry-pick a mainline commit into 6.0.0, simply send a PR against the `release-6.0.0` branch. -->
Task list:
- [x] Pick release baseline
- [x] Create release candidate
- [x] Check downstream projects
- [x] [Create draft release announcement](https://docs.google.com/document/d/1pu2ARPweOCTxPsRR8snoDtkC9R51XWRyBXeiC6Ql5so/edit?usp=sharing)
- [x] ~Send for review the release announcement PR~
- [x] Push the release, notify package maintainers
- [x] Update the documentation
- [x] Push the blog post
- [x] Update the [release page](https://github.com/bazelbuild/bazel/releases/)
|
1.0
|
Release 6.0.0 - December 2022 - # Status of Bazel 6.0.0
- Target baseline: 2022-10-24
- Expected release date: 2022-12-19
- [List of release blockers](https://github.com/bazelbuild/bazel/milestone/38)
To report a release-blocking bug, please add a comment with the text `@bazel-io flag` to the issue. A release manager will triage it and add it to the milestone.
<!-- uncomment this line when the release branch is open
To cherry-pick a mainline commit into 6.0.0, simply send a PR against the `release-6.0.0` branch. -->
Task list:
- [x] Pick release baseline
- [x] Create release candidate
- [x] Check downstream projects
- [x] [Create draft release announcement](https://docs.google.com/document/d/1pu2ARPweOCTxPsRR8snoDtkC9R51XWRyBXeiC6Ql5so/edit?usp=sharing)
- [x] ~Send for review the release announcement PR~
- [x] Push the release, notify package maintainers
- [x] Update the documentation
- [x] Push the blog post
- [x] Update the [release page](https://github.com/bazelbuild/bazel/releases/)
|
process
|
release december status of bazel target baseline expected release date to report a release blocking bug please add a comment with the text bazel io flag to the issue a release manager will triage it and add it to the milestone uncomment this line when the release branch is open to cherry pick a mainline commit into simply send a pr against the release branch task list pick release baseline create release candidate check downstream projects send for review the release announcement pr push the release notify package maintainers update the documentation push the blog post update the
| 1
|
18,913
| 24,857,488,301
|
IssuesEvent
|
2022-10-27 04:35:44
|
NationalSecurityAgency/ghidra
|
https://api.github.com/repos/NationalSecurityAgency/ghidra
|
closed
|
MIPS64: relocation issue
|
Type: Bug Feature: Loader/ELF Feature: Processor/MIPS Status: Internal
|
**Describe the bug**
MIPS64 relocation handler discards the old symbol value resulting in symbols having the EA as their value. This binary is built to load at 0x0, so basically all of my `.rel.dyn` relocations are NULL pointers.
For example readelf shows:
```
000000667788 000000001203 R_MIPS_REL32
Type2: R_MIPS_64
Type3: R_MIPS_NONE
```
with file data of: `000000667788: 00 00 00 00 XX YY ZZ WW`
and ghidra loads: `000000667788: 00 00 00 00 00 00 00 00`
`R_MIPS_REL32` handling has saveValue true and `symbolIndex` 0x0, resulting in symbolValue 0x0
Fall through in `R_MIPS_32` extractAddend() false
`R_MIPS_64` handling has saveValue false, addend 0x0, and symbolValue 0x0 resulting in overwriting the previous value with 0x0
**Environment (please complete the following information):**
- OS: 20.04
- Java Version: 17.0.4
- Ghidra Version: 10.2-DEV b88cf85d5cef4e766e6093223b6320ee5ae7a113
- Ghidra Origin: eclipse
|
1.0
|
MIPS64: relocation issue - **Describe the bug**
MIPS64 relocation handler discards the old symbol value resulting in symbols having the EA as their value. This binary is built to load at 0x0, so basically all of my `.rel.dyn` relocations are NULL pointers.
For example readelf shows:
```
000000667788 000000001203 R_MIPS_REL32
Type2: R_MIPS_64
Type3: R_MIPS_NONE
```
with file data of: `000000667788: 00 00 00 00 XX YY ZZ WW`
and ghidra loads: `000000667788: 00 00 00 00 00 00 00 00`
`R_MIPS_REL32` handling has saveValue true and `symbolIndex` 0x0, resulting in symbolValue 0x0
Fall through in `R_MIPS_32` extractAddend() false
`R_MIPS_64` handling has saveValue false, addend 0x0, and symbolValue 0x0 resulting in overwriting the previous value with 0x0
**Environment (please complete the following information):**
- OS: 20.04
- Java Version: 17.0.4
- Ghidra Version: 10.2-DEV b88cf85d5cef4e766e6093223b6320ee5ae7a113
- Ghidra Origin: eclipse
|
process
|
relocation issue describe the bug relocation handler discards the old symbol value resulting in symbols having the ea as their value this binary is built to load at so basically all of my rel dyn relocations are null pointers for example readelf shows r mips r mips r mips none with file data of xx yy zz ww and ghidra loads r mips handling has savevalue true and symbolindex resulting in symbolvalue fall through in r mips extractaddend false r mips handling has savevalue false addend and symbolvalue resulting in overwriting the previous value with environment please complete the following information os java version ghidra version dev ghidra origin eclipse
| 1
|
87,976
| 3,770,027,413
|
IssuesEvent
|
2016-03-16 13:13:03
|
gstreamer-java/gstreamer-java
|
https://api.github.com/repos/gstreamer-java/gstreamer-java
|
closed
|
JNA Direct Call Mapping
|
auto-migrated Priority-Medium Type-Enhancement
|
```
Have you considered moving to the latest JNA (3.2.3) and using direct call
mapping (https://jna.dev.java.net/#direct) where possible?
```
Original issue reported on code.google.com by `david.g.hoyt` on 22 Oct 2009 at 4:59
|
1.0
|
JNA Direct Call Mapping - ```
Have you considered moving to the latest JNA (3.2.3) and using direct call
mapping (https://jna.dev.java.net/#direct) where possible?
```
Original issue reported on code.google.com by `david.g.hoyt` on 22 Oct 2009 at 4:59
|
non_process
|
jna direct call mapping have you considered moving to the latest jna and using direct call mapping where possible original issue reported on code google com by david g hoyt on oct at
| 0
|
15,123
| 18,853,197,629
|
IssuesEvent
|
2021-11-12 00:30:51
|
kwongustj/project_Silbi
|
https://api.github.com/repos/kwongustj/project_Silbi
|
opened
|
프로젝트 진행사항 - 김가람
|
project_process
|
지금까지 현대백화점 홈페이지에 있는 점포, 편의시설 정보 크롤링과 네이버 리뷰 평균 평점 크롤링을 하였습니다.
진행 사항은 계속해서 올려 공유하겠습니다.
|
1.0
|
프로젝트 진행사항 - 김가람 - 지금까지 현대백화점 홈페이지에 있는 점포, 편의시설 정보 크롤링과 네이버 리뷰 평균 평점 크롤링을 하였습니다.
진행 사항은 계속해서 올려 공유하겠습니다.
|
process
|
프로젝트 진행사항 김가람 지금까지 현대백화점 홈페이지에 있는 점포 편의시설 정보 크롤링과 네이버 리뷰 평균 평점 크롤링을 하였습니다 진행 사항은 계속해서 올려 공유하겠습니다
| 1
|
17,692
| 10,757,044,500
|
IssuesEvent
|
2019-10-31 12:31:38
|
terraform-providers/terraform-provider-azurerm
|
https://api.github.com/repos/terraform-providers/terraform-provider-azurerm
|
closed
|
Hyphens (incorrectly) not allowed in Azure Bastion name
|
bug service/bastion
|
### Community Note
* Please vote on this issue by adding a 👍 [reaction](https://blog.github.com/2016-03-10-add-reactions-to-pull-requests-issues-and-comments/) to the original issue to help the community and maintainers prioritize this request
* Please do not leave "+1" or "me too" comments, they generate extra noise for issue followers and do not help prioritize the request
* If you are interested in working on this issue or have submitted a pull request, please leave a comment
### Terraform (and AzureRM Provider) Version
1.36.0
### Affected Resource(s)
azurerm_bastion_host
### Terraform Configuration Files
```
resource "azurerm_bastion_host" "bastion" {
name = "myname-bastion-myenvironment"
location = "${azurerm_resource_group.rg.location}"
resource_group_name = "${azurerm_resource_group.rg.name}"
ip_configuration {
name = "configuration"
subnet_id = "${azurerm_subnet.bastion.id}"
public_ip_address_id = "${azurerm_public_ip.bhpip.id}"
}
}
```
### Debug Output
### Panic Output
### Expected Behavior
Resource name should be **myname-bastion-myenvironment** which is allowed in the portal
### Actual Behavior
This error is generated: **Error: lowercase letters, highercase letters numbers only are allowed in "name": "myname-bastion-myenvironment"**
### Steps to Reproduce
Create a bastion that contains hyphens as part of the name
### Important Factoids
### References
|
1.0
|
Hyphens (incorrectly) not allowed in Azure Bastion name - ### Community Note
* Please vote on this issue by adding a 👍 [reaction](https://blog.github.com/2016-03-10-add-reactions-to-pull-requests-issues-and-comments/) to the original issue to help the community and maintainers prioritize this request
* Please do not leave "+1" or "me too" comments, they generate extra noise for issue followers and do not help prioritize the request
* If you are interested in working on this issue or have submitted a pull request, please leave a comment
### Terraform (and AzureRM Provider) Version
1.36.0
### Affected Resource(s)
azurerm_bastion_host
### Terraform Configuration Files
```
resource "azurerm_bastion_host" "bastion" {
name = "myname-bastion-myenvironment"
location = "${azurerm_resource_group.rg.location}"
resource_group_name = "${azurerm_resource_group.rg.name}"
ip_configuration {
name = "configuration"
subnet_id = "${azurerm_subnet.bastion.id}"
public_ip_address_id = "${azurerm_public_ip.bhpip.id}"
}
}
```
### Debug Output
### Panic Output
### Expected Behavior
Resource name should be **myname-bastion-myenvironment** which is allowed in the portal
### Actual Behavior
This error is generated: **Error: lowercase letters, highercase letters numbers only are allowed in "name": "myname-bastion-myenvironment"**
### Steps to Reproduce
Create a bastion that contains hyphens as part of the name
### Important Factoids
### References
|
non_process
|
hyphens incorrectly not allowed in azure bastion name community note please vote on this issue by adding a 👍 to the original issue to help the community and maintainers prioritize this request please do not leave or me too comments they generate extra noise for issue followers and do not help prioritize the request if you are interested in working on this issue or have submitted a pull request please leave a comment terraform and azurerm provider version affected resource s azurerm bastion host terraform configuration files resource azurerm bastion host bastion name myname bastion myenvironment location azurerm resource group rg location resource group name azurerm resource group rg name ip configuration name configuration subnet id azurerm subnet bastion id public ip address id azurerm public ip bhpip id debug output panic output expected behavior resource name should be myname bastion myenvironment which is allowed in the portal actual behavior this error is generated error lowercase letters highercase letters numbers only are allowed in name myname bastion myenvironment steps to reproduce create a bastion that contains hyphens as part of the name important factoids references
| 0
|
7,842
| 11,013,770,906
|
IssuesEvent
|
2019-12-04 21:12:29
|
googleapis/google-cloud-java
|
https://api.github.com/repos/googleapis/google-cloud-java
|
closed
|
README.md.tmpl should reference the BOM
|
semver: patch type: process
|
google-cloud-java/utilities/templates/README.md.tmpl
This section needs to be expanded
Quickstart
----------
[//]: # ({x-version-update-start:google-cloud-{{service}}:released})
If you are using Maven, add this to your pom.xml file
```xml
<dependency>
<groupId>com.google.cloud</groupId>
<artifactId>google-cloud-{{service}}</artifactId>
<version>{{version}}</version>
</dependency>
```
|
1.0
|
README.md.tmpl should reference the BOM - google-cloud-java/utilities/templates/README.md.tmpl
This section needs to be expanded
Quickstart
----------
[//]: # ({x-version-update-start:google-cloud-{{service}}:released})
If you are using Maven, add this to your pom.xml file
```xml
<dependency>
<groupId>com.google.cloud</groupId>
<artifactId>google-cloud-{{service}}</artifactId>
<version>{{version}}</version>
</dependency>
```
|
process
|
readme md tmpl should reference the bom google cloud java utilities templates readme md tmpl this section needs to be expanded quickstart x version update start google cloud service released if you are using maven add this to your pom xml file xml com google cloud google cloud service version
| 1
|
1,541
| 2,874,925,514
|
IssuesEvent
|
2015-06-09 03:00:05
|
piwik/piwik
|
https://api.github.com/repos/piwik/piwik
|
closed
|
Ecommerce & Goals overview: rename categories in left selector
|
c: Usability Enhancement
|
The goal of this issue is to rename Goals and Ecommerce left menu selector categories so they make sense.
## Goals overview
* rename View goals by Referrers -> Goals by Referrers
* rename View goals by Visit -> Goals engagement
* rename View goals by Visitors -> Goals by User location
* rename View goals by Visits Summary -> Goals by User attribute
* move the "Custom Variables" report under 'Goals by User attribute'
## Ecommerce overview
* rename View sales by Referrers -> Sales by Referrers
* rename View sales by Visit -> Sales engagement
* rename View sales by Visitors -> Sales by User location
* rename View sales by Visits Summary -> Sales by User attribute
* move the "Custom Variables" report under "Sales by User attribute"
## Current design
### Goals

### Ecommerce

|
True
|
Ecommerce & Goals overview: rename categories in left selector - The goal of this issue is to rename Goals and Ecommerce left menu selector categories so they make sense.
## Goals overview
* rename View goals by Referrers -> Goals by Referrers
* rename View goals by Visit -> Goals engagement
* rename View goals by Visitors -> Goals by User location
* rename View goals by Visits Summary -> Goals by User attribute
* move the "Custom Variables" report under 'Goals by User attribute'
## Ecommerce overview
* rename View sales by Referrers -> Sales by Referrers
* rename View sales by Visit -> Sales engagement
* rename View sales by Visitors -> Sales by User location
* rename View sales by Visits Summary -> Sales by User attribute
* move the "Custom Variables" report under "Sales by User attribute"
## Current design
### Goals

### Ecommerce

|
non_process
|
ecommerce goals overview rename categories in left selector the goal of this issue is to rename goals and ecommerce left menu selector categories so they make sense goals overview rename view goals by referrers goals by referrers rename view goals by visit goals engagement rename view goals by visitors goals by user location rename view goals by visits summary goals by user attribute move the custom variables report under goals by user attribute ecommerce overview rename view sales by referrers sales by referrers rename view sales by visit sales engagement rename view sales by visitors sales by user location rename view sales by visits summary sales by user attribute move the custom variables report under sales by user attribute current design goals ecommerce
| 0
|
27,360
| 21,656,076,291
|
IssuesEvent
|
2022-05-06 14:14:01
|
dotnet/runtime
|
https://api.github.com/repos/dotnet/runtime
|
closed
|
[release/3.1] Add area owners file to ensure that there is a packaging review on the servicing-approved prs
|
area-Infrastructure-libraries
|
The 3.1 prs require more work around shipping out of band packages than 5.0 and 6.0. So it would be nice to always have a packaging approval before prs getting merged.
This could be done by adding a code-owners file in the repo.
cc @ViktorHofer @ericstj @safern
|
1.0
|
[release/3.1] Add area owners file to ensure that there is a packaging review on the servicing-approved prs - The 3.1 prs require more work around shipping out of band packages than 5.0 and 6.0. So it would be nice to always have a packaging approval before prs getting merged.
This could be done by adding a code-owners file in the repo.
cc @ViktorHofer @ericstj @safern
|
non_process
|
add area owners file to ensure that there is a packaging review on the servicing approved prs the prs require more work around shipping out of band packages than and so it would be nice to always have a packaging approval before prs getting merged this could be done by adding a code owners file in the repo cc viktorhofer ericstj safern
| 0
|
14,256
| 17,192,474,616
|
IssuesEvent
|
2021-07-16 13:01:15
|
bazelbuild/bazel
|
https://api.github.com/repos/bazelbuild/bazel
|
closed
|
Status of Bazel 5.0.0-pre.20210708.4
|
P1 release team-XProduct type: process
|
- Expected release date: 2021-07-16
Task list:
- [x] Pick release baseline: [ca1d20fd](https://github.com/bazelbuild/bazel/commit/ca1d20fdfa95dad533c64aba08ba9d7d98be41b7) with cherrypicks [802901e6](https://github.com/bazelbuild/bazel/commit/802901e697015ee6a56ac36cd0000c1079207d12) [aa768ada](https://github.com/bazelbuild/bazel/commit/aa768ada9ef6bcd8de878a5ca2dbd9932f0868fc) [4bcf2e83](https://github.com/bazelbuild/bazel/commit/4bcf2e83c5cb4f459aae815b38f1edd823286a29) [b27fd22f](https://github.com/bazelbuild/bazel/commit/b27fd22f1bd1e29ec2475a3935c9004cc14713bf) [5d926349](https://github.com/bazelbuild/bazel/commit/5d926349949060c5a3e6699550fa1ac64761901e)
- [x] Create release candidate: https://releases.bazel.build/5.0.0/rolling/5.0.0-pre.20210708.4rc1/index.html
- [x] Post-submit: https://buildkite.com/bazel/bazel-bazel/builds/16815
- [x] Push the release: https://releases.bazel.build/5.0.0/rolling/5.0.0-pre.20210708.4/index.html
- [x] Update the [release page](https://github.com/bazelbuild/bazel/releases/)
|
1.0
|
Status of Bazel 5.0.0-pre.20210708.4 -
- Expected release date: 2021-07-16
Task list:
- [x] Pick release baseline: [ca1d20fd](https://github.com/bazelbuild/bazel/commit/ca1d20fdfa95dad533c64aba08ba9d7d98be41b7) with cherrypicks [802901e6](https://github.com/bazelbuild/bazel/commit/802901e697015ee6a56ac36cd0000c1079207d12) [aa768ada](https://github.com/bazelbuild/bazel/commit/aa768ada9ef6bcd8de878a5ca2dbd9932f0868fc) [4bcf2e83](https://github.com/bazelbuild/bazel/commit/4bcf2e83c5cb4f459aae815b38f1edd823286a29) [b27fd22f](https://github.com/bazelbuild/bazel/commit/b27fd22f1bd1e29ec2475a3935c9004cc14713bf) [5d926349](https://github.com/bazelbuild/bazel/commit/5d926349949060c5a3e6699550fa1ac64761901e)
- [x] Create release candidate: https://releases.bazel.build/5.0.0/rolling/5.0.0-pre.20210708.4rc1/index.html
- [x] Post-submit: https://buildkite.com/bazel/bazel-bazel/builds/16815
- [x] Push the release: https://releases.bazel.build/5.0.0/rolling/5.0.0-pre.20210708.4/index.html
- [x] Update the [release page](https://github.com/bazelbuild/bazel/releases/)
|
process
|
status of bazel pre expected release date task list pick release baseline with cherrypicks create release candidate post submit push the release update the
| 1
|
241,301
| 26,256,743,563
|
IssuesEvent
|
2023-01-06 01:53:44
|
rgordon95/conFusionNode3
|
https://api.github.com/repos/rgordon95/conFusionNode3
|
opened
|
CVE-2021-41580 (Medium) detected in passport-oauth2-1.5.0.tgz
|
security vulnerability
|
## CVE-2021-41580 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>passport-oauth2-1.5.0.tgz</b></p></summary>
<p>OAuth 2.0 authentication strategy for Passport.</p>
<p>Library home page: <a href="https://registry.npmjs.org/passport-oauth2/-/passport-oauth2-1.5.0.tgz">https://registry.npmjs.org/passport-oauth2/-/passport-oauth2-1.5.0.tgz</a></p>
<p>Path to dependency file: /conFusionNode3/package.json</p>
<p>Path to vulnerable library: /node_modules/passport-oauth2/package.json</p>
<p>
Dependency Hierarchy:
- passport-facebook-token-3.3.0.tgz (Root Library)
- passport-oauth-1.0.0.tgz
- :x: **passport-oauth2-1.5.0.tgz** (Vulnerable Library)
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
** DISPUTED ** The passport-oauth2 package before 1.6.1 for Node.js mishandles the error condition of failure to obtain an access token. This is exploitable in certain use cases where an OAuth identity provider uses an HTTP 200 status code for authentication-failure error reports, and an application grants authorization upon simply receiving the access token (i.e., does not try to use the token). NOTE: the passport-oauth2 vendor does not consider this a passport-oauth2 vulnerability.
<p>Publish Date: 2021-09-27
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2021-41580>CVE-2021-41580</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.3</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: Low
- Integrity Impact: None
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-41580">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-41580</a></p>
<p>Release Date: 2021-09-27</p>
<p>Fix Resolution (passport-oauth2): 1.6.1</p>
<p>Direct dependency fix Resolution (passport-facebook-token): 4.0.0</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
True
|
CVE-2021-41580 (Medium) detected in passport-oauth2-1.5.0.tgz - ## CVE-2021-41580 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>passport-oauth2-1.5.0.tgz</b></p></summary>
<p>OAuth 2.0 authentication strategy for Passport.</p>
<p>Library home page: <a href="https://registry.npmjs.org/passport-oauth2/-/passport-oauth2-1.5.0.tgz">https://registry.npmjs.org/passport-oauth2/-/passport-oauth2-1.5.0.tgz</a></p>
<p>Path to dependency file: /conFusionNode3/package.json</p>
<p>Path to vulnerable library: /node_modules/passport-oauth2/package.json</p>
<p>
Dependency Hierarchy:
- passport-facebook-token-3.3.0.tgz (Root Library)
- passport-oauth-1.0.0.tgz
- :x: **passport-oauth2-1.5.0.tgz** (Vulnerable Library)
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
** DISPUTED ** The passport-oauth2 package before 1.6.1 for Node.js mishandles the error condition of failure to obtain an access token. This is exploitable in certain use cases where an OAuth identity provider uses an HTTP 200 status code for authentication-failure error reports, and an application grants authorization upon simply receiving the access token (i.e., does not try to use the token). NOTE: the passport-oauth2 vendor does not consider this a passport-oauth2 vulnerability.
<p>Publish Date: 2021-09-27
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2021-41580>CVE-2021-41580</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.3</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: Low
- Integrity Impact: None
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-41580">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-41580</a></p>
<p>Release Date: 2021-09-27</p>
<p>Fix Resolution (passport-oauth2): 1.6.1</p>
<p>Direct dependency fix Resolution (passport-facebook-token): 4.0.0</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
non_process
|
cve medium detected in passport tgz cve medium severity vulnerability vulnerable library passport tgz oauth authentication strategy for passport library home page a href path to dependency file package json path to vulnerable library node modules passport package json dependency hierarchy passport facebook token tgz root library passport oauth tgz x passport tgz vulnerable library vulnerability details disputed the passport package before for node js mishandles the error condition of failure to obtain an access token this is exploitable in certain use cases where an oauth identity provider uses an http status code for authentication failure error reports and an application grants authorization upon simply receiving the access token i e does not try to use the token note the passport vendor does not consider this a passport vulnerability publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact low integrity impact none availability impact none for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution passport direct dependency fix resolution passport facebook token step up your open source security game with mend
| 0
|
106,444
| 9,134,651,765
|
IssuesEvent
|
2019-02-26 00:45:39
|
Microsoft/AzureStorageExplorer
|
https://api.github.com/repos/Microsoft/AzureStorageExplorer
|
closed
|
Break lease error message displays wrong number of spaces for blob name starting with multiple spaces
|
:gear: blobs 🧪 testing
|
**Storage Explorer Version**: 1.7.0/20190221.2
**Platform/OS Version**: Windows 10/ MacOS High Sierra/ Linux Ubuntu 16.04
**Architecture**: ia32
**Commit**: 81c1ae6b
**Regression From**: Not a regression
#### Steps to Reproduce: ####
1. Expand one non adls gen2 account -> 'Blob Containers'.
2. Open one blob container which contains one blob at least.
3. Rename one blob starting with multiple spaces -> Acquire lease for the renamed blob.
4. Right click the leased blob then select 'Break Lease' -> Enter an invalid name -> Check the error message.
#### Expected Experience: ####
The error message shows a matched numbers of spaces for the blob.
#### Actual Experience: ####
The error message shows one space if you inputted an invalid name for the blob. So we have no way to break the lease according to the error message.
The renamed blob:


|
1.0
|
Break lease error message displays wrong number of spaces for blob name starting with multiple spaces - **Storage Explorer Version**: 1.7.0/20190221.2
**Platform/OS Version**: Windows 10/ MacOS High Sierra/ Linux Ubuntu 16.04
**Architecture**: ia32
**Commit**: 81c1ae6b
**Regression From**: Not a regression
#### Steps to Reproduce: ####
1. Expand one non adls gen2 account -> 'Blob Containers'.
2. Open one blob container which contains one blob at least.
3. Rename one blob starting with multiple spaces -> Acquire lease for the renamed blob.
4. Right click the leased blob then select 'Break Lease' -> Enter an invalid name -> Check the error message.
#### Expected Experience: ####
The error message shows a matched numbers of spaces for the blob.
#### Actual Experience: ####
The error message shows one space if you inputted an invalid name for the blob. So we have no way to break the lease according to the error message.
The renamed blob:


|
non_process
|
break lease error message displays wrong number of spaces for blob name starting with multiple spaces storage explorer version platform os version windows macos high sierra linux ubuntu architecture commit regression from not a regression steps to reproduce expand one non adls account blob containers open one blob container which contains one blob at least rename one blob starting with multiple spaces acquire lease for the renamed blob right click the leased blob then select break lease enter an invalid name check the error message expected experience the error message shows a matched numbers of spaces for the blob actual experience the error message shows one space if you inputted an invalid name for the blob so we have no way to break the lease according to the error message the renamed blob
| 0
|
9,441
| 12,425,709,694
|
IssuesEvent
|
2020-05-24 17:35:54
|
qgis/QGIS
|
https://api.github.com/repos/qgis/QGIS
|
closed
|
GRASS processing: missing parameter name
|
Bug Easy fix Processing
|
<!--
Bug fixing and feature development is a community responsibility, and not the responsibility of the QGIS project alone.
If this bug report or feature request is high-priority for you, we suggest engaging a QGIS developer or support organisation and financially sponsoring a fix
Checklist before submitting
- [ ] Search through existing issue reports and gis.stackexchange.com to check whether the issue already exists
- [ ] Test with a [clean new user profile](https://docs.qgis.org/testing/en/docs/user_manual/introduction/qgis_configuration.html?highlight=profile#working-with-user-profiles).
- [ ] Create a light and self-contained sample dataset and project file which demonstrates the issue
-->
**Describe the bug**
<!-- A clear and concise description of what the bug is. -->
(This issue was originally reported at: https://github.com/OSGeo/grass/issues/643).
A few GRASS processing commands such as `r.distance`, `r.transect`, `r.category.out`, which have no original parameter named 'output' and outputs in standard output, and intended QGIS processing output would be a \*.txt file. These cases fail with a
```
ERROR: r.distance: Sorry, <output> is not a valid parameter
```
message.
The command param is described as (for e.g. python/plugins/processing/algs/grass7/description/r.distance.txt):
```
QgsProcessingParameterFileDestination|output|Distance|Txt files (*.txt)|None|False
```
but this case is not handled as the case is for html output.
Changing the line to:
```
QgsProcessingParameterFileDestination|html|Distance|Html files (*.html)|distance.html|False
```
just for testing purposes, succeeds.
**QGIS and OS versions**
<!-- In the QGIS Help menu -> About, click in the table, Ctrl+A and then Ctrl+C. Finally paste here -->
QGIS version | 3.10.5-A Coruña | QGIS code revision | 984615fe1e
-- | -- | -- | --
Compiled against Qt | 5.12.3 | Running against Qt | 5.12.3
Compiled against GDAL/OGR | 2.4.1 | Running against GDAL/OGR | 2.4.1
Compiled against GEOS | 3.7.2-CAPI-1.11.2 | Running against GEOS | 3.7.2-CAPI-1.11.2 b55d2125
Compiled against SQLite | 3.28.0 | Running against SQLite | 3.28.0
PostgreSQL Client Version | 11.3 | SpatiaLite Version | 4.3.0a
QWT Version | 6.1.4 | QScintilla2 Version | 2.11.1
Compiled against PROJ | 5.2.0 | Running against PROJ | Rel. 5.2.0, September 15th, 2018
OS Version | macOS Mojave (10.14)
Active python plugins | firstaid; VectorBender; QPackage; ImportPhotos; pluginbuilder3; QuickWKT; noktools; remotedebug; openlayers_plugin; plugin_reloader; processing; db_manager; MetaSearch
|
1.0
|
GRASS processing: missing parameter name - <!--
Bug fixing and feature development is a community responsibility, and not the responsibility of the QGIS project alone.
If this bug report or feature request is high-priority for you, we suggest engaging a QGIS developer or support organisation and financially sponsoring a fix
Checklist before submitting
- [ ] Search through existing issue reports and gis.stackexchange.com to check whether the issue already exists
- [ ] Test with a [clean new user profile](https://docs.qgis.org/testing/en/docs/user_manual/introduction/qgis_configuration.html?highlight=profile#working-with-user-profiles).
- [ ] Create a light and self-contained sample dataset and project file which demonstrates the issue
-->
**Describe the bug**
<!-- A clear and concise description of what the bug is. -->
(This issue was originally reported at: https://github.com/OSGeo/grass/issues/643).
A few GRASS processing commands such as `r.distance`, `r.transect`, `r.category.out`, which have no original parameter named 'output' and outputs in standard output, and intended QGIS processing output would be a \*.txt file. These cases fail with a
```
ERROR: r.distance: Sorry, <output> is not a valid parameter
```
message.
The command param is described as (for e.g. python/plugins/processing/algs/grass7/description/r.distance.txt):
```
QgsProcessingParameterFileDestination|output|Distance|Txt files (*.txt)|None|False
```
but this case is not handled as the case is for html output.
Changing the line to:
```
QgsProcessingParameterFileDestination|html|Distance|Html files (*.html)|distance.html|False
```
just for testing purposes, succeeds.
**QGIS and OS versions**
<!-- In the QGIS Help menu -> About, click in the table, Ctrl+A and then Ctrl+C. Finally paste here -->
QGIS version | 3.10.5-A Coruña | QGIS code revision | 984615fe1e
-- | -- | -- | --
Compiled against Qt | 5.12.3 | Running against Qt | 5.12.3
Compiled against GDAL/OGR | 2.4.1 | Running against GDAL/OGR | 2.4.1
Compiled against GEOS | 3.7.2-CAPI-1.11.2 | Running against GEOS | 3.7.2-CAPI-1.11.2 b55d2125
Compiled against SQLite | 3.28.0 | Running against SQLite | 3.28.0
PostgreSQL Client Version | 11.3 | SpatiaLite Version | 4.3.0a
QWT Version | 6.1.4 | QScintilla2 Version | 2.11.1
Compiled against PROJ | 5.2.0 | Running against PROJ | Rel. 5.2.0, September 15th, 2018
OS Version | macOS Mojave (10.14)
Active python plugins | firstaid; VectorBender; QPackage; ImportPhotos; pluginbuilder3; QuickWKT; noktools; remotedebug; openlayers_plugin; plugin_reloader; processing; db_manager; MetaSearch
|
process
|
grass processing missing parameter name bug fixing and feature development is a community responsibility and not the responsibility of the qgis project alone if this bug report or feature request is high priority for you we suggest engaging a qgis developer or support organisation and financially sponsoring a fix checklist before submitting search through existing issue reports and gis stackexchange com to check whether the issue already exists test with a create a light and self contained sample dataset and project file which demonstrates the issue describe the bug this issue was originally reported at a few grass processing commands such as r distance r transect r category out which have no original parameter named output and outputs in standard output and intended qgis processing output would be a txt file these cases fail with a error r distance sorry is not a valid parameter message the command param is described as for e g python plugins processing algs description r distance txt qgsprocessingparameterfiledestination output distance txt files txt none false but this case is not handled as the case is for html output changing the line to qgsprocessingparameterfiledestination html distance html files html distance html false just for testing purposes succeeds qgis and os versions about click in the table ctrl a and then ctrl c finally paste here qgis version a coruña qgis code revision compiled against qt running against qt compiled against gdal ogr running against gdal ogr compiled against geos capi running against geos capi compiled against sqlite running against sqlite postgresql client version spatialite version qwt version version compiled against proj running against proj rel september os version macos mojave active python plugins firstaid vectorbender qpackage importphotos quickwkt noktools remotedebug openlayers plugin plugin reloader processing db manager metasearch
| 1
|
2,433
| 5,215,845,791
|
IssuesEvent
|
2017-01-26 07:53:47
|
AllenFang/react-bootstrap-table
|
https://api.github.com/repos/AllenFang/react-bootstrap-table
|
closed
|
Please give `validator` access to row data
|
inprocess
|
Validator currently only has access to cell value
It would be useful for when rows have different types of values to be able to use different validators depending on other values in the row
```js
const data = [
{type: 'name', value: 'John Snow'},
{type: 'email', value: 'johnsnow@example.com'},
// ...etc
];
```
```js
const data = [
{type: 'start_date', value: someDate},
{type: 'end_date', value: someDateThatShouldBeAfterStartDate},
// ...etc
];
```
|
1.0
|
Please give `validator` access to row data - Validator currently only has access to cell value
It would be useful for when rows have different types of values to be able to use different validators depending on other values in the row
```js
const data = [
{type: 'name', value: 'John Snow'},
{type: 'email', value: 'johnsnow@example.com'},
// ...etc
];
```
```js
const data = [
{type: 'start_date', value: someDate},
{type: 'end_date', value: someDateThatShouldBeAfterStartDate},
// ...etc
];
```
|
process
|
please give validator access to row data validator currently only has access to cell value it would be useful for when rows have different types of values to be able to use different validators depending on other values in the row js const data type name value john snow type email value johnsnow example com etc js const data type start date value somedate type end date value somedatethatshouldbeafterstartdate etc
| 1
|
68,311
| 8,248,564,490
|
IssuesEvent
|
2018-09-11 18:49:23
|
dojo/framework
|
https://api.github.com/repos/dojo/framework
|
closed
|
[Projector#merge] Existing CSS classes are not preserved when topmost render toggles rendering
|
working as designed
|
@jcfranco commented on [Mon May 07 2018](https://github.com/dojo/widget-core/issues/923)
**Bug**
When using `Projector#merge`, if the topmost `render` toggles rendering (returns valid VDOM or undefined between renders), existing CSS classes are not preserved on subsequent renders:
Package Version: 2.0.0
**Code**
```html
<!DOCTYPE html>
<html lang="en">
<head>
<title>Preserve CSS classes when merging</title>
</head>
<body>
<div id="target" class="preserve-me" />
</body>
</html>
```
```ts
import { ProjectorMixin } from '@dojo/widget-core/mixins/Projector';
import App from './widgets/App';
let visible = true;
const Projector = ProjectorMixin(App);
const projector = new Projector();
projector.setProperties({ visible });
projector.merge(document.getElementById('target')!);
setInterval(() => {
visible = !visible;
projector.setProperties({ visible });
}, 1000);
```
```ts
export default class App extends WidgetBase<AppProperties> {
protected render() {
const { visible } = this.properties;
return visible ?
v('div', { classes: css.app }, [
v('div', [
'hello'
])
]) :
null;
}
}
```
[Demo](https://jcfranco.github.io/dojo-2-test-cases/merge-not-preserving-original-classes/output/dist/)
**Expected behavior:**
`preserve-me` class is preserved whenever the app is rendered.
**Actual behavior:**
`preserve-me` class is dropped when the app is re-rendered (1st render is 👌).
|
1.0
|
[Projector#merge] Existing CSS classes are not preserved when topmost render toggles rendering - @jcfranco commented on [Mon May 07 2018](https://github.com/dojo/widget-core/issues/923)
**Bug**
When using `Projector#merge`, if the topmost `render` toggles rendering (returns valid VDOM or undefined between renders), existing CSS classes are not preserved on subsequent renders:
Package Version: 2.0.0
**Code**
```html
<!DOCTYPE html>
<html lang="en">
<head>
<title>Preserve CSS classes when merging</title>
</head>
<body>
<div id="target" class="preserve-me" />
</body>
</html>
```
```ts
import { ProjectorMixin } from '@dojo/widget-core/mixins/Projector';
import App from './widgets/App';
let visible = true;
const Projector = ProjectorMixin(App);
const projector = new Projector();
projector.setProperties({ visible });
projector.merge(document.getElementById('target')!);
setInterval(() => {
visible = !visible;
projector.setProperties({ visible });
}, 1000);
```
```ts
export default class App extends WidgetBase<AppProperties> {
protected render() {
const { visible } = this.properties;
return visible ?
v('div', { classes: css.app }, [
v('div', [
'hello'
])
]) :
null;
}
}
```
[Demo](https://jcfranco.github.io/dojo-2-test-cases/merge-not-preserving-original-classes/output/dist/)
**Expected behavior:**
`preserve-me` class is preserved whenever the app is rendered.
**Actual behavior:**
`preserve-me` class is dropped when the app is re-rendered (1st render is 👌).
|
non_process
|
existing css classes are not preserved when topmost render toggles rendering jcfranco commented on bug when using projector merge if the topmost render toggles rendering returns valid vdom or undefined between renders existing css classes are not preserved on subsequent renders package version code html preserve css classes when merging ts import projectormixin from dojo widget core mixins projector import app from widgets app let visible true const projector projectormixin app const projector new projector projector setproperties visible projector merge document getelementbyid target setinterval visible visible projector setproperties visible ts export default class app extends widgetbase protected render const visible this properties return visible v div classes css app v div hello null expected behavior preserve me class is preserved whenever the app is rendered actual behavior preserve me class is dropped when the app is re rendered render is 👌
| 0
|
7,584
| 10,696,317,830
|
IssuesEvent
|
2019-10-23 14:32:41
|
aiidateam/aiida-core
|
https://api.github.com/repos/aiidateam/aiida-core
|
closed
|
The `is_process_function` property of process functions is broken
|
priority/nice-to-have topic/engine topic/processes type/bug
|
When called, it will simply return the property and not actually call it. This does not cause problems but only by accident. The `aiida.engine.utils.is_process_function` will not return a boolean in this case, but just the property. When used in a conditional this still works because a property is "truthy".
|
1.0
|
The `is_process_function` property of process functions is broken - When called, it will simply return the property and not actually call it. This does not cause problems but only by accident. The `aiida.engine.utils.is_process_function` will not return a boolean in this case, but just the property. When used in a conditional this still works because a property is "truthy".
|
process
|
the is process function property of process functions is broken when called it will simply return the property and not actually call it this does not cause problems but only by accident the aiida engine utils is process function will not return a boolean in this case but just the property when used in a conditional this still works because a property is truthy
| 1
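The aiida record above describes a `property` object being truthy when the property is accessed without actually calling its getter. A minimal sketch of that failure mode (a stand-in class, not aiida's actual code):

```python
class ProcessFunctionLike:
    # Minimal stand-in; not aiida's actual process-function class.
    @property
    def is_process_function(self):
        return True

# Accessing the property on the *class* yields the property object itself,
# not the boolean from the getter -- and a property object is always truthy,
# which is why the bug went unnoticed in conditionals.
attr = ProcessFunctionLike.is_process_function
print(isinstance(attr, property))  # True
print(bool(attr))                  # True, regardless of what the getter returns
```

Accessed on an *instance* (`ProcessFunctionLike().is_process_function`) the getter does run and the boolean comes back, which is the behavior the record says was expected.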
|
36,595
| 2,803,379,900
|
IssuesEvent
|
2015-05-14 04:43:19
|
JukkaL/mypy
|
https://api.github.com/repos/JukkaL/mypy
|
closed
|
Cannot determine type of a function with a no_type_check decorator
|
bug priority
|
Mypy complains if I call a `no_type_check` function above the definition:
```
import typing
def f() -> None:
foo() # Error: Cannot determine type of 'foo'
@typing.no_type_check
def foo(x: {1:2}) -> [1]:
1 + 'x'
```
We should detect `no_type_check` during semantic analysis and set the type of the `Decorator` object to `Any` there, before type checking.
|
1.0
|
Cannot determine type of a function with a no_type_check decorator - Mypy complains if I call a `no_type_check` function above the definition:
```
import typing
def f() -> None:
foo() # Error: Cannot determine type of 'foo'
@typing.no_type_check
def foo(x: {1:2}) -> [1]:
1 + 'x'
```
We should detect `no_type_check` during semantic analysis and set the type of the `Decorator` object to `Any` there, before type checking.
|
non_process
|
cannot determine type of a function with a no type check decorator mypy complains if i call a no type check function above the definition import typing def f none foo error cannot determine type of foo typing no type check def foo x x we should detect no type check during semantic analysis and set the type of the decorator object to any there before type checking
| 0
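The mypy record above proposes treating `typing.no_type_check` functions as `Any` during semantic analysis. As a hedged illustration of the decorator's runtime side (independent of mypy itself), it merely marks the function so checkers skip it; the "annotations" in the record's example are ordinary expressions and evaluate fine:

```python
import typing

@typing.no_type_check
def foo(x: {1: 2}) -> [1]:
    # The annotations above are not valid types, but no_type_check tells
    # static checkers to ignore this function entirely; at runtime the
    # decorator just sets a marker attribute and returns the function.
    return x

print(foo.__no_type_check__)  # True
print(foo(5))                 # 5
```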
|
57,652
| 6,552,598,820
|
IssuesEvent
|
2017-09-05 18:59:36
|
devtools-html/debugger.html
|
https://api.github.com/repos/devtools-html/debugger.html
|
closed
|
[ProjectSearch] fix intermittent test
|
testing
|
We are getting an intermittent with `CmdOrCtrl+Shift+F` sometimes not opening project search.
perhaps we are trying to call it before the search UI is ready to handle it?
|
1.0
|
[ProjectSearch] fix intermittent test - We are getting an intermittent with `CmdOrCtrl+Shift+F` sometimes not opening project search.
perhaps we are trying to call it before the search UI is ready to handle it?
|
non_process
|
fix intermittent test we are getting an intermittent with cmdorctrl shift f sometimes not opening project search perhaps we are trying to call it before the search ui is ready to handle it
| 0
|
3,275
| 6,362,643,222
|
IssuesEvent
|
2017-07-31 15:22:07
|
pelias/wof-admin-lookup
|
https://api.github.com/repos/pelias/wof-admin-lookup
|
closed
|
Worker processes are hitting the Node.js memory limit
|
processed
|
As the amount of data in Who's on First grows, the admin lookup workers have to store more and more data. The `county` layer is now so big that the admin lookup worker process for counties frequently hits the Node.js memory limit (which defaults to 1.5GB). This causes the admin lookup system to stop processing all requests as it waits for responses from the county layer worker that will never come.
This affects both the importers and the pip-service.
We should consider taking the following actions:
- [x] Increase the memory limit using the `--max_old_space_size` V8 flag to fix the issue temporarily
- [ ] Add the ability to detect workers that have stopped for some reason and either restart them or shut down the importer/pip-service
- [x] Plan to make architectural improvements to the admin lookup system so that it's more flexible and resilient.
|
1.0
|
Worker processes are hitting the Node.js memory limit - As the amount of data in Who's on First grows, the admin lookup workers have to store more and more data. The `county` layer is now so big that the admin lookup worker process for counties frequently hits the Node.js memory limit (which defaults to 1.5GB). This causes the admin lookup system to stop processing all requests as it waits for responses from the county layer worker that will never come.
This affects both the importers and the pip-service.
We should consider taking the following actions:
- [x] Increase the memory limit using the `--max_old_space_size` V8 flag to fix the issue temporarily
- [ ] Add the ability to detect workers that have stopped for some reason and either restart them or shut down the importer/pip-service
- [x] Plan to make architectural improvements to the admin lookup system so that it's more flexible and resilient.
|
process
|
worker processes are hitting the node js memory limit as the amount of data in who s on first grows the admin lookup workers have to store more and more data the county layer is now so big that the admin lookup worker process for counties frequently hits the node js memory limit which defaults to this causes the admin lookup system to stop processing all requests as it waits for responses from the county layer worker that will never come this affects both the importers and the pip service we should consider taking the following actions increase the memory limit using the max old space size flag to fix the issue temporarily add the ability to detect workers that have stopped for some reason and either restart them or shut down the importer pip service plan to make architectural improvements to the admin lookup system so that it s more flexible and resilient
| 1
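The first action item in the worker-memory record above raises the Node.js heap ceiling with the V8 `--max_old_space_size` flag. A small sketch of assembling such an invocation; the script name is a placeholder, not pelias's actual worker entry point:

```python
def node_command(script, heap_mb=4096):
    # Build a Node.js command line with a raised V8 old-space heap limit
    # (the record's temporary fix for workers hitting the ~1.5GB default).
    return ["node", f"--max_old_space_size={heap_mb}", script]

print(node_command("county_worker.js"))
# ['node', '--max_old_space_size=4096', 'county_worker.js']
```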
|
250,965
| 27,127,593,670
|
IssuesEvent
|
2023-02-16 07:07:42
|
monizb/FireShort
|
https://api.github.com/repos/monizb/FireShort
|
closed
|
CVE-2020-7789 (Medium) detected in node-notifier-5.4.3.tgz - autoclosed
|
security vulnerability
|
## CVE-2020-7789 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>node-notifier-5.4.3.tgz</b></p></summary>
<p>A Node.js module for sending notifications on native Mac, Windows (post and pre 8) and Linux (or Growl as fallback)</p>
<p>Library home page: <a href="https://registry.npmjs.org/node-notifier/-/node-notifier-5.4.3.tgz">https://registry.npmjs.org/node-notifier/-/node-notifier-5.4.3.tgz</a></p>
<p>Path to dependency file: /package.json</p>
<p>Path to vulnerable library: /node_modules/node-notifier/package.json</p>
<p>
Dependency Hierarchy:
- react-scripts-3.4.3.tgz (Root Library)
- jest-24.9.0.tgz
- jest-cli-24.9.0.tgz
- core-24.9.0.tgz
- reporters-24.9.0.tgz
- :x: **node-notifier-5.4.3.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/monizb/FireShort/commit/01d2522e4209e107bda54c059ee7caae1a2713dc">01d2522e4209e107bda54c059ee7caae1a2713dc</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
This affects the package node-notifier before 9.0.0. It allows an attacker to run arbitrary commands on Linux machines due to the options params not being sanitised when being passed an array.
<p>Publish Date: 2020-12-11
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2020-7789>CVE-2020-7789</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.6</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: High
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: Low
- Integrity Impact: Low
- Availability Impact: Low
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-7789">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-7789</a></p>
<p>Release Date: 2020-12-11</p>
<p>Fix Resolution (node-notifier): 5.4.4</p>
<p>Direct dependency fix Resolution (react-scripts): 3.4.4</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
True
|
CVE-2020-7789 (Medium) detected in node-notifier-5.4.3.tgz - autoclosed - ## CVE-2020-7789 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>node-notifier-5.4.3.tgz</b></p></summary>
<p>A Node.js module for sending notifications on native Mac, Windows (post and pre 8) and Linux (or Growl as fallback)</p>
<p>Library home page: <a href="https://registry.npmjs.org/node-notifier/-/node-notifier-5.4.3.tgz">https://registry.npmjs.org/node-notifier/-/node-notifier-5.4.3.tgz</a></p>
<p>Path to dependency file: /package.json</p>
<p>Path to vulnerable library: /node_modules/node-notifier/package.json</p>
<p>
Dependency Hierarchy:
- react-scripts-3.4.3.tgz (Root Library)
- jest-24.9.0.tgz
- jest-cli-24.9.0.tgz
- core-24.9.0.tgz
- reporters-24.9.0.tgz
- :x: **node-notifier-5.4.3.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/monizb/FireShort/commit/01d2522e4209e107bda54c059ee7caae1a2713dc">01d2522e4209e107bda54c059ee7caae1a2713dc</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
This affects the package node-notifier before 9.0.0. It allows an attacker to run arbitrary commands on Linux machines due to the options params not being sanitised when being passed an array.
<p>Publish Date: 2020-12-11
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2020-7789>CVE-2020-7789</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.6</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: High
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: Low
- Integrity Impact: Low
- Availability Impact: Low
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-7789">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-7789</a></p>
<p>Release Date: 2020-12-11</p>
<p>Fix Resolution (node-notifier): 5.4.4</p>
<p>Direct dependency fix Resolution (react-scripts): 3.4.4</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
non_process
|
cve medium detected in node notifier tgz autoclosed cve medium severity vulnerability vulnerable library node notifier tgz a node js module for sending notifications on native mac windows post and pre and linux or growl as fallback library home page a href path to dependency file package json path to vulnerable library node modules node notifier package json dependency hierarchy react scripts tgz root library jest tgz jest cli tgz core tgz reporters tgz x node notifier tgz vulnerable library found in head commit a href found in base branch master vulnerability details this affects the package node notifier before it allows an attacker to run arbitrary commands on linux machines due to the options params not being sanitised when being passed an array publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity high privileges required none user interaction none scope unchanged impact metrics confidentiality impact low integrity impact low availability impact low for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution node notifier direct dependency fix resolution react scripts step up your open source security game with mend
| 0
|
2,379
| 5,185,648,887
|
IssuesEvent
|
2017-01-20 11:09:06
|
jlm2017/jlm-video-subtitles
|
https://api.github.com/repos/jlm2017/jlm-video-subtitles
|
closed
|
[Subtitles] [FR] COMMENT FINANCER LE REMBOURSEMENT À 100% DES SOINS DE SANTÉ PRESCRITS ?
|
Language: French Process: [6] Approved
|
# Video title
COMMENT FINANCER LE REMBOURSEMENT À 100% DES SOINS DE SANTÉ PRESCRITS ?
# URL
https://www.youtube.com/watch?v=p4WrCjgCbaA&t=2s
# Youtube subtitles language
Français
# Duration
4:36
# Subtitles URL
https://www.youtube.com/timedtext_editor?ref=player&bl=vmp&v=p4WrCjgCbaA&action_mde_edit_form=1&tab=captions&ui=hd&lang=fr
|
1.0
|
[Subtitles] [FR] COMMENT FINANCER LE REMBOURSEMENT À 100% DES SOINS DE SANTÉ PRESCRITS ? - # Video title
COMMENT FINANCER LE REMBOURSEMENT À 100% DES SOINS DE SANTÉ PRESCRITS ?
# URL
https://www.youtube.com/watch?v=p4WrCjgCbaA&t=2s
# Youtube subtitles language
Français
# Duration
4:36
# Subtitles URL
https://www.youtube.com/timedtext_editor?ref=player&bl=vmp&v=p4WrCjgCbaA&action_mde_edit_form=1&tab=captions&ui=hd&lang=fr
|
process
|
comment financer le remboursement à des soins de santé prescrits video title comment financer le remboursement à des soins de santé prescrits url youtube subtitles language français duration subtitles url
| 1
|
3,126
| 6,158,113,199
|
IssuesEvent
|
2017-06-28 20:29:31
|
hashicorp/packer
|
https://api.github.com/repos/hashicorp/packer
|
closed
|
vsphere post-processor failing with " invalid target disk adapter type: "thin""
|
bug post-processor/vsphere
|
Trying to get vsphere post-processor working with this config:
``` json
"post-processors": [
{
"type": "vsphere",
"cluster": "pod-01",
"datacenter": "myDC",
"datastore": "nfs-ds01",
"host": "vcsa.dc.local",
"username": "root",
"password": "VMware1!",
"vm_name": "packer-us-01",
"vm_network": "VM Network",
"disk_mode": "thin"
}
]
```
builder used is vmware-iso. Simple one.
Keep having the following error when ovftool is invoked at post-processors time.
```
"Error: Invalid target disk adapter type: "thin""
```
My destination datastore is an NFS datastore, so thin must be ok. I've also tried with sparse, seSparse, same result.
I've then tried to use ovftool manually with the same parameters:
``` shell
$ ovftool --name="packer-us-01" --diskMode="thin" --acceptAllEulas --network="VM Network" --datastore="nfs-ds01" packer-vmware-iso.ovf vi://root:VMware1!@vcsa.dc.local/myDC/host/pod-01
```
Result is successfull
- Packer Version: 0.10.0
- Host platform: ubuntu 64 trusty
- gist: https://gist.github.com/vfiftyfive/565062a0ec8616d60e64d294ca7d6339
|
1.0
|
vsphere post-processor failing with " invalid target disk adapter type: "thin"" - Trying to get vsphere post-processor working with this config:
``` json
"post-processors": [
{
"type": "vsphere",
"cluster": "pod-01",
"datacenter": "myDC",
"datastore": "nfs-ds01",
"host": "vcsa.dc.local",
"username": "root",
"password": "VMware1!",
"vm_name": "packer-us-01",
"vm_network": "VM Network",
"disk_mode": "thin"
}
]
```
builder used is vmware-iso. Simple one.
Keep having the following error when ovftool is invoked at post-processors time.
```
"Error: Invalid target disk adapter type: "thin""
```
My destination datastore is an NFS datastore, so thin must be ok. I've also tried with sparse, seSparse, same result.
I've then tried to use ovftool manually with the same parameters:
``` shell
$ ovftool --name="packer-us-01" --diskMode="thin" --acceptAllEulas --network="VM Network" --datastore="nfs-ds01" packer-vmware-iso.ovf vi://root:VMware1!@vcsa.dc.local/myDC/host/pod-01
```
Result is successfull
- Packer Version: 0.10.0
- Host platform: ubuntu 64 trusty
- gist: https://gist.github.com/vfiftyfive/565062a0ec8616d60e64d294ca7d6339
|
process
|
vsphere post processor failing with invalid target disk adapter type thin trying to get vsphere post processor working with this config json post processors type vsphere cluster pod datacenter mydc datastore nfs host vcsa dc local username root password vm name packer us vm network vm network disk mode thin builder used is vmware iso simple one keep having the following error when ovftool is invoked at post processors time error invalid target disk adapter type thin my destination datastore is an nfs datastore so thin must be ok i ve also tried with sparse sesparse same result i ve then tried to use ovftool manually with the same parameters shell ovftool name packer us diskmode thin acceptalleulas network vm network datastore nfs packer vmware iso ovf vi root vcsa dc local mydc host pod result is successfull packer version host platform ubuntu trusty gist
| 1
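The packer record above shows the post-processor JSON config on one side and a working manual `ovftool` command on the other. As a hypothetical sketch (these mappings are illustrative, not Packer's actual implementation), the translation from config keys to ovftool flags looks like:

```python
import json

def ovftool_args(cfg):
    # Hypothetical key-to-flag mapping mirroring the record's manual
    # invocation; note disk_mode must feed --diskMode, not the adapter type.
    return [
        "ovftool",
        f"--name={cfg['vm_name']}",
        f"--diskMode={cfg['disk_mode']}",
        f"--network={cfg['vm_network']}",
        f"--datastore={cfg['datastore']}",
    ]

cfg = json.loads('{"vm_name": "packer-us-01", "disk_mode": "thin", '
                 '"vm_network": "VM Network", "datastore": "nfs-ds01"}')
print(ovftool_args(cfg))
```

The "invalid target disk adapter type" error in the record suggests the value was being routed to a different ovftool option than `--diskMode`, since the manual command with the same values succeeds.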
|
70,755
| 13,529,620,426
|
IssuesEvent
|
2020-09-15 18:35:24
|
brightdigit/MistKit
|
https://api.github.com/repos/brightdigit/MistKit
|
closed
|
Fix "method_lines" issue in Sources/mistdemoc/Commands/ListCommand.swift
|
cc-code-smell
|
Function `runAsync` has 33 lines of code (exceeds 25 allowed). Consider refactoring.
https://codeclimate.com/github/brightdigit/MistKit/Sources/mistdemoc/Commands/ListCommand.swift#issue_5f6107f4be7ee400010000cd
|
1.0
|
Fix "method_lines" issue in Sources/mistdemoc/Commands/ListCommand.swift - Function `runAsync` has 33 lines of code (exceeds 25 allowed). Consider refactoring.
https://codeclimate.com/github/brightdigit/MistKit/Sources/mistdemoc/Commands/ListCommand.swift#issue_5f6107f4be7ee400010000cd
|
non_process
|
fix method lines issue in sources mistdemoc commands listcommand swift function runasync has lines of code exceeds allowed consider refactoring
| 0
|
10,319
| 13,160,851,989
|
IssuesEvent
|
2020-08-10 18:23:33
|
GoogleCloudPlatform/stackdriver-sandbox
|
https://api.github.com/repos/GoogleCloudPlatform/stackdriver-sandbox
|
closed
|
Cloud Shell images aren't updated on each commit
|
priority: p2 type: process
|
I believe this is being worked on, but adding an issue just in case
We [build and push new tagged images](https://github.com/GoogleCloudPlatform/stackdriver-sandbox/blob/master/.github/workflows/push-master.yml) for all services on each push to master. We should do the same with the new Cloud Shell image
|
1.0
|
Cloud Shell images aren't updated on each commit - I believe this is being worked on, but adding an issue just in case
We [build and push new tagged images](https://github.com/GoogleCloudPlatform/stackdriver-sandbox/blob/master/.github/workflows/push-master.yml) for all services on each push to master. We should do the same with the new Cloud Shell image
|
process
|
cloud shell images aren t updated on each commit i believe this is being worked on but adding an issue just in case we for all services on each push to master we should do the same with the new cloud shell image
| 1
|
173,997
| 21,191,733,037
|
IssuesEvent
|
2022-04-08 18:14:55
|
department-of-veterans-affairs/va.gov-team
|
https://api.github.com/repos/department-of-veterans-affairs/va.gov-team
|
closed
|
[Parameter Store / Vets-API] Lighthouse Health FHIR API RSA key installation and configuration
|
operations devops security
|
## Description
Installation of an RSA key that will be used when accessing the **sandbox environment** of Lighthouse's Health FHIR API from vets-api.
## Background/context
The VA Mobile App, via the mobile API, has been approved by Lighthouse to access their Veterans Health API (FHIR) to retrieve immunizations records.
The next step is to generate an RSA key pair that will be used to authenticate with their sandbox authorization server. The private key should be installed and configured in a similar manner as the key for the HealthQuest app (vets-api-server-vagov-sandbox.yml etc).
The public key should be encoded as a JWK and sent to Beau Grantham <beau.grantham@va.gov> and the LH team and they'll provision us a new client for the API.
[This slack thread](https://dsva.slack.com/archives/CBU0KDSB1/p1632439082099600) has additional details and the mobile team was advised that they should generate the key pair and deliver it to Ops. The mobile team will also handle the JWK encoding and deliver that to Lighthouse.
[Here's an example PR where this was done before](https://github.com/department-of-veterans-affairs/devops/pull/9096/files)
---
## Tasks
- [ ] Create an IAM group for the VA Mobile team (@kreek, @jperk51)
- [ ] Create a IAM policy to allow this newly created group permission to write to a path in parameter store
- [ ] Render any assistance needed to ensure that this team can put their key in that path
- [ ] Operations to modify vets-api deployment to include the key and variables that are in new settings.yaml (similar to what was done in [this pr](PR))
## Acceptance Criteria
- [ ] The key is installed and API requests to the Lighthouse Health FHIR API are successful (returning 200s)
|
True
|
[Parameter Store / Vets-API] Lighthouse Health FHIR API RSA key installation and configuration - ## Description
Installation of an RSA key that will be used when accessing the **sandbox environment** of Lighthouse's Health FHIR API from vets-api.
## Background/context
The VA Mobile App, via the mobile API, has been approved by Lighthouse to access their Veterans Health API (FHIR) to retrieve immunizations records.
The next step is to generate an RSA key pair that will be used to authenticate with their sandbox authorization server. The private key should be installed and configured in a similar manner as the key for the HealthQuest app (vets-api-server-vagov-sandbox.yml etc).
The public key should be encoded as a JWK and sent to Beau Grantham <beau.grantham@va.gov> and the LH team and they'll provision us a new client for the API.
[This slack thread](https://dsva.slack.com/archives/CBU0KDSB1/p1632439082099600) has additional details and the mobile team was advised that they should generate the key pair and deliver it to Ops. The mobile team will also handle the JWK encoding and deliver that to Lighthouse.
[Here's an example PR where this was done before](https://github.com/department-of-veterans-affairs/devops/pull/9096/files)
---
## Tasks
- [ ] Create an IAM group for the VA Mobile team (@kreek, @jperk51)
- [ ] Create a IAM policy to allow this newly created group permission to write to a path in parameter store
- [ ] Render any assistance needed to ensure that this team can put their key in that path
- [ ] Operations to modify vets-api deployment to include the key and variables that are in new settings.yaml (similar to what was done in [this pr](PR))
## Acceptance Criteria
- [ ] The key is installed and API requests to the Lighthouse Health FHIR API are successful (returning 200s)
|
non_process
|
lighthouse health fhir api rsa key installation and configuration description installation of an rsa key that will be used when accessing the sandbox environment of lighthouse s health fhir api from vets api background context the va mobile app via the mobile api has been approved by lighthouse to access their veterans health api fhir to retrieve immunizations records the next step is to generate an rsa key pair that will be used to authenticate with their sandbox authorization server the private key should be installed and configured in a similar manner as the key for the healthquest app vets api server vagov sandbox yml etc the public key should be encoded as a jwk and sent to beau grantham and the lh team and they ll provision us a new client for the api has additional details and the mobile team was advised that they should generate the key pair and deliver it to ops the mobile team will also handle the jwk encoding and deliver that to lighthouse tasks create an iam group for the va mobile team kreek create a iam policy to allow this newly created group permission to write to a path in parameter store render any assistance needed to ensure that this team can put their key in that path operations to modify vets api deployment to include the key and variables that are in new settings yaml similar to what was done in pr acceptance criteria the key is installed and api requests to the lighthouse health fhir api are successful returning
| 0
|
8,482
| 11,643,970,314
|
IssuesEvent
|
2020-02-29 16:34:12
|
jaredlujr/blog_comment
|
https://api.github.com/repos/jaredlujr/blog_comment
|
opened
|
Processor-based Optimization | Jared.Lu
|
/2020/02/29/processor-opt.html Gitalk
|
https://jiaruilu.com/2020/02/29/processor-opt.html
Followed by Introduction to Modern processor, we are going to talk about the several optimization strategies for computation performance in details.
|
1.0
|
Processor-based Optimization | Jared.Lu - https://jiaruilu.com/2020/02/29/processor-opt.html
Followed by Introduction to Modern processor, we are going to talk about the several optimization strategies for computation performance in details.
|
process
|
processor based optimization jared lu followed by introduction to modern processor we are going to talk about the several optimization strategies for computation performance in details
| 1
|
35,676
| 17,193,249,044
|
IssuesEvent
|
2021-07-16 13:56:27
|
femiwiki/docker-mediawiki
|
https://api.github.com/repos/femiwiki/docker-mediawiki
|
closed
|
Run of jobs completely in the background
|
performance
|
> It is recommended that you instead schedule the running of jobs completely in the background, via the command line. By default, jobs are run at the end of a web request. Disable this default behaviour by setting [`$wgJobRunRate`](https://www.mediawiki.org/wiki/Manual:$wgJobRunRate) to `0`.
Reference:
https://www.mediawiki.org/wiki/Manual:Job_queue
|
True
|
Run of jobs completely in the background - > It is recommended that you instead schedule the running of jobs completely in the background, via the command line. By default, jobs are run at the end of a web request. Disable this default behaviour by setting [`$wgJobRunRate`](https://www.mediawiki.org/wiki/Manual:$wgJobRunRate) to `0`.
Reference:
https://www.mediawiki.org/wiki/Manual:Job_queue
|
non_process
|
run of jobs completely in the background it is recommended that you instead schedule the running of jobs completely in the background via the command line by default jobs are run at the end of a web request disable this default behaviour by setting to reference
| 0
|
20,260
| 26,877,620,323
|
IssuesEvent
|
2023-02-05 08:14:44
|
bazelbuild/bazel
|
https://api.github.com/repos/bazelbuild/bazel
|
closed
|
Why Bazel a crossplatform building tool need mysys2??????
|
more data needed type: support / not a bug (process) team-OSS
|
### Description of the bug:
I really don't understand, Bazel is sooooo hard to use, why it need mysy2?
Error in fail: BAZEL_SH environment variable is not set
I build any repo tensorflow meidap[ipe got this damn it error on windows.
So confusing.... why it need BAZEL_SH???? AM on on windows!!!
### What's the simplest, easiest way to reproduce this bug? Please provide a minimal example if possible.
_No response_
### Which operating system are you running Bazel on?
_No response_
### What is the output of `bazel info release`?
_No response_
### If `bazel info release` returns `development version` or `(@non-git)`, tell us how you built Bazel.
_No response_
### What's the output of `git remote get-url origin; git rev-parse master; git rev-parse HEAD` ?
_No response_
### Have you found anything relevant by searching the web?
_No response_
### Any other information, logs, or outputs that you want to share?
_No response_
|
1.0
|
Why Bazel a crossplatform building tool need mysys2?????? - ### Description of the bug:
I really don't understand, Bazel is sooooo hard to use, why it need mysy2?
Error in fail: BAZEL_SH environment variable is not set
I build any repo tensorflow meidap[ipe got this damn it error on windows.
So confusing.... why it need BAZEL_SH???? AM on on windows!!!
### What's the simplest, easiest way to reproduce this bug? Please provide a minimal example if possible.
_No response_
### Which operating system are you running Bazel on?
_No response_
### What is the output of `bazel info release`?
_No response_
### If `bazel info release` returns `development version` or `(@non-git)`, tell us how you built Bazel.
_No response_
### What's the output of `git remote get-url origin; git rev-parse master; git rev-parse HEAD` ?
_No response_
### Have you found anything relevant by searching the web?
_No response_
### Any other information, logs, or outputs that you want to share?
_No response_
|
process
|
why bazel a crossplatform building tool need description of the bug i really don t understand bazel is sooooo hard to use why it need error in fail bazel sh environment variable is not set i build any repo tensorflow meidap ipe got this damn it error on windows so confusing why it need bazel sh am on on windows what s the simplest easiest way to reproduce this bug please provide a minimal example if possible no response which operating system are you running bazel on no response what is the output of bazel info release no response if bazel info release returns development version or non git tell us how you built bazel no response what s the output of git remote get url origin git rev parse master git rev parse head no response have you found anything relevant by searching the web no response any other information logs or outputs that you want to share no response
| 1