Column summary (dtype and observed range or distinct values):

| column | dtype | observed range / values |
|---|---|---|
| Unnamed: 0 | int64 | 0 – 832k |
| id | float64 | 2.49B – 32.1B |
| type | string | 1 class (IssuesEvent) |
| created_at | string | 19 chars |
| repo | string | 7 – 112 chars |
| repo_url | string | 36 – 141 chars |
| action | string | 3 classes (opened, closed, reopened) |
| title | string | 1 – 744 chars |
| labels | string | 4 – 574 chars |
| body | string | 9 – 211k chars |
| index | string | 10 classes (1.0, 2.0, 3.0, True appear in the rows below) |
| text_combine | string | 96 – 211k chars; title and body joined with " - " |
| label | string | 2 classes (process, non_process) |
| text | string | 96 – 188k chars |
| binary_label | int64 | 0 – 1 |
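Two of the derived columns can be read straight off the rows that follow: `binary_label` is 1 exactly when `label` is `process`, and `text` looks like a lowercased copy of `text_combine` with URLs removed and every run of digits or punctuation collapsed to a single space. Below is a minimal sketch of that apparent preprocessing, assuming only those rules; the function names and regexes are illustrative, not the dataset authors' actual pipeline, and a few rows (the CVE reports, the embedded image links) hint at an extra HTML/markdown-stripping step this sketch does not model.

```python
import re

def normalize(text_combine: str) -> str:
    """Approximate the `text` column from `text_combine` (hypothetical rules)."""
    s = text_combine.lower()
    s = re.sub(r"https?://\S+", " ", s)  # URLs never survive into `text`
    s = re.sub(r"[^a-z]+", " ", s)       # digits and punctuation become spaces
    return s.strip()

def to_binary_label(label: str) -> int:
    """`process` maps to 1 and `non_process` to 0 in every sample row."""
    return 1 if label == "process" else 0
```

On the first record below, `normalize` applied to its `text_combine` reproduces the stored `text` value ("empty vqcache on addon domain what steps will reproduce the problem ..."), which is how the rules above were inferred.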
**Row 28,757**
- id: 5,348,389,306
- type: IssuesEvent
- created_at: 2017-02-18 04:23:27
- repo: amitdholiya/vqmod
- repo_url: https://api.github.com/repos/amitdholiya/vqmod
- action: reopened
- title: Empty vqcache on addon domain
- labels: auto-migrated Priority-Medium Type-Defect
- body:

```
What steps will reproduce the problem?
1.vqmod is installed correctly
2.file permissions are all 777 on /vqmod and contents
3.cache and log are empty
What is the expected output? What do you see instead?
I expect my mods to be generating cache files, they are not.
vQmod Version:2.5.1
Server Operating System:linux
Please provide any additional information below.
root site works perfectly, about 30 vqmods installed and working great.
same mods, same configuration on addon domain (public_html/www.mydomain2/) mods
do not work. cache does not generate. thanks for your help
```
Original issue reported on code.google.com by `dale.jac...@gmail.com` on 28 Nov 2014 at 5:43
- index: 1.0
- label: non_process
- text:
empty vqcache on addon domain what steps will reproduce the problem vqmod is installed correctly file permissions are all on vqmod and contents cache and log are empty what is the expected output what do you see instead i expect my mods to be generating cache files they are not vqmod version server operating system linux please provide any additional information below root site works perfectly about vqmods installed and working great same mods same configuration on addon domain public html mods do not work cache does not generate thanks for your help original issue reported on code google com by dale jac gmail com on nov at
- binary_label: 0

---
**Row 301,584**
- id: 9,222,039,308
- type: IssuesEvent
- created_at: 2019-03-11 21:34:58
- repo: lbryio/chainquery
- repo_url: https://api.github.com/repos/lbryio/chainquery
- action: opened
- title: Abandoned claims not updated
- labels: priority: high type: bug
- body:

Abandoned claims are not showing up as spent. This should happen as part of the mempool processing also.
https://baremetal.chainquery.lbry.io/api/sql?query=SELECT%20*%20FROM%20claim%20where%20name=%22test-announce-03%22
- index: 1.0
- label: non_process
- text:
abandoned claims not updated abandoned claims are not showing up as spent this should happen as part of the mempool processing also
- binary_label: 0

---
**Row 38,127**
- id: 12,528,267,184
- type: IssuesEvent
- created_at: 2020-06-04 09:17:58
- repo: ckauhaus/nixpkgs
- repo_url: https://api.github.com/repos/ckauhaus/nixpkgs
- action: opened
- title: Vulnerability roundup 4: balsa-2.5.6: 1 advisory
- labels: 1.severity: security
- body:

[search](https://search.nix.gsc.io/?q=balsa&i=fosho&repos=NixOS-nixpkgs), [files](https://github.com/NixOS/nixpkgs/search?utf8=%E2%9C%93&q=balsa+in%3Apath&type=Code)
* [ ] [CVE-2020-13645](https://nvd.nist.gov/vuln/detail/CVE-2020-13645) CVSSv3=6.5 (nixos-19.03)
Scanned versions: nixos-19.03: 34c7eb7545d. May contain false positives.
- index: True
- label: non_process
- text:
vulnerability roundup balsa advisory nixos scanned versions nixos may contain false positives
- binary_label: 0

---
**Row 679,951**
- id: 23,251,241,311
- type: IssuesEvent
- created_at: 2022-08-04 04:07:15
- repo: bigbinary/org_incineration
- repo_url: https://api.github.com/repos/bigbinary/org_incineration
- action: closed
- title: Add `ActionText::EncryptedRichText` under ignored rails models
- labels: high priority
- body:

After upgrading rails getting the following error in incineration service. We should add `ActionText::EncryptedRichText` in skipped models under ignored rails models category.
<img width="1126" alt="Screenshot 2022-08-03 at 10 16 00 PM" src="https://user-images.githubusercontent.com/47141466/182663869-fb290955-438b-47ff-943d-9540b8a7335c.png">
cc: @unnitallman
- index: 1.0
- label: non_process
- text:
add actiontext encryptedrichtext under ignored rails models after upgrading rails getting the following error in incineration service we should add actiontext encryptedrichtext in skipped models under ignored rails models category img width alt screenshot at pm src cc unnitallman
- binary_label: 0

---
**Row 153,373**
- id: 13,503,977,376
- type: IssuesEvent
- created_at: 2020-09-13 15:55:53
- repo: dankamongmen/notcurses
- repo_url: https://api.github.com/repos/dankamongmen/notcurses
- action: opened
- title: Multiselector ought have same functionality as selector
- labels: documentation enhancement
- body:

An `ncselector` can add and remove items at runtime, and can control the widget through `_nextitem()` and `_previtem()`. `ncmultiselector` ought have these same capabilities, along with `ncmultiselector_toggle_selected()`.
- index: 1.0
- label: non_process
- text:
multiselector ought have same functionality as selector an ncselector can add and remove items at runtime and can control the widget through nextitem and previtem ncmultiselector ought have these same capabilities along with ncmultiselector toggle selected
- binary_label: 0

---
**Row 16,357**
- id: 21,035,735,262
- type: IssuesEvent
- created_at: 2022-03-31 07:39:40
- repo: elastic/beats
- repo_url: https://api.github.com/repos/elastic/beats
- action: closed
- title: Dissect: ignore_failure doesn't work as expected
- labels: Filebeat libbeat :Processors Team:Elastic-Agent-Data-Plane
- body:

Filebeat 7.13.0
The `ignore_failure` setting in the `dissect` processor doesn't seem to work as expected and behaves as if it were `true`.
```yaml
filebeat.inputs:
- type: stdin
output.console:
pretty: true
processors:
- add_tags:
tags: ['before_dissects']
target: "tags"
- dissect:
tokenizer: "%{word1} %{word2}"
field: "message"
target_prefix: "dissect"
ignore_failure: false
- dissect:
tokenizer: "%{word1} %{word2}"
field: "message"
target_prefix: "dissect"
ignore_failure: false
- add_tags:
tags: ['after_dissects']
target: "tags"
```
When the first `dissect` processor fails, the processor _should_ log an error, preventing execution of other processors if `ignore_failure` is `false` (according to https://www.elastic.co/guide/en/beats/filebeat/current/dissect.html).
This is not what we see here:
```
{
"@timestamp": "2021-06-01T07:54:13.072Z",
"@metadata": {
"beat": "filebeat",
"type": "_doc",
"version": "7.13.0"
},
"tags": [
"before_dissects",
"after_dissects"
],
"log": {
"offset": 0,
"file": {
"path": ""
},
"flags": [
"dissect_parsing_error",
"dissect_parsing_error"
]
},
"message": "abc",
"input": {
"type": "stdin"
},
"ecs": {
"version": "1.8.0"
},
"host": {
"name": "fred.home"
},
"agent": {
"ephemeral_id": "a18dc3ee-7a5f-440b-a062-870194fb3cf3",
"id": "fce66f38-e7a3-4bae-aad8-40df579ee1d4",
"name": "fred.home",
"type": "filebeat",
"version": "7.13.0",
"hostname": "fred.home"
}
}
```
The two `dissect` processors fail and all the processors are executed nonetheless.
With `ignore_failure` set to `false` (the default) I would not expect the other processors to be executed and the event to be sent to the output.
- index: 1.0
- label: process
- text:
dissect ignore failure doesn t work as expected filebeat the ignore failure setting in the dissect processor doesn t seem to work as expected and behaves as if it were true yaml filebeat inputs type stdin output console pretty true processors add tags tags target tags dissect tokenizer field message target prefix dissect ignore failure false dissect tokenizer field message target prefix dissect ignore failure false add tags tags target tags when the first dissect processor fails the processor should log an error preventing execution of other processors if ignore failure is false according to this is not what we see here timestamp metadata beat filebeat type doc version tags before dissects after dissects log offset file path flags dissect parsing error dissect parsing error message abc input type stdin ecs version host name fred home agent ephemeral id id name fred home type filebeat version hostname fred home the two dissect processors fail and all the processors are executed nonetheless with ignore failure set to false the default i would not expect the other processors to be executed and the event to be sent to the output
- binary_label: 1

---
**Row 21,196**
- id: 28,213,989,094
- type: IssuesEvent
- created_at: 2023-04-05 07:33:47
- repo: darktable-org/darktable
- repo_url: https://api.github.com/repos/darktable-org/darktable
- action: closed
- title: Since a recent master, some images make darktable crashes when opening them on darkroom
- labels: priority: high scope: image processing bug: pending
- body:

**Describe the bug/issue**
I launch again, after nearly 2 weeks, darktable (and so update master), yesterday. I then discover that darktable crashes, when opening all my images of my recent filmrolls (on what I test) when I open them on darkroom.
I've check that some older images opened without any crashes. I so test to clone images that crashes, copy/paste history of good images and can open duplicates. I tested on original and same issue: no more crashes. Then I tried to remove history from lighttable for some images that crashes but again: a crash.
But images that crashes open correctly when I go back to older master I used (nearly 300 commits behind).
**To Reproduce**
1. Use XMP provided below on an image
[XMP-with-crash.zip](https://github.com/darktable-org/darktable/files/11141630/XMP-with-crash.zip)
2. Open image on darkroom
3. See loading image toast message
4. Then darktable should crash
**Expected behavior**
No crash on those images, like I can have with an older master on march.
**Which commit introduced the error**
Git bisect gives such result:
```
177b1e4f9c7c7d50b402c803b64381950fb1de1c is the first bad commit
commit 177b1e4f9c7c7d50b402c803b64381950fb1de1c
Author: Pascal Obry <pascal@obry.net>
Date: Wed Mar 22 08:29:45 2023 +0100
Use preset's multi_name as module label.
In an attempt to give better control to the actual module label
used (avoiding also long preset names possibly hard to read and
ellipsize) the module label is now using the preset multi_name
if set. If not set, the preset name is used as before.
```
So should be for you @TurboGit. As I know that sometimes git bisect is not precise, I'm sure regarding all tests, compile I've made that this is from commits merged on march 25th (same day that one had been merged). On that day, main updates are PR from @ralfbrown and @jenshannoschwalm.
**Platform**
_Please fill as much information as possible in the list given below. Please state "unknown" where you do not know the answer and remove any sections that are not applicable _
* darktable version : recent master (since at least last master of march 25th
- index: 1.0
- label: process
- text:
since a recent master some images make darktable crashes when opening them on darkroom describe the bug issue i launch again after nearly weeks darktable and so update master yesterday i then discover that darktable crashes when opening all my images of my recent filmrolls on what i test when i open them on darkroom i ve check that some older images opened without any crashes i so test to clone images that crashes copy paste history of good images and can open duplicates i tested on original and same issue no more crashes then i tried to remove history from lighttable for some images that crashes but again a crash but images that crashes open correctly when i go back to older master i used nearly commits behind to reproduce use xmp provided below on an image open image on darkroom see loading image toast message then darktable should crash expected behavior no crash on those images like i can have with an older master on march which commit introduced the error git bisect gives such result is the first bad commit commit author pascal obry date wed mar use preset s multi name as module label in an attempt to give better control to the actual module label used avoiding also long preset names possibly hard to read and ellipsize the module label is now using the preset multi name if set if not set the preset name is used as before so should be for you turbogit as i know that sometimes git bisect is not precise i m sure regarding all tests compile i ve made that this is from commits merged on march same day that one had been merged on that day main updates are pr from ralfbrown and jenshannoschwalm platform please fill as much information as possible in the list given below please state unknown where you do not know the answer and remove any sections that are not applicable darktable version recent master since at least last master of march
- binary_label: 1

---
**Row 7,147**
- id: 10,291,590,155
- type: IssuesEvent
- created_at: 2019-08-27 12:50:09
- repo: heim-rs/heim
- repo_url: https://api.github.com/repos/heim-rs/heim
- action: closed
- title: Use darwin_libproc crate for macOS process routines
- labels: A-process C-enhancement C-good-first-issue O-macos
- body:

`process::Process::exe` for macOS should use the `darwin_libproc::pid_path` from the `darwin-libproc` crate instead of bundled bindings.
Also, git dependency should be changed to a published version.
- index: 1.0
- label: process
- text:
use darwin libproc crate for macos process routines process process exe for macos should use the darwin libproc pid path from the darwin libproc crate instead of bundled bindings also git dependency should be changed to a published version
- binary_label: 1

---
**Row 8,556**
- id: 11,731,041,236
- type: IssuesEvent
- created_at: 2020-03-10 22:53:01
- repo: gearboxworks/gearbox
- repo_url: https://api.github.com/repos/gearboxworks/gearbox
- action: closed
- title: Set up Docker build process using Actions
- labels: Task process-docker process-github process-workflow
- body:

This includes consolidating duplicated logic into one place such as `Makefile`.
Also renaming [github.com/gearboxworks/docker-gearbox](https://github.com/gearboxworks/docker-gearbox) to [github.com/gearboxworks/docker-base](https://github.com/gearboxworks/docker-base)
It *might* include moving to [Github Package repository](https://github.com/features/packages) from Docker Hub *if* it is a lot easier to build and deploy to Github.com via. DockerHub.com.
- index: 3.0
- label: process
- text:
set up docker build process using actions this includes consolidating duplicated logic into one place such as makefile also renaming to it might include moving to from docker hub if it is a lot easier to build and deploy to github com via dockerhub com
- binary_label: 1

---
**Row 31,604**
- id: 7,416,395,729
- type: IssuesEvent
- created_at: 2018-03-22 00:59:48
- repo: Microsoft/ChakraCore
- repo_url: https://api.github.com/repos/Microsoft/ChakraCore
- action: closed
- title: Move WinRTDate code from DateUtilities.cpp
- labels: Codebase Quality Task
- body:

lib/common/common/DateUtilities.cpp has some WinRTDate utilities that seems not used in ChakraCore. Consider move them.
- index: 1.0
- label: non_process
- text:
move winrtdate code from dateutilities cpp lib common common dateutilities cpp has some winrtdate utilities that seems not used in chakracore consider move them
- binary_label: 0

---
**Row 119,997**
- id: 15,688,896,190
- type: IssuesEvent
- created_at: 2021-03-25 15:08:05
- repo: MetaMask/metamask-extension
- repo_url: https://api.github.com/repos/MetaMask/metamask-extension
- action: opened
- title: Update copy of "Password" to "Device Passcode"
- labels: N00-needsDesign T01-enhancement ux-enhancement
- body:

**Problem**
There is a lot of user confusion with user's passwords. Many users assume:
1) The password will be the same across devices (both extension and mobile)
2) MetaMask is able to recover their password if it is lost.
These are standard patterns for "Passwords" across web2. Typically to unlock a device, the unlock code is called "Pin", "Pin code", "Passcode" etc. And it is often not retrievable in the same way that a password is.
**Solution**
To match user's mental model we should use familiar language. We should update the terminology across extension and mobile from "password" to "device passcode" or "device pin" (although "pin" may be associated with numbers only)
- index: 1.0
- label: non_process
- text:
update copy of password to device passcode problem there is a lot of user confusion with user s passwords many users assume the password will be the same across devices both extension and mobile metamask is able to recover their password if it is lost these are standard patterns for passwords across typically to unlock a device the unlock code is called pin pin code passcode etc and it is often not retrievable in the same way that a password is solution to match user s mental model we should use familiar language we should update the terminology across extension and mobile from password to device passcode or device pin although pin may be associated with numbers only
- binary_label: 0

---
**Row 2,449**
- id: 5,226,483,990
- type: IssuesEvent
- created_at: 2017-01-27 21:31:58
- repo: nodejs/node
- repo_url: https://api.github.com/repos/nodejs/node
- action: opened
- title: fix flaky equential/test-child-process-pass-fd on fedora 24
- labels: child_process test
- body:

Example failure:
https://ci.nodejs.org/job/node-test-commit-linux/7552/nodes=fedora24/console
```console
duration_ms: 1.113
severity: fail
stack: |-
events.js:161
throw er; // Unhandled 'error' event
^
Error: spawn /home/iojs/build/workspace/node-test-commit-linux/nodes/fedora24/out/Release/node EAGAIN
at exports._errnoException (util.js:1023:11)
at Process.ChildProcess._handle.onexit (internal/child_process.js:193:32)
at onErrorNT (internal/child_process.js:359:16)
at _combinedTickCallback (internal/process/next_tick.js:74:11)
at process._tickCallback (internal/process/next_tick.js:98:9)
at Module.runMain (module.js:607:11)
at run (bootstrap_node.js:418:7)
at startup (bootstrap_node.js:139:9)
at bootstrap_node.js:533:3
events.js:161
throw er; // Unhandled 'error' event
^
Error: channel closed
at process.target.send (internal/child_process.js:553:16)
at Socket.socketConnected (/home/iojs/build/workspace/node-test-commit-linux/nodes/fedora24/test/sequential/test-child-process-pass-fd.js:39:15)
at Object.onceWrapper (events.js:291:19)
at emitNone (events.js:86:13)
at Socket.emit (events.js:186:7)
at TCPConnectWrap.afterConnect [as oncomplete] (net.js:1077:10)
events.js:161
throw er; // Unhandled 'error' event
^
Error: channel closed
at process.target.send (internal/child_process.js:553:16)
at Socket.socketConnected (/home/iojs/build/workspace/node-test-commit-linux/nodes/fedora24/test/sequential/test-child-process-pass-fd.js:39:15)
at Object.onceWrapper (events.js:291:19)
at emitNone (events.js:86:13)
at Socket.emit (events.js:186:7)
at TCPConnectWrap.afterConnect [as oncomplete] (net.js:1077:10)
events.js:161
throw er; // Unhandled 'error' event
^
Error: channel closed
at process.target.send (internal/child_process.js:553:16)
at Socket.socketConnected (/home/iojs/build/workspace/node-test-commit-linux/nodes/fedora24/test/sequential/test-child-process-pass-fd.js:39:15)
at Object.onceWrapper (events.js:291:19)
at emitNone (events.js:86:13)
at Socket.emit (events.js:186:7)
at TCPConnectWrap.afterConnect [as oncomplete] (net.js:1077:10)
events.js:161
throw er; // Unhandled 'error' event
^
Error: channel closed
at process.target.send (internal/child_process.js:553:16)
at Socket.socketConnected (/home/iojs/build/workspace/node-test-commit-linux/nodes/fedora24/test/sequential/test-child-process-pass-fd.js:39:15)
at Object.onceWrapper (events.js:291:19)
at emitNone (events.js:86:13)
at Socket.emit (events.js:186:7)
at TCPConnectWrap.afterConnect [as oncomplete] (net.js:1077:10)
events.js:161
throw er; // Unhandled 'error' event
^
Error: channel closed
at process.target.send (internal/child_process.js:553:16)
at Socket.socketConnected (/home/iojs/build/workspace/node-test-commit-linux/nodes/fedora24/test/sequential/test-child-process-pass-fd.js:39:15)
at Object.onceWrapper (events.js:291:19)
at emitNone (events.js:86:13)
at Socket.emit (events.js:186:7)
at TCPConnectWrap.afterConnect [as oncomplete] (net.js:1077:10)
events.js:161
throw er; // Unhandled 'error' event
^
Error: channel closed
at process.target.send (internal/child_process.js:553:16)
at Socket.socketConnected (/home/iojs/build/workspace/node-test-commit-linux/nodes/fedora24/test/sequential/test-child-process-pass-fd.js:39:15)
at Object.onceWrapper (events.js:291:19)
at emitNone (events.js:86:13)
at Socket.emit (events.js:186:7)
at TCPConnectWrap.afterConnect [as oncomplete] (net.js:1077:10)
events.js:161
throw er; // Unhandled 'error' event
^
Error: channel closed
at process.target.send (internal/child_process.js:553:16)
at Socket.socketConnected (/home/iojs/build/workspace/node-test-commit-linux/nodes/fedora24/test/sequential/test-child-process-pass-fd.js:39:15)
at Object.onceWrapper (events.js:291:19)
at emitNone (events.js:86:13)
at Socket.emit (events.js:186:7)
at TCPConnectWrap.afterConnect [as oncomplete] (net.js:1077:10)
...
```
/cc @santigimeno
- index: 1.0
- label: process
- text:
fix flaky equential test child process pass fd on fedora example failure console duration ms severity fail stack events js throw er unhandled error event error spawn home iojs build workspace node test commit linux nodes out release node eagain at exports errnoexception util js at process childprocess handle onexit internal child process js at onerrornt internal child process js at combinedtickcallback internal process next tick js at process tickcallback internal process next tick js at module runmain module js at run bootstrap node js at startup bootstrap node js at bootstrap node js events js throw er unhandled error event error channel closed at process target send internal child process js at socket socketconnected home iojs build workspace node test commit linux nodes test sequential test child process pass fd js at object oncewrapper events js at emitnone events js at socket emit events js at tcpconnectwrap afterconnect net js events js throw er unhandled error event error channel closed at process target send internal child process js at socket socketconnected home iojs build workspace node test commit linux nodes test sequential test child process pass fd js at object oncewrapper events js at emitnone events js at socket emit events js at tcpconnectwrap afterconnect net js events js throw er unhandled error event error channel closed at process target send internal child process js at socket socketconnected home iojs build workspace node test commit linux nodes test sequential test child process pass fd js at object oncewrapper events js at emitnone events js at socket emit events js at tcpconnectwrap afterconnect net js events js throw er unhandled error event error channel closed at process target send internal child process js at socket socketconnected home iojs build workspace node test commit linux nodes test sequential test child process pass fd js at object oncewrapper events js at emitnone events js at socket emit events js at tcpconnectwrap afterconnect net js events js throw er unhandled error event error channel closed at process target send internal child process js at socket socketconnected home iojs build workspace node test commit linux nodes test sequential test child process pass fd js at object oncewrapper events js at emitnone events js at socket emit events js at tcpconnectwrap afterconnect net js events js throw er unhandled error event error channel closed at process target send internal child process js at socket socketconnected home iojs build workspace node test commit linux nodes test sequential test child process pass fd js at object oncewrapper events js at emitnone events js at socket emit events js at tcpconnectwrap afterconnect net js events js throw er unhandled error event error channel closed at process target send internal child process js at socket socketconnected home iojs build workspace node test commit linux nodes test sequential test child process pass fd js at object oncewrapper events js at emitnone events js at socket emit events js at tcpconnectwrap afterconnect net js cc santigimeno
- binary_label: 1

---
**Row 3,407**
- id: 6,520,469,720
- type: IssuesEvent
- created_at: 2017-08-28 16:37:32
- repo: w3c/w3process
- repo_url: https://api.github.com/repos/w3c/w3process
- action: closed
- title: can the AB/TAG chair ask for a special election in *advance* of a known upcoming vacancy?
- labels: Process2018Candidate
- body:

[Section 2.5.3 Advisory Board and Technical Architecture Group Vacated Seats](https://w3c.github.io/w3process/#AB-TAG-vacated) says:
> When an elected seat on either the AB or TAG is vacated, the seat is filled at the next regularly scheduled election for the group unless the group Chair requests that W3C hold an election before then (for instance, due to the group's workload). The group Chair should not request an exceptional election if the next regularly scheduled election is fewer than three months away.
It seems to me that, since some vacancies are known in advance of when they occur, it ought to be possible for the Chair to ask for the election when an upcoming vacancy is known even if the seat is not yet vacant (much like regular elections are held before the terms expire), so that the vacant seat can be filled from the result of a special election more quickly.
It's not clear to me whether the above text allows this. I tend to think it should be clarified so that it does allow it.
- index: 1.0
- label: process
- text:
can the ab tag chair ask for a special election in advance of a known upcoming vacancy says when an elected seat on either the ab or tag is vacated the seat is filled at the next regularly scheduled election for the group unless the group chair requests that hold an election before then for instance due to the group s workload the group chair should not request an exceptional election if the next regularly scheduled election is fewer than three months away it seems to me that since some vacancies are known in advance of when they occur it ought to be possible for the chair to ask for the election when an upcoming vacancy is known even if the seat is not yet vacant much like regular elections are held before the terms expire so that the vacant seat can be filled from the result of a special election more quickly it s not clear to me whether the above text allows this i tend to think it should be clarified so that it does allow it
- binary_label: 1

---
**Row 37,385**
- id: 12,477,454,139
- type: IssuesEvent
- created_at: 2020-05-29 14:59:13
- repo: LibrIT/passhport
- repo_url: https://api.github.com/repos/LibrIT/passhport
- action: closed
- title: CVE-2019-8331 (Medium) detected in bootstrap-3.3.2.min.js, bootstrap-3.3.7.min.js
- labels: New security vulnerability
- body:

## CVE-2019-8331 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>bootstrap-3.3.2.min.js</b>, <b>bootstrap-3.3.7.min.js</b></p></summary>
<p>
<details><summary><b>bootstrap-3.3.2.min.js</b></p></summary>
<p>The most popular front-end framework for developing responsive, mobile first projects on the web.</p>
<p>Library home page: <a href="https://cdnjs.cloudflare.com/ajax/libs/twitter-bootstrap/3.3.2/js/bootstrap.min.js">https://cdnjs.cloudflare.com/ajax/libs/twitter-bootstrap/3.3.2/js/bootstrap.min.js</a></p>
<p>Path to dependency file: /tmp/ws-scm/passhport/passhweb/app/static/bower_components/bootstrap-daterangepicker/website/index.html</p>
<p>Path to vulnerable library: /passhport/passhweb/app/static/bower_components/bootstrap-daterangepicker/website/index.html,/passhport/passhweb/app/static/bower_components/bootstrap-daterangepicker/demo.html</p>
<p>
Dependency Hierarchy:
- :x: **bootstrap-3.3.2.min.js** (Vulnerable Library)
</details>
<details><summary><b>bootstrap-3.3.7.min.js</b></p></summary>
<p>The most popular front-end framework for developing responsive, mobile first projects on the web.</p>
<p>Library home page: <a href="https://cdnjs.cloudflare.com/ajax/libs/twitter-bootstrap/3.3.7/js/bootstrap.min.js">https://cdnjs.cloudflare.com/ajax/libs/twitter-bootstrap/3.3.7/js/bootstrap.min.js</a></p>
<p>Path to dependency file: /tmp/ws-scm/passhport/passhweb/app/static/bower_components/bootstrap-colorpicker/index.html</p>
<p>Path to vulnerable library: /passhport/passhweb/app/static/bower_components/bootstrap-colorpicker/index.html</p>
<p>
Dependency Hierarchy:
- :x: **bootstrap-3.3.7.min.js** (Vulnerable Library)
</details>
<p>Found in HEAD commit: <a href="https://github.com/LibrIT/passhport/commit/280394daf60b8887c5eebccaca5e3c390a11b1f2">280394daf60b8887c5eebccaca5e3c390a11b1f2</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
In Bootstrap before 3.4.1 and 4.3.x before 4.3.1, XSS is possible in the tooltip or popover data-template attribute.
<p>Publish Date: 2019-02-20
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2019-8331>CVE-2019-8331</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>6.1</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: Required
- Scope: Changed
- Impact Metrics:
- Confidentiality Impact: Low
- Integrity Impact: Low
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/twbs/bootstrap/pull/28236">https://github.com/twbs/bootstrap/pull/28236</a></p>
<p>Release Date: 2019-02-20</p>
<p>Fix Resolution: bootstrap - 3.4.1,4.3.1;bootstrap-sass - 3.4.1,4.3.1</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
- index: True
- label: non_process
- text:
cve medium detected in bootstrap min js bootstrap min js cve medium severity vulnerability vulnerable libraries bootstrap min js bootstrap min js bootstrap min js the most popular front end framework for developing responsive mobile first projects on the web library home page a href path to dependency file tmp ws scm passhport passhweb app static bower components bootstrap daterangepicker website index html path to vulnerable library passhport passhweb app static bower components bootstrap daterangepicker website index html passhport passhweb app static bower components bootstrap daterangepicker demo html dependency hierarchy x bootstrap min js vulnerable library bootstrap min js the most popular front end framework for developing responsive mobile first projects on the web library home page a href path to dependency file tmp ws scm passhport passhweb app static bower components bootstrap colorpicker index html path to vulnerable library passhport passhweb app static bower components bootstrap colorpicker index html dependency hierarchy x bootstrap min js vulnerable library found in head commit a href vulnerability details in bootstrap before and x before xss is possible in the tooltip or popover data template attribute publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction required scope changed impact metrics confidentiality impact low integrity impact low availability impact none for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution bootstrap bootstrap sass step up your open source security game with whitesource
- binary_label: 0

---
**Row 2,968**
- id: 5,960,738,887
- type: IssuesEvent
- created_at: 2017-05-29 14:55:51
- repo: orbardugo/Hahot-Hameshulash
- repo_url: https://api.github.com/repos/orbardugo/Hahot-Hameshulash
- action: closed
- title: Add Queries
- labels: in process priorty 1 Ruben
- body:

Todo: Add Queries - drugs, alcohol, criminal record, external contact, occupateon, religion by Ruben
- index: 1.0
- label: process
- text:
add queries todo add queries drugs alcohol criminal record external contact occupateon religion by ruben
- binary_label: 1

---
**Row 92,474**
- id: 11,648,497,684
- type: IssuesEvent
- created_at: 2020-03-01 21:03:48
- repo: adavijit/BlogMan
- repo_url: https://api.github.com/repos/adavijit/BlogMan
- action: opened
- title: Create wireframe for a particular book
- labels: design gssoc20 medium
- body:

BlogMan is all about sharing knowledge. It will have a books suggestions section.
Create a simple wireframe for the page dedicated to a particular book.. it will display:
name of book, author, pic, stars, genre, description.
add to favorite button for book
anything that comes to your mind and you think is needed on this page can be added.
- index: 1.0
- label: non_process
- text:
create wireframe for a particular book blogman is all about sharing knowledge it will have a books suggestions section create a simple wireframe for the page dedicated to a particular book it will display name of book author pic stars genre description add to favorite button for book anything that comes to your mind and you think is needed on this page can be added
- binary_label: 0

---
**Row 18,698**
- id: 24,595,507,019
- type: IssuesEvent
- created_at: 2022-10-14 07:58:59
- repo: GoogleCloudPlatform/fda-mystudies
- repo_url: https://api.github.com/repos/GoogleCloudPlatform/fda-mystudies
- action: closed
- title: [FHIR] Unable to submit the responses for all the activities
- labels: Bug Blocker P0 Process: Fixed Process: Tested dev
- body:

Unable to submit the responses for all the activities
**AR:** The activity will be in 'Resume' status if the participant clicks the Done button on the 'Activity completed' screen
**ER:** Participant should be submitted the response successfully and activity should be in 'Completed' status
- index: 2.0
- label: process
- text:
unable to submit the responses for all the activities unable to submit the responses for all the activities ar the activity will be in resume status if the participant clicks the done button on the activity completed screen er participant should be submitted the response successfully and activity should be in completed status
- binary_label: 1

---
**Row 12,188**
- id: 14,742,251,162
- type: IssuesEvent
- created_at: 2021-01-07 11:58:14
- repo: kdjstudios/SABillingGitlab
- repo_url: https://api.github.com/repos/kdjstudios/SABillingGitlab
- action: closed
- title: Site 056 Changes made to Client Account (UI Logging and Tracking)
- labels: anc-ops anp-1.5 ant-enhancement ant-parent/primary ant-support grt-ui processes
- body:

In GitLab by @kdjstudios on Apr 2, 2019, 11:23
**Submitted by:** Rich Montano <richard.montano@answernet.com>
**Helpdesk:** http://www.servicedesk.answernet.com/profiles/ticket/2019-04-02-23193/conversation
**Server:** Internal
**Client/Site:** 056
**Account:** MX4082
**Issue:**
It has come to my attention that Santa Rosa's account MX4082,
MED-Project, has had it's Monthly Base Rate and Monthly Portal Access
turned off since the 2/1/2019 invoice at the very least. To my knowledge
this is not a change made by anyone in Santa Rosa. We need to know how
this happened and if it can be discerned, who made the change.
- index: 1.0
- label: process
- text:
site changes made to client account ui logging and tracking in gitlab by kdjstudios on apr submitted by rich montano helpdesk server internal client site account issue it has come to my attention that santa rosa s account med project has had it s monthly base rate and monthly portal access turned off since the invoice at the very least to my knowledge this is not a change made by anyone in santa rosa we need to know how this happened and if it can be discerned who made the change
- binary_label: 1

---
**Row 87,147**
- id: 15,756,002,513
- type: IssuesEvent
- created_at: 2021-03-31 02:45:28
- repo: turkdevops/nexus-iq-chrome-extension
- repo_url: https://api.github.com/repos/turkdevops/nexus-iq-chrome-extension
- action: opened
- title: CVE-2011-4969 (Medium) detected in jquery-1.3.2.min.js
- labels: security vulnerability
- body:

## CVE-2011-4969 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>jquery-1.3.2.min.js</b></p></summary>
<p>JavaScript library for DOM operations</p>
<p>Library home page: <a href="https://cdnjs.cloudflare.com/ajax/libs/jquery/1.3.2/jquery.min.js">https://cdnjs.cloudflare.com/ajax/libs/jquery/1.3.2/jquery.min.js</a></p>
<p>Path to dependency file: nexus-iq-chrome-extension/release/1.8.1/src/Scripts/lib/jquery-ui-1.12.1/node_modules/underscore.string/test/test_standalone.html</p>
<p>Path to vulnerable library: nexus-iq-chrome-extension/release/1.8.1/src/Scripts/lib/jquery-ui-1.12.1/node_modules/underscore.string/test/test_underscore/vendor/jquery.js,nexus-iq-chrome-extension/src/Scripts/lib/jquery-ui-1.12.1/node_modules/underscore.string/test/test_underscore/vendor/jquery.js</p>
<p>
Dependency Hierarchy:
- :x: **jquery-1.3.2.min.js** (Vulnerable Library)
<p>Found in base branch: <b>fixVersionHistory-gh</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Cross-site scripting (XSS) vulnerability in jQuery before 1.6.3, when using location.hash to select elements, allows remote attackers to inject arbitrary web script or HTML via a crafted tag.
<p>Publish Date: 2013-03-08
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2011-4969>CVE-2011-4969</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 2 Score Details (<b>4.3</b>)</summary>
<p>
Base Score Metrics not available</p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://nvd.nist.gov/vuln/detail/CVE-2011-4969">https://nvd.nist.gov/vuln/detail/CVE-2011-4969</a></p>
<p>Release Date: 2013-03-08</p>
<p>Fix Resolution: 1.6.3</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
- index: True
|
non_process
|
cve medium detected in jquery min js cve medium severity vulnerability vulnerable library jquery min js javascript library for dom operations library home page a href path to dependency file nexus iq chrome extension release src scripts lib jquery ui node modules underscore string test test standalone html path to vulnerable library nexus iq chrome extension release src scripts lib jquery ui node modules underscore string test test underscore vendor jquery js nexus iq chrome extension src scripts lib jquery ui node modules underscore string test test underscore vendor jquery js dependency hierarchy x jquery min js vulnerable library found in base branch fixversionhistory gh vulnerability details cross site scripting xss vulnerability in jquery before when using location hash to select elements allows remote attackers to inject arbitrary web script or html via a crafted tag publish date url a href cvss score details base score metrics not available suggested fix type upgrade version origin a href release date fix resolution step up your open source security game with whitesource
| 0
|
5,961
| 8,784,243,238
|
IssuesEvent
|
2018-12-20 09:16:56
|
linnovate/root
|
https://api.github.com/repos/linnovate/root
|
closed
|
cant delete doc template
|
Process bug
|
after deleting a document template, it still shows up when choosing a file from template in documents
here, i deleted the gfgdg.txt template but it still shows up in documents

|
1.0
|
cant delete doc template - after deleting a document template, it still shows up when choosing a file from template in documents
here, i deleted the gfgdg.txt template but it still shows up in documents

|
process
|
cant delete doc template after deleting a document template it still shows up when choosing a file from template in documents here i deleted the gfgdg txt template but it still shows up in documents
| 1
|
22,250
| 30,801,982,816
|
IssuesEvent
|
2023-08-01 02:40:19
|
h4sh5/pypi-auto-scanner
|
https://api.github.com/repos/h4sh5/pypi-auto-scanner
|
opened
|
pih 1.47212 has 2 GuardDog issues
|
guarddog typosquatting silent-process-execution
|
https://pypi.org/project/pih
https://inspector.pypi.io/project/pih
```{
"dependency": "pih",
"version": "1.47212",
"result": {
"issues": 2,
"errors": {},
"results": {
"typosquatting": "This package closely ressembles the following package names, and might be a typosquatting attempt: pid, pip",
"silent-process-execution": [
{
"location": "pih-1.47212/pih/tools.py:774",
"code": " result = subprocess.run(command, stdin=subprocess.DEVNULL, stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL)",
"message": "This package is silently executing an external binary, redirecting stdout, stderr and stdin to /dev/null"
}
]
},
"path": "/tmp/tmpe_03uveq/pih"
}
}```
|
1.0
|
pih 1.47212 has 2 GuardDog issues - https://pypi.org/project/pih
https://inspector.pypi.io/project/pih
```{
"dependency": "pih",
"version": "1.47212",
"result": {
"issues": 2,
"errors": {},
"results": {
"typosquatting": "This package closely ressembles the following package names, and might be a typosquatting attempt: pid, pip",
"silent-process-execution": [
{
"location": "pih-1.47212/pih/tools.py:774",
"code": " result = subprocess.run(command, stdin=subprocess.DEVNULL, stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL)",
"message": "This package is silently executing an external binary, redirecting stdout, stderr and stdin to /dev/null"
}
]
},
"path": "/tmp/tmpe_03uveq/pih"
}
}```
|
process
|
pih has guarddog issues dependency pih version result issues errors results typosquatting this package closely ressembles the following package names and might be a typosquatting attempt pid pip silent process execution location pih pih tools py code result subprocess run command stdin subprocess devnull stdout subprocess devnull stderr subprocess devnull message this package is silently executing an external binary redirecting stdout stderr and stdin to dev null path tmp tmpe pih
| 1
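
For context on the GuardDog finding above: a minimal Python sketch, assuming nothing about pih itself, of the exact pattern the silent-process-execution rule flags — an external command run with stdin, stdout and stderr all redirected to /dev/null, so the execution leaves no visible trace. The echo command here is purely illustrative.

```
import subprocess

# Illustrative command only; the flagged pattern is the redirection
# of all three standard streams to /dev/null.
result = subprocess.run(
    ["echo", "hello"],
    stdin=subprocess.DEVNULL,
    stdout=subprocess.DEVNULL,
    stderr=subprocess.DEVNULL,
)
print(result.returncode)  # 0 on success; the command itself printed nothing visible
```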
|
60,904
| 3,135,544,543
|
IssuesEvent
|
2015-09-10 15:41:34
|
ceylon/ceylon-ide-eclipse
|
https://api.github.com/repos/ceylon/ceylon-ide-eclipse
|
closed
|
NPE when the saving in the structured comparator.
|
bug high priority
|
```
java.lang.NullPointerException
at com.redhat.ceylon.eclipse.code.outline.CeylonStructureCreator.buildCompareTree(CeylonStructureCreator.java:102)
at com.redhat.ceylon.eclipse.code.outline.CeylonStructureCreator.createStructureComparator(CeylonStructureCreator.java:87)
at org.eclipse.compare.structuremergeviewer.StructureCreator.internalCreateStructure(StructureCreator.java:121)
at org.eclipse.compare.structuremergeviewer.StructureCreator.access$0(StructureCreator.java:109)
at org.eclipse.compare.structuremergeviewer.StructureCreator$1.run(StructureCreator.java:96)
at org.eclipse.swt.custom.BusyIndicator.showWhile(BusyIndicator.java:70)
at org.eclipse.compare.internal.Utilities.runInUIThread(Utilities.java:859)
at org.eclipse.compare.structuremergeviewer.StructureCreator.createStructure(StructureCreator.java:102)
at org.eclipse.compare.structuremergeviewer.StructureDiffViewer$StructureInfo.createStructure(StructureDiffViewer.java:155)
at org.eclipse.compare.structuremergeviewer.StructureDiffViewer$StructureInfo.refresh(StructureDiffViewer.java:133)
at org.eclipse.compare.structuremergeviewer.StructureDiffViewer$StructureInfo.setInput(StructureDiffViewer.java:104)
at org.eclipse.compare.structuremergeviewer.StructureDiffViewer.compareInputChanged(StructureDiffViewer.java:347)
at org.eclipse.compare.structuremergeviewer.StructureDiffViewer$2.run(StructureDiffViewer.java:74)
at org.eclipse.compare.structuremergeviewer.StructureDiffViewer$6.run(StructureDiffViewer.java:322)
at org.eclipse.swt.custom.BusyIndicator.showWhile(BusyIndicator.java:70)
at org.eclipse.compare.structuremergeviewer.StructureDiffViewer.compareInputChanged(StructureDiffViewer.java:319)
at org.eclipse.compare.structuremergeviewer.StructureDiffViewer$5.compareInputChanged(StructureDiffViewer.java:213)
at org.eclipse.compare.structuremergeviewer.DiffNode.fireChange(DiffNode.java:137)
at org.eclipse.egit.ui.internal.NotifiableDiffNode.fireChange(NotifiableDiffNode.java:26)
at org.eclipse.egit.ui.internal.GitCompareFileRevisionEditorInput.fireInputChange(GitCompareFileRevisionEditorInput.java:420)
at org.eclipse.egit.ui.internal.GitCompareFileRevisionEditorInput$InternalResourceSaveableComparison.fireInputChange(GitCompareFileRevisionEditorInput.java:604)
at org.eclipse.team.internal.ui.synchronize.LocalResourceSaveableComparison.performSave(LocalResourceSaveableComparison.java:143)
at org.eclipse.team.ui.mapping.SaveableComparison.doSave(SaveableComparison.java:49)
at org.eclipse.ui.Saveable.doSave(Saveable.java:216)
at org.eclipse.ui.internal.SaveableHelper.doSaveModel(SaveableHelper.java:355)
at org.eclipse.ui.internal.SaveableHelper$3.run(SaveableHelper.java:199)
at org.eclipse.ui.internal.SaveableHelper$5.run(SaveableHelper.java:283)
at org.eclipse.jface.operation.ModalContext.runInCurrentThread(ModalContext.java:466)
at org.eclipse.jface.operation.ModalContext.run(ModalContext.java:374)
at org.eclipse.ui.internal.WorkbenchWindow$13.run(WorkbenchWindow.java:2157)
at org.eclipse.swt.custom.BusyIndicator.showWhile(BusyIndicator.java:70)
at org.eclipse.ui.internal.WorkbenchWindow.run(WorkbenchWindow.java:2153)
at org.eclipse.ui.internal.SaveableHelper.runProgressMonitorOperation(SaveableHelper.java:291)
at org.eclipse.ui.internal.SaveableHelper.runProgressMonitorOperation(SaveableHelper.java:269)
at org.eclipse.ui.internal.SaveableHelper.saveModels(SaveableHelper.java:211)
at org.eclipse.ui.internal.SaveableHelper.savePart(SaveableHelper.java:146)
at org.eclipse.ui.internal.WorkbenchPage.saveSaveable(WorkbenchPage.java:3915)
at org.eclipse.ui.internal.WorkbenchPage.saveEditor(WorkbenchPage.java:3929)
at org.eclipse.ui.internal.handlers.SaveHandler.execute(SaveHandler.java:54)
```
|
1.0
|
NPE when the saving in the structured comparator. - ```
java.lang.NullPointerException
at com.redhat.ceylon.eclipse.code.outline.CeylonStructureCreator.buildCompareTree(CeylonStructureCreator.java:102)
at com.redhat.ceylon.eclipse.code.outline.CeylonStructureCreator.createStructureComparator(CeylonStructureCreator.java:87)
at org.eclipse.compare.structuremergeviewer.StructureCreator.internalCreateStructure(StructureCreator.java:121)
at org.eclipse.compare.structuremergeviewer.StructureCreator.access$0(StructureCreator.java:109)
at org.eclipse.compare.structuremergeviewer.StructureCreator$1.run(StructureCreator.java:96)
at org.eclipse.swt.custom.BusyIndicator.showWhile(BusyIndicator.java:70)
at org.eclipse.compare.internal.Utilities.runInUIThread(Utilities.java:859)
at org.eclipse.compare.structuremergeviewer.StructureCreator.createStructure(StructureCreator.java:102)
at org.eclipse.compare.structuremergeviewer.StructureDiffViewer$StructureInfo.createStructure(StructureDiffViewer.java:155)
at org.eclipse.compare.structuremergeviewer.StructureDiffViewer$StructureInfo.refresh(StructureDiffViewer.java:133)
at org.eclipse.compare.structuremergeviewer.StructureDiffViewer$StructureInfo.setInput(StructureDiffViewer.java:104)
at org.eclipse.compare.structuremergeviewer.StructureDiffViewer.compareInputChanged(StructureDiffViewer.java:347)
at org.eclipse.compare.structuremergeviewer.StructureDiffViewer$2.run(StructureDiffViewer.java:74)
at org.eclipse.compare.structuremergeviewer.StructureDiffViewer$6.run(StructureDiffViewer.java:322)
at org.eclipse.swt.custom.BusyIndicator.showWhile(BusyIndicator.java:70)
at org.eclipse.compare.structuremergeviewer.StructureDiffViewer.compareInputChanged(StructureDiffViewer.java:319)
at org.eclipse.compare.structuremergeviewer.StructureDiffViewer$5.compareInputChanged(StructureDiffViewer.java:213)
at org.eclipse.compare.structuremergeviewer.DiffNode.fireChange(DiffNode.java:137)
at org.eclipse.egit.ui.internal.NotifiableDiffNode.fireChange(NotifiableDiffNode.java:26)
at org.eclipse.egit.ui.internal.GitCompareFileRevisionEditorInput.fireInputChange(GitCompareFileRevisionEditorInput.java:420)
at org.eclipse.egit.ui.internal.GitCompareFileRevisionEditorInput$InternalResourceSaveableComparison.fireInputChange(GitCompareFileRevisionEditorInput.java:604)
at org.eclipse.team.internal.ui.synchronize.LocalResourceSaveableComparison.performSave(LocalResourceSaveableComparison.java:143)
at org.eclipse.team.ui.mapping.SaveableComparison.doSave(SaveableComparison.java:49)
at org.eclipse.ui.Saveable.doSave(Saveable.java:216)
at org.eclipse.ui.internal.SaveableHelper.doSaveModel(SaveableHelper.java:355)
at org.eclipse.ui.internal.SaveableHelper$3.run(SaveableHelper.java:199)
at org.eclipse.ui.internal.SaveableHelper$5.run(SaveableHelper.java:283)
at org.eclipse.jface.operation.ModalContext.runInCurrentThread(ModalContext.java:466)
at org.eclipse.jface.operation.ModalContext.run(ModalContext.java:374)
at org.eclipse.ui.internal.WorkbenchWindow$13.run(WorkbenchWindow.java:2157)
at org.eclipse.swt.custom.BusyIndicator.showWhile(BusyIndicator.java:70)
at org.eclipse.ui.internal.WorkbenchWindow.run(WorkbenchWindow.java:2153)
at org.eclipse.ui.internal.SaveableHelper.runProgressMonitorOperation(SaveableHelper.java:291)
at org.eclipse.ui.internal.SaveableHelper.runProgressMonitorOperation(SaveableHelper.java:269)
at org.eclipse.ui.internal.SaveableHelper.saveModels(SaveableHelper.java:211)
at org.eclipse.ui.internal.SaveableHelper.savePart(SaveableHelper.java:146)
at org.eclipse.ui.internal.WorkbenchPage.saveSaveable(WorkbenchPage.java:3915)
at org.eclipse.ui.internal.WorkbenchPage.saveEditor(WorkbenchPage.java:3929)
at org.eclipse.ui.internal.handlers.SaveHandler.execute(SaveHandler.java:54)
```
|
non_process
|
npe when the saving in the structured comparator java lang nullpointerexception at com redhat ceylon eclipse code outline ceylonstructurecreator buildcomparetree ceylonstructurecreator java at com redhat ceylon eclipse code outline ceylonstructurecreator createstructurecomparator ceylonstructurecreator java at org eclipse compare structuremergeviewer structurecreator internalcreatestructure structurecreator java at org eclipse compare structuremergeviewer structurecreator access structurecreator java at org eclipse compare structuremergeviewer structurecreator run structurecreator java at org eclipse swt custom busyindicator showwhile busyindicator java at org eclipse compare internal utilities runinuithread utilities java at org eclipse compare structuremergeviewer structurecreator createstructure structurecreator java at org eclipse compare structuremergeviewer structurediffviewer structureinfo createstructure structurediffviewer java at org eclipse compare structuremergeviewer structurediffviewer structureinfo refresh structurediffviewer java at org eclipse compare structuremergeviewer structurediffviewer structureinfo setinput structurediffviewer java at org eclipse compare structuremergeviewer structurediffviewer compareinputchanged structurediffviewer java at org eclipse compare structuremergeviewer structurediffviewer run structurediffviewer java at org eclipse compare structuremergeviewer structurediffviewer run structurediffviewer java at org eclipse swt custom busyindicator showwhile busyindicator java at org eclipse compare structuremergeviewer structurediffviewer compareinputchanged structurediffviewer java at org eclipse compare structuremergeviewer structurediffviewer compareinputchanged structurediffviewer java at org eclipse compare structuremergeviewer diffnode firechange diffnode java at org eclipse egit ui internal notifiablediffnode firechange notifiablediffnode java at org eclipse egit ui internal gitcomparefilerevisioneditorinput fireinputchange gitcomparefilerevisioneditorinput java at org eclipse egit ui internal gitcomparefilerevisioneditorinput internalresourcesaveablecomparison fireinputchange gitcomparefilerevisioneditorinput java at org eclipse team internal ui synchronize localresourcesaveablecomparison performsave localresourcesaveablecomparison java at org eclipse team ui mapping saveablecomparison dosave saveablecomparison java at org eclipse ui saveable dosave saveable java at org eclipse ui internal saveablehelper dosavemodel saveablehelper java at org eclipse ui internal saveablehelper run saveablehelper java at org eclipse ui internal saveablehelper run saveablehelper java at org eclipse jface operation modalcontext runincurrentthread modalcontext java at org eclipse jface operation modalcontext run modalcontext java at org eclipse ui internal workbenchwindow run workbenchwindow java at org eclipse swt custom busyindicator showwhile busyindicator java at org eclipse ui internal workbenchwindow run workbenchwindow java at org eclipse ui internal saveablehelper runprogressmonitoroperation saveablehelper java at org eclipse ui internal saveablehelper runprogressmonitoroperation saveablehelper java at org eclipse ui internal saveablehelper savemodels saveablehelper java at org eclipse ui internal saveablehelper savepart saveablehelper java at org eclipse ui internal workbenchpage savesaveable workbenchpage java at org eclipse ui internal workbenchpage saveeditor workbenchpage java at org eclipse ui internal handlers savehandler execute savehandler java
| 0
|
153,612
| 12,153,152,406
|
IssuesEvent
|
2020-04-25 00:54:48
|
gisellemartel/CONPASS
|
https://api.github.com/repos/gisellemartel/CONPASS
|
closed
|
AT-21 : (US4E - As a user, I want to be able to select/specify start and end rooms to and from any floor.)
|
Acceptance Test (SPRINT 4)
|
1. Launch the application
2. Tap on a building that has an interior mode (e.g. Hall building)
3. Once in interior mode, Click the direction button (blue button bottom right-hand side)
4. Enter a starting point (e.g H-103) (on the floor you are currently on) and a destination point (e.g. H-907)
5. A path will be drawn from H-103 to H-907, verify that by navigating to the 9th floor
|
1.0
|
AT-21 : (US4E - As a user, I want to be able to select/specify start and end rooms to and from any floor.) - 1. Launch the application
2. Tap on a building that has an interior mode (e.g. Hall building)
3. Once in interior mode, Click the direction button (blue button bottom right-hand side)
4. Enter a starting point (e.g H-103) (on the floor you are currently on) and a destination point (e.g. H-907)
5. A path will be drawn from H-103 to H-907, verify that by navigating to the 9th floor
|
non_process
|
at as a user i want to be able to select specify start and end rooms to and from any floor launch the application tap on a building that has an interior mode e g hall building once in interior mode click the direction button blue button bottom right hand side enter a starting point e g h on the floor you are currently on and a destination point e g h a path will be drawn from h to h verify that by navigating to the floor
| 0
|
16,882
| 22,162,389,749
|
IssuesEvent
|
2022-06-04 17:47:52
|
jgraley/inferno-cpp2v
|
https://api.github.com/repos/jgraley/inferno-cpp2v
|
reopened
|
Secure nuggetless forces behaviour
|
Developability Constraint Processing The Rules Symbolic
|
When carrying coupled node info into slaves, we end up adding forced variables to the CSP problem, where the values are taken from the master's search, and the values themselves, aka the X nodes, may have been removed from the X tree during master replace, _so that the values are not in the domain_.
We get away with this because the only constraints involving those forced variables are coupling constraints, and coupling constraints have expressions (and solves) that don't require any nuggets of knowledge (we just get equality and equivalence predicates)(but we may want to use the knowledge in other ways than grabbing nuggets).
However it feels like standing on the edge of a precipice. When this goes wrong (eg by solving to the wrong set operator, one that _does_ require knowledge) it looks like everything's wrong.
We should bake explicit stuff in to clarify intent, and to discover future issues more easily:
- Expressions should reveal what knowledge inputs need.
- Use a virtual getter, eg `GetRequiredKnowledgeLevel()` model after `GetRequiredVariables()`.
- Enum should be: `NONE`, `GENERAL`, `NUGGETS`, in strictly increasing strictness.
- Expressions requiring NONE should be evaluatable with a `nullptr` knowledge pointer in the kit.
- Constraints should determine _their_ knowledge requirement
- from their consistency expression, and make available as a getter.
- <strike>They should require that solves are no stricter (a reasonable assumption, _I think_).
- Mention this ticket by that check.</strike>
- Nope, it's perfectly normal to move from direct querying of nodes in sat expressions and then use general knowledge in the solve
- Split forces by domain membership.
- The forces that are passed into solvers need to be split into:
- domain forces (which will be the root plink) and
- non-domain forces (which will be master boundary plinks).
- <strike>Enum should be: `DOMAIN` and `ARBITRARY`
- Free vars are always `DOMAIN`
- Solver prevents `ARBITRARY` forces being involved in `NUGGETS`-requiring constraints.</strike>
- `domain_` and `arbitrary_` versions of the forces are passed from engine to solver
- This spells out the policy, which is a **rule**
|
1.0
|
Secure nuggetless forces behaviour - When carrying coupled node info into slaves, we end up adding forced variables to the CSP problem, where the values are taken from the master's search, and the values themselves, aka the X nodes, may have been removed from the X tree during master replace, _so that the values are not in the domain_.
We get away with this because the only constraints involving those forced variables are coupling constraints, and coupling constraints have expressions (and solves) that don't require any nuggets of knowledge (we just get equality and equivalence predicates)(but we may want to use the knowledge in other ways than grabbing nuggets).
However it feels like standing on the edge of a precipice. When this goes wrong (eg by solving to the wrong set operator, one that _does_ require knowledge) it looks like everything's wrong.
We should bake explicit stuff in to clarify intent, and to discover future issues more easily:
- Expressions should reveal what knowledge inputs need.
- Use a virtual getter, eg `GetRequiredKnowledgeLevel()` model after `GetRequiredVariables()`.
- Enum should be: `NONE`, `GENERAL`, `NUGGETS`, in strictly increasing strictness.
- Expressions requiring NONE should be evaluatable with a `nullptr` knowledge pointer in the kit.
- Constraints should determine _their_ knowledge requirement
- from their consistency expression, and make available as a getter.
- <strike>They should require that solves are no stricter (a reasonable assumption, _I think_).
- Mention this ticket by that check.</strike>
- Nope, it's perfectly normal to move from direct querying of nodes in sat expressions and then use general knowledge in the solve
- Split forces by domain membership.
- The forces that are passed into solvers need to be split into:
- domain forces (which will be the root plink) and
- non-domain forces (which will be master boundary plinks).
- <strike>Enum should be: `DOMAIN` and `ARBITRARY`
- Free vars are always `DOMAIN`
- Solver prevents `ARBITRARY` forces being involved in `NUGGETS`-requiring constraints.</strike>
- `domain_` and `arbitrary_` versions of the forces are passed from engine to solver
- This spells out the policy, which is a **rule**
|
process
|
secure nuggetless forces behaviour when carrying coupled node info into slaves we end up adding forced variables to the csp problem where the values are taken from the master s search and the values themselves aka the x nodes may have been removed from the x tree during master replace so that the values are not in the domain we get away with this because the only constraints involving those forced variables are coupling constraints and coupling constraints have expressions and solves that don t require any nuggets of knowledge we just get equality and equivalence predicates but we may want to use the knowledge in other ways than grabbing nuggets however it feels like standing on the edge of a precipice when this goes wrong eg by solving to the wrong set operator one that does require knowledge it looks like everything s wrong we should bake explicit stuff in to clarify intent and to discover future issues more easily expressions should reveal what knowledge inputs need use a virtual getter eg getrequiredknowledgelevel model after getrequiredvariables enum should be none general nuggets in strictly increasing strictness expressions requiring none should be evaluatable with a nullptr knowledge pointer in the kit constraints should determine their knowledge requirement from their consistency expression and make available as a getter they should require that solves are no stricter a reasonable assumption i think mention this ticket by that check nope it s perfectly normal to move from direct querying of nodes in sat expressions and then use general knowledge in the solve split forces by domain membership the forces that are passed into solvers need to be split into domain forces which will be the root plink and non domain forces which will be master boundary plinks enum should be domain and arbitrary free vars are always domain solver prevents arbitrary forces being involved in nuggets requiring constraints domain and arbitrary versions of the forces are passed from engine to solver this spells out the policy which is a rule
| 1
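
A minimal Python sketch of the enum-plus-getter scheme the ticket above proposes (the project itself is C++; the level names come from the ticket, everything else is an assumption):

```
from enum import IntEnum

class KnowledgeLevel(IntEnum):
    # Strictly increasing strictness, as the ticket requires.
    NONE = 0      # evaluatable with no knowledge object at all
    GENERAL = 1   # needs general knowledge, but no nuggets
    NUGGETS = 2   # needs nuggets of knowledge

class Expression:
    def get_required_knowledge_level(self) -> KnowledgeLevel:
        return KnowledgeLevel.NONE  # subclasses override as needed

class Constraint:
    def __init__(self, consistency_expression: Expression):
        self._expr = consistency_expression

    def get_required_knowledge_level(self) -> KnowledgeLevel:
        # A constraint is exactly as strict as its consistency expression.
        return self._expr.get_required_knowledge_level()
```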
|
2,851
| 5,809,988,110
|
IssuesEvent
|
2017-05-04 14:34:03
|
elm-community/literature-reviews
|
https://api.github.com/repos/elm-community/literature-reviews
|
opened
|
How do we deal with merged literature reviews?
|
process
|
This may be putting the cart *a little* before the horse, but what happens when we need to make changes to a literature review after the initial merge? Do we add the steward as a maintainer on the repository and assign them issues for change?
|
1.0
|
How do we deal with merged literature reviews? - This may be putting the cart *a little* before the horse, but what happens when we need to make changes to a literature review after the initial merge? Do we add the steward as a maintainer on the repository and assign them issues for change?
|
process
|
how do we deal with merged literature reviews this may be putting the cart a little before the horse but what happens when we need to make changes to a literature review after the initial merge do we add the steward as a maintainer on the repository and assign them issues for change
| 1
|
74,097
| 9,747,569,686
|
IssuesEvent
|
2019-06-03 14:39:39
|
jmoenig/Snap
|
https://api.github.com/repos/jmoenig/Snap
|
closed
|
No help available for Fill.
|
documentation
|
Tried in Edge and Chrome on Windows 10 and Chrome on Windows 7. Does not appear when clicking help. Other items' help appears.
|
1.0
|
No help available for Fill. - Tried in Edge and Chrome on Windows 10 and Chrome on Windows 7. Does not appear when clicking help. Other items' help appears.
|
non_process
|
no help available for fill tried in edge and chrome on windows and chrome on windows does not appear when clicking help other items help appears
| 0
|
12,386
| 14,900,262,681
|
IssuesEvent
|
2021-01-21 15:13:52
|
threefoldtech/js-sdk
|
https://api.github.com/repos/threefoldtech/js-sdk
|
reopened
|
VDC: silver size is not enough to deploy solution from marketplace, worker is <not ready>
|
process_wontfix type_bug
|
- just deployed a silver size vdc name : qatest1
workloads :

- and when try to deploy any solution from marketplace got this error


when listing nodes i got that the worker is not ready :
```
root@samir:~/.kube# kubectl get nodes
NAME STATUS ROLES AGE VERSION
k3os-5262 NotReady <none> 75m v1.19.4+k3s1
k3os-1499 Ready master 77m v1.19.4+k3s1
```
```
root@samir:~/.kube# kubectl get pods -A
NAMESPACE NAME READY STATUS RESTARTS AGE
k3os-system system-upgrade-controller-6c9f84cb79-6mswc 1/1 Running 0 76m
kube-system local-path-provisioner-7ff9579c6-86sb7 1/1 Running 0 76m
kube-system metrics-server-7b4f8b595-tqznw 1/1 Running 0 76m
kube-system coredns-66c464876b-cr46b 1/1 Running 0 76m
kube-system helm-install-traefik-mnhrz 0/1 Completed 0 76m
kube-system svclb-traefik-v9jv7 2/2 Running 0 74m
kube-system svclb-traefik-gcs4z 2/2 Running 0 74m
kube-system traefik-7f489654dc-fr6r4 1/1 Terminating 0 74m
kube-system traefik-7f489654dc-zkj52 1/1 Running 0 51m
```
|
1.0
|
VDC: silver size is not enough to deploy solution from marketplace, worker is <not ready> - - just deployed a silver size vdc name : qatest1
workloads :

- and when try to deploy any solution from marketplace got this error


when listing nodes i got that the worker is not ready :
```
root@samir:~/.kube# kubectl get nodes
NAME STATUS ROLES AGE VERSION
k3os-5262 NotReady <none> 75m v1.19.4+k3s1
k3os-1499 Ready master 77m v1.19.4+k3s1
```
```
root@samir:~/.kube# kubectl get pods -A
NAMESPACE NAME READY STATUS RESTARTS AGE
k3os-system system-upgrade-controller-6c9f84cb79-6mswc 1/1 Running 0 76m
kube-system local-path-provisioner-7ff9579c6-86sb7 1/1 Running 0 76m
kube-system metrics-server-7b4f8b595-tqznw 1/1 Running 0 76m
kube-system coredns-66c464876b-cr46b 1/1 Running 0 76m
kube-system helm-install-traefik-mnhrz 0/1 Completed 0 76m
kube-system svclb-traefik-v9jv7 2/2 Running 0 74m
kube-system svclb-traefik-gcs4z 2/2 Running 0 74m
kube-system traefik-7f489654dc-fr6r4 1/1 Terminating 0 74m
kube-system traefik-7f489654dc-zkj52 1/1 Running 0 51m
```
|
process
|
vdc silver size is not enough to deploy solution from marketplace worker is just deployed a silver size vdc name workloads and when try to deploy any solution from marketplace got this error when listing nodes i got that the worker is not ready root samir kube kubectl get nodes name status roles age version notready ready master root samir kube kubectl get pods a namespace name ready status restarts age system system upgrade controller running kube system local path provisioner running kube system metrics server tqznw running kube system coredns running kube system helm install traefik mnhrz completed kube system svclb traefik running kube system svclb traefik running kube system traefik terminating kube system traefik running
| 1
|
251,162
| 21,436,026,336
|
IssuesEvent
|
2022-04-24 02:12:57
|
DogDatesComp4350/DogDates
|
https://api.github.com/repos/DogDatesComp4350/DogDates
|
closed
|
Separate some unit tests from integration test
|
Testing
|
Separate some unit tests from integration test:
signup, login, update, getNextUser, like, dislike, match, and delete.
|
1.0
|
Separate some unit tests from integration test - Separate some unit tests from integration test:
signup, login, update, getNextUser, like, dislike, match, and delete.
|
non_process
|
separate some unit tests from integration test separate some unit tests from integration test signup login update getnextuser like dislike match and delete
| 0
|
10,392
| 13,197,672,621
|
IssuesEvent
|
2020-08-13 23:50:31
|
GoogleCloudPlatform/stackdriver-sandbox
|
https://api.github.com/repos/GoogleCloudPlatform/stackdriver-sandbox
|
closed
|
Convert emailservice instrumentation from OC to OTel
|
lang: python priority: p2 type: process
|
Part of #132
Telemetry instrumented using OpenCensus should be converted to OpenTelemetry now.
|
1.0
|
Convert emailservice instrumentation from OC to OTel - Part of #132
Telemetry instrumented using OpenCensus should be converted to OpenTelemetry now.
|
process
|
convert emailservice instrumentation from oc to otel part of telemetry instrumented using opencensus should be converted to opentelemetry now
| 1
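
For the record above, a minimal sketch of what the OpenTelemetry side of such a conversion looks like, using the Python SDK (the actual emailservice change would target its own exporter; the service and span names here are assumptions). Only documented opentelemetry-sdk calls are used:

```
from opentelemetry import trace
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor, ConsoleSpanExporter

# Wire up a tracer provider; a real service would export to a
# collector or Cloud Trace rather than the console.
provider = TracerProvider()
provider.add_span_processor(BatchSpanProcessor(ConsoleSpanExporter()))
trace.set_tracer_provider(provider)

tracer = trace.get_tracer("emailservice")  # name is illustrative

with tracer.start_as_current_span("send_email"):
    pass  # instrumented work goes here
```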
|
596,389
| 18,104,225,795
|
IssuesEvent
|
2021-09-22 17:19:30
|
NOAA-GSL/VxLegacyIngest
|
https://api.github.com/repos/NOAA-GSL/VxLegacyIngest
|
closed
|
Create an outline for Verification Paper
|
Priority: Blocker
|
---
Author Name: **jeffrey.a.hamilton** (jeffrey.a.hamilton)
Original Redmine Issue: 55263, https://vlab.ncep.noaa.gov/redmine/issues/55263
Original Date: 2018-09-18
Original Assignee: jeffrey.a.hamilton
---
Create an outline by next week for the paper. Report to the group
|
1.0
|
Create an outline for Verification Paper - ---
Author Name: **jeffrey.a.hamilton** (jeffrey.a.hamilton)
Original Redmine Issue: 55263, https://vlab.ncep.noaa.gov/redmine/issues/55263
Original Date: 2018-09-18
Original Assignee: jeffrey.a.hamilton
---
Create an outline by next week for the paper. Report to the group
|
non_process
|
create an outline for verification paper author name jeffrey a hamilton jeffrey a hamilton original redmine issue original date original assignee jeffrey a hamilton create an outline by next week for the paper report to the group
| 0
|
56,029
| 8,041,917,547
|
IssuesEvent
|
2018-07-31 06:02:29
|
terraform-providers/terraform-provider-azurerm
|
https://api.github.com/repos/terraform-providers/terraform-provider-azurerm
|
closed
|
Working sample should include ability to connect to a VM in Terraform.io
|
documentation service/virtual-machine
|
The examples in azurerm_virtual_machine in [Terraform.io](http://aka.ms/terraform) are incomplete as it doesn't create a public IP or network security to connect to the VM. Fix documentation to do the same.
|
1.0
|
Working sample should include ability to connect to a VM in Terraform.io - The examples in azurerm_virtual_machine in [Terraform.io](http://aka.ms/terraform) are incomplete as it doesn't create a public IP or network security to connect to the VM. Fix documentation to do the same.
|
non_process
|
working sample should include ability to connect to a vm in terraform io the examples in azurerm virtual machine in are incomplete as it doesn t create a public ip or network security to connect to the vm fix documentation to do the same
| 0
|
18,645
| 24,580,906,390
|
IssuesEvent
|
2022-10-13 15:32:27
|
GoogleCloudPlatform/fda-mystudies
|
https://api.github.com/repos/GoogleCloudPlatform/fda-mystudies
|
closed
|
[FHIR] Questionnaire resource > 'Type' should be 'open-choice' for the questions with response type = text choice + Other option with text entry allowed
|
Bug P1 Response datastore Process: Fixed Process: Tested dev
|
AR: Questionnaire resource > 'Type' is getting displayed as 'choice' for the questions with response type = text choice + Other option with text entry allowed
ER: Questionnaire resource > 'Type' should be 'open-choice' for the questions with response type = text choice + Other option with text entry allowed
|
2.0
|
[FHIR] Questionnaire resource > 'Type' should be 'open-choice' for the questions with response type = text choice + Other option with text entry allowed - AR: Questionnaire resource > 'Type' is getting displayed as 'choice' for the questions with response type = text choice + Other option with text entry allowed
ER: Questionnaire resource > 'Type' should be 'open-choice' for the questions with response type = text choice + Other option with text entry allowed
|
process
|
questionnaire resource type should be open choice for the questions with response type text choice other option with text entry allowed ar questionnaire resource type is getting displayed as choice for the questions with response type text choice other option with text entry allowed er questionnaire resource type should be open choice for the questions with response type text choice other option with text entry allowed
| 1
|
343,391
| 24,769,607,429
|
IssuesEvent
|
2022-10-23 00:52:33
|
DBrueberg/gold-briefing
|
https://api.github.com/repos/DBrueberg/gold-briefing
|
opened
|
Design HyrailOperatingChecklist Component
|
documentation
|
Design the HyrailOperatingChecklist Component and create a wireframe.
When complete update Issue #3.
|
1.0
|
Design HyrailOperatingChecklist Component - Design the HyrailOperatingChecklist Component and create a wireframe.
When complete update Issue #3.
|
non_process
|
design hyrailoperatingchecklist component design the hyrailoperatingchecklist component and create a wireframe when complete update issue
| 0
|
20,293
| 26,930,873,197
|
IssuesEvent
|
2023-02-07 16:44:27
|
Psyderalis/DEV002-md-links
|
https://api.github.com/repos/Psyderalis/DEV002-md-links
|
closed
|
First validations up to reading the md file
|
commonJS modules File system callbacks promises process
|
- [x] path validation
- [x] option validation
- [x] absolute or relative path validation
- [x] resolve relative path to absolute
- [x] directory validation
- [x] md file validation
|
1.0
|
First validations up to reading the md file - - [x] path validation
- [x] option validation
- [x] absolute or relative path validation
- [x] resolve relative path to absolute
- [x] directory validation
- [x] md file validation
|
process
|
first validations up to reading the md file path validation option validation absolute or relative path validation resolve relative path to absolute directory validation md file validation
| 1
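
A minimal Python sketch of the validation steps listed in the record above (the md-links project itself is Node.js/CommonJS; this only mirrors the checklist):

```
from pathlib import Path

def validate_md_path(raw: str) -> Path:
    """Mirror of the checklist: validate the path, resolve relative
    to absolute, reject directories, accept only .md files."""
    p = Path(raw)
    if not p.is_absolute():
        p = p.resolve()            # relative -> absolute
    if not p.exists():
        raise FileNotFoundError(f"no such path: {p}")
    if p.is_dir():
        raise IsADirectoryError(f"expected a file, got a directory: {p}")
    if p.suffix.lower() != ".md":
        raise ValueError(f"not a markdown file: {p}")
    return p
```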
|
131,030
| 18,214,478,963
|
IssuesEvent
|
2021-09-30 01:16:07
|
mgh3326/hot_deal_alarm_api
|
https://api.github.com/repos/mgh3326/hot_deal_alarm_api
|
opened
|
CVE-2021-37136 (High) detected in netty-codec-4.1.48.Final.jar
|
security vulnerability
|
## CVE-2021-37136 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>netty-codec-4.1.48.Final.jar</b></p></summary>
<p>Netty is an asynchronous event-driven network application framework for
rapid development of maintainable high performance protocol servers and
clients.</p>
<p>Library home page: <a href="https://netty.io/">https://netty.io/</a></p>
<p>Path to dependency file: hot_deal_alarm_api/build.gradle</p>
<p>Path to vulnerable library: /root/.gradle/caches/modules-2/files-2.1/io.netty/netty-codec/4.1.48.Final/3142078325d745228da9d6d1f6f9931c63aaba16/netty-codec-4.1.48.Final.jar,/root/.gradle/caches/modules-2/files-2.1/io.netty/netty-codec/4.1.48.Final/3142078325d745228da9d6d1f6f9931c63aaba16/netty-codec-4.1.48.Final.jar</p>
<p>
Dependency Hierarchy:
- spring-boot-starter-data-redis-2.2.6.RELEASE.jar (Root Library)
- lettuce-core-5.2.2.RELEASE.jar
- netty-handler-4.1.48.Final.jar
- :x: **netty-codec-4.1.48.Final.jar** (Vulnerable Library)
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
The Bzip2 decompression decoder function doesn't allow setting size restrictions on the decompressed output data (which affects the allocation size used during decompression).
All users of Bzip2Decoder are affected. The malicious input can trigger an OOME and so a DoS attack
<p>Publish Date: 2021-07-21
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-37136>CVE-2021-37136</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/advisories/GHSA-grg4-wf29-r9vv">https://github.com/advisories/GHSA-grg4-wf29-r9vv</a></p>
<p>Release Date: 2021-07-21</p>
<p>Fix Resolution: io.netty:netty-codec:4.1.68.Final</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
True
|
CVE-2021-37136 (High) detected in netty-codec-4.1.48.Final.jar - ## CVE-2021-37136 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>netty-codec-4.1.48.Final.jar</b></p></summary>
<p>Netty is an asynchronous event-driven network application framework for
rapid development of maintainable high performance protocol servers and
clients.</p>
<p>Library home page: <a href="https://netty.io/">https://netty.io/</a></p>
<p>Path to dependency file: hot_deal_alarm_api/build.gradle</p>
<p>Path to vulnerable library: /root/.gradle/caches/modules-2/files-2.1/io.netty/netty-codec/4.1.48.Final/3142078325d745228da9d6d1f6f9931c63aaba16/netty-codec-4.1.48.Final.jar,/root/.gradle/caches/modules-2/files-2.1/io.netty/netty-codec/4.1.48.Final/3142078325d745228da9d6d1f6f9931c63aaba16/netty-codec-4.1.48.Final.jar</p>
<p>
Dependency Hierarchy:
- spring-boot-starter-data-redis-2.2.6.RELEASE.jar (Root Library)
- lettuce-core-5.2.2.RELEASE.jar
- netty-handler-4.1.48.Final.jar
- :x: **netty-codec-4.1.48.Final.jar** (Vulnerable Library)
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
The Bzip2 decompression decoder function doesn't allow setting size restrictions on the decompressed output data (which affects the allocation size used during decompression).
All users of Bzip2Decoder are affected. The malicious input can trigger an OOME and so a DoS attack
<p>Publish Date: 2021-07-21
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-37136>CVE-2021-37136</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/advisories/GHSA-grg4-wf29-r9vv">https://github.com/advisories/GHSA-grg4-wf29-r9vv</a></p>
<p>Release Date: 2021-07-21</p>
<p>Fix Resolution: io.netty:netty-codec:4.1.68.Final</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
non_process
|
cve high detected in netty codec final jar cve high severity vulnerability vulnerable library netty codec final jar netty is an asynchronous event driven network application framework for rapid development of maintainable high performance protocol servers and clients library home page a href path to dependency file hot deal alarm api build gradle path to vulnerable library root gradle caches modules files io netty netty codec final netty codec final jar root gradle caches modules files io netty netty codec final netty codec final jar dependency hierarchy spring boot starter data redis release jar root library lettuce core release jar netty handler final jar x netty codec final jar vulnerable library vulnerability details the decompression decoder function doesn t allow setting size restrictions on the decompressed output data which affects the allocation size used during decompression all users of are affected the malicious input can trigger an oome and so a dos attack publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution io netty netty codec final step up your open source security game with whitesource
| 0
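
The Bzip2 issue above is a decompression bomb: no cap on how much output an attacker-supplied payload may expand to. Netty's fix is in Java; as a sketch of the mitigation concept only, Python's standard-library bz2 module can cap decompressed output like this:

```
import bz2

MAX_OUTPUT = 1 << 20  # 1 MiB cap; an assumed limit, tune per use case

def bounded_bunzip2(compressed: bytes) -> bytes:
    decompressor = bz2.BZ2Decompressor()
    out = decompressor.decompress(compressed, max_length=MAX_OUTPUT)
    if not decompressor.eof:
        # Either more output than the cap, or a truncated stream.
        raise ValueError("decompressed output exceeds limit or stream incomplete")
    return out

print(len(bounded_bunzip2(bz2.compress(b"x" * 100))))  # 100
```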
|
69,714
| 8,445,182,631
|
IssuesEvent
|
2018-10-18 20:41:56
|
phetsims/energy-forms-and-changes
|
https://api.github.com/repos/phetsims/energy-forms-and-changes
|
closed
|
Add new wire artwork
|
design:artwork design:polish
|
@arouinfar has created some nice new wires for the different energy users, so i'll add and integrate them here. The reason for this was to fix the mismatching gradient that came from flipping one of the original wire images for the fan, see https://github.com/phetsims/energy-forms-and-changes/issues/76
|
2.0
|
Add new wire artwork - @arouinfar has created some nice new wires for the different energy users, so i'll add and integrate them here. The reason for this was to fix the mismatching gradient that came from flipping one of the original wire images for the fan, see https://github.com/phetsims/energy-forms-and-changes/issues/76
|
non_process
|
add new wire artwork arouinfar has created some nice new wires for the different energy users so i ll add and integrate them here the reason for this was to fix the mismatching gradient that came from flipping one of the original wire images for the fan see
| 0
|
93,684
| 11,798,652,215
|
IssuesEvent
|
2020-03-18 14:43:23
|
MozillaFoundation/Design
|
https://api.github.com/repos/MozillaFoundation/Design
|
opened
|
Create Thank You Card Template for Staff Anniversaries
|
design
|
Placeholder ticket to track work for creating a Thank you card template for staff anniversaries.
Stakeholder interview with Mandy [here](https://docs.google.com/document/d/1qBGP2EXrrehJoigm1CpT3zNIR8YtVMcE5XVX32Jp4wg/edit?usp=sharing)
cc: @kristinashu
|
1.0
|
Create Thank You Card Template for Staff Anniversaries - Placeholder ticket to track work for creating a Thank you card template for staff anniversaries.
Stakeholder interview with Mandy [here](https://docs.google.com/document/d/1qBGP2EXrrehJoigm1CpT3zNIR8YtVMcE5XVX32Jp4wg/edit?usp=sharing)
cc: @kristinashu
|
non_process
|
create thank you card template for staff anniversaries placeholder ticket to track work for creating a thank you card template for staff anniversaries stakeholder interview with mandy cc kristinashu
| 0
|
20,081
| 2,622,184,446
|
IssuesEvent
|
2015-03-04 00:20:15
|
byzhang/signal-collect
|
https://api.github.com/repos/byzhang/signal-collect
|
closed
|
Implement termination based on max. runtime
|
1.0 auto-migrated Priority-High Type-Enhancement
|
```
The execute function on the graph should properly terminate if the maximum
runtime is exceeded. Functionality needs an integration test.
```
Original issue reported on code.google.com by `philip.stutz` on 10 Oct 2011 at 11:56
|
1.0
|
Implement termination based on max. runtime - ```
The execute function on the graph should properly terminate if the maximum
runtime is exceeded. Functionality needs an integration test.
```
Original issue reported on code.google.com by `philip.stutz` on 10 Oct 2011 at 11:56
|
non_process
|
implement termination based on max runtime the execute function on the graph should properly terminate if the maximum runtime is exceeded functionality needs an integration test original issue reported on code google com by philip stutz on oct at
| 0
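
A minimal Python sketch of the behaviour the ticket above asks for — terminate an execute loop once a wall-clock budget is spent (Signal/Collect itself is Scala; only the termination check is illustrated):

```
import time

def execute(step, max_runtime_s: float) -> int:
    """Run step() until it returns False or the budget is spent;
    returns how many steps completed."""
    deadline = time.monotonic() + max_runtime_s
    steps = 0
    while time.monotonic() < deadline:
        if not step():
            break  # normal convergence
        steps += 1
    return steps

# A step that never converges is still cut off by the budget:
print(execute(lambda: True, max_runtime_s=0.05))  # positive count, bounded by the budget
```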
|
4,653
| 7,495,777,941
|
IssuesEvent
|
2018-04-08 01:13:35
|
gkiar/reading
|
https://api.github.com/repos/gkiar/reading
|
closed
|
Paper: testname
|
processing
|
URL: [http://testurl.io](http://testurl.io)
## This paper does...
testdo
## This paper does not...
test note
## Other comments?
it works!
|
1.0
|
Paper: testname - URL: [http://testurl.io](http://testurl.io)
## This paper does...
testdo
## This paper does not...
test note
## Other comments?
it works!
|
process
|
paper testname url this paper does testdo this paper does not test note other comments it works
| 1
|
6,679
| 9,797,474,441
|
IssuesEvent
|
2019-06-11 10:01:37
|
ESMValGroup/ESMValTool
|
https://api.github.com/repos/ESMValGroup/ESMValTool
|
closed
|
Provide error code/exit code if data set is not cmor compliant, etc.
|
enhancement preprocessor
|
Backend should provide error information/code on exit, why it failed.
Examples:
- File not found
- File not readable
- File empty
- File not cmor compliant
|
1.0
|
Provide error code/exit code if data set is not cmor compliant, etc. - Backend should provide error information/code on exit, why it failed.
Examples:
- File not found
- File not readable
- File empty
- File not cmor compliant
|
process
|
provide error code exit code if data set is not cmor compliant etc backend should provide error information code on exit why it failed examples file not found file not readable file empty file not cmor compliant
| 1
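
A minimal sketch of the requested behaviour in Python — distinct exit codes per failure mode (the code values and names here are hypothetical, not ESMValTool's actual ones):

```
import sys

# Hypothetical code assignments; a real tool would document these.
EXIT_NOT_FOUND = 2
EXIT_NOT_READABLE = 3
EXIT_EMPTY = 4
EXIT_NOT_CMOR = 5

def fail(code: int, message: str) -> None:
    print(f"error: {message}", file=sys.stderr)
    sys.exit(code)

# e.g. fail(EXIT_NOT_CMOR, "tas_input.nc is not CMOR compliant")
```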
|
2,873
| 5,831,600,124
|
IssuesEvent
|
2017-05-08 19:45:02
|
pelias/pelias
|
https://api.github.com/repos/pelias/pelias
|
closed
|
Handle P.O. Box addresses properly
|
bug processed
|
Disregard the PO BOX component of addresses and focus on the admin parts specified to return the region the box lies within.
|
1.0
|
Handle P.O. Box addresses properly - Disregard the PO BOX component of addresses and focus on the admin parts specified to return the region the box lies within.
|
process
|
handle p o box addresses properly disregard the po box component of addresses and focus on the admin parts specified to return the region the box lies within
| 1
|
438,106
| 12,611,167,845
|
IssuesEvent
|
2020-06-12 07:01:44
|
AbsaOSS/enceladus
|
https://api.github.com/repos/AbsaOSS/enceladus
|
opened
|
Improve ConfigReader
|
feature priority: medium refactoring
|
## Background
Methods defined in `ConfigReader` depend on `Config` and instantiate it using
```
private val config: Config = ConfigFactory.load()
```
## Feature
**Suggestion by @benedeki:** What about making it a class (with config input of constructor) and the companion object implementing the class. Like:
```
class ConfigReader(private val config: Config) {
}
object ConfigReader extends ConfigReader(ConfigFactory.load())
```
Would help testing among other.
|
1.0
|
Improve ConfigReader - ## Background
Methods defined in `ConfigReader` depend on `Config` and instantiate it using
```
private val config: Config = ConfigFactory.load()
```
## Feature
**Suggestion by @benedeki:** What about making it a class (with config input of constructor) and the companion object implementing the class. Like:
```
class ConfigReader(private val config: Config) {
}
object ConfigReader extends ConfigReader(ConfigFactory.load())
```
Would help testing among other.
|
non_process
|
improve configreader background methods defined in configreader depend on config and instantiate it using private val config config configfactory load feature suggestion by benedeki what about making it a class with config input of constructor and the companion object implementing the class like class configreader private val config config object configreader extends configreader configfactory load would help testing among other
| 0
|
411,316
| 12,016,845,923
|
IssuesEvent
|
2020-04-10 17:01:15
|
AY1920S2-CS2103-W14-3/main
|
https://api.github.com/repos/AY1920S2-CS2103-W14-3/main
|
closed
|
As a busy university student with a hectic work schedule I want to be suggested places to eat with my friends based on "KIV" notes for certain restaurants
|
priority.Medium type.Enhancement
|
so that I can choose a gathering place without much hassle.
|
1.0
|
As a busy university student with a hectic work schedule I want to be suggested places to eat with my friends based on "KIV" notes for certain restaurants - so that I can choose a gathering place without much hassle.
|
non_process
|
as a busy university student with a hectic work schedule i want to be suggested places to eat with my friends based on kiv notes for certain restaurants so that i can choose a gathering place without much hassle
| 0
|
719,380
| 24,757,942,867
|
IssuesEvent
|
2022-10-21 19:47:44
|
ramp4-pcar4/ramp4-pcar4
|
https://api.github.com/repos/ramp4-pcar4/ramp4-pcar4
|
closed
|
Add a sample using our esm library file
|
flavour: enhancement priority: nice type: preventative
|
Add a sample html/starter script to the `public/samples` folder that uses an ESM version of our library file.
You'll need to run `npm run build`, then manually copy the generated file `dist/samples/lib/ramp.es.js` to `public/samples/lib/ramp.es.js` and then load that file in a `<script type="module" src="./lib/ramp.es.js"></script>`
|
1.0
|
Add a sample using our esm library file - Add a sample html/starter script to the `public/samples` folder that uses an ESM version of our library file.
You'll need to run `npm run build`, then manually copy the generated file `dist/samples/lib/ramp.es.js` to `public/samples/lib/ramp.es.js` and then load that file in a `<script type="module" src="./lib/ramp.es.js"></script>`
|
non_process
|
add a sample using our esm library file add a sample html starter script to the public samples folder that uses an esm version of our library file you ll need to run npm run build then manually copy the generated file dist samples lib ramp es js to public samples lib ramp es js and then load that file in a
| 0
|
239,825
| 26,232,132,817
|
IssuesEvent
|
2023-01-05 01:50:08
|
ronnyh1/mithril.js
|
https://api.github.com/repos/ronnyh1/mithril.js
|
opened
|
CVE-2022-21680 (High) detected in marked-0.7.0.tgz
|
security vulnerability
|
## CVE-2022-21680 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>marked-0.7.0.tgz</b></p></summary>
<p>A markdown parser built for speed</p>
<p>Library home page: <a href="https://registry.npmjs.org/marked/-/marked-0.7.0.tgz">https://registry.npmjs.org/marked/-/marked-0.7.0.tgz</a></p>
<p>Path to dependency file: /package.json</p>
<p>Path to vulnerable library: /node_modules/marked/package.json</p>
<p>
Dependency Hierarchy:
- :x: **marked-0.7.0.tgz** (Vulnerable Library)
<p>Found in base branch: <b>next</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Marked is a markdown parser and compiler. Prior to version 4.0.10, the regular expression `block.def` may cause catastrophic backtracking against some strings and lead to a regular expression denial of service (ReDoS). Anyone who runs untrusted markdown through a vulnerable version of marked and does not use a worker with a time limit may be affected. This issue is patched in version 4.0.10. As a workaround, avoid running untrusted markdown through marked or run marked on a worker thread and set a reasonable time limit to prevent draining resources.
<p>Publish Date: 2022-01-14
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2022-21680>CVE-2022-21680</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/advisories/GHSA-rrrm-qjm4-v8hf">https://github.com/advisories/GHSA-rrrm-qjm4-v8hf</a></p>
<p>Release Date: 2022-01-14</p>
<p>Fix Resolution: 4.0.10</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
True
|
CVE-2022-21680 (High) detected in marked-0.7.0.tgz - ## CVE-2022-21680 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>marked-0.7.0.tgz</b></p></summary>
<p>A markdown parser built for speed</p>
<p>Library home page: <a href="https://registry.npmjs.org/marked/-/marked-0.7.0.tgz">https://registry.npmjs.org/marked/-/marked-0.7.0.tgz</a></p>
<p>Path to dependency file: /package.json</p>
<p>Path to vulnerable library: /node_modules/marked/package.json</p>
<p>
Dependency Hierarchy:
- :x: **marked-0.7.0.tgz** (Vulnerable Library)
<p>Found in base branch: <b>next</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Marked is a markdown parser and compiler. Prior to version 4.0.10, the regular expression `block.def` may cause catastrophic backtracking against some strings and lead to a regular expression denial of service (ReDoS). Anyone who runs untrusted markdown through a vulnerable version of marked and does not use a worker with a time limit may be affected. This issue is patched in version 4.0.10. As a workaround, avoid running untrusted markdown through marked or run marked on a worker thread and set a reasonable time limit to prevent draining resources.
<p>Publish Date: 2022-01-14
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2022-21680>CVE-2022-21680</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/advisories/GHSA-rrrm-qjm4-v8hf">https://github.com/advisories/GHSA-rrrm-qjm4-v8hf</a></p>
<p>Release Date: 2022-01-14</p>
<p>Fix Resolution: 4.0.10</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
non_process
|
cve high detected in marked tgz cve high severity vulnerability vulnerable library marked tgz a markdown parser built for speed library home page a href path to dependency file package json path to vulnerable library node modules marked package json dependency hierarchy x marked tgz vulnerable library found in base branch next vulnerability details marked is a markdown parser and compiler prior to version the regular expression block def may cause catastrophic backtracking against some strings and lead to a regular expression denial of service redos anyone who runs untrusted markdown through a vulnerable version of marked and does not use a worker with a time limit may be affected this issue is patched in version as a workaround avoid running untrusted markdown through marked or run marked on a worker thread and set a reasonable time limit to prevent draining resources publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution step up your open source security game with mend
| 0
|
769,107
| 26,993,488,394
|
IssuesEvent
|
2023-02-09 22:04:38
|
ArctosDB/arctos
|
https://api.github.com/repos/ArctosDB/arctos
|
opened
|
Arctos Report: DMNS Prep Sheet
|
Priority-High (Needed for work) function-Reports
|
**Description of the report** - DMNS Prep Sheet - on letter size page: top half has specimen record details, rest for prep lab entry. 1 page per specimen record
**SQL used in ColdFusion report** - Using the same SQL from DMNS label
Suggest to start with paged.js parameters to ensure 8X11 page size and other elements since the Prep Sheet has a lot of boxes and fields to fill by hand
|
1.0
|
Arctos Report: DMNS Prep Sheet - **Description of the report** - DMNS Prep Sheet - on letter size page: top half has specimen record details, rest for prep lab entry. 1 page per specimen record
**SQL used in ColdFusion report** - Using the same SQL from DMNS label
Suggest to start with paged.js parameters to ensure 8X11 page size and other elements since the Prep Sheet has a lot of boxes and fields to fill by hand
|
non_process
|
arctos report dmns prep sheet description of the report dmns prep sheet on letter size page top half has specimen record details rest for prep lab entry page per specimen record sql used in coldfusion report using the same sql from dmns label suggest to start with paged js parameters to ensure page size and other elements since the prep sheet has a lot of boxes and fields to fill by hand
| 0
|
14,366
| 3,392,472,846
|
IssuesEvent
|
2015-11-30 19:45:28
|
M-Zuber/VirtualGabbai
|
https://api.github.com/repos/M-Zuber/VirtualGabbai
|
closed
|
Tests aborted
|
bug Testing
|
If a test run is aborted, the clean-up script doesn't run. Therefore it should be moved/copied so that it also runs before every test class (see the sketch below).
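A Python analogue of that suggestion; the project itself is .NET, so this sketch and all of its names are illustrative assumptions, not the project's code:
```python
import unittest

class DatabaseTests(unittest.TestCase):
    @classmethod
    def setUpClass(cls):
        # Clean up *before* the class runs as well, so state left
        # behind by an aborted previous run cannot leak in.
        cls.clean_up()

    @classmethod
    def clean_up(cls):
        pass  # hypothetical: reset the test database / fixtures here

    def test_placeholder(self):
        self.assertTrue(True)
```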
|
1.0
|
Tests aborted - If a test run is aborted, the clean-up script doesn't run. Therefore it should be moved/copied so that it also runs before every test class.
|
non_process
|
tests aborted if a test run is aborted then the clean up script doesn t run therefore it should be moved copied to run also before every class of tests
| 0
|
63,713
| 12,369,858,437
|
IssuesEvent
|
2020-05-18 15:52:34
|
stan-dev/cmdstanr
|
https://api.github.com/repos/stan-dev/cmdstanr
|
closed
|
Silence Windows warnings with RTools 4.0
|
internal-code
|
We should add `-Wno-int-in-bool-context -Wno-attributes` to the users compile flags if they are on Windows.
Reason: https://github.com/stan-dev/math/issues/1864
|
1.0
|
Silence Windows warnings with RTools 4.0 - We should add `-Wno-int-in-bool-context -Wno-attributes` to the users compile flags if they are on Windows.
Reason: https://github.com/stan-dev/math/issues/1864
|
non_process
|
silence windows warnings with rtools we should add wno int in bool context wno attributes to the users compile flags if they are on windows reason
| 0
|
17,950
| 23,947,186,962
|
IssuesEvent
|
2022-09-12 08:25:06
|
GoogleCloudPlatform/dotnet-docs-samples
|
https://api.github.com/repos/GoogleCloudPlatform/dotnet-docs-samples
|
closed
|
[Livestream]: tests are flaky.
|
type: process priority: p1 samples api: livestream
|
Quota is being reached on channels, so maybe channels 2 hours old need to be deleted; currently, channels 24 hours old are being deleted.
@irataxy this is just so we don't forget.
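A small Python sketch of the proposed tightening, sweeping channels older than 2 hours instead of 24 (the threshold, names, and shape are illustrative assumptions):
```python
from datetime import datetime, timedelta, timezone

MAX_AGE = timedelta(hours=2)  # proposed; the current sweep uses 24 hours

def is_stale(created_at, now=None):
    # Channels older than MAX_AGE should be deleted to stay under quota.
    now = now or datetime.now(timezone.utc)
    return now - created_at > MAX_AGE
```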
|
1.0
|
[Livestream]: tests are flaky. - Quota is being reached on channels, so maybe channels 2 hours old need to be deleted; currently, channels 24 hours old are being deleted.
@irataxy this is just so we don't forget.
|
process
|
tests are flaky quota is being reached on channels so maybe channels hours old need to be deleted currently channels old are being deleted irataxy this is just so we don t forget
| 1
|
674,903
| 23,069,500,008
|
IssuesEvent
|
2022-07-25 16:38:20
|
ChainSafe/forest
|
https://api.github.com/repos/ChainSafe/forest
|
closed
|
Restore the IPLD walk tests
|
Priority: 4 - Low Maintenance Ready
|
**Issue summary**
<!-- A clear and concise description of what the task is. -->
The `forest_ipld` crate has a set of test vectors for querying IPLD documents but the tests are currently disabled. The `submodules_tests` feature flag isn't appropriate for the IPLD tests and should be removed from `ipld/tests/walk_tests.rs`. This module has not been compiled in quite a while so some elbow grease may be needed.
**Other information and links**
<!-- Add any other context or screenshots about the issue here. -->
<!-- Thank you 🙏 -->
|
1.0
|
Restore the IPLD walk tests - **Issue summary**
<!-- A clear and concise description of what the task is. -->
The `forest_ipld` crate has a set of test vectors for querying IPLD documents but the tests are currently disabled. The `submodules_tests` feature flag isn't appropriate for the IPLD tests and should be removed from `ipld/tests/walk_tests.rs`. This module has not been compiled in quite a while so some elbow grease may be needed.
**Other information and links**
<!-- Add any other context or screenshots about the issue here. -->
<!-- Thank you 🙏 -->
|
non_process
|
restore the ipld walk tests issue summary the forest ipld crate has a set of test vectors for querying ipld documents but the tests are currently disabled the submodules tests feature flag isn t appropriate for the ipld tests and should be removed from ipld tests walk tests rs this module has not been compiled in quite a while so some elbow grease may be needed other information and links
| 0
|
176,818
| 6,565,529,206
|
IssuesEvent
|
2017-09-08 08:43:31
|
itsyouonline/identityserver
|
https://api.github.com/repos/itsyouonline/identityserver
|
reopened
|
IYO GUID feature
|
priority_critical state_question type_feature
|
- only needed from API
- allow any APP (API client linked to organization) to request a unique GUID for a user or organization
- only this APP can request which user is behind the GUID
- each ORG can ask for a max of 10 of these unique GUIDs for 1 specific user or org (a sketch of these semantics follows the use cases below)
- there is also feature which allows APP to store some data behind this GUID in IYO, this is just binary info & only means something for the APP (max size = 1kb), only APP can retrieve if it also knows the GUID.
## e.g. use case tokens in TF app
- we register info on blockchain e.g. tierion.com but we don't want to put email or other identifiable ID
- this allows the APP (which is org with API client) to create a link between GUID and a user or organization and register that way
- if ever needed this APP can retrieve the user ID but only this APP
## use case for the data
- keep track of metadata relevant to this user
- e.g. imagine app is storing coin info in the blockchain, it needs to be able to remember previous registrations for this user (or a link to the previous one so the app can walk back). can do this as e.g. a json in this data field
## other use case
- register info in Zero-Tlog server, when Tlog server finishes, register the hash & other relevant info in tierion (encrypted using the IYO GUID) which is only known by the APP
- this allows the APP to keep track of lots of info which is stored in Zero-STOR through TLOG server & can be used for full audit trails
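A hypothetical Python sketch of the requested semantics: a per-subject quota of 10 GUIDs, opaque payloads capped at 1 KB, and resolution that the real design would restrict to the issuing APP. Every name and structure here is an assumption, not an existing IYO API:
```python
import uuid

class GuidVault:
    """Hypothetical sketch of the requested GUID semantics."""

    MAX_GUIDS_PER_SUBJECT = 10   # "each ORG can ask for a max of 10 ..."
    MAX_DATA_BYTES = 1024        # "max size = 1kb"

    def __init__(self):
        self._by_guid = {}       # guid -> (subject, data)
        self._by_subject = {}    # subject -> [guid, ...]

    def issue(self, subject):
        guids = self._by_subject.setdefault(subject, [])
        if len(guids) >= self.MAX_GUIDS_PER_SUBJECT:
            raise ValueError("per-subject GUID quota reached")
        guid = str(uuid.uuid4())
        self._by_guid[guid] = (subject, b"")
        guids.append(guid)
        return guid

    def store(self, guid, data: bytes):
        # Opaque binary payload, only meaningful to the APP.
        if len(data) > self.MAX_DATA_BYTES:
            raise ValueError("payload exceeds 1 KB")
        subject, _ = self._by_guid[guid]
        self._by_guid[guid] = (subject, data)

    def resolve(self, guid):
        # In the real design only the issuing APP may do this.
        return self._by_guid[guid][0]
```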
|
1.0
|
IYO GUID feature - - only needed from API
- allow any APP (API client linked to organization) to request a unique GUID for a user or organization
- only this APP can request which user is behind the GUID
- each ORG can ask for a max of 10 of these unique GUIDs for 1 specific user or org
- there is also feature which allows APP to store some data behind this GUID in IYO, this is just binary info & only means something for the APP (max size = 1kb), only APP can retrieve if it also knows the GUID.
## e.g. use case tokens in TF app
- we register info on blockchain e.g. tierion.com but we don't want to put email or other identifiable ID
- this allows the APP (which is org with API client) to create a link between GUID and a user or organization and register that way
- if ever needed this APP can retrieve the user ID but only this APP
## use case for the data
- keep track of metadata relevant to this user
- e.g. imagine app is storing coin info in the blockchain, it needs to be able to remember previous registrations for this user (or a link to the previous one so the app can walk back). can do this as e.g. a json in this data field
## other use case
- register info in Zero-Tlog server, when Tlog server finishes, register the hash & other relevant info in tierion (encrypted using the IYO GUID) which is only known by the APP
- this allows the APP to keep track of lots of info which is stored in Zero-STOR through TLOG server & can be used for full audit trails
|
non_process
|
iyo guid feature only needed from api allow any app api client linked to organization to request a unique guid for a user or organization only this app can request which user is behind the guid each org can ask max of these unique guid s for specific user or org there is also feature which allows app to store some data behind this guid in iyo this is just binary info only means something for the app max size only app can retrieve if it also knows the guid e g use case tokens in tf app we register info on blockchain e g tierion com but we don t want to put email or other identifiable id this allows the app which is org with api client to create a link between guid and a user or organization and register that way if ever needed this app can retrieve the user id but only this app use case for the data keep track of metadata relevant to this user e g imagine app is storing coin info in the blockchain it needs to be able to remember previous registrations for this user or a link to the previous one so the app can walk back can do this as e g a json in this data field other use case register info in zero tlog server when tlog server finishes register the hash other relevant info in tierion encrypted using the iyo guid which is only known by the app this allows the app to keep track of lots of info which is stored in zero stor through tlog server can be used for full audit trails
| 0
|
18,865
| 3,727,383,301
|
IssuesEvent
|
2016-03-06 07:57:52
|
briansmith/ring
|
https://api.github.com/repos/briansmith/ring
|
closed
|
Remove GCC-isms like crypto/bn/asm/x86_64-gcc.c
|
performance static-analysis-and-type-safety test-coverage
|
This code doesn't get built with MSVC. How important are these optimizations? It seems strange to only have them for x86_64.
|
1.0
|
Remove GCC-isms like crypto/bn/asm/x86_64-gcc.c - This code doesn't get built with MSVC. How important are these optimizations? It seems strange to only have them for x86_64.
|
non_process
|
remove gcc isms like crypto bn asm gcc c this code doesn t get built with msvc how important are these optimizations it seems strange to only have them for
| 0
|
71,857
| 8,685,719,462
|
IssuesEvent
|
2018-12-03 08:45:53
|
Microsoft/Recognizers-Text
|
https://api.github.com/repos/Microsoft/Recognizers-Text
|
closed
|
[EN DatetimeV2] "everyday", "thanksgiving", "daily", "now" are tagged as datetimeV2
|
by design
|
"everyday", "thanksgiving", "daily", "now" are tagged as DatetimeV2.
For example, query = "create a thanksgiving note", "thanksgiving" will be tagged as DatetimeV2 in this condition.
|
1.0
|
[EN DatetimeV2] "everyday", "thanksgiving", "daily", "now" are tagged as datetimeV2 - "everyday", "thanksgiving", "daily", "now" are tagged as DatetimeV2.
For example, query = "create a thanksgiving note", "thanksgiving" will be tagged as DatetimeV2 in this condition.
|
non_process
|
everyday thanksgiving daily now are tagged as everyday thanksgiving daily now are tagged as for example query create a thanksgiving note thanksgiving will be tagged as in this condition
| 0
|
109,196
| 13,752,644,787
|
IssuesEvent
|
2020-10-06 14:45:10
|
blackbaud/skyux-lists
|
https://api.github.com/repos/blackbaud/skyux-lists
|
closed
|
Create sort Design guidelines for phase 2 docs
|
Impact: High Needs: Design Severity: Medium Type: Enhancement
|
We have a UX guidelines section at https://developer.blackbaud.com/skyux/components/sort#ux-guidelines that can be copied over as the starting point, but it only has 1 item in it.
|
1.0
|
Create sort Design guidelines for phase 2 docs - We have a UX guidelines section at https://developer.blackbaud.com/skyux/components/sort#ux-guidelines that can be copied over as the starting point, but it only has 1 item in it.
|
non_process
|
create sort design guidelines for phase docs we have a ux guidelines section at that can be copied over as the starting point but it only has item in it
| 0
|
402
| 2,847,719,945
|
IssuesEvent
|
2015-05-29 18:34:37
|
mitchellh/packer
|
https://api.github.com/repos/mitchellh/packer
|
closed
|
Atlas * Post-processor failed: Error uploading: Metadata must have 'provider' key
|
bug post-processor/atlas
|
I get the above error when using this template.json when pushing to atlas:
```json
"post-processors": [
[{
"type": "vagrant",
"keep_input_artifact": false
},
{
"type": "atlas",
"only": ["virtualbox-iso"],
"artifact": "{{user `atlas_username`}}/{{user `atlas_name`}}",
"artifact_type": "vagrant.box",
"metadata": {
"created_at": "{{timestamp}}"
}
},
{
"type": "atlas",
"only": ["amazon-ebs"],
"artifact": "{{user `atlas_username`}}/{{user `atlas_name`}}",
"artifact_type": "aws.ami",
"metadata": {
"created_at": "{{timestamp}}"
}
}]
```
The `amazon-ebs` one works, but the vagrant/virtualbox one gives that error:

|
1.0
|
Atlas * Post-processor failed: Error uploading: Metadata must have 'provider' key - I get the above error when using this template.json when pushing to atlas:
```json
"post-processors": [
[{
"type": "vagrant",
"keep_input_artifact": false
},
{
"type": "atlas",
"only": ["virtualbox-iso"],
"artifact": "{{user `atlas_username`}}/{{user `atlas_name`}}",
"artifact_type": "vagrant.box",
"metadata": {
"created_at": "{{timestamp}}"
}
},
{
"type": "atlas",
"only": ["amazon-ebs"],
"artifact": "{{user `atlas_username`}}/{{user `atlas_name`}}",
"artifact_type": "aws.ami",
"metadata": {
"created_at": "{{timestamp}}"
}
}]
```
The `amazon-ebs` one works, but the vagrant/virtualbox one gives that error:

|
process
|
atlas post processor failed error uploading metadata must have provider key i get the above error when using this template json when pushing to atlas json post processors type vagrant keep input artifact false type atlas only artifact user atlas username user atlas name artifact type vagrant box metadata created at timestamp type atlas only artifact user atlas username user atlas name artifact type aws ami metadata created at timestamp the amazon ebs one works but the vagrant virtualbox one gives that error
| 1
|
279,325
| 30,702,506,427
|
IssuesEvent
|
2023-07-27 01:36:03
|
hshivhare67/kernel_v4.1.15_CVE-2019-10220
|
https://api.github.com/repos/hshivhare67/kernel_v4.1.15_CVE-2019-10220
|
closed
|
CVE-2017-17857 (High) detected in linuxlinux-4.4.302 - autoclosed
|
Mend: dependency security vulnerability
|
## CVE-2017-17857 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>linuxlinux-4.4.302</b></p></summary>
<p>
<p>The Linux Kernel</p>
<p>Library home page: <a href=https://mirrors.edge.kernel.org/pub/linux/kernel/v4.x/?wsslib=linux>https://mirrors.edge.kernel.org/pub/linux/kernel/v4.x/?wsslib=linux</a></p>
<p>Found in base branch: <b>master</b></p></p>
</details>
</p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Source Files (2)</summary>
<p></p>
<p>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/kernel/bpf/verifier.c</b>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/kernel/bpf/verifier.c</b>
</p>
</details>
<p></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png?' width=19 height=20> Vulnerability Details</summary>
<p>
The check_stack_boundary function in kernel/bpf/verifier.c in the Linux kernel through 4.14.8 allows local users to cause a denial of service (memory corruption) or possibly have unspecified other impact by leveraging mishandling of invalid variable stack read operations.
<p>Publish Date: 2017-12-27
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2017-17857>CVE-2017-17857</a></p>
</p>
</details>
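A hedged, much-simplified Python sketch of the kind of boundary check the fix adds for variable stack reads; BPF stack offsets are negative relative to the frame pointer, and the constant and names are illustrative, not the kernel's code:
```python
MAX_BPF_STACK = 512  # bytes; illustrative stack size

def check_stack_read(off, size):
    # Reject reads that are empty, extend past the frame pointer,
    # or start below the bottom of the stack.
    if size <= 0 or off + size > 0 or off < -MAX_BPF_STACK:
        raise ValueError("invalid variable stack read")

check_stack_read(-8, 8)      # ok: reads the top 8 bytes of the stack
# check_stack_read(-520, 8)  # would raise: below the stack bottom
```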
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.8</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: Low
- Privileges Required: Low
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="http://web.nvd.nist.gov/view/vuln/detail?vulnId=CVE-2017-17857">http://web.nvd.nist.gov/view/vuln/detail?vulnId=CVE-2017-17857</a></p>
<p>Release Date: 2017-12-27</p>
<p>Fix Resolution: v4.15-rc5</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
True
|
CVE-2017-17857 (High) detected in linuxlinux-4.4.302 - autoclosed - ## CVE-2017-17857 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>linuxlinux-4.4.302</b></p></summary>
<p>
<p>The Linux Kernel</p>
<p>Library home page: <a href=https://mirrors.edge.kernel.org/pub/linux/kernel/v4.x/?wsslib=linux>https://mirrors.edge.kernel.org/pub/linux/kernel/v4.x/?wsslib=linux</a></p>
<p>Found in base branch: <b>master</b></p></p>
</details>
</p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Source Files (2)</summary>
<p></p>
<p>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/kernel/bpf/verifier.c</b>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/kernel/bpf/verifier.c</b>
</p>
</details>
<p></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png?' width=19 height=20> Vulnerability Details</summary>
<p>
The check_stack_boundary function in kernel/bpf/verifier.c in the Linux kernel through 4.14.8 allows local users to cause a denial of service (memory corruption) or possibly have unspecified other impact by leveraging mishandling of invalid variable stack read operations.
<p>Publish Date: 2017-12-27
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2017-17857>CVE-2017-17857</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.8</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: Low
- Privileges Required: Low
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="http://web.nvd.nist.gov/view/vuln/detail?vulnId=CVE-2017-17857">http://web.nvd.nist.gov/view/vuln/detail?vulnId=CVE-2017-17857</a></p>
<p>Release Date: 2017-12-27</p>
<p>Fix Resolution: v4.15-rc5</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
non_process
|
cve high detected in linuxlinux autoclosed cve high severity vulnerability vulnerable library linuxlinux the linux kernel library home page a href found in base branch master vulnerable source files kernel bpf verifier c kernel bpf verifier c vulnerability details the check stack boundary function in kernel bpf verifier c in the linux kernel through allows local users to cause a denial of service memory corruption or possibly have unspecified other impact by leveraging mishandling of invalid variable stack read operations publish date url a href cvss score details base score metrics exploitability metrics attack vector local attack complexity low privileges required low user interaction none scope unchanged impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution step up your open source security game with mend
| 0
|
19,523
| 25,834,582,429
|
IssuesEvent
|
2022-12-12 18:33:55
|
metabase/metabase
|
https://api.github.com/repos/metabase/metabase
|
opened
|
[Epic] Time zone metadata improvements for 46
|
Querying/Processor .Backend .Epic
|
A handful of improvements to the way we we deal with time zones in the Query Processor.
- #27177
-
|
1.0
|
[Epic] Time zone metadata improvements for 46 - A handful of improvements to the way we we deal with time zones in the Query Processor.
- #27177
-
|
process
|
time zone metadata improvements for a handful of improvements to the way we we deal with time zones in the query processor
| 1
|
25,990
| 6,731,679,172
|
IssuesEvent
|
2017-10-18 08:36:36
|
jstolarek/liveness-slicing
|
https://api.github.com/repos/jstolarek/liveness-slicing
|
opened
|
Rewrite tracing evaluation to use scopes instead of type classes
|
code cleanup
|
Tracing evaluation uses type classes for uniform notation - see #1. This is likely to cause trouble with proof automation. I should rewrite this to use scopes.
|
1.0
|
Rewrite tracing evaluation to use scopes instead of type classes - Tracing evaluation uses type classes for uniform notation - see #1. This is likely to cause trouble with proof automation. I should rewrite this to use scopes.
|
non_process
|
rewrite tracing evaluation to use scopes instead of type classes tracing evaluation uses type classes for uniform notation see this is likely to cause trouble with proof automation i should rewrite this to use scopes
| 0
|
512,445
| 14,897,028,747
|
IssuesEvent
|
2021-01-21 11:10:56
|
L-Acoustics/avdecc
|
https://api.github.com/repos/L-Acoustics/avdecc
|
opened
|
macOS/unix shared library type_info issue
|
bug high priority
|
With the addition of CONTROL descriptors, new structures have been added to the low level library, embedded inside a std::any type. We need to fully export such structures so type_info is correctly exported and usable by modules linking with that library.
(Otherwise std::any_cast will not recognize the correct structure and will throw a bad_any_cast)
|
1.0
|
macOS/unix shared library type_info issue - With the addition of CONTROL descriptors, new structures have been added to the low level library, embedded inside a std::any type. We need to fully export such structures so type_info is correctly exported and usable by modules linking with that library.
(Otherwise std::any_cast will not recognize the correct structure and will throw a bad_any_cast)
|
non_process
|
macos unix shared library type info issue with the addition of control descriptors new structures have been added to the low level library embedded inside a std any type we need to fully export such structures so type info is correctly exported and usable by modules linking with that library otherwise std any cast will not recognize the correct structure and will throw a bad any cast
| 0
|
22,185
| 30,733,553,524
|
IssuesEvent
|
2023-07-28 05:19:05
|
open-telemetry/opentelemetry-collector-contrib
|
https://api.github.com/repos/open-telemetry/opentelemetry-collector-contrib
|
closed
|
Bug in `metricstransform` when using `experimental_match_labels`
|
bug help wanted good first issue Stale processor/metricstransform closed as inactive
|
### Component(s)
metricstransform
### What happened?
## Description
`experimental_match_labels` does not handle `/` correctly.
## Steps to Reproduce
Run transform that contains labels with `/` value:
```
metricstransform/prod-node:
transforms:
- include: node_filesystem_free_bytes
action: update
new_name: node.filesystem.root.free.bytes
experimental_match_labels: {"mountpoint": "/"}
match_type: strict
```
## Expected Result
All labels that contain `mountpoint = "/"` should be updated to a new metric with name `node.filesystem.root.free.bytes`
## Actual Result
`metricstransform` unable to match `/`
### Collector version
otel/opentelemetry-collector-contrib:latest
### Environment information
## Environment
OS: (e.g., "Ubuntu 20.04")
Compiler(if manually compiled): (e.g., "go 14.2")
Image: otel/opentelemetry-collector-contrib:latest
### OpenTelemetry Collector configuration
```yaml
metricstransform/prod-node:
transforms:
- include: node_filesystem_free_bytes
action: update
new_name: node.filesystem.root.free2.bytes
experimental_match_labels: {"mountpoint": "/"}
match_type: strict
```
### Log output
_No response_
### Additional context
_No response_
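For reference, a minimal Python sketch of the semantics that `strict` matching is documented to have: label values compared as plain strings, so `/` needs no escaping. This sketches the intended behavior, not the collector's implementation; names are illustrative:
```python
def strict_match(datapoint_attrs, match_labels):
    # Every configured label must be present and exactly equal;
    # "/" is just an ordinary string under strict matching.
    return all(datapoint_attrs.get(k) == v for k, v in match_labels.items())

print(strict_match({"mountpoint": "/"}, {"mountpoint": "/"}))      # True
print(strict_match({"mountpoint": "/home"}, {"mountpoint": "/"}))  # False
```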
|
1.0
|
Bug in `metricstransform` when using `experimental_match_labels` - ### Component(s)
metricstransform
### What happened?
## Description
`experimental_match_labels` does not handle `/` correctly.
## Steps to Reproduce
Run a transform whose match labels contain a `/` value:
```
metricstransform/prod-node:
transforms:
- include: node_filesystem_free_bytes
action: update
new_name: node.filesystem.root.free.bytes
experimental_match_labels: {"mountpoint": "/"}
match_type: strict
```
## Expected Result
All labels that contain `mountpoint = "/"` should be updated to a new metric with name `node.filesystem.root.free.bytes`
## Actual Result
`metricstransform` unable to match `/`
### Collector version
otel/opentelemetry-collector-contrib:latest
### Environment information
## Environment
OS: (e.g., "Ubuntu 20.04")
Compiler(if manually compiled): (e.g., "go 14.2")
Image: otel/opentelemetry-collector-contrib:latest
### OpenTelemetry Collector configuration
```yaml
metricstransform/prod-node:
transforms:
- include: node_filesystem_free_bytes
action: update
new_name: node.filesystem.root.free2.bytes
experimental_match_labels: {"mountpoint": "/"}
match_type: strict
```
### Log output
_No response_
### Additional context
_No response_
|
process
|
bug in metricstransform when using experimental match labels component s metricstransform what happened description experimental match labels does not handle correctly steps to reproduce run transform that contains labels with value metricstransform prod node transforms include node filesystem free bytes action update new name node filesystem root free bytes experimental match labels mountpoint match type strict expected result all labels that contain mountpoint should be updated to a new metric with name node filesystem root free bytes actual result metricstransform unable to match collector version otel opentelemetry collector contrib latest environment information environment os e g ubuntu compiler if manually compiled e g go image otel opentelemetry collector contrib latest opentelemetry collector configuration yaml metricstransform prod node transforms include node filesystem free bytes action update new name node filesystem root bytes experimental match labels mountpoint match type strict log output no response additional context no response
| 1
|
18,749
| 24,649,271,799
|
IssuesEvent
|
2022-10-17 17:12:46
|
geneontology/go-ontology
|
https://api.github.com/repos/geneontology/go-ontology
|
closed
|
chorion micropyle formation - clarify if this is applicable to vertebrates, insects, or both
|
taxon constraints cellular processes
|
We have ZFIN annotations to
http://amigo.geneontology.org/amigo/term/GO:0046844
```yaml
id: GO:0046844
name: chorion micropyle formation
namespace: biological_process
def: "Establishment of the micropyle, a single cone-shaped specialization of the chorion that allows sperm entry into the egg prior to fertilization." [ISBN:0879694238]
is_a: GO:0003006 ! developmental process involved in reproduction
is_a: GO:0032502 ! developmental process
is_a: GO:0048646 ! anatomical structure formation involved in morphogenesis
intersection_of: GO:0048646 ! anatomical structure formation involved in morphogenesis
intersection_of: results_in_formation_of GO:0070825 ! micropyle
relationship: part_of GO:0007306 ! eggshell chorion assembly
```
But these should be flagged as taxon constraints (aside: where are we tracking these?)
The reasons are nuanced and there is a lot of confusion in GO with terms like eggshell/chorion/follicle:

@gouttegd from FlyBase and @paolaroncaglia are working on the underlying CL terms
* https://github.com/obophenotype/cell-ontology/issues/589
* https://github.com/obophenotype/uberon/issues/1524
|
1.0
|
chorion micropyle formation - clarify if this is applicable to vertebrates, insects, or both - We have ZFIN annotations to
http://amigo.geneontology.org/amigo/term/GO:0046844
```yaml
id: GO:0046844
name: chorion micropyle formation
namespace: biological_process
def: "Establishment of the micropyle, a single cone-shaped specialization of the chorion that allows sperm entry into the egg prior to fertilization." [ISBN:0879694238]
is_a: GO:0003006 ! developmental process involved in reproduction
is_a: GO:0032502 ! developmental process
is_a: GO:0048646 ! anatomical structure formation involved in morphogenesis
intersection_of: GO:0048646 ! anatomical structure formation involved in morphogenesis
intersection_of: results_in_formation_of GO:0070825 ! micropyle
relationship: part_of GO:0007306 ! eggshell chorion assembly
```
But these should be flagged as taxon constraints (aside: where are we tracking these?)
The reasons are nuanced and there is a lot of confusion in GO with terms like eggshell/chorion/follicle:

@gouttegd from FlyBase and @paolaroncaglia are working on the underlying CL terms
* https://github.com/obophenotype/cell-ontology/issues/589
* https://github.com/obophenotype/uberon/issues/1524
|
process
|
chorion micropyle formation clarify if this is applicable to vertebrates insects or both we have zfin annotations to yaml id go name chorion micropyle formation namespace biological process def establishment of the micropyle a single cone shaped specialization of the chorion that allows sperm entry into the egg prior to fertilization is a go developmental process involved in reproduction is a go developmental process is a go anatomical structure formation involved in morphogenesis intersection of go anatomical structure formation involved in morphogenesis intersection of results in formation of go micropyle relationship part of go eggshell chorion assembly but these should be flagged as taxon constraints aside where are we tracking these the reasons are nuanced and there is a lot of confusion in go with terms like eggshell chorion follicle gouttegd from flybase and paolaroncaglia are working on the underlying cl terms
| 1
|
461,341
| 13,228,701,305
|
IssuesEvent
|
2020-08-18 06:46:09
|
OpenSIPS/opensips
|
https://api.github.com/repos/OpenSIPS/opensips
|
closed
|
fraud_detection: CPM and CC may dip to negative values
|
bug high-priority
|
Hello Team,
Im using "opensips 3.0.2 (x86_64/linux)" and getting following concurrent calls.
root@debian:~# opensips-cli -x mi show_fraud_stats 15013880001 1
{
"cpm": 1,
"total_calls": 184,
"concurrent_calls": **4294967295**,
"seq_calls": 2
}
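The reported `concurrent_calls` value, 4294967295, is 2^32 - 1, the classic signature of an unsigned 32-bit counter decremented below zero. A minimal Python sketch of the wraparound (the mask mimics C's unsigned arithmetic; names are illustrative):
```python
UINT32_MASK = 0xFFFFFFFF  # 2**32 - 1

def dec_u32(counter):
    # Mimics `unsigned int counter; counter--;` in C: decrementing
    # zero wraps around instead of going negative.
    return (counter - 1) & UINT32_MASK

print(dec_u32(0))  # 4294967295: the value reported above
```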
|
1.0
|
fraud_detection: CPM and CC may dip to negative values - Hello Team,
Im using "opensips 3.0.2 (x86_64/linux)" and getting following concurrent calls.
root@debian:~# opensips-cli -x mi show_fraud_stats 15013880001 1
{
"cpm": 1,
"total_calls": 184,
"concurrent_calls": **4294967295**,
"seq_calls": 2
}
|
non_process
|
fraud detection cpm and cc may dip to negative values hello team im using opensips linux and getting following concurrent calls root debian opensips cli x mi show fraud stats cpm total calls concurrent calls seq calls
| 0
|
758,237
| 26,547,072,203
|
IssuesEvent
|
2023-01-20 01:52:20
|
ksh1vn/DWCR
|
https://api.github.com/repos/ksh1vn/DWCR
|
closed
|
Pitch down radio phrase "Убийцы!"
|
Flaws (high priority)
|
Community Remaster uses this phrase, but I didn't pitch it down.
|
1.0
|
Pitch down radio phrase "Убийцы!" - Community Remaster uses this phrase, but I didn't pitch it down.
|
non_process
|
pitch down radio phrase убийцы community remaster use this phrase but i didn t pitched it down
| 0
|
12,174
| 14,741,895,381
|
IssuesEvent
|
2021-01-07 11:20:54
|
kdjstudios/SABillingGitlab
|
https://api.github.com/repos/kdjstudios/SABillingGitlab
|
closed
|
Invoices total is not matching with Account Balance
|
anc-process anp-1 ant-bug
|
In GitLab by @pchaudhary on Feb 26, 2019, 07:37
If we make a payment for an account, then revert the billing cycle that includes an invoice of any amount, and then delete the last payment, the total of the invoice balances does not match the account balance.
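A minimal Python sketch of the invariant this bug violates (the field names are hypothetical):
```python
def balances_consistent(invoices, account_balance):
    # After any revert/delete sequence, the open invoice balances
    # must still sum to the account balance.
    return sum(inv["balance"] for inv in invoices) == account_balance
```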
|
1.0
|
Invoices total is not matching with Account Balance - In GitLab by @pchaudhary on Feb 26, 2019, 07:37
If we make a payment for an account, then revert the billing cycle that includes an invoice of any amount, and then delete the last payment, the total of the invoice balances does not match the account balance.
|
process
|
invoices total is not matching with account balance in gitlab by pchaudhary on feb if we make a payment for an account then reverted the billing cycle which includes the invoice of any amount then deletes the last payment then the total of the invoice balance is not matching with the account balance
| 1
|
141,712
| 11,432,591,348
|
IssuesEvent
|
2020-02-04 14:20:24
|
aws/amazon-vpc-cni-k8s
|
https://api.github.com/repos/aws/amazon-vpc-cni-k8s
|
closed
|
flaky unit test in nodeInit() DescribeENI call argument order
|
bug testing
|
```
--- FAIL: TestNodeInit (0.00s)
controller.go:150: Unexpected call to *mock_awsutils.MockAPIs.DescribeENI([eni-00000001]) at /go/src/github.com/{{ORG_NAME}}/{{REPO_NAME}}/pkg/awsutils/mocks/awsutils_mocks.go:102 because:
Expected call at /go/src/github.com/{{ORG_NAME}}/{{REPO_NAME}}/ipamd/ipamd_test.go:146 doesn't match the argument at index 0.
Got: eni-00000001
Want: is equal to eni-00000000
panic.go:563: missing call(s) to *mock_awsutils.MockAPIs.DescribeENI(is equal to eni-00000000) /go/src/github.com/{{ORG_NAME}}/{{REPO_NAME}}/ipamd/ipamd_test.go:146
panic.go:563: aborting test due to missing call(s)
```
Seeing this on some `make unit-test` runs.
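The mismatch above looks like ENIs being visited in a nondeterministic (map-iteration) order while the mock expects one fixed order. The project's tests use gomock in Go; as a hedged analogue, here is how Python's `unittest.mock` can accept calls in any order (all names are illustrative):
```python
from unittest.mock import Mock, call

api = Mock()
# The code under test may describe ENIs in either order, e.g. when
# iterating a map whose order is not guaranteed.
api.describe_eni("eni-00000001")
api.describe_eni("eni-00000000")

# Pinning one order fails intermittently; accept any order instead.
api.describe_eni.assert_has_calls(
    [call("eni-00000000"), call("eni-00000001")],
    any_order=True,
)
```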
|
1.0
|
flaky unit test in nodeInit() DescribeENI call argument order - ```
--- FAIL: TestNodeInit (0.00s)
controller.go:150: Unexpected call to *mock_awsutils.MockAPIs.DescribeENI([eni-00000001]) at /go/src/github.com/{{ORG_NAME}}/{{REPO_NAME}}/pkg/awsutils/mocks/awsutils_mocks.go:102 because:
Expected call at /go/src/github.com/{{ORG_NAME}}/{{REPO_NAME}}/ipamd/ipamd_test.go:146 doesn't match the argument at index 0.
Got: eni-00000001
Want: is equal to eni-00000000
panic.go:563: missing call(s) to *mock_awsutils.MockAPIs.DescribeENI(is equal to eni-00000000) /go/src/github.com/{{ORG_NAME}}/{{REPO_NAME}}/ipamd/ipamd_test.go:146
panic.go:563: aborting test due to missing call(s)
```
Seeing this on some `make unit-test` runs.
|
non_process
|
flaky unit test in nodeinit describeeni call argument order fail testnodeinit controller go unexpected call to mock awsutils mockapis describeeni at go src github com org name repo name pkg awsutils mocks awsutils mocks go because expected call at go src github com org name repo name ipamd ipamd test go doesn t match the argument at index got eni want is equal to eni panic go missing call s to mock awsutils mockapis describeeni is equal to eni go src github com org name repo name ipamd ipamd test go panic go aborting test due to missing call s seeing this on some make unit test runs
| 0
|
192,863
| 15,360,784,224
|
IssuesEvent
|
2021-03-01 17:19:53
|
wilmarmartinezq/git_web_practice_branch
|
https://api.github.com/repos/wilmarmartinezq/git_web_practice_branch
|
opened
|
A commit that does not follow the code convention, or a fix to be made
|
documentation
|
L
The last commit has the following message:
`Se modifican la pagina3.html y pagina5.html`
This issue is just a reminder of the commit-comment convention and can be closed.
|
1.0
|
A commit that does not follow the code convention, or a fix to be made - L
The last commit has the following message:
`Se modifican la pagina3.html y pagina5.html`
This issue is just a reminder of the commit-comment convention and can be closed.
|
non_process
|
a commit that does not follow the code convention or a fix to be made l the last commit has the following message se modifican la html y html this issue is just a reminder of the commit comment convention and can be closed
| 0
|
559,201
| 16,552,336,752
|
IssuesEvent
|
2021-05-28 09:59:35
|
marcusolsson/grafana-calendar-panel
|
https://api.github.com/repos/marcusolsson/grafana-calendar-panel
|
closed
|
Show current day
|
priority/medium type/enhancement
|
Hello,
So far in our tests we saw that, by default, today overflows and spills below the visible area of the view, and we need to scroll to see it. The image is what we get when we load the dashboard:

How is it possible to have a different view of the calendar? It is fine to have the last month, but the current week should be visible without scrolling.
As you can see, in the panel I selected 30d in the "Relative time" option.
Thanks,
Mirko
|
1.0
|
Show current day - Hello,
So far in our tests we saw that, by default, today overflows and spills below the visible area of the view, and we need to scroll to see it. The image is what we get when we load the dashboard:

How is it possible to have a different view of the calendar? It is fine to have the last month, but the current week should be visible without scrolling.
As you can see, in the panel I selected 30d in the "Relative time" option.
Thanks,
Mirko
|
non_process
|
show current day hello so far in our test we saw that by default today overflows and spills under in the view and we need to scroll to see it the image is what we get when we load the dashboard how is it possible to have a different view of the calendar it is fine to have the last month but the current week should be visible without scrolling as you can see in the panel i selected in the relative time option thanks mirko
| 0
|
5,737
| 8,580,467,944
|
IssuesEvent
|
2018-11-13 12:04:38
|
easy-software-ufal/annotations_repos
|
https://api.github.com/repos/easy-software-ufal/annotations_repos
|
opened
|
Readify/Neo4jClient JsonProperty PropertyName attribute is not taken into account when generating Cypher
|
ADA C# test wrong processing
|
Issue: `https://github.com/Readify/Neo4jClient/issues/117`
PR: `https://github.com/Readify/Neo4jClient/commit/455672f8b73cddced22ced96b9dfe0ccaa02737d`
|
1.0
|
Readify/Neo4jClient JsonProperty PropertyName attribute is not taken into account when generating Cypher - Issue: `https://github.com/Readify/Neo4jClient/issues/117`
PR: `https://github.com/Readify/Neo4jClient/commit/455672f8b73cddced22ced96b9dfe0ccaa02737d`
|
process
|
readify jsonproperty propertyname attribute is not taken into account when generating cypher issue pr
| 1
|
17,467
| 23,291,047,574
|
IssuesEvent
|
2022-08-05 23:02:50
|
bazelbuild/bazel
|
https://api.github.com/repos/bazelbuild/bazel
|
opened
|
[Mirror] rules_rust-v0.8.1.tar.gz
|
P2 type: process team-OSS mirror request
|
### Please list the URLs of the archives you'd like to mirror:
Searched slack for the warning 404s I'm seeing and found #15881; this is the same issue for newer releases of rules_rust (URLs listed in the release notes which don't seem to be mirrored yet).
```
load("@bazel_tools//tools/build_defs/repo:http.bzl", "http_archive")
http_archive(
name = "rules_rust",
sha256 = "05e15e536cc1e5fd7b395d044fc2dabf73d2b27622fbc10504b7e48219bb09bc",
urls = [
"https://mirror.bazel.build/github.com/bazelbuild/rules_rust/releases/download/0.8.1/rules_rust-v0.8.1.tar.gz",
"https://github.com/bazelbuild/rules_rust/releases/download/0.8.1/rules_rust-v0.8.1.tar.gz",
],
)
```
The previous release also looks to be missing, if you could mirror that as well while you're at it:
```
load("@bazel_tools//tools/build_defs/repo:http.bzl", "http_archive")
http_archive(
name = "rules_rust",
sha256 = "b534645f6025ea887e8be6f577832e2830dc058a2990e287ff7a3745c523a739",
urls = [
"https://mirror.bazel.build/github.com/bazelbuild/rules_rust/releases/download/0.8.0/rules_rust-v0.8.0.tar.gz",
"https://github.com/bazelbuild/rules_rust/releases/download/0.8.0/rules_rust-v0.8.0.tar.gz",
],
)
```
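Since each `http_archive` pins a `sha256`, a mirror only works if it serves byte-identical content. A small Python sketch one could use to confirm that once the mirror is populated (the invocation is hypothetical):
```python
import hashlib
import urllib.request

def sha256_of(url):
    # Hash the archive served at `url`; a correct mirror must match
    # the sha256 pinned in the http_archive rule byte for byte.
    with urllib.request.urlopen(url) as resp:
        return hashlib.sha256(resp.read()).hexdigest()

# Hypothetical check once mirrored:
# print(sha256_of("https://mirror.bazel.build/.../rules_rust-v0.8.1.tar.gz"))
```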
|
1.0
|
[Mirror] rules_rust-v0.8.1.tar.gz - ### Please list the URLs of the archives you'd like to mirror:
Searched slack for the warning 404s I'm seeing and found #15881; this is the same issue for newer releases of rules_rust (URLs listed in the release notes which don't seem to be mirrored yet).
```
load("@bazel_tools//tools/build_defs/repo:http.bzl", "http_archive")
http_archive(
name = "rules_rust",
sha256 = "05e15e536cc1e5fd7b395d044fc2dabf73d2b27622fbc10504b7e48219bb09bc",
urls = [
"https://mirror.bazel.build/github.com/bazelbuild/rules_rust/releases/download/0.8.1/rules_rust-v0.8.1.tar.gz",
"https://github.com/bazelbuild/rules_rust/releases/download/0.8.1/rules_rust-v0.8.1.tar.gz",
],
)
```
The previous release also looks to be missing, if you could mirror that as well while you're at it:
```
load("@bazel_tools//tools/build_defs/repo:http.bzl", "http_archive")
http_archive(
name = "rules_rust",
sha256 = "b534645f6025ea887e8be6f577832e2830dc058a2990e287ff7a3745c523a739",
urls = [
"https://mirror.bazel.build/github.com/bazelbuild/rules_rust/releases/download/0.8.0/rules_rust-v0.8.0.tar.gz",
"https://github.com/bazelbuild/rules_rust/releases/download/0.8.0/rules_rust-v0.8.0.tar.gz",
],
)
```
|
process
|
rules rust tar gz please list the urls of the archives you d like to mirror searched slack for the warning i m seeing and found this is the same issue for newer releases of rules rust urls listed in the release notes which don t seem to be mirrored yet load bazel tools tools build defs repo http bzl http archive http archive name rules rust urls the previous release also looks to be missing if you could mirror that as well while you re at it load bazel tools tools build defs repo http bzl http archive http archive name rules rust urls
| 1
|
218,812
| 7,332,481,316
|
IssuesEvent
|
2018-03-05 16:26:43
|
NCEAS/metacat
|
https://api.github.com/repos/NCEAS/metacat
|
closed
|
Create view service that can render metadata documents at REST URL
|
Category: metacat Component: Bugzilla-Id Priority: Normal Status: Closed Tracker: Feature
|
---
Author Name: **Matt Jones** (Matt Jones)
Original Redmine Issue: 5939, https://projects.ecoinformatics.org/ecoinfo/issues/5939
Original Date: 2013-05-23
Original Assignee: Chris Jones
---
We need a 'landing page' for metadata views that can be referenced as REST URLs and that show an HTML'ised version of an object using its PID. A service might have a REST url of the form:
-https://metacat.someplace.org/knb/d1/mn/v1/view/{pid}-
https://data.somplace.org/metacat/d1/mn/v1/views/{format}/{pid}
where {pid} is the permanent identifier for the metadata document. Such a URL landing page would probably deliver an HTML version of the metadata. An optional ?format=knb parameter might be used to control which CSS file is linked into the page, or maybe we just omit the CSS altogether and assume we are delivering just an HTML fragment to a client? Needs more discussion.
This service would be used in several places, including the landing page URL for the new backbone based UI, and as the URL that is written into sitemaps that are published to google and elsewhere for search engines to index.
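A minimal Python sketch of how a client might build such a view URL, assuming the `/views/{format}/{pid}` shape proposed above; the sample PID is hypothetical, and PIDs often contain characters that need percent-encoding:
```python
from urllib.parse import quote

def view_url(base, fmt, pid):
    # Builds {base}/d1/mn/v1/views/{format}/{pid}; PIDs may contain
    # ':' or '/' and must be percent-encoded.
    return f"{base}/d1/mn/v1/views/{quote(fmt, safe='')}/{quote(pid, safe='')}"

print(view_url("https://data.somplace.org/metacat", "knb", "example:pid/1"))
```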
|
1.0
|
Create view service that can render metadata documents at REST URL - ---
Author Name: **Matt Jones** (Matt Jones)
Original Redmine Issue: 5939, https://projects.ecoinformatics.org/ecoinfo/issues/5939
Original Date: 2013-05-23
Original Assignee: Chris Jones
---
We need a 'landing page' for metadata views that can be referenced as REST URLs and that show an HTML'ised version of an object using its PID. A service might have a REST url of the form:
-https://metacat.someplace.org/knb/d1/mn/v1/view/{pid}-
https://data.somplace.org/metacat/d1/mn/v1/views/{format}/{pid}
where {pid} is the permanent identifier for the metadata document. Such a URL landing page would probably deliver an HTML version of the metadata. An optional ?format=knb parameter might be used to control which CSS file is linked into the page, or maybe we just omit the CSS altogether and assume we are delivering just an HTML fragment to a client? Needs more discussion.
This service would be used in several places, including the landing page URL for the new backbone based UI, and as the URL that is written into sitemaps that are published to google and elsewhere for search engines to index.
|
non_process
|
create view service that can render metadata documents at rest url author name matt jones matt jones original redmine issue original date original assignee chris jones we need a landing page for metadata views that can be referenced as rest urls and that show an html ised version of an object using its pid a service might have a rest url of the form where pid is the permanent identifier for the metadata document such a url landing page would probably deliver an html version of the metadata an optional format knb parameter might be used to control which css file is linked into the page or maybe we just omit the css altogether and assume we are delivering just an html fragment to a client needs more discussion this service would be used in several places including the landing page url for the new backbone based ui and as the url that is written into sitemaps that are published to google and elsewhere for search engines to index
| 0
|
17,043
| 22,421,400,656
|
IssuesEvent
|
2022-06-20 03:56:11
|
camunda/zeebe
|
https://api.github.com/repos/camunda/zeebe
|
closed
|
Reject ProcessInstanceCreation when unable to start at target element
|
team/process-automation
|
The `ProcessInstanceCreation` command should be rejected:
- when one of the target element ids is unknown
Blocked by #9390
|
1.0
|
Reject ProcessInstanceCreation when unable to start at target element - The `ProcessInstanceCreation` command should be rejected:
- when one of the target element ids is unknown
Blocked by #9390
|
process
|
reject processinstancecreation when unable to start at target element the processinstancecreation command should be rejected when one of the target element ids is unknown blocked by
| 1
|
105,631
| 4,239,341,522
|
IssuesEvent
|
2016-07-06 09:05:56
|
Jumpscale/jscockpit
|
https://api.github.com/repos/Jumpscale/jscockpit
|
opened
|
Error deploying Cockpit through AYS
|
priority_critical type_bug
|
Check: ssh yves@85.255.197.109 -p 7122
PW: on-demand
ays blueprint -> succeeds
ays init -> succeeds
ays install -> fails
TRACEBACK:
*TRACEBACK*********************************************************************************
Traceback (most recent call last):
File "/tmp/actions/ays_ays_cockpit/install.py", line 35, in install
self.open_port(service, requested_port=ss[1], public_port=ss[0])
File "/opt/code/ays_cockpit/recipes/node.ovc/actions.py", line 52, in open_port
executor = j.tools.executor.getSSHBased(service.hrd.get("publicip"), service.hrd.getInt("sshport"), 'root')
File "/opt/jumpscale8//lib/JumpScale/tools/executor/ExecutorFactory.py", line 55, in getSSHBased
self._executors[key] = ExecutorSSH(addr, port=port, login=login, passwd=passwd, debug=debug, checkok=checkok, allow_agent=allow_agent, look_for_keys=look_for_keys, pushkey=pushkey,pubkey=pubkey)
File "/opt/jumpscale8/lib/JumpScale/tools/executor/ExecutorSSH.py", line 27, in __init__
self.sshclient.connectTest()
File "/opt/jumpscale8//lib/JumpScale/clients/ssh/SSHClient.py", line 170, in connectTest
rc, out = self.execute(cmd, showout=False)
File "/opt/jumpscale8//lib/JumpScale/clients/ssh/SSHClient.py", line 204, in execute
ch = self.transport.open_session()
File "/opt/jumpscale8//lib/JumpScale/clients/ssh/SSHClient.py", line 118, in transport
if self.client is None:
File "/opt/jumpscale8//lib/JumpScale/clients/ssh/SSHClient.py", line 141, in client
raise j.exceptions.RuntimeError('Impossible to create SSH connection to %s:%s' % (self.addr, self.port))
OurExceptions.RuntimeError: ERROR: Impossible to create SSH connection to 85.255.197.114:2200 ((type:runtime.error)
[Tue05 14:07] - ...lib/JumpScale/tools/actions/Action.py:794 - ERROR -
******************************************************************************************
[Tue05 14:07] - ...tyourservice/ActionMethodDecorator.py:93 - ERROR - **ERROR ACTION**:
action: install runid:ays_ays_cockpit (ERROR)
action: run runid:85.255.197.114:2200:cloudscalers (OK)
RESULT:
[Tue05 14:07] - ...lib/JumpScale/tools/actions/Action.py:794 - ERROR -
******************************************************************************************
**** TRACEBACK ***
Traceback (most recent call last):
File "/usr/local/bin/ays", line 330, in <module>
cli()
File "/usr/local/bin/ays", line 163, in install
run.execute()
File "/opt/jumpscale8/lib/JumpScale/baselib/atyourservice/AYSRun.py", line 276, in execute
raise j.exceptions.RuntimeError(self.error)
OurExceptions.RuntimeError: ERROR: RUN:ays_cockpit 4
|
1.0
|
Error deploying Cockpit through AYS -
Check: ssh yves@85.255.197.109 -p 7122
PW: on-demand
ays blueprint -> succeeds
ays init -> succeeds
ays install -> fails
TRACEBACK:
*TRACEBACK*********************************************************************************
Traceback (most recent call last):
File "/tmp/actions/ays_ays_cockpit/install.py", line 35, in install
self.open_port(service, requested_port=ss[1], public_port=ss[0])
File "/opt/code/ays_cockpit/recipes/node.ovc/actions.py", line 52, in open_port
executor = j.tools.executor.getSSHBased(service.hrd.get("publicip"), service.hrd.getInt("sshport"), 'root')
File "/opt/jumpscale8//lib/JumpScale/tools/executor/ExecutorFactory.py", line 55, in getSSHBased
self._executors[key] = ExecutorSSH(addr, port=port, login=login, passwd=passwd, debug=debug, checkok=checkok, allow_agent=allow_agent, look_for_keys=look_for_keys, pushkey=pushkey,pubkey=pubkey)
File "/opt/jumpscale8/lib/JumpScale/tools/executor/ExecutorSSH.py", line 27, in __init__
self.sshclient.connectTest()
File "/opt/jumpscale8//lib/JumpScale/clients/ssh/SSHClient.py", line 170, in connectTest
rc, out = self.execute(cmd, showout=False)
File "/opt/jumpscale8//lib/JumpScale/clients/ssh/SSHClient.py", line 204, in execute
ch = self.transport.open_session()
File "/opt/jumpscale8//lib/JumpScale/clients/ssh/SSHClient.py", line 118, in transport
if self.client is None:
File "/opt/jumpscale8//lib/JumpScale/clients/ssh/SSHClient.py", line 141, in client
raise j.exceptions.RuntimeError('Impossible to create SSH connection to %s:%s' % (self.addr, self.port))
OurExceptions.RuntimeError: ERROR: Impossible to create SSH connection to 85.255.197.114:2200 ((type:runtime.error)
[Tue05 14:07] - ...lib/JumpScale/tools/actions/Action.py:794 - ERROR -
******************************************************************************************
[Tue05 14:07] - ...tyourservice/ActionMethodDecorator.py:93 - ERROR - **ERROR ACTION**:
action: install runid:ays_ays_cockpit (ERROR)
action: run runid:85.255.197.114:2200:cloudscalers (OK)
RESULT:
[Tue05 14:07] - ...lib/JumpScale/tools/actions/Action.py:794 - ERROR -
******************************************************************************************
**** TRACEBACK ***
Traceback (most recent call last):
File "/usr/local/bin/ays", line 330, in <module>
cli()
File "/usr/local/bin/ays", line 163, in install
run.execute()
File "/opt/jumpscale8/lib/JumpScale/baselib/atyourservice/AYSRun.py", line 276, in execute
raise j.exceptions.RuntimeError(self.error)
OurExceptions.RuntimeError: ERROR: RUN:ays_cockpit 4
|
non_process
|
error deploying cockpit through ays check ssh yves p pw on demand ays blueprint succeeds ays init succeeds ays install fails traceback traceback traceback most recent call last file tmp actions ays ays cockpit install py line in install self open port service requested port ss public port ss file opt code ays cockpit recipes node ovc actions py line in open port executor j tools executor getsshbased service hrd get publicip service hrd getint sshport root file opt lib jumpscale tools executor executorfactory py line in getsshbased self executors executorssh addr port port login login passwd passwd debug debug checkok checkok allow agent allow agent look for keys look for keys pushkey pushkey pubkey pubkey file opt lib jumpscale tools executor executorssh py line in init self sshclient connecttest file opt lib jumpscale clients ssh sshclient py line in connecttest rc out self execute cmd showout false file opt lib jumpscale clients ssh sshclient py line in execute ch self transport open session file opt lib jumpscale clients ssh sshclient py line in transport if self client is none file opt lib jumpscale clients ssh sshclient py line in client raise j exceptions runtimeerror impossible to create ssh connection to s s self addr self port ourexceptions runtimeerror error impossible to create ssh connection to type runtime error lib jumpscale tools actions action py error tyourservice actionmethoddecorator py error error action action install runid ays ays cockpit error action run runid cloudscalers ok result lib jumpscale tools actions action py error traceback traceback most recent call last file usr local bin ays line in cli file usr local bin ays line in install run execute file opt lib jumpscale baselib atyourservice aysrun py line in execute raise j exceptions runtimeerror self error ourexceptions runtimeerror error run ays cockpit
| 0
|
16,428
| 31,848,698,234
|
IssuesEvent
|
2023-09-14 22:27:21
|
patrickmohrmann/earthdawn4eV2
|
https://api.github.com/repos/patrickmohrmann/earthdawn4eV2
|
opened
|
Action Tests
|
Requirement
|
## Reason description
Earthdawn has two different test types: Action tests and Effect tests. Effect tests are Initiative, Recovery, and Damage tests, etc.
Action tests are all other tests. In general, the following:
- Talent tests
- skill tests
- Devotion tests
- spellcasting tests
- Attribute tests
- Attack tests
- Attack item tests
- ...
## detail description
the following workflows shall be created:
- [ ] Weapon Attack workflow
- [ ] Attack item workflow
- [ ] Talent workflows
- [ ] Skill workflows
- [ ] Devotions Workflows
- [ ] Attribute Workflows
- [ ] Spellcasting Workflows
## Technical notes
As an idea, a differentiation of each action roll can be made by the roll type:
| Action | description | Roll Type |
| ------------- | ------------- | ------------- |
| Weapon Attack - Melee weapon | | |
| Weapon Attack - Unarmed weapon | | |
| Weapon Attack - Ranged weapon | | |
| Weapon Attack - Throwing weapon | | |
| attack - Power | | |
| Attack - Attack | | |
| Attack - Maneuver | | |
| Talent | | |
| Talent - with Attribute initiative | | |
| skill | | |
| skill - with Attribute Initiative | | |
| Devotion | | |
| Devotion - with Attribute Initiative | | |
| | | |
| | | |
| | | |
| | | |
| | | |
| | | |
|
1.0
|
Action Tests - ## Reason description
Earthdawn has two different test types: Action tests and Effect tests. Effect tests are Initiative, Recovery, and Damage tests, etc.
Action tests are all other tests. In general, the following:
- Talent tests
- skill tests
- Devotion tests
- spellcasting tests
- Attribute tests
- Attack tests
- Attack item tests
- ...
## detail description
the following workflows shall be created:
- [ ] Weapon Attack workflow
- [ ] Attack item workflow
- [ ] Talent workflows
- [ ] Skill workflows
- [ ] Devotions Workflows
- [ ] Attribute Workflows
- [ ] Spellcasting Workflows
## Technical notes
As an idea, a differentiation of each action roll can be made by the roll type:
| Action | description | Roll Type |
| ------------- | ------------- | ------------- |
| Weapon Attack - Melee weapon | | |
| Weapon Attack - Unarmed weapon | | |
| Weapon Attack - Ranged weapon | | |
| Weapon Attack - Throwing weapon | | |
| attack - Power | | |
| Attack - Attack | | |
| Attack - Maneuver | | |
| Talent | | |
| Talent - with Attribute initiative | | |
| skill | | |
| skill - with Attribute Initiative | | |
| Devotion | | |
| Devotion - with Attribute Initiative | | |
| | | |
| | | |
| | | |
| | | |
| | | |
| | | |
|
non_process
|
action tests reason description earthdawn has two different test types action tests and effect tests effect tests are initiative recovery and damage tests etc action tests are all other tests in general the following talent tests skill tests devotion tests spellcasting tests attribute tests attack tests attack item tests detail description the following workflows shall be created weapon attack workflow attack item workflow talent workflows skill workflows devotions workflows attribute workflows spellcasting workflows technical notes as an idea a differentiation of each action roll can be made by the roll type action description roll type weapon attack melee weapon weapon attack unarmed weapon weapon attack ranged weapon weapon attack throwing weapon attack power attack attack attack maneuver talent talent with attribute initiative skill skill with attribute initiative devotion devotion with attribute initiative
| 0
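The roll-type table in the record above is essentially a dispatch problem: each action test maps to one workflow. The sketch below is a system-agnostic illustration in Python (the real Foundry VTT system would be JavaScript, and every identifier here is hypothetical):

```python
# Hypothetical dispatch of action tests by roll type. All names are
# illustrative; the actual Earthdawn system defines its own identifiers.
from typing import Callable, Dict

ROLL_HANDLERS: Dict[str, Callable[[dict], str]] = {}

def roll_type(name: str):
    """Register a workflow under a roll-type key."""
    def register(fn: Callable[[dict], str]) -> Callable[[dict], str]:
        ROLL_HANDLERS[name] = fn
        return fn
    return register

@roll_type("weapon-attack-melee")
def melee_attack(actor: dict) -> str:
    return f"{actor['name']} rolls a melee weapon attack"

@roll_type("talent")
def talent_test(actor: dict) -> str:
    return f"{actor['name']} rolls a talent test"

def perform_roll(kind: str, actor: dict) -> str:
    try:
        return ROLL_HANDLERS[kind](actor)
    except KeyError:
        raise ValueError(f"unknown roll type: {kind}") from None

print(perform_roll("talent", {"name": "Aria"}))
```

A registry keyed by roll type keeps each workflow isolated, so adding a new test type from the table is one new handler rather than a growing if/else chain.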
|
10,695
| 13,491,052,115
|
IssuesEvent
|
2020-09-11 15:57:05
|
prisma/prisma-engines
|
https://api.github.com/repos/prisma/prisma-engines
|
opened
|
Implement database locking in the migration engine
|
engines/migration engine process/candidate
|
We should make sure that migrations are not running concurrently.
This is implemented by other migration systems, and we can copy the best practices from them.
- pg_advisory_lock on postgres ([sqlx](https://github.com/launchbadge/sqlx/blob/61e4a4f5662f1e4970c8d602e1d345adc418f244/sqlx-core/src/postgres/migrate.rs#L123))
- GET LOCK on MySQL ([sqlx](https://github.com/launchbadge/sqlx/blob/61e4a4f5662f1e4970c8d602e1d345adc418f244/sqlx-core/src/mysql/migrate.rs#L113))
- sqlite can only have a single writer, so we don't need to do anything special
- Application locks on MSSQL ([DBA stackexchange](https://dba.stackexchange.com/questions/176424/does-microsoft-sql-server-offer-an-advisory-locks-feature-like-postgres))
It would be good to take a look at other migration systems, like activerecord, to find out if there is consensus on locking best practices.
|
1.0
|
Implement database locking in the migration engine - We should make sure that migrations are not running concurrently.
This is implemented by other migration systems, and we can copy the best practices from them.
- pg_advisory_lock on postgres ([sqlx](https://github.com/launchbadge/sqlx/blob/61e4a4f5662f1e4970c8d602e1d345adc418f244/sqlx-core/src/postgres/migrate.rs#L123))
- GET LOCK on MySQL ([sqlx](https://github.com/launchbadge/sqlx/blob/61e4a4f5662f1e4970c8d602e1d345adc418f244/sqlx-core/src/mysql/migrate.rs#L113))
- sqlite can only have a single writer, so we don't need to do anything special
- Application locks on MSSQL ([DBA stackexchange](https://dba.stackexchange.com/questions/176424/does-microsoft-sql-server-offer-an-advisory-locks-feature-like-postgres))
It would be good to take a look at other migration systems, like activerecord, to find out if there is consensus on locking best practices.
|
process
|
implement database locking in the migration engine we should make sure that migrations are not running concurrently this is implemented by other migration systems and we can copy the best practices from them pg advisory lock on postgres get lock on mysql sqlite can only have a single writer so we don t need to do anything special application locks on mssql it would be good to take a look at other migration systems like activerecord to find out if there is consensus on locking best practices
| 1
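pg_advisory_lock and MySQL's GET_LOCK named in the record above are real server-side primitives; the sketch below illustrates the Postgres variant via psycopg2. It is a pattern sketch, not Prisma's implementation, and the DSN and lock key are placeholder assumptions.

```python
# Sketch of serializing migration runs with a Postgres advisory lock.
# pg_advisory_lock blocks until the lock is free; it is released explicitly
# here (or when the session ends). Requires a reachable database.
from contextlib import contextmanager
import psycopg2

MIGRATION_LOCK_KEY = 4242  # arbitrary application-chosen bigint

@contextmanager
def migration_lock(dsn: str):
    conn = psycopg2.connect(dsn)
    try:
        with conn.cursor() as cur:
            cur.execute("SELECT pg_advisory_lock(%s)", (MIGRATION_LOCK_KEY,))
        yield conn
    finally:
        with conn.cursor() as cur:
            cur.execute("SELECT pg_advisory_unlock(%s)", (MIGRATION_LOCK_KEY,))
        conn.close()

with migration_lock("dbname=app user=app") as conn:
    # Concurrent runners block on pg_advisory_lock until the first finishes.
    with conn.cursor() as cur:
        cur.execute("SELECT 1")  # stand-in for applying migrations
```

SQLite needs no equivalent because it already admits only a single writer, which matches the note in the issue.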
|
362,767
| 25,389,077,860
|
IssuesEvent
|
2022-11-22 01:31:16
|
exastro-suite/it-automation-docs
|
https://api.github.com/repos/exastro-suite/it-automation-docs
|
closed
|
[doc] [Menu Creation] Regarding the reference RestAPI for parameter sheets: when LIST is specified in the body's FILTER search, characters that should form an exact-match condition are partially ignored in the exact-match search
|
documentation
|
[[Menu Creation] Regarding the reference RestAPI for parameter sheets: when LIST is specified in the body's FILTER search, characters that should form an exact-match condition are partially ignored in the exact-match search #1998](https://github.com/exastro-suite/it-automation/issues/1998)
|
1.0
|
[doc] [Menu Creation] Regarding the reference RestAPI for parameter sheets: when LIST is specified in the body's FILTER search, characters that should form an exact-match condition are partially ignored in the exact-match search - [[Menu Creation] Regarding the reference RestAPI for parameter sheets: when LIST is specified in the body's FILTER search, characters that should form an exact-match condition are partially ignored in the exact-match search #1998](https://github.com/exastro-suite/it-automation/issues/1998)
|
non_process
|
menu creation regarding the reference restapi for parameter sheets when list is specified in the body s filter search characters that should form an exact match condition are partially ignored in the exact match search
| 0
|
345,809
| 30,844,760,896
|
IssuesEvent
|
2023-08-02 13:04:46
|
iseruuuuu/disney_app
|
https://api.github.com/repos/iseruuuuu/disney_app
|
closed
|
[Improve]: account_screen_view_model test
|
test
|
### Contact Details
_No response_
### What happened?
An improvement!
|
1.0
|
[Improve]: account_screen_view_model test - ### Contact Details
_No response_
### What happened?
An improvement!
|
non_process
|
account screen view model test contact details no response what happened an improvement
| 0
|
15,266
| 19,215,489,286
|
IssuesEvent
|
2021-12-07 09:02:52
|
geneontology/go-ontology
|
https://api.github.com/repos/geneontology/go-ontology
|
closed
|
Change labels "in other organism" to "in another organism"
|
multi-species process
|
Hi,
The multi org processes group would prefer the terms that end with "in other organism" to be changed to "in another organism" ; e.g.
-name: envenomation resulting in hemolysis in other organism
+name: envenomation resulting in hemolysis in another organism
-name: cytolysis in other organism
+name: cytolysis in another organism
Etc. See PR for all changes.
Thanks, Pascale
@geneontology/multiorganism-working-group
|
1.0
|
Change labels "in other organism" to "in another organism" - Hi,
The multi org processes group would prefer the terms that end with "in other organism" to be changed to "in another organism" ; e.g.
-name: envenomation resulting in hemolysis in other organism
+name: envenomation resulting in hemolysis in another organism
-name: cytolysis in other organism
+name: cytolysis in another organism
Etc. See PR for all changes.
Thanks, Pascale
@geneontology/multiorganism-working-group
|
process
|
change labels in other organism to in another organism hi the multi org processes group would prefer the terms that end with in other organism to be changed to in another organism e g name envenomation resulting in hemolysis in other organism name envenomation resulting in hemolysis in another organism name cytolysis in other organism name cytolysis in another organism etc see pr for all changes thanks pascale geneontology multiorganism working group
| 1
|
452,666
| 13,057,703,913
|
IssuesEvent
|
2020-07-30 07:50:11
|
StrangeLoopGames/EcoIssues
|
https://api.github.com/repos/StrangeLoopGames/EcoIssues
|
closed
|
0.9 localization - Self-improvement description issues.
|
Category: UI Priority: Low Status: Fixed
|

https://crowdin.com/translate/eco-by-strange-loop-games/46/en-ru#173414
"weight citizens" - looks like "can carry" is missed.
Unnecessary double space in "their stomach"
|
1.0
|
0.9 localization - Self-improvement description issues. - 
https://crowdin.com/translate/eco-by-strange-loop-games/46/en-ru#173414
"weight citizens" - looks like "can carry" is missed.
Unnecessary double space in "their stomach"
|
non_process
|
localization self improvement description issues weight citizens looks like can carry is missing unnecessary double space in their stomach
| 0
|
32,942
| 12,152,123,017
|
IssuesEvent
|
2020-04-24 21:27:30
|
LevyForchh/mindmeld
|
https://api.github.com/repos/LevyForchh/mindmeld
|
opened
|
CVE-2015-9251 (Medium) detected in jquery-1.7.1.min.js
|
security vulnerability
|
## CVE-2015-9251 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>jquery-1.7.1.min.js</b></p></summary>
<p>JavaScript library for DOM operations</p>
<p>Library home page: <a href="https://cdnjs.cloudflare.com/ajax/libs/jquery/1.7.1/jquery.min.js">https://cdnjs.cloudflare.com/ajax/libs/jquery/1.7.1/jquery.min.js</a></p>
<p>Path to dependency file: /tmp/ws-scm/mindmeld/mindmeld-ui/node_modules/sockjs/examples/multiplex/index.html</p>
<p>Path to vulnerable library: /mindmeld/mindmeld-ui/node_modules/sockjs/examples/multiplex/index.html,/mindmeld/mindmeld-ui/node_modules/sockjs/examples/express-3.x/index.html,/mindmeld/mindmeld-ui/node_modules/vm-browserify/example/run/index.html,/mindmeld/mindmeld-ui/node_modules/sockjs/examples/hapi/html/index.html,/mindmeld/mindmeld-ui/node_modules/sockjs/examples/echo/index.html</p>
<p>
Dependency Hierarchy:
- :x: **jquery-1.7.1.min.js** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://api.github.com/repos/LevyForchh/mindmeld/commits/ecdb981d4151b1a7aaa73e3ae7ff11430bd3d4a4">ecdb981d4151b1a7aaa73e3ae7ff11430bd3d4a4</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
jQuery before 3.0.0 is vulnerable to Cross-site Scripting (XSS) attacks when a cross-domain Ajax request is performed without the dataType option, causing text/javascript responses to be executed.
<p>Publish Date: 2018-01-18
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2015-9251>CVE-2015-9251</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>6.1</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: Required
- Scope: Changed
- Impact Metrics:
- Confidentiality Impact: Low
- Integrity Impact: Low
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://nvd.nist.gov/vuln/detail/CVE-2015-9251">https://nvd.nist.gov/vuln/detail/CVE-2015-9251</a></p>
<p>Release Date: 2018-01-18</p>
<p>Fix Resolution: jQuery - v3.0.0</p>
</p>
</details>
<p></p>
<!-- <REMEDIATE>{"isOpenPROnVulnerability":false,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"JavaScript","packageName":"jquery","packageVersion":"1.7.1","isTransitiveDependency":false,"dependencyTree":"jquery:1.7.1","isMinimumFixVersionAvailable":true,"minimumFixVersion":"jQuery - v3.0.0"}],"vulnerabilityIdentifier":"CVE-2015-9251","vulnerabilityDetails":"jQuery before 3.0.0 is vulnerable to Cross-site Scripting (XSS) attacks when a cross-domain Ajax request is performed without the dataType option, causing text/javascript responses to be executed.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2015-9251","cvss3Severity":"medium","cvss3Score":"6.1","cvss3Metrics":{"A":"None","AC":"Low","PR":"None","S":"Changed","C":"Low","UI":"Required","AV":"Network","I":"Low"},"extraData":{}}</REMEDIATE> -->
|
True
|
CVE-2015-9251 (Medium) detected in jquery-1.7.1.min.js - ## CVE-2015-9251 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>jquery-1.7.1.min.js</b></p></summary>
<p>JavaScript library for DOM operations</p>
<p>Library home page: <a href="https://cdnjs.cloudflare.com/ajax/libs/jquery/1.7.1/jquery.min.js">https://cdnjs.cloudflare.com/ajax/libs/jquery/1.7.1/jquery.min.js</a></p>
<p>Path to dependency file: /tmp/ws-scm/mindmeld/mindmeld-ui/node_modules/sockjs/examples/multiplex/index.html</p>
<p>Path to vulnerable library: /mindmeld/mindmeld-ui/node_modules/sockjs/examples/multiplex/index.html,/mindmeld/mindmeld-ui/node_modules/sockjs/examples/express-3.x/index.html,/mindmeld/mindmeld-ui/node_modules/vm-browserify/example/run/index.html,/mindmeld/mindmeld-ui/node_modules/sockjs/examples/hapi/html/index.html,/mindmeld/mindmeld-ui/node_modules/sockjs/examples/echo/index.html</p>
<p>
Dependency Hierarchy:
- :x: **jquery-1.7.1.min.js** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://api.github.com/repos/LevyForchh/mindmeld/commits/ecdb981d4151b1a7aaa73e3ae7ff11430bd3d4a4">ecdb981d4151b1a7aaa73e3ae7ff11430bd3d4a4</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
jQuery before 3.0.0 is vulnerable to Cross-site Scripting (XSS) attacks when a cross-domain Ajax request is performed without the dataType option, causing text/javascript responses to be executed.
<p>Publish Date: 2018-01-18
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2015-9251>CVE-2015-9251</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>6.1</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: Required
- Scope: Changed
- Impact Metrics:
- Confidentiality Impact: Low
- Integrity Impact: Low
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://nvd.nist.gov/vuln/detail/CVE-2015-9251">https://nvd.nist.gov/vuln/detail/CVE-2015-9251</a></p>
<p>Release Date: 2018-01-18</p>
<p>Fix Resolution: jQuery - v3.0.0</p>
</p>
</details>
<p></p>
<!-- <REMEDIATE>{"isOpenPROnVulnerability":false,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"JavaScript","packageName":"jquery","packageVersion":"1.7.1","isTransitiveDependency":false,"dependencyTree":"jquery:1.7.1","isMinimumFixVersionAvailable":true,"minimumFixVersion":"jQuery - v3.0.0"}],"vulnerabilityIdentifier":"CVE-2015-9251","vulnerabilityDetails":"jQuery before 3.0.0 is vulnerable to Cross-site Scripting (XSS) attacks when a cross-domain Ajax request is performed without the dataType option, causing text/javascript responses to be executed.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2015-9251","cvss3Severity":"medium","cvss3Score":"6.1","cvss3Metrics":{"A":"None","AC":"Low","PR":"None","S":"Changed","C":"Low","UI":"Required","AV":"Network","I":"Low"},"extraData":{}}</REMEDIATE> -->
|
non_process
|
cve medium detected in jquery min js cve medium severity vulnerability vulnerable library jquery min js javascript library for dom operations library home page a href path to dependency file tmp ws scm mindmeld mindmeld ui node modules sockjs examples multiplex index html path to vulnerable library mindmeld mindmeld ui node modules sockjs examples multiplex index html mindmeld mindmeld ui node modules sockjs examples express x index html mindmeld mindmeld ui node modules vm browserify example run index html mindmeld mindmeld ui node modules sockjs examples hapi html index html mindmeld mindmeld ui node modules sockjs examples echo index html dependency hierarchy x jquery min js vulnerable library found in head commit a href vulnerability details jquery before is vulnerable to cross site scripting xss attacks when a cross domain ajax request is performed without the datatype option causing text javascript responses to be executed publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction required scope changed impact metrics confidentiality impact low integrity impact low availability impact none for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution jquery isopenpronvulnerability false ispackagebased true isdefaultbranch true packages vulnerabilityidentifier cve vulnerabilitydetails jquery before is vulnerable to cross site scripting xss attacks when a cross domain ajax request is performed without the datatype option causing text javascript responses to be executed vulnerabilityurl
| 0
|
5,824
| 8,658,266,843
|
IssuesEvent
|
2018-11-28 00:12:44
|
metabase/metabase
|
https://api.github.com/repos/metabase/metabase
|
closed
|
Upgrade 0.30.4 to 0.31.1 / query builder queries fail
|
Bug Query Processor
|
Hello,
Upgraded this morning to 0.31.1. In dashboards with filters, all my queries built with the query builder fail with :
error "Unknown column 'field' in 'where clause'",
SQL written queries still work.
```
nov. 26 15:57:38 WARN metabase.query-processor :: {:status :failed,
:class com.mysql.jdbc.exceptions.jdbc4.MySQLSyntaxErrorException,
:error "Unknown column 'field' in 'where clause'",
:stacktrace
("sun.reflect.GeneratedConstructorAccessor85.newInstance(Unknown Source)"
"sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)"
"java.lang.reflect.Constructor.newInstance(Constructor.java:423)"
"com.mysql.jdbc.Util.handleNewInstance(Util.java:425)"
"com.mysql.jdbc.Util.getInstance(Util.java:408)"
"com.mysql.jdbc.SQLError.createSQLException(SQLError.java:944)"
"com.mysql.jdbc.MysqlIO.checkErrorPacket(MysqlIO.java:3973)"
"com.mysql.jdbc.MysqlIO.checkErrorPacket(MysqlIO.java:3909)"
"com.mysql.jdbc.MysqlIO.sendCommand(MysqlIO.java:2527)"
"com.mysql.jdbc.MysqlIO.sqlQueryDirect(MysqlIO.java:2680)"
"com.mysql.jdbc.ConnectionImpl.execSQL(ConnectionImpl.java:2484)"
"com.mysql.jdbc.PreparedStatement.executeInternal(PreparedStatement.java:1858)"
"com.mysql.jdbc.PreparedStatement.executeQuery(PreparedStatement.java:1966)"
"com.mchange.v2.c3p0.impl.NewProxyPreparedStatement.executeQuery(NewProxyPreparedStatement.java:353)"
"clojure.java.jdbc$execute_query_with_params.invokeStatic(jdbc.clj:1002)"
"clojure.java.jdbc$execute_query_with_params.invoke(jdbc.clj:996)"
"clojure.java.jdbc$db_query_with_resultset_STAR_.invokeStatic(jdbc.clj:1025)"
"clojure.java.jdbc$db_query_with_resultset_STAR_.invoke(jdbc.clj:1005)"
"clojure.java.jdbc$query.invokeStatic(jdbc.clj:1099)"
"clojure.java.jdbc$query.invoke(jdbc.clj:1056)"
"toucan.db$query.invokeStatic(db.clj:275)"
"toucan.db$query.doInvoke(db.clj:271)"
"clojure.lang.RestFn.invoke(RestFn.java:410)"
"toucan.db$simple_select.invokeStatic(db.clj:379)"
"toucan.db$simple_select.invoke(db.clj:368)"
"toucan.db$simple_select_one.invokeStatic(db.clj:405)"
"toucan.db$simple_select_one.invoke(db.clj:394)"
"toucan.db$select_one.invokeStatic(db.clj:606)"
"toucan.db$select_one.doInvoke(db.clj:599)"
"clojure.lang.RestFn.applyTo(RestFn.java:139)"
"clojure.core$apply.invokeStatic(core.clj:659)"
"clojure.core$apply.invoke(core.clj:652)"
"toucan.db$select_one_field.invokeStatic(db.clj:615)"
"toucan.db$select_one_field.doInvoke(db.clj:608)"
"clojure.lang.RestFn.invoke(RestFn.java:464)"
"--> query_processor.middleware.parameters.mbql$parse_param_value_for_type.invokeStatic(mbql.clj:22)"
"query_processor.middleware.parameters.mbql$parse_param_value_for_type.invoke(mbql.clj:13)"
"query_processor.middleware.parameters.mbql$fn__32851$build_filter_clause__32856$fn__32866.invoke(mbql.clj:56)"
"query_processor.middleware.parameters.mbql$fn__32851$build_filter_clause__32856.invoke(mbql.clj:38)"
"query_processor.middleware.parameters.mbql$fn__32851$build_filter_clause__32856$fn__32866$iter__32871__32875$fn__32876$fn__32877.invoke(mbql.clj:45)"
"query_processor.middleware.parameters.mbql$fn__32851$build_filter_clause__32856$fn__32866$iter__32871__32875$fn__32876.invoke(mbql.clj:44)"
"query_processor.middleware.parameters.mbql$fn__32851$build_filter_clause__32856$fn__32866.invoke(mbql.clj:44)"
"query_processor.middleware.parameters.mbql$fn__32851$build_filter_clause__32856.invoke(mbql.clj:38)"
"query_processor.middleware.parameters.mbql$expand.invokeStatic(mbql.clj:71)"
"query_processor.middleware.parameters.mbql$expand.invoke(mbql.clj:58)"
"query_processor.middleware.parameters$expand_parameters_STAR_.invokeStatic(parameters.clj:18)"
"query_processor.middleware.parameters$expand_parameters_STAR_.invoke(parameters.clj:12)"
"query_processor.middleware.parameters$expand_parameters.invokeStatic(parameters.clj:43)"
"query_processor.middleware.parameters$expand_parameters.invoke(parameters.clj:40)"
"query_processor.middleware.parameters$substitute_parameters_STAR_.invokeStatic(parameters.clj:49)"
"query_processor.middleware.parameters$substitute_parameters_STAR_.invoke(parameters.clj:46)"
"query_processor.middleware.driver_specific$process_query_in_context$fn__32156.invoke(driver_specific.clj:12)"
"query_processor.middleware.resolve_driver$resolve_driver$fn__35279.invoke(resolve_driver.clj:15)"
"query_processor.middleware.bind_effective_timezone$bind_effective_timezone$fn__31351$fn__31352.invoke(bind_effective_timezone.clj:9)"
"util.date$call_with_effective_timezone.invokeStatic(date.clj:88)"
"util.date$call_with_effective_timezone.invoke(date.clj:77)"
"query_processor.middleware.bind_effective_timezone$bind_effective_timezone$fn__31351.invoke(bind_effective_timezone.clj:8)"
"query_processor.middleware.store$initialize_store$fn__37496$fn__37497.invoke(store.clj:11)"
"query_processor.store$do_with_new_store.invokeStatic(store.clj:34)"
"query_processor.store$do_with_new_store.invoke(store.clj:30)"
"query_processor.middleware.store$initialize_store$fn__37496.invoke(store.clj:10)"
"query_processor.middleware.cache$maybe_return_cached_results$fn__31773.invoke(cache.clj:127)"
"query_processor.middleware.catch_exceptions$catch_exceptions$fn__31821.invoke(catch_exceptions.clj:64)"
"query_processor$process_query.invokeStatic(query_processor.clj:213)"
"query_processor$process_query.invoke(query_processor.clj:209)"
"query_processor$run_and_save_query_BANG_.invokeStatic(query_processor.clj:322)"
"query_processor$run_and_save_query_BANG_.invoke(query_processor.clj:316)"
"query_processor$fn__37661$process_query_and_save_execution_BANG___37666$fn__37667.invoke(query_processor.clj:354)"
"query_processor$fn__37661$process_query_and_save_execution_BANG___37666.invoke(query_processor.clj:340)"
"api.card$run_query_for_card.invokeStatic(card.clj:580)"
"api.card$run_query_for_card.doInvoke(card.clj:566)"
"api.card$fn__46963$fn__46966.invoke(card.clj:587)"
"api.card$fn__46963.invokeStatic(card.clj:586)"
"api.card$fn__46963.invoke(card.clj:582)"
"middleware$enforce_authentication$fn__56091.invoke(middleware.clj:113)"
"api.routes$fn__56237.invokeStatic(routes.clj:62)"
"api.routes$fn__56237.invoke(routes.clj:62)"
"routes$fn__56326$fn__56327.doInvoke(routes.clj:108)"
"routes$fn__56326.invokeStatic(routes.clj:103)"
"routes$fn__56326.invoke(routes.clj:103)"
"middleware$catch_api_exceptions$fn__56226.invoke(middleware.clj:436)"
"middleware$log_api_call$fn__56204$fn__56206.invoke(middleware.clj:364)"
"middleware$log_api_call$fn__56204.invoke(middleware.clj:363)"
"middleware$add_security_headers$fn__56146.invoke(middleware.clj:252)"
"core$wrap_streamed_json_response$fn__62590.invoke(core.clj:67)"
"middleware$bind_current_user$fn__56096.invoke(middleware.clj:137)"
"middleware$maybe_set_site_url$fn__56156.invoke(middleware.clj:290)"
"middleware$add_content_type$fn__56149.invoke(middleware.clj:262)"),
:query
{:query {:source-table 5, :aggregation [[:metric 2]], :breakout [[:fk-> [:field-id 46] [:field-id 61]]]},
:type :query,
:constraints {:max-results 10000, :max-results-bare-rows 2000},
:parameters [{:type "date/all-options", :target ["dimension" ["field-id" 42]], :value "2018-01-01~2018-07-01"} {:type "id", :target ["dimension" ["fk->" 46 60]], :value ["006"]}],
:middleware nil,
:cache-ttl nil,
:info
{:executed-by 1,
:context :question,
:card-id 14,
:dashboard-id nil,
:query-hash [115, -51, 84, -53, 94, 76, 3, 70, -111, -27, 94, 51, 112, -50, 115, 46, -48, 21, 93, -18, 74, -53, -64, 121, 59, -59, 26, 96, -113, -60, 73, 51],
:query-type "MBQL"}},
:preprocessed nil,
:native nil}
nov. 26 15:57:38 WARN metabase.query-processor :: Query failure: Unknown column 'field' in 'where clause'
("clojure.core$ex_info.invokeStatic(core.clj:4739)"
"clojure.core$ex_info.invoke(core.clj:4739)"
"--> query_processor$assert_query_status_successful.invokeStatic(query_processor.clj:285)"
"query_processor$assert_query_status_successful.invoke(query_processor.clj:277)"
"query_processor$run_and_save_query_BANG_.invokeStatic(query_processor.clj:323)"
"query_processor$run_and_save_query_BANG_.invoke(query_processor.clj:316)"
"query_processor$fn__37661$process_query_and_save_execution_BANG___37666$fn__37667.invoke(query_processor.clj:354)"
"query_processor$fn__37661$process_query_and_save_execution_BANG___37666.invoke(query_processor.clj:340)"
"api.card$run_query_for_card.invokeStatic(card.clj:580)"
"api.card$run_query_for_card.doInvoke(card.clj:566)"
"api.card$fn__46963$fn__46966.invoke(card.clj:587)"
"api.card$fn__46963.invokeStatic(card.clj:586)"
"api.card$fn__46963.invoke(card.clj:582)"
"middleware$enforce_authentication$fn__56091.invoke(middleware.clj:113)"
"api.routes$fn__56237.invokeStatic(routes.clj:62)"
"api.routes$fn__56237.invoke(routes.clj:62)"
"routes$fn__56326$fn__56327.doInvoke(routes.clj:108)"
"routes$fn__56326.invokeStatic(routes.clj:103)"
"routes$fn__56326.invoke(routes.clj:103)"
"middleware$catch_api_exceptions$fn__56226.invoke(middleware.clj:436)"
"middleware$log_api_call$fn__56204$fn__56206.invoke(middleware.clj:364)"
"middleware$log_api_call$fn__56204.invoke(middleware.clj:363)"
"middleware$add_security_headers$fn__56146.invoke(middleware.clj:252)"
"core$wrap_streamed_json_response$fn__62590.invoke(core.clj:67)"
"middleware$bind_current_user$fn__56096.invoke(middleware.clj:137)"
"middleware$maybe_set_site_url$fn__56156.invoke(middleware.clj:290)"
"middleware$add_content_type$fn__56149.invoke(middleware.clj:262)")
```
|
1.0
|
Upgrade 0.30.4 to 0.31.1 / query builder queries fail - Hello,
Upgraded this morning to 0.31.1. In dashboards with filters, all my queries built with the query builder fail with :
error "Unknown column 'field' in 'where clause'",
SQL written queries still work.
```
nov. 26 15:57:38 WARN metabase.query-processor :: {:status :failed,
:class com.mysql.jdbc.exceptions.jdbc4.MySQLSyntaxErrorException,
:error "Unknown column 'field' in 'where clause'",
:stacktrace
("sun.reflect.GeneratedConstructorAccessor85.newInstance(Unknown Source)"
"sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)"
"java.lang.reflect.Constructor.newInstance(Constructor.java:423)"
"com.mysql.jdbc.Util.handleNewInstance(Util.java:425)"
"com.mysql.jdbc.Util.getInstance(Util.java:408)"
"com.mysql.jdbc.SQLError.createSQLException(SQLError.java:944)"
"com.mysql.jdbc.MysqlIO.checkErrorPacket(MysqlIO.java:3973)"
"com.mysql.jdbc.MysqlIO.checkErrorPacket(MysqlIO.java:3909)"
"com.mysql.jdbc.MysqlIO.sendCommand(MysqlIO.java:2527)"
"com.mysql.jdbc.MysqlIO.sqlQueryDirect(MysqlIO.java:2680)"
"com.mysql.jdbc.ConnectionImpl.execSQL(ConnectionImpl.java:2484)"
"com.mysql.jdbc.PreparedStatement.executeInternal(PreparedStatement.java:1858)"
"com.mysql.jdbc.PreparedStatement.executeQuery(PreparedStatement.java:1966)"
"com.mchange.v2.c3p0.impl.NewProxyPreparedStatement.executeQuery(NewProxyPreparedStatement.java:353)"
"clojure.java.jdbc$execute_query_with_params.invokeStatic(jdbc.clj:1002)"
"clojure.java.jdbc$execute_query_with_params.invoke(jdbc.clj:996)"
"clojure.java.jdbc$db_query_with_resultset_STAR_.invokeStatic(jdbc.clj:1025)"
"clojure.java.jdbc$db_query_with_resultset_STAR_.invoke(jdbc.clj:1005)"
"clojure.java.jdbc$query.invokeStatic(jdbc.clj:1099)"
"clojure.java.jdbc$query.invoke(jdbc.clj:1056)"
"toucan.db$query.invokeStatic(db.clj:275)"
"toucan.db$query.doInvoke(db.clj:271)"
"clojure.lang.RestFn.invoke(RestFn.java:410)"
"toucan.db$simple_select.invokeStatic(db.clj:379)"
"toucan.db$simple_select.invoke(db.clj:368)"
"toucan.db$simple_select_one.invokeStatic(db.clj:405)"
"toucan.db$simple_select_one.invoke(db.clj:394)"
"toucan.db$select_one.invokeStatic(db.clj:606)"
"toucan.db$select_one.doInvoke(db.clj:599)"
"clojure.lang.RestFn.applyTo(RestFn.java:139)"
"clojure.core$apply.invokeStatic(core.clj:659)"
"clojure.core$apply.invoke(core.clj:652)"
"toucan.db$select_one_field.invokeStatic(db.clj:615)"
"toucan.db$select_one_field.doInvoke(db.clj:608)"
"clojure.lang.RestFn.invoke(RestFn.java:464)"
"--> query_processor.middleware.parameters.mbql$parse_param_value_for_type.invokeStatic(mbql.clj:22)"
"query_processor.middleware.parameters.mbql$parse_param_value_for_type.invoke(mbql.clj:13)"
"query_processor.middleware.parameters.mbql$fn__32851$build_filter_clause__32856$fn__32866.invoke(mbql.clj:56)"
"query_processor.middleware.parameters.mbql$fn__32851$build_filter_clause__32856.invoke(mbql.clj:38)"
"query_processor.middleware.parameters.mbql$fn__32851$build_filter_clause__32856$fn__32866$iter__32871__32875$fn__32876$fn__32877.invoke(mbql.clj:45)"
"query_processor.middleware.parameters.mbql$fn__32851$build_filter_clause__32856$fn__32866$iter__32871__32875$fn__32876.invoke(mbql.clj:44)"
"query_processor.middleware.parameters.mbql$fn__32851$build_filter_clause__32856$fn__32866.invoke(mbql.clj:44)"
"query_processor.middleware.parameters.mbql$fn__32851$build_filter_clause__32856.invoke(mbql.clj:38)"
"query_processor.middleware.parameters.mbql$expand.invokeStatic(mbql.clj:71)"
"query_processor.middleware.parameters.mbql$expand.invoke(mbql.clj:58)"
"query_processor.middleware.parameters$expand_parameters_STAR_.invokeStatic(parameters.clj:18)"
"query_processor.middleware.parameters$expand_parameters_STAR_.invoke(parameters.clj:12)"
"query_processor.middleware.parameters$expand_parameters.invokeStatic(parameters.clj:43)"
"query_processor.middleware.parameters$expand_parameters.invoke(parameters.clj:40)"
"query_processor.middleware.parameters$substitute_parameters_STAR_.invokeStatic(parameters.clj:49)"
"query_processor.middleware.parameters$substitute_parameters_STAR_.invoke(parameters.clj:46)"
"query_processor.middleware.driver_specific$process_query_in_context$fn__32156.invoke(driver_specific.clj:12)"
"query_processor.middleware.resolve_driver$resolve_driver$fn__35279.invoke(resolve_driver.clj:15)"
"query_processor.middleware.bind_effective_timezone$bind_effective_timezone$fn__31351$fn__31352.invoke(bind_effective_timezone.clj:9)"
"util.date$call_with_effective_timezone.invokeStatic(date.clj:88)"
"util.date$call_with_effective_timezone.invoke(date.clj:77)"
"query_processor.middleware.bind_effective_timezone$bind_effective_timezone$fn__31351.invoke(bind_effective_timezone.clj:8)"
"query_processor.middleware.store$initialize_store$fn__37496$fn__37497.invoke(store.clj:11)"
"query_processor.store$do_with_new_store.invokeStatic(store.clj:34)"
"query_processor.store$do_with_new_store.invoke(store.clj:30)"
"query_processor.middleware.store$initialize_store$fn__37496.invoke(store.clj:10)"
"query_processor.middleware.cache$maybe_return_cached_results$fn__31773.invoke(cache.clj:127)"
"query_processor.middleware.catch_exceptions$catch_exceptions$fn__31821.invoke(catch_exceptions.clj:64)"
"query_processor$process_query.invokeStatic(query_processor.clj:213)"
"query_processor$process_query.invoke(query_processor.clj:209)"
"query_processor$run_and_save_query_BANG_.invokeStatic(query_processor.clj:322)"
"query_processor$run_and_save_query_BANG_.invoke(query_processor.clj:316)"
"query_processor$fn__37661$process_query_and_save_execution_BANG___37666$fn__37667.invoke(query_processor.clj:354)"
"query_processor$fn__37661$process_query_and_save_execution_BANG___37666.invoke(query_processor.clj:340)"
"api.card$run_query_for_card.invokeStatic(card.clj:580)"
"api.card$run_query_for_card.doInvoke(card.clj:566)"
"api.card$fn__46963$fn__46966.invoke(card.clj:587)"
"api.card$fn__46963.invokeStatic(card.clj:586)"
"api.card$fn__46963.invoke(card.clj:582)"
"middleware$enforce_authentication$fn__56091.invoke(middleware.clj:113)"
"api.routes$fn__56237.invokeStatic(routes.clj:62)"
"api.routes$fn__56237.invoke(routes.clj:62)"
"routes$fn__56326$fn__56327.doInvoke(routes.clj:108)"
"routes$fn__56326.invokeStatic(routes.clj:103)"
"routes$fn__56326.invoke(routes.clj:103)"
"middleware$catch_api_exceptions$fn__56226.invoke(middleware.clj:436)"
"middleware$log_api_call$fn__56204$fn__56206.invoke(middleware.clj:364)"
"middleware$log_api_call$fn__56204.invoke(middleware.clj:363)"
"middleware$add_security_headers$fn__56146.invoke(middleware.clj:252)"
"core$wrap_streamed_json_response$fn__62590.invoke(core.clj:67)"
"middleware$bind_current_user$fn__56096.invoke(middleware.clj:137)"
"middleware$maybe_set_site_url$fn__56156.invoke(middleware.clj:290)"
"middleware$add_content_type$fn__56149.invoke(middleware.clj:262)"),
:query
{:query {:source-table 5, :aggregation [[:metric 2]], :breakout [[:fk-> [:field-id 46] [:field-id 61]]]},
:type :query,
:constraints {:max-results 10000, :max-results-bare-rows 2000},
:parameters [{:type "date/all-options", :target ["dimension" ["field-id" 42]], :value "2018-01-01~2018-07-01"} {:type "id", :target ["dimension" ["fk->" 46 60]], :value ["006"]}],
:middleware nil,
:cache-ttl nil,
:info
{:executed-by 1,
:context :question,
:card-id 14,
:dashboard-id nil,
:query-hash [115, -51, 84, -53, 94, 76, 3, 70, -111, -27, 94, 51, 112, -50, 115, 46, -48, 21, 93, -18, 74, -53, -64, 121, 59, -59, 26, 96, -113, -60, 73, 51],
:query-type "MBQL"}},
:preprocessed nil,
:native nil}
nov. 26 15:57:38 WARN metabase.query-processor :: Query failure: Unknown column 'field' in 'where clause'
("clojure.core$ex_info.invokeStatic(core.clj:4739)"
"clojure.core$ex_info.invoke(core.clj:4739)"
"--> query_processor$assert_query_status_successful.invokeStatic(query_processor.clj:285)"
"query_processor$assert_query_status_successful.invoke(query_processor.clj:277)"
"query_processor$run_and_save_query_BANG_.invokeStatic(query_processor.clj:323)"
"query_processor$run_and_save_query_BANG_.invoke(query_processor.clj:316)"
"query_processor$fn__37661$process_query_and_save_execution_BANG___37666$fn__37667.invoke(query_processor.clj:354)"
"query_processor$fn__37661$process_query_and_save_execution_BANG___37666.invoke(query_processor.clj:340)"
"api.card$run_query_for_card.invokeStatic(card.clj:580)"
"api.card$run_query_for_card.doInvoke(card.clj:566)"
"api.card$fn__46963$fn__46966.invoke(card.clj:587)"
"api.card$fn__46963.invokeStatic(card.clj:586)"
"api.card$fn__46963.invoke(card.clj:582)"
"middleware$enforce_authentication$fn__56091.invoke(middleware.clj:113)"
"api.routes$fn__56237.invokeStatic(routes.clj:62)"
"api.routes$fn__56237.invoke(routes.clj:62)"
"routes$fn__56326$fn__56327.doInvoke(routes.clj:108)"
"routes$fn__56326.invokeStatic(routes.clj:103)"
"routes$fn__56326.invoke(routes.clj:103)"
"middleware$catch_api_exceptions$fn__56226.invoke(middleware.clj:436)"
"middleware$log_api_call$fn__56204$fn__56206.invoke(middleware.clj:364)"
"middleware$log_api_call$fn__56204.invoke(middleware.clj:363)"
"middleware$add_security_headers$fn__56146.invoke(middleware.clj:252)"
"core$wrap_streamed_json_response$fn__62590.invoke(core.clj:67)"
"middleware$bind_current_user$fn__56096.invoke(middleware.clj:137)"
"middleware$maybe_set_site_url$fn__56156.invoke(middleware.clj:290)"
"middleware$add_content_type$fn__56149.invoke(middleware.clj:262)")
```
|
process
|
upgrade to query builder queries fail hello upgraded this morning to in dashboards with filters all my queries built with the query builder fail with error unknown column field in where clause sql written queries still work nov warn metabase query processor status failed class com mysql jdbc exceptions mysqlsyntaxerrorexception error unknown column field in where clause stacktrace sun reflect newinstance unknown source sun reflect delegatingconstructoraccessorimpl newinstance delegatingconstructoraccessorimpl java java lang reflect constructor newinstance constructor java com mysql jdbc util handlenewinstance util java com mysql jdbc util getinstance util java com mysql jdbc sqlerror createsqlexception sqlerror java com mysql jdbc mysqlio checkerrorpacket mysqlio java com mysql jdbc mysqlio checkerrorpacket mysqlio java com mysql jdbc mysqlio sendcommand mysqlio java com mysql jdbc mysqlio sqlquerydirect mysqlio java com mysql jdbc connectionimpl execsql connectionimpl java com mysql jdbc preparedstatement executeinternal preparedstatement java com mysql jdbc preparedstatement executequery preparedstatement java com mchange impl newproxypreparedstatement executequery newproxypreparedstatement java clojure java jdbc execute query with params invokestatic jdbc clj clojure java jdbc execute query with params invoke jdbc clj clojure java jdbc db query with resultset star invokestatic jdbc clj clojure java jdbc db query with resultset star invoke jdbc clj clojure java jdbc query invokestatic jdbc clj clojure java jdbc query invoke jdbc clj toucan db query invokestatic db clj toucan db query doinvoke db clj clojure lang restfn invoke restfn java toucan db simple select invokestatic db clj toucan db simple select invoke db clj toucan db simple select one invokestatic db clj toucan db simple select one invoke db clj toucan db select one invokestatic db clj toucan db select one doinvoke db clj clojure lang restfn applyto restfn java clojure core apply invokestatic core clj clojure core apply invoke core clj toucan db select one field invokestatic db clj toucan db select one field doinvoke db clj clojure lang restfn invoke restfn java query processor middleware parameters mbql parse param value for type invokestatic mbql clj query processor middleware parameters mbql parse param value for type invoke mbql clj query processor middleware parameters mbql fn build filter clause fn invoke mbql clj query processor middleware parameters mbql fn build filter clause invoke mbql clj query processor middleware parameters mbql fn build filter clause fn iter fn fn invoke mbql clj query processor middleware parameters mbql fn build filter clause fn iter fn invoke mbql clj query processor middleware parameters mbql fn build filter clause fn invoke mbql clj query processor middleware parameters mbql fn build filter clause invoke mbql clj query processor middleware parameters mbql expand invokestatic mbql clj query processor middleware parameters mbql expand invoke mbql clj query processor middleware parameters expand parameters star invokestatic parameters clj query processor middleware parameters expand parameters star invoke parameters clj query processor middleware parameters expand parameters invokestatic parameters clj query processor middleware parameters expand parameters invoke parameters clj query processor middleware parameters substitute parameters star invokestatic parameters clj query processor middleware parameters substitute parameters star invoke parameters clj query processor middleware driver 
specific process query in context fn invoke driver specific clj query processor middleware resolve driver resolve driver fn invoke resolve driver clj query processor middleware bind effective timezone bind effective timezone fn fn invoke bind effective timezone clj util date call with effective timezone invokestatic date clj util date call with effective timezone invoke date clj query processor middleware bind effective timezone bind effective timezone fn invoke bind effective timezone clj query processor middleware store initialize store fn fn invoke store clj query processor store do with new store invokestatic store clj query processor store do with new store invoke store clj query processor middleware store initialize store fn invoke store clj query processor middleware cache maybe return cached results fn invoke cache clj query processor middleware catch exceptions catch exceptions fn invoke catch exceptions clj query processor process query invokestatic query processor clj query processor process query invoke query processor clj query processor run and save query bang invokestatic query processor clj query processor run and save query bang invoke query processor clj query processor fn process query and save execution bang fn invoke query processor clj query processor fn process query and save execution bang invoke query processor clj api card run query for card invokestatic card clj api card run query for card doinvoke card clj api card fn fn invoke card clj api card fn invokestatic card clj api card fn invoke card clj middleware enforce authentication fn invoke middleware clj api routes fn invokestatic routes clj api routes fn invoke routes clj routes fn fn doinvoke routes clj routes fn invokestatic routes clj routes fn invoke routes clj middleware catch api exceptions fn invoke middleware clj middleware log api call fn fn invoke middleware clj middleware log api call fn invoke middleware clj middleware add security headers fn invoke middleware clj core wrap streamed json response fn invoke core clj middleware bind current user fn invoke middleware clj middleware maybe set site url fn invoke middleware clj middleware add content type fn invoke middleware clj query query source table aggregation breakout type query constraints max results max results bare rows parameters value type id target value middleware nil cache ttl nil info executed by context question card id dashboard id nil query hash query type mbql preprocessed nil native nil nov warn metabase query processor query failure unknown column field in where clause clojure core ex info invokestatic core clj clojure core ex info invoke core clj query processor assert query status successful invokestatic query processor clj query processor assert query status successful invoke query processor clj query processor run and save query bang invokestatic query processor clj query processor run and save query bang invoke query processor clj query processor fn process query and save execution bang fn invoke query processor clj query processor fn process query and save execution bang invoke query processor clj api card run query for card invokestatic card clj api card run query for card doinvoke card clj api card fn fn invoke card clj api card fn invokestatic card clj api card fn invoke card clj middleware enforce authentication fn invoke middleware clj api routes fn invokestatic routes clj api routes fn invoke routes clj routes fn fn doinvoke routes clj routes fn invokestatic routes clj routes fn invoke routes clj middleware catch api 
exceptions fn invoke middleware clj middleware log api call fn fn invoke middleware clj middleware log api call fn invoke middleware clj middleware add security headers fn invoke middleware clj core wrap streamed json response fn invoke core clj middleware bind current user fn invoke middleware clj middleware maybe set site url fn invoke middleware clj middleware add content type fn invoke middleware clj
| 1
|
22,153
| 30,694,487,171
|
IssuesEvent
|
2023-07-26 17:28:48
|
h4sh5/npm-auto-scanner
|
https://api.github.com/repos/h4sh5/npm-auto-scanner
|
opened
|
@mongodb-js/oidc-plugin 0.3.0 has 2 guarddog issues
|
npm-install-script npm-silent-process-execution
|
```{"npm-install-script":[{"code":" \"prepare\": \"husky install\",","location":"package/package.json:52","message":"The package.json has a script automatically running when the package is installed"}],"npm-silent-process-execution":[{"code":" const child = (0, child_process_1.spawn)(this.options.openBrowser.command, [options.url], {\n shell: true,\n stdio: 'ignore',\n detached: true,\n signal: this.options.openB... });","location":"package/dist/plugin.js:298","message":"This package is silently executing another executable"}]}```
|
1.0
|
@mongodb-js/oidc-plugin 0.3.0 has 2 guarddog issues - ```{"npm-install-script":[{"code":" \"prepare\": \"husky install\",","location":"package/package.json:52","message":"The package.json has a script automatically running when the package is installed"}],"npm-silent-process-execution":[{"code":" const child = (0, child_process_1.spawn)(this.options.openBrowser.command, [options.url], {\n shell: true,\n stdio: 'ignore',\n detached: true,\n signal: this.options.openB... });","location":"package/dist/plugin.js:298","message":"This package is silently executing another executable"}]}```
|
process
|
mongodb js oidc plugin has guarddog issues npm install script npm silent process execution n shell true n stdio ignore n detached true n signal this options openb location package dist plugin js message this package is silently executing another executable
| 1
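The guarddog report in the record above is plain JSON keyed by rule name, with each finding carrying code, location, and message fields. A minimal stdlib-only sketch of flattening such a report for triage (the embedded sample mirrors the issue's findings, with the code snippets simplified):

```python
# Flatten a guarddog-style findings report (rule name -> list of findings)
# into one line per finding for quick triage.
import json

raw = """
{"npm-install-script":
   [{"code": "prepare: husky install",
     "location": "package/package.json:52",
     "message": "The package.json has a script automatically running when the package is installed"}],
 "npm-silent-process-execution":
   [{"code": "child_process_1.spawn(...)",
     "location": "package/dist/plugin.js:298",
     "message": "This package is silently executing another executable"}]}
"""

report = json.loads(raw)
for rule, findings in report.items():
    for finding in findings:
        print(f"[{rule}] {finding['location']}: {finding['message']}")
```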
|
141,519
| 12,970,446,780
|
IssuesEvent
|
2020-07-21 09:20:57
|
Leo-Corporation/LABS-ExperimentalConsole
|
https://api.github.com/repos/Leo-Corporation/LABS-ExperimentalConsole
|
closed
|
[Documentation] The help is not up to date
|
documentation
|
**Problem description:**
The description of the `list` command is missing.
https://github.com/Leo-Corporation/LABS-ExperimentalConsole/blob/cac99a7760d417120f8fafedce2913fe3336b9ce/LABS%20Experimental%20Console/Classes/Functions.cs#L13-L26
|
1.0
|
[Documentation] The help is not up to date - **Problem description:**
The description of the `list` command is missing.
https://github.com/Leo-Corporation/LABS-ExperimentalConsole/blob/cac99a7760d417120f8fafedce2913fe3336b9ce/LABS%20Experimental%20Console/Classes/Functions.cs#L13-L26
|
non_process
|
the help is not up to date problem description the description of the list command is missing
| 0
|
825,629
| 31,465,026,842
|
IssuesEvent
|
2023-08-30 00:49:06
|
War-Brokers/.github
|
https://api.github.com/repos/War-Brokers/.github
|
closed
|
Make discord bot that can be invited to other servers
|
priority:3 - low type:suggestion
|
features:
- `stats` command
- `hecc` command
- `loldude`
- `siib` command
- `register-listing` command
- adds the server to the WB community server list (#117)
- squad role syncing
|
1.0
|
Make discord bot that can be invited to other servers - features:
- `stats` command
- `hecc` command
- `loldude`
- `siib` command
- `register-listing` command
- adds the server to the WB community server list (#117)
- squad role syncing
|
non_process
|
make discord bot that can be invited to other servers features stats command hecc command loldude siib command register listing command adds the server to the wb community server list squad role syncing
| 0
|
12,432
| 14,927,946,247
|
IssuesEvent
|
2021-01-24 17:20:33
|
GoogleCloudPlatform/fda-mystudies
|
https://api.github.com/repos/GoogleCloudPlatform/fda-mystudies
|
closed
|
[iOS] Dashboard > Statistics > Questionnaires > Values shown in stats are incorrect and instead calculated based on X/60
|
Bug P1 Process: Fixed Process: Tested dev iOS
|
**Steps:**
1. Configure stats for any response type Eg. Height
2. Publish the update
3. Login to iOS mobile
4. Submit the response Eg. 170cm
5. Observe the stats value for the same
**Actual:** All the values shown in stats are calculated based on X/60 Eg. 170/60 = 2.833
**Expected:** Values should be proper
Issue observed for all response types for questionnaires(question step and form step)
|
2.0
|
[iOS] Dashboard > Statistics > Questionnaires > Values shown in stats are incorrect and instead calculated based on X/60 - **Steps:**
1. Configure stats for any response type Eg. Height
2. Publish the update
3. Login to iOS mobile
4. Submit the response Eg. 170cm
5. Observe the stats value for the same
**Actual:** All the values shown in stats are calculated based on X/60 Eg. 170/60 = 2.833
**Expected:** Values should be proper
Issue observed for all response types for questionnaires(question step and form step)
|
process
|
dashboard statistics questionnaires values shown in stats are incorrect and instead calculated based on x steps configure stats for any response type eg height publish the update login to ios mobile submit the response eg observe the stats value for the same actual all the values shown in stats are calculated based on x eg expected values should be proper issue observed for all response types for questionnaires question step and form step
| 1
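The arithmetic reported above (170/60 = 2.833) points at a time-unit divisor applied to every stat. The sketch below is only an illustration of that suspected defect class — not the app's actual code — contrasting the buggy and the intended aggregation:

```python
# Illustration of the suspected bug: a seconds->minutes style divisor
# applied unconditionally to every submitted response value.
SECONDS_PER_MINUTE = 60

def stat_value_buggy(response: float) -> float:
    # Wrong: divides a height response (170 cm) as if it were a duration.
    return response / SECONDS_PER_MINUTE

def stat_value_expected(response: float) -> float:
    # Right: non-time responses pass through unchanged.
    return response

assert round(stat_value_buggy(170), 3) == 2.833   # the value users saw
assert stat_value_expected(170) == 170            # the value users expected
```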
|
195,859
| 14,786,107,017
|
IssuesEvent
|
2021-01-12 04:36:26
|
cockroachdb/cockroach
|
https://api.github.com/repos/cockroachdb/cockroach
|
opened
|
kv/kvserver: TestStoreRangeMergeSlowWatcher timed out
|
C-test-failure O-robot branch-master
|
[(kv/kvserver).TestStoreRangeMergeSlowWatcher failed](https://teamcity.cockroachdb.com/viewLog.html?buildId=2574679&tab=buildLog) on [master@63fba7f1ab321088619d04aa9a44e61d018016f6](https://github.com/cockroachdb/cockroach/commits/63fba7f1ab321088619d04aa9a44e61d018016f6):
```
Slow failing tests:
TestStoreRangeMergeSlowWatcher - 960.21s
Slow passing tests:
TestLogic - 626.36s
TestTenantLogic - 435.98s
TestRestoreMidSchemaChange - 86.46s
Example_demo - 83.71s
TestImportIntoCSV - 82.20s
TestScatterRandomizeLeases - 67.50s
TestImportCSVStmt - 66.97s
TestPaginatedBackupTenant - 59.86s
TestRemoveDeadReplicas - 58.21s
TestExecBuild - 56.94s
TestRaceWithBackfill - 55.38s
TestChangefeedSchemaChangeNoBackfill - 55.17s
TestBTreeDeleteInsertCloneEachTime - 52.36s
TestImportData - 48.77s
TestAllRegisteredSetup - 46.20s
TestTelemetry - 45.91s
TestChangefeedNoBackfill - 44.55s
TestFullClusterBackup - 44.34s
TestChangefeedDiff - 42.89s
TestTransientTxnErrors - 41.33s
```
<details><summary>More</summary><p>
Parameters:
- GOFLAGS=-json
```
make stressrace TESTS=TestStoreRangeMergeSlowWatcher PKG=./pkg/kv/kvserver TESTTIMEOUT=5m STRESSFLAGS='-timeout 5m' 2>&1
```
[See this test on roachdash](https://roachdash.crdb.dev/?filter=status%3Aopen+t%3A.%2ATestStoreRangeMergeSlowWatcher.%2A&sort=title&restgroup=false&display=lastcommented+project)
<sub>powered by [pkg/cmd/internal/issues](https://github.com/cockroachdb/cockroach/tree/master/pkg/cmd/internal/issues)</sub></p></details>
|
1.0
|
kv/kvserver: TestStoreRangeMergeSlowWatcher timed out - [(kv/kvserver).TestStoreRangeMergeSlowWatcher failed](https://teamcity.cockroachdb.com/viewLog.html?buildId=2574679&tab=buildLog) on [master@63fba7f1ab321088619d04aa9a44e61d018016f6](https://github.com/cockroachdb/cockroach/commits/63fba7f1ab321088619d04aa9a44e61d018016f6):
```
Slow failing tests:
TestStoreRangeMergeSlowWatcher - 960.21s
Slow passing tests:
TestLogic - 626.36s
TestTenantLogic - 435.98s
TestRestoreMidSchemaChange - 86.46s
Example_demo - 83.71s
TestImportIntoCSV - 82.20s
TestScatterRandomizeLeases - 67.50s
TestImportCSVStmt - 66.97s
TestPaginatedBackupTenant - 59.86s
TestRemoveDeadReplicas - 58.21s
TestExecBuild - 56.94s
TestRaceWithBackfill - 55.38s
TestChangefeedSchemaChangeNoBackfill - 55.17s
TestBTreeDeleteInsertCloneEachTime - 52.36s
TestImportData - 48.77s
TestAllRegisteredSetup - 46.20s
TestTelemetry - 45.91s
TestChangefeedNoBackfill - 44.55s
TestFullClusterBackup - 44.34s
TestChangefeedDiff - 42.89s
TestTransientTxnErrors - 41.33s
```
<details><summary>More</summary><p>
Parameters:
- GOFLAGS=-json
```
make stressrace TESTS=TestStoreRangeMergeSlowWatcher PKG=./pkg/kv/kvserver TESTTIMEOUT=5m STRESSFLAGS='-timeout 5m' 2>&1
```
[See this test on roachdash](https://roachdash.crdb.dev/?filter=status%3Aopen+t%3A.%2ATestStoreRangeMergeSlowWatcher.%2A&sort=title&restgroup=false&display=lastcommented+project)
<sub>powered by [pkg/cmd/internal/issues](https://github.com/cockroachdb/cockroach/tree/master/pkg/cmd/internal/issues)</sub></p></details>
|
non_process
|
kv kvserver teststorerangemergeslowwatcher timed out on slow failing tests teststorerangemergeslowwatcher slow passing tests testlogic testtenantlogic testrestoremidschemachange example demo testimportintocsv testscatterrandomizeleases testimportcsvstmt testpaginatedbackuptenant testremovedeadreplicas testexecbuild testracewithbackfill testchangefeedschemachangenobackfill testbtreedeleteinsertcloneeachtime testimportdata testallregisteredsetup testtelemetry testchangefeednobackfill testfullclusterbackup testchangefeeddiff testtransienttxnerrors more parameters goflags json make stressrace tests teststorerangemergeslowwatcher pkg pkg kv kvserver testtimeout stressflags timeout powered by
| 0
|
26,157
| 19,694,378,063
|
IssuesEvent
|
2022-01-12 10:33:25
|
wazuh/wazuh-documentation
|
https://api.github.com/repos/wazuh/wazuh-documentation
|
closed
|
Rework the section: Installation alternatives
|
priority: high type: infrastructure type: refactor
|
This issue aims to enhance the section **Installation alternatives**
## Tasks
- [x] Create a different menu items distribution according to the points discussed with the content-team
- [x] Create a branch: `4717_3865_installation_alternatives_rework_v1`
- [x] Make a PR: #4718
- [x] The new **Installation alternatives** menu contains the following items:
- Ready-to-use machines
- Containers options
- Offline installation
- Installation from sources
- Commercial options
- Orchestration tools
- [x] Editing the following sections of the index page:
- Containers
- Commercial options
- Orchestration tools
- [x] Create an alternative branch for merging: `4717_3865_installation_alternatives_rework_v2`
- [x] Merge with `3865_installation_guide_rework`
- [x] Make a PR: #4726
- [x] Make a last review
**Comparing both versions of the Installation alternatives menu:**

Regards,
Damian Furfuro
|
1.0
|
Rework the section: Installation alternatives - This issue aims to enhance the section **Installation alternatives**
## Tasks
- [x] Create a different menu items distribution according to the points discussed with the content-team
- [x] Create a branch: `4717_3865_installation_alternatives_rework_v1`
- [x] Make a PR: #4718
- [x] The new **Installation alternatives** menu contains the following items:
- Ready-to-use machines
- Containers options
- Offline installation
- Installation from sources
- Commercial options
- Orchestration tools
- [x] Editing the following sections of the index page:
- Containers
- Commercial options
- Orchestration tools
- [x] Create an alternative branch for merging: `4717_3865_installation_alternatives_rework_v2`
- [x] Merge with `3865_installation_guide_rework`
- [x] Make a PR: #4726
- [x] Make a last review
**Comparing both versions of the Installation alternatives menu:**

Regards,
Damian Furfuro
|
non_process
|
rework the section installation alternatives this issue aims to enhance the section installation alternatives tasks create a different menu items distribution according to the points discussed with the content team create a branch installation alternatives rework make a pr the new installation alternatives menu contains the following items ready to use machines containers options offline installation installation from sources commercial options orchestration tools editing the following sections of the index page containers commercial options orchestration tools create an alternative branch for merging installation alternatives rework merge with installation guide rework make a pr make a last review comparing both versions of the installation alternatives menu regards damian furfuro
| 0
|
123,023
| 10,244,792,740
|
IssuesEvent
|
2019-08-20 11:19:21
|
enonic/app-contentstudio
|
https://api.github.com/repos/enonic/app-contentstudio
|
closed
|
Add ui-test to verify issue#760
|
Test
|
Content in the Request Publishing Wizard is not updated after it has been changed on the server
https://github.com/enonic/app-contentstudio/issues/760
|
1.0
|
Add ui-test to verify issue#760 - Content in the Request Publishing Wizard is not updated after it has been changed on the server
https://github.com/enonic/app-contentstudio/issues/760
|
non_process
|
add ui test to verify issue content in the request publishing wizard is not updated after it has been changed on the server
| 0
|
36,657
| 12,418,375,446
|
IssuesEvent
|
2020-05-23 00:01:26
|
MicrosoftDocs/microsoft-365-docs
|
https://api.github.com/repos/MicrosoftDocs/microsoft-365-docs
|
closed
|
what admin roles are needed?
|
security
|
Are there limits to what can be seen based on the admin role assigned? What admin roles are needed to use Advanced hunting?
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: 7bd75f06-75c8-c8f2-e2b4-0d48c714276f
* Version Independent ID: 621789df-ce3f-d985-9a93-77a46a18a9b8
* Content: [Overview of advanced hunting in Microsoft Threat Protection - Microsoft 365 security](https://docs.microsoft.com/en-us/microsoft-365/security/mtp/advanced-hunting-overview?view=o365-worldwide)
* Content Source: [microsoft-365/security/mtp/advanced-hunting-overview.md](https://github.com/MicrosoftDocs/microsoft-365-docs/blob/public/microsoft-365/security/mtp/advanced-hunting-overview.md)
* Product: **microsoft-365-enterprise**
* GitHub Login: @lomayor
* Microsoft Alias: **lomayor**
|
True
|
what admin roles are needed? - Are there limits to what can be seen based on the admin role assigned? What admin roles are needed to use Advanced hunting?
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: 7bd75f06-75c8-c8f2-e2b4-0d48c714276f
* Version Independent ID: 621789df-ce3f-d985-9a93-77a46a18a9b8
* Content: [Overview of advanced hunting in Microsoft Threat Protection - Microsoft 365 security](https://docs.microsoft.com/en-us/microsoft-365/security/mtp/advanced-hunting-overview?view=o365-worldwide)
* Content Source: [microsoft-365/security/mtp/advanced-hunting-overview.md](https://github.com/MicrosoftDocs/microsoft-365-docs/blob/public/microsoft-365/security/mtp/advanced-hunting-overview.md)
* Product: **microsoft-365-enterprise**
* GitHub Login: @lomayor
* Microsoft Alias: **lomayor**
|
non_process
|
what admin roles are needed are there limits to what can be seen based on the admin role assigned what admin roles are needed to use advanced hunting document details ⚠ do not edit this section it is required for docs microsoft com ➟ github issue linking id version independent id content content source product microsoft enterprise github login lomayor microsoft alias lomayor
| 0
|
17,303
| 23,119,866,189
|
IssuesEvent
|
2022-07-27 20:15:32
|
GoogleCloudPlatform/anthos-config-management-samples
|
https://api.github.com/repos/GoogleCloudPlatform/anthos-config-management-samples
|
reopened
|
Dependency Dashboard
|
type: process priority: p3
|
This issue lists Renovate updates and detected dependencies. Read the [Dependency Dashboard](https://docs.renovatebot.com/key-concepts/dashboard/) docs to learn more.
This repository currently has no open or pending branches.
## Detected dependencies
<details><summary>cloudbuild</summary>
<blockquote>
<details><summary>ci-app/app-repo/cloudbuild.yaml</summary>
- `gcr.io/google.com/cloudsdktool/cloud-sdk no version found`
- `gcr.io/kpt-dev/kpt no version found`
- `gcr.io/kpt-fn/gatekeeper v0.2`
</details>
<details><summary>ci-pipeline-unstructured/cloudbuild.yaml</summary>
- `bash no version found`
- `gcr.io/config-management-release/read-yaml no version found`
- `gcr.io/config-management-release/policy-controller-validate no version found`
</details>
<details><summary>ci-pipeline/cloudbuild.yaml</summary>
- `gcr.io/cloud-builders/kubectl no version found`
- `bash no version found`
- `bash no version found`
- `gcr.io/config-management-release/nomos no version found`
- `gcr.io/config-management-release/read-yaml no version found`
- `gcr.io/config-management-release/policy-controller-validate no version found`
</details>
<details><summary>kustomize-pipeline/build/cloudbuild.yaml</summary>
- `gcr.io/google-samples/cloudbuild-kustomize latest`
- `gcr.io/google-samples/cloudbuild-kustomize latest`
</details>
<details><summary>multi-environments-kustomize/config-source/cloudbuild.yaml</summary>
- `gcr.io/google-samples/cloudbuild-kustomize latest`
- `gcr.io/google-samples/cloudbuild-kustomize latest`
</details>
</blockquote>
</details>
<details><summary>dockerfile</summary>
<blockquote>
<details><summary>multi-environments-kustomize/cloud-build-rendering/cloudbuilder-kustomize/Dockerfile</summary>
- `gcr.io/cloud-builders/kubectl latest`
</details>
</blockquote>
</details>
<details><summary>kustomize</summary>
<blockquote>
<details><summary>asm-acm-tutorial/online-boutique/authorization-policies/kustomization.yaml</summary>
- `GoogleCloudPlatform/anthos-service-mesh-samples main`
- `GoogleCloudPlatform/anthos-service-mesh-samples main`
</details>
<details><summary>asm-acm-tutorial/online-boutique/deployments/kustomization.yaml</summary>
- `GoogleCloudPlatform/anthos-service-mesh-samples main`
</details>
<details><summary>asm-acm-tutorial/root-sync/deploy-authorization-policies/ingress-gateway/kustomization.yaml</summary>
- `GoogleCloudPlatform/anthos-service-mesh-samples main`
</details>
<details><summary>asm-acm-tutorial/root-sync/deployments/ingress-gateway/kustomization.yaml</summary>
- `GoogleCloudPlatform/anthos-service-mesh-samples main`
</details>
<details><summary>asm-acm-tutorial/root-sync/fix-default-deny-authorization-policy/default-deny-authorization-policy/kustomization.yaml</summary>
- `GoogleCloudPlatform/anthos-service-mesh-samples main`
</details>
<details><summary>asm-acm-tutorial/root-sync/fix-strict-mtls/enable-mesh-strict-mtls/kustomization.yaml</summary>
- `GoogleCloudPlatform/anthos-service-mesh-samples main`
</details>
<details><summary>helm-component/automated-rendering/base/kustomization.yaml</summary>
- `cert-manager v1.9.1`
</details>
</blockquote>
</details>
---
- [ ] <!-- manual job -->Check this box to trigger a request for Renovate to run again on this repository
|
1.0
|
Dependency Dashboard - This issue lists Renovate updates and detected dependencies. Read the [Dependency Dashboard](https://docs.renovatebot.com/key-concepts/dashboard/) docs to learn more.
This repository currently has no open or pending branches.
## Detected dependencies
<details><summary>cloudbuild</summary>
<blockquote>
<details><summary>ci-app/app-repo/cloudbuild.yaml</summary>
- `gcr.io/google.com/cloudsdktool/cloud-sdk no version found`
- `gcr.io/kpt-dev/kpt no version found`
- `gcr.io/kpt-fn/gatekeeper v0.2`
</details>
<details><summary>ci-pipeline-unstructured/cloudbuild.yaml</summary>
- `bash no version found`
- `gcr.io/config-management-release/read-yaml no version found`
- `gcr.io/config-management-release/policy-controller-validate no version found`
</details>
<details><summary>ci-pipeline/cloudbuild.yaml</summary>
- `gcr.io/cloud-builders/kubectl no version found`
- `bash no version found`
- `bash no version found`
- `gcr.io/config-management-release/nomos no version found`
- `gcr.io/config-management-release/read-yaml no version found`
- `gcr.io/config-management-release/policy-controller-validate no version found`
</details>
<details><summary>kustomize-pipeline/build/cloudbuild.yaml</summary>
- `gcr.io/google-samples/cloudbuild-kustomize latest`
- `gcr.io/google-samples/cloudbuild-kustomize latest`
</details>
<details><summary>multi-environments-kustomize/config-source/cloudbuild.yaml</summary>
- `gcr.io/google-samples/cloudbuild-kustomize latest`
- `gcr.io/google-samples/cloudbuild-kustomize latest`
</details>
</blockquote>
</details>
<details><summary>dockerfile</summary>
<blockquote>
<details><summary>multi-environments-kustomize/cloud-build-rendering/cloudbuilder-kustomize/Dockerfile</summary>
- `gcr.io/cloud-builders/kubectl latest`
</details>
</blockquote>
</details>
<details><summary>kustomize</summary>
<blockquote>
<details><summary>asm-acm-tutorial/online-boutique/authorization-policies/kustomization.yaml</summary>
- `GoogleCloudPlatform/anthos-service-mesh-samples main`
- `GoogleCloudPlatform/anthos-service-mesh-samples main`
</details>
<details><summary>asm-acm-tutorial/online-boutique/deployments/kustomization.yaml</summary>
- `GoogleCloudPlatform/anthos-service-mesh-samples main`
</details>
<details><summary>asm-acm-tutorial/root-sync/deploy-authorization-policies/ingress-gateway/kustomization.yaml</summary>
- `GoogleCloudPlatform/anthos-service-mesh-samples main`
</details>
<details><summary>asm-acm-tutorial/root-sync/deployments/ingress-gateway/kustomization.yaml</summary>
- `GoogleCloudPlatform/anthos-service-mesh-samples main`
</details>
<details><summary>asm-acm-tutorial/root-sync/fix-default-deny-authorization-policy/default-deny-authorization-policy/kustomization.yaml</summary>
- `GoogleCloudPlatform/anthos-service-mesh-samples main`
</details>
<details><summary>asm-acm-tutorial/root-sync/fix-strict-mtls/enable-mesh-strict-mtls/kustomization.yaml</summary>
- `GoogleCloudPlatform/anthos-service-mesh-samples main`
</details>
<details><summary>helm-component/automated-rendering/base/kustomization.yaml</summary>
- `cert-manager v1.9.1`
</details>
</blockquote>
</details>
---
- [ ] <!-- manual job -->Check this box to trigger a request for Renovate to run again on this repository
|
process
|
dependency dashboard this issue lists renovate updates and detected dependencies read the docs to learn more this repository currently has no open or pending branches detected dependencies cloudbuild ci app app repo cloudbuild yaml gcr io google com cloudsdktool cloud sdk no version found gcr io kpt dev kpt no version found gcr io kpt fn gatekeeper ci pipeline unstructured cloudbuild yaml bash no version found gcr io config management release read yaml no version found gcr io config management release policy controller validate no version found ci pipeline cloudbuild yaml gcr io cloud builders kubectl no version found bash no version found bash no version found gcr io config management release nomos no version found gcr io config management release read yaml no version found gcr io config management release policy controller validate no version found kustomize pipeline build cloudbuild yaml gcr io google samples cloudbuild kustomize latest gcr io google samples cloudbuild kustomize latest multi environments kustomize config source cloudbuild yaml gcr io google samples cloudbuild kustomize latest gcr io google samples cloudbuild kustomize latest dockerfile multi environments kustomize cloud build rendering cloudbuilder kustomize dockerfile gcr io cloud builders kubectl latest kustomize asm acm tutorial online boutique authorization policies kustomization yaml googlecloudplatform anthos service mesh samples main googlecloudplatform anthos service mesh samples main asm acm tutorial online boutique deployments kustomization yaml googlecloudplatform anthos service mesh samples main asm acm tutorial root sync deploy authorization policies ingress gateway kustomization yaml googlecloudplatform anthos service mesh samples main asm acm tutorial root sync deployments ingress gateway kustomization yaml googlecloudplatform anthos service mesh samples main asm acm tutorial root sync fix default deny authorization policy default deny authorization policy kustomization yaml googlecloudplatform anthos service mesh samples main asm acm tutorial root sync fix strict mtls enable mesh strict mtls kustomization yaml googlecloudplatform anthos service mesh samples main helm component automated rendering base kustomization yaml cert manager check this box to trigger a request for renovate to run again on this repository
| 1
|
16,660
| 21,727,734,295
|
IssuesEvent
|
2022-05-11 09:10:54
|
Open-EO/openeo-processes
|
https://api.github.com/repos/Open-EO/openeo-processes
|
opened
|
predict_class and predict_probabilities
|
new process
|
@JeroenVerstraelen is working on an implementation of CatBoost-based ML in the VITO backend and while discussing details a couple of things came up:
- the `predict_catboost` process would be practically identical to `predict_random_forest`, except for some textual differences in title and descriptions. Turns out that it is not really necessary to define a dedicated `predict_` process for each kind of machine learning model: all the model details are embedded in the `ml-model` object and you could just use a single `predict(data: array, model: ml-model)` for all kinds of ML models.
- for some use cases we want to predict the probability of each class instead of a single class prediction. We first considered adding a parameter to toggle between class output or probabilities output, but that would mean that the output type would change: scalar for class prediction and array for probability prediction. Moreover, the former has to be used in `reduce_dimension` and the other in `apply_dimension`. It felt error-prone and confusing to let these two different patterns depend on a rather inconspicuous boolean parameter. It might be better to have separate processes for class prediction and probabilities prediction.
So with this background, the proposal is to introduce two generic ml prediction processes:
- `predict_class(data: array, model: ml-model) -> number`
- `predict_probabilities(data: array, model: ml-model) -> array`
|
1.0
|
predict_class and predict_probabilities - @JeroenVerstraelen is working on an implementation of CatBoost-based ML in the VITO backend and while discussing details a couple of things came up:
- the `predict_catboost` process would be practically identical to `predict_random_forest`, except for some textual differences in title and descriptions. Turns out that it is not really necessary to define a dedicated `predict_` process for each kind of machine learning model: all the model details are embedded in the `ml-model` object and you could just use a single `predict(data: array, model: ml-model)` for all kinds of ML models.
- for some use cases we want to predict the probability of each class instead of a single class prediction. We first considered adding a parameter to toggle between class output or probabilities output, but that would mean that the output type would change: scalar for class prediction and array for probability prediction. Moreover, the former has to be used in `reduce_dimension` and the other in `apply_dimension`. It felt error prone and confusing to let these two different patterns depend on a rather inconspicuous boolean parameter. It might be better to have a separate processes for class prediction and probabilities prediction
So with this background, the proposal is to introduce two generic ml prediction processes:
- `predict_class(data: array, model: ml-model) -> number`
- `predict_probabilities(data: array, model: ml-model) -> array`
|
process
|
predict class and predict probabilities jeroenverstraelen is working on an implementation of catboost based ml in the vito backend and while discussing details a couple of things came up the predict catboost process would be practically identical to predict random forest except for some textual differences in title and descriptions turns out that it is not really necessary to define a dedicated predict process for each kind of machine learning model all the model details are embedded in the ml model object and you could just use a single predict data array model ml model for all kinds of ml models for some use cases we want to predict the probability of each class instead of a single class prediction we first considered adding a parameter to toggle between class output or probabilities output but that would mean that the output type would change scalar for class prediction and array for probability prediction moreover the former has to be used in reduce dimension and the other in apply dimension it felt error prone and confusing to let these two different patterns depend on a rather inconspicuous boolean parameter it might be better to have separate processes for class prediction and probabilities prediction so with this background the proposal is to introduce two generic ml prediction processes predict class data array model ml model number predict probabilities data array model ml model array
| 1
|
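A minimal Python sketch of the two prediction semantics proposed in the openEO record above; the scikit-learn classifier standing in for the `ml-model` object, the toy arrays, and the helper names are illustrative assumptions, not part of the openEO specification.
```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Toy training data (hypothetical): four samples, three features, two classes.
X_train = np.array([[0.1, 0.2, 0.3], [0.9, 0.8, 0.7],
                    [0.2, 0.1, 0.4], [0.8, 0.9, 0.6]])
y_train = np.array([0, 1, 0, 1])
model = RandomForestClassifier(n_estimators=10, random_state=0).fit(X_train, y_train)

def predict_probabilities(data, model):
    # array -> array: one probability per class, as in the proposed process.
    return model.predict_proba(np.asarray(data).reshape(1, -1))[0]

def predict_class(data, model):
    # array -> number: a single class label, as in the proposed process.
    probs = predict_probabilities(data, model)
    return model.classes_[int(np.argmax(probs))]

sample = [0.15, 0.25, 0.35]
print(predict_probabilities(sample, model))  # per-class array, e.g. [0.9 0.1]
print(predict_class(sample, model))          # single class label, e.g. 0
```
Keeping the two return types distinct is the point of the proposal: the scalar from `predict_class` fits a `reduce_dimension` callback, while the per-class array from `predict_probabilities` fits `apply_dimension`.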
18,113
| 24,146,190,282
|
IssuesEvent
|
2022-09-21 18:58:45
|
w3c/webauthn
|
https://api.github.com/repos/w3c/webauthn
|
opened
|
Bikeshed build failing
|
type:process
|
## Description
On a new Mac, did a fresh install of bikeshed with `pip3 install bikeshed` then tried to build the freshly-pulled main branch:
```
$ bikeshed spec
WARNING: Couldn't determine width and height of this image: images/webauthn-registration-flow-01.svg
cannot identify image file '/Users/sweeden/git/w3c/webauthn/images/webauthn-registration-flow-01.svg'
WARNING: Couldn't determine width and height of this image: images/webauthn-authentication-flow-01.svg
cannot identify image file '/Users/sweeden/git/w3c/webauthn/images/webauthn-authentication-flow-01.svg'
WARNING: Couldn't determine width and height of this image: images/fido-signature-formats-figure1.svg
cannot identify image file '/Users/sweeden/git/w3c/webauthn/images/fido-signature-formats-figure1.svg'
WARNING: Couldn't determine width and height of this image: images/fido-signature-formats-figure2.svg
cannot identify image file '/Users/sweeden/git/w3c/webauthn/images/fido-signature-formats-figure2.svg'
WARNING: Couldn't determine width and height of this image: images/string-truncation.svg
cannot identify image file '/Users/sweeden/git/w3c/webauthn/images/string-truncation.svg'
WARNING: Couldn't determine width and height of this image: images/fido-attestation-structures.svg
cannot identify image file '/Users/sweeden/git/w3c/webauthn/images/fido-attestation-structures.svg'
FATAL ERROR: Obsolete biblio ref: [rfc8152] is replaced by [rfc9053]. Either update the reference, or use [rfc8152 obsolete] if this is an intentionally-obsolete reference.
✘ Did not generate, due to fatal errors
```
|
1.0
|
Bikeshed build failing - ## Description
On a new Mac, did a fresh install of bikeshed with `pip3 install bikeshed` then tried to build the freshly-pulled main branch:
```
$ bikeshed spec
WARNING: Couldn't determine width and height of this image: images/webauthn-registration-flow-01.svg
cannot identify image file '/Users/sweeden/git/w3c/webauthn/images/webauthn-registration-flow-01.svg'
WARNING: Couldn't determine width and height of this image: images/webauthn-authentication-flow-01.svg
cannot identify image file '/Users/sweeden/git/w3c/webauthn/images/webauthn-authentication-flow-01.svg'
WARNING: Couldn't determine width and height of this image: images/fido-signature-formats-figure1.svg
cannot identify image file '/Users/sweeden/git/w3c/webauthn/images/fido-signature-formats-figure1.svg'
WARNING: Couldn't determine width and height of this image: images/fido-signature-formats-figure2.svg
cannot identify image file '/Users/sweeden/git/w3c/webauthn/images/fido-signature-formats-figure2.svg'
WARNING: Couldn't determine width and height of this image: images/string-truncation.svg
cannot identify image file '/Users/sweeden/git/w3c/webauthn/images/string-truncation.svg'
WARNING: Couldn't determine width and height of this image: images/fido-attestation-structures.svg
cannot identify image file '/Users/sweeden/git/w3c/webauthn/images/fido-attestation-structures.svg'
FATAL ERROR: Obsolete biblio ref: [rfc8152] is replaced by [rfc9053]. Either update the reference, or use [rfc8152 obsolete] if this is an intentionally-obsolete reference.
✘ Did not generate, due to fatal errors
```
|
process
|
bikeshed build failing description on a new mac did a fresh install of bikeshed with install bikeshed then tried to build the freshly pulled main branch bikeshed spec warning couldn t determine width and height of this image images webauthn registration flow svg cannot identify image file users sweeden git webauthn images webauthn registration flow svg warning couldn t determine width and height of this image images webauthn authentication flow svg cannot identify image file users sweeden git webauthn images webauthn authentication flow svg warning couldn t determine width and height of this image images fido signature formats svg cannot identify image file users sweeden git webauthn images fido signature formats svg warning couldn t determine width and height of this image images fido signature formats svg cannot identify image file users sweeden git webauthn images fido signature formats svg warning couldn t determine width and height of this image images string truncation svg cannot identify image file users sweeden git webauthn images string truncation svg warning couldn t determine width and height of this image images fido attestation structures svg cannot identify image file users sweeden git webauthn images fido attestation structures svg fatal error obsolete biblio ref is replaced by either update the reference or use if this is an intentionally obsolete reference ✘ did not generate due to fatal errors
| 1
|
463
| 2,902,659,458
|
IssuesEvent
|
2015-06-18 08:37:21
|
haskell-distributed/distributed-process
|
https://api.github.com/repos/haskell-distributed/distributed-process
|
closed
|
Support for large distributed networks
|
distributed-process-azure network-transport network-transport-tcp
|
This was first raised as a question on issue #99. This is a common problem in distributed systems.
Distributed Erlang systems require a fully connected network of nodes, which places considerable strain on system resources. As a professional Erlang programmer, I've seen anecdotal reports of distributed Erlang systems with thousands of nodes, though these apparently start to come unstuck beyond 1000 or so nodes.
In issue #99 @robstewart57 mentioned operating system limits on open TCP connections, and I'd like to discuss that in a bit more detail.
> As I discovered [http://www.haskell.org/pipermail/haskell-cafe/2012-February/099212.html], there are operating system limits on the number of open TCP connections --- by default 1024. If we are thinking at scales of 1,000's of nodes, then connecting every endpoint to every other endpoint is problematic.
As the respondent in that thread points out, the 1024 file descriptor limit is an artefact of the unix `select` system call, which can handle only that many descriptors at a time. Modern non-blocking I/O system calls such as `poll` and `epoll` on linux, `kqueue` on BSD variants and AIO capabilities on other operating systems can support a **much** higher number of open file descriptors in practice.
There **are** still limits however. Forcing all nodes to be fully interconnected carries a significant overhead, especially in a connection oriented transport layer protocol such as TCP.
> There is a case to say that this concern is not one for the programmer to deal with.
I think that's both true and false at the same time. It depends, as the saying goes. Let's address the problem area a little, then we can come back to areas of responsibility.
### Handling Connections Efficiently
> A solution would be to have the transport on each node manage the connections on each of its endpoints. Heuristics might include killing heavyweight connections to remote endpoints that haven't been used within a time limit, killing old connections when new connection requests are made and a connection limit (e.g. ulimit) isn't far from being breached, etc.
The classic way to deal with this is to use a kind of 'heart beat' sensor, that periodically sends a ping to the other end to ensure it's alive, and closes the connection if it doesn't receive a response in a timely fashion. The definition of 'timely' is up for debate and, of course, this might benefit from being configurable.
Erlang certainly **does** support this notion, and the time limit for the heartbeat that determines whether or not a connection should be considered dead and torn down is set using the `net_kernel.tick_time` parameter to the virtual machine.
These however, are the concerns of handling connections efficiently and carefully in a fully connected system. They do not actually address the scalability concerns that you've raised, because if the system is fully operational - i.e., all the nodes stay online and are able to remain connected to one another - there is **still** a limit to how many nodes you can interconnect before you begin to exhaust system limits on each node.
Personally I **do** think that connection management should be configurable and exposed via the *management* APIs, but I do not think it should be magic. We should simply pick some sensible defaults.
Another issue to be aware of, with all this, is that Cloud Haskell deliberately makes the programmer handle `reconnect` explicitly. This is a **good thing** because, unlike Erlang, it forces the programmer to be aware that message ordering and/or delivery guarantees might not hold after a reconnection has occurred.
There is, BTW, some open issues to look at this in detail:
* https://github.com/haskell-distributed/distributed-process/issues/66
* https://github.com/haskell-distributed/distributed-process/issues/32
### Scalable Distributed Computing
Ha ha - the title says it all! This is a **massive** area of distributed systems research and we'd do well to absorb as much of that as makes sense before pushing forward with any particular implementation.
One fairly common solution to the problem of needing fully connected networks is to create independent federated clusters. In this architecture, you have several *clusters* of nodes, which are fully inter-connected (like a *normal* CH system). These clusters are then connected to one another via a quorum of members, which act as intermediaries.
> I'm not sure whether this is a parameter set when creating transports or endpoints, but such connection management should probably not be a concern of the programmer.. (?)
Connection management is very much the concern of the programmer, but we should not make it a barrier to entry. You should
* be able to build a scalable, high performance distributed system on CH using the (sensible) defaults
* be able to configure connection management (such as heartbeat timeouts) if you wish
* be able to manage the network layer at runtime, once your application is in production
* be able to choose the right strategy for your own needs, where this is appropriate [*]
On that last point, the example I gave of federated clusters is a good one. We should not be doing something like that automatically IMO, but if that kind of capability *does* exist then we should make it easy and transparent for the application developer to take advantage of it. From the system architects point of view however, this is almost certainly something that should be turned on or off explicitly.
Finally, I do need to point out that this is way off our radar right now. That doesn't mean it's unimportant, but the attention it receives will increase over time - I see the massive scalability support as low (ish) priority, versus the connection management which is of medium importance - less so than, for example, being able to send messages efficiently to local processes, but perhaps more so than providing some gorgeous web UI to do cluster/grid administration.
|
1.0
|
Support for large distributed networks - This was first raised as a question on issue #99. This is a common problem in distributed systems.
Distributed Erlang systems require a fully connected network of nodes, which places considerable strain on system resources. As a professional Erlang programmer, I've seen anecdotal reports of distributed Erlang systems with thousands of nodes, though these apparently start to come unstuck beyond 1000 or so nodes.
In issue #99 @robstewart57 mentioned operating system limits on open TCP connections, and I'd like to discuss that in a bit more detail.
> As I discovered [http://www.haskell.org/pipermail/haskell-cafe/2012-February/099212.html], there are operating system limits on the number of open TCP connections --- by default 1024. If we are thinking at scales of 1,000's of nodes, then connecting every endpoint to every other endpoint is problematic.
As the respondent in that thread points out, the 1024 file descriptor limit is an artefact of the unix `select` system call, which can handle only that many descriptors at a time. Modern non-blocking I/O system calls such as `poll` and `epoll` on linux, `kqueue` on BSD variants and AIO capabilities on other operating systems can support a **much** higher number of open file descriptors in practice.
There **are** still limits however. Forcing all nodes to be fully interconnected carries a significant overhead, especially in a connection oriented transport layer protocol such as TCP.
> There is a case to say that this concern is not one for the programmer to deal with.
I think that's both true and false at the same time. It depends, as the saying goes. Let's address the problem area a little, then we can come back to areas of responsibility.
### Handling Connections Efficiently
> A solution would be to have the transport on each node manage the connections on each of its endpoints. Heuristics might include killing heavyweight connections to remote endpoints that haven't been used within a time limit, killing old connections when new connection requests are made and a connection limit (e.g. ulimit) isn't far from being breached, etc.
The classic way to deal with this is to use a kind of 'heart beat' sensor, that periodically sends a ping to the other end to ensure it's alive, and closes the connection if it doesn't receive a response in a timely fashion. The definition of 'timely' is up for debate and, of course, this might benefit from being configurable.
Erlang certainly **does** support this notion, and the time limit for the heartbeat that determines whether or not a connection should be considered dead and torn down is set using the `net_kernel.tick_time` parameter to the virtual machine.
These however, are the concerns of handling connections efficiently and carefully in a fully connected system. They do not actually address the scalability concerns that you've raised, because if the system is fully operational - i.e., all the nodes stay online and are able to remain connected to one another - there is **still** a limit to how many nodes you can interconnect before you begin to exhaust system limits on each node.
Personally I **do** think that connection management should be configurable and exposed via the *management* APIs, but I do not think it should be magic. We should simply pick some sensible defaults.
Another issue to be aware of, with all this, is that Cloud Haskell deliberately makes the programmer handle `reconnect` explicitly. This is a **good thing** because, unlike Erlang, it forces the programmer to be aware that message ordering and/or delivery guarantees might not hold after a reconnection has occurred.
There is, BTW, some open issues to look at this in detail:
* https://github.com/haskell-distributed/distributed-process/issues/66
* https://github.com/haskell-distributed/distributed-process/issues/32
### Scalable Distributed Computing
Ha ha - the title says it all! This is a **massive** area of distributed systems research and we'd do well to absorb as much of that as makes sense before pushing forward with any particular implementation.
One fairly common solution to the problem of needing fully connected networks is to create independent federated clusters. In this architecture, you have several *clusters* of nodes, which are fully inter-connected (like a *normal* CH system). These clusters are then connected to one another via a quorum of members, which act as intermediaries.
> I'm not sure whether this is a parameter set when creating transports or endpoints, but such connection management should probably not be a concern of the programmer.. (?)
Connection management is very much the concern of the programmer, but we should not make it a barrier to entry. You should
* be able to build a scalable, high performance distributed system on CH using the (sensible) defaults
* be able to configure connection management (such as heartbeat timeouts) if you wish
* be able to manage the network layer at runtime, once your application is in production
* be able to choose the right strategy for your own needs, where this is appropriate [*]
On that last point, the example I gave of federated clusters is a good one. We should not be doing something like that automatically IMO, but if that kind of capability *does* exist then we should make it easy and transparent for the application developer to take advantage of it. From the system architects point of view however, this is almost certainly something that should be turned on or off explicitly.
Finally, I do need to point out that this is way off our radar right now. That doesn't mean it's unimportant, but the attention it receives will increase over time - I see the massive scalability support as low (ish) priority, versus the connection management which is of medium importance - less so than, for example, being able to send messages efficiently to local processes, but perhaps more so than providing some gorgeous web UI to do cluster/grid administration.
|
process
|
support for large distributed networks this was first raised as a question on issue this is a common problem in distributed systems distributed erlang systems require a fully connected network of nodes which places considerable strain on system resources as a professional erlang programmer i ve seen anecdotal reports of distributed erlang systems with thousands of nodes though these apparently start to come unstuck beyond or so nodes in issue mentioned operating system limits on open tcp connections and i d like to discuss that in a bit more detail as i discovered there are operating system limits on the number of open tcp connections by default if we are thinking at scales of s of nodes then connecting every endpoint to every other endpoint is problematic as the respondent in that thread points out the file descriptor limit is an artefact of the unix select system call which can handle only that many descriptors at a time modern non blocking i o system calls such as poll and epoll on linux kqueue on bsd variants and aio capabilities on other operating systems can support a much higher number of open file descriptors in practice there are still limits however forcing all nodes to be fully interconnected carries a significant overhead especially in a connection oriented transport layer protocol such as tcp there is a case to say that this concern is not one for the programmer to deal with i think that s both true and false at the same time it depends as the saying goes let s address the problem area a little then we can come back to areas of responsibility handling connections efficiently a solution would be to have the transport on each node manage the connections on each of its endpoints heuristics might include killing heavyweight connections to remote endpoints that haven t been used in a time limit killing old connections when new connection requests are made and a connection limit e g ulimit isn t far from being breached etc the classic way to deal with this is to use a kind of heart beat sensor that periodically sends a ping to the other end to ensure it s alive and closes the connection if it doesn t receive a response in a timely fashion the definition of timely is up for debate and of course this might benefit from being configurable erlang certainly does support this notion and the time limit for the heartbeat that determines whether or not a connection should be considered dead and torn down is set using the net kernel tick time parameter to the virtual machine these however are the concerns of handling connections efficiently and carefully in a fully connected system they do not actually address the scalability concerns that you ve raised because if the system is fully operational i e all the nodes stay online and are able to remain connected to one another there is still a limit to how many nodes you can interconnect before you begin to exhaust system limits on each node personally i do think that connection management should be configurable and exposed via the management apis but i do not think it should be magic we should simply pick some sensible defaults another issue to be aware of with all this is that cloud haskell deliberately makes the programmer handle reconnect explicitly this is a good thing because unlike erlang it forces the programmer to be aware that message ordering and or delivery guarantees might not hold after a reconnection has occurred there is btw some open issues to look at this in detail scalable distributed computing ha ha the title says it all this is a massive area of distributed systems research and we d do well to absorb as much of that as makes sense before pushing forward with any particular implementation one fairly common solution to the problem of needing fully connected networks is to create independent federated clusters in this architecture you have several clusters of nodes which are fully inter connected like a normal ch system these clusters are then connected to one another via a quorum of members which act as intermediaries i m not sure whether this is a parameter set when creating transports or endpoints but such connection management should probably not be a concern of the programmer connection management is very much the concern of the programmer but we should not make it a barrier to entry you should be able to build a scalable high performance distributed system on ch using the sensible defaults be able to configure connection management such as heartbeat timeouts if you wish be able to manage the network layer at runtime once your application is in production be able to choose the right strategy for your own needs where this is appropriate on that last point the example i gave of federated clusters is a good one we should not be doing something like that automatically imo but if that kind of capability does exist then we should make it easy and transparent for the application developer to take advantage of it from the system architects point of view however this is almost certainly something that should be turned on or off explicitly finally i do need to point out that this is way off our radar right now that doesn t mean it s unimportant but the attention it receives will increase over time i see the massive scalability support as low ish priority versus the connection management which is of medium importance less so than for example being able to send messages efficiently to local processes but perhaps more so than providing some gorgeous web ui to do cluster grid administration
| 1
|
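The heartbeat mechanism discussed in the record above can be made concrete in a few lines. This is an illustrative asyncio sketch in Python, not Cloud Haskell or network-transport code; the PING/PONG framing, the `tick_time` default, and the `on_dead` callback are all assumptions.
```python
import asyncio

async def monitor_connection(reader, writer, tick_time=5.0, on_dead=None):
    """Ping the peer every tick_time seconds; tear the connection down when
    a PONG does not arrive in time, much like Erlang's net_kernel tick."""
    try:
        while True:
            writer.write(b"PING\n")
            await writer.drain()
            try:
                reply = await asyncio.wait_for(reader.readline(), timeout=tick_time)
            except asyncio.TimeoutError:
                reply = b""
            if reply.strip() != b"PONG":
                break  # dead or misbehaving peer: stop monitoring it
            await asyncio.sleep(tick_time)
    finally:
        writer.close()
        await writer.wait_closed()
        if on_dead is not None:
            on_dead()  # leave any reconnect decision to the caller, explicitly
```
Exposing `tick_time` as a parameter with a sensible default mirrors the configuration stance argued for in the issue: workable out of the box, tunable when needed.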
180,712
| 14,792,216,406
|
IssuesEvent
|
2021-01-12 14:29:37
|
CNRS/Pangloss
|
https://api.github.com/repos/CNRS/Pangloss
|
closed
|
Citing a corpus?
|
FAQ documentation métadonnées
|
User feedback:
"We used to have a "cite the corpora" button, but it is no longer available. I saw that Alex François added the citation manually at the end of his mwotlap corpus on his personal website, for example. Shouldn't we generate it automatically? This could be done for the whole corpus (e.g. nashta), and not just for individual texts and sentences."
|
1.0
|
Citing a corpus? - User feedback:
"We used to have a "cite the corpora" button, but it is no longer available. I saw that Alex François added the citation manually at the end of his mwotlap corpus on his personal website, for example. Shouldn't we generate it automatically? This could be done for the whole corpus (e.g. nashta), and not just for individual texts and sentences."
|
non_process
|
citing a corpus user feedback we used to have a cite the corpora button but it is no longer available i saw that alex françois added the citation manually at the end of his mwotlap corpus on his personal website for example shouldn t we generate it automatically this could be done for the whole corpus e g nashta and not just for individual texts and sentences
| 0
|
13,245
| 15,715,609,676
|
IssuesEvent
|
2021-03-28 02:19:11
|
tdwg/chrono
|
https://api.github.com/repos/tdwg/chrono
|
reopened
|
Change term - materialDatedID
|
Process - under public review Term - change
|
## Change term
* Submitter: John Wieczorek (from review comment https://github.com/tdwg/chrono/issues/15#issuecomment-732249855)
* Justification (why is this change necessary?): Example is misleading
* Proponents (who needs this change): Everyone
Proposed new attributes of the term:
* Organized in Class: ChronometricAge
* Term name (in lowerCamelCase): materialDatedID
* Examples: Change `dwc:occurrenceID: 702b306d-f167-44d0-a5c9-890ece2b8839` to `dwc:materialSampleID:
https://www.ebi.ac.uk/metagenomics/samples/SRS1930158`
|
1.0
|
Change term - materialDatedID - ## Change term
* Submitter: John Wieczorek (from review comment https://github.com/tdwg/chrono/issues/15#issuecomment-732249855)
* Justification (why is this change necessary?): Example is misleading
* Proponents (who needs this change): Everyone
Proposed new attributes of the term:
* Organized in Class: ChronometricAge
* Term name (in lowerCamelCase): materialDatedID
* Examples: Change `dwc:occurrenceID: 702b306d-f167-44d0-a5c9-890ece2b8839` to `dwc:materialSampleID:
https://www.ebi.ac.uk/metagenomics/samples/SRS1930158`
|
process
|
change term materialdatedid change term submitter john wieczorek from review comment justification why is this change necessary example is misleading proponents who needs this change everyone proposed new attributes of the term organized in class chronometricage term name in lowercamelcase materialdatedid examples change dwc occurrenceid to dwc materialsampleid
| 1
|
14,995
| 18,675,323,066
|
IssuesEvent
|
2021-10-31 13:10:22
|
metabase/metabase
|
https://api.github.com/repos/metabase/metabase
|
opened
|
Query executions hitting the cache are not recorded
|
Type:Bug Priority:P2 Querying/Processor .Correctness .Backend .Regression Administration/Audit
|
**Describe the bug**
Query executions hitting the cache are not recorded
Regression since 0.40.0, likely caused by #16220
P1 contender.
**To Reproduce**
1. Admin > Settings > Caching > Duration=0.001, TTL=1000000
2. Simple question > Sample Dataset > Orders - save as "Q1"
3. (reload the browser until it's hitting the cache, issue 13262)
4. Then reload the browser again, while it's hitting the cache
5. View `query_execution` (or Admin > Audit > Audit log), it does not have any records of anyone viewing the cached results.
There is however a record in `view_log`, though this does not record what was viewed, nor if viewing from a dashboard.
**Expected behavior**
Queries hitting the cache should always be recorded and should be marked as `cache_hit=true` (column introduced in 0.40.0)
**Information about your Metabase Installation:**
Tested 0.39.5 thru 0.41.1 - regression since 0.40.0
|
1.0
|
Query executions hitting the cache are not recorded - **Describe the bug**
Query executions hitting the cache are not recorded
Regression since 0.40.0, likely caused by #16220
P1 contender.
**To Reproduce**
1. Admin > Settings > Caching > Duration=0.001, TTL=1000000
2. Simple question > Sample Dataset > Orders - save as "Q1"
3. (reload the browser until it's hitting the cache, issue 13262)
4. Then reload the browser again, while it's hitting the cache
5. View `query_execution` (or Admin > Audit > Audit log), it does not have any records of anyone viewing the cached results.
There is however a record in `view_log`, though this does not record what was viewed, nor if viewing from a dashboard.
**Expected behavior**
Queries hitting the cache should always be recorded and should be marked as `cache_hit=true` (column introduced in 0.40.0)
**Information about your Metabase Installation:**
Tested 0.39.5 thru 0.41.1 - regression since 0.40.0
|
process
|
query executions hitting the cache are not recorded describe the bug query executions hitting the cache are not recorded regression since likely caused by contender to reproduce admin settings caching duration ttl simple question sample dataset orders save as reload the browser until it s hitting the cache issue then reload the browser again while it s hitting the cache view query execution or admin audit audit log it does not have any records of anyone viewing the cached results there is however a record in view log though this does not record what was viewed nor if viewing from a dashboard expected behavior queries hitting the cache should always be recorded and should be marked as cache hit true column introduced in information about your metabase installation tested thru regression since
| 1
|
830,747
| 32,022,637,005
|
IssuesEvent
|
2023-09-22 06:24:35
|
McBaws/comp
|
https://api.github.com/repos/McBaws/comp
|
opened
|
Groupname Selection
|
bug enhancement high priority
|
Add a case where, if two files have the same groupname, the whole filename is used instead throughout the script; otherwise this could cause undesired behaviour.
Also, add a feature where, if they have the same groupname but different versions (i.e. "[Vodes] Youjo Senki - S01E01 v2.mkv" vs "[Vodes] Youjo Senki - S01E01 v3.mkv" (no shade btw)), the version is put into the groupname. This helps with fileinfo and slow.pics comp selection.
|
1.0
|
Groupname Selection - Add a case where, if two files have the same groupname, the whole filename is used instead throughout the script; otherwise this could cause undesired behaviour.
Also, add a feature where, if they have the same groupname but different versions (i.e. "[Vodes] Youjo Senki - S01E01 v2.mkv" vs "[Vodes] Youjo Senki - S01E01 v3.mkv" (no shade btw)), the version is put into the groupname. This helps with fileinfo and slow.pics comp selection.
|
non_process
|
groupname selection add a case where if two files have the same groupname the whole filename is used instead throughout the script otherwise this could cause undesired behaviour also add a feature where if they have the same groupname but different versions ie youjo senki mkv vs youjo senki mkv no shade btw the version is put into the groupname this helps with fileinfo and slow pics comp selection
| 0
|
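A rough Python sketch of the disambiguation rule described in the record above; the bracketed-group and `vN` filename conventions are inferred from the issue's example names, and the function names are made up for illustration.
```python
import re
from pathlib import Path

def group_of(path):
    # "[Vodes] Youjo Senki - S01E01 v2.mkv" -> "Vodes"; fall back to the stem.
    m = re.match(r"\[([^\]]+)\]", Path(path).stem)
    return m.group(1) if m else Path(path).stem

def labels_for(paths):
    """Same groupname twice: append the version tag when one is present,
    otherwise fall back to the whole filename."""
    labels = []
    for p in paths:
        g = group_of(p)
        clash = any(group_of(q) == g for q in paths if q != p)
        if clash:
            v = re.search(r"\bv(\d+)\b", Path(p).stem)
            labels.append(f"{g} v{v.group(1)}" if v else Path(p).stem)
        else:
            labels.append(g)
    return labels

print(labels_for(["[Vodes] Youjo Senki - S01E01 v2.mkv",
                  "[Vodes] Youjo Senki - S01E01 v3.mkv"]))
# -> ['Vodes v2', 'Vodes v3']
```
Appending the version only on a clash keeps labels short in the common case, which fits the fileinfo and slow.pics comparison use described in the record.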
147,609
| 13,211,185,284
|
IssuesEvent
|
2020-08-15 21:21:14
|
manga-download/hakuneko
|
https://api.github.com/repos/manga-download/hakuneko
|
opened
|
Website Release https://hakuneko.download
|
Documentation Enhancement
|
To officially open the new website: https://hakuneko.download
**Todo:**
- [X] Move the user documentation to the website (but leave the developer documentation on github)
Github
- [ ] Put a link in the wiki pointing to the user documentation
- [ ] Put a link to the website on the main Readme of the github (improve SEO)
- [ ] Put a link on the top right of the github page (improve SEO), in place of the discord link( and put the discord link in the description?)
- [ ] Reorder the issue templates so that connector is the first one
**Sourceforge**
- [ ] Put a link on the sourceforge page
- [ ] Make it clearer on sourceforge that this version is obsolete (still lots of downloads)
**App modification**
- [ ] Remove the google form and replace with a link to github issue
- [ ] Change the documentation link
**Google**
- [ ] Register on google search
|
1.0
|
Website Release https://hakuneko.download - To officially open the new website: https://hakuneko.download
**Todo:**
- [X] Move the user documentation to the website (but leave the developer documentation on github)
Github
- [ ] Put a link in the wiki pointing to the user documentation
- [ ] Put a link to the website on the main Readme of the github (improve SEO)
- [ ] Put a link on the top right of the github page (improve SEO), in place of the discord link( and put the discord link in the description?)
- [ ] Reorder the issue templates so that connector is the first one
**Sourceforge**
- [ ] Put a link on the sourceforge page
- [ ] Make it clearer on sourceforge that this version is obsolete (still lots of downloads)
**App modification**
- [ ] Remove the google form and replace with a link to github issue
- [ ] Change the documentation link
**Google**
- [ ] Register on google search
|
non_process
|
website release to officially open the new website todo move the user documentation to the website but leave the developer documentation on github github put a link in the wiki pointing to the user documentation put a link to the website on the main readme of the github improve seo put a link on the top right of the github page improve seo in place of the discord link and put the discord link in the description reorder the issue templates so that connector is the first one sourceforge put a link on the sourceforge page make it clearer on sourceforge that this version is obsolete still lots of downloads app modification remove the google form and replace with a link to github issue change the documentation link google register on google search
| 0
|
407,336
| 11,912,199,634
|
IssuesEvent
|
2020-03-31 09:52:17
|
JEvents/JEvents
|
https://api.github.com/repos/JEvents/JEvents
|
closed
|
uikit: No save when creating a new calendar
|
Priority - High
|
As stated, however, if you edit an existing calendar there is a save & close button along with the cancel.
|
1.0
|
uikit: No save when creating a new calendar - As stated, however, if you edit an existing calendar there is a save & close button along with the cancel.
|
non_process
|
uikit no save when creating a new calendar as stated however if you edit an existing calendar there is a save close button along with the cancel
| 0
|
20,765
| 27,496,691,921
|
IssuesEvent
|
2023-03-05 08:03:03
|
helmfile/helmfile
|
https://api.github.com/repos/helmfile/helmfile
|
closed
|
--state-values-set does not work properly in gotmpl value files
|
bug in process
|
### Operating system
MacOS 12.1
### Helmfile Version
0.150.0
### Helm Version
v3.9.4
### Bug description
See the linked github repo's README file
### Example helmfile.yaml
See the linked github repo's README file.
### Error message you've seen (if any)
None
### Steps to reproduce
https://github.com/vladimir-avinkin/helmfile-bug-repro
### Working Helmfile Version
Haven't found one
### Relevant discussion
_No response_
|
1.0
|
--state-values-set does not work properly in gotmpl value files - ### Operating system
MacOS 12.1
### Helmfile Version
0.150.0
### Helm Version
v3.9.4
### Bug description
See the linked github repo's README file
### Example helmfile.yaml
See the linked github repo's README file.
### Error message you've seen (if any)
None
### Steps to reproduce
https://github.com/vladimir-avinkin/helmfile-bug-repro
### Working Helmfile Version
Haven't found one
### Relevant discussion
_No response_
|
process
|
state values set does not work properly in gotmpl value files operating system macos helmfile version helm version bug description see the linked github repo s readme file example helmfile yaml see the linked github repo s readme file error message you ve seen if any none steps to reproduce working helmfile version haven t found one relevant discussion no response
| 1
|
395,201
| 27,059,487,860
|
IssuesEvent
|
2023-02-13 18:34:17
|
FlorianJourde/OpenClassrooms-7-Create-a-web-service-exposing-an-API
|
https://api.github.com/repos/FlorianJourde/OpenClassrooms-7-Create-a-web-service-exposing-an-API
|
closed
|
composer.json
|
documentation
|
In your composer file, your PHP version is set to 7.2.5. In your readme you mention 7.4. You can either pin your PHP version to 7.4 in your composer file, or state that your site works from 7.2.5 onwards. I lean towards the first option :D
|
1.0
|
composer.json - In your composer file, your PHP version is set to 7.2.5. In your readme you mention 7.4. You can either pin your PHP version to 7.4 in your composer file, or state that your site works from 7.2.5 onwards. I lean towards the first option :D
|
non_process
|
composer json in your composer file your php version is set to in your readme you mention you can either pin your php version to in your composer file or state that your site works from onwards i lean towards the first option d
| 0
|