Unnamed: 0 (int64, 0 to 832k)
| id (float64, 2.49B to 32.1B)
| type (stringclasses, 1 value)
| created_at (stringlengths, 19 to 19)
| repo (stringlengths, 7 to 112)
| repo_url (stringlengths, 36 to 141)
| action (stringclasses, 3 values)
| title (stringlengths, 1 to 744)
| labels (stringlengths, 4 to 574)
| body (stringlengths, 9 to 211k)
| index (stringclasses, 10 values)
| text_combine (stringlengths, 96 to 211k)
| label (stringclasses, 2 values)
| text (stringlengths, 96 to 188k)
| binary_label (int64, 0 to 1)
|
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
10,853
| 13,629,487,298
|
IssuesEvent
|
2020-09-24 15:10:49
|
timberio/vector
|
https://api.github.com/repos/timberio/vector
|
closed
|
Support boolean variations for the remap `bool` function
|
domain: mapping domain: processing type: enhancement
|
This might already be the case, but the Remap `bool` function should support the following inputs:
- [x] `"true"` => `true`
- [x] `"false"` => `false`
- [x] `"t"` => `true`
- [x] `"f"` => `false`
- [x] `"yes"` => `true`
- [x] `"no"` => `false`
- [x] `"y"` => `true`
- [x] `"n"` => `false`
- [x] `0` => `false`
- [x] `1` => `true`
- [x] `"0"` => `false`
- [ ] `"1"` => `true`
- [x] Case-insensitive for all of the above
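
A minimal sketch of the requested coercion table, assuming a Python-style helper (the name `to_bool` and the sets below are illustrative, not Vector's actual Remap implementation):

```python
# Hypothetical coercion table mirroring the checklist above.
TRUE_VALUES = {"true", "t", "yes", "y", "1", 1}
FALSE_VALUES = {"false", "f", "no", "n", "0", 0}

def to_bool(value):
    """Coerce the listed string/integer variants to a boolean, case-insensitively."""
    key = value.strip().lower() if isinstance(value, str) else value
    if key in TRUE_VALUES:
        return True
    if key in FALSE_VALUES:
        return False
    raise ValueError(f"cannot coerce {value!r} to bool")
```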
|
1.0
|
Support boolean variations for the remap `bool` function - This might already be the case, but the Remap `bool` function should support the following inputs:
- [x] `"true"` => `true`
- [x] `"false"` => `false`
- [x] `"t"` => `true`
- [x] `"f"` => `false`
- [x] `"yes"` => `true`
- [x] `"no"` => `false`
- [x] `"y"` => `true`
- [x] `"n"` => `false`
- [x] `0` => `false`
- [x] `1` => `true`
- [x] `"0"` => `false`
- [ ] `"1"` => `true`
- [x] Case-insensitive for all of the above
|
process
|
support boolean variations for the remap bool function this might already be the case but the remap bool function should support the following inputs true true false false t true f false yes true no false y true n false false true false true case insensitive for all of the above
| 1
|
451,898
| 32,044,729,894
|
IssuesEvent
|
2023-09-22 23:34:09
|
project-copacetic/copacetic
|
https://api.github.com/repos/project-copacetic/copacetic
|
opened
|
[DOC] add search to website
|
documentation
|
### What kind of documentation improvement is needed?
Other
### What is the change that is needed?
The website needs search integration set up through Algolia.
|
1.0
|
[DOC] add search to website - ### What kind of documentation improvement is needed?
Other
### What is the change that is needed?
The website needs search integration set up through Algolia.
|
non_process
|
add search to website what kind of documentation improvement is needed other what is the change that is needed website needs search integration setup through algolia
| 0
|
2,815
| 5,748,468,638
|
IssuesEvent
|
2017-04-25 00:51:24
|
allinurl/goaccess
|
https://api.github.com/repos/allinurl/goaccess
|
closed
|
How to parse multiple log sources using log-file config option?
|
duplicate log-processing question
|
Hi guys,
Is there any way to provide multiple log sources to goaccess with a log-file parameter which could look like: ``` log-file /var/log/*/*/*.log```?
- - -
The idea behind this is to build a common directory for all my logs from host and containers within one directory, like
/my/var/log
In this directory I could bind volumes like this:
```docker run -v /my/var/log/myapp/var/log:/var/log [...]```
```docker run -v /my/var/log/myotherapp/var/log:/var/log [...]```
```docker run -v /my/var/log/anapp/var/log:/var/log [...]```
+
```mkdir -p /my/var/log/host/var/log``` + configure the app to redirect logs to that specific dir
Once I have done that, I can bind /my/var/log/ to the goaccess container, define log-file /var/log/*/*/*.log, and the process will take care of parsing all logs, defining the proper log-format, and generating report(s).
I'm not sure it's meant to be used like that, but that would be so awesome.
What do you think?
|
1.0
|
How to parse multiple log sources using log-file config option? - Hi guys,
Is there any way to provide multiple log sources to goaccess with a log-file parameter which could look like: ``` log-file /var/log/*/*/*.log```?
- - -
The idea behind this is to build a common directory for all my logs from host and containers within one directory, like
/my/var/log
In this directory I could bind volumes like this:
```docker run -v /my/var/log/myapp/var/log:/var/log [...]```
```docker run -v /my/var/log/myotherapp/var/log:/var/log [...]```
```docker run -v /my/var/log/anapp/var/log:/var/log [...]```
+
```mkdir -p /my/var/log/host/var/log``` + configure the app to redirect logs to that specific dir
Once I have done that, I can bind /my/var/log/ to the goaccess container, define log-file /var/log/*/*/*.log, and the process will take care of parsing all logs, defining the proper log-format, and generating report(s).
I'm not sure it's meant to be used like that, but that would be so awesome.
What do you think?
|
process
|
how to parse multiple log sources using log file config option hi guys is there anyway to provide multiple log sources to goaccess with a log file parameter which could looks like log file var log log the idea behind is to build a common directory to all my logs from host and container within directory like my var log in this directory i coul bind volumes like that docker run v my var log myapp var log var log docker run v my var log myotherapp var log var log docker run v my var log anapp var log var log mkdir p my var log host var log configure app to redirect logs to that specific dir once i have done that i can bind my var log to goaccess container define log file var log log and the process will take care of parsing all logs define the proper log format and generate report s i m not sure it meant to be like that but that will be so awesome what do you think
| 1
|
508,131
| 14,690,351,094
|
IssuesEvent
|
2021-01-02 14:50:26
|
iv-org/invidious
|
https://api.github.com/repos/iv-org/invidious
|
reopened
|
English words when using other languages
|
bug priority:high stale type:frontend
|
When using any language other than English, some English words are still used.
Some places where I noticed it (there may be more):
https://i.imgur.com/vtJfgWa.jpg
https://i.imgur.com/sOeIk7z.jpg
https://i.imgur.com/CjEbAhJ.png
|
1.0
|
English words when using other languages - When using any language other than English, some English words are still used.
Some places where I noticed it (there may be more):
https://i.imgur.com/vtJfgWa.jpg
https://i.imgur.com/sOeIk7z.jpg
https://i.imgur.com/CjEbAhJ.png
|
non_process
|
english words when using other languages when using any other language than english some english words are still used some place where i noticed it there s maybe more
| 0
|
183,555
| 14,237,433,792
|
IssuesEvent
|
2020-11-18 17:12:49
|
rakudo/rakudo
|
https://api.github.com/repos/rakudo/rakudo
|
closed
|
Regression? Sequence of junctions wrapped in singleton lists
|
tests needed
|
## The Problem
See https://www.nntp.perl.org/group/perl.perl6.users/2020/11/msg9338.html
"This seems to happen for any kind of junction, not just or-junctions."
## Expected Behavior
Same as:
```
say $*PERL.version; # v6.d
say $*PERL.compiler.version; # v2018.12
say ({ 1 | -1 } ... *)[^3]; # (any(1, -1) any(1, -1) any(1, -1))
```
## Actual Behavior
```
say $*PERL.version; # v6.d
say $*PERL.compiler.version; # v2020.07
say ({ 1 | -1 } ... *)[^3]; # ((any(1, -1)) (any(1, -1)) (any(1, -1)))
```
|
1.0
|
Regression? Sequence of junctions wrapped in singleton lists - ## The Problem
See https://www.nntp.perl.org/group/perl.perl6.users/2020/11/msg9338.html
"This seems to happen for any kind of junction, not just or-junctions."
## Expected Behavior
Same as:
```
say $*PERL.version; # v6.d
say $*PERL.compiler.version; # v2018.12
say ({ 1 | -1 } ... *)[^3]; # (any(1, -1) any(1, -1) any(1, -1))
```
## Actual Behavior
```
say $*PERL.version; # v6.d
say $*PERL.compiler.version; # v2020.07
say ({ 1 | -1 } ... *)[^3]; # ((any(1, -1)) (any(1, -1)) (any(1, -1)))
```
|
non_process
|
regression sequence of junctions wrapped in singleton lists the problem see this seems to happen for any kind of junction not just or junctions expected behavior same as say perl version d say perl compiler version say any any any actual behavior say perl version d say perl compiler version say any any any
| 0
|
61,095
| 12,145,941,944
|
IssuesEvent
|
2020-04-24 10:12:30
|
strangerstudios/pmpro-register-helper
|
https://api.github.com/repos/strangerstudios/pmpro-register-helper
|
closed
|
Some field types don't respect the class attribute
|
Difficulty: Easy Impact: Low Status: Needs Code
|
Radio buttons, grouped checkboxes, and hidden fields should add a class attribute to the main HTML element in the getHTML method.
https://github.com/strangerstudios/pmpro-register-helper/blob/dev/classes/class.field.php#L403
A workaround is to use the divclass property, which adds the class to the wrapping div.
|
1.0
|
Some field types don't respect the class attribute - Radio buttons, grouped checkboxes, and hidden fields should add a class attribute to the main HTML element in the getHTML method.
https://github.com/strangerstudios/pmpro-register-helper/blob/dev/classes/class.field.php#L403
A workaround is to use the divclass property, which adds the class to the wrapping div.
|
non_process
|
some field types don t respect the class attribute radio buttons grouped check boxes and hidden fields should add a class attribute to the main html element in the gethtml method a work around is to use the divclass property which adds the class to the wrapping div
| 0
|
125,387
| 12,259,417,315
|
IssuesEvent
|
2020-05-06 16:34:13
|
netlify/build
|
https://api.github.com/repos/netlify/build
|
opened
|
Better documentation of `utils`
|
documentation type: feature
|
We should document (or link to the documentation) the officially supported `utils` in the `README`.
|
1.0
|
Better documentation of `utils` - We should document (or link to the documentation) the officially supported `utils` in the `README`.
|
non_process
|
better documentation of utils we should document or link to the documentation the officially supported utils in the readme
| 0
|
314,268
| 23,513,233,851
|
IssuesEvent
|
2022-08-18 18:42:30
|
slsa-framework/slsa-github-generator
|
https://api.github.com/repos/slsa-framework/slsa-github-generator
|
opened
|
[doc] Add examples for incorporating a generated SBOM in the generic provenance
|
type:documentation workflow:generic
|
SBOMs are one artifact that a build system may output, in addition to other outputs such as binaries, tarballs, etc.
We should document this in the doc https://github.com/slsa-framework/slsa-github-generator/tree/main/internal/builders/generic
@lumjjb Let's work on this together
|
1.0
|
[doc] Add examples for incorporating a generated SBOM in the generic provenance - SBOMs are one artifact that a build system may output, in addition to other outputs such as binaries, tarballs, etc.
We should document this in the doc https://github.com/slsa-framework/slsa-github-generator/tree/main/internal/builders/generic
@lumjjb Let's work on this together
|
non_process
|
add examples for incorporating a generated sbom in the generic provenance sboms are one artifact that a build system may output in addition to other binaries tarballs etc we should document this in the doc lumjjb let s work on this together
| 0
|
2,471
| 5,245,818,773
|
IssuesEvent
|
2017-02-01 06:47:39
|
AllenFang/react-bootstrap-table
|
https://api.github.com/repos/AllenFang/react-bootstrap-table
|
closed
|
dataFormat with div no longer allows for selection
|
bug inprocess
|
In version 2.9.2, due to the following change:
https://github.com/AllenFang/react-bootstrap-table/commit/eff49cfd90ba7ebbd343c2da921a041c5e310853
my dataFormat which includes a div no longer generates select row events.
My dataFormat looks something like this:
```
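// Custom dataFormat renderer returning nested <div>s; since the linked change,
// clicking this markup no longer triggers row-selection events.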
var nameFormat = function(cell, row) {
return (
<div>
<div>{cell}</div>
<div>{row.addlText}</div>
</div>);
};
```
Thanks!
Will
|
1.0
|
dataFormat with div no longer allows for selection - In version 2.9.2, due to the following change:
https://github.com/AllenFang/react-bootstrap-table/commit/eff49cfd90ba7ebbd343c2da921a041c5e310853
my dataFormat which includes a div no longer generates select row events.
My dataFormat looks something like this:
```
var nameFormat = function(cell, row) {
return (
<div>
<div>{cell}</div>
<div>{row.addlText}</div>
</div>);
};
```
Thanks!
Will
|
process
|
dataformat with div no longer allows for selection in version due to the following change my dataformat which includes a div no longer generates select row events my dataformat looks something like this var nameformat function cell row return cell row addltext thanks will
| 1
|
67,283
| 27,779,335,112
|
IssuesEvent
|
2023-03-16 19:41:13
|
cityofaustin/atd-data-tech
|
https://api.github.com/repos/cityofaustin/atd-data-tech
|
closed
|
[DevOps] Bump Hasura Engine to v2.20 in local + staging
|
Service: Dev Product: Moped Type: DevOps Project: Moped v2.0
|
As discussed [here](https://austininnovation.slack.com/archives/CNUEPKLB1/p1678984517268649), going forward we will do this every release cycle:
1. Bump our local docker-compose and the moped staging ECS definition to the most recent Hasura version
2. Bump the prod ECS task definition to match the previous local/staging version
Right now we’re on 2.17.1 everywhere, so we need to kick this off by upgrading local to v2.20, and during [the release](#11661) we can leave prod as-is but bring staging ECS up to match 2.20.
|
1.0
|
[DevOps] Bump Hasura Engine to v2.20 in local + staging - As discussed [here](https://austininnovation.slack.com/archives/CNUEPKLB1/p1678984517268649), going forward we will do this every release cycle:
1. Bump our local docker-compose and the moped staging ECS definition to the most recent Hasura version
2. Bump the prod ECS task definition to match the previous local/staging version
Right now we’re on 2.17.1 everywhere, so we need to kick this off by upgrading local to v2.20, and during [the release](#11661) we can leave prod as-is but bring staging ECS up to match 2.20.
|
non_process
|
bump hasura engine to in local staging as discussed going forward we will do this every release cycle bump our local docker compose and the moped staging ecs definition to the most recent hasura version bump the prod ecs task definition to match the previous local staging version right now we’re on everywhere so we need to kick this off by upgrading local to and during we can leave prod as is but bring staging ecs up to match
| 0
|
2,132
| 4,971,902,690
|
IssuesEvent
|
2016-12-05 20:00:11
|
matz-e/lobster
|
https://api.github.com/repos/matz-e/lobster
|
closed
|
Better estimate tasks left
|
bug processing
|
Currently we provide some estimate of how much work is left to WorkQueue. We calculate that as `units_available / tasksize`, without accounting for anything that is already running. At the same time, I normally see a lot of workers connecting and disconnecting when all work is currently running, so I think we should adjust this to minimize squandering of resources.
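
A minimal sketch of the estimate described above and of one possible adjustment; the function names and the subtraction of running units are assumptions for illustration, not Lobster's actual code:

```python
# Current behaviour (as described): running work is not accounted for.
# Integer division is assumed here to yield a whole-task count.
def tasks_left(units_available: int, tasksize: int) -> int:
    return units_available // tasksize

# One possible adjustment (assumption): exclude units that are already running,
# so workers are not requested for work that is in flight.
def tasks_left_adjusted(units_available: int, units_running: int, tasksize: int) -> int:
    return max(units_available - units_running, 0) // tasksize
```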
|
1.0
|
Better estimate tasks left - Currently we provide some estimate of how much work is left to WorkQueue. We calculate that as `units_available / tasksize`, without accounting for anything that is already running. At the same time, I normally see a lot of workers connecting and disconnecting when all work is currently running, so I think we should adjust this to minimize squandering of resources.
|
process
|
better estimate tasks left currently we provide some estimate of how much work is left to workqueue we calculate that as units available tasksize with out accounting for anything running at the same time i normally see a lot of workers connecting and disconnecting when all work is currently running thinking that we should adjust this to minimize squandering of resources
| 1
|
52,124
| 21,994,879,955
|
IssuesEvent
|
2022-05-26 04:38:43
|
hashicorp/terraform-provider-azurerm
|
https://api.github.com/repos/hashicorp/terraform-provider-azurerm
|
closed
|
azurerm_mysql_server public_network_access_enabled setting on replica not respected on create
|
bug service/mysql
|
### Community Note
* Please vote on this issue by adding a 👍 [reaction](https://blog.github.com/2016-03-10-add-reactions-to-pull-requests-issues-and-comments/) to the original issue to help the community and maintainers prioritize this request
* Please do not leave "+1" or "me too" comments, they generate extra noise for issue followers and do not help prioritize the request
* If you are interested in working on this issue or have submitted a pull request, please leave a comment
### Terraform (and AzureRM Provider) Version
Terraform v1.0.6
on darwin_amd64
+ provider registry.terraform.io/hashicorp/azurerm v2.76.0
### Affected Resource(s)
* `azurerm_mysql_server`
### Terraform Configuration Files
**mysql.tf**
```hcl
resource "azurerm_resource_group" "rg" {
name = var.resource_group
location = var.location
}
resource "azurerm_mysql_server" "mysql_primary" {
name = "test-mysql-primary"
location = var.location
resource_group_name = var.resource_group
administrator_login = "dbadmin"
administrator_login_password = "superSecretP@$$w0rd!"
sku_name = "GP_Gen5_2"
storage_mb = 204800
version = "5.7"
create_mode = "Default"
creation_source_server_id = null
auto_grow_enabled = true
infrastructure_encryption_enabled = true
public_network_access_enabled = false
ssl_enforcement_enabled = true
ssl_minimal_tls_version_enforced = "TLS1_2"
depends_on = [
azurerm_resource_group.rg
]
}
resource "azurerm_mysql_server" "mysql_replica" {
name = "test-mysql-replica"
location = var.location
resource_group_name = var.resource_group
sku_name = "GP_Gen5_2"
storage_mb = 204800
version = "5.7"
create_mode = "Replica"
creation_source_server_id = azurerm_mysql_server.mysql_primary.id
auto_grow_enabled = true
infrastructure_encryption_enabled = true
public_network_access_enabled = false
ssl_enforcement_enabled = true
ssl_minimal_tls_version_enforced = "TLS1_2"
}
```
**variables.tf**
```hcl
provider "azurerm" {
features {}
}
variable "location" {
default = "westeurope"
}
variable "resource_group" {
default = "test-mysql-replica-pub-net-access"
}
```
### Debug Output
https://gist.github.com/andrei-tonita/f9e1ebf04d7c9129c6b4d0bcafedba5f
### Panic Output
### Expected Behaviour
Replica's `public_network_access_enabled` should be kept false when creating.
### Actual Behaviour
When creating a MySQL replica, the `public_network_access_enabled` setting is set to `true`, ignoring the attribute value of `false`.
The primary MySQL instance has the correct configuration.
Subsequent `terraform plan` action after an initial `terraform apply`:
```hcl
Terraform will perform the following actions:
# azurerm_mysql_server.mysql_replica will be updated in-place
~ resource "azurerm_mysql_server" "mysql_replica" {
id = "/subscriptions/aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee/resourceGroups/test-mysql-replica-pub-net-access/providers/Microsoft.DBforMySQL/servers/test-mysql-replica"
name = "test-mysql-replica"
~ public_network_access_enabled = true -> false
tags = {}
# (16 unchanged attributes hidden)
# (1 unchanged block hidden)
}
Plan: 0 to add, 1 to change, 0 to destroy.
```
If a second `terraform apply` is performed, the replica instance will have the correct `public_network_access_enabled` configuration.
### Steps to Reproduce
1. Run `terraform plan` and `terraform apply` using the files provided above. This will result in a bad config.
2. Running `terraform plan` and `terraform apply` again will update the configuration correctly.
### Important Factoids
### References
A similar issue was fixed on the `azurerm_postgresql_server` resource:
* #11346
|
1.0
|
azurerm_mysql_server public_network_access_enabled setting on replica not respected on create - ### Community Note
* Please vote on this issue by adding a 👍 [reaction](https://blog.github.com/2016-03-10-add-reactions-to-pull-requests-issues-and-comments/) to the original issue to help the community and maintainers prioritize this request
* Please do not leave "+1" or "me too" comments, they generate extra noise for issue followers and do not help prioritize the request
* If you are interested in working on this issue or have submitted a pull request, please leave a comment
### Terraform (and AzureRM Provider) Version
Terraform v1.0.6
on darwin_amd64
+ provider registry.terraform.io/hashicorp/azurerm v2.76.0
### Affected Resource(s)
* `azurerm_mysql_server`
### Terraform Configuration Files
**mysql.tf**
```hcl
resource "azurerm_resource_group" "rg" {
name = var.resource_group
location = var.location
}
resource "azurerm_mysql_server" "mysql_primary" {
name = "test-mysql-primary"
location = var.location
resource_group_name = var.resource_group
administrator_login = "dbadmin"
administrator_login_password = "superSecretP@$$w0rd!"
sku_name = "GP_Gen5_2"
storage_mb = 204800
version = "5.7"
create_mode = "Default"
creation_source_server_id = null
auto_grow_enabled = true
infrastructure_encryption_enabled = true
public_network_access_enabled = false
ssl_enforcement_enabled = true
ssl_minimal_tls_version_enforced = "TLS1_2"
depends_on = [
azurerm_resource_group.rg
]
}
resource "azurerm_mysql_server" "mysql_replica" {
name = "test-mysql-replica"
location = var.location
resource_group_name = var.resource_group
sku_name = "GP_Gen5_2"
storage_mb = 204800
version = "5.7"
create_mode = "Replica"
creation_source_server_id = azurerm_mysql_server.mysql_primary.id
auto_grow_enabled = true
infrastructure_encryption_enabled = true
public_network_access_enabled = false
ssl_enforcement_enabled = true
ssl_minimal_tls_version_enforced = "TLS1_2"
}
```
**variables.tf**
```hcl
provider "azurerm" {
features {}
}
variable "location" {
default = "westeurope"
}
variable "resource_group" {
default = "test-mysql-replica-pub-net-access"
}
```
### Debug Output
https://gist.github.com/andrei-tonita/f9e1ebf04d7c9129c6b4d0bcafedba5f
### Panic Output
### Expected Behaviour
Replica's `public_network_access_enabled` should be kept false when creating.
### Actual Behaviour
When creating a MySQL replica, the `public_network_access_enabled` setting is set to `true`, ignoring the attribute value of `false`.
The primary MySQL instance has the correct configuration.
Subsequent `terraform plan` action after an initial `terraform apply`:
```hcl
Terraform will perform the following actions:
# azurerm_mysql_server.mysql_replica will be updated in-place
~ resource "azurerm_mysql_server" "mysql_replica" {
id = "/subscriptions/aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee/resourceGroups/test-mysql-replica-pub-net-access/providers/Microsoft.DBforMySQL/servers/test-mysql-replica"
name = "test-mysql-replica"
~ public_network_access_enabled = true -> false
tags = {}
# (16 unchanged attributes hidden)
# (1 unchanged block hidden)
}
Plan: 0 to add, 1 to change, 0 to destroy.
```
If a second `terraform apply` is performed, the replica instance will have the correct `public_network_access_enabled` configuration.
### Steps to Reproduce
1. Run `terraform plan` and `terraform apply` using the files provided above. This will result in a bad config.
2. Running `terraform plan` and `terraform apply` again will update the configuration correctly.
### Important Factoids
### References
A similar issue was fixed on the `azurerm_postgresql_server` resource:
* #11346
|
non_process
|
azurerm mysql server public network access enabled setting on replica not respected on create community note please vote on this issue by adding a 👍 to the original issue to help the community and maintainers prioritize this request please do not leave or me too comments they generate extra noise for issue followers and do not help prioritize the request if you are interested in working on this issue or have submitted a pull request please leave a comment terraform and azurerm provider version terraform on darwin provider registry terraform io hashicorp azurerm affected resource s azurerm mysql server terraform configuration files mysql tf hcl resource azurerm resource group rg name var resource group location var location resource azurerm mysql server mysql primary name test mysql primary location var location resource group name var resource group administrator login dbadmin administrator login password supersecretp sku name gp storage mb version create mode default creation source server id null auto grow enabled true infrastructure encryption enabled true public network access enabled false ssl enforcement enabled true ssl minimal tls version enforced depends on azurerm resource group rg resource azurerm mysql server mysql replica name test mysql replica location var location resource group name var resource group sku name gp storage mb version create mode replica creation source server id azurerm mysql server mysql primary id auto grow enabled true infrastructure encryption enabled true public network access enabled false ssl enforcement enabled true ssl minimal tls version enforced variables tf hcl provider azurerm features variable location default westeurope variable resource group default test mysql replica pub net access debug output panic output expected behaviour replica s public network access enabled should be kept false when creating actual behaviour when creating a mysql replica the public network access enabled setting is set to true ignoring the attribute value of false the primary mysql instance has the correct configuration subsequent terraform plan action after an initial terraform apply hcl terraform will perform the following actions azurerm mysql server mysql replica will be updated in place resource azurerm mysql server mysql replica id subscriptions aaaaaaaa bbbb cccc dddd eeeeeeeeeeee resourcegroups test mysql replica pub net access providers microsoft dbformysql servers test mysql replica name test mysql replica public network access enabled true false tags unchanged attributes hidden unchanged block hidden plan to add to change to destroy if a second terraform apply is performed the replica instance will have the correct public network access enabled configuration steps to reproduce run terraform plan and terraform apply using the files provided above this will result in a bad config running terraform plan and terraform apply again will update the configuration correctly important factoids references a similar issue was fixed on the azurerm postgresql server resource
| 0
|
80,439
| 10,173,876,866
|
IssuesEvent
|
2019-08-08 14:00:18
|
chanan/BlazorStyled
|
https://api.github.com/repos/chanan/BlazorStyled
|
closed
|
Documentation: Add section about API usage
|
Documentation
|
Now that all samples have been converted to the "tag" style, we need to add a section about how to use the API style as well.
|
1.0
|
Documentation: Add section about API usage - Now that all samples have been converted to the "tag" style, we need to add a section about how to use the API style as well.
|
non_process
|
documentation add section about api usage now that all samples have been converted to the tag style need to add a section about how to use the api style as well
| 0
|
20,640
| 27,318,214,524
|
IssuesEvent
|
2023-02-24 17:22:36
|
Parsl/parsl
|
https://api.github.com/repos/Parsl/parsl
|
closed
|
parsl auto-release should record what it just released in git
|
bug release_process
|
**Describe the bug**
At present, the parsl automatic release process:
i) edits the source code before release
ii) does not record that edited source code
iii) does not record which `master` branch version was used as the base for the edited source code.
This makes it hard to do git-related things like moving around in time, bisecting, or looking at logs between versions.
**To Reproduce**
Try to use git to get the same parsl code as a release.
Try to use git to tell me the log of what changed between two releases.
**Expected behavior**
The code that is released should be identifiable in git.
**Environment**
CI
|
1.0
|
parsl auto-release should record what it just released in git - **Describe the bug**
At present, the parsl automatic release process:
i) edits the source code before release
ii) does not record that edited source code
iii) does not record which `master` branch version was used as the base for the edited source code.
This makes it hard to do git-related things like moving around in time, bisecting, or looking at logs between versions.
**To Reproduce**
Try to use git to get the same parsl code as a release.
Try to use git to tell me the log of what changed between two releases.
**Expected behavior**
The code that is released should be identifiable in git.
**Environment**
CI
|
process
|
parsl auto release should record what it just released in git describe the bug at present the parsl automatic release process i edits the source code before release ii does not record that edited source code iii does not record which master branch version was used as the base for the edited source code this makes it hard to do git related things like move around in time bisect look at logs between versions to reproduce try to use git to get the same parsl code as a release try to use git to tell me the log of what changed between two releases expected behavior the code that is release should be identifiable in git environment ci
| 1
|
113,727
| 14,480,257,157
|
IssuesEvent
|
2020-12-10 10:55:17
|
git3080ti/Invaders
|
https://api.github.com/repos/git3080ti/Invaders
|
closed
|
Bug where the screen freezes when "no" is selected on the Rest Score screen
|
Game Design UI/UX bug
|
## Expected behavior
When "no" is selected on the Rest Score screen, it should return to the Title screen
## Actual behavior
Found a bug where the screen freezes when "no" is selected on the Rest Score screen
|
1.0
|
Bug where the screen freezes when "no" is selected on the Rest Score screen - ## Expected behavior
When "no" is selected on the Rest Score screen, it should return to the Title screen
## Actual behavior
Found a bug where the screen freezes when "no" is selected on the Rest Score screen
|
non_process
|
bug where the screen freezes when no is selected on the rest score screen expected behavior when no is selected on the rest score screen it should return to the title screen actual behavior found a bug where the screen freezes when no is selected on the rest score screen
| 0
|
355,345
| 10,579,696,402
|
IssuesEvent
|
2019-10-08 03:44:33
|
nbcp/leaderboard
|
https://api.github.com/repos/nbcp/leaderboard
|
closed
|
Distinguished Speaker of the House Badge
|
High Priority Ready for Release
|
GIVEN | WHEN | THEN
-- | -- | --
Player has an “Advanced Speaker of the House” badge AND no “Distinguished Speaker of the House” in the current season | Approver approves the Special Quest | Player receives a “Distinguished Speaker of the House” badge AND a raffle ticket for the current month
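
A minimal sketch of the rule in the table above, assuming hypothetical helper names (`has_badge`, `award_badge`, `award_raffle_ticket`) that are not taken from the leaderboard codebase:

```python
# Hypothetical encoding of the GIVEN/WHEN/THEN rule above.
def on_special_quest_approved(player, season, month):
    if (player.has_badge("Advanced Speaker of the House", season)
            and not player.has_badge("Distinguished Speaker of the House", season)):
        player.award_badge("Distinguished Speaker of the House", season)
        player.award_raffle_ticket(month)
```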
|
1.0
|
Distinguished Speaker of the House Badge - GIVEN | WHEN | THEN
-- | -- | --
Player has an “Advanced Speaker of the House” badge AND no “Distinguished Speaker of the House” in the current season | Approver approves the Special Quest | Player receives a “Distinguished Speaker of the House” badge AND a raffle ticket for the current month
|
non_process
|
distinguished speaker of the house badge given when then player has an “advanced speaker of the house” badge and no “distinguished speaker of the house” in the current season approver approves the special quest player receives a “distinguished speaker of the house” badge and a raffle ticket for the current month
| 0
|
11,092
| 13,935,140,229
|
IssuesEvent
|
2020-10-22 11:03:45
|
zammad/zammad
|
https://api.github.com/repos/zammad/zammad
|
closed
|
Zammad can't import specific ISO-2022-JP mails
|
bug mail processing prioritised by payment verified
|
<!--
Hi there - thanks for filing an issue. Please ensure the following things before creating an issue - thank you! 🤓
Since november 15th we handle all requests, except real bugs, at our community board.
Full explanation: https://community.zammad.org/t/major-change-regarding-github-issues-community-board/21
Please post:
- Feature requests
- Development questions
- Technical questions
on the board -> https://community.zammad.org !
If you think you hit a bug, please continue:
- Search existing issues and the CHANGELOG.md for your issue - there might be a solution already
- Make sure to use the latest version of Zammad if possible
- Add the `log/production.log` file from your system. Attention: Make sure no confidential data is in it!
- Please write the issue in english
- Don't remove the template - otherwise we will close the issue without further comments
- Ask questions about Zammad configuration and usage at our mailinglist. See: https://zammad.org/participate
Note: We always do our best. Unfortunately, sometimes there are too many requests and we can't handle everything at once. If you want to prioritize/escalate your issue, you can do so by means of a support contract (see https://zammad.com/pricing#selfhosted).
* The upper textblock will be removed automatically when you submit your issue *
-->
### Infos:
* Used Zammad version: 3.4
* Installation method (source, package, ..): any
* Operating system: any
* Database + version: any
* Elasticsearch version: any
* Browser + version: any
* Ticket-ID: #1077341 , #1077607 , #1077341
### Expected behavior:
Zammad imports ISO-2022-JP without issues.
### Actual behavior:
In some special cases, Zammad can't import mails encoded with ISO-2022-JP.
(You can find samples in the above-mentioned ticket IDs; anonymized mails usually no longer contain the issue, as the byte order is changed.)
In those situations Zammad logs the following:
```
"ERROR: Can't process email, you will find it for bug reporting under /opt/zammad/tmp/unprocessable_mail/aa7e19b4dfa0dc04a4b6b4b35cfff7ee.eml, please create an issue at https://github.com/zammad/zammad/issues"
"ERROR: #<Encoding::InvalidByteSequenceError: \"%\" followed by \" \" on ISO-2022-JP>"
Traceback (most recent call last):
35: from bin/rails:9:in `<main>'
34: from /usr/local/rvm/gems/ruby-2.6.5/gems/activesupport-5.2.4.3/lib/active_support/dependencies.rb:291:in `require'
33: from /usr/local/rvm/gems/ruby-2.6.5/gems/activesupport-5.2.4.3/lib/active_support/dependencies.rb:257:in `load_dependency'
32: from /usr/local/rvm/gems/ruby-2.6.5/gems/activesupport-5.2.4.3/lib/active_support/dependencies.rb:291:in `block in require'
31: from /usr/local/rvm/gems/ruby-2.6.5/gems/bootsnap-1.3.2/lib/bootsnap/load_path_cache/core_ext/kernel_require.rb:29:in `require'
30: from /usr/local/rvm/gems/ruby-2.6.5/gems/bootsnap-1.3.2/lib/bootsnap/load_path_cache/core_ext/kernel_require.rb:20:in `require_with_bootsnap_lfi'
29: from /usr/local/rvm/gems/ruby-2.6.5/gems/bootsnap-1.3.2/lib/bootsnap/load_path_cache/loaded_features_index.rb:65:in `register'
28: from /usr/local/rvm/gems/ruby-2.6.5/gems/bootsnap-1.3.2/lib/bootsnap/load_path_cache/core_ext/kernel_require.rb:21:in `block in require_with_bootsnap_lfi'
27: from /usr/local/rvm/gems/ruby-2.6.5/gems/bootsnap-1.3.2/lib/bootsnap/load_path_cache/core_ext/kernel_require.rb:21:in `require'
26: from /usr/local/rvm/gems/ruby-2.6.5/gems/railties-5.2.4.3/lib/rails/commands.rb:18:in `<main>'
25: from /usr/local/rvm/gems/ruby-2.6.5/gems/railties-5.2.4.3/lib/rails/command.rb:46:in `invoke'
24: from /usr/local/rvm/gems/ruby-2.6.5/gems/railties-5.2.4.3/lib/rails/command/base.rb:69:in `perform'
23: from /usr/local/rvm/gems/ruby-2.6.5/gems/thor-1.0.1/lib/thor.rb:392:in `dispatch'
22: from /usr/local/rvm/gems/ruby-2.6.5/gems/thor-1.0.1/lib/thor/invocation.rb:127:in `invoke_command'
21: from /usr/local/rvm/gems/ruby-2.6.5/gems/thor-1.0.1/lib/thor/command.rb:27:in `run'
20: from /usr/local/rvm/gems/ruby-2.6.5/gems/railties-5.2.4.3/lib/rails/commands/runner/runner_command.rb:41:in `perform'
19: from /usr/local/rvm/gems/ruby-2.6.5/gems/railties-5.2.4.3/lib/rails/commands/runner/runner_command.rb:41:in `eval'
18: from /usr/local/rvm/gems/ruby-2.6.5/gems/railties-5.2.4.3/lib/rails/commands/runner/runner_command.rb:41:in `<main>'
17: from /opt/zammad/app/models/channel/email_parser.rb:481:in `process_unprocessable_mails'
16: from /opt/zammad/app/models/channel/email_parser.rb:481:in `glob'
15: from /opt/zammad/app/models/channel/email_parser.rb:482:in `block in process_unprocessable_mails'
14: from /opt/zammad/app/models/channel/email_parser.rb:117:in `process'
13: from /usr/local/rvm/rubies/ruby-2.6.5/lib/ruby/2.6.0/timeout.rb:108:in `timeout'
12: from /usr/local/rvm/rubies/ruby-2.6.5/lib/ruby/2.6.0/timeout.rb:33:in `catch'
11: from /usr/local/rvm/rubies/ruby-2.6.5/lib/ruby/2.6.0/timeout.rb:33:in `catch'
10: from /usr/local/rvm/rubies/ruby-2.6.5/lib/ruby/2.6.0/timeout.rb:33:in `block in catch'
9: from /usr/local/rvm/rubies/ruby-2.6.5/lib/ruby/2.6.0/timeout.rb:93:in `block in timeout'
8: from /opt/zammad/app/models/channel/email_parser.rb:118:in `block in process'
7: from /opt/zammad/app/models/channel/email_parser.rb:139:in `_process'
6: from /opt/zammad/app/models/channel/email_parser.rb:81:in `parse'
5: from /opt/zammad/app/models/channel/email_parser.rb:508:in `force_parts_encoding_if_needed'
4: from /usr/local/rvm/rubies/ruby-2.6.5/lib/ruby/2.6.0/delegate.rb:349:in `block in delegating_block'
3: from /usr/local/rvm/rubies/ruby-2.6.5/lib/ruby/2.6.0/delegate.rb:349:in `each'
2: from /opt/zammad/app/models/channel/email_parser.rb:508:in `block in force_parts_encoding_if_needed'
1: from /opt/zammad/app/models/channel/email_parser.rb:515:in `force_single_part_encoding_if_needed'
/opt/zammad/app/models/channel/email_parser.rb:515:in `encode': "%" followed by " " on ISO-2022-JP (Encoding::InvalidByteSequenceError)
22: from bin/rails:9:in `<main>'
21: from /usr/local/rvm/gems/ruby-2.6.5/gems/activesupport-5.2.4.3/lib/active_support/dependencies.rb:291:in `require'
20: from /usr/local/rvm/gems/ruby-2.6.5/gems/activesupport-5.2.4.3/lib/active_support/dependencies.rb:257:in `load_dependency'
19: from /usr/local/rvm/gems/ruby-2.6.5/gems/activesupport-5.2.4.3/lib/active_support/dependencies.rb:291:in `block in require'
18: from /usr/local/rvm/gems/ruby-2.6.5/gems/bootsnap-1.3.2/lib/bootsnap/load_path_cache/core_ext/kernel_require.rb:29:in `require'
17: from /usr/local/rvm/gems/ruby-2.6.5/gems/bootsnap-1.3.2/lib/bootsnap/load_path_cache/core_ext/kernel_require.rb:20:in `require_with_bootsnap_lfi'
16: from /usr/local/rvm/gems/ruby-2.6.5/gems/bootsnap-1.3.2/lib/bootsnap/load_path_cache/loaded_features_index.rb:65:in `register'
15: from /usr/local/rvm/gems/ruby-2.6.5/gems/bootsnap-1.3.2/lib/bootsnap/load_path_cache/core_ext/kernel_require.rb:21:in `block in require_with_bootsnap_lfi'
14: from /usr/local/rvm/gems/ruby-2.6.5/gems/bootsnap-1.3.2/lib/bootsnap/load_path_cache/core_ext/kernel_require.rb:21:in `require'
13: from /usr/local/rvm/gems/ruby-2.6.5/gems/railties-5.2.4.3/lib/rails/commands.rb:18:in `<main>'
12: from /usr/local/rvm/gems/ruby-2.6.5/gems/railties-5.2.4.3/lib/rails/command.rb:46:in `invoke'
11: from /usr/local/rvm/gems/ruby-2.6.5/gems/railties-5.2.4.3/lib/rails/command/base.rb:69:in `perform'
10: from /usr/local/rvm/gems/ruby-2.6.5/gems/thor-1.0.1/lib/thor.rb:392:in `dispatch'
9: from /usr/local/rvm/gems/ruby-2.6.5/gems/thor-1.0.1/lib/thor/invocation.rb:127:in `invoke_command'
8: from /usr/local/rvm/gems/ruby-2.6.5/gems/thor-1.0.1/lib/thor/command.rb:27:in `run'
7: from /usr/local/rvm/gems/ruby-2.6.5/gems/railties-5.2.4.3/lib/rails/commands/runner/runner_command.rb:41:in `perform'
6: from /usr/local/rvm/gems/ruby-2.6.5/gems/railties-5.2.4.3/lib/rails/commands/runner/runner_command.rb:41:in `eval'
5: from /usr/local/rvm/gems/ruby-2.6.5/gems/railties-5.2.4.3/lib/rails/commands/runner/runner_command.rb:41:in `<main>'
4: from /opt/zammad/app/models/channel/email_parser.rb:481:in `process_unprocessable_mails'
3: from /opt/zammad/app/models/channel/email_parser.rb:481:in `glob'
2: from /opt/zammad/app/models/channel/email_parser.rb:482:in `block in process_unprocessable_mails'
1: from /opt/zammad/app/models/channel/email_parser.rb:115:in `process'
/opt/zammad/app/models/channel/email_parser.rb:133:in `rescue in process': #<Encoding::InvalidByteSequenceError: "%" followed by " " on ISO-2022-JP> (RuntimeError)
/opt/zammad/app/models/channel/email_parser.rb:515:in `encode'
/opt/zammad/app/models/channel/email_parser.rb:515:in `force_single_part_encoding_if_needed'
/opt/zammad/app/models/channel/email_parser.rb:508:in `block in force_parts_encoding_if_needed'
/usr/local/rvm/rubies/ruby-2.6.5/lib/ruby/2.6.0/delegate.rb:349:in `each'
/usr/local/rvm/rubies/ruby-2.6.5/lib/ruby/2.6.0/delegate.rb:349:in `block in delegating_block'
/opt/zammad/app/models/channel/email_parser.rb:508:in `force_parts_encoding_if_needed'
/opt/zammad/app/models/channel/email_parser.rb:81:in `parse'
/opt/zammad/app/models/channel/email_parser.rb:139:in `_process'
/opt/zammad/app/models/channel/email_parser.rb:118:in `block in process'
/usr/local/rvm/rubies/ruby-2.6.5/lib/ruby/2.6.0/timeout.rb:93:in `block in timeout'
/usr/local/rvm/rubies/ruby-2.6.5/lib/ruby/2.6.0/timeout.rb:33:in `block in catch'
/usr/local/rvm/rubies/ruby-2.6.5/lib/ruby/2.6.0/timeout.rb:33:in `catch'
/usr/local/rvm/rubies/ruby-2.6.5/lib/ruby/2.6.0/timeout.rb:33:in `catch'
/usr/local/rvm/rubies/ruby-2.6.5/lib/ruby/2.6.0/timeout.rb:108:in `timeout'
/opt/zammad/app/models/channel/email_parser.rb:117:in `process'
/opt/zammad/app/models/channel/email_parser.rb:482:in `block in process_unprocessable_mails'
/opt/zammad/app/models/channel/email_parser.rb:481:in `glob'
/opt/zammad/app/models/channel/email_parser.rb:481:in `process_unprocessable_mails'
/usr/local/rvm/gems/ruby-2.6.5/gems/railties-5.2.4.3/lib/rails/commands/runner/runner_command.rb:41:in `<main>'
/usr/local/rvm/gems/ruby-2.6.5/gems/railties-5.2.4.3/lib/rails/commands/runner/runner_command.rb:41:in `eval'
/usr/local/rvm/gems/ruby-2.6.5/gems/railties-5.2.4.3/lib/rails/commands/runner/runner_command.rb:41:in `perform'
/usr/local/rvm/gems/ruby-2.6.5/gems/thor-1.0.1/lib/thor/command.rb:27:in `run'
/usr/local/rvm/gems/ruby-2.6.5/gems/thor-1.0.1/lib/thor/invocation.rb:127:in `invoke_command'
/usr/local/rvm/gems/ruby-2.6.5/gems/thor-1.0.1/lib/thor.rb:392:in `dispatch'
/usr/local/rvm/gems/ruby-2.6.5/gems/railties-5.2.4.3/lib/rails/command/base.rb:69:in `perform'
/usr/local/rvm/gems/ruby-2.6.5/gems/railties-5.2.4.3/lib/rails/command.rb:46:in `invoke'
/usr/local/rvm/gems/ruby-2.6.5/gems/railties-5.2.4.3/lib/rails/commands.rb:18:in `<main>'
/usr/local/rvm/gems/ruby-2.6.5/gems/bootsnap-1.3.2/lib/bootsnap/load_path_cache/core_ext/kernel_require.rb:21:in `require'
/usr/local/rvm/gems/ruby-2.6.5/gems/bootsnap-1.3.2/lib/bootsnap/load_path_cache/core_ext/kernel_require.rb:21:in `block in require_with_bootsnap_lfi'
/usr/local/rvm/gems/ruby-2.6.5/gems/bootsnap-1.3.2/lib/bootsnap/load_path_cache/loaded_features_index.rb:65:in `register'
/usr/local/rvm/gems/ruby-2.6.5/gems/bootsnap-1.3.2/lib/bootsnap/load_path_cache/core_ext/kernel_require.rb:20:in `require_with_bootsnap_lfi'
/usr/local/rvm/gems/ruby-2.6.5/gems/bootsnap-1.3.2/lib/bootsnap/load_path_cache/core_ext/kernel_require.rb:29:in `require'
/usr/local/rvm/gems/ruby-2.6.5/gems/activesupport-5.2.4.3/lib/active_support/dependencies.rb:291:in `block in require'
/usr/local/rvm/gems/ruby-2.6.5/gems/activesupport-5.2.4.3/lib/active_support/dependencies.rb:257:in `load_dependency'
/usr/local/rvm/gems/ruby-2.6.5/gems/activesupport-5.2.4.3/lib/active_support/dependencies.rb:291:in `require'
bin/rails:9:in `<main>'
```
### Steps to reproduce the behavior:
* have a specific byte sequence of an ISO-2022-JP encoded mail
* try to import it
Yes, I'm sure this is a bug and not a feature request or a general question.
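
A minimal illustration, in Python rather than Zammad's Ruby, of the kind of failure described above: strict decoding of a malformed ISO-2022-JP byte stream can raise, while a lossy fallback does not (the byte string and helper below are hypothetical, not Zammad's fix):

```python
# Hypothetical malformed input: escape into JIS X 0208, then "%" followed by a space.
malformed = b"\x1b$B% "

def decode_mail_part(raw: bytes) -> str:
    try:
        return raw.decode("iso-2022-jp")                      # strict decode may raise
    except UnicodeDecodeError:
        return raw.decode("iso-2022-jp", errors="replace")    # lossy fallback, never raises

print(decode_mail_part(malformed))
```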
|
1.0
|
Zammad can't import specific ISO-2022-JP mails - <!--
Hi there - thanks for filing an issue. Please ensure the following things before creating an issue - thank you! 🤓
Since november 15th we handle all requests, except real bugs, at our community board.
Full explanation: https://community.zammad.org/t/major-change-regarding-github-issues-community-board/21
Please post:
- Feature requests
- Development questions
- Technical questions
on the board -> https://community.zammad.org !
If you think you hit a bug, please continue:
- Search existing issues and the CHANGELOG.md for your issue - there might be a solution already
- Make sure to use the latest version of Zammad if possible
- Add the `log/production.log` file from your system. Attention: Make sure no confidential data is in it!
- Please write the issue in english
- Don't remove the template - otherwise we will close the issue without further comments
- Ask questions about Zammad configuration and usage at our mailinglist. See: https://zammad.org/participate
Note: We always do our best. Unfortunately, sometimes there are too many requests and we can't handle everything at once. If you want to prioritize/escalate your issue, you can do so by means of a support contract (see https://zammad.com/pricing#selfhosted).
* The upper textblock will be removed automatically when you submit your issue *
-->
### Infos:
* Used Zammad version: 3.4
* Installation method (source, package, ..): any
* Operating system: any
* Database + version: any
* Elasticsearch version: any
* Browser + version: any
* Ticket-ID: #1077341 , #1077607 , #1077341
### Expected behavior:
Zammad imports ISO-2022-JP without issues.
### Actual behavior:
In some special cases, Zammad can't import mails encoded with ISO-2022-JP.
(You can find samples in the above-mentioned ticket IDs; anonymized mails usually no longer contain the issue, as the byte order is changed.)
In those situations Zammad logs the following:
```
"ERROR: Can't process email, you will find it for bug reporting under /opt/zammad/tmp/unprocessable_mail/aa7e19b4dfa0dc04a4b6b4b35cfff7ee.eml, please create an issue at https://github.com/zammad/zammad/issues"
"ERROR: #<Encoding::InvalidByteSequenceError: \"%\" followed by \" \" on ISO-2022-JP>"
Traceback (most recent call last):
35: from bin/rails:9:in `<main>'
34: from /usr/local/rvm/gems/ruby-2.6.5/gems/activesupport-5.2.4.3/lib/active_support/dependencies.rb:291:in `require'
33: from /usr/local/rvm/gems/ruby-2.6.5/gems/activesupport-5.2.4.3/lib/active_support/dependencies.rb:257:in `load_dependency'
32: from /usr/local/rvm/gems/ruby-2.6.5/gems/activesupport-5.2.4.3/lib/active_support/dependencies.rb:291:in `block in require'
31: from /usr/local/rvm/gems/ruby-2.6.5/gems/bootsnap-1.3.2/lib/bootsnap/load_path_cache/core_ext/kernel_require.rb:29:in `require'
30: from /usr/local/rvm/gems/ruby-2.6.5/gems/bootsnap-1.3.2/lib/bootsnap/load_path_cache/core_ext/kernel_require.rb:20:in `require_with_bootsnap_lfi'
29: from /usr/local/rvm/gems/ruby-2.6.5/gems/bootsnap-1.3.2/lib/bootsnap/load_path_cache/loaded_features_index.rb:65:in `register'
28: from /usr/local/rvm/gems/ruby-2.6.5/gems/bootsnap-1.3.2/lib/bootsnap/load_path_cache/core_ext/kernel_require.rb:21:in `block in require_with_bootsnap_lfi'
27: from /usr/local/rvm/gems/ruby-2.6.5/gems/bootsnap-1.3.2/lib/bootsnap/load_path_cache/core_ext/kernel_require.rb:21:in `require'
26: from /usr/local/rvm/gems/ruby-2.6.5/gems/railties-5.2.4.3/lib/rails/commands.rb:18:in `<main>'
25: from /usr/local/rvm/gems/ruby-2.6.5/gems/railties-5.2.4.3/lib/rails/command.rb:46:in `invoke'
24: from /usr/local/rvm/gems/ruby-2.6.5/gems/railties-5.2.4.3/lib/rails/command/base.rb:69:in `perform'
23: from /usr/local/rvm/gems/ruby-2.6.5/gems/thor-1.0.1/lib/thor.rb:392:in `dispatch'
22: from /usr/local/rvm/gems/ruby-2.6.5/gems/thor-1.0.1/lib/thor/invocation.rb:127:in `invoke_command'
21: from /usr/local/rvm/gems/ruby-2.6.5/gems/thor-1.0.1/lib/thor/command.rb:27:in `run'
20: from /usr/local/rvm/gems/ruby-2.6.5/gems/railties-5.2.4.3/lib/rails/commands/runner/runner_command.rb:41:in `perform'
19: from /usr/local/rvm/gems/ruby-2.6.5/gems/railties-5.2.4.3/lib/rails/commands/runner/runner_command.rb:41:in `eval'
18: from /usr/local/rvm/gems/ruby-2.6.5/gems/railties-5.2.4.3/lib/rails/commands/runner/runner_command.rb:41:in `<main>'
17: from /opt/zammad/app/models/channel/email_parser.rb:481:in `process_unprocessable_mails'
16: from /opt/zammad/app/models/channel/email_parser.rb:481:in `glob'
15: from /opt/zammad/app/models/channel/email_parser.rb:482:in `block in process_unprocessable_mails'
14: from /opt/zammad/app/models/channel/email_parser.rb:117:in `process'
13: from /usr/local/rvm/rubies/ruby-2.6.5/lib/ruby/2.6.0/timeout.rb:108:in `timeout'
12: from /usr/local/rvm/rubies/ruby-2.6.5/lib/ruby/2.6.0/timeout.rb:33:in `catch'
11: from /usr/local/rvm/rubies/ruby-2.6.5/lib/ruby/2.6.0/timeout.rb:33:in `catch'
10: from /usr/local/rvm/rubies/ruby-2.6.5/lib/ruby/2.6.0/timeout.rb:33:in `block in catch'
9: from /usr/local/rvm/rubies/ruby-2.6.5/lib/ruby/2.6.0/timeout.rb:93:in `block in timeout'
8: from /opt/zammad/app/models/channel/email_parser.rb:118:in `block in process'
7: from /opt/zammad/app/models/channel/email_parser.rb:139:in `_process'
6: from /opt/zammad/app/models/channel/email_parser.rb:81:in `parse'
5: from /opt/zammad/app/models/channel/email_parser.rb:508:in `force_parts_encoding_if_needed'
4: from /usr/local/rvm/rubies/ruby-2.6.5/lib/ruby/2.6.0/delegate.rb:349:in `block in delegating_block'
3: from /usr/local/rvm/rubies/ruby-2.6.5/lib/ruby/2.6.0/delegate.rb:349:in `each'
2: from /opt/zammad/app/models/channel/email_parser.rb:508:in `block in force_parts_encoding_if_needed'
1: from /opt/zammad/app/models/channel/email_parser.rb:515:in `force_single_part_encoding_if_needed'
/opt/zammad/app/models/channel/email_parser.rb:515:in `encode': "%" followed by " " on ISO-2022-JP (Encoding::InvalidByteSequenceError)
22: from bin/rails:9:in `<main>'
21: from /usr/local/rvm/gems/ruby-2.6.5/gems/activesupport-5.2.4.3/lib/active_support/dependencies.rb:291:in `require'
20: from /usr/local/rvm/gems/ruby-2.6.5/gems/activesupport-5.2.4.3/lib/active_support/dependencies.rb:257:in `load_dependency'
19: from /usr/local/rvm/gems/ruby-2.6.5/gems/activesupport-5.2.4.3/lib/active_support/dependencies.rb:291:in `block in require'
18: from /usr/local/rvm/gems/ruby-2.6.5/gems/bootsnap-1.3.2/lib/bootsnap/load_path_cache/core_ext/kernel_require.rb:29:in `require'
17: from /usr/local/rvm/gems/ruby-2.6.5/gems/bootsnap-1.3.2/lib/bootsnap/load_path_cache/core_ext/kernel_require.rb:20:in `require_with_bootsnap_lfi'
16: from /usr/local/rvm/gems/ruby-2.6.5/gems/bootsnap-1.3.2/lib/bootsnap/load_path_cache/loaded_features_index.rb:65:in `register'
15: from /usr/local/rvm/gems/ruby-2.6.5/gems/bootsnap-1.3.2/lib/bootsnap/load_path_cache/core_ext/kernel_require.rb:21:in `block in require_with_bootsnap_lfi'
14: from /usr/local/rvm/gems/ruby-2.6.5/gems/bootsnap-1.3.2/lib/bootsnap/load_path_cache/core_ext/kernel_require.rb:21:in `require'
13: from /usr/local/rvm/gems/ruby-2.6.5/gems/railties-5.2.4.3/lib/rails/commands.rb:18:in `<main>'
12: from /usr/local/rvm/gems/ruby-2.6.5/gems/railties-5.2.4.3/lib/rails/command.rb:46:in `invoke'
11: from /usr/local/rvm/gems/ruby-2.6.5/gems/railties-5.2.4.3/lib/rails/command/base.rb:69:in `perform'
10: from /usr/local/rvm/gems/ruby-2.6.5/gems/thor-1.0.1/lib/thor.rb:392:in `dispatch'
9: from /usr/local/rvm/gems/ruby-2.6.5/gems/thor-1.0.1/lib/thor/invocation.rb:127:in `invoke_command'
8: from /usr/local/rvm/gems/ruby-2.6.5/gems/thor-1.0.1/lib/thor/command.rb:27:in `run'
7: from /usr/local/rvm/gems/ruby-2.6.5/gems/railties-5.2.4.3/lib/rails/commands/runner/runner_command.rb:41:in `perform'
6: from /usr/local/rvm/gems/ruby-2.6.5/gems/railties-5.2.4.3/lib/rails/commands/runner/runner_command.rb:41:in `eval'
5: from /usr/local/rvm/gems/ruby-2.6.5/gems/railties-5.2.4.3/lib/rails/commands/runner/runner_command.rb:41:in `<main>'
4: from /opt/zammad/app/models/channel/email_parser.rb:481:in `process_unprocessable_mails'
3: from /opt/zammad/app/models/channel/email_parser.rb:481:in `glob'
2: from /opt/zammad/app/models/channel/email_parser.rb:482:in `block in process_unprocessable_mails'
1: from /opt/zammad/app/models/channel/email_parser.rb:115:in `process'
/opt/zammad/app/models/channel/email_parser.rb:133:in `rescue in process': #<Encoding::InvalidByteSequenceError: "%" followed by " " on ISO-2022-JP> (RuntimeError)
/opt/zammad/app/models/channel/email_parser.rb:515:in `encode'
/opt/zammad/app/models/channel/email_parser.rb:515:in `force_single_part_encoding_if_needed'
/opt/zammad/app/models/channel/email_parser.rb:508:in `block in force_parts_encoding_if_needed'
/usr/local/rvm/rubies/ruby-2.6.5/lib/ruby/2.6.0/delegate.rb:349:in `each'
/usr/local/rvm/rubies/ruby-2.6.5/lib/ruby/2.6.0/delegate.rb:349:in `block in delegating_block'
/opt/zammad/app/models/channel/email_parser.rb:508:in `force_parts_encoding_if_needed'
/opt/zammad/app/models/channel/email_parser.rb:81:in `parse'
/opt/zammad/app/models/channel/email_parser.rb:139:in `_process'
/opt/zammad/app/models/channel/email_parser.rb:118:in `block in process'
/usr/local/rvm/rubies/ruby-2.6.5/lib/ruby/2.6.0/timeout.rb:93:in `block in timeout'
/usr/local/rvm/rubies/ruby-2.6.5/lib/ruby/2.6.0/timeout.rb:33:in `block in catch'
/usr/local/rvm/rubies/ruby-2.6.5/lib/ruby/2.6.0/timeout.rb:33:in `catch'
/usr/local/rvm/rubies/ruby-2.6.5/lib/ruby/2.6.0/timeout.rb:33:in `catch'
/usr/local/rvm/rubies/ruby-2.6.5/lib/ruby/2.6.0/timeout.rb:108:in `timeout'
/opt/zammad/app/models/channel/email_parser.rb:117:in `process'
/opt/zammad/app/models/channel/email_parser.rb:482:in `block in process_unprocessable_mails'
/opt/zammad/app/models/channel/email_parser.rb:481:in `glob'
/opt/zammad/app/models/channel/email_parser.rb:481:in `process_unprocessable_mails'
/usr/local/rvm/gems/ruby-2.6.5/gems/railties-5.2.4.3/lib/rails/commands/runner/runner_command.rb:41:in `<main>'
/usr/local/rvm/gems/ruby-2.6.5/gems/railties-5.2.4.3/lib/rails/commands/runner/runner_command.rb:41:in `eval'
/usr/local/rvm/gems/ruby-2.6.5/gems/railties-5.2.4.3/lib/rails/commands/runner/runner_command.rb:41:in `perform'
/usr/local/rvm/gems/ruby-2.6.5/gems/thor-1.0.1/lib/thor/command.rb:27:in `run'
/usr/local/rvm/gems/ruby-2.6.5/gems/thor-1.0.1/lib/thor/invocation.rb:127:in `invoke_command'
/usr/local/rvm/gems/ruby-2.6.5/gems/thor-1.0.1/lib/thor.rb:392:in `dispatch'
/usr/local/rvm/gems/ruby-2.6.5/gems/railties-5.2.4.3/lib/rails/command/base.rb:69:in `perform'
/usr/local/rvm/gems/ruby-2.6.5/gems/railties-5.2.4.3/lib/rails/command.rb:46:in `invoke'
/usr/local/rvm/gems/ruby-2.6.5/gems/railties-5.2.4.3/lib/rails/commands.rb:18:in `<main>'
/usr/local/rvm/gems/ruby-2.6.5/gems/bootsnap-1.3.2/lib/bootsnap/load_path_cache/core_ext/kernel_require.rb:21:in `require'
/usr/local/rvm/gems/ruby-2.6.5/gems/bootsnap-1.3.2/lib/bootsnap/load_path_cache/core_ext/kernel_require.rb:21:in `block in require_with_bootsnap_lfi'
/usr/local/rvm/gems/ruby-2.6.5/gems/bootsnap-1.3.2/lib/bootsnap/load_path_cache/loaded_features_index.rb:65:in `register'
/usr/local/rvm/gems/ruby-2.6.5/gems/bootsnap-1.3.2/lib/bootsnap/load_path_cache/core_ext/kernel_require.rb:20:in `require_with_bootsnap_lfi'
/usr/local/rvm/gems/ruby-2.6.5/gems/bootsnap-1.3.2/lib/bootsnap/load_path_cache/core_ext/kernel_require.rb:29:in `require'
/usr/local/rvm/gems/ruby-2.6.5/gems/activesupport-5.2.4.3/lib/active_support/dependencies.rb:291:in `block in require'
/usr/local/rvm/gems/ruby-2.6.5/gems/activesupport-5.2.4.3/lib/active_support/dependencies.rb:257:in `load_dependency'
/usr/local/rvm/gems/ruby-2.6.5/gems/activesupport-5.2.4.3/lib/active_support/dependencies.rb:291:in `require'
bin/rails:9:in `<main>'
```
### Steps to reproduce the behavior:
* have a specific byte sequence of an ISO-2022-JP encoded mail
* try to import it
Yes, I'm sure this is a bug and not a feature request or a general question.
|
process
|
zammad can t import specific iso jp mails hi there thanks for filing an issue please ensure the following things before creating an issue thank you 🤓 since november we handle all requests except real bugs at our community board full explanation please post feature requests development questions technical questions on the board if you think you hit a bug please continue search existing issues and the changelog md for your issue there might be a solution already make sure to use the latest version of zammad if possible add the log production log file from your system attention make sure no confidential data is in it please write the issue in english don t remove the template otherwise we will close the issue without further comments ask questions about zammad configuration and usage at our mailinglist see note we always do our best unfortunately sometimes there are too many requests and we can t handle everything at once if you want to prioritize escalate your issue you can do so by means of a support contract see the upper textblock will be removed automatically when you submit your issue infos used zammad version installation method source package any operating system any database version any elasticsearch version any browser version any ticket id expected behavior zammad imports iso jp without issues actual behavior in some special cases zammad can t import mails encoded with iso jp you can find samples in above mentioned ticket ids anonymized mails usually no longer contain the issue as the byte order is changed in that situations zammad logs the following error can t process email you will find it for bug reporting under opt zammad tmp unprocessable mail eml please create an issue at error traceback most recent call last from bin rails in from usr local rvm gems ruby gems activesupport lib active support dependencies rb in require from usr local rvm gems ruby gems activesupport lib active support dependencies rb in load dependency from usr local rvm gems ruby gems activesupport lib active support dependencies rb in block in require from usr local rvm gems ruby gems bootsnap lib bootsnap load path cache core ext kernel require rb in require from usr local rvm gems ruby gems bootsnap lib bootsnap load path cache core ext kernel require rb in require with bootsnap lfi from usr local rvm gems ruby gems bootsnap lib bootsnap load path cache loaded features index rb in register from usr local rvm gems ruby gems bootsnap lib bootsnap load path cache core ext kernel require rb in block in require with bootsnap lfi from usr local rvm gems ruby gems bootsnap lib bootsnap load path cache core ext kernel require rb in require from usr local rvm gems ruby gems railties lib rails commands rb in from usr local rvm gems ruby gems railties lib rails command rb in invoke from usr local rvm gems ruby gems railties lib rails command base rb in perform from usr local rvm gems ruby gems thor lib thor rb in dispatch from usr local rvm gems ruby gems thor lib thor invocation rb in invoke command from usr local rvm gems ruby gems thor lib thor command rb in run from usr local rvm gems ruby gems railties lib rails commands runner runner command rb in perform from usr local rvm gems ruby gems railties lib rails commands runner runner command rb in eval from usr local rvm gems ruby gems railties lib rails commands runner runner command rb in from opt zammad app models channel email parser rb in process unprocessable mails from opt zammad app models channel email parser rb in glob from opt zammad app models channel 
email parser rb in block in process unprocessable mails from opt zammad app models channel email parser rb in process from usr local rvm rubies ruby lib ruby timeout rb in timeout from usr local rvm rubies ruby lib ruby timeout rb in catch from usr local rvm rubies ruby lib ruby timeout rb in catch from usr local rvm rubies ruby lib ruby timeout rb in block in catch from usr local rvm rubies ruby lib ruby timeout rb in block in timeout from opt zammad app models channel email parser rb in block in process from opt zammad app models channel email parser rb in process from opt zammad app models channel email parser rb in parse from opt zammad app models channel email parser rb in force parts encoding if needed from usr local rvm rubies ruby lib ruby delegate rb in block in delegating block from usr local rvm rubies ruby lib ruby delegate rb in each from opt zammad app models channel email parser rb in block in force parts encoding if needed from opt zammad app models channel email parser rb in force single part encoding if needed opt zammad app models channel email parser rb in encode followed by on iso jp encoding invalidbytesequenceerror from bin rails in from usr local rvm gems ruby gems activesupport lib active support dependencies rb in require from usr local rvm gems ruby gems activesupport lib active support dependencies rb in load dependency from usr local rvm gems ruby gems activesupport lib active support dependencies rb in block in require from usr local rvm gems ruby gems bootsnap lib bootsnap load path cache core ext kernel require rb in require from usr local rvm gems ruby gems bootsnap lib bootsnap load path cache core ext kernel require rb in require with bootsnap lfi from usr local rvm gems ruby gems bootsnap lib bootsnap load path cache loaded features index rb in register from usr local rvm gems ruby gems bootsnap lib bootsnap load path cache core ext kernel require rb in block in require with bootsnap lfi from usr local rvm gems ruby gems bootsnap lib bootsnap load path cache core ext kernel require rb in require from usr local rvm gems ruby gems railties lib rails commands rb in from usr local rvm gems ruby gems railties lib rails command rb in invoke from usr local rvm gems ruby gems railties lib rails command base rb in perform from usr local rvm gems ruby gems thor lib thor rb in dispatch from usr local rvm gems ruby gems thor lib thor invocation rb in invoke command from usr local rvm gems ruby gems thor lib thor command rb in run from usr local rvm gems ruby gems railties lib rails commands runner runner command rb in perform from usr local rvm gems ruby gems railties lib rails commands runner runner command rb in eval from usr local rvm gems ruby gems railties lib rails commands runner runner command rb in from opt zammad app models channel email parser rb in process unprocessable mails from opt zammad app models channel email parser rb in glob from opt zammad app models channel email parser rb in block in process unprocessable mails from opt zammad app models channel email parser rb in process opt zammad app models channel email parser rb in rescue in process runtimeerror opt zammad app models channel email parser rb in encode opt zammad app models channel email parser rb in force single part encoding if needed opt zammad app models channel email parser rb in block in force parts encoding if needed usr local rvm rubies ruby lib ruby delegate rb in each usr local rvm rubies ruby lib ruby delegate rb in block in delegating block opt zammad app models channel email 
parser rb in force parts encoding if needed opt zammad app models channel email parser rb in parse opt zammad app models channel email parser rb in process opt zammad app models channel email parser rb in block in process usr local rvm rubies ruby lib ruby timeout rb in block in timeout usr local rvm rubies ruby lib ruby timeout rb in block in catch usr local rvm rubies ruby lib ruby timeout rb in catch usr local rvm rubies ruby lib ruby timeout rb in catch usr local rvm rubies ruby lib ruby timeout rb in timeout opt zammad app models channel email parser rb in process opt zammad app models channel email parser rb in block in process unprocessable mails opt zammad app models channel email parser rb in glob opt zammad app models channel email parser rb in process unprocessable mails usr local rvm gems ruby gems railties lib rails commands runner runner command rb in usr local rvm gems ruby gems railties lib rails commands runner runner command rb in eval usr local rvm gems ruby gems railties lib rails commands runner runner command rb in perform usr local rvm gems ruby gems thor lib thor command rb in run usr local rvm gems ruby gems thor lib thor invocation rb in invoke command usr local rvm gems ruby gems thor lib thor rb in dispatch usr local rvm gems ruby gems railties lib rails command base rb in perform usr local rvm gems ruby gems railties lib rails command rb in invoke usr local rvm gems ruby gems railties lib rails commands rb in usr local rvm gems ruby gems bootsnap lib bootsnap load path cache core ext kernel require rb in require usr local rvm gems ruby gems bootsnap lib bootsnap load path cache core ext kernel require rb in block in require with bootsnap lfi usr local rvm gems ruby gems bootsnap lib bootsnap load path cache loaded features index rb in register usr local rvm gems ruby gems bootsnap lib bootsnap load path cache core ext kernel require rb in require with bootsnap lfi usr local rvm gems ruby gems bootsnap lib bootsnap load path cache core ext kernel require rb in require usr local rvm gems ruby gems activesupport lib active support dependencies rb in block in require usr local rvm gems ruby gems activesupport lib active support dependencies rb in load dependency usr local rvm gems ruby gems activesupport lib active support dependencies rb in require bin rails in steps to reproduce the behavior have a specific byte sequence of a iso jp encoded mail try to import it yes i m sure this is a bug and no feature request or a general question
| 1
|
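Note on the zammad record above: the unprocessable-mail traceback ends in an `Encoding::InvalidByteSequenceError` raised while force-encoding an ISO-2022-JP part. The sketch below is not the zammad code (which is Ruby); it is a minimal Python illustration, using a made-up byte string, of how a single stray byte makes a strict ISO-2022-JP decode fail and how a lossy fallback still recovers the rest of the message.

```python
# Sketch: strict vs. lossy decoding of an ISO-2022-JP byte string.
# The byte sequence is hypothetical: a valid JIS escape sequence followed
# by a stray 0xFF byte, mimicking the "invalid byte sequence" situation.
raw = b"\x1b$B$3$s$K$A$O\x1b(B \xff trailing ascii"

try:
    text = raw.decode("iso2022_jp")                      # strict, mirrors the failing path
except UnicodeDecodeError as exc:
    print(f"strict decode failed: {exc}")
    text = raw.decode("iso2022_jp", errors="replace")    # lossy but never raises

print(text)
```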
414,700
| 12,110,417,518
|
IssuesEvent
|
2020-04-21 10:22:46
|
webcompat/web-bugs
|
https://api.github.com/repos/webcompat/web-bugs
|
closed
|
static-course-assets.s3.amazonaws.com - design is broken
|
browser-firefox engine-gecko priority-critical
|
<!-- @browser: Firefox 75.0 -->
<!-- @ua_header: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:75.0) Gecko/20100101 Firefox/75.0 -->
<!-- @reported_with: -->
<!-- @public_url: https://github.com/webcompat/web-bugs/issues/51940 -->
**URL**: https://static-course-assets.s3.amazonaws.com/CyberEss/pt/index.html#2.0.1.1
**Browser / Version**: Firefox 75.0
**Operating System**: Windows 10
**Tested Another Browser**: Yes Chrome
**Problem type**: Design is broken
**Description**: Items not fully visible
**Steps to Reproduce**:
When trying to study Cisco's cybersecurity course the pages does not show the content but when inspecting frame you are able to see it.
<details><summary>View the screenshot</summary><img alt='Screenshot' src='https://webcompat.com/uploads/2020/4/8618dc85-832a-4f19-a38d-1e5dfa8bb975.jpg'></details>
<details>
<summary>Browser Configuration</summary>
<ul>
<li>None</li>
</ul>
</details>
Submitted in the name of `@xdjgustavox`
_From [webcompat.com](https://webcompat.com/) with ❤️_
|
1.0
|
static-course-assets.s3.amazonaws.com - design is broken - <!-- @browser: Firefox 75.0 -->
<!-- @ua_header: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:75.0) Gecko/20100101 Firefox/75.0 -->
<!-- @reported_with: -->
<!-- @public_url: https://github.com/webcompat/web-bugs/issues/51940 -->
**URL**: https://static-course-assets.s3.amazonaws.com/CyberEss/pt/index.html#2.0.1.1
**Browser / Version**: Firefox 75.0
**Operating System**: Windows 10
**Tested Another Browser**: Yes Chrome
**Problem type**: Design is broken
**Description**: Items not fully visible
**Steps to Reproduce**:
When trying to study Cisco's cybersecurity course the pages does not show the content but when inspecting frame you are able to see it.
<details><summary>View the screenshot</summary><img alt='Screenshot' src='https://webcompat.com/uploads/2020/4/8618dc85-832a-4f19-a38d-1e5dfa8bb975.jpg'></details>
<details>
<summary>Browser Configuration</summary>
<ul>
<li>None</li>
</ul>
</details>
Submitted in the name of `@xdjgustavox`
_From [webcompat.com](https://webcompat.com/) with ❤️_
|
non_process
|
static course assets amazonaws com design is broken url browser version firefox operating system windows tested another browser yes chrome problem type design is broken description items not fully visible steps to reproduce when trying to study cisco s cybersecurity course the pages does not show the content but when inspecting frame you are able to see it view the screenshot img alt screenshot src browser configuration none submitted in the name of xdjgustavox from with ❤️
| 0
|
306,398
| 23,158,866,601
|
IssuesEvent
|
2022-07-29 15:30:03
|
scgbern/scgbib
|
https://api.github.com/repos/scgbern/scgbib
|
closed
|
Update the github README to explain the main files
|
documentation
|
Explain all the files, starting with scg.bib
|
1.0
|
Update the github README to explain the main files - Explain all the files, starting with scg.bib
|
non_process
|
update the github readme to explain the main files explain all the files starting with scg bib
| 0
|
7,244
| 10,410,440,608
|
IssuesEvent
|
2019-09-13 11:24:38
|
energy-modelling-toolkit/Dispa-SET
|
https://api.github.com/repos/energy-modelling-toolkit/Dispa-SET
|
opened
|
Chord diagram for net-flows
|
enhancement postprocessing
|
It would be nice to include a Chord diagram for the net-flows between different zones
Plot.ly offers this kind of graphical representation
|
1.0
|
Chord diagram for net-flows - It would be nice to include a Chord diagram for the net-flows between different zones
Plot.ly offers this kind of graphical representation
|
process
|
chord diagram for net flows it would be nice to include a chord diagram for the net flows between different zones plot ly offers this kind of graphical representation
| 1
|
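Note on the Dispa-SET record above: Plotly has no built-in chord trace, so a true chord diagram needs custom path construction; the closest built-in flow view is the Sankey diagram. The sketch below uses invented zone names and flow values purely to show the shape of the call; it is not part of Dispa-SET's post-processing code.

```python
import plotly.graph_objects as go

# Hypothetical net flows between three zones; replace with model output.
zones = ["Zone A", "Zone B", "Zone C"]
links = dict(
    source=[0, 0, 1],          # indices into `zones`
    target=[1, 2, 2],
    value=[120.0, 45.5, 80.2],
)

fig = go.Figure(go.Sankey(node=dict(label=zones), link=links))
fig.update_layout(title_text="Net flows between zones (illustrative data)")
fig.write_html("net_flows.html")   # or fig.show() in a notebook
```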
19,883
| 26,327,576,025
|
IssuesEvent
|
2023-01-10 08:15:19
|
qgis/QGIS
|
https://api.github.com/repos/qgis/QGIS
|
closed
|
Processing dialogs don't export --PROJECT_PATH (qgis_process) and 'project_path' (JSON) when needed
|
Processing Bug
|
### What is the bug or the crash?
With the 'Advanced' button, the processing tool dialog provides a means to export the chosen parameters as `qgis_process` command syntax or as a JSON string. The same holds when right clicking a history item in the processing history dialog.
When using `qgis_process` with some algorithms, e.g. `native:printlayouttoimage`, it is needed to add the `PROJECT_PATH` argument for such algorithms to work. This also holds for algorithms using expressions which refer to layers and attribute fields (cf. #50481).
However, the exported string for `qgis_process` (from the dialog) seems to never contain the `PROJECT_PATH` argument (command) or the `project_path` element (JSON), so in the above cases this string won't be enough for `qgis_process` to run successfully.
### Steps to reproduce the issue
1. As an example, make or use a QGIS project with at least 1 layout.
2. From the toolbox, choose 'Export print layout as image' (as example) and select a print layout.
3. In the processing dialog, click 'Advanced' and choose either 'Copy as qgis_process command' or 'Copy as JSON'.
4. Inspect the content of your clipboard: the `PROJECT_PATH` argument (command) or the `project_path` element (JSON) is missing.
### Versions
<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.0//EN" "http://www.w3.org/TR/REC-html40/strict.dtd">
<html><head><meta http-equiv="Content-Type" content="text/html; charset=utf-8" /></head><body>
QGIS version | 3.26.3-Buenos Aires | QGIS code revision | 65e4edfdad
-- | -- | -- | --
Qt version | 5.12.8
Python version | 3.8.10
GDAL/OGR version | 3.4.3
PROJ version | 8.2.0
EPSG Registry database version | v10.038 (2021-10-21)
GEOS version | 3.10.2-CAPI-1.16.0
SQLite version | 3.31.1
PDAL version | 2.2.0
PostgreSQL client version | 12.12 (Ubuntu 12.12-0ubuntu0.20.04.1)
SpatiaLite version | 5.0.1
QWT version | 6.1.4
QScintilla2 version | 2.11.2
OS version | Linux Mint 20
| | |
Active Python plugins
geopunt4Qgis | 2.2.4
ViewshedAnalysis | 1.7
cartography_tools | 1.2.1
quick_map_services | 0.19.27
grassprovider | 2.12.99
processing | 2.12.99
sagaprovider | 2.12.99
db_manager | 0.1.20
MetaSearch | 0.3.6
otbprovider | 2.12.99
</body></html>
### Supported QGIS version
- [X] I'm running a supported QGIS version according to the roadmap.
### New profile
- [ ] I tried with a new QGIS profile
### Additional context
_No response_
|
1.0
|
Processing dialogs don't export --PROJECT_PATH (qgis_process) and 'project_path' (JSON) when needed - ### What is the bug or the crash?
With the 'Advanced' button, the processing tool dialog provides a means to export the chosen parameters as `qgis_process` command syntax or as a JSON string. The same holds when right clicking a history item in the processing history dialog.
When using `qgis_process` with some algorithms, e.g. `native:printlayouttoimage`, it is needed to add the `PROJECT_PATH` argument for such algorithms to work. This also holds for algorithms using expressions which refer to layers and attribute fields (cf. #50481).
However, the exported string for `qgis_process` (from the dialog) seems to never contain the `PROJECT_PATH` argument (command) or the `project_path` element (JSON), so in the above cases this string won't be enough for `qgis_process` to run successfully.
### Steps to reproduce the issue
1. As an example, make or use a QGIS project with at least 1 layout.
2. From the toolbox, choose 'Export print layout as image' (as example) and select a print layout.
3. In the processing dialog, click 'Advanced' and choose either 'Copy as qgis_process command' or 'Copy as JSON'.
4. Inspect the content of your clipboard: the `PROJECT_PATH` argument (command) or the `project_path` element (JSON) is missing.
### Versions
<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.0//EN" "http://www.w3.org/TR/REC-html40/strict.dtd">
<html><head><meta http-equiv="Content-Type" content="text/html; charset=utf-8" /></head><body>
QGIS version | 3.26.3-Buenos Aires | QGIS code revision | 65e4edfdad
-- | -- | -- | --
Qt version | 5.12.8
Python version | 3.8.10
GDAL/OGR version | 3.4.3
PROJ version | 8.2.0
EPSG Registry database version | v10.038 (2021-10-21)
GEOS version | 3.10.2-CAPI-1.16.0
SQLite version | 3.31.1
PDAL version | 2.2.0
PostgreSQL client version | 12.12 (Ubuntu 12.12-0ubuntu0.20.04.1)
SpatiaLite version | 5.0.1
QWT version | 6.1.4
QScintilla2 version | 2.11.2
OS version | Linux Mint 20
| | |
Active Python plugins
geopunt4Qgis | 2.2.4
ViewshedAnalysis | 1.7
cartography_tools | 1.2.1
quick_map_services | 0.19.27
grassprovider | 2.12.99
processing | 2.12.99
sagaprovider | 2.12.99
db_manager | 0.1.20
MetaSearch | 0.3.6
otbprovider | 2.12.99
</body></html>
### Supported QGIS version
- [X] I'm running a supported QGIS version according to the roadmap.
### New profile
- [ ] I tried with a new QGIS profile
### Additional context
_No response_
|
process
|
processing dialogs don t export project path qgis process and project path json when needed what is the bug or the crash with the advanced button the processing tool dialog provides a means to export the chosen parameters as qgis process command syntax or as a json string the same holds when right clicking a history item in the processing history dialog when using qgis process with some algorithms e g native printlayouttoimage it is needed to add the project path argument for such algorithms to work this also holds for algorithms using expressions which refer to layers and attribute fields cf however the exported string for qgis process from the dialog seems to never contain the project path argument command or the project path element json so in the above cases this string won t be enough for qgis process to run successfully steps to reproduce the issue as an example make or use a qgis project with at least layout from the toolbox choose export print layout as image as example and select a print layout in the processing dialog click advanced and choose either copy as qgis process command or copy as json inspect the content of your clipboard the project path argument command or the project path element json is missing versions doctype html public dtd html en qgis version buenos aires qgis code revision qt version python version gdal ogr version proj version epsg registry database version geos version capi sqlite version pdal version postgresql client version ubuntu spatialite version qwt version version os version linux mint active python plugins viewshedanalysis cartography tools quick map services grassprovider processing sagaprovider db manager metasearch otbprovider supported qgis version i m running a supported qgis version according to the roadmap new profile i tried with a new qgis profile additional context no response
| 1
|
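Note on the QGIS record above: for completeness, this is roughly what an invocation that does carry the project reference could look like. The layout name, project path and output path are placeholders, and the exact JSON shape accepted on stdin may differ between QGIS versions, so treat this as a hedged sketch rather than the canonical syntax.

```python
import json
import subprocess

project = "/data/maps/demo_project.qgz"        # placeholder project file

# Variant 1: plain command line with the --PROJECT_PATH argument the export omits.
subprocess.run(
    ["qgis_process", "run", "native:printlayouttoimage",
     f"--PROJECT_PATH={project}", "--LAYOUT=Layout 1", "--OUTPUT=/tmp/layout.png"],
    check=True,
)

# Variant 2: parameters as JSON on stdin ('-'), including the project_path element.
params = {
    "project_path": project,
    "inputs": {"LAYOUT": "Layout 1", "OUTPUT": "/tmp/layout.png"},
}
subprocess.run(
    ["qgis_process", "run", "native:printlayouttoimage", "-"],
    input=json.dumps(params), text=True, check=True,
)
```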
35,705
| 7,799,758,235
|
IssuesEvent
|
2018-06-09 00:16:22
|
StrikeNP/trac_test
|
https://api.github.com/repos/StrikeNP/trac_test
|
closed
|
Extra timestep test failures on 20 Jan 2011 (Trac #397)
|
Migrated from Trac clubb_src defect dschanen@uwm.edu
|
'''Introduction'''
The timestep tests don't look as good as they used to. A summary of the results is below:
```text
Dec 31
----------
10 min: No failures
60 min: gabls3, rico_LH, twp_ice failure
Jan 8:
--------
10 min: twp_ice failure
60 min: gabls3, rico, rico_LH, twp_ice failure
Jan 19:
----------
10 min: No failures
60 min: gabls3, cloud_feedback_s6, rico_LH,
twp_ice failure
Jan 20:
----------
10 min: rico_LH, twp_ice failure
60 min: gabls2, gabls3, mpace_b, rico,
astex_a209, cloud_feedback_s6, mpace_b_LH
rico_LH, twp_ice failure
Jan. 21:
-----------
10 min: rico_LH failure
60 min: gabls3, mpace_b, cloud_feedback_s6,
mpace_b_LH, rico_LH, twp_ice failure
```
In particular, we have many new crashes on Jan 20. This is probably due to commit r4980, in which we added wp3_on_wp2 to the arguments for wpxp_ta, smoothed the sigma_sqd_w terms, and relaxed clipping on wprtp/wpthlp.
I don't understand why this change would make the code more unstable. Is there a bug? Should we look at the output and assess the nature of the crash? Do we need to adjust the tolerance on the relaxed clipping?
Attachments:
Migrated from http://carson.math.uwm.edu/trac/clubb/ticket/397
```json
{
"status": "closed",
"changetime": "2011-03-29T21:57:48",
"description": "'''Introduction'''\n\nThe timestep tests don't look as good as they used to. A summary of the results is below:\n{{{\nDec 31 \n----------\n\n10 min: No failures\n\n60 min: gabls3, rico_LH, twp_ice failure\n\nJan 8:\n--------\n\n10 min: twp_ice failure\n\n60 min: gabls3, rico, rico_LH, twp_ice failure\n\nJan 19:\n----------\n\n10 min: No failures\n\n60 min: gabls3, cloud_feedback_s6, rico_LH, \ntwp_ice failure\n\nJan 20:\n----------\n\n10 min: rico_LH, twp_ice failure\n\n60 min: gabls2, gabls3, mpace_b, rico,\nastex_a209, cloud_feedback_s6, mpace_b_LH\nrico_LH, twp_ice failure\n\nJan. 21:\n-----------\n\n10 min: rico_LH failure\n\n60 min: gabls3, mpace_b, cloud_feedback_s6,\nmpace_b_LH, rico_LH, twp_ice failure\n}}}\n\nIn particular, we have many new crashes on Jan 20. This is probably due to commit r4980, in which we added wp3_on_wp2 to the arguments for wpxp_ta, smoothed the sigma_sqd_w terms, and relaxed clipping on wprtp/wpthlp. \n\nI don't understand why this change would make the code more unstable. Is there a bug? Should we look at the output and assess the nature of the crash? Do we need to adjust the tolerance on the relaxed clipping? ",
"reporter": "vlarson@uwm.edu",
"cc": "vlarson@uwm.edu, meyern@uwm.edu",
"resolution": "Verified by V. Larson",
"_ts": "1301435868000000",
"component": "clubb_src",
"summary": "Extra timestep test failures on 20 Jan 2011",
"priority": "critical",
"keywords": "",
"time": "2011-01-22T22:06:05",
"milestone": "Improve CLUBB's accuracy at coarse vertical grid spacing",
"owner": "dschanen@uwm.edu",
"type": "defect"
}
```
|
1.0
|
Extra timestep test failures on 20 Jan 2011 (Trac #397) - '''Introduction'''
The timestep tests don't look as good as they used to. A summary of the results is below:
```text
Dec 31
----------
10 min: No failures
60 min: gabls3, rico_LH, twp_ice failure
Jan 8:
--------
10 min: twp_ice failure
60 min: gabls3, rico, rico_LH, twp_ice failure
Jan 19:
----------
10 min: No failures
60 min: gabls3, cloud_feedback_s6, rico_LH,
twp_ice failure
Jan 20:
----------
10 min: rico_LH, twp_ice failure
60 min: gabls2, gabls3, mpace_b, rico,
astex_a209, cloud_feedback_s6, mpace_b_LH
rico_LH, twp_ice failure
Jan. 21:
-----------
10 min: rico_LH failure
60 min: gabls3, mpace_b, cloud_feedback_s6,
mpace_b_LH, rico_LH, twp_ice failure
```
In particular, we have many new crashes on Jan 20. This is probably due to commit r4980, in which we added wp3_on_wp2 to the arguments for wpxp_ta, smoothed the sigma_sqd_w terms, and relaxed clipping on wprtp/wpthlp.
I don't understand why this change would make the code more unstable. Is there a bug? Should we look at the output and assess the nature of the crash? Do we need to adjust the tolerance on the relaxed clipping?
Attachments:
Migrated from http://carson.math.uwm.edu/trac/clubb/ticket/397
```json
{
"status": "closed",
"changetime": "2011-03-29T21:57:48",
"description": "'''Introduction'''\n\nThe timestep tests don't look as good as they used to. A summary of the results is below:\n{{{\nDec 31 \n----------\n\n10 min: No failures\n\n60 min: gabls3, rico_LH, twp_ice failure\n\nJan 8:\n--------\n\n10 min: twp_ice failure\n\n60 min: gabls3, rico, rico_LH, twp_ice failure\n\nJan 19:\n----------\n\n10 min: No failures\n\n60 min: gabls3, cloud_feedback_s6, rico_LH, \ntwp_ice failure\n\nJan 20:\n----------\n\n10 min: rico_LH, twp_ice failure\n\n60 min: gabls2, gabls3, mpace_b, rico,\nastex_a209, cloud_feedback_s6, mpace_b_LH\nrico_LH, twp_ice failure\n\nJan. 21:\n-----------\n\n10 min: rico_LH failure\n\n60 min: gabls3, mpace_b, cloud_feedback_s6,\nmpace_b_LH, rico_LH, twp_ice failure\n}}}\n\nIn particular, we have many new crashes on Jan 20. This is probably due to commit r4980, in which we added wp3_on_wp2 to the arguments for wpxp_ta, smoothed the sigma_sqd_w terms, and relaxed clipping on wprtp/wpthlp. \n\nI don't understand why this change would make the code more unstable. Is there a bug? Should we look at the output and assess the nature of the crash? Do we need to adjust the tolerance on the relaxed clipping? ",
"reporter": "vlarson@uwm.edu",
"cc": "vlarson@uwm.edu, meyern@uwm.edu",
"resolution": "Verified by V. Larson",
"_ts": "1301435868000000",
"component": "clubb_src",
"summary": "Extra timestep test failures on 20 Jan 2011",
"priority": "critical",
"keywords": "",
"time": "2011-01-22T22:06:05",
"milestone": "Improve CLUBB's accuracy at coarse vertical grid spacing",
"owner": "dschanen@uwm.edu",
"type": "defect"
}
```
|
non_process
|
extra timestep test failures on jan trac introduction the timestep tests don t look as good as they used to a summary of the results is below text dec min no failures min rico lh twp ice failure jan min twp ice failure min rico rico lh twp ice failure jan min no failures min cloud feedback rico lh twp ice failure jan min rico lh twp ice failure min mpace b rico astex cloud feedback mpace b lh rico lh twp ice failure jan min rico lh failure min mpace b cloud feedback mpace b lh rico lh twp ice failure in particular we have many new crashes on jan this is probably due to commit in which we added on to the arguments for wpxp ta smoothed the sigma sqd w terms and relaxed clipping on wprtp wpthlp i don t understand why this change would make the code more unstable is there a bug should we look at the output and assess the nature of the crash do we need to adjust the tolerance on the relaxed clipping attachments migrated from json status closed changetime description introduction n nthe timestep tests don t look as good as they used to a summary of the results is below n ndec n n min no failures n min rico lh twp ice failure n njan n n min twp ice failure n min rico rico lh twp ice failure n njan n n min no failures n min cloud feedback rico lh ntwp ice failure n njan n n min rico lh twp ice failure n min mpace b rico nastex cloud feedback mpace b lh nrico lh twp ice failure n njan n n min rico lh failure n min mpace b cloud feedback nmpace b lh rico lh twp ice failure n n nin particular we have many new crashes on jan this is probably due to commit in which we added on to the arguments for wpxp ta smoothed the sigma sqd w terms and relaxed clipping on wprtp wpthlp n ni don t understand why this change would make the code more unstable is there a bug should we look at the output and assess the nature of the crash do we need to adjust the tolerance on the relaxed clipping reporter vlarson uwm edu cc vlarson uwm edu meyern uwm edu resolution verified by v larson ts component clubb src summary extra timestep test failures on jan priority critical keywords time milestone improve clubb s accuracy at coarse vertical grid spacing owner dschanen uwm edu type defect
| 0
|
14,087
| 16,978,004,314
|
IssuesEvent
|
2021-06-30 03:54:50
|
theislab/scanpy
|
https://api.github.com/repos/theislab/scanpy
|
opened
|
Build process having some issues
|
Bug 🐛 Development Process 🚀
|
## Current build process
Current build process (as used on azure) does not include the `LICENSE`, as well as a number of other files (which may not be necessary)
```
git checkout 1.8.0
python -m build --sdist --wheel .
tar tzf dist/scanpy-1.8.0.tar.gz
```
<details>
<summary> contents of source dist </summary>
```
scanpy-1.8.0/README.rst
scanpy-1.8.0/pyproject.toml
scanpy-1.8.0/scanpy/__init__.py
scanpy-1.8.0/scanpy/__main__.py
scanpy-1.8.0/scanpy/_compat.py
scanpy-1.8.0/scanpy/_metadata.py
scanpy-1.8.0/scanpy/_settings.py
scanpy-1.8.0/scanpy/_utils/__init__.py
scanpy-1.8.0/scanpy/_utils/compute/is_constant.py
scanpy-1.8.0/scanpy/cli.py
scanpy-1.8.0/scanpy/datasets/10x_pbmc68k_reduced.h5ad
scanpy-1.8.0/scanpy/datasets/__init__.py
scanpy-1.8.0/scanpy/datasets/_datasets.py
scanpy-1.8.0/scanpy/datasets/_ebi_expression_atlas.py
scanpy-1.8.0/scanpy/datasets/_utils.py
scanpy-1.8.0/scanpy/datasets/krumsiek11.txt
scanpy-1.8.0/scanpy/datasets/toggleswitch.txt
scanpy-1.8.0/scanpy/external/__init__.py
scanpy-1.8.0/scanpy/external/exporting.py
scanpy-1.8.0/scanpy/external/pl.py
scanpy-1.8.0/scanpy/external/pp/__init__.py
scanpy-1.8.0/scanpy/external/pp/_bbknn.py
scanpy-1.8.0/scanpy/external/pp/_dca.py
scanpy-1.8.0/scanpy/external/pp/_harmony_integrate.py
scanpy-1.8.0/scanpy/external/pp/_hashsolo.py
scanpy-1.8.0/scanpy/external/pp/_magic.py
scanpy-1.8.0/scanpy/external/pp/_mnn_correct.py
scanpy-1.8.0/scanpy/external/pp/_scanorama_integrate.py
scanpy-1.8.0/scanpy/external/pp/_scrublet.py
scanpy-1.8.0/scanpy/external/tl/__init__.py
scanpy-1.8.0/scanpy/external/tl/_harmony_timeseries.py
scanpy-1.8.0/scanpy/external/tl/_palantir.py
scanpy-1.8.0/scanpy/external/tl/_phate.py
scanpy-1.8.0/scanpy/external/tl/_phenograph.py
scanpy-1.8.0/scanpy/external/tl/_pypairs.py
scanpy-1.8.0/scanpy/external/tl/_sam.py
scanpy-1.8.0/scanpy/external/tl/_trimap.py
scanpy-1.8.0/scanpy/external/tl/_wishbone.py
scanpy-1.8.0/scanpy/get/__init__.py
scanpy-1.8.0/scanpy/get/get.py
scanpy-1.8.0/scanpy/logging.py
scanpy-1.8.0/scanpy/metrics/__init__.py
scanpy-1.8.0/scanpy/metrics/_gearys_c.py
scanpy-1.8.0/scanpy/metrics/_metrics.py
scanpy-1.8.0/scanpy/metrics/_morans_i.py
scanpy-1.8.0/scanpy/neighbors/__init__.py
scanpy-1.8.0/scanpy/plotting/__init__.py
scanpy-1.8.0/scanpy/plotting/_anndata.py
scanpy-1.8.0/scanpy/plotting/_baseplot_class.py
scanpy-1.8.0/scanpy/plotting/_docs.py
scanpy-1.8.0/scanpy/plotting/_dotplot.py
scanpy-1.8.0/scanpy/plotting/_matrixplot.py
scanpy-1.8.0/scanpy/plotting/_preprocessing.py
scanpy-1.8.0/scanpy/plotting/_qc.py
scanpy-1.8.0/scanpy/plotting/_rcmod.py
scanpy-1.8.0/scanpy/plotting/_stacked_violin.py
scanpy-1.8.0/scanpy/plotting/_tools/__init__.py
scanpy-1.8.0/scanpy/plotting/_tools/paga.py
scanpy-1.8.0/scanpy/plotting/_tools/scatterplots.py
scanpy-1.8.0/scanpy/plotting/_utils.py
scanpy-1.8.0/scanpy/plotting/palettes.py
scanpy-1.8.0/scanpy/preprocessing/__init__.py
scanpy-1.8.0/scanpy/preprocessing/_combat.py
scanpy-1.8.0/scanpy/preprocessing/_deprecated/__init__.py
scanpy-1.8.0/scanpy/preprocessing/_deprecated/highly_variable_genes.py
scanpy-1.8.0/scanpy/preprocessing/_distributed.py
scanpy-1.8.0/scanpy/preprocessing/_docs.py
scanpy-1.8.0/scanpy/preprocessing/_highly_variable_genes.py
scanpy-1.8.0/scanpy/preprocessing/_normalization.py
scanpy-1.8.0/scanpy/preprocessing/_pca.py
scanpy-1.8.0/scanpy/preprocessing/_qc.py
scanpy-1.8.0/scanpy/preprocessing/_recipes.py
scanpy-1.8.0/scanpy/preprocessing/_simple.py
scanpy-1.8.0/scanpy/preprocessing/_utils.py
scanpy-1.8.0/scanpy/queries/__init__.py
scanpy-1.8.0/scanpy/queries/_queries.py
scanpy-1.8.0/scanpy/readwrite.py
scanpy-1.8.0/scanpy/sim_models/__init__.py
scanpy-1.8.0/scanpy/sim_models/krumsiek11.txt
scanpy-1.8.0/scanpy/sim_models/krumsiek11_params.txt
scanpy-1.8.0/scanpy/sim_models/toggleswitch.txt
scanpy-1.8.0/scanpy/sim_models/toggleswitch_params.txt
scanpy-1.8.0/scanpy/tools/__init__.py
scanpy-1.8.0/scanpy/tools/_dendrogram.py
scanpy-1.8.0/scanpy/tools/_diffmap.py
scanpy-1.8.0/scanpy/tools/_dpt.py
scanpy-1.8.0/scanpy/tools/_draw_graph.py
scanpy-1.8.0/scanpy/tools/_embedding_density.py
scanpy-1.8.0/scanpy/tools/_ingest.py
scanpy-1.8.0/scanpy/tools/_leiden.py
scanpy-1.8.0/scanpy/tools/_louvain.py
scanpy-1.8.0/scanpy/tools/_marker_gene_overlap.py
scanpy-1.8.0/scanpy/tools/_paga.py
scanpy-1.8.0/scanpy/tools/_pca.py
scanpy-1.8.0/scanpy/tools/_rank_genes_groups.py
scanpy-1.8.0/scanpy/tools/_score_genes.py
scanpy-1.8.0/scanpy/tools/_sim.py
scanpy-1.8.0/scanpy/tools/_top_genes.py
scanpy-1.8.0/scanpy/tools/_tsne.py
scanpy-1.8.0/scanpy/tools/_umap.py
scanpy-1.8.0/scanpy/tools/_utils.py
scanpy-1.8.0/scanpy/tools/_utils_clustering.py
scanpy-1.8.0/PKG-INFO
```
</details>
## Flit build
Using flit to build, it includes all the files I would expect, but gets the version wrong on the wheel
```
rm -r dist
flit build
ls dist
```
```
scanpy-1.8.0.dev112+g1a3ae03c.d20210628-py3-none-any.whl
scanpy-1.8.0.tar.gz
```
## `setup.py`
Is including a bunch of files it shouldn't and is deprecated anyway.
---------
@flying-sheep @Zethson
|
1.0
|
Build process having some issues - ## Current build process
Current build process (as used on azure) does not include the `LICENSE`, as well as a number of other files (which may not be necessary)
```
git checkout 1.8.0
python -m build --sdist --wheel .
tar tzf dist/scanpy-1.8.0.tar.gz
```
<details>
<summary> contents of source dist </summary>
```
scanpy-1.8.0/README.rst
scanpy-1.8.0/pyproject.toml
scanpy-1.8.0/scanpy/__init__.py
scanpy-1.8.0/scanpy/__main__.py
scanpy-1.8.0/scanpy/_compat.py
scanpy-1.8.0/scanpy/_metadata.py
scanpy-1.8.0/scanpy/_settings.py
scanpy-1.8.0/scanpy/_utils/__init__.py
scanpy-1.8.0/scanpy/_utils/compute/is_constant.py
scanpy-1.8.0/scanpy/cli.py
scanpy-1.8.0/scanpy/datasets/10x_pbmc68k_reduced.h5ad
scanpy-1.8.0/scanpy/datasets/__init__.py
scanpy-1.8.0/scanpy/datasets/_datasets.py
scanpy-1.8.0/scanpy/datasets/_ebi_expression_atlas.py
scanpy-1.8.0/scanpy/datasets/_utils.py
scanpy-1.8.0/scanpy/datasets/krumsiek11.txt
scanpy-1.8.0/scanpy/datasets/toggleswitch.txt
scanpy-1.8.0/scanpy/external/__init__.py
scanpy-1.8.0/scanpy/external/exporting.py
scanpy-1.8.0/scanpy/external/pl.py
scanpy-1.8.0/scanpy/external/pp/__init__.py
scanpy-1.8.0/scanpy/external/pp/_bbknn.py
scanpy-1.8.0/scanpy/external/pp/_dca.py
scanpy-1.8.0/scanpy/external/pp/_harmony_integrate.py
scanpy-1.8.0/scanpy/external/pp/_hashsolo.py
scanpy-1.8.0/scanpy/external/pp/_magic.py
scanpy-1.8.0/scanpy/external/pp/_mnn_correct.py
scanpy-1.8.0/scanpy/external/pp/_scanorama_integrate.py
scanpy-1.8.0/scanpy/external/pp/_scrublet.py
scanpy-1.8.0/scanpy/external/tl/__init__.py
scanpy-1.8.0/scanpy/external/tl/_harmony_timeseries.py
scanpy-1.8.0/scanpy/external/tl/_palantir.py
scanpy-1.8.0/scanpy/external/tl/_phate.py
scanpy-1.8.0/scanpy/external/tl/_phenograph.py
scanpy-1.8.0/scanpy/external/tl/_pypairs.py
scanpy-1.8.0/scanpy/external/tl/_sam.py
scanpy-1.8.0/scanpy/external/tl/_trimap.py
scanpy-1.8.0/scanpy/external/tl/_wishbone.py
scanpy-1.8.0/scanpy/get/__init__.py
scanpy-1.8.0/scanpy/get/get.py
scanpy-1.8.0/scanpy/logging.py
scanpy-1.8.0/scanpy/metrics/__init__.py
scanpy-1.8.0/scanpy/metrics/_gearys_c.py
scanpy-1.8.0/scanpy/metrics/_metrics.py
scanpy-1.8.0/scanpy/metrics/_morans_i.py
scanpy-1.8.0/scanpy/neighbors/__init__.py
scanpy-1.8.0/scanpy/plotting/__init__.py
scanpy-1.8.0/scanpy/plotting/_anndata.py
scanpy-1.8.0/scanpy/plotting/_baseplot_class.py
scanpy-1.8.0/scanpy/plotting/_docs.py
scanpy-1.8.0/scanpy/plotting/_dotplot.py
scanpy-1.8.0/scanpy/plotting/_matrixplot.py
scanpy-1.8.0/scanpy/plotting/_preprocessing.py
scanpy-1.8.0/scanpy/plotting/_qc.py
scanpy-1.8.0/scanpy/plotting/_rcmod.py
scanpy-1.8.0/scanpy/plotting/_stacked_violin.py
scanpy-1.8.0/scanpy/plotting/_tools/__init__.py
scanpy-1.8.0/scanpy/plotting/_tools/paga.py
scanpy-1.8.0/scanpy/plotting/_tools/scatterplots.py
scanpy-1.8.0/scanpy/plotting/_utils.py
scanpy-1.8.0/scanpy/plotting/palettes.py
scanpy-1.8.0/scanpy/preprocessing/__init__.py
scanpy-1.8.0/scanpy/preprocessing/_combat.py
scanpy-1.8.0/scanpy/preprocessing/_deprecated/__init__.py
scanpy-1.8.0/scanpy/preprocessing/_deprecated/highly_variable_genes.py
scanpy-1.8.0/scanpy/preprocessing/_distributed.py
scanpy-1.8.0/scanpy/preprocessing/_docs.py
scanpy-1.8.0/scanpy/preprocessing/_highly_variable_genes.py
scanpy-1.8.0/scanpy/preprocessing/_normalization.py
scanpy-1.8.0/scanpy/preprocessing/_pca.py
scanpy-1.8.0/scanpy/preprocessing/_qc.py
scanpy-1.8.0/scanpy/preprocessing/_recipes.py
scanpy-1.8.0/scanpy/preprocessing/_simple.py
scanpy-1.8.0/scanpy/preprocessing/_utils.py
scanpy-1.8.0/scanpy/queries/__init__.py
scanpy-1.8.0/scanpy/queries/_queries.py
scanpy-1.8.0/scanpy/readwrite.py
scanpy-1.8.0/scanpy/sim_models/__init__.py
scanpy-1.8.0/scanpy/sim_models/krumsiek11.txt
scanpy-1.8.0/scanpy/sim_models/krumsiek11_params.txt
scanpy-1.8.0/scanpy/sim_models/toggleswitch.txt
scanpy-1.8.0/scanpy/sim_models/toggleswitch_params.txt
scanpy-1.8.0/scanpy/tools/__init__.py
scanpy-1.8.0/scanpy/tools/_dendrogram.py
scanpy-1.8.0/scanpy/tools/_diffmap.py
scanpy-1.8.0/scanpy/tools/_dpt.py
scanpy-1.8.0/scanpy/tools/_draw_graph.py
scanpy-1.8.0/scanpy/tools/_embedding_density.py
scanpy-1.8.0/scanpy/tools/_ingest.py
scanpy-1.8.0/scanpy/tools/_leiden.py
scanpy-1.8.0/scanpy/tools/_louvain.py
scanpy-1.8.0/scanpy/tools/_marker_gene_overlap.py
scanpy-1.8.0/scanpy/tools/_paga.py
scanpy-1.8.0/scanpy/tools/_pca.py
scanpy-1.8.0/scanpy/tools/_rank_genes_groups.py
scanpy-1.8.0/scanpy/tools/_score_genes.py
scanpy-1.8.0/scanpy/tools/_sim.py
scanpy-1.8.0/scanpy/tools/_top_genes.py
scanpy-1.8.0/scanpy/tools/_tsne.py
scanpy-1.8.0/scanpy/tools/_umap.py
scanpy-1.8.0/scanpy/tools/_utils.py
scanpy-1.8.0/scanpy/tools/_utils_clustering.py
scanpy-1.8.0/PKG-INFO
```
</details>
## Flit build
Using flit to build, it includes all the files I would expect, but gets the version wrong on the wheel
```
rm -r dist
flit build
ls dist
```
```
scanpy-1.8.0.dev112+g1a3ae03c.d20210628-py3-none-any.whl
scanpy-1.8.0.tar.gz
```
## `setup.py`
Is including a bunch of files it shouldn't and is deprecated anyway.
---------
@flying-sheep @Zethson
|
process
|
build process having some issues current build process current build process as used on azure does not include the license as well as a number of other files which may note be necessary git checkout python m build sdist wheel tar tzf dist scanpy tar gz contents of source dist scanpy readme rst scanpy pyproject toml scanpy scanpy init py scanpy scanpy main py scanpy scanpy compat py scanpy scanpy metadata py scanpy scanpy settings py scanpy scanpy utils init py scanpy scanpy utils compute is constant py scanpy scanpy cli py scanpy scanpy datasets reduced scanpy scanpy datasets init py scanpy scanpy datasets datasets py scanpy scanpy datasets ebi expression atlas py scanpy scanpy datasets utils py scanpy scanpy datasets txt scanpy scanpy datasets toggleswitch txt scanpy scanpy external init py scanpy scanpy external exporting py scanpy scanpy external pl py scanpy scanpy external pp init py scanpy scanpy external pp bbknn py scanpy scanpy external pp dca py scanpy scanpy external pp harmony integrate py scanpy scanpy external pp hashsolo py scanpy scanpy external pp magic py scanpy scanpy external pp mnn correct py scanpy scanpy external pp scanorama integrate py scanpy scanpy external pp scrublet py scanpy scanpy external tl init py scanpy scanpy external tl harmony timeseries py scanpy scanpy external tl palantir py scanpy scanpy external tl phate py scanpy scanpy external tl phenograph py scanpy scanpy external tl pypairs py scanpy scanpy external tl sam py scanpy scanpy external tl trimap py scanpy scanpy external tl wishbone py scanpy scanpy get init py scanpy scanpy get get py scanpy scanpy logging py scanpy scanpy metrics init py scanpy scanpy metrics gearys c py scanpy scanpy metrics metrics py scanpy scanpy metrics morans i py scanpy scanpy neighbors init py scanpy scanpy plotting init py scanpy scanpy plotting anndata py scanpy scanpy plotting baseplot class py scanpy scanpy plotting docs py scanpy scanpy plotting dotplot py scanpy scanpy plotting matrixplot py scanpy scanpy plotting preprocessing py scanpy scanpy plotting qc py scanpy scanpy plotting rcmod py scanpy scanpy plotting stacked violin py scanpy scanpy plotting tools init py scanpy scanpy plotting tools paga py scanpy scanpy plotting tools scatterplots py scanpy scanpy plotting utils py scanpy scanpy plotting palettes py scanpy scanpy preprocessing init py scanpy scanpy preprocessing combat py scanpy scanpy preprocessing deprecated init py scanpy scanpy preprocessing deprecated highly variable genes py scanpy scanpy preprocessing distributed py scanpy scanpy preprocessing docs py scanpy scanpy preprocessing highly variable genes py scanpy scanpy preprocessing normalization py scanpy scanpy preprocessing pca py scanpy scanpy preprocessing qc py scanpy scanpy preprocessing recipes py scanpy scanpy preprocessing simple py scanpy scanpy preprocessing utils py scanpy scanpy queries init py scanpy scanpy queries queries py scanpy scanpy readwrite py scanpy scanpy sim models init py scanpy scanpy sim models txt scanpy scanpy sim models params txt scanpy scanpy sim models toggleswitch txt scanpy scanpy sim models toggleswitch params txt scanpy scanpy tools init py scanpy scanpy tools dendrogram py scanpy scanpy tools diffmap py scanpy scanpy tools dpt py scanpy scanpy tools draw graph py scanpy scanpy tools embedding density py scanpy scanpy tools ingest py scanpy scanpy tools leiden py scanpy scanpy tools louvain py scanpy scanpy tools marker gene overlap py scanpy scanpy tools paga py scanpy scanpy tools pca py scanpy scanpy 
tools rank genes groups py scanpy scanpy tools score genes py scanpy scanpy tools sim py scanpy scanpy tools top genes py scanpy scanpy tools tsne py scanpy scanpy tools umap py scanpy scanpy tools utils py scanpy scanpy tools utils clustering py scanpy pkg info flit build using flit to build it includes all the files i would expect but get s the version wrong on the wheel rm r dist flit build ls dist scanpy none any whl scanpy tar gz setup py is including a bunch of files it shouldn t and is deprecated anyways flying sheep zethson
| 1
|
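Note on the scanpy record above: a generic way to guard against an sdist silently dropping files is to inspect the archive in CI. The check below is a sketch with an example file set, not scanpy's actual packaging fix.

```python
import sys
import tarfile

# Files expected to ship in the sdist (example set; adjust per project).
REQUIRED = {"LICENSE", "README.rst", "pyproject.toml"}

def missing_from_sdist(path: str) -> set:
    with tarfile.open(path, "r:gz") as tar:
        # Member names look like "pkg-1.8.0/LICENSE"; strip the leading folder.
        shipped = {name.split("/", 1)[1] for name in tar.getnames() if "/" in name}
    return {f for f in REQUIRED if f not in shipped}

if __name__ == "__main__":
    missing = missing_from_sdist(sys.argv[1])   # e.g. dist/scanpy-1.8.0.tar.gz
    if missing:
        sys.exit(f"sdist is missing: {', '.join(sorted(missing))}")
    print("sdist contains all required files")
```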
8,476
| 11,643,042,987
|
IssuesEvent
|
2020-02-29 11:01:53
|
tikv/tikv
|
https://api.github.com/repos/tikv/tikv
|
opened
|
UCP: Migrate scalar function `MakeDate` from TiDB
|
challenge-program-2 component/coprocessor difficulty/easy sig/coprocessor
|
## Description
Port the scalar function `MakeDate` from TiDB to coprocessor.
## Score
* 50
## Mentor(s)
* @lonng
## Recommended Skills
* Rust programming
## Learning Materials
Already implemented expressions ported from TiDB
- https://github.com/tikv/tikv/tree/master/components/tidb_query/src/rpn_expr
- https://github.com/tikv/tikv/tree/master/components/tidb_query/src/expr
|
2.0
|
UCP: Migrate scalar function `MakeDate` from TiDB -
## Description
Port the scalar function `MakeDate` from TiDB to coprocessor.
## Score
* 50
## Mentor(s)
* @lonng
## Recommended Skills
* Rust programming
## Learning Materials
Already implemented expressions ported from TiDB
- https://github.com/tikv/tikv/tree/master/components/tidb_query/src/rpn_expr
- https://github.com/tikv/tikv/tree/master/components/tidb_query/src/expr
|
process
|
ucp migrate scalar function makedate from tidb description port the scalar function makedate from tidb to coprocessor score mentor s lonng recommended skills rust programming learning materials already implemented expressions ported from tidb
| 1
|
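Note on the TiKV record above: the port itself is Rust, but the behaviour being ported is MySQL's MAKEDATE(year, dayofyear). The Python sketch below only restates those reference semantics (NULL for a non-positive day of year, days past the end of the year rolling forward) as something a port could be checked against.

```python
from datetime import date, timedelta
from typing import Optional

def makedate(year: int, day_of_year: int) -> Optional[date]:
    """Reference semantics of MySQL MAKEDATE(year, dayofyear) (sketch)."""
    if day_of_year <= 0:
        return None                                   # MySQL returns NULL here
    return date(year, 1, 1) + timedelta(days=day_of_year - 1)

assert makedate(2011, 31) == date(2011, 1, 31)
assert makedate(2011, 365) == date(2011, 12, 31)
assert makedate(2011, 366) == date(2012, 1, 1)        # rolls into the next year
assert makedate(2011, 0) is None
```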
18,098
| 24,124,207,074
|
IssuesEvent
|
2022-09-20 21:49:55
|
GoogleCloudPlatform/terraform-mean-cloudrun-mongodb
|
https://api.github.com/repos/GoogleCloudPlatform/terraform-mean-cloudrun-mongodb
|
closed
|
Identify how a successful app deployment is verified
|
process
|
Aside from Terraform deployment success, do we want to add some kind of app success check?
|
1.0
|
Identify how a successful app deployment is verified - Aside from Terraform deployment success, do we want to add some kind of app success check?
|
process
|
identify how a successful app deployment is verified aside from terraform deployment success do we want to add some kind of app success check
| 1
|
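Note on the terraform-mean-cloudrun-mongodb record above: one common answer to "some kind of app success check" is a plain HTTP health probe against the deployed service URL. The URL below is a placeholder and the retry counts are arbitrary; this is an illustrative sketch, not the module's verification step.

```python
import sys
import time
import urllib.request

URL = "https://example-app-xyz-uc.a.run.app/"    # placeholder Cloud Run URL

def wait_until_healthy(url: str, attempts: int = 10, delay: float = 5.0) -> bool:
    for _ in range(attempts):
        try:
            with urllib.request.urlopen(url, timeout=10) as resp:
                if 200 <= resp.status < 300:
                    return True
        except OSError:
            pass                 # DNS not ready / connection refused / HTTP error
        time.sleep(delay)
    return False

if __name__ == "__main__":
    sys.exit(0 if wait_until_healthy(URL) else 1)
```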
241,926
| 20,173,272,944
|
IssuesEvent
|
2022-02-10 12:23:17
|
DanielMurphy22/SmokeTests
|
https://api.github.com/repos/DanielMurphy22/SmokeTests
|
closed
|
Windows Clean/Dirty Install Smoke Tests
|
Needs Close Explanation and Resolved Label Manual Tests Windows Only Stale
|
Before testing:
- Check this testing issue relates to the OS you will test on.
- If unassigned, please assign yourself as for a normal Github issue.
- Please run these tests on the release package of Mantid; **not a locally built version**.
Afterwards:
- Comment below with any issues you came across.
- If no issues were found, or they are now all resolved, please close the testing issue.
- Check the master issue for this OS for other unassigned smoke tests.
If you have any questions please contact the creator of this issue.
:soap: :hankey:
### Dirty install
* Make sure that you have several versions of Mantid installed
* Last release
* A nightly
* If possible an old release
* Install the latest version of the new Mantid
- [ ] Check that Mantid boots up correctly
### Clean install
* Remove all existing Mantid versions and associated files
**On Windows**:
* Uninstall the program
* Clear shortcuts from desktop
* Clean out the registry
* Load regedit (Command Prompt > regedit)
**On macOS** :
* Remove the application
* Remove the `~/.mantid directory`
* Remove `~/Library/Preferences/org.mantidproject.MantidPlot.plist`
**On Linux** :
* Remove the package: `/opt/Mantid`
* Remove `~/.config/Mantid`
* Remove `~/.mantid/`
* Re-install the latest version of the new Mantid
- [ ] Check that Mantid boots up correctly
|
1.0
|
Windows Clean/Dirty Install Smoke Tests -
Before testing:
- Check this testing issue relates to the OS you will test on.
- If unassigned, please assign yourself as for a normal Github issue.
- Please run these tests on the release package of Mantid; **not a locally built version**.
Afterwards:
- Comment below with any issues you came across.
- If no issues were found, or they are now all resolved, please close the testing issue.
- Check the master issue for this OS for other unassigned smoke tests.
If you have any questions please contact the creator of this issue.
:soap: :hankey:
### Dirty install
* Make sure that you have several versions of Mantid installed
* Last release
* A nightly
* If possible an old release
* Install the latest version of the new Mantid
- [ ] Check that Mantid boots up correctly
### Clean install
* Remove all existing Mantid versions and associated files
**On Windows**:
* Uninstall the program
* Clear shortcuts from desktop
* Clean out the registry
* Load regedit (Command Prompt > regedit)
**On macOS** :
* Remove the application
* Remove the `~/.mantid directory`
* Remove `~/Library/Preferences/org.mantidproject.MantidPlot.plist`
**On Linux** :
* Remove the package: `/opt/Mantid`
* Remove `~/.config/Mantid`
* Remove `~/.mantid/`
* Re-install the latest version of the new Mantid
- [ ] Check that Mantid boots up correctly
|
non_process
|
windows clean dirty install smoke tests before testing check this testing issue relates to the os you will test on if unassigned please assign yourself as for a normal github issue please run these tests on the release package of mantid not a locally built version afterwards comment below with any issues you came across if no issues were found or they are now all resolved please close the testing issue check the master issue for this os for other unassigned smoke tests if you have any questions please contact the creator of this issue soap hankey dirty install make sure that you have several versions of mantid installed last release a nightly if possible an old release install the latest version of the new mantid check that mantid boots up correctly clean install remove all existing mantid versions and associated files on windows uninstall the program clear shortcuts from desktop clean out the registry load regedit command prompt regedit on macos remove the application remove the mantid directory remove library preferences org mantidproject mantidplot plist on linux remove the package opt mantid remove config mantid remove mantid re install the latest version of the new mantid check that mantid boots up correctly
| 0
|
9,759
| 12,742,259,665
|
IssuesEvent
|
2020-06-26 08:04:57
|
dotnet/runtime
|
https://api.github.com/repos/dotnet/runtime
|
closed
|
TestCheckChildProcessUserAndGroupIdsElevated failed in CI
|
area-System.Diagnostics.Process disabled-test
|
https://mc.dot.net/#/user/dotnet-bot/pr~2Fdotnet~2Fcorefx~2Frefs~2Fpull~2F38817~2Fmerge/test~2Ffunctional~2Fcli~2Finnerloop~2F/20190624.2/workItem/System.Diagnostics.Process.Tests/wilogs
```
System.Diagnostics.Tests.ProcessTests.TestCheckChildProcessUserAndGroupIdsElevated(useRootGroups: True) [FAIL]
Assert.NotNull() Failure
Stack Trace:
/_/src/System.Diagnostics.Process/tests/ProcessTests.Unix.cs(640,0): at System.Diagnostics.Tests.ProcessTests.GetCurrentRealUserName()
/_/src/System.Diagnostics.Process/tests/ProcessTests.Unix.cs(615,0): at System.Diagnostics.Tests.ProcessTests.TestCheckChildProcessUserAndGroupIdsElevated(Boolean useRootGroups)
System.Diagnostics.Tests.ProcessTests.TestCheckChildProcessUserAndGroupIdsElevated(useRootGroups: False) [FAIL]
/_/src/System.Diagnostics.Process/tests/ProcessTests.Unix.cs(640,0): at System.Diagnostics.Tests.ProcessTests.GetCurrentRealUserName()
/_/src/System.Diagnostics.Process/tests/ProcessTests.Unix.cs(615,0): at System.Diagnostics.Tests.ProcessTests.TestCheckChildProcessUserAndGroupIdsElevated(Boolean useRootGroups)
```
cc: @tmds
|
1.0
|
TestCheckChildProcessUserAndGroupIdsElevated failed in CI - https://mc.dot.net/#/user/dotnet-bot/pr~2Fdotnet~2Fcorefx~2Frefs~2Fpull~2F38817~2Fmerge/test~2Ffunctional~2Fcli~2Finnerloop~2F/20190624.2/workItem/System.Diagnostics.Process.Tests/wilogs
```
System.Diagnostics.Tests.ProcessTests.TestCheckChildProcessUserAndGroupIdsElevated(useRootGroups: True) [FAIL]
Assert.NotNull() Failure
Stack Trace:
/_/src/System.Diagnostics.Process/tests/ProcessTests.Unix.cs(640,0): at System.Diagnostics.Tests.ProcessTests.GetCurrentRealUserName()
/_/src/System.Diagnostics.Process/tests/ProcessTests.Unix.cs(615,0): at System.Diagnostics.Tests.ProcessTests.TestCheckChildProcessUserAndGroupIdsElevated(Boolean useRootGroups)
System.Diagnostics.Tests.ProcessTests.TestCheckChildProcessUserAndGroupIdsElevated(useRootGroups: False) [FAIL]
/_/src/System.Diagnostics.Process/tests/ProcessTests.Unix.cs(640,0): at System.Diagnostics.Tests.ProcessTests.GetCurrentRealUserName()
/_/src/System.Diagnostics.Process/tests/ProcessTests.Unix.cs(615,0): at System.Diagnostics.Tests.ProcessTests.TestCheckChildProcessUserAndGroupIdsElevated(Boolean useRootGroups)
```
cc: @tmds
|
process
|
testcheckchildprocessuserandgroupidselevated failed in ci system diagnostics tests processtests testcheckchildprocessuserandgroupidselevated userootgroups true assert notnull failure stack trace src system diagnostics process tests processtests unix cs at system diagnostics tests processtests getcurrentrealusername src system diagnostics process tests processtests unix cs at system diagnostics tests processtests testcheckchildprocessuserandgroupidselevated boolean userootgroups system diagnostics tests processtests testcheckchildprocessuserandgroupidselevated userootgroups false src system diagnostics process tests processtests unix cs at system diagnostics tests processtests getcurrentrealusername src system diagnostics process tests processtests unix cs at system diagnostics tests processtests testcheckchildprocessuserandgroupidselevated boolean userootgroups cc tmds
| 1
|
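Note on the dotnet/runtime record above: the failing assertion means the helper that looks up the "real" (non-elevated) user name returned null on the CI agent. Purely to illustrate where such a lookup usually comes from on Unix, and why it can be empty when sudo is invoked by a service account, here is a hedged Python sketch; it is not the .NET test code.

```python
import os
import pwd

def current_real_user_name():
    """Best-effort guess at the invoking, non-root user; may be None on CI."""
    # When elevated via sudo, the original account is normally exposed here.
    name = os.environ.get("SUDO_USER")
    if name:
        return name
    # Fall back to the real (not effective) uid of the process.
    try:
        return pwd.getpwuid(os.getuid()).pw_name
    except KeyError:
        return None              # uid with no passwd entry -> nothing to report

print(current_real_user_name())
```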
823,112
| 30,928,276,951
|
IssuesEvent
|
2023-08-06 19:09:28
|
priyankarpal/ProjectsHut
|
https://api.github.com/repos/priyankarpal/ProjectsHut
|
closed
|
chore: project addition by iqrasarwar
|
good first issue 🟩 priority: low 🏁 status: ready for dev projects addition
|
### Add a new project to the list
i want to showcase my project on website
### Record
- [X] I have checked the existing [issues](https://github.com/priyankarpal/ProjectsHut/issues)
- [X] I have read the [Contributing Guidelines](https://github.com/priyankarpal/ProjectsHut/blob/main/contributing.md)
- [X] I agree to follow this project's [Code of Conduct](https://github.com/priyankarpal/ProjectsHut/blob/main/CODE_OF_CONDUCT.md)
- [X] I want to work on this issue
|
1.0
|
chore: project addition by iqrasarwar - ### Add a new project to the list
i want to showcase my project on website
### Record
- [X] I have checked the existing [issues](https://github.com/priyankarpal/ProjectsHut/issues)
- [X] I have read the [Contributing Guidelines](https://github.com/priyankarpal/ProjectsHut/blob/main/contributing.md)
- [X] I agree to follow this project's [Code of Conduct](https://github.com/priyankarpal/ProjectsHut/blob/main/CODE_OF_CONDUCT.md)
- [X] I want to work on this issue
|
non_process
|
chore project addition by iqrasarwar add a new project to the list i want to showcase my project on website record i have checked the existing i have read the i agree to follow this project s i want to work on this issue
| 0
|
6,046
| 8,870,194,525
|
IssuesEvent
|
2019-01-11 08:46:20
|
atilaneves/dpp
|
https://api.github.com/repos/atilaneves/dpp
|
closed
|
C macros that declare an extern variable cause D code that won't compile due to multiple definitions
|
preprocessor wontfix
|
E.g. PostgreSQL's pgmagic has an extern int dummy variable declaration that gets repeated.
|
1.0
|
C macros that declare an extern variable cause D code that won't compile due to multiple definitions - E.g. PostgreSQL's pgmagic has an extern int dummy variable declaration that gets repeated.
|
process
|
c macros that declare an extern variable cause d code that won t compile due to multiple definitions eg postgresql pgmagic has extern int dummy variable repeated
| 1
|
13,834
| 16,598,838,838
|
IssuesEvent
|
2021-06-01 16:29:29
|
topcoder-platform/community-app
|
https://api.github.com/repos/topcoder-platform/community-app
|
closed
|
[$30] Clicking on winners tab for a pure v5 task gives a blank page
|
Bug Bash Dev Env P1 Pure V5 Task QA Pass in PROD ShapeupProcess tcx_OpenForPickup
|
Clicking on winners tab for a pure v5 task gives a blank page
This is because of missing `legacy.track` in challenge api response.
@rootelement , since a pure v5 task does get synced to legacy on completion, won't we have the legacy data when the winners tab is displayed (meaning the challenge is already complete).
@luizrrodrigues , what do we need the `legacy.track` details on the FE for?
|
1.0
|
[$30] Clicking on winners tab for a pure v5 task gives a blank page - Clicking on winners tab for a pure v5 task gives a blank page
This is because of missing `legacy.track` in challenge api response.
@rootelement , since a pure v5 task does get synced to legacy on completion, won't we have the legacy data when the winners tab is displayed (meaning the challenge is already complete).
@luizrrodrigues , what do we need the `legacy.track` details on the FE for?
|
process
|
clicking on winners tab for a pure task gives a blank page clicking on winners tab for a pure task gives a blank page this is because of missing legacy track in challenge api response rootelement since pure task does get synced to legacy on completion wont we have the legacy data when thee winners tab is displayed meaning challenge is already complete luizrrodrigues what do we need the legacy track details on the fe for
| 1
|
80,222
| 10,165,623,621
|
IssuesEvent
|
2019-08-07 14:15:40
|
arcticicestudio/nord-docs
|
https://api.github.com/repos/arcticicestudio/nord-docs
|
opened
|
Transition: Nord Slack
|
context-documentation context-workflow scope-maintainability scope-quality scope-ux target-slack type-feature
|
<p align="center"><img src="https://user-images.githubusercontent.com/7836623/48676311-39475300-eb65-11e8-9654-16c24c1c9a94.png" width="12%"/></p>
> Associated epics: #133
This issue documents the transition of the documentations, assets and visualizations of [Nord Slack][s] to _Nord Docs_. It serves as bridge between the tasks that must be solved for _Nord Slack_ repository and the resulting tasks for Nord Docs.
➜ Please see the corresponding issue https://github.com/arcticicestudio/nord-slack/issues/5 for all details.
### Tasks
- [ ] Transfer and improve old Nord Slack assets.
- [ ] Implement landing and doc page for Nord Slack.
- [ ] Transfer Nord Slack documentations in polished format.
[s]: https://github.com/arcticicestudio/nord-slack
|
1.0
|
Transition: Nord Slack - <p align="center"><img src="https://user-images.githubusercontent.com/7836623/48676311-39475300-eb65-11e8-9654-16c24c1c9a94.png" width="12%"/></p>
> Associated epics: #133
This issue documents the transition of the documentations, assets and visualizations of [Nord Slack][s] to _Nord Docs_. It serves as bridge between the tasks that must be solved for _Nord Slack_ repository and the resulting tasks for Nord Docs.
➜ Please see the corresponding issue https://github.com/arcticicestudio/nord-slack/issues/5 for all details.
### Tasks
- [ ] Transfer and improve old Nord Slack assets.
- [ ] Implement landing and doc page for Nord Slack.
- [ ] Transfer Nord Slack documentations in polished format.
[s]: https://github.com/arcticicestudio/nord-slack
|
non_process
|
transition nord slack associated epics this issue documents the transition of the documentations assets and visualizations of to nord docs it serves as bridge between the tasks that must be solved for nord slack repository and the resulting tasks for nord docs ➜ please see the corresponding issue for all details tasks transfer and improve old nord slack assets implement landing and doc page for nord slack transfer nord slack documentations in polished format
| 0
|
7,287
| 10,436,497,133
|
IssuesEvent
|
2019-09-17 19:42:30
|
ncbo/bioportal-project
|
https://api.github.com/repos/ncbo/bioportal-project
|
opened
|
OBO definitions appearing twice
|
ontology processing problem
|
User writes:
```
After making a change to our ontology and re-uploading to BioPortal, we're seeing our definition show up twice on our landing page for this term: http://bioportal.bioontology.org/ontologies/ECSO/?p=classes&conceptid=http%3A%2F%2Fpurl.dataone.org%2Fodo%2FECSO_00002092.
The RDF/XML for this term can be found in this diff: https://github.com/DataONEorg/sem-prov-ontologies/commit/694957803cf77f5723ddd6ff1b7c88e810cc8c75#diff-3aac6f836ca99404f3acabd594fa36d3L16842.
Our ontology isn't configured with a custom term URI for the definition field. This isn't really a huge deal but it would be nice to only show the definition once. Is there something we can do on our end to change this?
```
Inspection verifies that in BioPortal, the RDF below produces two definitions:
```
Definitions A blackbody temperature when the blackbody is in thermal equilibrium with its surroundings.
definition A blackbody temperature when the blackbody is in thermal equilibrium with its surroundings.
```
The RDF appears based on the OBO pattern in http://www.obofoundry.org/principles/fp-006-textual-definitions.html, and is recognized correctly by Protege as a source citation, not a restatement of the axiom. The issue appears to be common to other similar constructs in the ECSO ontology.
```
<!-- http://purl.dataone.org/odo/ECSO_00002092 -->
<owl:Class rdf:about="http://purl.dataone.org/odo/ECSO_00002092">
<rdfs:subClassOf rdf:resource="http://purl.dataone.org/odo/ECSO_00001881"/>
<obo:IAO_0000115>A blackbody temperature when the blackbody is in thermal equilibrium with its surroundings</obo:IAO_0000115>
<dc:creator rdf:resource="http://orcid.org/0000-0003-1264-1166"/>
<dc:date rdf:datatype="http://www.w3.org/2001/XMLSchema#dateTime">2019-05-03T23:26:08Z</dc:date>
<rdfs:label xml:lang="en">brightness temperature</rdfs:label>
</owl:Class>
<owl:Axiom>
<owl:annotatedSource rdf:resource="http://purl.dataone.org/odo/ECSO_00002092"/>
<owl:annotatedProperty rdf:resource="http://purl.obolibrary.org/obo/IAO_0000115"/>
<owl:annotatedTarget>A blackbody temperature when the blackbody is in thermal equilibrium with its surroundings.</owl:annotatedTarget>
<oboInOwl:hasDbXref rdf:resource="https://en.wikipedia.org/wiki/Brightness_temperature"/>
</owl:Axiom>
```
|
1.0
|
OBO definitions appearing twice - User writes:
```
After making a change to our ontology and re-uploading to BioPortal, we're seeing our definition show up twice on our landing page for this term: http://bioportal.bioontology.org/ontologies/ECSO/?p=classes&conceptid=http%3A%2F%2Fpurl.dataone.org%2Fodo%2FECSO_00002092.
The RDF/XML for this term can be found in this diff: https://github.com/DataONEorg/sem-prov-ontologies/commit/694957803cf77f5723ddd6ff1b7c88e810cc8c75#diff-3aac6f836ca99404f3acabd594fa36d3L16842.
Our ontology isn't configured with a custom term URI For the definition field. This isn't really a huge deal but it would be nice to only show the definition once. Is there something we can do on our end to change this?
```
Inspection verifies that in BioPortal, the RDF below produces two definitions:
```
Definitions A blackbody temperature when the blackbody is in thermal equilibrium with its surroundings.
definition A blackbody temperature when the blackbody is in thermal equilibrium with its surroundings.
```
The RDF appears based on the OBO pattern in http://www.obofoundry.org/principles/fp-006-textual-definitions.html, and is recognized correctly by Protege as a source citation, not a restatement of the axiom. The issue appears to be common to other similar constructs in the ECSO ontology.
```
<!-- http://purl.dataone.org/odo/ECSO_00002092 -->
<owl:Class rdf:about="http://purl.dataone.org/odo/ECSO_00002092">
<rdfs:subClassOf rdf:resource="http://purl.dataone.org/odo/ECSO_00001881"/>
<obo:IAO_0000115>A blackbody temperature when the blackbody is in thermal equilibrium with its surroundings</obo:IAO_0000115>
<dc:creator rdf:resource="http://orcid.org/0000-0003-1264-1166"/>
<dc:date rdf:datatype="http://www.w3.org/2001/XMLSchema#dateTime">2019-05-03T23:26:08Z</dc:date>
<rdfs:label xml:lang="en">brightness temperature</rdfs:label>
</owl:Class>
<owl:Axiom>
<owl:annotatedSource rdf:resource="http://purl.dataone.org/odo/ECSO_00002092"/>
<owl:annotatedProperty rdf:resource="http://purl.obolibrary.org/obo/IAO_0000115"/>
<owl:annotatedTarget>A blackbody temperature when the blackbody is in thermal equilibrium with its surroundings.</owl:annotatedTarget>
<oboInOwl:hasDbXref rdf:resource="https://en.wikipedia.org/wiki/Brightness_temperature"/>
</owl:Axiom>
```
|
process
|
obo definitions appearing twice user writes after making a change to our ontology and re uploading to bioportal we re seeing our definition show up twice on our landing page for this term the rdf xml for this term can be found in this diff our ontology isn t configured with a custom term uri for the definition field this isn t really a huge deal but it would be nice to only show the definition once is there something we can do on our end to change this inspection verifies that in bioportal the rdf below produces two definitions definitions a blackbody temperature when the blackbody is in thermal equilibrium with its surroundings definition a blackbody temperature when the blackbody is in thermal equilibrium with its surroundings the rdf appears based on the obo pattern in and is recognized correctly by protege as a source citation not a restatement of the axiom the issue appears to be common to other similar constructs in the ecso ontology owl class rdf about rdfs subclassof rdf resource a blackbody temperature when the blackbody is in thermal equilibrium with its surroundings dc creator rdf resource dc date rdf datatype brightness temperature owl annotatedsource rdf resource owl annotatedproperty rdf resource a blackbody temperature when the blackbody is in thermal equilibrium with its surroundings oboinowl hasdbxref rdf resource
| 1
|
11,327
| 14,142,619,464
|
IssuesEvent
|
2020-11-10 14:17:22
|
prisma/prisma
|
https://api.github.com/repos/prisma/prisma
|
opened
|
Fix version pinning logic
|
kind/tech process/candidate team/typescript
|
When the TS pipeline finished a release, it should add a tag to `prisma-engines`.
This logic broke some time ago and we should fix it again.
|
1.0
|
Fix version pinning logic - When the TS pipeline finished a release, it should add a tag to `prisma-engines`.
This logic broke some time ago and we should fix it again.
|
process
|
fix version pinning logic when the ts pipeline finished a release it should add a tag to prisma engines this logic broke some time ago and we should fix it again
| 1
|
9,039
| 12,130,107,964
|
IssuesEvent
|
2020-04-23 00:30:40
|
GoogleCloudPlatform/python-docs-samples
|
https://api.github.com/repos/GoogleCloudPlatform/python-docs-samples
|
closed
|
remove gcp-devrel-py-tools from appengine/standard/ndb/transactions/requirements-test.txt
|
priority: p2 remove-gcp-devrel-py-tools type: process
|
remove gcp-devrel-py-tools from appengine/standard/ndb/transactions/requirements-test.txt
|
1.0
|
remove gcp-devrel-py-tools from appengine/standard/ndb/transactions/requirements-test.txt - remove gcp-devrel-py-tools from appengine/standard/ndb/transactions/requirements-test.txt
|
process
|
remove gcp devrel py tools from appengine standard ndb transactions requirements test txt remove gcp devrel py tools from appengine standard ndb transactions requirements test txt
| 1
|
250,614
| 27,107,659,179
|
IssuesEvent
|
2023-02-15 13:17:11
|
jgeraigery/pyadi-iio
|
https://api.github.com/repos/jgeraigery/pyadi-iio
|
opened
|
tensorflow-2.11.0-cp37-cp37m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl: 2 vulnerabilities (highest severity is: 7.5)
|
security vulnerability
|
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>tensorflow-2.11.0-cp37-cp37m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl</b></p></summary>
<p></p>
<p>Path to dependency file: /examples/cn0549/requirements.txt</p>
<p>Path to vulnerable library: /examples/cn0549/requirements.txt</p>
<p>
</details>
## Vulnerabilities
| CVE | Severity | <img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS | Dependency | Type | Fixed in (tensorflow version) | Remediation Available |
| ------------- | ------------- | ----- | ----- | ----- | ------------- | --- |
| [CVE-2023-25577](https://www.mend.io/vulnerability-database/CVE-2023-25577) | <img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> High | 7.5 | Werkzeug-2.2.2-py3-none-any.whl | Transitive | N/A* | ❌ |
| [CVE-2023-23934](https://www.mend.io/vulnerability-database/CVE-2023-23934) | <img src='https://whitesource-resources.whitesourcesoftware.com/low_vul.png' width=19 height=20> Low | 2.6 | Werkzeug-2.2.2-py3-none-any.whl | Transitive | N/A* | ❌ |
<p>*For some transitive vulnerabilities, there is no version of direct dependency with a fix. Check the section "Details" below to see if there is a version of transitive dependency where vulnerability is fixed.</p>
## Details
<details>
<summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> CVE-2023-25577</summary>
### Vulnerable Library - <b>Werkzeug-2.2.2-py3-none-any.whl</b></p>
<p>The comprehensive WSGI web application library.</p>
<p>Library home page: <a href="https://files.pythonhosted.org/packages/c8/27/be6ddbcf60115305205de79c29004a0c6bc53cec814f733467b1bb89386d/Werkzeug-2.2.2-py3-none-any.whl">https://files.pythonhosted.org/packages/c8/27/be6ddbcf60115305205de79c29004a0c6bc53cec814f733467b1bb89386d/Werkzeug-2.2.2-py3-none-any.whl</a></p>
<p>Path to dependency file: /examples/cn0549/requirements.txt</p>
<p>Path to vulnerable library: /examples/cn0549/requirements.txt</p>
<p>
Dependency Hierarchy:
- tensorflow-2.11.0-cp37-cp37m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (Root Library)
- tensorboard-2.11.2-py3-none-any.whl
- :x: **Werkzeug-2.2.2-py3-none-any.whl** (Vulnerable Library)
<p>Found in base branch: <b>master</b></p>
</p>
<p></p>
### Vulnerability Details
<p>
Werkzeug is a comprehensive WSGI web application library. Prior to version 2.2.3, Werkzeug's multipart form data parser will parse an unlimited number of parts, including file parts. Parts can be a small amount of bytes, but each requires CPU time to parse and may use more memory as Python data. If a request can be made to an endpoint that accesses `request.data`, `request.form`, `request.files`, or `request.get_data(parse_form_data=False)`, it can cause unexpectedly high resource usage. This allows an attacker to cause a denial of service by sending crafted multipart data to an endpoint that will parse it. The amount of CPU time required can block worker processes from handling legitimate requests. The amount of RAM required can trigger an out of memory kill of the process. Unlimited file parts can use up memory and file handles. If many concurrent requests are sent continuously, this can exhaust or kill all available workers. Version 2.2.3 contains a patch for this issue.
<p>Publish Date: 2023-02-14
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2023-25577>CVE-2023-25577</a></p>
</p>
<p></p>
### CVSS 3 Score Details (<b>7.5</b>)
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
<p></p>
### Suggested Fix
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://www.cve.org/CVERecord?id=CVE-2023-25577">https://www.cve.org/CVERecord?id=CVE-2023-25577</a></p>
<p>Release Date: 2023-02-14</p>
<p>Fix Resolution: Werkzeug - 2.2.3</p>
</p>
<p></p>
</details><details>
<summary><img src='https://whitesource-resources.whitesourcesoftware.com/low_vul.png' width=19 height=20> CVE-2023-23934</summary>
### Vulnerable Library - <b>Werkzeug-2.2.2-py3-none-any.whl</b></p>
<p>The comprehensive WSGI web application library.</p>
<p>Library home page: <a href="https://files.pythonhosted.org/packages/c8/27/be6ddbcf60115305205de79c29004a0c6bc53cec814f733467b1bb89386d/Werkzeug-2.2.2-py3-none-any.whl">https://files.pythonhosted.org/packages/c8/27/be6ddbcf60115305205de79c29004a0c6bc53cec814f733467b1bb89386d/Werkzeug-2.2.2-py3-none-any.whl</a></p>
<p>Path to dependency file: /examples/cn0549/requirements.txt</p>
<p>Path to vulnerable library: /examples/cn0549/requirements.txt</p>
<p>
Dependency Hierarchy:
- tensorflow-2.11.0-cp37-cp37m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (Root Library)
- tensorboard-2.11.2-py3-none-any.whl
- :x: **Werkzeug-2.2.2-py3-none-any.whl** (Vulnerable Library)
<p>Found in base branch: <b>master</b></p>
</p>
<p></p>
### Vulnerability Details
<p>
Werkzeug is a comprehensive WSGI web application library. Browsers may allow "nameless" cookies that look like `=value` instead of `key=value`. A vulnerable browser may allow a compromised application on an adjacent subdomain to exploit this to set a cookie like `=__Host-test=bad` for another subdomain. Werkzeug prior to 2.2.3 will parse the cookie `=__Host-test=bad` as __Host-test=bad`. If a Werkzeug application is running next to a vulnerable or malicious subdomain which sets such a cookie using a vulnerable browser, the Werkzeug application will see the bad cookie value but the valid cookie key. The issue is fixed in Werkzeug 2.2.3.
<p>Publish Date: 2023-02-14
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2023-23934>CVE-2023-23934</a></p>
</p>
<p></p>
### CVSS 3 Score Details (<b>2.6</b>)
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Adjacent
- Attack Complexity: High
- Privileges Required: None
- User Interaction: Required
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: Low
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
<p></p>
### Suggested Fix
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://www.cve.org/CVERecord?id=CVE-2023-23934">https://www.cve.org/CVERecord?id=CVE-2023-23934</a></p>
<p>Release Date: 2023-02-14</p>
<p>Fix Resolution: Werkzeug - 2.2.3</p>
</p>
<p></p>
</details>
|
True
|
tensorflow-2.11.0-cp37-cp37m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl: 2 vulnerabilities (highest severity is: 7.5) - <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>tensorflow-2.11.0-cp37-cp37m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl</b></p></summary>
<p></p>
<p>Path to dependency file: /examples/cn0549/requirements.txt</p>
<p>Path to vulnerable library: /examples/cn0549/requirements.txt</p>
<p>
</details>
## Vulnerabilities
| CVE | Severity | <img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS | Dependency | Type | Fixed in (tensorflow version) | Remediation Available |
| ------------- | ------------- | ----- | ----- | ----- | ------------- | --- |
| [CVE-2023-25577](https://www.mend.io/vulnerability-database/CVE-2023-25577) | <img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> High | 7.5 | Werkzeug-2.2.2-py3-none-any.whl | Transitive | N/A* | ❌ |
| [CVE-2023-23934](https://www.mend.io/vulnerability-database/CVE-2023-23934) | <img src='https://whitesource-resources.whitesourcesoftware.com/low_vul.png' width=19 height=20> Low | 2.6 | Werkzeug-2.2.2-py3-none-any.whl | Transitive | N/A* | ❌ |
<p>*For some transitive vulnerabilities, there is no version of direct dependency with a fix. Check the section "Details" below to see if there is a version of transitive dependency where vulnerability is fixed.</p>
## Details
<details>
<summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> CVE-2023-25577</summary>
### Vulnerable Library - <b>Werkzeug-2.2.2-py3-none-any.whl</b></p>
<p>The comprehensive WSGI web application library.</p>
<p>Library home page: <a href="https://files.pythonhosted.org/packages/c8/27/be6ddbcf60115305205de79c29004a0c6bc53cec814f733467b1bb89386d/Werkzeug-2.2.2-py3-none-any.whl">https://files.pythonhosted.org/packages/c8/27/be6ddbcf60115305205de79c29004a0c6bc53cec814f733467b1bb89386d/Werkzeug-2.2.2-py3-none-any.whl</a></p>
<p>Path to dependency file: /examples/cn0549/requirements.txt</p>
<p>Path to vulnerable library: /examples/cn0549/requirements.txt</p>
<p>
Dependency Hierarchy:
- tensorflow-2.11.0-cp37-cp37m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (Root Library)
- tensorboard-2.11.2-py3-none-any.whl
- :x: **Werkzeug-2.2.2-py3-none-any.whl** (Vulnerable Library)
<p>Found in base branch: <b>master</b></p>
</p>
<p></p>
### Vulnerability Details
<p>
Werkzeug is a comprehensive WSGI web application library. Prior to version 2.2.3, Werkzeug's multipart form data parser will parse an unlimited number of parts, including file parts. Parts can be a small amount of bytes, but each requires CPU time to parse and may use more memory as Python data. If a request can be made to an endpoint that accesses `request.data`, `request.form`, `request.files`, or `request.get_data(parse_form_data=False)`, it can cause unexpectedly high resource usage. This allows an attacker to cause a denial of service by sending crafted multipart data to an endpoint that will parse it. The amount of CPU time required can block worker processes from handling legitimate requests. The amount of RAM required can trigger an out of memory kill of the process. Unlimited file parts can use up memory and file handles. If many concurrent requests are sent continuously, this can exhaust or kill all available workers. Version 2.2.3 contains a patch for this issue.
<p>Publish Date: 2023-02-14
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2023-25577>CVE-2023-25577</a></p>
</p>
<p></p>
### CVSS 3 Score Details (<b>7.5</b>)
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
<p></p>
### Suggested Fix
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://www.cve.org/CVERecord?id=CVE-2023-25577">https://www.cve.org/CVERecord?id=CVE-2023-25577</a></p>
<p>Release Date: 2023-02-14</p>
<p>Fix Resolution: Werkzeug - 2.2.3</p>
</p>
<p></p>
</details><details>
<summary><img src='https://whitesource-resources.whitesourcesoftware.com/low_vul.png' width=19 height=20> CVE-2023-23934</summary>
### Vulnerable Library - <b>Werkzeug-2.2.2-py3-none-any.whl</b></p>
<p>The comprehensive WSGI web application library.</p>
<p>Library home page: <a href="https://files.pythonhosted.org/packages/c8/27/be6ddbcf60115305205de79c29004a0c6bc53cec814f733467b1bb89386d/Werkzeug-2.2.2-py3-none-any.whl">https://files.pythonhosted.org/packages/c8/27/be6ddbcf60115305205de79c29004a0c6bc53cec814f733467b1bb89386d/Werkzeug-2.2.2-py3-none-any.whl</a></p>
<p>Path to dependency file: /examples/cn0549/requirements.txt</p>
<p>Path to vulnerable library: /examples/cn0549/requirements.txt</p>
<p>
Dependency Hierarchy:
- tensorflow-2.11.0-cp37-cp37m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (Root Library)
- tensorboard-2.11.2-py3-none-any.whl
- :x: **Werkzeug-2.2.2-py3-none-any.whl** (Vulnerable Library)
<p>Found in base branch: <b>master</b></p>
</p>
<p></p>
### Vulnerability Details
<p>
Werkzeug is a comprehensive WSGI web application library. Browsers may allow "nameless" cookies that look like `=value` instead of `key=value`. A vulnerable browser may allow a compromised application on an adjacent subdomain to exploit this to set a cookie like `=__Host-test=bad` for another subdomain. Werkzeug prior to 2.2.3 will parse the cookie `=__Host-test=bad` as __Host-test=bad`. If a Werkzeug application is running next to a vulnerable or malicious subdomain which sets such a cookie using a vulnerable browser, the Werkzeug application will see the bad cookie value but the valid cookie key. The issue is fixed in Werkzeug 2.2.3.
<p>Publish Date: 2023-02-14
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2023-23934>CVE-2023-23934</a></p>
</p>
<p></p>
### CVSS 3 Score Details (<b>2.6</b>)
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Adjacent
- Attack Complexity: High
- Privileges Required: None
- User Interaction: Required
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: Low
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
<p></p>
### Suggested Fix
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://www.cve.org/CVERecord?id=CVE-2023-23934">https://www.cve.org/CVERecord?id=CVE-2023-23934</a></p>
<p>Release Date: 2023-02-14</p>
<p>Fix Resolution: Werkzeug - 2.2.3</p>
</p>
<p></p>
</details>
|
non_process
|
tensorflow manylinux whl vulnerabilities highest severity is vulnerable library tensorflow manylinux whl path to dependency file examples requirements txt path to vulnerable library examples requirements txt vulnerabilities cve severity cvss dependency type fixed in tensorflow version remediation available high werkzeug none any whl transitive n a low werkzeug none any whl transitive n a for some transitive vulnerabilities there is no version of direct dependency with a fix check the section details below to see if there is a version of transitive dependency where vulnerability is fixed details cve vulnerable library werkzeug none any whl the comprehensive wsgi web application library library home page a href path to dependency file examples requirements txt path to vulnerable library examples requirements txt dependency hierarchy tensorflow manylinux whl root library tensorboard none any whl x werkzeug none any whl vulnerable library found in base branch master vulnerability details werkzeug is a comprehensive wsgi web application library prior to version werkzeug s multipart form data parser will parse an unlimited number of parts including file parts parts can be a small amount of bytes but each requires cpu time to parse and may use more memory as python data if a request can be made to an endpoint that accesses request data request form request files or request get data parse form data false it can cause unexpectedly high resource usage this allows an attacker to cause a denial of service by sending crafted multipart data to an endpoint that will parse it the amount of cpu time required can block worker processes from handling legitimate requests the amount of ram required can trigger an out of memory kill of the process unlimited file parts can use up memory and file handles if many concurrent requests are sent continuously this can exhaust or kill all available workers version contains a patch for this issue publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution werkzeug cve vulnerable library werkzeug none any whl the comprehensive wsgi web application library library home page a href path to dependency file examples requirements txt path to vulnerable library examples requirements txt dependency hierarchy tensorflow manylinux whl root library tensorboard none any whl x werkzeug none any whl vulnerable library found in base branch master vulnerability details werkzeug is a comprehensive wsgi web application library browsers may allow nameless cookies that look like value instead of key value a vulnerable browser may allow a compromised application on an adjacent subdomain to exploit this to set a cookie like host test bad for another subdomain werkzeug prior to will parse the cookie host test bad as host test bad if a werkzeug application is running next to a vulnerable or malicious subdomain which sets such a cookie using a vulnerable browser the werkzeug application will see the bad cookie value but the valid cookie key the issue is fixed in werkzeug publish date url a href cvss score details base score metrics exploitability metrics attack vector adjacent attack complexity high privileges required none user interaction required scope 
unchanged impact metrics confidentiality impact none integrity impact low availability impact none for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution werkzeug
| 0
|
17,828
| 23,768,850,174
|
IssuesEvent
|
2022-09-01 14:45:36
|
ArneBinder/pie-utils
|
https://api.github.com/repos/ArneBinder/pie-utils
|
closed
|
create a partition via regex
|
document processor
|
Implement a document processor that creates a partition via a regex split pattern. This should take advantage from [previous implementation](https://github.com/ArneBinder/pytorch-ie-sam-template/blob/main/src/document_processors/partition.py). This should also collect the distribution of the lengths of the parts (partition entries) and the full texts (to compare against) and also the number of parts per document.
|
1.0
|
create a partition via regex - Implement a document processor that creates a partition via a regex split pattern. This should take advantage from [previous implementation](https://github.com/ArneBinder/pytorch-ie-sam-template/blob/main/src/document_processors/partition.py). This should also collect the distribution of the lengths of the parts (partition entries) and the full texts (to compare against) and also the number of parts per document.
|
process
|
create a partition via regex implement a document processor that creates a partition via a regex split pattern this should take advantage from this should also collect the distribution of the lengths of the parts partition entries and the full texts to compare against and also the number of parts per document
| 1
|
133,102
| 10,790,048,302
|
IssuesEvent
|
2019-11-05 13:20:58
|
mozilla/iris_firefox
|
https://api.github.com/repos/mozilla/iris_firefox
|
reopened
|
Fix disable_search_suggestions test
|
regression test case
|
This test is failing the pattern match assert for show_search_suggestions_in_address_bar_results_checked_pattern.png
The test will probably fail for the unchecked version of the same.
The test currently has these checking at .similarity(0.9). Increasing similarity is generally a bad idea when the pattern involved contains text. I had to reduce similarity to (0.6) in both patterns to get this to work.
|
1.0
|
Fix disable_search_suggestions test - This test is failing the pattern match assert for show_search_suggestions_in_address_bar_results_checked_pattern.png
The test will probably fail for the unchecked version of the same.
The test currently has these checking at .similarity(0.9). Increasing similarity is generally a bad idea when the pattern involved contains text. I had to reduce similarity to (0.6) in both patterns to get this to work.
|
non_process
|
fix disable search suggestions test this test is failing the pattern match assert for show search suggestions in address bar results checked pattern png the test will probably fail for the unchecked version of the same the test currently has these checking at similarity increasing similarity is generally a bad idea when the pattern involved contains text i had to reduce similarity to in both patterns to get this to work
| 0
|
65,151
| 26,994,244,903
|
IssuesEvent
|
2023-02-09 22:50:39
|
MicrosoftDocs/azure-docs
|
https://api.github.com/repos/MicrosoftDocs/azure-docs
|
closed
|
The OPENID az cli example is incorrect and using it would cause AADSTS70021 error
|
app-service/svc triaged cxp doc-bug Pri2
|
The Az Cli example for federated credentials has a typo that would cause AADSTS70021 error.
{
"name": "<CREDENTIAL-NAME>",
"issuer": "https://token.actions.githubusercontent.com/",
"subject": "repo:organization/repository:ref:refs/heads/main",
"description": "Testing",
"audiences": [
"api://AzureADTokenExchange"
]
}
"issuer": "https://token.actions.githubusercontent.com/", should not have "/" at the end.
The correct way is:
"issuer": "https://token.actions.githubusercontent.com"
commit link: https://github.com/MicrosoftDocs/azure-docs/commit/5f8759438cac694b7fa3bfd335c7b6f179e349c4
---
#### Document Details
⚠ *Do not edit this section. It is required for learn.microsoft.com ➟ GitHub issue linking.*
* ID: 967b3514-23c5-514d-186f-d901ac94069b
* Version Independent ID: 7a8581b2-feb6-534e-c88d-a0d7c7de7b92
* Content: [Configure CI/CD with GitHub Actions - Azure App Service](https://learn.microsoft.com/en-us/azure/app-service/deploy-github-actions?tabs=openid)
* Content Source: [articles/app-service/deploy-github-actions.md](https://github.com/MicrosoftDocs/azure-docs/blob/main/articles/app-service/deploy-github-actions.md)
* Service: **app-service**
* GitHub Login: @cephalin
* Microsoft Alias: **cephalin**
|
1.0
|
The OPENID az cli example is incorrect and using it would cause AADSTS70021 error - The Az Cli example for federated credentials has a typo that would cause AADSTS70021 error.
{
"name": "<CREDENTIAL-NAME>",
"issuer": "https://token.actions.githubusercontent.com/",
"subject": "repo:organization/repository:ref:refs/heads/main",
"description": "Testing",
"audiences": [
"api://AzureADTokenExchange"
]
}
"issuer": "https://token.actions.githubusercontent.com/", should not have "/" at the end.
The correct way is:
"issuer": "https://token.actions.githubusercontent.com"
commit link: https://github.com/MicrosoftDocs/azure-docs/commit/5f8759438cac694b7fa3bfd335c7b6f179e349c4
---
#### Document Details
⚠ *Do not edit this section. It is required for learn.microsoft.com ➟ GitHub issue linking.*
* ID: 967b3514-23c5-514d-186f-d901ac94069b
* Version Independent ID: 7a8581b2-feb6-534e-c88d-a0d7c7de7b92
* Content: [Configure CI/CD with GitHub Actions - Azure App Service](https://learn.microsoft.com/en-us/azure/app-service/deploy-github-actions?tabs=openid)
* Content Source: [articles/app-service/deploy-github-actions.md](https://github.com/MicrosoftDocs/azure-docs/blob/main/articles/app-service/deploy-github-actions.md)
* Service: **app-service**
* GitHub Login: @cephalin
* Microsoft Alias: **cephalin**
|
non_process
|
the openid az cli example is incorrect and using it would cause error the az cli example for federated credentials has a typo that would cause error name issuer subject repo organization repository ref refs heads main description testing audiences api azureadtokenexchange issuer should not have at the end the correct way is issuer commit link document details ⚠ do not edit this section it is required for learn microsoft com ➟ github issue linking id version independent id content content source service app service github login cephalin microsoft alias cephalin
| 0
|
307,780
| 26,562,046,514
|
IssuesEvent
|
2023-01-20 16:39:19
|
MiSTer-devel/PSX_MiSTer
|
https://api.github.com/repos/MiSTer-devel/PSX_MiSTer
|
closed
|
Mille Miglia (Europe) (En,Fr,De,Es,It), Black Screen Loading Delays
|
Please Retest
|
Mille Miglia (Europe) (En,Fr,De,Es,It)
Has about 20-30s loading delays with black screen. game still works though.
Video: https://streamable.com/ctbgs9
Default settings.
|
1.0
|
Mille Miglia (Europe) (En,Fr,De,Es,It), Black Screen Loading Delays - Mille Miglia (Europe) (En,Fr,De,Es,It)
Has about 20-30s loading delays with black screen. game still works though.
Video: https://streamable.com/ctbgs9
Default settings.
|
non_process
|
mille miglia europe en fr de es it black screen loading delays mille miglia europe en fr de es it has about loading delays with black screen game still works though video default settings
| 0
|
22,635
| 31,884,474,136
|
IssuesEvent
|
2023-09-16 19:32:54
|
google/ground-android
|
https://api.github.com/repos/google/ground-android
|
opened
|
Set up demo projects
|
type: process priority: p1
|
- [ ] Ecam AFQ
- [ ] TerraBio
- [ ] Palm oil plantation mapping (FDP)
- [ ] Restoration mapping (UN Decade)
|
1.0
|
Set up demo projects - - [ ] Ecam AFQ
- [ ] TerraBio
- [ ] Palm oil plantation mapping (FDP)
- [ ] Restoration mapping (UN Decade)
|
process
|
set up demo projects ecam afq terrabio palm oil plantation mapping fdp restoration mapping un decade
| 1
|
14,963
| 18,454,664,783
|
IssuesEvent
|
2021-10-15 14:58:48
|
wml-frc/CJ-Vision
|
https://api.github.com/repos/wml-frc/CJ-Vision
|
closed
|
Possible Implementation of Virtual Layers
|
enhancement Threading Processing Display
|
I like the idea of layers, and it surprisingly makes sense for a visual-based program. The current setup is chaotic threading, each thread starts one by one and waits for the prior required thread before being freed. But it's mostly a "hope and pray" design.
However, a virtual layering system is more secure in theory. It allows the user to program based on implementing their layer (whatever custom processing they do of the image) into the layer queue and be executed in the correct order. Threading with this logic can be difficult, but orderly asynchronous looping through a queue does exist. And not everything needs to be threaded, too much can decrease performance.
|
1.0
|
Possible Implementation of Virtual Layers - I like the idea of layers, and it surprisingly makes sense for a visual-based program. The current setup is chaotic threading, each thread starts one by one and waits for the prior required thread before being freed. But it's mostly a "hope and pray" design.
However, a virtual layering system is more secure in theory. It allows the user to program based on implementing their layer (whatever custom processing they do of the image) into the layer queue and be executed in the correct order. Threading with this logic can be difficult, but orderly asynchronous looping through a queue does exist. And not everything needs to be threaded, too much can decrease performance.
|
process
|
possible implementation of virtual layers i like the idea of layers and it surprisingly makes sense for a visual based program the current setup is chaotic threading each thread starts one by one and waits for the prior required thread before being freed but it s mostly a hope and pray design however a virtual layering system is more secure in theory it allows the user to program based on implementing their layer whatever custom processing they do of the image into the layer queue and be executed in the correct order threading with this logic can be difficult but orderly asynchronous looping through a queue does exist and not everything needs to be threaded too much can decrease performance
| 1
|
4,091
| 7,043,938,734
|
IssuesEvent
|
2017-12-31 15:17:31
|
elastic/beats
|
https://api.github.com/repos/elastic/beats
|
closed
|
Default mapping for "log" field used in examples
|
:Processors enhancement Filebeat
|
Hello again,
I've setup Filebeat 6.0.1 on Kubernetes, based on https://github.com/elastic/beats/blob/master/deploy/kubernetes/filebeat-kubernetes.yaml
which seems to follow best practices for kubernetes deployments.
I do not use any ingest pipelines nor any other processing of log entries. I just wish to send pure log lines from docker logs to elasticsearch. Example config part:
```
- type: log
paths:
- /var/lib/docker/containers/*/*.log
json.message_key: log
json.keys_under_root: true
```
This configuration parses docker logs and sends log message to ES index under "log" field. This field has type: keyword as seen in mapping
```
"log" : {
"properties" : {
"level" : {
"type" : "keyword",
"ignore_above" : 1024
},
```
I believe this should either:
a) be of type "text" for more meaningful user experience and easier "grepping" of log entries
b) or these log lines should be send under "message" key which has type: text already set
Could you advice how to handle that? Either default examples should make filebeat send log lines under "message" field or mapping of "log" field should be changed. As I am not experienced here I cannot tell which approach is better.
As a workaround, can you provide info how to achieve a) ? I can't tell from the docs. The `json.message_key` setting applies to source file (docker json logs from json-file log driver) and there doesn't seem to be any setting to tell it to send this under `message` field
Filebeat version 6.0.1
|
1.0
|
Default mapping for "log" field used in examples - Hello again,
I've setup Filebeat 6.0.1 on Kubernetes, based on https://github.com/elastic/beats/blob/master/deploy/kubernetes/filebeat-kubernetes.yaml
which seems to follow best practices for kubernetes deployments.
I do not use any ingest pipelines nor any other processing of log entries. I just wish to send pure log lines from docker logs to elasticsearch. Example config part:
```
- type: log
paths:
- /var/lib/docker/containers/*/*.log
json.message_key: log
json.keys_under_root: true
```
This configuration parses docker logs and sends log message to ES index under "log" field. This field has type: keyword as seen in mapping
```
"log" : {
"properties" : {
"level" : {
"type" : "keyword",
"ignore_above" : 1024
},
```
I believe this should either:
a) be of type "text" for more meaningful user experience and easier "grepping" of log entries
b) or these log lines should be send under "message" key which has type: text already set
Could you advice how to handle that? Either default examples should make filebeat send log lines under "message" field or mapping of "log" field should be changed. As I am not experienced here I cannot tell which approach is better.
As a workaround, can you provide info how to achieve a) ? I can't tell from the docs. The `json.message_key` setting applies to source file (docker json logs from json-file log driver) and there doesn't seem to be any setting to tell it to send this under `message` field
Filebeat version 6.0.1
|
process
|
default mapping for log field used in examples hello again i ve setup filebeat on kubernetes based on which seems to follow best practices for kubernetes deployments i do not use any ingest pipelines nor any other processing of log entries i just wish to send pure log lines from docker logs to elasticsearch example config part type log paths var lib docker containers log json message key log json keys under root true this configuration parses docker logs and sends log message to es index under log field this field has type keyword as seen in mapping log properties level type keyword ignore above i believe this should either a be of type text for more meaningful user experience and easier grepping of log entries b or these log lines should be send under message key which has type text already set could you advice how to handle that either default examples should make filebeat send log lines under message field or mapping of log field should be changed as i am not experienced here i cannot tell which approach is better as a workaround can you provide info how to achieve a i can t tell from the docs the json message key setting applies to source file docker json logs from json file log driver and there doesn t seem to be any setting to tell it to send this under message field filebeat version
| 1
|
20,324
| 26,964,743,781
|
IssuesEvent
|
2023-02-08 21:15:55
|
darktable-org/darktable
|
https://api.github.com/repos/darktable-org/darktable
|
closed
|
copy/paste history doesn't respect custom module order
|
priority: low scope: image processing scope: DAM bug: pending release notes: pending
|
**Describe the bug/issue**
When a history stack with a custom order is copied from one image and pasted to another the module order is not respected.
Copied from:

Pasted result:

**To Reproduce**
1. Open a image in darkroom view
2. Create multiple instances of the diffuse and sharpen module
3. Move them to different positions in the module order
4. Copy the hisotry stack
5. Open another image in darkroom view
6. Paste the history stack
7. Compare the stacks
**Expected behavior**
Module order remains the same
**Platform**
_Please fill as much information as possible in the list given below. Please state "unknown" where you do not know the answer and remove any sections that are not applicable _
* darktable version : darktable 4.3.0+524~g27d91012c
* OS : Linux 5.15.0-58-generic #64-Ubuntu SMP
* Linux - Distro : Ubuntu 22.04
* Memory : 32G
* Graphics card : Nvidia 3070ti
* Graphics driver : Nvidia 525.78
* OpenCL installed : Yes
* OpenCL activated : Yes
* Xorg : Yes
* Desktop : Gnome
* GTK+ : 3.24.33
* gcc : 11.3.0
* cflags : default
* CMAKE_BUILD_TYPE : RelWithDebInfo
**Additional context**
- Can you reproduce with another darktable version(s)? **yes with version 4.2.0**
- Are the steps above reproducible with a fresh edit (i.e. after discarding history)? **yes**
|
1.0
|
copy/paste history doesn't respect custom module order - **Describe the bug/issue**
When a history stack with a custom order is copied from one image and pasted to another the module order is not respected.
Copied from:

Pasted result:

**To Reproduce**
1. Open a image in darkroom view
2. Create multiple instances of the diffuse and sharpen module
3. Move them to different positions in the module order
4. Copy the hisotry stack
5. Open another image in darkroom view
6. Paste the history stack
7. Compare the stacks
**Expected behavior**
Module order remains the same
**Platform**
_Please fill as much information as possible in the list given below. Please state "unknown" where you do not know the answer and remove any sections that are not applicable _
* darktable version : darktable 4.3.0+524~g27d91012c
* OS : Linux 5.15.0-58-generic #64-Ubuntu SMP
* Linux - Distro : Ubuntu 22.04
* Memory : 32G
* Graphics card : Nvidia 3070ti
* Graphics driver : Nvidia 525.78
* OpenCL installed : Yes
* OpenCL activated : Yes
* Xorg : Yes
* Desktop : Gnome
* GTK+ : 3.24.33
* gcc : 11.3.0
* cflags : default
* CMAKE_BUILD_TYPE : RelWithDebInfo
**Additional context**
- Can you reproduce with another darktable version(s)? **yes with version 4.2.0**
- Are the steps above reproducible with a fresh edit (i.e. after discarding history)? **yes**
|
process
|
copy paste history doesn t respect custom module order describe the bug issue when a history stack with a custom order is copied from one image and pasted to another the module order is not respected copied from pasted result to reproduce open a image in darkroom view create multiple instances of the diffuse and sharpen module move them to different positions in the module order copy the hisotry stack open another image in darkroom view paste the history stack compare the stacks expected behavior module order remains the same platform please fill as much information as possible in the list given below please state unknown where you do not know the answer and remove any sections that are not applicable darktable version darktable os linux generic ubuntu smp linux distro ubuntu memory graphics card nvidia graphics driver nvidia opencl installed yes opencl activated yes xorg yes desktop gnome gtk gcc cflags default cmake build type relwithdebinfo additional context can you reproduce with another darktable version s yes with version are the steps above reproducible with a fresh edit i e after discarding history yes
| 1
|
12,398
| 14,910,699,883
|
IssuesEvent
|
2021-01-22 09:57:52
|
amor71/LiuAlgoTrader
|
https://api.github.com/repos/amor71/LiuAlgoTrader
|
closed
|
hypothesis
|
in-process
|
**Is your feature request related to a problem? Please describe.**
Adding automated testing
**Describe the solution you'd like**
See if `hypothesis` may be a good solution for this framework
**Describe alternatives you've considered**
adding many, many unit-testing
|
1.0
|
hypothesis - **Is your feature request related to a problem? Please describe.**
Adding automated testing
**Describe the solution you'd like**
See if `hypothesis` may be a good solution for this framework
**Describe alternatives you've considered**
adding many, many unit-testing
|
process
|
hypothesis is your feature request related to a problem please describe adding automated testing describe the solution you d like see if hypothesis may be a good solution for this framework describe alternatives you ve considered adding many many unit testing
| 1
|
59
| 2,509,643,732
|
IssuesEvent
|
2015-01-13 15:23:20
|
ujh/iomrascalai
|
https://api.github.com/repos/ujh/iomrascalai
|
closed
|
Speed up the liberty updating logic
|
performance
|
* Save the actual Coords of the liberties in the Chain
* ...
* Profit?!?
With this in place it should be possible to finally implement #57 and then #58 will hopefully lead to a speed increase.
|
True
|
Speed up the liberty updating logic - * Save the actual Coords of the liberties in the Chain
* ...
* Profit?!?
With this in place it should be possible to finally implement #57 and then #58 will hopefully lead to a speed increase.
|
non_process
|
speed up the liberty updating logic save the actual coords of the liberties in the chain profit with this in place it should be possible to finally implement and then will hopefully lead to a speed increase
| 0
|
3,641
| 6,676,751,319
|
IssuesEvent
|
2017-10-05 07:37:40
|
facebook/osquery
|
https://api.github.com/repos/facebook/osquery
|
closed
|
empty process_events table in 2.7.0 on macOS using kernel extension
|
kernel macOS process auditing triage
|
After upgrading from 2.5.0 to 2.7.0 on macos we lost process events. kernel extension has the same version as binary. But scheduled queries return 0 rows.
|
1.0
|
empty process_events table in 2.7.0 on macOS using kernel extension - After upgrading from 2.5.0 to 2.7.0 on macos we lost process events. kernel extension has the same version as binary. But scheduled queries return 0 rows.
|
process
|
empty process events table in on macos using kernel extension after upgrading from to on macos we lost process events kernel extension has the same version as binary but scheduled queries return rows
| 1
|
66,928
| 27,635,553,557
|
IssuesEvent
|
2023-03-10 14:12:04
|
pingidentity/terraform-provider-pingone
|
https://api.github.com/repos/pingidentity/terraform-provider-pingone
|
closed
|
Support for Organization data source
|
type/enhancement service/base upstream/sdk
|
<!--- Please keep this note for the community --->
### Community Note
* Please vote on this issue by adding a 👍 [reaction](https://blog.github.com/2016-03-10-add-reactions-to-pull-requests-issues-and-comments/) to the original issue to help the community and maintainers prioritize this request
* Please do not leave "+1" or other comments that do not add relevant new information or questions, they generate extra noise for issue followers and do not help prioritize the request
* If you are interested in working on this issue or have submitted a pull request, please leave a comment
<!--- Thank you for keeping this note for the community --->
### Description
<!--- Please provide a helpful description of the feature request here. --->
Support for organisation data source
### New or Affected Resource(s)
<!--- Please provide a list of the new and/or affected resources/data sources, for example: -->
- pingone_organization
<!--- Optionally include a brief description on the type of change required, but this isn't essential -->
### Potential Terraform Configuration
<!-- Information about code formatting: https://help.github.com/articles/basic-writing-and-formatting-syntax/#quoting-code -->
```hcl
# Copy-paste your PingOne related Terraform configurations here - for large Terraform configs,
# please use a service like Dropbox and share a link to the ZIP file. For
# security, you can also encrypt the files using our GPG public key.
# Remember to replace any account/customer sensitive information in the configuration before submitting the issue
data "pingone_organization" "example_by_name" {
name = "internal_org_name_example"
}
```
### References
<!---
Information about referencing Github Issues: https://help.github.com/articles/basic-writing-and-formatting-syntax/#referencing-issues-and-pull-requests
Are there any other GitHub issues (open or closed) or pull requests that should be linked here? Vendor blog posts or documentation? For example:
* https://apidocs.pingidentity.com/pingone/platform/v1/api/#get-read-one-organization
- or -
* https://docs.pingidentity.com/bundle/pingone/page/cxs1575407884833.html
--->
* #0000
|
1.0
|
Support for Organization data source - <!--- Please keep this note for the community --->
### Community Note
* Please vote on this issue by adding a 👍 [reaction](https://blog.github.com/2016-03-10-add-reactions-to-pull-requests-issues-and-comments/) to the original issue to help the community and maintainers prioritize this request
* Please do not leave "+1" or other comments that do not add relevant new information or questions, they generate extra noise for issue followers and do not help prioritize the request
* If you are interested in working on this issue or have submitted a pull request, please leave a comment
<!--- Thank you for keeping this note for the community --->
### Description
<!--- Please provide a helpful description of the feature request here. --->
Support for organisation data source
### New or Affected Resource(s)
<!--- Please provide a list of the new and/or affected resources/data sources, for example: -->
- pingone_organization
<!--- Optionally include a brief description on the type of change required, but this isn't essential -->
### Potential Terraform Configuration
<!-- Information about code formatting: https://help.github.com/articles/basic-writing-and-formatting-syntax/#quoting-code -->
```hcl
# Copy-paste your PingOne related Terraform configurations here - for large Terraform configs,
# please use a service like Dropbox and share a link to the ZIP file. For
# security, you can also encrypt the files using our GPG public key.
# Remember to replace any account/customer sensitive information in the configuration before submitting the issue
data "pingone_organization" "example_by_name" {
name = "internal_org_name_example"
}
```
### References
<!---
Information about referencing Github Issues: https://help.github.com/articles/basic-writing-and-formatting-syntax/#referencing-issues-and-pull-requests
Are there any other GitHub issues (open or closed) or pull requests that should be linked here? Vendor blog posts or documentation? For example:
* https://apidocs.pingidentity.com/pingone/platform/v1/api/#get-read-one-organization
- or -
* https://docs.pingidentity.com/bundle/pingone/page/cxs1575407884833.html
--->
* #0000
|
non_process
|
support for organization data source community note please vote on this issue by adding a 👍 to the original issue to help the community and maintainers prioritize this request please do not leave or other comments that do not add relevant new information or questions they generate extra noise for issue followers and do not help prioritize the request if you are interested in working on this issue or have submitted a pull request please leave a comment description support for organisation data source new or affected resource s pingone organization potential terraform configuration hcl copy paste your pingone related terraform configurations here for large terraform configs please use a service like dropbox and share a link to the zip file for security you can also encrypt the files using our gpg public key remember to replace any account customer sensitive information in the configuration before submitting the issue data pingone organization example by name name internal org name example references information about referencing github issues are there any other github issues open or closed or pull requests that should be linked here vendor blog posts or documentation for example or
| 0
|
6,268
| 9,222,288,806
|
IssuesEvent
|
2019-03-11 22:21:51
|
hashicorp/packer
|
https://api.github.com/repos/hashicorp/packer
|
closed
|
Postprocessor amazon-import creating AMI image with EBS volume size 300GB
|
post-processor/amazon-import
|
Reference:
http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/block-device-mapping-concepts.html
FOR FEATURES:
Postprocessor amazon-import is importing the OVA as a AMI image well , but in the block device mapping associated to the AMI is associating a snapshot that by default has a volume size of 300GB .
Example :
Block Device : /dev/sda1=snap-04973552a14e13c52:300:false:gp2
When we attend to launch a EC2 instance based in the AMI created we cannot set the volume to one small size , less than 300GB . This is a lot of storage!!. In our case we just need 70 GB. There is any way to add a parameter to the postprocessor that we can set the "volume-size".

taking to your logs when running packer:
vmware-iso (amazon-import): Import task import-ami-fh1ncn5d complete
2017/04/26 14:57:21 ui: vmware-iso (amazon-import): Import task import-ami-fh1ncn5d complete
2017/04/26 14:57:21 ui: vmware-iso (amazon-import): Starting rename of AMI (ami-646a0f72)
vmware-iso (amazon-import): Starting rename of AMI (ami-646a0f72)
2017/04/26 14:57:21 ui: vmware-iso (amazon-import): Waiting for AMI rename to complete (may take a while)
vmware-iso (amazon-import): Waiting for AMI rename to complete (may take a while)
2017/04/26 14:57:21 packer: 2017/04/26 14:57:21 Waiting for state to become: available
2017/04/26 14:57:21 packer: 2017/04/26 14:57:21 Using 2s as polling delay (change with AWS_POLL_DELAY_SECONDS)
2017/04/26 14:57:21 packer: 2017/04/26 14:57:21 Allowing 300s to complete (change with AWS_TIMEOUT_SECONDS)
2017/04/26 15:02:26 ui: vmware-iso (amazon-import): AMI rename completed
vmware-iso (amazon-import): AMI rename completed
2017/04/26 15:02:26 packer: 2017/04/26 15:02:26 Repacking tags into AWS format
2017/04/26 15:02:26 ui: vmware-iso (amazon-import): Adding tag "Description": "packer amazon-import Safeconnect OVA 1493227619"
vmware-iso (amazon-import): Adding tag "Description": "packer amazon-import Safeconnect OVA 1493227619"
2017/04/26 15:02:26 packer: 2017/04/26 15:02:26 Getting details of ami-1f771209
2017/04/26 15:02:26 packer: 2017/04/26 15:02:26 **Walking block device mappings for ami-1f771209 to find snapshots
2017/04/26 15:02:26 ui: vmware-iso (amazon-import): Tagging snapshot snap-04973552a14e13c52**
vmware-iso (amazon-import): Tagging snapshot snap-04973552a14e13c52
2017/04/26 15:02:26 ui: vmware-iso (amazon-import): Tagging AMI ami-1f771209
vmware-iso (amazon-import): Tagging AMI ami-1f771209
2017/04/26 15:02:26 packer: 2017/04/26 15:02:26 Adding created AMI ID ami-1f771209 in region us-east-1 to output artifacts
2017/04/26 15:02:26 ui: vmware-iso (amazon-import): Deleting import source s3://impulse-ova-6-4-4/safeconnect-6.4.ova
vmware-iso (amazon-import): Deleting import source s3://impulse-ova-6-4-4/safeconnect-6.4.ova
2017/04/26 15:02:27 ui: Build 'vmware-iso' finished.
looks like that your implementation when is copying the AMI generated from the conversion to a new ami with the name defined , you are assigning a block device mapping with a snapshot that you search , we need that this dont happen. For small solutions we dont need that amount of volume size.
|
1.0
|
Postprocessor amazon-import creating AMI image with EBS volume size 300GB - Reference:
http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/block-device-mapping-concepts.html
FOR FEATURES:
Postprocessor amazon-import is importing the OVA as a AMI image well , but in the block device mapping associated to the AMI is associating a snapshot that by default has a volume size of 300GB .
Example :
Block Device : /dev/sda1=snap-04973552a14e13c52:300:false:gp2
When we attend to launch a EC2 instance based in the AMI created we cannot set the volume to one small size , less than 300GB . This is a lot of storage!!. In our case we just need 70 GB. There is any way to add a parameter to the postprocessor that we can set the "volume-size".

taking to your logs when running packer:
vmware-iso (amazon-import): Import task import-ami-fh1ncn5d complete
2017/04/26 14:57:21 ui: vmware-iso (amazon-import): Import task import-ami-fh1ncn5d complete
2017/04/26 14:57:21 ui: vmware-iso (amazon-import): Starting rename of AMI (ami-646a0f72)
vmware-iso (amazon-import): Starting rename of AMI (ami-646a0f72)
2017/04/26 14:57:21 ui: vmware-iso (amazon-import): Waiting for AMI rename to complete (may take a while)
vmware-iso (amazon-import): Waiting for AMI rename to complete (may take a while)
2017/04/26 14:57:21 packer: 2017/04/26 14:57:21 Waiting for state to become: available
2017/04/26 14:57:21 packer: 2017/04/26 14:57:21 Using 2s as polling delay (change with AWS_POLL_DELAY_SECONDS)
2017/04/26 14:57:21 packer: 2017/04/26 14:57:21 Allowing 300s to complete (change with AWS_TIMEOUT_SECONDS)
2017/04/26 15:02:26 ui: vmware-iso (amazon-import): AMI rename completed
vmware-iso (amazon-import): AMI rename completed
2017/04/26 15:02:26 packer: 2017/04/26 15:02:26 Repacking tags into AWS format
2017/04/26 15:02:26 ui: vmware-iso (amazon-import): Adding tag "Description": "packer amazon-import Safeconnect OVA 1493227619"
vmware-iso (amazon-import): Adding tag "Description": "packer amazon-import Safeconnect OVA 1493227619"
2017/04/26 15:02:26 packer: 2017/04/26 15:02:26 Getting details of ami-1f771209
2017/04/26 15:02:26 packer: 2017/04/26 15:02:26 **Walking block device mappings for ami-1f771209 to find snapshots
2017/04/26 15:02:26 ui: vmware-iso (amazon-import): Tagging snapshot snap-04973552a14e13c52**
vmware-iso (amazon-import): Tagging snapshot snap-04973552a14e13c52
2017/04/26 15:02:26 ui: vmware-iso (amazon-import): Tagging AMI ami-1f771209
vmware-iso (amazon-import): Tagging AMI ami-1f771209
2017/04/26 15:02:26 packer: 2017/04/26 15:02:26 Adding created AMI ID ami-1f771209 in region us-east-1 to output artifacts
2017/04/26 15:02:26 ui: vmware-iso (amazon-import): Deleting import source s3://impulse-ova-6-4-4/safeconnect-6.4.ova
vmware-iso (amazon-import): Deleting import source s3://impulse-ova-6-4-4/safeconnect-6.4.ova
2017/04/26 15:02:27 ui: Build 'vmware-iso' finished.
It looks like when your implementation copies the AMI generated from the conversion to a new AMI with the defined name, it assigns a block device mapping with a snapshot that it looks up; we need this not to happen. For small solutions we do not need that amount of volume size.
|
process
|
postprocessor amazon import creating ami image with ebs volume size reference for features postprocessor amazon import is importing the ova as a ami image well but in the block device mapping associated to the ami is associating a snapshot that by default has a volume size of example block device dev snap false when we attend to launch a instance based in the ami created we cannot set the volume to one small size less than this is a lot of storage in our case we just need gb there is any way to add a parameter to the postprocessor that we can set the volume size taking to your logs when running packer vmware iso amazon import import task import ami complete ui vmware iso amazon import import task import ami complete ui vmware iso amazon import starting rename of ami ami vmware iso amazon import starting rename of ami ami ui vmware iso amazon import waiting for ami rename to complete may take a while vmware iso amazon import waiting for ami rename to complete may take a while packer waiting for state to become available packer using as polling delay change with aws poll delay seconds packer allowing to complete change with aws timeout seconds ui vmware iso amazon import ami rename completed vmware iso amazon import ami rename completed packer repacking tags into aws format ui vmware iso amazon import adding tag description packer amazon import safeconnect ova vmware iso amazon import adding tag description packer amazon import safeconnect ova packer getting details of ami packer walking block device mappings for ami to find snapshots ui vmware iso amazon import tagging snapshot snap vmware iso amazon import tagging snapshot snap ui vmware iso amazon import tagging ami ami vmware iso amazon import tagging ami ami packer adding created ami id ami in region us east to output artifacts ui vmware iso amazon import deleting import source impulse ova safeconnect ova vmware iso amazon import deleting import source impulse ova safeconnect ova ui build vmware iso finished looks like that your implementation when is copying the ami generated from the conversion to a new ami with the name defined you are assigning a block device mapping with a snapshot that you search we need that this dont happen for small solutions we dont need that amount of volume size
| 1
|
726,335
| 24,995,599,473
|
IssuesEvent
|
2022-11-02 23:39:02
|
SzFMV2022-Osz/AutomatedCar-A
|
https://api.github.com/repos/SzFMV2022-Osz/AutomatedCar-A
|
closed
|
Powertrain class implementation
|
effort: moderate priority: critical
|
This class connects the Messenger with the steering and the motor. It starts the appropriate methods of the latter classes asynchronously, then waits for the requested methods to finish executing. It then sums the resulting vectors and passes them on to the VFB, along with the other data, e.g. speed, steering position...
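A minimal sketch of the described flow (start the steering and engine calculations asynchronously, wait for both, then sum the resulting vectors for the VFB) is given below. The project itself appears to be C#; the sketch uses Java only because that is the language of the other code in this document, and every name in it is an assumption for illustration.
```java
import java.util.concurrent.CompletableFuture;

// Illustrative sketch only: all names are assumptions; the real project is C#.
public final class PowertrainSketch {

    // Stand-in for the steering and engine components described above.
    interface ForceSource { double[] calculate(double input); }

    private final ForceSource steering;
    private final ForceSource engine;

    public PowertrainSketch(final ForceSource steering, final ForceSource engine) {
        this.steering = steering;
        this.engine = engine;
    }

    /** Starts both calculations asynchronously, waits for them, then sums the vectors. */
    public double[] update(final double steeringWheelAngle, final double throttle) {
        CompletableFuture<double[]> steeringForce =
                CompletableFuture.supplyAsync(() -> steering.calculate(steeringWheelAngle));
        CompletableFuture<double[]> engineForce =
                CompletableFuture.supplyAsync(() -> engine.calculate(throttle));

        double[] a = steeringForce.join();   // wait for the steering vector
        double[] b = engineForce.join();     // wait for the engine vector
        return new double[] { a[0] + b[0], a[1] + b[1] };  // summed vector passed on to the VFB
    }
}
```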
|
1.0
|
Powertrain class implementation - This class connects the Messenger with the steering and the motor. It starts the appropriate methods of the latter classes asynchronously, then waits for the requested methods to finish executing. It then sums the resulting vectors and passes them on to the VFB, along with the other data, e.g. speed, steering position...
|
non_process
|
powertrain class implementation this class connects the messenger with the steering and the motor it starts the appropriate methods of the latter classes asynchronously then waits for the requested methods to finish executing it then sums the resulting vectors and passes them on to the vfb along with the other data e g speed steering position
| 0
|
22,650
| 31,895,827,498
|
IssuesEvent
|
2023-09-18 01:31:57
|
tdwg/dwc
|
https://api.github.com/repos/tdwg/dwc
|
closed
|
Change term - highestBiostratigraphicZone
|
Term - change Class - GeologicalContext normative Task Group - Material Sample Process - complete
|
## Term change
* Submitter: [Material Sample Task Group](https://www.tdwg.org/community/osr/material-sample/)
* Efficacy Justification (why is this change necessary?): Create consistency of terms for material in Darwin Core.
* Demand Justification (if the change is semantic in nature, name at least two organizations that independently need this term): [Material Sample Task Group](https://www.tdwg.org/community/osr/material-sample/), which includes representatives of over 10 organizations.
* Stability Justification (what concerns are there that this might affect existing implementations?): None
* Implications for dwciri: namespace (does this change affect a dwciri term version)?: No
Current Term definition: https://dwc.tdwg.org/list/#dwc_highestBiostratigraphicZone
Proposed attributes of the new term version (Please put actual changes to be implemented in **bold** and ~strikethrough~):
* Term name (in lowerCamelCase for properties, UpperCamelCase for classes): highestBiostratigraphicZone
* Term label (English, not normative): Highest Biostratigraphic Zone
* Organized in Class (e.g., Occurrence, Event, Location, Taxon): Geological Context
* Definition of the term (normative): The full name of the highest possible geological biostratigraphic zone of the stratigraphic horizon from which the ~~cataloged item~~**dwc:MaterialEntity** was collected.
* Usage comments (recommendations regarding content, etc., not normative):
* Examples (not normative): Blancan
* Refines (identifier of the broader term this term refines; normative): None
* Replaces (identifier of the existing term that would be deprecated and replaced by this term; normative): None
* ABCD 2.06 (XPATH of the equivalent term in ABCD or EFG; not normative): not in ABCD
|
1.0
|
Change term - highestBiostratigraphicZone - ## Term change
* Submitter: [Material Sample Task Group](https://www.tdwg.org/community/osr/material-sample/)
* Efficacy Justification (why is this change necessary?): Create consistency of terms for material in Darwin Core.
* Demand Justification (if the change is semantic in nature, name at least two organizations that independently need this term): [Material Sample Task Group](https://www.tdwg.org/community/osr/material-sample/), which includes representatives of over 10 organizations.
* Stability Justification (what concerns are there that this might affect existing implementations?): None
* Implications for dwciri: namespace (does this change affect a dwciri term version)?: No
Current Term definition: https://dwc.tdwg.org/list/#dwc_highestBiostratigraphicZone
Proposed attributes of the new term version (Please put actual changes to be implemented in **bold** and ~strikethrough~):
* Term name (in lowerCamelCase for properties, UpperCamelCase for classes): highestBiostratigraphicZone
* Term label (English, not normative): Highest Biostratigraphic Zone
* Organized in Class (e.g., Occurrence, Event, Location, Taxon): Geological Context
* Definition of the term (normative): The full name of the highest possible geological biostratigraphic zone of the stratigraphic horizon from which the ~~cataloged item~~**dwc:MaterialEntity** was collected.
* Usage comments (recommendations regarding content, etc., not normative):
* Examples (not normative): Blancan
* Refines (identifier of the broader term this term refines; normative): None
* Replaces (identifier of the existing term that would be deprecated and replaced by this term; normative): None
* ABCD 2.06 (XPATH of the equivalent term in ABCD or EFG; not normative): not in ABCD
|
process
|
change term highestbiostratigraphiczone term change submitter efficacy justification why is this change necessary create consistency of terms for material in darwin core demand justification if the change is semantic in nature name at least two organizations that independently need this term which includes representatives of over organizations stability justification what concerns are there that this might affect existing implementations none implications for dwciri namespace does this change affect a dwciri term version no current term definition proposed attributes of the new term version please put actual changes to be implemented in bold and strikethrough term name in lowercamelcase for properties uppercamelcase for classes highestbiostratigraphiczone term label english not normative highest biostratigraphic zone organized in class e g occurrence event location taxon geological context definition of the term normative the full name of the highest possible geological biostratigraphic zone of the stratigraphic horizon from which the cataloged item dwc materialentity was collected usage comments recommendations regarding content etc not normative examples not normative blancan refines identifier of the broader term this term refines normative none replaces identifier of the existing term that would be deprecated and replaced by this term normative none abcd xpath of the equivalent term in abcd or efg not normative not in abcd
| 1
|
145
| 2,577,438,484
|
IssuesEvent
|
2015-02-12 17:01:27
|
tinkerpop/tinkerpop3
|
https://api.github.com/repos/tinkerpop/tinkerpop3
|
opened
|
[proposal] Blow out the TraversalEngine concept
|
enhancement process
|
`TraversalEngine` is currently an enum of `STANDARD` (OLTP) and `COMPUTER` (OLAP). I think it needs to be an interface with the following methods.
```java
public void applyStrategies(final Traversal traversal)
public TraversalStrategies getTraversalStrategies()
public Graph getGraph()
```
`TinkerGraph` will provide `TinkerGraphComputerTraversalEngine`. This guy will:
```java
public void applyStrategies(final Traversal traversal) {
    this.getTraversalStrategies().apply(traversal);
    if (traversal.isRootTraversal())
        traversal.addStep(new ComputerResultStep(this.getGraph().compute().workers(workers).program(TraversalVertexProgram.class).traversal(traversal)));
}
```
When the `Traversal` is created (e.g. `g.V` or `v.out`), the `TraversalEngine` is provided to the traversal by, e.g., `TinkerGraph`, `TinkerVertex`. The user's desired execution source is specified as follows:
```java
tinkerGraph.setTraversalEngine(TinkerGraphComputerTraversalEngine.create().workers(2).build()); // OLAP
tinkerGraph.setTraversalEngine(StandardTraversalEngine.instance()); // OLTP
tinkerGraph.setTraversalEngine(GremlinServerTraversalEngine.create().host("1.2.3").port(23).remoteEngine(StandardTraversalEngine.instance()).build()); // GremlinServer
```
The first time a Traversal is `next()'d` or `hasNext()'d`, the `TraversalEngine` of the `Traversal` is executed. Note that `ComputerResultStep()` is added by `TinkerGraphComputerTraversalEngine` (see above). This means, at `next()` -- `TinkerGraphComputer` is submitted a `TraversalVertexProgram` and executed. The first `next()` is the result of the `TraversalVertexProgram`. This is sort of how we have it now, though `ComputerResultStep()` "wraps" the submitted traversal.
....there are still lots of holes -- e.g. do we need a `TraversalStrategiesCache`? How does `GremlinServer` come into play? Can different traversals within a traversal have different `TraversalEngines`.......
anywho....just trying to get ideas out.
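As a rough sketch of the OLTP side of this proposal (not actual TinkerPop code), a `StandardTraversalEngine` could be as small as the following. The constructor shape and field names are assumptions, and the `TraversalEngine`, `Traversal`, `TraversalStrategies` and `Graph` types are the proposed ones discussed above, so this is a design illustration rather than compilable library code:
```java
// Minimal OLTP engine sketch: applies strategies in place and adds no extra steps.
public final class StandardTraversalEngine implements TraversalEngine {

    private final Graph graph;
    private final TraversalStrategies strategies;

    public StandardTraversalEngine(final Graph graph, final TraversalStrategies strategies) {
        this.graph = graph;
        this.strategies = strategies;
    }

    public void applyStrategies(final Traversal traversal) {
        // OLTP: no ComputerResultStep is appended, the traversal executes as-is.
        this.getTraversalStrategies().apply(traversal);
    }

    public TraversalStrategies getTraversalStrategies() {
        return this.strategies;
    }

    public Graph getGraph() {
        return this.graph;
    }
}
```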
|
1.0
|
[proposal] Blow out the TraversalEngine concept - `TraversalEngine` is currently an enum of `STANDARD` (OLTP) and `COMPUTER` (OLAP). I think it needs to be an interface with the following methods.
```java
public void applyStrategies(final Traversal traversal)
public TraversalStrategies getTraversalStrategies()
public Graph getGraph()
```
`TinkerGraph` will provide `TinkerGraphComputerTraversalEngine`. This guy will:
```java
public void applyStrategies(final Traversal traversal) {
    this.getTraversalStrategies().apply(traversal);
    if (traversal.isRootTraversal())
        traversal.addStep(new ComputerResultStep(this.getGraph().compute().workers(workers).program(TraversalVertexProgram.class).traversal(traversal)));
}
```
When the `Traversal` is created (e.g. `g.V` or `v.out`), the `TraversalEngine` is provided to the traversal by, e.g., `TinkerGraph`, `TinkerVertex`. The user's desired execution source is specified as follows:
```java
tinkerGraph.setTraversalEngine(TinkerGraphComputerTraversalEngine.create().workers(2).build()); // OLAP
tinkerGraph.setTraversalEngine(StandardTraversalEngine.instance()); // OLTP
tinkerGraph.setTraversalEngine(GremlinServerTraversalEngine.create().host("1.2.3").port(23).remoteEngine(StandardTraversalEngine.instance()).build()); // GremlinServer
```
The first time a Traversal is `next()'d` or `hasNext()'d`, the `TraversalEngine` of the `Traversal` is executed. Note that `ComputerResultStep()` is added by `TinkerGraphComputerTraversalEngine` (see above). This means, at `next()` -- `TinkerGraphComputer` is submitted a `TraversalVertexProgram` and executed. The first `next()` is the result of the `TraversalVertexProgram`. This is sort of how we have it now, though `ComputerResultStep()` "wraps" the submitted traversal.
....there are still lots of holes -- e.g. do we need a `TraversalStrategiesCache`? How does `GremlinServer` come into play? Can different traversals within a traversal have different `TraversalEngines`.......
anywho....just trying to get ideas out.
|
process
|
blow out the traversalengine concept traversalengine is currently an enum of standard oltp and computer olap i think it needs to be an interface with the following methods java public void applystrategies final traversal traversal public traversalstrategies gettraversalstrategies public graph getgraph tinkergraph will provide tinkergraphcomputertraversalengine this guy will java public void applystrategies final traversal traversal this gettraversalstrategies apply traversal if traversal isroottraversal traversal addstep new computerresultstep this getgraph compute workers workers program traversalvertexprogram class traversal traversal when the traversal is created e g g v or v out the traversalengine is provided to the traversal by e g tinkergraph tinkervertex the user desired execution source is stated as such java tinkergraph settraversalengine tinkergraphcomputertraversalengine create workers build olap tinkergraph settraversalengine standardtraversalengine instance oltp tinkergraph settraversalengine gremlinservertraversalengine create host port remoteengine standardtraversalengine instance build gremlinserver the first time a traversal is next d or hasnext d the traversalengine of the traversal is executed note that computeresultstep is added by tinkergraphcomputertraversalengine see above this means at next tinkergraphcomputer is submitted a traversalvertexprogram and executed the first next is the result of the traversalvertexprogram this is sort of how we have it now though computerresultstep wraps the submitted traversal there are still lots of holes e g do we need a traversalstrategiescache how does gremlinserver come into play can different traversals within a traversal have different traversalengines anywho just trying to get ideas out
| 1
|
13,651
| 16,360,131,740
|
IssuesEvent
|
2021-05-14 08:10:39
|
DevExpress/testcafe-hammerhead
|
https://api.github.com/repos/DevExpress/testcafe-hammerhead
|
closed
|
TypeError Cannot read property 'prototype' of undefined - page failing to load
|
AREA: client SYSTEM: client side processing TYPE: bug
|
Hello team!
I am seeing an issue where the frontend page is failing to load due to a TypeError; hammerhead seems to have an issue rewriting window.location.
Browser console error;
```
hammerhead.js:14136 Uncaught TypeError: Cannot read property 'prototype' of undefined
at new Location (hammerhead.js:14136)
at Object.get (hammerhead.js:19834)
at value (hammerhead.js:19873)
at Object.location (2.594d2bf1.chunk.js:2)
at Object.o.getHostnameNoWww (2.594d2bf1.chunk.js:2)
at new h (2.594d2bf1.chunk.js:2)
at Module.<anonymous> (2.594d2bf1.chunk.js:2)
at i (2.594d2bf1.chunk.js:2)
at 2.594d2bf1.chunk.js:2
at Object.<anonymous> (2.594d2bf1.chunk.js:2)
```
Here is the test code;
```
import {Selector} from 'testcafe';
const okButton = Selector('[data-test="ok"]');
fixture("My xFixture")
.page("https://savings.secure.investec.com/login");
test("First test", async (t) => {
await t
.debug()
.click(okButton);
});
```
Testcafe config;
```
{
  "browsers": ["chrome"],
  "src": ["tests/firstTest.js"],
  "quarantineMode": true,
  "debugMode": true,
  "debugOnFail": true,
  "stopOnFirstFail": true,
  "skipJsErrors": true,
  "skipUncaughtErrors": true,
  "appInitDelay": 3000,
  "concurrency": 1,
  "selectorTimeout": 60000,
  "assertionTimeout": 60000,
  "pageLoadTimeout": 60000,
  "speed": 1,
  "disablePageCaching": false,
  "developmentMode": true,
  "qrCode": true,
  "hostname": "127.0.0.1"
}
```
### Your Environment details:
* node.js version: v14.15.
* browser name and version: IE11, Chrome 69, Firefox 100
* platform and version: Windows
Any advice would be much appreciated, do let me know if you require further information.
Thank you.
Stephen
|
1.0
|
TypeError Cannot read property 'prototype' of undefined - page failing to load - Hello team!
I am seeing an issue where the frontend page is failing to load due to a TypeError; hammerhead seems to have an issue rewriting window.location.
Browser console error;
```
hammerhead.js:14136 Uncaught TypeError: Cannot read property 'prototype' of undefined
at new Location (hammerhead.js:14136)
at Object.get (hammerhead.js:19834)
at value (hammerhead.js:19873)
at Object.location (2.594d2bf1.chunk.js:2)
at Object.o.getHostnameNoWww (2.594d2bf1.chunk.js:2)
at new h (2.594d2bf1.chunk.js:2)
at Module.<anonymous> (2.594d2bf1.chunk.js:2)
at i (2.594d2bf1.chunk.js:2)
at 2.594d2bf1.chunk.js:2
at Object.<anonymous> (2.594d2bf1.chunk.js:2)
```
Here is the test code;
```
import {Selector} from 'testcafe';
const okButton = Selector('[data-test="ok"]');
fixture("My xFixture")
.page("https://savings.secure.investec.com/login");
test("First test", async (t) => {
await t
.debug()
.click(okButton);
});
```
Testcafe config;
```
{
  "browsers": ["chrome"],
  "src": ["tests/firstTest.js"],
  "quarantineMode": true,
  "debugMode": true,
  "debugOnFail": true,
  "stopOnFirstFail": true,
  "skipJsErrors": true,
  "skipUncaughtErrors": true,
  "appInitDelay": 3000,
  "concurrency": 1,
  "selectorTimeout": 60000,
  "assertionTimeout": 60000,
  "pageLoadTimeout": 60000,
  "speed": 1,
  "disablePageCaching": false,
  "developmentMode": true,
  "qrCode": true,
  "hostname": "127.0.0.1"
}
```
### Your Environment details:
* node.js version: v14.15.
* browser name and version: IE11, Chrome 69, Firefox 100
* platform and version: Windows
Any advice would be much appreciated, do let me know if you require further information.
Thank you.
Stephen
|
process
|
typeerror cannot read property prototype of undefined page failing to load hello team i am seeing an issue where the frontend page is failing to load due to a typeerror hammerhead seems to have an issue rewriting window location browser console error hammerhead js uncaught typeerror cannot read property prototype of undefined at new location hammerhead js at object get hammerhead js at value hammerhead js at object location chunk js at object o gethostnamenowww chunk js at new h chunk js at module chunk js at i chunk js at chunk js at object chunk js here is the test code import selector from testcafe const okbutton selector fixture my xfixture page test first test async t await t debug click okbutton testcafe config browsers src quarantinemode true debugmode true debugonfail true stoponfirstfail true skipjserrors true skipuncaughterrors true appinitdelay concurrency selectortimeout assertiontimeout pageloadtimeout speed disablepagecaching false developmentmode true qrcode true hostname your environment details node js version browser name and version chrome firefox platform and version windows any advice would be much appreciated do let me know if you require further information thank you stephen
| 1
|
12,859
| 15,244,345,547
|
IssuesEvent
|
2021-02-19 12:38:32
|
topcoder-platform/community-app
|
https://api.github.com/repos/topcoder-platform/community-app
|
opened
|
Highlighting Matched Skills: Clarification
|
ShapeupProcess challenge- recommender-tool question
|
@Oanh-and-only-Oanh ,
The Open for registration list shows the copilot-entered tags below the challenge. When the recommended challenges toggle is on, the matched skills are displayed. These are extracted skills from the spec.
Is this behaviour fine, where the user sees additional skills for the challenge once the recommended toggle is on?
<img width="983" alt="Screenshot 2021-02-19 at 6 04 51 PM" src="https://user-images.githubusercontent.com/58783823/108505443-6b643400-72dd-11eb-8b45-7319a630de91.png">
<img width="978" alt="Screenshot 2021-02-19 at 6 05 46 PM" src="https://user-images.githubusercontent.com/58783823/108505445-6c956100-72dd-11eb-8b79-30d4c9d8dda9.png">
|
1.0
|
Highlighting Matched Skills: Clarification - @Oanh-and-only-Oanh ,
The Open for registration list shows the copilot-entered tags below the challenge. When the recommended challenges toggle is on, the matched skills are displayed. These are extracted skills from the spec.
Is this behaviour fine, where the user sees additional skills for the challenge once the recommended toggle is on?
<img width="983" alt="Screenshot 2021-02-19 at 6 04 51 PM" src="https://user-images.githubusercontent.com/58783823/108505443-6b643400-72dd-11eb-8b45-7319a630de91.png">
<img width="978" alt="Screenshot 2021-02-19 at 6 05 46 PM" src="https://user-images.githubusercontent.com/58783823/108505445-6c956100-72dd-11eb-8b79-30d4c9d8dda9.png">
|
process
|
highlighting matched skills clarification oanh and only oanh the open for registration list shows the copilot entered tags below the challenge when the recommended challenges toggle is on the matched skills are displayed these are extracted skills form the spec is this behaviour fine where the user sees additional skills to the challenge once recommended toggle is on img width alt screenshot at pm src img width alt screenshot at pm src
| 1
|
15,498
| 19,703,243,279
|
IssuesEvent
|
2022-01-12 18:50:44
|
googleapis/google-cloud-php
|
https://api.github.com/repos/googleapis/google-cloud-php
|
opened
|
Your .repo-metadata.json files have a problem 🤒
|
type: process repo-metadata: lint
|
You have a problem with your .repo-metadata.json files:
Result of scan 📈:
* client_documentation must match pattern "^https://.*" in AccessApproval/.repo-metadata.json
* release_level must be equal to one of the allowed values in AccessApproval/.repo-metadata.json
* api_shortname field missing from AccessApproval/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in AccessContextManager/.repo-metadata.json
* release_level must be equal to one of the allowed values in AccessContextManager/.repo-metadata.json
* api_shortname field missing from AccessContextManager/.repo-metadata.json
* release_level must be equal to one of the allowed values in AnalyticsAdmin/.repo-metadata.json
* api_shortname field missing from AnalyticsAdmin/.repo-metadata.json
* release_level must be equal to one of the allowed values in AnalyticsData/.repo-metadata.json
* api_shortname field missing from AnalyticsData/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in ApiGateway/.repo-metadata.json
* release_level must be equal to one of the allowed values in ApiGateway/.repo-metadata.json
* api_shortname field missing from ApiGateway/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in ApigeeConnect/.repo-metadata.json
* release_level must be equal to one of the allowed values in ApigeeConnect/.repo-metadata.json
* api_shortname field missing from ApigeeConnect/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in AppEngineAdmin/.repo-metadata.json
* release_level must be equal to one of the allowed values in AppEngineAdmin/.repo-metadata.json
* api_shortname field missing from AppEngineAdmin/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in ArtifactRegistry/.repo-metadata.json
* release_level must be equal to one of the allowed values in ArtifactRegistry/.repo-metadata.json
* api_shortname field missing from ArtifactRegistry/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in Asset/.repo-metadata.json
* release_level must be equal to one of the allowed values in Asset/.repo-metadata.json
* api_shortname field missing from Asset/.repo-metadata.json
* release_level must be equal to one of the allowed values in AssuredWorkloads/.repo-metadata.json
* api_shortname field missing from AssuredWorkloads/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in AutoMl/.repo-metadata.json
* release_level must be equal to one of the allowed values in AutoMl/.repo-metadata.json
* api_shortname field missing from AutoMl/.repo-metadata.json
* release_level must be equal to one of the allowed values in BigQuery/.repo-metadata.json
* api_shortname field missing from BigQuery/.repo-metadata.json
* release_level must be equal to one of the allowed values in BigQueryConnection/.repo-metadata.json
* api_shortname field missing from BigQueryConnection/.repo-metadata.json
* release_level must be equal to one of the allowed values in BigQueryDataTransfer/.repo-metadata.json
* api_shortname field missing from BigQueryDataTransfer/.repo-metadata.json
* release_level must be equal to one of the allowed values in BigQueryReservation/.repo-metadata.json
* api_shortname field missing from BigQueryReservation/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in BigQueryStorage/.repo-metadata.json
* release_level must be equal to one of the allowed values in BigQueryStorage/.repo-metadata.json
* api_shortname field missing from BigQueryStorage/.repo-metadata.json
* release_level must be equal to one of the allowed values in Bigtable/.repo-metadata.json
* api_shortname field missing from Bigtable/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in Billing/.repo-metadata.json
* release_level must be equal to one of the allowed values in Billing/.repo-metadata.json
* api_shortname field missing from Billing/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in BillingBudgets/.repo-metadata.json
* release_level must be equal to one of the allowed values in BillingBudgets/.repo-metadata.json
* api_shortname field missing from BillingBudgets/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in BinaryAuthorization/.repo-metadata.json
* release_level must be equal to one of the allowed values in BinaryAuthorization/.repo-metadata.json
* api_shortname field missing from BinaryAuthorization/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in Build/.repo-metadata.json
* release_level must be equal to one of the allowed values in Build/.repo-metadata.json
* api_shortname field missing from Build/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in Channel/.repo-metadata.json
* release_level must be equal to one of the allowed values in Channel/.repo-metadata.json
* api_shortname field missing from Channel/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in CommonProtos/.repo-metadata.json
* release_level must be equal to one of the allowed values in CommonProtos/.repo-metadata.json
* release_level must be equal to one of the allowed values in Compute/.repo-metadata.json
* api_shortname field missing from Compute/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in ContactCenterInsights/.repo-metadata.json
* release_level must be equal to one of the allowed values in ContactCenterInsights/.repo-metadata.json
* api_shortname field missing from ContactCenterInsights/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in Container/.repo-metadata.json
* release_level must be equal to one of the allowed values in Container/.repo-metadata.json
* api_shortname field missing from Container/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in ContainerAnalysis/.repo-metadata.json
* release_level must be equal to one of the allowed values in ContainerAnalysis/.repo-metadata.json
* api_shortname field missing from ContainerAnalysis/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in Core/.repo-metadata.json
* release_level must be equal to one of the allowed values in Core/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in DataCatalog/.repo-metadata.json
* release_level must be equal to one of the allowed values in DataCatalog/.repo-metadata.json
* api_shortname field missing from DataCatalog/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in DataFusion/.repo-metadata.json
* release_level must be equal to one of the allowed values in DataFusion/.repo-metadata.json
* api_shortname field missing from DataFusion/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in DataLabeling/.repo-metadata.json
* release_level must be equal to one of the allowed values in DataLabeling/.repo-metadata.json
* api_shortname field missing from DataLabeling/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in Dataflow/.repo-metadata.json
* release_level must be equal to one of the allowed values in Dataflow/.repo-metadata.json
* api_shortname field missing from Dataflow/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in Dataproc/.repo-metadata.json
* release_level must be equal to one of the allowed values in Dataproc/.repo-metadata.json
* api_shortname field missing from Dataproc/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in DataprocMetastore/.repo-metadata.json
* release_level must be equal to one of the allowed values in DataprocMetastore/.repo-metadata.json
* api_shortname field missing from DataprocMetastore/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in Datastore/.repo-metadata.json
* release_level must be equal to one of the allowed values in Datastore/.repo-metadata.json
* api_shortname field missing from Datastore/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in DatastoreAdmin/.repo-metadata.json
* release_level must be equal to one of the allowed values in DatastoreAdmin/.repo-metadata.json
* api_shortname field missing from DatastoreAdmin/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in Debugger/.repo-metadata.json
* release_level must be equal to one of the allowed values in Debugger/.repo-metadata.json
* api_shortname field missing from Debugger/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in Deploy/.repo-metadata.json
* release_level must be equal to one of the allowed values in Deploy/.repo-metadata.json
* api_shortname field missing from Deploy/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in Dialogflow/.repo-metadata.json
* release_level must be equal to one of the allowed values in Dialogflow/.repo-metadata.json
* api_shortname field missing from Dialogflow/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in Dlp/.repo-metadata.json
* release_level must be equal to one of the allowed values in Dlp/.repo-metadata.json
* api_shortname field missing from Dlp/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in Dms/.repo-metadata.json
* release_level must be equal to one of the allowed values in Dms/.repo-metadata.json
* api_shortname field missing from Dms/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in DocumentAi/.repo-metadata.json
* release_level must be equal to one of the allowed values in DocumentAi/.repo-metadata.json
* api_shortname field missing from DocumentAi/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in Domains/.repo-metadata.json
* release_level must be equal to one of the allowed values in Domains/.repo-metadata.json
* api_shortname field missing from Domains/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in ErrorReporting/.repo-metadata.json
* release_level must be equal to one of the allowed values in ErrorReporting/.repo-metadata.json
* api_shortname field missing from ErrorReporting/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in EssentialContacts/.repo-metadata.json
* release_level must be equal to one of the allowed values in EssentialContacts/.repo-metadata.json
* api_shortname field missing from EssentialContacts/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in Eventarc/.repo-metadata.json
* release_level must be equal to one of the allowed values in Eventarc/.repo-metadata.json
* api_shortname field missing from Eventarc/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in Filestore/.repo-metadata.json
* release_level must be equal to one of the allowed values in Filestore/.repo-metadata.json
* api_shortname field missing from Filestore/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in Firestore/.repo-metadata.json
* release_level must be equal to one of the allowed values in Firestore/.repo-metadata.json
* api_shortname field missing from Firestore/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in Functions/.repo-metadata.json
* release_level must be equal to one of the allowed values in Functions/.repo-metadata.json
* api_shortname field missing from Functions/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in Gaming/.repo-metadata.json
* release_level must be equal to one of the allowed values in Gaming/.repo-metadata.json
* api_shortname field missing from Gaming/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in GkeConnectGateway/.repo-metadata.json
* release_level must be equal to one of the allowed values in GkeConnectGateway/.repo-metadata.json
* api_shortname field missing from GkeConnectGateway/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in GkeHub/.repo-metadata.json
* release_level must be equal to one of the allowed values in GkeHub/.repo-metadata.json
* api_shortname field missing from GkeHub/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in Grafeas/.repo-metadata.json
* release_level must be equal to one of the allowed values in Grafeas/.repo-metadata.json
* api_shortname field missing from Grafeas/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in IamCredentials/.repo-metadata.json
* release_level must be equal to one of the allowed values in IamCredentials/.repo-metadata.json
* api_shortname field missing from IamCredentials/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in Iap/.repo-metadata.json
* release_level must be equal to one of the allowed values in Iap/.repo-metadata.json
* api_shortname field missing from Iap/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in Iot/.repo-metadata.json
* release_level must be equal to one of the allowed values in Iot/.repo-metadata.json
* api_shortname field missing from Iot/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in Kms/.repo-metadata.json
* release_level must be equal to one of the allowed values in Kms/.repo-metadata.json
* api_shortname field missing from Kms/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in Language/.repo-metadata.json
* release_level must be equal to one of the allowed values in Language/.repo-metadata.json
* api_shortname field missing from Language/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in LifeSciences/.repo-metadata.json
* release_level must be equal to one of the allowed values in LifeSciences/.repo-metadata.json
* api_shortname field missing from LifeSciences/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in Logging/.repo-metadata.json
* release_level must be equal to one of the allowed values in Logging/.repo-metadata.json
* api_shortname field missing from Logging/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in ManagedIdentities/.repo-metadata.json
* release_level must be equal to one of the allowed values in ManagedIdentities/.repo-metadata.json
* api_shortname field missing from ManagedIdentities/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in MediaTranslation/.repo-metadata.json
* release_level must be equal to one of the allowed values in MediaTranslation/.repo-metadata.json
* api_shortname field missing from MediaTranslation/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in Memcache/.repo-metadata.json
* release_level must be equal to one of the allowed values in Memcache/.repo-metadata.json
* api_shortname field missing from Memcache/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in Monitoring/.repo-metadata.json
* release_level must be equal to one of the allowed values in Monitoring/.repo-metadata.json
* api_shortname field missing from Monitoring/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in NetworkConnectivity/.repo-metadata.json
* release_level must be equal to one of the allowed values in NetworkConnectivity/.repo-metadata.json
* api_shortname field missing from NetworkConnectivity/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in NetworkManagement/.repo-metadata.json
* release_level must be equal to one of the allowed values in NetworkManagement/.repo-metadata.json
* api_shortname field missing from NetworkManagement/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in NetworkSecurity/.repo-metadata.json
* release_level must be equal to one of the allowed values in NetworkSecurity/.repo-metadata.json
* api_shortname field missing from NetworkSecurity/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in Notebooks/.repo-metadata.json
* release_level must be equal to one of the allowed values in Notebooks/.repo-metadata.json
* api_shortname field missing from Notebooks/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in OrchestrationAirflow/.repo-metadata.json
* release_level must be equal to one of the allowed values in OrchestrationAirflow/.repo-metadata.json
* api_shortname field missing from OrchestrationAirflow/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in OrgPolicy/.repo-metadata.json
* release_level must be equal to one of the allowed values in OrgPolicy/.repo-metadata.json
* api_shortname field missing from OrgPolicy/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in OsConfig/.repo-metadata.json
* release_level must be equal to one of the allowed values in OsConfig/.repo-metadata.json
* api_shortname field missing from OsConfig/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in OsLogin/.repo-metadata.json
* release_level must be equal to one of the allowed values in OsLogin/.repo-metadata.json
* api_shortname field missing from OsLogin/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in PolicyTroubleshooter/.repo-metadata.json
* release_level must be equal to one of the allowed values in PolicyTroubleshooter/.repo-metadata.json
* api_shortname field missing from PolicyTroubleshooter/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in PrivateCatalog/.repo-metadata.json
* release_level must be equal to one of the allowed values in PrivateCatalog/.repo-metadata.json
* api_shortname field missing from PrivateCatalog/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in Profiler/.repo-metadata.json
* release_level must be equal to one of the allowed values in Profiler/.repo-metadata.json
* api_shortname field missing from Profiler/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in PubSub/.repo-metadata.json
* release_level must be equal to one of the allowed values in PubSub/.repo-metadata.json
* api_shortname field missing from PubSub/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in RecaptchaEnterprise/.repo-metadata.json
* release_level must be equal to one of the allowed values in RecaptchaEnterprise/.repo-metadata.json
* api_shortname field missing from RecaptchaEnterprise/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in RecommendationEngine/.repo-metadata.json
* release_level must be equal to one of the allowed values in RecommendationEngine/.repo-metadata.json
* api_shortname field missing from RecommendationEngine/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in Recommender/.repo-metadata.json
* release_level must be equal to one of the allowed values in Recommender/.repo-metadata.json
* api_shortname field missing from Recommender/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in Redis/.repo-metadata.json
* release_level must be equal to one of the allowed values in Redis/.repo-metadata.json
* api_shortname field missing from Redis/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in ResourceManager/.repo-metadata.json
* release_level must be equal to one of the allowed values in ResourceManager/.repo-metadata.json
* api_shortname field missing from ResourceManager/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in ResourceSettings/.repo-metadata.json
* release_level must be equal to one of the allowed values in ResourceSettings/.repo-metadata.json
* api_shortname field missing from ResourceSettings/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in Retail/.repo-metadata.json
* release_level must be equal to one of the allowed values in Retail/.repo-metadata.json
* api_shortname field missing from Retail/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in Scheduler/.repo-metadata.json
* release_level must be equal to one of the allowed values in Scheduler/.repo-metadata.json
* api_shortname field missing from Scheduler/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in SecretManager/.repo-metadata.json
* release_level must be equal to one of the allowed values in SecretManager/.repo-metadata.json
* api_shortname field missing from SecretManager/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in SecurityCenter/.repo-metadata.json
* release_level must be equal to one of the allowed values in SecurityCenter/.repo-metadata.json
* api_shortname field missing from SecurityCenter/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in SecurityPrivateCa/.repo-metadata.json
* release_level must be equal to one of the allowed values in SecurityPrivateCa/.repo-metadata.json
* api_shortname field missing from SecurityPrivateCa/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in ServiceControl/.repo-metadata.json
* release_level must be equal to one of the allowed values in ServiceControl/.repo-metadata.json
* api_shortname field missing from ServiceControl/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in ServiceDirectory/.repo-metadata.json
* release_level must be equal to one of the allowed values in ServiceDirectory/.repo-metadata.json
* api_shortname field missing from ServiceDirectory/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in ServiceManagement/.repo-metadata.json
* release_level must be equal to one of the allowed values in ServiceManagement/.repo-metadata.json
* api_shortname field missing from ServiceManagement/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in ServiceUsage/.repo-metadata.json
* release_level must be equal to one of the allowed values in ServiceUsage/.repo-metadata.json
* api_shortname field missing from ServiceUsage/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in Shell/.repo-metadata.json
* release_level must be equal to one of the allowed values in Shell/.repo-metadata.json
* api_shortname field missing from Shell/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in Spanner/.repo-metadata.json
* release_level must be equal to one of the allowed values in Spanner/.repo-metadata.json
* api_shortname field missing from Spanner/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in Speech/.repo-metadata.json
* release_level must be equal to one of the allowed values in Speech/.repo-metadata.json
* api_shortname field missing from Speech/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in SqlAdmin/.repo-metadata.json
* release_level must be equal to one of the allowed values in SqlAdmin/.repo-metadata.json
* api_shortname field missing from SqlAdmin/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in Storage/.repo-metadata.json
* release_level must be equal to one of the allowed values in Storage/.repo-metadata.json
* api_shortname field missing from Storage/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in StorageTransfer/.repo-metadata.json
* release_level must be equal to one of the allowed values in StorageTransfer/.repo-metadata.json
* api_shortname field missing from StorageTransfer/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in Talent/.repo-metadata.json
* release_level must be equal to one of the allowed values in Talent/.repo-metadata.json
* api_shortname field missing from Talent/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in Tasks/.repo-metadata.json
* release_level must be equal to one of the allowed values in Tasks/.repo-metadata.json
* api_shortname field missing from Tasks/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in TextToSpeech/.repo-metadata.json
* release_level must be equal to one of the allowed values in TextToSpeech/.repo-metadata.json
* api_shortname field missing from TextToSpeech/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in Tpu/.repo-metadata.json
* release_level must be equal to one of the allowed values in Tpu/.repo-metadata.json
* api_shortname field missing from Tpu/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in Trace/.repo-metadata.json
* release_level must be equal to one of the allowed values in Trace/.repo-metadata.json
* api_shortname field missing from Trace/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in Translate/.repo-metadata.json
* release_level must be equal to one of the allowed values in Translate/.repo-metadata.json
* api_shortname field missing from Translate/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in VideoIntelligence/.repo-metadata.json
* release_level must be equal to one of the allowed values in VideoIntelligence/.repo-metadata.json
* api_shortname field missing from VideoIntelligence/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in VideoTranscoder/.repo-metadata.json
* release_level must be equal to one of the allowed values in VideoTranscoder/.repo-metadata.json
* api_shortname field missing from VideoTranscoder/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in Vision/.repo-metadata.json
* release_level must be equal to one of the allowed values in Vision/.repo-metadata.json
* api_shortname field missing from Vision/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in VpcAccess/.repo-metadata.json
* release_level must be equal to one of the allowed values in VpcAccess/.repo-metadata.json
* api_shortname field missing from VpcAccess/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in WebRisk/.repo-metadata.json
* release_level must be equal to one of the allowed values in WebRisk/.repo-metadata.json
* api_shortname field missing from WebRisk/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in WebSecurityScanner/.repo-metadata.json
* release_level must be equal to one of the allowed values in WebSecurityScanner/.repo-metadata.json
* api_shortname field missing from WebSecurityScanner/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in Workflows/.repo-metadata.json
* release_level must be equal to one of the allowed values in Workflows/.repo-metadata.json
* api_shortname field missing from Workflows/.repo-metadata.json
☝️ Once you correct these problems, you can close this issue.
Reach out to **go/github-automation** if you have any questions.
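For illustration only, a `.repo-metadata.json` fragment that satisfies the three checks repeated above (a `client_documentation` URL starting with `https://`, a `release_level` from the allowed set, and an `api_shortname`) might look like the sketch below. The values are placeholders, not taken from any real package, and `"stable"` is assumed to be one of the allowed `release_level` values since the allowed set is not listed in this report.
```
{
  "api_shortname": "example-api",
  "client_documentation": "https://example.com/docs/example-api/latest",
  "release_level": "stable"
}
```
The existing fields of each file stay as they are; only the missing or non-conforming attributes above need to be added or corrected.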
|
1.0
|
Your .repo-metadata.json files have a problem 🤒 - You have a problem with your .repo-metadata.json files:
Result of scan 📈:
* client_documentation must match pattern "^https://.*" in AccessApproval/.repo-metadata.json
* release_level must be equal to one of the allowed values in AccessApproval/.repo-metadata.json
* api_shortname field missing from AccessApproval/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in AccessContextManager/.repo-metadata.json
* release_level must be equal to one of the allowed values in AccessContextManager/.repo-metadata.json
* api_shortname field missing from AccessContextManager/.repo-metadata.json
* release_level must be equal to one of the allowed values in AnalyticsAdmin/.repo-metadata.json
* api_shortname field missing from AnalyticsAdmin/.repo-metadata.json
* release_level must be equal to one of the allowed values in AnalyticsData/.repo-metadata.json
* api_shortname field missing from AnalyticsData/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in ApiGateway/.repo-metadata.json
* release_level must be equal to one of the allowed values in ApiGateway/.repo-metadata.json
* api_shortname field missing from ApiGateway/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in ApigeeConnect/.repo-metadata.json
* release_level must be equal to one of the allowed values in ApigeeConnect/.repo-metadata.json
* api_shortname field missing from ApigeeConnect/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in AppEngineAdmin/.repo-metadata.json
* release_level must be equal to one of the allowed values in AppEngineAdmin/.repo-metadata.json
* api_shortname field missing from AppEngineAdmin/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in ArtifactRegistry/.repo-metadata.json
* release_level must be equal to one of the allowed values in ArtifactRegistry/.repo-metadata.json
* api_shortname field missing from ArtifactRegistry/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in Asset/.repo-metadata.json
* release_level must be equal to one of the allowed values in Asset/.repo-metadata.json
* api_shortname field missing from Asset/.repo-metadata.json
* release_level must be equal to one of the allowed values in AssuredWorkloads/.repo-metadata.json
* api_shortname field missing from AssuredWorkloads/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in AutoMl/.repo-metadata.json
* release_level must be equal to one of the allowed values in AutoMl/.repo-metadata.json
* api_shortname field missing from AutoMl/.repo-metadata.json
* release_level must be equal to one of the allowed values in BigQuery/.repo-metadata.json
* api_shortname field missing from BigQuery/.repo-metadata.json
* release_level must be equal to one of the allowed values in BigQueryConnection/.repo-metadata.json
* api_shortname field missing from BigQueryConnection/.repo-metadata.json
* release_level must be equal to one of the allowed values in BigQueryDataTransfer/.repo-metadata.json
* api_shortname field missing from BigQueryDataTransfer/.repo-metadata.json
* release_level must be equal to one of the allowed values in BigQueryReservation/.repo-metadata.json
* api_shortname field missing from BigQueryReservation/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in BigQueryStorage/.repo-metadata.json
* release_level must be equal to one of the allowed values in BigQueryStorage/.repo-metadata.json
* api_shortname field missing from BigQueryStorage/.repo-metadata.json
* release_level must be equal to one of the allowed values in Bigtable/.repo-metadata.json
* api_shortname field missing from Bigtable/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in Billing/.repo-metadata.json
* release_level must be equal to one of the allowed values in Billing/.repo-metadata.json
* api_shortname field missing from Billing/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in BillingBudgets/.repo-metadata.json
* release_level must be equal to one of the allowed values in BillingBudgets/.repo-metadata.json
* api_shortname field missing from BillingBudgets/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in BinaryAuthorization/.repo-metadata.json
* release_level must be equal to one of the allowed values in BinaryAuthorization/.repo-metadata.json
* api_shortname field missing from BinaryAuthorization/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in Build/.repo-metadata.json
* release_level must be equal to one of the allowed values in Build/.repo-metadata.json
* api_shortname field missing from Build/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in Channel/.repo-metadata.json
* release_level must be equal to one of the allowed values in Channel/.repo-metadata.json
* api_shortname field missing from Channel/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in CommonProtos/.repo-metadata.json
* release_level must be equal to one of the allowed values in CommonProtos/.repo-metadata.json
* release_level must be equal to one of the allowed values in Compute/.repo-metadata.json
* api_shortname field missing from Compute/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in ContactCenterInsights/.repo-metadata.json
* release_level must be equal to one of the allowed values in ContactCenterInsights/.repo-metadata.json
* api_shortname field missing from ContactCenterInsights/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in Container/.repo-metadata.json
* release_level must be equal to one of the allowed values in Container/.repo-metadata.json
* api_shortname field missing from Container/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in ContainerAnalysis/.repo-metadata.json
* release_level must be equal to one of the allowed values in ContainerAnalysis/.repo-metadata.json
* api_shortname field missing from ContainerAnalysis/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in Core/.repo-metadata.json
* release_level must be equal to one of the allowed values in Core/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in DataCatalog/.repo-metadata.json
* release_level must be equal to one of the allowed values in DataCatalog/.repo-metadata.json
* api_shortname field missing from DataCatalog/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in DataFusion/.repo-metadata.json
* release_level must be equal to one of the allowed values in DataFusion/.repo-metadata.json
* api_shortname field missing from DataFusion/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in DataLabeling/.repo-metadata.json
* release_level must be equal to one of the allowed values in DataLabeling/.repo-metadata.json
* api_shortname field missing from DataLabeling/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in Dataflow/.repo-metadata.json
* release_level must be equal to one of the allowed values in Dataflow/.repo-metadata.json
* api_shortname field missing from Dataflow/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in Dataproc/.repo-metadata.json
* release_level must be equal to one of the allowed values in Dataproc/.repo-metadata.json
* api_shortname field missing from Dataproc/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in DataprocMetastore/.repo-metadata.json
* release_level must be equal to one of the allowed values in DataprocMetastore/.repo-metadata.json
* api_shortname field missing from DataprocMetastore/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in Datastore/.repo-metadata.json
* release_level must be equal to one of the allowed values in Datastore/.repo-metadata.json
* api_shortname field missing from Datastore/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in DatastoreAdmin/.repo-metadata.json
* release_level must be equal to one of the allowed values in DatastoreAdmin/.repo-metadata.json
* api_shortname field missing from DatastoreAdmin/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in Debugger/.repo-metadata.json
* release_level must be equal to one of the allowed values in Debugger/.repo-metadata.json
* api_shortname field missing from Debugger/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in Deploy/.repo-metadata.json
* release_level must be equal to one of the allowed values in Deploy/.repo-metadata.json
* api_shortname field missing from Deploy/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in Dialogflow/.repo-metadata.json
* release_level must be equal to one of the allowed values in Dialogflow/.repo-metadata.json
* api_shortname field missing from Dialogflow/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in Dlp/.repo-metadata.json
* release_level must be equal to one of the allowed values in Dlp/.repo-metadata.json
* api_shortname field missing from Dlp/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in Dms/.repo-metadata.json
* release_level must be equal to one of the allowed values in Dms/.repo-metadata.json
* api_shortname field missing from Dms/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in DocumentAi/.repo-metadata.json
* release_level must be equal to one of the allowed values in DocumentAi/.repo-metadata.json
* api_shortname field missing from DocumentAi/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in Domains/.repo-metadata.json
* release_level must be equal to one of the allowed values in Domains/.repo-metadata.json
* api_shortname field missing from Domains/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in ErrorReporting/.repo-metadata.json
* release_level must be equal to one of the allowed values in ErrorReporting/.repo-metadata.json
* api_shortname field missing from ErrorReporting/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in EssentialContacts/.repo-metadata.json
* release_level must be equal to one of the allowed values in EssentialContacts/.repo-metadata.json
* api_shortname field missing from EssentialContacts/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in Eventarc/.repo-metadata.json
* release_level must be equal to one of the allowed values in Eventarc/.repo-metadata.json
* api_shortname field missing from Eventarc/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in Filestore/.repo-metadata.json
* release_level must be equal to one of the allowed values in Filestore/.repo-metadata.json
* api_shortname field missing from Filestore/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in Firestore/.repo-metadata.json
* release_level must be equal to one of the allowed values in Firestore/.repo-metadata.json
* api_shortname field missing from Firestore/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in Functions/.repo-metadata.json
* release_level must be equal to one of the allowed values in Functions/.repo-metadata.json
* api_shortname field missing from Functions/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in Gaming/.repo-metadata.json
* release_level must be equal to one of the allowed values in Gaming/.repo-metadata.json
* api_shortname field missing from Gaming/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in GkeConnectGateway/.repo-metadata.json
* release_level must be equal to one of the allowed values in GkeConnectGateway/.repo-metadata.json
* api_shortname field missing from GkeConnectGateway/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in GkeHub/.repo-metadata.json
* release_level must be equal to one of the allowed values in GkeHub/.repo-metadata.json
* api_shortname field missing from GkeHub/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in Grafeas/.repo-metadata.json
* release_level must be equal to one of the allowed values in Grafeas/.repo-metadata.json
* api_shortname field missing from Grafeas/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in IamCredentials/.repo-metadata.json
* release_level must be equal to one of the allowed values in IamCredentials/.repo-metadata.json
* api_shortname field missing from IamCredentials/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in Iap/.repo-metadata.json
* release_level must be equal to one of the allowed values in Iap/.repo-metadata.json
* api_shortname field missing from Iap/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in Iot/.repo-metadata.json
* release_level must be equal to one of the allowed values in Iot/.repo-metadata.json
* api_shortname field missing from Iot/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in Kms/.repo-metadata.json
* release_level must be equal to one of the allowed values in Kms/.repo-metadata.json
* api_shortname field missing from Kms/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in Language/.repo-metadata.json
* release_level must be equal to one of the allowed values in Language/.repo-metadata.json
* api_shortname field missing from Language/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in LifeSciences/.repo-metadata.json
* release_level must be equal to one of the allowed values in LifeSciences/.repo-metadata.json
* api_shortname field missing from LifeSciences/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in Logging/.repo-metadata.json
* release_level must be equal to one of the allowed values in Logging/.repo-metadata.json
* api_shortname field missing from Logging/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in ManagedIdentities/.repo-metadata.json
* release_level must be equal to one of the allowed values in ManagedIdentities/.repo-metadata.json
* api_shortname field missing from ManagedIdentities/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in MediaTranslation/.repo-metadata.json
* release_level must be equal to one of the allowed values in MediaTranslation/.repo-metadata.json
* api_shortname field missing from MediaTranslation/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in Memcache/.repo-metadata.json
* release_level must be equal to one of the allowed values in Memcache/.repo-metadata.json
* api_shortname field missing from Memcache/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in Monitoring/.repo-metadata.json
* release_level must be equal to one of the allowed values in Monitoring/.repo-metadata.json
* api_shortname field missing from Monitoring/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in NetworkConnectivity/.repo-metadata.json
* release_level must be equal to one of the allowed values in NetworkConnectivity/.repo-metadata.json
* api_shortname field missing from NetworkConnectivity/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in NetworkManagement/.repo-metadata.json
* release_level must be equal to one of the allowed values in NetworkManagement/.repo-metadata.json
* api_shortname field missing from NetworkManagement/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in NetworkSecurity/.repo-metadata.json
* release_level must be equal to one of the allowed values in NetworkSecurity/.repo-metadata.json
* api_shortname field missing from NetworkSecurity/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in Notebooks/.repo-metadata.json
* release_level must be equal to one of the allowed values in Notebooks/.repo-metadata.json
* api_shortname field missing from Notebooks/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in OrchestrationAirflow/.repo-metadata.json
* release_level must be equal to one of the allowed values in OrchestrationAirflow/.repo-metadata.json
* api_shortname field missing from OrchestrationAirflow/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in OrgPolicy/.repo-metadata.json
* release_level must be equal to one of the allowed values in OrgPolicy/.repo-metadata.json
* api_shortname field missing from OrgPolicy/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in OsConfig/.repo-metadata.json
* release_level must be equal to one of the allowed values in OsConfig/.repo-metadata.json
* api_shortname field missing from OsConfig/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in OsLogin/.repo-metadata.json
* release_level must be equal to one of the allowed values in OsLogin/.repo-metadata.json
* api_shortname field missing from OsLogin/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in PolicyTroubleshooter/.repo-metadata.json
* release_level must be equal to one of the allowed values in PolicyTroubleshooter/.repo-metadata.json
* api_shortname field missing from PolicyTroubleshooter/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in PrivateCatalog/.repo-metadata.json
* release_level must be equal to one of the allowed values in PrivateCatalog/.repo-metadata.json
* api_shortname field missing from PrivateCatalog/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in Profiler/.repo-metadata.json
* release_level must be equal to one of the allowed values in Profiler/.repo-metadata.json
* api_shortname field missing from Profiler/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in PubSub/.repo-metadata.json
* release_level must be equal to one of the allowed values in PubSub/.repo-metadata.json
* api_shortname field missing from PubSub/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in RecaptchaEnterprise/.repo-metadata.json
* release_level must be equal to one of the allowed values in RecaptchaEnterprise/.repo-metadata.json
* api_shortname field missing from RecaptchaEnterprise/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in RecommendationEngine/.repo-metadata.json
* release_level must be equal to one of the allowed values in RecommendationEngine/.repo-metadata.json
* api_shortname field missing from RecommendationEngine/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in Recommender/.repo-metadata.json
* release_level must be equal to one of the allowed values in Recommender/.repo-metadata.json
* api_shortname field missing from Recommender/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in Redis/.repo-metadata.json
* release_level must be equal to one of the allowed values in Redis/.repo-metadata.json
* api_shortname field missing from Redis/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in ResourceManager/.repo-metadata.json
* release_level must be equal to one of the allowed values in ResourceManager/.repo-metadata.json
* api_shortname field missing from ResourceManager/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in ResourceSettings/.repo-metadata.json
* release_level must be equal to one of the allowed values in ResourceSettings/.repo-metadata.json
* api_shortname field missing from ResourceSettings/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in Retail/.repo-metadata.json
* release_level must be equal to one of the allowed values in Retail/.repo-metadata.json
* api_shortname field missing from Retail/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in Scheduler/.repo-metadata.json
* release_level must be equal to one of the allowed values in Scheduler/.repo-metadata.json
* api_shortname field missing from Scheduler/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in SecretManager/.repo-metadata.json
* release_level must be equal to one of the allowed values in SecretManager/.repo-metadata.json
* api_shortname field missing from SecretManager/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in SecurityCenter/.repo-metadata.json
* release_level must be equal to one of the allowed values in SecurityCenter/.repo-metadata.json
* api_shortname field missing from SecurityCenter/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in SecurityPrivateCa/.repo-metadata.json
* release_level must be equal to one of the allowed values in SecurityPrivateCa/.repo-metadata.json
* api_shortname field missing from SecurityPrivateCa/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in ServiceControl/.repo-metadata.json
* release_level must be equal to one of the allowed values in ServiceControl/.repo-metadata.json
* api_shortname field missing from ServiceControl/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in ServiceDirectory/.repo-metadata.json
* release_level must be equal to one of the allowed values in ServiceDirectory/.repo-metadata.json
* api_shortname field missing from ServiceDirectory/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in ServiceManagement/.repo-metadata.json
* release_level must be equal to one of the allowed values in ServiceManagement/.repo-metadata.json
* api_shortname field missing from ServiceManagement/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in ServiceUsage/.repo-metadata.json
* release_level must be equal to one of the allowed values in ServiceUsage/.repo-metadata.json
* api_shortname field missing from ServiceUsage/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in Shell/.repo-metadata.json
* release_level must be equal to one of the allowed values in Shell/.repo-metadata.json
* api_shortname field missing from Shell/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in Spanner/.repo-metadata.json
* release_level must be equal to one of the allowed values in Spanner/.repo-metadata.json
* api_shortname field missing from Spanner/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in Speech/.repo-metadata.json
* release_level must be equal to one of the allowed values in Speech/.repo-metadata.json
* api_shortname field missing from Speech/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in SqlAdmin/.repo-metadata.json
* release_level must be equal to one of the allowed values in SqlAdmin/.repo-metadata.json
* api_shortname field missing from SqlAdmin/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in Storage/.repo-metadata.json
* release_level must be equal to one of the allowed values in Storage/.repo-metadata.json
* api_shortname field missing from Storage/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in StorageTransfer/.repo-metadata.json
* release_level must be equal to one of the allowed values in StorageTransfer/.repo-metadata.json
* api_shortname field missing from StorageTransfer/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in Talent/.repo-metadata.json
* release_level must be equal to one of the allowed values in Talent/.repo-metadata.json
* api_shortname field missing from Talent/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in Tasks/.repo-metadata.json
* release_level must be equal to one of the allowed values in Tasks/.repo-metadata.json
* api_shortname field missing from Tasks/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in TextToSpeech/.repo-metadata.json
* release_level must be equal to one of the allowed values in TextToSpeech/.repo-metadata.json
* api_shortname field missing from TextToSpeech/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in Tpu/.repo-metadata.json
* release_level must be equal to one of the allowed values in Tpu/.repo-metadata.json
* api_shortname field missing from Tpu/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in Trace/.repo-metadata.json
* release_level must be equal to one of the allowed values in Trace/.repo-metadata.json
* api_shortname field missing from Trace/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in Translate/.repo-metadata.json
* release_level must be equal to one of the allowed values in Translate/.repo-metadata.json
* api_shortname field missing from Translate/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in VideoIntelligence/.repo-metadata.json
* release_level must be equal to one of the allowed values in VideoIntelligence/.repo-metadata.json
* api_shortname field missing from VideoIntelligence/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in VideoTranscoder/.repo-metadata.json
* release_level must be equal to one of the allowed values in VideoTranscoder/.repo-metadata.json
* api_shortname field missing from VideoTranscoder/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in Vision/.repo-metadata.json
* release_level must be equal to one of the allowed values in Vision/.repo-metadata.json
* api_shortname field missing from Vision/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in VpcAccess/.repo-metadata.json
* release_level must be equal to one of the allowed values in VpcAccess/.repo-metadata.json
* api_shortname field missing from VpcAccess/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in WebRisk/.repo-metadata.json
* release_level must be equal to one of the allowed values in WebRisk/.repo-metadata.json
* api_shortname field missing from WebRisk/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in WebSecurityScanner/.repo-metadata.json
* release_level must be equal to one of the allowed values in WebSecurityScanner/.repo-metadata.json
* api_shortname field missing from WebSecurityScanner/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in Workflows/.repo-metadata.json
* release_level must be equal to one of the allowed values in Workflows/.repo-metadata.json
* api_shortname field missing from Workflows/.repo-metadata.json
☝️ Once you correct these problems, you can close this issue.
Reach out to **go/github-automation** if you have any questions.
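For illustration, the three checks reported above can be reproduced locally with a short script. This is only a sketch, not the linter the automation actually runs; in particular, the set of allowed `release_level` values below is an assumption and should be taken from the authoritative schema.

```python
# Minimal re-implementation of the three .repo-metadata.json checks listed above.
import json
import pathlib
import re

ALLOWED_RELEASE_LEVELS = {"stable", "preview"}  # assumption, not the authoritative list
DOC_URL_PATTERN = re.compile(r"^https://.*")

def check_metadata(path: pathlib.Path) -> list:
    """Return the list of problems found in one .repo-metadata.json file."""
    meta = json.loads(path.read_text())
    problems = []
    if not DOC_URL_PATTERN.match(meta.get("client_documentation", "")):
        problems.append('client_documentation must match pattern "^https://.*"')
    if meta.get("release_level") not in ALLOWED_RELEASE_LEVELS:
        problems.append("release_level must be equal to one of the allowed values")
    if "api_shortname" not in meta:
        problems.append("api_shortname field missing")
    return problems

if __name__ == "__main__":
    # Scan every package directory (AccessApproval/, Bigtable/, ...) under the repo root.
    for metadata_file in sorted(pathlib.Path(".").glob("*/.repo-metadata.json")):
        for problem in check_metadata(metadata_file):
            print(f"* {problem} in {metadata_file}")
```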
|
process
|
your repo metadata json files have a problem 🤒 you have a problem with your repo metadata json files result of scan 📈 client documentation must match pattern in accessapproval repo metadata json release level must be equal to one of the allowed values in accessapproval repo metadata json api shortname field missing from accessapproval repo metadata json client documentation must match pattern in accesscontextmanager repo metadata json release level must be equal to one of the allowed values in accesscontextmanager repo metadata json api shortname field missing from accesscontextmanager repo metadata json release level must be equal to one of the allowed values in analyticsadmin repo metadata json api shortname field missing from analyticsadmin repo metadata json release level must be equal to one of the allowed values in analyticsdata repo metadata json api shortname field missing from analyticsdata repo metadata json client documentation must match pattern in apigateway repo metadata json release level must be equal to one of the allowed values in apigateway repo metadata json api shortname field missing from apigateway repo metadata json client documentation must match pattern in apigeeconnect repo metadata json release level must be equal to one of the allowed values in apigeeconnect repo metadata json api shortname field missing from apigeeconnect repo metadata json client documentation must match pattern in appengineadmin repo metadata json release level must be equal to one of the allowed values in appengineadmin repo metadata json api shortname field missing from appengineadmin repo metadata json client documentation must match pattern in artifactregistry repo metadata json release level must be equal to one of the allowed values in artifactregistry repo metadata json api shortname field missing from artifactregistry repo metadata json client documentation must match pattern in asset repo metadata json release level must be equal to one of the allowed values in asset repo metadata json api shortname field missing from asset repo metadata json release level must be equal to one of the allowed values in assuredworkloads repo metadata json api shortname field missing from assuredworkloads repo metadata json client documentation must match pattern in automl repo metadata json release level must be equal to one of the allowed values in automl repo metadata json api shortname field missing from automl repo metadata json release level must be equal to one of the allowed values in bigquery repo metadata json api shortname field missing from bigquery repo metadata json release level must be equal to one of the allowed values in bigqueryconnection repo metadata json api shortname field missing from bigqueryconnection repo metadata json release level must be equal to one of the allowed values in bigquerydatatransfer repo metadata json api shortname field missing from bigquerydatatransfer repo metadata json release level must be equal to one of the allowed values in bigqueryreservation repo metadata json api shortname field missing from bigqueryreservation repo metadata json client documentation must match pattern in bigquerystorage repo metadata json release level must be equal to one of the allowed values in bigquerystorage repo metadata json api shortname field missing from bigquerystorage repo metadata json release level must be equal to one of the allowed values in bigtable repo metadata json api shortname field missing from bigtable repo metadata json client documentation must match 
pattern in billing repo metadata json release level must be equal to one of the allowed values in billing repo metadata json api shortname field missing from billing repo metadata json client documentation must match pattern in billingbudgets repo metadata json release level must be equal to one of the allowed values in billingbudgets repo metadata json api shortname field missing from billingbudgets repo metadata json client documentation must match pattern in binaryauthorization repo metadata json release level must be equal to one of the allowed values in binaryauthorization repo metadata json api shortname field missing from binaryauthorization repo metadata json client documentation must match pattern in build repo metadata json release level must be equal to one of the allowed values in build repo metadata json api shortname field missing from build repo metadata json client documentation must match pattern in channel repo metadata json release level must be equal to one of the allowed values in channel repo metadata json api shortname field missing from channel repo metadata json client documentation must match pattern in commonprotos repo metadata json release level must be equal to one of the allowed values in commonprotos repo metadata json release level must be equal to one of the allowed values in compute repo metadata json api shortname field missing from compute repo metadata json client documentation must match pattern in contactcenterinsights repo metadata json release level must be equal to one of the allowed values in contactcenterinsights repo metadata json api shortname field missing from contactcenterinsights repo metadata json client documentation must match pattern in container repo metadata json release level must be equal to one of the allowed values in container repo metadata json api shortname field missing from container repo metadata json client documentation must match pattern in containeranalysis repo metadata json release level must be equal to one of the allowed values in containeranalysis repo metadata json api shortname field missing from containeranalysis repo metadata json client documentation must match pattern in core repo metadata json release level must be equal to one of the allowed values in core repo metadata json client documentation must match pattern in datacatalog repo metadata json release level must be equal to one of the allowed values in datacatalog repo metadata json api shortname field missing from datacatalog repo metadata json client documentation must match pattern in datafusion repo metadata json release level must be equal to one of the allowed values in datafusion repo metadata json api shortname field missing from datafusion repo metadata json client documentation must match pattern in datalabeling repo metadata json release level must be equal to one of the allowed values in datalabeling repo metadata json api shortname field missing from datalabeling repo metadata json client documentation must match pattern in dataflow repo metadata json release level must be equal to one of the allowed values in dataflow repo metadata json api shortname field missing from dataflow repo metadata json client documentation must match pattern in dataproc repo metadata json release level must be equal to one of the allowed values in dataproc repo metadata json api shortname field missing from dataproc repo metadata json client documentation must match pattern in dataprocmetastore repo metadata json release level must be equal to one of the allowed 
values in dataprocmetastore repo metadata json api shortname field missing from dataprocmetastore repo metadata json client documentation must match pattern in datastore repo metadata json release level must be equal to one of the allowed values in datastore repo metadata json api shortname field missing from datastore repo metadata json client documentation must match pattern in datastoreadmin repo metadata json release level must be equal to one of the allowed values in datastoreadmin repo metadata json api shortname field missing from datastoreadmin repo metadata json client documentation must match pattern in debugger repo metadata json release level must be equal to one of the allowed values in debugger repo metadata json api shortname field missing from debugger repo metadata json client documentation must match pattern in deploy repo metadata json release level must be equal to one of the allowed values in deploy repo metadata json api shortname field missing from deploy repo metadata json client documentation must match pattern in dialogflow repo metadata json release level must be equal to one of the allowed values in dialogflow repo metadata json api shortname field missing from dialogflow repo metadata json client documentation must match pattern in dlp repo metadata json release level must be equal to one of the allowed values in dlp repo metadata json api shortname field missing from dlp repo metadata json client documentation must match pattern in dms repo metadata json release level must be equal to one of the allowed values in dms repo metadata json api shortname field missing from dms repo metadata json client documentation must match pattern in documentai repo metadata json release level must be equal to one of the allowed values in documentai repo metadata json api shortname field missing from documentai repo metadata json client documentation must match pattern in domains repo metadata json release level must be equal to one of the allowed values in domains repo metadata json api shortname field missing from domains repo metadata json client documentation must match pattern in errorreporting repo metadata json release level must be equal to one of the allowed values in errorreporting repo metadata json api shortname field missing from errorreporting repo metadata json client documentation must match pattern in essentialcontacts repo metadata json release level must be equal to one of the allowed values in essentialcontacts repo metadata json api shortname field missing from essentialcontacts repo metadata json client documentation must match pattern in eventarc repo metadata json release level must be equal to one of the allowed values in eventarc repo metadata json api shortname field missing from eventarc repo metadata json client documentation must match pattern in filestore repo metadata json release level must be equal to one of the allowed values in filestore repo metadata json api shortname field missing from filestore repo metadata json client documentation must match pattern in firestore repo metadata json release level must be equal to one of the allowed values in firestore repo metadata json api shortname field missing from firestore repo metadata json client documentation must match pattern in functions repo metadata json release level must be equal to one of the allowed values in functions repo metadata json api shortname field missing from functions repo metadata json client documentation must match pattern in gaming repo metadata json release level must 
be equal to one of the allowed values in gaming repo metadata json api shortname field missing from gaming repo metadata json client documentation must match pattern in gkeconnectgateway repo metadata json release level must be equal to one of the allowed values in gkeconnectgateway repo metadata json api shortname field missing from gkeconnectgateway repo metadata json client documentation must match pattern in gkehub repo metadata json release level must be equal to one of the allowed values in gkehub repo metadata json api shortname field missing from gkehub repo metadata json client documentation must match pattern in grafeas repo metadata json release level must be equal to one of the allowed values in grafeas repo metadata json api shortname field missing from grafeas repo metadata json client documentation must match pattern in iamcredentials repo metadata json release level must be equal to one of the allowed values in iamcredentials repo metadata json api shortname field missing from iamcredentials repo metadata json client documentation must match pattern in iap repo metadata json release level must be equal to one of the allowed values in iap repo metadata json api shortname field missing from iap repo metadata json client documentation must match pattern in iot repo metadata json release level must be equal to one of the allowed values in iot repo metadata json api shortname field missing from iot repo metadata json client documentation must match pattern in kms repo metadata json release level must be equal to one of the allowed values in kms repo metadata json api shortname field missing from kms repo metadata json client documentation must match pattern in language repo metadata json release level must be equal to one of the allowed values in language repo metadata json api shortname field missing from language repo metadata json client documentation must match pattern in lifesciences repo metadata json release level must be equal to one of the allowed values in lifesciences repo metadata json api shortname field missing from lifesciences repo metadata json client documentation must match pattern in logging repo metadata json release level must be equal to one of the allowed values in logging repo metadata json api shortname field missing from logging repo metadata json client documentation must match pattern in managedidentities repo metadata json release level must be equal to one of the allowed values in managedidentities repo metadata json api shortname field missing from managedidentities repo metadata json client documentation must match pattern in mediatranslation repo metadata json release level must be equal to one of the allowed values in mediatranslation repo metadata json api shortname field missing from mediatranslation repo metadata json client documentation must match pattern in memcache repo metadata json release level must be equal to one of the allowed values in memcache repo metadata json api shortname field missing from memcache repo metadata json client documentation must match pattern in monitoring repo metadata json release level must be equal to one of the allowed values in monitoring repo metadata json api shortname field missing from monitoring repo metadata json client documentation must match pattern in networkconnectivity repo metadata json release level must be equal to one of the allowed values in networkconnectivity repo metadata json api shortname field missing from networkconnectivity repo metadata json client documentation must match 
pattern in networkmanagement repo metadata json release level must be equal to one of the allowed values in networkmanagement repo metadata json api shortname field missing from networkmanagement repo metadata json client documentation must match pattern in networksecurity repo metadata json release level must be equal to one of the allowed values in networksecurity repo metadata json api shortname field missing from networksecurity repo metadata json client documentation must match pattern in notebooks repo metadata json release level must be equal to one of the allowed values in notebooks repo metadata json api shortname field missing from notebooks repo metadata json client documentation must match pattern in orchestrationairflow repo metadata json release level must be equal to one of the allowed values in orchestrationairflow repo metadata json api shortname field missing from orchestrationairflow repo metadata json client documentation must match pattern in orgpolicy repo metadata json release level must be equal to one of the allowed values in orgpolicy repo metadata json api shortname field missing from orgpolicy repo metadata json client documentation must match pattern in osconfig repo metadata json release level must be equal to one of the allowed values in osconfig repo metadata json api shortname field missing from osconfig repo metadata json client documentation must match pattern in oslogin repo metadata json release level must be equal to one of the allowed values in oslogin repo metadata json api shortname field missing from oslogin repo metadata json client documentation must match pattern in policytroubleshooter repo metadata json release level must be equal to one of the allowed values in policytroubleshooter repo metadata json api shortname field missing from policytroubleshooter repo metadata json client documentation must match pattern in privatecatalog repo metadata json release level must be equal to one of the allowed values in privatecatalog repo metadata json api shortname field missing from privatecatalog repo metadata json client documentation must match pattern in profiler repo metadata json release level must be equal to one of the allowed values in profiler repo metadata json api shortname field missing from profiler repo metadata json client documentation must match pattern in pubsub repo metadata json release level must be equal to one of the allowed values in pubsub repo metadata json api shortname field missing from pubsub repo metadata json client documentation must match pattern in recaptchaenterprise repo metadata json release level must be equal to one of the allowed values in recaptchaenterprise repo metadata json api shortname field missing from recaptchaenterprise repo metadata json client documentation must match pattern in recommendationengine repo metadata json release level must be equal to one of the allowed values in recommendationengine repo metadata json api shortname field missing from recommendationengine repo metadata json client documentation must match pattern in recommender repo metadata json release level must be equal to one of the allowed values in recommender repo metadata json api shortname field missing from recommender repo metadata json client documentation must match pattern in redis repo metadata json release level must be equal to one of the allowed values in redis repo metadata json api shortname field missing from redis repo metadata json client documentation must match pattern in resourcemanager repo metadata json 
release level must be equal to one of the allowed values in resourcemanager repo metadata json api shortname field missing from resourcemanager repo metadata json client documentation must match pattern in resourcesettings repo metadata json release level must be equal to one of the allowed values in resourcesettings repo metadata json api shortname field missing from resourcesettings repo metadata json client documentation must match pattern in retail repo metadata json release level must be equal to one of the allowed values in retail repo metadata json api shortname field missing from retail repo metadata json client documentation must match pattern in scheduler repo metadata json release level must be equal to one of the allowed values in scheduler repo metadata json api shortname field missing from scheduler repo metadata json client documentation must match pattern in secretmanager repo metadata json release level must be equal to one of the allowed values in secretmanager repo metadata json api shortname field missing from secretmanager repo metadata json client documentation must match pattern in securitycenter repo metadata json release level must be equal to one of the allowed values in securitycenter repo metadata json api shortname field missing from securitycenter repo metadata json client documentation must match pattern in securityprivateca repo metadata json release level must be equal to one of the allowed values in securityprivateca repo metadata json api shortname field missing from securityprivateca repo metadata json client documentation must match pattern in servicecontrol repo metadata json release level must be equal to one of the allowed values in servicecontrol repo metadata json api shortname field missing from servicecontrol repo metadata json client documentation must match pattern in servicedirectory repo metadata json release level must be equal to one of the allowed values in servicedirectory repo metadata json api shortname field missing from servicedirectory repo metadata json client documentation must match pattern in servicemanagement repo metadata json release level must be equal to one of the allowed values in servicemanagement repo metadata json api shortname field missing from servicemanagement repo metadata json client documentation must match pattern in serviceusage repo metadata json release level must be equal to one of the allowed values in serviceusage repo metadata json api shortname field missing from serviceusage repo metadata json client documentation must match pattern in shell repo metadata json release level must be equal to one of the allowed values in shell repo metadata json api shortname field missing from shell repo metadata json client documentation must match pattern in spanner repo metadata json release level must be equal to one of the allowed values in spanner repo metadata json api shortname field missing from spanner repo metadata json client documentation must match pattern in speech repo metadata json release level must be equal to one of the allowed values in speech repo metadata json api shortname field missing from speech repo metadata json client documentation must match pattern in sqladmin repo metadata json release level must be equal to one of the allowed values in sqladmin repo metadata json api shortname field missing from sqladmin repo metadata json client documentation must match pattern in storage repo metadata json release level must be equal to one of the allowed values in storage repo metadata json api 
shortname field missing from storage repo metadata json client documentation must match pattern in storagetransfer repo metadata json release level must be equal to one of the allowed values in storagetransfer repo metadata json api shortname field missing from storagetransfer repo metadata json client documentation must match pattern in talent repo metadata json release level must be equal to one of the allowed values in talent repo metadata json api shortname field missing from talent repo metadata json client documentation must match pattern in tasks repo metadata json release level must be equal to one of the allowed values in tasks repo metadata json api shortname field missing from tasks repo metadata json client documentation must match pattern in texttospeech repo metadata json release level must be equal to one of the allowed values in texttospeech repo metadata json api shortname field missing from texttospeech repo metadata json client documentation must match pattern in tpu repo metadata json release level must be equal to one of the allowed values in tpu repo metadata json api shortname field missing from tpu repo metadata json client documentation must match pattern in trace repo metadata json release level must be equal to one of the allowed values in trace repo metadata json api shortname field missing from trace repo metadata json client documentation must match pattern in translate repo metadata json release level must be equal to one of the allowed values in translate repo metadata json api shortname field missing from translate repo metadata json client documentation must match pattern in videointelligence repo metadata json release level must be equal to one of the allowed values in videointelligence repo metadata json api shortname field missing from videointelligence repo metadata json client documentation must match pattern in videotranscoder repo metadata json release level must be equal to one of the allowed values in videotranscoder repo metadata json api shortname field missing from videotranscoder repo metadata json client documentation must match pattern in vision repo metadata json release level must be equal to one of the allowed values in vision repo metadata json api shortname field missing from vision repo metadata json client documentation must match pattern in vpcaccess repo metadata json release level must be equal to one of the allowed values in vpcaccess repo metadata json api shortname field missing from vpcaccess repo metadata json client documentation must match pattern in webrisk repo metadata json release level must be equal to one of the allowed values in webrisk repo metadata json api shortname field missing from webrisk repo metadata json client documentation must match pattern in websecurityscanner repo metadata json release level must be equal to one of the allowed values in websecurityscanner repo metadata json api shortname field missing from websecurityscanner repo metadata json client documentation must match pattern in workflows repo metadata json release level must be equal to one of the allowed values in workflows repo metadata json api shortname field missing from workflows repo metadata json ☝️ once you correct these problems you can close this issue reach out to go github automation if you have any questions
| 1
|
18,234
| 24,301,375,205
|
IssuesEvent
|
2022-09-29 14:06:12
|
open-telemetry/opentelemetry-collector-contrib
|
https://api.github.com/repos/open-telemetry/opentelemetry-collector-contrib
|
closed
|
[processor/servicegraphprocessor] Adjust metric names to fit documentation
|
bug priority:p2 processor/servicegraph
|
### What happened?
```Markdown
## Description
In the documentation the metric names do not align with the actual names of the emitted metrics, e.g. 'traces_service_graph_request_total' (documented) vs. 'request_total' (actually emitted).
## Steps to Reproduce
## Expected Result
Metrics are prefixed with traces_service_graph
## Actual Result
Metrics are not prefixed
```
### Collector version
v0.60.0
### Environment information
_No response_
### OpenTelemetry Collector configuration
```yaml
receivers:
otlp:
protocols:
grpc:
otlp/servicegraph:
protocols:
grpc:
endpoint: "localhost:12345"
processors:
servicegraph:
metrics_exporter: prometheus
exporters:
prometheus:
endpoint: "0.0.0.0:8889"
logging:
logLevel: debug
service:
pipelines:
traces:
receivers: [otlp]
processors: [servicegraph]
exporters: [logging]
metrics:
receivers: [otlp/servicegraph]
exporters: [prometheus]
```
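To see which names the collector above actually exports, the Prometheus endpoint from the configuration can be scraped directly. A minimal sketch, assuming the collector is running locally with that configuration (only the endpoint and the documented prefix are taken from this report):

```python
# Scrape the collector's Prometheus exporter (0.0.0.0:8889 in the config above)
# and report whether the service graph metrics carry the documented prefix.
import urllib.request

PROMETHEUS_URL = "http://localhost:8889/metrics"
DOCUMENTED_PREFIX = "traces_service_graph_"

body = urllib.request.urlopen(PROMETHEUS_URL).read().decode("utf-8")
metric_names = {
    line.split("{")[0].split(" ")[0]
    for line in body.splitlines()
    if line and not line.startswith("#")
}

prefixed = sorted(n for n in metric_names if n.startswith(DOCUMENTED_PREFIX))
unprefixed = sorted(n for n in metric_names if n.startswith("request_"))
print("documented-style names found:", prefixed or "none")
print("unprefixed names found:", unprefixed or "none")
```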
### Log output
_No response_
### Additional context
_No response_
|
1.0
|
[processor/servicegraphprocessor] Adjust metric names to fit documentation - ### What happened?
```Markdown
## Description
In the documentation the metric names do not align with the actual names of the emitted metrics, e.g. 'traces_service_graph_request_total' (documented) vs. 'request_total' (actually emitted).
## Steps to Reproduce
## Expected Result
Metrics are prefixed with traces_service_graph
## Actual Result
Metrics are not prefixed
```
### Collector version
v0.60.0
### Environment information
_No response_
### OpenTelemetry Collector configuration
```yaml
receivers:
otlp:
protocols:
grpc:
otlp/servicegraph:
protocols:
grpc:
endpoint: "localhost:12345"
processors:
servicegraph:
metrics_exporter: prometheus
exporters:
prometheus:
endpoint: "0.0.0.0:8889"
logging:
logLevel: debug
service:
pipelines:
traces:
receivers: [otlp]
processors: [servicegraph]
exporters: [logging]
metrics:
receivers: [otlp/servicegraph]
exporters: [prometheus]
```
### Log output
_No response_
### Additional context
_No response_
|
process
|
adjust metric names to fit documentation what happened markdown description in the documentation the metric names do not align with the actual names of the metrics e g traces service graph request total and request total steps to reproduce expected result metrics are prefixed with traces service graph actual result metrics are not prefixed collector version environment information no response opentelemetry collector configuration yaml receivers otlp protocols grpc otlp servicegraph protocols grpc endpoint localhost processors servicegraph metrics exporter prometheus exporters prometheus endpoint logging loglevel debug service pipelines traces receivers processors exporters metrics receivers exporters log output no response additional context no response
| 1
|
14,455
| 17,533,229,203
|
IssuesEvent
|
2021-08-12 01:46:32
|
qgis/QGIS
|
https://api.github.com/repos/qgis/QGIS
|
closed
|
Convert map to Raster (Processing Toolbox) Clips Text on Output
|
Feedback stale Processing Bug
|
**Bug Description**
"Convert map to Raster" tool Clips all text items that intersect internal "Tile" boundaries on geotif output "Output Layer". Increasing or decreasing "Tile size" shifts the text clipping to new internal boundaries, respectfully.
The only option to produce a map free of text clipping is to specify a Tile Size that is larger than my desired map. The map area text will be clean, however, the overall geotif will be four times larger (spatially) and the map will have a large amount of data not intended to be shown.
**How to Reproduce**
1. Open a map with a fair amount of text, for example, a large state map with cities labeled.
2. Go to: Processing Toolbox > Convert Map to Raster
3. Minimum extent... > Choose "Use Canvas Extent"
4. "Single layer to render option" > Leave blank
5. Leave all other parameters as defaults
6. Choose "Run" and carefully inspect output for tile boundary text clipping. Character strings will be masked. Resulting in partial characters and or partial words.
7. Afterward, repeat process with a smaller "Tile size" specification, i.e., change to 512 and the text clipping error of degree will increase.
8. Changing the "Tile size" to 2048 decreases text clipping, but may increase spatial coverage if the map extent exceeds the height or width of the tile size.
Thank you in advance,
Leon

QGIS version: 3.8.0-Zanzibar
QGIS code revision: 11aff65f10
Compiled against Qt: 5.11.2
Running against Qt: 5.11.2
Compiled against GDAL/OGR: 2.4.1
Running against GDAL/OGR: 2.4.1
Compiled against GEOS: 3.7.2-CAPI-1.11.0
Running against GEOS: 3.7.2-CAPI-1.11.0 b55d2125
PostgreSQL Client Version: 9.2.4
SpatiaLite Version: 4.3.0
QWT Version: 6.1.3
QScintilla2 Version: 2.10.8
Compiled against PROJ: 5.2.0
Running against PROJ: Rel. 5.2.0, September 15th, 2018
OS Version: Windows 7 SP 1 (6.1)
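For reference, the same tool can be driven from the QGIS Python console to reproduce steps 2–6. This is only a sketch: the algorithm id (`qgis:rasterize`) and the parameter names are assumptions from memory and may differ between QGIS versions, so verify them with `processing.algorithmHelp()` first.

```python
# Run inside the QGIS Python console. Mirrors the dialog settings from the steps
# above: canvas extent, "single layer to render" left blank, varying tile size.
# Algorithm id and parameter names are assumptions -- verify with
# processing.algorithmHelp("qgis:rasterize") in your QGIS version.
import processing
from qgis.utils import iface

extent = iface.mapCanvas().extent()  # step 3: "Use Canvas Extent"

for tile_size in (2048, 1024, 512):  # steps 7-8: vary the tile size
    result = processing.run(
        "qgis:rasterize",
        {
            "EXTENT": extent,
            "TILE_SIZE": tile_size,        # internal boundaries where text gets clipped
            "MAP_UNITS_PER_PIXEL": 100,    # pick a resolution suited to the map
            "LAYER": None,                 # step 4: leave "single layer to render" blank
            "OUTPUT": "/tmp/map_{}.tif".format(tile_size),  # hypothetical output path
        },
    )
    print(tile_size, "->", result["OUTPUT"])
```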
|
1.0
|
Convert map to Raster (Processing Toolbox) Clips Text on Output - **Bug Description**
"Convert map to Raster" tool Clips all text items that intersect internal "Tile" boundaries on geotif output "Output Layer". Increasing or decreasing "Tile size" shifts the text clipping to new internal boundaries, respectfully.
The only option to produce a map free of text clipping is to specify a Tile Size that is larger than my desired map. The map area text will be clean, however, the overall geotif will be four times larger (spatially) and the map will have a large amount of data not intended to be shown.
**How to Reproduce**
1. Open a map with a fair amount of text, for example, a large state map with cities labeled.
2. Go to: Processing Toolbox > Convert Map to Raster
3. Minimum extent... > Choose "Use Canvas Extent"
4. "Single layer to render option" > Leave blank
5. Leave all other parameters as defaults
6. Choose "Run" and carefully inspect output for tile boundary text clipping. Character strings will be masked. Resulting in partial characters and or partial words.
7. Afterward, repeat process with a smaller "Tile size" specification, i.e., change to 512 and the text clipping error of degree will increase.
8. Changing the "Tile size" to 2048 decreases text clipping, but may increase spatial coverage if the map extent exceeds the height or width of the tile size.
Thank you in advance,
Leon

QGIS version: 3.8.0-Zanzibar
QGIS code revision: 11aff65f10
Compiled against Qt: 5.11.2
Running against Qt: 5.11.2
Compiled against GDAL/OGR: 2.4.1
Running against GDAL/OGR: 2.4.1
Compiled against GEOS: 3.7.2-CAPI-1.11.0
Running against GEOS: 3.7.2-CAPI-1.11.0 b55d2125
PostgreSQL Client Version: 9.2.4
SpatiaLite Version: 4.3.0
QWT Version: 6.1.3
QScintilla2 Version: 2.10.8
Compiled against PROJ: 5.2.0
Running against PROJ: Rel. 5.2.0, September 15th, 2018
OS Version: Windows 7 SP 1 (6.1)
|
process
|
convert map to raster processing toolbox clips text on output bug description convert map to raster tool clips all text items that intersect internal tile boundaries on geotif output output layer increasing or decreasing tile size shifts the text clipping to new internal boundaries respectfully the only option to produce a map free of text clipping is to specify a tile size that is larger than my desired map the map area text will be clean however the overall geotif will be four times larger spatially and the map will have a large amount of data not intended to be shown how to reproduce open a map with a fair amount of text for example a large state map with cities labeled go to processing toolbox convert map to raster minimum exent choose use canvas extent single layer to render option leave blank leave all other parameters as defaults choose run and carefully inspect output for tile boundary text clipping character strings will be masked resulting in partial characters and or partial words afterward repeat process with a smaller tile size specification i e change to and the text clipping error of degree will increase changing the tile size to decreases text clipping but may increase spatial coverage if the map extent exceeds the height or width of the tile size thank you in advance leon qgis version zanzibar qgis code revision compiled against qt running against qt compiled against gdal ogr running against gdal ogr compiled against geos capi running against geos capi postgresql client version spatialite version qwt version version compiled against proj running against proj rel september os version windows sp
| 1
|
16,918
| 22,266,202,399
|
IssuesEvent
|
2022-06-10 07:42:30
|
hashgraph/hedera-json-rpc-relay
|
https://api.github.com/repos/hashgraph/hedera-json-rpc-relay
|
closed
|
Add acceptance test support for eth_getBalance
|
enhancement P2 process
|
### Problem
The current acceptance tests implemented in https://github.com/hashgraph/hedera-json-rpc-relay/pull/119 were not able to include `eth_getBalance` since it returned `0x0` instead of a valid balance
### Solution
Add a test that calls `eth_getBalance` for the primary and secondary accounts setup
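As a rough illustration of what such a test would exercise, the snippet below issues a raw `eth_getBalance` JSON-RPC request against a relay endpoint. It is only a sketch: the relay URL and the account address are placeholders, and the project's real acceptance tests live in its own test harness rather than in a standalone script.

```python
# Sketch of the JSON-RPC call an eth_getBalance acceptance test would make.
# RELAY_URL and ACCOUNT are placeholders, not values taken from the project.
import json
import urllib.request

RELAY_URL = "http://localhost:7546"                      # placeholder relay endpoint
ACCOUNT = "0x0000000000000000000000000000000000000001"   # placeholder account address

payload = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "eth_getBalance",
    "params": [ACCOUNT, "latest"],
}
request = urllib.request.Request(
    RELAY_URL,
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(request) as response:
    balance = json.load(response)["result"]

# The problem described above: the relay used to answer "0x0" even for funded accounts.
assert balance != "0x0", "expected a non-zero balance for a funded account"
print("balance (wei, hex):", balance)
```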
### Alternatives
_No response_
|
1.0
|
Add acceptance test support for eth_getBalance - ### Problem
The current acceptance tests implemented in https://github.com/hashgraph/hedera-json-rpc-relay/pull/119 were not able to include `eth_getBalance` since it returned `0x0` instead of a valid balance
### Solution
Add a test that calls `eth_getBalance` for the primary and secondary accounts setup
### Alternatives
_No response_
|
process
|
add acceptance test support for eth getbalance problem the current acceptance tests implemented in were not able to include eth getbalance since it returned instead of a valid balance solution add a test that calls eth getbalance for the primary and secondary accounts setup alternatives no response
| 1
|
384,776
| 11,403,120,127
|
IssuesEvent
|
2020-01-31 06:07:08
|
unitystation/unitystation
|
https://api.github.com/repos/unitystation/unitystation
|
closed
|
Meat can no longer be microwaved
|
Bug High Priority In Progress
|
## Description
Meat was recently changed to a prefab variant (https://github.com/unitystation/unitystation/commit/dbaf9bfa0b8918c14c4912be41080ea30dde7175). The microwave can no longer cook it because the recipe has not been adjusted.
The recipe has to be changed to allow the microwaving of meat again.
Has only been tested on GER.
|
1.0
|
Meat can no longer be microwaved - ## Description
Meat was recently changed to a prefab variant (https://github.com/unitystation/unitystation/commit/dbaf9bfa0b8918c14c4912be41080ea30dde7175). The microwave can no longer cook it because the recipe has not been adjusted.
The recipe has to be changed to allow the microwaving of meat again.
Has only been tested on GER.
|
non_process
|
meat can no longer be microwaved description meat was recently changed to a prefab variant the microwave can no longer cook it because the recipe has not been adjusted the recipe has to be changed to allow the microwaving of meat again has only been tested on ger
| 0
|
19,505
| 25,813,594,708
|
IssuesEvent
|
2022-12-12 02:00:07
|
lizhihao6/get-daily-arxiv-noti
|
https://api.github.com/repos/lizhihao6/get-daily-arxiv-noti
|
opened
|
New submissions for Mon, 12 Dec 22
|
event camera white balance isp compression image signal processing image signal process raw raw image events camera color contrast events AWB
|
## Keyword: events
### MIMO Is All You Need : A Strong Multi-In-Multi-Out Baseline for Video Prediction
- **Authors:** Shuliang Ning, Mengcheng Lan, Yanran Li, Chaofeng Chen, Qian Chen, Xunlai Chen, Xiaoguang Han, Shuguang Cui
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV)
- **Arxiv link:** https://arxiv.org/abs/2212.04655
- **Pdf link:** https://arxiv.org/pdf/2212.04655
- **Abstract**
The mainstream of the existing approaches for video prediction builds up their models based on a Single-In-Single-Out (SISO) architecture, which takes the current frame as input to predict the next frame in a recursive manner. This way often leads to severe performance degradation when they try to extrapolate a longer period of future, thus limiting the practical use of the prediction model. Alternatively, a Multi-In-Multi-Out (MIMO) architecture that outputs all the future frames at one shot naturally breaks the recursive manner and therefore prevents error accumulation. However, only a few MIMO models for video prediction are proposed and they only achieve inferior performance due to the date. The real strength of the MIMO model in this area is not well noticed and is largely under-explored. Motivated by that, we conduct a comprehensive investigation in this paper to thoroughly exploit how far a simple MIMO architecture can go. Surprisingly, our empirical studies reveal that a simple MIMO model can outperform the state-of-the-art work with a large margin much more than expected, especially in dealing with longterm error accumulation. After exploring a number of ways and designs, we propose a new MIMO architecture based on extending the pure Transformer with local spatio-temporal blocks and a new multi-output decoder, namely MIMO-VP, to establish a new standard in video prediction. We evaluate our model in four highly competitive benchmarks (Moving MNIST, Human3.6M, Weather, KITTI). Extensive experiments show that our model wins 1st place on all the benchmarks with remarkable performance gains and surpasses the best SISO model in all aspects including efficiency, quantity, and quality. We believe our model can serve as a new baseline to facilitate the future research of video prediction tasks. The code will be released.
## Keyword: event camera
There is no result
## Keyword: events camera
There is no result
## Keyword: white balance
There is no result
## Keyword: color contrast
There is no result
## Keyword: AWB
### Image-Based Fire Detection in Industrial Environments with YOLOv4
- **Authors:** Otto Zell, Joel Pålsson, Kevin Hernandez-Diaz, Fernando Alonso-Fernandez, Felix Nilsson
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV)
- **Arxiv link:** https://arxiv.org/abs/2212.04786
- **Pdf link:** https://arxiv.org/pdf/2212.04786
- **Abstract**
Fires have destructive power when they break out and affect their surroundings on a devastatingly large scale. The best way to minimize their damage is to detect the fire as quickly as possible before it has a chance to grow. Accordingly, this work looks into the potential of AI to detect and recognize fires and reduce detection time using object detection on an image stream. Object detection has made giant leaps in speed and accuracy over the last six years, making real-time detection feasible. To our end, we collected and labeled appropriate data from several public sources, which have been used to train and evaluate several models based on the popular YOLOv4 object detector. Our focus, driven by a collaborating industrial partner, is to implement our system in an industrial warehouse setting, which is characterized by high ceilings. A drawback of traditional smoke detectors in this setup is that the smoke has to rise to a sufficient height. The AI models brought forward in this research managed to outperform these detectors by a significant amount of time, providing precious anticipation that could help to minimize the effects of fires further.
### Album cover art image generation with Generative Adversarial Networks
- **Authors:** Felipe Perez Stoppa, Ester Vidaña-Vila, Joan Navarro
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Artificial Intelligence (cs.AI); Machine Learning (cs.LG); Image and Video Processing (eess.IV)
- **Arxiv link:** https://arxiv.org/abs/2212.04844
- **Pdf link:** https://arxiv.org/pdf/2212.04844
- **Abstract**
Generative Adversarial Networks (GANs) were introduced by Goodfellow in 2014, and since then have become popular for constructing generative artificial intelligence models. However, the drawbacks of such networks are numerous, like their longer training times, their sensitivity to hyperparameter tuning, several types of loss and optimization functions and other difficulties like mode collapse. Current applications of GANs include generating photo-realistic human faces, animals and objects. However, I wanted to explore the artistic ability of GANs in more detail, by using existing models and learning from them. This dissertation covers the basics of neural networks and works its way up to the particular aspects of GANs, together with experimentation and modification of existing available models, from least complex to most. The intention is to see if state of the art GANs (specifically StyleGAN2) can generate album art covers and if it is possible to tailor them by genre. This was attempted by first familiarizing myself with 3 existing GANs architectures, including the state of the art StyleGAN2. The StyleGAN2 code was used to train a model with a dataset containing 80K album cover images, then used to style images by picking curated images and mixing their styles.
## Keyword: ISP
### Reliable Multimodal Trajectory Prediction via Error Aligned Uncertainty Optimization
- **Authors:** Neslihan Kose, Ranganath Krishnan, Akash Dhamasia, Omesh Tickoo, Michael Paulitsch
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Machine Learning (cs.LG)
- **Arxiv link:** https://arxiv.org/abs/2212.04812
- **Pdf link:** https://arxiv.org/pdf/2212.04812
- **Abstract**
Reliable uncertainty quantification in deep neural networks is very crucial in safety-critical applications such as automated driving for trustworthy and informed decision-making. Assessing the quality of uncertainty estimates is challenging as ground truth for uncertainty estimates is not available. Ideally, in a well-calibrated model, uncertainty estimates should perfectly correlate with model error. We propose a novel error aligned uncertainty optimization method and introduce a trainable loss function to guide the models to yield good quality uncertainty estimates aligning with the model error. Our approach targets continuous structured prediction and regression tasks, and is evaluated on multiple datasets including a large-scale vehicle motion prediction task involving real-world distributional shifts. We demonstrate that our method improves average displacement error by 1.69% and 4.69%, and the uncertainty correlation with model error by 17.22% and 19.13% as quantified by Pearson correlation coefficient on two state-of-the-art baselines.
### PACMAN: a framework for pulse oximeter digit detection and reading in a low-resource setting
- **Authors:** Chiraphat Boonnag, Wanumaidah Saengmolee, Narongrid Seesawad, Amrest Chinkamol, Saendee Rattanasomrerk, Kanyakorn Veerakanjana, Kamonwan Thanontip, Warissara Limpornchitwilai, Piyalitt Ittichaiwong, Theerawit Wilaiprasitporn
- **Subjects:** Image and Video Processing (eess.IV); Computer Vision and Pattern Recognition (cs.CV)
- **Arxiv link:** https://arxiv.org/abs/2212.04964
- **Pdf link:** https://arxiv.org/pdf/2212.04964
- **Abstract**
In light of the COVID-19 pandemic, patients were required to manually input their daily oxygen saturation (SpO2) and pulse rate (PR) values into a health monitoring system-unfortunately, such a process trend to be an error in typing. Several studies attempted to detect the physiological value from the captured image using optical character recognition (OCR). However, the technology has limited availability with high cost. Thus, this study aimed to propose a novel framework called PACMAN (Pandemic Accelerated Human-Machine Collaboration) with a low-resource deep learning-based computer vision. We compared state-of-the-art object detection algorithms (scaled YOLOv4, YOLOv5, and YOLOR), including the commercial OCR tools for digit recognition on the captured images from pulse oximeter display. All images were derived from crowdsourced data collection with varying quality and alignment. YOLOv5 was the best-performing model against the given model comparison across all datasets, notably the correctly orientated image dataset. We further improved the model performance with the digits auto-orientation algorithm and applied a clustering algorithm to extract SpO2 and PR values. The accuracy performance of YOLOv5 with the implementations was approximately 81.0-89.5%, which was enhanced compared to without any additional implementation. Accordingly, this study highlighted the completion of PACMAN framework to detect and read digits in real-world datasets. The proposed framework has been currently integrated into the patient monitoring system utilized by hospitals nationwide.
## Keyword: image signal processing
There is no result
## Keyword: image signal process
There is no result
## Keyword: compression
There is no result
## Keyword: RAW
### Image-Based Fire Detection in Industrial Environments with YOLOv4
- **Authors:** Otto Zell, Joel Pålsson, Kevin Hernandez-Diaz, Fernando Alonso-Fernandez, Felix Nilsson
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV)
- **Arxiv link:** https://arxiv.org/abs/2212.04786
- **Pdf link:** https://arxiv.org/pdf/2212.04786
- **Abstract**
Fires have destructive power when they break out and affect their surroundings on a devastatingly large scale. The best way to minimize their damage is to detect the fire as quickly as possible before it has a chance to grow. Accordingly, this work looks into the potential of AI to detect and recognize fires and reduce detection time using object detection on an image stream. Object detection has made giant leaps in speed and accuracy over the last six years, making real-time detection feasible. To our end, we collected and labeled appropriate data from several public sources, which have been used to train and evaluate several models based on the popular YOLOv4 object detector. Our focus, driven by a collaborating industrial partner, is to implement our system in an industrial warehouse setting, which is characterized by high ceilings. A drawback of traditional smoke detectors in this setup is that the smoke has to rise to a sufficient height. The AI models brought forward in this research managed to outperform these detectors by a significant amount of time, providing precious anticipation that could help to minimize the effects of fires further.
### Album cover art image generation with Generative Adversarial Networks
- **Authors:** Felipe Perez Stoppa, Ester Vidaña-Vila, Joan Navarro
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Artificial Intelligence (cs.AI); Machine Learning (cs.LG); Image and Video Processing (eess.IV)
- **Arxiv link:** https://arxiv.org/abs/2212.04844
- **Pdf link:** https://arxiv.org/pdf/2212.04844
- **Abstract**
Generative Adversarial Networks (GANs) were introduced by Goodfellow in 2014, and since then have become popular for constructing generative artificial intelligence models. However, the drawbacks of such networks are numerous, like their longer training times, their sensitivity to hyperparameter tuning, several types of loss and optimization functions and other difficulties like mode collapse. Current applications of GANs include generating photo-realistic human faces, animals and objects. However, I wanted to explore the artistic ability of GANs in more detail, by using existing models and learning from them. This dissertation covers the basics of neural networks and works its way up to the particular aspects of GANs, together with experimentation and modification of existing available models, from least complex to most. The intention is to see if state of the art GANs (specifically StyleGAN2) can generate album art covers and if it is possible to tailor them by genre. This was attempted by first familiarizing myself with 3 existing GANs architectures, including the state of the art StyleGAN2. The StyleGAN2 code was used to train a model with a dataset containing 80K album cover images, then used to style images by picking curated images and mixing their styles.
## Keyword: raw image
There is no result
|
2.0
|
New submissions for Mon, 12 Dec 22 - ## Keyword: events
### MIMO Is All You Need : A Strong Multi-In-Multi-Out Baseline for Video Prediction
- **Authors:** Shuliang Ning, Mengcheng Lan, Yanran Li, Chaofeng Chen, Qian Chen, Xunlai Chen, Xiaoguang Han, Shuguang Cui
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV)
- **Arxiv link:** https://arxiv.org/abs/2212.04655
- **Pdf link:** https://arxiv.org/pdf/2212.04655
- **Abstract**
The mainstream of the existing approaches for video prediction builds up their models based on a Single-In-Single-Out (SISO) architecture, which takes the current frame as input to predict the next frame in a recursive manner. This way often leads to severe performance degradation when they try to extrapolate a longer period of future, thus limiting the practical use of the prediction model. Alternatively, a Multi-In-Multi-Out (MIMO) architecture that outputs all the future frames at one shot naturally breaks the recursive manner and therefore prevents error accumulation. However, only a few MIMO models for video prediction are proposed and they only achieve inferior performance due to the date. The real strength of the MIMO model in this area is not well noticed and is largely under-explored. Motivated by that, we conduct a comprehensive investigation in this paper to thoroughly exploit how far a simple MIMO architecture can go. Surprisingly, our empirical studies reveal that a simple MIMO model can outperform the state-of-the-art work with a large margin much more than expected, especially in dealing with longterm error accumulation. After exploring a number of ways and designs, we propose a new MIMO architecture based on extending the pure Transformer with local spatio-temporal blocks and a new multi-output decoder, namely MIMO-VP, to establish a new standard in video prediction. We evaluate our model in four highly competitive benchmarks (Moving MNIST, Human3.6M, Weather, KITTI). Extensive experiments show that our model wins 1st place on all the benchmarks with remarkable performance gains and surpasses the best SISO model in all aspects including efficiency, quantity, and quality. We believe our model can serve as a new baseline to facilitate the future research of video prediction tasks. The code will be released.
## Keyword: event camera
There is no result
## Keyword: events camera
There is no result
## Keyword: white balance
There is no result
## Keyword: color contrast
There is no result
## Keyword: AWB
### Image-Based Fire Detection in Industrial Environments with YOLOv4
- **Authors:** Otto Zell, Joel Pålsson, Kevin Hernandez-Diaz, Fernando Alonso-Fernandez, Felix Nilsson
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV)
- **Arxiv link:** https://arxiv.org/abs/2212.04786
- **Pdf link:** https://arxiv.org/pdf/2212.04786
- **Abstract**
Fires have destructive power when they break out and affect their surroundings on a devastatingly large scale. The best way to minimize their damage is to detect the fire as quickly as possible before it has a chance to grow. Accordingly, this work looks into the potential of AI to detect and recognize fires and reduce detection time using object detection on an image stream. Object detection has made giant leaps in speed and accuracy over the last six years, making real-time detection feasible. To our end, we collected and labeled appropriate data from several public sources, which have been used to train and evaluate several models based on the popular YOLOv4 object detector. Our focus, driven by a collaborating industrial partner, is to implement our system in an industrial warehouse setting, which is characterized by high ceilings. A drawback of traditional smoke detectors in this setup is that the smoke has to rise to a sufficient height. The AI models brought forward in this research managed to outperform these detectors by a significant amount of time, providing precious anticipation that could help to minimize the effects of fires further.
### Album cover art image generation with Generative Adversarial Networks
- **Authors:** Felipe Perez Stoppa, Ester Vidaña-Vila, Joan Navarro
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Artificial Intelligence (cs.AI); Machine Learning (cs.LG); Image and Video Processing (eess.IV)
- **Arxiv link:** https://arxiv.org/abs/2212.04844
- **Pdf link:** https://arxiv.org/pdf/2212.04844
- **Abstract**
Generative Adversarial Networks (GANs) were introduced by Goodfellow in 2014, and since then have become popular for constructing generative artificial intelligence models. However, the drawbacks of such networks are numerous, like their longer training times, their sensitivity to hyperparameter tuning, several types of loss and optimization functions and other difficulties like mode collapse. Current applications of GANs include generating photo-realistic human faces, animals and objects. However, I wanted to explore the artistic ability of GANs in more detail, by using existing models and learning from them. This dissertation covers the basics of neural networks and works its way up to the particular aspects of GANs, together with experimentation and modification of existing available models, from least complex to most. The intention is to see if state of the art GANs (specifically StyleGAN2) can generate album art covers and if it is possible to tailor them by genre. This was attempted by first familiarizing myself with 3 existing GANs architectures, including the state of the art StyleGAN2. The StyleGAN2 code was used to train a model with a dataset containing 80K album cover images, then used to style images by picking curated images and mixing their styles.
## Keyword: ISP
### Reliable Multimodal Trajectory Prediction via Error Aligned Uncertainty Optimization
- **Authors:** Neslihan Kose, Ranganath Krishnan, Akash Dhamasia, Omesh Tickoo, Michael Paulitsch
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Machine Learning (cs.LG)
- **Arxiv link:** https://arxiv.org/abs/2212.04812
- **Pdf link:** https://arxiv.org/pdf/2212.04812
- **Abstract**
Reliable uncertainty quantification in deep neural networks is very crucial in safety-critical applications such as automated driving for trustworthy and informed decision-making. Assessing the quality of uncertainty estimates is challenging as ground truth for uncertainty estimates is not available. Ideally, in a well-calibrated model, uncertainty estimates should perfectly correlate with model error. We propose a novel error aligned uncertainty optimization method and introduce a trainable loss function to guide the models to yield good quality uncertainty estimates aligning with the model error. Our approach targets continuous structured prediction and regression tasks, and is evaluated on multiple datasets including a large-scale vehicle motion prediction task involving real-world distributional shifts. We demonstrate that our method improves average displacement error by 1.69% and 4.69%, and the uncertainty correlation with model error by 17.22% and 19.13% as quantified by Pearson correlation coefficient on two state-of-the-art baselines.
### PACMAN: a framework for pulse oximeter digit detection and reading in a low-resource setting
- **Authors:** Chiraphat Boonnag, Wanumaidah Saengmolee, Narongrid Seesawad, Amrest Chinkamol, Saendee Rattanasomrerk, Kanyakorn Veerakanjana, Kamonwan Thanontip, Warissara Limpornchitwilai, Piyalitt Ittichaiwong, Theerawit Wilaiprasitporn
- **Subjects:** Image and Video Processing (eess.IV); Computer Vision and Pattern Recognition (cs.CV)
- **Arxiv link:** https://arxiv.org/abs/2212.04964
- **Pdf link:** https://arxiv.org/pdf/2212.04964
- **Abstract**
In light of the COVID-19 pandemic, patients were required to manually input their daily oxygen saturation (SpO2) and pulse rate (PR) values into a health monitoring system-unfortunately, such a process trend to be an error in typing. Several studies attempted to detect the physiological value from the captured image using optical character recognition (OCR). However, the technology has limited availability with high cost. Thus, this study aimed to propose a novel framework called PACMAN (Pandemic Accelerated Human-Machine Collaboration) with a low-resource deep learning-based computer vision. We compared state-of-the-art object detection algorithms (scaled YOLOv4, YOLOv5, and YOLOR), including the commercial OCR tools for digit recognition on the captured images from pulse oximeter display. All images were derived from crowdsourced data collection with varying quality and alignment. YOLOv5 was the best-performing model against the given model comparison across all datasets, notably the correctly orientated image dataset. We further improved the model performance with the digits auto-orientation algorithm and applied a clustering algorithm to extract SpO2 and PR values. The accuracy performance of YOLOv5 with the implementations was approximately 81.0-89.5%, which was enhanced compared to without any additional implementation. Accordingly, this study highlighted the completion of PACMAN framework to detect and read digits in real-world datasets. The proposed framework has been currently integrated into the patient monitoring system utilized by hospitals nationwide.
## Keyword: image signal processing
There is no result
## Keyword: image signal process
There is no result
## Keyword: compression
There is no result
## Keyword: RAW
### Image-Based Fire Detection in Industrial Environments with YOLOv4
- **Authors:** Otto Zell, Joel Pålsson, Kevin Hernandez-Diaz, Fernando Alonso-Fernandez, Felix Nilsson
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV)
- **Arxiv link:** https://arxiv.org/abs/2212.04786
- **Pdf link:** https://arxiv.org/pdf/2212.04786
- **Abstract**
Fires have destructive power when they break out and affect their surroundings on a devastatingly large scale. The best way to minimize their damage is to detect the fire as quickly as possible before it has a chance to grow. Accordingly, this work looks into the potential of AI to detect and recognize fires and reduce detection time using object detection on an image stream. Object detection has made giant leaps in speed and accuracy over the last six years, making real-time detection feasible. To our end, we collected and labeled appropriate data from several public sources, which have been used to train and evaluate several models based on the popular YOLOv4 object detector. Our focus, driven by a collaborating industrial partner, is to implement our system in an industrial warehouse setting, which is characterized by high ceilings. A drawback of traditional smoke detectors in this setup is that the smoke has to rise to a sufficient height. The AI models brought forward in this research managed to outperform these detectors by a significant amount of time, providing precious anticipation that could help to minimize the effects of fires further.
### Album cover art image generation with Generative Adversarial Networks
- **Authors:** Felipe Perez Stoppa, Ester Vidaña-Vila, Joan Navarro
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Artificial Intelligence (cs.AI); Machine Learning (cs.LG); Image and Video Processing (eess.IV)
- **Arxiv link:** https://arxiv.org/abs/2212.04844
- **Pdf link:** https://arxiv.org/pdf/2212.04844
- **Abstract**
Generative Adversarial Networks (GANs) were introduced by Goodfellow in 2014, and since then have become popular for constructing generative artificial intelligence models. However, the drawbacks of such networks are numerous, like their longer training times, their sensitivity to hyperparameter tuning, several types of loss and optimization functions and other difficulties like mode collapse. Current applications of GANs include generating photo-realistic human faces, animals and objects. However, I wanted to explore the artistic ability of GANs in more detail, by using existing models and learning from them. This dissertation covers the basics of neural networks and works its way up to the particular aspects of GANs, together with experimentation and modification of existing available models, from least complex to most. The intention is to see if state of the art GANs (specifically StyleGAN2) can generate album art covers and if it is possible to tailor them by genre. This was attempted by first familiarizing myself with 3 existing GANs architectures, including the state of the art StyleGAN2. The StyleGAN2 code was used to train a model with a dataset containing 80K album cover images, then used to style images by picking curated images and mixing their styles.
## Keyword: raw image
There is no result
|
process
|
new submissions for mon dec keyword events mimo is all you need a strong multi in multi out baseline for video prediction authors shuliang ning mengcheng lan yanran li chaofeng chen qian chen xunlai chen xiaoguang han shuguang cui subjects computer vision and pattern recognition cs cv arxiv link pdf link abstract the mainstream of the existing approaches for video prediction builds up their models based on a single in single out siso architecture which takes the current frame as input to predict the next frame in a recursive manner this way often leads to severe performance degradation when they try to extrapolate a longer period of future thus limiting the practical use of the prediction model alternatively a multi in multi out mimo architecture that outputs all the future frames at one shot naturally breaks the recursive manner and therefore prevents error accumulation however only a few mimo models for video prediction are proposed and they only achieve inferior performance due to the date the real strength of the mimo model in this area is not well noticed and is largely under explored motivated by that we conduct a comprehensive investigation in this paper to thoroughly exploit how far a simple mimo architecture can go surprisingly our empirical studies reveal that a simple mimo model can outperform the state of the art work with a large margin much more than expected especially in dealing with longterm error accumulation after exploring a number of ways and designs we propose a new mimo architecture based on extending the pure transformer with local spatio temporal blocks and a new multi output decoder namely mimo vp to establish a new standard in video prediction we evaluate our model in four highly competitive benchmarks moving mnist weather kitti extensive experiments show that our model wins place on all the benchmarks with remarkable performance gains and surpasses the best siso model in all aspects including efficiency quantity and quality we believe our model can serve as a new baseline to facilitate the future research of video prediction tasks the code will be released keyword event camera there is no result keyword events camera there is no result keyword white balance there is no result keyword color contrast there is no result keyword awb image based fire detection in industrial environments with authors otto zell joel pålsson kevin hernandez diaz fernando alonso fernandez felix nilsson subjects computer vision and pattern recognition cs cv arxiv link pdf link abstract fires have destructive power when they break out and affect their surroundings on a devastatingly large scale the best way to minimize their damage is to detect the fire as quickly as possible before it has a chance to grow accordingly this work looks into the potential of ai to detect and recognize fires and reduce detection time using object detection on an image stream object detection has made giant leaps in speed and accuracy over the last six years making real time detection feasible to our end we collected and labeled appropriate data from several public sources which have been used to train and evaluate several models based on the popular object detector our focus driven by a collaborating industrial partner is to implement our system in an industrial warehouse setting which is characterized by high ceilings a drawback of traditional smoke detectors in this setup is that the smoke has to rise to a sufficient height the ai models brought forward in this research managed to outperform these detectors 
by a significant amount of time providing precious anticipation that could help to minimize the effects of fires further album cover art image generation with generative adversarial networks authors felipe perez stoppa ester vidaña vila joan navarro subjects computer vision and pattern recognition cs cv artificial intelligence cs ai machine learning cs lg image and video processing eess iv arxiv link pdf link abstract generative adversarial networks gans were introduced by goodfellow in and since then have become popular for constructing generative artificial intelligence models however the drawbacks of such networks are numerous like their longer training times their sensitivity to hyperparameter tuning several types of loss and optimization functions and other difficulties like mode collapse current applications of gans include generating photo realistic human faces animals and objects however i wanted to explore the artistic ability of gans in more detail by using existing models and learning from them this dissertation covers the basics of neural networks and works its way up to the particular aspects of gans together with experimentation and modification of existing available models from least complex to most the intention is to see if state of the art gans specifically can generate album art covers and if it is possible to tailor them by genre this was attempted by first familiarizing myself with existing gans architectures including the state of the art the code was used to train a model with a dataset containing album cover images then used to style images by picking curated images and mixing their styles keyword isp reliable multimodal trajectory prediction via error aligned uncertainty optimization authors neslihan kose ranganath krishnan akash dhamasia omesh tickoo michael paulitsch subjects computer vision and pattern recognition cs cv machine learning cs lg arxiv link pdf link abstract reliable uncertainty quantification in deep neural networks is very crucial in safety critical applications such as automated driving for trustworthy and informed decision making assessing the quality of uncertainty estimates is challenging as ground truth for uncertainty estimates is not available ideally in a well calibrated model uncertainty estimates should perfectly correlate with model error we propose a novel error aligned uncertainty optimization method and introduce a trainable loss function to guide the models to yield good quality uncertainty estimates aligning with the model error our approach targets continuous structured prediction and regression tasks and is evaluated on multiple datasets including a large scale vehicle motion prediction task involving real world distributional shifts we demonstrate that our method improves average displacement error by and and the uncertainty correlation with model error by and as quantified by pearson correlation coefficient on two state of the art baselines pacman a framework for pulse oximeter digit detection and reading in a low resource setting authors chiraphat boonnag wanumaidah saengmolee narongrid seesawad amrest chinkamol saendee rattanasomrerk kanyakorn veerakanjana kamonwan thanontip warissara limpornchitwilai piyalitt ittichaiwong theerawit wilaiprasitporn subjects image and video processing eess iv computer vision and pattern recognition cs cv arxiv link pdf link abstract in light of the covid pandemic patients were required to manually input their daily oxygen saturation and pulse rate pr values into a health monitoring system 
unfortunately such a process trend to be an error in typing several studies attempted to detect the physiological value from the captured image using optical character recognition ocr however the technology has limited availability with high cost thus this study aimed to propose a novel framework called pacman pandemic accelerated human machine collaboration with a low resource deep learning based computer vision we compared state of the art object detection algorithms scaled and yolor including the commercial ocr tools for digit recognition on the captured images from pulse oximeter display all images were derived from crowdsourced data collection with varying quality and alignment was the best performing model against the given model comparison across all datasets notably the correctly orientated image dataset we further improved the model performance with the digits auto orientation algorithm and applied a clustering algorithm to extract and pr values the accuracy performance of with the implementations was approximately which was enhanced compared to without any additional implementation accordingly this study highlighted the completion of pacman framework to detect and read digits in real world datasets the proposed framework has been currently integrated into the patient monitoring system utilized by hospitals nationwide keyword image signal processing there is no result keyword image signal process there is no result keyword compression there is no result keyword raw image based fire detection in industrial environments with authors otto zell joel pålsson kevin hernandez diaz fernando alonso fernandez felix nilsson subjects computer vision and pattern recognition cs cv arxiv link pdf link abstract fires have destructive power when they break out and affect their surroundings on a devastatingly large scale the best way to minimize their damage is to detect the fire as quickly as possible before it has a chance to grow accordingly this work looks into the potential of ai to detect and recognize fires and reduce detection time using object detection on an image stream object detection has made giant leaps in speed and accuracy over the last six years making real time detection feasible to our end we collected and labeled appropriate data from several public sources which have been used to train and evaluate several models based on the popular object detector our focus driven by a collaborating industrial partner is to implement our system in an industrial warehouse setting which is characterized by high ceilings a drawback of traditional smoke detectors in this setup is that the smoke has to rise to a sufficient height the ai models brought forward in this research managed to outperform these detectors by a significant amount of time providing precious anticipation that could help to minimize the effects of fires further album cover art image generation with generative adversarial networks authors felipe perez stoppa ester vidaña vila joan navarro subjects computer vision and pattern recognition cs cv artificial intelligence cs ai machine learning cs lg image and video processing eess iv arxiv link pdf link abstract generative adversarial networks gans were introduced by goodfellow in and since then have become popular for constructing generative artificial intelligence models however the drawbacks of such networks are numerous like their longer training times their sensitivity to hyperparameter tuning several types of loss and optimization functions and other difficulties like mode 
collapse current applications of gans include generating photo realistic human faces animals and objects however i wanted to explore the artistic ability of gans in more detail by using existing models and learning from them this dissertation covers the basics of neural networks and works its way up to the particular aspects of gans together with experimentation and modification of existing available models from least complex to most the intention is to see if state of the art gans specifically can generate album art covers and if it is possible to tailor them by genre this was attempted by first familiarizing myself with existing gans architectures including the state of the art the code was used to train a model with a dataset containing album cover images then used to style images by picking curated images and mixing their styles keyword raw image there is no result
| 1
|
17,402
| 23,219,152,081
|
IssuesEvent
|
2022-08-02 16:28:39
|
dita-ot/dita-ot
|
https://api.github.com/repos/dita-ot/dita-ot
|
closed
|
Guard against possible NPE in XMLUtils.toErrorReporter(DITAOTLogger)
|
bug priority/medium preprocess
|
Using an invalid plugin I got this NPE in the XMLUtils.toErrorReporter method:
.... java.lang.NullPointerException: Cannot invoke "net.sf.saxon.s9api.Location.getSystemId()" because the return value of "net.sf.saxon.s9api.XmlProcessingError.getLocation()" is null
at org.dita.dost.util.XMLUtils.lambda$0(XMLUtils.java:202)
at net.sf.saxon.style.Compilation.compileSingletonPackage(Compilation.java:117)
at net.sf.saxon.s9api.XsltCompiler.compile(XsltCompiler.java:838)
at org.dita.dost.module.XsltModule.execute(XsltModule.java:112)
at org.dita.dost.ant.ExtensibleAntInvoker.execute(ExtensibleAntInvoker.java:189)
the NPE is caused by a FileNotFoundException thrown somewhere because an XSLT stylesheet is missing in my plugin:
.../plugins/com.oxygenxml.webhelp.responsive/xsl/indexterms/extractIndexterms.xsl (No such file or directory)
[pipeline] at java.base/java.io.FileInputStream.open0(Native Method)
[pipeline] at java.base/java.io.FileInputStream.open(FileInputStream.java:216)
[pipeline] at java.base/java.io.FileInputStream.<init>(FileInputStream.java:157)
[pipeline] at java.base/java.io.FileInputStream.<init>(FileInputStream.java:111)
[pipeline] at java.base/sun.net.www.protocol.file.FileURLConnection.connect(FileURLConnection.java:86)
[pipeline] at java.base/sun.net.www.protocol.file.FileURLConnection.getInputStream(FileURLConnection.java:189)
[pipeline] at org.apache.xerces.impl.XMLEntityManager.setupCurrentEntity(XMLEntityManager.java:1114)
[pipeline] at org.apache.xerces.impl.XMLVersionDetector.determineDocVersion(Unknown Source)
[pipeline] at org.apache.xerces.parsers.XML11Configuration.parse(Unknown Source)
[pipeline] at org.ditang.relaxng.defaults.RelaxDefaultsParserConfiguration.parse(RelaxDefaultsParserConfiguration.java:119)
[pipeline] at org.apache.xerces.parsers.XML11Configuration.parse(Unknown Source)
[pipeline] at org.apache.xerces.parsers.XMLParser.parse(Unknown Source)
[pipeline] at org.apache.xerces.parsers.AbstractSAXParser.parse(Unknown Source)
[pipeline] at org.apache.xerces.jaxp.SAXParserImpl$JAXPSAXParser.parse(Unknown Source)
[pipeline] at net.sf.saxon.event.Sender.sendSAXSource(Sender.java:439)
[pipeline] at net.sf.saxon.event.Sender.send(Sender.java:168)
[pipeline] at net.sf.saxon.style.StylesheetModule.sendStylesheetSource(StylesheetModule.java:157)
[pipeline] at net.sf.saxon.style.StylesheetModule.loadStylesheet(StylesheetModule.java:229)
[pipeline] at net.sf.saxon.style.Compilation.compileSingletonPackage(Compilation.java:113)
[pipeline] at net.sf.saxon.s9api.XsltCompiler.compile(XsltCompiler.java:838)
[pipeline] at org.dita.dost.module.XsltModule.execute(XsltModule.java:112)
[pipeline] at org.dita.dost.ant.ExtensibleAntInvoker.execute(ExtensibleAntInvoker.java:1
|
1.0
|
Guard against possible NPE in XMLUtils.toErrorReporter(DITAOTLogger) - Using an invalid plugin I got this NPE in the XMLUtils.toErrorReporter method:
.... java.lang.NullPointerException: Cannot invoke "net.sf.saxon.s9api.Location.getSystemId()" because the return value of "net.sf.saxon.s9api.XmlProcessingError.getLocation()" is null
at org.dita.dost.util.XMLUtils.lambda$0(XMLUtils.java:202)
at net.sf.saxon.style.Compilation.compileSingletonPackage(Compilation.java:117)
at net.sf.saxon.s9api.XsltCompiler.compile(XsltCompiler.java:838)
at org.dita.dost.module.XsltModule.execute(XsltModule.java:112)
at org.dita.dost.ant.ExtensibleAntInvoker.execute(ExtensibleAntInvoker.java:189)
the NPE is caused by a FileNotFoundException thrown somewhere because an XSLT stylesheet is missing in my plugin:
.../plugins/com.oxygenxml.webhelp.responsive/xsl/indexterms/extractIndexterms.xsl (No such file or directory)
[pipeline] at java.base/java.io.FileInputStream.open0(Native Method)
[pipeline] at java.base/java.io.FileInputStream.open(FileInputStream.java:216)
[pipeline] at java.base/java.io.FileInputStream.<init>(FileInputStream.java:157)
[pipeline] at java.base/java.io.FileInputStream.<init>(FileInputStream.java:111)
[pipeline] at java.base/sun.net.www.protocol.file.FileURLConnection.connect(FileURLConnection.java:86)
[pipeline] at java.base/sun.net.www.protocol.file.FileURLConnection.getInputStream(FileURLConnection.java:189)
[pipeline] at org.apache.xerces.impl.XMLEntityManager.setupCurrentEntity(XMLEntityManager.java:1114)
[pipeline] at org.apache.xerces.impl.XMLVersionDetector.determineDocVersion(Unknown Source)
[pipeline] at org.apache.xerces.parsers.XML11Configuration.parse(Unknown Source)
[pipeline] at org.ditang.relaxng.defaults.RelaxDefaultsParserConfiguration.parse(RelaxDefaultsParserConfiguration.java:119)
[pipeline] at org.apache.xerces.parsers.XML11Configuration.parse(Unknown Source)
[pipeline] at org.apache.xerces.parsers.XMLParser.parse(Unknown Source)
[pipeline] at org.apache.xerces.parsers.AbstractSAXParser.parse(Unknown Source)
[pipeline] at org.apache.xerces.jaxp.SAXParserImpl$JAXPSAXParser.parse(Unknown Source)
[pipeline] at net.sf.saxon.event.Sender.sendSAXSource(Sender.java:439)
[pipeline] at net.sf.saxon.event.Sender.send(Sender.java:168)
[pipeline] at net.sf.saxon.style.StylesheetModule.sendStylesheetSource(StylesheetModule.java:157)
[pipeline] at net.sf.saxon.style.StylesheetModule.loadStylesheet(StylesheetModule.java:229)
[pipeline] at net.sf.saxon.style.Compilation.compileSingletonPackage(Compilation.java:113)
[pipeline] at net.sf.saxon.s9api.XsltCompiler.compile(XsltCompiler.java:838)
[pipeline] at org.dita.dost.module.XsltModule.execute(XsltModule.java:112)
[pipeline] at org.dita.dost.ant.ExtensibleAntInvoker.execute(ExtensibleAntInvoker.java:1
|
process
|
guard against possible npe in xmlutils toerrorreporter ditaotlogger using an invalid plugin i got this npe in the xmlutils toerrorreporter method java lang nullpointerexception cannot invoke net sf saxon location getsystemid because the return value of net sf saxon xmlprocessingerror getlocation is null at org dita dost util xmlutils lambda xmlutils java at net sf saxon style compilation compilesingletonpackage compilation java at net sf saxon xsltcompiler compile xsltcompiler java at org dita dost module xsltmodule execute xsltmodule java at org dita dost ant extensibleantinvoker execute extensibleantinvoker java the npe is caused by a filenotfoundexception thrown somewhere because an xslt stylesheet is missing in my plugin plugins com oxygenxml webhelp responsive xsl indexterms extractindexterms xsl no such file or directory at java base java io fileinputstream native method at java base java io fileinputstream open fileinputstream java at java base java io fileinputstream fileinputstream java at java base java io fileinputstream fileinputstream java at java base sun net at java base sun net at org apache xerces impl xmlentitymanager setupcurrententity xmlentitymanager java at org apache xerces impl xmlversiondetector determinedocversion unknown source at org apache xerces parsers parse unknown source at org ditang relaxng defaults relaxdefaultsparserconfiguration parse relaxdefaultsparserconfiguration java at org apache xerces parsers parse unknown source at org apache xerces parsers xmlparser parse unknown source at org apache xerces parsers abstractsaxparser parse unknown source at org apache xerces jaxp saxparserimpl jaxpsaxparser parse unknown source at net sf saxon event sender sendsaxsource sender java at net sf saxon event sender send sender java at net sf saxon style stylesheetmodule sendstylesheetsource stylesheetmodule java at net sf saxon style stylesheetmodule loadstylesheet stylesheetmodule java at net sf saxon style compilation compilesingletonpackage compilation java at net sf saxon xsltcompiler compile xsltcompiler java at org dita dost module xsltmodule execute xsltmodule java at org dita dost ant extensibleantinvoker execute extensibleantinvoker java
| 1
|
10,199
| 13,064,948,832
|
IssuesEvent
|
2020-07-30 18:57:24
|
googleapis/google-cloud-cpp
|
https://api.github.com/repos/googleapis/google-cloud-cpp
|
closed
|
Consider: GitHub actions for CI
|
type: process
|
**No rush or urgency here**
We should look into using GitHub actions as our CI runner. It looks like it supports most/all of what we need. It supports running tests on linux, windows, and macos. It supports using docker images. It supports caching data between runs. It supports nice UI integration, and real-time streaming of logs in the UI (awesome!).
Some obvious concerns are:
* I believe we can only cache 5GB per repo, which I imagine would not be enough.
* There may be other resource limits that we'd hit.
We can, of course, also consider running our own GitHub Action runners, but that's more work.
Just something that we might want to consider since Kokoro sometimes bites us.
|
1.0
|
Consider: GitHub actions for CI - **No rush or urgency here**
We should look into using GitHub actions as our CI runner. It looks like it supports most/all of what we need. It supports running tests on linux, windows, and macos. It supports using docker images. It supports caching data between runs. It supports nice UI integration, and real-time streaming of logs in the UI (awesome!).
Some obvious concerns are:
* I believe we can only cache 5GB per repo, which I imagine would not be enough.
* There may be other resource limits that we'd hit.
We can, of course, also consider running our own GitHub Action runners, but that's more work.
Just something that we might want to consider since Kokoro sometimes bites us.
|
process
|
consider github actions for ci no rush or urgency here we should look into using github actions as our ci runner it looks like it supports most all of what we need it supports running tests on linux windows and macos it supports using docker images it supports caching data between runs it supports nice ui integration and real time streaming of logs in the ui awesome some obvious concerns are i believe we can only cache per repo which i imagine would not be enough there may be other resource limits that we d hit we can of course also consider running our own github action runners but that s more work just something that we might want to consider since kokoro sometimes bites us
| 1
|
25,976
| 11,235,882,392
|
IssuesEvent
|
2020-01-09 09:20:24
|
PeterNgTr/harvey-ui-tests
|
https://api.github.com/repos/PeterNgTr/harvey-ui-tests
|
opened
|
CVE-2019-10744 (High) detected in lodash-4.17.11.tgz
|
security vulnerability
|
## CVE-2019-10744 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>lodash-4.17.11.tgz</b></p></summary>
<p>Lodash modular utilities.</p>
<p>Library home page: <a href="https://registry.npmjs.org/lodash/-/lodash-4.17.11.tgz">https://registry.npmjs.org/lodash/-/lodash-4.17.11.tgz</a></p>
<p>Path to dependency file: /tmp/ws-scm/harvey-ui-tests/package.json</p>
<p>Path to vulnerable library: /tmp/ws-scm/harvey-ui-tests/node_modules/async/node_modules/lodash/package.json</p>
<p>
Dependency Hierarchy:
- selenium-standalone-6.15.4.tgz (Root Library)
- async-2.6.1.tgz
- :x: **lodash-4.17.11.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/PeterNgTr/harvey-ui-tests/commit/f575e2932670c5f02fd84d06df81832601646cb5">f575e2932670c5f02fd84d06df81832601646cb5</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Versions of lodash lower than 4.17.12 are vulnerable to Prototype Pollution. The function defaultsDeep could be tricked into adding or modifying properties of Object.prototype using a constructor payload.
<p>Publish Date: 2019-07-26
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2019-10744>CVE-2019-10744</a></p>
</p>
</details>
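For context, here is a minimal sketch of the pattern this CVE describes, assuming a dependency on lodash older than 4.17.12; the `polluted` property name is purely illustrative and not part of the advisory.
```
// Minimal reproduction sketch of CVE-2019-10744 (prototype pollution via defaultsDeep),
// assuming lodash < 4.17.12 is installed. The "polluted" key is an illustrative name only.
import { defaultsDeep } from 'lodash';

// A JSON payload that smuggles a constructor/prototype path into the merge.
const payload = JSON.parse('{"constructor": {"prototype": {"polluted": true}}}');

// On vulnerable versions, defaultsDeep follows that path and writes onto Object.prototype.
defaultsDeep({}, payload);

// Every plain object now inherits the injected property:
const probe: Record<string, unknown> = {};
console.log(probe.polluted); // true on lodash < 4.17.12, undefined once upgraded
```
Upgrading the transitive lodash to 4.17.12 or later, as in the suggested fix below, makes the probe print `undefined` again.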
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>9.8</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/lodash/lodash/pull/4336/commits/a01e4fa727e7294cb7b2845570ba96b206926790">https://github.com/lodash/lodash/pull/4336/commits/a01e4fa727e7294cb7b2845570ba96b206926790</a></p>
<p>Release Date: 2019-07-08</p>
<p>Fix Resolution: 4.17.12</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
True
|
CVE-2019-10744 (High) detected in lodash-4.17.11.tgz - ## CVE-2019-10744 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>lodash-4.17.11.tgz</b></p></summary>
<p>Lodash modular utilities.</p>
<p>Library home page: <a href="https://registry.npmjs.org/lodash/-/lodash-4.17.11.tgz">https://registry.npmjs.org/lodash/-/lodash-4.17.11.tgz</a></p>
<p>Path to dependency file: /tmp/ws-scm/harvey-ui-tests/package.json</p>
<p>Path to vulnerable library: /tmp/ws-scm/harvey-ui-tests/node_modules/async/node_modules/lodash/package.json</p>
<p>
Dependency Hierarchy:
- selenium-standalone-6.15.4.tgz (Root Library)
- async-2.6.1.tgz
- :x: **lodash-4.17.11.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/PeterNgTr/harvey-ui-tests/commit/f575e2932670c5f02fd84d06df81832601646cb5">f575e2932670c5f02fd84d06df81832601646cb5</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Versions of lodash lower than 4.17.12 are vulnerable to Prototype Pollution. The function defaultsDeep could be tricked into adding or modifying properties of Object.prototype using a constructor payload.
<p>Publish Date: 2019-07-26
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2019-10744>CVE-2019-10744</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>9.8</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/lodash/lodash/pull/4336/commits/a01e4fa727e7294cb7b2845570ba96b206926790">https://github.com/lodash/lodash/pull/4336/commits/a01e4fa727e7294cb7b2845570ba96b206926790</a></p>
<p>Release Date: 2019-07-08</p>
<p>Fix Resolution: 4.17.12</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
non_process
|
cve high detected in lodash tgz cve high severity vulnerability vulnerable library lodash tgz lodash modular utilities library home page a href path to dependency file tmp ws scm harvey ui tests package json path to vulnerable library tmp ws scm harvey ui tests node modules async node modules lodash package json dependency hierarchy selenium standalone tgz root library async tgz x lodash tgz vulnerable library found in head commit a href vulnerability details versions of lodash lower than are vulnerable to prototype pollution the function defaultsdeep could be tricked into adding or modifying properties of object prototype using a constructor payload publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution step up your open source security game with whitesource
| 0
|
12,196
| 14,742,410,951
|
IssuesEvent
|
2021-01-07 12:14:59
|
kdjstudios/SABillingGitlab
|
https://api.github.com/repos/kdjstudios/SABillingGitlab
|
closed
|
Keener - ? about a balance
|
anc-process anp-0.5 ant-bug ant-child/secondary has attachment
|
In GitLab by @kdjstudios on Apr 12, 2019, 14:43
**Submitted by:** Gaylan Garrett <Gaylan.Garrett@Nexa.com>
**Helpdesk:** http://www.servicedesk.answernet.com/profiles/ticket/2019-04-12-38371/conversation
**Server:** External
**Client/Site:** Keener
**Account:**
**Issue:**
Please see the printout below. I believe the client made a payment of 2469.95 while it was in draft mode (I was in draft mode for several days doing research), but now it says the balance is 5,166.75 even though all invoices are paid except for the recent invoice of 2696.80. It is as if the payment of 2469.95 was made but is still included in the balance.

|
1.0
|
Keener - ? about a balance - In GitLab by @kdjstudios on Apr 12, 2019, 14:43
**Submitted by:** Gaylan Garrett <Gaylan.Garrett@Nexa.com>
**Helpdesk:** http://www.servicedesk.answernet.com/profiles/ticket/2019-04-12-38371/conversation
**Server:** External
**Client/Site:** Keener
**Account:**
**Issue:**
Please see the printout below. I believe the client made a payment of 2469.95 while it was in draft mode (I was in draft mode for several days doing research), but now it says the balance is 5,166.75 even though all invoices are paid except for the recent invoice of 2696.80. It is as if the payment of 2469.95 was made but is still included in the balance.

|
process
|
keener about a balance in gitlab by kdjstudios on apr submitted by gaylan garrett helpdesk server external client site keener account issue please see the print out below i believe the client made a payment of while it was in draft mode i was in draft mode several days doing research but now it say balance is but all invoices are paid except for the recent invoice it is as if the payment of was made but is still in the balance uploads image png
| 1
|
135,577
| 12,686,853,229
|
IssuesEvent
|
2020-06-20 13:25:38
|
sapmentors/cap-community
|
https://api.github.com/repos/sapmentors/cap-community
|
closed
|
Documentation: Missing FQNs
|
documentation
|
I've found two places in https://cap.cloud.sap/ where I think the code samples are wrong:
First is at https://cap.cloud.sap/docs/node.js/api#cds-transaction. The sample shows
```
return tx.run (SELECT.from ('Books'))
```
IMO, this must be changed to use the fully qualified name, like
```
return tx.run (SELECT.from ('another-service.Books'))
```
The same holds for the sample at https://cap.cloud.sap/docs/guides/consuming-services#uniform-consumption
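For illustration, here is a minimal sketch of the corrected consumption pattern, assuming a service wired up as 'another-service' that exposes a Books entity; the transaction handling shown is my assumption, not a quote from the docs.
```
// Minimal sketch only: names are placeholders, error handling kept deliberately small.
import cds from '@sap/cds';
const { SELECT } = cds.ql;

async function readBooks() {
  const srv = await cds.connect.to('another-service'); // the service that owns Books
  const tx = srv.tx();                                  // open a transaction on that service
  try {
    // The fully qualified name lets the query resolve against that service's model.
    const books = await tx.run(SELECT.from('another-service.Books'));
    await tx.commit();
    return books;
  } catch (err) {
    await tx.rollback(err);
    throw err;
  }
}
```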
In the same spirit, the ``Common Usage`` samples at https://cap.cloud.sap/docs/node.js/api#cds-run-query and https://cap.cloud.sap/docs/node.js/api#cds-connect seem to be misleading.
Regards
|
1.0
|
Documentation: Missing FQNs - I've found two places in https://cap.cloud.sap/ where I think the code samples are wrong:
First is at https://cap.cloud.sap/docs/node.js/api#cds-transaction. The sample shows
```
return tx.run (SELECT.from ('Books'))
```
IMO, this must be changed to use the fully qualified name, like
```
return tx.run (SELECT.from ('another-service.Books'))
```
The same holds for the sample at https://cap.cloud.sap/docs/guides/consuming-services#uniform-consumption
In the same spirit, the ``Common Usage`` samples at https://cap.cloud.sap/docs/node.js/api#cds-run-query and https://cap.cloud.sap/docs/node.js/api#cds-connect seem to be misleading.
Regards
|
non_process
|
documentation missing fqns i ve found two place in where i think the code samples are wrong first is at the sample shows return tx run select from books imo this must be changed to use the fully qualified name like return tx run select from another service books the same hold for the sample at in the same spirit the common usage samples at and seem to be misleading regards
| 0
|
10,154
| 13,044,162,608
|
IssuesEvent
|
2020-07-29 03:47:34
|
tikv/tikv
|
https://api.github.com/repos/tikv/tikv
|
closed
|
UCP: Migrate scalar function `CurrentUser` from TiDB
|
challenge-program-2 component/coprocessor difficulty/easy sig/coprocessor
|
## Description
Port the scalar function `CurrentUser` from TiDB to coprocessor.
## Score
* 50
## Mentor(s)
* @mapleFU
## Recommended Skills
* Rust programming
## Learning Materials
Already implemented expressions ported from TiDB
- https://github.com/tikv/tikv/tree/master/components/tidb_query/src/rpn_expr
- https://github.com/tikv/tikv/tree/master/components/tidb_query/src/expr
|
2.0
|
UCP: Migrate scalar function `CurrentUser` from TiDB -
## Description
Port the scalar function `CurrentUser` from TiDB to coprocessor.
## Score
* 50
## Mentor(s)
* @mapleFU
## Recommended Skills
* Rust programming
## Learning Materials
Already implemented expressions ported from TiDB
- https://github.com/tikv/tikv/tree/master/components/tidb_query/src/rpn_expr
- https://github.com/tikv/tikv/tree/master/components/tidb_query/src/expr
|
process
|
ucp migrate scalar function currentuser from tidb description port the scalar function currentuser from tidb to coprocessor score mentor s maplefu recommended skills rust programming learning materials already implemented expressions ported from tidb
| 1
|
96,359
| 8,607,442,599
|
IssuesEvent
|
2018-11-17 22:30:27
|
bitchan/eccrypto
|
https://api.github.com/repos/bitchan/eccrypto
|
closed
|
Improve unittesting by means of Saucelabs and Codeclimate
|
enhancement tests
|
I would suggest that in order to verify the compatibility of the library with the various targeted browsers and in order to keep track of various code quality indicators you could implement integration with [Saucelabs](saucelabs.com) and [Codeclimate](https://codeclimate.com)
i'm thinking this while evaluating the validity of the eccrypto for the integration in openpgp.js (https://github.com/openpgpjs/openpgpjs/issues/428), and i've right now implemented what i'm suggesting to you for the https://github.com/indutny/elliptic project: https://github.com/indutny/elliptic/pull/80
It would be great if you could do the same!

|
1.0
|
Improve unittesting by means of Saucelabs and Codeclimate - I would suggest that in order to verify the compatibility of the library with the various targeted browsers and in order to keep track of various code quality indicators you could implement integration with [Saucelabs](saucelabs.com) and [Codeclimate](https://codeclimate.com)
i'm thinking this while evaluating the validity of the eccrypto for the integration in openpgp.js (https://github.com/openpgpjs/openpgpjs/issues/428), and i've right now implemented what i'm suggesting to you for the https://github.com/indutny/elliptic project: https://github.com/indutny/elliptic/pull/80
It would be great if you could do the same!

|
non_process
|
improve unittesting by means of saucelabs and codeclimate i would suggest that in order to verify the compatibility of the library with the various trargetted browsers and in order to keep track of various code quality indicator you could implement integration with saucelabs com and i m thinking this while evaluating the validity of the eccrypto for the integration in openpgp js and i ve right know implemented what i m suggesting to you for the project it would be great if you could do the same
| 0
|
20,215
| 26,806,363,984
|
IssuesEvent
|
2023-02-01 18:37:20
|
mmattDonk/AI-TTS-Donations
|
https://api.github.com/repos/mmattDonk/AI-TTS-Donations
|
closed
|
syntax without ||
|
💫 feature_request @solrock/processor processor Low priority
|
I think it would be a good feature to make it work without having to add `||` between messages. I know nothing about regex or coding but I would make it stop when it sees a new voice `text: ` or a sound `(1)`.
I don't know if I should make another feature request for this other suggestions since they go in hand with this syntax feature but I would make it so if you make a mistake it would go back to the previous voice, for example:
`drake: hello i am drake (1) spngbob: i am still drake`
it would read it **all** with **Drake's** voice as "hello i am drake" (sound effect) "spngbob: i am still drake".
(even the spngbob part)
This way you stop the bot from stopping completely when a user sends a wrong voice id.
I would also add a default non ai voice (or a default ai voice / a way to pick a default ai voice) to read the first segment if it's wrong since the beginning, that way you can always get tts output even if the person sending the message is a pepega
For example: if the person forgets to add a starting voice or makes a spelling mistake on the first voice id, it would read it as Brian until it sees a valid voice id.
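A rough sketch of the parsing behaviour described above, assuming hypothetical voice names; this is not the bot's actual code, and sound-effect tokens like `(1)` are left out for brevity.

```ts
// Illustrative parser: split on "name:" tags without requiring "||" separators,
// keep the previous voice (or a default) when a tag is unknown or misspelled.
const KNOWN_VOICES = new Set(['drake', 'obama', 'brian']) // hypothetical voice list
const DEFAULT_VOICE = 'brian'

interface Segment { voice: string; text: string }

function parseMessage(message: string): Segment[] {
  const segments: Segment[] = []
  let currentVoice = DEFAULT_VOICE
  let buffer = ''
  // split on "word:" tokens but keep them, so each chunk starts with its tag
  for (const part of message.split(/(\b\w+:)/)) {
    const tag = part.match(/^(\w+):$/)
    if (tag && KNOWN_VOICES.has(tag[1].toLowerCase())) {
      if (buffer.trim()) segments.push({ voice: currentVoice, text: buffer.trim() })
      buffer = ''
      currentVoice = tag[1].toLowerCase()
    } else {
      // unknown tag: treat it as plain text and keep reading with the current voice
      buffer += part
    }
  }
  if (buffer.trim()) segments.push({ voice: currentVoice, text: buffer.trim() })
  return segments
}

// parseMessage('drake: hello i am drake spngbob: i am still drake')
// -> everything after "drake:" is read with Drake's voice, including "spngbob: ..."
```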
|
2.0
|
syntax without || - I think it would be a good feature to make it work without having to add `||` between messages. I know nothing about regex or coding but I would make it stop when it sees a new voice `text: ` or a sound `(1)`.
I don't know if I should make another feature request for this other suggestions since they go in hand with this syntax feature but I would make it so if you make a mistake it would go back to the previous voice, for example:
`drake: hello i am drake (1) spngbob: i am still drake`
it would read it **all** with **Drake's** voice as "hello i am drake" (sound effect) "spngbob: i am still drake".
(even the spngbob part)
This way you stop the bot from stopping completely when a user sends a wrong voice id.
I would also add a default non ai voice (or a default ai voice / a way to pick a default ai voice) to read the first segment if it's wrong since the beginning, that way you can always get tts output even if the person sending the message is a pepega
For example: if the person forgets to add a starting voice or makes a spelling mistake on the first voice id, it would read it as Brian until it sees a valid voice id.
|
process
|
syntax without i think it would be a good feature to make it work without having to add between messages i know nothing about regex or coding but i would make it stop when it sees a new voice text or a sound i don t know if i should make another feature request for this other suggestions since they go in hand with this syntax feature but i would make it so if you make a mistake it would go back to the previous voice for example drake hello i am drake spngbob i am still drake it would read it all with drake s voice as hello i am drake sound effect spngbob i am still drake even the spngbob part this way you stop the bot from stopping completely when a user sends a wrong voice id i would also add a default non ai voice or a default ai voice a way to pick a default ai voice to read the first segment if it s wrong since the beginning that way you can always get tts output even if the person sending the message is a pepega for example if the person forgets to add a starting voice or makes a spelling mistake on the first voice id it would read it as brian until it sees a valid voice id
| 1
|
19,973
| 26,452,934,417
|
IssuesEvent
|
2023-01-16 12:34:57
|
benthosdev/benthos
|
https://api.github.com/repos/benthosdev/benthos
|
closed
|
Interpolation errors are not logged by benthos when it fails
|
processors bughancement
|
consider the below config:
```
processors:
- log:
level: DEBUG
message: '${! content() + meta() }'
```
The above will fail; meta().string() should be used instead.
Expected: benthos should log an error with the appropriate message
Actual: Instead of logging an error, the log processor actually logs an empty msg, like below:
```
DEBU @service=benthos label="" path=root.pipeline.processors.0
```
|
1.0
|
Interpolation errors are not logged by benthos when it fails - consider the below config:
```
processors:
- log:
level: DEBUG
message: '${! content() + meta() }'
```
The above will fail; meta().string() should be used instead.
Expected: benthos should log an error with the appropriate message
Actual: Instead of logging an error, the log processor actually logs an empty msg, like below:
```
DEBU @service=benthos label="" path=root.pipeline.processors.0
```
|
process
|
interpolation errors are not logged by benthos when it fails consider the below config processors log level debug message content meta the above will fail meta string should be used instead expected benthos should log an error with the appropriate message actual instead of logging an error the log processor actually logs an empty msg like below debu service benthos label path root pipeline processors
| 1
|
2,242
| 5,088,643,545
|
IssuesEvent
|
2016-12-31 23:53:06
|
sw4j-org/tool-jpa-processor
|
https://api.github.com/repos/sw4j-org/tool-jpa-processor
|
opened
|
Handle @MapKeyClass Annotation
|
annotation processor task
|
Handle the `@MapKeyClass` annotation for a property or field.
See [JSR 338: Java Persistence API, Version 2.1](http://download.oracle.com/otn-pub/jcp/persistence-2_1-fr-eval-spec/JavaPersistence.pdf)
- 11.1.32 MapKeyClass Annotation
|
1.0
|
Handle @MapKeyClass Annotation - Handle the `@MapKeyClass` annotation for a property or field.
See [JSR 338: Java Persistence API, Version 2.1](http://download.oracle.com/otn-pub/jcp/persistence-2_1-fr-eval-spec/JavaPersistence.pdf)
- 11.1.32 MapKeyClass Annotation
|
process
|
handle mapkeyclass annotation handle the mapkeyclass annotation for a property or field see mapkeyclass annotation
| 1
|
11,697
| 14,544,790,823
|
IssuesEvent
|
2020-12-15 18:41:35
|
code4romania/expert-consultation-client
|
https://api.github.com/repos/code4romania/expert-consultation-client
|
closed
|
Review comments of document breakdown unit
|
angular document processing documents enhancement
|
As an admin of the Legal Consultation platform I want to be able to accept or reject comments posted by users on document breakdown units.
In the Admin panel we need to build a panel where the admin can view latest comments on documents and can have a simple accept/reject button.

|
1.0
|
Review comments of document breakdown unit - As an admin of the Legal Consultation platform I want to be able to accept or reject comments posted by users on document breakdown units.
In the Admin panel we need to build a panel where the admin can view latest comments on documents and can have a simple accept/reject button.

|
process
|
review comments of document breakdown unit as an admin of the legal consultation platform i want to be able to accept or reject comments posted by users on document breakdown units in the admin panel we need to build a panel where the admin can view latest comments on documents and can have a simple accept reject button
| 1
|
180,530
| 6,650,490,683
|
IssuesEvent
|
2017-09-28 16:30:22
|
Rsl1122/Plan-PlayerAnalytics
|
https://api.github.com/repos/Rsl1122/Plan-PlayerAnalytics
|
closed
|
[DEV2] WebServer not restarted after connection failure (after successful connection)
|
Enhancement Priority: MEDIUM status: Done
|
Plan Version: DEV2
WebServer still in API mode and analysis is giving link to BungeeCord webserver after:
- Bungee start
- Bukkit start
- Successful connection
- Bungee shutdown
- Bukkit PostHtml (Fail)
- Attempt connection (Fail)
- Post html to local pagecache
-> WebServer can not serve analysis request
-> Address not updated
|
1.0
|
[DEV2] WebServer not restarted after connection failure (after successful connection) - Plan Version: DEV2
WebServer still in API mode and analysis is giving link to BungeeCord webserver after:
- Bungee start
- Bukkit start
- Successful connection
- Bungee shutdown
- Bukkit PostHtml (Fail)
- Attempt connection (Fail)
- Post html to local pagecache
-> WebServer can not serve analysis request
-> Address not updated
|
non_process
|
webserver not restarted after connection failure after successful connection plan version webserver still in api mode and analysis is giving link to bungeecord webserver after bungee start bukkit start successful connection bungee shutdown bukkit posthtml fail attempt connection fail post html to local pagecache webserver can not serve analysis request address not updated
| 0
|
633,749
| 20,264,464,139
|
IssuesEvent
|
2022-02-15 10:41:48
|
unep-grid/map-x-mgl
|
https://api.github.com/repos/unep-grid/map-x-mgl
|
opened
|
Callback doesn't close if the user refuses to login while opening a private project through URL
|
bug priority 2
|
#### Current behaviour:
When a user loads a private project using a direct link, the user is asked to login.
In case the user refuses to login (i.e. clicks "Close"), the user remains on project Home.
However, if the user decides to login manually in a second time, the private project is loaded.
Changing project manually before logging in avoid the issue to occur.
#### Solution:
- The callback should be closed when the user clicks "Close" on the login panel.
|
1.0
|
Callback doesn't close if the user refuses to login while opening a private project through URL - #### Current behaviour:
When a user loads a private project using a direct link, the user is asked to login.
In case the user refuses to login (i.e. clicks "Close"), the user remains on project Home.
However, if the user decides to login manually in a second time, the private project is loaded.
Changing project manually before logging in avoid the issue to occur.
#### Solution:
- The callback should be closed when the user clicks "Close" on the login panel.
|
non_process
|
callback doesn t close if the user refuses to login while opening a private project through url current behaviour when a user loads a private project using a direct link the user is asked to login in case the user refuses to login i e clicks close the user remains on project home however if the user decides to login manually in a second time the private project is loaded changing project manually before logging in avoid the issue to occur solution the callback should be closed when the user clicks close on the login panel
| 0
|
4,605
| 2,733,735,968
|
IssuesEvent
|
2015-04-17 15:35:36
|
softlayer/sl-ember-components
|
https://api.github.com/repos/softlayer/sl-ember-components
|
closed
|
Add tests for sl-calendar-year component
|
0 - Backlog sl-calendar tests
|
<!---
@huboard:{"order":8.526512829121202e-14,"milestone_order":205,"custom_state":""}
-->
|
1.0
|
Add tests for sl-calendar-year component -
<!---
@huboard:{"order":8.526512829121202e-14,"milestone_order":205,"custom_state":""}
-->
|
non_process
|
add tests for sl calendar year component huboard order milestone order custom state
| 0
|
220,475
| 24,565,101,120
|
IssuesEvent
|
2022-10-13 01:43:06
|
faizulho/gatsby-starter-docz-netlifycms-1
|
https://api.github.com/repos/faizulho/gatsby-starter-docz-netlifycms-1
|
closed
|
CVE-2021-33587 (High) detected in css-what-3.4.2.tgz - autoclosed
|
security vulnerability
|
## CVE-2021-33587 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>css-what-3.4.2.tgz</b></p></summary>
<p>a CSS selector parser</p>
<p>Library home page: <a href="https://registry.npmjs.org/css-what/-/css-what-3.4.2.tgz">https://registry.npmjs.org/css-what/-/css-what-3.4.2.tgz</a></p>
<p>Path to dependency file: /package.json</p>
<p>Path to vulnerable library: /node_modules/svgo/node_modules/css-what/package.json</p>
<p>
Dependency Hierarchy:
- gatsby-2.30.3.tgz (Root Library)
- optimize-css-assets-webpack-plugin-5.0.4.tgz
- cssnano-4.1.10.tgz
- cssnano-preset-default-4.0.7.tgz
- postcss-svgo-4.0.2.tgz
- svgo-1.3.2.tgz
- css-select-2.1.0.tgz
- :x: **css-what-3.4.2.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/faizulho/gatsby-starter-docz-netlifycms-1/commit/70a9e87b1e68c0bef6964284e0899376209b0f3d">70a9e87b1e68c0bef6964284e0899376209b0f3d</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
The css-what package 4.0.0 through 5.0.0 for Node.js does not ensure that attribute parsing has Linear Time Complexity relative to the size of the input.
<p>Publish Date: 2021-05-28
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-33587>CVE-2021-33587</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-33587">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-33587</a></p>
<p>Release Date: 2021-05-28</p>
<p>Fix Resolution (css-what): 5.0.1</p>
<p>Direct dependency fix Resolution (gatsby): 3.5.0-telemetry-test.252</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
True
|
CVE-2021-33587 (High) detected in css-what-3.4.2.tgz - autoclosed - ## CVE-2021-33587 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>css-what-3.4.2.tgz</b></p></summary>
<p>a CSS selector parser</p>
<p>Library home page: <a href="https://registry.npmjs.org/css-what/-/css-what-3.4.2.tgz">https://registry.npmjs.org/css-what/-/css-what-3.4.2.tgz</a></p>
<p>Path to dependency file: /package.json</p>
<p>Path to vulnerable library: /node_modules/svgo/node_modules/css-what/package.json</p>
<p>
Dependency Hierarchy:
- gatsby-2.30.3.tgz (Root Library)
- optimize-css-assets-webpack-plugin-5.0.4.tgz
- cssnano-4.1.10.tgz
- cssnano-preset-default-4.0.7.tgz
- postcss-svgo-4.0.2.tgz
- svgo-1.3.2.tgz
- css-select-2.1.0.tgz
- :x: **css-what-3.4.2.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/faizulho/gatsby-starter-docz-netlifycms-1/commit/70a9e87b1e68c0bef6964284e0899376209b0f3d">70a9e87b1e68c0bef6964284e0899376209b0f3d</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
The css-what package 4.0.0 through 5.0.0 for Node.js does not ensure that attribute parsing has Linear Time Complexity relative to the size of the input.
<p>Publish Date: 2021-05-28
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-33587>CVE-2021-33587</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-33587">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-33587</a></p>
<p>Release Date: 2021-05-28</p>
<p>Fix Resolution (css-what): 5.0.1</p>
<p>Direct dependency fix Resolution (gatsby): 3.5.0-telemetry-test.252</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
non_process
|
cve high detected in css what tgz autoclosed cve high severity vulnerability vulnerable library css what tgz a css selector parser library home page a href path to dependency file package json path to vulnerable library node modules svgo node modules css what package json dependency hierarchy gatsby tgz root library optimize css assets webpack plugin tgz cssnano tgz cssnano preset default tgz postcss svgo tgz svgo tgz css select tgz x css what tgz vulnerable library found in head commit a href found in base branch master vulnerability details the css what package through for node js does not ensure that attribute parsing has linear time complexity relative to the size of the input publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution css what direct dependency fix resolution gatsby telemetry test step up your open source security game with mend
| 0
|
374,848
| 26,135,658,093
|
IssuesEvent
|
2022-12-29 11:45:51
|
iluwatar/uml-reverse-mapper
|
https://api.github.com/repos/iluwatar/uml-reverse-mapper
|
closed
|
Few queries
|
priority:normal epic:documentation
|
Thanks for developing the application UML Reverse Mapper. I am looking for a similar application.
I have the following queries. I am using the Issues section, as I did not find any other place to raise them. Appreciate a clarification.
1) I referred to the procedure given in https://java-design-patterns.com/blog/auto-generate-class-diagrams-with-uml-reverse-mapper/ and have generated .uml files. How to open the UML file? Eclipse shows it as an XML file.
2) The app generates Class diagram as documented. What changes are required to auto-generate Sequence diagrams from Java code?
|
1.0
|
Few queries - Thanks for developing the application UML Reverse Mapper. I am looking for a similar application.
I have the following queries. I am using the Issues section, as I did not find any other place to raise them. Appreciate a clarification.
1) I referred to the procedure given in https://java-design-patterns.com/blog/auto-generate-class-diagrams-with-uml-reverse-mapper/ and have generated .uml files. How to open the UML file? Eclipse shows it as an XML file.
2) The app generates Class diagram as documented. What changes are required to auto-generate Sequence diagrams from Java code?
|
non_process
|
few queries thanks for developing the application uml reverse mapper i am looking for a similar application i have the following queries i am using the issues section as i did not find any other place to raise them appreciate a clarification i referred to the procedure given in and have generated uml files how to open the uml file eclipse shows it as an xml file the app generates class diagram as documented what changes are required to auto generate sequence diagrams from java code
| 0
|
7,349
| 10,482,994,484
|
IssuesEvent
|
2019-09-24 13:08:05
|
fed-gren/FE-Interview-Study
|
https://api.github.com/repos/fed-gren/FE-Interview-Study
|
opened
|
Browser rendering process
|
process
|
### Keywords
- HTML, DOM, CSS, JS, Blocking
### Comments
- Please write your comments here.
### Links
- [Inside look at a modern web browser, part 3 - inner workings of the renderer process](https://d2.naver.com/helloworld/5237120)
|
1.0
|
Browser rendering process - ### Keywords
- HTML, DOM, CSS, JS, Blocking
### Comments
- Please write your comments here.
### Links
- [Inside look at a modern web browser, part 3 - inner workings of the renderer process](https://d2.naver.com/helloworld/5237120)
|
process
|
browser rendering process keywords html dom css js blocking comments please write your comments here links
| 1
|
1,953
| 4,774,454,325
|
IssuesEvent
|
2016-10-27 06:46:41
|
CERNDocumentServer/cds
|
https://api.github.com/repos/CERNDocumentServer/cds
|
closed
|
webhooks: high-level tests for receivers
|
avc_processing review
|
There should be high-level tests making requests to the webhook receivers, apart from the individual task tests.
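The repository itself is Python, so purely as a language-agnostic illustration (kept in TypeScript like the other sketches here), a high-level receiver test exercises the HTTP endpoint rather than the task function; the URL and payload below are made up, not the project's real routes.

```ts
// Hypothetical endpoint and payload; the real receiver routes live in the project.
import assert from 'node:assert'

async function testReceiverAcceptsEvent(baseUrl: string): Promise<void> {
  const res = await fetch(`${baseUrl}/hooks/receivers/avc/events/`, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ deposit_id: 'example' }),
  })
  // assert on the HTTP contract, not on internal task calls
  assert.ok([200, 202].includes(res.status), `unexpected status ${res.status}`)
}
```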
|
1.0
|
webhooks: high-level tests for receivers - There should be high-level tests making requests to the webhook receivers, apart from the individual task tests.
|
process
|
webhooks high level tests for receivers there should be high level tests making requests to the webhook receivers apart from the individual task tests
| 1
|
3,974
| 6,198,923,841
|
IssuesEvent
|
2017-07-05 20:20:20
|
howdyai/botkit
|
https://api.github.com/repos/howdyai/botkit
|
closed
|
Cannot import botkit into ionic 2 app - any workarounds?
|
help wanted web services
|
Application is the starter ionic 2 project: `ionic start MyIonic2Project tutorial --v2`
I have gotten modules like lodash and slack to work with this kind of import:
```
import * as slack from 'slack'; // this works https://www.npmjs.com/package/slack
import * as _ from 'lodash'; // this works
import * as Botkit from 'botkit'; // this does not work
```
Also got bcryptjs working with a more difficult workaround involving downloading its min.js including it in the www/index.html `<script src="bcrypt.min.js"></script>` and using a `declare var dcodeIO: any;`from https://x-team.com/blog/include-javascript-libraries-in-an-ionic-2-typescript-project/ however I do not know how I would do that method with botkit.
More complete error and trace when trying `import * as Botkit from 'botkit'`:
```
Runtime Error
Cannot find module "node-uuid"
```
```
Error: Cannot find module "node-uuid"
at webpackMissingModule (http://localhost:8100/build/main.js:227795:74)
at Object.<anonymous> (http://localhost:8100/build/main.js:227795:160)
at Object.<anonymous> (http://localhost:8100/build/main.js:228215:4)
at Object.<anonymous> (http://localhost:8100/build/main.js:228217:30)
at __webpack_require__ (http://localhost:8100/build/main.js:20:30)
at Object.<anonymous> (http://localhost:8100/build/main.js:203015:13)
at __webpack_require__ (http://localhost:8100/build/main.js:20:30)
at Object.<anonymous> (http://localhost:8100/build/main.js:147152:22)
at Object.<anonymous> (http://localhost:8100/build/main.js:148379:30)
at __webpack_require__ (http://localhost:8100/build/main.js:20:30)
```
|
1.0
|
Cannot import botkit into ionic 2 app - any workarounds? - Application is the starter ionic 2 project: `ionic start MyIonic2Project tutorial --v2`
I have gotten modules like lodash and slack to work with this kind of import:
```
import * as slack from 'slack'; // this works https://www.npmjs.com/package/slack
import * as _ from 'lodash'; // this works
import * as Botkit from 'botkit'; // this does not work
```
Also got bcryptjs working with a more difficult workaround involving downloading its min.js including it in the www/index.html `<script src="bcrypt.min.js"></script>` and using a `declare var dcodeIO: any;`from https://x-team.com/blog/include-javascript-libraries-in-an-ionic-2-typescript-project/ however I do not know how I would do that method with botkit.
More complete error and trace when trying `import * as Botkit from 'botkit'`:
```
Runtime Error
Cannot find module "node-uuid"
```
```
Error: Cannot find module "node-uuid"
at webpackMissingModule (http://localhost:8100/build/main.js:227795:74)
at Object.<anonymous> (http://localhost:8100/build/main.js:227795:160)
at Object.<anonymous> (http://localhost:8100/build/main.js:228215:4)
at Object.<anonymous> (http://localhost:8100/build/main.js:228217:30)
at __webpack_require__ (http://localhost:8100/build/main.js:20:30)
at Object.<anonymous> (http://localhost:8100/build/main.js:203015:13)
at __webpack_require__ (http://localhost:8100/build/main.js:20:30)
at Object.<anonymous> (http://localhost:8100/build/main.js:147152:22)
at Object.<anonymous> (http://localhost:8100/build/main.js:148379:30)
at __webpack_require__ (http://localhost:8100/build/main.js:20:30)
```
|
non_process
|
cannot import botkit into ionic app any workarounds application is the starter ionic project ionic start tutorial i have gotten modules like lodash and slack to work with this kind of import import as slack from slack this works import as from lodash this works import as botkit from botkit this does not work also got bcryptjs working with a more difficult workaround involving downloading its min js including it in the www index html and using a declare var dcodeio any from however i do not know how i would do that method with botkit more complete error and trace when trying import as botkit from botkit runtime error cannot find module node uuid error cannot find module node uuid at webpackmissingmodule at object at object at object at webpack require at object at webpack require at object at object at webpack require
| 0
|
55,744
| 23,591,712,689
|
IssuesEvent
|
2022-08-23 15:41:14
|
hashicorp/terraform-provider-azurerm
|
https://api.github.com/repos/hashicorp/terraform-provider-azurerm
|
closed
|
Support for Azure log search alert V2
|
new-resource service/monitor
|
Hi,
I have a question related to azure log alert V2.
https://docs.microsoft.com/en-us/azure/azure-monitor/alerts/alerts-common-schema-definitions#monitoringservice--log-alerts-v2
https://cloudadministrator.net/2021/11/04/azure-monitor-log-alert-v2/
**Any plan to release a module for azure alert v2?** the module we found in the following doc
https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/resources/monitor_scheduled_query_rules_alert is for v1.
Thanks for any help in advance.
|
1.0
|
Support for Azure log search alert V2 - Hi,
I have a question related to azure log alert V2.
https://docs.microsoft.com/en-us/azure/azure-monitor/alerts/alerts-common-schema-definitions#monitoringservice--log-alerts-v2
https://cloudadministrator.net/2021/11/04/azure-monitor-log-alert-v2/
**Any plan to release a module for azure alert v2?** the module we found in the following doc
https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/resources/monitor_scheduled_query_rules_alert is for v1.
Thanks for any help in advance.
|
non_process
|
support for azure log search alert hi i have a question relate to azure log alert any plan to release a module for azure alert the module we found in the following doc is for thanks for any help in advance
| 0
|
234,860
| 18,020,984,073
|
IssuesEvent
|
2021-09-16 19:23:37
|
elsa-workflows/elsa-core
|
https://api.github.com/repos/elsa-workflows/elsa-core
|
opened
|
Topics for blog posts and guides
|
documentation
|
- [ ] Conductor (integrating application with Elsa as external workflow server)
- [ ] Cascading dropdown input control (extending Elsa Designer with advanced custom controls)
|
1.0
|
Topics for blog posts and guides - - [ ] Conductor (integrating application with Elsa as external workflow server)
- [ ] Cascading dropdown input control (extending Elsa Designer with advanced custom controls)
|
non_process
|
topics for blog posts and guides conductor integrating application with elsa as external workflow server cascading dropdown input control extending elsa designer with advanced custom controls
| 0
|
15,531
| 19,703,296,077
|
IssuesEvent
|
2022-01-12 18:54:16
|
googleapis/python-dns
|
https://api.github.com/repos/googleapis/python-dns
|
opened
|
Your .repo-metadata.json file has a problem 🤒
|
type: process repo-metadata: lint
|
You have a problem with your .repo-metadata.json file:
Result of scan 📈:
* api_shortname 'dns' invalid in .repo-metadata.json
☝️ Once you correct these problems, you can close this issue.
Reach out to **go/github-automation** if you have any questions.
|
1.0
|
Your .repo-metadata.json file has a problem 🤒 - You have a problem with your .repo-metadata.json file:
Result of scan 📈:
* api_shortname 'dns' invalid in .repo-metadata.json
☝️ Once you correct these problems, you can close this issue.
Reach out to **go/github-automation** if you have any questions.
|
process
|
your repo metadata json file has a problem 🤒 you have a problem with your repo metadata json file result of scan 📈 api shortname dns invalid in repo metadata json ☝️ once you correct these problems you can close this issue reach out to go github automation if you have any questions
| 1
|
11,425
| 14,248,130,427
|
IssuesEvent
|
2020-11-19 12:29:43
|
tikv/tikv
|
https://api.github.com/repos/tikv/tikv
|
closed
|
Handle time lazily
|
sig/coprocessor type/enhancement
|
When executing query like `select * from table where create_time > "2018-01-01 00:00:00"`, `create_time` doesn't have to be decoded to compare with `2018-01-01 00:00:00`. The encoded format can be compared directly.
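The reason this works is that an order-preserving encoding lets the filter compare raw encoded values. TiDB's actual datetime encoding is not reproduced here; the TypeScript sketch below only illustrates the idea with a toy packed integer.

```ts
// Toy order-preserving encoding (YYYYMMDDHHMMSS packed into one integer):
// later datetimes always encode to larger numbers, so a predicate like
// create_time > '2018-01-01 00:00:00' can compare encodings without decoding.
// This is NOT TiDB's real on-disk format.
function encodeDateTime(literal: string): bigint {
  const [date, time] = literal.split(' ')
  const [y, mo, d] = date.split('-').map(Number)
  const [h, mi, s] = time.split(':').map(Number)
  return BigInt(y) * 10_000_000_000n + BigInt(mo) * 100_000_000n + BigInt(d) * 1_000_000n
       + BigInt(h) * 10_000n + BigInt(mi) * 100n + BigInt(s)
}

const bound = encodeDateTime('2018-01-01 00:00:00')  // encode the literal once
const rowPasses = (encodedCreateTime: bigint) => encodedCreateTime > bound  // no per-row decode
```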
|
1.0
|
Handle time lazily - When executing query like `select * from table where create_time > "2018-01-01 00:00:00"`, `create_time` doesn't have to be decoded to compare with `2018-01-01 00:00:00`. The encoded format can be compared directly.
|
process
|
handle time lazily when executing query like select from table where create time create time doesn t have to be decoded to compare with the encoded format can be compared directly
| 1
|
2,168
| 5,019,179,430
|
IssuesEvent
|
2016-12-14 10:51:28
|
Alfresco/alfresco-ng2-components
|
https://api.github.com/repos/Alfresco/alfresco-ng2-components
|
opened
|
Cannot access process list demo
|
browser: all bug comp: activiti-processList
|
After running npm start within the process list demo nothing is displayed with an error in the console

|
1.0
|
Cannot access process list demo - After running npm start within the process list demo nothing is displayed with an error in the console

|
process
|
cannot access process list demo after running npm start within the process list demo nothing is displayed with an error in the console
| 1
|
326,225
| 27,979,901,164
|
IssuesEvent
|
2023-03-26 02:18:50
|
LachezaraLaz/341-soen341project2023
|
https://api.github.com/repos/LachezaraLaz/341-soen341project2023
|
closed
|
UAT-84
|
User Story - Employer - Selecting a candidate Acceptance Tests
|
Input:
All candidates displayed for a job posting.
Candidates profile must be accessible.
Tests:
All candidates who applied for a job posting are displayed.
Employer can view the profile of any candidate.
Employer can select a candidate for a job posting. This selection will send a notification to the candidate.
Outputs:
All candidates who applied for a job posting are displayed.
Employer can view the profile of any candidate.
Employer can select a candidate for a job posting. This selection will send a notification to the candidate.
|
1.0
|
UAT-84 - Input:
All candidates displayed for a job posting.
Candidates profile must be accessible.
Tests:
All candidates who applied for a job posting are displayed.
Employer can view the profile of any candidate.
Employer can select a candidate for a job posting. This selection will send a notification to the candidate.
Outputs:
All candidates who applied for a job posting are displayed.
Employer can view the profile of any candidate.
Employer can select a candidate for a job posting. This selection will send a notification to the candidate.
|
non_process
|
uat input all candidates displayed for a job posting candidates profile must be accessible tests all candidates who applied for a job posting are displayed employer can view the profile of any candidate employer can select a candidate for a job posting this selection will send a notification to the candidate outputs all candidates who applied for a job posting are displayed employer can view the profile of any candidate employer can select a candidate for a job posting this selection will send a notification to the candidate
| 0
|
7,780
| 10,920,191,560
|
IssuesEvent
|
2019-11-21 20:41:47
|
Graylog2/graylog2-server
|
https://api.github.com/repos/Graylog2/graylog2-server
|
closed
|
DNS lookup adapter fills cache with failed lookup results
|
improvement processing triaged
|
If you misconfigure a dns lookup adapter (using a non-responding DNS server IP),
it seems to fill the cache with `null` results.
After fixing your error, the lookup will still return no results for a previously
queried and cached domain.
After purging the cache, everything is back to normal.
## Expected Behavior
Lookup failures should be treated differently than `NXDOMAIN` results.
## Current Behavior
I'm just guessing here, but it seems any error is treated as `NXDOMAIN`
## Possible Solution
Distinguish between those errors and don't insert lookup errors into the cache.
## Steps to Reproduce (for bugs)
1. Configure DNS data adapter to use a wrong DNS server (e.g. `1.2.3.4`)
2. Purge cache
3. Wait until input processing looked up some messages (with empty results)
4. Configure a valid DNS server
5. Messages for previously looked up domains are still created with empty results
* Graylog Version: 3.1-rc2
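Graylog's adapter is Java, so only as an illustration of the proposed fix (cache definitive NXDOMAIN/NODATA answers, never cache transport-level failures), here is a TypeScript sketch using Node's resolver; the error-code names are Node's, not Graylog's.

```ts
// Sketch of the caching rule only; Graylog's own lookup adapter differs in detail.
import { Resolver } from 'node:dns/promises'

const resolver = new Resolver()
const cache = new Map<string, string[]>() // domain -> addresses ([] = known-empty)

async function lookup(domain: string): Promise<string[]> {
  const hit = cache.get(domain)
  if (hit) return hit
  try {
    const addrs = await resolver.resolve4(domain)
    cache.set(domain, addrs) // positive answer: cache it
    return addrs
  } catch (err: any) {
    if (err.code === 'ENOTFOUND' || err.code === 'ENODATA') {
      cache.set(domain, []) // definitive negative answer: also safe to cache
      return []
    }
    throw err // timeouts/unreachable servers are failures and must not poison the cache
  }
}
```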
|
1.0
|
DNS lookup adapter fills cache with failed lookup results - If you misconfigure a dns lookup adapter (using a non-responding DNS server IP),
it seems to fill the cache with `null` results.
After fixing your error, the lookup will still return no results for a previously
queried and cached domain.
After purging the cache, everything is back to normal.
## Expected Behavior
Lookup failures should be treated differently than `NXDOMAIN` results.
## Current Behavior
I'm just guessing here, but it seems any error is treated as `NXDOMAIN`
## Possible Solution
Distinguish between those errors and don't insert lookup errors into the cache.
## Steps to Reproduce (for bugs)
1. Configure DNS data adapter to use a wrong DNS server (e.g. `1.2.3.4`)
2. Purge cache
3. Wait until input processing looked up some messages (with empty results)
4. Configure a valid DNS server
5. Messages for previously looked up domains are still created with empty results
* Graylog Version: 3.1-rc2
|
process
|
dns lookup adapter fills cache with failed lookup results if you misconfigure a dns lookup adapter using a non responding dns server ip it seems to fill the cache with null results after fixing your error the lookup will still return no results for a previously queried and cached domain after purging the cache everything is back to normal expected behavior lookup failures should be treated differently than nxdomain results current behavior i m just guessing here but it seems any error is treated as nxdomain possible solution distinguish between those errors and don t insert lookup errors into the cache steps to reproduce for bugs configure dns data adapter to use a wrong dns server e g purge cache wait until input processing looked up some messages with empty results configure a valid dns server messages for previously looked up domains are still created with empty results graylog version
| 1
|
12,596
| 14,994,548,670
|
IssuesEvent
|
2021-01-29 13:04:54
|
aliasadidev/vsocde-npm-gui
|
https://api.github.com/repos/aliasadidev/vsocde-npm-gui
|
closed
|
support several nuget servers ?
|
In Process
|
Hi,
The current status of the extension seems to allow the customization of the nuget source thanks to `nugetpackagemanagergui.nuget.searchPackage.url`.
In my ends, I am using 2 sources that are fed through `nuget.config`.
I was wondering if you plan on supporting several sources from your config or from `nuget.config` file (would be even better).
I could help on that by the way, I'm issuing in advance in case this is already plan/ongoing.
And thanks for this work so far !
Regards,
|
1.0
|
support several nuget servers ? - Hi,
The current status of the extension seems to allow the customization of the nuget source thanks to `nugetpackagemanagergui.nuget.searchPackage.url`.
In my ends, I am using 2 sources that are fed through `nuget.config`.
I was wondering if you plan on supporting several sources from your config or from `nuget.config` file (would be even better).
I could help on that by the way, I'm issuing in advance in case this is already plan/ongoing.
And thanks for this work so far !
Regards,
|
process
|
support several nuget servers hi the current status of the extension seems to allow the customization of the nuget source thanks to nugetpackagemanagergui nuget searchpackage url in my ends i am using sources that are fed through nuget config i was wondering if you plan on supporting several sources from your config or from nuget config file would be even better i could help on that by the way i m issuing in advance in case this is already plan ongoing and thanks for this work so far regards
| 1
|
4,587
| 7,430,984,744
|
IssuesEvent
|
2018-03-25 09:48:31
|
pwittchen/ReactiveBus
|
https://api.github.com/repos/pwittchen/ReactiveBus
|
closed
|
release 0.0.4
|
release process
|
**Initial release notes**:
- Replacing factory methods with builder pattern in `Event` class - issue #7, PR #8
**things to do**:
- [x] bump library version
- [x] publish artifact to sonatype
- [x] close and release artifact on sonatype
- [x] update changelog after maven sync
- [x] update download section after maven sync
- [x] publish new release on GitHub
|
1.0
|
release 0.0.4 - **Initial release notes**:
- Replacing factory methods with builder pattern in `Event` class - issue #7, PR #8
**things to do**:
- [x] bump library version
- [x] publish artifact to sonatype
- [x] close and release artifact on sonatype
- [x] update changelog after maven sync
- [x] update download section after maven sync
- [x] publish new release on GitHub
|
process
|
release initial release notes replacing factory methods with builder pattern in event class issue pr things to do bump library version publish artifact to sonatype close and release artifact on sonatype update changelog after maven sync update download section after maven sync publish new release on github
| 1
|
5,656
| 8,527,328,649
|
IssuesEvent
|
2018-11-02 19:07:20
|
bazelbuild/bazel
|
https://api.github.com/repos/bazelbuild/bazel
|
opened
|
Revamp Python 2/3 mode selection
|
P1 team-Rules-Python type: process
|
This issue tracks discussion around, and implementation of, the proposal to change the Python mode configuration state from a tri-value to a boolean.
See also #6444 to track specific features/bugs relating to the Python mode.
[Links to come momentarily]
|
1.0
|
Revamp Python 2/3 mode selection - This issue tracks discussion around, and implementation of, the proposal to change the Python mode configuration state from a tri-value to a boolean.
See also #6444 to track specific features/bugs relating to the Python mode.
[Links to come momentarily]
|
process
|
revamp python mode selection this issue tracks discussion around and implementation of the proposal to change the python mode configuration state from a tri value to a boolean see also to track specific features bugs relating to the python mode
| 1
|
127,187
| 12,308,326,976
|
IssuesEvent
|
2020-05-12 06:59:29
|
liferay/clay
|
https://api.github.com/repos/liferay/clay
|
opened
|
Radio Group is missing it's Markup/CSS page
|
comp: clayui.com comp: documentation
|
Radio Group doesn't have it's Markup/CSS page.
@pat270 Do we have markup for the Radio Group component?
|
1.0
|
Radio Group is missing it's Markup/CSS page - Radio Group doesn't have it's Markup/CSS page.
@pat270 Do we have markup for the Radio Group component?
|
non_process
|
radio group is missing it s markup css page radio group doesn t have it s markup css page do we have markup for the radio group component
| 0
|
22,603
| 31,824,437,536
|
IssuesEvent
|
2023-09-14 06:25:25
|
zephyrproject-rtos/zephyr
|
https://api.github.com/repos/zephyrproject-rtos/zephyr
|
closed
|
RFC: Handling of unmaintained modules when having issues in release builds other than deprecate and removal
|
RFC Process
|
## Introduction
Removing earlier integrated modules when they are unmaintained and cause issue on release builds
is understandable from release managers point of view, but also would throw away a lot of work done.
Finding an "soft" solution as marking such modules as "untested since Zephyr x.y.z", "has known issues since " or similar
(a degraded state, deprecated I still consider full functional but bound to be removed) would open the opportunity for someone to fix it when needed instead of starting over from "zero".
This process will not work for core modules rather than for add-ons such as e.g. civetweb, lvgl etc.
### Problem description
Features and modules are added to Zephyr OS and possibly publicly advertised. At some point the initial maintainer
does no longer have the time to maintain.
The release manager have to deal with issues arising from such modules on their own.
By finally removing such modules, a lot of work will be lost and Zephyr OS feature richness will be reduced.
Finding a maintainer who can spend the time to fix things for each Zephyr OS release and be available for
feature requests etc. seems not easy.
Fixing such issues for particular Zephyr OS releases might be more likely than for every release, since Zephyr OS end user will
not follow every Zephyr OS release but stick to a particular version as long as there is no benefit of upgrading.
If an organization has the need for such a module, it will spend the time to fix it for a particular Zephyr OS release.
### Proposed change
Introduce a process how such "problematic" modules can be keep in the framework, but clearly visible to the users
that such marked "problematic" modules will not work out of the box.
## Detailed RFC
No detailed RFC, rather a trigger to a TSC discussion
### Proposed change (Detailed)
No Proposed change (Detailed), rather a trigger to a TSC discussion
### Dependencies
None.
### Concerns and Unresolved Questions
- Loss of reputation Zephyr OS as being a stable, feature rich OS (more than a scheduler)
- What are the criteria s to finally remove a module (in order not to end up with everything requiring to be fixed)
- Voting in TSC?
## Alternatives
--
## Background
Our company will require a web server sooner or later.
But we will not get the time to maintain it for every Zephyr OS release.
|
1.0
|
RFC: Handling of unmaintained modules when having issues in release builds other than deprecate and removal - ## Introduction
Removing earlier integrated modules when they are unmaintained and cause issue on release builds
is understandable from release managers point of view, but also would throw away a lot of work done.
Finding an "soft" solution as marking such modules as "untested since Zephyr x.y.z", "has known issues since " or similar
(a degraded state, deprecated I still consider full functional but bound to be removed) would open the opportunity for someone to fix it when needed instead of starting over from "zero".
This process will not work for core modules rather than for add-ons such as e.g. civetweb, lvgl etc.
### Problem description
Features and modules are added to Zephyr OS and possibly publicly advertised. At some point the initial maintainer
does no longer have the time to maintain.
The release manager have to deal with issues arising from such modules on their own.
By finally removing such modules, a lot of work will be lost and Zephyr OS feature richness will be reduced.
Finding a maintainer who can spend the time to fix things for each Zephyr OS release and be available for
feature requests etc. seems not easy.
Fixing such issues for particular Zephyr OS releases might be more likely than for every release, since Zephyr OS end user will
not follow every Zephyr OS release but stick to a particular version as long as there is no benefit of upgrading.
If an organization has the need for such a module, it will spend the time to fix it for a particular Zephyr OS release.
### Proposed change
Introduce a process how such "problematic" modules can be keep in the framework, but clearly visible to the users
that such marked "problematic" modules will not work out of the box.
## Detailed RFC
No detailed RFC, rather a trigger to a TSC discussion
### Proposed change (Detailed)
No Proposed change (Detailed), rather a trigger to a TSC discussion
### Dependencies
None.
### Concerns and Unresolved Questions
- Loss of reputation Zephyr OS as being a stable, feature rich OS (more than a scheduler)
- What are the criteria s to finally remove a module (in order not to end up with everything requiring to be fixed)
- Voting in TSC?
## Alternatives
--
## Background
Our company will require a web server sooner or later.
But we will not get the time to maintain it for every Zephyr OS release.
|
process
|
rfc handling of unmaintained modules when having issues in release builds other than deprecate and removal introduction removing earlier integrated modules when they are unmaintained and cause issue on release builds is understandable from release managers point of view but also would throw away a lot of work done finding an soft solution as marking such modules as untested since zephyr x y z has known issues since or similar a degraded state deprecated i still consider full functional but bound to be removed would open the opportunity for someone to fix it when needed instead of starting over from zero this process will not work for core modules rather than for add ons such as e g civetweb lvgl etc problem description features and modules are added to zephyr os and possibly publicly advertised at some point the initial maintainer does no longer have the time to maintain the release manager have to deal with issues arising from such modules on their own by finally removing such modules a lot of work will be lost and zephyr os feature richness will be reduced finding a maintainer who can spend the time to fix things for each zephyr os release and be available for feature requests etc seems not easy fixing such issues for particular zephyr os releases might be more likely than for every release since zephyr os end user will not follow every zephyr os release but stick to a particular version as long as there is no benefit of upgrading if an organization has the need for such a module it will spend the time to fix it for a particular zephyr os release proposed change introduce a process how such problematic modules can be keep in the framework but clearly visible to the users that such marked problematic modules will not work out of the box detailed rfc no detailed rfc rather a trigger to a tsc discussion proposed change detailed no proposed change detailed rather a trigger to a tsc discussion dependencies none concerns and unresolved questions loss of reputation zephyr os as being a stable feature rich os more than a scheduler what are the criteria s to finally remove a module in order not to end up with everything requiring to be fixed voting in tsc alternatives background our company will require a web server sooner or later but we will not get the time to maintain it for every zephyr os release
| 1
|
68,109
| 28,132,483,109
|
IssuesEvent
|
2023-04-01 02:18:03
|
hashicorp/terraform-provider-azurerm
|
https://api.github.com/repos/hashicorp/terraform-provider-azurerm
|
closed
|
Parse for hybridConnections is case sensitive, does not match reply from Az
|
bug service/web-app
|
### Is there an existing issue for this?
- [X] I have searched the existing issues
### Community Note
<!--- Please keep this note for the community --->
* Please vote on this issue by adding a :thumbsup: [reaction](https://blog.github.com/2016-03-10-add-reactions-to-pull-requests-issues-and-comments/) to the original issue to help the community and maintainers prioritize this request
* Please do not leave "+1" or "me too" comments, they generate extra noise for issue followers and do not help prioritize the request
* If you are interested in working on this issue or have submitted a pull request, please leave a comment
<!--- Thank you for keeping this note for the community --->
### Terraform Version
1.2.6
### AzureRM Provider Version
3.45.0
### Affected Resource(s)/Data Source(s)
azurerm_web_app_hybrid_connection
### Terraform Configuration Files
```hcl
resource "azurerm_web_app_hybrid_connection" "web_app_hybrid_connection" {
for_each = var.HYBRID_CONNECTION_TARGET.hybridconnections
web_app_id = azurerm_windows_web_app.app_service.id
relay_id = azurerm_relay_hybrid_connection.shipmanager-api-proxy-hybrid-connection[each.key].id
hostname = each.value.hybrid_connection_target_server
port = each.value.hybrid_connection_target_server_port
send_key_name = "defaultSender"
}
```
### Debug Output/Panic Output
```shell
Error: parsing "/subscriptions/****/resourceGroups/shipmanager-prod/providers/Microsoft.Relay/namespaces/xxxx-api-proxy-relay-prod/hybridconnections/xxxx-dev": parsing segment "staticHybridConnections": expected the segment "hybridconnections" to be "hybridConnections"
│
│ with module.app-service-with-hybrid-connection.azurerm_web_app_hybrid_connection.web_app_hybrid_connection["xxxx-dev"],
│ on ..\..\stacks\app-service-with-hybrid-connection\hybrid-connection.tf line 53, in resource "azurerm_web_app_hybrid_connection" "web_app_hybrid_connection":
│ 53: resource "azurerm_web_app_hybrid_connection" "web_app_hybrid_connection" {
│
│ parsing "/subscriptions/****/resourceGroups/shipmanager-prod/providers/Microsoft.Relay/namespaces/xxxx-api--proxy-relay-prod/hybridconnections/xxxx-dev": parsing
│ segment "staticHybridConnections": expected the segment "hybridconnections" to be "hybridConnections"
```
### Expected Behaviour
The terraform plan execution executes successfully and azurerm_web_app_hybrid_connection is able to validate the current config against latest state.
### Actual Behaviour
The parsing error occurs as it seems the provider is expecting hybrid**C**onnections in the resource string return by azure but instead azure is returning hybrid**c**onnections with lowercase c
### Steps to Reproduce
No pre-existing resource
-terraform apply
- creating web app with configured relay and hybrid connection is created without issues
-terraform plan
- the error occurs
### Important Factoids
_No response_
### References
_No response_
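The provider itself is Go with generated ID parsers, so the snippet below is only a TypeScript illustration of the underlying point: the static `hybridConnections` segment should be matched case-insensitively (or the ID normalised first), since the API can return it lowercased.

```ts
// Illustration only; not the azurerm provider's actual parser.
function parseHybridConnectionId(id: string): { namespace: string; name: string } {
  const m = id.match(/\/namespaces\/([^/]+)\/hybridconnections\/([^/]+)$/i) // note /i
  if (!m) throw new Error(`unexpected resource ID: ${id}`)
  return { namespace: m[1], name: m[2] }
}

// both casings returned by Azure parse the same way
parseHybridConnectionId('/subscriptions/x/resourceGroups/rg/providers/Microsoft.Relay/namespaces/ns/hybridconnections/hc')
parseHybridConnectionId('/subscriptions/x/resourceGroups/rg/providers/Microsoft.Relay/namespaces/ns/hybridConnections/hc')
```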
|
1.0
|
Parse for hybridConnections is case sensitive, does not match reply from Az - ### Is there an existing issue for this?
- [X] I have searched the existing issues
### Community Note
<!--- Please keep this note for the community --->
* Please vote on this issue by adding a :thumbsup: [reaction](https://blog.github.com/2016-03-10-add-reactions-to-pull-requests-issues-and-comments/) to the original issue to help the community and maintainers prioritize this request
* Please do not leave "+1" or "me too" comments, they generate extra noise for issue followers and do not help prioritize the request
* If you are interested in working on this issue or have submitted a pull request, please leave a comment
<!--- Thank you for keeping this note for the community --->
### Terraform Version
1.2.6
### AzureRM Provider Version
3.45.0
### Affected Resource(s)/Data Source(s)
azurerm_web_app_hybrid_connection
### Terraform Configuration Files
```hcl
resource "azurerm_web_app_hybrid_connection" "web_app_hybrid_connection" {
for_each = var.HYBRID_CONNECTION_TARGET.hybridconnections
web_app_id = azurerm_windows_web_app.app_service.id
relay_id = azurerm_relay_hybrid_connection.shipmanager-api-proxy-hybrid-connection[each.key].id
hostname = each.value.hybrid_connection_target_server
port = each.value.hybrid_connection_target_server_port
send_key_name = "defaultSender"
}
```
### Debug Output/Panic Output
```shell
Error: parsing "/subscriptions/****/resourceGroups/shipmanager-prod/providers/Microsoft.Relay/namespaces/xxxx-api-proxy-relay-prod/hybridconnections/xxxx-dev": parsing segment "staticHybridConnections": expected the segment "hybridconnections" to be "hybridConnections"
│
│ with module.app-service-with-hybrid-connection.azurerm_web_app_hybrid_connection.web_app_hybrid_connection["xxxx-dev"],
│ on ..\..\stacks\app-service-with-hybrid-connection\hybrid-connection.tf line 53, in resource "azurerm_web_app_hybrid_connection" "web_app_hybrid_connection":
│ 53: resource "azurerm_web_app_hybrid_connection" "web_app_hybrid_connection" {
│
│ parsing "/subscriptions/****/resourceGroups/shipmanager-prod/providers/Microsoft.Relay/namespaces/xxxx-api--proxy-relay-prod/hybridconnections/xxxx-dev": parsing
│ segment "staticHybridConnections": expected the segment "hybridconnections" to be "hybridConnections"
```
### Expected Behaviour
The terraform plan execution executes successfully and azurerm_web_app_hybrid_connection is able to validate the current config against latest state.
### Actual Behaviour
The parsing error occurs as it seems the provider is expecting hybrid**C**onnections in the resource string return by azure but instead azure is returning hybrid**c**onnections with lowercase c
### Steps to Reproduce
No pre-existing resource
-terraform apply
- creating web app with configured relay and hybrid connection is created without issues
-terraform plan
- the error occurs
### Important Factoids
_No response_
### References
_No response_
|
non_process
|
parse for hybridconnections is case sensitive does not match reply from az is there an existing issue for this i have searched the existing issues community note please vote on this issue by adding a thumbsup to the original issue to help the community and maintainers prioritize this request please do not leave or me too comments they generate extra noise for issue followers and do not help prioritize the request if you are interested in working on this issue or have submitted a pull request please leave a comment terraform version azurerm provider version affected resource s data source s azurerm web app hybrid connection terraform configuration files hcl resource azurerm web app hybrid connection web app hybrid connection for each var hybrid connection target hybridconnections web app id azurerm windows web app app service id relay id azurerm relay hybrid connection shipmanager api proxy hybrid connection id hostname each value hybrid connection target server port each value hybrid connection target server port send key name defaultsender debug output panic output shell error parsing subscriptions resourcegroups shipmanager prod providers microsoft relay namespaces xxxx api proxy relay prod hybridconnections xxxx dev parsing segment statichybridconnections expected the segment hybridconnections to be hybridconnections │ │ with module app service with hybrid connection azurerm web app hybrid connection web app hybrid connection │ on stacks app service with hybrid connection hybrid connection tf line in resource azurerm web app hybrid connection web app hybrid connection │ resource azurerm web app hybrid connection web app hybrid connection │ │ parsing subscriptions resourcegroups shipmanager prod providers microsoft relay namespaces xxxx api proxy relay prod hybridconnections xxxx dev parsing │ segment statichybridconnections expected the segment hybridconnections to be hybridconnections expected behaviour the terraform plan execution executes successfully and azurerm web app hybrid connection is able to validate the current config against latest state actual behaviour the parsing error occurs as it seems the provider is expecting hybrid c onnections in the resource string return by azure but instead azure is returning hybrid c onnections with lowercase c steps to reproduce no pre existing resource terraform apply creating web app with configured relay and hybrid connection is created without issues terraform plan the error occurs important factoids no response references no response
| 0
|
398,933
| 27,217,382,810
|
IssuesEvent
|
2023-02-20 23:57:14
|
aws/aws-sdk-js-v3
|
https://api.github.com/repos/aws/aws-sdk-js-v3
|
closed
|
Get the Object URL of an uploaded file
|
response-requested documentation p3
|
Is there anywhere in the documentation where it explains (with code samples ideally) how to get the Object URL of an uploaded file?
Scenario: I'm uploading files to a public `S3` bucket via the SDK. Once the file is uploaded, I need to fire a request to another external service and pass the public Object URL of the uploaded file to it:

This is how I'm uploading the file:
```js
async function uploadFile(
fileBuffer,
remoteFileName,
bucketName,
region,
acl
) {
const s3 = new S3Client({ region })
return await s3.send(
new PutObjectCommand({
Bucket: bucketName,
Key: remoteFileName,
Body: fileBuffer,
ACL: acl,
})
)
}
```
The response from this command doesn't seem to contain the Object URL:
```json
{
"$metadata": {
"httpStatusCode": 200,
"extendedRequestId": "...",
"attempts": 1,
"totalRetryDelay": 0
},
"ETag": "\"...\""
}
```
I've even tried getting it by firing a subsequent `GetObjectCommand` for the previously uploaded file. Again, the Object URL doesn't seem to be in the response:
```json
{
"$metadata": {
"httpStatusCode": 200,
"extendedRequestId": "...",
"attempts": 1,
"totalRetryDelay": 0
},
"AcceptRanges": "bytes",
"ContentLength": 186530,
"ContentType": "application/octet-stream",
"ETag": "\"...\"",
"LastModified": "2021-04-23T20:31:08.000Z",
"Metadata": {}
}
```
Is this scenario even supported by the v3 SDK? If so, which _command_ should I use to get the Object URL of an uploaded file?
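For a public object, neither `PutObjectCommand` nor `GetObjectCommand` returns the URL; a common approach (a minimal sketch, assuming a general-purpose bucket and the default virtual-hosted-style addressing) is to assemble it from the bucket, region and key you already pass in:
```js
// Sketch only: builds the public virtual-hosted-style URL for an uploaded object.
// bucketName, region and remoteFileName are the same values passed to PutObjectCommand above.
function publicObjectUrl(bucketName, region, remoteFileName) {
  const encodedKey = remoteFileName.split("/").map(encodeURIComponent).join("/");
  return `https://${bucketName}.s3.${region}.amazonaws.com/${encodedKey}`;
}

// e.g. publicObjectUrl("my-bucket", "us-east-1", "uploads/report.pdf")
// -> "https://my-bucket.s3.us-east-1.amazonaws.com/uploads/report.pdf"
```
For private objects, `getSignedUrl` from `@aws-sdk/s3-request-presigner` produces a time-limited URL instead.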
|
1.0
|
Get the Object URL of an uploaded file - Is there anywhere in the documentation where it explains (with code samples ideally) how to get the Object URL of an uploaded file?
Scenario: I'm uploading files to a public `S3` bucket via the SDK. Once the file is uploaded, I need to fire a request to another external service and pass the public Object URL of the uploaded file to it:

This is how I'm uploading the file:
```js
async function uploadFile(
fileBuffer,
remoteFileName,
bucketName,
region,
acl
) {
const s3 = new S3Client({ region })
return await s3.send(
new PutObjectCommand({
Bucket: bucketName,
Key: remoteFileName,
Body: fileBuffer,
ACL: acl,
})
)
}
```
The response from this command doesn't seem to contain the Object URL:
```json
{
"$metadata": {
"httpStatusCode": 200,
"extendedRequestId": "...",
"attempts": 1,
"totalRetryDelay": 0
},
"ETag": "\"...\""
}
```
I've even tried getting it by firing a subsequent `GetObjectCommand` for the previously uploaded file. Again, the Object URL doesn't seem to be in the response:
```json
{
"$metadata": {
"httpStatusCode": 200,
"extendedRequestId": "...",
"attempts": 1,
"totalRetryDelay": 0
},
"AcceptRanges": "bytes",
"ContentLength": 186530,
"ContentType": "application/octet-stream",
"ETag": "\"...\"",
"LastModified": "2021-04-23T20:31:08.000Z",
"Metadata": {}
}
```
Is this scenario even supported by the v3 SDK? If so, which _command_ should I use to get the Object URL of an uploaded file?
|
non_process
|
get the object url of an uploaded file is there anywhere in the documentation where it explains with code samples ideally how to get the object url of an uploaded file scenario i m uploading files to a public bucket via the sdk once the file is uploaded i need to fire a request to another external service and pass the public object url of the uploaded file to it this is how i m uploading the file js async function uploadfile filebuffer remotefilename bucketname region acl const new region return await send new putobjectcommand bucket bucketname key remotefilename body filebuffer acl acl the response from this command doesn t seem to contain the object url json metadata httpstatuscode extendedrequestid attempts totalretrydelay etag i ve even tried getting it by firing a subsequent getobjectcommand for the previously uploaded file again the object url doesn t seem to be in the response json metadata httpstatuscode extendedrequestid attempts totalretrydelay acceptranges bytes contentlength contenttype application octet stream etag lastmodified metadata is this scenario even supported by the sdk if so which command should i use to get the object url of an uploaded file
| 0
|
27,051
| 21,055,285,336
|
IssuesEvent
|
2022-04-01 02:08:11
|
microsoft/vcpkg
|
https://api.github.com/repos/microsoft/vcpkg
|
closed
|
Pull request CI ignores baseline records
|
category:infrastructure requires:vcpkg-tool-release
|
The PR CI now ignores the baseline records, i.e. it builds ports which are known to fail, resulting in unexpected passes for some ports and in unexpected failures (which should be cascade) in depending ports.
Evidence:
https://dev.azure.com/vcpkg/public/_build/results?buildId=69660
(qtinterfaceframework is "fail" since #23837)
https://dev.azure.com/vcpkg/public/_build/results?buildId=69648
(ompl:x64-osx is "fail", omplapp is cascade)
CC @BillyONeal.
|
1.0
|
Pull request CI ignores baseline records - The PR CI now ignores the baseline records, i.e. it builds ports which are known to fail, resulting in unexpected passes for some ports and in unexpected failures (which should be cascade) in depending ports.
Evidence:
https://dev.azure.com/vcpkg/public/_build/results?buildId=69660
(qtinterfaceframework is "fail" since #23837)
https://dev.azure.com/vcpkg/public/_build/results?buildId=69648
(ompl:x64-osx is "fail", omplapp is cascade)
CC @BillyONeal.
|
non_process
|
pull request ci ignores baseline records the pr ci now ignores the baseline records i e it builds ports which are known to fail resulting in unexpected passes for some ports and in unexpected failures which should be cascade in depending ports evidence qtinterfaceframework is fail since ompl osx is fail omplapp is cascade cc billyoneal
| 0
|
8,265
| 11,426,605,685
|
IssuesEvent
|
2020-02-03 22:19:47
|
MicrosoftDocs/azure-docs
|
https://api.github.com/repos/MicrosoftDocs/azure-docs
|
closed
|
Is an Azure Automation resource zone redundant within a given Azure region?
|
Pri1 automation/svc cxp process-automation/subsvc product-question triaged
|
Is an Azure Automation resource zone redundant within a given Azure region? Wasn't able to find this in the docs.
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: ab68016c-9077-a521-2e05-86cdd9a42adc
* Version Independent ID: a31432ac-2b6a-2d4f-2e4f-72c55701b296
* Content: [Azure Automation Overview](https://docs.microsoft.com/en-us/azure/automation/automation-intro)
* Content Source: [articles/automation/automation-intro.md](https://github.com/Microsoft/azure-docs/blob/master/articles/automation/automation-intro.md)
* Service: **automation**
* Sub-service: **process-automation**
* GitHub Login: @MGoedtel
* Microsoft Alias: **magoedte**
|
1.0
|
Is an Azure Automation resource zone redundant within a given Azure region? - Is an Azure Automation resource zone redundant within a given Azure region? Wasn't able to find this in the docs.
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: ab68016c-9077-a521-2e05-86cdd9a42adc
* Version Independent ID: a31432ac-2b6a-2d4f-2e4f-72c55701b296
* Content: [Azure Automation Overview](https://docs.microsoft.com/en-us/azure/automation/automation-intro)
* Content Source: [articles/automation/automation-intro.md](https://github.com/Microsoft/azure-docs/blob/master/articles/automation/automation-intro.md)
* Service: **automation**
* Sub-service: **process-automation**
* GitHub Login: @MGoedtel
* Microsoft Alias: **magoedte**
|
process
|
is an azure automation resource zone redundant within a given azure region is an azure automation resource zone redundant within a given azure region wasn t able to find this in the docs document details ⚠ do not edit this section it is required for docs microsoft com ➟ github issue linking id version independent id content content source service automation sub service process automation github login mgoedtel microsoft alias magoedte
| 1
|
648,373
| 21,184,359,022
|
IssuesEvent
|
2022-04-08 11:07:57
|
webcompat/web-bugs
|
https://api.github.com/repos/webcompat/web-bugs
|
closed
|
clipchamp.com - see bug description
|
browser-firefox priority-normal os-linux engine-gecko
|
<!-- @browser: Firefox 99.0 -->
<!-- @ua_header: Mozilla/5.0 (X11; Linux x86_64; rv:99.0) Gecko/20100101 Firefox/99.0 -->
<!-- @reported_with: unknown -->
<!-- @public_url: https://github.com/webcompat/web-bugs/issues/102190 -->
**URL**: https://clipchamp.com/en/
**Browser / Version**: Firefox 99.0
**Operating System**: Linux
**Tested Another Browser**: Yes Chrome
**Problem type**: Something else
**Description**: Says "Unsupported browser" when logging-in
**Steps to Reproduce**:
When I try to login into my clipchamp account, the website says that I'm using an unsupported browser and they tell me to use MS Edge or Chrome.
It shows this when I'm trying to log in:
https://clipchamp.com/en/unsupported-browser/
<details>
<summary>Browser Configuration</summary>
<ul>
<li>None</li>
</ul>
</details>
_From [webcompat.com](https://webcompat.com/) with ❤️_
|
1.0
|
clipchamp.com - see bug description - <!-- @browser: Firefox 99.0 -->
<!-- @ua_header: Mozilla/5.0 (X11; Linux x86_64; rv:99.0) Gecko/20100101 Firefox/99.0 -->
<!-- @reported_with: unknown -->
<!-- @public_url: https://github.com/webcompat/web-bugs/issues/102190 -->
**URL**: https://clipchamp.com/en/
**Browser / Version**: Firefox 99.0
**Operating System**: Linux
**Tested Another Browser**: Yes Chrome
**Problem type**: Something else
**Description**: Says "Unsupported browser" when logging-in
**Steps to Reproduce**:
When I try to login into my clipchamp account, the website says that I'm using an unsupported browser and they tell me to use MS Edge or Chrome.
It shows this when I'm trying to log in:
https://clipchamp.com/en/unsupported-browser/
<details>
<summary>Browser Configuration</summary>
<ul>
<li>None</li>
</ul>
</details>
_From [webcompat.com](https://webcompat.com/) with ❤️_
|
non_process
|
clipchamp com see bug description url browser version firefox operating system linux tested another browser yes chrome problem type something else description says unsupported browser when logging in steps to reproduce when i try to login into my clipchamp account the website says that i m using an unsupported browser and they tell me to use ms edge or chrome it shows this when i m trying to log in browser configuration none from with ❤️
| 0
|
12,791
| 15,169,329,450
|
IssuesEvent
|
2021-02-12 20:57:05
|
amor71/LiuAlgoTrader
|
https://api.github.com/repos/amor71/LiuAlgoTrader
|
closed
|
[ENH] Throttle API requests to support polygon's free plan for market miners?
|
bug in-process
|
Sorry for the spam of tickets, but I noticed this while running some tests.
**Is your feature request related to a problem? Please describe.**
Essentially, it currently seems that I have set up everything correctly (YAY). I started the example setup by running the miner.toml under the examples section and the `swing-momentum/portfolio.py` like such:
```
[miners.portfolio]
filename = "swing-momentum/portfolio.py"
portfolio_size = 2000
debug = true
index = 'SP500'
rank_days = 90
atr_days = 20
risk_factor = 0.002
indicators = ['SMA100']
```
initially I was getting errors like the following:
```
[load_data()][9942]2021-01-21 22:43:05.755232:loading 200 days for symbol MMM (1/505)
[main()][9942]2021-01-21 22:43:07.067853:[ERROR] aborted w/ exception object of type 'NoneType' has no len()
Traceback (most recent call last):
File "/.venv/lib/python3.8/site-packages/liualgotrader-0.0.86-py3.8.egg/EGG-INFO/scripts/market_miner", line 73, in main
await asyncio.gather(*task_list)
File "swing-momentum/portfolio.py", line 178, in run
await self.load_data(symbols)
File "swing-momentum/portfolio.py", line 67, in load_data
tlog(f"loaded {len(self.data_bars[symbol])} data-points")
TypeError: object of type 'NoneType' has no len()
```
which I now believe I have traced down to how Polygon behaves under the free plan.
In the dashboard I can see that 5 requests per minute are getting answered correctly, but the rest is refused due to rate limitation of the free plan (see https://polygon.io/pricing).

From my end it seems that if one is rate limited, the call to [`daily_bars`](https://github.com/amor71/LiuAlgoTrader/blob/5ad4e46db126698399d338c4488aa4983385c2d8/examples/swing-momentum/portfolio.py#L61) will return `None` (and not throw an Exception), causing all sorts of issues.
**Describe the solution you'd like**
The 'perfect' solution I could imagine is either some rate limiting built into LiuAlgoTrader, or using another free endpoint without rate limits that allows fetching daily data (e.g. Yahoo Finance).
Certainly, if I knew my strategies returned enough to cover the Polygon premium plan, this problem would not exist... but I'm not there yet ;-)
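What is being asked for is essentially a client-side throttle; a minimal, library-agnostic sketch (shown in JavaScript for brevity, not LiuAlgoTrader code) of capping calls at the free plan's 5 requests per minute might look like this:
```js
// Hypothetical throttle: allows at most `limit` calls per rolling time window.
function makeThrottle(limit = 5, windowMs = 60_000) {
  const timestamps = [];
  return async function throttle() {
    const now = Date.now();
    // Drop calls that have fallen out of the window.
    while (timestamps.length && now - timestamps[0] > windowMs) timestamps.shift();
    if (timestamps.length >= limit) {
      const waitMs = windowMs - (now - timestamps[0]) + 1;
      await new Promise((resolve) => setTimeout(resolve, waitMs));
      timestamps.shift();
    }
    timestamps.push(Date.now());
  };
}

// Usage: const throttle = makeThrottle(); then `await throttle();` before every data request.
```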
|
1.0
|
[ENH] Throttle API requests to support polygon's free plan for market miners? - Sorry for the spam of tickets, but I noticed this while running some tests.
**Is your feature request related to a problem? Please describe.**
Essentially, it currently seems that I have set up everything correctly (YAY). I started the example setup by running the miner.toml under the examples section and the `swing-momentum/portfolio.py` like such:
```
[miners.portfolio]
filename = "swing-momentum/portfolio.py"
portfolio_size = 2000
debug = true
index = 'SP500'
rank_days = 90
atr_days = 20
risk_factor = 0.002
indicators = ['SMA100']
```
initially I was getting errors like the following:
```
[load_data()][9942]2021-01-21 22:43:05.755232:loading 200 days for symbol MMM (1/505)
[main()][9942]2021-01-21 22:43:07.067853:[ERROR] aborted w/ exception object of type 'NoneType' has no len()
Traceback (most recent call last):
File "/.venv/lib/python3.8/site-packages/liualgotrader-0.0.86-py3.8.egg/EGG-INFO/scripts/market_miner", line 73, in main
await asyncio.gather(*task_list)
File "swing-momentum/portfolio.py", line 178, in run
await self.load_data(symbols)
File "swing-momentum/portfolio.py", line 67, in load_data
tlog(f"loaded {len(self.data_bars[symbol])} data-points")
TypeError: object of type 'NoneType' has no len()
```
which I now believe I have traced down to how Polygon behaves under the free plan.
In the dashboard I can see that 5 requests per minute are getting answered correctly, but the rest is refused due to rate limitation of the free plan (see https://polygon.io/pricing).

From my end it seems that if one is rate limited, the call to [`daily_bars`](https://github.com/amor71/LiuAlgoTrader/blob/5ad4e46db126698399d338c4488aa4983385c2d8/examples/swing-momentum/portfolio.py#L61) will return `None` (and not throw an Exception), causing all sorts of issues.
**Describe the solution you'd like**
The 'perfect' solution I could imagine is either some rate limiting built into LiuAlgoTrader, or using another free endpoint without rate limits that allows fetching daily data (e.g. Yahoo Finance).
Certainly, if I knew my strategies returned enough to cover the Polygon premium plan, this problem would not exist... but I'm not there yet ;-)
|
process
|
throttle api requests to support polygon s free plan for market miners sorry for the spam of tickets but i noticed this while running some tests is your feature request related to a problem please describe essentially it currently seems that i have setup everything correctly yay i started the example setup with running the miner toml under the examples section and the swing momentum portfolio py like such filename swing momentum portfolio py portfolio size debug true index rank days atr days risk factor indicators initially i was getting errors like the following loading days for symbol mmm aborted w exception object of type nonetype has no len traceback most recent call last file venv lib site packages liualgotrader egg egg info scripts market miner line in main await asyncio gather task list file swing momentum portfolio py line in run await self load data symbols file swing momentum portfolio py line in load data tlog f loaded len self data bars data points typeerror object of type nonetype has no len which i now believed to have traced down on how polygon behaves under the free plan in the dashboard i can see that requests per minute are getting answered correctly but the rest is refused due to rate limitation of the free plan see from my end it seems if one is rate limited the call to will return ǹone and not throw an exception causing all sorts of issues describe the solution you d like the perfect solution i could imagine is either some rate limitation build in into liualgotrader or using another free endpoint without rate limitation that allows to fetch daily data e g yahoo finance certainly if i know my strategies return the expenses needed to pay for the premium for polygon this problem would not exist but i m not there yet
| 1
|
642,686
| 20,910,303,193
|
IssuesEvent
|
2022-03-24 08:41:00
|
oceanprotocol/market
|
https://api.github.com/repos/oceanprotocol/market
|
closed
|
Support 'import' datatokens into Ocean Market
|
Type: Enhancement Type: Epic Priority: Low
|
More generally, anyone should be able to start a marketplace that loads all datasets from the chain.
A q from the community:
[3:56 PM] will we be able to import data tokens onto the Ocean Marketplace, or will it currently only support data tokens created through the GUI
[Thread in slack with answers](https://bigchaindb.slack.com/archives/C012Y7ZFFD4/p1601906252163400)
Matthias: interesting idea, we do not have a UI for it but should technically be possible since lib-js accepts an existing data token. We would need to look into what else is required to get this in market, probably would require a publish flow in UI still

Trent: Thanks. I'm thinking this too. It would also need the info that the Provider has. Is that possible to import?

Trent: And in general, for V3.0 we're ok, no one will expect 'import' functionality at that point.

Alex: we have 3 scenarios:
- Datatoken published without an asset (you can do DT create, mint, createPool). Question is: should we support that? Because DT has no value attached, just speculative
- Datatoken published with an asset, but aquarius is filtering the dataset (he somehow decided that not the case for our MP). Same questions as above
- Datatoken published with an asset and no filtering . This would work out of the box

Trent: Thanks Alex!

Trent: Of course there would also need to be simply the Gui support for "import". It's great to know that it might be easy to do post v3.0. But it feels like there will be a hitch or two.

Alex: it's easy to build if we want to. we need to detach the pool "options" from the asset and have a separate tab just for pools (edited)

Trent: OK! Let's see when there is actual demand.

Alex: yes, agreed, and not for this month

Matthias: yeah, “easy”. So the thing is if we want to do a proper UX for the publish part we could spend at least 2 weeks on it. Dissect the publishing into 2-3 big feedback steps (datatoken creation, asset creation, pool creation) and keep track of each step in some unified view. Then users have to be able to restart each failed step and such. This requires a lot of refactoring of react hooks and market. Once we have that, then importing is easy, cause we can plugin whatever is missing into this flow then

Alex: easy = doable. easy != 1 day 

Alex: definitely at least 2-3 weeks for a proper UI, with all the flows (add/remove, swaps, etc)

Matthias: And then it is also more clear to users why they have to click 9 damn times in MetaMask although we should get this down somehow too, like all those signatures

Alex: yeah, i'm gonna do that
|
1.0
|
Support 'import' datatokens into Ocean Market - More generally, anyone should be able to start a marketplace that loads all datasets from the chain.
A q from the community:
[3:56 PM] will we be able to import data tokens onto the Ocean Marketplace, or will it currently only support data tokens created through the GUI
[Thread in slack with answers](https://bigchaindb.slack.com/archives/C012Y7ZFFD4/p1601906252163400)
Matthias: interesting idea, we do not have a UI for it but should technically be possible since lib-js accepts an existing data token. We would need to look into what else is required to get this in market, probably would require a publish flow in UI still

Trent: Thanks. I'm thinking this too. It would also need the info that the Provider has. Is that possible to import?

Trent: And in general, for V3.0 we're ok, no one will expect 'import' functionality at that point.

Alex: we have 3 scenarios:
- Datatoken published without an asset (you can do DT create, mint, createPool). Question is: should we support that? Because DT has no value attached, just speculative
- Datatoken published with an asset, but aquarius is filtering the dataset (he somehow decided that not the case for our MP). Same questions as above
- Datatoken published with an asset and no filtering . This would work out of the box

Trent: Thanks Alex!

Trent: Of course there would also need to be simply the Gui support for "import". It's great to know that it might be easy to do post v3.0. But it feels like there will be a hitch or two.

Alex: it's easy to build if we want to. we need to detach the pool "options" from the asset and have a separate tab just for pools (edited)

Trent: OK! Let's see when there is actual demand.

Alex: yes, agreed, and not for this month

Matthias: yeah, “easy”. So the thing is if we want to do a proper UX for the publish part we could spend at least 2 weeks on it. Dissect the publishing into 2-3 big feedback steps (datatoken creation, asset creation, pool creation) and keep track of each step in some unified view. Then users have to be able to restart each failed step and such. This requires a lot of refactoring of react hooks and market. Once we have that, then importing is easy, cause we can plugin whatever is missing into this flow then

Alex: easy = doable. easy != 1 day 

Alex: definitely at least 2-3 weeks for a proper UI, with all the flows (add/remove, swaps, etc)

Matthias: And then it is also more clear to users why they have to click 9 damn times in MetaMask although we should get this down somehow too, like all those signatures

Alex: yeah, i'm gonna do that
|
non_process
|
support import datatokens into ocean market more generally anyone should be able to start a marketplace that loads all datasets from the chain a q from the community will we be able to import data tokens onto the ocean marketplace or will it currently only support data tokens created through the gui matthias interesting idea we do not have a ui for it but should technically be possible since lib js accepts an existing data token we would need to look into what else is required to get this in market probably would require a publish flow in ui still  trent thanks i m thinking this too it would also need the info that the provider has is that possible to import  trent and in general for we re ok no one will expect import functionality at that point  alex we have scenarios datatoken published without an asset you can do dt create mint createpool question is should we support that because dt has no value attached just speculative datatoken published with an asset but aquarius is filtering the dataset he somehow decided that not the case for our mp same questions as above datatoken published with an asset and no filtering this would work out of the box  trent thanks alex  trent of course there would also need to be simply the gui support for import it s great to know that it might be easy to do post but it feels like there will be a hitch or two  alex it s easy to build if we want to we need to detach the pool options from the asset and have a separate tab just for pools edited  trent ok let s see when there is actual demand  alex yes agreed and not for this month  matthias yeah “easy” so the thing is if we want to do a proper ux for the publish part we could spend at least weeks on it dissect the publishing into big feedback steps datatoken creation asset creation pool creation and keep track of each step in some unified view then users have to be able to restart each failed step and such this requires a lot of refactoring of react hooks and market once we have that then importing is easy cause we can plugin whatever is missing into this flow then  alex easy doable easy day   alex definitely at least weeks for a proper ui with all the flows add remove swaps etc  matthias and then it is also more clear to users why they have to click damn times in metamask although we should get this down somehow too like all those signatures  alex yeah i m gonna do that
| 0
|
4,510
| 7,353,650,073
|
IssuesEvent
|
2018-03-09 01:53:21
|
MicrosoftDocs/azure-docs
|
https://api.github.com/repos/MicrosoftDocs/azure-docs
|
closed
|
How to set Provisioning Mode
|
active-directory cxp doc-bug in-process triaged
|
When I follow the steps above and go to the Samanage app > Provisioning > I only get the Manual mode. How do I get the Automatic mode that this documentation shows I should be able to use?
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: 63881c47-4bd6-eebd-7f0d-6dcaa5317eb2
* Version Independent ID: 56fc1792-dd50-28cc-ea12-41aa199dd900
* Content: [Tutorial: Configure Samanage for automatic user provisioning with Azure Active Directory | Microsoft Docs](https://docs.microsoft.com/en-us/azure/active-directory/active-directory-saas-samanage-provisioning-tutorial)
* Content Source: [articles/active-directory/active-directory-saas-samanage-provisioning-tutorial.md](https://github.com/Microsoft/azure-docs/blob/master/articles/active-directory/active-directory-saas-samanage-provisioning-tutorial.md)
* Service: **active-directory**
* GitHub Login: @asmalser-msft
* Microsoft Alias: **asmalser-msft**
|
1.0
|
How to set Provisioning Mode - When I follow the steps above and go to the Samanage app > Provisioning > I only get the Manual mode. How do I get the Automatic mode that this documentation shows I should be able to use?
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: 63881c47-4bd6-eebd-7f0d-6dcaa5317eb2
* Version Independent ID: 56fc1792-dd50-28cc-ea12-41aa199dd900
* Content: [Tutorial: Configure Samanage for automatic user provisioning with Azure Active Directory | Microsoft Docs](https://docs.microsoft.com/en-us/azure/active-directory/active-directory-saas-samanage-provisioning-tutorial)
* Content Source: [articles/active-directory/active-directory-saas-samanage-provisioning-tutorial.md](https://github.com/Microsoft/azure-docs/blob/master/articles/active-directory/active-directory-saas-samanage-provisioning-tutorial.md)
* Service: **active-directory**
* GitHub Login: @asmalser-msft
* Microsoft Alias: **asmalser-msft**
|
process
|
how to set provisioning mode when i follow the steps above and go to the samanage app provisioning i only get the manual mode how do i get the automatic mode that this documentation shows i should be able to use document details ⚠ do not edit this section it is required for docs microsoft com ➟ github issue linking id eebd version independent id content content source service active directory github login asmalser msft microsoft alias asmalser msft
| 1
|
22,354
| 31,030,430,964
|
IssuesEvent
|
2023-08-10 12:07:19
|
metabase/metabase
|
https://api.github.com/repos/metabase/metabase
|
closed
|
Error when using column reference as the second argument of the contains function
|
Type:Bug Priority:P2 .Backend .Team/QueryProcessor :hammer_and_wrench:
|
### Describe the bug
It looks like Metabase doesn't accept a column reference as a source for the second argument of the `contains` function.
When creating a custom column with an expression like `case(contains([Display name], [Given name]), "yes", "no")`, the question returns an error `Input to update-string-value does not match schema: [0;33m [(named [(named (not (= :value :field)) :value) nil nil] value) nil] [0m`.
### To Reproduce
From a sample `users` table
1. Create a question
2. Add a custom column with an expression such as `case(contains([Display name], [Given name]), "yes", "no")`
3. Show the preview
4. Returns an error `Input to update-string-value does not match schema: [0;33m [(named [(named (not (= :value :field)) :value) nil nil] value) nil] [0m`
### Expected behavior
The expression should accept referenced columns for both contains parameters
### Logs
{:database_id 2,
:started_at #t "2023-06-02T10:42:44.943921Z[GMT]",
:error_type :invalid-query,
:json_query
{:database 2,
:query
{:source-table 15,
:expressions {:calc ["case" [[["contains" ["field" 1374 nil] ["field" 1372 nil]] "yes"]] {:default "no"}]},
:fields [["field" 1376 nil] ["field" 1374 nil] ["field" 1372 nil] ["expression" "calc" nil]],
:limit 10},
:type "query",
:parameters [],
:middleware {:js-int-to-string? true, :add-default-userland-constraints? true}},
:native nil,
:status :failed,
:class clojure.lang.ExceptionInfo,
:stacktrace
["--> driver.sql.query_processor$fn__63286$update_string_value__63293.invoke(query_processor.clj:1070)"
"driver.sql.query_processor$fn__63317.invokeStatic(query_processor.clj:1080)"
"driver.sql.query_processor$fn__63317.invoke(query_processor.clj:1078)"
"driver.sql.query_processor$fn__63080.invokeStatic(query_processor.clj:825)"
"driver.sql.query_processor$fn__63080.invoke(query_processor.clj:819)"
"driver.sql.query_processor$fn__62767.invokeStatic(query_processor.clj:522)"
"driver.sql.query_processor$fn__62767.invoke(query_processor.clj:519)"
"driver.sql.query_processor$as.invokeStatic(query_processor.clj:984)"
"driver.sql.query_processor$as.doInvoke(query_processor.clj:953)"
"driver.sql.query_processor$fn__63260$iter__63262__63266$fn__63267$fn__63268.invoke(query_processor.clj:1052)"
"driver.sql.query_processor$fn__63260$iter__63262__63266$fn__63267.invoke(query_processor.clj:1051)"
"driver.sql.query_processor$fn__63260.invokeStatic(query_processor.clj:1051)"
"driver.sql.query_processor$fn__63260.invoke(query_processor.clj:1049)"
"driver.sql.query_processor$apply_top_level_clauses$fn__63538.invoke(query_processor.clj:1372)"
"driver.sql.query_processor$apply_top_level_clauses.invokeStatic(query_processor.clj:1370)"
"driver.sql.query_processor$apply_top_level_clauses.invoke(query_processor.clj:1366)"
"driver.sql.query_processor$apply_clauses.invokeStatic(query_processor.clj:1410)"
"driver.sql.query_processor$apply_clauses.invoke(query_processor.clj:1400)"
"driver.sql.query_processor$apply_source_query.invokeStatic(query_processor.clj:1394)"
"driver.sql.query_processor$apply_source_query.invoke(query_processor.clj:1379)"
"driver.sql.query_processor$apply_clauses.invokeStatic(query_processor.clj:1408)"
"driver.sql.query_processor$apply_clauses.invoke(query_processor.clj:1400)"
"driver.sql.query_processor$mbql__GT_honeysql.invokeStatic(query_processor.clj:1433)"
"driver.sql.query_processor$mbql__GT_honeysql.invoke(query_processor.clj:1424)"
"driver.sql.query_processor$mbql__GT_native.invokeStatic(query_processor.clj:1442)"
"driver.sql.query_processor$mbql__GT_native.invoke(query_processor.clj:1438)"
"driver.sql$fn__101522.invokeStatic(sql.clj:42)"
"driver.sql$fn__101522.invoke(sql.clj:40)"
"query_processor.middleware.mbql_to_native$query__GT_native_form.invokeStatic(mbql_to_native.clj:14)"
"query_processor.middleware.mbql_to_native$query__GT_native_form.invoke(mbql_to_native.clj:9)"
"query_processor.middleware.mbql_to_native$mbql__GT_native$fn__68063.invoke(mbql_to_native.clj:21)"
"query_processor$fn__70691$combined_post_process__70696$combined_post_process_STAR___70697.invoke(query_processor.clj:243)"
"query_processor$fn__70691$combined_pre_process__70692$combined_pre_process_STAR___70693.invoke(query_processor.clj:240)"
"query_processor.middleware.resolve_database_and_driver$resolve_database_and_driver$fn__69083$fn__69088.invoke(resolve_database_and_driver.clj:36)"
"driver$do_with_driver.invokeStatic(driver.clj:90)"
"driver$do_with_driver.invoke(driver.clj:86)"
"query_processor.middleware.resolve_database_and_driver$resolve_database_and_driver$fn__69083.invoke(resolve_database_and_driver.clj:35)"
"query_processor.middleware.fetch_source_query$resolve_card_id_source_tables$fn__64953.invoke(fetch_source_query.clj:310)"
"query_processor.middleware.store$initialize_store$fn__65131$fn__65132.invoke(store.clj:12)"
"query_processor.store$do_with_store.invokeStatic(store.clj:47)"
"query_processor.store$do_with_store.invoke(store.clj:41)"
"query_processor.middleware.store$initialize_store$fn__65131.invoke(store.clj:11)"
"query_processor.middleware.normalize_query$normalize$fn__69372.invoke(normalize_query.clj:25)"
"query_processor.middleware.constraints$add_default_userland_constraints$fn__66309.invoke(constraints.clj:54)"
"query_processor.middleware.process_userland_query$process_userland_query$fn__69308.invoke(process_userland_query.clj:150)"
"query_processor.middleware.catch_exceptions$catch_exceptions$fn__69685.invoke(catch_exceptions.clj:171)"
"query_processor.reducible$async_qp$qp_STAR___59455$thunk__59457.invoke(reducible.clj:103)"
"query_processor.reducible$async_qp$qp_STAR___59455.invoke(reducible.clj:109)"
"query_processor.reducible$async_qp$qp_STAR___59455.invoke(reducible.clj:94)"
"query_processor.reducible$sync_qp$qp_STAR___59467.doInvoke(reducible.clj:129)"
"query_processor$process_userland_query.invokeStatic(query_processor.clj:362)"
"query_processor$process_userland_query.doInvoke(query_processor.clj:358)"
"query_processor$fn__70739$process_query_and_save_execution_BANG___70748$fn__70751.invoke(query_processor.clj:373)"
"query_processor$fn__70739$process_query_and_save_execution_BANG___70748.invoke(query_processor.clj:366)"
"query_processor$fn__70784$process_query_and_save_with_max_results_constraints_BANG___70793$fn__70796.invoke(query_processor.clj:385)"
"query_processor$fn__70784$process_query_and_save_with_max_results_constraints_BANG___70793.invoke(query_processor.clj:378)"
"api.dataset$run_query_async$fn__86545.invoke(dataset.clj:73)"
"query_processor.streaming$streaming_response_STAR_$fn__54305$fn__54306.invoke(streaming.clj:166)"
"query_processor.streaming$streaming_response_STAR_$fn__54305.invoke(streaming.clj:165)"
"async.streaming_response$do_f_STAR_.invokeStatic(streaming_response.clj:69)"
"async.streaming_response$do_f_STAR_.invoke(streaming_response.clj:67)"
"async.streaming_response$do_f_async$task__36922.invoke(streaming_response.clj:88)"],
:card_id nil,
:context :ad-hoc,
:error
"Input to update-string-value does not match schema: \n\n\t [(named [(named (not (= :value :field)) :value) nil nil] value) nil] \n\n",
:row_count 0,
:running_time 0,
:preprocessed
{:database 2,
:query
{:source-table 15,
:expressions {"calc" [:case [[[:contains [:field 1374 nil] [:field 1372 nil]] "yes"]] {:default "no"}]},
:fields [[:field 1376 nil] [:field 1374 nil] [:field 1372 nil] [:expression "calc"]],
:limit 10},
:type :query,
:middleware {:js-int-to-string? true, :add-default-userland-constraints? true},
:info {:executed-by 2, :context :ad-hoc}},
:ex-data
{:type :schema.core/error,
:value
[[:field
1372
{:metabase.query-processor.util.add-alias-info/source-table 15,
:metabase.query-processor.util.add-alias-info/source-alias "given_name",
:metabase.query-processor.util.add-alias-info/desired-alias "given_name",
:metabase.query-processor.util.add-alias-info/position 2}]
#object[metabase.driver.sql.query_processor$fn__63317$fn__63321 0x5d978eb "metabase.driver.sql.query_processor$fn__63317$fn__63321@5d978eb"]],
:error [(named [(named (not (= :value :field)) :value) nil nil] value) nil]},
:data {:rows [], :cols []}}
### Information about your Metabase installation
```JSON
{
"browser-info": {
"language": "en-US",
"platform": "Win32",
"userAgent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/115.0.0.0 Safari/537.36 Edg/115.0.0.0",
"vendor": "Google Inc."
},
"system-info": {
"file.encoding": "UTF-8",
"java.runtime.name": "OpenJDK Runtime Environment",
"java.runtime.version": "11.0.19+7",
"java.vendor": "Eclipse Adoptium",
"java.vendor.url": "https://adoptium.net/",
"java.version": "11.0.19",
"java.vm.name": "OpenJDK 64-Bit Server VM",
"java.vm.version": "11.0.19+7",
"os.name": "Linux",
"os.version": "5.10.164.1-1.cm1",
"user.language": "en",
"user.timezone": "GMT"
},
"metabase-info": {
"databases": [
"postgres"
],
"hosting-env": "unknown",
"application-database": "postgres",
"application-database-details": {
"database": {
"name": "PostgreSQL",
"version": "11.18"
},
"jdbc-driver": {
"name": "PostgreSQL JDBC Driver",
"version": "42.5.1"
}
},
"run-mode": "prod",
"version": {
"date": "2023-04-28",
"tag": "v0.46.2",
"branch": "release-x.46.x",
"hash": "8967c94"
},
"settings": {
"report-timezone": null
}
}
}
```
### Severity
blocking for non-technical users
### Additional context
I found a very tricky workaround to make it work:
- Change the expression to `case(contains([Display name], "MYVALUE"), "yes", "no")`
- Convert the query to SQL
- From this query, replace `LIKE '%MYVALUE%'` with `LIKE CONCAT('%', "public"."users"."given_name", '%')`.
- Run it, it works!
Basically the trick is to:
- use SQL to be able to reference the `given_name` column properly with `"public"."users"."given_name"`
- surround the `given_name` with `%` so that it performs a search that match if the string is anywhere in the source column
|
1.0
|
Error when using column reference as the second argument of the contains function - ### Describe the bug
It looks like Metabase doesn't accept a column reference as a source for the second argument of the `contains` function.
When creating a custom column with an expression like `case(contains([Display name], [Given name]), "yes", "no")`, the question returns an error `Input to update-string-value does not match schema: [0;33m [(named [(named (not (= :value :field)) :value) nil nil] value) nil] [0m`.
### To Reproduce
From a sample `users` table
1. Create a question
2. Add a custom column with an expression such as `case(contains([Display name], [Given name]), "yes", "no")`
3. Show the preview
4. Returns an error `Input to update-string-value does not match schema: [0;33m [(named [(named (not (= :value :field)) :value) nil nil] value) nil] [0m`
### Expected behavior
The expression should accept referenced columns for both contains parameters
### Logs
{:database_id 2,
:started_at #t "2023-06-02T10:42:44.943921Z[GMT]",
:error_type :invalid-query,
:json_query
{:database 2,
:query
{:source-table 15,
:expressions {:calc ["case" [[["contains" ["field" 1374 nil] ["field" 1372 nil]] "yes"]] {:default "no"}]},
:fields [["field" 1376 nil] ["field" 1374 nil] ["field" 1372 nil] ["expression" "calc" nil]],
:limit 10},
:type "query",
:parameters [],
:middleware {:js-int-to-string? true, :add-default-userland-constraints? true}},
:native nil,
:status :failed,
:class clojure.lang.ExceptionInfo,
:stacktrace
["--> driver.sql.query_processor$fn__63286$update_string_value__63293.invoke(query_processor.clj:1070)"
"driver.sql.query_processor$fn__63317.invokeStatic(query_processor.clj:1080)"
"driver.sql.query_processor$fn__63317.invoke(query_processor.clj:1078)"
"driver.sql.query_processor$fn__63080.invokeStatic(query_processor.clj:825)"
"driver.sql.query_processor$fn__63080.invoke(query_processor.clj:819)"
"driver.sql.query_processor$fn__62767.invokeStatic(query_processor.clj:522)"
"driver.sql.query_processor$fn__62767.invoke(query_processor.clj:519)"
"driver.sql.query_processor$as.invokeStatic(query_processor.clj:984)"
"driver.sql.query_processor$as.doInvoke(query_processor.clj:953)"
"driver.sql.query_processor$fn__63260$iter__63262__63266$fn__63267$fn__63268.invoke(query_processor.clj:1052)"
"driver.sql.query_processor$fn__63260$iter__63262__63266$fn__63267.invoke(query_processor.clj:1051)"
"driver.sql.query_processor$fn__63260.invokeStatic(query_processor.clj:1051)"
"driver.sql.query_processor$fn__63260.invoke(query_processor.clj:1049)"
"driver.sql.query_processor$apply_top_level_clauses$fn__63538.invoke(query_processor.clj:1372)"
"driver.sql.query_processor$apply_top_level_clauses.invokeStatic(query_processor.clj:1370)"
"driver.sql.query_processor$apply_top_level_clauses.invoke(query_processor.clj:1366)"
"driver.sql.query_processor$apply_clauses.invokeStatic(query_processor.clj:1410)"
"driver.sql.query_processor$apply_clauses.invoke(query_processor.clj:1400)"
"driver.sql.query_processor$apply_source_query.invokeStatic(query_processor.clj:1394)"
"driver.sql.query_processor$apply_source_query.invoke(query_processor.clj:1379)"
"driver.sql.query_processor$apply_clauses.invokeStatic(query_processor.clj:1408)"
"driver.sql.query_processor$apply_clauses.invoke(query_processor.clj:1400)"
"driver.sql.query_processor$mbql__GT_honeysql.invokeStatic(query_processor.clj:1433)"
"driver.sql.query_processor$mbql__GT_honeysql.invoke(query_processor.clj:1424)"
"driver.sql.query_processor$mbql__GT_native.invokeStatic(query_processor.clj:1442)"
"driver.sql.query_processor$mbql__GT_native.invoke(query_processor.clj:1438)"
"driver.sql$fn__101522.invokeStatic(sql.clj:42)"
"driver.sql$fn__101522.invoke(sql.clj:40)"
"query_processor.middleware.mbql_to_native$query__GT_native_form.invokeStatic(mbql_to_native.clj:14)"
"query_processor.middleware.mbql_to_native$query__GT_native_form.invoke(mbql_to_native.clj:9)"
"query_processor.middleware.mbql_to_native$mbql__GT_native$fn__68063.invoke(mbql_to_native.clj:21)"
"query_processor$fn__70691$combined_post_process__70696$combined_post_process_STAR___70697.invoke(query_processor.clj:243)"
"query_processor$fn__70691$combined_pre_process__70692$combined_pre_process_STAR___70693.invoke(query_processor.clj:240)"
"query_processor.middleware.resolve_database_and_driver$resolve_database_and_driver$fn__69083$fn__69088.invoke(resolve_database_and_driver.clj:36)"
"driver$do_with_driver.invokeStatic(driver.clj:90)"
"driver$do_with_driver.invoke(driver.clj:86)"
"query_processor.middleware.resolve_database_and_driver$resolve_database_and_driver$fn__69083.invoke(resolve_database_and_driver.clj:35)"
"query_processor.middleware.fetch_source_query$resolve_card_id_source_tables$fn__64953.invoke(fetch_source_query.clj:310)"
"query_processor.middleware.store$initialize_store$fn__65131$fn__65132.invoke(store.clj:12)"
"query_processor.store$do_with_store.invokeStatic(store.clj:47)"
"query_processor.store$do_with_store.invoke(store.clj:41)"
"query_processor.middleware.store$initialize_store$fn__65131.invoke(store.clj:11)"
"query_processor.middleware.normalize_query$normalize$fn__69372.invoke(normalize_query.clj:25)"
"query_processor.middleware.constraints$add_default_userland_constraints$fn__66309.invoke(constraints.clj:54)"
"query_processor.middleware.process_userland_query$process_userland_query$fn__69308.invoke(process_userland_query.clj:150)"
"query_processor.middleware.catch_exceptions$catch_exceptions$fn__69685.invoke(catch_exceptions.clj:171)"
"query_processor.reducible$async_qp$qp_STAR___59455$thunk__59457.invoke(reducible.clj:103)"
"query_processor.reducible$async_qp$qp_STAR___59455.invoke(reducible.clj:109)"
"query_processor.reducible$async_qp$qp_STAR___59455.invoke(reducible.clj:94)"
"query_processor.reducible$sync_qp$qp_STAR___59467.doInvoke(reducible.clj:129)"
"query_processor$process_userland_query.invokeStatic(query_processor.clj:362)"
"query_processor$process_userland_query.doInvoke(query_processor.clj:358)"
"query_processor$fn__70739$process_query_and_save_execution_BANG___70748$fn__70751.invoke(query_processor.clj:373)"
"query_processor$fn__70739$process_query_and_save_execution_BANG___70748.invoke(query_processor.clj:366)"
"query_processor$fn__70784$process_query_and_save_with_max_results_constraints_BANG___70793$fn__70796.invoke(query_processor.clj:385)"
"query_processor$fn__70784$process_query_and_save_with_max_results_constraints_BANG___70793.invoke(query_processor.clj:378)"
"api.dataset$run_query_async$fn__86545.invoke(dataset.clj:73)"
"query_processor.streaming$streaming_response_STAR_$fn__54305$fn__54306.invoke(streaming.clj:166)"
"query_processor.streaming$streaming_response_STAR_$fn__54305.invoke(streaming.clj:165)"
"async.streaming_response$do_f_STAR_.invokeStatic(streaming_response.clj:69)"
"async.streaming_response$do_f_STAR_.invoke(streaming_response.clj:67)"
"async.streaming_response$do_f_async$task__36922.invoke(streaming_response.clj:88)"],
:card_id nil,
:context :ad-hoc,
:error
"Input to update-string-value does not match schema: \n\n\t [(named [(named (not (= :value :field)) :value) nil nil] value) nil] \n\n",
:row_count 0,
:running_time 0,
:preprocessed
{:database 2,
:query
{:source-table 15,
:expressions {"calc" [:case [[[:contains [:field 1374 nil] [:field 1372 nil]] "yes"]] {:default "no"}]},
:fields [[:field 1376 nil] [:field 1374 nil] [:field 1372 nil] [:expression "calc"]],
:limit 10},
:type :query,
:middleware {:js-int-to-string? true, :add-default-userland-constraints? true},
:info {:executed-by 2, :context :ad-hoc}},
:ex-data
{:type :schema.core/error,
:value
[[:field
1372
{:metabase.query-processor.util.add-alias-info/source-table 15,
:metabase.query-processor.util.add-alias-info/source-alias "given_name",
:metabase.query-processor.util.add-alias-info/desired-alias "given_name",
:metabase.query-processor.util.add-alias-info/position 2}]
#object[metabase.driver.sql.query_processor$fn__63317$fn__63321 0x5d978eb "metabase.driver.sql.query_processor$fn__63317$fn__63321@5d978eb"]],
:error [(named [(named (not (= :value :field)) :value) nil nil] value) nil]},
:data {:rows [], :cols []}}
### Information about your Metabase installation
```JSON
{
"browser-info": {
"language": "en-US",
"platform": "Win32",
"userAgent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/115.0.0.0 Safari/537.36 Edg/115.0.0.0",
"vendor": "Google Inc."
},
"system-info": {
"file.encoding": "UTF-8",
"java.runtime.name": "OpenJDK Runtime Environment",
"java.runtime.version": "11.0.19+7",
"java.vendor": "Eclipse Adoptium",
"java.vendor.url": "https://adoptium.net/",
"java.version": "11.0.19",
"java.vm.name": "OpenJDK 64-Bit Server VM",
"java.vm.version": "11.0.19+7",
"os.name": "Linux",
"os.version": "5.10.164.1-1.cm1",
"user.language": "en",
"user.timezone": "GMT"
},
"metabase-info": {
"databases": [
"postgres"
],
"hosting-env": "unknown",
"application-database": "postgres",
"application-database-details": {
"database": {
"name": "PostgreSQL",
"version": "11.18"
},
"jdbc-driver": {
"name": "PostgreSQL JDBC Driver",
"version": "42.5.1"
}
},
"run-mode": "prod",
"version": {
"date": "2023-04-28",
"tag": "v0.46.2",
"branch": "release-x.46.x",
"hash": "8967c94"
},
"settings": {
"report-timezone": null
}
}
}
```
### Severity
blocking for non-technical users
### Additional context
I found a very tricky workaround to make it work:
- Change the expression to `case(contains([Display name], "MYVALUE"), "yes", "no")`
- Convert the query to SQL
- From this query, replace `LIKE '%MYVALUE%'` with `LIKE CONCAT('%', "public"."users"."given_name", '%')`.
- Run it, it works!
Basically the trick is to:
- use SQL to be able to reference the `given_name` column properly with `"public"."users"."given_name"`
- surround the `given_name` with `%` so that it performs a search that match if the string is anywhere in the source column
|
process
|
error when using column reference as the second argument of the contains function describe the bug it looks like metabase doesn t accepts column reference as a source for the second argument of the contains function when creating a custom column with an expression like case contains yes no the question returns an error input to update string value does not match schema value nil to reproduce fom a sample users table create a question add a custom column with an expression such as case contains yes no show the preview returns an error input to update string value does not match schema value nil expected behavior the expression should accept referenced columns for both contains parameters logs database id started at t error type invalid query json query database query source table expressions calc yes default no fields limit type query parameters middleware js int to string true add default userland constraints true native nil status failed class clojure lang exceptioninfo stacktrace driver sql query processor fn update string value invoke query processor clj driver sql query processor fn invokestatic query processor clj driver sql query processor fn invoke query processor clj driver sql query processor fn invokestatic query processor clj driver sql query processor fn invoke query processor clj driver sql query processor fn invokestatic query processor clj driver sql query processor fn invoke query processor clj driver sql query processor as invokestatic query processor clj driver sql query processor as doinvoke query processor clj driver sql query processor fn iter fn fn invoke query processor clj driver sql query processor fn iter fn invoke query processor clj driver sql query processor fn invokestatic query processor clj driver sql query processor fn invoke query processor clj driver sql query processor apply top level clauses fn invoke query processor clj driver sql query processor apply top level clauses invokestatic query processor clj driver sql query processor apply top level clauses invoke query processor clj driver sql query processor apply clauses invokestatic query processor clj driver sql query processor apply clauses invoke query processor clj driver sql query processor apply source query invokestatic query processor clj driver sql query processor apply source query invoke query processor clj driver sql query processor apply clauses invokestatic query processor clj driver sql query processor apply clauses invoke query processor clj driver sql query processor mbql gt honeysql invokestatic query processor clj driver sql query processor mbql gt honeysql invoke query processor clj driver sql query processor mbql gt native invokestatic query processor clj driver sql query processor mbql gt native invoke query processor clj driver sql fn invokestatic sql clj driver sql fn invoke sql clj query processor middleware mbql to native query gt native form invokestatic mbql to native clj query processor middleware mbql to native query gt native form invoke mbql to native clj query processor middleware mbql to native mbql gt native fn invoke mbql to native clj query processor fn combined post process combined post process star invoke query processor clj query processor fn combined pre process combined pre process star invoke query processor clj query processor middleware resolve database and driver resolve database and driver fn fn invoke resolve database and driver clj driver do with driver invokestatic driver clj driver do with driver invoke driver clj query processor middleware resolve 
database and driver resolve database and driver fn invoke resolve database and driver clj query processor middleware fetch source query resolve card id source tables fn invoke fetch source query clj query processor middleware store initialize store fn fn invoke store clj query processor store do with store invokestatic store clj query processor store do with store invoke store clj query processor middleware store initialize store fn invoke store clj query processor middleware normalize query normalize fn invoke normalize query clj query processor middleware constraints add default userland constraints fn invoke constraints clj query processor middleware process userland query process userland query fn invoke process userland query clj query processor middleware catch exceptions catch exceptions fn invoke catch exceptions clj query processor reducible async qp qp star thunk invoke reducible clj query processor reducible async qp qp star invoke reducible clj query processor reducible async qp qp star invoke reducible clj query processor reducible sync qp qp star doinvoke reducible clj query processor process userland query invokestatic query processor clj query processor process userland query doinvoke query processor clj query processor fn process query and save execution bang fn invoke query processor clj query processor fn process query and save execution bang invoke query processor clj query processor fn process query and save with max results constraints bang fn invoke query processor clj query processor fn process query and save with max results constraints bang invoke query processor clj api dataset run query async fn invoke dataset clj query processor streaming streaming response star fn fn invoke streaming clj query processor streaming streaming response star fn invoke streaming clj async streaming response do f star invokestatic streaming response clj async streaming response do f star invoke streaming response clj async streaming response do f async task invoke streaming response clj card id nil context ad hoc error input to update string value does not match schema n n t value nil n n row count running time preprocessed database query source table expressions calc yes default no fields limit type query middleware js int to string true add default userland constraints true info executed by context ad hoc ex data type schema core error value field metabase query processor util add alias info source table metabase query processor util add alias info source alias given name metabase query processor util add alias info desired alias given name metabase query processor util add alias info position object error value nil data rows cols information about your metabase installation json browser info language en us platform useragent mozilla windows nt applewebkit khtml like gecko chrome safari edg vendor google inc system info file encoding utf java runtime name openjdk runtime environment java runtime version java vendor eclipse adoptium java vendor url java version java vm name openjdk bit server vm java vm version os name linux os version user language en user timezone gmt metabase info databases postgres hosting env unknown application database postgres application database details database name postgresql version jdbc driver name postgresql jdbc driver version run mode prod version date tag branch release x x hash settings report timezone null severity blocking for non technical users additional context i found a very tricky workaround to make it work change the expression to case 
contains myvalue yes no convert the query to sql from this query replace like myvalue with like concat public users given name run it it works basically the trick is to use sql to be able to reference the given name column properly with public users given name surround the given name with so that it performs a search that match if the string is anywhere in the source column
| 1
|
800,156
| 28,354,038,098
|
IssuesEvent
|
2023-04-12 05:44:54
|
googleapis/release-please
|
https://api.github.com/repos/googleapis/release-please
|
opened
|
Merged release commit parsing fails when branch name is version-like
|
priority: p2 type: bug
|
#### Environment details
- `release-please` version: 15.10.3
#### Steps to reproduce
1. create repository
2. set branch `v3` as default
3. setup multipackage setup with merged release
4. release one version and get merge commit message `chore: release v3`
Now every commit in the v3 branch will be broken, since release-please will try to parse `chore: release v3` as "chore${scope}: release${component} ${version}" and `v3` matches the `v[0-9].*` pattern.
Related: https://github.com/googleapis/release-please/blob/1d203c7884c3a48ee52bab65a3eb6861286be7f9/src/strategies/base.ts#L527-L537 -- order of titles parsing.
Quick fix: set `pull-request-title-pattern` to rubbish.
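As a quick illustration (a hypothetical check, not release-please's actual parser), any version-like pattern will happily accept a branch named `v3`, which is why the merge commit title becomes ambiguous:
```js
// Hypothetical version-like check: the branch name "v3" is indistinguishable from a version.
const versionLike = /^v[0-9].*/;

console.log(versionLike.test("v3"));   // true  -> "chore: release v3" parses as a release of version v3
console.log(versionLike.test("main")); // false -> "chore: release main" would not be misread this way
```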
|
1.0
|
Merged release commit parsing fails when branch name is version-like - #### Environment details
- `release-please` version: 15.10.3
#### Steps to reproduce
1. create repository
2. set branch `v3` as default
3. setup multipackage setup with merged release
4. release one version and get merge commit message `chore: release v3`
Now every commit in the v3 branch will be broken, since release-please will try to parse `chore: release v3` as "chore${scope}: release${component} ${version}" and `v3` matches the `v[0-9].*` pattern.
Related: https://github.com/googleapis/release-please/blob/1d203c7884c3a48ee52bab65a3eb6861286be7f9/src/strategies/base.ts#L527-L537 -- order of titles parsing.
Quick fix: set `pull-request-title-pattern` to rubbish.
|
non_process
|
merged release commit parsing fails when branch name is version like environment details release please version steps to reproduce create repository set branch as default setup multipackage setup with merged release release one version and get merge commit message chore release now every commit in branch will be broken since release please will try to parse chore release as chore scope release component version and matches v pattern related order of titles parsing quick fix set pull request title pattern to rubbish
| 0
|