Dataset columns, with dtypes and value/length statistics:

| Column | Dtype | Stats |
|---|---|---|
| Unnamed: 0 | int64 | min 0, max 832k |
| id | float64 | min 2.49B, max 32.1B |
| type | stringclasses | 1 value |
| created_at | stringlengths | length 19 |
| repo | stringlengths | lengths 7 to 112 |
| repo_url | stringlengths | lengths 36 to 141 |
| action | stringclasses | 3 values |
| title | stringlengths | lengths 1 to 744 |
| labels | stringlengths | lengths 4 to 574 |
| body | stringlengths | lengths 9 to 211k |
| index | stringclasses | 10 values |
| text_combine | stringlengths | lengths 96 to 211k |
| label | stringclasses | 2 values |
| text | stringlengths | lengths 96 to 188k |
| binary_label | int64 | min 0, max 1 |
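The column layout above suggests a binary issue-classification dataset: `label` takes two string values (`process` / `non_process`) and `binary_label` is its 0/1 encoding. A minimal stdlib-only sketch of that relationship, using two toy records in place of the real rows (no file path is given in this dump, so the values below are illustrative, not taken from the dataset):

```python
from collections import Counter

# Toy records mirroring a few of the columns above; values are illustrative.
rows = [
    {"repo": "NixOS/nixpkgs", "action": "closed", "label": "process", "binary_label": 1},
    {"repo": "tzejit/pe", "action": "opened", "label": "non_process", "binary_label": 0},
]

# In every example row shown, binary_label is 1 exactly when label == "process";
# this assertion encodes that apparent convention.
assert all(r["binary_label"] == int(r["label"] == "process") for r in rows)

counts = Counter(r["label"] for r in rows)
print(dict(counts))
```

The same `Counter` pass over the real `label` column would give the class balance of the full 832k-row dataset.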
Unnamed: 0: 11,213
id: 13,965,936,536
type: IssuesEvent
created_at: 2020-10-26 00:40:49
repo: NixOS/nixpkgs
repo_url: https://api.github.com/repos/NixOS/nixpkgs
action: closed
title: 20.09 Release notes
labels: 0.kind: enhancement 6.topic: release process
body:
This thread is for any release-worthy notes which may have not have made their way into https://github.com/NixOS/nixpkgs/blob/master/nixos/doc/manual/release-notes/rl-2009.xml yet. Please leave a summary and any relevant links to the items. I will try and go through them before the release to ensure the notes are in order. List to make items easier to track: - [x] Python35 was removed - [x] Agda changes - [x] Amdvlk can be added to `hardware.opengl.extraPackages` - [x] Cinnamon desktop was added - [x] Fontconfig was bumped, and reworked on NixOS - [x] Nvidia Optimus/Prime is fully usable now - [ ] Core changes (gcc, glibc, linux kernel, mesa, openssl) #101444 - [ ] Desktop changes (plasma, kdeApplications, gnome, pantheon) #101444 - [ ] Maybe include some others? (cinnamon, xfce, lxqt)? #101444 - [ ] Capture signifcant changes https://github.com/NixOS/nixos-search/pull/206 #101444
index: 1.0
text_combine:
20.09 Release notes - This thread is for any release-worthy notes which may have not have made their way into https://github.com/NixOS/nixpkgs/blob/master/nixos/doc/manual/release-notes/rl-2009.xml yet. Please leave a summary and any relevant links to the items. I will try and go through them before the release to ensure the notes are in order. List to make items easier to track: - [x] Python35 was removed - [x] Agda changes - [x] Amdvlk can be added to `hardware.opengl.extraPackages` - [x] Cinnamon desktop was added - [x] Fontconfig was bumped, and reworked on NixOS - [x] Nvidia Optimus/Prime is fully usable now - [ ] Core changes (gcc, glibc, linux kernel, mesa, openssl) #101444 - [ ] Desktop changes (plasma, kdeApplications, gnome, pantheon) #101444 - [ ] Maybe include some others? (cinnamon, xfce, lxqt)? #101444 - [ ] Capture signifcant changes https://github.com/NixOS/nixos-search/pull/206 #101444
label: process
text:
release notes this thread is for any release worthy notes which may have not have made their way into yet please leave a summary and any relevant links to the items i will try and go through them before the release to ensure the notes are in order list to make items easier to track was removed agda changes amdvlk can be added to hardware opengl extrapackages cinnamon desktop was added fontconfig was bumped and reworked on nixos nvidia optimus prime is fully usable now core changes gcc glibc linux kernel mesa openssl desktop changes plasma kdeapplications gnome pantheon maybe include some others cinnamon xfce lxqt capture signifcant changes
binary_label: 1
Unnamed: 0: 248,602
id: 18,858,095,699
type: IssuesEvent
created_at: 2021-11-12 09:22:49
repo: tzejit/pe
repo_url: https://api.github.com/repos/tzejit/pe
action: opened
title: Broken link
labels: severity.Low type.DocumentationBug
body:
Link shown in the diagram on page 11 of the DG does not lead to a valid github page ![image.png](https://raw.githubusercontent.com/tzejit/pe/master/files/966db3c1-5d23-4840-946f-557794f27044.png) <!--session: 1636704460468-56fd96ad-880c-43e7-b185-87285ec0f872--> <!--Version: Web v3.4.1-->
index: 1.0
text_combine:
Broken link - Link shown in the diagram on page 11 of the DG does not lead to a valid github page ![image.png](https://raw.githubusercontent.com/tzejit/pe/master/files/966db3c1-5d23-4840-946f-557794f27044.png) <!--session: 1636704460468-56fd96ad-880c-43e7-b185-87285ec0f872--> <!--Version: Web v3.4.1-->
label: non_process
text:
broken link link shown in the diagram on page of the dg does not lead to a valid github page
binary_label: 0
Unnamed: 0: 12,687
id: 15,052,413,183
type: IssuesEvent
created_at: 2021-02-03 15:10:29
repo: prisma/prisma
repo_url: https://api.github.com/repos/prisma/prisma
action: closed
title: Postgres date types `time` and `date` are not working
labels: bug/2-confirmed kind/bug process/candidate status/needs-action team/migrations topic: native database types topic: types
body:
For example: SQL ```sql CREATE TABLE `User` ( id integer PRIMARY KEY, time time, ); ``` Prisma Schema ```prisma model User { id Int @id time DateTime? } ``` Code ```ts const date = new Date('2015-01-01T00:00:00Z') const data: UserCreateInput = { id: 1, time: date, date: date, timestamp: date, } await client.user.create({ data, }) ``` which results in this error ``` ConnectorError(ConnectorError { user_facing_error: None, kind: QueryError(Error { kind: ToSql(0), cause: Some(WrongType { postgres: Type(Time), rust: "chrono::naive::datetime::NaiveDateTime" }) }) }) ``` The exact same thing happens for the `date` type, resulting in this error: ``` ConnectorError(ConnectorError { user_facing_error: None, kind: QueryError(Error { kind: ToSql(0), cause: Some(WrongType { postgres: Type(Date), rust: "chrono::naive::datetime::NaiveDateTime" }) }) }) ``` reproduction available at https://github.com/prisma/datetime-experiments
index: 1.0
text_combine:
Postgres date types `time` and `date` are not working - For example: SQL ```sql CREATE TABLE `User` ( id integer PRIMARY KEY, time time, ); ``` Prisma Schema ```prisma model User { id Int @id time DateTime? } ``` Code ```ts const date = new Date('2015-01-01T00:00:00Z') const data: UserCreateInput = { id: 1, time: date, date: date, timestamp: date, } await client.user.create({ data, }) ``` which results in this error ``` ConnectorError(ConnectorError { user_facing_error: None, kind: QueryError(Error { kind: ToSql(0), cause: Some(WrongType { postgres: Type(Time), rust: "chrono::naive::datetime::NaiveDateTime" }) }) }) ``` The exact same thing happens for the `date` type, resulting in this error: ``` ConnectorError(ConnectorError { user_facing_error: None, kind: QueryError(Error { kind: ToSql(0), cause: Some(WrongType { postgres: Type(Date), rust: "chrono::naive::datetime::NaiveDateTime" }) }) }) ``` reproduction available at https://github.com/prisma/datetime-experiments
label: process
text:
postgres date types time and date are not working for example sql sql create table user id integer primary key time time prisma schema prisma model user id int id time datetime code ts const date new date const data usercreateinput id time date date date timestamp date await client user create data which results in this error connectorerror connectorerror user facing error none kind queryerror error kind tosql cause some wrongtype postgres type time rust chrono naive datetime naivedatetime the exact same thing happens for the date type resulting in this error connectorerror connectorerror user facing error none kind queryerror error kind tosql cause some wrongtype postgres type date rust chrono naive datetime naivedatetime reproduction available at
binary_label: 1
Unnamed: 0: 120,302
id: 4,787,800,973
type: IssuesEvent
created_at: 2016-10-30 06:53:42
repo: CS2103AUG2016-T17-C3/main
repo_url: https://api.github.com/repos/CS2103AUG2016-T17-C3/main
action: closed
title: Multi Undo
labels: priority.medium type.command.undo&redo type.enhancement
body:
i.e. able to use command `undo 5` to undo 5 times consider: 1) if 2 available undos but `undo 4` command given, Give different feedback to user? Still proceed with undo? 2) Since can't show all undo steps, change feedback for multi undo?
index: 1.0
text_combine:
Multi Undo - i.e. able to use command `undo 5` to undo 5 times consider: 1) if 2 available undos but `undo 4` command given, Give different feedback to user? Still proceed with undo? 2) Since can't show all undo steps, change feedback for multi undo?
label: non_process
text:
multi undo i e able to use command undo to undo times consider if available undos but undo command given give different feedback to user still proceed with undo since can t show all undo steps change feedback for multi undo
binary_label: 0
Unnamed: 0: 3,187
id: 6,259,059,621
type: IssuesEvent
created_at: 2017-07-14 17:07:06
repo: cyipt/cyipt
repo_url: https://api.github.com/repos/cyipt/cyipt
action: closed
title: Share how quietness algorithm works
labels: data preprocessing
body:
Agreement that greater internal scrutiny of the algorithm is needed before it can be used in CyIPT. The algorithm is not open source but Martin to investigate ways it can be shared internally.
index: 1.0
text_combine:
Share how quietness algorithm works - Agreement that greater internal scrutiny of the algorithm is needed before it can be used in CyIPT. The algorithm is not open source but Martin to investigate ways it can be shared internally.
label: process
text:
share how quietness algorithm works agreement that greater internal scrutiny of the algorithm is needed before it can be used in cyipt the algorithm is not open source but martin to investigate ways it can be shared internally
binary_label: 1
Unnamed: 0: 5,646
id: 8,507,342,569
type: IssuesEvent
created_at: 2018-10-30 18:48:52
repo: easy-software-ufal/annotations_repos
repo_url: https://api.github.com/repos/easy-software-ufal/annotations_repos
action: opened
title: grimmi/TheBelt Configuration keys are named for the properties, not the attributes
labels: C# RPV wrong processing
body:
Issue: `https://github.com/grimmi/TheBelt/issues/1` PR: `https://github.com/grimmi/TheBelt/commit/ec0f1ac65cd771d29344bae6814d73910e69d04e`
index: 1.0
text_combine:
grimmi/TheBelt Configuration keys are named for the properties, not the attributes - Issue: `https://github.com/grimmi/TheBelt/issues/1` PR: `https://github.com/grimmi/TheBelt/commit/ec0f1ac65cd771d29344bae6814d73910e69d04e`
label: process
text:
grimmi thebelt configuration keys are named for the properties not the attributes issue pr
binary_label: 1
Unnamed: 0: 232,548
id: 7,661,282,677
type: IssuesEvent
created_at: 2018-05-11 13:46:37
repo: inverse-inc/packetfence
repo_url: https://api.github.com/repos/inverse-inc/packetfence
action: closed
title: Admin Role does not have permission to modify node role when a role is specified in Allowed Node Roles field
labels: Priority: Medium Type: Bug
body:
In the web admin, Configuration -> System Configuration -> Admin Access, a new admin role was created with a role specified under Allowed node roles: ![image](https://user-images.githubusercontent.com/36165527/39597219-a6906034-4ee2-11e8-81c7-2f8656b8a33d.png) When a user belonging in this admin role tries to assign the role to a node, PF gives an error when clicking on Save: ![image](https://user-images.githubusercontent.com/36165527/39597306-f5dcb188-4ee2-11e8-896f-a1cb3c5edb1c.png) All actions for Nodes are defined: ![image](https://user-images.githubusercontent.com/36165527/39597404-3f5e5280-4ee3-11e8-98f3-8b010f8725f6.png)
index: 1.0
text_combine:
Admin Role does not have permission to modify node role when a role is specified in Allowed Node Roles field - In the web admin, Configuration -> System Configuration -> Admin Access, a new admin role was created with a role specified under Allowed node roles: ![image](https://user-images.githubusercontent.com/36165527/39597219-a6906034-4ee2-11e8-81c7-2f8656b8a33d.png) When a user belonging in this admin role tries to assign the role to a node, PF gives an error when clicking on Save: ![image](https://user-images.githubusercontent.com/36165527/39597306-f5dcb188-4ee2-11e8-896f-a1cb3c5edb1c.png) All actions for Nodes are defined: ![image](https://user-images.githubusercontent.com/36165527/39597404-3f5e5280-4ee3-11e8-98f3-8b010f8725f6.png)
label: non_process
text:
admin role does not have permission to modify node role when a role is specified in allowed node roles field in the web admin configuration system configuration admin access a new admin role was created with a role specified under allowed node roles when a user belonging in this admin role tries to assign the role to a node pf gives an error when clicking on save all actions for nodes are defined
binary_label: 0
Unnamed: 0: 21,890
id: 30,341,397,875
type: IssuesEvent
created_at: 2023-07-11 12:58:32
repo: kitspace/kitspace-v2
repo_url: https://api.github.com/repos/kitspace/kitspace-v2
action: closed
title: Add health check for https://partinfo.kitspace.org
labels: enhancement processor
body:
The processor depends on https://partinfo.kitspace.org, when it's down the processor error caused by it isn't clear.
index: 1.0
text_combine:
Add health check for https://partinfo.kitspace.org - The processor depends on https://partinfo.kitspace.org, when it's down the processor error caused by it isn't clear.
label: process
text:
add health check for the processor depends on when it s down the processor error caused by it isn t clear
binary_label: 1
Unnamed: 0: 10,134
id: 13,044,162,402
type: IssuesEvent
created_at: 2020-07-29 03:47:32
repo: tikv/tikv
repo_url: https://api.github.com/repos/tikv/tikv
action: closed
title: UCP: Migrate scalar function `JsonStorageSizeSig` from TiDB
labels: challenge-program-2 component/coprocessor difficulty/easy sig/coprocessor
body:
## Description Port the scalar function `JsonStorageSizeSig` from TiDB to coprocessor. ## Score * 50 ## Mentor(s) * @sticnarf ## Recommended Skills * Rust programming ## Learning Materials Already implemented expressions ported from TiDB - https://github.com/tikv/tikv/tree/master/components/tidb_query/src/rpn_expr) - https://github.com/tikv/tikv/tree/master/components/tidb_query/src/expr)
index: 2.0
text_combine:
UCP: Migrate scalar function `JsonStorageSizeSig` from TiDB - ## Description Port the scalar function `JsonStorageSizeSig` from TiDB to coprocessor. ## Score * 50 ## Mentor(s) * @sticnarf ## Recommended Skills * Rust programming ## Learning Materials Already implemented expressions ported from TiDB - https://github.com/tikv/tikv/tree/master/components/tidb_query/src/rpn_expr) - https://github.com/tikv/tikv/tree/master/components/tidb_query/src/expr)
label: process
text:
ucp migrate scalar function jsonstoragesizesig from tidb description port the scalar function jsonstoragesizesig from tidb to coprocessor score mentor s sticnarf recommended skills rust programming learning materials already implemented expressions ported from tidb
binary_label: 1
Unnamed: 0: 103,756
id: 8,948,408,426
type: IssuesEvent
created_at: 2019-01-25 02:15:02
repo: rancher/rancher
repo_url: https://api.github.com/repos/rancher/rancher
action: closed
title: UI links to CLI are broken
labels: area/packaging kind/bug status/resolved status/to-test version/2.0
body:
**Rancher versions:** rancher/server or rancher/rancher: v2.1.0 i.e. for Linux: ``` https://releases.rancher.com/cli/V2.0.5/rancher-linux-amd64-V2.0.5.tar.gz ``` Seems that CLI is published under `/cli2/` and we have an uppercase typo (`V` vs `v`) ``` https://releases.rancher.com/cli2/v2.0.5/rancher-linux-amd64-v2.0.5.tar.gz ```
index: 1.0
text_combine:
UI links to CLI are broken - **Rancher versions:** rancher/server or rancher/rancher: v2.1.0 i.e. for Linux: ``` https://releases.rancher.com/cli/V2.0.5/rancher-linux-amd64-V2.0.5.tar.gz ``` Seems that CLI is published under `/cli2/` and we have an uppercase typo (`V` vs `v`) ``` https://releases.rancher.com/cli2/v2.0.5/rancher-linux-amd64-v2.0.5.tar.gz ```
label: non_process
text:
ui links to cli are broken rancher versions rancher server or rancher rancher i e for linux seems that cli is published under and we have an uppercase typo v vs v
binary_label: 0
Unnamed: 0: 218,006
id: 7,329,934,210
type: IssuesEvent
created_at: 2018-03-05 07:59:29
repo: metasfresh/metasfresh
repo_url: https://api.github.com/repos/metasfresh/metasfresh
action: closed
title: Distribution Editor Move HU takes ages
labels: priority:high type:bug
body:
### Is this a bug or feature request? Bug ### What is the current behavior? When selecting a Distribution Orderline and starting the action "move HU" the dropdown list for HU does not come to an end (pending). #### Which are the steps to reproduce? Open, try and see. ### What is the expected or desired behavior? Shall work.
index: 1.0
text_combine:
Distribution Editor Move HU takes ages - ### Is this a bug or feature request? Bug ### What is the current behavior? When selecting a Distribution Orderline and starting the action "move HU" the dropdown list for HU does not come to an end (pending). #### Which are the steps to reproduce? Open, try and see. ### What is the expected or desired behavior? Shall work.
label: non_process
text:
distribution editor move hu takes ages is this a bug or feature request bug what is the current behavior when selecting a distribution orderline and starting the action move hu the dropdown list for hu does not come to an end pending which are the steps to reproduce open try and see what is the expected or desired behavior shall work
binary_label: 0
Unnamed: 0: 406,434
id: 27,564,781,054
type: IssuesEvent
created_at: 2023-03-08 02:18:07
repo: hashicorp/terraform-provider-aws
repo_url: https://api.github.com/repos/hashicorp/terraform-provider-aws
action: opened
title: [Docs]:
labels: documentation needs-triage
body:
### Documentation Link https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/lb_target_group_attachment ### Description Hi Team, I found one loopholes in aws_lb_target_group_attachment documentation, where target id defines instance id, but during launching new aws instance using resource block , we can't get id attribute as an output. Then how can we put target id in target attachment ? Please find below link --> 1. aws_instance: https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/instance#attributes-reference 2. aws_lb_target_group_attachment: https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/lb_target_group_attachment ### References _No response_ ### Would you like to implement a fix? None
index: 1.0
text_combine:
[Docs]: - ### Documentation Link https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/lb_target_group_attachment ### Description Hi Team, I found one loopholes in aws_lb_target_group_attachment documentation, where target id defines instance id, but during launching new aws instance using resource block , we can't get id attribute as an output. Then how can we put target id in target attachment ? Please find below link --> 1. aws_instance: https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/instance#attributes-reference 2. aws_lb_target_group_attachment: https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/lb_target_group_attachment ### References _No response_ ### Would you like to implement a fix? None
label: non_process
text:
documentation link description hi team i found one loopholes in aws lb target group attachment documentation where target id defines instance id but during launching new aws instance using resource block we can t get id attribute as an output then how can we put target id in target attachment please find below link aws instance aws lb target group attachment references no response would you like to implement a fix none
binary_label: 0
Unnamed: 0: 4,613
id: 7,459,524,240
type: IssuesEvent
created_at: 2018-03-30 15:40:56
repo: eobermuhlner/big-math
repo_url: https://api.github.com/repos/eobermuhlner/big-math
action: closed
title: Prepare release 2.0.0
labels: development process
body:
- [x] rename release note - [x] change version in `gradle.build` - [x] build and commit javadoc - [ ] upload artifacts to maven central - [x] uncomment task `uploadArchives` in `gradle.build` - [x] run `./gradlew uploadArchives` - [x] go to https://oss.sonatype.org/ - [x] in tab 'Staging Repositories' locate own Repository (typically at the end of the list) - [x] verify content of own Repository (version number!) - [x] `Close` own Repository - [x] `Refresh` - [x] `Release` own Repository - [ ] create github release from same artifacts - [x] Create new draft release - [x] Copy content of release note into draft release - [x] Add artefacts from gradle build to draft release - [x] Publish release - [x] update readme - [x] update docs/index.md - [x] update dependent projects - [x] create empty release note for next release
index: 1.0
text_combine:
Prepare release 2.0.0 - - [x] rename release note - [x] change version in `gradle.build` - [x] build and commit javadoc - [ ] upload artifacts to maven central - [x] uncomment task `uploadArchives` in `gradle.build` - [x] run `./gradlew uploadArchives` - [x] go to https://oss.sonatype.org/ - [x] in tab 'Staging Repositories' locate own Repository (typically at the end of the list) - [x] verify content of own Repository (version number!) - [x] `Close` own Repository - [x] `Refresh` - [x] `Release` own Repository - [ ] create github release from same artifacts - [x] Create new draft release - [x] Copy content of release note into draft release - [x] Add artefacts from gradle build to draft release - [x] Publish release - [x] update readme - [x] update docs/index.md - [x] update dependent projects - [x] create empty release note for next release
label: process
text:
prepare release rename release note change version in gradle build build and commit javadoc upload artifacts to maven central uncomment task uploadarchives in gradle build run gradlew uploadarchives go to in tab staging repositories locate own repository typically at the end of the list verify content of own repository version number close own repository refresh release own repository create github release from same artifacts create new draft release copy content of release note into draft release add artefacts from gradle build to draft release publish release update readme update docs index md update dependent projects create empty release note for next release
binary_label: 1
Unnamed: 0: 64,928
id: 18,960,828,811
type: IssuesEvent
created_at: 2021-11-19 04:27:25
repo: vector-im/element-web
repo_url: https://api.github.com/repos/vector-im/element-web
action: closed
title: Facebook's preview of Riot is massive
labels: T-Defect P2 S-Minor S-Tolerable
body:
I'm not sure if this is something that Riot can actually fix, but it is hilariously large for a preview: ![image](https://user-images.githubusercontent.com/1190097/45273895-f9336800-b471-11e8-9b5d-5b99efd94e26.png)
index: 1.0
text_combine:
Facebook's preview of Riot is massive - I'm not sure if this is something that Riot can actually fix, but it is hilariously large for a preview: ![image](https://user-images.githubusercontent.com/1190097/45273895-f9336800-b471-11e8-9b5d-5b99efd94e26.png)
label: non_process
text:
facebook s preview of riot is massive i m not sure if this is something that riot can actually fix but it is hilariously large for a preview
binary_label: 0
Unnamed: 0: 7,449
id: 10,558,142,139
type: IssuesEvent
created_at: 2019-10-04 08:23:28
repo: prisma/photonjs
repo_url: https://api.github.com/repos/prisma/photonjs
action: closed
title: Photon filters `in` prop missing `null` support
labels: process/candidate
body:
The Photon find filters like `StringFilter` have all properties optional with null support but the `in` and `notIn` have only `Enumerable<string>` optional type: ![image](https://user-images.githubusercontent.com/10618781/66115030-4f6ccf00-e5d0-11e9-8e01-6cb50a6749b1.png) This conflict with generated GraphQL types based on DMMF schema: ![image](https://user-images.githubusercontent.com/10618781/66115175-94910100-e5d0-11e9-88e1-c1846f4d6f6e.png) This apply to all `Enumerable` usage in Photon generated types: ![image](https://user-images.githubusercontent.com/10618781/66115258-c3a77280-e5d0-11e9-986d-2ba02f970a0d.png) It's a really big blocking problem for the integration of Prisma2 and GraphQL when we want to pass the nullable args to the photon query.
index: 1.0
text_combine:
Photon filters `in` prop missing `null` support - The Photon find filters like `StringFilter` have all properties optional with null support but the `in` and `notIn` have only `Enumerable<string>` optional type: ![image](https://user-images.githubusercontent.com/10618781/66115030-4f6ccf00-e5d0-11e9-8e01-6cb50a6749b1.png) This conflict with generated GraphQL types based on DMMF schema: ![image](https://user-images.githubusercontent.com/10618781/66115175-94910100-e5d0-11e9-88e1-c1846f4d6f6e.png) This apply to all `Enumerable` usage in Photon generated types: ![image](https://user-images.githubusercontent.com/10618781/66115258-c3a77280-e5d0-11e9-986d-2ba02f970a0d.png) It's a really big blocking problem for the integration of Prisma2 and GraphQL when we want to pass the nullable args to the photon query.
label: process
text:
photon filters in prop missing null support the photon find filters like stringfilter have all properties optional with null support but the in and notin have only enumerable optional type this conflict with generated graphql types based on dmmf schema this apply to all enumerable usage in photon generated types it s a really big blocking problem for the integration of and graphql when we want to pass the nullable args to the photon query
binary_label: 1
Unnamed: 0: 16,910
id: 22,238,577,914
type: IssuesEvent
created_at: 2022-06-09 00:57:31
repo: hashgraph/hedera-json-rpc-relay
repo_url: https://api.github.com/repos/hashgraph/hedera-json-rpc-relay
action: opened
title: Metrics for latency of calls should not be counter label
labels: bug P2 process
body:
### Description The metrics for the latency of request calls is currently captured as a counter label. This is incorrect as latency ms values often differ. ### Steps to reproduce 1. Run the relay 2. Check `/metrics` ### Additional context _No response_ ### Hedera network other ### Version v0.2.0-SNAPSHOT ### Operating system _No response_
index: 1.0
text_combine:
Metrics for latency of calls should not be counter label - ### Description The metrics for the latency of request calls is currently captured as a counter label. This is incorrect as latency ms values often differ. ### Steps to reproduce 1. Run the relay 2. Check `/metrics` ### Additional context _No response_ ### Hedera network other ### Version v0.2.0-SNAPSHOT ### Operating system _No response_
label: process
text:
metrics for latency of calls should not be counter label description the metrics for the latency of request calls is currently captured as a counter label this is incorrect as latency ms values often differ steps to reproduce run the relay check metrics additional context no response hedera network other version snapshot operating system no response
binary_label: 1
Unnamed: 0: 36,925
id: 8,198,668,139
type: IssuesEvent
created_at: 2018-08-31 17:14:17
repo: google/googletest
repo_url: https://api.github.com/repos/google/googletest
action: closed
title: fuse script puts EXPECT_FATAL_FAILURE into .cc file
labels: OpSys-All Priority-Medium Type-Defect Usability auto-migrated
body:
``` EXPECT_FATAL_FAILURE is a define in gtest-spi.h but is put into gtest-all.cc file after fusing. This requires to include .cc file into file where EXPECT_FATAL_FAILURE is used, which is very strange. Also it makes impossible to use EXPECT_FATAL_FAILURE in two or more files, which can be a real problem :) * What steps will reproduce the problem? 1. Just use EXPECT_FATAL_FAILURE with fused files * What version of Google Test are you using? On what operating system? 1.5.0, Windows (but this doesn't figure) ``` Original issue reported on code.google.com by `vasily.v...@gmail.com` on 28 Feb 2011 at 4:39
index: 1.0
text_combine:
fuse script puts EXPECT_FATAL_FAILURE into .cc file - ``` EXPECT_FATAL_FAILURE is a define in gtest-spi.h but is put into gtest-all.cc file after fusing. This requires to include .cc file into file where EXPECT_FATAL_FAILURE is used, which is very strange. Also it makes impossible to use EXPECT_FATAL_FAILURE in two or more files, which can be a real problem :) * What steps will reproduce the problem? 1. Just use EXPECT_FATAL_FAILURE with fused files * What version of Google Test are you using? On what operating system? 1.5.0, Windows (but this doesn't figure) ``` Original issue reported on code.google.com by `vasily.v...@gmail.com` on 28 Feb 2011 at 4:39
label: non_process
text:
fuse script puts expect fatal failure into cc file expect fatal failure is a define in gtest spi h but is put into gtest all cc file after fusing this requires to include cc file into file where expect fatal failure is used which is very strange also it makes impossible to use expect fatal failure in two or more files which can be a real problem what steps will reproduce the problem just use expect fatal failure with fused files what version of google test are you using on what operating system windows but this doesn t figure original issue reported on code google com by vasily v gmail com on feb at
binary_label: 0
Unnamed: 0: 23
id: 2,496,263,365
type: IssuesEvent
created_at: 2015-01-06 18:14:51
repo: vivo-isf/vivo-isf-ontology
repo_url: https://api.github.com/repos/vivo-isf/vivo-isf-ontology
action: closed
title: Cellular Pluripotency
labels: biological_process imported
body:
_From [rgar...@eagle-i.org](https://code.google.com/u/111247205719752845822/) on March 25, 2013 08:52:48_ \<b>**** Use the form below to request a new term ****</b> \<b>**** Scroll down to see a term request example ****</b> &#13; \<b>Please indicate the label for the proposed term:</b> Cellular Pluripotency (child of cell potency)&#13; &#13; \<b>Please provide a textual definition (with source):</b> Cells not fixed as to developmental potentialities (<a href="http://www.merriam-webster.com/dictionary/pluripotent" rel="nofollow">http://www.merriam-webster.com/dictionary/pluripotent</a> )&#13; &#13; \<b>Please add an example of usage for proposed term:</b> &#13; &#13; \<b>Please provide any additional optional information below. (e.g. desired</b> \<b>asserted SuperClass in ERO hierarchy or Reference Branch)</b> &#13; \<b>[ ] Instrument</b> [X] Biological process&#13; \<b>[ ] Disease</b> \<b>[ ] Human studies</b> \<b>[ ] Instrument</b> \<b>[ ] Organism</b> \<b>[ ] Reagent</b> \<b>[ ] Software</b> \<b>[ ] Technique</b> \<b>[ ] Organization</b> &#13; \<b>Additional info:</b> &#13; &#13; &#13; &#13; \<b>*** Term request example ****</b> &#13; \<b>Please indicate the label for the proposed term: four-terminal resistance</b> \<b>sensor</b> &#13; &#13; Please provide a textual definition (with source): "Four-terminal&#13; \<b>resistance sensors are electrical impedance measuring instruments that use</b> \<b>separate pairs of current-carrying and voltage-sensing electrodes to make</b> \<b>accurate measurements that can be used to compute a material's electrical</b> resistance." \<a href="http://en.wikipedia.org/wiki/Four-terminal_sensing" rel="nofollow">http://en.wikipedia.org/wiki/Four-terminal_sensing</a>&#13; &#13; &#13; \<b>Please add an example of usage for proposed term: Measuring the inherent</b> \<b>(per square) resistance of doped silicon.</b> &#13; &#13; \<b>Please provide any additional optional information below. (e.g. 
desired</b> \<b>asserted SuperClass in ERO hierarchy or Reference Branch)</b> &#13; \<b>[X] Instrument</b> \<b>[ ] Biological process</b> \<b>[ ] Disease</b> \<b>[ ] Human studies</b> \<b>[ ] Instrument</b> \<b>[ ] Organism</b> \<b>[ ] Reagent</b> \<b>[ ] Software</b> \<b>[ ] Technique</b> \<b>[ ] Organization</b> &#13; \<b>Additional info: AKA - 4T sensors, 4-wire sensor, or 4-point probe</b> &#13; _Original issue: http://code.google.com/p/eagle-i/issues/detail?id=204_
index: 1.0
text_combine:
Cellular Pluripotency - _From [rgar...@eagle-i.org](https://code.google.com/u/111247205719752845822/) on March 25, 2013 08:52:48_ \<b>**** Use the form below to request a new term ****</b> \<b>**** Scroll down to see a term request example ****</b> &#13; \<b>Please indicate the label for the proposed term:</b> Cellular Pluripotency (child of cell potency)&#13; &#13; \<b>Please provide a textual definition (with source):</b> Cells not fixed as to developmental potentialities (<a href="http://www.merriam-webster.com/dictionary/pluripotent" rel="nofollow">http://www.merriam-webster.com/dictionary/pluripotent</a> )&#13; &#13; \<b>Please add an example of usage for proposed term:</b> &#13; &#13; \<b>Please provide any additional optional information below. (e.g. desired</b> \<b>asserted SuperClass in ERO hierarchy or Reference Branch)</b> &#13; \<b>[ ] Instrument</b> [X] Biological process&#13; \<b>[ ] Disease</b> \<b>[ ] Human studies</b> \<b>[ ] Instrument</b> \<b>[ ] Organism</b> \<b>[ ] Reagent</b> \<b>[ ] Software</b> \<b>[ ] Technique</b> \<b>[ ] Organization</b> &#13; \<b>Additional info:</b> &#13; &#13; &#13; &#13; \<b>*** Term request example ****</b> &#13; \<b>Please indicate the label for the proposed term: four-terminal resistance</b> \<b>sensor</b> &#13; &#13; Please provide a textual definition (with source): "Four-terminal&#13; \<b>resistance sensors are electrical impedance measuring instruments that use</b> \<b>separate pairs of current-carrying and voltage-sensing electrodes to make</b> \<b>accurate measurements that can be used to compute a material's electrical</b> resistance." \<a href="http://en.wikipedia.org/wiki/Four-terminal_sensing" rel="nofollow">http://en.wikipedia.org/wiki/Four-terminal_sensing</a>&#13; &#13; &#13; \<b>Please add an example of usage for proposed term: Measuring the inherent</b> \<b>(per square) resistance of doped silicon.</b> &#13; &#13; \<b>Please provide any additional optional information below. (e.g. 
desired</b> \<b>asserted SuperClass in ERO hierarchy or Reference Branch)</b> &#13; \<b>[X] Instrument</b> \<b>[ ] Biological process</b> \<b>[ ] Disease</b> \<b>[ ] Human studies</b> \<b>[ ] Instrument</b> \<b>[ ] Organism</b> \<b>[ ] Reagent</b> \<b>[ ] Software</b> \<b>[ ] Technique</b> \<b>[ ] Organization</b> &#13; \<b>Additional info: AKA - 4T sensors, 4-wire sensor, or 4-point probe</b> &#13; _Original issue: http://code.google.com/p/eagle-i/issues/detail?id=204_
process
cellular pluripotency from on march use the form below to request a new term scroll down to see a term request example please indicate the label for the proposed term cellular pluripotency child of cell potency please provide a textual definition with source cells not fixed as to developmental potentialities please add an example of usage for proposed term please provide any additional optional information below e g desired asserted superclass in ero hierarchy or reference branch instrument biological process disease human studies instrument organism reagent software technique organization additional info term request example please indicate the label for the proposed term four terminal resistance sensor please provide a textual definition with source four terminal resistance sensors are electrical impedance measuring instruments that use separate pairs of current carrying and voltage sensing electrodes to make accurate measurements that can be used to compute a material s electrical resistance please add an example of usage for proposed term measuring the inherent per square resistance of doped silicon please provide any additional optional information below e g desired asserted superclass in ero hierarchy or reference branch instrument biological process disease human studies instrument organism reagent software technique organization additional info aka sensors wire sensor or point probe original issue
1
27,342
5,340,798,105
IssuesEvent
2017-02-17 00:13:38
scikit-image/scikit-image
https://api.github.com/repos/scikit-image/scikit-image
closed
Active contour does not document that snake length depends on input
difficulty: novice documentation
The output snake length is the same as the input boundary given--no points are added. This should be explicitly documented.
1.0
Active contour does not document that snake length depends on input - The output snake length is the same as the input boundary given--no points are added. This should be explicitly documented.
non_process
active contour does not document that snake length depends on input the output snake length is the same as the input boundary given no points are added this should be explicitly documented
0
36,381
12,405,027,226
IssuesEvent
2020-05-21 16:32:12
wrbejar/Nova8JavaVulnerable
https://api.github.com/repos/wrbejar/Nova8JavaVulnerable
opened
CVE-2018-1000632 (High) detected in dom4j-1.6.1.jar
security vulnerability
## CVE-2018-1000632 - High Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>dom4j-1.6.1.jar</b></p></summary> <p>dom4j: the flexible XML framework for Java</p> <p>Library home page: <a href="http://dom4j.org">http://dom4j.org</a></p> <p>Path to vulnerable library: /Nova8JavaVulnerable/target/JavaVulnerableLab/WEB-INF/lib/dom4j-1.6.1.jar,_depth_0/Nova8JavaVulnerable/target/JavaVulnerableLab/META-INF/maven/org.cysecurity/JavaVulnerableLab/target/JavaVulnerableLab/WEB-INF/lib/dom4j-1.6.1.jar,/root/.m2/repository/dom4j/dom4j/1.6.1/dom4j-1.6.1.jar,_depth_0/Nova8JavaVulnerable/target/JavaVulnerableLab/WEB-INF/lib/dom4j-1.6.1.jar,/root/.m2/repository/dom4j/dom4j/1.6.1/dom4j-1.6.1.jar</p> <p> Dependency Hierarchy: - :x: **dom4j-1.6.1.jar** (Vulnerable Library) <p>Found in HEAD commit: <a href="https://github.com/wrbejar/Nova8JavaVulnerable/commit/1387452ca7bf6063988c143381dd8c9938b0ab3f">1387452ca7bf6063988c143381dd8c9938b0ab3f</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary> <p> dom4j version prior to version 2.1.1 contains a CWE-91: XML Injection vulnerability in Class: Element. Methods: addElement, addAttribute that can result in an attacker tampering with XML documents through XML injection. This attack appear to be exploitable via an attacker specifying attributes or elements in the XML document. This vulnerability appears to have been fixed in 2.1.1 or later. 
<p>Publish Date: 2018-08-20 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2018-1000632>CVE-2018-1000632</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: None - Integrity Impact: High - Availability Impact: None </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2018-1000632">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2018-1000632</a></p> <p>Release Date: 2018-08-20</p> <p>Fix Resolution: org.dom4j:dom4j:2.0.3</p> </p> </details> <p></p> *** :rescue_worker_helmet: Automatic Remediation is available for this issue <!-- <REMEDIATE>{"isOpenPROnVulnerability":true,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"Java","groupId":"dom4j","packageName":"dom4j","packageVersion":"1.6.1","isTransitiveDependency":false,"dependencyTree":"dom4j:dom4j:1.6.1","isMinimumFixVersionAvailable":true,"minimumFixVersion":"org.dom4j:dom4j:2.0.3"}],"vulnerabilityIdentifier":"CVE-2018-1000632","vulnerabilityDetails":"dom4j version prior to version 2.1.1 contains a CWE-91: XML Injection vulnerability in Class: Element. Methods: addElement, addAttribute that can result in an attacker tampering with XML documents through XML injection. This attack appear to be exploitable via an attacker specifying attributes or elements in the XML document. 
This vulnerability appears to have been fixed in 2.1.1 or later.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2018-1000632","cvss3Severity":"high","cvss3Score":"7.5","cvss3Metrics":{"A":"None","AC":"Low","PR":"None","S":"Unchanged","C":"None","UI":"None","AV":"Network","I":"High"},"extraData":{}}</REMEDIATE> -->
True
CVE-2018-1000632 (High) detected in dom4j-1.6.1.jar - ## CVE-2018-1000632 - High Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>dom4j-1.6.1.jar</b></p></summary> <p>dom4j: the flexible XML framework for Java</p> <p>Library home page: <a href="http://dom4j.org">http://dom4j.org</a></p> <p>Path to vulnerable library: /Nova8JavaVulnerable/target/JavaVulnerableLab/WEB-INF/lib/dom4j-1.6.1.jar,_depth_0/Nova8JavaVulnerable/target/JavaVulnerableLab/META-INF/maven/org.cysecurity/JavaVulnerableLab/target/JavaVulnerableLab/WEB-INF/lib/dom4j-1.6.1.jar,/root/.m2/repository/dom4j/dom4j/1.6.1/dom4j-1.6.1.jar,_depth_0/Nova8JavaVulnerable/target/JavaVulnerableLab/WEB-INF/lib/dom4j-1.6.1.jar,/root/.m2/repository/dom4j/dom4j/1.6.1/dom4j-1.6.1.jar</p> <p> Dependency Hierarchy: - :x: **dom4j-1.6.1.jar** (Vulnerable Library) <p>Found in HEAD commit: <a href="https://github.com/wrbejar/Nova8JavaVulnerable/commit/1387452ca7bf6063988c143381dd8c9938b0ab3f">1387452ca7bf6063988c143381dd8c9938b0ab3f</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary> <p> dom4j version prior to version 2.1.1 contains a CWE-91: XML Injection vulnerability in Class: Element. Methods: addElement, addAttribute that can result in an attacker tampering with XML documents through XML injection. This attack appear to be exploitable via an attacker specifying attributes or elements in the XML document. This vulnerability appears to have been fixed in 2.1.1 or later. 
<p>Publish Date: 2018-08-20 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2018-1000632>CVE-2018-1000632</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: None - Integrity Impact: High - Availability Impact: None </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2018-1000632">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2018-1000632</a></p> <p>Release Date: 2018-08-20</p> <p>Fix Resolution: org.dom4j:dom4j:2.0.3</p> </p> </details> <p></p> *** :rescue_worker_helmet: Automatic Remediation is available for this issue <!-- <REMEDIATE>{"isOpenPROnVulnerability":true,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"Java","groupId":"dom4j","packageName":"dom4j","packageVersion":"1.6.1","isTransitiveDependency":false,"dependencyTree":"dom4j:dom4j:1.6.1","isMinimumFixVersionAvailable":true,"minimumFixVersion":"org.dom4j:dom4j:2.0.3"}],"vulnerabilityIdentifier":"CVE-2018-1000632","vulnerabilityDetails":"dom4j version prior to version 2.1.1 contains a CWE-91: XML Injection vulnerability in Class: Element. Methods: addElement, addAttribute that can result in an attacker tampering with XML documents through XML injection. This attack appear to be exploitable via an attacker specifying attributes or elements in the XML document. 
This vulnerability appears to have been fixed in 2.1.1 or later.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2018-1000632","cvss3Severity":"high","cvss3Score":"7.5","cvss3Metrics":{"A":"None","AC":"Low","PR":"None","S":"Unchanged","C":"None","UI":"None","AV":"Network","I":"High"},"extraData":{}}</REMEDIATE> -->
non_process
cve high detected in jar cve high severity vulnerability vulnerable library jar the flexible xml framework for java library home page a href path to vulnerable library target javavulnerablelab web inf lib jar depth target javavulnerablelab meta inf maven org cysecurity javavulnerablelab target javavulnerablelab web inf lib jar root repository jar depth target javavulnerablelab web inf lib jar root repository jar dependency hierarchy x jar vulnerable library found in head commit a href vulnerability details version prior to version contains a cwe xml injection vulnerability in class element methods addelement addattribute that can result in an attacker tampering with xml documents through xml injection this attack appear to be exploitable via an attacker specifying attributes or elements in the xml document this vulnerability appears to have been fixed in or later publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact high availability impact none for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution org rescue worker helmet automatic remediation is available for this issue isopenpronvulnerability true ispackagebased true isdefaultbranch true packages vulnerabilityidentifier cve vulnerabilitydetails version prior to version contains a cwe xml injection vulnerability in class element methods addelement addattribute that can result in an attacker tampering with xml documents through xml injection this attack appear to be exploitable via an attacker specifying attributes or elements in the xml document this vulnerability appears to have been fixed in or later vulnerabilityurl
0
51,745
12,800,839,450
IssuesEvent
2020-07-02 17:54:01
tomopy/tomopy
https://api.github.com/repos/tomopy/tomopy
closed
Building TomoPy from source on Windows
build question
I'm trying to build TomoPy from source on a fresh python 3.7 install (from python.org) in a Win10 environment. Running into some build issues and wondering if anybody can give me some tips? Here is where I am now: 1. I have installed MinGW and placed the binaries in my PATH following this procedure: [https://www.hellocodies.com/how-to-install-mingw-compiler-on-windows/](https://www.hellocodies.com/how-to-install-mingw-compiler-on-windows/ ) 2. I copied in `make` following the suggestion here: [https://stackoverflow.com/questions/32127524/how-to-install-and-use-make-in-windows](https://stackoverflow.com/questions/32127524/how-to-install-and-use-make-in-windows) 3. Going through the win-37.yml file, I was able to `pip install`: - h5py - mkl - mkl-devel - nose - numexpr - numpy - libopencv - pywavelets - scikit-build - scikit-image - scipy - setup_tools_scm - setup_tools_scm_git_archive - six 4. Since `dxchange` is not available via PyPl, I cloned its repo and successfully built/installed from source. Git... that... dirt off your shoulder... -- So far, so good! But now things start to go awry... -- 5. The package `mkl_fft` does not seem to be available for python 3.7 via PyPl and doesn't seem to be supported beyond 3.6 at the moment: https://stackoverflow.com/questions/59854306/python3-7-missing-pip-package Trying to build this one from source throws errors, so right now mkl_fft is not installed. Still a WIP... the errors here are a topic for a different channel I suppose, but if there's a non-conda shortcut for mkl_fft I am all ears. 6. The dependency `vs2017_win-64` appears to be an Anaconda package. I do have Visual Studio 2017 installed that I'm using for some other dev projects... what does tomopy need from there and can I serve it up via environment variable instead? At the moment, when I run `pip install .` in the cloned tomopy directory the build fails with the attached output. 
The errors seem to be thrown by `cmake` though I also see failures here: ``` -- Performing Test c_fp_model_precise - Failed -- Performing Test cxx_fp_model_precise - Failed ``` Full build output is here: [tomopy-build-output.txt](https://github.com/tomopy/tomopy/files/4737479/tomopy-build-output.txt) Is this purely an issue of missing dependencies (see 5 & 6 above), or is there a configuration step I'm missing here? In the meantime, I'll try making a parallel python 3.6 installation to see if that helps resolve some missing dependencies (and, consequently, facilitate the tomopy build).
1.0
Building TomoPy from source on Windows - I'm trying to build TomoPy from source on a fresh python 3.7 install (from python.org) in a Win10 environment. Running into some build issues and wondering if anybody can give me some tips? Here is where I am now: 1. I have installed MinGW and placed the binaries in my PATH following this procedure: [https://www.hellocodies.com/how-to-install-mingw-compiler-on-windows/](https://www.hellocodies.com/how-to-install-mingw-compiler-on-windows/ ) 2. I copied in `make` following the suggestion here: [https://stackoverflow.com/questions/32127524/how-to-install-and-use-make-in-windows](https://stackoverflow.com/questions/32127524/how-to-install-and-use-make-in-windows) 3. Going through the win-37.yml file, I was able to `pip install`: - h5py - mkl - mkl-devel - nose - numexpr - numpy - libopencv - pywavelets - scikit-build - scikit-image - scipy - setup_tools_scm - setup_tools_scm_git_archive - six 4. Since `dxchange` is not available via PyPl, I cloned its repo and successfully built/installed from source. Git... that... dirt off your shoulder... -- So far, so good! But now things start to go awry... -- 5. The package `mkl_fft` does not seem to be available for python 3.7 via PyPl and doesn't seem to be supported beyond 3.6 at the moment: https://stackoverflow.com/questions/59854306/python3-7-missing-pip-package Trying to build this one from source throws errors, so right now mkl_fft is not installed. Still a WIP... the errors here are a topic for a different channel I suppose, but if there's a non-conda shortcut for mkl_fft I am all ears. 6. The dependency `vs2017_win-64` appears to be an Anaconda package. I do have Visual Studio 2017 installed that I'm using for some other dev projects... what does tomopy need from there and can I serve it up via environment variable instead? At the moment, when I run `pip install .` in the cloned tomopy directory the build fails with the attached output. 
The errors seem to be thrown by `cmake` though I also see failures here: ``` -- Performing Test c_fp_model_precise - Failed -- Performing Test cxx_fp_model_precise - Failed ``` Full build output is here: [tomopy-build-output.txt](https://github.com/tomopy/tomopy/files/4737479/tomopy-build-output.txt) Is this purely an issue of missing dependencies (see 5 & 6 above), or is there a configuration step I'm missing here? In the meantime, I'll try making a parallel python 3.6 installation to see if that helps resolve some missing dependencies (and, consequently, facilitate the tomopy build).
non_process
building tomopy from source on windows i m trying to build tomopy from source on a fresh python install from python org in a environment running into some build issues and wondering if anybody can give me some tips here is where i am now i have installed mingw and placed the binaries in my path following this procedure i copied in make following the suggestion here going through the win yml file i was able to pip install mkl mkl devel nose numexpr numpy libopencv pywavelets scikit build scikit image scipy setup tools scm setup tools scm git archive six since dxchange is not available via pypl i cloned its repo and successfully built installed from source git that dirt off your shoulder so far so good but now things start to go awry the package mkl fft does not seem to be available for python via pypl and doesn t seem to be supported beyond at the moment trying to build this one from source throws errors so right now mkl fft is not installed still a wip the errors here are a topic for a different channel i suppose but if there s a non conda shortcut for mkl fft i am all ears the dependency win appears to be an anaconda package i do have visual studio installed that i m using for some other dev projects what does tomopy need from there and can i serve it up via environment variable instead at the moment when i run pip install in the cloned tomopy directory the build fails with the attached output the errors seem to be thrown by cmake though i also see failures here performing test c fp model precise failed performing test cxx fp model precise failed full build output is here is this purely an issue of missing dependencies see above or is there a configuration step i m missing here in the meantime i ll try making a parallel python installation to see if that helps resolve some missing dependencies and consequently facilitate the tomopy build
0
32,148
15,241,993,840
IssuesEvent
2021-02-19 09:15:40
wave-harmonic/crest
https://api.github.com/repos/wave-harmonic/crest
closed
Bad performance on PS4
performance
**Describe the bug** Dear Crest community, I've ran a few performance tests today (main scene) on the major consoles and found out that Crest performs quite bad on the PS4. Even worse than on the Switch. Tested a few different versions, URP and HDRP and the verdict stays the same (check the screenshot). I remember having issues to implement async gpu readback on PS4 in our last game, it was super slow and I had to split my requests over frames into multiple readbacks/requests. This happened only on the PS4 and according to the internal PS4 forum, it has something to do with the memory access management of the device. Unfortunately I'm not allowed to give here more insights due to NDA. I wonder if something similar could be the case here and if there is any way to get a better performance on PS4. **Screenshots / video** ![image](https://user-images.githubusercontent.com/9056181/90875222-20567c00-e3a1-11ea-8fbd-b9920f8bba5d.png) **Additional notes** You can get all the profiler logs here: [crest-profiler-data.zip](http://goga.ch/r/crest-profiler-data.zip)
True
Bad performance on PS4 - **Describe the bug** Dear Crest community, I've ran a few performance tests today (main scene) on the major consoles and found out that Crest performs quite bad on the PS4. Even worse than on the Switch. Tested a few different versions, URP and HDRP and the verdict stays the same (check the screenshot). I remember having issues to implement async gpu readback on PS4 in our last game, it was super slow and I had to split my requests over frames into multiple readbacks/requests. This happened only on the PS4 and according to the internal PS4 forum, it has something to do with the memory access management of the device. Unfortunately I'm not allowed to give here more insights due to NDA. I wonder if something similar could be the case here and if there is any way to get a better performance on PS4. **Screenshots / video** ![image](https://user-images.githubusercontent.com/9056181/90875222-20567c00-e3a1-11ea-8fbd-b9920f8bba5d.png) **Additional notes** You can get all the profiler logs here: [crest-profiler-data.zip](http://goga.ch/r/crest-profiler-data.zip)
non_process
bad performance on describe the bug dear crest community i ve ran a few performance tests today main scene on the major consoles and found out that crest performs quite bad on the even worse than on the switch tested a few different versions urp and hdrp and the verdict stays the same check the screenshot i remember having issues to implement async gpu readback on in our last game it was super slow and i had to split my requests over frames into multiple readbacks requests this happened only on the and according to the internal forum it has something to do with the memory access management of the device unfortunately i m not allowed to give here more insights due to nda i wonder if something similar could be the case here and if there is any way to get a better performance on screenshots video additional notes you can get all the profiler logs here
0
74,645
9,795,286,889
IssuesEvent
2019-06-11 02:58:48
ushahidi/platform
https://api.github.com/repos/ushahidi/platform
closed
Data model diagram
Archived documentation
@rjmackay asked me to create a diagram of our data model as I see it now to discuss at the retreat. ![diagram](https://user-images.githubusercontent.com/29209303/34419109-3fe617d8-ebbf-11e7-96ad-b4386da0e34e.png) Also in PDF form: [diagram.pdf](https://github.com/ushahidi/platform/files/1591799/diagram.pdf) I tried to represent all the things data can "be" in the platform and how they relate to each-other. Square things are user-configurable. Circular things are not. Please let me know if I've forgotten or misconstrued anything. may be of interest to @kinstelli @jshorland @willdoran @aoduor
1.0
Data model diagram - @rjmackay asked me to create a diagram of our data model as I see it now to discuss at the retreat. ![diagram](https://user-images.githubusercontent.com/29209303/34419109-3fe617d8-ebbf-11e7-96ad-b4386da0e34e.png) Also in PDF form: [diagram.pdf](https://github.com/ushahidi/platform/files/1591799/diagram.pdf) I tried to represent all the things data can "be" in the platform and how they relate to each-other. Square things are user-configurable. Circular things are not. Please let me know if I've forgotten or misconstrued anything. may be of interest to @kinstelli @jshorland @willdoran @aoduor
non_process
data model diagram rjmackay asked me to create a diagram of our data model as i see it now to discuss at the retreat also in pdf form i tried to represent all the things data can be in the platform and how they relate to each other square things are user configurable circular things are not please let me know if i ve forgotten or misconstrued anything may be of interest to kinstelli jshorland willdoran aoduor
0
153,508
13,507,323,137
IssuesEvent
2020-09-14 05:41:17
SAP/fundamental-ngx
https://api.github.com/repos/SAP/fundamental-ngx
closed
error in platform checkbox examples
bug documentation v.0.21.0
#### Is this a bug, enhancement, or feature request? bug #### Briefly describe your proposal. go to http://localhost:4200/fundamental-ngx#/platform/checkbox and check the console: you will see ``` core.js:4196 ERROR Error: ExpressionChangedAfterItHasBeenCheckedError: Expression has changed after it was checked. Previous value: '{}'. Current value: '{ "red": false, "blue": true }'. at throwErrorIfNoChangesMode (core.js:5466) at bindingUpdated (core.js:13111) at interpolation1 (core.js:13218) at Module.ɵɵtextInterpolate1 (core.js:16632) at PlatformCompactChekboxExampleComponent_Template (platform-binary-checkbox.component.html:63) at executeTemplate (core.js:7446) at refreshView (core.js:7315) at refreshComponent (core.js:8453) at refreshChildComponents (core.js:7108) at refreshView (core.js:7365) ``` #### Which versions of Angular and Fundamental Library for Angular are affected? (If this is a feature request, use current version.) latest
1.0
error in platform checkbox exampes - #### Is this a bug, enhancement, or feature request? bug #### Briefly describe your proposal. go to http://localhost:4200/fundamental-ngx#/platform/checkbox and check the console: you will see ``` core.js:4196 ERROR Error: ExpressionChangedAfterItHasBeenCheckedError: Expression has changed after it was checked. Previous value: '{}'. Current value: '{ "red": false, "blue": true }'. at throwErrorIfNoChangesMode (core.js:5466) at bindingUpdated (core.js:13111) at interpolation1 (core.js:13218) at Module.ɵɵtextInterpolate1 (core.js:16632) at PlatformCompactChekboxExampleComponent_Template (platform-binary-checkbox.component.html:63) at executeTemplate (core.js:7446) at refreshView (core.js:7315) at refreshComponent (core.js:8453) at refreshChildComponents (core.js:7108) at refreshView (core.js:7365) ``` #### Which versions of Angular and Fundamental Library for Angular are affected? (If this is a feature request, use current version.) latest
non_process
error in platform checkbox exampes is this a bug enhancement or feature request bug briefly describe your proposal go to and check the console you will see core js error error expressionchangedafterithasbeencheckederror expression has changed after it was checked previous value current value red false blue true at throwerrorifnochangesmode core js at bindingupdated core js at core js at module core js at platformcompactchekboxexamplecomponent template platform binary checkbox component html at executetemplate core js at refreshview core js at refreshcomponent core js at refreshchildcomponents core js at refreshview core js which versions of angular and fundamental library for angular are affected if this is a feature request use current version latest
0
15,744
19,910,559,692
IssuesEvent
2022-01-25 16:45:19
input-output-hk/high-assurance-legacy
https://api.github.com/repos/input-output-hk/high-assurance-legacy
closed
Formally prove lemma `sidetrack_addition`
type: enhancement language: isabelle topic: process calculus
Our goal is to formally prove the `sidetrack_addition` lemma described in #65.
1.0
Formally prove lemma `sidetrack_addition` - Our goal is to formally prove the `sidetrack_addition` lemma described in #65.
process
formally prove lemma sidetrack addition our goal is to formally prove the sidetrack addition lemma described in
1
264,861
8,320,394,535
IssuesEvent
2018-09-25 20:04:59
webcompat/web-bugs
https://api.github.com/repos/webcompat/web-bugs
closed
internetpf.itau.com.br - see bug description
browser-firefox priority-normal
<!-- @browser: Firefox 63.0b9 --> <!-- @ua_header: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:63.0) Gecko/20100101 Firefox/63.0 --> <!-- @reported_with: desktop-reporter --> **URL**: https://internetpf.itau.com.br/banklinepf//GRIPNET/bklcom.dll **Browser / Version**: Firefox 63.0b9 **Operating System**: Windows 10 **Tested Another Browser**: Yes **Problem type**: Something else **Description**: I can not type the password in the window. The window turns red and the keyboard does not work in this position. **Steps to Reproduce**: I filled in the data and in the final window to fill the password the keyboard stops working. [![Screenshot Description](https://webcompat.com/uploads/2018/9/38a899a4-71f3-4540-b951-510fd9452c9c-thumb.jpeg)](https://webcompat.com/uploads/2018/9/38a899a4-71f3-4540-b951-510fd9452c9c.jpeg) [![Screenshot Description](https://webcompat.com/uploads/2018/9/043b3131-28bc-4ba7-94cf-4de1a5a45864-thumb.jpeg)](https://webcompat.com/uploads/2018/9/043b3131-28bc-4ba7-94cf-4de1a5a45864.jpeg) <details> <summary>Browser Configuration</summary> <ul> <li>mixed active content blocked: false</li><li>buildID: 20180924202103</li><li>tracking content blocked: false</li><li>gfx.webrender.blob-images: true</li><li>gfx.webrender.all: false</li><li>mixed passive content blocked: false</li><li>gfx.webrender.enabled: false</li><li>image.mem.shared: true</li><li>channel: beta</li> </ul> </details> _From [webcompat.com](https://webcompat.com/) with ❤️_
1.0
internetpf.itau.com.br - see bug description - <!-- @browser: Firefox 63.0b9 --> <!-- @ua_header: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:63.0) Gecko/20100101 Firefox/63.0 --> <!-- @reported_with: desktop-reporter --> **URL**: https://internetpf.itau.com.br/banklinepf//GRIPNET/bklcom.dll **Browser / Version**: Firefox 63.0b9 **Operating System**: Windows 10 **Tested Another Browser**: Yes **Problem type**: Something else **Description**: I can not type the password in the window. The window turns red and the keyboard does not work in this position. **Steps to Reproduce**: I filled in the data and in the final window to fill the password the keyboard stops working. [![Screenshot Description](https://webcompat.com/uploads/2018/9/38a899a4-71f3-4540-b951-510fd9452c9c-thumb.jpeg)](https://webcompat.com/uploads/2018/9/38a899a4-71f3-4540-b951-510fd9452c9c.jpeg) [![Screenshot Description](https://webcompat.com/uploads/2018/9/043b3131-28bc-4ba7-94cf-4de1a5a45864-thumb.jpeg)](https://webcompat.com/uploads/2018/9/043b3131-28bc-4ba7-94cf-4de1a5a45864.jpeg) <details> <summary>Browser Configuration</summary> <ul> <li>mixed active content blocked: false</li><li>buildID: 20180924202103</li><li>tracking content blocked: false</li><li>gfx.webrender.blob-images: true</li><li>gfx.webrender.all: false</li><li>mixed passive content blocked: false</li><li>gfx.webrender.enabled: false</li><li>image.mem.shared: true</li><li>channel: beta</li> </ul> </details> _From [webcompat.com](https://webcompat.com/) with ❤️_
non_process
internetpf itau com br see bug description url browser version firefox operating system windows tested another browser yes problem type something else description i can not type the password in the window the window turns red and the keyboard does not work in this position steps to reproduce i filled in the data and in the final window to fill the password the keyboard stops working browser configuration mixed active content blocked false buildid tracking content blocked false gfx webrender blob images true gfx webrender all false mixed passive content blocked false gfx webrender enabled false image mem shared true channel beta from with ❤️
0
19,539
25,853,148,324
IssuesEvent
2022-12-13 11:51:14
Altinn/altinn-storage
https://api.github.com/repos/Altinn/altinn-storage
closed
Extend advanced search to support search on status
kind/user-story solution/sbl area/search area/process area/platform-sbl-integration Epic tm/no
## Description Currently we only support "search in": inbox, archive and trash. But there are also other statuses we can filter searches on. We should look into supporting these ## Screenshots ![image](https://user-images.githubusercontent.com/47737608/105331740-16e17200-5bd4-11eb-8667-33fd836cd7e8.png) - Exclude "Venter på andre" (Current user not authorized to perform task.) Must be a separate issue. - Could potentially use altinn task type: feedback - Exclude "Til signering". Signing is currently not a process step. ## Acceptance criteria TBD ## Specification tasks - [ ] Development tasks are defined - [ ] Test design / decide test need ## Development tasks - [ ] Documentation - [ ] QA - [ ] Manual test - [ ] Automated tes
1.0
Extend advanced search to support search on status - ## Description Currently we only support "search in": inbox, archive and trash. But there are also other statuses we can filter searches on. We should look into supporting these ## Screenshots ![image](https://user-images.githubusercontent.com/47737608/105331740-16e17200-5bd4-11eb-8667-33fd836cd7e8.png) - Exclude "Venter på andre" (Current user not authorized to perform task.) Must be a separate issue. - Could potentially use altinn task type: feedback - Exclude "Til signering". Signing is currently not a process step. ## Acceptance criteria TBD ## Specification tasks - [ ] Development tasks are defined - [ ] Test design / decide test need ## Development tasks - [ ] Documentation - [ ] QA - [ ] Manual test - [ ] Automated tes
process
extend advanced search to support search on status description currently we only support search in inbox archive and trash but there are also other statuses we can filter searches on we should look into supporting these screenshots exclude venter på andre current user not authorized to perform task must be a separate issue could potentially use altinn task type feedback exclude til signering signing is currently not a process step acceptance criteria tbd specification tasks development tasks are defined test design decide test need development tasks documentation qa manual test automated tes
1
235,789
7,743,018,371
IssuesEvent
2018-05-29 11:27:47
Gloirin/m2gTest
https://api.github.com/repos/Gloirin/m2gTest
closed
0000258: redesign contact edit dialog
Addressbook Feature Request low priority
**Reported by pschuele on 13 Aug 2008 15:47** - display personal information in 3 rows - birthday below image - relocate display name to the center
1.0
0000258: redesign contact edit dialog - **Reported by pschuele on 13 Aug 2008 15:47** - display personal information in 3 rows - birthday below image - relocate display name to the center
non_process
redesign contact edit dialog reported by pschuele on aug display personal information in rows birthday below image relocate display name to the center
0
9,734
12,731,012,515
IssuesEvent
2020-06-25 08:20:48
Politiwatch/privacyspy
https://api.github.com/repos/Politiwatch/privacyspy
closed
Hi Igor (test issue to demonstrate how @privacyspy-bot handles our process)
problem process test
This is a test issue that should (hopefully) demonstrate the bot.
1.0
Hi Igor (test issue to demonstrate how @privacyspy-bot handles our process) - This is a test issue that should (hopefully) demonstrate the bot.
process
hi igor test issue to demonstrate how privacyspy bot handles our process this is a test issue that should hopefully demonstrate the bot
1
14,091
16,980,548,719
IssuesEvent
2021-06-30 08:17:54
ESMValGroup/ESMValCore
https://api.github.com/repos/ESMValGroup/ESMValCore
closed
Time preprocessor uses numpy array instead of dask array
preprocessor
The line is: https://github.com/ESMValGroup/ESMValCore/blob/bcb384b4417a670d0b7e40c78be8bae05389df7d/esmvalcore/preprocessor/_time.py#L178 It is: ``` ones = np.ones_like(cube.data) ``` but it should be: ``` ones = da.ones(cube.data.shape) ``` Minor change, but this change helps me run a heavy recipe on jasmin sci1.
1.0
Time preprocessor uses numpy array instead of dask array - The line is: https://github.com/ESMValGroup/ESMValCore/blob/bcb384b4417a670d0b7e40c78be8bae05389df7d/esmvalcore/preprocessor/_time.py#L178 It is: ``` ones = np.ones_like(cube.data) ``` but it should be: ``` ones = da.ones(cube.data.shape) ``` Minor change, but this change helps me run a heavy recipe on jasmin sci1.
process
time preprocessor uses numpy array instead of dask array the line is it is ones np ones like cube data but it should be ones da ones cube data shape minor change but this change helps me run a heavy recipe on jasmin
1
2,440
5,219,714,801
IssuesEvent
2017-01-26 19:50:25
wpninjas/ninja-forms
https://api.github.com/repos/wpninjas/ninja-forms
closed
New radio event before form submission.
Feature Request FRONT: Processing
The before:submit radio message is sent before fields are validated. I think that the expectation should be that this fires only after fields are valid and the form is ready to submit. This should be moved to after the validation calls, and a new before:validation radio message should be added in the place that before:submit currently occupies.
1.0
New radio event before form submission. - The before:submit radio message is sent before fields are validated. I think that the expectation should be that this fires only after fields are valid and the form is ready to submit. This should be moved to after the validation calls, and a new before:validation radio message should be added in the place that before:submit currently occupies.
process
new radio event before form submission the before submit radio message is sent before fields are validated i think that the expectation should be that this fires only after fields are valid and the form is ready to submit this should be moved to after the validation calls and a new before validation radio message should be added in the place that before submit currently occupies
1
225,931
17,931,142,071
IssuesEvent
2021-09-10 09:23:50
input-output-hk/cardano-wallet
https://api.github.com/repos/input-output-hk/cardano-wallet
opened
Flaky test: `/CLI Specifications/SHELLEY_CLI_TRANSACTIONS/TRANS_DELETE_01`
Test failure
### Please ensure: - [X] This is actually a flaky test already present in the code and not caused by your PR. ### Context https://github.com/input-output-hk/cardano-wallet/pull/2885#issuecomment-916761510 ### Job name https://buildkite.com/input-output-hk/cardano-wallet/builds/16551#ed15249e-3da0-4700-b514-80607214777a ### Test case name(s) /CLI Specifications/SHELLEY_CLI_TRANSACTIONS/TRANS_DELETE_01 ### Error message ```shell src/Test/Integration/Scenario/CLI/Shelley/Transactions.hs:781:9: 1) CLI Specifications, SHELLEY_CLI_TRANSACTIONS, TRANS_DELETE_01 - Cannot forget pending transaction when not pending anymore via CLI "Ok.\n" does not contain "The transaction with id: d9d176cbf97fe8dd08956b3f2ab276d7e32be0aa2dafa8040d126f51af74077b cannot be forgotten as it is already in the ledger." To rerun use: --match "/CLI Specifications/SHELLEY_CLI_TRANSACTIONS/TRANS_DELETE_01 - Cannot forget pending transaction when not pending anymore via CLI/" Randomized with seed 1455226691 ``` ### Build link https://buildkite.com/input-output-hk/cardano-wallet/builds/16551#ed15249e-3da0-4700-b514-80607214777a
1.0
Flaky test: `/CLI Specifications/SHELLEY_CLI_TRANSACTIONS/TRANS_DELETE_01` - ### Please ensure: - [X] This is actually a flaky test already present in the code and not caused by your PR. ### Context https://github.com/input-output-hk/cardano-wallet/pull/2885#issuecomment-916761510 ### Job name https://buildkite.com/input-output-hk/cardano-wallet/builds/16551#ed15249e-3da0-4700-b514-80607214777a ### Test case name(s) /CLI Specifications/SHELLEY_CLI_TRANSACTIONS/TRANS_DELETE_01 ### Error message ```shell src/Test/Integration/Scenario/CLI/Shelley/Transactions.hs:781:9: 1) CLI Specifications, SHELLEY_CLI_TRANSACTIONS, TRANS_DELETE_01 - Cannot forget pending transaction when not pending anymore via CLI "Ok.\n" does not contain "The transaction with id: d9d176cbf97fe8dd08956b3f2ab276d7e32be0aa2dafa8040d126f51af74077b cannot be forgotten as it is already in the ledger." To rerun use: --match "/CLI Specifications/SHELLEY_CLI_TRANSACTIONS/TRANS_DELETE_01 - Cannot forget pending transaction when not pending anymore via CLI/" Randomized with seed 1455226691 ``` ### Build link https://buildkite.com/input-output-hk/cardano-wallet/builds/16551#ed15249e-3da0-4700-b514-80607214777a
non_process
flaky test cli specifications shelley cli transactions trans delete please ensure this is actually a flaky test already present in the code and not caused by your pr context job name test case name s cli specifications shelley cli transactions trans delete error message shell src test integration scenario cli shelley transactions hs cli specifications shelley cli transactions trans delete cannot forget pending transaction when not pending anymore via cli ok n does not contain the transaction with id cannot be forgotten as it is already in the ledger to rerun use match cli specifications shelley cli transactions trans delete cannot forget pending transaction when not pending anymore via cli randomized with seed build link
0
49,724
7,531,249,155
IssuesEvent
2018-04-15 02:57:02
awyand/tactical-mdm
https://api.github.com/repos/awyand/tactical-mdm
opened
Complete mockup/wireframe
documentation
Includes: 1. Screen-by-screen design layouts with annotations 2. Describe all UI/UX components and all data relevant to the screen
1.0
Complete mockup/wireframe - Includes: 1. Screen-by-screen design layouts with annotations 2. Describe all UI/UX components and all data relevant to the screen
non_process
complete mockup wireframe includes screen by screen design layouts with annotations describe all ui ux components and all data relevant to the screen
0
17,011
22,386,215,533
IssuesEvent
2022-06-17 00:51:25
figlesias221/ProyectoDevOps_Grupo3_IglesiasPerezMolinoloJuan
https://api.github.com/repos/figlesias221/ProyectoDevOps_Grupo3_IglesiasPerezMolinoloJuan
closed
Review Error de lógica en ResortPricingCalulator
task process
Cada product owner debe seguir como guía los escenarios descritos en Gherkin Esfuerzo en HS-P (por persona): Estimado: 1 Real: 1 (@matiasmolinolo )
1.0
Review Error de lógica en ResortPricingCalulator - Cada product owner debe seguir como guía los escenarios descritos en Gherkin Esfuerzo en HS-P (por persona): Estimado: 1 Real: 1 (@matiasmolinolo )
process
review error de lógica en resortpricingcalulator cada product owner debe seguir como guía los escenarios descritos en gherkin esfuerzo en hs p por persona estimado real matiasmolinolo
1
6,877
10,014,606,770
IssuesEvent
2019-07-15 17:57:18
googleapis/google-cloud-python
https://api.github.com/repos/googleapis/google-cloud-python
opened
Logging: systest teardown flakes with 404.
api: logging flaky testing type: process
From: https://source.cloud.google.com/results/invocations/91034a85-0df7-4b53-bba7-258e166bb3fe/targets/cloud-devrel%2Fclient-libraries%2Fgoogle-cloud-python%2Fpresubmit%2Flogging/log ```python ______________________ TestLogging.test_log_root_handler _______________________ self = <test_system.TestLogging testMethod=test_log_root_handler> def tearDown(self): retry = RetryErrors((NotFound, TooManyRequests, RetryError), max_tries=9) for doomed in self.to_delete: try: > retry(doomed.delete)() tests/system/test_system.py:111: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ ../test_utils/test_utils/retry.py:108: in wrapped_function return to_wrap(*args, **kwargs) google/cloud/logging/logger.py:223: in delete client.logging_api.logger_delete(self.project, self.name) google/cloud/logging/_gapic.py:139: in logger_delete self._gapic_api.delete_log(path) google/cloud/logging_v2/gapic/logging_service_v2_client.py:326: in delete_log request, retry=retry, timeout=timeout, metadata=metadata ../api_core/google/api_core/gapic_v1/method.py:143: in __call__ return wrapped_func(*args, **kwargs) ../api_core/google/api_core/retry.py:273: in retry_wrapped_func on_error=on_error, ../api_core/google/api_core/retry.py:182: in retry_target return target() ../api_core/google/api_core/timeout.py:214: in func_with_timeout return func(*args, **kwargs) ../api_core/google/api_core/grpc_helpers.py:59: in error_remapped_callable six.raise_from(exceptions.from_grpc_error(exc), exc) _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ value = NotFound(u'Log handler_root-1563211829508 does not exist',) from_value = <_Rendezvous of RPC that terminated with: status = StatusCode.NOT_FOUND deta...pc_message":"Log handler_root-1563211829508 does not exist","grpc_status":5}" > def raise_from(value, from_value): > raise value E NotFound: 404 Log handler_root-1563211829508 does not exist .nox/system-2-7/lib/python2.7/site-packages/six.py:737: NotFound ``` Superficially similar to #5632, but in this case, the log being deleted has a unique name.
1.0
Logging: systest teardown flakes with 404. - From: https://source.cloud.google.com/results/invocations/91034a85-0df7-4b53-bba7-258e166bb3fe/targets/cloud-devrel%2Fclient-libraries%2Fgoogle-cloud-python%2Fpresubmit%2Flogging/log ```python ______________________ TestLogging.test_log_root_handler _______________________ self = <test_system.TestLogging testMethod=test_log_root_handler> def tearDown(self): retry = RetryErrors((NotFound, TooManyRequests, RetryError), max_tries=9) for doomed in self.to_delete: try: > retry(doomed.delete)() tests/system/test_system.py:111: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ ../test_utils/test_utils/retry.py:108: in wrapped_function return to_wrap(*args, **kwargs) google/cloud/logging/logger.py:223: in delete client.logging_api.logger_delete(self.project, self.name) google/cloud/logging/_gapic.py:139: in logger_delete self._gapic_api.delete_log(path) google/cloud/logging_v2/gapic/logging_service_v2_client.py:326: in delete_log request, retry=retry, timeout=timeout, metadata=metadata ../api_core/google/api_core/gapic_v1/method.py:143: in __call__ return wrapped_func(*args, **kwargs) ../api_core/google/api_core/retry.py:273: in retry_wrapped_func on_error=on_error, ../api_core/google/api_core/retry.py:182: in retry_target return target() ../api_core/google/api_core/timeout.py:214: in func_with_timeout return func(*args, **kwargs) ../api_core/google/api_core/grpc_helpers.py:59: in error_remapped_callable six.raise_from(exceptions.from_grpc_error(exc), exc) _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ value = NotFound(u'Log handler_root-1563211829508 does not exist',) from_value = <_Rendezvous of RPC that terminated with: status = StatusCode.NOT_FOUND deta...pc_message":"Log handler_root-1563211829508 does not exist","grpc_status":5}" > def raise_from(value, from_value): > raise value E NotFound: 404 Log handler_root-1563211829508 does not exist .nox/system-2-7/lib/python2.7/site-packages/six.py:737: NotFound ``` Superficially similar to #5632, but in this case, the log being deleted has a unique name.
process
logging systest teardown flakes with from python testlogging test log root handler self def teardown self retry retryerrors notfound toomanyrequests retryerror max tries for doomed in self to delete try retry doomed delete tests system test system py test utils test utils retry py in wrapped function return to wrap args kwargs google cloud logging logger py in delete client logging api logger delete self project self name google cloud logging gapic py in logger delete self gapic api delete log path google cloud logging gapic logging service client py in delete log request retry retry timeout timeout metadata metadata api core google api core gapic method py in call return wrapped func args kwargs api core google api core retry py in retry wrapped func on error on error api core google api core retry py in retry target return target api core google api core timeout py in func with timeout return func args kwargs api core google api core grpc helpers py in error remapped callable six raise from exceptions from grpc error exc exc value notfound u log handler root does not exist from value rendezvous of rpc that terminated with status statuscode not found deta pc message log handler root does not exist grpc status def raise from value from value raise value e notfound log handler root does not exist nox system lib site packages six py notfound superficially similar to but in this case the log being deleted has a unique name
1
15,900
20,106,264,525
IssuesEvent
2022-02-07 10:47:46
ESMValGroup/ESMValCore
https://api.github.com/repos/ESMValGroup/ESMValCore
closed
Comparing several single model ensembles
enhancement preprocessor
Hi all, two very common things that users will want to do in ESMValTool are: 1. Compare single model ensemble means. For instance, I have a HadGEM2-ES 4 member ensemble and UKESM 12 member ensemble. I want to make some time series plots showing the time development of the two single-model ensemble means. How do I do that? Is there a way to use the multi_model_statistics preprocessor so that it only compares single models? Or perhaps we need a single_model_ensemble_statistics preprocessor? 2. Calculate the differences between two specific time periods in the preprocessor stage. For instance, I want to look at average surface temperature in the years 1975-2000 minus the average surface temperature between 1875-1900. I don't know of a way to do this in the preprocessor, it seems to be such a common job that it would be great to get it as a preprocessor instead of in the diagnostic stage. Any ideas? Lee
1.0
Comparing several single model ensembles - Hi all, two very common things that users will want to do in ESMValTool are: 1. Compare single model ensemble means. For instance, I have a HadGEM2-ES 4 member ensemble and UKESM 12 member ensemble. I want to make some time series plots showing the time development of the two single-model ensemble means. How do I do that? Is there a way to use the multi_model_statistics preprocessor so that it only compares single models? Or perhaps we need a single_model_ensemble_statistics preprocessor? 2. Calculate the differences between two specific time periods in the preprocessor stage. For instance, I want to look at average surface temperature in the years 1975-2000 minus the average surface temperature between 1875-1900. I don't know of a way to do this in the preprocessor, it seems to be such a common job that it would be great to get it as a preprocessor instead of in the diagnostic stage. Any ideas? Lee
process
comparing several single model ensembles hi all two very common things that users will want to do in esmvaltool are compare single model ensemble means for instance i have a es member ensemble and ukesm member ensemble i want to make some time series plots showing the time development of the two single model ensemble means how do i do that is there a way to use the multi model statistics preprocessor so that it only compares single models or perhaps we need a single model ensemble statistics preprocessor calculate the differences between two specific time periods in the preprocessor stage for instance i want to look at average surface temperature in the years minus the average surface temperature between i don t know of a way to do this in the preprocessor it seems to be such a common job that it would be great to get it as a preprocessor instead of in the diagnostic stage any ideas lee
1
626,320
19,807,691,371
IssuesEvent
2022-01-19 08:53:39
IATI/ckanext-iati
https://api.github.com/repos/IATI/ckanext-iati
opened
Registry API package_search does not return data for xm-dac-3-1
High priority Q1
It was reported that the **package_search** does not return data when searching for publisher xm-dac-3-1 For other publishers data is returned. Registry API endpoint returning data: https://iatiregistry.org/api/3/action/package_search?q=publisher_iati_id:dk-cvr-26487013 Registry API endpoint that does not return data: https://iatiregistry.org/api/3/action/package_search?q=publisher_iati_id:xm-dac-3-1
1.0
Registry API package_search does not return data for xm-dac-3-1 - It was reported that the **package_search** does not return data when searching for publisher xm-dac-3-1 For other publishers data is returned. Registry API endpoint returning data: https://iatiregistry.org/api/3/action/package_search?q=publisher_iati_id:dk-cvr-26487013 Registry API endpoint that does not return data: https://iatiregistry.org/api/3/action/package_search?q=publisher_iati_id:xm-dac-3-1
non_process
registry api package search does not return data for xm dac it was reported that the package search does not return data when searching for publisher xm dac for other publishers data is returned registry api endpoint returning data registry api endpoint that does not return data
0
714,568
24,566,692,817
IssuesEvent
2022-10-13 04:11:52
encorelab/ck-board
https://api.github.com/repos/encorelab/ck-board
opened
Create "Learner Model" UI
enhancement high priority
The Learner Model UI displays graphs of student data (some data manually entered by the teacher and some gathered from the CK Board). 1. For instance, we may add the Learner Model UI as part of the CK Student Monitor UI (below the task monitoring tools) <img width="212" alt="Screen Shot 2022-10-12 at 11 29 13 PM" src="https://user-images.githubusercontent.com/6416247/195492500-b9e2390c-5f3d-43b4-8ea2-49478ad2156d.png"> 2. By selecting either View by Content or View by SEL, the teacher gets an overview of all students by that metric <img width="794" alt="Screen Shot 2022-10-12 at 11 30 01 PM" src="https://user-images.githubusercontent.com/6416247/195492582-dd373af5-fe5c-4763-91c6-1794250c18ff.png"> 3. By selecting a student's name, the teacher can view or modify data for each student. Displayed data includes: (1) Content knowledge - entered by the teacher based on diagnostic and formative assessments, (2) Social-emotional learning (SEL) data - entered by the teacher based on diagnostic and re-administration of SEL survey, and (3) dynamic system data - including goals set by the teacher <img width="795" alt="Screen Shot 2022-10-12 at 11 32 19 PM" src="https://user-images.githubusercontent.com/6416247/195492850-3df71e94-26eb-40a2-8d76-5970e47fbaee.png"> 4. TBD To see demo code of the javascript for the above visualizations or to explore other interactive demos, please explore the resources below: - [Highcharts demo code.zip](https://github.com/encorelab/ck-board/files/9770363/Highcharts.demo.code.zip) - https://www.highcharts.com/demo/gauge-activity
1.0
Create "Learner Model" UI - The Learner Model UI displays graphs of student data (some data manually entered by the teacher and some gathered from the CK Board). 1. For instance, we may add the Learner Model UI as part of the CK Student Monitor UI (below the task monitoring tools) <img width="212" alt="Screen Shot 2022-10-12 at 11 29 13 PM" src="https://user-images.githubusercontent.com/6416247/195492500-b9e2390c-5f3d-43b4-8ea2-49478ad2156d.png"> 2. By selecting either View by Content or View by SEL, the teacher gets an overview of all students by that metric <img width="794" alt="Screen Shot 2022-10-12 at 11 30 01 PM" src="https://user-images.githubusercontent.com/6416247/195492582-dd373af5-fe5c-4763-91c6-1794250c18ff.png"> 3. By selecting a student's name, the teacher can view or modify data for each student. Displayed data includes: (1) Content knowledge - entered by the teacher based on diagnostic and formative assessments, (2) Social-emotional learning (SEL) data - entered by the teacher based on diagnostic and re-administration of SEL survey, and (3) dynamic system data - including goals set by the teacher <img width="795" alt="Screen Shot 2022-10-12 at 11 32 19 PM" src="https://user-images.githubusercontent.com/6416247/195492850-3df71e94-26eb-40a2-8d76-5970e47fbaee.png"> 4. TBD To see demo code of the javascript for the above visualizations or to explore other interactive demos, please explore the resources below: - [Highcharts demo code.zip](https://github.com/encorelab/ck-board/files/9770363/Highcharts.demo.code.zip) - https://www.highcharts.com/demo/gauge-activity
non_process
create learner model ui the learner model ui displays graphs of student data some data manually entered by the teacher and some gathered from the ck board for instance we may add the learner model ui as part of the ck student monitor ui below the task monitoring tools img width alt screen shot at pm src by selecting either view by content or view by sel the teacher gets an overview of all students by that metric img width alt screen shot at pm src by selecting a student s name the teacher can view or modify data for each student displayed data includes content knowledge entered by the teacher based on diagnostic and formative assessments social emotional learning sel data entered by the teacher based on diagnostic and re administration of sel survey and dynamic system data including goals set by the teacher img width alt screen shot at pm src tbd to see demo code of the javascript for the above visualizations or to explore other interactive demos please explore the resources below
0
478,895
13,787,752,326
IssuesEvent
2020-10-09 05:44:34
wso2/product-is
https://api.github.com/repos/wso2/product-is
closed
[ISSUE] Script issues while source oracle scripts against oracle 12c and 19c with 5.9.0 latest wum
Priority/Highest Severity/Critical bug
**Environment** wso2is-5.9.0+1585323544240.full **Steps to Reproduce** Source dbscripts/oracle.sql against oracle 12c and oracle 19c **Observation** Below errors were noticed ``` table REG_CLUSTER_LOCK created. table REG_LOG created. sequence REG_LOG_SEQUENCE created. index REG_LOG_IND_BY_REGLOG created. TRIGGER REG_LOG_TRIGGER compiled table REG_PATH created. Error starting at line 40 in command: CREATE INDEX REG_PATH_IND_BY_PATH_VALUE ON REG_PATH(REG_PATH_VALUE, REG_TENANT_ID) Error at Command Line:40 Column:53 Error report: SQL Error: ORA-01408: such column list already indexed 01408. 00000 - "such column list already indexed" *Cause: *Action: index REG_PATH_IND_BY_PARENT_ID created. sequence REG_PATH_SEQUENCE created. ``` ``` TRIGGER UM_HYBRID_REMEMBER_ME_TRIGGER compiled table UM_SYSTEM_ROLE created. sequence UM_SYSTEM_ROLE_SEQUENCE created. Error starting at line 784 in command: CREATE INDEX SYSTEM_ROLE_IND_BY_RN_TI ON UM_SYSTEM_ROLE(UM_ROLE_NAME, UM_TENANT_ID) Error at Command Line:784 Column:57 Error report: SQL Error: ORA-01408: such column list already indexed 01408. 00000 - "such column list already indexed" *Cause: *Action: TRIGGER UM_SYSTEM_ROLE_TRIGGER compiled table UM_SYSTEM_USER_ROLE created. sequence UM_SYSTEM_USER_ROLE_SEQUENCE created. TRIGGER UM_SYSTEM_USER_ROLE_TRIGGER compiled ``` Identity/uma/oracle.sql While executing the uma oracle script noticed below errors ``` table IDN_UMA_RESOURCE created. Error starting at line 14 in command: CREATE SEQUENCE IDN_UMA_RESOURCE_SEQ START WITH 1 INCREMENT BY 1 NOCACHE / CREATE OR REPLACE TRIGGER IDN_UMA_RESOURCE_TRIG BEFORE INSERT ON IDN_UMA_RESOURCE REFERENCING NEW AS NEW FOR EACH ROW BEGIN SELECT IDN_UMA_RESOURCE_SEQ.nextval INTO :NEW.ID FROM dual Error at Command Line:15 Column:3 Error report: SQL Error: ORA-00933: SQL command not properly ended 00933. 00000 - "SQL command not properly ended" *Cause: *Action: Error starting at line 24 in command: END Error report: Unknown Command index IDX_RID created. index IDX_USER created. index IDX_USER_RID created. Error starting at line 36 in command: CREATE TABLE IDN_UMA_RESOURCE_META_DATA ( ID INTEGER, RESOURCE_IDENTITY INTEGER NOT NULL, PROPERTY_KEY VARCHAR2(40), PROPERTY_VALUE VARCHAR2(255), PRIMARY KEY (ID), FOREIGN KEY (RESOURCE_IDENTITY) REFERENCES IDN_UMA_RESOURCE (ID) ON DELETE CASCADE ) / CREATE SEQUENCE IDN_UMA_RESOURCE_META_DATA_SEQ START WITH 1 INCREMENT BY 1 NOCACHE Error at Command Line:44 Column:3 Error report: SQL Error: ORA-00922: missing or invalid option 00922. 00000 - "missing or invalid option" *Cause: *Action: Error starting at line 49 in command: CREATE OR REPLACE TRIGGER IDN_UMA_RESOURCE_METADATA_TRIG BEFORE INSERT ON IDN_UMA_RESOURCE_META_DATA REFERENCING NEW AS NEW FOR EACH ROW BEGIN SELECT IDN_UMA_RESOURCE_META_DATA_SEQ.nextval INTO :NEW.ID FROM dual; END; Error report: ORA-00942: table or view does not exist 00942. 00000 - "table or view does not exist" *Cause: *Action: Error starting at line 59 in command: CREATE TABLE IDN_UMA_RESOURCE_SCOPE ( ID INTEGER, RESOURCE_IDENTITY INTEGER NOT NULL, SCOPE_NAME VARCHAR2(255), PRIMARY KEY (ID), FOREIGN KEY (RESOURCE_IDENTITY) REFERENCES IDN_UMA_RESOURCE (ID) ON DELETE CASCADE ) / CREATE SEQUENCE IDN_UMA_RESOURCE_SCOPE_SEQ START WITH 1 INCREMENT BY 1 NOCACHE Error at Command Line:66 Column:3 Error report: SQL Error: ORA-00922: missing or invalid option 00922. 00000 - "missing or invalid option" *Cause: *Action: Error starting at line 71 in command: CREATE OR REPLACE TRIGGER IDN_UMA_RESOURCE_SCOPE_TRIG BEFORE INSERT ON IDN_UMA_RESOURCE_SCOPE REFERENCING NEW AS NEW FOR EACH ROW BEGIN SELECT IDN_UMA_RESOURCE_SCOPE_SEQ.nextval INTO :NEW.ID FROM dual; END; Error report: ORA-00942: table or view does not exist 00942. 00000 - "table or view does not exist" *Cause: *Action: Error starting at line 81 in command: CREATE INDEX IDX_RS ON IDN_UMA_RESOURCE_SCOPE (SCOPE_NAME) Error at Command Line:81 Column:24 Error report: SQL Error: ORA-00942: table or view does not exist 00942. 00000 - "table or view does not exist" *Cause: *Action: Error starting at line 84 in command: CREATE TABLE IDN_UMA_PERMISSION_TICKET ( ID INTEGER, PT VARCHAR2(255) NOT NULL, TIME_CREATED TIMESTAMP NOT NULL, EXPIRY_TIME TIMESTAMP NOT NULL, TICKET_STATE VARCHAR2(25) DEFAULT 'ACTIVE', TENANT_ID INTEGER DEFAULT -1234, TOKEN_ID VARCHAR(255), PRIMARY KEY (ID) ) / CREATE SEQUENCE IDN_UMA_PERMISSION_TICKET_SEQ START WITH 1 INCREMENT BY 1 NOCACHE Error at Command Line:94 Column:3 Error report: SQL Error: ORA-00922: missing or invalid option 00922. 00000 - "missing or invalid option" *Cause: *Action: Error starting at line 99 in command: CREATE OR REPLACE TRIGGER IDN_UMA_PERMISSION_TICKET_TRIG BEFORE INSERT ON IDN_UMA_PERMISSION_TICKET REFERENCING NEW AS NEW FOR EACH ROW BEGIN SELECT IDN_UMA_PERMISSION_TICKET_SEQ.nextval INTO :NEW.ID FROM dual; END; Error report: ORA-00942: table or view does not exist 00942. 00000 - "table or view does not exist" *Cause: *Action: Error starting at line 109 in command: CREATE INDEX IDX_PT ON IDN_UMA_PERMISSION_TICKET (PT) Error at Command Line:109 Column:24 Error report: SQL Error: ORA-00942: table or view does not exist 00942. 00000 - "table or view does not exist" *Cause: *Action: Error starting at line 112 in command: CREATE TABLE IDN_UMA_PT_RESOURCE ( ID INTEGER, PT_RESOURCE_ID INTEGER NOT NULL, PT_ID INTEGER NOT NULL, PRIMARY KEY (ID), FOREIGN KEY (PT_ID) REFERENCES IDN_UMA_PERMISSION_TICKET (ID) ON DELETE CASCADE, FOREIGN KEY (PT_RESOURCE_ID) REFERENCES IDN_UMA_RESOURCE (ID) ON DELETE CASCADE ) / CREATE SEQUENCE IDN_UMA_PT_RESOURCE_SEQ START WITH 1 INCREMENT BY 1 NOCACHE Error at Command Line:120 Column:3 Error report: SQL Error: ORA-00922: missing or invalid option 00922. 00000 - "missing or invalid option" *Cause: *Action: Error starting at line 125 in command: CREATE OR REPLACE TRIGGER IDN_UMA_PT_RESOURCE_TRIG BEFORE INSERT ON IDN_UMA_PT_RESOURCE REFERENCING NEW AS NEW FOR EACH ROW BEGIN SELECT IDN_UMA_PT_RESOURCE_SEQ.nextval INTO :NEW.ID FROM dual; END; Error report: ORA-00942: table or view does not exist 00942. 00000 - "table or view does not exist" *Cause: *Action: Error starting at line 135 in command: CREATE TABLE IDN_UMA_PT_RESOURCE_SCOPE ( ID INTEGER, PT_RESOURCE_ID INTEGER NOT NULL, PT_SCOPE_ID INTEGER NOT NULL, PRIMARY KEY (ID), FOREIGN KEY (PT_RESOURCE_ID) REFERENCES IDN_UMA_PT_RESOURCE (ID) ON DELETE CASCADE, FOREIGN KEY (PT_SCOPE_ID) REFERENCES IDN_UMA_RESOURCE_SCOPE (ID) ON DELETE CASCADE ) / CREATE SEQUENCE IDN_UMA_PT_RESOURCE_SCOPE_SEQ START WITH 1 INCREMENT BY 1 NOCACHE Error at Command Line:143 Column:3 Error report: SQL Error: ORA-00922: missing or invalid option 00922. 00000 - "missing or invalid option" *Cause: *Action: Error starting at line 148 in command: CREATE OR REPLACE TRIGGER IDN_UMA_PT_RESOURCE_SCOPE_TRIG BEFORE INSERT ON IDN_UMA_PT_RESOURCE_SCOPE REFERENCING NEW AS NEW FOR EACH ROW BEGIN SELECT IDN_UMA_PT_RESOURCE_SCOPE_SEQ.nextval INTO :NEW.ID FROM dual; END; Error report: ORA-00942: table or view does not exist 00942. 00000 - "table or view does not exist" *Cause: *Action: ```
1.0
[ISSUE] Script issues while source oracle scripts against oracle 12c and 19c with 5.9.0 latest wum - **Environment** wso2is-5.9.0+1585323544240.full **Steps to Reproduce** Source dbscripts/oracle.sql against oracle 12c and oracle 19c **Observation** Below errors were noticed ``` table REG_CLUSTER_LOCK created. table REG_LOG created. sequence REG_LOG_SEQUENCE created. index REG_LOG_IND_BY_REGLOG created. TRIGGER REG_LOG_TRIGGER compiled table REG_PATH created. Error starting at line 40 in command: CREATE INDEX REG_PATH_IND_BY_PATH_VALUE ON REG_PATH(REG_PATH_VALUE, REG_TENANT_ID) Error at Command Line:40 Column:53 Error report: SQL Error: ORA-01408: such column list already indexed 01408. 00000 - "such column list already indexed" *Cause: *Action: index REG_PATH_IND_BY_PARENT_ID created. sequence REG_PATH_SEQUENCE created. ``` ``` TRIGGER UM_HYBRID_REMEMBER_ME_TRIGGER compiled table UM_SYSTEM_ROLE created. sequence UM_SYSTEM_ROLE_SEQUENCE created. Error starting at line 784 in command: CREATE INDEX SYSTEM_ROLE_IND_BY_RN_TI ON UM_SYSTEM_ROLE(UM_ROLE_NAME, UM_TENANT_ID) Error at Command Line:784 Column:57 Error report: SQL Error: ORA-01408: such column list already indexed 01408. 00000 - "such column list already indexed" *Cause: *Action: TRIGGER UM_SYSTEM_ROLE_TRIGGER compiled table UM_SYSTEM_USER_ROLE created. sequence UM_SYSTEM_USER_ROLE_SEQUENCE created. TRIGGER UM_SYSTEM_USER_ROLE_TRIGGER compiled ``` Identity/uma/oracle.sql While executing the uma oracle script noticed below errors ``` table IDN_UMA_RESOURCE created. Error starting at line 14 in command: CREATE SEQUENCE IDN_UMA_RESOURCE_SEQ START WITH 1 INCREMENT BY 1 NOCACHE / CREATE OR REPLACE TRIGGER IDN_UMA_RESOURCE_TRIG BEFORE INSERT ON IDN_UMA_RESOURCE REFERENCING NEW AS NEW FOR EACH ROW BEGIN SELECT IDN_UMA_RESOURCE_SEQ.nextval INTO :NEW.ID FROM dual Error at Command Line:15 Column:3 Error report: SQL Error: ORA-00933: SQL command not properly ended 00933. 
00000 - "SQL command not properly ended" *Cause: *Action: Error starting at line 24 in command: END Error report: Unknown Command index IDX_RID created. index IDX_USER created. index IDX_USER_RID created. Error starting at line 36 in command: CREATE TABLE IDN_UMA_RESOURCE_META_DATA ( ID INTEGER, RESOURCE_IDENTITY INTEGER NOT NULL, PROPERTY_KEY VARCHAR2(40), PROPERTY_VALUE VARCHAR2(255), PRIMARY KEY (ID), FOREIGN KEY (RESOURCE_IDENTITY) REFERENCES IDN_UMA_RESOURCE (ID) ON DELETE CASCADE ) / CREATE SEQUENCE IDN_UMA_RESOURCE_META_DATA_SEQ START WITH 1 INCREMENT BY 1 NOCACHE Error at Command Line:44 Column:3 Error report: SQL Error: ORA-00922: missing or invalid option 00922. 00000 - "missing or invalid option" *Cause: *Action: Error starting at line 49 in command: CREATE OR REPLACE TRIGGER IDN_UMA_RESOURCE_METADATA_TRIG BEFORE INSERT ON IDN_UMA_RESOURCE_META_DATA REFERENCING NEW AS NEW FOR EACH ROW BEGIN SELECT IDN_UMA_RESOURCE_META_DATA_SEQ.nextval INTO :NEW.ID FROM dual; END; Error report: ORA-00942: table or view does not exist 00942. 00000 - "table or view does not exist" *Cause: *Action: Error starting at line 59 in command: CREATE TABLE IDN_UMA_RESOURCE_SCOPE ( ID INTEGER, RESOURCE_IDENTITY INTEGER NOT NULL, SCOPE_NAME VARCHAR2(255), PRIMARY KEY (ID), FOREIGN KEY (RESOURCE_IDENTITY) REFERENCES IDN_UMA_RESOURCE (ID) ON DELETE CASCADE ) / CREATE SEQUENCE IDN_UMA_RESOURCE_SCOPE_SEQ START WITH 1 INCREMENT BY 1 NOCACHE Error at Command Line:66 Column:3 Error report: SQL Error: ORA-00922: missing or invalid option 00922. 00000 - "missing or invalid option" *Cause: *Action: Error starting at line 71 in command: CREATE OR REPLACE TRIGGER IDN_UMA_RESOURCE_SCOPE_TRIG BEFORE INSERT ON IDN_UMA_RESOURCE_SCOPE REFERENCING NEW AS NEW FOR EACH ROW BEGIN SELECT IDN_UMA_RESOURCE_SCOPE_SEQ.nextval INTO :NEW.ID FROM dual; END; Error report: ORA-00942: table or view does not exist 00942. 
00000 - "table or view does not exist" *Cause: *Action: Error starting at line 81 in command: CREATE INDEX IDX_RS ON IDN_UMA_RESOURCE_SCOPE (SCOPE_NAME) Error at Command Line:81 Column:24 Error report: SQL Error: ORA-00942: table or view does not exist 00942. 00000 - "table or view does not exist" *Cause: *Action: Error starting at line 84 in command: CREATE TABLE IDN_UMA_PERMISSION_TICKET ( ID INTEGER, PT VARCHAR2(255) NOT NULL, TIME_CREATED TIMESTAMP NOT NULL, EXPIRY_TIME TIMESTAMP NOT NULL, TICKET_STATE VARCHAR2(25) DEFAULT 'ACTIVE', TENANT_ID INTEGER DEFAULT -1234, TOKEN_ID VARCHAR(255), PRIMARY KEY (ID) ) / CREATE SEQUENCE IDN_UMA_PERMISSION_TICKET_SEQ START WITH 1 INCREMENT BY 1 NOCACHE Error at Command Line:94 Column:3 Error report: SQL Error: ORA-00922: missing or invalid option 00922. 00000 - "missing or invalid option" *Cause: *Action: Error starting at line 99 in command: CREATE OR REPLACE TRIGGER IDN_UMA_PERMISSION_TICKET_TRIG BEFORE INSERT ON IDN_UMA_PERMISSION_TICKET REFERENCING NEW AS NEW FOR EACH ROW BEGIN SELECT IDN_UMA_PERMISSION_TICKET_SEQ.nextval INTO :NEW.ID FROM dual; END; Error report: ORA-00942: table or view does not exist 00942. 00000 - "table or view does not exist" *Cause: *Action: Error starting at line 109 in command: CREATE INDEX IDX_PT ON IDN_UMA_PERMISSION_TICKET (PT) Error at Command Line:109 Column:24 Error report: SQL Error: ORA-00942: table or view does not exist 00942. 00000 - "table or view does not exist" *Cause: *Action: Error starting at line 112 in command: CREATE TABLE IDN_UMA_PT_RESOURCE ( ID INTEGER, PT_RESOURCE_ID INTEGER NOT NULL, PT_ID INTEGER NOT NULL, PRIMARY KEY (ID), FOREIGN KEY (PT_ID) REFERENCES IDN_UMA_PERMISSION_TICKET (ID) ON DELETE CASCADE, FOREIGN KEY (PT_RESOURCE_ID) REFERENCES IDN_UMA_RESOURCE (ID) ON DELETE CASCADE ) / CREATE SEQUENCE IDN_UMA_PT_RESOURCE_SEQ START WITH 1 INCREMENT BY 1 NOCACHE Error at Command Line:120 Column:3 Error report: SQL Error: ORA-00922: missing or invalid option 00922. 
00000 - "missing or invalid option" *Cause: *Action: Error starting at line 125 in command: CREATE OR REPLACE TRIGGER IDN_UMA_PT_RESOURCE_TRIG BEFORE INSERT ON IDN_UMA_PT_RESOURCE REFERENCING NEW AS NEW FOR EACH ROW BEGIN SELECT IDN_UMA_PT_RESOURCE_SEQ.nextval INTO :NEW.ID FROM dual; END; Error report: ORA-00942: table or view does not exist 00942. 00000 - "table or view does not exist" *Cause: *Action: Error starting at line 135 in command: CREATE TABLE IDN_UMA_PT_RESOURCE_SCOPE ( ID INTEGER, PT_RESOURCE_ID INTEGER NOT NULL, PT_SCOPE_ID INTEGER NOT NULL, PRIMARY KEY (ID), FOREIGN KEY (PT_RESOURCE_ID) REFERENCES IDN_UMA_PT_RESOURCE (ID) ON DELETE CASCADE, FOREIGN KEY (PT_SCOPE_ID) REFERENCES IDN_UMA_RESOURCE_SCOPE (ID) ON DELETE CASCADE ) / CREATE SEQUENCE IDN_UMA_PT_RESOURCE_SCOPE_SEQ START WITH 1 INCREMENT BY 1 NOCACHE Error at Command Line:143 Column:3 Error report: SQL Error: ORA-00922: missing or invalid option 00922. 00000 - "missing or invalid option" *Cause: *Action: Error starting at line 148 in command: CREATE OR REPLACE TRIGGER IDN_UMA_PT_RESOURCE_SCOPE_TRIG BEFORE INSERT ON IDN_UMA_PT_RESOURCE_SCOPE REFERENCING NEW AS NEW FOR EACH ROW BEGIN SELECT IDN_UMA_PT_RESOURCE_SCOPE_SEQ.nextval INTO :NEW.ID FROM dual; END; Error report: ORA-00942: table or view does not exist 00942. 00000 - "table or view does not exist" *Cause: *Action: ```
non_process
script issues while source oracle scripts against oracle and with latest wum environment full steps to reproduce source dbscripts oracle sql against oracle and oracle observation below errors were noticed table reg cluster lock created table reg log created sequence reg log sequence created index reg log ind by reglog created trigger reg log trigger compiled table reg path created error starting at line in command create index reg path ind by path value on reg path reg path value reg tenant id error at command line column error report sql error ora such column list already indexed such column list already indexed cause action index reg path ind by parent id created sequence reg path sequence created trigger um hybrid remember me trigger compiled table um system role created sequence um system role sequence created error starting at line in command create index system role ind by rn ti on um system role um role name um tenant id error at command line column error report sql error ora such column list already indexed such column list already indexed cause action trigger um system role trigger compiled table um system user role created sequence um system user role sequence created trigger um system user role trigger compiled identity uma oracle sql while executing the uma oracle script noticed below errors table idn uma resource created error starting at line in command create sequence idn uma resource seq start with increment by nocache create or replace trigger idn uma resource trig before insert on idn uma resource referencing new as new for each row begin select idn uma resource seq nextval into new id from dual error at command line column error report sql error ora sql command not properly ended sql command not properly ended cause action error starting at line in command end error report unknown command index idx rid created index idx user created index idx user rid created error starting at line in command create table idn uma resource meta data id integer 
resource identity integer not null property key property value primary key id foreign key resource identity references idn uma resource id on delete cascade create sequence idn uma resource meta data seq start with increment by nocache error at command line column error report sql error ora missing or invalid option missing or invalid option cause action error starting at line in command create or replace trigger idn uma resource metadata trig before insert on idn uma resource meta data referencing new as new for each row begin select idn uma resource meta data seq nextval into new id from dual end error report ora table or view does not exist table or view does not exist cause action error starting at line in command create table idn uma resource scope id integer resource identity integer not null scope name primary key id foreign key resource identity references idn uma resource id on delete cascade create sequence idn uma resource scope seq start with increment by nocache error at command line column error report sql error ora missing or invalid option missing or invalid option cause action error starting at line in command create or replace trigger idn uma resource scope trig before insert on idn uma resource scope referencing new as new for each row begin select idn uma resource scope seq nextval into new id from dual end error report ora table or view does not exist table or view does not exist cause action error starting at line in command create index idx rs on idn uma resource scope scope name error at command line column error report sql error ora table or view does not exist table or view does not exist cause action error starting at line in command create table idn uma permission ticket id integer pt not null time created timestamp not null expiry time timestamp not null ticket state default active tenant id integer default token id varchar primary key id create sequence idn uma permission ticket seq start with increment by nocache error at command line 
column error report sql error ora missing or invalid option missing or invalid option cause action error starting at line in command create or replace trigger idn uma permission ticket trig before insert on idn uma permission ticket referencing new as new for each row begin select idn uma permission ticket seq nextval into new id from dual end error report ora table or view does not exist table or view does not exist cause action error starting at line in command create index idx pt on idn uma permission ticket pt error at command line column error report sql error ora table or view does not exist table or view does not exist cause action error starting at line in command create table idn uma pt resource id integer pt resource id integer not null pt id integer not null primary key id foreign key pt id references idn uma permission ticket id on delete cascade foreign key pt resource id references idn uma resource id on delete cascade create sequence idn uma pt resource seq start with increment by nocache error at command line column error report sql error ora missing or invalid option missing or invalid option cause action error starting at line in command create or replace trigger idn uma pt resource trig before insert on idn uma pt resource referencing new as new for each row begin select idn uma pt resource seq nextval into new id from dual end error report ora table or view does not exist table or view does not exist cause action error starting at line in command create table idn uma pt resource scope id integer pt resource id integer not null pt scope id integer not null primary key id foreign key pt resource id references idn uma pt resource id on delete cascade foreign key pt scope id references idn uma resource scope id on delete cascade create sequence idn uma pt resource scope seq start with increment by nocache error at command line column error report sql error ora missing or invalid option missing or invalid option cause action error starting at line in 
command create or replace trigger idn uma pt resource scope trig before insert on idn uma pt resource scope referencing new as new for each row begin select idn uma pt resource scope seq nextval into new id from dual end error report ora table or view does not exist table or view does not exist cause action
0
4,026
6,961,369,508
IssuesEvent
2017-12-08 09:12:11
nlbdev/pipeline
https://api.github.com/repos/nlbdev/pipeline
closed
braille CSS: translator / oversetter
bug pre-processing Priority:2 - Medium
(norwegian) *from Trello-board* NLB-CSS: title page: - «Oversatt av» skal kun stå når det er oversetter. (Nå kommer det med også på bøker som ikke er oversatt)
1.0
braille CSS: translator / oversetter - (norwegian) *from Trello-board* NLB-CSS: title page: - «Oversatt av» skal kun stå når det er oversetter. (Nå kommer det med også på bøker som ikke er oversatt)
process
braille css translator oversetter norwegian from trello board nlb css title page «oversatt av» skal kun stå når det er oversetter nå kommer det med også på bøker som ikke er oversatt
1
21,207
6,132,364,777
IssuesEvent
2017-06-25 01:10:20
ganeti/ganeti
https://api.github.com/repos/ganeti/ganeti
closed
gnt-instance modify --disk add:[...] always like --no-wait-for-sync
imported_from_google_code Status:WontFix
Originally reported of Google Code with ID 768. ``` What software version are you running? Please provide the output of "gnt- cluster --version", "gnt-cluster version", and "hspace --version". <b>What distribution are you using?</b> root@node1 ~ # gnt-cluster --version gnt-cluster (ganeti v2.9.5) 2.9.5 root@node1 ~ # gnt-cluster version Software version: 2.9.5 Internode protocol: 2090000 Configuration format: 2090000 OS api version: 20 Export interface: 0 VCS version: v2.9.5 root@node1 ~ # hspace --version hspace (ganeti) version v2.9.5 compiled with ghc 7.4 running on linux x86_64 root@node1 ~ # cat /etc/debian_version 7.4 root@node1 ~ # dpkg -l ganeti [...] ii ganeti 2.9.5-1~bpo70+1 <b>What steps will reproduce the problem?</b> 1. create a vm with drbd-disks 2. gnt-instance modify --disk add:size=32g vm17 Modified instance vm17 - disk/1 -> add:size=32768,mode=rw Please don't forget that most parameters take effect only at the next (re)start of the instance initiated by ganeti; restarting from within the instance will not be enough. <b>What is the expected output? What do you see instead?</b> < disk is created (no sync output) > disk is created with sync output (and option for --no-wait-for-sync) <b>Please provide any additional information below.</b> # cat /proc/drbd [...] 19: cs:SyncSource ro:Primary/Secondary ds:UpToDate/Inconsistent C r----- ns:453056 nr:0 dw:0 dr:462024 al:0 bm:26 lo:1 pe:142 ua:128 ap:0 ep:1 wo:f oos:33110528 [>....................] sync'ed: 1.4% (32332/32768)Mfinish: 0:26:05 speed: 21,136 (21,136) K/sec ``` Originally added on 2014-03-20 16:03:17 +0000 UTC.
1.0
gnt-instance modify --disk add:[...] always like --no-wait-for-sync - Originally reported of Google Code with ID 768. ``` What software version are you running? Please provide the output of "gnt- cluster --version", "gnt-cluster version", and "hspace --version". <b>What distribution are you using?</b> root@node1 ~ # gnt-cluster --version gnt-cluster (ganeti v2.9.5) 2.9.5 root@node1 ~ # gnt-cluster version Software version: 2.9.5 Internode protocol: 2090000 Configuration format: 2090000 OS api version: 20 Export interface: 0 VCS version: v2.9.5 root@node1 ~ # hspace --version hspace (ganeti) version v2.9.5 compiled with ghc 7.4 running on linux x86_64 root@node1 ~ # cat /etc/debian_version 7.4 root@node1 ~ # dpkg -l ganeti [...] ii ganeti 2.9.5-1~bpo70+1 <b>What steps will reproduce the problem?</b> 1. create a vm with drbd-disks 2. gnt-instance modify --disk add:size=32g vm17 Modified instance vm17 - disk/1 -> add:size=32768,mode=rw Please don't forget that most parameters take effect only at the next (re)start of the instance initiated by ganeti; restarting from within the instance will not be enough. <b>What is the expected output? What do you see instead?</b> < disk is created (no sync output) > disk is created with sync output (and option for --no-wait-for-sync) <b>Please provide any additional information below.</b> # cat /proc/drbd [...] 19: cs:SyncSource ro:Primary/Secondary ds:UpToDate/Inconsistent C r----- ns:453056 nr:0 dw:0 dr:462024 al:0 bm:26 lo:1 pe:142 ua:128 ap:0 ep:1 wo:f oos:33110528 [>....................] sync'ed: 1.4% (32332/32768)Mfinish: 0:26:05 speed: 21,136 (21,136) K/sec ``` Originally added on 2014-03-20 16:03:17 +0000 UTC.
non_process
gnt instance modify disk add always like no wait for sync originally reported of google code with id what software version are you running please provide the output of gnt cluster version gnt cluster version and hspace version what distribution are you using root gnt cluster version gnt cluster ganeti root gnt cluster version software version internode protocol configuration format os api version export interface vcs version root hspace version hspace ganeti version compiled with ghc running on linux root cat etc debian version root dpkg l ganeti ii ganeti what steps will reproduce the problem create a vm with drbd disks gnt instance modify disk add size modified instance disk add size mode rw please don t forget that most parameters take effect only at the next re start of the instance initiated by ganeti restarting from within the instance will not be enough what is the expected output what do you see instead disk is created no sync output disk is created with sync output and option for no wait for sync please provide any additional information below cat proc drbd cs syncsource ro primary secondary ds uptodate inconsistent c r ns nr dw dr al bm lo pe ua ap ep wo f oos sync ed mfinish speed k sec originally added on utc
0
18,249
24,330,986,452
IssuesEvent
2022-09-30 19:23:27
benthosdev/benthos
https://api.github.com/repos/benthosdev/benthos
closed
bug: http_client sends duplicat content-type header
bug processors inputs outputs
Just noticed this while testing that when I have this line in my configuration: ``` output: http_client: headers: Content-Type: application/json ``` I am seeing this on the receiving end: ``` 2022-09-29T21:35:31-04:00 DBG Got request: 2022-09-29T21:35:31-04:00 DBG POST / HTTP/1.1 2022-09-29T21:35:31-04:00 DBG user-agent: Go-http-client/1.1 2022-09-29T21:35:31-04:00 DBG content-length: 613 2022-09-29T21:35:31-04:00 DBG content-type: application/json 2022-09-29T21:35:31-04:00 DBG content-type: application/json 2022-09-29T21:35:31-04:00 DBG accept-encoding: gzip 2022-09-29T21:35:31-04:00 DBG ```
1.0
bug: http_client sends duplicat content-type header - Just noticed this while testing that when I have this line in my configuration: ``` output: http_client: headers: Content-Type: application/json ``` I am seeing this on the receiving end: ``` 2022-09-29T21:35:31-04:00 DBG Got request: 2022-09-29T21:35:31-04:00 DBG POST / HTTP/1.1 2022-09-29T21:35:31-04:00 DBG user-agent: Go-http-client/1.1 2022-09-29T21:35:31-04:00 DBG content-length: 613 2022-09-29T21:35:31-04:00 DBG content-type: application/json 2022-09-29T21:35:31-04:00 DBG content-type: application/json 2022-09-29T21:35:31-04:00 DBG accept-encoding: gzip 2022-09-29T21:35:31-04:00 DBG ```
process
bug http client sends duplicat content type header just noticed this while testing that when i have this line in my configuration output http client headers content type application json i am seeing this on the receiving end dbg got request dbg post http dbg user agent go http client dbg content length dbg content type application json dbg content type application json dbg accept encoding gzip dbg
1
1,108
3,587,971,235
IssuesEvent
2016-01-30 18:09:20
mesosphere/kubernetes-mesos
https://api.github.com/repos/mesosphere/kubernetes-mesos
closed
testing for v0.7.2-v1.1.5 release
process/release
- [x] test ubercontainer failover onto another host when the original host fails (see below) - [x] pods should keep running on other hosts - [x] pods on killed host should be restarted (if backed by rc) - [x] dns resolution should keep working: in busybox container `nslookup kube-ui.kube-system.svc.cluster.local` - [x] dns resolution should work for new services: launch nginx pod and service. Test in busybox container `nslookup nginx.default.svc.cluster.local`. - [x] deploy `examples/guestbook`. Try posting something in the web UI. - [x] execute `dcos kubectl version` - [x] do 2h+ soak testing w/ [hack/resizeFrontend.sh](https://github.com/mesosphere/kubernetes-mesos/blob/master/hack/resizingFrontend.sh) - [x] smoke test Mesos-DNS via a busybox test container executing `nslookup kubernetes.mesos`. - [x] smoke test roles support using a simple nginx pod (see **[1]**). Install with `dcos kubectl create -f nginxpub.yaml`, wait ~1min. and open the link "Master IP addresses:" available in CCM. One should see the nginx landing page. **ubercontainer failover** Find out which slave ID runs the kubernetes ubercontainer in the DCOS UI, then ssh into the node: ``` $ dcos node ssh --master-proxy --slave=xxxxxx-yyyy-4cab-b1fb-3c92a11cdd2d-S1 $ sudo systemctl stop dcos-mesos-slave.service ``` **[1]** Sample roles enabled public nginx pod `nginxpub.yaml`: ``` apiVersion: v1 kind: Pod metadata: name: nginxpub annotations: k8s.mesosphere.io/roles: "slave_public" labels: app: nginxpub spec: containers: - name: nginxpub image: nginx ports: - containerPort: 80 hostPort: 80 ```
1.0
testing for v0.7.2-v1.1.5 release - - [x] test ubercontainer failover onto another host when the original host fails (see below) - [x] pods should keep running on other hosts - [x] pods on killed host should be restarted (if backed by rc) - [x] dns resolution should keep working: in busybox container `nslookup kube-ui.kube-system.svc.cluster.local` - [x] dns resolution should work for new services: launch nginx pod and service. Test in busybox container `nslookup nginx.default.svc.cluster.local`. - [x] deploy `examples/guestbook`. Try posting something in the web UI. - [x] execute `dcos kubectl version` - [x] do 2h+ soak testing w/ [hack/resizeFrontend.sh](https://github.com/mesosphere/kubernetes-mesos/blob/master/hack/resizingFrontend.sh) - [x] smoke test Mesos-DNS via a busybox test container executing `nslookup kubernetes.mesos`. - [x] smoke test roles support using a simple nginx pod (see **[1]**). Install with `dcos kubectl create -f nginxpub.yaml`, wait ~1min. and open the link "Master IP addresses:" available in CCM. One should see the nginx landing page. **ubercontainer failover** Find out which slave ID runs the kubernetes ubercontainer in the DCOS UI, then ssh into the node: ``` $ dcos node ssh --master-proxy --slave=xxxxxx-yyyy-4cab-b1fb-3c92a11cdd2d-S1 $ sudo systemctl stop dcos-mesos-slave.service ``` **[1]** Sample roles enabled public nginx pod `nginxpub.yaml`: ``` apiVersion: v1 kind: Pod metadata: name: nginxpub annotations: k8s.mesosphere.io/roles: "slave_public" labels: app: nginxpub spec: containers: - name: nginxpub image: nginx ports: - containerPort: 80 hostPort: 80 ```
process
testing for release test ubercontainer failover onto another host when the original host fails see below pods should keep running on other hosts pods on killed host should be restarted if backed by rc dns resolution should keep working in busybox container nslookup kube ui kube system svc cluster local dns resolution should work for new services launch nginx pod and service test in busybox container nslookup nginx default svc cluster local deploy examples guestbook try posting something in the web ui execute dcos kubectl version do soak testing w smoke test mesos dns via a busybox test container executing nslookup kubernetes mesos smoke test roles support using a simple nginx pod see install with dcos kubectl create f nginxpub yaml wait and open the link master ip addresses available in ccm one should see the nginx landing page ubercontainer failover find out which slave id runs the kubernetes ubercontainer in the dcos ui then ssh into the node dcos node ssh master proxy slave xxxxxx yyyy sudo systemctl stop dcos mesos slave service sample roles enabled public nginx pod nginxpub yaml apiversion kind pod metadata name nginxpub annotations mesosphere io roles slave public labels app nginxpub spec containers name nginxpub image nginx ports containerport hostport
1
15,565
19,703,504,302
IssuesEvent
2022-01-12 19:08:03
googleapis/java-talent
https://api.github.com/repos/googleapis/java-talent
opened
Your .repo-metadata.json file has a problem 🤒
type: process repo-metadata: lint
You have a problem with your .repo-metadata.json file: Result of scan 📈: * release_level must be equal to one of the allowed values in .repo-metadata.json * api_shortname 'talent' invalid in .repo-metadata.json ☝️ Once you correct these problems, you can close this issue. Reach out to **go/github-automation** if you have any questions.
1.0
Your .repo-metadata.json file has a problem 🤒 - You have a problem with your .repo-metadata.json file: Result of scan 📈: * release_level must be equal to one of the allowed values in .repo-metadata.json * api_shortname 'talent' invalid in .repo-metadata.json ☝️ Once you correct these problems, you can close this issue. Reach out to **go/github-automation** if you have any questions.
process
your repo metadata json file has a problem 🤒 you have a problem with your repo metadata json file result of scan 📈 release level must be equal to one of the allowed values in repo metadata json api shortname talent invalid in repo metadata json ☝️ once you correct these problems you can close this issue reach out to go github automation if you have any questions
1
1,118
3,592,220,050
IssuesEvent
2016-02-01 15:18:53
coala-analyzer/coala
https://api.github.com/repos/coala-analyzer/coala
closed
File in SourcePosition should use `abspath`
difficulty/newcomer process/wip type/bug
The file in SourcePosition currently doesnt getnormalized using abspath - this is bad because when comparison wouldnt work beterrn two SourceRanges because the path given may not be the same (although the files are same) Eg: paths like `/a/b/c` and `/a/d/../b/c` are the same but SourceRange doesn't think they are the same
1.0
File in SourcePosition should use `abspath` - The file in SourcePosition currently doesnt getnormalized using abspath - this is bad because when comparison wouldnt work beterrn two SourceRanges because the path given may not be the same (although the files are same) Eg: paths like `/a/b/c` and `/a/d/../b/c` are the same but SourceRange doesn't think they are the same
process
file in sourceposition should use abspath the file in sourceposition currently doesnt getnormalized using abspath this is bad because when comparison wouldnt work beterrn two sourceranges because the path given may not be the same although the files are same eg paths like a b c and a d b c are the same but sourcerange doesn t think they are the same
1
11,667
14,529,126,048
IssuesEvent
2020-12-14 17:23:54
googleapis/google-api-dotnet-client
https://api.github.com/repos/googleapis/google-api-dotnet-client
closed
Broken User schema on latest Nuget version
priority: p1 type: process
The new nuget package [Google.Apis.Admin.Directory.directory_v1 version 1.49.0.2161](https://www.nuget.org/packages/Google.Apis.Admin.Directory.directory_v1/1.49.0.2161) for the directory API seems to have a broken User class: https://github.com/googleapis/google-api-dotnet-client/blob/master/Src/Generated/Google.Apis.Admin.Directory.directory_v1/Google.Apis.Admin.Directory.directory_v1.cs#L11289 version 1.49.0.1982 `````csharp [JsonProperty("organizations")] public virtual IList<UserOrganization> Organizations { get; set; } ````` version 1.49.0.2161 - **broken** `````csharp [JsonProperty("organizations")] public virtual object Organizations { get; set; } `````
1.0
Broken User schema on latest Nuget version - The new nuget package [Google.Apis.Admin.Directory.directory_v1 version 1.49.0.2161](https://www.nuget.org/packages/Google.Apis.Admin.Directory.directory_v1/1.49.0.2161) for the directory API seems to have a broken User class: https://github.com/googleapis/google-api-dotnet-client/blob/master/Src/Generated/Google.Apis.Admin.Directory.directory_v1/Google.Apis.Admin.Directory.directory_v1.cs#L11289 version 1.49.0.1982 `````csharp [JsonProperty("organizations")] public virtual IList<UserOrganization> Organizations { get; set; } ````` version 1.49.0.2161 - **broken** `````csharp [JsonProperty("organizations")] public virtual object Organizations { get; set; } `````
process
broken user schema on latest nuget version the new nuget package for the directory api seems to have a broken user class version csharp public virtual ilist organizations get set version broken csharp public virtual object organizations get set
1
21,722
30,229,791,743
IssuesEvent
2023-07-06 05:45:36
MikaylaFischler/cc-mek-scada
https://api.github.com/repos/MikaylaFischler/cc-mek-scada
closed
Process Fuel Self Limiting
enhancement supervisor process control
The system should monitor fuel percentages and ramp down the setpoint until the fuel input rate is net positive.
1.0
Process Fuel Self Limiting - The system should monitor fuel percentages and ramp down the setpoint until the fuel input rate is net positive.
process
process fuel self limiting the system should monitor fuel percentages and ramp down the setpoint until the fuel input rate is net positive
1
52,982
6,668,794,025
IssuesEvent
2017-10-03 17:00:30
Esri/solutions-geoevent-java
https://api.github.com/repos/Esri/solutions-geoevent-java
closed
Transfer labels
4 - Done A-bug A-feature A-question B-high B-low B-moderate C-L C-M C-S C-XL C-XS E-as designed E-duplicate E-invalid E-no count E-non reproducible E-verified E-won't fix FT-Workflows G-Design G-Development G-Documentation G-Documentation Review G-Research G-Testing HP-Candidate HP-HotFix HP-Patch priority - Showstopper
_From @lfunkhouser on October 2, 2017 15:24_ This issue is used to transfer issues to another repo _Copied from original issue: Esri/solutions-grg-widget#119_
2.0
Transfer labels - _From @lfunkhouser on October 2, 2017 15:24_ This issue is used to transfer issues to another repo _Copied from original issue: Esri/solutions-grg-widget#119_
non_process
transfer labels from lfunkhouser on october this issue is used to transfer issues to another repo copied from original issue esri solutions grg widget
0
19,824
4,443,140,749
IssuesEvent
2016-08-19 15:42:57
TEIC/TEI
https://api.github.com/repos/TEIC/TEI
closed
Make @select a bit more generalized
Status: Go TEI: Guidelines & Documentation TEI: Schema Type: FeatureRequest
The note on the `@select` attribute at <http://www.tei-c.org/release/doc/tei-p5-doc/en/html/ref-att.global.linking.html> says >This attribute should be placed on an element which is superordinate to all of the alternants from which the selection is being made. It would be useful, however, to be able to use `@select` as the converse of `@exclude`. Specifically, it would be nice to be able to use it in a critical apparatus for variant words which are linked, but separated by non-varying words. This is particularly common in Latin, e.g., where case agreement means non-contiguous words may vary in tandem. For example, > aspice me quanto rapiat fortuna periclo and in the app. crit.: > *quanta...procella* Markland It would be nice to be able to do: ~~~~~ aspice me <app> <lem xml:id="lem1" select="#lem2">quanto</lem> <rdg source="#Markland" xml:id="rdg1" select="#rdg2">quanta</rdg> </app> rapiat fortuna <app> <lem xml:id="lem2" select="#lem1">periclo</lem> <rdg source="#Markland" xml:id="rdg2" select="#rdg1">procella</rdg> </app> ~~~~~ and so use `@select` to show that the lemma of the first app implies the lemma of the second, and the reading of the first implies the reading of the second (and vice versa). The only thing that prevents this usage now is the note, which requires that the attribute be on an ancestor of the selected element. I suggest the note simply be removed.
1.0
Make @select a bit more generalized - The note on the `@select` attribute at <http://www.tei-c.org/release/doc/tei-p5-doc/en/html/ref-att.global.linking.html> says >This attribute should be placed on an element which is superordinate to all of the alternants from which the selection is being made. It would be useful, however, to be able to use `@select` as the converse of `@exclude`. Specifically, it would be nice to be able to use it in a critical apparatus for variant words which are linked, but separated by non-varying words. This is particularly common in Latin, e.g., where case agreement means non-contiguous words may vary in tandem. For example, > aspice me quanto rapiat fortuna periclo and in the app. crit.: > *quanta...procella* Markland It would be nice to be able to do: ~~~~~ aspice me <app> <lem xml:id="lem1" select="#lem2">quanto</lem> <rdg source="#Markland" xml:id="rdg1" select="#rdg2">quanta</rdg> </app> rapiat fortuna <app> <lem xml:id="lem2" select="#lem1">periclo</lem> <rdg source="#Markland" xml:id="rdg2" select="#rdg1">procella</rdg> </app> ~~~~~ and so use `@select` to show that the lemma of the first app implies the lemma of the second, and the reading of the first implies the reading of the second (and vice versa). The only thing that prevents this usage now is the note, which requires that the attribute be on an ancestor of the selected element. I suggest the note simply be removed.
non_process
make select a bit more generalized the note on the select attribute at says this attribute should be placed on an element which is superordinate to all of the alternants from which the selection is being made it would be useful however to be able to use select as the converse of exclude specifically it would be nice to be able to use it in a critical apparatus for variant words which are linked but separated by non varying words this is particularly common in latin e g where case agreement means non contiguous words may vary in tandem for example aspice me quanto rapiat fortuna periclo and in the app crit quanta procella markland it would be nice to be able to do aspice me quanto quanta rapiat fortuna periclo procella and so use select to show that the lemma of the first app implies the lemma of the second and the reading of the first implies the reading of the second and vice versa the only thing that prevents this usage now is the note which requires that the attribute be on an ancestor of the selected element i suggest the note simply be removed
0
314,138
23,508,128,003
IssuesEvent
2022-08-18 14:15:37
timescale/docs
https://api.github.com/repos/timescale/docs
closed
[Docs RFC] Update example of finding job_id in decompress section
documentation enhancement community
# Describe change in content, appearance, or functionality The example given to find a job ID in the decompression section is outdated. Update accordingly. # Subject matter expert (SME) @noctarius # Deadline [When does this need to be addressed] # Any further info [Link to Community Slack thread](https://timescaledb.slack.com/archives/C4GT3N90X/p1660206753018829) ## Contributing to documentation We welcome documentation contributions! * For information about how to suggest a change, see the [contributing guide](https://github.com/timescale/docs/blob/latest/CONTRIBUTING.md) in our GitHub repository. * For information on style and word usage, see the [style guide](https://docs.timescale.com/timescaledb/latest/contribute-to-docs)
1.0
[Docs RFC] Update example of finding job_id in decompress section - # Describe change in content, appearance, or functionality The example given to find a job ID in the decompression section is outdated. Update accordingly. # Subject matter expert (SME) @noctarius # Deadline [When does this need to be addressed] # Any further info [Link to Community Slack thread](https://timescaledb.slack.com/archives/C4GT3N90X/p1660206753018829) ## Contributing to documentation We welcome documentation contributions! * For information about how to suggest a change, see the [contributing guide](https://github.com/timescale/docs/blob/latest/CONTRIBUTING.md) in our GitHub repository. * For information on style and word usage, see the [style guide](https://docs.timescale.com/timescaledb/latest/contribute-to-docs)
non_process
update example of finding job id in decompress section describe change in content appearance or functionality the example given to find a job id in the decompression section is outdated update accordingly subject matter expert sme noctarius deadline any further info contributing to documentation we welcome documentation contributions for information about how to suggest a change see the in our github repository for information on style and word usage see the
0
116,347
11,907,854,379
IssuesEvent
2020-03-30 23:19:27
maryoohhh/homeproj1
https://api.github.com/repos/maryoohhh/homeproj1
closed
Update README.md to create instructions on how to clone a repository
documentation
You can also explain what is happening when you clone a repository to your local machine. Recap: Why did we create a `dev` directory? - to keep things organized what is the `mkdir` command? - command in Unix used to make a new directory. what is the `cd` command? - command in Unix used to make a change to a new directory.
1.0
Update README.md to create instructions on how to clone a repository - You can also explain what is happening when you clone a repository to your local machine. Recap: Why did we create a `dev` directory? - to keep things organized what is the `mkdir` command? - command in Unix used to make a new directory. what is the `cd` command? - command in Unix used to make a change to a new directory.
non_process
update readme md to create instructions on how to clone a repository you can also explain what is happening when you clone a repository to your local machine recap why did we create a dev directory to keep things organized what is the mkdir command command in unix used to make a new directory what is the cd command command in unix used to make a change to a new directory
0
67,330
20,961,606,045
IssuesEvent
2022-03-27 21:48:15
abedmaatalla/sipdroid
https://api.github.com/repos/abedmaatalla/sipdroid
closed
Contact integration
Priority-Medium Type-Defect auto-migrated
``` I would like sipdroid , to have a menu toggle point where the user can choose if he want or do not want dialer contact list integration or not Most of the time it is troublesome because dialer list is mostly local mobile work friend and family , while sipdroid contacts are people in other countries and viber have this option and i find it useful , as you do not have to press select on what phone you want to use you use dialer, it is mobile phone people you use viber it is for viber friends and if you use sipdroid it is people overseas are there other people who would like totally disconnected contact lists ? ``` Original issue reported on code.google.com by `121k...@gmail.com` on 14 Nov 2013 at 7:33
1.0
Contact integration - ``` I would like sipdroid , to have a menu toggle point where the user can choose if he want or do not want dialer contact list integration or not Most of the time it is troublesome because dialer list is mostly local mobile work friend and family , while sipdroid contacts are people in other countries and viber have this option and i find it useful , as you do not have to press select on what phone you want to use you use dialer, it is mobile phone people you use viber it is for viber friends and if you use sipdroid it is people overseas are there other people who would like totally disconnected contact lists ? ``` Original issue reported on code.google.com by `121k...@gmail.com` on 14 Nov 2013 at 7:33
non_process
contact integration i would like sipdroid to have a menu toggle point where the user can choose if he want or do not want dialer contact list integration or not most of the time it is troublesome because dialer list is mostly local mobile work friend and family while sipdroid contacts are people in other countries and viber have this option and i find it useful as you do not have to press select on what phone you want to use you use dialer it is mobile phone people you use viber it is for viber friends and if you use sipdroid it is people overseas are there other people who would like totally disconnected contact lists original issue reported on code google com by gmail com on nov at
0
8,832
11,943,877,762
IssuesEvent
2020-04-03 00:38:19
googleapis/python-storage
https://api.github.com/repos/googleapis/python-storage
closed
Update shared conformance tests
api: storage testing type: process
~~We need to add a test suite which exercises the cross-language tests and fix anything where we don't pass.~~ We need to *update* the shared conformance tests to match the current spec: - [Proto spec for the testcase file](https://github.com/googleapis/conformance-tests/blob/master/storage/v1/proto/google/cloud/conformance/storage/v1/tests.proto) - [Testcase file](https://github.com/googleapis/conformance-tests/blob/master/storage/v1/v4_signatures.json) /cc @frankyn
1.0
Update shared conformance tests - ~~We need to add a test suite which exercises the cross-language tests and fix anything where we don't pass.~~ We need to *update* the shared conformance tests to match the current spec: - [Proto spec for the testcase file](https://github.com/googleapis/conformance-tests/blob/master/storage/v1/proto/google/cloud/conformance/storage/v1/tests.proto) - [Testcase file](https://github.com/googleapis/conformance-tests/blob/master/storage/v1/v4_signatures.json) /cc @frankyn
process
update shared conformance tests we need to add a test suite which exercises the cross language tests and fix anything where we don t pass we need to update the shared conformance tests to match the current spec cc frankyn
1
17,469
23,294,937,068
IssuesEvent
2022-08-06 12:14:18
apache/arrow-datafusion
https://api.github.com/repos/apache/arrow-datafusion
closed
Lint check fails on master: apache-rat license violation: js/test.ts
bug development-process
**Describe the bug** A CI check stated failing on master: https://github.com/apache/arrow-datafusion/runs/7698061920?check_suite_focus=true ``` 8s Run archery lint --rat INFO:archery:Running apache-rat linter apache-rat license violation: js/test.ts Error: Process completed with exit code [1](https://github.com/apache/arrow-datafusion/runs/7698061920?check_suite_focus=true#step:5:1). ``` **To Reproduce** Started failing on https://github.com/apache/arrow-datafusion/commit/581934d73dfca7b99e6cc66767b3af3bbad7755f but I don't think it is relevant to that **Expected behavior** CI should pass **Additional context** Add any other context about the problem here.
1.0
Lint check fails on master: apache-rat license violation: js/test.ts - **Describe the bug** A CI check stated failing on master: https://github.com/apache/arrow-datafusion/runs/7698061920?check_suite_focus=true ``` 8s Run archery lint --rat INFO:archery:Running apache-rat linter apache-rat license violation: js/test.ts Error: Process completed with exit code [1](https://github.com/apache/arrow-datafusion/runs/7698061920?check_suite_focus=true#step:5:1). ``` **To Reproduce** Started failing on https://github.com/apache/arrow-datafusion/commit/581934d73dfca7b99e6cc66767b3af3bbad7755f but I don't think it is relevant to that **Expected behavior** CI should pass **Additional context** Add any other context about the problem here.
process
lint check fails on master apache rat license violation js test ts describe the bug a ci check stated failing on master run archery lint rat info archery running apache rat linter apache rat license violation js test ts error process completed with exit code to reproduce started failing on but i don t think it is relevant to that expected behavior ci should pass additional context add any other context about the problem here
1
3,619
2,694,875,089
IssuesEvent
2015-04-01 23:03:56
mozilla/webmaker-app
https://api.github.com/repos/mozilla/webmaker-app
closed
Create a help / FAQ / Step by Step
design needs discussion
From Bangladesh, - Users would like a section where there is help, a step by step approach, a FAQ. " I would love if it guides me when I make my app " @flukeout @thisandagain @k88hudson
1.0
Create a help / FAQ / Step by Step - From Bangladesh, - Users would like a section where there is help, a step by step approach, a FAQ. " I would love if it guides me when I make my app " @flukeout @thisandagain @k88hudson
non_process
create a help faq step by step from bangladesh users would like a section where there is help a step by step approach a faq i would love if it guides me when i make my app flukeout thisandagain
0
14,018
2,789,853,620
IssuesEvent
2015-05-08 21:56:02
google/google-visualization-api-issues
https://api.github.com/repos/google/google-visualization-api-issues
opened
Motion chart hangs when you select few countries and play with trails enabled.
Priority-Medium Type-Defect
Original [issue 308](https://code.google.com/p/google-visualization-api-issues/issues/detail?id=308) created by orwant on 2010-06-07T05:50:57.000Z: Hi, The issue is with Google Motion Chart trails option. Here i have more than 3000 rows for datatable. steps: Select few countries. say 5 countries in motion chart. Enable trails option. Play the chart. When you play the chart the browser hangs. (IE, firefox). Is the above issue because of 3000 data rows. (Whats the maximum data that can be handled by motion chart?) Could you please look into this issue. Using the following code from the below website. http://lotusdominoweb.blogspot.com/2010/06/google-motion-chart-using-xml.html Windows Vista OS <b>*********************************************************</b> <b>For developers viewing this issue: please click the 'star' icon to be</b> <b>notified of future changes, and to let us know how many of you are</b> <b>interested in seeing it resolved.</b> <b>*********************************************************</b>
1.0
Motion chart hangs when you select few countries and play with trails enabled. - Original [issue 308](https://code.google.com/p/google-visualization-api-issues/issues/detail?id=308) created by orwant on 2010-06-07T05:50:57.000Z: Hi, The issue is with Google Motion Chart trails option. Here i have more than 3000 rows for datatable. steps: Select few countries. say 5 countries in motion chart. Enable trails option. Play the chart. When you play the chart the browser hangs. (IE, firefox). Is the above issue because of 3000 data rows. (Whats the maximum data that can be handled by motion chart?) Could you please look into this issue. Using the following code from the below website. http://lotusdominoweb.blogspot.com/2010/06/google-motion-chart-using-xml.html Windows Vista OS <b>*********************************************************</b> <b>For developers viewing this issue: please click the 'star' icon to be</b> <b>notified of future changes, and to let us know how many of you are</b> <b>interested in seeing it resolved.</b> <b>*********************************************************</b>
non_process
motion chart hangs when you select few countries and play with trails enabled original created by orwant on hi the issue is with google motion chart trails option here i have more than rows for datatable steps select few countries say countries in motion chart enable trails option play the chart when you play the chart the browser hangs ie firefox is the above issue because of data rows whats the maximum data that can be handled by motion chart could you please look into this issue using the following code from the below website windows vista os for developers viewing this issue please click the star icon to be notified of future changes and to let us know how many of you are interested in seeing it resolved
0
8,811
6,660,493,041
IssuesEvent
2017-10-02 00:46:01
apache/incubator-mxnet
https://api.github.com/repos/apache/incubator-mxnet
closed
ImageIter is very slow when preprocessing cifar10 in windows
Data-loading Performance
## Environment info Operating System:Windows 10 Package used (Python/R/Scala/Julia):Python MXNet version:0.11(20170930 from [this](https://github.com/yajiedesign/mxnet/releases)) Python version and distribution:Python 3.6.2 Anaconda 4.4.0 CPU: Core i5 6500 GPU: 1080Ti ## Error Message: ### ImageIter is very slow when preprocessing cifar10, multi-thread image preprocessing seems not work. I want to training a simple resnet model from this [page](https://github.com/tensorflow/models/tree/master/research/resnet), here is the mxnet data loading and preprocessing code: ```Python os.environ["MXNET_CPU_WORKER_NTHREADS"] = "4" path = 'E:/Projects/deeplearning/datasets/images/cifar10_mxnet' data = mx.sym.var('data', shape=[0, 3, 32, 32]) label = mx.sym.var('label', shape=[0]) train_iter = mx.image.ImageIter(128, (3, 32, 32), path_imgrec =path + '/cifar10/cifar/train.rec', path_imglist=path + '/cifar10/cifar/train.lst', rand_crop=True, rand_mirror=True, std=np.array([255.0, 255.0, 255.0]),brightness=63. 
/ 255., contrast=0.8, saturation=0.5, data_name=data.name, label_name=label.name) val_iter = mx.image.ImageIter(128, (3, 32, 32), path_imgrec =path + '/cifar10/cifar/test.rec', path_imglist=path + '/cifar10/cifar/test.lst', rand_crop=False, rand_mirror=False, std=np.array([255.0, 255.0, 255.0]), data_name=data.name, label_name=label.name) ``` and training code(model definition code is too long to list here): ```Python mod = mx.mod.Module( symbol=cost, context=mx.gpu(), # data_names=[d[0] for d in train_iter.data], # label_names=[l[0] for l in train_iter.label] data_names=[data.name], label_names=[label.name]) opt = mx.optimizer.Adam(0.001, wd=2e-4, rescale_grad=1.0/128.0) mod.fit(train_iter, eval_data=val_iter, optimizer=opt, eval_metric=['ce', 'acc'], num_epoch=2, # batch_end_callback=mx.callback.Speedometer(128, 200), epoch_end_callback=mx.callback.do_checkpoint(save_path + '/model'), ) ``` When training without ImageIter and preprocessing(using NDArrayIter), mxnet code is 2x faster than my tensorflow code. But when add above image preprocessing, mxnet is alomst 2x slower than tensorflow(tensorflow using ndarray iter and same preprocessing tensor, not read from file). Setting MXNET_CPU_WORKER_NTHREADS from 1 to 4 leads to same training performance, and the GPU load is only 20%(tensorflow is 50%), CPU load is 58%, mechanical hard disk load is less than 10% and sometime is zero.
True
ImageIter is very slow when preprocessing cifar10 in windows - ## Environment info Operating System:Windows 10 Package used (Python/R/Scala/Julia):Python MXNet version:0.11(20170930 from [this](https://github.com/yajiedesign/mxnet/releases)) Python version and distribution:Python 3.6.2 Anaconda 4.4.0 CPU: Core i5 6500 GPU: 1080Ti ## Error Message: ### ImageIter is very slow when preprocessing cifar10, multi-thread image preprocessing seems not work. I want to training a simple resnet model from this [page](https://github.com/tensorflow/models/tree/master/research/resnet), here is the mxnet data loading and preprocessing code: ```Python os.environ["MXNET_CPU_WORKER_NTHREADS"] = "4" path = 'E:/Projects/deeplearning/datasets/images/cifar10_mxnet' data = mx.sym.var('data', shape=[0, 3, 32, 32]) label = mx.sym.var('label', shape=[0]) train_iter = mx.image.ImageIter(128, (3, 32, 32), path_imgrec =path + '/cifar10/cifar/train.rec', path_imglist=path + '/cifar10/cifar/train.lst', rand_crop=True, rand_mirror=True, std=np.array([255.0, 255.0, 255.0]),brightness=63. 
/ 255., contrast=0.8, saturation=0.5, data_name=data.name, label_name=label.name) val_iter = mx.image.ImageIter(128, (3, 32, 32), path_imgrec =path + '/cifar10/cifar/test.rec', path_imglist=path + '/cifar10/cifar/test.lst', rand_crop=False, rand_mirror=False, std=np.array([255.0, 255.0, 255.0]), data_name=data.name, label_name=label.name) ``` and training code(model definition code is too long to list here): ```Python mod = mx.mod.Module( symbol=cost, context=mx.gpu(), # data_names=[d[0] for d in train_iter.data], # label_names=[l[0] for l in train_iter.label] data_names=[data.name], label_names=[label.name]) opt = mx.optimizer.Adam(0.001, wd=2e-4, rescale_grad=1.0/128.0) mod.fit(train_iter, eval_data=val_iter, optimizer=opt, eval_metric=['ce', 'acc'], num_epoch=2, # batch_end_callback=mx.callback.Speedometer(128, 200), epoch_end_callback=mx.callback.do_checkpoint(save_path + '/model'), ) ``` When training without ImageIter and preprocessing(using NDArrayIter), mxnet code is 2x faster than my tensorflow code. But when add above image preprocessing, mxnet is alomst 2x slower than tensorflow(tensorflow using ndarray iter and same preprocessing tensor, not read from file). Setting MXNET_CPU_WORKER_NTHREADS from 1 to 4 leads to same training performance, and the GPU load is only 20%(tensorflow is 50%), CPU load is 58%, mechanical hard disk load is less than 10% and sometime is zero.
non_process
imageiter is very slow when preprocessing in windows environment info operating system windows package used python r scala julia python mxnet version from python version and distribution python anaconda cpu core gpu error message imageiter is very slow when preprocessing multi thread image preprocessing seems not work i want to training a simple resnet model from this here is the mxnet data loading and preprocessing code python os environ path e projects deeplearning datasets images mxnet data mx sym var data shape label mx sym var label shape train iter mx image imageiter path imgrec path cifar train rec path imglist path cifar train lst rand crop true rand mirror true std np array brightness contrast saturation data name data name label name label name val iter mx image imageiter path imgrec path cifar test rec path imglist path cifar test lst rand crop false rand mirror false std np array data name data name label name label name and training code model definition code is too long to list here python mod mx mod module symbol cost context mx gpu data names for d in train iter data label names for l in train iter label data names label names opt mx optimizer adam wd rescale grad mod fit train iter eval data val iter optimizer opt eval metric num epoch batch end callback mx callback speedometer epoch end callback mx callback do checkpoint save path model when training without imageiter and preprocessing using ndarrayiter mxnet code is faster than my tensorflow code but when add above image preprocessing mxnet is alomst slower than tensorflow tensorflow using ndarray iter and same preprocessing tensor not read from file setting mxnet cpu worker nthreads from to leads to same training performance and the gpu load is only tensorflow is cpu load is mechanical hard disk load is less than and sometime is zero
0
18,083
12,532,980,025
IssuesEvent
2020-06-04 16:48:31
elastic/kibana
https://api.github.com/repos/elastic/kibana
closed
[ML] Deleting multiple custom-rules requires lots of clicks
:ml Feature:Anomaly Detection usability v7.9.0
**Kibana version:** 6.4.0 **Describe the bug:** The "Edit Rule" panel closes after deleting one rule, giving a poor user experience when trying to delete multiple rules. **Steps to reproduce:** 1. Create job e.g. mean(metricvalue) by metricname 2. Add multiple rules to a single detector 3. Then try to delete some rules **Screenshots (if relevant):** ![image](https://user-images.githubusercontent.com/4185750/45442083-ee1e4900-b6b8-11e8-9e7c-55619dc5211b.png) **Any additional context:** As a workaround, you can send an empty custom-rules array to the Update Job endpoint. This will remove all rules. Also, you can close the job and edit the job config to remove rules (although this would require re-running the job).
True
[ML] Deleting multiple custom-rules requires lots of clicks - **Kibana version:** 6.4.0 **Describe the bug:** The "Edit Rule" panel closes after deleting one rule, giving a poor user experience when trying to delete multiple rules. **Steps to reproduce:** 1. Create job e.g. mean(metricvalue) by metricname 2. Add multiple rules to a single detector 3. Then try to delete some rules **Screenshots (if relevant):** ![image](https://user-images.githubusercontent.com/4185750/45442083-ee1e4900-b6b8-11e8-9e7c-55619dc5211b.png) **Any additional context:** As a workaround, you can send an empty custom-rules array to the Update Job endpoint. This will remove all rules. Also, you can close the job and edit the job config to remove rules (although this would require re-running the job).
non_process
deleting multiple custom rules requires lots of clicks kibana version describe the bug the edit rule panel closes after deleting one rule giving a poor user experience when trying to delete multiple rules steps to reproduce create job e g mean metricvalue by metricname add multiple rules to a single detector then try to delete some rules screenshots if relevant any additional context as a workaround you can send an empty custom rules array to the update job endpoint this will remove all rules also you can close the job and edit the job config to remove rules although this would require re running the job
0
9,856
12,856,115,211
IssuesEvent
2020-07-09 06:59:21
qgis/QGIS
https://api.github.com/repos/qgis/QGIS
closed
Add a Rescale rasters alg
Feature Request Processing
Author Name: **Paolo Cavallini** (@pcav) Original Redmine Issue: [18208](https://issues.qgis.org/issues/18208) Redmine category:analysis_library --- It would be useful to have a new alg rescaling raster values to percent of max. This would be useful e.g. for probability models, so that we can easily draw lines encompassing a certain percentage of total probability. See #17761
1.0
Add a Rescale rasters alg - Author Name: **Paolo Cavallini** (@pcav) Original Redmine Issue: [18208](https://issues.qgis.org/issues/18208) Redmine category:analysis_library --- It would be useful to have a new alg rescaling raster values to percent of max. This would be useful e.g. for probability models, so that we can easily draw lines encompassing a certain percentage of total probability. See #17761
process
add a rescale rasters alg author name paolo cavallini pcav original redmine issue redmine category analysis library it would be useful to have a new alg rescaling raster values to percent of max this would be useful e g for probability models so that we can easily draw lines encompassing a certain percentage of total probability see
1
3,201
6,262,322,689
IssuesEvent
2017-07-15 09:21:42
nodejs/node
https://api.github.com/repos/nodejs/node
closed
RFE: A way to spawn a foreground process
child_process feature request
I have a module called `foreground-child` that will spawn a child process with inherited stdio, and proxy signals to it, and exit appropriately when the child exits. However, since it's not _actually_ possible to do an `execvp` without a `fork` in Node, it's always going to be a little bit of a kludge. Because it's not actually running in the same process space, sending a `SIGKILL` will always kill the parent without killing the child. (Unless the parent is the leader of its process group.) It would be great to be able to do something like this: ```javascript var child = spawn(program, args, { env: { env: 'pairs' }, stdio: [ ... ], foreground: true }) // similar to doing ^Z,bg in a bash shell child.background() // move a background process into foreground, // like doing fg in bash shell child.foreground() // detach the child process // like doing ^Z,disown in a bash shell child.detach() ``` Should this be opened as a node-eps issue, or is that just for much higher-level stuff?
1.0
RFE: A way to spawn a foreground process - I have a module called `foreground-child` that will spawn a child process with inherited stdio, and proxy signals to it, and exit appropriately when the child exits. However, since it's not _actually_ possible to do an `execvp` without a `fork` in Node, it's always going to be a little bit of a kludge. Because it's not actually running in the same process space, sending a `SIGKILL` will always kill the parent without killing the child. (Unless the parent is the leader of its process group.) It would be great to be able to do something like this: ```javascript var child = spawn(program, args, { env: { env: 'pairs' }, stdio: [ ... ], foreground: true }) // similar to doing ^Z,bg in a bash shell child.background() // move a background process into foreground, // like doing fg in bash shell child.foreground() // detach the child process // like doing ^Z,disown in a bash shell child.detach() ``` Should this be opened as a node-eps issue, or is that just for much higher-level stuff?
process
rfe a way to spawn a foreground process i have a module called foreground child that will spawn a child process with inherited stdio and proxy signals to it and exit appropriately when the child exits however since it s not actually possible to do an execvp without a fork in node it s always going to be a little bit of a kludge because it s not actually running in the same process space sending a sigkill will always kill the parent without killing the child unless the parent is the leader of its process group it would be great to be able to do something like this javascript var child spawn program args env env pairs stdio foreground true similar to doing z bg in a bash shell child background move a background process into foreground like doing fg in bash shell child foreground detach the child process like doing z disown in a bash shell child detach should this be opened as a node eps issue or is that just for much higher level stuff
1
724,432
24,930,624,150
IssuesEvent
2022-10-31 11:17:52
openhab/openhab-android
https://api.github.com/repos/openhab/openhab-android
closed
"Info ssl client cert" tooltip has bright background in dark themes
bug Priority: Low
<!-- Please search the issue, if there is one with your issue --> ### Actual behaviour "Info ssl client cert" tooltip has bright background in dark themes, which makes the tooltip hard to read. ### Expected behaviour "Info ssl client cert" tooltip should have a dark background. ### Environment data #### Client * Android version: 5.1.1
1.0
"Info ssl client cert" tooltip has bright background in dark themes - <!-- Please search the issue, if there is one with your issue --> ### Actual behaviour "Info ssl client cert" tooltip has bright background in dark themes, which makes the tooltip hard to read. ### Expected behaviour "Info ssl client cert" tooltip should have a dark background. ### Environment data #### Client * Android version: 5.1.1
non_process
info ssl client cert tooltip has bright background in dark themes actual behaviour info ssl client cert tooltip has bright background in dark themes which makes the tooltip hard to read expected behaviour info ssl client cert tooltip should have a dark background environment data client android version
0
11,056
13,889,350,474
IssuesEvent
2020-10-19 07:46:31
zerolab-fe/awesome-nodejs
https://api.github.com/repos/zerolab-fe/awesome-nodejs
closed
supervisor
Process management
在👆 Title 处填写包名,并补充下面信息: ```json { "repoUrl": "https://github.com/petruisfan/node-supervisor", "description": "监听文件变化并自动重启" } ```
1.0
supervisor - 在👆 Title 处填写包名,并补充下面信息: ```json { "repoUrl": "https://github.com/petruisfan/node-supervisor", "description": "监听文件变化并自动重启" } ```
process
supervisor 在👆 title 处填写包名,并补充下面信息: json repourl description 监听文件变化并自动重启
1
6,988
10,134,428,729
IssuesEvent
2019-08-02 07:30:07
cypress-io/cypress
https://api.github.com/repos/cypress-io/cypress
opened
Add Cypress to GitHub package registry
process: release stage: ready for work type: chore
Apparently it's a thing. How many people will use it 🤷‍♀ Instructions: https://help.github.com/en/articles/configuring-npm-for-use-with-github-package-registry
1.0
Add Cypress to GitHub package registry - Apparently it's a thing. How many people will use it 🤷‍♀ Instructions: https://help.github.com/en/articles/configuring-npm-for-use-with-github-package-registry
process
add cypress to github package registry apparently it s a thing how many people will use it 🤷‍♀ instructions
1
142,464
19,090,531,198
IssuesEvent
2021-11-29 11:34:14
sultanabubaker/NuGet_Project_SDK_NonSDK
https://api.github.com/repos/sultanabubaker/NuGet_Project_SDK_NonSDK
opened
CVE-2017-0247 (High) detected in system.text.encodings.web.4.0.0.nupkg
security vulnerability
## CVE-2017-0247 - High Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>system.text.encodings.web.4.0.0.nupkg</b></p></summary> <p>Provides types for encoding and escaping strings for use in JavaScript, HyperText Markup Language (H...</p> <p>Library home page: <a href="https://api.nuget.org/packages/system.text.encodings.web.4.0.0.nupkg">https://api.nuget.org/packages/system.text.encodings.web.4.0.0.nupkg</a></p> <p>Path to dependency file: NuGet_Project_SDK_NonSDK/SDK/SDK.csproj</p> <p>Path to vulnerable library: stem.text.encodings.web/4.0.0/system.text.encodings.web.4.0.0.nupkg</p> <p> Dependency Hierarchy: - :x: **system.text.encodings.web.4.0.0.nupkg** (Vulnerable Library) <p>Found in HEAD commit: <a href="https://github.com/sultanabubaker/NuGet_Project_SDK_NonSDK/commit/2cdcbe42d2efe636b5e9b1d4c29c9da6e2c9b927">2cdcbe42d2efe636b5e9b1d4c29c9da6e2c9b927</a></p> <p>Found in base branch: <b>master</b></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary> <p> A denial of service vulnerability exists when the ASP.NET Core fails to properly validate web requests. NOTE: Microsoft has not commented on third-party claims that the issue is that the TextEncoder.EncodeCore function in the System.Text.Encodings.Web package in ASP.NET Core Mvc before 1.0.4 and 1.1.x before 1.1.3 allows remote attackers to cause a denial of service by leveraging failure to properly calculate the length of 4-byte characters in the Unicode Non-Character range. 
<p>Publish Date: 2017-05-12 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2017-0247>CVE-2017-0247</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: None - Integrity Impact: High - Availability Impact: None </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://github.com/aspnet/Announcements/issues/239">https://github.com/aspnet/Announcements/issues/239</a></p> <p>Release Date: 2017-05-12</p> <p>Fix Resolution: System.Text.Encodings.Web - 4.0.1,4.3.1;System.Net.Http - 4.1.2,4.3.2;System.Net.Http.WinHttpHandler - 4.0.2,4.5.4;System.Net.Security - 4.0.1,4.3.1;System.Net.WebSockets.Client - 4.0.1,4.3.1;Microsoft.AspNetCore.Mvc - 1.0.4,1.1.3;Microsoft.AspNetCore.Mvc.Core - 1.0.4,1.1.3;Microsoft.AspNetCore.Mvc.Abstractions - 1.0.4,1.1.3;Microsoft.AspNetCore.Mvc.ApiExplorer - 1.0.4,1.1.3;Microsoft.AspNetCore.Mvc.Cors - 1.0.4,1.1.3;Microsoft.AspNetCore.Mvc.DataAnnotations - 1.0.4,1.1.3;Microsoft.AspNetCore.Mvc.Formatters.Json - 1.0.4,1.1.3;Microsoft.AspNetCore.Mvc.Formatters.Xml - 1.0.4,1.1.3;Microsoft.AspNetCore.Mvc.Localization - 1.0.4,1.1.3;Microsoft.AspNetCore.Mvc.Razor.Host - 1.0.4,1.1.3;Microsoft.AspNetCore.Mvc.Razor - 1.0.4,1.1.3;Microsoft.AspNetCore.Mvc.TagHelpers - 1.0.4,1.1.3;Microsoft.AspNetCore.Mvc.ViewFeatures - 1.0.4,1.1.3;Microsoft.AspNetCore.Mvc.WebApiCompatShim - 1.0.4,1.1.3</p> </p> </details> <p></p> 
<!-- <REMEDIATE>{"isOpenPROnVulnerability":true,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"Nuget","packageName":"System.Text.Encodings.Web","packageVersion":"4.0.0","packageFilePaths":["/SDK/SDK.csproj"],"isTransitiveDependency":false,"dependencyTree":"System.Text.Encodings.Web:4.0.0","isMinimumFixVersionAvailable":true,"minimumFixVersion":"System.Text.Encodings.Web - 4.0.1,4.3.1;System.Net.Http - 4.1.2,4.3.2;System.Net.Http.WinHttpHandler - 4.0.2,4.5.4;System.Net.Security - 4.0.1,4.3.1;System.Net.WebSockets.Client - 4.0.1,4.3.1;Microsoft.AspNetCore.Mvc - 1.0.4,1.1.3;Microsoft.AspNetCore.Mvc.Core - 1.0.4,1.1.3;Microsoft.AspNetCore.Mvc.Abstractions - 1.0.4,1.1.3;Microsoft.AspNetCore.Mvc.ApiExplorer - 1.0.4,1.1.3;Microsoft.AspNetCore.Mvc.Cors - 1.0.4,1.1.3;Microsoft.AspNetCore.Mvc.DataAnnotations - 1.0.4,1.1.3;Microsoft.AspNetCore.Mvc.Formatters.Json - 1.0.4,1.1.3;Microsoft.AspNetCore.Mvc.Formatters.Xml - 1.0.4,1.1.3;Microsoft.AspNetCore.Mvc.Localization - 1.0.4,1.1.3;Microsoft.AspNetCore.Mvc.Razor.Host - 1.0.4,1.1.3;Microsoft.AspNetCore.Mvc.Razor - 1.0.4,1.1.3;Microsoft.AspNetCore.Mvc.TagHelpers - 1.0.4,1.1.3;Microsoft.AspNetCore.Mvc.ViewFeatures - 1.0.4,1.1.3;Microsoft.AspNetCore.Mvc.WebApiCompatShim - 1.0.4,1.1.3","isBinary":false}],"baseBranches":["master"],"vulnerabilityIdentifier":"CVE-2017-0247","vulnerabilityDetails":"A denial of service vulnerability exists when the ASP.NET Core fails to properly validate web requests. 
NOTE: Microsoft has not commented on third-party claims that the issue is that the TextEncoder.EncodeCore function in the System.Text.Encodings.Web package in ASP.NET Core Mvc before 1.0.4 and 1.1.x before 1.1.3 allows remote attackers to cause a denial of service by leveraging failure to properly calculate the length of 4-byte characters in the Unicode Non-Character range.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2017-0247","cvss3Severity":"high","cvss3Score":"7.5","cvss3Metrics":{"A":"None","AC":"Low","PR":"None","S":"Unchanged","C":"None","UI":"None","AV":"Network","I":"High"},"extraData":{}}</REMEDIATE> -->
True
CVE-2017-0247 (High) detected in system.text.encodings.web.4.0.0.nupkg - ## CVE-2017-0247 - High Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>system.text.encodings.web.4.0.0.nupkg</b></p></summary> <p>Provides types for encoding and escaping strings for use in JavaScript, HyperText Markup Language (H...</p> <p>Library home page: <a href="https://api.nuget.org/packages/system.text.encodings.web.4.0.0.nupkg">https://api.nuget.org/packages/system.text.encodings.web.4.0.0.nupkg</a></p> <p>Path to dependency file: NuGet_Project_SDK_NonSDK/SDK/SDK.csproj</p> <p>Path to vulnerable library: stem.text.encodings.web/4.0.0/system.text.encodings.web.4.0.0.nupkg</p> <p> Dependency Hierarchy: - :x: **system.text.encodings.web.4.0.0.nupkg** (Vulnerable Library) <p>Found in HEAD commit: <a href="https://github.com/sultanabubaker/NuGet_Project_SDK_NonSDK/commit/2cdcbe42d2efe636b5e9b1d4c29c9da6e2c9b927">2cdcbe42d2efe636b5e9b1d4c29c9da6e2c9b927</a></p> <p>Found in base branch: <b>master</b></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary> <p> A denial of service vulnerability exists when the ASP.NET Core fails to properly validate web requests. NOTE: Microsoft has not commented on third-party claims that the issue is that the TextEncoder.EncodeCore function in the System.Text.Encodings.Web package in ASP.NET Core Mvc before 1.0.4 and 1.1.x before 1.1.3 allows remote attackers to cause a denial of service by leveraging failure to properly calculate the length of 4-byte characters in the Unicode Non-Character range. 
<p>Publish Date: 2017-05-12 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2017-0247>CVE-2017-0247</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: None - Integrity Impact: High - Availability Impact: None </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://github.com/aspnet/Announcements/issues/239">https://github.com/aspnet/Announcements/issues/239</a></p> <p>Release Date: 2017-05-12</p> <p>Fix Resolution: System.Text.Encodings.Web - 4.0.1,4.3.1;System.Net.Http - 4.1.2,4.3.2;System.Net.Http.WinHttpHandler - 4.0.2,4.5.4;System.Net.Security - 4.0.1,4.3.1;System.Net.WebSockets.Client - 4.0.1,4.3.1;Microsoft.AspNetCore.Mvc - 1.0.4,1.1.3;Microsoft.AspNetCore.Mvc.Core - 1.0.4,1.1.3;Microsoft.AspNetCore.Mvc.Abstractions - 1.0.4,1.1.3;Microsoft.AspNetCore.Mvc.ApiExplorer - 1.0.4,1.1.3;Microsoft.AspNetCore.Mvc.Cors - 1.0.4,1.1.3;Microsoft.AspNetCore.Mvc.DataAnnotations - 1.0.4,1.1.3;Microsoft.AspNetCore.Mvc.Formatters.Json - 1.0.4,1.1.3;Microsoft.AspNetCore.Mvc.Formatters.Xml - 1.0.4,1.1.3;Microsoft.AspNetCore.Mvc.Localization - 1.0.4,1.1.3;Microsoft.AspNetCore.Mvc.Razor.Host - 1.0.4,1.1.3;Microsoft.AspNetCore.Mvc.Razor - 1.0.4,1.1.3;Microsoft.AspNetCore.Mvc.TagHelpers - 1.0.4,1.1.3;Microsoft.AspNetCore.Mvc.ViewFeatures - 1.0.4,1.1.3;Microsoft.AspNetCore.Mvc.WebApiCompatShim - 1.0.4,1.1.3</p> </p> </details> <p></p> 
<!-- <REMEDIATE>{"isOpenPROnVulnerability":true,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"Nuget","packageName":"System.Text.Encodings.Web","packageVersion":"4.0.0","packageFilePaths":["/SDK/SDK.csproj"],"isTransitiveDependency":false,"dependencyTree":"System.Text.Encodings.Web:4.0.0","isMinimumFixVersionAvailable":true,"minimumFixVersion":"System.Text.Encodings.Web - 4.0.1,4.3.1;System.Net.Http - 4.1.2,4.3.2;System.Net.Http.WinHttpHandler - 4.0.2,4.5.4;System.Net.Security - 4.0.1,4.3.1;System.Net.WebSockets.Client - 4.0.1,4.3.1;Microsoft.AspNetCore.Mvc - 1.0.4,1.1.3;Microsoft.AspNetCore.Mvc.Core - 1.0.4,1.1.3;Microsoft.AspNetCore.Mvc.Abstractions - 1.0.4,1.1.3;Microsoft.AspNetCore.Mvc.ApiExplorer - 1.0.4,1.1.3;Microsoft.AspNetCore.Mvc.Cors - 1.0.4,1.1.3;Microsoft.AspNetCore.Mvc.DataAnnotations - 1.0.4,1.1.3;Microsoft.AspNetCore.Mvc.Formatters.Json - 1.0.4,1.1.3;Microsoft.AspNetCore.Mvc.Formatters.Xml - 1.0.4,1.1.3;Microsoft.AspNetCore.Mvc.Localization - 1.0.4,1.1.3;Microsoft.AspNetCore.Mvc.Razor.Host - 1.0.4,1.1.3;Microsoft.AspNetCore.Mvc.Razor - 1.0.4,1.1.3;Microsoft.AspNetCore.Mvc.TagHelpers - 1.0.4,1.1.3;Microsoft.AspNetCore.Mvc.ViewFeatures - 1.0.4,1.1.3;Microsoft.AspNetCore.Mvc.WebApiCompatShim - 1.0.4,1.1.3","isBinary":false}],"baseBranches":["master"],"vulnerabilityIdentifier":"CVE-2017-0247","vulnerabilityDetails":"A denial of service vulnerability exists when the ASP.NET Core fails to properly validate web requests. 
NOTE: Microsoft has not commented on third-party claims that the issue is that the TextEncoder.EncodeCore function in the System.Text.Encodings.Web package in ASP.NET Core Mvc before 1.0.4 and 1.1.x before 1.1.3 allows remote attackers to cause a denial of service by leveraging failure to properly calculate the length of 4-byte characters in the Unicode Non-Character range.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2017-0247","cvss3Severity":"high","cvss3Score":"7.5","cvss3Metrics":{"A":"None","AC":"Low","PR":"None","S":"Unchanged","C":"None","UI":"None","AV":"Network","I":"High"},"extraData":{}}</REMEDIATE> -->
non_process
cve high detected in system text encodings web nupkg cve high severity vulnerability vulnerable library system text encodings web nupkg provides types for encoding and escaping strings for use in javascript hypertext markup language h library home page a href path to dependency file nuget project sdk nonsdk sdk sdk csproj path to vulnerable library stem text encodings web system text encodings web nupkg dependency hierarchy x system text encodings web nupkg vulnerable library found in head commit a href found in base branch master vulnerability details a denial of service vulnerability exists when the asp net core fails to properly validate web requests note microsoft has not commented on third party claims that the issue is that the textencoder encodecore function in the system text encodings web package in asp net core mvc before and x before allows remote attackers to cause a denial of service by leveraging failure to properly calculate the length of byte characters in the unicode non character range publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact high availability impact none for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution system text encodings web system net http system net http winhttphandler system net security system net websockets client microsoft aspnetcore mvc microsoft aspnetcore mvc core microsoft aspnetcore mvc abstractions microsoft aspnetcore mvc apiexplorer microsoft aspnetcore mvc cors microsoft aspnetcore mvc dataannotations microsoft aspnetcore mvc formatters json microsoft aspnetcore mvc formatters xml microsoft aspnetcore mvc localization microsoft aspnetcore mvc razor host microsoft aspnetcore mvc razor microsoft aspnetcore mvc taghelpers microsoft aspnetcore mvc 
viewfeatures microsoft aspnetcore mvc webapicompatshim isopenpronvulnerability true ispackagebased true isdefaultbranch true packages istransitivedependency false dependencytree system text encodings web isminimumfixversionavailable true minimumfixversion system text encodings web system net http system net http winhttphandler system net security system net websockets client microsoft aspnetcore mvc microsoft aspnetcore mvc core microsoft aspnetcore mvc abstractions microsoft aspnetcore mvc apiexplorer microsoft aspnetcore mvc cors microsoft aspnetcore mvc dataannotations microsoft aspnetcore mvc formatters json microsoft aspnetcore mvc formatters xml microsoft aspnetcore mvc localization microsoft aspnetcore mvc razor host microsoft aspnetcore mvc razor microsoft aspnetcore mvc taghelpers microsoft aspnetcore mvc viewfeatures microsoft aspnetcore mvc webapicompatshim isbinary false basebranches vulnerabilityidentifier cve vulnerabilitydetails a denial of service vulnerability exists when the asp net core fails to properly validate web requests note microsoft has not commented on third party claims that the issue is that the textencoder encodecore function in the system text encodings web package in asp net core mvc before and x before allows remote attackers to cause a denial of service by leveraging failure to properly calculate the length of byte characters in the unicode non character range vulnerabilityurl
0
7,273
10,426,808,618
IssuesEvent
2019-09-16 18:27:19
qgis/QGIS
https://api.github.com/repos/qgis/QGIS
closed
pgdump is no longer a recognized format for the GDAL/OGR convert format algorithm
Feature Request Processing
The option to define the output format is normally given by the extension of the output file. It is no longer possible to specify 'PgDump' as the output format (-f PGDump).
1.0
pgdump is no longer a recognized format for the GDAL/OGR convert format algorithm - The option to define the output format is normally given by the extension of the output file. It is no longer possible to specify 'PgDump' as the output format (-f PGDump).
process
pgdump is no longer a recognized format for the gdal ogr convert format algorithm the option to define the output format is normally given by the extension of the output file it is no longer possible to specify pgdump as the output format f pgdump
1
53,085
13,260,877,196
IssuesEvent
2020-08-20 18:54:58
icecube-trac/tix4
https://api.github.com/repos/icecube-trac/tix4
closed
SLALIB/C needs a real makefile (Trac #677)
Migrated from Trac defect tools/ports
makefile is currently a PoS. needs to be re-done with proper make constructs, and variables. (ie: CC, CFLAGS) <details> <summary><em>Migrated from <a href="https://code.icecube.wisc.edu/projects/icecube/ticket/677">https://code.icecube.wisc.edu/projects/icecube/ticket/677</a>, reported by negaand owned by nega</em></summary> <p> ```json { "status": "closed", "changetime": "2012-05-29T21:39:28", "_ts": "1338327568000000", "description": "makefile is currently a PoS. needs to be re-done with proper make constructs, and variables. (ie: CC, CFLAGS)", "reporter": "nega", "cc": "", "resolution": "fixed", "time": "2012-05-29T19:00:40", "component": "tools/ports", "summary": "SLALIB/C needs a real makefile", "priority": "normal", "keywords": "slalib", "milestone": "", "owner": "nega", "type": "defect" } ``` </p> </details>
1.0
SLALIB/C needs a real makefile (Trac #677) - makefile is currently a PoS. needs to be re-done with proper make constructs, and variables. (ie: CC, CFLAGS) <details> <summary><em>Migrated from <a href="https://code.icecube.wisc.edu/projects/icecube/ticket/677">https://code.icecube.wisc.edu/projects/icecube/ticket/677</a>, reported by negaand owned by nega</em></summary> <p> ```json { "status": "closed", "changetime": "2012-05-29T21:39:28", "_ts": "1338327568000000", "description": "makefile is currently a PoS. needs to be re-done with proper make constructs, and variables. (ie: CC, CFLAGS)", "reporter": "nega", "cc": "", "resolution": "fixed", "time": "2012-05-29T19:00:40", "component": "tools/ports", "summary": "SLALIB/C needs a real makefile", "priority": "normal", "keywords": "slalib", "milestone": "", "owner": "nega", "type": "defect" } ``` </p> </details>
non_process
slalib c needs a real makefile trac makefile is currently a pos needs to be re done with proper make constructs and variables ie cc cflags migrated from json status closed changetime ts description makefile is currently a pos needs to be re done with proper make constructs and variables ie cc cflags reporter nega cc resolution fixed time component tools ports summary slalib c needs a real makefile priority normal keywords slalib milestone owner nega type defect
0
21,981
30,472,628,225
IssuesEvent
2023-07-17 14:31:29
USGS-WiM/StreamStats
https://api.github.com/repos/USGS-WiM/StreamStats
closed
Add refresh button to Batch Status and Manage Queue tabs
Batch Processor
Add a refresh button to the Batch Status and Manage Queue tabs so that the user can easily see the latest queue/results
1.0
Add refresh button to Batch Status and Manage Queue tabs - Add a refresh button to the Batch Status and Manage Queue tabs so that the user can easily see the latest queue/results
process
add refresh button to batch status and manage queue tabs add a refresh button to the batch status and manage queue tabs so that the user can easily see the latest queue results
1
53,588
28,299,291,257
IssuesEvent
2023-04-10 03:28:22
ClickHouse/ClickHouse
https://api.github.com/repos/ClickHouse/ClickHouse
opened
Use `number` as an index for `system.numbers`
feature performance
**Use case** People want to write queries as follows: ``` SELECT number FROM system.numbers WHERE number BETWEEN 10 AND 100; SELECT number FROM system.numbers WHERE number IN (123, 456); ``` and expect this query to be smart enough to read only requested ranges. **Describe the solution you'd like** Determine ranges by `KeyCondition`.
True
Use `number` as an index for `system.numbers` - **Use case** People want to write queries as follows: ``` SELECT number FROM system.numbers WHERE number BETWEEN 10 AND 100; SELECT number FROM system.numbers WHERE number IN (123, 456); ``` and expect this query to be smart enough to read only requested ranges. **Describe the solution you'd like** Determine ranges by `KeyCondition`.
non_process
use number as an index for system numbers use case people want to write queries as follows select number from system numbers where number between and select number from system numbers where number in and expect this query to be smart enough to read only requested ranges describe the solution you d like determine ranges by keycondition
0
432,805
30,295,598,570
IssuesEvent
2023-07-09 20:21:35
kopia/kopia
https://api.github.com/repos/kopia/kopia
closed
Can you make a video explaining the retention policies in more detail
help wanted question onboarding-experience documentation stale
I'm not sure I understand the retention policies and how they apply to backups and what it means when selecting 4 weekly, 4 monthly etc... How do these retention policies affect backups?
1.0
Can you make a video explaining the retention policies in more detail - I'm not sure I understand the retention policies and how they apply to backups and what it means when selecting 4 weekly, 4 monthly etc... How do these retention policies affect backups?
non_process
can you make a video explaining the retention policies in more detail i m not sure i understand the retention policies and how they apply to backups and what it means when selecting weekly monthly etc how do these retention policies affect backups
0
16,329
20,985,764,704
IssuesEvent
2022-03-29 02:51:38
qgis/QGIS
https://api.github.com/repos/qgis/QGIS
closed
GDAL: "Clip vector by mask layer" takes forever as it does not seem to consider the mask layer's extent
Feedback stale Processing Bug
### What is the bug or the crash? I've been testing the "Clip vector by mask layer" and the "Clip vector by extent" tool of the GDAL processing suite I encountered very long processing times (>10 min) with "Clip vector by mask layer" which could easily be avoided by considering the clipped layer's extent. I was using a large vector point layer (20GB) as input. The mask layer (New scratch layer) consists of only the one simple geometry shown in the screenshot. Clipping the layer just by another layer's extent was a matter of seconds. Why not process the vector layer's features only within the mask layer's extent? At least the user should be offered to limit the calculation to the mask layer's extent by adding the -spat option. ![image](https://user-images.githubusercontent.com/67278094/152150175-13237117-633d-49f7-a618-dcff24a20f78.png) ![image](https://user-images.githubusercontent.com/67278094/152148175-c9044619-1982-46e4-b247-3711cf49143a.png) Not sure if this is a feature request though or a matter to be forwarded to GDAL. ### Steps to reproduce the issue 1. Load a large vector point layer and create a very simple vector mask layer as shown in the screenshot 2. Use the GDAL algorithm "Clip vector by mask layer" 3. Observe it takes very long to process. ### Versions QGIS version 3.22.3-Białowieża QGIS code revision 1628765ec7 Qt version 5.15.2 Python version 3.9.5 GDAL/OGR version 3.4.1 PROJ version 8.2.1 EPSG Registry database version v10.041 (2021-12-03) Compiled against GEOS 3.10.0-CAPI-1.16.0 Running against GEOS 3.10.2-CAPI-1.16.0 SQLite version 3.35.2 PDAL version 2.3.0 PostgreSQL client version 13.0 SpatiaLite version 5.0.1 QWT version 6.1.3 QScintilla2 version 2.11.5 OS version Windows 10 Version 2009 ### Supported QGIS version - [X] I'm running a supported QGIS version according to the roadmap. ### New profile - [ ] I tried with a new QGIS profile ### Additional context _No response_
1.0
GDAL: "Clip vector by mask layer" takes forever as it does not seem to consider the mask layer's extent - ### What is the bug or the crash? I've been testing the "Clip vector by mask layer" and the "Clip vector by extent" tool of the GDAL processing suite I encountered very long processing times (>10 min) with "Clip vector by mask layer" which could easily be avoided by considering the clipped layer's extent. I was using a large vector point layer (20GB) as input. The mask layer (New scratch layer) consists of only the one simple geometry shown in the screenshot. Clipping the layer just by another layer's extent was a matter of seconds. Why not process the vector layer's features only within the mask layer's extent? At least the user should be offered to limit the calculation to the mask layer's extent by adding the -spat option. ![image](https://user-images.githubusercontent.com/67278094/152150175-13237117-633d-49f7-a618-dcff24a20f78.png) ![image](https://user-images.githubusercontent.com/67278094/152148175-c9044619-1982-46e4-b247-3711cf49143a.png) Not sure if this is a feature request though or a matter to be forwarded to GDAL. ### Steps to reproduce the issue 1. Load a large vector point layer and create a very simple vector mask layer as shown in the screenshot 2. Use the GDAL algorithm "Clip vector by mask layer" 3. Observe it takes very long to process. ### Versions QGIS version 3.22.3-Białowieża QGIS code revision 1628765ec7 Qt version 5.15.2 Python version 3.9.5 GDAL/OGR version 3.4.1 PROJ version 8.2.1 EPSG Registry database version v10.041 (2021-12-03) Compiled against GEOS 3.10.0-CAPI-1.16.0 Running against GEOS 3.10.2-CAPI-1.16.0 SQLite version 3.35.2 PDAL version 2.3.0 PostgreSQL client version 13.0 SpatiaLite version 5.0.1 QWT version 6.1.3 QScintilla2 version 2.11.5 OS version Windows 10 Version 2009 ### Supported QGIS version - [X] I'm running a supported QGIS version according to the roadmap. 
### New profile - [ ] I tried with a new QGIS profile ### Additional context _No response_
process
gdal clip vector by mask layer takes forever as it does not seem to consider the mask layer s extent what is the bug or the crash i ve been testing the clip vector by mask layer and the clip vector by extent tool of the gdal processing suite i encountered very long processing times min with clip vector by mask layer which could easily be avoided by considering the clipped layer s extent i was using a large vector point layer as input the mask layer new scratch layer consists of only the one simple geometry shown in the screenshot clipping the layer just by another layer s extent was a matter of seconds why not process the vector layer s features only within the mask layer s extent at least the user should be offered to limit the calculation to the mask layer s extent by adding the spat option not sure if this is a feature request though or a matter to be forwarded to gdal steps to reproduce the issue load a large vector point layer and create a very simple vector mask layer as shown in the screenshot use the gdal algorithm clip vector by mask layer observe it takes very long to process versions qgis version białowieża qgis code revision qt version python version gdal ogr version proj version epsg registry database version compiled against geos capi running against geos capi sqlite version pdal version postgresql client version spatialite version qwt version version os version windows version supported qgis version i m running a supported qgis version according to the roadmap new profile i tried with a new qgis profile additional context no response
1
95,901
8,580,702,762
IssuesEvent
2018-11-13 12:47:41
sozu-proxy/sozu
https://api.github.com/repos/sozu-proxy/sozu
closed
new configuration messages: add/remove listeners
Configuration enhancement needs testing
the way TCP listeners are managed right now is slightly annoying: - HTTP and HTTPS proxy can only listen on one socket (this is not a very big deal, most of the time we'll only have HTTP on `0.0.0.0:80` and HTTPS on `0.0.0.0:443`) - TCP listeners are linked to TCP fronts, so there's a special case to register a listener when adding a TCP front - there is no way to tell sozu to start or stop listening Proposal: - implement an `AddListener` message with an IP address and port. When the message is received, the HTTP, HTTPS or TCP proxy starts listening on that address - implement a `RemoveListener` message with an IP address and port. When the message is received, the HTTP, HTTPS or TCP proxy stops listening on that address - link fronts to specific listeners. For HTTP and HTTPS, that means some application fronts could be available on all or part of the listeners. For TCP, there would be only one front for each listener Is it worth the effort? How would it work with the code that transmits sockets during upgrades?
1.0
new configuration messages: add/remove listeners - the way TCP listeners are managed right now is slightly annoying: - HTTP and HTTPS proxy can only listen on one socket (this is not a very big deal, most of the time we'll only have HTTP on `0.0.0.0:80` and HTTPS on `0.0.0.0:443`) - TCP listeners are linked to TCP fronts, so there's a special case to register a listener when adding a TCP front - there is no way to tell sozu to start or stop listening Proposal: - implement an `AddListener` message with an IP address and port. When the message is received, the HTTP, HTTPS or TCP proxy starts listening on that address - implement a `RemoveListener` message with an IP address and port. When the message is received, the HTTP, HTTPS or TCP proxy stops listening on that address - link fronts to specific listeners. For HTTP and HTTPS, that means some application fronts could be available on all or part of the listeners. For TCP, there would be only one front for each listener Is it worth the effort? How would it work with the code that transmits sockets during upgrades?
non_process
new configuration messages add remove listeners the way tcp listeners are managed right now is slightly annoying http and https proxy can only listen on one socket this is not a very big deal most of the time we ll only have http on and https on tcp listeners are linked to tcp fronts so there s a special case to register a listener when adding a tcp front there is no way to tell sozu to start or stop listening proposal implement an addlistener message with an ip address and port when the message is received the http https or tcp proxy starts listening on that address implement a removelistener message with an ip address and port when the message is received the http https or tcp proxy stops listening on that address link fronts to specific listeners for http and https that means some application fronts could be available on all or part of the listeners for tcp there would be only one front for each listener is it worth the effort how would it work with the code that transmits sockets during upgrades
0
71,458
8,657,041,522
IssuesEvent
2018-11-27 20:09:00
cityofaustin/techstack
https://api.github.com/repos/cityofaustin/techstack
closed
Janis v2 Design: breakpoint exploration
Janis 2.0 Resident Interface Size: M Team: Design + Research
Work on meeting USWDS 2.0 [break point criteria](https://v2.designsystem.digital.gov/utilities/layout-grid/): - [ ] Mobile large ≥480px - [ ] Tablet ≥640px - [ ] Desktop ≥ 1024px
1.0
Janis v2 Design: breakpoint exploration - Work on meeting USWDS 2.0 [break point criteria](https://v2.designsystem.digital.gov/utilities/layout-grid/): - [ ] Mobile large ≥480px - [ ] Tablet ≥640px - [ ] Desktop ≥ 1024px
non_process
janis design breakpoint exploration work on meeting uswds mobile large ≥ tablet ≥ desktop ≥
0
15,459
19,720,539,191
IssuesEvent
2022-01-13 14:59:37
pnp/sp-dev-fx-webparts
https://api.github.com/repos/pnp/sp-dev-fx-webparts
closed
react-birthdays wont build
type:bug-suspected status:node-compatibility status:wrong-author
### Sample react-birthdays ### Author(s) @VesaJuvonen ### What happened? Receive following error when execute npm install 8344 error code 1 8345 error path c:\react-birthdays\node_modules\deasync 8346 error command failed 8347 error command C:\WINDOWS\system32\cmd.exe /d /s /c node ./build.js 8348 error gyp info it worked if it ends with ok 8348 error gyp info using node-gyp@3.8.0 8348 error gyp info using node@16.13.1 | win32 | x64 8348 error gyp ERR! configure error 8348 error gyp ERR! stack Error: Command failed: C:\Python310\python.EXE -c import sys; print "%s.%s.%s" % sys.version_info[:3]; 8348 error gyp ERR! stack File "<string>", line 1 8348 error gyp ERR! stack import sys; print "%s.%s.%s" % sys.version_info[:3]; 8348 error gyp ERR! stack ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 8348 error gyp ERR! stack SyntaxError: Missing parentheses in call to 'print'. Did you mean print(...)? 8348 error gyp ERR! stack 8348 error gyp ERR! stack at ChildProcess.exithandler (node:child_process:397:12) 8348 error gyp ERR! stack at ChildProcess.emit (node:events:390:28) 8348 error gyp ERR! stack at maybeClose (node:internal/child_process:1064:16) 8348 error gyp ERR! stack at Socket.<anonymous> (node:internal/child_process:450:11) 8348 error gyp ERR! stack at Socket.emit (node:events:390:28) 8348 error gyp ERR! stack at Pipe.<anonymous> (node:net:687:12) 8348 error gyp ERR! System Windows_NT 10.0.19042 8348 error gyp ERR! command "C:\\Program Files\\nodejs\\node.exe" "c:\\react-birthdays\\node_modules\\node-gyp\\bin\\node-gyp.js" "rebuild" 8348 error gyp ERR! cwd c:\react-birthdays\node_modules\deasync 8348 error gyp ERR! node -v v16.13.1 8348 error gyp ERR! node-gyp -v v3.8.0 8348 error gyp ERR! not ok 8348 error Build failed ### Steps to reproduce 1. 
run npm install within react-birthdays folder ### Expected behavior build should be successful ### Target SharePoint environment SharePoint Online ### Developer environment Windows ### Browsers - [ ] Internet Explorer - [X] Microsoft Edge - [X] Google Chrome - [ ] FireFox - [ ] Safari - [ ] mobile (iOS/iPadOS) - [ ] mobile (Android) - [ ] not applicable - [ ] other (enter in the "Additional environment details" area below) ### Node.js version v16.13.1 ### Additional environment details _No response_
True
react-birthdays wont build - ### Sample react-birthdays ### Author(s) @VesaJuvonen ### What happened? Receive following error when execute npm install 8344 error code 1 8345 error path c:\react-birthdays\node_modules\deasync 8346 error command failed 8347 error command C:\WINDOWS\system32\cmd.exe /d /s /c node ./build.js 8348 error gyp info it worked if it ends with ok 8348 error gyp info using node-gyp@3.8.0 8348 error gyp info using node@16.13.1 | win32 | x64 8348 error gyp ERR! configure error 8348 error gyp ERR! stack Error: Command failed: C:\Python310\python.EXE -c import sys; print "%s.%s.%s" % sys.version_info[:3]; 8348 error gyp ERR! stack File "<string>", line 1 8348 error gyp ERR! stack import sys; print "%s.%s.%s" % sys.version_info[:3]; 8348 error gyp ERR! stack ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 8348 error gyp ERR! stack SyntaxError: Missing parentheses in call to 'print'. Did you mean print(...)? 8348 error gyp ERR! stack 8348 error gyp ERR! stack at ChildProcess.exithandler (node:child_process:397:12) 8348 error gyp ERR! stack at ChildProcess.emit (node:events:390:28) 8348 error gyp ERR! stack at maybeClose (node:internal/child_process:1064:16) 8348 error gyp ERR! stack at Socket.<anonymous> (node:internal/child_process:450:11) 8348 error gyp ERR! stack at Socket.emit (node:events:390:28) 8348 error gyp ERR! stack at Pipe.<anonymous> (node:net:687:12) 8348 error gyp ERR! System Windows_NT 10.0.19042 8348 error gyp ERR! command "C:\\Program Files\\nodejs\\node.exe" "c:\\react-birthdays\\node_modules\\node-gyp\\bin\\node-gyp.js" "rebuild" 8348 error gyp ERR! cwd c:\react-birthdays\node_modules\deasync 8348 error gyp ERR! node -v v16.13.1 8348 error gyp ERR! node-gyp -v v3.8.0 8348 error gyp ERR! not ok 8348 error Build failed ### Steps to reproduce 1. 
run npm install within react-birthdays folder ### Expected behavior build should be successful ### Target SharePoint environment SharePoint Online ### Developer environment Windows ### Browsers - [ ] Internet Explorer - [X] Microsoft Edge - [X] Google Chrome - [ ] FireFox - [ ] Safari - [ ] mobile (iOS/iPadOS) - [ ] mobile (Android) - [ ] not applicable - [ ] other (enter in the "Additional environment details" area below) ### Node.js version v16.13.1 ### Additional environment details _No response_
non_process
react birthdays wont build sample react birthdays author s vesajuvonen what happened receive following error when execute npm install error code error path c react birthdays node modules deasync error command failed error command c windows cmd exe d s c node build js error gyp info it worked if it ends with ok error gyp info using node gyp error gyp info using node error gyp err configure error error gyp err stack error command failed c python exe c import sys print s s s sys version info error gyp err stack file line error gyp err stack import sys print s s s sys version info error gyp err stack error gyp err stack syntaxerror missing parentheses in call to print did you mean print error gyp err stack error gyp err stack at childprocess exithandler node child process error gyp err stack at childprocess emit node events error gyp err stack at maybeclose node internal child process error gyp err stack at socket node internal child process error gyp err stack at socket emit node events error gyp err stack at pipe node net error gyp err system windows nt error gyp err command c program files nodejs node exe c react birthdays node modules node gyp bin node gyp js rebuild error gyp err cwd c react birthdays node modules deasync error gyp err node v error gyp err node gyp v error gyp err not ok error build failed steps to reproduce run npm install within react birthdays folder expected behavior build should be successful target sharepoint environment sharepoint online developer environment windows browsers internet explorer microsoft edge google chrome firefox safari mobile ios ipados mobile android not applicable other enter in the additional environment details area below node js version additional environment details no response
0
3,823
6,802,227,549
IssuesEvent
2017-11-02 19:25:59
ncbo/bioportal-project
https://api.github.com/repos/ncbo/bioportal-project
closed
LCMPT: classes page returns "Not Found"
ontology processing problem
### Issue 1: LCMPT, a SKOS ontology appears to have parsed correctly, but when navigating to the classes page: http://bioportal.bioontology.org/ontologies/LCMPT?p=classes BioPortal returns "The page you are looking for wasn't found. Please try again." ### Issue 2: The ontology pull location is set to an HTML page, instead of the RDF file: http://id.loc.gov/authorities/performanceMediums
1.0
LCMPT: classes page returns "Not Found" - ### Issue 1: LCMPT, a SKOS ontology appears to have parsed correctly, but when navigating to the classes page: http://bioportal.bioontology.org/ontologies/LCMPT?p=classes BioPortal returns "The page you are looking for wasn't found. Please try again." ### Issue 2: The ontology pull location is set to an HTML page, instead of the RDF file: http://id.loc.gov/authorities/performanceMediums
process
lcmpt classes page returns not found issue lcmpt a skos ontology appears to have parsed correctly but when navigating to the classes page bioportal returns the page you are looking for wasn t found please try again issue the ontology pull location is set to an html page instead of the rdf file
1
18,556
24,555,473,162
IssuesEvent
2022-10-12 15:32:42
GoogleCloudPlatform/fda-mystudies
https://api.github.com/repos/GoogleCloudPlatform/fda-mystudies
closed
[Android] [Offline indicator] Delete app account > I do not wish to delete my account button should be disabled in the confirmation screen
Bug P2 Android Process: Fixed Process: Tested QA Process: Tested dev
Delete app account > I do not wish to delete my account button should be disabled in the confirmation screen ![Screenshot_20220712-172129_FDA MyStudies](https://user-images.githubusercontent.com/86007179/178484346-aab480d5-bf7d-4432-898e-142385a08ec6.jpg)
3.0
[Android] [Offline indicator] Delete app account > I do not wish to delete my account button should be disabled in the confirmation screen - Delete app account > I do not wish to delete my account button should be disabled in the confirmation screen ![Screenshot_20220712-172129_FDA MyStudies](https://user-images.githubusercontent.com/86007179/178484346-aab480d5-bf7d-4432-898e-142385a08ec6.jpg)
process
delete app account i do not wish to delete my account button should be disabled in the confirmation screen delete app account i do not wish to delete my account button should be disabled in the confirmation screen
1
5,295
8,102,206,294
IssuesEvent
2018-08-12 22:52:47
gradiuscypher/security-learning-resources
https://api.github.com/repos/gradiuscypher/security-learning-resources
opened
Design Consideration - How should the project be organized?
process
We'll want to make sure the project is organized and easy to find the right information, but we don't want to fragment things too much, because that could reduce discover-ability. We also need to consider how to structure our JSON files, and how we should split those files, or if we want a single file for everything.
1.0
Design Consideration - How should the project be organized? - We'll want to make sure the project is organized and easy to find the right information, but we don't want to fragment things too much, because that could reduce discover-ability. We also need to consider how to structure our JSON files, and how we should split those files, or if we want a single file for everything.
process
design consideration how should the project be organized we ll want to make sure the project is organized and easy to find the right information but we don t want to fragment things too much because that could reduce discover ability we also need to consider how to structure our json files and how we should split those files or if we want a single file for everything
1
16,283
20,907,400,558
IssuesEvent
2022-03-24 04:49:02
javaer996/javaer996-comments
https://api.github.com/repos/javaer996/javaer996-comments
opened
Spring系列-BeanFactoryPostProcessor详解 - TENG JIANG BLOG
Gitalk /Spring%E7%B3%BB%E5%88%97-BeanFactoryPostProcessor
https://www.tengjiang.site/Spring%E7%B3%BB%E5%88%97-BeanFactoryPostProcessor%E8%AF%A6%E8%A7%A3.html BeanFactoryPostProcessor BeanDefinitionRegistryPostProcessor
1.0
Spring系列-BeanFactoryPostProcessor详解 - TENG JIANG BLOG - https://www.tengjiang.site/Spring%E7%B3%BB%E5%88%97-BeanFactoryPostProcessor%E8%AF%A6%E8%A7%A3.html BeanFactoryPostProcessor BeanDefinitionRegistryPostProcessor
process
spring系列 beanfactorypostprocessor详解 teng jiang blog beanfactorypostprocessor beandefinitionregistrypostprocessor
1
121
3,550,851,649
IssuesEvent
2016-01-20 23:49:30
Liverpool-UK/somebody-should
https://api.github.com/repos/Liverpool-UK/somebody-should
opened
Create the Design Principles for a Digital Liverpool
enhancement help wanted People
Matt Edgar does a good job of making a start on what he thinks the design principles for a city is (he's particularly thinking of Leeds, but we already share a canal so...) - http://blog.mattedgar.com/2015/11/17/design-principles-for-an-enterprising-city/ And there are lots of good ideas and initiatives in http://nycroadmap.us/ Richard Pope has also listed some government-as-a-platform skewed rules that might also be good food for thought at http://blog.memespring.co.uk/2015/11/12/10-rules/ We should make a start on a set of design principles for a digital Liverpool.
1.0
Create the Design Principles for a Digital Liverpool - Matt Edgar does a good job of making a start on what he thinks the design principles for a city is (he's particularly thinking of Leeds, but we already share a canal so...) - http://blog.mattedgar.com/2015/11/17/design-principles-for-an-enterprising-city/ And there are lots of good ideas and initiatives in http://nycroadmap.us/ Richard Pope has also listed some government-as-a-platform skewed rules that might also be good food for thought at http://blog.memespring.co.uk/2015/11/12/10-rules/ We should make a start on a set of design principles for a digital Liverpool.
non_process
create the design principles for a digital liverpool matt edgar does a good job of making a start on what he thinks the design principles for a city is he s particularly thinking of leeds but we already share a canal so and there are lots of good ideas and initiatives in richard pope has also listed some government as a platform skewed rules that might also be good food for thought at we should make a start on a set of design principles for a digital liverpool
0
3,993
6,919,099,119
IssuesEvent
2017-11-29 14:30:00
nlbdev/pipeline
https://api.github.com/repos/nlbdev/pipeline
closed
Info about missing print page numbers in "om boka"
enhancement pre-processing Priority:2 - Medium
autodetect that before starting the conversion and disable page number / page markers if there are no page numbers make that info available for *om boken* (ingen sidetall fra visuell utgave)
1.0
Info about missing print page numbers in "om boka" - autodetect that before starting the conversion and disable page number / page markers if there are no page numbers make that info available for *om boken* (ingen sidetall fra visuell utgave)
process
info about missing print page numbers in om boka autodetect that before starting the conversion and disable page number page markers if there are no page numbers make that info available for om boken ingen sidetall fra visuell utgave
1
167,714
6,345,021,363
IssuesEvent
2017-07-27 21:14:42
vmware/vic
https://api.github.com/repos/vmware/vic
closed
Add mount data from portlayer to replenish persona container cache on VCH restart
area/docker component/docker-api-server component/portlayer priority/medium
As a user of VIC, I expect docker inspect to return mount information for a container after a VCH restarts. Acceptance Criteria - [ ] Updated robot scripts to verify that the docker inspect returns valid mount data after a VCH restarts --- This is the second part of the docker inspect/mount data issue. This task is to add either a new portlayer call to get the mount data for a container or add the data to our current ContainerInfo call. It also requires replenishing the container cache on start. This missing data is needed for docker cp.
1.0
Add mount data from portlayer to replenish persona container cache on VCH restart - As a user of VIC, I expect docker inspect to return mount information for a container after a VCH restarts. Acceptance Criteria - [ ] Updated robot scripts to verify that the docker inspect returns valid mount data after a VCH restarts --- This is the second part of the docker inspect/mount data issue. This task is to add either a new portlayer call to get the mount data for a container or add the data to our current ContainerInfo call. It also requires replenishing the container cache on start. This missing data is needed for docker cp.
non_process
add mount data from portlayer to replenish persona container cache on vch restart as a user of vic i expect docker inspect to return mount information for a container after a vch restarts acceptance criteria updated robot scripts to verify that the docker inspect returns valid mount data after a vch restarts this is the second part of the docker inspect mount data issue this task is to add either a new portlayer call to get the mount data for a container or add the data to our current containerinfo call it also requires replenishing the container cache on start this missing data is needed for docker cp
0
5,419
8,252,205,582
IssuesEvent
2018-09-12 10:04:30
linnovate/root
https://api.github.com/repos/linnovate/root
opened
Can't delete files from folders and offices
Process bug bug
@abrahamos open new folder/office. fill the fields. upload a file. click on delete. in the new window write delete. the file didn't delete.
1.0
Can't delete files from folders and offices - @abrahamos open new folder/office. fill the fields. upload a file. click on delete. in the new window write delete. the file didn't delete.
process
can t delete files from folders and offices abrahamos open new folder office fill the fields upload a file click on delete in the new window write delete the file didn t delete
1
734,252
25,340,970,520
IssuesEvent
2022-11-18 21:34:55
grpc/grpc
https://api.github.com/repos/grpc/grpc
opened
Refactor internal data structure callbacks_and_connectivities
kind/enhancement lang/Python priority/P2
`callbacks_and_connectivities` is a member of internal class `_ChannelConnectivityState`, currently it have a type of `List[Sequence[Union[ Callable[[grpc.ChannelConnectivity], None], Optional[grpc.ChannelConnectivity]]]]` which can be simplified to `List[Tuple[Callable[[grpc.ChannelConnectivity], None], Optional[grpc.ChannelConnectivity]]]`. This requires some changes to how we deal with `callbacks_and_connectivities` (we have operations like `callback_and_connectivity[1] = state.connectivity` and ` state.callbacks_and_connectivities.append([callback, None])`), this issue is used to track those changes.
1.0
Refactor internal data structure callbacks_and_connectivities - `callbacks_and_connectivities` is a member of internal class `_ChannelConnectivityState`, currently it have a type of `List[Sequence[Union[ Callable[[grpc.ChannelConnectivity], None], Optional[grpc.ChannelConnectivity]]]]` which can be simplified to `List[Tuple[Callable[[grpc.ChannelConnectivity], None], Optional[grpc.ChannelConnectivity]]]`. This requires some changes to how we deal with `callbacks_and_connectivities` (we have operations like `callback_and_connectivity[1] = state.connectivity` and ` state.callbacks_and_connectivities.append([callback, None])`), this issue is used to track those changes.
non_process
refactor internal data structure callbacks and connectivities callbacks and connectivities is a member of internal class channelconnectivitystate currently it have a type of list sequence union callable none optional which can be simplified to list none optional this requires some changes to how we deal with callbacks and connectivities we have operations like callback and connectivity state connectivity and state callbacks and connectivities append this issue is used to track those changes
0
19,912
26,373,774,993
IssuesEvent
2023-01-11 23:24:58
geneontology/go-ontology
https://api.github.com/repos/geneontology/go-ontology
closed
Wrong parent: GO:0044495 modulation of blood pressure in another organism
parent relationship query multi-species process
GO:0044495 modulation of blood pressure in another organism is_a 'regulation of blood pressure' This parent should be removed, and replaced by 'GO:0044553 modulation of biological quality in another organism' Thanks, Pascale
1.0
Wrong parent: GO:0044495 modulation of blood pressure in another organism - GO:0044495 modulation of blood pressure in another organism is_a 'regulation of blood pressure' This parent should be removed, and replaced by 'GO:0044553 modulation of biological quality in another organism' Thanks, Pascale
process
wrong parent go modulation of blood pressure in another organism go modulation of blood pressure in another organism is a regulation of blood pressure this parent should be removed and replaced by go modulation of biological quality in another organism thanks pascale
1
8,117
11,303,013,914
IssuesEvent
2020-01-17 19:03:34
processing/processing
https://api.github.com/repos/processing/processing
closed
Parsing of nested generics
preprocessor
Consider the following code: ``` class Three<A, B, C> {} class Two<A, B> {} class One<A> {} Two<One<One<Integer>>, Integer> fn = null; ``` The processing pre-processor give us the error "Maybe too many >characters" even though the type signature is correct at line 4. Interestingly enough both of those compile correctly: ``` Two<One<One<Integer> >, Integer> fn = null; Two<Integer, One<One<Integer>>> fn = null; ``` Trying with `Three` gives us those results: ``` Three<One<One<Integer>>,Integer, Integer> fn = null; //doesn't compile Three<Integer,One<One<Integer>>, Integer> fn = null; //doesn't compile Three<Integer, Integer, One<One<Integer>>> fn = null; //compile ``` There definitely is a problem with how processing parses intermediate Type arguments. tested with processing 3.1.1
1.0
Parsing of nested generics - Consider the following code: ``` class Three<A, B, C> {} class Two<A, B> {} class One<A> {} Two<One<One<Integer>>, Integer> fn = null; ``` The processing pre-processor give us the error "Maybe too many >characters" even though the type signature is correct at line 4. Interestingly enough both of those compile correctly: ``` Two<One<One<Integer> >, Integer> fn = null; Two<Integer, One<One<Integer>>> fn = null; ``` Trying with `Three` gives us those results: ``` Three<One<One<Integer>>,Integer, Integer> fn = null; //doesn't compile Three<Integer,One<One<Integer>>, Integer> fn = null; //doesn't compile Three<Integer, Integer, One<One<Integer>>> fn = null; //compile ``` There definitely is a problem with how processing parses intermediate Type arguments. tested with processing 3.1.1
process
parsing of nested generics consider the following code class three class two class one two integer fn null the processing pre processor give us the error maybe too many characters even though the type signature is correct at line interestingly enough both of those compile correctly two integer fn null two fn null trying with three gives us those results three integer integer fn null doesn t compile three integer fn null doesn t compile three fn null compile there definitely is a problem with how processing parses intermediate type arguments tested with processing
1
242,551
18,668,418,345
IssuesEvent
2021-10-30 08:17:26
101Loop/drf-user
https://api.github.com/repos/101Loop/drf-user
closed
Fix makefile in docs section
bug documentation
### Description of the bug <!-- A clear and concise description of what the bug is. --> While running sphinx-build to create HTML pages, it asks for a source dir now. Just put `$(SOURCEDIR)` in `$(SPHINXBUILD) -b html $(ALLSPHINXOPTS) $(BUILDDIR)/html` after `$(ALLSPHINXOPTS)`. This will fix the issue. Try to run `make html` first to check if it's working or not. ### What you expected to happen <!-- A clear and concise description of what you expected to happen. --> ### How to reproduce (as minimally and precisely as possible) <!-- If applicable, add screenshots to help explain your problem. --> ### Anything else we need to know? <!-- Add any other context about the problem here. -->
1.0
Fix makefile in docs section - ### Description of the bug <!-- A clear and concise description of what the bug is. --> While running sphinx-build to create HTML pages, it asks for a source dir now. Just put `$(SOURCEDIR)` in `$(SPHINXBUILD) -b html $(ALLSPHINXOPTS) $(BUILDDIR)/html` after `$(ALLSPHINXOPTS)`. This will fix the issue. Try to run `make html` first to check if it's working or not. ### What you expected to happen <!-- A clear and concise description of what you expected to happen. --> ### How to reproduce (as minimally and precisely as possible) <!-- If applicable, add screenshots to help explain your problem. --> ### Anything else we need to know? <!-- Add any other context about the problem here. -->
non_process
fix makefile in docs section description of the bug while running sphinx build to create html pages it asks for a source dir now just put sourcedir in sphinxbuild b html allsphinxopts builddir html after allsphinxopts this will fix the issue try to run make html first to check if it s working or not what you expected to happen how to reproduce as minimally and precisely as possible anything else we need to know
0
223,037
24,711,624,029
IssuesEvent
2022-10-20 01:34:23
Fanthony1805/Projet_Ecosysteme_Big_Data_ING5_SI02_App_AYMEMARTIN_DECASTRO_FERREYROLLES
https://api.github.com/repos/Fanthony1805/Projet_Ecosysteme_Big_Data_ING5_SI02_App_AYMEMARTIN_DECASTRO_FERREYROLLES
opened
CVE-2022-3517 (High) detected in minimatch-3.0.4.tgz
security vulnerability
## CVE-2022-3517 - High Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>minimatch-3.0.4.tgz</b></p></summary> <p>a glob matcher in javascript</p> <p>Library home page: <a href="https://registry.npmjs.org/minimatch/-/minimatch-3.0.4.tgz">https://registry.npmjs.org/minimatch/-/minimatch-3.0.4.tgz</a></p> <p>Path to dependency file: /FrontEnd/package.json</p> <p>Path to vulnerable library: /FrontEnd/node_modules/minimatch/package.json</p> <p> Dependency Hierarchy: - nodemon-2.0.15.tgz (Root Library) - :x: **minimatch-3.0.4.tgz** (Vulnerable Library) <p>Found in HEAD commit: <a href="https://github.com/Fanthony1805/Projet_Ecosysteme_Big_Data_ING5_SI02_App_AYMEMARTIN_DECASTRO_FERREYROLLES/commit/573714b5b1131323e9ed60dca94fe5740d6f6dd8">573714b5b1131323e9ed60dca94fe5740d6f6dd8</a></p> <p>Found in base branch: <b>main</b></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary> <p> A vulnerability was found in the minimatch package. This flaw allows a Regular Expression Denial of Service (ReDoS) when calling the braceExpand function with specific arguments, resulting in a Denial of Service. 
<p>Publish Date: 2022-10-17 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2022-3517>CVE-2022-3517</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: None - Integrity Impact: None - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Release Date: 2022-10-17</p> <p>Fix Resolution: minimatch - 3.0.5</p> </p> </details> <p></p> *** Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
True
CVE-2022-3517 (High) detected in minimatch-3.0.4.tgz - ## CVE-2022-3517 - High Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>minimatch-3.0.4.tgz</b></p></summary> <p>a glob matcher in javascript</p> <p>Library home page: <a href="https://registry.npmjs.org/minimatch/-/minimatch-3.0.4.tgz">https://registry.npmjs.org/minimatch/-/minimatch-3.0.4.tgz</a></p> <p>Path to dependency file: /FrontEnd/package.json</p> <p>Path to vulnerable library: /FrontEnd/node_modules/minimatch/package.json</p> <p> Dependency Hierarchy: - nodemon-2.0.15.tgz (Root Library) - :x: **minimatch-3.0.4.tgz** (Vulnerable Library) <p>Found in HEAD commit: <a href="https://github.com/Fanthony1805/Projet_Ecosysteme_Big_Data_ING5_SI02_App_AYMEMARTIN_DECASTRO_FERREYROLLES/commit/573714b5b1131323e9ed60dca94fe5740d6f6dd8">573714b5b1131323e9ed60dca94fe5740d6f6dd8</a></p> <p>Found in base branch: <b>main</b></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary> <p> A vulnerability was found in the minimatch package. This flaw allows a Regular Expression Denial of Service (ReDoS) when calling the braceExpand function with specific arguments, resulting in a Denial of Service. 
<p>Publish Date: 2022-10-17 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2022-3517>CVE-2022-3517</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: None - Integrity Impact: None - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Release Date: 2022-10-17</p> <p>Fix Resolution: minimatch - 3.0.5</p> </p> </details> <p></p> *** Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
non_process
cve high detected in minimatch tgz cve high severity vulnerability vulnerable library minimatch tgz a glob matcher in javascript library home page a href path to dependency file frontend package json path to vulnerable library frontend node modules minimatch package json dependency hierarchy nodemon tgz root library x minimatch tgz vulnerable library found in head commit a href found in base branch main vulnerability details a vulnerability was found in the minimatch package this flaw allows a regular expression denial of service redos when calling the braceexpand function with specific arguments resulting in a denial of service publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version release date fix resolution minimatch step up your open source security game with mend
0
21,437
29,477,625,601
IssuesEvent
2023-06-02 00:38:11
devssa/onde-codar-em-salvador
https://api.github.com/repos/devssa/onde-codar-em-salvador
closed
[Remoto] Node.js Developer na Coodesh
SALVADOR BACK-END INFRAESTRUTURA FULL-STACK SCRUM BDD GIT TYPESCRIPT NODE.JS DOCKER DEVOPS AWS REQUISITOS REMOTO PROCESSOS INOVAÇÃO BACKEND GITHUB KANBAN CI CD SEGURANÇA GITFLOW UMA C QUALIDADE CLEAN XP TESTES AUTOMATIZADOS MICROSERVICES METODOLOGIAS ÁGEIS EXPRESS NEGÓCIOS MONITORAMENTO SRE PAAS Stale
## Descrição da vaga: Esta é uma vaga de um parceiro da plataforma Coodesh, ao candidatar-se você terá acesso as informações completas sobre a empresa e benefícios. Fique atento ao redirecionamento que vai te levar para uma url [https://coodesh.com](https://coodesh.com/jobs/nodejs-developer-160233085?utm_source=github&utm_medium=devssa-onde-codar-em-salvador&modal=open) com o pop-up personalizado de candidatura. 👋 <p>O Grupo fácil está buscando Node Developer para fazer parte de seu time!</p> <p>🏢 Quem somos: </p> <p>O Grupo Fácil está focado em desenvolvimento de produtos na área da saúde, com o objetivo principal em atender os grandes nomes em operadoras de saúde do mercado brasileiro. Há mais de 27 anos dedicados na entrega de valor aos nossos clientes, focados na melhoria contínua e evolução natural no mercado de software brasileiro.</p> <p>Atualmente ingressamos em uma nova grande jornada de inovação, onde novos processos e produtos serão elaborados para entregar mais valor a nossos clientes, por isso, esperamos que você faça parte dessa jornada conosco.&nbsp;</p> <p>💻 Como é nosso time? </p> <p>Trabalhamos de forma remota, usando as práticas de comunicação assíncrona e workflow para o desenvolvimento ágil, a principal ideia é trabalhar duro, focados nas metas e objetivos, porém sem perder a liberdade e a inspiração que nos faz acordar cedo todos os dias.</p> <p>Temos pessoas focadas no negócio, no frontend, outros no backend e até no perfil full-stack, esperamos de você que nos mostre onde se sente melhor. 
😉&nbsp;</p> <p>O que esperamos de você?</p> <ul> <li>Transparência (comunicação clara e eficiente sem ruído);</li> <li>Senso crítico com as demandas e processos;</li> <li>Capacidade de pedir ajuda e/ou ajudar em momentos de crise/impedimentos;</li> <li>Capacidade de "se virar", "correr atrás" de algo desconhecido, assumir riscos (com parcimônia);</li> <li>Capacidade de ensinar, instruir e orientar pessoas menos experientes do projeto ou time (multiplique o conhecimento).</li> </ul> <p>Vamos falar de código? </p> <ul> <li>Aqui usamos Typescript com S.O.L.I.D;</li> <li>Usamos microservices, docker, AWS e tudo que há de bom 😁;</li> <li>A arquitetura é desacoplada, testada com Hexagonal (Ports and Adapters);</li> <li>No backend usamos Node.js, Express/Fastify, TypeORM, Oracle e etc;</li> <li>E mais alguns detalhes que ficaremos felizes em contar durante nossa entrevista técnica.</li> </ul> ## Grupo Fácil: <p>Ao longo de 27 anos de história, o Grupo Fácil se tornou referência nacional em sistemas, softwares e serviços para a gestão de negócios nas áreas financeira e de crédito, da saúde e no setor imobiliário.</p> <p>O Grupo Fácil é formado por um conjunto de empresas que se destacam pela solidez e ousadia em projetos que otimizam processos e oferecem mais segurança e rentabilidade para seus clientes.&nbsp;</p><a href='https://coodesh.com/companies/grupo-facil'>Veja mais no site</a> ## Habilidades: - Node.js - Express.js - TypeORM - Oracle ## Local: 100% Remoto ## Requisitos: - Experiência com desenvolvimento de grandes projeto (escala, performance, qualidade e etc); - Experiência na codificação de testes automatizados (unit test, BDD, Integration test); - Experiência com Git/Github e GitFlow (já usou github actions?!); - Experiência com Node.js e Express.js. 
## Diferenciais: - Conhecimento de Clean Architecture e/ou Hexagonal Architecture; - Conhecimentos/Experiência de automação e/ou infraestrutura como código (DevOps, CI/CD); - Conhecimento de metodologias ágeis (Kanban, Scrum, XP, Scrumban); - Conhecimento e/ou certificação em PaaS/Clouds (Preferencialmente AWS); - Fila, processamento assíncrono, tópicos, modelo pub/sub; - Sabe algo sobre monitoramento, autoscalling, stress-test, load-test, SRE?? ## Benefícios: - Convênio com farmácia; - Participação nos lucros; - Vale refeição; - Vale transporte; - Parcerias e convênios; - Programas de saúde e bem-estar. ## Como se candidatar: Candidatar-se exclusivamente através da plataforma Coodesh no link a seguir: [Node.js Developer na Grupo Fácil](https://coodesh.com/jobs/nodejs-developer-160233085?utm_source=github&utm_medium=devssa-onde-codar-em-salvador&modal=open) Após candidatar-se via plataforma Coodesh e validar o seu login, você poderá acompanhar e receber todas as interações do processo por lá. Utilize a opção **Pedir Feedback** entre uma etapa e outra na vaga que se candidatou. Isso fará com que a pessoa **Recruiter** responsável pelo processo na empresa receba a notificação. ## Labels #### Alocação Remoto #### Categoria Back-End
1.0
[Remoto] Node.js Developer na Coodesh - ## Descrição da vaga: Esta é uma vaga de um parceiro da plataforma Coodesh, ao candidatar-se você terá acesso as informações completas sobre a empresa e benefícios. Fique atento ao redirecionamento que vai te levar para uma url [https://coodesh.com](https://coodesh.com/jobs/nodejs-developer-160233085?utm_source=github&utm_medium=devssa-onde-codar-em-salvador&modal=open) com o pop-up personalizado de candidatura. 👋 <p>O Grupo fácil está buscando Node Developer para fazer parte de seu time!</p> <p>🏢 Quem somos: </p> <p>O Grupo Fácil está focado em desenvolvimento de produtos na área da saúde, com o objetivo principal em atender os grandes nomes em operadoras de saúde do mercado brasileiro. Há mais de 27 anos dedicados na entrega de valor aos nossos clientes, focados na melhoria contínua e evolução natural no mercado de software brasileiro.</p> <p>Atualmente ingressamos em uma nova grande jornada de inovação, onde novos processos e produtos serão elaborados para entregar mais valor a nossos clientes, por isso, esperamos que você faça parte dessa jornada conosco.&nbsp;</p> <p>💻 Como é nosso time? </p> <p>Trabalhamos de forma remota, usando as práticas de comunicação assíncrona e workflow para o desenvolvimento ágil, a principal ideia é trabalhar duro, focados nas metas e objetivos, porém sem perder a liberdade e a inspiração que nos faz acordar cedo todos os dias.</p> <p>Temos pessoas focadas no negócio, no frontend, outros no backend e até no perfil full-stack, esperamos de você que nos mostre onde se sente melhor. 
😉&nbsp;</p> <p>O que esperamos de você?</p> <ul> <li>Transparência (comunicação clara e eficiente sem ruído);</li> <li>Senso crítico com as demandas e processos;</li> <li>Capacidade de pedir ajuda e/ou ajudar em momentos de crise/impedimentos;</li> <li>Capacidade de "se virar", "correr atrás" de algo desconhecido, assumir riscos (com parcimônia);</li> <li>Capacidade de ensinar, instruir e orientar pessoas menos experientes do projeto ou time (multiplique o conhecimento).</li> </ul> <p>Vamos falar de código? </p> <ul> <li>Aqui usamos Typescript com S.O.L.I.D;</li> <li>Usamos microservices, docker, AWS e tudo que há de bom 😁;</li> <li>A arquitetura é desacoplada, testada com Hexagonal (Ports and Adapters);</li> <li>No backend usamos Node.js, Express/Fastify, TypeORM, Oracle e etc;</li> <li>E mais alguns detalhes que ficaremos felizes em contar durante nossa entrevista técnica.</li> </ul> ## Grupo Fácil: <p>Ao longo de 27 anos de história, o Grupo Fácil se tornou referência nacional em sistemas, softwares e serviços para a gestão de negócios nas áreas financeira e de crédito, da saúde e no setor imobiliário.</p> <p>O Grupo Fácil é formado por um conjunto de empresas que se destacam pela solidez e ousadia em projetos que otimizam processos e oferecem mais segurança e rentabilidade para seus clientes.&nbsp;</p><a href='https://coodesh.com/companies/grupo-facil'>Veja mais no site</a> ## Habilidades: - Node.js - Express.js - TypeORM - Oracle ## Local: 100% Remoto ## Requisitos: - Experiência com desenvolvimento de grandes projeto (escala, performance, qualidade e etc); - Experiência na codificação de testes automatizados (unit test, BDD, Integration test); - Experiência com Git/Github e GitFlow (já usou github actions?!); - Experiência com Node.js e Express.js. 
## Diferenciais: - Conhecimento de Clean Architecture e/ou Hexagonal Architecture; - Conhecimentos/Experiência de automação e/ou infraestrutura como código (DevOps, CI/CD); - Conhecimento de metodologias ágeis (Kanban, Scrum, XP, Scrumban); - Conhecimento e/ou certificação em PaaS/Clouds (Preferencialmente AWS); - Fila, processamento assíncrono, tópicos, modelo pub/sub; - Sabe algo sobre monitoramento, autoscalling, stress-test, load-test, SRE?? ## Benefícios: - Convênio com farmácia; - Participação nos lucros; - Vale refeição; - Vale transporte; - Parcerias e convênios; - Programas de saúde e bem-estar. ## Como se candidatar: Candidatar-se exclusivamente através da plataforma Coodesh no link a seguir: [Node.js Developer na Grupo Fácil](https://coodesh.com/jobs/nodejs-developer-160233085?utm_source=github&utm_medium=devssa-onde-codar-em-salvador&modal=open) Após candidatar-se via plataforma Coodesh e validar o seu login, você poderá acompanhar e receber todas as interações do processo por lá. Utilize a opção **Pedir Feedback** entre uma etapa e outra na vaga que se candidatou. Isso fará com que a pessoa **Recruiter** responsável pelo processo na empresa receba a notificação. ## Labels #### Alocação Remoto #### Categoria Back-End
process
node js developer na coodesh descrição da vaga esta é uma vaga de um parceiro da plataforma coodesh ao candidatar se você terá acesso as informações completas sobre a empresa e benefícios fique atento ao redirecionamento que vai te levar para uma url com o pop up personalizado de candidatura 👋 o grupo fácil está buscando node developer para fazer parte de seu time 🏢 quem somos o grupo fácil está focado em desenvolvimento de produtos na área da saúde com o objetivo principal em atender os grandes nomes em operadoras de saúde do mercado brasileiro há mais de anos dedicados na entrega de valor aos nossos clientes focados na melhoria contínua e evolução natural no mercado de software brasileiro atualmente ingressamos em uma nova grande jornada de inovação onde novos processos e produtos serão elaborados para entregar mais valor a nossos clientes por isso esperamos que você faça parte dessa jornada conosco nbsp 💻 como é nosso time trabalhamos de forma remota usando as práticas de comunicação assíncrona e workflow para o desenvolvimento ágil a principal ideia é trabalhar duro focados nas metas e objetivos porém sem perder a liberdade e a inspiração que nos faz acordar cedo todos os dias temos pessoas focadas no negócio no frontend outros no backend e até no perfil full stack esperamos de você que nos mostre onde se sente melhor 😉 nbsp o que esperamos de você transparência comunicação clara e eficiente sem ruído senso crítico com as demandas e processos capacidade de pedir ajuda e ou ajudar em momentos de crise impedimentos capacidade de se virar correr atrás de algo desconhecido assumir riscos com parcimônia capacidade de ensinar instruir e orientar pessoas menos experientes do projeto ou time multiplique o conhecimento vamos falar de código aqui usamos typescript com s o l i d usamos microservices docker aws e tudo que há de bom 😁 a arquitetura é desacoplada testada com hexagonal ports and adapters no backend usamos node js express fastify typeorm oracle e etc e mais 
alguns detalhes que ficaremos felizes em contar durante nossa entrevista técnica grupo fácil ao longo de anos de história o grupo fácil se tornou referência nacional em sistemas softwares e serviços para a gestão de negócios nas áreas financeira e de crédito da saúde e no setor imobiliário o grupo fácil é formado por um conjunto de empresas que se destacam pela solidez e ousadia em projetos que otimizam processos e oferecem mais segurança e rentabilidade para seus clientes nbsp habilidades node js express js typeorm oracle local remoto requisitos experiência com desenvolvimento de grandes projeto escala performance qualidade e etc experiência na codificação de testes automatizados unit test bdd integration test experiência com git github e gitflow já usou github actions experiência com node js e express js diferenciais conhecimento de clean architecture e ou hexagonal architecture conhecimentos experiência de automação e ou infraestrutura como código devops ci cd conhecimento de metodologias ágeis kanban scrum xp scrumban conhecimento e ou certificação em paas clouds preferencialmente aws fila processamento assíncrono tópicos modelo pub sub sabe algo sobre monitoramento autoscalling stress test load test sre benefícios convênio com farmácia participação nos lucros vale refeição vale transporte parcerias e convênios programas de saúde e bem estar como se candidatar candidatar se exclusivamente através da plataforma coodesh no link a seguir após candidatar se via plataforma coodesh e validar o seu login você poderá acompanhar e receber todas as interações do processo por lá utilize a opção pedir feedback entre uma etapa e outra na vaga que se candidatou isso fará com que a pessoa recruiter responsável pelo processo na empresa receba a notificação labels alocação remoto categoria back end
1
22,380
31,142,283,200
IssuesEvent
2023-08-16 01:44:01
cypress-io/cypress
https://api.github.com/repos/cypress-io/cypress
closed
Flaky test: catch-all for net_stubbing flake
stage: backlog process: flaky test topic: flake ❄️ priority: medium stage: flake topic: net_stubbing.cy.ts stale
### Link to dashboard or CircleCI failure 1. [empty string](https://dashboard.cypress.io/projects/ypt4pf/runs/38126/test-results/9c238974-bdf1-4ea1-9d5b-fcba767840b7) 2. UPDATE: ended up just skipping the whole file ### Link to failing test in GitHub See skipped flaky tests in [packages/driver/cypress/e2e/commands/net_stubbing.cy.ts](https://github.com/cypress-io/cypress/blob/develop/packages/driver/cypress/e2e/commands/net_stubbing.cy.ts) ### Analysis We see A LOT of flake originating from tests in `net_stubbing.cy.ts`. Many have been individually logged and have the label `topic: net_stubbing.cy.ts`, the remainder will be logged under this catch-all issue. ### Cypress Version 10.6.0 ### Other Search for this issue number in the codebase to find the test(s) skipped until this issue is fixed
1.0
Flaky test: catch-all for net_stubbing flake - ### Link to dashboard or CircleCI failure 1. [empty string](https://dashboard.cypress.io/projects/ypt4pf/runs/38126/test-results/9c238974-bdf1-4ea1-9d5b-fcba767840b7) 2. UPDATE: ended up just skipping the whole file ### Link to failing test in GitHub See skipped flaky tests in [packages/driver/cypress/e2e/commands/net_stubbing.cy.ts](https://github.com/cypress-io/cypress/blob/develop/packages/driver/cypress/e2e/commands/net_stubbing.cy.ts) ### Analysis We see A LOT of flake originating from tests in `net_stubbing.cy.ts`. Many have been individually logged and have the label `topic: net_stubbing.cy.ts`, the remainder will be logged under this catch-all issue. ### Cypress Version 10.6.0 ### Other Search for this issue number in the codebase to find the test(s) skipped until this issue is fixed
process
flaky test catch all for net stubbing flake link to dashboard or circleci failure update ended up just skipping the whole file link to failing test in github see skipped flaky tests in analysis we see a lot of flake originating from tests in net stubbing cy ts many have been individually logged and have the label topic net stubbing cy ts the remainder will be logged under this catch all issue cypress version other search for this issue number in the codebase to find the test s skipped until this issue is fixed
1
90,954
26,227,174,673
IssuesEvent
2023-01-04 19:52:25
Servoh/TheTower
https://api.github.com/repos/Servoh/TheTower
closed
Tron lines have collision
bug Building Waiting-for-release upstream private
We need to disable collision on the tron outlines, as it is also screwing with the navmesh Closes #16
1.0
Tron lines have collision - We need to disable collision on the tron outlines, as it is also screwing with the navmesh Closes #16
non_process
tron lines have collision we need to disable collision on the tron outlines as it is also screwing with the navmesh closes
0
14,851
18,245,212,036
IssuesEvent
2021-10-01 17:25:31
bazelbuild/bazel
https://api.github.com/repos/bazelbuild/bazel
closed
attribute "build_setting_default" is not configurable
type: support / not a bug (process) team-Configurability
Hi, I want to make the "build_setting_default" of a config_setting configurable based on a select: ``` my_conf = rule( implementation = _my_conf_impl, build_setting = config.string(flag = True), ) my_conf( name = "my_conf", build_setting_default = select({ "cpu:x86_64": "foo", "cpu:aarch64": "bar", }), ) ``` However I get the error: ``` attribute "build_setting_default" is not configurable ``` Why is that? Is there a workaround to achieve what I intend above?
1.0
attribute "build_setting_default" is not configurable - Hi, I want to make the "build_setting_default" of a config_setting configurable based on a select: ``` my_conf = rule( implementation = _my_conf_impl, build_setting = config.string(flag = True), ) my_conf( name = "my_conf", build_setting_default = select({ "cpu:x86_64": "foo", "cpu:aarch64": "bar", }), ) ``` However I get the error: ``` attribute "build_setting_default" is not configurable ``` Why is that? Is there a workaround to achieve what I intend above?
process
attribute build setting default is not configurable hi i want to make the build setting default of a config setting configurable based on a select my conf rule implementation my conf impl build setting config string flag true my conf name my conf build setting default select cpu foo cpu bar however i get the error attribute build setting default is not configurable why is that is there a workaround to achieve what i intend above
1
2,752
5,664,079,983
IssuesEvent
2017-04-11 00:45:22
codefordenver/org
https://api.github.com/repos/codefordenver/org
closed
Consolidate and Update CfD information
Process ready Writing
As a member of CfD, I want updated information centralized in an easy to find location, so I can know what I'm doing and how to contribute. - [x] Update new meetup member email - [x] Add CfD website url to meetup page - [ ] Make website more centralized source of info and means of contact (probably redirect through meetup)
1.0
Consolidate and Update CfD information - As a member of CfD, I want updated information centralized in an easy to find location, so I can know what I'm doing and how to contribute. - [x] Update new meetup member email - [x] Add CfD website url to meetup page - [ ] Make website more centralized source of info and means of contact (probably redirect through meetup)
process
consolidate and update cfd information as a member of cfd i want updated information centralized in an easy to find location so i can know what i m doing and how to contribute update new meetup member email add cfd website url to meetup page make website more centralized source of info and means of contact probably redirect through meetup
1
3,486
6,555,794,276
IssuesEvent
2017-09-06 11:48:47
zero-os/0-stor
https://api.github.com/repos/zero-os/0-stor
closed
Merge ObjectCreate with Write & ObjectGet with Read
process_wontfix type_bug
ObjectCreate & ObjectGet currently create & get data without processing it through pipes. While Write&Get process it through pipes. Client should merge this those methods, depend on the pipes existence
1.0
Merge ObjectCreate with Write & ObjectGet with Read - ObjectCreate & ObjectGet currently create & get data without processing it through pipes. While Write&Get process it through pipes. Client should merge this those methods, depend on the pipes existence
process
merge objectcreate with write objectget with read objectcreate objectget currently create get data without processing it through pipes while write get process it through pipes client should merge this those methods depend on the pipes existence
1
7,857
11,030,408,899
IssuesEvent
2019-12-06 15:41:58
GoogleCloudPlatform/java-docs-samples
https://api.github.com/repos/GoogleCloudPlatform/java-docs-samples
closed
Use com.google.cloud:libraries-bom for dependency management
type: process
Moving forward, we're encouraging users to import the `com.google.cloud:libraries-bom` in the `dependencyManagement` section of the user's `pom.xml` to resolve dependency compatibility. Users will no longer need to specify individual version numbers for google-cloud-x libraries.
1.0
Use com.google.cloud:libraries-bom for dependency management - Moving forward, we're encouraging users to import the `com.google.cloud:libraries-bom` in the `dependencyManagement` section of the user's `pom.xml` to resolve dependency compatibility. Users will no longer need to specify individual version numbers for google-cloud-x libraries.
process
use com google cloud libraries bom for dependency management moving forward we re encouraging users to import the com google cloud libraries bom in the dependencymanagement section of the user s pom xml to resolve dependency compatibility users will no longer need to specify individual version numbers for google cloud x libraries
1
408,409
27,663,033,483
IssuesEvent
2023-03-12 18:41:31
FWDekker/mommy
https://api.github.com/repos/FWDekker/mommy
opened
rewrite installation instructions
enhancement documentation: user
i think it'd be more useful to categorise the installation instructions by operating system instead of by installation method.
1.0
rewrite installation instructions - i think it'd be more useful to categorise the installation instructions by operating system instead of by installation method.
non_process
rewrite installation instructions i think it d be more useful to categorise the installation instructions by operating system instead of by installation method
0
4,840
7,735,101,344
IssuesEvent
2018-05-27 11:19:03
bonopi07/2018-1_advML_project
https://api.github.com/repos/bonopi07/2018-1_advML_project
closed
데이터 수집 및 정제
processing data
1. 데이터셋 탐색 및 수집 (googling, kaggle dataset 참조) 2. 딥러닝의 input data로 사용하기 위해 데이터 정제 기법 적용
1.0
데이터 수집 및 정제 - 1. 데이터셋 탐색 및 수집 (googling, kaggle dataset 참조) 2. 딥러닝의 input data로 사용하기 위해 데이터 정제 기법 적용
process
데이터 수집 및 정제 데이터셋 탐색 및 수집 googling kaggle dataset 참조 딥러닝의 input data로 사용하기 위해 데이터 정제 기법 적용
1
19,921
26,383,037,335
IssuesEvent
2023-01-12 10:07:29
deepset-ai/haystack
https://api.github.com/repos/deepset-ai/haystack
closed
FAQ CSV Indexing using YAML description, adding missing nodes
type:feature topic:file_converter topic:preprocessing
**Is your feature request related to a problem? Please describe.** Not really. **Describe the solution you'd like** As an Haystack user I would like to be able to import FAQ (as CSV files), index them though API endpoint using an indexing pipeline described as YAML. **Describe alternatives you've considered** I've added 2 nodes to do so : - **Csv2Documents** : Takes a file input, parse it an FAQ CSV file to output Documents. - **EmbedDocuments** : Build using and embeder (actually a Retriever), takes Documents as input and run the `embeded.run_indexing`on it. Those 2 nodes are really basic but seems necessary to then build a FAQ with CSV indexing pipeline. I will push it today. **Additional context** Index pipeline description looks like this : ```yaml # To allow your IDE to autocomplete and validate your YAML pipelines, name them as <name of your choice>.haystack-pipeline.yml version: ignore components: # define all the building-blocks for Pipeline - name: DocumentStore type: ElasticsearchDocumentStore params: host: localhost embedding_field: question_emb embedding_dim: 384 excluded_meta_data: - question_emb similarity: cosine - name: Retriever type: EmbeddingRetriever params: document_store: DocumentStore # params can reference other components defined in the YAML embedding_model: sentence-transformers/all-MiniLM-L6-v2 scale_score: False - name: Doc2Answers # custom-name for the component; helpful for visualization & debugging type: Docs2Answers # Haystack Class name for the component - name: CsvToDocs type: CsvToDocuments - name: EmbedDocs type: EmbedDocuments params: embeder: Retriever pipelines: - name: query # a sample extractive-qa Pipeline nodes: - name: Retriever inputs: [Query] - name: Doc2Answers inputs: [Retriever] - name: indexing nodes: - name: CsvToDocs inputs: [File] - name: EmbedDocuments inputs: [ CsvToDocs ] - name: DocumentStore inputs: [ EmbedDocuments ] ```
1.0
FAQ CSV Indexing using YAML description, adding missing nodes - **Is your feature request related to a problem? Please describe.** Not really. **Describe the solution you'd like** As an Haystack user I would like to be able to import FAQ (as CSV files), index them though API endpoint using an indexing pipeline described as YAML. **Describe alternatives you've considered** I've added 2 nodes to do so : - **Csv2Documents** : Takes a file input, parse it an FAQ CSV file to output Documents. - **EmbedDocuments** : Build using and embeder (actually a Retriever), takes Documents as input and run the `embeded.run_indexing`on it. Those 2 nodes are really basic but seems necessary to then build a FAQ with CSV indexing pipeline. I will push it today. **Additional context** Index pipeline description looks like this : ```yaml # To allow your IDE to autocomplete and validate your YAML pipelines, name them as <name of your choice>.haystack-pipeline.yml version: ignore components: # define all the building-blocks for Pipeline - name: DocumentStore type: ElasticsearchDocumentStore params: host: localhost embedding_field: question_emb embedding_dim: 384 excluded_meta_data: - question_emb similarity: cosine - name: Retriever type: EmbeddingRetriever params: document_store: DocumentStore # params can reference other components defined in the YAML embedding_model: sentence-transformers/all-MiniLM-L6-v2 scale_score: False - name: Doc2Answers # custom-name for the component; helpful for visualization & debugging type: Docs2Answers # Haystack Class name for the component - name: CsvToDocs type: CsvToDocuments - name: EmbedDocs type: EmbedDocuments params: embeder: Retriever pipelines: - name: query # a sample extractive-qa Pipeline nodes: - name: Retriever inputs: [Query] - name: Doc2Answers inputs: [Retriever] - name: indexing nodes: - name: CsvToDocs inputs: [File] - name: EmbedDocuments inputs: [ CsvToDocs ] - name: DocumentStore inputs: [ EmbedDocuments ] ```
process
faq csv indexing using yaml description adding missing nodes is your feature request related to a problem please describe not really describe the solution you d like as an haystack user i would like to be able to import faq as csv files index them though api endpoint using an indexing pipeline described as yaml describe alternatives you ve considered i ve added nodes to do so takes a file input parse it an faq csv file to output documents embeddocuments build using and embeder actually a retriever takes documents as input and run the embeded run indexing on it those nodes are really basic but seems necessary to then build a faq with csv indexing pipeline i will push it today additional context index pipeline description looks like this yaml to allow your ide to autocomplete and validate your yaml pipelines name them as haystack pipeline yml version ignore components define all the building blocks for pipeline name documentstore type elasticsearchdocumentstore params host localhost embedding field question emb embedding dim excluded meta data question emb similarity cosine name retriever type embeddingretriever params document store documentstore params can reference other components defined in the yaml embedding model sentence transformers all minilm scale score false name custom name for the component helpful for visualization debugging type haystack class name for the component name csvtodocs type csvtodocuments name embeddocs type embeddocuments params embeder retriever pipelines name query a sample extractive qa pipeline nodes name retriever inputs name inputs name indexing nodes name csvtodocs inputs name embeddocuments inputs name documentstore inputs
1
123,134
12,193,534,665
IssuesEvent
2020-04-29 14:32:34
clever-ch/unq-tip-documentation
https://api.github.com/repos/clever-ch/unq-tip-documentation
closed
Armen la documentación
documentation
Creen una wiki de github. - Estaría bueno que armen el README de este proyecto con: _El Logo y el nombre del proyecto. _Una breve descripción o objetivo del proyecto. _Poner los links de los repositorios (Back / Front) - Sigan la guía - "02 - TTIP - Guía para la Prueba de concepto de la Arquitectura", miren la "Parte 3 Documentación", que se encuentra en la pagina de la materia en la sección "Entregables", para poder armar la documentación. Tienen ejemplos más abajo para que puedan guiarse.
1.0
Armen la documentación - Creen una wiki de github. - Estaría bueno que armen el README de este proyecto con: _El Logo y el nombre del proyecto. _Una breve descripción o objetivo del proyecto. _Poner los links de los repositorios (Back / Front) - Sigan la guía - "02 - TTIP - Guía para la Prueba de concepto de la Arquitectura", miren la "Parte 3 Documentación", que se encuentra en la pagina de la materia en la sección "Entregables", para poder armar la documentación. Tienen ejemplos más abajo para que puedan guiarse.
non_process
armen la documentación creen una wiki de github estaría bueno que armen el readme de este proyecto con el logo y el nombre del proyecto una breve descripción o objetivo del proyecto poner los links de los repositorios back front sigan la guía ttip guía para la prueba de concepto de la arquitectura miren la parte documentación que se encuentra en la pagina de la materia en la sección entregables para poder armar la documentación tienen ejemplos más abajo para que puedan guiarse
0
67,759
13,023,231,435
IssuesEvent
2020-07-27 09:39:04
ppy/osu-web
https://api.github.com/repos/ppy/osu-web
closed
Remove all usage of DB_HOST_READONLY
type:code-quality
Has been migrated away at an infrastructure level, with all existing usages being converted to routing rules. Can be removed from this project.
1.0
Remove all usage of DB_HOST_READONLY - Has been migrated away at an infrastructure level, with all existing usages being converted to routing rules. Can be removed from this project.
non_process
remove all usage of db host readonly has been migrated away at an infrastructure level with all existing usages being converted to routing rules can be removed from this project
0
62,553
7,611,133,900
IssuesEvent
2018-05-01 12:30:26
hyperledger/composer
https://api.github.com/repos/hyperledger/composer
closed
Can't import new BNA into Playground when running as limited user
P2 design playground stale
## Context In Playground (0.11.2) If I'm running under the context of a user with limited rights, I'm not able to replace the contents of my workspace with a new BNA file. I am running in simulation mode, so not against a real Fabric instance. The error I get is: `t: Participant 'org.acme.vehicle.auction.Member#matt' does not have 'READ' access to resource 'org.hyperledger.composer.system.AssetRegistry#org.hyperledger.composer.system.Identity'` ## Expected Behavior I would expect to be always able to start anew and import a new project. ## Steps to Reproduce 1. Import and deploy (say) Car Auction 2. Issue a new identity no admin rights 3. Switch to that user's context 4. Try importing a new sample - error is produced ## Your Environment * Version used: Playground 0.11.2 * Environment name and version (e.g. Chrome 39, node.js 5.4): Firefox
1.0
Can't import new BNA into Playground when running as limited user - ## Context In Playground (0.11.2) If I'm running under the context of a user with limited rights, I'm not able to replace the contents of my workspace with a new BNA file. I am running in simulation mode, so not against a real Fabric instance. The error I get is: `t: Participant 'org.acme.vehicle.auction.Member#matt' does not have 'READ' access to resource 'org.hyperledger.composer.system.AssetRegistry#org.hyperledger.composer.system.Identity'` ## Expected Behavior I would expect to be always able to start anew and import a new project. ## Steps to Reproduce 1. Import and deploy (say) Car Auction 2. Issue a new identity no admin rights 3. Switch to that user's context 4. Try importing a new sample - error is produced ## Your Environment * Version used: Playground 0.11.2 * Environment name and version (e.g. Chrome 39, node.js 5.4): Firefox
non_process
can t import new bna into playground when running as limited user context in playground if i m running under the context of a user with limited rights i m not able to replace the contents of my workspace with a new bna file i am running in simulation mode so not against a real fabric instance the error i get is t participant org acme vehicle auction member matt does not have read access to resource org hyperledger composer system assetregistry org hyperledger composer system identity expected behavior i would expect to be always able to start anew and import a new project steps to reproduce import and deploy say car auction issue a new identity no admin rights switch to that user s context try importing a new sample error is produced your environment version used playground environment name and version e g chrome node js firefox
0
159,771
12,490,058,467
IssuesEvent
2020-05-31 21:55:41
RPTools/maptool
https://api.github.com/repos/RPTools/maptool
closed
Macros are not set to be Player Editable when they should be
bug tested
**Describe the bug** Some macros are not set to be Player Editable when they should be: - Macros created through the `createMacro()` command - Macros created by dropping a token from outside MapTool **To Reproduce** Steps to reproduce the behavior: 1. As a player, create a macro through one of these two methods 2. Notice the macro has the Player Editable field set to `false`. **Expected behavior** The Player Editable field is set to `true`. **MapTool Info** - Version: 1.7, develop - Install: New **Desktop (please complete the following information):** - OS: Windows - Version: 10
1.0
Macros are not set to be Player Editable when they should be - **Describe the bug** Some macros are not set to be Player Editable when they should be: - Macros created through the `createMacro()` command - Macros created by dropping a token from outside MapTool **To Reproduce** Steps to reproduce the behavior: 1. As a player, create a macro through one of these two methods 2. Notice the macro has the Player Editable field set to `false`. **Expected behavior** The Player Editable field is set to `true`. **MapTool Info** - Version: 1.7, develop - Install: New **Desktop (please complete the following information):** - OS: Windows - Version: 10
non_process
macros are not set to be player editable when they should be describe the bug some macros are not set to be player editable when they should be macros created through the createmacro command macros created by dropping a token from outside maptool to reproduce steps to reproduce the behavior as a player create a macro through one of these two methods notice the macro has the player editable field set to false expected behavior the player editable field is set to true maptool info version develop install new desktop please complete the following information os windows version
0