| Unnamed: 0 (int64, 0–832k) | id (float64, 2.49B–32.1B) | type (stringclasses, 1 value) | created_at (stringlengths, 19) | repo (stringlengths, 7–112) | repo_url (stringlengths, 36–141) | action (stringclasses, 3 values) | title (stringlengths, 1–744) | labels (stringlengths, 4–574) | body (stringlengths, 9–211k) | index (stringclasses, 10 values) | text_combine (stringlengths, 96–211k) | label (stringclasses, 2 values) | text (stringlengths, 96–188k) | binary_label (int64, 0–1) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
39,786
| 6,775,504,157
|
IssuesEvent
|
2017-10-27 14:30:12
|
symfony/symfony-docs
|
https://api.github.com/repos/symfony/symfony-docs
|
closed
|
Allow to disable type enforcement in AbstractObjectNormalizer
|
hasPR Missing Documentation Serializer
|
This makes it possible to denormalize simple DTOs with public properties using the property-info component and the ObjectNormalizer. See https://github.com/symfony/symfony/pull/23404
|
1.0
|
Allow to disable type enforcement in AbstractObjectNormalizer - This makes it possible to denormalize simple DTOs with public properties using the property-info component and the ObjectNormalizer. See https://github.com/symfony/symfony/pull/23404
|
non_process
|
allow to disable type enforcement in abstractobjectnormalizer this allows to denormalize simple dtos with public properties using the property info component and the objectnormalizer see
| 0
|
666,704
| 22,364,722,699
|
IssuesEvent
|
2022-06-16 01:55:10
|
grpc/grpc
|
https://api.github.com/repos/grpc/grpc
|
closed
|
Support WSL by disabling SO_REUSEPORT
|
kind/bug lang/core priority/P2 untriaged
|
### What version of gRPC and what language are you using?
Latest ( cloned this morning from master 2020-06-25 )
### What operating system (Linux, Windows,...) and version?
WSL Debian 10 (buster)
### What runtime / compiler are you using (e.g. python version or version of gcc)
Thread model: posix
gcc version 8.3.0 (Debian 8.3.0-6)
### What did you do?
Built the examples in WSL, also described in a [StackOverflow question](https://stackoverflow.com/questions/62569670/c-grpc-tcp-error-protocol-not-available)
### What did you expect to see?
A working hello world example
### What did you see instead?
```
E0625 21080 socket_utils_common_posix.cc:223] check for SO_REUSEPORT:
{
"created":"@1593068359.950045200",
"description":"Protocol not available",
"errno":92,
"file":"/mnt/.../grpc/src/core/lib/iomgr/socket_utils_common_posix.cc",
"file_line":201,"os_error":"Protocol not available",
"syscall":"getsockopt(SO_REUSEPORT)"
}
E0625 21080 socket_utils_common_posix.cc:327] setsockopt(TCP_USER_TIMEOUT) Protocol not available
Server listening on 0.0.0.0:50051
```
Make sure you include information that can help us debug (full error message, exception listing, stack trace, logs).
See [TROUBLESHOOTING.md](https://github.com/grpc/grpc/blob/master/TROUBLESHOOTING.md) for how to diagnose problems better.
### Anything else we should know about your project / environment?
Supposedly it was already fixed in https://github.com/grpc/grpc/pull/13517, but the PR went stale.
|
1.0
|
Support WSL by disabling SO_REUSEPORT - ### What version of gRPC and what language are you using?
Latest ( cloned this morning from master 2020-06-25 )
### What operating system (Linux, Windows,...) and version?
WSL Debian 10 (buster)
### What runtime / compiler are you using (e.g. python version or version of gcc)
Thread model: posix
gcc version 8.3.0 (Debian 8.3.0-6)
### What did you do?
Built the examples in WSL, also described in a [StackOverflow question](https://stackoverflow.com/questions/62569670/c-grpc-tcp-error-protocol-not-available)
### What did you expect to see?
A working hello world example
### What did you see instead?
```
E0625 21080 socket_utils_common_posix.cc:223] check for SO_REUSEPORT:
{
"created":"@1593068359.950045200",
"description":"Protocol not available",
"errno":92,
"file":"/mnt/.../grpc/src/core/lib/iomgr/socket_utils_common_posix.cc",
"file_line":201,"os_error":"Protocol not available",
"syscall":"getsockopt(SO_REUSEPORT)"
}
E0625 21080 socket_utils_common_posix.cc:327] setsockopt(TCP_USER_TIMEOUT) Protocol not available
Server listening on 0.0.0.0:50051
```
Make sure you include information that can help us debug (full error message, exception listing, stack trace, logs).
See [TROUBLESHOOTING.md](https://github.com/grpc/grpc/blob/master/TROUBLESHOOTING.md) for how to diagnose problems better.
### Anything else we should know about your project / environment?
Supposedly it was already fixed in https://github.com/grpc/grpc/pull/13517, but the PR went stale.
|
non_process
|
support wsl by disabling so reuseport what version of grpc and what language are you using latest cloned this morning from master what operating system linux windows and version wsl debian buster what runtime compiler are you using e g python version or version of gcc thread model posix gcc version debian what did you do built the examples in wsl also described in a what did you expect to see a working hello world example what did you see instead socket utils common posix cc check for so reuseport created description protocol not available errno file mnt grpc src core lib iomgr socket utils common posix cc file line os error protocol not available syscall getsockopt so reuseport socket utils common posix cc setsockopt tcp user timeout protocol not available server listening on make sure you include information that can help us debug full error message exception listing stack trace logs see for how to diagnose problems better anything else we should know about your project environment supposedly it is already fixed in it just went stale
| 0
|
14,542
| 17,652,237,082
|
IssuesEvent
|
2021-08-20 14:38:38
|
gfx-rs/naga
|
https://api.github.com/repos/gfx-rs/naga
|
closed
|
[wgsl-in] Adding statements to a block causes implicit return to not be generated
|
kind: bug help wanted lang: WGSL area: front-end area: processing
|
Some time ago, we implemented the implicit return for Naga. I found out that it is not applied when the block is not empty.
Example that generates implicit `Return { value: None }`:
```
[[stage(vertex)]]
fn main() -> void {
}
```
Example that does not generate implicit `Return { value: None }`:
```
[[stage(vertex)]]
fn main() -> void {
if(true == true) {
}
}
```
**Note:** I have not tested the other front-ends, which is why I marked this as wgsl-in, but presumably this is something that is done in the middle-end.
|
1.0
|
[wgsl-in] Adding statements to a block causes implicit return to not be generated - Some time ago, we implemented the implicit return for Naga. I found out that it is not applied when the block is not empty.
Example that generates implicit `Return { value: None }`:
```
[[stage(vertex)]]
fn main() -> void {
}
```
Example that does not generate implicit `Return { value: None }`:
```
[[stage(vertex)]]
fn main() -> void {
if(true == true) {
}
}
```
**Note:** I have not tested the other front-ends, which is why I marked this as wgsl-in, but presumably this is something that is done in the middle-end.
|
process
|
adding statements to a block causes implicit return to not be generated some time ago we implemented the implicit return for naga i found out that this is not getting applied when not having an empty block example that generates implicit return value none fn main void example that does not generate implicit return value none fn main void if true true note i have not tested the other front ends so that is the reason i marked it as wgsl in but presumably this is something that is done in the middle end
| 1
|
2,830
| 5,785,831,214
|
IssuesEvent
|
2017-05-01 06:44:51
|
AllenFang/react-bootstrap-table
|
https://api.github.com/repos/AllenFang/react-bootstrap-table
|
closed
|
Server side sort
|
inprocess
|
Hi! Great table component!
I'm using react-bootstrap-table as a view-only table that triggers sorting on the server,
which is why I need the sort indicators on the column headers to display correct data. In other words, I want to disable the native component sort but still enjoy the directional indicators.
I'm loading the table like so:
`TableWrapper.jsx`
```
remote(remoteObj){
return{
...remoteObj,
sort: true,
}
}
onSortChange(key,order){
const location = browserHistory.getCurrentLocation();
browserHistory.push({
...location,
query: {
...location.query,
sortField: key,
sortDirection: order
}
});
}
const tableOptions = {
sortName: this.props.sortField,
sortOrder: this.props.sortDirection,
onSortChange: this.onSortChange.bind(this),
};
return (
<BootstrapTable remote={this.remote} data={this.props.items} options={tableOptions}>
{this.props.children}
</BootstrapTable>
)
```
When the `onSortChange` is fired I push to history and the data is refetched (already sorted nicely by the server).
Problem is, when one sets the `sortName` & `sortOrder` props, bootstrap table automatically tries to sort it again:
These are the lines causing the sort inside `BootstrapTable.js` (more context [here](https://github.com/AllenFang/react-bootstrap-table/blob/30cad00731488a2733a244e24ea861e4f7e47e00/src/BootstrapTable.js#L112-L115))
```
if (sortName && sortOrder) {
this.store.setSortInfo(sortOrder, sortName);
this.store.sort();
}
```
IMHO, even if sortName and sortOrder are present, it still should not sort the data without checking `remote` first (if I understood `remote` correctly). That's why I'm suggesting this edit:
```
if (sortName && sortOrder) {
this.store.setSortInfo(sortOrder, sortName);
if(this.props.remote().sort !== true){
this.store.sort();
}
}
```
I'll be happy to submit a PR for this if this is the right way to go.
|
1.0
|
Server side sort - Hi! Great table component!
I'm using react-bootstrap-table as a view-only table that triggers sorting on the server,
which is why I need the sort indicators on the column headers to display correct data. In other words, I want to disable the native component sort but still enjoy the directional indicators.
I'm loading the table like so:
`TableWrapper.jsx`
```
remote(remoteObj){
return{
...remoteObj,
sort: true,
}
}
onSortChange(key,order){
const location = browserHistory.getCurrentLocation();
browserHistory.push({
...location,
query: {
...location.query,
sortField: key,
sortDirection: order
}
});
}
const tableOptions = {
sortName: this.props.sortField,
sortOrder: this.props.sortDirection,
onSortChange: this.onSortChange.bind(this),
};
return (
<BootstrapTable remote={this.remote} data={this.props.items} options={tableOptions}>
{this.props.children}
</BootstrapTable>
)
```
When the `onSortChange` is fired I push to history and the data is refetched (already sorted nicely by the server).
Problem is, when one sets the `sortName` & `sortOrder` props, bootstrap table automatically tries to sort it again:
These are the lines causing the sort inside `BootstrapTable.js` (more context [here](https://github.com/AllenFang/react-bootstrap-table/blob/30cad00731488a2733a244e24ea861e4f7e47e00/src/BootstrapTable.js#L112-L115))
```
if (sortName && sortOrder) {
this.store.setSortInfo(sortOrder, sortName);
this.store.sort();
}
```
IMHO, even if sortName and sortOrder are present, it still should not sort the data without checking `remote` first (if I understood `remote` correctly). That's why I'm suggesting this edit:
```
if (sortName && sortOrder) {
this.store.setSortInfo(sortOrder, sortName);
if(this.props.remote().sort !== true){
this.store.sort();
}
}
```
I'll be happy to submit a PR for this if this is the right way to go.
|
process
|
server side sort hi great table component i m using react bootstrap table as a view table only which triggers sorting on the server that s why i need the sort indicators on the column headers to display correct data in other words i want to disable the the native component sort but to still enjoy the directional indicators i m loading the table like so tablewrapper jsx remote remoteobj return remoteobj sort true onsortchange key order browserhistory getcurrentlocation browserhistory push location query location query sortfield sortkey sortdirection order const tableoptions sortname this props sortfield sortorder this props sortdirection onsortchange this onsortchange bind this return this props children when the onsortchange is fired i push to history and the data is refetched already sorted nicely by the server problem is when one sets the sortname sortorder props bootstrap table automatically tries to sort it again these are the lines causing the sort inside bootstraptable js more context if sortname sortorder this store setsortinfo sortorder sortname this store sort imho if sortname and sortorder are present it s still should not sort the data without checking the remote first if i understood remote correctly that s why i m suggesting this edit if sortname sortorder this store setsortinfo sortorder sortname if this props remote sort true this store sort i ll be happy to submit a pr for this if this is the right way to go
| 1
|
78,175
| 27,356,334,404
|
IssuesEvent
|
2023-02-27 13:07:35
|
vector-im/element-web
|
https://api.github.com/repos/vector-im/element-web
|
closed
|
Memory leak and high CPU usage when I use the Explore
|
T-Defect
|
### Steps to reproduce
1. Open Explore;
2. Type "electron" to search
3. Monitor usage and check logs
4. Slowly scroll down to find the place where Element Desktop starts to destroy your hardware.
### Outcome
#### What did you expect?
Search should not take hundreds of MB of RAM, and Explore should not cause jumps in RAM usage.
#### What happened instead?
High hardware usage and memory consumption... In my case it's a jump from 700-800 MB to 1500-1600 MB.
### Operating system
Windows 7
### Application version
1.11.23
### How did you install the app?
In-app update
### Homeserver
matrix.org
### Will you send logs?
No
|
1.0
|
Memory leak and high CPU usage when I use the Explore - ### Steps to reproduce
1. Open Explore;
2. Type "electron" to search
3. Monitor usage and check logs
4. Slowly scroll down to find the place where Element Desktop starts to destroy your hardware.
### Outcome
#### What did you expect?
Search should not take hundreds of MB of RAM, and Explore should not cause jumps in RAM usage.
#### What happened instead?
High hardware usage and memory consumption... In my case it's a jump from 700-800 MB to 1500-1600 MB.
### Operating system
Windows 7
### Application version
1.11.23
### How did you install the app?
In-app update
### Homeserver
matrix.org
### Will you send logs?
No
|
non_process
|
memory leak and high cpu usage when i use the explore steps to reproduce open explore type electron to search monitor usage and check logs slowly scroll down to find the place where element desktop starts to destroy your hardware outcome what did you expect search does not ask hundreds mbs of ram explore does not make jumps of ram usage what happened instead high hardware usage memory consuming in my case it s jump from to mbs operating system windows application version how did you install the app in app update homeserver matrix org will you send logs no
| 0
|
15,311
| 19,405,001,094
|
IssuesEvent
|
2021-12-19 21:06:26
|
ACupofAir/ACupofAir.github.io
|
https://api.github.com/repos/ACupofAir/ACupofAir.github.io
|
opened
|
Digital Image Processing Final Project · Cup Air
|
Gitalk /posts/image_process/image_process/
|
https://acupofair.github.io/posts/image_process/image_process/
The final project for the Digital Image Processing course, using electron as the front-end framework, with a python back end implementing median filtering, the Kuwahara filter, and gray-level gradient grouping.
|
2.0
|
Digital Image Processing Final Project · Cup Air - https://acupofair.github.io/posts/image_process/image_process/
The final project for the Digital Image Processing course, using electron as the front-end framework, with a python back end implementing median filtering, the Kuwahara filter, and gray-level gradient grouping.
|
process
|
digital image processing final project · cup air the final project for the digital image processing course using electron as the front end framework with a python back end implementing median filtering the kuwahara filter and gray level gradient grouping
| 1
|
20,939
| 27,798,442,226
|
IssuesEvent
|
2023-03-17 14:13:09
|
NationalSecurityAgency/ghidra
|
https://api.github.com/repos/NationalSecurityAgency/ghidra
|
closed
|
Sleigh table exporting a value with length 0
|
Feature: Processor/68000 Status: Internal
|
In the `68000` family of CPUs, the table `fcc` (constructor `sf`), exports a value with length 0: https://github.com/NationalSecurityAgency/ghidra/blob/e7406dbb7c0bf3d0d0076e9d7774113e73d35f8e/Ghidra/Processors/68000/data/languages/68000.sinc#L1917
It's unclear how the sleigh lang should handle a value with length 0.
In my interpretation, a value of length 0 should be impossible, but also all table constructors should export values of the same length, in this case length 1.
If I'm correct, the sleigh file can be fixed with https://github.com/NationalSecurityAgency/ghidra/pull/5093; a verification should also be added to the compiler to reject such invalid values.
This was found on https://github.com/rbran/sleigh-rs/issues/1
|
1.0
|
Sleigh table exporting a value with length 0 - In the `68000` family of CPUs, the table `fcc` (constructor `sf`), exports a value with length 0: https://github.com/NationalSecurityAgency/ghidra/blob/e7406dbb7c0bf3d0d0076e9d7774113e73d35f8e/Ghidra/Processors/68000/data/languages/68000.sinc#L1917
It's unclear how the sleigh lang should handle a value with length 0.
In my interpretation, a value of length 0 should be impossible, but also all table constructors should export values of the same length, in this case length 1.
If I'm correct, the sleigh file can be fixed with https://github.com/NationalSecurityAgency/ghidra/pull/5093; a verification should also be added to the compiler to reject such invalid values.
This was found on https://github.com/rbran/sleigh-rs/issues/1
|
process
|
sleigh table exporting a value with length in the family of cpus the table fcc constructor sf exports a value with length it s unclear how the sleigh lang should handle a value with length in my interpretation a value of length should be impossible but also all table constructors should export values of the same length in this case length if i m correct the sleigh file can be fixed with also a verification should be added to the compiler to avoid those invalid values this was found on
| 1
|
19,547
| 25,866,329,722
|
IssuesEvent
|
2022-12-13 21:14:42
|
python/cpython
|
https://api.github.com/repos/python/cpython
|
closed
|
Possible deadlock on sys.stdout/stderr when combining multiprocessing with threads
|
type-bug stdlib 3.7 expert-multiprocessing
|
BPO | [28382](https://bugs.python.org/issue28382)
--- | :---
Nosy | @pitrou, @applio, @Hadhoke
<sup>*Note: these values reflect the state of the issue at the time it was migrated and might not reflect the current state.*</sup>
<details><summary>Show more details</summary><p>
GitHub fields:
```python
assignee = None
closed_at = None
created_at = <Date 2016-10-06.23:20:37.260>
labels = ['3.7', 'type-bug', 'library']
title = 'Possible deadlock on sys.stdout/stderr when combining multiprocessing with threads'
updated_at = <Date 2017-07-23.12:16:39.930>
user = 'https://github.com/Hadhoke'
```
bugs.python.org fields:
```python
activity = <Date 2017-07-23.12:16:39.930>
actor = 'pitrou'
assignee = 'none'
closed = False
closed_date = None
closer = None
components = ['Library (Lib)']
creation = <Date 2016-10-06.23:20:37.260>
creator = 'Hadhoke'
dependencies = []
files = []
hgrepos = []
issue_num = 28382
keywords = []
message_count = 3.0
messages = ['278221', '298875', '298900']
nosy_count = 3.0
nosy_names = ['pitrou', 'davin', 'Hadhoke']
pr_nums = []
priority = 'normal'
resolution = None
stage = None
status = 'open'
superseder = None
type = 'behavior'
url = 'https://bugs.python.org/issue28382'
versions = ['Python 3.5', 'Python 3.6', 'Python 3.7']
```
</p></details>
|
1.0
|
Possible deadlock on sys.stdout/stderr when combining multiprocessing with threads - BPO | [28382](https://bugs.python.org/issue28382)
--- | :---
Nosy | @pitrou, @applio, @Hadhoke
<sup>*Note: these values reflect the state of the issue at the time it was migrated and might not reflect the current state.*</sup>
<details><summary>Show more details</summary><p>
GitHub fields:
```python
assignee = None
closed_at = None
created_at = <Date 2016-10-06.23:20:37.260>
labels = ['3.7', 'type-bug', 'library']
title = 'Possible deadlock on sys.stdout/stderr when combining multiprocessing with threads'
updated_at = <Date 2017-07-23.12:16:39.930>
user = 'https://github.com/Hadhoke'
```
bugs.python.org fields:
```python
activity = <Date 2017-07-23.12:16:39.930>
actor = 'pitrou'
assignee = 'none'
closed = False
closed_date = None
closer = None
components = ['Library (Lib)']
creation = <Date 2016-10-06.23:20:37.260>
creator = 'Hadhoke'
dependencies = []
files = []
hgrepos = []
issue_num = 28382
keywords = []
message_count = 3.0
messages = ['278221', '298875', '298900']
nosy_count = 3.0
nosy_names = ['pitrou', 'davin', 'Hadhoke']
pr_nums = []
priority = 'normal'
resolution = None
stage = None
status = 'open'
superseder = None
type = 'behavior'
url = 'https://bugs.python.org/issue28382'
versions = ['Python 3.5', 'Python 3.6', 'Python 3.7']
```
</p></details>
|
process
|
possible deadlock on sys stdout stderr when combining multiprocessing with threads bpo nosy pitrou applio hadhoke note these values reflect the state of the issue at the time it was migrated and might not reflect the current state show more details github fields python assignee none closed at none created at labels title possible deadlock on sys stdout stderr when combining multiprocessing with threads updated at user bugs python org fields python activity actor pitrou assignee none closed false closed date none closer none components creation creator hadhoke dependencies files hgrepos issue num keywords message count messages nosy count nosy names pr nums priority normal resolution none stage none status open superseder none type behavior url versions
| 1
|
213,589
| 24,009,156,798
|
IssuesEvent
|
2022-09-14 17:11:06
|
papermerge/papermerge-core
|
https://api.github.com/repos/papermerge/papermerge-core
|
closed
|
Security issue - IDOR
|
security
|
Hi! I've noticed that the _document-versions/<uuid:pk>/download/_ API is vulnerable to an [IDOR](https://portswigger.net/web-security/access-control/idor) vulnerability, allowing any user to download any file.
I've tried downloading a file uploaded by the admin account (and only visible to the admin account) using a low-privileged user, and the file was successfully downloaded.
**Admin view:**

**User view:**

**Downloading admin file using direct url from the user account:**

Can you verify the issue, please?
Thanks!
|
True
|
Security issue - IDOR - Hi! I've noticed that the _document-versions/<uuid:pk>/download/_ API is vulnerable to an [IDOR](https://portswigger.net/web-security/access-control/idor) vulnerability, allowing any user to download any file.
I've tried downloading a file uploaded by the admin account (and only visible to the admin account) using a low-privileged user, and the file was successfully downloaded.
**Admin view:**

**User view:**

**Downloading admin file using direct url from the user account:**

Can you verify the issue, please?
Thanks!
|
non_process
|
security issue idor hi i ve noticed that the document versions download api is vulnerable to vulnerability allowing any user to download any file i ve tried to download a file uploaded by the admin account and only visible to the admin account using a low privileged user and the file was successfully downloaded admin view user view downloading admin file using direct url from the user account can you verify the issue please thanks
| 0
|
22,386
| 31,142,285,265
|
IssuesEvent
|
2023-08-16 01:44:12
|
cypress-io/cypress
|
https://api.github.com/repos/cypress-io/cypress
|
closed
|
Flaky test: should delay the same amount on every response
|
process: flaky test topic: flake ❄️ stage: flake stale
|
### Link to dashboard or CircleCI failure
https://dashboard.cypress.io/projects/ypt4pf/runs/38079/test-results/31667809-d924-4f54-950c-4747173d7d9f
### Link to failing test in GitHub
https://github.com/cypress-io/cypress/blob/develop/packages/driver/cypress/e2e/commands/net_stubbing.cy.ts#L1706
### Analysis
<img width="433" alt="Screen Shot 2022-08-17 at 12 15 24 PM" src="https://user-images.githubusercontent.com/26726429/185223920-e952e445-4c8c-4d2f-93fc-b6fef230f407.png">
### Cypress Version
10.6.0
### Other
Search for this issue number in the codebase to find the test(s) skipped until this issue is fixed
|
1.0
|
Flaky test: should delay the same amount on every response - ### Link to dashboard or CircleCI failure
https://dashboard.cypress.io/projects/ypt4pf/runs/38079/test-results/31667809-d924-4f54-950c-4747173d7d9f
### Link to failing test in GitHub
https://github.com/cypress-io/cypress/blob/develop/packages/driver/cypress/e2e/commands/net_stubbing.cy.ts#L1706
### Analysis
<img width="433" alt="Screen Shot 2022-08-17 at 12 15 24 PM" src="https://user-images.githubusercontent.com/26726429/185223920-e952e445-4c8c-4d2f-93fc-b6fef230f407.png">
### Cypress Version
10.6.0
### Other
Search for this issue number in the codebase to find the test(s) skipped until this issue is fixed
|
process
|
flaky test should delay the same amount on every response link to dashboard or circleci failure link to failing test in github analysis img width alt screen shot at pm src cypress version other search for this issue number in the codebase to find the test s skipped until this issue is fixed
| 1
|
40,695
| 5,254,182,316
|
IssuesEvent
|
2017-02-02 12:03:20
|
Chringo/SSP
|
https://api.github.com/repos/Chringo/SSP
|
closed
|
How do we integrate animations in the system?
|
design discussion
|
The dependency required for the animations is the ResourceLib (to gain access to the skeleton and animation data). There is currently no overall structure for handling animations in the engine. Several people are needed to resolve this matter.
|
1.0
|
How do we integrate animations in the system? - The dependency required for the animations is the ResourceLib (to gain access to the skeleton and animation data). There is currently no overall structure for handling animations in the engine. Several people are needed to resolve this matter.
|
non_process
|
how do we integrate animations in the system the dependencies that are to be required for the animations is the resourcelib to gain access of the skeleton and animation data the overall structure of handling animations in the engine is currently none several people are required to solve this matter
| 0
|
6,650
| 9,769,997,128
|
IssuesEvent
|
2019-06-06 09:52:42
|
dzhw/zofar
|
https://api.github.com/repos/dzhw/zofar
|
closed
|
Abs 13-2: survey termination (Jan 19)
|
2 category: services category: technical.processes prio: 2 status: afterfield type: backlog.task
|
### **the survey ends on 02.01.19**
- [x] final return statistic (@andreaschu )
- [x] reroute the link (@vdick or @dzhwmeisner )
- [x] undeploy survey (@vdick or @dzhwmeisner )
- [x] prepare export (@andreaschu )
|
1.0
|
Abs 13-2: survey termination (Jan 19) - ### **the survey ends on 02.01.19**
- [x] final return statistic (@andreaschu )
- [x] reroute the link (@vdick or @dzhwmeisner )
- [x] undeploy survey (@vdick or @dzhwmeisner )
- [x] prepare export (@andreaschu )
|
process
|
abs survey termination jan the survey ends on final return statistic andreaschu reroute the link vdick or dzhwmeisner undeploy survey vdick or dzhwmeisner prepare export andreaschu
| 1
|
16,830
| 22,061,917,218
|
IssuesEvent
|
2022-05-30 19:12:26
|
NixOS/nixpkgs
|
https://api.github.com/repos/NixOS/nixpkgs
|
closed
|
ZERO Hydra Failures 22.05
|
6.topic: release process
|
## Mission
Every time we branch off a release we stabilize the release branch.
Our goal here is to have as few jobs as possible failing on the trunk/master jobsets.
We call this effort "Zero Hydra Failure".
I'd like to highlight that while it's great to focus on zero as our goal, it's essential that
all deliverables that worked in the previous release also work here.
Please note the changes included in [RFC 85](https://github.com/NixOS/rfcs/blob/master/rfcs/0085-nixos-release-stablization.md).
Most significantly, branch off will occur on 2022 May 22; prior to that date, ZHF will be conducted
on master; after that date, ZHF will be conducted on the release channel using a backport
workflow similar to previous ZHFs.
## Jobsets
[trunk Jobset](https://hydra.nixos.org/jobset/nixpkgs/trunk) (includes linux, darwin, and aarch64-linux builds)
[nixos/combined Jobset](https://hydra.nixos.org/jobset/nixos/trunk-combined) (includes many nixos tests)
<!--[nixos:release-22.05 Jobset](https://hydra.nixos.org/jobset/nixos/release-22.05)
[nixpkgs:nixpkgs-22.05-darwin Jobset](https://hydra.nixos.org/jobset/nixpkgs/nixpkgs-22.05-darwin)-->
## How to help (textual)
1. Select an evaluation of the [trunk jobset](https://hydra.nixos.org/jobset/nixpkgs/trunk)

2. Find a failed job ❌️ , you can use the filter field to scope packages to your platform, or search for packages that are relevant to you.

Note: you can filter for architecture by filtering for it, eg: https://hydra.nixos.org/eval/1719540?filter=x86_64-linux&compare=1719463&full=#tabs-still-fail
3. Search to see if a PR is already open for the package. If there is one, please help review it.
4. If there is no open PR, troubleshoot why it's failing and fix it.
5. Create a Pull Request with the fix targeting master, wait for it to be merged.
If your PR causes around 500+ rebuilds, it's preferred to target `staging` to avoid compute and storage churn. If your PR is fixing Haskell packages, target the `haskell-updates` branch instead.
6. (after 2022 May 22) Please follow [backporting steps](https://github.com/NixOS/nixpkgs/blob/master/CONTRIBUTING.md#backporting-changes) and target the `release-22.05` branch if the original PR landed in `master` or `staging-22.05` if the PR landed in `staging`. Be sure to do `git cherry-pick -x <rev>` on the commits that landed in unstable. @jonringer created [a video covering the backport process](https://www.youtube.com/watch?v=4Zb3GpIc6vk&t=520s).
Always reference this issue in the body of your PR:
```
ZHF: #172160
```
Please ping @NixOS/nixos-release-managers on the PR and add the `0.kind: build failure` label to the pull request.
If you're unable to because you're not a member of the NixOS org please ping @dasJ, @tomberek, @jonringer, @Mic92
## How can I easily check packages that I maintain?
I have created an experimental website that automatically crawls Hydra, lists packages by maintainer, and highlights the most important dependencies (failing packages with the most dependants).
You can reach it here: https://zh.fail
If you prefer the command-line way, you can also check failing packages that you maintain by running:
```
# from root of nixpkgs
nix-build maintainers/scripts/build.nix --argstr maintainer <name>
```
## New to nixpkgs?
- [Packaging a basic C application](https://www.youtube.com/watch?v=LiEqN8r-BRw)
- [Python nix packaging](https://www.youtube.com/watch?v=jXd-hkP4xnU)
- [Adding a package to nixpkgs](https://www.youtube.com/watch?v=fvj8H5yUKu8)
- other resources at: https://github.com/nix-community/awesome-nix
- https://nix.dev/tutorials/
## Packages that don't get fixed
The remaining packages will be marked as broken before the release (on the failing platforms).
You can do this like:
```nix
meta = {
# ref to issue/explanation
# `true` is for everything
broken = stdenv.isDarwin;
};
```
## Closing
This is a great way to help NixOS, and it is a great time for new contributors to start their nixpkgs adventure. :partying_face:
As with the [feature freeze issue](https://github.com/NixOS/nixpkgs/issues/167025), please keep discussion here to a minimal so you don't ping all maintainers (although relevant comments can of course be added here if they are directly ZHF-related) and ping me or the release managers team in the respective issues.
cc @NixOS/nixpkgs-committers @NixOS/nixpkgs-maintainers @NixOS/release-engineers
## Related Issues
- Timeline: #165792
- Feature Freeze Items: #167025
|
1.0
|
ZERO Hydra Failures 22.05 - ## Mission
Every time we branch off a release we stabilize the release branch.
Our goal here is to get as little as possible jobs failing on the trunk/master jobsets.
We call this effort "Zero Hydra Failure".
I'd like to highlight that, while it's great to focus on zero as our goal, it's essential to
have all deliverables that worked in the previous release work here also.
Please note the changes included in [RFC 85](https://github.com/NixOS/rfcs/blob/master/rfcs/0085-nixos-release-stablization.md).
Most significantly, branch off will occur on 2022 May 22; prior to that date, ZHF will be conducted
on master; after that date, ZHF will be conducted on the release channel using a backport
workflow similar to previous ZHFs.
## Jobsets
[trunk Jobset](https://hydra.nixos.org/jobset/nixpkgs/trunk) (includes linux, darwin, and aarch64-linux builds)
[nixos/combined Jobset](https://hydra.nixos.org/jobset/nixos/trunk-combined) (includes many nixos tests)
<!--[nixos:release-22.05 Jobset](https://hydra.nixos.org/jobset/nixos/release-22.05)
[nixpkgs:nixpkgs-22.05-darwin Jobset](https://hydra.nixos.org/jobset/nixpkgs/nixpkgs-22.05-darwin)-->
## How to help (textual)
1. Select an evaluation of the [trunk jobset](https://hydra.nixos.org/jobset/nixpkgs/trunk)

2. Find a failed job ❌️. You can use the filter field to scope packages to your platform, or search for packages that are relevant to you.

Note: you can filter by architecture in the filter field, eg: https://hydra.nixos.org/eval/1719540?filter=x86_64-linux&compare=1719463&full=#tabs-still-fail
3. Search to see if a PR is not already open for the package. If there is one, please help review it.
4. If there is no open PR, troubleshoot why it's failing and fix it.
5. Create a Pull Request with the fix targeting master, wait for it to be merged.
If your PR causes around 500+ rebuilds, it's preferred to target `staging` to avoid compute and storage churn. If your PR is fixing Haskell packages, target the `haskell-updates` branch instead.
6. (after 2022 May 22) Please follow [backporting steps](https://github.com/NixOS/nixpkgs/blob/master/CONTRIBUTING.md#backporting-changes) and target the `release-22.05` branch if the original PR landed in `master` or `staging-22.05` if the PR landed in `staging`. Be sure to do `git cherry-pick -x <rev>` on the commits that landed in unstable. @jonringer created [a video covering the backport process](https://www.youtube.com/watch?v=4Zb3GpIc6vk&t=520s).
Always reference this issue in the body of your PR:
```
ZHF: #172160
```
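The backport flow in step 6 can be sketched end-to-end. The repo, branch names, file, and commit below are throwaway stand-ins created just for the demonstration, not real nixpkgs refs:

```shell
# Throwaway demo of the backport flow: land a fix on the default branch,
# then `git cherry-pick -x` it onto the release branch.
set -e
tmp=$(mktemp -d)
cd "$tmp"
git init -q demo
cd demo
git config user.email demo@example.com
git config user.name demo
echo base > pkg.nix
git add pkg.nix
git commit -qm "pkg: init"
git branch release-22.05            # pretend this is the release branch
echo fixed > pkg.nix
git commit -aqm "pkg: fix build"    # the fix that landed on master
rev=$(git rev-parse HEAD)
git checkout -q release-22.05
git cherry-pick -x "$rev"           # -x appends "(cherry picked from commit <sha>)"
git log -1 --format=%B
```

The `-x` flag is what makes the backport traceable: the release-branch commit message records which unstable commit it came from.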
Please ping @NixOS/nixos-release-managers on the PR and add the `0.kind: build failure` label to the pull request.
If you're unable to because you're not a member of the NixOS org please ping @dasJ, @tomberek, @jonringer, @Mic92
## How can I easily check packages that I maintain?
I have created an experimental website that automatically crawls Hydra, lists packages by maintainer, and highlights the most important dependencies (failing packages with the most dependants).
You can reach it here: https://zh.fail
If you prefer the command-line way, you can also check failing packages that you maintain by running:
```
# from root of nixpkgs
nix-build maintainers/scripts/build.nix --argstr maintainer <name>
```
## New to nixpkgs?
- [Packaging a basic C application](https://www.youtube.com/watch?v=LiEqN8r-BRw)
- [Python nix packaging](https://www.youtube.com/watch?v=jXd-hkP4xnU)
- [Adding a package to nixpkgs](https://www.youtube.com/watch?v=fvj8H5yUKu8)
- other resources at: https://github.com/nix-community/awesome-nix
- https://nix.dev/tutorials/
## Packages that don't get fixed
The remaining packages will be marked as broken before the release (on the failing platforms).
You can do this like:
```nix
meta = {
# ref to issue/explanation
# `true` is for everything
broken = stdenv.isDarwin;
};
```
## Closing
This is a great way to help NixOS, and it is a great time for new contributors to start their nixpkgs adventure. :partying_face:
As with the [feature freeze issue](https://github.com/NixOS/nixpkgs/issues/167025), please keep discussion here to a minimum so you don't ping all maintainers (although relevant comments can of course be added here if they are directly ZHF-related), and ping me or the release managers team in the respective issues.
cc @NixOS/nixpkgs-committers @NixOS/nixpkgs-maintainers @NixOS/release-engineers
## Related Issues
- Timeline: #165792
- Feature Freeze Items: #167025
|
process
|
zero hydra failures mission every time we branch off a release we stabilize the release branch our goal here is to get as little as possible jobs failing on the trunk master jobsets we call this effort zero hydra failure i d like to heighten while it s great to focus on zero as our goal it s essentially to have all deliverables that worked in the previous release work here also please note the changes included in most significantly branch off will occur on may prior to that date zhf will be conducted on master after that date zhf will be conducted on the release channel using a backport workflow similar to previous zhfs jobsets includes linux darwin and linux builds includes many nixos tests how to help textual select an evaluation of the find a failed job ❌️ you can use the filter field to scope packages to your platform or search for packages that are relevant to you note you can filter for architecture by filtering for it eg search to see if a pr is not already open for the package it there is one please help review it if there is no open pr troubleshoot why it s failing and fix it create a pull request with the fix targeting master wait for it to be merged if your pr causes around rebuilds it s preferred to target staging to avoid compute and storage churn if your pr is fixing haskell packages target the haskell updates branch instead after may please follow and target the release branch if the original pr landed in master or staging if the pr landed in staging be sure to do git cherry pick x on the commits that landed in unstable jonringer created always reference this issue in the body of your pr zhf please ping nixos nixos release managers on the pr and add the kind build failure label to the pull request if you re unable to because you re not a member of the nixos org please ping dasj tomberek jonringer how can i easily check packages that i maintain i have created an experimental website that automatically crawls hydra and lists packages by maintainer and 
lists the most important dependencies failing packages with the most dependants you can reach it here if you prefer the command line way you can also check failing packages that you maintain by running from root of nixpkgs nix build maintainers scripts build nix argstr maintainer new to nixpkgs other resources at packages that don t get fixed the remaining packages will be marked as broken before the release on the failing platforms you can do this like nix meta ref to issue explanation true is for everything broken stdenv isdarwin closing this is a great way to help nixos and it is a great time for new contributors to start their nixpkgs adventure partying face as with the please keep discussion here to a minimal so you don t ping all maintainers although relevant comments can of course be added here if they are directly zhf related and ping me or the release managers team in the respective issues cc nixos nixpkgs committers nixos nixpkgs maintainers nixos release engineers related issues timeline feature freeze items
| 1
|
361,595
| 25,345,185,576
|
IssuesEvent
|
2022-11-19 05:23:18
|
Milk42031/https-raw.githubusercontent.com-actions-upload-artifact-main-action.yml
|
https://api.github.com/repos/Milk42031/https-raw.githubusercontent.com-actions-upload-artifact-main-action.yml
|
opened
|
https://github.com/Milk42031/https-raw.githubusercontent.com-actions-upload-artifact-main-action.yml.wiki.gitWelcome to https-raw.githubusercontent.com-actions-upload-artifact-main-action.yml Discussions!
|
documentation duplicate enhancement good first issue help wanted invalid question wontfix
|
### Discussed in https://github.com/Milk42031/https-raw.githubusercontent.com-actions-upload-artifact-main-action.yml/discussions/2
<div type='discussions-op-text'>
<sup>Originally posted by **michaelshadell25** November 18, 2022</sup>
<!--
✏️ Optional: Customize the content below to let your community know what you intend to use Discussions for.
-->
## 👋 Welcome!
We’re using Discussions as a place to connect with other members of our community. We hope that you:
* Ask questions you’re wondering about.
* Share ideas.
* Engage with other community members.
* Welcome others and are open-minded. Remember that this is a community we
build together 💪.
To get started, comment below with an introduction of yourself and tell us about what you do with this community.
<!--
For the maintainers, here are some tips 💡 for getting started with Discussions. We'll leave these in Markdown comments for now, but feel free to take out the comments for all maintainers to see.
📢 **Announce to your community** that Discussions is available! Go ahead and send that tweet, post, or link it from the website to drive traffic here.
🔗 If you use issue templates, **link any relevant issue templates** such as questions and community conversations to Discussions. Declutter your issues by driving community content to where they belong in Discussions. If you need help, here's a [link to the documentation](https://docs.github.com/en/github/building-a-strong-community/configuring-issue-templates-for-your-repository#configuring-the-template-chooser).
➡️ You can **convert issues to discussions** either individually or bulk by labels. Looking at you, issues labeled “question” or “discussion”.
-->
</div>
|
1.0
|
https://github.com/Milk42031/https-raw.githubusercontent.com-actions-upload-artifact-main-action.yml.wiki.gitWelcome to https-raw.githubusercontent.com-actions-upload-artifact-main-action.yml Discussions! - ### Discussed in https://github.com/Milk42031/https-raw.githubusercontent.com-actions-upload-artifact-main-action.yml/discussions/2
<div type='discussions-op-text'>
<sup>Originally posted by **michaelshadell25** November 18, 2022</sup>
<!--
✏️ Optional: Customize the content below to let your community know what you intend to use Discussions for.
-->
## 👋 Welcome!
We’re using Discussions as a place to connect with other members of our community. We hope that you:
* Ask questions you’re wondering about.
* Share ideas.
* Engage with other community members.
* Welcome others and are open-minded. Remember that this is a community we
build together 💪.
To get started, comment below with an introduction of yourself and tell us about what you do with this community.
<!--
For the maintainers, here are some tips 💡 for getting started with Discussions. We'll leave these in Markdown comments for now, but feel free to take out the comments for all maintainers to see.
📢 **Announce to your community** that Discussions is available! Go ahead and send that tweet, post, or link it from the website to drive traffic here.
🔗 If you use issue templates, **link any relevant issue templates** such as questions and community conversations to Discussions. Declutter your issues by driving community content to where they belong in Discussions. If you need help, here's a [link to the documentation](https://docs.github.com/en/github/building-a-strong-community/configuring-issue-templates-for-your-repository#configuring-the-template-chooser).
➡️ You can **convert issues to discussions** either individually or bulk by labels. Looking at you, issues labeled “question” or “discussion”.
-->
</div>
|
non_process
|
to https raw githubusercontent com actions upload artifact main action yml discussions discussed in originally posted by november ✏️ optional customize the content below to let your community know what you intend to use discussions for 👋 welcome we’re using discussions as a place to connect with other members of our community we hope that you ask questions you’re wondering about share ideas engage with other community members welcome others and are open minded remember that this is a community we build together 💪 to get started comment below with an introduction of yourself and tell us about what you do with this community for the maintainers here are some tips 💡 for getting started with discussions we ll leave these in markdown comments for now but feel free to take out the comments for all maintainers to see 📢 announce to your community that discussions is available go ahead and send that tweet post or link it from the website to drive traffic here 🔗 if you use issue templates link any relevant issue templates such as questions and community conversations to discussions declutter your issues by driving community content to where they belong in discussions if you need help here s a ➡️ you can convert issues to discussions either individually or bulk by labels looking at you issues labeled “question” or “discussion”
| 0
|
2,163
| 5,008,956,794
|
IssuesEvent
|
2016-12-12 20:59:58
|
bongo227/StatsNotes
|
https://api.github.com/repos/bongo227/StatsNotes
|
opened
|
Estimation of short-term standard deviation from small samples. Ability of a process to meet tolerances.
|
15.3 Statistical Process Control
|
From the spec:
Estimate of proportion not meeting tolerances.
|
1.0
|
Estimation of short-term standard deviation from small samples. Ability of a process to meet tolerances. - From the spec:
Estimate of proportion not meeting tolerances.
|
process
|
estimation of short term standard deviation from small samples ability of a process to meet tolerances from the spec estimate of proportion not meeting tolerances
| 1
|
199,697
| 15,779,028,338
|
IssuesEvent
|
2021-04-01 08:20:19
|
sony/flutter-embedded-linux
|
https://api.github.com/repos/sony/flutter-embedded-linux
|
closed
|
README and information on how to gather logs and errors
|
documentation
|
There is no information on how to gather logs or to trace errors.
To gather some further information about the DRM segfault I'm getting on my aarch64 devices, I've built the drm backend with an additional parameter so that lldb and gdb can provide more detailed feedback:
`cmake -DUSER_PROJECT_PATH=examples/flutter-drm-backend -DCMAKE_BUILD_TYPE=Debug ..`
What would be the preferred method of capturing/tracing errors with the embedder?
|
1.0
|
README and information on how to gather logs and errors - There is no information on how to gather logs or to trace errors.
To gather some further information about the DRM segfault I'm getting on my aarch64 devices, I've built the drm backend with an additional parameter so that lldb and gdb can provide more detailed feedback:
`cmake -DUSER_PROJECT_PATH=examples/flutter-drm-backend -DCMAKE_BUILD_TYPE=Debug ..`
What would be the preferred method of capturing/tracing errors with the embedder?
|
non_process
|
readme and information on how to gather logs and errors there is no information on how to gather logs or to trace errors to gather some further information about the drm segfault i m getting on my devices i ve built the drm backend with an additional parameter so that lldb and gdb can provide more detailed feedback cmake duser project path examples flutter drm backend dcmake build type debug what would be the preferred method of capturing tracing errors with the embedder
| 0
|
13,866
| 16,622,893,069
|
IssuesEvent
|
2021-06-03 05:28:44
|
GoogleCloudPlatform/fda-mystudies
|
https://api.github.com/repos/GoogleCloudPlatform/fda-mystudies
|
closed
|
API Documentation using Spring Fox library
|
Auth server Feature request Participant datastore Participant manager datastore Process: Fixed Response datastore
|
1. Add ` @Api` to Controller classes
2. Add `@ApiOperation` to API endpoints
3. Add `SwaggerGeneratorTest `class to generate the` openapi.json` under` fda-mystudies/documentation/API/<service-context-path>/`
Related Issue #2976
|
1.0
|
API Documentation using Spring Fox library - 1. Add ` @Api` to Controller classes
2. Add `@ApiOperation` to API endpoints
3. Add `SwaggerGeneratorTest `class to generate the` openapi.json` under` fda-mystudies/documentation/API/<service-context-path>/`
Related Issue #2976
|
process
|
api documentation using spring fox library add api to controller classes add apioperation to api endpoints add swaggergeneratortest class to generate the openapi json under fda mystudies documentation api related issue
| 1
|
16,875
| 11,449,701,293
|
IssuesEvent
|
2020-02-06 07:55:54
|
vsch/idea-multimarkdown
|
https://api.github.com/repos/vsch/idea-multimarkdown
|
closed
|
build directory should be added to gitignore
|
usability
|
Current size of the repo is over 1.5gb, most of which is the `dist` folder which doesn't contain any code. The rest of the repo is under 50 mb in size.
|
True
|
build directory should be added to gitignore - Current size of the repo is over 1.5gb, most of which is the `dist` folder which doesn't contain any code. The rest of the repo is under 50 mb in size.
|
non_process
|
build directory should be added to gitignore current size of the repo is over most of which is the dist folder which doesn t contain any code the rest of the repo is under mb in size
| 0
|
3,620
| 2,889,757,610
|
IssuesEvent
|
2015-06-13 18:40:18
|
ConvolutedAlmonds/almond-client
|
https://api.github.com/repos/ConvolutedAlmonds/almond-client
|
closed
|
TravelModeCtrl should deregister its event listener after receiving data
|
Code Style
|
*description forthcoming*
|
1.0
|
TravelModeCtrl should deregister its event listener after receiving data - *description forthcoming*
|
non_process
|
travelmodectrl should deregister it s event listener after receiving data description forthcoming
| 0
|
9,252
| 12,290,836,354
|
IssuesEvent
|
2020-05-10 06:39:18
|
kubeflow/community
|
https://api.github.com/repos/kubeflow/community
|
closed
|
Slack instructions send a message to jlewi
|
area/community kind/feature kind/process lifecycle/stale priority/p2
|
When users join slack, the getting started flow appears to tell them to send a message. That message appears to be a direct message to me.
I'm not sure where this is set but we should probably change that. If people are going to say hi it would probably be better to do so in a community channel like #general.
|
1.0
|
Slack instructions send a message to jlewi - When users join slack, the getting started flow appears to tell them to send a message. That message appears to be a direct message to me.
I'm not sure where this is set but we should probably change that. If people are going to say hi it would probably be better to do so in a community channel like #general.
|
process
|
slack instructions send a message to jlewi when users join slack the getting started flow appears to tell them to send a message that message appears to be a direct message to me i m not sure where this is set but we should probably change that if people are going to say hi it would probably be better to do so in a community channel like general
| 1
|
4,332
| 7,242,196,670
|
IssuesEvent
|
2018-02-14 06:15:39
|
muflihun/residue
|
https://api.github.com/repos/muflihun/residue
|
closed
|
Client integrity task can remove pending dead client
|
area: log-processing edge-case type: bug
|
### Details
If log processing is running behind, one of the clients is past its age, and the client integrity task runs in the meantime, the task will remove the client from the registry. This will cause the log to fail with the invalid message `[log-request-handler.cc:107] Failed: Client not connected yet`
Currently the situation is handled and the client is temporarily brought back alive if it is DEAD but still registered.
### Expected
Log processing should be successful for such dead client because it passed the original validation (at time of connection and token retrieval)
### Affected Version
v1.x.x
|
1.0
|
Client integrity task can remove pending dead client - ### Details
If log processing is running behind, one of the clients is past its age, and the client integrity task runs in the meantime, the task will remove the client from the registry. This will cause the log to fail with the invalid message `[log-request-handler.cc:107] Failed: Client not connected yet`
Currently the situation is handled and the client is temporarily brought back alive if it is DEAD but still registered.
### Expected
Log processing should be successful for such dead client because it passed the original validation (at time of connection and token retrieval)
### Affected Version
v1.x.x
|
process
|
client integrity task can remove pending dead client details if log processing is running behind and one of the clients are past their age and client integrity task runs in the mean time it will remove the client from registry this will cause the log to fail with invalid message failed client not connected yet currently the situation is handled and client is temporarily brought alive if client is dead but still registered expected log processing should be successful for such dead client because it passed the original validation at time of connection and token retrieval affected version x x
| 1
|
4,764
| 4,622,107,680
|
IssuesEvent
|
2016-09-27 05:52:51
|
ember-cli/ember-cli
|
https://api.github.com/repos/ember-cli/ember-cli
|
closed
|
sluggish startup times
|
Bug Performance
|
```
~ ember v (878ms)
WARNING: Addon[ember-try]'s commands took: 194ms to load, see: http://some-link.com/help
ember-cli: 2.7.0-beta.3-log-slow-command-loads-61749d551d
node: 6.2.2
os: darwin x64
Start time: (2016-06-21 06:51:29 UTC) [treshold=1%]
# module time %
1 graceful-fs (src/ember-c...eful-fs/graceful-fs.js) 11ms ▇ 1%
2 glob (src/ember-cli/node_modules/glob/glob.js) 17ms ▇ 2%
3 rimraf (src/ember-cli/no...dules/rimraf/rimraf.js) 18ms ▇ 2%
4 ./remove (src/ember-cli/...ra/lib/remove/index.js) 19ms ▇ 2%
5 fs-extra (src/ember-cli/.../fs-extra/lib/index.js) 61ms ▇▇▇ 6%
6 ./_Hash (src/ember-cli/n...odules/lodash/_Hash.js) 12ms ▇ 1%
7 ./_mapCacheClear (src/em...dash/_mapCacheClear.js) 13ms ▇ 1%
8 ./_MapCache (src/ember-c...es/lodash/_MapCache.js) 20ms ▇ 2%
9 ./_stackSet (src/ember-c...es/lodash/_stackSet.js) 20ms ▇ 2%
10 ./_Stack (src/ember-cli/...dules/lodash/_Stack.js) 28ms ▇▇ 3%
11 ./_equalObjects (src/emb...odash/_equalObjects.js) 11ms ▇ 1%
12 ./_baseIsEqualDeep (src/...sh/_baseIsEqualDeep.js) 30ms ▇▇ 3%
13 ./_baseIsEqual (src/embe...lodash/_baseIsEqual.js) 30ms ▇▇ 3%
14 ./_baseIsMatch (src/embe...lodash/_baseIsMatch.js) 59ms ▇▇▇ 5%
15 ./_baseMatches (src/embe...lodash/_baseMatches.js) 61ms ▇▇▇ 6%
16 ./_baseIteratee (src/emb...odash/_baseIteratee.js) 72ms ▇▇▇▇ 7%
17 ./_createFind (src/ember.../lodash/_createFind.js) 72ms ▇▇▇▇ 7%
18 lodash/find (src/ember-c...modules/lodash/find.js) 77ms ▇▇▇▇ 7%
19 ./_baseClone (src/ember-...s/lodash/_baseClone.js) 15ms ▇ 1%
20 ./_baseMergeDeep (src/em...dash/_baseMergeDeep.js) 22ms ▇▇ 2%
21 ./_baseMerge (src/ember-...s/lodash/_baseMerge.js) 25ms ▇▇ 2%
22 lodash/merge (src/ember-...odules/lodash/merge.js) 25ms ▇▇ 2%
23 ../models/addon-discover...els/addon-discovery.js) 12ms ▇ 1%
24 ../models/command (src/e.../lib/models/command.js) 36ms ▇▇ 3%
25 ../models/project (src/e.../lib/models/project.js) 264ms ▇▇▇▇▇▇▇▇▇▇▇▇▇ 25%
26 diff (src/ember-cli/node...ules/diff/lib/index.js) 14ms ▇ 1%
27 ./edit-file-diff (src/em...dels/edit-file-diff.js) 23ms ▇▇ 2%
28 ./file-info (src/ember-c...ib/models/file-info.js) 33ms ▇▇ 3%
29 inflection (src/ember-cl...tion/lib/inflection.js) 11ms ▇ 1%
30 ../models/blueprint (src...ib/models/blueprint.js) 69ms ▇▇▇▇ 6%
31 ../utilities/merge-bluep...e-blueprint-options.js) 70ms ▇▇▇▇ 7%
32 ./new (src/ember-cli/lib/commands/new.js) 72ms ▇▇▇▇ 7%
33 /Users/stefanepenner/src.../lib/commands/addon.js) 73ms ▇▇▇▇ 7%
34 /Users/stefanepenner/src.../lib/commands/serve.js) 11ms ▇ 1%
35 ../models/builder (src/e.../lib/models/builder.js) 16ms ▇ 1%
36 /Users/stefanepenner/src...i/lib/commands/test.js) 17ms ▇ 2%
37 /Users/stefanepenner/src...cli/lib/tasks/serve.js) 15ms ▇ 1%
38 ../lib/cli (src/ember-cli/lib/cli/index.js) 437ms ▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇ 41%
39 leek (src/ember-cli/node...dules/leek/lib/leek.js) 18ms ▇ 2%
40 lodash.merge (src/ember-.../lodash.merge/index.js) 13ms ▇ 1%
41 yam (src/ember-cli/node_modules/yam/lib/yam.js) 16ms ▇ 1%
42 configstore (src/ember-c...s/configstore/index.js) 13ms ▇ 1%
43 ./baseMergeDeep (src/emb...ernal/baseMergeDeep.js) 17ms ▇ 2%
44 ../internal/baseMerge (s.../internal/baseMerge.js) 21ms ▇ 2%
45 lodash/object/merge (src...lodash/object/merge.js) 24ms ▇▇ 2%
46 ./lib/style-plugin (src/...ry/lib/style-plugin.js) 34ms ▇▇ 3%
47 ./ (src/ember-cli/node_m...cess-registry/index.js) 38ms ▇▇ 4%
48 ember-cli-preprocess-reg...istry/preprocessors.js) 38ms ▇▇ 4%
49 broccoli-funnel (src/emb...occoli-funnel/index.js) 12ms ▇ 1%
50 ../models/addon (src/emb...li/lib/models/addon.js) 54ms ▇▇▇ 5%
51 lodash/array (src/ember-...odules/lodash/array.js) 58ms ▇▇▇ 5%
52 node-fetch (src/ember-cl...es/node-fetch/index.js) 24ms ▇▇ 2%
53 ./fetch-ember-versions-f...ersions-from-github.js) 25ms ▇▇ 2%
54 ./get-ember-versions (sr.../get-ember-versions.js) 83ms ▇▇▇▇ 8%
55 ember-try-config (src/em...io-config-for-ember.js) 85ms ▇▇▇▇ 8%
56 ../utils/config (src/emb...ry/lib/utils/config.js) 87ms ▇▇▇▇ 8%
57 lodash (src/ember-cli/no...odules/lodash/index.js) 45ms ▇▇▇ 4%
58 ./utils (src/ember-cli/n...li-table2/src/utils.js) 53ms ▇▇▇ 5%
59 ./src/table (src/ember-c...li-table2/src/table.js) 59ms ▇▇▇ 5%
60 cli-table2 (src/ember-cl...es/cli-table2/index.js) 59ms ▇▇▇ 5%
61 ./../utils/result-summar...tils/result-summary.js) 60ms ▇▇▇ 6%
62 fs-extra (src/ember-cli/.../fs-extra/lib/index.js) 26ms ▇▇ 2%
63 ../dependency-manager-ad...ager-adapters/bower.js) 30ms ▇▇ 3%
64 ./../utils/dependency-ma...ger-adapter-factory.js) 32ms ▇▇ 3%
65 ../tasks/try-each (src/e.../lib/tasks/try-each.js) 99ms ▇▇▇▇▇ 9%
66 ./try (src/ember-cli/nod...ry/lib/commands/try.js) 187ms ▇▇▇▇▇▇▇▇▇ 17%
67 ./lib/commands (src/embe.../lib/commands/index.js) 194ms ▇▇▇▇▇▇▇▇▇ 18%
68 ./pubsuffix (src/ember-c...ookie/lib/pubsuffix.js) 20ms ▇ 2%
69 tough-cookie (src/ember-...h-cookie/lib/cookie.js) 28ms ▇▇ 3%
70 ./lib/cookies (src/ember...request/lib/cookies.js) 28ms ▇▇ 3%
71 bl (src/ember-cli/node_modules/bl/bl.js) 12ms ▇ 1%
72 hawk (src/ember-cli/node...ules/hawk/lib/index.js) 15ms ▇ 1%
73 ./formats/auto (src/embe...pk/lib/formats/auto.js) 11ms ▇ 1%
74 ./private-key (src/ember...hpk/lib/private-key.js) 24ms ▇▇ 2%
75 ./utils (src/ember-cli/n...les/sshpk/lib/utils.js) 27ms ▇▇ 3%
76 ./fingerprint (src/ember...hpk/lib/fingerprint.js) 29ms ▇▇ 3%
77 ./key (src/ember-cli/nod...dules/sshpk/lib/key.js) 36ms ▇▇ 3%
78 sshpk (src/ember-cli/nod...les/sshpk/lib/index.js) 37ms ▇▇ 3%
79 ./utils (src/ember-cli/n...signature/lib/utils.js) 39ms ▇▇ 4%
80 ./parser (src/ember-cli/...ignature/lib/parser.js) 42ms ▇▇ 4%
81 http-signature (src/embe...signature/lib/index.js) 51ms ▇▇▇ 5%
82 mime-types (src/ember-cl...es/mime-types/index.js) 14ms ▇ 1%
83 ./runner (src/ember-cli/...alidator/lib/runner.js) 17ms ▇ 2%
84 har-validator (src/ember...validator/lib/index.js) 18ms ▇ 2%
85 ./lib/har (src/ember-cli...les/request/lib/har.js) 18ms ▇ 2%
86 ./request (src/ember-cli...les/request/request.js) 141ms ▇▇▇▇▇▇▇ 13%
87 request (src/ember-cli/n...dules/request/index.js) 172ms ▇▇▇▇▇▇▇▇ 16%
Total require(): 2211
Total time: 1.1s
```
using: https://www.npmjs.com/package/time-require
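The per-module timings above come from instrumenting `require()` at startup. The same measurement can be approximated by hand for a single module; this is a generic sketch (it assumes `node` is on PATH and times only one `require`, not the whole dependency tree the way `time-require` does):

```shell
# Time a single require() call -- the same measurement time-require
# aggregates across every module loaded at startup.
node -e '
const t0 = process.hrtime.bigint();
require("path");
const ms = Number(process.hrtime.bigint() - t0) / 1e6;
console.log(`require("path"): ${ms.toFixed(2)}ms`);
'
```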
|
True
|
sluggish startup times - ```
~ ember v (878ms)
WARNING: Addon[ember-try]'s commands took: 194ms to load, see: http://some-link.com/help
ember-cli: 2.7.0-beta.3-log-slow-command-loads-61749d551d
node: 6.2.2
os: darwin x64
Start time: (2016-06-21 06:51:29 UTC) [treshold=1%]
# module time %
1 graceful-fs (src/ember-c...eful-fs/graceful-fs.js) 11ms ▇ 1%
2 glob (src/ember-cli/node_modules/glob/glob.js) 17ms ▇ 2%
3 rimraf (src/ember-cli/no...dules/rimraf/rimraf.js) 18ms ▇ 2%
4 ./remove (src/ember-cli/...ra/lib/remove/index.js) 19ms ▇ 2%
5 fs-extra (src/ember-cli/.../fs-extra/lib/index.js) 61ms ▇▇▇ 6%
6 ./_Hash (src/ember-cli/n...odules/lodash/_Hash.js) 12ms ▇ 1%
7 ./_mapCacheClear (src/em...dash/_mapCacheClear.js) 13ms ▇ 1%
8 ./_MapCache (src/ember-c...es/lodash/_MapCache.js) 20ms ▇ 2%
9 ./_stackSet (src/ember-c...es/lodash/_stackSet.js) 20ms ▇ 2%
10 ./_Stack (src/ember-cli/...dules/lodash/_Stack.js) 28ms ▇▇ 3%
11 ./_equalObjects (src/emb...odash/_equalObjects.js) 11ms ▇ 1%
12 ./_baseIsEqualDeep (src/...sh/_baseIsEqualDeep.js) 30ms ▇▇ 3%
13 ./_baseIsEqual (src/embe...lodash/_baseIsEqual.js) 30ms ▇▇ 3%
14 ./_baseIsMatch (src/embe...lodash/_baseIsMatch.js) 59ms ▇▇▇ 5%
15 ./_baseMatches (src/embe...lodash/_baseMatches.js) 61ms ▇▇▇ 6%
16 ./_baseIteratee (src/emb...odash/_baseIteratee.js) 72ms ▇▇▇▇ 7%
17 ./_createFind (src/ember.../lodash/_createFind.js) 72ms ▇▇▇▇ 7%
18 lodash/find (src/ember-c...modules/lodash/find.js) 77ms ▇▇▇▇ 7%
19 ./_baseClone (src/ember-...s/lodash/_baseClone.js) 15ms ▇ 1%
20 ./_baseMergeDeep (src/em...dash/_baseMergeDeep.js) 22ms ▇▇ 2%
21 ./_baseMerge (src/ember-...s/lodash/_baseMerge.js) 25ms ▇▇ 2%
22 lodash/merge (src/ember-...odules/lodash/merge.js) 25ms ▇▇ 2%
23 ../models/addon-discover...els/addon-discovery.js) 12ms ▇ 1%
24 ../models/command (src/e.../lib/models/command.js) 36ms ▇▇ 3%
25 ../models/project (src/e.../lib/models/project.js) 264ms ▇▇▇▇▇▇▇▇▇▇▇▇▇ 25%
26 diff (src/ember-cli/node...ules/diff/lib/index.js) 14ms ▇ 1%
27 ./edit-file-diff (src/em...dels/edit-file-diff.js) 23ms ▇▇ 2%
28 ./file-info (src/ember-c...ib/models/file-info.js) 33ms ▇▇ 3%
29 inflection (src/ember-cl...tion/lib/inflection.js) 11ms ▇ 1%
30 ../models/blueprint (src...ib/models/blueprint.js) 69ms ▇▇▇▇ 6%
31 ../utilities/merge-bluep...e-blueprint-options.js) 70ms ▇▇▇▇ 7%
32 ./new (src/ember-cli/lib/commands/new.js) 72ms ▇▇▇▇ 7%
33 /Users/stefanepenner/src.../lib/commands/addon.js) 73ms ▇▇▇▇ 7%
34 /Users/stefanepenner/src.../lib/commands/serve.js) 11ms ▇ 1%
35 ../models/builder (src/e.../lib/models/builder.js) 16ms ▇ 1%
36 /Users/stefanepenner/src...i/lib/commands/test.js) 17ms ▇ 2%
37 /Users/stefanepenner/src...cli/lib/tasks/serve.js) 15ms ▇ 1%
38 ../lib/cli (src/ember-cli/lib/cli/index.js) 437ms ▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇ 41%
39 leek (src/ember-cli/node...dules/leek/lib/leek.js) 18ms ▇ 2%
40 lodash.merge (src/ember-.../lodash.merge/index.js) 13ms ▇ 1%
41 yam (src/ember-cli/node_modules/yam/lib/yam.js) 16ms ▇ 1%
42 configstore (src/ember-c...s/configstore/index.js) 13ms ▇ 1%
43 ./baseMergeDeep (src/emb...ernal/baseMergeDeep.js) 17ms ▇ 2%
44 ../internal/baseMerge (s.../internal/baseMerge.js) 21ms ▇ 2%
45 lodash/object/merge (src...lodash/object/merge.js) 24ms ▇▇ 2%
46 ./lib/style-plugin (src/...ry/lib/style-plugin.js) 34ms ▇▇ 3%
47 ./ (src/ember-cli/node_m...cess-registry/index.js) 38ms ▇▇ 4%
48 ember-cli-preprocess-reg...istry/preprocessors.js) 38ms ▇▇ 4%
49 broccoli-funnel (src/emb...occoli-funnel/index.js) 12ms ▇ 1%
50 ../models/addon (src/emb...li/lib/models/addon.js) 54ms ▇▇▇ 5%
51 lodash/array (src/ember-...odules/lodash/array.js) 58ms ▇▇▇ 5%
52 node-fetch (src/ember-cl...es/node-fetch/index.js) 24ms ▇▇ 2%
53 ./fetch-ember-versions-f...ersions-from-github.js) 25ms ▇▇ 2%
54 ./get-ember-versions (sr.../get-ember-versions.js) 83ms ▇▇▇▇ 8%
55 ember-try-config (src/em...io-config-for-ember.js) 85ms ▇▇▇▇ 8%
56 ../utils/config (src/emb...ry/lib/utils/config.js) 87ms ▇▇▇▇ 8%
57 lodash (src/ember-cli/no...odules/lodash/index.js) 45ms ▇▇▇ 4%
58 ./utils (src/ember-cli/n...li-table2/src/utils.js) 53ms ▇▇▇ 5%
59 ./src/table (src/ember-c...li-table2/src/table.js) 59ms ▇▇▇ 5%
60 cli-table2 (src/ember-cl...es/cli-table2/index.js) 59ms ▇▇▇ 5%
61 ./../utils/result-summar...tils/result-summary.js) 60ms ▇▇▇ 6%
62 fs-extra (src/ember-cli/.../fs-extra/lib/index.js) 26ms ▇▇ 2%
63 ../dependency-manager-ad...ager-adapters/bower.js) 30ms ▇▇ 3%
64 ./../utils/dependency-ma...ger-adapter-factory.js) 32ms ▇▇ 3%
65 ../tasks/try-each (src/e.../lib/tasks/try-each.js) 99ms ▇▇▇▇▇ 9%
66 ./try (src/ember-cli/nod...ry/lib/commands/try.js) 187ms ▇▇▇▇▇▇▇▇▇ 17%
67 ./lib/commands (src/embe.../lib/commands/index.js) 194ms ▇▇▇▇▇▇▇▇▇ 18%
68 ./pubsuffix (src/ember-c...ookie/lib/pubsuffix.js) 20ms ▇ 2%
69 tough-cookie (src/ember-...h-cookie/lib/cookie.js) 28ms ▇▇ 3%
70 ./lib/cookies (src/ember...request/lib/cookies.js) 28ms ▇▇ 3%
71 bl (src/ember-cli/node_modules/bl/bl.js) 12ms ▇ 1%
72 hawk (src/ember-cli/node...ules/hawk/lib/index.js) 15ms ▇ 1%
73 ./formats/auto (src/embe...pk/lib/formats/auto.js) 11ms ▇ 1%
74 ./private-key (src/ember...hpk/lib/private-key.js) 24ms ▇▇ 2%
75 ./utils (src/ember-cli/n...les/sshpk/lib/utils.js) 27ms ▇▇ 3%
76 ./fingerprint (src/ember...hpk/lib/fingerprint.js) 29ms ▇▇ 3%
77 ./key (src/ember-cli/nod...dules/sshpk/lib/key.js) 36ms ▇▇ 3%
78 sshpk (src/ember-cli/nod...les/sshpk/lib/index.js) 37ms ▇▇ 3%
79 ./utils (src/ember-cli/n...signature/lib/utils.js) 39ms ▇▇ 4%
80 ./parser (src/ember-cli/...ignature/lib/parser.js) 42ms ▇▇ 4%
81 http-signature (src/embe...signature/lib/index.js) 51ms ▇▇▇ 5%
82 mime-types (src/ember-cl...es/mime-types/index.js) 14ms ▇ 1%
83 ./runner (src/ember-cli/...alidator/lib/runner.js) 17ms ▇ 2%
84 har-validator (src/ember...validator/lib/index.js) 18ms ▇ 2%
85 ./lib/har (src/ember-cli...les/request/lib/har.js) 18ms ▇ 2%
86 ./request (src/ember-cli...les/request/request.js) 141ms ▇▇▇▇▇▇▇ 13%
87 request (src/ember-cli/n...dules/request/index.js) 172ms ▇▇▇▇▇▇▇▇ 16%
Total require(): 2211
Total time: 1.1s
```
using: https://www.npmjs.com/package/time-require
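For triaging a report like the one above, the slowest entries can be pulled out programmatically. A minimal sketch, assuming each row ends in `<N>ms <bars> <P>%` as in the time-require output shown; the `slowest` helper is illustrative and not part of time-require itself:

```python
import re

# One report row: index, module name, elapsed ms, bar chart, percentage.
ROW = re.compile(r"^\s*\d+\s+(?P<mod>.+?)\s+(?P<ms>\d+)ms\b.*?(?P<pct>\d+)%\s*$")

def slowest(report_lines, top=3):
    """Return the `top` slowest require() rows as (ms, module) pairs."""
    rows = []
    for line in report_lines:
        m = ROW.match(line)
        if m:
            rows.append((int(m.group("ms")), m.group("mod").strip()))
    rows.sort(reverse=True)
    return rows[:top]
```

Sorting by elapsed time makes it easy to see that a handful of modules (here `./request`, `./lib/commands`, `../lib/cli`) dominate the startup cost.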
|
non_process
|
sluggish startup times ember v warning addon s commands took to load see ember cli beta log slow command loads node os darwin start time utc module time graceful fs src ember c eful fs graceful fs js ▇ glob src ember cli node modules glob glob js ▇ rimraf src ember cli no dules rimraf rimraf js ▇ remove src ember cli ra lib remove index js ▇ fs extra src ember cli fs extra lib index js ▇▇▇ hash src ember cli n odules lodash hash js ▇ mapcacheclear src em dash mapcacheclear js ▇ mapcache src ember c es lodash mapcache js ▇ stackset src ember c es lodash stackset js ▇ stack src ember cli dules lodash stack js ▇▇ equalobjects src emb odash equalobjects js ▇ baseisequaldeep src sh baseisequaldeep js ▇▇ baseisequal src embe lodash baseisequal js ▇▇ baseismatch src embe lodash baseismatch js ▇▇▇ basematches src embe lodash basematches js ▇▇▇ baseiteratee src emb odash baseiteratee js ▇▇▇▇ createfind src ember lodash createfind js ▇▇▇▇ lodash find src ember c modules lodash find js ▇▇▇▇ baseclone src ember s lodash baseclone js ▇ basemergedeep src em dash basemergedeep js ▇▇ basemerge src ember s lodash basemerge js ▇▇ lodash merge src ember odules lodash merge js ▇▇ models addon discover els addon discovery js ▇ models command src e lib models command js ▇▇ models project src e lib models project js ▇▇▇▇▇▇▇▇▇▇▇▇▇ diff src ember cli node ules diff lib index js ▇ edit file diff src em dels edit file diff js ▇▇ file info src ember c ib models file info js ▇▇ inflection src ember cl tion lib inflection js ▇ models blueprint src ib models blueprint js ▇▇▇▇ utilities merge bluep e blueprint options js ▇▇▇▇ new src ember cli lib commands new js ▇▇▇▇ users stefanepenner src lib commands addon js ▇▇▇▇ users stefanepenner src lib commands serve js ▇ models builder src e lib models builder js ▇ users stefanepenner src i lib commands test js ▇ users stefanepenner src cli lib tasks serve js ▇ lib cli src ember cli lib cli index js ▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇ leek src ember cli node dules 
leek lib leek js ▇ lodash merge src ember lodash merge index js ▇ yam src ember cli node modules yam lib yam js ▇ configstore src ember c s configstore index js ▇ basemergedeep src emb ernal basemergedeep js ▇ internal basemerge s internal basemerge js ▇ lodash object merge src lodash object merge js ▇▇ lib style plugin src ry lib style plugin js ▇▇ src ember cli node m cess registry index js ▇▇ ember cli preprocess reg istry preprocessors js ▇▇ broccoli funnel src emb occoli funnel index js ▇ models addon src emb li lib models addon js ▇▇▇ lodash array src ember odules lodash array js ▇▇▇ node fetch src ember cl es node fetch index js ▇▇ fetch ember versions f ersions from github js ▇▇ get ember versions sr get ember versions js ▇▇▇▇ ember try config src em io config for ember js ▇▇▇▇ utils config src emb ry lib utils config js ▇▇▇▇ lodash src ember cli no odules lodash index js ▇▇▇ utils src ember cli n li src utils js ▇▇▇ src table src ember c li src table js ▇▇▇ cli src ember cl es cli index js ▇▇▇ utils result summar tils result summary js ▇▇▇ fs extra src ember cli fs extra lib index js ▇▇ dependency manager ad ager adapters bower js ▇▇ utils dependency ma ger adapter factory js ▇▇ tasks try each src e lib tasks try each js ▇▇▇▇▇ try src ember cli nod ry lib commands try js ▇▇▇▇▇▇▇▇▇ lib commands src embe lib commands index js ▇▇▇▇▇▇▇▇▇ pubsuffix src ember c ookie lib pubsuffix js ▇ tough cookie src ember h cookie lib cookie js ▇▇ lib cookies src ember request lib cookies js ▇▇ bl src ember cli node modules bl bl js ▇ hawk src ember cli node ules hawk lib index js ▇ formats auto src embe pk lib formats auto js ▇ private key src ember hpk lib private key js ▇▇ utils src ember cli n les sshpk lib utils js ▇▇ fingerprint src ember hpk lib fingerprint js ▇▇ key src ember cli nod dules sshpk lib key js ▇▇ sshpk src ember cli nod les sshpk lib index js ▇▇ utils src ember cli n signature lib utils js ▇▇ parser src ember cli ignature lib parser js ▇▇ http signature 
src embe signature lib index js ▇▇▇ mime types src ember cl es mime types index js ▇ runner src ember cli alidator lib runner js ▇ har validator src ember validator lib index js ▇ lib har src ember cli les request lib har js ▇ request src ember cli les request request js ▇▇▇▇▇▇▇ request src ember cli n dules request index js ▇▇▇▇▇▇▇▇ total require total time using
| 0
|
10,061
| 13,044,161,786
|
IssuesEvent
|
2020-07-29 03:47:26
|
tikv/tikv
|
https://api.github.com/repos/tikv/tikv
|
closed
|
UCP: Migrate scalar function `AddDateStringString` from TiDB
|
challenge-program-2 component/coprocessor difficulty/easy sig/coprocessor
|
## Description
Port the scalar function `AddDateStringString` from TiDB to the coprocessor.
## Score
* 50
## Mentor(s)
* @andylokandy
## Recommended Skills
* Rust programming
## Learning Materials
Already implemented expressions ported from TiDB
- https://github.com/tikv/tikv/tree/master/components/tidb_query/src/rpn_expr
- https://github.com/tikv/tikv/tree/master/components/tidb_query/src/expr
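As a reference for the port, the expected semantics of `ADDDATE(date, INTERVAL n unit)` with string arguments can be modeled in a few lines. This is a hedged Python sketch of the behavior to match, not the Rust coprocessor implementation; the `add_date_string_string` name and its single-unit handling are illustrative, and the real builtin must also cover NULL arguments, more units, and TiDB's extended datetime range.

```python
from datetime import datetime, timedelta

def add_date_string_string(date_str, interval_str, unit="DAY"):
    """Model of ADDDATE('2012-01-01', INTERVAL '31' DAY) with string args.

    Illustrative only: just the DAY unit, no NULL handling,
    no out-of-range checks.
    """
    if unit != "DAY":
        raise ValueError("only DAY is modeled in this sketch")
    dt = datetime.strptime(date_str, "%Y-%m-%d") + timedelta(days=int(interval_str))
    return dt.strftime("%Y-%m-%d")
```

A port would express the same arithmetic against the coprocessor's own time types rather than strings.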
|
2.0
|
UCP: Migrate scalar function `AddDateStringString` from TiDB -
## Description
Port the scalar function `AddDateStringString` from TiDB to the coprocessor.
## Score
* 50
## Mentor(s)
* @andylokandy
## Recommended Skills
* Rust programming
## Learning Materials
Already implemented expressions ported from TiDB
- https://github.com/tikv/tikv/tree/master/components/tidb_query/src/rpn_expr
- https://github.com/tikv/tikv/tree/master/components/tidb_query/src/expr
|
process
|
ucp migrate scalar function adddatestringstring from tidb description port the scalar function adddatestringstring from tidb to coprocessor score mentor s andylokandy recommended skills rust programming learning materials already implemented expressions ported from tidb
| 1
|
48,629
| 2,999,084,002
|
IssuesEvent
|
2015-07-23 17:13:13
|
VertNet/webapp
|
https://api.github.com/repos/VertNet/webapp
|
closed
|
Tissue Flag
|
bug priority-high
|
For all of the other filter flags, a user can use 1 or 0 to indicate presence or absence of the flag on the URL string.
So mappable:1 or mappable:0 in combination with other data both return appropriate values.
Same with media:1 or media:0
Tissue, however, treats tissue:0 the same as tissue:1.
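The expected behavior can be sketched as a small parser in which a `:0` value filters for absence rather than being coerced to presence. A hedged illustration only — the function and behavior shown are hypothetical, not the portal's actual query code:

```python
def parse_filter_flags(query):
    """Turn 'mappable:1 tissue:0'-style terms into explicit booleans.

    Hypothetical sketch: ':0' must mean "flag absent" and never be
    treated like ':1'; terms without ':' are left for full-text search.
    """
    flags = {}
    for term in query.split():
        key, sep, value = term.partition(":")
        if sep:
            flags[key] = value == "1"
    return flags
```

With this shape, `tissue:0` yields `False` and the downstream filter can exclude tissue records, mirroring how mappable and media already behave.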
|
1.0
|
Tissue Flag - For all of the other filter flags, a user can use 1 or 0 to indicate presence or absence of the flag on the URL string.
So mappable:1 or mappable:0 in combination with other data both return appropriate values.
Same with media:1 or media:0
Tissue, however, treats tissue:0 the same as tissue:1.
|
non_process
|
tissue flag for all of the other filter flags a user can use or to indicate presence or absence of the flag on the url string so mappable or mappable in combination with other data both return appropriate values same with media or media tissue however treats tissue the same as tissue
| 0
|
18,807
| 4,312,679,619
|
IssuesEvent
|
2016-07-22 07:09:28
|
projectatomic/adb-atomic-developer-bundle
|
https://api.github.com/repos/projectatomic/adb-atomic-developer-bundle
|
closed
|
README needs updating
|
documentation
|
The "What does it contain" section needs to be clearer and should either list all providers or differentiate what is cached versus not cached.
|
1.0
|
README needs updating - The "What does it contain" section needs to be clearer and should either list all providers or differentiate what is cached versus not cached.
|
non_process
|
readme needs updating the what does it contain section needs to be more clear and either list all providers or differentiate in what is cached versus not cached
| 0
|
23,927
| 6,495,360,047
|
IssuesEvent
|
2017-08-22 04:37:15
|
joomla/joomla-cms
|
https://api.github.com/repos/joomla/joomla-cms
|
closed
|
Very slow in Category list view when I added around 30 custom fields for an article
|
No Code Attached Yet
|
I would like to ask why the category list view page loads very slowly (around 10 seconds, even though I have only 10 articles in the category list).
I have added around 30 custom fields to one article. Can anyone tell me how to solve it?
### Steps to reproduce the issue
I added 30 custom fields to one article in a specific category, then opened the category list view
### Expected result
normal loading speed for category list view
### Actual result
Very slow loading of the category view page, e.g. over 10 seconds for a list of only 10 articles on the category page
### System information (as much as possible)
### Additional comments

|
1.0
|
Very slow in Category list view when I added around 30 custom fields for an article - I would like to ask why the category list view page loads very slowly (around 10 seconds, even though I have only 10 articles in the category list).
I have added around 30 custom fields to one article. Can anyone tell me how to solve it?
### Steps to reproduce the issue
I added 30 custom fields to one article in a specific category, then opened the category list view
### Expected result
normal loading speed for category list view
### Actual result
Very slow loading of the category view page, e.g. over 10 seconds for a list of only 10 articles on the category page
### System information (as much as possible)
### Additional comments

|
non_process
|
very slow in category view list when i added arround custom fields for article i would like to ask why the category list view page loading time is very slow around even i have only article in category list i have added around custom fields for article anyone can tell me how to solve it steps to reproduce the issue i added custom fields for one specific category article for category list views expected result normal loading speed for category list view actual result very slow loading in category view page e g over sec for only article list in category page system information as much as possible additional comments
| 0
|
746,584
| 26,036,527,408
|
IssuesEvent
|
2022-12-22 05:53:51
|
wso2/api-manager
|
https://api.github.com/repos/wso2/api-manager
|
opened
|
SMB2 inbound endpoint stops working after idle time
|
Type/Bug Priority/Normal
|
### Description
SMB2 inbound endpoint stops working after 15 minutes when setting the polling interval to a value of more than 15 minutes.
### Steps to Reproduce
- Get an updated MI 4.1.0 server( We tested the scenario in the latest update level 26).
- Deploy an Inbound Endpoint to poll files from an SMB location.
- After one poll, wait for 15 minutes. You'll observe the errors below continuously.
```
[2022-12-21 12:32:00,599] ERROR {Promise} - << 1063 >> woke to: {} com.hierynomus.smbj.common.SMBRuntimeException: com.hierynomus.protocol.transport.TransportException: java.net.SocketException: Connection reset
at com.hierynomus.smbj.common.SMBRuntimeException$1.wrap(SMBRuntimeException.java:28)
at com.hierynomus.smbj.common.SMBRuntimeException$1.wrap(SMBRuntimeException.java:22)
at com.hierynomus.protocol.commons.concurrent.Promise.deliverError(Promise.java:95)
at com.hierynomus.smbj.connection.OutstandingRequests.handleError(OutstandingRequests.java:88)
at com.hierynomus.smbj.connection.Connection.handleError(Connection.java:292)
at com.hierynomus.smbj.transport.PacketReader.run(PacketReader.java:54)
at java.base/java.lang.Thread.run(Thread.java:829)
Caused by: com.hierynomus.protocol.transport.TransportException: java.net.SocketException: Connection reset
at com.hierynomus.smbj.transport.tcp.direct.DirectTcpPacketReader.doRead(DirectTcpPacketReader.java:53)
at com.hierynomus.smbj.transport.PacketReader.readPacket(PacketReader.java:70)
at com.hierynomus.smbj.transport.PacketReader.run(PacketReader.java:48)
... 1 more
Caused by: java.net.SocketException: Connection reset
at java.base/java.net.SocketInputStream.read(SocketInputStream.java:186)
at java.base/java.net.SocketInputStream.read(SocketInputStream.java:140)
at com.hierynomus.smbj.transport.tcp.direct.DirectTcpPacketReader.readFully(DirectTcpPacketReader.java:70)
at com.hierynomus.smbj.transport.tcp.direct.DirectTcpPacketReader.readTcpHeader(DirectTcpPacketReader.java:59)
at com.hierynomus.smbj.transport.tcp.direct.DirectTcpPacketReader.doRead(DirectTcpPacketReader.java:48)
... 3 more
[2022-12-21 12:32:00,599] ERROR {Session} - Caught exception while closing TreeConnect with id: 1 com.hierynomus.protocol.transport.TransportException: java.net.SocketException: Broken pipe (Write failed)
at com.hierynomus.smbj.transport.tcp.direct.DirectTcpTransport.write(DirectTcpTransport.java:78)
at com.hierynomus.smbj.connection.Connection.send(Connection.java:234)
at com.hierynomus.smbj.session.Session.send(Session.java:300)
at com.hierynomus.smbj.share.TreeConnect.close(TreeConnect.java:69)
at com.hierynomus.smbj.share.Share.close(Share.java:116)
at com.hierynomus.smbj.session.Session.logoff(Session.java:236)
at com.hierynomus.smbj.session.Session.close(Session.java:279)
at com.hierynomus.smbj.connection.Connection.close(Connection.java:178)
at com.hierynomus.smbj.connection.Connection.close(Connection.java:155)
at com.hierynomus.smbj.connection.Connection.handleError(Connection.java:294)
at com.hierynomus.smbj.transport.PacketReader.run(PacketReader.java:54)
at java.base/java.lang.Thread.run(Thread.java:829)
Caused by: java.net.SocketException: Broken pipe (Write failed)
at java.base/java.net.SocketOutputStream.socketWrite0(Native Method)
at java.base/java.net.SocketOutputStream.socketWrite(SocketOutputStream.java:110)
at java.base/java.net.SocketOutputStream.write(SocketOutputStream.java:150)
at java.base/java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:81)
at java.base/java.io.BufferedOutputStream.flush(BufferedOutputStream.java:142)
at com.hierynomus.smbj.transport.tcp.direct.DirectTcpTransport.write(DirectTcpTransport.java:75)
... 11 more
[2022-12-21 12:32:00,607] ERROR {Session} - Caught exception while closing TreeConnect with id: 5 com.hierynomus.protocol.transport.TransportException: java.net.SocketException: Broken pipe (Write failed)
at com.hierynomus.smbj.transport.tcp.direct.DirectTcpTransport.write(DirectTcpTransport.java:78)
at com.hierynomus.smbj.connection.Connection.send(Connection.java:234)
at com.hierynomus.smbj.session.Session.send(Session.java:300)
at com.hierynomus.smbj.share.TreeConnect.close(TreeConnect.java:69)
at com.hierynomus.smbj.share.Share.close(Share.java:116)
at com.hierynomus.smbj.session.Session.logoff(Session.java:236)
at com.hierynomus.smbj.session.Session.close(Session.java:279)
at com.hierynomus.smbj.connection.Connection.close(Connection.java:178)
at com.hierynomus.smbj.connection.Connection.close(Connection.java:155)
at com.hierynomus.smbj.connection.Connection.handleError(Connection.java:294)
at com.hierynomus.smbj.transport.PacketReader.run(PacketReader.java:54)
at java.base/java.lang.Thread.run(Thread.java:829)
Caused by: java.net.SocketException: Broken pipe (Write failed)
at java.base/java.net.SocketOutputStream.socketWrite0(Native Method)
at java.base/java.net.SocketOutputStream.socketWrite(SocketOutputStream.java:110)
at java.base/java.net.SocketOutputStream.write(SocketOutputStream.java:150)
at java.base/java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:81)
at java.base/java.io.BufferedOutputStream.flush(BufferedOutputStream.java:142)
at com.hierynomus.smbj.transport.tcp.direct.DirectTcpTransport.write(DirectTcpTransport.java:75)
... 11 more
[2022-12-21 12:32:00,608] ERROR {FilePollingConsumer} - Error checking for existence and readability : smb2://ayeshd:***@192.168.102.6/test/IN org.apache.commons.vfs2.FileSystemException: Could not determine the type of file "smb2://ayeshd:Pwss5$Tp%40r5g@192.168.102.6/test/IN".
at org.apache.commons.vfs2.provider.AbstractFileObject.getType(AbstractFileObject.java:1302)
at org.apache.commons.vfs2.provider.AbstractFileObject.exists(AbstractFileObject.java:900)
at org.wso2.carbon.inbound.endpoint.protocol.file.FilePollingConsumer.poll(FilePollingConsumer.java:187)
at org.wso2.carbon.inbound.endpoint.protocol.file.FilePollingConsumer.execute(FilePollingConsumer.java:151)
at org.wso2.carbon.inbound.endpoint.protocol.file.FileTask.taskExecute(FileTask.java:45)
at org.wso2.carbon.inbound.endpoint.common.InboundTask.execute(InboundTask.java:43)
at org.wso2.micro.integrator.mediation.ntask.NTaskAdapter.execute(NTaskAdapter.java:105)
at org.wso2.micro.integrator.ntask.core.impl.TaskQuartzJobAdapter.execute(TaskQuartzJobAdapter.java:63)
at org.quartz.core.JobRunShell.run(JobRunShell.java:202)
at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515)
at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
at java.base/java.lang.Thread.run(Thread.java:829)
Caused by: org.apache.commons.vfs2.FileSystemException: Unknown message with code "Could not get information for file: IN".
at org.apache.commons.vfs2.provider.smb2.Smb2ClientWrapper.getFileInfo(Smb2ClientWrapper.java:144)
at org.apache.commons.vfs2.provider.smb2.Smb2FileObject.getFileInfo(Smb2FileObject.java:112)
at org.apache.commons.vfs2.provider.smb2.Smb2FileObject.doGetType(Smb2FileObject.java:88)
at org.apache.commons.vfs2.provider.AbstractFileObject.getType(AbstractFileObject.java:1296)
... 13 more
[2022-12-21 12:32:00,945] ERROR {FilePollingConsumer} - Error checking for existence and readability : smb2://ayeshd:***@192.168.102.6/test/IN org.apache.commons.vfs2.FileSystemException: Could not determine the type of file "smb2://ayeshd:Pwss5$Tp%40r5g@192.168.102.6/test/IN".
at org.apache.commons.vfs2.provider.AbstractFileObject.getType(AbstractFileObject.java:1302)
at org.apache.commons.vfs2.provider.AbstractFileObject.exists(AbstractFileObject.java:900)
at org.wso2.carbon.inbound.endpoint.protocol.file.FilePollingConsumer.poll(FilePollingConsumer.java:187)
at org.wso2.carbon.inbound.endpoint.protocol.file.FilePollingConsumer.execute(FilePollingConsumer.java:151)
at org.wso2.carbon.inbound.endpoint.protocol.file.FileTask.taskExecute(FileTask.java:45)
at org.wso2.carbon.inbound.endpoint.common.InboundTask.execute(InboundTask.java:43)
at org.wso2.micro.integrator.mediation.ntask.NTaskAdapter.execute(NTaskAdapter.java:105)
at org.wso2.micro.integrator.ntask.core.impl.TaskQuartzJobAdapter.execute(TaskQuartzJobAdapter.java:63)
at org.quartz.core.JobRunShell.run(JobRunShell.java:202)
at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515)
at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
at java.base/java.lang.Thread.run(Thread.java:829)
Caused by: org.apache.commons.vfs2.FileSystemException: Unknown message with code "Could not get information for file: IN".
at org.apache.commons.vfs2.provider.smb2.Smb2ClientWrapper.getFileInfo(Smb2ClientWrapper.java:144)
at org.apache.commons.vfs2.provider.smb2.Smb2FileObject.getFileInfo(Smb2FileObject.java:112)
at org.apache.commons.vfs2.provider.smb2.Smb2FileObject.doGetType(Smb2FileObject.java:88)
at org.apache.commons.vfs2.provider.AbstractFileObject.getType(AbstractFileObject.java:1296)
... 13 more
```
- We can see the same behavior after restarting the file server.
### Affected Component
MI
### Version
4.1.0
### Environment Details (with versions)
_No response_
### Relevant Log Output
_No response_
### Related Issues
_No response_
### Suggested Labels
_No response_
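A workaround until the underlying stale-session behavior is fixed is to treat a failed poll as a dropped connection and reconnect before retrying, instead of reusing the session the server has already reset. A minimal Python sketch of that pattern, assuming hypothetical `connect`/`poll` callables — this is not the MI inbound-endpoint API:

```python
import time

class StaleConnectionError(Exception):
    """Stands in for the SMBRuntimeException seen after the idle timeout."""

def poll_with_reconnect(connect, poll, interval_s=0.0):
    """Polling loop that reopens the connection when a poll fails.

    `connect` returns a fresh session object; `poll` returns a list of
    files, or None to stop. On a stale session the loop reconnects and
    retries the poll once instead of failing on every cycle.
    """
    conn = connect()
    results = []
    while True:
        try:
            batch = poll(conn)
        except StaleConnectionError:
            conn = connect()    # the server dropped the idle session
            batch = poll(conn)  # retry once on the fresh connection
        if batch is None:
            return results
        results.extend(batch)
        time.sleep(interval_s)
```

In the MI configuration itself, keeping the polling interval below the server's idle timeout (here, 15 minutes) avoids the stale session in the first place.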
|
1.0
|
SMB2 inbound endpoint stops working after idle time - ### Description
SMB2 inbound endpoint stops working after 15 minutes when setting the polling interval to a value of more than 15 minutes.
### Steps to Reproduce
- Get an updated MI 4.1.0 server( We tested the scenario in the latest update level 26).
- Deploy an Inbound Endpoint to poll files from an SMB location.
- After one poll, wait for 15 minutes. You'll observe the errors below continuously.
```
[2022-12-21 12:32:00,599] ERROR {Promise} - << 1063 >> woke to: {} com.hierynomus.smbj.common.SMBRuntimeException: com.hierynomus.protocol.transport.TransportException: java.net.SocketException: Connection reset
at com.hierynomus.smbj.common.SMBRuntimeException$1.wrap(SMBRuntimeException.java:28)
at com.hierynomus.smbj.common.SMBRuntimeException$1.wrap(SMBRuntimeException.java:22)
at com.hierynomus.protocol.commons.concurrent.Promise.deliverError(Promise.java:95)
at com.hierynomus.smbj.connection.OutstandingRequests.handleError(OutstandingRequests.java:88)
at com.hierynomus.smbj.connection.Connection.handleError(Connection.java:292)
at com.hierynomus.smbj.transport.PacketReader.run(PacketReader.java:54)
at java.base/java.lang.Thread.run(Thread.java:829)
Caused by: com.hierynomus.protocol.transport.TransportException: java.net.SocketException: Connection reset
at com.hierynomus.smbj.transport.tcp.direct.DirectTcpPacketReader.doRead(DirectTcpPacketReader.java:53)
at com.hierynomus.smbj.transport.PacketReader.readPacket(PacketReader.java:70)
at com.hierynomus.smbj.transport.PacketReader.run(PacketReader.java:48)
... 1 more
Caused by: java.net.SocketException: Connection reset
at java.base/java.net.SocketInputStream.read(SocketInputStream.java:186)
at java.base/java.net.SocketInputStream.read(SocketInputStream.java:140)
at com.hierynomus.smbj.transport.tcp.direct.DirectTcpPacketReader.readFully(DirectTcpPacketReader.java:70)
at com.hierynomus.smbj.transport.tcp.direct.DirectTcpPacketReader.readTcpHeader(DirectTcpPacketReader.java:59)
at com.hierynomus.smbj.transport.tcp.direct.DirectTcpPacketReader.doRead(DirectTcpPacketReader.java:48)
... 3 more
[2022-12-21 12:32:00,599] ERROR {Session} - Caught exception while closing TreeConnect with id: 1 com.hierynomus.protocol.transport.TransportException: java.net.SocketException: Broken pipe (Write failed)
at com.hierynomus.smbj.transport.tcp.direct.DirectTcpTransport.write(DirectTcpTransport.java:78)
at com.hierynomus.smbj.connection.Connection.send(Connection.java:234)
at com.hierynomus.smbj.session.Session.send(Session.java:300)
at com.hierynomus.smbj.share.TreeConnect.close(TreeConnect.java:69)
at com.hierynomus.smbj.share.Share.close(Share.java:116)
at com.hierynomus.smbj.session.Session.logoff(Session.java:236)
at com.hierynomus.smbj.session.Session.close(Session.java:279)
at com.hierynomus.smbj.connection.Connection.close(Connection.java:178)
at com.hierynomus.smbj.connection.Connection.close(Connection.java:155)
at com.hierynomus.smbj.connection.Connection.handleError(Connection.java:294)
at com.hierynomus.smbj.transport.PacketReader.run(PacketReader.java:54)
at java.base/java.lang.Thread.run(Thread.java:829)
Caused by: java.net.SocketException: Broken pipe (Write failed)
at java.base/java.net.SocketOutputStream.socketWrite0(Native Method)
at java.base/java.net.SocketOutputStream.socketWrite(SocketOutputStream.java:110)
at java.base/java.net.SocketOutputStream.write(SocketOutputStream.java:150)
at java.base/java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:81)
at java.base/java.io.BufferedOutputStream.flush(BufferedOutputStream.java:142)
at com.hierynomus.smbj.transport.tcp.direct.DirectTcpTransport.write(DirectTcpTransport.java:75)
... 11 more
[2022-12-21 12:32:00,607] ERROR {Session} - Caught exception while closing TreeConnect with id: 5 com.hierynomus.protocol.transport.TransportException: java.net.SocketException: Broken pipe (Write failed)
at com.hierynomus.smbj.transport.tcp.direct.DirectTcpTransport.write(DirectTcpTransport.java:78)
at com.hierynomus.smbj.connection.Connection.send(Connection.java:234)
at com.hierynomus.smbj.session.Session.send(Session.java:300)
at com.hierynomus.smbj.share.TreeConnect.close(TreeConnect.java:69)
at com.hierynomus.smbj.share.Share.close(Share.java:116)
at com.hierynomus.smbj.session.Session.logoff(Session.java:236)
at com.hierynomus.smbj.session.Session.close(Session.java:279)
at com.hierynomus.smbj.connection.Connection.close(Connection.java:178)
at com.hierynomus.smbj.connection.Connection.close(Connection.java:155)
at com.hierynomus.smbj.connection.Connection.handleError(Connection.java:294)
at com.hierynomus.smbj.transport.PacketReader.run(PacketReader.java:54)
at java.base/java.lang.Thread.run(Thread.java:829)
Caused by: java.net.SocketException: Broken pipe (Write failed)
at java.base/java.net.SocketOutputStream.socketWrite0(Native Method)
at java.base/java.net.SocketOutputStream.socketWrite(SocketOutputStream.java:110)
at java.base/java.net.SocketOutputStream.write(SocketOutputStream.java:150)
at java.base/java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:81)
at java.base/java.io.BufferedOutputStream.flush(BufferedOutputStream.java:142)
at com.hierynomus.smbj.transport.tcp.direct.DirectTcpTransport.write(DirectTcpTransport.java:75)
... 11 more
[2022-12-21 12:32:00,608] ERROR {FilePollingConsumer} - Error checking for existence and readability : smb2://ayeshd:***@192.168.102.6/test/IN org.apache.commons.vfs2.FileSystemException: Could not determine the type of file "smb2://ayeshd:Pwss5$Tp%40r5g@192.168.102.6/test/IN".
at org.apache.commons.vfs2.provider.AbstractFileObject.getType(AbstractFileObject.java:1302)
at org.apache.commons.vfs2.provider.AbstractFileObject.exists(AbstractFileObject.java:900)
at org.wso2.carbon.inbound.endpoint.protocol.file.FilePollingConsumer.poll(FilePollingConsumer.java:187)
at org.wso2.carbon.inbound.endpoint.protocol.file.FilePollingConsumer.execute(FilePollingConsumer.java:151)
at org.wso2.carbon.inbound.endpoint.protocol.file.FileTask.taskExecute(FileTask.java:45)
at org.wso2.carbon.inbound.endpoint.common.InboundTask.execute(InboundTask.java:43)
at org.wso2.micro.integrator.mediation.ntask.NTaskAdapter.execute(NTaskAdapter.java:105)
at org.wso2.micro.integrator.ntask.core.impl.TaskQuartzJobAdapter.execute(TaskQuartzJobAdapter.java:63)
at org.quartz.core.JobRunShell.run(JobRunShell.java:202)
at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515)
at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
at java.base/java.lang.Thread.run(Thread.java:829)
Caused by: org.apache.commons.vfs2.FileSystemException: Unknown message with code "Could not get information for file: IN".
at org.apache.commons.vfs2.provider.smb2.Smb2ClientWrapper.getFileInfo(Smb2ClientWrapper.java:144)
at org.apache.commons.vfs2.provider.smb2.Smb2FileObject.getFileInfo(Smb2FileObject.java:112)
at org.apache.commons.vfs2.provider.smb2.Smb2FileObject.doGetType(Smb2FileObject.java:88)
at org.apache.commons.vfs2.provider.AbstractFileObject.getType(AbstractFileObject.java:1296)
... 13 more
[2022-12-21 12:32:00,945] ERROR {FilePollingConsumer} - Error checking for existence and readability : smb2://ayeshd:***@192.168.102.6/test/IN org.apache.commons.vfs2.FileSystemException: Could not determine the type of file "smb2://ayeshd:Pwss5$Tp%40r5g@192.168.102.6/test/IN".
at org.apache.commons.vfs2.provider.AbstractFileObject.getType(AbstractFileObject.java:1302)
at org.apache.commons.vfs2.provider.AbstractFileObject.exists(AbstractFileObject.java:900)
at org.wso2.carbon.inbound.endpoint.protocol.file.FilePollingConsumer.poll(FilePollingConsumer.java:187)
at org.wso2.carbon.inbound.endpoint.protocol.file.FilePollingConsumer.execute(FilePollingConsumer.java:151)
at org.wso2.carbon.inbound.endpoint.protocol.file.FileTask.taskExecute(FileTask.java:45)
at org.wso2.carbon.inbound.endpoint.common.InboundTask.execute(InboundTask.java:43)
at org.wso2.micro.integrator.mediation.ntask.NTaskAdapter.execute(NTaskAdapter.java:105)
at org.wso2.micro.integrator.ntask.core.impl.TaskQuartzJobAdapter.execute(TaskQuartzJobAdapter.java:63)
at org.quartz.core.JobRunShell.run(JobRunShell.java:202)
at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515)
at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
at java.base/java.lang.Thread.run(Thread.java:829)
Caused by: org.apache.commons.vfs2.FileSystemException: Unknown message with code "Could not get information for file: IN".
at org.apache.commons.vfs2.provider.smb2.Smb2ClientWrapper.getFileInfo(Smb2ClientWrapper.java:144)
at org.apache.commons.vfs2.provider.smb2.Smb2FileObject.getFileInfo(Smb2FileObject.java:112)
at org.apache.commons.vfs2.provider.smb2.Smb2FileObject.doGetType(Smb2FileObject.java:88)
at org.apache.commons.vfs2.provider.AbstractFileObject.getType(AbstractFileObject.java:1296)
... 13 more
```
- We can see the same behavior after restarting the file server.
### Affected Component
MI
### Version
4.1.0
### Environment Details (with versions)
_No response_
### Relevant Log Output
_No response_
### Related Issues
_No response_
### Suggested Labels
_No response_
|
non_process
|
inbound endpoint stops working after idle time description inbound endpoint stops working after minutes when setting the polling interval to a value of more than minutes steps to reproduce get an updated mi server we tested the scenario in the latest update level deploy an inbound endpoint to poll files from an smb location after one poll wait for minutes you ll observe the below errors continuously error promise woke to com hierynomus smbj common smbruntimeexception com hierynomus protocol transport transportexception java net socketexception connection reset at com hierynomus smbj common smbruntimeexception wrap smbruntimeexception java at com hierynomus smbj common smbruntimeexception wrap smbruntimeexception java at com hierynomus protocol commons concurrent promise delivererror promise java at com hierynomus smbj connection outstandingrequests handleerror outstandingrequests java at com hierynomus smbj connection connection handleerror connection java at com hierynomus smbj transport packetreader run packetreader java at java base java lang thread run thread java caused by com hierynomus protocol transport transportexception java net socketexception connection reset at com hierynomus smbj transport tcp direct directtcppacketreader doread directtcppacketreader java at com hierynomus smbj transport packetreader readpacket packetreader java at com hierynomus smbj transport packetreader run packetreader java more caused by java net socketexception connection reset at java base java net socketinputstream read socketinputstream java at java base java net socketinputstream read socketinputstream java at com hierynomus smbj transport tcp direct directtcppacketreader readfully directtcppacketreader java at com hierynomus smbj transport tcp direct directtcppacketreader readtcpheader directtcppacketreader java at com hierynomus smbj transport tcp direct directtcppacketreader doread directtcppacketreader java more error session caught exception while closing treeconnect 
with id com hierynomus protocol transport transportexception java net socketexception broken pipe write failed at com hierynomus smbj transport tcp direct directtcptransport write directtcptransport java at com hierynomus smbj connection connection send connection java at com hierynomus smbj session session send session java at com hierynomus smbj share treeconnect close treeconnect java at com hierynomus smbj share share close share java at com hierynomus smbj session session logoff session java at com hierynomus smbj session session close session java at com hierynomus smbj connection connection close connection java at com hierynomus smbj connection connection close connection java at com hierynomus smbj connection connection handleerror connection java at com hierynomus smbj transport packetreader run packetreader java at java base java lang thread run thread java caused by java net socketexception broken pipe write failed at java base java net socketoutputstream native method at java base java net socketoutputstream socketwrite socketoutputstream java at java base java net socketoutputstream write socketoutputstream java at java base java io bufferedoutputstream flushbuffer bufferedoutputstream java at java base java io bufferedoutputstream flush bufferedoutputstream java at com hierynomus smbj transport tcp direct directtcptransport write directtcptransport java more error session caught exception while closing treeconnect with id com hierynomus protocol transport transportexception java net socketexception broken pipe write failed at com hierynomus smbj transport tcp direct directtcptransport write directtcptransport java at com hierynomus smbj connection connection send connection java at com hierynomus smbj session session send session java at com hierynomus smbj share treeconnect close treeconnect java at com hierynomus smbj share share close share java at com hierynomus smbj session session logoff session java at com hierynomus smbj session session close 
session java at com hierynomus smbj connection connection close connection java at com hierynomus smbj connection connection close connection java at com hierynomus smbj connection connection handleerror connection java at com hierynomus smbj transport packetreader run packetreader java at java base java lang thread run thread java caused by java net socketexception broken pipe write failed at java base java net socketoutputstream native method at java base java net socketoutputstream socketwrite socketoutputstream java at java base java net socketoutputstream write socketoutputstream java at java base java io bufferedoutputstream flushbuffer bufferedoutputstream java at java base java io bufferedoutputstream flush bufferedoutputstream java at com hierynomus smbj transport tcp direct directtcptransport write directtcptransport java more error filepollingconsumer error checking for existence and readability ayeshd test in org apache commons filesystemexception could not determine the type of file ayeshd tp test in at org apache commons provider abstractfileobject gettype abstractfileobject java at org apache commons provider abstractfileobject exists abstractfileobject java at org carbon inbound endpoint protocol file filepollingconsumer poll filepollingconsumer java at org carbon inbound endpoint protocol file filepollingconsumer execute filepollingconsumer java at org carbon inbound endpoint protocol file filetask taskexecute filetask java at org carbon inbound endpoint common inboundtask execute inboundtask java at org micro integrator mediation ntask ntaskadapter execute ntaskadapter java at org micro integrator ntask core impl taskquartzjobadapter execute taskquartzjobadapter java at org quartz core jobrunshell run jobrunshell java at java base java util concurrent executors runnableadapter call executors java at java base java util concurrent futuretask run futuretask java at java base java util concurrent threadpoolexecutor runworker threadpoolexecutor java 
at java base java util concurrent threadpoolexecutor worker run threadpoolexecutor java at java base java lang thread run thread java caused by org apache commons filesystemexception unknown message with code could not get information for file in at org apache commons provider getfileinfo java at org apache commons provider getfileinfo java at org apache commons provider dogettype java at org apache commons provider abstractfileobject gettype abstractfileobject java more error filepollingconsumer error checking for existence and readability ayeshd test in org apache commons filesystemexception could not determine the type of file ayeshd tp test in at org apache commons provider abstractfileobject gettype abstractfileobject java at org apache commons provider abstractfileobject exists abstractfileobject java at org carbon inbound endpoint protocol file filepollingconsumer poll filepollingconsumer java at org carbon inbound endpoint protocol file filepollingconsumer execute filepollingconsumer java at org carbon inbound endpoint protocol file filetask taskexecute filetask java at org carbon inbound endpoint common inboundtask execute inboundtask java at org micro integrator mediation ntask ntaskadapter execute ntaskadapter java at org micro integrator ntask core impl taskquartzjobadapter execute taskquartzjobadapter java at org quartz core jobrunshell run jobrunshell java at java base java util concurrent executors runnableadapter call executors java at java base java util concurrent futuretask run futuretask java at java base java util concurrent threadpoolexecutor runworker threadpoolexecutor java at java base java util concurrent threadpoolexecutor worker run threadpoolexecutor java at java base java lang thread run thread java caused by org apache commons filesystemexception unknown message with code could not get information for file in at org apache commons provider getfileinfo java at org apache commons provider getfileinfo java at org apache commons provider 
dogettype java at org apache commons provider abstractfileobject gettype abstractfileobject java more we can see the same behavior after restarting the files server affected component mi version environment details with versions no response relevant log output no response related issues no response suggested labels no response
| 0
|
239,394
| 26,223,445,384
|
IssuesEvent
|
2023-01-04 16:33:55
|
NS-Mend/Java-Demo
|
https://api.github.com/repos/NS-Mend/Java-Demo
|
closed
|
CVE-2019-17571 (High) detected in log4j-1.2.13.jar - autoclosed
|
security vulnerability
|
## CVE-2019-17571 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>log4j-1.2.13.jar</b></p></summary>
<p>Log4j</p>
<p>Library home page: <a href="http://logging.apache.org/log4j/">http://logging.apache.org/log4j/</a></p>
<p>Path to dependency file: /pom.xml</p>
<p>Path to vulnerable library: /home/wss-scanner/.m2/repository/log4j/log4j/1.2.13/log4j-1.2.13.jar</p>
<p>
Dependency Hierarchy:
- slf4j-log4j12-1.5.0.jar (Root Library)
- :x: **log4j-1.2.13.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/NS-Mend/Java-Demo/commit/7029f3960bcddacd18c3a708c2d968d98d8a978f">7029f3960bcddacd18c3a708c2d968d98d8a978f</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Included in Log4j 1.2 is a SocketServer class that is vulnerable to deserialization of untrusted data which can be exploited to remotely execute arbitrary code when combined with a deserialization gadget when listening to untrusted network traffic for log data. This affects Log4j versions up to 1.2 up to 1.2.17.
<p>Publish Date: 2019-12-20
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2019-17571>CVE-2019-17571</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>9.8</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://lists.apache.org/thread.html/eea03d504b36e8f870e8321d908e1def1addda16adda04327fe7c125%40%3Cdev.logging.apache.org%3E">https://lists.apache.org/thread.html/eea03d504b36e8f870e8321d908e1def1addda16adda04327fe7c125%40%3Cdev.logging.apache.org%3E</a></p>
<p>Release Date: 2019-12-20</p>
<p>Fix Resolution: log4j-manual - 1.2.17-16;log4j-javadoc - 1.2.17-16;log4j - 1.2.17-16,1.2.17-16</p>
</p>
</details>
<p></p>
|
True
|
CVE-2019-17571 (High) detected in log4j-1.2.13.jar - autoclosed - ## CVE-2019-17571 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>log4j-1.2.13.jar</b></p></summary>
<p>Log4j</p>
<p>Library home page: <a href="http://logging.apache.org/log4j/">http://logging.apache.org/log4j/</a></p>
<p>Path to dependency file: /pom.xml</p>
<p>Path to vulnerable library: /home/wss-scanner/.m2/repository/log4j/log4j/1.2.13/log4j-1.2.13.jar</p>
<p>
Dependency Hierarchy:
- slf4j-log4j12-1.5.0.jar (Root Library)
- :x: **log4j-1.2.13.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/NS-Mend/Java-Demo/commit/7029f3960bcddacd18c3a708c2d968d98d8a978f">7029f3960bcddacd18c3a708c2d968d98d8a978f</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Included in Log4j 1.2 is a SocketServer class that is vulnerable to deserialization of untrusted data which can be exploited to remotely execute arbitrary code when combined with a deserialization gadget when listening to untrusted network traffic for log data. This affects Log4j versions up to 1.2 up to 1.2.17.
<p>Publish Date: 2019-12-20
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2019-17571>CVE-2019-17571</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>9.8</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://lists.apache.org/thread.html/eea03d504b36e8f870e8321d908e1def1addda16adda04327fe7c125%40%3Cdev.logging.apache.org%3E">https://lists.apache.org/thread.html/eea03d504b36e8f870e8321d908e1def1addda16adda04327fe7c125%40%3Cdev.logging.apache.org%3E</a></p>
<p>Release Date: 2019-12-20</p>
<p>Fix Resolution: log4j-manual - 1.2.17-16;log4j-javadoc - 1.2.17-16;log4j - 1.2.17-16,1.2.17-16</p>
</p>
</details>
<p></p>
|
non_process
|
cve high detected in jar autoclosed cve high severity vulnerability vulnerable library jar library home page a href path to dependency file pom xml path to vulnerable library home wss scanner repository jar dependency hierarchy jar root library x jar vulnerable library found in head commit a href found in base branch master vulnerability details included in is a socketserver class that is vulnerable to deserialization of untrusted data which can be exploited to remotely execute arbitrary code when combined with a deserialization gadget when listening to untrusted network traffic for log data this affects versions up to up to publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution manual javadoc
| 0
|
16,679
| 21,781,964,371
|
IssuesEvent
|
2022-05-13 20:03:09
|
darktable-org/darktable
|
https://api.github.com/repos/darktable-org/darktable
|
closed
|
[FR] Lens shading map (DNG GainMap) support
|
feature: enhancement scope: camera support scope: image processing
|
Some smartphone RAW DNGs contain a lens shading map, which is like a generalization of vignetting correction. Unlike the existing vignetting correction in lensfun, it can be an arbitrary shape rather than radially symmetric, and a separate map is applied to each raw color channel separately to correct spatially varying color casts. Also, it's generated for each individual raw image rather than being constant for a specific camera+lens module. I'm not sure how widely used this feature is across various phones and camera apps. The RAW DNGs from the Google Camera app on my Pixel 4a have it and the vignetting is pretty noticeable if the correction is not done.
The [Android Camera2 API documentation](https://developer.android.com/reference/android/hardware/camera2/CaptureResult#STATISTICS_LENS_SHADING_CORRECTION_MAP) has a detailed description of how the correction works. The [Adobe DNG spec](https://wwwimages.adobe.com/content/dam/Adobe/en/products/photoshop/pdfs/dng_spec_1.5.0.0.pdf) describes how the lens shading map is encoded as GainMap opcodes within the OpcodeList2 exif tag. There is a [working implementation in ART](https://bitbucket.org/agriggio/art/src/master/rtengine/gainmap.cc) - it is enabled by the "Flat-Field" module in the Raw category when "Embedded in metadata" is checked.
I have been trying to figure out how this could be implemented in Darktable. According to the DNG spec, as a stage opcode (in OpcodeList2) the gain map should be applied to linear raw data after black level subtraction but before demosaicing. For the typical case of a RGGB Bayer sensor where the GainMap for both of the G channels is identical (like from Pixel 4a) it would probably end up with the same result if the GainMaps were applied to the demosaiced RGB image prior to input color profile, but it probably makes more sense to apply it to the raw data according to the spec.
One possibility is to do the correction within RawSpeed. It already has the ability to parse DNG opcodes (other than GainMap), but currently this is only done for lossy DNGs. It could be modified to also apply black level subtraction and GainMaps to RAW DNGs that have the gain map, and then it would just output 0 for the black levels so that rawprepare does not perform any further black level subtraction. I'm not sure how the UI would work if there was a need to make this correction optional, since it wouldn't be part of any module.
Another possibility is to do the correction within the pipeline. I think this would be more complicated to implement. It could be its own module that comes after rawprepare, or maybe it could be added to rawprepare. Similarly to #7092 it would need to access the additional info from the exif which is not currently stored in dt_image_t or the image database. It could be awkward to add there because it can be quite large (20kb from the Pixel 4a) and there is not an explicit upper bound on the size defined by the file format. Maybe it could be managed similarly to dt_image_t.profile - it's not stored in the structure or the sql database, there is only a pointer to it. The colorin module loads it from the image file, allocates memory for it dynamically, and it's freed when it is removed from the image cache. The module implementing the gainmap could do something similar - if it hasn't been loaded into dt_image_t already, read it out of the exif, allocate memory, and store a pointer to it in dt_image_t.
|
1.0
|
[FR] Lens shading map (DNG GainMap) support - Some smartphone RAW DNGs contain a lens shading map, which is like a generalization of vignetting correction. Unlike the existing vignetting correction in lensfun, it can be an arbitrary shape rather than radially symmetric, and a separate map is applied to each raw color channel separately to correct spatially varying color casts. Also, it's generated for each individual raw image rather than being constant for a specific camera+lens module. I'm not sure how widely used this feature is across various phones and camera apps. The RAW DNGs from the Google Camera app on my Pixel 4a have it and the vignetting is pretty noticeable if the correction is not done.
The [Android Camera2 API documentation](https://developer.android.com/reference/android/hardware/camera2/CaptureResult#STATISTICS_LENS_SHADING_CORRECTION_MAP) has a detailed description of how the correction works. The [Adobe DNG spec](https://wwwimages.adobe.com/content/dam/Adobe/en/products/photoshop/pdfs/dng_spec_1.5.0.0.pdf) describes how the lens shading map is encoded as GainMap opcodes within the OpcodeList2 exif tag. There is a [working implementation in ART](https://bitbucket.org/agriggio/art/src/master/rtengine/gainmap.cc) - it is enabled by the "Flat-Field" module in the Raw category when "Embedded in metadata" is checked.
I have been trying to figure out how this could be implemented in Darktable. According to the DNG spec, as a stage opcode (in OpcodeList2) the gain map should be applied to linear raw data after black level subtraction but before demosaicing. For the typical case of a RGGB Bayer sensor where the GainMap for both of the G channels is identical (like from Pixel 4a) it would probably end up with the same result if the GainMaps were applied to the demosaiced RGB image prior to input color profile, but it probably makes more sense to apply it to the raw data according to the spec.
One possibility is to do the correction within RawSpeed. It already has the ability to parse DNG opcodes (other than GainMap), but currently this is only done for lossy DNGs. It could be modified to also apply black level subtraction and GainMaps to RAW DNGs that have the gain map, and then it would just output 0 for the black levels so that rawprepare does not perform any further black level subtraction. I'm not sure how the UI would work if there was a need to make this correction optional, since it wouldn't be part of any module.
Another possibility is to do the correction within the pipeline. I think this would be more complicated to implement. It could be its own module that comes after rawprepare, or maybe it could be added to rawprepare. Similarly to #7092 it would need to access the additional info from the exif which is not currently stored in dt_image_t or the image database. It could be awkward to add there because it can be quite large (20kb from the Pixel 4a) and there is not an explicit upper bound on the size defined by the file format. Maybe it could be managed similarly to dt_image_t.profile - it's not stored in the structure or the sql database, there is only a pointer to it. The colorin module loads it from the image file, allocates memory for it dynamically, and it's freed when it is removed from the image cache. The module implementing the gainmap could do something similar - if it hasn't been loaded into dt_image_t already, read it out of the exif, allocate memory, and store a pointer to it in dt_image_t.
|
process
|
lens shading map dng gainmap support some smartphone raw dngs contain a lens shading map which is like a generalization of vignetting correction unlike the existing vignetting correction in lensfun it can be an arbitrary shape rather than radially symmetric and a separate map is applied to each raw color channel separately to correct spatially varying color casts also it s generated for each individual raw image rather than being constant for a specific camera lens module i m not sure how widely used this feature is across various phones and camera apps the raw dngs from the google camera app on my pixel have it and the vignetting is pretty noticeable if the correction is not done the has a detailed description of how the correction works the describes how the lens shading map is encoded as gainmap opcodes within the exif tag there is a it is enabled by the flat field module in the raw category when embedded in metadata is checked i have been trying to figure out how this could be implemented in darktable according to the dng spec as a stage opcode in the gain map should be applied to linear raw data after black level subtraction but before demosaicing for the typical case of a rggb bayer sensor where the gainmap for both of the g channels is identical like from pixel it would probably end up with the same result if the gainmaps were applied to the demosaiced rgb image prior to input color profile but it probably makes more sense to apply it to the raw data according to the spec one possibility is to do the correction within rawspeed it already has the ability to parse dng opcodes other than gainmap but currently this is only done for lossy dngs it could be modified to also apply black level subtraction and gainmaps to raw dngs that have the gain map and then it would just output for the black levels so that rawprepare does not perform any further black level subtraction i m not sure how the ui would work if there was a need to make this correction optional since 
it wouldn t be part of any module another possibility is to do the correction within the pipeline i think this would be more complicated to implement it could be its own module that comes after rawprepare or maybe it could be added to rawprepare similarly to it would need to access the additional info from the exif which is not currently stored in dt image t or the image database it could be awkward to add there because it can be quite large from the pixel and there is not an explicit upper bound on the size defined by the file format maybe it could be managed similarly to dt image t profile it s not stored in the structure or the sql database there is only a pointer to it the colorin module loads it from the image file allocates memory for it dynamically and it s freed when it is removed from the image cache the module implementing the gainmap could do something similar if it hasn t been loaded into dt image t already read it out of the exif allocate memory and store a pointer to it in dt image t
| 1
|
9,316
| 12,336,556,421
|
IssuesEvent
|
2020-05-14 13:45:50
|
DevExpress/testcafe-hammerhead
|
https://api.github.com/repos/DevExpress/testcafe-hammerhead
|
closed
|
Empty page being rendered on Testcafe
|
AREA: client FREQUENCY: level 2 SYSTEM: client side processing TYPE: bug
|
<!--
If you have all reproduction steps with a complete sample app, please share as many details as possible in the sections below.
Make sure that you tried using the latest TestCafe version (https://github.com/DevExpress/testcafe/releases), where this behavior might have been already addressed.
Before submitting an issue, please check CONTRIBUTING.md and existing issues in this repository (https://github.com/DevExpress/testcafe/issues) in case a similar issue exists or was already addressed. This may save your time (and ours).
-->
### What is your Test Scenario?
Navigate to the support page and run some UI tests on it
### What is the Current behavior?
Empty page being rendered

### What is the Expected behavior?
<!-- Describe what you expected to happen. -->
Your website URL (or attach your complete example): https://support.okta.com/help/s/
<details>
<summary>Your complete test code (or attach your test files):</summary>
<!-- Paste your test code here: -->
```js
test('test', async t => {
await t.navigateTo('https://support.okta.com/help/s');
});
```
</details>
### Your Environment details:
* testcafe version: 1.2.1
* node.js version: v8.11.1
* command-line arguments: "testcafe chrome test.js"
* browser name and version: Chrome (any browser would do)
* platform and version: Chrome 75.0.3770 / Mac OS X 10.14.5
|
1.0
|
Empty page being rendered on Testcafe - <!--
If you have all reproduction steps with a complete sample app, please share as many details as possible in the sections below.
Make sure that you tried using the latest TestCafe version (https://github.com/DevExpress/testcafe/releases), where this behavior might have been already addressed.
Before submitting an issue, please check CONTRIBUTING.md and existing issues in this repository (https://github.com/DevExpress/testcafe/issues) in case a similar issue exists or was already addressed. This may save your time (and ours).
-->
### What is your Test Scenario?
Navigate to the support page and run some UI tests on it
### What is the Current behavior?
Empty page being rendered

### What is the Expected behavior?
<!-- Describe what you expected to happen. -->
Your website URL (or attach your complete example): https://support.okta.com/help/s/
<details>
<summary>Your complete test code (or attach your test files):</summary>
<!-- Paste your test code here: -->
```js
test('test', async t => {
await t.navigateTo('https://support.okta.com/help/s');
});
```
</details>
### Your Environment details:
* testcafe version: 1.2.1
* node.js version: v8.11.1
* command-line arguments: "testcafe chrome test.js"
* browser name and version: Chrome (any browser would do)
* platform and version: Chrome 75.0.3770 / Mac OS X 10.14.5
|
process
|
empty page being rendered on testcafe if you have all reproduction steps with a complete sample app please share as many details as possible in the sections below make sure that you tried using the latest testcafe version where this behavior might have been already addressed before submitting an issue please check contributing md and existing issues in this repository in case a similar issue exists or was already addressed this may save your time and ours what is your test scenario navigate to the support page and run some ui tests on it what is the current behavior empty page being rendered what is the expected behavior your website url or attach your complete example your complete test code or attach your test files js test test async t await t navigateto your environment details testcafe version node js version command line arguments testcafe chrome test js browser name and version chrome any browser would do platform and version chrome mac os x
| 1
|
13,310
| 15,781,688,743
|
IssuesEvent
|
2021-04-01 11:45:17
|
wekan/wekan
|
https://api.github.com/repos/wekan/wekan
|
closed
|
Snap: Get verified on snapcraft.io
|
Meta:Release-process Targets:Ubuntu-snap
|
If this is official as stated https://snapcraft.io/wekan please move to a Wekan named account and get it verified by snapcraft.
|
1.0
|
Snap: Get verified on snapcraft.io - If this is official as stated https://snapcraft.io/wekan please move to a Wekan named account and get it verified by snapcraft.
|
process
|
snap get verified on snapcraft io if this is official as stated please move to a wekan named account and get it verified by snapcraft
| 1
|
20,110
| 26,649,369,253
|
IssuesEvent
|
2023-01-25 12:33:04
|
open-telemetry/opentelemetry-collector-contrib
|
https://api.github.com/repos/open-telemetry/opentelemetry-collector-contrib
|
closed
|
[processor/servicegraph] Failed to find dimensions for key xxx
|
bug needs triage processor/servicegraph
|
### Component(s)
processor/servicegraph
### What happened?
## Description
I get an error in my app when it sent span to a otel collector which configed a servicegraph processor
the error cause by:
https://github.com/open-telemetry/opentelemetry-collector-contrib/blob/9166107431795fb5a7433ffea78c9ab3ddde208c/processor/servicegraphprocessor/processor.go#L365-L384
a relate bug discuss at https://github.com/open-telemetry/opentelemetry-collector-contrib/pull/15687#discussion_r1008880704
### Collector version
latest
### Environment information
## Environment
OS: cenos 7
running on container
### OpenTelemetry Collector configuration
_No response_
### Log output
```shell
2022/11/11 08:01:18 rpc error: code = Unknown desc = failed to build metrics: failed to find dimensions for key istio-ingressgatewayghippo-keycloakx
2022/11/11 08:01:23 rpc error: code = Unknown desc = failed to build metrics: failed to find dimensions for key dsp-controlplane-appserverdsp-controlplane-backend
2022/11/11 08:01:28 rpc error: code = Unknown desc = failed to build metrics: failed to find dimensions for key mcamel-elasticsearch-apiserverkpanda-proxy-ingress
```
### Additional context
relate bug report
|
1.0
|
[processor/servicegraph] Failed to find dimensions for key xxx - ### Component(s)
processor/servicegraph
### What happened?
## Description
I get an error in my app when it sent span to a otel collector which configed a servicegraph processor
the error cause by:
https://github.com/open-telemetry/opentelemetry-collector-contrib/blob/9166107431795fb5a7433ffea78c9ab3ddde208c/processor/servicegraphprocessor/processor.go#L365-L384
a relate bug discuss at https://github.com/open-telemetry/opentelemetry-collector-contrib/pull/15687#discussion_r1008880704
### Collector version
latest
### Environment information
## Environment
OS: cenos 7
running on container
### OpenTelemetry Collector configuration
_No response_
### Log output
```shell
2022/11/11 08:01:18 rpc error: code = Unknown desc = failed to build metrics: failed to find dimensions for key istio-ingressgatewayghippo-keycloakx
2022/11/11 08:01:23 rpc error: code = Unknown desc = failed to build metrics: failed to find dimensions for key dsp-controlplane-appserverdsp-controlplane-backend
2022/11/11 08:01:28 rpc error: code = Unknown desc = failed to build metrics: failed to find dimensions for key mcamel-elasticsearch-apiserverkpanda-proxy-ingress
```
### Additional context
relate bug report
|
process
|
failed to find dimensions for key xxx component s processor servicegraph what happened description i get an error in my app when it sent span to a otel collector which configed a servicegraph processor the error cause by a relate bug discuss at collector version latest environment information environment os cenos running on container opentelemetry collector configuration no response log output shell rpc error code unknown desc failed to build metrics failed to find dimensions for key istio ingressgatewayghippo keycloakx rpc error code unknown desc failed to build metrics failed to find dimensions for key dsp controlplane appserverdsp controlplane backend rpc error code unknown desc failed to build metrics failed to find dimensions for key mcamel elasticsearch apiserverkpanda proxy ingress additional context relate bug report
| 1
|
45,477
| 5,717,606,494
|
IssuesEvent
|
2017-04-19 17:37:56
|
GoogleCloudPlatform/google-cloud-python
|
https://api.github.com/repos/GoogleCloudPlatform/google-cloud-python
|
closed
|
Only run tests for modules affected by a PR
|
testing
|
Inspired by https://github.com/GoogleCloudPlatform/google-cloud-node/issues/1579
This may require building a directed graph (based on direct imports) and traversing the graph. We could use `networkx` for this, or maybe `pylint` already has tooling for dependency graphs.
|
1.0
|
Only run tests for modules affected by a PR - Inspired by https://github.com/GoogleCloudPlatform/google-cloud-node/issues/1579
This may require building a directed graph (based on direct imports) and traversing the graph. We could use `networkx` for this, or maybe `pylint` already has tooling for dependency graphs.
|
non_process
|
only run tests for modules affected by a pr inspired by this may require building a directed graph based on direct imports and traversing the graph we could use networkx for this or maybe pylint already has tooling for dependency graphs
| 0
|
9,700
| 12,701,588,132
|
IssuesEvent
|
2020-06-22 18:27:35
|
prisma/prisma
|
https://api.github.com/repos/prisma/prisma
|
reopened
|
Error opening a TLS connection: unknown Cert Authority
|
bug/2-confirmed kind/bug process/candidate topic: ssl
|
## Bug description
When connecting to a [ScaleGrid](https://scalegrid.io/) Postgres database with SSL enabled I get the following error despite having explicitly set `sslaccept=accept_invalid_certs` in the connection string:
```
Error opening a TLS connection: unknown Cert Authority
```
I'm pretty new to a lot of the SSL stuff so i'll walk through how I went about creating the certificates as it might be something I am doing wrong:
### Download root ca certificate from ScaleGrid.
ScaleGrid provides root certificates for the database server but they are **self signed**.
This is kept in the `prisma` folder and linked to in the connection url `sslcert=rootca.cert`
### Create self-signed client certificate
I followed the guide in [this blog post](https://msol.io/blog/tech/create-a-self-signed-ssl-certificate-with-openssl/).
1. Generate 2048-bit RSA private key:
`openssl genrsa -out key.pem 2048`
2. Generate a Certificate Signing Request:
`openssl req -new -sha256 -key key.pem -out csr.csr`
3. Generate a self-signed x509 certificate suitable for use on web servers.
`openssl req -x509 -sha256 -days 365 -key key.pem -in csr.csr -out certificate.pem`
4. Create SSL identity file in PKCS12 as mentioned [here](https://github.com/prisma/prisma/issues/1433#issuecomment-578646902)
`openssl pkcs12 -export -out client-identity.p12 -inkey key.pem -in certificate.pem`
5. Connect to the database:
My connection string looked as follows:
```
postgresql://user:XX@serverAddress:port/dbname?&sslmode=require&sslaccept=accept_invalid_certs&sslidentity=client-identity.p12&sslpassword=XXXX&sslcert=rootca.cert&connection_limit=3"
```
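For reference only (Python stdlib, not Prisma's implementation): the intent of `sslaccept=accept_invalid_certs` — accept a self-signed server certificate — looks like this when spelled out explicitly:

```python
import ssl

def insecure_client_context():
    # Mirrors the intent of sslaccept=accept_invalid_certs: skip hostname
    # checks and CA verification, so a self-signed server cert is accepted.
    # check_hostname must be disabled before dropping verify_mode, or the
    # stdlib raises a ValueError.
    ctx = ssl.create_default_context()
    ctx.check_hostname = False
    ctx.verify_mode = ssl.CERT_NONE
    return ctx
```

This only illustrates what the flag is expected to do on the client side; it says nothing about how the query engine handles it internally.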
### Prisma information
@prisma/client": "2.0.0-beta.7"
|
1.0
|
Error opening a TLS connection: unknown Cert Authority - ## Bug description
When connecting to a [ScaleGrid](https://scalegrid.io/) Postgres database with SSL enabled I get the following error despite having explicitly set `sslaccept=accept_invalid_certs` in the connection string:
```
Error opening a TLS connection: unknown Cert Authority
```
I'm pretty new to a lot of the SSL stuff so i'll walk through how I went about creating the certificates as it might be something I am doing wrong:
### Download root ca certificate from ScaleGrid.
ScaleGrid provides root certificates for the database server but they are **self signed**.
This is kept in the `prisma` folder and linked to in the connection url `sslcert=rootca.cert`
### Create self-signed client certificate
I followed the guide in [this blog post](https://msol.io/blog/tech/create-a-self-signed-ssl-certificate-with-openssl/).
1. Generate 2048-bit RSA private key:
`openssl genrsa -out key.pem 2048`
2. Generate a Certificate Signing Request:
`openssl req -new -sha256 -key key.pem -out csr.csr`
3. Generate a self-signed x509 certificate suitable for use on web servers.
`openssl req -x509 -sha256 -days 365 -key key.pem -in csr.csr -out certificate.pem`
4. Create SSL identity file in PKCS12 as mentioned [here](https://github.com/prisma/prisma/issues/1433#issuecomment-578646902)
`openssl pkcs12 -export -out client-identity.p12 -inkey key.pem -in certificate.pem`
5. Connect to the database:
My connection string looked as follows:
```
postgresql://user:XX@serverAddress:port/dbname?&sslmode=require&sslaccept=accept_invalid_certs&sslidentity=client-identity.p12&sslpassword=XXXX&sslcert=rootca.cert&connection_limit=3"
```
### Prisma information
@prisma/client": "2.0.0-beta.7"
|
process
|
error opening a tls connection unknown cert authority bug description when connecting to a postgres database with ssl enabled i get the following error despite having explicitly set sslaccept accept invalid certs in the connection string error opening a tls connection unknown cert authority i m pretty new to a lot of the ssl stuff so i ll walk through how i went about creating the certificates as it might be something i am doing wrong download root ca certificate from scalegrid scalegrid provides root certificates for the database server but they are self signed this is kept in the prisma folder and linked to in the connection url sslcert rootca cert create self signed client certificate i followed the guide in generate bit rsa private key openssl genrsa out key pem generate a certificate signing request openssl req new key key pem out csr csr generate a self signed certificate suitable for use on web servers openssl req days key key pem in csr csr out certificate pem create ssl identity file in as mentioned openssl export out client identity inkey key pem in certificate pem connect to the database my connection string looked as follows postgresql user xx serveraddress port dbname sslmode require sslaccept accept invalid certs sslidentity client identity sslpassword xxxx sslcert rootca cert connection limit prisma information prisma client beta
| 1
|
266,344
| 20,147,299,042
|
IssuesEvent
|
2022-02-09 08:57:15
|
kubesphere/website
|
https://api.github.com/repos/kubesphere/website
|
closed
|
The Service Accounts doc is missing a chinese version
|
good first issue help wanted kind/documentation
|
https://kubesphere.io/zh/docs/project-user-guide/configuration/serviceaccounts/
The `Service Accounts` doc needs to be translated.
|
1.0
|
The Service Accounts doc is missing a chinese version - https://kubesphere.io/zh/docs/project-user-guide/configuration/serviceaccounts/
The `Service Accounts` doc needs to be translated.
|
non_process
|
the service accounts doc is missing a chinese version the doc of service accounts need to be translated
| 0
|
7,337
| 10,473,557,053
|
IssuesEvent
|
2019-09-23 12:53:03
|
nodejs/node
|
https://api.github.com/repos/nodejs/node
|
closed
|
process.kill(pid, signal) does not recognize common signal strings
|
process question
|
I am getting this error:
```
Uncaught exception: TypeError [ERR_UNKNOWN_SIGNAL]: Unknown signal: INT
```
and it was used like:
```js
process.kill(num, 'INT');
```
this works tho:
```js
process.kill(num, 'SIGINT');
```
it should probably work with the short signal names?
|
1.0
|
process.kill(pid, signal) does not recognize common signal strings - I am getting this error:
```
Uncaught exception: TypeError [ERR_UNKNOWN_SIGNAL]: Unknown signal: INT
```
and it was used like:
```js
process.kill(num, 'INT');
```
this works tho:
```js
process.kill(num, 'SIGINT');
```
it should probably work with the short signal names?
|
process
|
process kill pid signal does not recognize common signal strings i am getting this error uncaught exception typeerror unknown signal int and it was used like js process kill num int this works tho js process kill num sigint it should probably work with the short signal names
| 1
|
42,067
| 9,126,344,826
|
IssuesEvent
|
2019-02-24 20:50:40
|
C0ZEN/ngx-store-test
|
https://api.github.com/repos/C0ZEN/ngx-store-test
|
closed
|
Fix "identical-code" issue in src/app/views/todos/todos.component.ts
|
codeclimate
|
Identical blocks of code found in 2 locations. Consider refactoring.
https://codeclimate.com/github/C0ZEN/ngx-store-test/src/app/views/todos/todos.component.ts#issue_5c72f6e276cfa600010000ff
|
1.0
|
Fix "identical-code" issue in src/app/views/todos/todos.component.ts - Identical blocks of code found in 2 locations. Consider refactoring.
https://codeclimate.com/github/C0ZEN/ngx-store-test/src/app/views/todos/todos.component.ts#issue_5c72f6e276cfa600010000ff
|
non_process
|
fix identical code issue in src app views todos todos component ts identical blocks of code found in locations consider refactoring
| 0
|
22,357
| 31,048,032,815
|
IssuesEvent
|
2023-08-11 02:59:37
|
dotnet/runtime
|
https://api.github.com/repos/dotnet/runtime
|
closed
|
Run ServiceController.Start(serviceName) throw exception access is denied
|
question area-System.ServiceProcess needs-further-triage
|
### Description
Run ServiceController.Start(serviceName) throw exception access is denied
### Steps to Reproduce
1.Create maui app.
2.Create worker service.
3.Run this code from maui in VS as admin or MSIX
ServiceController service = new ServiceController(servicName);
service.Start();
service.WaitForStatus(ServiceControllerStatus.Running);
4.Exception:
2022-09-13 10:57:12,636 [1] ERROR AutomationClient.Shared.Helpers.ServiceHelper - StartService
System.InvalidOperationException: Cannot open 'GS Automation Client Worker Service' service on computer '.'.
---> System.ComponentModel.Win32Exception (5): Access is denied.
--- End of inner exception stack trace ---
at System.ServiceProcess.ServiceController.GetServiceHandle(Int32 desiredAccess)
at System.ServiceProcess.ServiceController.Start(String[] args)
at System.ServiceProcess.ServiceController.Start()
at AutomationClient.Shared.Helpers.ServiceHelper.StartService(String servicName)
### Link to public reproduction project repository
...
### Version with bug
6.0.486 (current)
### Last version that worked well
Unknown/Other
### Affected platforms
Windows
### Affected platform versions
Window 10
### Did you find any workaround?
No
### Relevant log output
```shell
2022-09-13 10:57:12,636 [1] ERROR AutomationClient.Shared.Helpers.ServiceHelper - StartService
System.InvalidOperationException: Cannot open 'GS Automation Client Worker Service' service on computer '.'.
---> System.ComponentModel.Win32Exception (5): Access is denied.
--- End of inner exception stack trace ---
at System.ServiceProcess.ServiceController.GetServiceHandle(Int32 desiredAccess)
at System.ServiceProcess.ServiceController.Start(String[] args)
at System.ServiceProcess.ServiceController.Start()
at AutomationClient.Shared.Helpers.ServiceHelper.StartService(String servicName)
```
|
1.0
|
Run ServiceController.Start(serviceName) throw exception access is denied - ### Description
Run ServiceController.Start(serviceName) throw exception access is denied
### Steps to Reproduce
1.Create maui app.
2.Create worker service.
3.Run this code from maui in VS as admin or MSIX
ServiceController service = new ServiceController(servicName);
service.Start();
service.WaitForStatus(ServiceControllerStatus.Running);
4.Exception:
2022-09-13 10:57:12,636 [1] ERROR AutomationClient.Shared.Helpers.ServiceHelper - StartService
System.InvalidOperationException: Cannot open 'GS Automation Client Worker Service' service on computer '.'.
---> System.ComponentModel.Win32Exception (5): Access is denied.
--- End of inner exception stack trace ---
at System.ServiceProcess.ServiceController.GetServiceHandle(Int32 desiredAccess)
at System.ServiceProcess.ServiceController.Start(String[] args)
at System.ServiceProcess.ServiceController.Start()
at AutomationClient.Shared.Helpers.ServiceHelper.StartService(String servicName)
### Link to public reproduction project repository
...
### Version with bug
6.0.486 (current)
### Last version that worked well
Unknown/Other
### Affected platforms
Windows
### Affected platform versions
Window 10
### Did you find any workaround?
No
### Relevant log output
```shell
2022-09-13 10:57:12,636 [1] ERROR AutomationClient.Shared.Helpers.ServiceHelper - StartService
System.InvalidOperationException: Cannot open 'GS Automation Client Worker Service' service on computer '.'.
---> System.ComponentModel.Win32Exception (5): Access is denied.
--- End of inner exception stack trace ---
at System.ServiceProcess.ServiceController.GetServiceHandle(Int32 desiredAccess)
at System.ServiceProcess.ServiceController.Start(String[] args)
at System.ServiceProcess.ServiceController.Start()
at AutomationClient.Shared.Helpers.ServiceHelper.StartService(String servicName)
```
|
process
|
run servicecontroller start servicename throw exception access is denied description run servicecontroller start servicename throw exception access is denied steps to reproduce create maui app create worker service run this code from maui in vs as admin or msix servicecontroller service new servicecontroller servicname service start service waitforstatus servicecontrollerstatus running exception error automationclient shared helpers servicehelper startservice system invalidoperationexception cannot open gs automation client worker service service on computer system componentmodel access is denied end of inner exception stack trace at system serviceprocess servicecontroller getservicehandle desiredaccess at system serviceprocess servicecontroller start string args at system serviceprocess servicecontroller start at automationclient shared helpers servicehelper startservice string servicname link to public reproduction project repository version with bug current last version that worked well unknown other affected platforms windows affected platform versions window did you find any workaround no relevant log output shell error automationclient shared helpers servicehelper startservice system invalidoperationexception cannot open gs automation client worker service service on computer system componentmodel access is denied end of inner exception stack trace at system serviceprocess servicecontroller getservicehandle desiredaccess at system serviceprocess servicecontroller start string args at system serviceprocess servicecontroller start at automationclient shared helpers servicehelper startservice string servicname
| 1
|
6,912
| 10,061,740,762
|
IssuesEvent
|
2019-07-22 22:16:45
|
pytorch/pytorch
|
https://api.github.com/repos/pytorch/pytorch
|
closed
|
DataLoader leaking Semaphores.
|
module: dataloader module: multiprocessing
|
Following off of this error report: https://github.com/pytorch/pytorch/issues/11727
We concluded that `multiprocessing.set_start_method('spawn')` needed to be included in `if __name__ == '__main__'` for the error to go away; however, that's no longer the case with `torch.multiprocessing.spawn`.
## 🐛 Bug
```python
/usr/local/Cellar/python/3.7.3/Frameworks/Python.framework/Versions/3.7/lib/python3.7/multiprocessing/semaphore_tracker.py:144: UserWarning: semaphore_tracker: There appear to be 1 leaked semaphores to clean up at shutdown
len(cache))
/usr/local/Cellar/python/3.7.3/Frameworks/Python.framework/Versions/3.7/lib/python3.7/multiprocessing/semaphore_tracker.py:144: UserWarning: semaphore_tracker: There appear to be 1 leaked semaphores to clean up at shutdown
len(cache))
/usr/local/Cellar/python/3.7.3/Frameworks/Python.framework/Versions/3.7/lib/python3.7/multiprocessing/semaphore_tracker.py:144: UserWarning: semaphore_tracker: There appear to be 1 leaked semaphores to clean up at shutdown
len(cache))
/usr/local/Cellar/python/3.7.3/Frameworks/Python.framework/Versions/3.7/lib/python3.7/multiprocessing/semaphore_tracker.py:144: UserWarning: semaphore_tracker: There appear to be 1 leaked semaphores to clean up at shutdown
len(cache))
```
## To Reproduce
```python
from torch import multiprocessing
from torch.utils.data import DataLoader
import torch
mp_lock = multiprocessing.Lock()
def main(device_index):
list(DataLoader([torch.tensor(i) for i in range(10)], num_workers=4))
if __name__ == '__main__':
torch.multiprocessing.spawn(main)
```
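One way to keep the start-method choice local (a sketch of the idea, not a confirmed fix for the warning) is an explicit spawn context instead of the global `set_start_method`:

```python
import multiprocessing as mp

def make_spawn_context():
    # An explicit context keeps the 'spawn' choice local to this code,
    # so it cannot clash with whatever torch.multiprocessing configures
    # globally. Queues and locks created from this context use
    # spawn-compatible semaphores.
    return mp.get_context('spawn')
```

Joining every worker process before interpreter exit is what gives the semaphore tracker a chance to clean up.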
## Environment
- PyTorch Version (e.g., 1.0): 1.1.post2
- OS (e.g., Linux): MacOS
- How you installed PyTorch (`conda`, `pip`, source): pip
- Python version: 3.7.3
|
1.0
|
DataLoader leaking Semaphores. - Following off of this error report: https://github.com/pytorch/pytorch/issues/11727
We concluded that `multiprocessing.set_start_method('spawn')` needed to be included in `if __name__ == '__main__'` for the error to go away; however, that's no longer the case with `torch.multiprocessing.spawn`.
## 🐛 Bug
```python
/usr/local/Cellar/python/3.7.3/Frameworks/Python.framework/Versions/3.7/lib/python3.7/multiprocessing/semaphore_tracker.py:144: UserWarning: semaphore_tracker: There appear to be 1 leaked semaphores to clean up at shutdown
len(cache))
/usr/local/Cellar/python/3.7.3/Frameworks/Python.framework/Versions/3.7/lib/python3.7/multiprocessing/semaphore_tracker.py:144: UserWarning: semaphore_tracker: There appear to be 1 leaked semaphores to clean up at shutdown
len(cache))
/usr/local/Cellar/python/3.7.3/Frameworks/Python.framework/Versions/3.7/lib/python3.7/multiprocessing/semaphore_tracker.py:144: UserWarning: semaphore_tracker: There appear to be 1 leaked semaphores to clean up at shutdown
len(cache))
/usr/local/Cellar/python/3.7.3/Frameworks/Python.framework/Versions/3.7/lib/python3.7/multiprocessing/semaphore_tracker.py:144: UserWarning: semaphore_tracker: There appear to be 1 leaked semaphores to clean up at shutdown
len(cache))
```
## To Reproduce
```python
from torch import multiprocessing
from torch.utils.data import DataLoader
import torch
mp_lock = multiprocessing.Lock()
def main(device_index):
list(DataLoader([torch.tensor(i) for i in range(10)], num_workers=4))
if __name__ == '__main__':
torch.multiprocessing.spawn(main)
```
## Environment
- PyTorch Version (e.g., 1.0): 1.1.post2
- OS (e.g., Linux): MacOS
- How you installed PyTorch (`conda`, `pip`, source): pip
- Python version: 3.7.3
|
process
|
dataloader leaking semaphores following off of this error report we concluded that multiprocessing set start method spawn needed to be included in if name main for the error to go away however that s no longer the case with torch multiprocessing spawn 🐛 bug python usr local cellar python frameworks python framework versions lib multiprocessing semaphore tracker py userwarning semaphore tracker there appear to be leaked semaphores to clean up at shutdown len cache usr local cellar python frameworks python framework versions lib multiprocessing semaphore tracker py userwarning semaphore tracker there appear to be leaked semaphores to clean up at shutdown len cache usr local cellar python frameworks python framework versions lib multiprocessing semaphore tracker py userwarning semaphore tracker there appear to be leaked semaphores to clean up at shutdown len cache usr local cellar python frameworks python framework versions lib multiprocessing semaphore tracker py userwarning semaphore tracker there appear to be leaked semaphores to clean up at shutdown len cache to reproduce python from torch import multiprocessing from torch utils data import dataloader import torch mp lock multiprocessing lock def main device index list dataloader num workers if name main torch multiprocessing spawn main environment pytorch version e g os e g linux macos how you installed pytorch conda pip source pip python version
| 1
|
5,840
| 8,666,748,477
|
IssuesEvent
|
2018-11-29 05:52:44
|
nodejs/node
|
https://api.github.com/repos/nodejs/node
|
closed
|
Observe data emitted through stdout and stderr
|
feature request process stalled
|
Currently without [hacking of `process.stdout.write`](https://github.com/sindresorhus/hook-std/blob/master/index.js#L10-L20) there's no way to observe what data do process logs to console, and even through hacking [100% coverage seems not possible](https://github.com/sindresorhus/hook-std/issues/9)
It would be great if such functionality is easily accessible in Node.js without a need for tweaking node internals
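For comparison only (Python, not Node): the stdlib exposes exactly this kind of first-class hook via `contextlib.redirect_stdout`:

```python
import contextlib
import io

def capture_stdout(fn, *args, **kwargs):
    # Observe everything the callable writes to sys.stdout, without
    # monkey-patching any write method.
    buf = io.StringIO()
    with contextlib.redirect_stdout(buf):
        fn(*args, **kwargs)
    return buf.getvalue()
```

It shares the limitation discussed above: writes that bypass the high-level stream (e.g. from native code or child processes) are not captured.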
|
1.0
|
Observe data emitted through stdout and stderr - Currently without [hacking of `process.stdout.write`](https://github.com/sindresorhus/hook-std/blob/master/index.js#L10-L20) there's no way to observe what data do process logs to console, and even through hacking [100% coverage seems not possible](https://github.com/sindresorhus/hook-std/issues/9)
It would be great if such functionality is easily accessible in Node.js without a need for tweaking node internals
|
process
|
observe data emitted through stdout and stderr currently without there s no way to observe what data do process logs to console and even through hacking it would be great if such functionality is easily accessible in node js without a need for tweaking node internals
| 1
|
32,469
| 13,851,350,809
|
IssuesEvent
|
2020-10-15 03:44:47
|
rancher/dashboard
|
https://api.github.com/repos/rancher/dashboard
|
opened
|
Spacing around port rules is too skinny
|
area/service kind/bug
|
Steps:
1. Go to Services
2. Click on Create & select ClusterIP
3. Add Port Rules
Results: The spacing around the fields for port rules is too skinny. From Lauren in a previous design bug " Spacing around boxes in the port rules is too skinny" --- needs a dev to convert to grid not table.
[Screen Shot 2020-10-14 at 8 39 01 PM]
|
1.0
|
Spacing around port rules is too skinny - Steps:
1. Go to Services
2. Click on Create & select ClusterIP
3. Add Port Rules
Results: The spacing around the fields for port rules is too skinny. From Lauren in a previous design bug " Spacing around boxes in the port rules is too skinny" --- needs a dev to convert to grid not table.
[Screen Shot 2020-10-14 at 8 39 01 PM]
|
non_process
|
spacing around port rules is too skinny steps go to services click on create select clusterip add port rules results the spacing around the fields for port rules is too skinny from lauren in a previous design bug spacing around boxes in the port rules is too skinny needs a dev to convert to grid not table
| 0
|
220,331
| 16,938,050,118
|
IssuesEvent
|
2021-06-27 00:22:21
|
monome/crow
|
https://api.github.com/repos/monome/crow
|
closed
|
Docs don't describe how to use input scale etc
|
documentation
|
The v2.0 input features only describe how to activate them, but not what the event handler should look like. really need to make some examples.
|
1.0
|
Docs don't describe how to use input scale etc - The v2.0 input features only describe how to activate them, but not what the event handler should look like. really need to make some examples.
|
non_process
|
docs don t describe how to use input scale etc the input features only describe how to activate them but not what the event handler should look like really need to make some examples
| 0
|
15,870
| 20,036,587,590
|
IssuesEvent
|
2022-02-02 12:34:40
|
syncfusion/ej2-angular-ui-components
|
https://api.github.com/repos/syncfusion/ej2-angular-ui-components
|
closed
|
Context Menu Elements are not removed after the component is destroyed
|
word-processor
|
Dear Team,
I found out that the context menu elements and some other elements of the angular document editor container were not destroyed or removed even after the component containing the editor component had already been destroyed.

The first image describes the context menu when I open the component containing the editor at the first time

The second image describes the context menu when I open the component at the second time.
The context menu still appears even though the component is destroyed.
Thank you.
|
1.0
|
Context Menu Elements are not removed after the component is destroyed - Dear Team,
I found out that the context menu elements and some other elements of the angular document editor container were not destroyed or removed even after the component containing the editor component had already been destroyed.

The first image describes the context menu when I open the component containing the editor at the first time

The second image describes the context menu when I open the component at the second time.
The context menu still appears even though the component is destroyed.
Thank you.
|
process
|
context menu elements are not removed after the component is destroyed dear team i found out that the context menu elements and some other elements of the angular document editor container were not destroyed or removed even after the component containing the editor component had already been destroyed the first image describes the context menu when i open the component containing the editor at the first time the second image describes the context menu when i open the component at the second time the context menu still appears even though the component is destroyed thank you
| 1
|
22,167
| 30,717,174,991
|
IssuesEvent
|
2023-07-27 13:44:25
|
prusa3d/Prusa-Firmware
|
https://api.github.com/repos/prusa3d/Prusa-Firmware
|
closed
|
M915 Stealth Mode Code not Recognized when sent from octoprint terminal
|
testing processing
|
When attempting to place printer into Silent Mode from terminal with code M915 it states it is an unknown code
|
1.0
|
M915 Stealth Mode Code not Recognized when sent from octoprint terminal - When attempting to place printer into Silent Mode from terminal with code M915 it states it is an unknown code
|
process
|
stealth mode code not recognized when sent from octoprint terminal when attempting to place printer into silent mode from terminal with code it states it is an unknown code
| 1
|
185,778
| 21,843,763,769
|
IssuesEvent
|
2022-05-18 01:07:53
|
coffeehorn/MaxwellBurdick
|
https://api.github.com/repos/coffeehorn/MaxwellBurdick
|
opened
|
CVE-2022-29353 (Medium) detected in graphql-upload-11.0.0.tgz
|
security vulnerability
|
## CVE-2022-29353 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>graphql-upload-11.0.0.tgz</b></p></summary>
<p>Middleware and an Upload scalar to add support for GraphQL multipart requests (file uploads via queries and mutations) to various Node.js GraphQL servers.</p>
<p>Library home page: <a href="https://registry.npmjs.org/graphql-upload/-/graphql-upload-11.0.0.tgz">https://registry.npmjs.org/graphql-upload/-/graphql-upload-11.0.0.tgz</a></p>
<p>Path to dependency file: /package.json</p>
<p>Path to vulnerable library: /node_modules/graphql-upload/package.json</p>
<p>
Dependency Hierarchy:
- gatsby-2.29.1.tgz (Root Library)
- eslint-plugin-graphql-4.0.0.tgz
- graphql-config-3.2.0.tgz
- url-loader-6.7.1.tgz
- :x: **graphql-upload-11.0.0.tgz** (Vulnerable Library)
<p>Found in base branch: <b>main</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
An arbitrary file upload vulnerability in the file upload module of Graphql-upload v13.0.0 allows attackers to execute arbitrary code via a crafted filename.
<p>Publish Date: 2022-05-16
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2022-29353>CVE-2022-29353</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: Required
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://nvd.nist.gov/vuln/detail/CVE-2022-29353">https://nvd.nist.gov/vuln/detail/CVE-2022-29353</a></p>
<p>Release Date: 2022-05-16</p>
<p>Fix Resolution: no_fix</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
True
|
CVE-2022-29353 (Medium) detected in graphql-upload-11.0.0.tgz - ## CVE-2022-29353 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>graphql-upload-11.0.0.tgz</b></p></summary>
<p>Middleware and an Upload scalar to add support for GraphQL multipart requests (file uploads via queries and mutations) to various Node.js GraphQL servers.</p>
<p>Library home page: <a href="https://registry.npmjs.org/graphql-upload/-/graphql-upload-11.0.0.tgz">https://registry.npmjs.org/graphql-upload/-/graphql-upload-11.0.0.tgz</a></p>
<p>Path to dependency file: /package.json</p>
<p>Path to vulnerable library: /node_modules/graphql-upload/package.json</p>
<p>
Dependency Hierarchy:
- gatsby-2.29.1.tgz (Root Library)
- eslint-plugin-graphql-4.0.0.tgz
- graphql-config-3.2.0.tgz
- url-loader-6.7.1.tgz
- :x: **graphql-upload-11.0.0.tgz** (Vulnerable Library)
<p>Found in base branch: <b>main</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
An arbitrary file upload vulnerability in the file upload module of Graphql-upload v13.0.0 allows attackers to execute arbitrary code via a crafted filename.
<p>Publish Date: 2022-05-16
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2022-29353>CVE-2022-29353</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: Required
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://nvd.nist.gov/vuln/detail/CVE-2022-29353">https://nvd.nist.gov/vuln/detail/CVE-2022-29353</a></p>
<p>Release Date: 2022-05-16</p>
<p>Fix Resolution: no_fix</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
non_process
|
cve medium detected in graphql upload tgz cve medium severity vulnerability vulnerable library graphql upload tgz middleware and an upload scalar to add support for graphql multipart requests file uploads via queries and mutations to various node js graphql servers library home page a href path to dependency file package json path to vulnerable library node modules graphql upload package json dependency hierarchy gatsby tgz root library eslint plugin graphql tgz graphql config tgz url loader tgz x graphql upload tgz vulnerable library found in base branch main vulnerability details an arbitrary file upload vulnerability in the file upload module of graphql upload allows attackers to execute arbitrary code via a crafted filename publish date url a href cvss score details base score metrics exploitability metrics attack vector local attack complexity low privileges required none user interaction required scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution no fix step up your open source security game with whitesource
| 0
|
52,082
| 6,565,750,684
|
IssuesEvent
|
2017-09-08 09:36:12
|
graphcool/graphcool
|
https://api.github.com/repos/graphcool/graphcool
|
opened
|
"Add Field" and "Add Relation" buttons are too discrete on schema page
|
area/design kind/feedback
|
<a href="https://github.com/Thebigbignooby"><img src="https://avatars3.githubusercontent.com/u/4172090?v=4" align="left" width="96" height="96" hspace="10"></img></a> **Issue by [Thebigbignooby](https://github.com/Thebigbignooby)**
_Wednesday Apr 19, 2017 at 14:39 GMT_
_Originally opened as https://github.com/graphcool/ui-feedback/issues/10_
----
i literally did not see those 2 buttons

I think the last row in table is where the "add field" button should be
I also expect to be able to define a relation when adding a field
|
1.0
|
"Add Field" and "Add Relation" buttons are too discrete on schema page - <a href="https://github.com/Thebigbignooby"><img src="https://avatars3.githubusercontent.com/u/4172090?v=4" align="left" width="96" height="96" hspace="10"></img></a> **Issue by [Thebigbignooby](https://github.com/Thebigbignooby)**
_Wednesday Apr 19, 2017 at 14:39 GMT_
_Originally opened as https://github.com/graphcool/ui-feedback/issues/10_
----
i literally did not see those 2 buttons

I think the last row in table is where the "add field" button should be
I also expect to be able to define a relation when adding a field
|
non_process
|
add field and add relation buttons are too discrete on schema page issue by wednesday apr at gmt originally opened as i literally did not see those buttons i think the last row in table is where the add field button should be i also expect to be able to define a relation when adding a field
| 0
|
283,897
| 24,569,872,133
|
IssuesEvent
|
2022-10-13 07:47:40
|
Scille/parsec-cloud
|
https://api.github.com/repos/Scille/parsec-cloud
|
opened
|
Flaky test `tests/test_time_provider.py::test_sleep_in_nursery[raw]`
|
bug inconsistent testing python
|
```
=================================== FAILURES ===================================
__________________________ test_sleep_in_nursery[raw] __________________________
[gw1] linux -- Python 3.9.14 /home/runner/.cache/pypoetry/virtualenvs/parsec-cloud-dZ-E5qY_-py3.9/bin/python
trio.MultiError: Cancelled(), Cancelled()
Details of embedded exception 1:
Traceback (most recent call last):
File "/home/runner/.cache/pypoetry/virtualenvs/parsec-cloud-dZ-E5qY_-py3.9/lib/python3.9/site-packages/trio/_timeouts.py", line 106, in fail_at
yield scope
File "/home/runner/work/parsec-cloud/parsec-cloud/tests/test_time_provider.py", line 189, in test_sleep_in_nursery
await wait_for_sleeping_stat(tp, 0)
File "/home/runner/.cache/pypoetry/virtualenvs/parsec-cloud-dZ-E5qY_-py3.9/lib/python3.9/site-packages/trio/_core/_run.py", line 813, in __aexit__
raise combined_error_from_nursery
File "/home/runner/work/parsec-cloud/parsec-cloud/tests/test_time_provider.py", line 187, in test_sleep_in_nursery
await wait_for_sleeping_stat(tp, 2)
File "/home/runner/work/parsec-cloud/parsec-cloud/tests/test_time_provider.py", line 15, in wait_for_sleeping_stat
await trio.sleep(0)
File "/home/runner/.cache/pypoetry/virtualenvs/parsec-cloud-dZ-E5qY_-py3.9/lib/python3.9/site-packages/trio/_timeouts.py", line 74, in sleep
await trio.lowlevel.checkpoint()
File "/home/runner/.cache/pypoetry/virtualenvs/parsec-cloud-dZ-E5qY_-py3.9/lib/python3.9/site-packages/trio/_core/_run.py", line 2353, in checkpoint
await _core.wait_task_rescheduled(lambda _: _core.Abort.SUCCEEDED)
File "/home/runner/.cache/pypoetry/virtualenvs/parsec-cloud-dZ-E5qY_-py3.9/lib/python3.9/site-packages/trio/_core/_traps.py", line 166, in wait_task_rescheduled
return (await _async_yield(WaitTaskRescheduled(abort_func))).unwrap()
File "/home/runner/.cache/pypoetry/virtualenvs/parsec-cloud-dZ-E5qY_-py3.9/lib/python3.9/site-packages/outcome/_impl.py", line 138, in unwrap
raise captured_error
File "/home/runner/.cache/pypoetry/virtualenvs/parsec-cloud-dZ-E5qY_-py3.9/lib/python3.9/site-packages/trio/_core/_run.py", line 1173, in raise_cancel
raise Cancelled._create()
trio.Cancelled: Cancelled
Details of embedded exception 2:
Traceback (most recent call last):
File "/home/runner/.cache/pypoetry/virtualenvs/parsec-cloud-dZ-E5qY_-py3.9/lib/python3.9/site-packages/trio/_timeouts.py", line 106, in fail_at
yield scope
File "/home/runner/work/parsec-cloud/parsec-cloud/tests/test_time_provider.py", line 189, in test_sleep_in_nursery
await wait_for_sleeping_stat(tp, 0)
File "/home/runner/.cache/pypoetry/virtualenvs/parsec-cloud-dZ-E5qY_-py3.9/lib/python3.9/site-packages/trio/_core/_run.py", line 813, in __aexit__
raise combined_error_from_nursery
File "/home/runner/work/parsec-cloud/parsec-cloud/tests/test_time_provider.py", line 187, in test_sleep_in_nursery
await wait_for_sleeping_stat(tp, 2)
File "/home/runner/work/parsec-cloud/parsec-cloud/tests/test_time_provider.py", line 15, in wait_for_sleeping_stat
await trio.sleep(0)
File "/home/runner/.cache/pypoetry/virtualenvs/parsec-cloud-dZ-E5qY_-py3.9/lib/python3.9/site-packages/trio/_timeouts.py", line 74, in sleep
await trio.lowlevel.checkpoint()
File "/home/runner/.cache/pypoetry/virtualenvs/parsec-cloud-dZ-E5qY_-py3.9/lib/python3.9/site-packages/trio/_core/_run.py", line 2353, in checkpoint
await _core.wait_task_rescheduled(lambda _: _core.Abort.SUCCEEDED)
File "/home/runner/.cache/pypoetry/virtualenvs/parsec-cloud-dZ-E5qY_-py3.9/lib/python3.9/site-packages/trio/_core/_traps.py", line 166, in wait_task_rescheduled
return (await _async_yield(WaitTaskRescheduled(abort_func))).unwrap()
File "/home/runner/.cache/pypoetry/virtualenvs/parsec-cloud-dZ-E5qY_-py3.9/lib/python3.9/site-packages/outcome/_impl.py", line 138, in unwrap
raise captured_error
File "/home/runner/.cache/pypoetry/virtualenvs/parsec-cloud-dZ-E5qY_-py3.9/lib/python3.9/site-packages/trio/_core/_run.py", line 1173, in raise_cancel
raise Cancelled._create()
trio.Cancelled: Cancelled
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/runner/.cache/pypoetry/virtualenvs/parsec-cloud-dZ-E5qY_-py3.9/lib/python3.9/site-packages/trio/_timeouts.py", line 106, in fail_at
yield scope
File "/home/runner/work/parsec-cloud/parsec-cloud/tests/test_time_provider.py", line 189, in test_sleep_in_nursery
await wait_for_sleeping_stat(tp, 0)
File "/home/runner/.cache/pypoetry/virtualenvs/parsec-cloud-dZ-E5qY_-py3.9/lib/python3.9/site-packages/trio/_core/_run.py", line 813, in __aexit__
raise combined_error_from_nursery
File "/home/runner/.cache/pypoetry/virtualenvs/parsec-cloud-dZ-E5qY_-py3.9/lib/python3.9/site-packages/trio/_core/_run.py", line 934, in _nested_child_finished
await checkpoint()
File "/home/runner/.cache/pypoetry/virtualenvs/parsec-cloud-dZ-E5qY_-py3.9/lib/python3.9/site-packages/trio/_core/_run.py", line 2353, in checkpoint
await _core.wait_task_rescheduled(lambda _: _core.Abort.SUCCEEDED)
File "/home/runner/.cache/pypoetry/virtualenvs/parsec-cloud-dZ-E5qY_-py3.9/lib/python3.9/site-packages/trio/_core/_traps.py", line 166, in wait_task_rescheduled
return (await _async_yield(WaitTaskRescheduled(abort_func))).unwrap()
File "/home/runner/.cache/pypoetry/virtualenvs/parsec-cloud-dZ-E5qY_-py3.9/lib/python3.9/site-packages/outcome/_impl.py", line 138, in unwrap
raise captured_error
File "/home/runner/.cache/pypoetry/virtualenvs/parsec-cloud-dZ-E5qY_-py3.9/lib/python3.9/site-packages/trio/_core/_run.py", line 1173, in raise_cancel
raise Cancelled._create()
trio.Cancelled: Cancelled
During handling of the above exception, another exception occurred:
value = <trio.Nursery object at 0x7f7073244910>
async def yield_(value=None):
> return await _yield_(value)
../../../.cache/pypoetry/virtualenvs/parsec-cloud-dZ-E5qY_-py3.9/lib/python3.9/site-packages/async_generator/_impl.py:106:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
../../../.cache/pypoetry/virtualenvs/parsec-cloud-dZ-E5qY_-py3.9/lib/python3.9/site-packages/async_generator/_impl.py:99: in _yield_
return (yield _wrap(value))
tests/test_time_provider.py:189: in test_sleep_in_nursery
await wait_for_sleeping_stat(tp, 0)
/opt/hostedtoolcache/Python/3.9.14/x64/lib/python3.9/contextlib.py:137: in __exit__
self.gen.throw(typ, value, traceback)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
deadline = 171241.72590534255
@contextmanager
def fail_at(deadline):
"""Creates a cancel scope with the given deadline, and raises an error if it
is actually cancelled.
This function and :func:`move_on_at` are similar in that both create a
cancel scope with a given absolute deadline, and if the deadline expires
then both will cause :exc:`Cancelled` to be raised within the scope. The
difference is that when the :exc:`Cancelled` exception reaches
:func:`move_on_at`, it's caught and discarded. When it reaches
:func:`fail_at`, then it's caught and :exc:`TooSlowError` is raised in its
place.
Raises:
TooSlowError: if a :exc:`Cancelled` exception is raised in this scope
and caught by the context manager.
"""
with move_on_at(deadline) as scope:
yield scope
if scope.cancelled_caught:
> raise TooSlowError
E trio.TooSlowError
../../../.cache/pypoetry/virtualenvs/parsec-cloud-dZ-E5qY_-py3.9/lib/python3.9/site-packages/trio/_timeouts.py:108: TooSlowError
============================= slowest 10 durations =============================
1.01s call tests/test_time_provider.py::test_sleep_in_nursery[raw]
0.73s call tests/test_cli.py::test_pki_enrollment[mock_parsec_ext]
0.19s call tests/test_cli.py::test_bootstrap_sequester
0.04s setup tests/test_cli.py::test_share_workspace[NONE]
0.03s setup tests/test_cli.py::test_share_workspace[OWNER]
0.02s call tests/test_time_provider.py::test_sleep_with_mock
0.02s call tests/test_cli.py::test_reencrypt_workspace
0.02s setup tests/test_logging.py::test_sentry_structlog_integration
0.02s setup tests/test_logging.py::test_sentry_stdlib_integration
0.02s call tests/test_cli.py::test_share_workspace[OWNER]
=========================== short test summary info ============================
FAILED tests/test_time_provider.py::test_sleep_in_nursery[raw] - trio.TooSlow...
!!!!!!!!!!!!!!!!!!!!!!!!!! stopping after 1 failures !!!!!!!!!!!!!!!!!!!!!!!!!!!
!!!!!!!!!!!! xdist.dsession.Interrupted: stopping after 1 failures !!!!!!!!!!!!!
================== 1 failed, 106 passed, 12 skipped in 17.16s ==================
```
https://github.com/Scille/parsec-cloud/actions/runs/3240517897/jobs/5311238818
|
1.0
|
Flaky test `tests/test_time_provider.py::test_sleep_in_nursery[raw]` - ```
=================================== FAILURES ===================================
__________________________ test_sleep_in_nursery[raw] __________________________
[gw1] linux -- Python 3.9.14 /home/runner/.cache/pypoetry/virtualenvs/parsec-cloud-dZ-E5qY_-py3.9/bin/python
trio.MultiError: Cancelled(), Cancelled()
Details of embedded exception 1:
Traceback (most recent call last):
File "/home/runner/.cache/pypoetry/virtualenvs/parsec-cloud-dZ-E5qY_-py3.9/lib/python3.9/site-packages/trio/_timeouts.py", line 106, in fail_at
yield scope
File "/home/runner/work/parsec-cloud/parsec-cloud/tests/test_time_provider.py", line 189, in test_sleep_in_nursery
await wait_for_sleeping_stat(tp, 0)
File "/home/runner/.cache/pypoetry/virtualenvs/parsec-cloud-dZ-E5qY_-py3.9/lib/python3.9/site-packages/trio/_core/_run.py", line 813, in __aexit__
raise combined_error_from_nursery
File "/home/runner/work/parsec-cloud/parsec-cloud/tests/test_time_provider.py", line 187, in test_sleep_in_nursery
await wait_for_sleeping_stat(tp, 2)
File "/home/runner/work/parsec-cloud/parsec-cloud/tests/test_time_provider.py", line 15, in wait_for_sleeping_stat
await trio.sleep(0)
File "/home/runner/.cache/pypoetry/virtualenvs/parsec-cloud-dZ-E5qY_-py3.9/lib/python3.9/site-packages/trio/_timeouts.py", line 74, in sleep
await trio.lowlevel.checkpoint()
File "/home/runner/.cache/pypoetry/virtualenvs/parsec-cloud-dZ-E5qY_-py3.9/lib/python3.9/site-packages/trio/_core/_run.py", line 2353, in checkpoint
await _core.wait_task_rescheduled(lambda _: _core.Abort.SUCCEEDED)
File "/home/runner/.cache/pypoetry/virtualenvs/parsec-cloud-dZ-E5qY_-py3.9/lib/python3.9/site-packages/trio/_core/_traps.py", line 166, in wait_task_rescheduled
return (await _async_yield(WaitTaskRescheduled(abort_func))).unwrap()
File "/home/runner/.cache/pypoetry/virtualenvs/parsec-cloud-dZ-E5qY_-py3.9/lib/python3.9/site-packages/outcome/_impl.py", line 138, in unwrap
raise captured_error
File "/home/runner/.cache/pypoetry/virtualenvs/parsec-cloud-dZ-E5qY_-py3.9/lib/python3.9/site-packages/trio/_core/_run.py", line 1173, in raise_cancel
raise Cancelled._create()
trio.Cancelled: Cancelled
Details of embedded exception 2:
Traceback (most recent call last):
File "/home/runner/.cache/pypoetry/virtualenvs/parsec-cloud-dZ-E5qY_-py3.9/lib/python3.9/site-packages/trio/_timeouts.py", line 106, in fail_at
yield scope
File "/home/runner/work/parsec-cloud/parsec-cloud/tests/test_time_provider.py", line 189, in test_sleep_in_nursery
await wait_for_sleeping_stat(tp, 0)
File "/home/runner/.cache/pypoetry/virtualenvs/parsec-cloud-dZ-E5qY_-py3.9/lib/python3.9/site-packages/trio/_core/_run.py", line 813, in __aexit__
raise combined_error_from_nursery
File "/home/runner/work/parsec-cloud/parsec-cloud/tests/test_time_provider.py", line 187, in test_sleep_in_nursery
await wait_for_sleeping_stat(tp, 2)
File "/home/runner/work/parsec-cloud/parsec-cloud/tests/test_time_provider.py", line 15, in wait_for_sleeping_stat
await trio.sleep(0)
File "/home/runner/.cache/pypoetry/virtualenvs/parsec-cloud-dZ-E5qY_-py3.9/lib/python3.9/site-packages/trio/_timeouts.py", line 74, in sleep
await trio.lowlevel.checkpoint()
File "/home/runner/.cache/pypoetry/virtualenvs/parsec-cloud-dZ-E5qY_-py3.9/lib/python3.9/site-packages/trio/_core/_run.py", line 2353, in checkpoint
await _core.wait_task_rescheduled(lambda _: _core.Abort.SUCCEEDED)
File "/home/runner/.cache/pypoetry/virtualenvs/parsec-cloud-dZ-E5qY_-py3.9/lib/python3.9/site-packages/trio/_core/_traps.py", line 166, in wait_task_rescheduled
return (await _async_yield(WaitTaskRescheduled(abort_func))).unwrap()
File "/home/runner/.cache/pypoetry/virtualenvs/parsec-cloud-dZ-E5qY_-py3.9/lib/python3.9/site-packages/outcome/_impl.py", line 138, in unwrap
raise captured_error
File "/home/runner/.cache/pypoetry/virtualenvs/parsec-cloud-dZ-E5qY_-py3.9/lib/python3.9/site-packages/trio/_core/_run.py", line 1173, in raise_cancel
raise Cancelled._create()
trio.Cancelled: Cancelled
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/runner/.cache/pypoetry/virtualenvs/parsec-cloud-dZ-E5qY_-py3.9/lib/python3.9/site-packages/trio/_timeouts.py", line 106, in fail_at
yield scope
File "/home/runner/work/parsec-cloud/parsec-cloud/tests/test_time_provider.py", line 189, in test_sleep_in_nursery
await wait_for_sleeping_stat(tp, 0)
File "/home/runner/.cache/pypoetry/virtualenvs/parsec-cloud-dZ-E5qY_-py3.9/lib/python3.9/site-packages/trio/_core/_run.py", line 813, in __aexit__
raise combined_error_from_nursery
File "/home/runner/.cache/pypoetry/virtualenvs/parsec-cloud-dZ-E5qY_-py3.9/lib/python3.9/site-packages/trio/_core/_run.py", line 934, in _nested_child_finished
await checkpoint()
File "/home/runner/.cache/pypoetry/virtualenvs/parsec-cloud-dZ-E5qY_-py3.9/lib/python3.9/site-packages/trio/_core/_run.py", line 2353, in checkpoint
await _core.wait_task_rescheduled(lambda _: _core.Abort.SUCCEEDED)
File "/home/runner/.cache/pypoetry/virtualenvs/parsec-cloud-dZ-E5qY_-py3.9/lib/python3.9/site-packages/trio/_core/_traps.py", line 166, in wait_task_rescheduled
return (await _async_yield(WaitTaskRescheduled(abort_func))).unwrap()
File "/home/runner/.cache/pypoetry/virtualenvs/parsec-cloud-dZ-E5qY_-py3.9/lib/python3.9/site-packages/outcome/_impl.py", line 138, in unwrap
raise captured_error
File "/home/runner/.cache/pypoetry/virtualenvs/parsec-cloud-dZ-E5qY_-py3.9/lib/python3.9/site-packages/trio/_core/_run.py", line 1173, in raise_cancel
raise Cancelled._create()
trio.Cancelled: Cancelled
During handling of the above exception, another exception occurred:
value = <trio.Nursery object at 0x7f7073244910>
async def yield_(value=None):
> return await _yield_(value)
../../../.cache/pypoetry/virtualenvs/parsec-cloud-dZ-E5qY_-py3.9/lib/python3.9/site-packages/async_generator/_impl.py:106:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
../../../.cache/pypoetry/virtualenvs/parsec-cloud-dZ-E5qY_-py3.9/lib/python3.9/site-packages/async_generator/_impl.py:99: in _yield_
return (yield _wrap(value))
tests/test_time_provider.py:189: in test_sleep_in_nursery
await wait_for_sleeping_stat(tp, 0)
/opt/hostedtoolcache/Python/3.9.14/x64/lib/python3.9/contextlib.py:137: in __exit__
self.gen.throw(typ, value, traceback)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
deadline = 171241.72590534255
@contextmanager
def fail_at(deadline):
"""Creates a cancel scope with the given deadline, and raises an error if it
is actually cancelled.
This function and :func:`move_on_at` are similar in that both create a
cancel scope with a given absolute deadline, and if the deadline expires
then both will cause :exc:`Cancelled` to be raised within the scope. The
difference is that when the :exc:`Cancelled` exception reaches
:func:`move_on_at`, it's caught and discarded. When it reaches
:func:`fail_at`, then it's caught and :exc:`TooSlowError` is raised in its
place.
Raises:
TooSlowError: if a :exc:`Cancelled` exception is raised in this scope
and caught by the context manager.
"""
with move_on_at(deadline) as scope:
yield scope
if scope.cancelled_caught:
> raise TooSlowError
E trio.TooSlowError
../../../.cache/pypoetry/virtualenvs/parsec-cloud-dZ-E5qY_-py3.9/lib/python3.9/site-packages/trio/_timeouts.py:108: TooSlowError
============================= slowest 10 durations =============================
1.01s call tests/test_time_provider.py::test_sleep_in_nursery[raw]
0.73s call tests/test_cli.py::test_pki_enrollment[mock_parsec_ext]
0.19s call tests/test_cli.py::test_bootstrap_sequester
0.04s setup tests/test_cli.py::test_share_workspace[NONE]
0.03s setup tests/test_cli.py::test_share_workspace[OWNER]
0.02s call tests/test_time_provider.py::test_sleep_with_mock
0.02s call tests/test_cli.py::test_reencrypt_workspace
0.02s setup tests/test_logging.py::test_sentry_structlog_integration
0.02s setup tests/test_logging.py::test_sentry_stdlib_integration
0.02s call tests/test_cli.py::test_share_workspace[OWNER]
=========================== short test summary info ============================
FAILED tests/test_time_provider.py::test_sleep_in_nursery[raw] - trio.TooSlow...
!!!!!!!!!!!!!!!!!!!!!!!!!! stopping after 1 failures !!!!!!!!!!!!!!!!!!!!!!!!!!!
!!!!!!!!!!!! xdist.dsession.Interrupted: stopping after 1 failures !!!!!!!!!!!!!
================== 1 failed, 106 passed, 12 skipped in 17.16s ==================
```
https://github.com/Scille/parsec-cloud/actions/runs/3240517897/jobs/5311238818
|
non_process
|
flaky test tests test time provider py test sleep in nursery failures test sleep in nursery linux python home runner cache pypoetry virtualenvs parsec cloud dz bin python trio multierror cancelled cancelled details of embedded exception traceback most recent call last file home runner cache pypoetry virtualenvs parsec cloud dz lib site packages trio timeouts py line in fail at yield scope file home runner work parsec cloud parsec cloud tests test time provider py line in test sleep in nursery await wait for sleeping stat tp file home runner cache pypoetry virtualenvs parsec cloud dz lib site packages trio core run py line in aexit raise combined error from nursery file home runner work parsec cloud parsec cloud tests test time provider py line in test sleep in nursery await wait for sleeping stat tp file home runner work parsec cloud parsec cloud tests test time provider py line in wait for sleeping stat await trio sleep file home runner cache pypoetry virtualenvs parsec cloud dz lib site packages trio timeouts py line in sleep await trio lowlevel checkpoint file home runner cache pypoetry virtualenvs parsec cloud dz lib site packages trio core run py line in checkpoint await core wait task rescheduled lambda core abort succeeded file home runner cache pypoetry virtualenvs parsec cloud dz lib site packages trio core traps py line in wait task rescheduled return await async yield waittaskrescheduled abort func unwrap file home runner cache pypoetry virtualenvs parsec cloud dz lib site packages outcome impl py line in unwrap raise captured error file home runner cache pypoetry virtualenvs parsec cloud dz lib site packages trio core run py line in raise cancel raise cancelled create trio cancelled cancelled details of embedded exception traceback most recent call last file home runner cache pypoetry virtualenvs parsec cloud dz lib site packages trio timeouts py line in fail at yield scope file home runner work parsec cloud parsec cloud tests test time provider py line 
in test sleep in nursery await wait for sleeping stat tp file home runner cache pypoetry virtualenvs parsec cloud dz lib site packages trio core run py line in aexit raise combined error from nursery file home runner work parsec cloud parsec cloud tests test time provider py line in test sleep in nursery await wait for sleeping stat tp file home runner work parsec cloud parsec cloud tests test time provider py line in wait for sleeping stat await trio sleep file home runner cache pypoetry virtualenvs parsec cloud dz lib site packages trio timeouts py line in sleep await trio lowlevel checkpoint file home runner cache pypoetry virtualenvs parsec cloud dz lib site packages trio core run py line in checkpoint await core wait task rescheduled lambda core abort succeeded file home runner cache pypoetry virtualenvs parsec cloud dz lib site packages trio core traps py line in wait task rescheduled return await async yield waittaskrescheduled abort func unwrap file home runner cache pypoetry virtualenvs parsec cloud dz lib site packages outcome impl py line in unwrap raise captured error file home runner cache pypoetry virtualenvs parsec cloud dz lib site packages trio core run py line in raise cancel raise cancelled create trio cancelled cancelled during handling of the above exception another exception occurred traceback most recent call last file home runner cache pypoetry virtualenvs parsec cloud dz lib site packages trio timeouts py line in fail at yield scope file home runner work parsec cloud parsec cloud tests test time provider py line in test sleep in nursery await wait for sleeping stat tp file home runner cache pypoetry virtualenvs parsec cloud dz lib site packages trio core run py line in aexit raise combined error from nursery file home runner cache pypoetry virtualenvs parsec cloud dz lib site packages trio core run py line in nested child finished await checkpoint file home runner cache pypoetry virtualenvs parsec cloud dz lib site packages trio core run py 
line in checkpoint await core wait task rescheduled lambda core abort succeeded file home runner cache pypoetry virtualenvs parsec cloud dz lib site packages trio core traps py line in wait task rescheduled return await async yield waittaskrescheduled abort func unwrap file home runner cache pypoetry virtualenvs parsec cloud dz lib site packages outcome impl py line in unwrap raise captured error file home runner cache pypoetry virtualenvs parsec cloud dz lib site packages trio core run py line in raise cancel raise cancelled create trio cancelled cancelled during handling of the above exception another exception occurred value async def yield value none return await yield value cache pypoetry virtualenvs parsec cloud dz lib site packages async generator impl py cache pypoetry virtualenvs parsec cloud dz lib site packages async generator impl py in yield return yield wrap value tests test time provider py in test sleep in nursery await wait for sleeping stat tp opt hostedtoolcache python lib contextlib py in exit self gen throw typ value traceback deadline contextmanager def fail at deadline creates a cancel scope with the given deadline and raises an error if it is actually cancelled this function and func move on at are similar in that both create a cancel scope with a given absolute deadline and if the deadline expires then both will cause exc cancelled to be raised within the scope the difference is that when the exc cancelled exception reaches func move on at it s caught and discarded when it reaches func fail at then it s caught and exc tooslowerror is raised in its place raises tooslowerror if a exc cancelled exception is raised in this scope and caught by the context manager with move on at deadline as scope yield scope if scope cancelled caught raise tooslowerror e trio tooslowerror cache pypoetry virtualenvs parsec cloud dz lib site packages trio timeouts py tooslowerror slowest durations call tests test time provider py test sleep in nursery call tests 
test cli py test pki enrollment call tests test cli py test bootstrap sequester setup tests test cli py test share workspace setup tests test cli py test share workspace call tests test time provider py test sleep with mock call tests test cli py test reencrypt workspace setup tests test logging py test sentry structlog integration setup tests test logging py test sentry stdlib integration call tests test cli py test share workspace short test summary info failed tests test time provider py test sleep in nursery trio tooslow stopping after failures xdist dsession interrupted stopping after failures failed passed skipped in
| 0
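
The traceback in the record above ends in `trio.TooSlowError` raised by `fail_at`, whose docstring (quoted in the failure) explains the split between `move_on_at` (swallow the deadline cancellation) and `fail_at` (convert it to `TooSlowError`). Those semantics can be sketched with only the standard library — this is an analogue, not trio's actual implementation, and the names `Deadline`, `move_on_after`, `fail_after`, and `TooSlow` are invented for illustration:

```python
import time
from contextlib import contextmanager

class TooSlow(Exception):
    """Stand-in for trio.TooSlowError in this stdlib-only sketch."""

class _Cancelled(Exception):
    """Stand-in for trio.Cancelled."""

class Deadline:
    def __init__(self, seconds: float):
        self.expires_at = time.monotonic() + seconds
        self.cancelled_caught = False

    def check(self) -> None:
        # In real trio the runtime injects Cancelled at checkpoints;
        # here the caller polls explicitly.
        if time.monotonic() >= self.expires_at:
            self.cancelled_caught = True
            raise _Cancelled

@contextmanager
def move_on_after(seconds: float):
    scope = Deadline(seconds)
    try:
        yield scope
    except _Cancelled:
        pass  # deadline expiry is swallowed, like trio.move_on_at

@contextmanager
def fail_after(seconds: float):
    with move_on_after(seconds) as scope:
        yield scope
    if scope.cancelled_caught:
        raise TooSlow  # caught cancellation becomes an error, like trio.fail_at

# Usage: a loop that overruns its budget raises TooSlow.
try:
    with fail_after(0.01) as scope:
        while True:
            scope.check()
except TooSlow:
    print("timed out")
```

The flaky test fails exactly at this last step: the deadline expires while the nursery is still waiting, the cancellation is caught, and `fail_at` re-raises it as `TooSlowError`.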
|
102,465
| 16,574,555,508
|
IssuesEvent
|
2021-05-31 01:01:11
|
snowdensb/vets-website
|
https://api.github.com/repos/snowdensb/vets-website
|
opened
|
CVE-2021-33587 (Medium) detected in css-what-2.1.0.tgz, css-what-3.4.2.tgz
|
security vulnerability
|
## CVE-2021-33587 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>css-what-2.1.0.tgz</b>, <b>css-what-3.4.2.tgz</b></p></summary>
<p>
<details><summary><b>css-what-2.1.0.tgz</b></p></summary>
<p>a CSS selector parser</p>
<p>Library home page: <a href="https://registry.npmjs.org/css-what/-/css-what-2.1.0.tgz">https://registry.npmjs.org/css-what/-/css-what-2.1.0.tgz</a></p>
<p>Path to dependency file: vets-website/node_modules/css-what/package.json</p>
<p>Path to vulnerable library: vets-website/node_modules/css-what/package.json</p>
<p>
Dependency Hierarchy:
- cheerio-1.0.0-rc.3.tgz (Root Library)
- css-select-1.2.0.tgz
- :x: **css-what-2.1.0.tgz** (Vulnerable Library)
</details>
<details><summary><b>css-what-3.4.2.tgz</b></p></summary>
<p>a CSS selector parser</p>
<p>Library home page: <a href="https://registry.npmjs.org/css-what/-/css-what-3.4.2.tgz">https://registry.npmjs.org/css-what/-/css-what-3.4.2.tgz</a></p>
<p>Path to dependency file: vets-website/node_modules/css-what/package.json</p>
<p>Path to vulnerable library: vets-website/node_modules/css-what/package.json</p>
<p>
Dependency Hierarchy:
- cssnano-4.1.10.tgz (Root Library)
- cssnano-preset-default-4.0.7.tgz
- postcss-svgo-4.0.2.tgz
- svgo-1.3.2.tgz
- css-select-2.1.0.tgz
- :x: **css-what-3.4.2.tgz** (Vulnerable Library)
</details>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
The css-what package before 5.0.1 for Node.js does not ensure that attribute parsing has Linear Time Complexity relative to the size of the input.
<p>Publish Date: 2021-05-28
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-33587>CVE-2021-33587</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.3</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: Low
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-33587">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-33587</a></p>
<p>Release Date: 2021-05-28</p>
<p>Fix Resolution: css-what - 5.0.1</p>
</p>
</details>
<p></p>
<!-- <REMEDIATE>{"isOpenPROnVulnerability":true,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"javascript/Node.js","packageName":"css-what","packageVersion":"2.1.0","packageFilePaths":["/node_modules/css-what/package.json"],"isTransitiveDependency":true,"dependencyTree":"cheerio:1.0.0-rc.3;css-select:1.2.0;css-what:2.1.0","isMinimumFixVersionAvailable":true,"minimumFixVersion":"css-what - 5.0.1"},{"packageType":"javascript/Node.js","packageName":"css-what","packageVersion":"3.4.2","packageFilePaths":["/node_modules/css-what/package.json"],"isTransitiveDependency":true,"dependencyTree":"cssnano:4.1.10;cssnano-preset-default:4.0.7;postcss-svgo:4.0.2;svgo:1.3.2;css-select:2.1.0;css-what:3.4.2","isMinimumFixVersionAvailable":true,"minimumFixVersion":"css-what - 5.0.1"}],"baseBranches":["master"],"vulnerabilityIdentifier":"CVE-2021-33587","vulnerabilityDetails":"The css-what package before 5.0.1 for Node.js does not ensure that attribute parsing has Linear Time Complexity relative to the size of the input.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-33587","cvss3Severity":"medium","cvss3Score":"5.3","cvss3Metrics":{"A":"None","AC":"Low","PR":"None","S":"Unchanged","C":"None","UI":"None","AV":"Network","I":"Low"},"extraData":{}}</REMEDIATE> -->
|
True
|
CVE-2021-33587 (Medium) detected in css-what-2.1.0.tgz, css-what-3.4.2.tgz - ## CVE-2021-33587 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>css-what-2.1.0.tgz</b>, <b>css-what-3.4.2.tgz</b></p></summary>
<p>
<details><summary><b>css-what-2.1.0.tgz</b></p></summary>
<p>a CSS selector parser</p>
<p>Library home page: <a href="https://registry.npmjs.org/css-what/-/css-what-2.1.0.tgz">https://registry.npmjs.org/css-what/-/css-what-2.1.0.tgz</a></p>
<p>Path to dependency file: vets-website/node_modules/css-what/package.json</p>
<p>Path to vulnerable library: vets-website/node_modules/css-what/package.json</p>
<p>
Dependency Hierarchy:
- cheerio-1.0.0-rc.3.tgz (Root Library)
- css-select-1.2.0.tgz
- :x: **css-what-2.1.0.tgz** (Vulnerable Library)
</details>
<details><summary><b>css-what-3.4.2.tgz</b></p></summary>
<p>a CSS selector parser</p>
<p>Library home page: <a href="https://registry.npmjs.org/css-what/-/css-what-3.4.2.tgz">https://registry.npmjs.org/css-what/-/css-what-3.4.2.tgz</a></p>
<p>Path to dependency file: vets-website/node_modules/css-what/package.json</p>
<p>Path to vulnerable library: vets-website/node_modules/css-what/package.json</p>
<p>
Dependency Hierarchy:
- cssnano-4.1.10.tgz (Root Library)
- cssnano-preset-default-4.0.7.tgz
- postcss-svgo-4.0.2.tgz
- svgo-1.3.2.tgz
- css-select-2.1.0.tgz
- :x: **css-what-3.4.2.tgz** (Vulnerable Library)
</details>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
The css-what package before 5.0.1 for Node.js does not ensure that attribute parsing has Linear Time Complexity relative to the size of the input.
<p>Publish Date: 2021-05-28
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-33587>CVE-2021-33587</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.3</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: Low
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-33587">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-33587</a></p>
<p>Release Date: 2021-05-28</p>
<p>Fix Resolution: css-what - 5.0.1</p>
</p>
</details>
<p></p>
<!-- <REMEDIATE>{"isOpenPROnVulnerability":true,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"javascript/Node.js","packageName":"css-what","packageVersion":"2.1.0","packageFilePaths":["/node_modules/css-what/package.json"],"isTransitiveDependency":true,"dependencyTree":"cheerio:1.0.0-rc.3;css-select:1.2.0;css-what:2.1.0","isMinimumFixVersionAvailable":true,"minimumFixVersion":"css-what - 5.0.1"},{"packageType":"javascript/Node.js","packageName":"css-what","packageVersion":"3.4.2","packageFilePaths":["/node_modules/css-what/package.json"],"isTransitiveDependency":true,"dependencyTree":"cssnano:4.1.10;cssnano-preset-default:4.0.7;postcss-svgo:4.0.2;svgo:1.3.2;css-select:2.1.0;css-what:3.4.2","isMinimumFixVersionAvailable":true,"minimumFixVersion":"css-what - 5.0.1"}],"baseBranches":["master"],"vulnerabilityIdentifier":"CVE-2021-33587","vulnerabilityDetails":"The css-what package before 5.0.1 for Node.js does not ensure that attribute parsing has Linear Time Complexity relative to the size of the input.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-33587","cvss3Severity":"medium","cvss3Score":"5.3","cvss3Metrics":{"A":"None","AC":"Low","PR":"None","S":"Unchanged","C":"None","UI":"None","AV":"Network","I":"Low"},"extraData":{}}</REMEDIATE> -->
|
non_process
|
cve medium detected in css what tgz css what tgz cve medium severity vulnerability vulnerable libraries css what tgz css what tgz css what tgz a css selector parser library home page a href path to dependency file vets website node modules css what package json path to vulnerable library vets website node modules css what package json dependency hierarchy cheerio rc tgz root library css select tgz x css what tgz vulnerable library css what tgz a css selector parser library home page a href path to dependency file vets website node modules css what package json path to vulnerable library vets website node modules css what package json dependency hierarchy cssnano tgz root library cssnano preset default tgz postcss svgo tgz svgo tgz css select tgz x css what tgz vulnerable library found in base branch master vulnerability details the css what package before for node js does not ensure that attribute parsing has linear time complexity relative to the size of the input publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact low availability impact none for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution css what isopenpronvulnerability true ispackagebased true isdefaultbranch true packages istransitivedependency true dependencytree cheerio rc css select css what isminimumfixversionavailable true minimumfixversion css what packagetype javascript node js packagename css what packageversion packagefilepaths istransitivedependency true dependencytree cssnano cssnano preset default postcss svgo svgo css select css what isminimumfixversionavailable true minimumfixversion css what basebranches vulnerabilityidentifier cve vulnerabilitydetails the css what package before for node js does not ensure that attribute 
parsing has linear time complexity relative to the size of the input vulnerabilityurl
| 0
|
371,356
| 10,965,364,448
|
IssuesEvent
|
2019-11-28 02:34:58
|
CMPUT301F19T34/MOODeration
|
https://api.github.com/repos/CMPUT301F19T34/MOODeration
|
closed
|
Add optional picture field to Mood Event.
|
5 story points high risk medium priority
|
**US 02.02.01**
**As a** participant, **I want** to express the reason why for a mood event using a photograph **so that** I can attribute a picture to a given mood event
|
1.0
|
Add optional picture field to Mood Event. - **US 02.02.01**
**As a** participant, **I want** to express the reason why for a mood event using a photograph **so that** I can attribute a picture to a given mood event
|
non_process
|
add optional picture field to mood event us as a participant i want to express the reason why for a mood event using a photograph so that i can attribute a picture to a given mood event
| 0
|
20,376
| 27,030,321,243
|
IssuesEvent
|
2023-02-12 04:57:24
|
bazelbuild/bazel
|
https://api.github.com/repos/bazelbuild/bazel
|
closed
|
data dependency should not be required for building a target
|
type: support / not a bug (process) team-ExternalDeps
|
### Description of the problem / feature request:
Building a `cc_test` target also requires the `data` dependencies, even though the `data` dependencies are only needed for running/testing.
We are using data dependencies for large test data and currently these large files need to be downloaded even when just building.
### Feature requests: what underlying problem are you trying to solve with this feature?
Specify an external `data` dependency, and only have it downloaded when it's actually needed.
### Bugs: what's the simplest, easiest way to reproduce this bug? Please provide a minimal example if possible.
```
$ cat WORKSPACE
load("@bazel_tools//tools/build_defs/repo:http.bzl", "http_archive")
http_archive(
name = "largefile",
url = "https://github.com/bazelbuild/bazel/archive/refs/tags/6.0.0-pre.20220223.1.zip"
)
$ cat BUILD.bazel
load("@rules_cc//cc:defs.bzl", "cc_test")
cc_test(
name = "test",
data = ["@largefile"],
srcs = glob([
"*.cpp",
"*.hpp",
]),
)
$ cat x.test.cpp
int main() {
return 0;
}
$ bazel build test
```
Running the bazel command downloads the zip file, even though it's not needed for the build.
### What operating system are you running Bazel on?
Ubuntu 20.04
### What's the output of `bazel info release`?
release 5.0.0
I also tried it with today's master, same behavior there (8dcf27e590ce77241a15fd2f2f8b9889a3d7731b)
### If `bazel info release` returns "development version" or "(@non-git)", tell us how you built Bazel.
--
### What's the output of `git remote get-url origin ; git rev-parse master ; git rev-parse HEAD` ?
--
### Have you found anything relevant by searching the web?
According to the documentation `data` does not affect how the target is built:
```
A build target might need some data files to run correctly. These data files aren't source code: they don't affect how the target is built.
```
https://docs.bazel.build/versions/main/build-ref.html#data
### Any other information, logs, or outputs that you want to share?
--
|
1.0
|
data dependency should not be required for building a target - ### Description of the problem / feature request:
Building a `cc_test` target also requires the `data` dependencies, even though the `data` dependencies are only needed for running/testing.
We are using data dependencies for large test data and currently these large files need to be downloaded even when just building.
### Feature requests: what underlying problem are you trying to solve with this feature?
Specify an external `data` dependency, and only have it downloaded when it's actually needed.
### Bugs: what's the simplest, easiest way to reproduce this bug? Please provide a minimal example if possible.
```
$ cat WORKSPACE
load("@bazel_tools//tools/build_defs/repo:http.bzl", "http_archive")
http_archive(
name = "largefile",
url = "https://github.com/bazelbuild/bazel/archive/refs/tags/6.0.0-pre.20220223.1.zip"
)
$ cat BUILD.bazel
load("@rules_cc//cc:defs.bzl", "cc_test")
cc_test(
name = "test",
data = ["@largefile"],
srcs = glob([
"*.cpp",
"*.hpp",
]),
)
$ cat x.test.cpp
int main() {
return 0;
}
$ bazel build test
```
Running the bazel command downloads the zip file, even though it's not needed for the build.
### What operating system are you running Bazel on?
Ubuntu 20.04
### What's the output of `bazel info release`?
release 5.0.0
I also tried it with today's master, same behavior there (8dcf27e590ce77241a15fd2f2f8b9889a3d7731b)
### If `bazel info release` returns "development version" or "(@non-git)", tell us how you built Bazel.
--
### What's the output of `git remote get-url origin ; git rev-parse master ; git rev-parse HEAD` ?
--
### Have you found anything relevant by searching the web?
According to the documentation `data` does not affect how the target is built:
```
A build target might need some data files to run correctly. These data files aren't source code: they don't affect how the target is built.
```
https://docs.bazel.build/versions/main/build-ref.html#data
### Any other information, logs, or outputs that you want to share?
--
|
process
|
data dependency should not be required for building a target description of the problem feature request building a cc test target also requires the data dependencies even though the data dependencies are only needed for running testing we are using data dependencies for large test data and currently these large files need to be downloaded even when just building feature requests what underlying problem are you trying to solve with this feature specify an external data dependency and only have it downloaded when it s actually needed bugs what s the simplest easiest way to reproduce this bug please provide a minimal example if possible cat workspace load bazel tools tools build defs repo http bzl http archive http archive name largefile url cat build bazel load rules cc cc defs bzl cc test cc test name test data srcs glob cpp hpp cat x test cpp int main return bazel build test running the bazel command downloads the zip file even though it s not needed for the build what operating system are you running bazel on ubuntu what s the output of bazel info release release i also tried it with today s master same behavior there if bazel info release returns development version or non git tell us how you built bazel what s the output of git remote get url origin git rev parse master git rev parse head have you found anything relevant by searching the web according to the documentation data does not affect how the target is built a build target might need some data files to run correctly these data files aren t source code they don t affect how the target is built any other information logs or outputs that you want to share
| 1
|
10,082
| 13,044,161,979
|
IssuesEvent
|
2020-07-29 03:47:28
|
tikv/tikv
|
https://api.github.com/repos/tikv/tikv
|
closed
|
UCP: Migrate scalar function `SubDurationAndDuration` from TiDB
|
challenge-program-2 component/coprocessor difficulty/easy sig/coprocessor
|
## Description
Port the scalar function `SubDurationAndDuration` from TiDB to coprocessor.
## Score
* 50
## Mentor(s)
* @iosmanthus
## Recommended Skills
* Rust programming
## Learning Materials
Already implemented expressions ported from TiDB
- https://github.com/tikv/tikv/tree/master/components/tidb_query/src/rpn_expr)
- https://github.com/tikv/tikv/tree/master/components/tidb_query/src/expr)
|
2.0
|
UCP: Migrate scalar function `SubDurationAndDuration` from TiDB -
## Description
Port the scalar function `SubDurationAndDuration` from TiDB to coprocessor.
## Score
* 50
## Mentor(s)
* @iosmanthus
## Recommended Skills
* Rust programming
## Learning Materials
Already implemented expressions ported from TiDB
- https://github.com/tikv/tikv/tree/master/components/tidb_query/src/rpn_expr)
- https://github.com/tikv/tikv/tree/master/components/tidb_query/src/expr)
|
process
|
ucp migrate scalar function subdurationandduration from tidb description port the scalar function subdurationandduration from tidb to coprocessor score mentor s iosmanthus recommended skills rust programming learning materials already implemented expressions ported from tidb
| 1
|
52,020
| 27,337,944,473
|
IssuesEvent
|
2023-02-26 12:56:55
|
llvm/llvm-project
|
https://api.github.com/repos/llvm/llvm-project
|
opened
|
[X86] Garbage in undemanded vector elements can cause fdiv performance drops
|
backend:X86 performance
|
This is related to #60632
We've noticed that when dealing with partially demanded or short vectors, the values in the undemanded elements can cause performance drops in some fp instructions (most notable in fdiv but also fsqrt/divps), even with DAZ/FTZ enabled. This has been noticed most on btver2 targets, but I expect there's other CPUs that can be affected in other ways.
Sometimes this appears to be values that would raise fp-exceptions (fdivzero etc. - even if they've been disabled), other times its just because the values are particularly large or poorly canonicalized - basically if the element's bits don't represent a typical float value then it seems some weaker fdiv units are likely to drop to a slower execution path.
Pulling out exact examples is proving to be tricky, but something like:
```
define <2 x float> @fdiv_post_shuffle(<2 x float> %a0, <2 x float> %a1) {
%d = fdiv <2 x float> %a0, %a1
%s = shufflevector <2 x float> %d, <2 x float> poison, <2 x i32> <i32 1, i32 0>
ret <2 x float> %s
}
fdiv_post_shuffle:
vdivps %xmm1, %xmm0, %xmm0
vpermilps $225, %xmm0, %xmm0 # xmm0 = xmm0[1,0,2,3]
retq
```
would be better if actually performed as something like:
```
fdiv_pre_shuffle:
vpermilps $17, %xmm0, %xmm0 # xmm0 = xmm0[1,0,1,0]
vpermilps $17, %xmm1, %xmm1 # xmm1 = xmm1[1,0,1,0]
vdivps %xmm1, %xmm0, %xmm0
retq
```
|
True
|
[X86] Garbage in undemanded vector elements can cause fdiv performance drops - This is related to #60632
We've noticed that when dealing with partially demanded or short vectors, the values in the undemanded elements can cause performance drops in some fp instructions (most notable in fdiv but also fsqrt/divps), even with DAZ/FTZ enabled. This has been noticed most on btver2 targets, but I expect there's other CPUs that can be affected in other ways.
Sometimes this appears to be values that would raise fp-exceptions (fdivzero etc. - even if they've been disabled), other times its just because the values are particularly large or poorly canonicalized - basically if the element's bits don't represent a typical float value then it seems some weaker fdiv units are likely to drop to a slower execution path.
Pulling out exact examples is proving to be tricky, but something like:
```
define <2 x float> @fdiv_post_shuffle(<2 x float> %a0, <2 x float> %a1) {
%d = fdiv <2 x float> %a0, %a1
%s = shufflevector <2 x float> %d, <2 x float> poison, <2 x i32> <i32 1, i32 0>
ret <2 x float> %s
}
fdiv_post_shuffle:
vdivps %xmm1, %xmm0, %xmm0
vpermilps $225, %xmm0, %xmm0 # xmm0 = xmm0[1,0,2,3]
retq
```
would be better if actually performed as something like:
```
fdiv_pre_shuffle:
vpermilps $17, %xmm0, %xmm0 # xmm0 = xmm0[1,0,1,0]
vpermilps $17, %xmm1, %xmm1 # xmm1 = xmm1[1,0,1,0]
vdivps %xmm1, %xmm0, %xmm0
retq
```
|
non_process
|
garbage in undemanded vector elements can cause fdiv performance drops this is related to we ve noticed that when dealing with partially demanded or short vectors the values in the undemanded elements can cause performance drops in some fp instructions most notable in fdiv but also fsqrt divps even with daz ftz enabled this has been noticed most on targets but i expect there s other cpus that can be affected in other ways sometimes this appears to be values that would raise fp exceptions fdivzero etc even if they ve been disabled other times its just because the values are particularly large or poorly canonicalized basically if the element s bits don t represent a typical float value then it seems some weaker fdiv units are likely to drop to a slower execution path pulling out exact examples is proving to be tricky but something like define fdiv post shuffle d fdiv s shufflevector d poison ret s fdiv post shuffle vdivps vpermilps retq would be better if actually performed as something like fdiv pre shuffle vpermilps vpermilps vdivps retq
| 0
|
86,471
| 15,755,666,332
|
IssuesEvent
|
2021-03-31 02:10:59
|
attesch/zencart
|
https://api.github.com/repos/attesch/zencart
|
opened
|
CVE-2019-20920 (High) detected in handlebars-4.1.2.tgz
|
security vulnerability
|
## CVE-2019-20920 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>handlebars-4.1.2.tgz</b></p></summary>
<p>Handlebars provides the power necessary to let you build semantic templates effectively with no frustration</p>
<p>Library home page: <a href="https://registry.npmjs.org/handlebars/-/handlebars-4.1.2.tgz">https://registry.npmjs.org/handlebars/-/handlebars-4.1.2.tgz</a></p>
<p>Path to dependency file: /zencart/admin/includes/template/javascript/gridstack.js-master/package.json</p>
<p>Path to vulnerable library: zencart/admin/includes/template/javascript/gridstack.js-master/node_modules/handlebars/package.json</p>
<p>
Dependency Hierarchy:
- karma-coverage-1.1.2.tgz (Root Library)
- istanbul-0.4.5.tgz
- :x: **handlebars-4.1.2.tgz** (Vulnerable Library)
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Handlebars before 3.0.8 and 4.x before 4.5.3 is vulnerable to Arbitrary Code Execution. The lookup helper fails to properly validate templates, allowing attackers to submit templates that execute arbitrary JavaScript. This can be used to run arbitrary code on a server processing Handlebars templates or in a victim's browser (effectively serving as XSS).
<p>Publish Date: 2020-09-30
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2019-20920>CVE-2019-20920</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>8.1</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: High
- Privileges Required: None
- User Interaction: None
- Scope: Changed
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: Low
- Availability Impact: Low
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://www.npmjs.com/advisories/1324">https://www.npmjs.com/advisories/1324</a></p>
<p>Release Date: 2020-10-15</p>
<p>Fix Resolution: handlebars - 4.5.3</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
True
|
CVE-2019-20920 (High) detected in handlebars-4.1.2.tgz - ## CVE-2019-20920 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>handlebars-4.1.2.tgz</b></p></summary>
<p>Handlebars provides the power necessary to let you build semantic templates effectively with no frustration</p>
<p>Library home page: <a href="https://registry.npmjs.org/handlebars/-/handlebars-4.1.2.tgz">https://registry.npmjs.org/handlebars/-/handlebars-4.1.2.tgz</a></p>
<p>Path to dependency file: /zencart/admin/includes/template/javascript/gridstack.js-master/package.json</p>
<p>Path to vulnerable library: zencart/admin/includes/template/javascript/gridstack.js-master/node_modules/handlebars/package.json</p>
<p>
Dependency Hierarchy:
- karma-coverage-1.1.2.tgz (Root Library)
- istanbul-0.4.5.tgz
- :x: **handlebars-4.1.2.tgz** (Vulnerable Library)
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Handlebars before 3.0.8 and 4.x before 4.5.3 is vulnerable to Arbitrary Code Execution. The lookup helper fails to properly validate templates, allowing attackers to submit templates that execute arbitrary JavaScript. This can be used to run arbitrary code on a server processing Handlebars templates or in a victim's browser (effectively serving as XSS).
<p>Publish Date: 2020-09-30
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2019-20920>CVE-2019-20920</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>8.1</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: High
- Privileges Required: None
- User Interaction: None
- Scope: Changed
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: Low
- Availability Impact: Low
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://www.npmjs.com/advisories/1324">https://www.npmjs.com/advisories/1324</a></p>
<p>Release Date: 2020-10-15</p>
<p>Fix Resolution: handlebars - 4.5.3</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
non_process
|
cve high detected in handlebars tgz cve high severity vulnerability vulnerable library handlebars tgz handlebars provides the power necessary to let you build semantic templates effectively with no frustration library home page a href path to dependency file zencart admin includes template javascript gridstack js master package json path to vulnerable library zencart admin includes template javascript gridstack js master node modules handlebars package json dependency hierarchy karma coverage tgz root library istanbul tgz x handlebars tgz vulnerable library vulnerability details handlebars before and x before is vulnerable to arbitrary code execution the lookup helper fails to properly validate templates allowing attackers to submit templates that execute arbitrary javascript this can be used to run arbitrary code on a server processing handlebars templates or in a victim s browser effectively serving as xss publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity high privileges required none user interaction none scope changed impact metrics confidentiality impact high integrity impact low availability impact low for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution handlebars step up your open source security game with whitesource
| 0
|
671,858
| 22,778,982,208
|
IssuesEvent
|
2022-07-08 17:22:23
|
stormk539/CavemanCooking
|
https://api.github.com/repos/stormk539/CavemanCooking
|
opened
|
Ingredient Blueprints
|
Priority: High 3
|
As the player, I must be able to collect ingredients for my dishes so that I am able to continue running my restaurant, as well as gain capital to improve it.
CoS:
1) Player is able to obtain food from berry bushes and random veggies in the ground
2) Short timer for how long it takes to pick food items
3) Tags for each ingredient to make the cooking process smoother
|
1.0
|
Ingredient Blueprints - As the player, I must be able to collect ingredients for my dishes so that I am able to continue running my restaurant, as well as gain capital to improve it.
CoS:
1) Player is able to obtain food from berry bushes and random veggies in the ground
2) Short timer for how long it takes to pick food items
3) Tags for each ingredient to make the cooking process smoother
|
non_process
|
ingredient blueprints as the player i must be able to collect ingredients for my dishes so that i am able to continue running my restaurant as well as gain capital to improve it cos player is able to obtain food from berry bushes and random veggies in the ground short timer for how long it takes to pick food items tags for each ingredient to make the cooking process smoother
| 0
|
624,892
| 19,712,220,302
|
IssuesEvent
|
2022-01-13 07:13:12
|
webcompat/web-bugs
|
https://api.github.com/repos/webcompat/web-bugs
|
closed
|
m.imgur.com - site is not usable
|
browser-firefox-mobile priority-critical engine-gecko QA_triaged
|
<!-- @browser: Firefox Mobile 95.0 -->
<!-- @ua_header: Mozilla/5.0 (Android 11; Mobile; rv:95.0) Gecko/95.0 Firefox/95.0 -->
<!-- @reported_with: android-components-reporter -->
<!-- @public_url: https://github.com/webcompat/web-bugs/issues/98111 -->
**URL**: https://m.imgur.com/
**Browser / Version**: Firefox Mobile 95.0
**Operating System**: Android 11
**Tested Another Browser**: Yes Safari
**Problem type**: Site is not usable
**Description**: Browser unsupported
**Steps to Reproduce**:
If it ac no it ru in vs see ridged earthiness
<details>
<summary>View the screenshot</summary>
<img alt="Screenshot" src="https://webcompat.com/uploads/2022/1/b9b7ae31-34c8-49c7-b65a-a0877fd2054c.jpeg">
</details>
<details>
<summary>Browser Configuration</summary>
<ul>
<li>gfx.webrender.all: false</li><li>gfx.webrender.blob-images: true</li><li>gfx.webrender.enabled: false</li><li>image.mem.shared: true</li><li>buildID: 20211215221728</li><li>channel: release</li><li>hasTouchScreen: true</li><li>mixed active content blocked: false</li><li>mixed passive content blocked: false</li><li>tracking content blocked: false</li>
</ul>
</details>
[View console log messages](https://webcompat.com/console_logs/2022/1/27df1523-181e-4226-a94d-8a5e8a211af7)
_From [webcompat.com](https://webcompat.com/) with ❤️_
|
1.0
|
m.imgur.com - site is not usable - <!-- @browser: Firefox Mobile 95.0 -->
<!-- @ua_header: Mozilla/5.0 (Android 11; Mobile; rv:95.0) Gecko/95.0 Firefox/95.0 -->
<!-- @reported_with: android-components-reporter -->
<!-- @public_url: https://github.com/webcompat/web-bugs/issues/98111 -->
**URL**: https://m.imgur.com/
**Browser / Version**: Firefox Mobile 95.0
**Operating System**: Android 11
**Tested Another Browser**: Yes Safari
**Problem type**: Site is not usable
**Description**: Browser unsupported
**Steps to Reproduce**:
If it ac no it ru in vs see ridged earthiness
<details>
<summary>View the screenshot</summary>
<img alt="Screenshot" src="https://webcompat.com/uploads/2022/1/b9b7ae31-34c8-49c7-b65a-a0877fd2054c.jpeg">
</details>
<details>
<summary>Browser Configuration</summary>
<ul>
<li>gfx.webrender.all: false</li><li>gfx.webrender.blob-images: true</li><li>gfx.webrender.enabled: false</li><li>image.mem.shared: true</li><li>buildID: 20211215221728</li><li>channel: release</li><li>hasTouchScreen: true</li><li>mixed active content blocked: false</li><li>mixed passive content blocked: false</li><li>tracking content blocked: false</li>
</ul>
</details>
[View console log messages](https://webcompat.com/console_logs/2022/1/27df1523-181e-4226-a94d-8a5e8a211af7)
_From [webcompat.com](https://webcompat.com/) with ❤️_
|
non_process
|
m imgur com site is not usable url browser version firefox mobile operating system android tested another browser yes safari problem type site is not usable description browser unsupported steps to reproduce if it ac no it ru in vs see ridged earthiness view the screenshot img alt screenshot src browser configuration gfx webrender all false gfx webrender blob images true gfx webrender enabled false image mem shared true buildid channel release hastouchscreen true mixed active content blocked false mixed passive content blocked false tracking content blocked false from with ❤️
| 0
|
20,591
| 27,256,676,031
|
IssuesEvent
|
2023-02-22 12:03:48
|
pyanodon/pybugreports
|
https://api.github.com/repos/pyanodon/pybugreports
|
closed
|
Balance : Adjust pollution values for consistency (feel free to add)
|
balance needs investigation mod:pycoalprocessing
|
### Mod source
PyAE Beta
### Which mod are you having an issue with?
- [ ] pyalienlife
- [ ] pyalternativeenergy
- [X] pycoalprocessing
- [ ] pyfusionenergy
- [ ] pyhightech
- [ ] pyindustry
- [ ] pypetroleumhandling
- [ ] pypostprocessing
- [ ] pyrawores
### Operating system
>=Windows 10
### What kind of issue is this?
- [ ] Compatibility
- [ ] Locale (names, descriptions, unknown keys)
- [ ] Graphical
- [ ] Crash
- [ ] Progression
- [X] Balance
- [ ] Pypostprocessing failure
- [ ] Other
### What is the problem?
Crystal Mine MK 01 -> 0.06/sec
Soil Extractor MK 01 -> 0.06/sec
Destructive Distillation Column -> 0.06/sec
### Steps to reproduce
Use your favorite tool to read pollution values
### Additional context
_No response_
### Log file
_No response_
|
1.0
|
Balance : Adjust pollution values for consistency (feel free to add) - ### Mod source
PyAE Beta
### Which mod are you having an issue with?
- [ ] pyalienlife
- [ ] pyalternativeenergy
- [X] pycoalprocessing
- [ ] pyfusionenergy
- [ ] pyhightech
- [ ] pyindustry
- [ ] pypetroleumhandling
- [ ] pypostprocessing
- [ ] pyrawores
### Operating system
>=Windows 10
### What kind of issue is this?
- [ ] Compatibility
- [ ] Locale (names, descriptions, unknown keys)
- [ ] Graphical
- [ ] Crash
- [ ] Progression
- [X] Balance
- [ ] Pypostprocessing failure
- [ ] Other
### What is the problem?
Crystal Mine MK 01 -> 0.06/sec
Soil Extractor MK 01 -> 0.06/sec
Destructive Distillation Column -> 0.06/sec
### Steps to reproduce
Use your favorite tool to read pollution values
### Additional context
_No response_
### Log file
_No response_
|
process
|
balance adjust pollution values for consistency feel free to add mod source pyae beta which mod are you having an issue with pyalienlife pyalternativeenergy pycoalprocessing pyfusionenergy pyhightech pyindustry pypetroleumhandling pypostprocessing pyrawores operating system windows what kind of issue is this compatibility locale names descriptions unknown keys graphical crash progression balance pypostprocessing failure other what is the problem crystal mine mk sec soil extractor mk sec destructive distillation column sec steps to reproduce use your favorite tool to read pollution values additional context no response log file no response
| 1
|
16,167
| 3,509,176,100
|
IssuesEvent
|
2016-01-08 21:24:37
|
OpenGeoscience/geojs
|
https://api.github.com/repos/OpenGeoscience/geojs
|
closed
|
Investigate other coverage tools
|
testing
|
Several tests fail in strange ways when they are instrumented with blanket. It looks like blanket particularly doesn't like the global `inherit` function we use. There are a couple of alternative coverage reporters that might work:
1. [istanbul](https://github.com/gotwarlost/istanbul)
2. [jscoverage](https://github.com/fishbar/jscoverage)
|
1.0
|
Investigate other coverage tools - Several tests fail in strange ways when they are instrumented with blanket. It looks like blanket particularly doesn't like the global `inherit` function we use. There are a couple of alternative coverage reporters that might work:
1. [istanbul](https://github.com/gotwarlost/istanbul)
2. [jscoverage](https://github.com/fishbar/jscoverage)
|
non_process
|
investigate other coverage tools several tests fail in strange ways when they are instrumented with blanket it looks like blanket particularly doesn t like the global inherit function we use there are a couple of alternative coverage reporters that might work
| 0
|
325,782
| 24,061,319,628
|
IssuesEvent
|
2022-09-16 23:19:25
|
fleetdm/fleet
|
https://api.github.com/repos/fleetdm/fleet
|
opened
|
Create example query which finds files of a certain type being transmitted over the wire
|
:improve documentation
|
As a security engineer,
I want to know whether files of a certain type are being transmitted,
so that I can respond to them during special situations
|
1.0
|
Create example query which finds files of a certain type being transmitted over the wire - As a security engineer,
I want to know whether files of a certain type are being transmitted,
so that I can respond to them during special situations
|
non_process
|
create example query which finds files of a certain type being transmitted over the wire as a security engineer i want to know whether files of a certain type are being transmitted so that i can respond to them during special situations
| 0
|
268,987
| 8,418,786,767
|
IssuesEvent
|
2018-10-15 02:53:09
|
ankidroid/Anki-Android
|
https://api.github.com/repos/ankidroid/Anki-Android
|
closed
|
Unmounting extsd brings AnkiDroid to error screen
|
Priority-High bug
|
Originally reported on Google Code with ID 1342
```
What steps will reproduce the problem?
1. Have the collection in the standard place, /mnt/sdcard/AnkiDroid
2. Have a memory card mounted at /mnt/extsd
3. Review in AnkiDroid
4. In the Android settings, unmount the extsd
5. Go back to AnkiDroid
What is the expected output? What do you see instead?
AnkiDroid should not care about the disappeared extsd.
Instead, you get the blue error screen with "SD-Karte ist nicht eingebunden." (in German).
Something like "SD card not mounted."
Does it happen again every time you repeat the steps above? Or did it
happen only one time?
Every time.
What version of AnkiDroid are you using? (Decks list > menu > About > Look
at the title)
On what version of Android? (Home screen > menu > About phone > Android
version)
AnkiDroid 2.0.beta16
Android 4.0.3
(...)
Please provide any additional information below.
From the error screen, when you click on the "back" button in the bottom left, AnkiDroid
recovers. Still, i guess there is no need for this screen at all.
```
Reported by `ospalh` on 2012-08-25 17:15:52
<hr>
- _Attachment: [bildschirmfoto(1).jpg](https://storage.googleapis.com/google-code-attachments/ankidroid/issue-1342/comment-0/bildschirmfoto%281%29.jpg)_
|
1.0
|
Unmounting extsd brings AnkiDroid to error screen - Originally reported on Google Code with ID 1342
```
What steps will reproduce the problem?
1. Have the collection in the standard place, /mnt/sdcard/AnkiDroid
2. Have a memory card mounted at /mnt/extsd
3. Review in AnkiDroid
4. In the Android settings, unmount the extsd
5. Go back to AnkiDroid
What is the expected output? What do you see instead?
AnkiDroid should not care about the disappeared extsd.
Instead, you get the blue error screen with "SD-Karte ist nicht eingebunden." (in German).
Something like "SD card not mounted."
Does it happen again every time you repeat the steps above? Or did it
happen only one time?
Every time.
What version of AnkiDroid are you using? (Decks list > menu > About > Look
at the title)
On what version of Android? (Home screen > menu > About phone > Android
version)
AnkiDroid 2.0.beta16
Android 4.0.3
(...)
Please provide any additional information below.
From the error screen, when you click on the "back" button in the bottom left, AnkiDroid
recovers. Still, i guess there is no need for this screen at all.
```
Reported by `ospalh` on 2012-08-25 17:15:52
<hr>
- _Attachment: [bildschirmfoto(1).jpg](https://storage.googleapis.com/google-code-attachments/ankidroid/issue-1342/comment-0/bildschirmfoto%281%29.jpg)_
|
non_process
|
unmounting extsd brings ankidroid to error screen originally reported on google code with id what steps will reproduce the problem have the collection in the standard place mnt sdcard ankidroid have a memory card mounted at mnt extsd review in ankidroid in the android settings unmount the extsd go back to ankidroid what is the expected output what do you see instead ankidroid should not care about the disappeared extsd instead you get the blue error screen with sd karte ist nicht eingebunden in german something like sd card not mounted does it happen again every time you repeat the steps above or did it happen only one time every time what version of ankidroid are you using decks list menu about look at the title on what version of android home screen menu about phone android version ankidroid android please provide any additional information below from the error screen when you click on the back button in the bottom left ankidroid recovers still i guess there is no need for this screen at all reported by ospalh on attachment
| 0
|
20,710
| 27,401,921,189
|
IssuesEvent
|
2023-03-01 01:47:23
|
vnphanquang/svelte-put
|
https://api.github.com/repos/vnphanquang/svelte-put
|
closed
|
Using variable for data-inline-src returns error
|
op:question op:duplicate scope:preprocess-inline-svg
|
I have an array of strings that has all of my icon names in it.
Whenever trying to use:
` <svg data-inline-src=${realIconList[3]} />
`
I get the error: cannot find svg source for ${realIconList[3]}
|
1.0
|
Using variable for data-inline-src returns error - I have an array of strings that has all of my icon names in it.
Whenever trying to use:
` <svg data-inline-src=${realIconList[3]} />
`
I get the error: cannot find svg source for ${realIconList[3]}
|
process
|
using variable for data inline src returns error i have an array of strings that has all of my icon names in it whenever trying to use i get the error cannot find svg source for realiconlist
| 1
|
12,526
| 14,968,223,112
|
IssuesEvent
|
2021-01-27 16:35:16
|
CATcher-org/CATcher
|
https://api.github.com/repos/CATcher-org/CATcher
|
closed
|
Set Up Staging Site for CATcher Upon New Commits
|
aspect-Process
|
As a developer for CATcher, it would definitely be great that we are able to test and see the changes on CATcher after every commit to `master` branch. We should be able to use **Github Actions** to help us do so in this case.
We can deploy this on a separate Github Pages while leaving `CATcher-org.github.io/CATcher` for our release version of CATcher.
|
1.0
|
Set Up Staging Site for CATcher Upon New Commits - As a developer for CATcher, it would definitely be great that we are able to test and see the changes on CATcher after every commit to `master` branch. We should be able to use **Github Actions** to help us do so in this case.
We can deploy this on a separate Github Pages while leaving `CATcher-org.github.io/CATcher` for our release version of CATcher.
|
process
|
set up staging site for catcher upon new commits as a developer for catcher it would definitely be great that we are able to test and see the changes on catcher after every commit to master branch we should be able to use github actions to help us do so in this case we can deploy this on a separate github pages while leaving catcher org github io catcher for our release version of catcher
| 1
|
1,336
| 3,899,725,837
|
IssuesEvent
|
2016-04-17 22:23:09
|
kerubistan/kerub
|
https://api.github.com/repos/kerubistan/kerub
|
closed
|
can not connect to ubuntu 14.0.4
|
bug component:data processing priority: high
|
```
Caused by: org.apache.sshd.common.SshException: No more authentication methods available
at org.apache.sshd.client.session.ClientUserAuthService.tryNext(ClientUserAuthService.java:315) ~[sshd-core-1.2.0.jar:1.2.0]
at org.apache.sshd.client.session.ClientUserAuthService.processUserAuth(ClientUserAuthService.java:252) ~[sshd-core-1.2.0.jar:1.2.0]
at org.apache.sshd.client.session.ClientUserAuthService.process(ClientUserAuthService.java:199) ~[sshd-core-1.2.0.jar:1.2.0]
at org.apache.sshd.common.session.helpers.AbstractSession.doHandleMessage(AbstractSession.java:530) ~[sshd-core-1.2.0.jar:1.2.0]
at org.apache.sshd.common.session.helpers.AbstractSession.handleMessage(AbstractSession.java:463) ~[sshd-core-1.2.0.jar:1.2.0]
at org.apache.sshd.common.session.helpers.AbstractSession.decode(AbstractSession.java:1325) ~[sshd-core-1.2.0.jar:1.2.0]
at org.apache.sshd.common.session.helpers.AbstractSession.messageReceived(AbstractSession.java:424) ~[sshd-core-1.2.0.jar:1.2.0]
at org.apache.sshd.common.session.helpers.AbstractSessionIoHandler.messageReceived(AbstractSessionIoHandler.java:67) ~[sshd-core-1.2.0.jar:1.2.0]
at org.apache.sshd.common.io.nio2.Nio2Session.handleReadCycleCompletion(Nio2Session.java:285) ~[sshd-core-1.2.0.jar:1.2.0]
at org.apache.sshd.common.io.nio2.Nio2Session$2.onCompleted(Nio2Session.java:265) ~[sshd-core-1.2.0.jar:1.2.0]
at org.apache.sshd.common.io.nio2.Nio2Session$2.onCompleted(Nio2Session.java:262) ~[sshd-core-1.2.0.jar:1.2.0]
at org.apache.sshd.common.io.nio2.Nio2CompletionHandler$1.run(Nio2CompletionHandler.java:37) ~[sshd-core-1.2.0.jar:1.2.0]
at java.security.AccessController.doPrivileged(Native Method) ~[na:1.8.0_72]
at org.apache.sshd.common.io.nio2.Nio2CompletionHandler.completed(Nio2CompletionHandler.java:34) ~[sshd-core-1.2.0.jar:1.2.0]
at sun.nio.ch.Invoker.invokeUnchecked(Invoker.java:126) ~[na:1.8.0_72]
at sun.nio.ch.Invoker$2.run(Invoker.java:218) ~[na:1.8.0_72]
at sun.nio.ch.AsynchronousChannelGroupImpl$1.run(AsynchronousChannelGroupImpl.java:112) ~[na:1.8.0_72]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) ~[na:1.8.0_72]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) ~[na:1.8.0_72]
... 1 common frames omitted
```
|
1.0
|
can not connect to ubuntu 14.0.4 - ```
Caused by: org.apache.sshd.common.SshException: No more authentication methods available
at org.apache.sshd.client.session.ClientUserAuthService.tryNext(ClientUserAuthService.java:315) ~[sshd-core-1.2.0.jar:1.2.0]
at org.apache.sshd.client.session.ClientUserAuthService.processUserAuth(ClientUserAuthService.java:252) ~[sshd-core-1.2.0.jar:1.2.0]
at org.apache.sshd.client.session.ClientUserAuthService.process(ClientUserAuthService.java:199) ~[sshd-core-1.2.0.jar:1.2.0]
at org.apache.sshd.common.session.helpers.AbstractSession.doHandleMessage(AbstractSession.java:530) ~[sshd-core-1.2.0.jar:1.2.0]
at org.apache.sshd.common.session.helpers.AbstractSession.handleMessage(AbstractSession.java:463) ~[sshd-core-1.2.0.jar:1.2.0]
at org.apache.sshd.common.session.helpers.AbstractSession.decode(AbstractSession.java:1325) ~[sshd-core-1.2.0.jar:1.2.0]
at org.apache.sshd.common.session.helpers.AbstractSession.messageReceived(AbstractSession.java:424) ~[sshd-core-1.2.0.jar:1.2.0]
at org.apache.sshd.common.session.helpers.AbstractSessionIoHandler.messageReceived(AbstractSessionIoHandler.java:67) ~[sshd-core-1.2.0.jar:1.2.0]
at org.apache.sshd.common.io.nio2.Nio2Session.handleReadCycleCompletion(Nio2Session.java:285) ~[sshd-core-1.2.0.jar:1.2.0]
at org.apache.sshd.common.io.nio2.Nio2Session$2.onCompleted(Nio2Session.java:265) ~[sshd-core-1.2.0.jar:1.2.0]
at org.apache.sshd.common.io.nio2.Nio2Session$2.onCompleted(Nio2Session.java:262) ~[sshd-core-1.2.0.jar:1.2.0]
at org.apache.sshd.common.io.nio2.Nio2CompletionHandler$1.run(Nio2CompletionHandler.java:37) ~[sshd-core-1.2.0.jar:1.2.0]
at java.security.AccessController.doPrivileged(Native Method) ~[na:1.8.0_72]
at org.apache.sshd.common.io.nio2.Nio2CompletionHandler.completed(Nio2CompletionHandler.java:34) ~[sshd-core-1.2.0.jar:1.2.0]
at sun.nio.ch.Invoker.invokeUnchecked(Invoker.java:126) ~[na:1.8.0_72]
at sun.nio.ch.Invoker$2.run(Invoker.java:218) ~[na:1.8.0_72]
at sun.nio.ch.AsynchronousChannelGroupImpl$1.run(AsynchronousChannelGroupImpl.java:112) ~[na:1.8.0_72]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) ~[na:1.8.0_72]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) ~[na:1.8.0_72]
... 1 common frames omitted
```
|
process
|
can not connect to ubuntu caused by org apache sshd common sshexception no more authentication methods available at org apache sshd client session clientuserauthservice trynext clientuserauthservice java at org apache sshd client session clientuserauthservice processuserauth clientuserauthservice java at org apache sshd client session clientuserauthservice process clientuserauthservice java at org apache sshd common session helpers abstractsession dohandlemessage abstractsession java at org apache sshd common session helpers abstractsession handlemessage abstractsession java at org apache sshd common session helpers abstractsession decode abstractsession java at org apache sshd common session helpers abstractsession messagereceived abstractsession java at org apache sshd common session helpers abstractsessioniohandler messagereceived abstractsessioniohandler java at org apache sshd common io handlereadcyclecompletion java at org apache sshd common io oncompleted java at org apache sshd common io oncompleted java at org apache sshd common io run java at java security accesscontroller doprivileged native method at org apache sshd common io completed java at sun nio ch invoker invokeunchecked invoker java at sun nio ch invoker run invoker java at sun nio ch asynchronouschannelgroupimpl run asynchronouschannelgroupimpl java at java util concurrent threadpoolexecutor runworker threadpoolexecutor java at java util concurrent threadpoolexecutor worker run threadpoolexecutor java common frames omitted
| 1
|
70,081
| 22,841,716,296
|
IssuesEvent
|
2022-07-12 22:53:38
|
coder/coder
|
https://api.github.com/repos/coder/coder
|
opened
|
"By" in workspace page not helpful
|
ux-defect
|
It's usually autostop, autostart, or myself. Why do I need to see that everywhere
<img width="1229" alt="Screen Shot 2022-07-12 at 5 53 26 PM" src="https://user-images.githubusercontent.com/7416144/178611852-9dfcebcc-50e1-42c1-8662-1496b0c66fe9.png">
?
|
1.0
|
"By" in workspace page not helpful - It's usually autostop, autostart, or myself. Why do I need to see that everywhere
<img width="1229" alt="Screen Shot 2022-07-12 at 5 53 26 PM" src="https://user-images.githubusercontent.com/7416144/178611852-9dfcebcc-50e1-42c1-8662-1496b0c66fe9.png">
?
|
non_process
|
by in workspace page not helpful it s usually autostop autostart or myself why do i need to see that everywhere img width alt screen shot at pm src
| 0
|
1,114
| 3,590,347,414
|
IssuesEvent
|
2016-02-01 04:56:48
|
dotnet/corefx
|
https://api.github.com/repos/dotnet/corefx
|
closed
|
TestProcessStartTime test is failing on OS X
|
bug Mac OSX System.Diagnostics.Process test bug
|
The TestProcessStartTime test in System.Diagnostic.Process.ProcessTests class is failing with the following error:
System.Diagnostics.ProcessTests.ProcessTests.TestProcessStartTime [FAIL]
Assert+WrapperXunitException : File path: Y:\Repositories\personal\dotnet\corefx\src\System.Diagnostics.Process\tests\ProcessTests.cs. Line: 387
---- Assert.InRange() Failure
Range: (635766571991261530 - 635766571991772530)
Actual: 635766571650984930
Stack Trace:
at Assert.WrapException(Exception e, String callerFilePath, Int32 callerLineNumber)
at Assert.InRange[T](T actual, T low, T high, String path, Int32 line)
at System.Diagnostics.ProcessTests.ProcessTests.TestProcessStartTime()
----- Inner Stack Trace -----
at Assert.InRange[T](T actual, T low, T high, String path, Int32 line)
|
1.0
|
TestProcessStartTime test is failing on OS X - The TestProcessStartTime test in System.Diagnostic.Process.ProcessTests class is failing with the following error:
System.Diagnostics.ProcessTests.ProcessTests.TestProcessStartTime [FAIL]
Assert+WrapperXunitException : File path: Y:\Repositories\personal\dotnet\corefx\src\System.Diagnostics.Process\tests\ProcessTests.cs. Line: 387
---- Assert.InRange() Failure
Range: (635766571991261530 - 635766571991772530)
Actual: 635766571650984930
Stack Trace:
at Assert.WrapException(Exception e, String callerFilePath, Int32 callerLineNumber)
at Assert.InRange[T](T actual, T low, T high, String path, Int32 line)
at System.Diagnostics.ProcessTests.ProcessTests.TestProcessStartTime()
----- Inner Stack Trace -----
at Assert.InRange[T](T actual, T low, T high, String path, Int32 line)
|
process
|
testprocessstarttime test is failing on os x the testprocessstarttime test in system diagnostic process processtests class is failing with the following error system diagnostics processtests processtests testprocessstarttime assert wrapperxunitexception file path y repositories personal dotnet corefx src system diagnostics process tests processtests cs line assert inrange failure range actual stack trace at assert wrapexception exception e string callerfilepath callerlinenumber at assert inrange t actual t low t high string path line at system diagnostics processtests processtests testprocessstarttime inner stack trace at assert inrange t actual t low t high string path line
| 1
|
13,173
| 15,596,774,705
|
IssuesEvent
|
2021-03-18 16:12:33
|
prisma/prisma
|
https://api.github.com/repos/prisma/prisma
|
closed
|
Connection error with pgBouncer and programmatically setting connection URL
|
bug/1-repro-available kind/bug process/candidate team/client topic: pgbouncer topic: postgresql topic: prisma-client
|
## Bug description
_Note: some connection URL details like password and host have been redacted for security reasons_
When trying to use `pgbouncer=true` in a connection string for Postgres ([programmatically set](https://www.prisma.io/docs/reference/api-reference/prisma-client-reference/#datasources)) the following error results
```
19:34:50.116 | Invalid `prisma.series.findMany()` invocation:
19:34:50.116 | Database `fi-postgres-pool-preview-1.public` does not exist on the database server at `xxxxxxxxxx-xxxxxxxx-0.b.db.ondigitalocean.com:25060`.
```
It appears that `.public` is added to the end of the database/pool name, perhaps this is the crux of the issue?
## How to reproduce
The application this issue is appearing with is a Next.js app hosted on Vercel. The Postgres database is a DigitalOcean managed instance.
Here is the connection URL specified in an environmental variable and used by `prisma/schema.prisma`
```
POSTGRES_URL="postgresql://preview:xxxxxxxxx@xxxxxxxxxx-xxxxxxxx-0.b.db.ondigitalocean.com:25060/preview?sslmode=require"
```
`prisma/schema.prisma` (excluding models)
```
generator client {
provider = "prisma-client-js"
previewFeatures = ["createMany"]
}
datasource db {
provider = "postgresql"
url = env("POSTGRES_URL")
}
```
The consensus I gleaned from other issues and Slack threads was that `prisma deploy` and `prisma generate` should not use a connection string with `pgbouncer=true`, so that flag is omitted in the env variable and the regular database is used.
When instantiating the client in Next.js, here is the code used
```
prismaClient = new PrismaClient({
datasources: {
db: {
url: `postgresql://preview:xxxxxxxxx@xxxxxxxxxx-xxxxxxxx-0.b.db.ondigitalocean.com:25060/fi-postgres-pool-preview-1?sslmode=require&pgbouncer=true`,
},
},
});
}
```
I have attempted to forego programmatically changing the database URL and instead applying appropriate environmental variables through the postinstall/prebuild/build/postbuild chain, but the same issue arose. It seems that the format of the connection string and/or potentially DigitalOcean-specific issues are at play here – _I suspect_.
## Expected behavior
Successful connection to the Digital Ocean Postgres connection pool via pgbouncer.
## Environment & setup
- OS: Linux (Vercel)
- Database: PostgreSQL
- Node.js version: 12.x
- Prisma version: 2.19.0
|
1.0
|
Connection error with pgBouncer and programmatically setting connection URL - ## Bug description
_Note: some connection URL details like password and host have been redacted for security reasons_
When trying to use `pgbouncer=true` in a connection string for Postgres ([programmatically set](https://www.prisma.io/docs/reference/api-reference/prisma-client-reference/#datasources)) the following error results
```
19:34:50.116 | Invalid `prisma.series.findMany()` invocation:
19:34:50.116 | Database `fi-postgres-pool-preview-1.public` does not exist on the database server at `xxxxxxxxxx-xxxxxxxx-0.b.db.ondigitalocean.com:25060`.
```
It appears that `.public` is added to the end of the database/pool name, perhaps this is the crux of the issue?
## How to reproduce
The application this issue is appearing with is a Next.js app hosted on Vercel. The Postgres database is a DigitalOcean managed instance.
Here is the connection URL specified in an environmental variable and used by `prisma/schema.prisma`
```
POSTGRES_URL="postgresql://preview:xxxxxxxxx@xxxxxxxxxx-xxxxxxxx-0.b.db.ondigitalocean.com:25060/preview?sslmode=require"
```
`prisma/schema.prisma` (excluding models)
```
generator client {
provider = "prisma-client-js"
previewFeatures = ["createMany"]
}
datasource db {
provider = "postgresql"
url = env("POSTGRES_URL")
}
```
The consensus I gleaned from other issues and Slack threads was that `prisma deploy` and `prisma generate` should not use a connection string with `pgbouncer=true`, so that flag is omitted in the env variable and the regular database is used.
When instantiating the client in Next.js, here is the code used
```
prismaClient = new PrismaClient({
datasources: {
db: {
url: `postgresql://preview:xxxxxxxxx@xxxxxxxxxx-xxxxxxxx-0.b.db.ondigitalocean.com:25060/fi-postgres-pool-preview-1?sslmode=require&pgbouncer=true`,
},
},
});
}
```
I have attempted to forego programmatically changing the database URL and instead applying appropriate environmental variables through the postinstall/prebuild/build/postbuild chain, but the same issue arose. It seems that the format of the connection string and/or potentially DigitalOcean-specific issues are at play here – _I suspect_.
## Expected behavior
Successful connection to the Digital Ocean Postgres connection pool via pgbouncer.
## Environment & setup
- OS: Linux (Vercel)
- Database: PostgreSQL
- Node.js version: 12.x
- Prisma version: 2.19.0
|
process
|
connection error with pgbouncer and programmatically setting connection url bug description note some connection url details like password and host have been redacted for security reasons when trying to use pgbouncer true in a connection string for postgres the following error results invalid prisma series findmany invocation database fi postgres pool preview public does not exist on the database server at xxxxxxxxxx xxxxxxxx b db ondigitalocean com it appears that public is added to the end of the database pool name perhaps this is the crux of the issue how to reproduce the application this issue is appearing with is a next js app hosted on vercel the postgres database is a digitalocean managed instance here is the connection url specified in an environmental variable and used by prisma schema prisma postgres url postgresql preview xxxxxxxxx xxxxxxxxxx xxxxxxxx b db ondigitalocean com preview sslmode require prisma schema prisma excluding models generator client provider prisma client js previewfeatures datasource db provider postgresql url env postgres url the consensus i gleaned from other issues and slack threads was that prisma deploy and prisma generate should not use a connection string with pgbouncer true so that flag is omitted in the env variable and the regular database is used when instantiating the client in next js here is the code used prismaclient new prismaclient datasources db url postgresql preview xxxxxxxxx xxxxxxxxxx xxxxxxxx b db ondigitalocean com fi postgres pool preview sslmode require pgbouncer true i have attempted to forego programmatically changing the database url and instead applying appropriate environmental variables through the postinstall prebuild build postbuild chain but the same issue arose it seems that the format of the connection string and or potentially digitalocean specific issues are at play here – i suspect expected behavior successful connection to the digital ocean postgres connection pool via pgbouncer environment setup os linux vercel database postgresql node js version x prisma version
| 1
|
18,297
| 24,406,311,891
|
IssuesEvent
|
2022-10-05 08:19:23
|
quark-engine/quark-engine
|
https://api.github.com/repos/quark-engine/quark-engine
|
closed
|
Failed to capture behaviors that are composed of inherited APIs
|
issue-processing-state-01
|
**Describe the bug**
The summary report of [this sample](https://bazaar.abuse.ch/download/3f00206aaed4612ce4655152b972aeb2787ca4133aeacc8c9acd8c4d38ea3f79/) shows that the following rule that uses inherited APIs reaches 60%. But after looking into the decompiled Smali, Quark seems to miss some calling sequences in this sample.

```json
{
"crime": "Get MediaProjectionManager and create intent for screen capture",
"permission": [],
"api": [
{
"class": "Landroid/content/Context;",
"method": "getSystemService",
"descriptor": "(Ljava/lang/String;)Ljava/lang/Object;"
},
{
"class": "Landroid/media/projection/MediaProjectionManager;",
"method": "createScreenCaptureIntent",
"descriptor": "()Landroid/content/Intent;"
}
],
"score": 1,
"label": []
}
```
If we look into the smali code of the method `Lanubis/bot/myapplication/API/Screenshot/ActivityScreenshot;->onCreate(Landroid/os/Bundle;)V`, we can find that the APIs are called in the exact order described by the rule. But, Quark seems not to capture this calling sequence.

**Expected behavior**
The above rule achieves 80% or 100%.
**To Reproduce**
```bash
quark -a RemotePayload.apk -s <PATH/TO/THE/ABOVE/RULE>
```
|
1.0
|
Failed to capture behaviors that are composed of inherited APIs - **Describe the bug**
The summary report of [this sample](https://bazaar.abuse.ch/download/3f00206aaed4612ce4655152b972aeb2787ca4133aeacc8c9acd8c4d38ea3f79/) shows that the following rule that uses inherited APIs reaches 60%. But after looking into the decompiled Smali, Quark seems to miss some calling sequences in this sample.

```json
{
"crime": "Get MediaProjectionManager and create intent for screen capture",
"permission": [],
"api": [
{
"class": "Landroid/content/Context;",
"method": "getSystemService",
"descriptor": "(Ljava/lang/String;)Ljava/lang/Object;"
},
{
"class": "Landroid/media/projection/MediaProjectionManager;",
"method": "createScreenCaptureIntent",
"descriptor": "()Landroid/content/Intent;"
}
],
"score": 1,
"label": []
}
```
If we look into the smali code of the method `Lanubis/bot/myapplication/API/Screenshot/ActivityScreenshot;->onCreate(Landroid/os/Bundle;)V`, we can find that the APIs are called in the exact order described by the rule. But, Quark seems not to capture this calling sequence.

**Expected behavior**
The above rule achieves 80% or 100%.
**To Reproduce**
```bash
quark -a RemotePayload.apk -s <PATH/TO/THE/ABOVE/RULE>
```
|
process
|
failed to capture behaviors that are composed of inherited apis describe the bug the summary report of shows that the following rule that uses inherited apis reaches but after looking into the decompiled smali quark seems to miss some calling sequences in this sample json crime get mediaprojectionmanager and create intent for screen capture permission api class landroid content context method getsystemservice descriptor ljava lang string ljava lang object class landroid media projection mediaprojectionmanager method createscreencaptureintent descriptor landroid content intent score label if we look into the smali code of the method lanubis bot myapplication api screenshot activityscreenshot oncreate landroid os bundle v we can find that the apis are called in the exact order described by the rule but quark seems not to capture this calling sequence expected behavior the above rule achieves or to reproduce bash quark a remotepayload apk s
| 1
|
9,601
| 12,544,246,651
|
IssuesEvent
|
2020-06-05 16:53:49
|
bridgetownrb/bridgetown
|
https://api.github.com/repos/bridgetownrb/bridgetown
|
closed
|
Deprecate the include tag and standardize around the render tag
|
enhancement process
|
As part of the work on Liquid Components in PR #26, we will be deprecating usage of the `include` tag and recommending a migration to `render`, `rendercontent`, and in certain cases custom tags.
The goal is to remove `include` entirely upon the release of Bridgetown 1.0 so we have a stable API for partials/components moving forward. FYI, this will forever break template compatibility with Jekyll…although there's no reason someone can't produce a `bridgetown-includes` plugin which restores the Jekyll-like functionality.
|
1.0
|
Deprecate the include tag and standardize around the render tag - As part of the work on Liquid Components in PR #26, we will be deprecating usage of the `include` tag and recommending a migration to `render`, `rendercontent`, and in certain cases custom tags.
The goal is to remove `include` entirely upon the release of Bridgetown 1.0 so we have a stable API for partials/components moving forward. FYI, this will forever break template compatibility with Jekyll…although there's no reason someone can't produce a `bridgetown-includes` plugin which restores the Jekyll-like functionality.
|
process
|
deprecate the include tag and standardize around the render tag as part of the work on liquid components in pr we will be deprecating usage of the include tag and recommending a migration to render rendercontent and in certain cases custom tags the goal is to remove include entirely upon the release of bridgetown so we have a stable api for partials components moving forward fyi this will forever break template compatibility with jekyll…although there s no reason someone can t produce a bridgetown includes plugin which restores the jekyll like functionality
| 1
|
6,885
| 10,025,686,807
|
IssuesEvent
|
2019-07-17 03:21:30
|
MicrosoftDocs/azure-docs
|
https://api.github.com/repos/MicrosoftDocs/azure-docs
|
closed
|
Internal Server Error Source Control - Azure Automation in Portal
|
automation/svc process-automation/subsvc product-issue triaged
|
## Internal Server Error while integrating Source Control with Azure Automation Account.
Internal Server Error While connecting Source Control with Azure Automation Account.
Attaching Reference video for reference. The User has Run-As- Account rights.
[Azure-Automation-SourceControl Integration Error.zip](https://github.com/MicrosoftDocs/azure-docs/files/3386868/Azure-Automation-SourceControl.Integration.Error.zip)
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: 83c90e64-b615-711f-a53d-fc76606e2ecd
* Version Independent ID: 2d164036-6886-4440-50f7-369f99f41cea
* Content: [Source Control integration in Azure Automation](https://docs.microsoft.com/en-us/azure/automation/source-control-integration)
* Content Source: [articles/automation/source-control-integration.md](https://github.com/Microsoft/azure-docs/blob/master/articles/automation/source-control-integration.md)
* Service: **automation**
* Sub-service: **process-automation**
* GitHub Login: @bobbytreed
* Microsoft Alias: **robreed**
|
1.0
|
Internal Server Error Source Control - Azure Automation in Portal - ## Internal Server Error while integrating Source Control with Azure Automation Account.
Internal Server Error While connecting Source Control with Azure Automation Account.
Attaching Reference video for reference. The User has Run-As- Account rights.
[Azure-Automation-SourceControl Integration Error.zip](https://github.com/MicrosoftDocs/azure-docs/files/3386868/Azure-Automation-SourceControl.Integration.Error.zip)
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: 83c90e64-b615-711f-a53d-fc76606e2ecd
* Version Independent ID: 2d164036-6886-4440-50f7-369f99f41cea
* Content: [Source Control integration in Azure Automation](https://docs.microsoft.com/en-us/azure/automation/source-control-integration)
* Content Source: [articles/automation/source-control-integration.md](https://github.com/Microsoft/azure-docs/blob/master/articles/automation/source-control-integration.md)
* Service: **automation**
* Sub-service: **process-automation**
* GitHub Login: @bobbytreed
* Microsoft Alias: **robreed**
|
process
|
internal server error source control azure automation in portal internal server error while integrating source control with azure automation account internal server error while connecting source control with azure automation account attaching reference video for reference the user has run as account rights document details ⚠ do not edit this section it is required for docs microsoft com ➟ github issue linking id version independent id content content source service automation sub service process automation github login bobbytreed microsoft alias robreed
| 1
|
5,731
| 8,576,025,208
|
IssuesEvent
|
2018-11-12 19:04:20
|
easy-software-ufal/annotations_repos
|
https://api.github.com/repos/easy-software-ufal/annotations_repos
|
opened
|
Particular/NServiceBus Possible message loss with TimeToBeReceived on transactional MSMQ endpoints.
|
ADA C# wrong processing
|
Issue: `https://github.com/Particular/NServiceBus/issues/3093`
PR: `null`
|
1.0
|
Particular/NServiceBus Possible message loss with TimeToBeReceived on transactional MSMQ endpoints. - Issue: `https://github.com/Particular/NServiceBus/issues/3093`
PR: `null`
|
process
|
particular nservicebus possible message loss with timetobereceived on transactional msmq endpoints issue pr null
| 1
|
169,042
| 26,736,895,853
|
IssuesEvent
|
2023-01-30 10:07:09
|
flutter/flutter
|
https://api.github.com/repos/flutter/flutter
|
closed
|
☂️ Missing Material Icons
|
framework f: material design proposal
|
This issue tracks missing Material icons.
If you were linked here from the Google Fonts website, please check a few things:
* Is the icon actually available on the stable channel? https://api.flutter.dev/flutter/material/Icons-class.html
* Is the icon actually available on the master channel? https://master-api.flutter.dev/flutter/material/Icons-class.html
* it should be available in the next stable release
* The icon might be added in an open PR or recently merged PR
* Are you in fact looking at Material Symbols? For more, follow https://github.com/flutter/flutter/issues/102560

If no to all of the above, please leave a comment with the icon name
|
1.0
|
☂️ Missing Material Icons - This issue tracks missing Material icons.
If you were linked here from the Google Fonts website, please check a few things:
* Is the icon actually available on the stable channel? https://api.flutter.dev/flutter/material/Icons-class.html
* Is the icon actually available on the master channel? https://master-api.flutter.dev/flutter/material/Icons-class.html
* it should be available in the next stable release
* The icon might be added in an open PR or recently merged PR
* Are you in fact looking at Material Symbols? For more, follow https://github.com/flutter/flutter/issues/102560

If no to all of the above, please leave a comment with the icon name
|
non_process
|
☂️ missing material icons this issue tracks missing material icons if you were linked here from the google fonts website please check a few things is the icon actually available on the stable channel is the icon actually available on the master channel it should be available in the next stable release the icon might be added in an open pr or recently merged pr are you in fact looking at material symbols for more follow if no to all of the above please leave a comment with the icon name
| 0
|
1,815
| 4,561,863,228
|
IssuesEvent
|
2016-09-14 13:15:11
|
openvstorage/alba
|
https://api.github.com/repos/openvstorage/alba
|
closed
|
Noticed several Arakoon_etcd.ProcessFailure(_) when issuing list-presets / add-preset cli commands
|
priority_critical process_wontfix type_bug
|
Cause: due to Etcd timeout, the alba-cli command returns Arakoon_etcd.ProcessFailure()
Noticed in backend suite on:
- single node env: http://testrail.openvstorage.com/index.php?/plans/view/32791
- grid env: http://testrail.openvstorage.com/index.php?/runs/view/32786
|
1.0
|
Noticed several Arakoon_etcd.ProcessFailure(_) when issuing list-presets / add-preset cli commands - Cause: due to Etcd timeout, the alba-cli command returns Arakoon_etcd.ProcessFailure()
Noticed in backend suite on:
- single node env: http://testrail.openvstorage.com/index.php?/plans/view/32791
- grid env: http://testrail.openvstorage.com/index.php?/runs/view/32786
|
process
|
noticed several arakoon etcd processfailure when issuing list presets add preset cli commands cause due to etcd timeout the alba cli command returns arakoon etcd processfailure noticed in backend suite on single node env grid env
| 1
|
392,106
| 26,924,514,344
|
IssuesEvent
|
2023-02-07 12:54:00
|
frank0434/PhD
|
https://api.github.com/repos/frank0434/PhD
|
closed
|
processing porometer data
|
documentation
|
the porometer data will be incorrect if process in a stacked data frame.
It is better to process each file first and merge later.
but can also be done by setting the column names and skip line with none. this approach is used.
|
1.0
|
processing porometer data - the porometer data will be incorrect if process in a stacked data frame.
It is better to process each file first and merge later.
but can also be done by setting the column names and skip line with none. this approach is used.
|
non_process
|
processing porometer data the porometer data will be incorrect if process in a stacked data frame it is better to process each file first and merge later but can also be done by setting the column names and skip line with none this approach is used
| 0
|
15,612
| 19,753,050,942
|
IssuesEvent
|
2022-01-15 09:01:20
|
googleapis/google-cloud-php
|
https://api.github.com/repos/googleapis/google-cloud-php
|
opened
|
Your .repo-metadata.json files have a problem 🤒
|
type: process repo-metadata: lint
|
You have a problem with your .repo-metadata.json files:
Result of scan 📈:
* client_documentation must match pattern "^https://.*" in AccessApproval/.repo-metadata.json
* release_level must be equal to one of the allowed values in AccessApproval/.repo-metadata.json
* api_shortname field missing from AccessApproval/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in AccessContextManager/.repo-metadata.json
* release_level must be equal to one of the allowed values in AccessContextManager/.repo-metadata.json
* api_shortname field missing from AccessContextManager/.repo-metadata.json
* release_level must be equal to one of the allowed values in AnalyticsAdmin/.repo-metadata.json
* api_shortname field missing from AnalyticsAdmin/.repo-metadata.json
* release_level must be equal to one of the allowed values in AnalyticsData/.repo-metadata.json
* api_shortname field missing from AnalyticsData/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in ApiGateway/.repo-metadata.json
* release_level must be equal to one of the allowed values in ApiGateway/.repo-metadata.json
* api_shortname field missing from ApiGateway/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in ApigeeConnect/.repo-metadata.json
* release_level must be equal to one of the allowed values in ApigeeConnect/.repo-metadata.json
* api_shortname field missing from ApigeeConnect/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in AppEngineAdmin/.repo-metadata.json
* release_level must be equal to one of the allowed values in AppEngineAdmin/.repo-metadata.json
* api_shortname field missing from AppEngineAdmin/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in ArtifactRegistry/.repo-metadata.json
* release_level must be equal to one of the allowed values in ArtifactRegistry/.repo-metadata.json
* api_shortname field missing from ArtifactRegistry/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in Asset/.repo-metadata.json
* release_level must be equal to one of the allowed values in Asset/.repo-metadata.json
* api_shortname field missing from Asset/.repo-metadata.json
* release_level must be equal to one of the allowed values in AssuredWorkloads/.repo-metadata.json
* api_shortname field missing from AssuredWorkloads/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in AutoMl/.repo-metadata.json
* release_level must be equal to one of the allowed values in AutoMl/.repo-metadata.json
* api_shortname field missing from AutoMl/.repo-metadata.json
* release_level must be equal to one of the allowed values in BigQuery/.repo-metadata.json
* api_shortname field missing from BigQuery/.repo-metadata.json
* release_level must be equal to one of the allowed values in BigQueryConnection/.repo-metadata.json
* api_shortname field missing from BigQueryConnection/.repo-metadata.json
* release_level must be equal to one of the allowed values in BigQueryDataTransfer/.repo-metadata.json
* api_shortname field missing from BigQueryDataTransfer/.repo-metadata.json
* release_level must be equal to one of the allowed values in BigQueryReservation/.repo-metadata.json
* api_shortname field missing from BigQueryReservation/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in BigQueryStorage/.repo-metadata.json
* release_level must be equal to one of the allowed values in BigQueryStorage/.repo-metadata.json
* api_shortname field missing from BigQueryStorage/.repo-metadata.json
* release_level must be equal to one of the allowed values in Bigtable/.repo-metadata.json
* api_shortname field missing from Bigtable/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in Billing/.repo-metadata.json
* release_level must be equal to one of the allowed values in Billing/.repo-metadata.json
* api_shortname field missing from Billing/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in BillingBudgets/.repo-metadata.json
* release_level must be equal to one of the allowed values in BillingBudgets/.repo-metadata.json
* api_shortname field missing from BillingBudgets/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in BinaryAuthorization/.repo-metadata.json
* release_level must be equal to one of the allowed values in BinaryAuthorization/.repo-metadata.json
* api_shortname field missing from BinaryAuthorization/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in Build/.repo-metadata.json
* release_level must be equal to one of the allowed values in Build/.repo-metadata.json
* api_shortname field missing from Build/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in Channel/.repo-metadata.json
* release_level must be equal to one of the allowed values in Channel/.repo-metadata.json
* api_shortname field missing from Channel/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in CommonProtos/.repo-metadata.json
* release_level must be equal to one of the allowed values in CommonProtos/.repo-metadata.json
* release_level must be equal to one of the allowed values in Compute/.repo-metadata.json
* api_shortname field missing from Compute/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in ContactCenterInsights/.repo-metadata.json
* release_level must be equal to one of the allowed values in ContactCenterInsights/.repo-metadata.json
* api_shortname field missing from ContactCenterInsights/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in Container/.repo-metadata.json
* release_level must be equal to one of the allowed values in Container/.repo-metadata.json
* api_shortname field missing from Container/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in ContainerAnalysis/.repo-metadata.json
* release_level must be equal to one of the allowed values in ContainerAnalysis/.repo-metadata.json
* api_shortname field missing from ContainerAnalysis/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in Core/.repo-metadata.json
* release_level must be equal to one of the allowed values in Core/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in DataCatalog/.repo-metadata.json
* release_level must be equal to one of the allowed values in DataCatalog/.repo-metadata.json
* api_shortname field missing from DataCatalog/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in DataFusion/.repo-metadata.json
* release_level must be equal to one of the allowed values in DataFusion/.repo-metadata.json
* api_shortname field missing from DataFusion/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in DataLabeling/.repo-metadata.json
* release_level must be equal to one of the allowed values in DataLabeling/.repo-metadata.json
* api_shortname field missing from DataLabeling/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in Dataflow/.repo-metadata.json
* release_level must be equal to one of the allowed values in Dataflow/.repo-metadata.json
* api_shortname field missing from Dataflow/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in Dataproc/.repo-metadata.json
* release_level must be equal to one of the allowed values in Dataproc/.repo-metadata.json
* api_shortname field missing from Dataproc/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in DataprocMetastore/.repo-metadata.json
* release_level must be equal to one of the allowed values in DataprocMetastore/.repo-metadata.json
* api_shortname field missing from DataprocMetastore/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in Datastore/.repo-metadata.json
* release_level must be equal to one of the allowed values in Datastore/.repo-metadata.json
* api_shortname field missing from Datastore/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in DatastoreAdmin/.repo-metadata.json
* release_level must be equal to one of the allowed values in DatastoreAdmin/.repo-metadata.json
* api_shortname field missing from DatastoreAdmin/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in Debugger/.repo-metadata.json
* release_level must be equal to one of the allowed values in Debugger/.repo-metadata.json
* api_shortname field missing from Debugger/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in Deploy/.repo-metadata.json
* release_level must be equal to one of the allowed values in Deploy/.repo-metadata.json
* api_shortname field missing from Deploy/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in Dialogflow/.repo-metadata.json
* release_level must be equal to one of the allowed values in Dialogflow/.repo-metadata.json
* api_shortname field missing from Dialogflow/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in Dlp/.repo-metadata.json
* release_level must be equal to one of the allowed values in Dlp/.repo-metadata.json
* api_shortname field missing from Dlp/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in Dms/.repo-metadata.json
* release_level must be equal to one of the allowed values in Dms/.repo-metadata.json
* api_shortname field missing from Dms/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in DocumentAi/.repo-metadata.json
* release_level must be equal to one of the allowed values in DocumentAi/.repo-metadata.json
* api_shortname field missing from DocumentAi/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in Domains/.repo-metadata.json
* release_level must be equal to one of the allowed values in Domains/.repo-metadata.json
* api_shortname field missing from Domains/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in ErrorReporting/.repo-metadata.json
* release_level must be equal to one of the allowed values in ErrorReporting/.repo-metadata.json
* api_shortname field missing from ErrorReporting/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in EssentialContacts/.repo-metadata.json
* release_level must be equal to one of the allowed values in EssentialContacts/.repo-metadata.json
* api_shortname field missing from EssentialContacts/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in Eventarc/.repo-metadata.json
* release_level must be equal to one of the allowed values in Eventarc/.repo-metadata.json
* api_shortname field missing from Eventarc/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in Filestore/.repo-metadata.json
* release_level must be equal to one of the allowed values in Filestore/.repo-metadata.json
* api_shortname field missing from Filestore/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in Firestore/.repo-metadata.json
* release_level must be equal to one of the allowed values in Firestore/.repo-metadata.json
* api_shortname field missing from Firestore/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in Functions/.repo-metadata.json
* release_level must be equal to one of the allowed values in Functions/.repo-metadata.json
* api_shortname field missing from Functions/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in Gaming/.repo-metadata.json
* release_level must be equal to one of the allowed values in Gaming/.repo-metadata.json
* api_shortname field missing from Gaming/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in GkeConnectGateway/.repo-metadata.json
* release_level must be equal to one of the allowed values in GkeConnectGateway/.repo-metadata.json
* api_shortname field missing from GkeConnectGateway/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in GkeHub/.repo-metadata.json
* release_level must be equal to one of the allowed values in GkeHub/.repo-metadata.json
* api_shortname field missing from GkeHub/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in Grafeas/.repo-metadata.json
* release_level must be equal to one of the allowed values in Grafeas/.repo-metadata.json
* api_shortname field missing from Grafeas/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in IamCredentials/.repo-metadata.json
* release_level must be equal to one of the allowed values in IamCredentials/.repo-metadata.json
* api_shortname field missing from IamCredentials/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in Iap/.repo-metadata.json
* release_level must be equal to one of the allowed values in Iap/.repo-metadata.json
* api_shortname field missing from Iap/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in Iot/.repo-metadata.json
* release_level must be equal to one of the allowed values in Iot/.repo-metadata.json
* api_shortname field missing from Iot/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in Kms/.repo-metadata.json
* release_level must be equal to one of the allowed values in Kms/.repo-metadata.json
* api_shortname field missing from Kms/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in Language/.repo-metadata.json
* release_level must be equal to one of the allowed values in Language/.repo-metadata.json
* api_shortname field missing from Language/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in LifeSciences/.repo-metadata.json
* release_level must be equal to one of the allowed values in LifeSciences/.repo-metadata.json
* api_shortname field missing from LifeSciences/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in Logging/.repo-metadata.json
* release_level must be equal to one of the allowed values in Logging/.repo-metadata.json
* api_shortname field missing from Logging/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in ManagedIdentities/.repo-metadata.json
* release_level must be equal to one of the allowed values in ManagedIdentities/.repo-metadata.json
* api_shortname field missing from ManagedIdentities/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in MediaTranslation/.repo-metadata.json
* release_level must be equal to one of the allowed values in MediaTranslation/.repo-metadata.json
* api_shortname field missing from MediaTranslation/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in Memcache/.repo-metadata.json
* release_level must be equal to one of the allowed values in Memcache/.repo-metadata.json
* api_shortname field missing from Memcache/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in Monitoring/.repo-metadata.json
* release_level must be equal to one of the allowed values in Monitoring/.repo-metadata.json
* api_shortname field missing from Monitoring/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in NetworkConnectivity/.repo-metadata.json
* release_level must be equal to one of the allowed values in NetworkConnectivity/.repo-metadata.json
* api_shortname field missing from NetworkConnectivity/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in NetworkManagement/.repo-metadata.json
* release_level must be equal to one of the allowed values in NetworkManagement/.repo-metadata.json
* api_shortname field missing from NetworkManagement/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in NetworkSecurity/.repo-metadata.json
* release_level must be equal to one of the allowed values in NetworkSecurity/.repo-metadata.json
* api_shortname field missing from NetworkSecurity/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in Notebooks/.repo-metadata.json
* release_level must be equal to one of the allowed values in Notebooks/.repo-metadata.json
* api_shortname field missing from Notebooks/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in OrchestrationAirflow/.repo-metadata.json
* release_level must be equal to one of the allowed values in OrchestrationAirflow/.repo-metadata.json
* api_shortname field missing from OrchestrationAirflow/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in OrgPolicy/.repo-metadata.json
* release_level must be equal to one of the allowed values in OrgPolicy/.repo-metadata.json
* api_shortname field missing from OrgPolicy/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in OsConfig/.repo-metadata.json
* release_level must be equal to one of the allowed values in OsConfig/.repo-metadata.json
* api_shortname field missing from OsConfig/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in OsLogin/.repo-metadata.json
* release_level must be equal to one of the allowed values in OsLogin/.repo-metadata.json
* api_shortname field missing from OsLogin/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in PolicyTroubleshooter/.repo-metadata.json
* release_level must be equal to one of the allowed values in PolicyTroubleshooter/.repo-metadata.json
* api_shortname field missing from PolicyTroubleshooter/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in PrivateCatalog/.repo-metadata.json
* release_level must be equal to one of the allowed values in PrivateCatalog/.repo-metadata.json
* api_shortname field missing from PrivateCatalog/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in Profiler/.repo-metadata.json
* release_level must be equal to one of the allowed values in Profiler/.repo-metadata.json
* api_shortname field missing from Profiler/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in PubSub/.repo-metadata.json
* release_level must be equal to one of the allowed values in PubSub/.repo-metadata.json
* api_shortname field missing from PubSub/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in RecaptchaEnterprise/.repo-metadata.json
* release_level must be equal to one of the allowed values in RecaptchaEnterprise/.repo-metadata.json
* api_shortname field missing from RecaptchaEnterprise/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in RecommendationEngine/.repo-metadata.json
* release_level must be equal to one of the allowed values in RecommendationEngine/.repo-metadata.json
* api_shortname field missing from RecommendationEngine/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in Recommender/.repo-metadata.json
* release_level must be equal to one of the allowed values in Recommender/.repo-metadata.json
* api_shortname field missing from Recommender/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in Redis/.repo-metadata.json
* release_level must be equal to one of the allowed values in Redis/.repo-metadata.json
* api_shortname field missing from Redis/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in ResourceManager/.repo-metadata.json
* release_level must be equal to one of the allowed values in ResourceManager/.repo-metadata.json
* api_shortname field missing from ResourceManager/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in ResourceSettings/.repo-metadata.json
* release_level must be equal to one of the allowed values in ResourceSettings/.repo-metadata.json
* api_shortname field missing from ResourceSettings/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in Retail/.repo-metadata.json
* release_level must be equal to one of the allowed values in Retail/.repo-metadata.json
* api_shortname field missing from Retail/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in Scheduler/.repo-metadata.json
* release_level must be equal to one of the allowed values in Scheduler/.repo-metadata.json
* api_shortname field missing from Scheduler/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in SecretManager/.repo-metadata.json
* release_level must be equal to one of the allowed values in SecretManager/.repo-metadata.json
* api_shortname field missing from SecretManager/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in SecurityCenter/.repo-metadata.json
* release_level must be equal to one of the allowed values in SecurityCenter/.repo-metadata.json
* api_shortname field missing from SecurityCenter/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in SecurityPrivateCa/.repo-metadata.json
* release_level must be equal to one of the allowed values in SecurityPrivateCa/.repo-metadata.json
* api_shortname field missing from SecurityPrivateCa/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in ServiceControl/.repo-metadata.json
* release_level must be equal to one of the allowed values in ServiceControl/.repo-metadata.json
* api_shortname field missing from ServiceControl/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in ServiceDirectory/.repo-metadata.json
* release_level must be equal to one of the allowed values in ServiceDirectory/.repo-metadata.json
* api_shortname field missing from ServiceDirectory/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in ServiceManagement/.repo-metadata.json
* release_level must be equal to one of the allowed values in ServiceManagement/.repo-metadata.json
* api_shortname field missing from ServiceManagement/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in ServiceUsage/.repo-metadata.json
* release_level must be equal to one of the allowed values in ServiceUsage/.repo-metadata.json
* api_shortname field missing from ServiceUsage/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in Shell/.repo-metadata.json
* release_level must be equal to one of the allowed values in Shell/.repo-metadata.json
* api_shortname field missing from Shell/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in Spanner/.repo-metadata.json
* release_level must be equal to one of the allowed values in Spanner/.repo-metadata.json
* api_shortname field missing from Spanner/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in Speech/.repo-metadata.json
* release_level must be equal to one of the allowed values in Speech/.repo-metadata.json
* api_shortname field missing from Speech/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in SqlAdmin/.repo-metadata.json
* release_level must be equal to one of the allowed values in SqlAdmin/.repo-metadata.json
* api_shortname field missing from SqlAdmin/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in Storage/.repo-metadata.json
* release_level must be equal to one of the allowed values in Storage/.repo-metadata.json
* api_shortname field missing from Storage/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in StorageTransfer/.repo-metadata.json
* release_level must be equal to one of the allowed values in StorageTransfer/.repo-metadata.json
* api_shortname field missing from StorageTransfer/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in Talent/.repo-metadata.json
* release_level must be equal to one of the allowed values in Talent/.repo-metadata.json
* api_shortname field missing from Talent/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in Tasks/.repo-metadata.json
* release_level must be equal to one of the allowed values in Tasks/.repo-metadata.json
* api_shortname field missing from Tasks/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in TextToSpeech/.repo-metadata.json
* release_level must be equal to one of the allowed values in TextToSpeech/.repo-metadata.json
* api_shortname field missing from TextToSpeech/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in Tpu/.repo-metadata.json
* release_level must be equal to one of the allowed values in Tpu/.repo-metadata.json
* api_shortname field missing from Tpu/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in Trace/.repo-metadata.json
* release_level must be equal to one of the allowed values in Trace/.repo-metadata.json
* api_shortname field missing from Trace/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in Translate/.repo-metadata.json
* release_level must be equal to one of the allowed values in Translate/.repo-metadata.json
* api_shortname field missing from Translate/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in VideoIntelligence/.repo-metadata.json
* release_level must be equal to one of the allowed values in VideoIntelligence/.repo-metadata.json
* api_shortname field missing from VideoIntelligence/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in VideoTranscoder/.repo-metadata.json
* release_level must be equal to one of the allowed values in VideoTranscoder/.repo-metadata.json
* api_shortname field missing from VideoTranscoder/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in Vision/.repo-metadata.json
* release_level must be equal to one of the allowed values in Vision/.repo-metadata.json
* api_shortname field missing from Vision/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in VpcAccess/.repo-metadata.json
* release_level must be equal to one of the allowed values in VpcAccess/.repo-metadata.json
* api_shortname field missing from VpcAccess/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in WebRisk/.repo-metadata.json
* release_level must be equal to one of the allowed values in WebRisk/.repo-metadata.json
* api_shortname field missing from WebRisk/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in WebSecurityScanner/.repo-metadata.json
* release_level must be equal to one of the allowed values in WebSecurityScanner/.repo-metadata.json
* api_shortname field missing from WebSecurityScanner/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in Workflows/.repo-metadata.json
* release_level must be equal to one of the allowed values in Workflows/.repo-metadata.json
* api_shortname field missing from Workflows/.repo-metadata.json
☝️ Once you address these problems, you can close this issue.
### Need help?
* [Schema definition](https://github.com/googleapis/repo-automation-bots/blob/main/packages/repo-metadata-lint/src/repo-metadata-schema.json): lists valid options for each field.
* [API index](https://github.com/googleapis/googleapis/blob/master/api-index-v1.json): for gRPC libraries **api_shortname** should match the subdomain of an API's **hostName**.
* Reach out to **go/github-automation** if you have any questions.
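For reference, the smallest `.repo-metadata.json` shape that clears the three findings above looks like the sketch below. Every value is an illustrative placeholder, not taken from any real package; the `release_level` value is assumed from the schema's allowed set, so check the linked schema definition for the authoritative fields:

```json
{
  "api_shortname": "cloudtasks",
  "release_level": "stable",
  "client_documentation": "https://cloud.google.com/php/docs/reference/cloud-tasks/latest"
}
```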
|
1.0
|
Your .repo-metadata.json files have a problem 🤒 - You have a problem with your .repo-metadata.json files:
Result of scan 📈:
* client_documentation must match pattern "^https://.*" in AccessApproval/.repo-metadata.json
* release_level must be equal to one of the allowed values in AccessApproval/.repo-metadata.json
* api_shortname field missing from AccessApproval/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in AccessContextManager/.repo-metadata.json
* release_level must be equal to one of the allowed values in AccessContextManager/.repo-metadata.json
* api_shortname field missing from AccessContextManager/.repo-metadata.json
* release_level must be equal to one of the allowed values in AnalyticsAdmin/.repo-metadata.json
* api_shortname field missing from AnalyticsAdmin/.repo-metadata.json
* release_level must be equal to one of the allowed values in AnalyticsData/.repo-metadata.json
* api_shortname field missing from AnalyticsData/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in ApiGateway/.repo-metadata.json
* release_level must be equal to one of the allowed values in ApiGateway/.repo-metadata.json
* api_shortname field missing from ApiGateway/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in ApigeeConnect/.repo-metadata.json
* release_level must be equal to one of the allowed values in ApigeeConnect/.repo-metadata.json
* api_shortname field missing from ApigeeConnect/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in AppEngineAdmin/.repo-metadata.json
* release_level must be equal to one of the allowed values in AppEngineAdmin/.repo-metadata.json
* api_shortname field missing from AppEngineAdmin/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in ArtifactRegistry/.repo-metadata.json
* release_level must be equal to one of the allowed values in ArtifactRegistry/.repo-metadata.json
* api_shortname field missing from ArtifactRegistry/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in Asset/.repo-metadata.json
* release_level must be equal to one of the allowed values in Asset/.repo-metadata.json
* api_shortname field missing from Asset/.repo-metadata.json
* release_level must be equal to one of the allowed values in AssuredWorkloads/.repo-metadata.json
* api_shortname field missing from AssuredWorkloads/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in AutoMl/.repo-metadata.json
* release_level must be equal to one of the allowed values in AutoMl/.repo-metadata.json
* api_shortname field missing from AutoMl/.repo-metadata.json
* release_level must be equal to one of the allowed values in BigQuery/.repo-metadata.json
* api_shortname field missing from BigQuery/.repo-metadata.json
* release_level must be equal to one of the allowed values in BigQueryConnection/.repo-metadata.json
* api_shortname field missing from BigQueryConnection/.repo-metadata.json
* release_level must be equal to one of the allowed values in BigQueryDataTransfer/.repo-metadata.json
* api_shortname field missing from BigQueryDataTransfer/.repo-metadata.json
* release_level must be equal to one of the allowed values in BigQueryReservation/.repo-metadata.json
* api_shortname field missing from BigQueryReservation/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in BigQueryStorage/.repo-metadata.json
* release_level must be equal to one of the allowed values in BigQueryStorage/.repo-metadata.json
* api_shortname field missing from BigQueryStorage/.repo-metadata.json
* release_level must be equal to one of the allowed values in Bigtable/.repo-metadata.json
* api_shortname field missing from Bigtable/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in Billing/.repo-metadata.json
* release_level must be equal to one of the allowed values in Billing/.repo-metadata.json
* api_shortname field missing from Billing/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in BillingBudgets/.repo-metadata.json
* release_level must be equal to one of the allowed values in BillingBudgets/.repo-metadata.json
* api_shortname field missing from BillingBudgets/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in BinaryAuthorization/.repo-metadata.json
* release_level must be equal to one of the allowed values in BinaryAuthorization/.repo-metadata.json
* api_shortname field missing from BinaryAuthorization/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in Build/.repo-metadata.json
* release_level must be equal to one of the allowed values in Build/.repo-metadata.json
* api_shortname field missing from Build/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in Channel/.repo-metadata.json
* release_level must be equal to one of the allowed values in Channel/.repo-metadata.json
* api_shortname field missing from Channel/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in CommonProtos/.repo-metadata.json
* release_level must be equal to one of the allowed values in CommonProtos/.repo-metadata.json
* release_level must be equal to one of the allowed values in Compute/.repo-metadata.json
* api_shortname field missing from Compute/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in ContactCenterInsights/.repo-metadata.json
* release_level must be equal to one of the allowed values in ContactCenterInsights/.repo-metadata.json
* api_shortname field missing from ContactCenterInsights/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in Container/.repo-metadata.json
* release_level must be equal to one of the allowed values in Container/.repo-metadata.json
* api_shortname field missing from Container/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in ContainerAnalysis/.repo-metadata.json
* release_level must be equal to one of the allowed values in ContainerAnalysis/.repo-metadata.json
* api_shortname field missing from ContainerAnalysis/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in Core/.repo-metadata.json
* release_level must be equal to one of the allowed values in Core/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in DataCatalog/.repo-metadata.json
* release_level must be equal to one of the allowed values in DataCatalog/.repo-metadata.json
* api_shortname field missing from DataCatalog/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in DataFusion/.repo-metadata.json
* release_level must be equal to one of the allowed values in DataFusion/.repo-metadata.json
* api_shortname field missing from DataFusion/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in DataLabeling/.repo-metadata.json
* release_level must be equal to one of the allowed values in DataLabeling/.repo-metadata.json
* api_shortname field missing from DataLabeling/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in Dataflow/.repo-metadata.json
* release_level must be equal to one of the allowed values in Dataflow/.repo-metadata.json
* api_shortname field missing from Dataflow/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in Dataproc/.repo-metadata.json
* release_level must be equal to one of the allowed values in Dataproc/.repo-metadata.json
* api_shortname field missing from Dataproc/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in DataprocMetastore/.repo-metadata.json
* release_level must be equal to one of the allowed values in DataprocMetastore/.repo-metadata.json
* api_shortname field missing from DataprocMetastore/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in Datastore/.repo-metadata.json
* release_level must be equal to one of the allowed values in Datastore/.repo-metadata.json
* api_shortname field missing from Datastore/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in DatastoreAdmin/.repo-metadata.json
* release_level must be equal to one of the allowed values in DatastoreAdmin/.repo-metadata.json
* api_shortname field missing from DatastoreAdmin/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in Debugger/.repo-metadata.json
* release_level must be equal to one of the allowed values in Debugger/.repo-metadata.json
* api_shortname field missing from Debugger/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in Deploy/.repo-metadata.json
* release_level must be equal to one of the allowed values in Deploy/.repo-metadata.json
* api_shortname field missing from Deploy/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in Dialogflow/.repo-metadata.json
* release_level must be equal to one of the allowed values in Dialogflow/.repo-metadata.json
* api_shortname field missing from Dialogflow/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in Dlp/.repo-metadata.json
* release_level must be equal to one of the allowed values in Dlp/.repo-metadata.json
* api_shortname field missing from Dlp/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in Dms/.repo-metadata.json
* release_level must be equal to one of the allowed values in Dms/.repo-metadata.json
* api_shortname field missing from Dms/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in DocumentAi/.repo-metadata.json
* release_level must be equal to one of the allowed values in DocumentAi/.repo-metadata.json
* api_shortname field missing from DocumentAi/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in Domains/.repo-metadata.json
* release_level must be equal to one of the allowed values in Domains/.repo-metadata.json
* api_shortname field missing from Domains/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in ErrorReporting/.repo-metadata.json
* release_level must be equal to one of the allowed values in ErrorReporting/.repo-metadata.json
* api_shortname field missing from ErrorReporting/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in EssentialContacts/.repo-metadata.json
* release_level must be equal to one of the allowed values in EssentialContacts/.repo-metadata.json
* api_shortname field missing from EssentialContacts/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in Eventarc/.repo-metadata.json
* release_level must be equal to one of the allowed values in Eventarc/.repo-metadata.json
* api_shortname field missing from Eventarc/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in Filestore/.repo-metadata.json
* release_level must be equal to one of the allowed values in Filestore/.repo-metadata.json
* api_shortname field missing from Filestore/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in Firestore/.repo-metadata.json
* release_level must be equal to one of the allowed values in Firestore/.repo-metadata.json
* api_shortname field missing from Firestore/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in Functions/.repo-metadata.json
* release_level must be equal to one of the allowed values in Functions/.repo-metadata.json
* api_shortname field missing from Functions/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in Gaming/.repo-metadata.json
* release_level must be equal to one of the allowed values in Gaming/.repo-metadata.json
* api_shortname field missing from Gaming/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in GkeConnectGateway/.repo-metadata.json
* release_level must be equal to one of the allowed values in GkeConnectGateway/.repo-metadata.json
* api_shortname field missing from GkeConnectGateway/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in GkeHub/.repo-metadata.json
* release_level must be equal to one of the allowed values in GkeHub/.repo-metadata.json
* api_shortname field missing from GkeHub/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in Grafeas/.repo-metadata.json
* release_level must be equal to one of the allowed values in Grafeas/.repo-metadata.json
* api_shortname field missing from Grafeas/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in IamCredentials/.repo-metadata.json
* release_level must be equal to one of the allowed values in IamCredentials/.repo-metadata.json
* api_shortname field missing from IamCredentials/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in Iap/.repo-metadata.json
* release_level must be equal to one of the allowed values in Iap/.repo-metadata.json
* api_shortname field missing from Iap/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in Iot/.repo-metadata.json
* release_level must be equal to one of the allowed values in Iot/.repo-metadata.json
* api_shortname field missing from Iot/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in Kms/.repo-metadata.json
* release_level must be equal to one of the allowed values in Kms/.repo-metadata.json
* api_shortname field missing from Kms/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in Language/.repo-metadata.json
* release_level must be equal to one of the allowed values in Language/.repo-metadata.json
* api_shortname field missing from Language/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in LifeSciences/.repo-metadata.json
* release_level must be equal to one of the allowed values in LifeSciences/.repo-metadata.json
* api_shortname field missing from LifeSciences/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in Logging/.repo-metadata.json
* release_level must be equal to one of the allowed values in Logging/.repo-metadata.json
* api_shortname field missing from Logging/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in ManagedIdentities/.repo-metadata.json
* release_level must be equal to one of the allowed values in ManagedIdentities/.repo-metadata.json
* api_shortname field missing from ManagedIdentities/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in MediaTranslation/.repo-metadata.json
* release_level must be equal to one of the allowed values in MediaTranslation/.repo-metadata.json
* api_shortname field missing from MediaTranslation/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in Memcache/.repo-metadata.json
* release_level must be equal to one of the allowed values in Memcache/.repo-metadata.json
* api_shortname field missing from Memcache/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in Monitoring/.repo-metadata.json
* release_level must be equal to one of the allowed values in Monitoring/.repo-metadata.json
* api_shortname field missing from Monitoring/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in NetworkConnectivity/.repo-metadata.json
* release_level must be equal to one of the allowed values in NetworkConnectivity/.repo-metadata.json
* api_shortname field missing from NetworkConnectivity/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in NetworkManagement/.repo-metadata.json
* release_level must be equal to one of the allowed values in NetworkManagement/.repo-metadata.json
* api_shortname field missing from NetworkManagement/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in NetworkSecurity/.repo-metadata.json
* release_level must be equal to one of the allowed values in NetworkSecurity/.repo-metadata.json
* api_shortname field missing from NetworkSecurity/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in Notebooks/.repo-metadata.json
* release_level must be equal to one of the allowed values in Notebooks/.repo-metadata.json
* api_shortname field missing from Notebooks/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in OrchestrationAirflow/.repo-metadata.json
* release_level must be equal to one of the allowed values in OrchestrationAirflow/.repo-metadata.json
* api_shortname field missing from OrchestrationAirflow/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in OrgPolicy/.repo-metadata.json
* release_level must be equal to one of the allowed values in OrgPolicy/.repo-metadata.json
* api_shortname field missing from OrgPolicy/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in OsConfig/.repo-metadata.json
* release_level must be equal to one of the allowed values in OsConfig/.repo-metadata.json
* api_shortname field missing from OsConfig/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in OsLogin/.repo-metadata.json
* release_level must be equal to one of the allowed values in OsLogin/.repo-metadata.json
* api_shortname field missing from OsLogin/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in PolicyTroubleshooter/.repo-metadata.json
* release_level must be equal to one of the allowed values in PolicyTroubleshooter/.repo-metadata.json
* api_shortname field missing from PolicyTroubleshooter/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in PrivateCatalog/.repo-metadata.json
* release_level must be equal to one of the allowed values in PrivateCatalog/.repo-metadata.json
* api_shortname field missing from PrivateCatalog/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in Profiler/.repo-metadata.json
* release_level must be equal to one of the allowed values in Profiler/.repo-metadata.json
* api_shortname field missing from Profiler/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in PubSub/.repo-metadata.json
* release_level must be equal to one of the allowed values in PubSub/.repo-metadata.json
* api_shortname field missing from PubSub/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in RecaptchaEnterprise/.repo-metadata.json
* release_level must be equal to one of the allowed values in RecaptchaEnterprise/.repo-metadata.json
* api_shortname field missing from RecaptchaEnterprise/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in RecommendationEngine/.repo-metadata.json
* release_level must be equal to one of the allowed values in RecommendationEngine/.repo-metadata.json
* api_shortname field missing from RecommendationEngine/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in Recommender/.repo-metadata.json
* release_level must be equal to one of the allowed values in Recommender/.repo-metadata.json
* api_shortname field missing from Recommender/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in Redis/.repo-metadata.json
* release_level must be equal to one of the allowed values in Redis/.repo-metadata.json
* api_shortname field missing from Redis/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in ResourceManager/.repo-metadata.json
* release_level must be equal to one of the allowed values in ResourceManager/.repo-metadata.json
* api_shortname field missing from ResourceManager/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in ResourceSettings/.repo-metadata.json
* release_level must be equal to one of the allowed values in ResourceSettings/.repo-metadata.json
* api_shortname field missing from ResourceSettings/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in Retail/.repo-metadata.json
* release_level must be equal to one of the allowed values in Retail/.repo-metadata.json
* api_shortname field missing from Retail/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in Scheduler/.repo-metadata.json
* release_level must be equal to one of the allowed values in Scheduler/.repo-metadata.json
* api_shortname field missing from Scheduler/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in SecretManager/.repo-metadata.json
* release_level must be equal to one of the allowed values in SecretManager/.repo-metadata.json
* api_shortname field missing from SecretManager/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in SecurityCenter/.repo-metadata.json
* release_level must be equal to one of the allowed values in SecurityCenter/.repo-metadata.json
* api_shortname field missing from SecurityCenter/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in SecurityPrivateCa/.repo-metadata.json
* release_level must be equal to one of the allowed values in SecurityPrivateCa/.repo-metadata.json
* api_shortname field missing from SecurityPrivateCa/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in ServiceControl/.repo-metadata.json
* release_level must be equal to one of the allowed values in ServiceControl/.repo-metadata.json
* api_shortname field missing from ServiceControl/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in ServiceDirectory/.repo-metadata.json
* release_level must be equal to one of the allowed values in ServiceDirectory/.repo-metadata.json
* api_shortname field missing from ServiceDirectory/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in ServiceManagement/.repo-metadata.json
* release_level must be equal to one of the allowed values in ServiceManagement/.repo-metadata.json
* api_shortname field missing from ServiceManagement/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in ServiceUsage/.repo-metadata.json
* release_level must be equal to one of the allowed values in ServiceUsage/.repo-metadata.json
* api_shortname field missing from ServiceUsage/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in Shell/.repo-metadata.json
* release_level must be equal to one of the allowed values in Shell/.repo-metadata.json
* api_shortname field missing from Shell/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in Spanner/.repo-metadata.json
* release_level must be equal to one of the allowed values in Spanner/.repo-metadata.json
* api_shortname field missing from Spanner/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in Speech/.repo-metadata.json
* release_level must be equal to one of the allowed values in Speech/.repo-metadata.json
* api_shortname field missing from Speech/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in SqlAdmin/.repo-metadata.json
* release_level must be equal to one of the allowed values in SqlAdmin/.repo-metadata.json
* api_shortname field missing from SqlAdmin/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in Storage/.repo-metadata.json
* release_level must be equal to one of the allowed values in Storage/.repo-metadata.json
* api_shortname field missing from Storage/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in StorageTransfer/.repo-metadata.json
* release_level must be equal to one of the allowed values in StorageTransfer/.repo-metadata.json
* api_shortname field missing from StorageTransfer/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in Talent/.repo-metadata.json
* release_level must be equal to one of the allowed values in Talent/.repo-metadata.json
* api_shortname field missing from Talent/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in Tasks/.repo-metadata.json
* release_level must be equal to one of the allowed values in Tasks/.repo-metadata.json
* api_shortname field missing from Tasks/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in TextToSpeech/.repo-metadata.json
* release_level must be equal to one of the allowed values in TextToSpeech/.repo-metadata.json
* api_shortname field missing from TextToSpeech/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in Tpu/.repo-metadata.json
* release_level must be equal to one of the allowed values in Tpu/.repo-metadata.json
* api_shortname field missing from Tpu/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in Trace/.repo-metadata.json
* release_level must be equal to one of the allowed values in Trace/.repo-metadata.json
* api_shortname field missing from Trace/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in Translate/.repo-metadata.json
* release_level must be equal to one of the allowed values in Translate/.repo-metadata.json
* api_shortname field missing from Translate/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in VideoIntelligence/.repo-metadata.json
* release_level must be equal to one of the allowed values in VideoIntelligence/.repo-metadata.json
* api_shortname field missing from VideoIntelligence/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in VideoTranscoder/.repo-metadata.json
* release_level must be equal to one of the allowed values in VideoTranscoder/.repo-metadata.json
* api_shortname field missing from VideoTranscoder/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in Vision/.repo-metadata.json
* release_level must be equal to one of the allowed values in Vision/.repo-metadata.json
* api_shortname field missing from Vision/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in VpcAccess/.repo-metadata.json
* release_level must be equal to one of the allowed values in VpcAccess/.repo-metadata.json
* api_shortname field missing from VpcAccess/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in WebRisk/.repo-metadata.json
* release_level must be equal to one of the allowed values in WebRisk/.repo-metadata.json
* api_shortname field missing from WebRisk/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in WebSecurityScanner/.repo-metadata.json
* release_level must be equal to one of the allowed values in WebSecurityScanner/.repo-metadata.json
* api_shortname field missing from WebSecurityScanner/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in Workflows/.repo-metadata.json
* release_level must be equal to one of the allowed values in Workflows/.repo-metadata.json
* api_shortname field missing from Workflows/.repo-metadata.json
☝️ Once you address these problems, you can close this issue.
### Need help?
* [Schema definition](https://github.com/googleapis/repo-automation-bots/blob/main/packages/repo-metadata-lint/src/repo-metadata-schema.json): lists valid options for each field.
* [API index](https://github.com/googleapis/googleapis/blob/master/api-index-v1.json): for gRPC libraries **api_shortname** should match the subdomain of an API's **hostName**.
* Reach out to **go/github-automation** if you have any questions.
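All three findings in the scan above are simple field-level checks (a URL pattern, an enum membership, and a required key), so they can be reproduced locally with a short sketch. This is an assumption-laden reconstruction, not the bot's actual code: `ALLOWED_RELEASE_LEVELS` and the sample field values are illustrative guesses, and the authoritative rules live in the linked `repo-metadata-schema.json`.

```python
import re

# Illustrative guess at the schema's enum; the real list is defined in
# repo-metadata-schema.json (linked above).
ALLOWED_RELEASE_LEVELS = {"stable", "preview"}

def lint_repo_metadata(meta):
    """Re-create the three findings the scan reports for one parsed
    .repo-metadata.json dict, returning one message per failed check."""
    problems = []
    if not re.match(r"^https://.*", meta.get("client_documentation", "")):
        problems.append('client_documentation must match pattern "^https://.*"')
    if meta.get("release_level") not in ALLOWED_RELEASE_LEVELS:
        problems.append("release_level must be equal to one of the allowed values")
    if "api_shortname" not in meta:
        problems.append("api_shortname field missing")
    return problems

# A stub that clears all three checks (values are placeholders)...
good = {
    "api_shortname": "cloudtasks",
    "release_level": "stable",
    "client_documentation": "https://cloud.google.com/php/docs/reference/cloud-tasks/latest",
}
# ...and one that reproduces every finding at once.
bad = {"client_documentation": "cloud.google.com/docs", "release_level": "ga"}
```

Running `lint_repo_metadata` over the parsed `.repo-metadata.json` in each package directory would surface the same per-package messages as the scan result above.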
|
process
|
metadata json api shortname field missing from datacatalog repo metadata json client documentation must match pattern in datafusion repo metadata json release level must be equal to one of the allowed values in datafusion repo metadata json api shortname field missing from datafusion repo metadata json client documentation must match pattern in datalabeling repo metadata json release level must be equal to one of the allowed values in datalabeling repo metadata json api shortname field missing from datalabeling repo metadata json client documentation must match pattern in dataflow repo metadata json release level must be equal to one of the allowed values in dataflow repo metadata json api shortname field missing from dataflow repo metadata json client documentation must match pattern in dataproc repo metadata json release level must be equal to one of the allowed values in dataproc repo metadata json api shortname field missing from dataproc repo metadata json client documentation must match pattern in dataprocmetastore repo metadata json release level must be equal to one of the allowed values in dataprocmetastore repo metadata json api shortname field missing from dataprocmetastore repo metadata json client documentation must match pattern in datastore repo metadata json release level must be equal to one of the allowed values in datastore repo metadata json api shortname field missing from datastore repo metadata json client documentation must match pattern in datastoreadmin repo metadata json release level must be equal to one of the allowed values in datastoreadmin repo metadata json api shortname field missing from datastoreadmin repo metadata json client documentation must match pattern in debugger repo metadata json release level must be equal to one of the allowed values in debugger repo metadata json api shortname field missing from debugger repo metadata json client documentation must match pattern in deploy repo metadata json release level must be 
equal to one of the allowed values in deploy repo metadata json api shortname field missing from deploy repo metadata json client documentation must match pattern in dialogflow repo metadata json release level must be equal to one of the allowed values in dialogflow repo metadata json api shortname field missing from dialogflow repo metadata json client documentation must match pattern in dlp repo metadata json release level must be equal to one of the allowed values in dlp repo metadata json api shortname field missing from dlp repo metadata json client documentation must match pattern in dms repo metadata json release level must be equal to one of the allowed values in dms repo metadata json api shortname field missing from dms repo metadata json client documentation must match pattern in documentai repo metadata json release level must be equal to one of the allowed values in documentai repo metadata json api shortname field missing from documentai repo metadata json client documentation must match pattern in domains repo metadata json release level must be equal to one of the allowed values in domains repo metadata json api shortname field missing from domains repo metadata json client documentation must match pattern in errorreporting repo metadata json release level must be equal to one of the allowed values in errorreporting repo metadata json api shortname field missing from errorreporting repo metadata json client documentation must match pattern in essentialcontacts repo metadata json release level must be equal to one of the allowed values in essentialcontacts repo metadata json api shortname field missing from essentialcontacts repo metadata json client documentation must match pattern in eventarc repo metadata json release level must be equal to one of the allowed values in eventarc repo metadata json api shortname field missing from eventarc repo metadata json client documentation must match pattern in filestore repo metadata json release level must 
be equal to one of the allowed values in filestore repo metadata json api shortname field missing from filestore repo metadata json client documentation must match pattern in firestore repo metadata json release level must be equal to one of the allowed values in firestore repo metadata json api shortname field missing from firestore repo metadata json client documentation must match pattern in functions repo metadata json release level must be equal to one of the allowed values in functions repo metadata json api shortname field missing from functions repo metadata json client documentation must match pattern in gaming repo metadata json release level must be equal to one of the allowed values in gaming repo metadata json api shortname field missing from gaming repo metadata json client documentation must match pattern in gkeconnectgateway repo metadata json release level must be equal to one of the allowed values in gkeconnectgateway repo metadata json api shortname field missing from gkeconnectgateway repo metadata json client documentation must match pattern in gkehub repo metadata json release level must be equal to one of the allowed values in gkehub repo metadata json api shortname field missing from gkehub repo metadata json client documentation must match pattern in grafeas repo metadata json release level must be equal to one of the allowed values in grafeas repo metadata json api shortname field missing from grafeas repo metadata json client documentation must match pattern in iamcredentials repo metadata json release level must be equal to one of the allowed values in iamcredentials repo metadata json api shortname field missing from iamcredentials repo metadata json client documentation must match pattern in iap repo metadata json release level must be equal to one of the allowed values in iap repo metadata json api shortname field missing from iap repo metadata json client documentation must match pattern in iot repo metadata json release level must 
be equal to one of the allowed values in iot repo metadata json api shortname field missing from iot repo metadata json client documentation must match pattern in kms repo metadata json release level must be equal to one of the allowed values in kms repo metadata json api shortname field missing from kms repo metadata json client documentation must match pattern in language repo metadata json release level must be equal to one of the allowed values in language repo metadata json api shortname field missing from language repo metadata json client documentation must match pattern in lifesciences repo metadata json release level must be equal to one of the allowed values in lifesciences repo metadata json api shortname field missing from lifesciences repo metadata json client documentation must match pattern in logging repo metadata json release level must be equal to one of the allowed values in logging repo metadata json api shortname field missing from logging repo metadata json client documentation must match pattern in managedidentities repo metadata json release level must be equal to one of the allowed values in managedidentities repo metadata json api shortname field missing from managedidentities repo metadata json client documentation must match pattern in mediatranslation repo metadata json release level must be equal to one of the allowed values in mediatranslation repo metadata json api shortname field missing from mediatranslation repo metadata json client documentation must match pattern in memcache repo metadata json release level must be equal to one of the allowed values in memcache repo metadata json api shortname field missing from memcache repo metadata json client documentation must match pattern in monitoring repo metadata json release level must be equal to one of the allowed values in monitoring repo metadata json api shortname field missing from monitoring repo metadata json client documentation must match pattern in networkconnectivity repo 
metadata json release level must be equal to one of the allowed values in networkconnectivity repo metadata json api shortname field missing from networkconnectivity repo metadata json client documentation must match pattern in networkmanagement repo metadata json release level must be equal to one of the allowed values in networkmanagement repo metadata json api shortname field missing from networkmanagement repo metadata json client documentation must match pattern in networksecurity repo metadata json release level must be equal to one of the allowed values in networksecurity repo metadata json api shortname field missing from networksecurity repo metadata json client documentation must match pattern in notebooks repo metadata json release level must be equal to one of the allowed values in notebooks repo metadata json api shortname field missing from notebooks repo metadata json client documentation must match pattern in orchestrationairflow repo metadata json release level must be equal to one of the allowed values in orchestrationairflow repo metadata json api shortname field missing from orchestrationairflow repo metadata json client documentation must match pattern in orgpolicy repo metadata json release level must be equal to one of the allowed values in orgpolicy repo metadata json api shortname field missing from orgpolicy repo metadata json client documentation must match pattern in osconfig repo metadata json release level must be equal to one of the allowed values in osconfig repo metadata json api shortname field missing from osconfig repo metadata json client documentation must match pattern in oslogin repo metadata json release level must be equal to one of the allowed values in oslogin repo metadata json api shortname field missing from oslogin repo metadata json client documentation must match pattern in policytroubleshooter repo metadata json release level must be equal to one of the allowed values in policytroubleshooter repo metadata json api 
shortname field missing from policytroubleshooter repo metadata json client documentation must match pattern in privatecatalog repo metadata json release level must be equal to one of the allowed values in privatecatalog repo metadata json api shortname field missing from privatecatalog repo metadata json client documentation must match pattern in profiler repo metadata json release level must be equal to one of the allowed values in profiler repo metadata json api shortname field missing from profiler repo metadata json client documentation must match pattern in pubsub repo metadata json release level must be equal to one of the allowed values in pubsub repo metadata json api shortname field missing from pubsub repo metadata json client documentation must match pattern in recaptchaenterprise repo metadata json release level must be equal to one of the allowed values in recaptchaenterprise repo metadata json api shortname field missing from recaptchaenterprise repo metadata json client documentation must match pattern in recommendationengine repo metadata json release level must be equal to one of the allowed values in recommendationengine repo metadata json api shortname field missing from recommendationengine repo metadata json client documentation must match pattern in recommender repo metadata json release level must be equal to one of the allowed values in recommender repo metadata json api shortname field missing from recommender repo metadata json client documentation must match pattern in redis repo metadata json release level must be equal to one of the allowed values in redis repo metadata json api shortname field missing from redis repo metadata json client documentation must match pattern in resourcemanager repo metadata json release level must be equal to one of the allowed values in resourcemanager repo metadata json api shortname field missing from resourcemanager repo metadata json client documentation must match pattern in resourcesettings repo 
metadata json release level must be equal to one of the allowed values in resourcesettings repo metadata json api shortname field missing from resourcesettings repo metadata json client documentation must match pattern in retail repo metadata json release level must be equal to one of the allowed values in retail repo metadata json api shortname field missing from retail repo metadata json client documentation must match pattern in scheduler repo metadata json release level must be equal to one of the allowed values in scheduler repo metadata json api shortname field missing from scheduler repo metadata json client documentation must match pattern in secretmanager repo metadata json release level must be equal to one of the allowed values in secretmanager repo metadata json api shortname field missing from secretmanager repo metadata json client documentation must match pattern in securitycenter repo metadata json release level must be equal to one of the allowed values in securitycenter repo metadata json api shortname field missing from securitycenter repo metadata json client documentation must match pattern in securityprivateca repo metadata json release level must be equal to one of the allowed values in securityprivateca repo metadata json api shortname field missing from securityprivateca repo metadata json client documentation must match pattern in servicecontrol repo metadata json release level must be equal to one of the allowed values in servicecontrol repo metadata json api shortname field missing from servicecontrol repo metadata json client documentation must match pattern in servicedirectory repo metadata json release level must be equal to one of the allowed values in servicedirectory repo metadata json api shortname field missing from servicedirectory repo metadata json client documentation must match pattern in servicemanagement repo metadata json release level must be equal to one of the allowed values in servicemanagement repo metadata json api 
shortname field missing from servicemanagement repo metadata json client documentation must match pattern in serviceusage repo metadata json release level must be equal to one of the allowed values in serviceusage repo metadata json api shortname field missing from serviceusage repo metadata json client documentation must match pattern in shell repo metadata json release level must be equal to one of the allowed values in shell repo metadata json api shortname field missing from shell repo metadata json client documentation must match pattern in spanner repo metadata json release level must be equal to one of the allowed values in spanner repo metadata json api shortname field missing from spanner repo metadata json client documentation must match pattern in speech repo metadata json release level must be equal to one of the allowed values in speech repo metadata json api shortname field missing from speech repo metadata json client documentation must match pattern in sqladmin repo metadata json release level must be equal to one of the allowed values in sqladmin repo metadata json api shortname field missing from sqladmin repo metadata json client documentation must match pattern in storage repo metadata json release level must be equal to one of the allowed values in storage repo metadata json api shortname field missing from storage repo metadata json client documentation must match pattern in storagetransfer repo metadata json release level must be equal to one of the allowed values in storagetransfer repo metadata json api shortname field missing from storagetransfer repo metadata json client documentation must match pattern in talent repo metadata json release level must be equal to one of the allowed values in talent repo metadata json api shortname field missing from talent repo metadata json client documentation must match pattern in tasks repo metadata json release level must be equal to one of the allowed values in tasks repo metadata json api shortname 
field missing from tasks repo metadata json client documentation must match pattern in texttospeech repo metadata json release level must be equal to one of the allowed values in texttospeech repo metadata json api shortname field missing from texttospeech repo metadata json client documentation must match pattern in tpu repo metadata json release level must be equal to one of the allowed values in tpu repo metadata json api shortname field missing from tpu repo metadata json client documentation must match pattern in trace repo metadata json release level must be equal to one of the allowed values in trace repo metadata json api shortname field missing from trace repo metadata json client documentation must match pattern in translate repo metadata json release level must be equal to one of the allowed values in translate repo metadata json api shortname field missing from translate repo metadata json client documentation must match pattern in videointelligence repo metadata json release level must be equal to one of the allowed values in videointelligence repo metadata json api shortname field missing from videointelligence repo metadata json client documentation must match pattern in videotranscoder repo metadata json release level must be equal to one of the allowed values in videotranscoder repo metadata json api shortname field missing from videotranscoder repo metadata json client documentation must match pattern in vision repo metadata json release level must be equal to one of the allowed values in vision repo metadata json api shortname field missing from vision repo metadata json client documentation must match pattern in vpcaccess repo metadata json release level must be equal to one of the allowed values in vpcaccess repo metadata json api shortname field missing from vpcaccess repo metadata json client documentation must match pattern in webrisk repo metadata json release level must be equal to one of the allowed values in webrisk repo metadata json 
api shortname field missing from webrisk repo metadata json client documentation must match pattern in websecurityscanner repo metadata json release level must be equal to one of the allowed values in websecurityscanner repo metadata json api shortname field missing from websecurityscanner repo metadata json client documentation must match pattern in workflows repo metadata json release level must be equal to one of the allowed values in workflows repo metadata json api shortname field missing from workflows repo metadata json ☝️ once you address these problems you can close this issue need help lists valid options for each field for grpc libraries api shortname should match the subdomain of an api s hostname reach out to go github automation if you have any questions
| 1
|
2,063
| 4,866,884,794
|
IssuesEvent
|
2016-11-15 01:37:02
|
dotnet/corefx
|
https://api.github.com/repos/dotnet/corefx
|
closed
|
Create implementation of HandleCount on Unix
|
area-System.Diagnostics.Process netstandard API netstandard2.0
|
Once https://github.com/dotnet/corefx/pull/12765 is checked in, create the Unix implementation for HandleCount.
|
1.0
|
Create implementation of HandleCount on Unix - Once https://github.com/dotnet/corefx/pull/12765 is checked in, create the Unix implementation for HandleCount.
|
process
|
create implementation of handlecount on unix once is checked in create the unix implementation for handlecount
| 1
|
275,817
| 30,309,308,010
|
IssuesEvent
|
2023-07-10 11:44:21
|
elastic/kibana
|
https://api.github.com/repos/elastic/kibana
|
opened
|
[Cloud Security] Vulnerability dashboard test tracking
|
Team:Cloud Security
|
## Summary
Here we will track the reviewed and approved tests progress according to the [RTC](https://docs.google.com/spreadsheets/d/1-uqDi7z9GdVob2rHK8EHcrcIqhBGUcR0K5MqMQiLQho/edit#gid=1133699557)
## API Test
```[tasklist]
### Vulnerability Dashboard API
- [ ] [FTR] When making an API call, if the data received matches the pre-defined mock, the API responds with a 200 status code
- [ ] [FTR] When an unauthorized call is made, the API returns a 401 error
- [ ] [FTR] If called without permissions, the API responds with a 403 error
- [ ] [FTR] When called without sufficient permissions, the API returns a 403 error
- [ ] [FTR] When the necessary indices are nonexistent, the API returns a 400 error without providing any data
- [ ] [FTR] When the required indices are empty, the API responds with a 200 status code, but the data returned is empty
```
## Functional Tests
```[tasklist]
### Trend Graph
- [ ] [FTR] Clicking on the view all button links to an unfiltered vulnerability findings page, sorted by severity and CVSS
```
|
True
|
[Cloud Security] Vulnerability dashboard test tracking - ## Summary
Here we will track the reviewed and approved tests progress according to the [RTC](https://docs.google.com/spreadsheets/d/1-uqDi7z9GdVob2rHK8EHcrcIqhBGUcR0K5MqMQiLQho/edit#gid=1133699557)
## API Test
```[tasklist]
### Vulnerability Dashboard API
- [ ] [FTR] When making an API call, if the data received matches the pre-defined mock, the API responds with a 200 status code
- [ ] [FTR] When an unauthorized call is made, the API returns a 401 error
- [ ] [FTR] If called without permissions, the API responds with a 403 error
- [ ] [FTR] When called without sufficient permissions, the API returns a 403 error
- [ ] [FTR] When the necessary indices are nonexistent, the API returns a 400 error without providing any data
- [ ] [FTR] When the required indices are empty, the API responds with a 200 status code, but the data returned is empty
```
## Functional Tests
```[tasklist]
### Trend Graph
- [ ] [FTR] Clicking on the view all button links to an unfiltered vulnerability findings page, sorted by severity and CVSS
```
|
non_process
|
vulnerability dashboard test tracking summary here we will track the reviewed and approved tests progress according to the api test vulnerability dashboard api when making an api call if the data received matches the pre defined mock the api responds with a status code when an unauthorized call is made the api returns a error if called without permissions the api responds with a error when called without sufficient permissions the api returns a error when the necessary indices are nonexistent the api returns a error without providing any data when the required indices are empty the api responds with a status code but the data returned is empty functional tests trend graph clicking on the view all button links to an unfiltered vulnerability findings page sorted by severity and cvss
| 0
|
249,476
| 7,962,388,096
|
IssuesEvent
|
2018-07-13 14:10:12
|
status-im/status-react
|
https://api.github.com/repos/status-im/status-react
|
closed
|
Click on 'Add to contact' doesn't add user to contact
|
bug chat desktop high-priority
|
### User Story
As a user, I want to add a person to contacts - thus share my name and profile photo.
### Description
*Type*: Bug
*Summary*: when a user A from other device start chat with user B (on status-desktop) - user B cannot add user A to his contacts.
#### Expected behavior
'Add to contacts' is working; after click user A can see profile name and photo of User B
#### Actual behavior
no action after click on 'Add to contacts'

### Reproduction
**Prerequisites**:
1) User A is using `status-mobile`
2) User B is using `status-desktop`
- User A starts a new chat with user B
- User A sends several messages
- User B opens chat with User B
### Additional Information
* Status version: StatusIm desktop (version 2018-05-07)
* Operating System: MacOS High Sierra 10.13.4, Ubuntu 18.04
* Video: http://take.ms/R8ouy
<blockquote><img src="https://api.monosnap.com/rpc/file/download?id=qAAqsrEhCGBZSyZSIDNPSlB0Vbgzvu&type=preview" width="48" align="right"><div>Monosnap screenshot tool</div><div><strong><a href="https://monosnap.com/file/qAAqsrEhCGBZSyZSIDNPSlB0Vbgzvu">File "screencast 2018-05-15 14-53-59.mp4"</a></strong></div><div>Monosnap — the best tool to take & share your screenshots.</div></blockquote>
|
1.0
|
Click on 'Add to contact' doesn't add user to contact - ### User Story
As a user, I want to add a person to contacts - thus share my name and profile photo.
### Description
*Type*: Bug
*Summary*: when a user A from other device start chat with user B (on status-desktop) - user B cannot add user A to his contacts.
#### Expected behavior
'Add to contacts' is working; after click user A can see profile name and photo of User B
#### Actual behavior
no action after click on 'Add to contacts'

### Reproduction
**Prerequisites**:
1) User A is using `status-mobile`
2) User B is using `status-desktop`
- User A starts a new chat with user B
- User A sends several messages
- User B opens chat with User B
### Additional Information
* Status version: StatusIm desktop (version 2018-05-07)
* Operating System: MacOS High Sierra 10.13.4, Ubuntu 18.04
* Video: http://take.ms/R8ouy
<blockquote><img src="https://api.monosnap.com/rpc/file/download?id=qAAqsrEhCGBZSyZSIDNPSlB0Vbgzvu&type=preview" width="48" align="right"><div>Monosnap screenshot tool</div><div><strong><a href="https://monosnap.com/file/qAAqsrEhCGBZSyZSIDNPSlB0Vbgzvu">File "screencast 2018-05-15 14-53-59.mp4"</a></strong></div><div>Monosnap — the best tool to take & share your screenshots.</div></blockquote>
|
non_process
|
click on add to contact doesn t add user to contact user story as a user i want to add a person to contacts thus share my name and profile photo description type bug summary when a user a from other device start chat with user b on status desktop user b cannot add user a to his contacts expected behavior add to contacts is working after click user a can see profile name and photo of user b actual behavior no action after click on add to contacts reproduction prerequisites user a is using status mobile user b is using status desktop user a starts a new chat with user b user a sends several messages user b opens chat with user b additional information status version statusim desktop version operating system macos high sierra ubuntu video monosnap screenshot tool monosnap — the best tool to take share your screenshots
| 0
|
5,197
| 5,517,891,526
|
IssuesEvent
|
2017-03-18 02:41:47
|
dotnet/corefx
|
https://api.github.com/repos/dotnet/corefx
|
closed
|
build-managed.cmd -packages doesn't work in release/1.1.0
|
area-Infrastructure question
|
I download both
tags: v1.1.0
and
branch: release/1.1.0
But in release/1.1.0
build-managed.cmd -packages doesn't work.
Is tag v1.1.0 the correct tag for correct release 1.1.0?
|
1.0
|
build-managed.cmd -packages doesn't work in release/1.1.0 - I download both
tags: v1.1.0
and
branch: release/1.1.0
But in release/1.1.0
build-managed.cmd -packages doesn't work.
Is tag v1.1.0 the correct tag for correct release 1.1.0?
|
non_process
|
build managed cmd packages doesn t work in release i download both tags and branch release but in release build managed cmd packages doesn t work is tag the correct tag for correct release
| 0
|
10,330
| 13,162,978,728
|
IssuesEvent
|
2020-08-10 22:59:01
|
googleapis/google-cloud-ruby
|
https://api.github.com/repos/googleapis/google-cloud-ruby
|
closed
|
Migrate google-cloud-logging to the microgenerator
|
type: process
|
Migrate google-cloud-logging to the microgenerator. This involves the following steps:
* [x] Write synth file and generate `google-cloud-logging-v2`
* [x] Make sure the new libraries are configured in kokoro
* [x] Release `google-cloud-logging-v2`
* [ ] Switch `google-cloud-logging` backend to the versioned gems. That is:
* Rip out synth and all the generated code
* Add `google-cloud-logging-v2` as a dependency
* Update the veneer code to the microgenerator usage
* [ ] Release `google-cloud-logging` update
Important: The [Google fluentd plugin](https://github.com/GoogleCloudPlatform/fluent-plugin-google-cloud) uses the low-level API in `google-cloud-logging`. Luckily, I believe they pin the `google-cloud-logging` version so they won't break when we migrate. However, we should inform @igorpeshansky of this change when it happens so they're aware they need to update their code when they move their pinned `google-cloud-logging` past this update.
I do not believe samples need to be updated, unless they invoke the low-level interface directly.
|
1.0
|
Migrate google-cloud-logging to the microgenerator - Migrate google-cloud-logging to the microgenerator. This involves the following steps:
* [x] Write synth file and generate `google-cloud-logging-v2`
* [x] Make sure the new libraries are configured in kokoro
* [x] Release `google-cloud-logging-v2`
* [ ] Switch `google-cloud-logging` backend to the versioned gems. That is:
* Rip out synth and all the generated code
* Add `google-cloud-logging-v2` as a dependency
* Update the veneer code to the microgenerator usage
* [ ] Release `google-cloud-logging` update
Important: The [Google fluentd plugin](https://github.com/GoogleCloudPlatform/fluent-plugin-google-cloud) uses the low-level API in `google-cloud-logging`. Luckily, I believe they pin the `google-cloud-logging` version so they won't break when we migrate. However, we should inform @igorpeshansky of this change when it happens so they're aware they need to update their code when they move their pinned `google-cloud-logging` past this update.
I do not believe samples need to be updated, unless they invoke the low-level interface directly.
|
process
|
migrate google cloud logging to the microgenerator migrate google cloud logging to the microgenerator this involves the following steps write synth file and generate google cloud logging make sure the new libraries are configured in kokoro release google cloud logging switch google cloud logging backend to the versioned gems that is rip out synth and all the generated code add google cloud logging as a dependency update the veneer code to the microgenerator usage release google cloud logging update important the uses the low level api in google cloud logging luckily i believe they pin the google cloud logging version so they won t break when we migrate however we should inform igorpeshansky of this change when it happens so they re aware they need to update their code when they move their pinned google cloud logging past this update i do not believe samples need to be updated unless they invoke the low level interface directly
| 1
|
12,948
| 15,308,575,761
|
IssuesEvent
|
2021-02-24 22:42:25
|
radis/radis
|
https://api.github.com/repos/radis/radis
|
closed
|
slit function - apply_slit - get_slit_function - slit in array or spectrum format + documentation
|
enhancement good first issue post-process
|
1) slit_function doesn't accept array or Spectrum to define the slit function
2) convolve_with_slit doesn't allow slit functions that take negative values (problem when the slit function is a sinc (cardinal sine) or any other shape that crosses zero) -> maybe make a test to check if area is positive instead?
3) Bad documentation of get_slit_function and apply_slit : at this point, one can give a tuple to define a trapezoidal_slit
- [x] Add possibility to declare slit as array
- [ ] add possibility to declare slit as a Spectrum object
- [ ] fix the negative behaviour
- [ ] increment documentation based on the previous points
|
1.0
|
slit function - apply_slit - get_slit_function - slit in array or spectrum format + documentation - 1) slit_function doesn't accept array or Spectrum to define the slit function
2) convolve_with_slit doesn't allow slit functions that take negative values (problem when the slit function is a sinc (cardinal sine) or any other shape that crosses zero) -> maybe make a test to check if area is positive instead?
3) Bad documentation of get_slit_function and apply_slit : at this point, one can give a tuple to define a trapezoidal_slit
- [x] Add possibility to declare slit as array
- [ ] add possibility to declare slit as a Spectrum object
- [ ] fix the negative behaviour
- [ ] increment documentation based on the previous points
|
process
|
slit function apply slit get slit function slit in array or spectrum format documentation slit function doesn t accept array or spectrum to define the slit function convolve with slit doesn t allow slit functions that take negative values problem when the slit function is a sinc cardinal sine or any other shape that crosses zero maybe make a test to check if area is positive instead bad documentation of get slit function and apply slit at this point one can give a tuple to define a trapezoidal slit add possibility to declare slit as array add possibility to declare slit as a spectrum object fix the negative behaviour increment documentation based on the previous points
| 1
|
282,019
| 24,445,443,591
|
IssuesEvent
|
2022-10-06 17:32:50
|
wazuh/wazuh-kibana-app
|
https://api.github.com/repos/wazuh/wazuh-kibana-app
|
closed
|
Kibana - Release 4.3.9 - Release Candidate 1 - Testing
|
release test/4.3.9
|
## Description
Issue created to track the effort to perform a smoke test on Kibana packages for Wazuh 4.3.9.
We are waiting for the packages to start the test
## Tasks
- [x] Smoke test - Wazuh Dashboard 4.3.9
- [x] Smoke test - Kibana 7.17.6 Xpack
- [x] Smoke test - Kibana 7.17.5 Xpack
- [x] Smoke test - Kibana 7.17.4 Xpack
- [x] Smoke test - Kibana 7.16.3 Xpack
- [x] Smoke test - Kibana 7.10.2 ODFE
|
1.0
|
Kibana - Release 4.3.9 - Release Candidate 1 - Testing - ## Description
Issue created to track the effort to perform a smoke test on Kibana packages for Wazuh 4.3.9.
We are waiting for the packages to start the test
## Tasks
- [x] Smoke test - Wazuh Dashboard 4.3.9
- [x] Smoke test - Kibana 7.17.6 Xpack
- [x] Smoke test - Kibana 7.17.5 Xpack
- [x] Smoke test - Kibana 7.17.4 Xpack
- [x] Smoke test - Kibana 7.16.3 Xpack
- [x] Smoke test - Kibana 7.10.2 ODFE
|
non_process
|
kibana release release candidate testing description issue created to track the effort to perform a smoke test on kibana packages for wazuh we are waiting for the packages to start the test tasks smoke test wazuh dashboard smoke test kibana xpack smoke test kibana xpack smoke test kibana xpack smoke test kibana xpack smoke test kibana odfe
| 0
|
6,565
| 9,651,797,442
|
IssuesEvent
|
2019-05-18 11:23:29
|
Gelbpunkt/IdleRPG
|
https://api.github.com/repos/Gelbpunkt/IdleRPG
|
closed
|
IndexError: Shards command is not working
|
bug multiprocessing
|
See the bug tracker `/idlerpg/issues/727/`
**Executed command**: `rpg shards`
## TODO:
Complete command remake.
## Exception:
```
IndexError: list index out of range
File "IdleRPG/cogs/error_handler.py", line 116, in _on_command_error
raise error.original
File "discord/ext/commands/core.py", line 63, in wrapped
ret = await coro(*args, **kwargs)
File "IdleRPG/cogs/owner.py", line 208, in shards
res += f"Shard **{s}** ({len([g for g in self.bot.guilds if g.shard_id==s])} Servers). Ping: {round(self.bot.latencies[s][1]*1000, 2)}ms\n"
```
|
1.0
|
IndexError: Shards command is not working - See the bug tracker `/idlerpg/issues/727/`
**Executed command**: `rpg shards`
## TODO:
Complete command remake.
## Exception:
```
IndexError: list index out of range
File "IdleRPG/cogs/error_handler.py", line 116, in _on_command_error
raise error.original
File "discord/ext/commands/core.py", line 63, in wrapped
ret = await coro(*args, **kwargs)
File "IdleRPG/cogs/owner.py", line 208, in shards
res += f"Shard **{s}** ({len([g for g in self.bot.guilds if g.shard_id==s])} Servers). Ping: {round(self.bot.latencies[s][1]*1000, 2)}ms\n"
```
|
process
|
indexerror shards command is not working see the bug tracker idlerpg issues executed command rpg shards todo complete command remake exception indexerror list index out of range file idlerpg cogs error handler py line in on command error raise error original file discord ext commands core py line in wrapped ret await coro args kwargs file idlerpg cogs owner py line in shards res f shard s len servers ping round self bot latencies ms n
| 1
|
361,071
| 10,703,802,046
|
IssuesEvent
|
2019-10-24 10:18:02
|
DXHeroes/dx-scanner
|
https://api.github.com/repos/DXHeroes/dx-scanner
|
opened
|
MaxListenersExceededWarning: Possible EventEmitter memory leak detected. 11 unhandledRejection listeners added to [process]. Use emitter.setMaxListeners() to increase limit
|
Difficulty: Unknown or N/A Priority: Medium Status: Blocked Type: Bug
|
I run the DX Scanner with a cmd yarn start https://github.com/yarnpkg/yarn and the app throws a warning
`MaxListenersExceededWarning: Possible EventEmitter memory leak detected. 11 unhandledRejection listeners added to [process]. Use emitter.setMaxListeners() to increase limit`
It's caused by the `npm-check-updates` library, which adds event listeners on every call of the `run` function.
Related to my issue in: https://github.com/tjunnone/npm-check-updates/issues/597
|
1.0
|
MaxListenersExceededWarning: Possible EventEmitter memory leak detected. 11 unhandledRejection listeners added to [process]. Use emitter.setMaxListeners() to increase limit - I run the DX Scanner with a cmd yarn start https://github.com/yarnpkg/yarn and the app throws a warning
`MaxListenersExceededWarning: Possible EventEmitter memory leak detected. 11 unhandledRejection listeners added to [process]. Use emitter.setMaxListeners() to increase limit`
It's caused by the `npm-check-updates` library, which adds event listeners on every call of the `run` function.
Related to my issue in: https://github.com/tjunnone/npm-check-updates/issues/597
|
non_process
|
maxlistenersexceededwarning possible eventemitter memory leak detected unhandledrejection listeners added to use emitter setmaxlisteners to increase limit i run the dx scanner with a cmd yarn start and the app throws a warning maxlistenersexceededwarning possible eventemitter memory leak detected unhandledrejection listeners added to use emitter setmaxlisteners to increase limit it s caused by npm check updates library that adds event listeners in every call of run function related to my issue in
| 0
|
38,448
| 5,187,701,555
|
IssuesEvent
|
2017-01-20 17:35:52
|
emfoundation/ce100-app
|
https://api.github.com/repos/emfoundation/ce100-app
|
closed
|
Browse: in challenge list, no description should be shown.
|
bug please-test priority-3 T1h
|
Tapping the challenge opens a new view which also shows the description.
See [here](https://zpl.io/ZEOhGN) and [here](https://zpl.io/Z2kmBO6).
|
1.0
|
Browse: in challenge list, no description should be shown. - Tapping the challenge opens a new view which also shows the description.
See [here](https://zpl.io/ZEOhGN) and [here](https://zpl.io/Z2kmBO6).
|
non_process
|
browse in challenge list no description should be shown tapping the challenge opens a new view which also shows the description see and
| 0
|
76,407
| 15,496,004,772
|
IssuesEvent
|
2021-03-11 01:53:30
|
yhuangsh/50pm
|
https://api.github.com/repos/yhuangsh/50pm
|
opened
|
CVE-2021-24033 (Medium) detected in react-dev-utils-8.0.0.tgz
|
security vulnerability
|
## CVE-2021-24033 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>react-dev-utils-8.0.0.tgz</b></p></summary>
<p>Webpack utilities used by Create React App</p>
<p>Library home page: <a href="https://registry.npmjs.org/react-dev-utils/-/react-dev-utils-8.0.0.tgz">https://registry.npmjs.org/react-dev-utils/-/react-dev-utils-8.0.0.tgz</a></p>
<p>Path to dependency file: /50pm/frontend/50pm/package.json</p>
<p>Path to vulnerable library: 50pm/frontend/50pm/node_modules/react-dev-utils/package.json</p>
<p>
Dependency Hierarchy:
- react-scripts-2.1.8.tgz (Root Library)
- :x: **react-dev-utils-8.0.0.tgz** (Vulnerable Library)
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
react-dev-utils prior to v11.0.4 exposes a function, getProcessForPort, where an input argument is concatenated into a command string to be executed. This function is typically used from react-scripts (in Create React App projects), where the usage is safe. Only when this function is manually invoked with user-provided values (ie: by custom code) is there the potential for command injection. If you're consuming it from react-scripts then this issue does not affect you.
<p>Publish Date: 2021-03-09
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-24033>CVE-2021-24033</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.3</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: Low
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://www.facebook.com/security/advisories/cve-2021-24033">https://www.facebook.com/security/advisories/cve-2021-24033</a></p>
<p>Release Date: 2021-03-09</p>
<p>Fix Resolution: react-dev-utils-11.0.4</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
True
|
CVE-2021-24033 (Medium) detected in react-dev-utils-8.0.0.tgz - ## CVE-2021-24033 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>react-dev-utils-8.0.0.tgz</b></p></summary>
<p>Webpack utilities used by Create React App</p>
<p>Library home page: <a href="https://registry.npmjs.org/react-dev-utils/-/react-dev-utils-8.0.0.tgz">https://registry.npmjs.org/react-dev-utils/-/react-dev-utils-8.0.0.tgz</a></p>
<p>Path to dependency file: /50pm/frontend/50pm/package.json</p>
<p>Path to vulnerable library: 50pm/frontend/50pm/node_modules/react-dev-utils/package.json</p>
<p>
Dependency Hierarchy:
- react-scripts-2.1.8.tgz (Root Library)
- :x: **react-dev-utils-8.0.0.tgz** (Vulnerable Library)
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
react-dev-utils prior to v11.0.4 exposes a function, getProcessForPort, where an input argument is concatenated into a command string to be executed. This function is typically used from react-scripts (in Create React App projects), where the usage is safe. Only when this function is manually invoked with user-provided values (ie: by custom code) is there the potential for command injection. If you're consuming it from react-scripts then this issue does not affect you.
<p>Publish Date: 2021-03-09
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-24033>CVE-2021-24033</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.3</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: Low
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://www.facebook.com/security/advisories/cve-2021-24033">https://www.facebook.com/security/advisories/cve-2021-24033</a></p>
<p>Release Date: 2021-03-09</p>
<p>Fix Resolution: react-dev-utils-11.0.4</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
non_process
|
cve medium detected in react dev utils tgz cve medium severity vulnerability vulnerable library react dev utils tgz webpack utilities used by create react app library home page a href path to dependency file frontend package json path to vulnerable library frontend node modules react dev utils package json dependency hierarchy react scripts tgz root library x react dev utils tgz vulnerable library vulnerability details react dev utils prior to exposes a function getprocessforport where an input argument is concatenated into a command string to be executed this function is typically used from react scripts in create react app projects where the usage is safe only when this function is manually invoked with user provided values ie by custom code is there the potential for command injection if you re consuming it from react scripts then this issue does not affect you publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact low availability impact none for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution react dev utils step up your open source security game with whitesource
| 0
|
1,491
| 4,063,707,150
|
IssuesEvent
|
2016-05-26 01:20:57
|
nodejs/node
|
https://api.github.com/repos/nodejs/node
|
closed
|
Mac OS: Abort trap 6 error when spawning script from within Node
|
child_process os x unconfirmed
|
Seeing the following errors:
```
./runTests.sh: line 14: 39479 Abort trap: 6 jasmine-node $f --junitreport --config server $1 --output "reports/${output}"
./runTests.sh: line 14: 39483 Abort trap: 6 jasmine-node $f --junitreport --config server $1 --output "reports/${output}"
./runTests.sh: line 14: 39484 Abort trap: 6 jasmine-node $f --junitreport --config server $1 --output "reports/${output}"
./runTests.sh: line 14: 39485 Abort trap: 6 jasmine-node $f --junitreport --config server $1 --output "reports/${output}"
./runTests.sh: line 14: 39486 Abort trap: 6 jasmine-node $f --junitreport --config server $1 --output "reports/${output}"
```
The culprit line in my node js app seems to be:
```
var test = spawn("./runTests.sh", [url, server.nick], {
cwd : frisbyRoot,
stdio : [0, fs.openSync(outfile, "a"), //stdio
fs.openSync(outfile, "a") //stderr
]
});
```
The script itself is:
```
#!/bin/bash
if [[ $# -ne 2 ]]; then
echo "Usage: $0 URL NICKNAME" >&2
exit 1
fi
#FILES=./*_spec.js
FILES=`find . -name "*_spec.js"`
output=$2
for f in $FILES
do
jasmine-node $f --junitreport --config server $1 --output "reports/${output}"
done
```
version info:
Jeffs-WebTesting-MacBook-Pro:tests test$ npm ls -depth=0
tests@1.0.0 /Users/test/unit-tests/frisby/tests
├── frisby@0.8.5
├── jasmine@2.4.1
├── lodash@4.13.1
└── xml2js@0.4.16
Jeffs-WebTesting-MacBook-Pro:tests test$ npm -g ls -depth=0
/opt/local/lib
├── jasmine-node@1.14.5
├── junit-viewer@3.2.0
└── npm@2.15.3
node version: 4.4.3
OS X version: 10.10.5 (Yosemite)
Verified that I can run the jasmine tests from the command line via runTests.sh without error.
|
1.0
|
Mac OS: Abort trap 6 error when spawning script from within Node - Seeing the following errors:
```
./runTests.sh: line 14: 39479 Abort trap: 6 jasmine-node $f --junitreport --config server $1 --output "reports/${output}"
./runTests.sh: line 14: 39483 Abort trap: 6 jasmine-node $f --junitreport --config server $1 --output "reports/${output}"
./runTests.sh: line 14: 39484 Abort trap: 6 jasmine-node $f --junitreport --config server $1 --output "reports/${output}"
./runTests.sh: line 14: 39485 Abort trap: 6 jasmine-node $f --junitreport --config server $1 --output "reports/${output}"
./runTests.sh: line 14: 39486 Abort trap: 6 jasmine-node $f --junitreport --config server $1 --output "reports/${output}"
```
The culprit line in my node js app seems to be:
```
var test = spawn("./runTests.sh", [url, server.nick], {
cwd : frisbyRoot,
stdio : [0, fs.openSync(outfile, "a"), //stdio
fs.openSync(outfile, "a") //stderr
]
});
```
The script itself is:
```
#!/bin/bash
if [[ $# -ne 2 ]]; then
echo "Usage: $0 URL NICKNAME" >&2
exit 1
fi
#FILES=./*_spec.js
FILES=`find . -name "*_spec.js"`
output=$2
for f in $FILES
do
jasmine-node $f --junitreport --config server $1 --output "reports/${output}"
done
```
version info:
Jeffs-WebTesting-MacBook-Pro:tests test$ npm ls -depth=0
tests@1.0.0 /Users/test/unit-tests/frisby/tests
├── frisby@0.8.5
├── jasmine@2.4.1
├── lodash@4.13.1
└── xml2js@0.4.16
Jeffs-WebTesting-MacBook-Pro:tests test$ npm -g ls -depth=0
/opt/local/lib
├── jasmine-node@1.14.5
├── junit-viewer@3.2.0
└── npm@2.15.3
node version: 4.4.3
OS X version: 10.10.5 (Yosemite)
Verified that I can run the jasmine tests from the command line via runTests.sh without error.
|
process
|
mac os abort trap error when spawning script from within node seeing the following errors runtests sh line abort trap jasmine node f junitreport config server output reports output runtests sh line abort trap jasmine node f junitreport config server output reports output runtests sh line abort trap jasmine node f junitreport config server output reports output runtests sh line abort trap jasmine node f junitreport config server output reports output runtests sh line abort trap jasmine node f junitreport config server output reports output the culprit line in my node js app seems to be var test spawn runtests sh cwd frisbyroot stdio fs opensync outfile a stdio fs opensync outfile a stderr the script itself is bin bash if then echo usage url nickname exit fi files spec js files find name spec js output for f in files do jasmine node f junitreport config server output reports output done version info jeffs webtesting macbook pro tests test npm ls depth tests users test unit tests frisby tests ├── frisby ├── jasmine ├── lodash └── jeffs webtesting macbook pro tests test npm g ls depth opt local lib ├── jasmine node ├── junit viewer └── npm node version os x version yosemite verified that i can run the jasmine tests from the command line via runtests sh without error
| 1
|
50,790
| 3,006,619,599
|
IssuesEvent
|
2015-07-27 11:42:47
|
Itseez/opencv
|
https://api.github.com/repos/Itseez/opencv
|
opened
|
Crash when trying to load utf-8 xml file with a BOM
|
affected: master auto-transferred bug category: none priority: normal
|
Transferred from http://code.opencv.org/issues/4486
```
|| Lior Da on 2015-07-13 21:18
|| Priority: Normal
|| Affected: branch 'master' (3.0-dev)
|| Category: None
|| Tracker: Bug
|| Difficulty:
|| PR:
|| Platform: x64 / Windows
```
Crash when trying to load utf-8 xml file with a BOM
-----------
```
When calling the next line:
cv::FileStorage root("_root.xml", cv::FileStorage::READ);
the program crashes.
If I change _root.xml to be encoded without BOM, it does not crash.
```
History
-------
|
1.0
|
Crash when trying to load utf-8 xml file with a BOM - Transferred from http://code.opencv.org/issues/4486
```
|| Lior Da on 2015-07-13 21:18
|| Priority: Normal
|| Affected: branch 'master' (3.0-dev)
|| Category: None
|| Tracker: Bug
|| Difficulty:
|| PR:
|| Platform: x64 / Windows
```
Crash when trying to load utf-8 xml file with a BOM
-----------
```
When calling the next line:
cv::FileStorage root("_root.xml", cv::FileStorage::READ);
the program crashes.
If I change _root.xml to be encoded without BOM, it does not crash.
```
History
-------
|
non_process
|
crash when trying to load utf xml file with a bom transferred from lior da on priority normal affected branch master dev category none tracker bug difficulty pr platform windows crash when trying to load utf xml file with a bom when calling the next line cv filestorage root root xml cv filestorage read the program crashes if i change root xml to be encoded without bom it does not crash history
| 0
|
12,734
| 15,101,778,505
|
IssuesEvent
|
2021-02-08 08:03:48
|
geneontology/go-ontology
|
https://api.github.com/repos/geneontology/go-ontology
|
closed
|
NTR: effector-mediated supression of host defenses by symbiont
|
multi-species process
|
NTR: effector-mediated supression of host defenses by symbiont
Definition
A process mediated by a molecule secreted by a symbiont that results in the modulation supression of a defense response. The host is defined as the larger of the organisms involved in a symbiotic interaction.
ref PMID:28082413
parents
GO:0140415 effector-mediated modulation of host defenses by symbiont
fix sp. in def suppresion -> supression
and
GO:0044414 suppression of host defenses by symbiont
|
1.0
|
NTR: effector-mediated supression of host defenses by symbiont -
NTR: effector-mediated supression of host defenses by symbiont
Definition
A process mediated by a molecule secreted by a symbiont that results in the modulation supression of a defense response. The host is defined as the larger of the organisms involved in a symbiotic interaction.
ref PMID:28082413
parents
GO:0140415 effector-mediated modulation of host defenses by symbiont
fix sp. in def suppresion -> supression
and
GO:0044414 suppression of host defenses by symbiont
|
process
|
ntr effector mediated supression of host defenses by symbiont ntr effector mediated supression of host defenses by symbiont definition a process mediated by a molecule secreted by a symbiont that results in the modulation supression of a defense response the host is defined as the larger of the organisms involved in a symbiotic interaction ref pmid parents go effector mediated modulation of host defenses by symbiont fix sp in def suppresion supression and go suppression of host defenses by symbiont
| 1
|
637,734
| 20,676,367,033
|
IssuesEvent
|
2022-03-10 09:40:18
|
wso2/product-is
|
https://api.github.com/repos/wso2/product-is
|
closed
|
Getting "Error occured while trying to update the application error" when trying to update the callback url for console application from management console env:postgres11.5
|
Priority/Highest Severity/Critical bug UI Component/Application Management Affected-5.12.0 QA-Reported
|
**How to reproduce:**
1. Setup postgres 11.5 as the primary db
2. User store configured for database_unique_id
3. Go to deployment.toml and set port offset as 1
```
[server]
offset = "1"
```
4. Access managemen console https://localhost:9444/carbon/
5. Login as admin:admin
6. List service providers > Console > Oauth configs and try to edit them
7. Try to update the callback url to a different port and click on update
Getting below errors



```
[2022-01-11 12:49:16,708] [541fa1ec-eb87-40b3-b723-12acd77d6761] ERROR {org.apache.axis2.rpc.receivers.RPCMessageReceiver} - System application update is not allowed. Client id: CONSOLE java.lang.reflect.InvocationTargetException
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.axis2.rpc.receivers.RPCUtil.invokeServiceClass(RPCUtil.java:212)
at org.apache.axis2.rpc.receivers.RPCMessageReceiver.invokeBusinessLogic(RPCMessageReceiver.java:117)
at org.apache.axis2.receivers.AbstractInOutMessageReceiver.invokeBusinessLogic(AbstractInOutMessageReceiver.java:40)
at org.apache.axis2.receivers.AbstractMessageReceiver.receive(AbstractMessageReceiver.java:110)
at org.apache.axis2.engine.AxisEngine.receive(AxisEngine.java:180)
at org.apache.axis2.transport.local.LocalTransportReceiver.processMessage(LocalTransportReceiver.java:170)
at org.apache.axis2.transport.local.LocalTransportReceiver.processMessage(LocalTransportReceiver.java:82)
at org.wso2.carbon.core.transports.local.CarbonLocalTransportSender.finalizeSendWithToAddress(CarbonLocalTransportSender.java:45)
at org.apache.axis2.transport.local.LocalTransportSender.invoke(LocalTransportSender.java:77)
at org.apache.axis2.engine.AxisEngine.send(AxisEngine.java:442)
at org.apache.axis2.description.OutInAxisOperationClient.send(OutInAxisOperation.java:442)
at org.apache.axis2.description.OutInAxisOperationClient.executeImpl(OutInAxisOperation.java:228)
at org.apache.axis2.client.OperationClient.execute(OperationClient.java:149)
at org.wso2.carbon.identity.oauth.stub.OAuthAdminServiceStub.updateConsumerApplication(OAuthAdminServiceStub.java:3708)
at org.wso2.carbon.identity.oauth.ui.client.OAuthAdminClient.updateOAuthApplicationData(OAuthAdminClient.java:123)
at org.apache.jsp.oauth.edit_002dfinish_002dajaxprocessor_jsp._jspService(edit_002dfinish_002dajaxprocessor_jsp.java:344)
at org.apache.jasper.runtime.HttpJspBase.service(HttpJspBase.java:70)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:764)
at org.apache.jasper.servlet.JspServletWrapper.service(JspServletWrapper.java:466)
at org.apache.jasper.servlet.JspServlet.serviceJspFile(JspServlet.java:379)
at org.apache.jasper.servlet.JspServlet.service(JspServlet.java:327)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:764)
at org.wso2.carbon.ui.JspServlet.service(JspServlet.java:207)
at org.wso2.carbon.ui.TilesJspServlet.service(TilesJspServlet.java:80)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:764)
at org.eclipse.equinox.http.helper.ContextPathServletAdaptor.service(ContextPathServletAdaptor.java:37)
at org.eclipse.equinox.http.servlet.internal.ServletRegistration.service(ServletRegistration.java:61)
at org.eclipse.equinox.http.servlet.internal.ProxyServlet.processAlias(ProxyServlet.java:128)
at org.eclipse.equinox.http.servlet.internal.ProxyServlet.service(ProxyServlet.java:68)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:764)
at org.wso2.carbon.tomcat.ext.servlet.DelegationServlet.service(DelegationServlet.java:68)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:227)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:162)
at org.apache.tomcat.websocket.server.WsFilter.doFilter(WsFilter.java:53)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:189)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:162)
at org.owasp.csrfguard.CsrfGuardFilter.doFilter(CsrfGuardFilter.java:88)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:189)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:162)
at org.apache.catalina.filters.HttpHeaderSecurityFilter.doFilter(HttpHeaderSecurityFilter.java:126)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:189)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:162)
at org.wso2.carbon.tomcat.ext.filter.CharacterSetFilter.doFilter(CharacterSetFilter.java:65)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:189)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:162)
at org.apache.catalina.filters.HttpHeaderSecurityFilter.doFilter(HttpHeaderSecurityFilter.java:126)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:189)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:162)
at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:197)
at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:97)
at org.apache.catalina.authenticator.AuthenticatorBase.invoke(AuthenticatorBase.java:540)
at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:135)
at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:92)
at org.wso2.carbon.identity.context.rewrite.valve.TenantContextRewriteValve.invoke(TenantContextRewriteValve.java:117)
at org.wso2.carbon.tomcat.ext.valves.SameSiteCookieValve.invoke(SameSiteCookieValve.java:38)
at org.wso2.carbon.identity.cors.valve.CORSValve.invoke(CORSValve.java:89)
at org.wso2.carbon.identity.authz.valve.AuthorizationValve.invoke(AuthorizationValve.java:117)
at org.wso2.carbon.identity.auth.valve.AuthenticationValve.invoke(AuthenticationValve.java:117)
at org.wso2.carbon.tomcat.ext.valves.CompositeValve.continueInvocation(CompositeValve.java:106)
at org.wso2.carbon.tomcat.ext.valves.TomcatValveContainer.invokeValves(TomcatValveContainer.java:49)
at org.wso2.carbon.tomcat.ext.valves.CompositeValve.invoke(CompositeValve.java:67)
at org.wso2.carbon.tomcat.ext.valves.CarbonStuckThreadDetectionValve.invoke(CarbonStuckThreadDetectionValve.java:145)
at org.apache.catalina.valves.AbstractAccessLogValve.invoke(AbstractAccessLogValve.java:687)
at org.wso2.carbon.tomcat.ext.valves.CarbonContextCreatorValve.invoke(CarbonContextCreatorValve.java:59)
at org.wso2.carbon.tomcat.ext.valves.RequestEncodingValve.invoke(RequestEncodingValve.java:49)
at org.wso2.carbon.tomcat.ext.valves.RequestCorrelationIdValve.invoke(RequestCorrelationIdValve.java:126)
at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:78)
at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:357)
at org.apache.coyote.http11.Http11Processor.service(Http11Processor.java:382)
at org.apache.coyote.AbstractProcessorLight.process(AbstractProcessorLight.java:65)
at org.apache.coyote.AbstractProtocol$ConnectionHandler.process(AbstractProtocol.java:893)
at org.apache.tomcat.util.net.NioEndpoint$SocketProcessor.doRun(NioEndpoint.java:1726)
at org.apache.tomcat.util.net.SocketProcessorBase.run(SocketProcessorBase.java:49)
at org.apache.tomcat.util.threads.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1191)
at org.apache.tomcat.util.threads.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:659)
at org.apache.tomcat.util.threads.TaskThread$WrappingRunnable.run(TaskThread.java:61)
at java.lang.Thread.run(Thread.java:748)
Caused by: org.wso2.carbon.identity.oauth.IdentityOAuthClientException: System application update is not allowed. Client id: CONSOLE
at org.wso2.identity.apps.common.listner.AppPortalOAuthAppMgtListener.doPreUpdateConsumerApplication(AppPortalOAuthAppMgtListener.java:67)
at org.wso2.carbon.identity.oauth.OAuthAdminServiceImpl.updateConsumerApplication(OAuthAdminServiceImpl.java:432)
at org.wso2.carbon.identity.oauth.OAuthAdminService.updateConsumerApplication(OAuthAdminService.java:148)
... 81 more
[2022-01-11 12:49:17,489] [8bf00788-f253-4248-b216-1f75c077f841] ERROR {org.apache.axis2.rpc.receivers.RPCMessageReceiver} - Update of system applications are not allowed. Application name: Console java.lang.reflect.InvocationTargetException
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.axis2.rpc.receivers.RPCUtil.invokeServiceClass(RPCUtil.java:212)
at org.apache.axis2.rpc.receivers.RPCMessageReceiver.invokeBusinessLogic(RPCMessageReceiver.java:117)
at org.apache.axis2.receivers.AbstractInOutMessageReceiver.invokeBusinessLogic(AbstractInOutMessageReceiver.java:40)
at org.apache.axis2.receivers.AbstractMessageReceiver.receive(AbstractMessageReceiver.java:110)
at org.apache.axis2.engine.AxisEngine.receive(AxisEngine.java:180)
at org.apache.axis2.transport.local.LocalTransportReceiver.processMessage(LocalTransportReceiver.java:170)
at org.apache.axis2.transport.local.LocalTransportReceiver.processMessage(LocalTransportReceiver.java:82)
at org.wso2.carbon.core.transports.local.CarbonLocalTransportSender.finalizeSendWithToAddress(CarbonLocalTransportSender.java:45)
at org.apache.axis2.transport.local.LocalTransportSender.invoke(LocalTransportSender.java:77)
at org.apache.axis2.engine.AxisEngine.send(AxisEngine.java:442)
at org.apache.axis2.description.OutInAxisOperationClient.send(OutInAxisOperation.java:442)
at org.apache.axis2.description.OutInAxisOperationClient.executeImpl(OutInAxisOperation.java:228)
at org.apache.axis2.client.OperationClient.execute(OperationClient.java:149)
at org.wso2.carbon.identity.application.mgt.stub.IdentityApplicationManagementServiceStub.updateApplication(IdentityApplicationManagementServiceStub.java:1052)
at org.wso2.carbon.identity.application.mgt.ui.client.ApplicationManagementServiceClient.updateApplicationData(ApplicationManagementServiceClient.java:236)
at org.apache.jsp.application.configure_002dservice_002dprovider_002dupdate_002dajaxprocessor_jsp._jspService(configure_002dservice_002dprovider_002dupdate_002dajaxprocessor_jsp.java:219)
at org.apache.jasper.runtime.HttpJspBase.service(HttpJspBase.java:70)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:764)
at org.apache.jasper.servlet.JspServletWrapper.service(JspServletWrapper.java:466)
at org.apache.jasper.servlet.JspServlet.serviceJspFile(JspServlet.java:379)
at org.apache.jasper.servlet.JspServlet.service(JspServlet.java:327)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:764)
at org.wso2.carbon.ui.JspServlet.service(JspServlet.java:207)
at org.wso2.carbon.ui.TilesJspServlet.service(TilesJspServlet.java:80)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:764)
at org.eclipse.equinox.http.helper.ContextPathServletAdaptor.service(ContextPathServletAdaptor.java:37)
at org.eclipse.equinox.http.servlet.internal.ServletRegistration.service(ServletRegistration.java:61)
at org.eclipse.equinox.http.servlet.internal.ProxyServlet.processAlias(ProxyServlet.java:128)
at org.eclipse.equinox.http.servlet.internal.ProxyServlet.service(ProxyServlet.java:68)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:764)
at org.wso2.carbon.tomcat.ext.servlet.DelegationServlet.service(DelegationServlet.java:68)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:227)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:162)
at org.apache.tomcat.websocket.server.WsFilter.doFilter(WsFilter.java:53)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:189)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:162)
at org.owasp.csrfguard.CsrfGuardFilter.doFilter(CsrfGuardFilter.java:88)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:189)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:162)
at org.apache.catalina.filters.HttpHeaderSecurityFilter.doFilter(HttpHeaderSecurityFilter.java:126)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:189)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:162)
at org.wso2.carbon.tomcat.ext.filter.CharacterSetFilter.doFilter(CharacterSetFilter.java:65)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:189)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:162)
at org.apache.catalina.filters.HttpHeaderSecurityFilter.doFilter(HttpHeaderSecurityFilter.java:126)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:189)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:162)
at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:197)
at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:97)
at org.apache.catalina.authenticator.AuthenticatorBase.invoke(AuthenticatorBase.java:540)
at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:135)
at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:92)
at org.wso2.carbon.identity.context.rewrite.valve.TenantContextRewriteValve.invoke(TenantContextRewriteValve.java:117)
at org.wso2.carbon.tomcat.ext.valves.SameSiteCookieValve.invoke(SameSiteCookieValve.java:38)
at org.wso2.carbon.identity.cors.valve.CORSValve.invoke(CORSValve.java:89)
at org.wso2.carbon.identity.authz.valve.AuthorizationValve.invoke(AuthorizationValve.java:117)
at org.wso2.carbon.identity.auth.valve.AuthenticationValve.invoke(AuthenticationValve.java:117)
at org.wso2.carbon.tomcat.ext.valves.CompositeValve.continueInvocation(CompositeValve.java:106)
at org.wso2.carbon.tomcat.ext.valves.TomcatValveContainer.invokeValves(TomcatValveContainer.java:49)
at org.wso2.carbon.tomcat.ext.valves.CompositeValve.invoke(CompositeValve.java:67)
at org.wso2.carbon.tomcat.ext.valves.CarbonStuckThreadDetectionValve.invoke(CarbonStuckThreadDetectionValve.java:145)
at org.apache.catalina.valves.AbstractAccessLogValve.invoke(AbstractAccessLogValve.java:687)
at org.wso2.carbon.tomcat.ext.valves.CarbonContextCreatorValve.invoke(CarbonContextCreatorValve.java:59)
at org.wso2.carbon.tomcat.ext.valves.RequestEncodingValve.invoke(RequestEncodingValve.java:49)
at org.wso2.carbon.tomcat.ext.valves.RequestCorrelationIdValve.invoke(RequestCorrelationIdValve.java:126)
at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:78)
at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:357)
at org.apache.coyote.http11.Http11Processor.service(Http11Processor.java:382)
at org.apache.coyote.AbstractProcessorLight.process(AbstractProcessorLight.java:65)
at org.apache.coyote.AbstractProtocol$ConnectionHandler.process(AbstractProtocol.java:893)
at org.apache.tomcat.util.net.NioEndpoint$SocketProcessor.doRun(NioEndpoint.java:1726)
at org.apache.tomcat.util.net.SocketProcessorBase.run(SocketProcessorBase.java:49)
at org.apache.tomcat.util.threads.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1191)
at org.apache.tomcat.util.threads.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:659)
at org.apache.tomcat.util.threads.TaskThread$WrappingRunnable.run(TaskThread.java:61)
at java.lang.Thread.run(Thread.java:748)
Caused by: org.wso2.carbon.identity.application.common.IdentityApplicationManagementClientException: Update of system applications are not allowed. Application name: Console
at org.wso2.identity.apps.common.listner.AppPortalApplicationMgtListener.doPreUpdateApplication(AppPortalApplicationMgtListener.java:76)
at org.wso2.carbon.identity.application.mgt.ApplicationManagementServiceImpl.updateApplication(ApplicationManagementServiceImpl.java:624)
at org.wso2.carbon.identity.application.mgt.ApplicationManagementAdminService.updateApplication(ApplicationManagementAdminService.java:397)
... 81 more
[2022-01-11 12:49:44,192] [05131212-f16e-4a48-9979-f2350800099b] ERROR {org.apache.axis2.rpc.receivers.RPCMessageReceiver} - System application update is not allowed. Client id: CONSOLE java.lang.reflect.InvocationTargetException
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.axis2.rpc.receivers.RPCUtil.invokeServiceClass(RPCUtil.java:212)
at org.apache.axis2.rpc.receivers.RPCMessageReceiver.invokeBusinessLogic(RPCMessageReceiver.java:117)
at org.apache.axis2.receivers.AbstractInOutMessageReceiver.invokeBusinessLogic(AbstractInOutMessageReceiver.java:40)
at org.apache.axis2.receivers.AbstractMessageReceiver.receive(AbstractMessageReceiver.java:110)
at org.apache.axis2.engine.AxisEngine.receive(AxisEngine.java:180)
at org.apache.axis2.transport.local.LocalTransportReceiver.processMessage(LocalTransportReceiver.java:170)
at org.apache.axis2.transport.local.LocalTransportReceiver.processMessage(LocalTransportReceiver.java:82)
at org.wso2.carbon.core.transports.local.CarbonLocalTransportSender.finalizeSendWithToAddress(CarbonLocalTransportSender.java:45)
at org.apache.axis2.transport.local.LocalTransportSender.invoke(LocalTransportSender.java:77)
at org.apache.axis2.engine.AxisEngine.send(AxisEngine.java:442)
at org.apache.axis2.description.OutInAxisOperationClient.send(OutInAxisOperation.java:442)
at org.apache.axis2.description.OutInAxisOperationClient.executeImpl(OutInAxisOperation.java:228)
at org.apache.axis2.client.OperationClient.execute(OperationClient.java:149)
at org.wso2.carbon.identity.oauth.stub.OAuthAdminServiceStub.updateConsumerApplication(OAuthAdminServiceStub.java:3708)
at org.wso2.carbon.identity.oauth.ui.client.OAuthAdminClient.updateOAuthApplicationData(OAuthAdminClient.java:123)
at org.apache.jsp.oauth.edit_002dfinish_002dajaxprocessor_jsp._jspService(edit_002dfinish_002dajaxprocessor_jsp.java:344)
at org.apache.jasper.runtime.HttpJspBase.service(HttpJspBase.java:70)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:764)
at org.apache.jasper.servlet.JspServletWrapper.service(JspServletWrapper.java:466)
at org.apache.jasper.servlet.JspServlet.serviceJspFile(JspServlet.java:379)
at org.apache.jasper.servlet.JspServlet.service(JspServlet.java:327)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:764)
at org.wso2.carbon.ui.JspServlet.service(JspServlet.java:207)
at org.wso2.carbon.ui.TilesJspServlet.service(TilesJspServlet.java:80)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:764)
at org.eclipse.equinox.http.helper.ContextPathServletAdaptor.service(ContextPathServletAdaptor.java:37)
at org.eclipse.equinox.http.servlet.internal.ServletRegistration.service(ServletRegistration.java:61)
at org.eclipse.equinox.http.servlet.internal.ProxyServlet.processAlias(ProxyServlet.java:128)
at org.eclipse.equinox.http.servlet.internal.ProxyServlet.service(ProxyServlet.java:68)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:764)
at org.wso2.carbon.tomcat.ext.servlet.DelegationServlet.service(DelegationServlet.java:68)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:227)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:162)
at org.apache.tomcat.websocket.server.WsFilter.doFilter(WsFilter.java:53)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:189)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:162)
at org.owasp.csrfguard.CsrfGuardFilter.doFilter(CsrfGuardFilter.java:88)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:189)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:162)
at org.apache.catalina.filters.HttpHeaderSecurityFilter.doFilter(HttpHeaderSecurityFilter.java:126)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:189)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:162)
at org.wso2.carbon.tomcat.ext.filter.CharacterSetFilter.doFilter(CharacterSetFilter.java:65)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:189)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:162)
at org.apache.catalina.filters.HttpHeaderSecurityFilter.doFilter(HttpHeaderSecurityFilter.java:126)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:189)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:162)
at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:197)
at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:97)
at org.apache.catalina.authenticator.AuthenticatorBase.invoke(AuthenticatorBase.java:540)
at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:135)
at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:92)
at org.wso2.carbon.identity.context.rewrite.valve.TenantContextRewriteValve.invoke(TenantContextRewriteValve.java:117)
at org.wso2.carbon.tomcat.ext.valves.SameSiteCookieValve.invoke(SameSiteCookieValve.java:38)
at org.wso2.carbon.identity.cors.valve.CORSValve.invoke(CORSValve.java:89)
at org.wso2.carbon.identity.authz.valve.AuthorizationValve.invoke(AuthorizationValve.java:117)
at org.wso2.carbon.identity.auth.valve.AuthenticationValve.invoke(AuthenticationValve.java:117)
at org.wso2.carbon.tomcat.ext.valves.CompositeValve.continueInvocation(CompositeValve.java:106)
at org.wso2.carbon.tomcat.ext.valves.TomcatValveContainer.invokeValves(TomcatValveContainer.java:49)
at org.wso2.carbon.tomcat.ext.valves.CompositeValve.invoke(CompositeValve.java:67)
at org.wso2.carbon.tomcat.ext.valves.CarbonStuckThreadDetectionValve.invoke(CarbonStuckThreadDetectionValve.java:145)
at org.apache.catalina.valves.AbstractAccessLogValve.invoke(AbstractAccessLogValve.java:687)
at org.wso2.carbon.tomcat.ext.valves.CarbonContextCreatorValve.invoke(CarbonContextCreatorValve.java:59)
at org.wso2.carbon.tomcat.ext.valves.RequestEncodingValve.invoke(RequestEncodingValve.java:49)
at org.wso2.carbon.tomcat.ext.valves.RequestCorrelationIdValve.invoke(RequestCorrelationIdValve.java:126)
at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:78)
at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:357)
at org.apache.coyote.http11.Http11Processor.service(Http11Processor.java:382)
at org.apache.coyote.AbstractProcessorLight.process(AbstractProcessorLight.java:65)
at org.apache.coyote.AbstractProtocol$ConnectionHandler.process(AbstractProtocol.java:893)
at org.apache.tomcat.util.net.NioEndpoint$SocketProcessor.doRun(NioEndpoint.java:1726)
at org.apache.tomcat.util.net.SocketProcessorBase.run(SocketProcessorBase.java:49)
at org.apache.tomcat.util.threads.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1191)
at org.apache.tomcat.util.threads.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:659)
at org.apache.tomcat.util.threads.TaskThread$WrappingRunnable.run(TaskThread.java:61)
at java.lang.Thread.run(Thread.java:748)
Caused by: org.wso2.carbon.identity.oauth.IdentityOAuthClientException: System application update is not allowed. Client id: CONSOLE
at org.wso2.identity.apps.common.listner.AppPortalOAuthAppMgtListener.doPreUpdateConsumerApplication(AppPortalOAuthAppMgtListener.java:67)
at org.wso2.carbon.identity.oauth.OAuthAdminServiceImpl.updateConsumerApplication(OAuthAdminServiceImpl.java:432)
at org.wso2.carbon.identity.oauth.OAuthAdminService.updateConsumerApplication(OAuthAdminService.java:148)
... 81 more
[2022-01-11 12:49:44,580] [b7c4964f-adbb-4038-b231-4cd39a881c13] ERROR {org.apache.axis2.rpc.receivers.RPCMessageReceiver} - Update of system applications are not allowed. Application name: Console java.lang.reflect.InvocationTargetException
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.axis2.rpc.receivers.RPCUtil.invokeServiceClass(RPCUtil.java:212)
at org.apache.axis2.rpc.receivers.RPCMessageReceiver.invokeBusinessLogic(RPCMessageReceiver.java:117)
at org.apache.axis2.receivers.AbstractInOutMessageReceiver.invokeBusinessLogic(AbstractInOutMessageReceiver.java:40)
at org.apache.axis2.receivers.AbstractMessageReceiver.receive(AbstractMessageReceiver.java:110)
at org.apache.axis2.engine.AxisEngine.receive(AxisEngine.java:180)
at org.apache.axis2.transport.local.LocalTransportReceiver.processMessage(LocalTransportReceiver.java:170)
at org.apache.axis2.transport.local.LocalTransportReceiver.processMessage(LocalTransportReceiver.java:82)
at org.wso2.carbon.core.transports.local.CarbonLocalTransportSender.finalizeSendWithToAddress(CarbonLocalTransportSender.java:45)
at org.apache.axis2.transport.local.LocalTransportSender.invoke(LocalTransportSender.java:77)
at org.apache.axis2.engine.AxisEngine.send(AxisEngine.java:442)
at org.apache.axis2.description.OutInAxisOperationClient.send(OutInAxisOperation.java:442)
at org.apache.axis2.description.OutInAxisOperationClient.executeImpl(OutInAxisOperation.java:228)
at org.apache.axis2.client.OperationClient.execute(OperationClient.java:149)
at org.wso2.carbon.identity.application.mgt.stub.IdentityApplicationManagementServiceStub.updateApplication(IdentityApplicationManagementServiceStub.java:1052)
at org.wso2.carbon.identity.application.mgt.ui.client.ApplicationManagementServiceClient.updateApplicationData(ApplicationManagementServiceClient.java:236)
at org.apache.jsp.application.configure_002dservice_002dprovider_002dupdate_002dajaxprocessor_jsp._jspService(configure_002dservice_002dprovider_002dupdate_002dajaxprocessor_jsp.java:219)
at org.apache.jasper.runtime.HttpJspBase.service(HttpJspBase.java:70)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:764)
at org.apache.jasper.servlet.JspServletWrapper.service(JspServletWrapper.java:466)
at org.apache.jasper.servlet.JspServlet.serviceJspFile(JspServlet.java:379)
at org.apache.jasper.servlet.JspServlet.service(JspServlet.java:327)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:764)
at org.wso2.carbon.ui.JspServlet.service(JspServlet.java:207)
at org.wso2.carbon.ui.TilesJspServlet.service(TilesJspServlet.java:80)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:764)
at org.eclipse.equinox.http.helper.ContextPathServletAdaptor.service(ContextPathServletAdaptor.java:37)
at org.eclipse.equinox.http.servlet.internal.ServletRegistration.service(ServletRegistration.java:61)
at org.eclipse.equinox.http.servlet.internal.ProxyServlet.processAlias(ProxyServlet.java:128)
at org.eclipse.equinox.http.servlet.internal.ProxyServlet.service(ProxyServlet.java:68)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:764)
at org.wso2.carbon.tomcat.ext.servlet.DelegationServlet.service(DelegationServlet.java:68)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:227)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:162)
at org.apache.tomcat.websocket.server.WsFilter.doFilter(WsFilter.java:53)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:189)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:162)
at org.owasp.csrfguard.CsrfGuardFilter.doFilter(CsrfGuardFilter.java:88)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:189)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:162)
at org.apache.catalina.filters.HttpHeaderSecurityFilter.doFilter(HttpHeaderSecurityFilter.java:126)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:189)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:162)
at org.wso2.carbon.tomcat.ext.filter.CharacterSetFilter.doFilter(CharacterSetFilter.java:65)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:189)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:162)
at org.apache.catalina.filters.HttpHeaderSecurityFilter.doFilter(HttpHeaderSecurityFilter.java:126)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:189)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:162)
at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:197)
at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:97)
at org.apache.catalina.authenticator.AuthenticatorBase.invoke(AuthenticatorBase.java:540)
at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:135)
at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:92)
at org.wso2.carbon.identity.context.rewrite.valve.TenantContextRewriteValve.invoke(TenantContextRewriteValve.java:117)
at org.wso2.carbon.tomcat.ext.valves.SameSiteCookieValve.invoke(SameSiteCookieValve.java:38)
at org.wso2.carbon.identity.cors.valve.CORSValve.invoke(CORSValve.java:89)
at org.wso2.carbon.identity.authz.valve.AuthorizationValve.invoke(AuthorizationValve.java:117)
at org.wso2.carbon.identity.auth.valve.AuthenticationValve.invoke(AuthenticationValve.java:117)
at org.wso2.carbon.tomcat.ext.valves.CompositeValve.continueInvocation(CompositeValve.java:106)
at org.wso2.carbon.tomcat.ext.valves.TomcatValveContainer.invokeValves(TomcatValveContainer.java:49)
at org.wso2.carbon.tomcat.ext.valves.CompositeValve.invoke(CompositeValve.java:67)
at org.wso2.carbon.tomcat.ext.valves.CarbonStuckThreadDetectionValve.invoke(CarbonStuckThreadDetectionValve.java:145)
at org.apache.catalina.valves.AbstractAccessLogValve.invoke(AbstractAccessLogValve.java:687)
at org.wso2.carbon.tomcat.ext.valves.CarbonContextCreatorValve.invoke(CarbonContextCreatorValve.java:59)
at org.wso2.carbon.tomcat.ext.valves.RequestEncodingValve.invoke(RequestEncodingValve.java:49)
at org.wso2.carbon.tomcat.ext.valves.RequestCorrelationIdValve.invoke(RequestCorrelationIdValve.java:126)
at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:78)
at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:357)
at org.apache.coyote.http11.Http11Processor.service(Http11Processor.java:382)
at org.apache.coyote.AbstractProcessorLight.process(AbstractProcessorLight.java:65)
at org.apache.coyote.AbstractProtocol$ConnectionHandler.process(AbstractProtocol.java:893)
at org.apache.tomcat.util.net.NioEndpoint$SocketProcessor.doRun(NioEndpoint.java:1726)
at org.apache.tomcat.util.net.SocketProcessorBase.run(SocketProcessorBase.java:49)
at org.apache.tomcat.util.threads.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1191)
at org.apache.tomcat.util.threads.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:659)
at org.apache.tomcat.util.threads.TaskThread$WrappingRunnable.run(TaskThread.java:61)
at java.lang.Thread.run(Thread.java:748)
Caused by: org.wso2.carbon.identity.application.common.IdentityApplicationManagementClientException: Update of system applications are not allowed. Application name: Console
at org.wso2.identity.apps.common.listner.AppPortalApplicationMgtListener.doPreUpdateApplication(AppPortalApplicationMgtListener.java:76)
at org.wso2.carbon.identity.application.mgt.ApplicationManagementServiceImpl.updateApplication(ApplicationManagementServiceImpl.java:624)
at org.wso2.carbon.identity.application.mgt.ApplicationManagementAdminService.updateApplication(ApplicationManagementAdminService.java:397)
... 81 more
```
**Environment information**
5.12.0 alpha 9
postgres 11.5
chrome 65
jdk 1.8.0_291
Ubuntu 20.04.3 LTS
|
1.0
|
Getting "Error occured while trying to update the application error" when trying to update the callback url for console application from management console env:postgres11.5 - **How to reproduce:**
1. Setup postgres 11.5 as the primary db
2. User store configured for database_unique_id
3. Go to deployment.toml and set port offset as 1
```
[server]
offset = "1"
```
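The offset in step 3 shifts every default Carbon port by the configured amount, which is why step 4 accesses the console on 9444 instead of the default 9443. A minimal sketch of that mapping (the port names and defaults below are illustrative assumptions, not taken from this issue):

```python
# Default Carbon ports (illustrative subset; actual names/values are assumptions).
DEFAULT_PORTS = {"https_servlet": 9443, "http_servlet": 9763}

def apply_offset(ports, offset):
    """Return the effective ports after applying the [server] offset value."""
    return {name: port + offset for name, port in ports.items()}

# With offset = "1" in deployment.toml, 9443 becomes 9444.
print(apply_offset(DEFAULT_PORTS, 1))
```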
4. Access the management console at https://localhost:9444/carbon/
5. Login as admin:admin
6. List service providers > Console > Oauth configs and try to edit them
7. Try to update the callback url to a different port and click on update
Getting the below errors:



```
[2022-01-11 12:49:16,708] [541fa1ec-eb87-40b3-b723-12acd77d6761] ERROR {org.apache.axis2.rpc.receivers.RPCMessageReceiver} - System application update is not allowed. Client id: CONSOLE java.lang.reflect.InvocationTargetException
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.axis2.rpc.receivers.RPCUtil.invokeServiceClass(RPCUtil.java:212)
at org.apache.axis2.rpc.receivers.RPCMessageReceiver.invokeBusinessLogic(RPCMessageReceiver.java:117)
at org.apache.axis2.receivers.AbstractInOutMessageReceiver.invokeBusinessLogic(AbstractInOutMessageReceiver.java:40)
at org.apache.axis2.receivers.AbstractMessageReceiver.receive(AbstractMessageReceiver.java:110)
at org.apache.axis2.engine.AxisEngine.receive(AxisEngine.java:180)
at org.apache.axis2.transport.local.LocalTransportReceiver.processMessage(LocalTransportReceiver.java:170)
at org.apache.axis2.transport.local.LocalTransportReceiver.processMessage(LocalTransportReceiver.java:82)
at org.wso2.carbon.core.transports.local.CarbonLocalTransportSender.finalizeSendWithToAddress(CarbonLocalTransportSender.java:45)
at org.apache.axis2.transport.local.LocalTransportSender.invoke(LocalTransportSender.java:77)
at org.apache.axis2.engine.AxisEngine.send(AxisEngine.java:442)
at org.apache.axis2.description.OutInAxisOperationClient.send(OutInAxisOperation.java:442)
at org.apache.axis2.description.OutInAxisOperationClient.executeImpl(OutInAxisOperation.java:228)
at org.apache.axis2.client.OperationClient.execute(OperationClient.java:149)
at org.wso2.carbon.identity.oauth.stub.OAuthAdminServiceStub.updateConsumerApplication(OAuthAdminServiceStub.java:3708)
at org.wso2.carbon.identity.oauth.ui.client.OAuthAdminClient.updateOAuthApplicationData(OAuthAdminClient.java:123)
at org.apache.jsp.oauth.edit_002dfinish_002dajaxprocessor_jsp._jspService(edit_002dfinish_002dajaxprocessor_jsp.java:344)
at org.apache.jasper.runtime.HttpJspBase.service(HttpJspBase.java:70)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:764)
at org.apache.jasper.servlet.JspServletWrapper.service(JspServletWrapper.java:466)
at org.apache.jasper.servlet.JspServlet.serviceJspFile(JspServlet.java:379)
at org.apache.jasper.servlet.JspServlet.service(JspServlet.java:327)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:764)
at org.wso2.carbon.ui.JspServlet.service(JspServlet.java:207)
at org.wso2.carbon.ui.TilesJspServlet.service(TilesJspServlet.java:80)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:764)
at org.eclipse.equinox.http.helper.ContextPathServletAdaptor.service(ContextPathServletAdaptor.java:37)
at org.eclipse.equinox.http.servlet.internal.ServletRegistration.service(ServletRegistration.java:61)
at org.eclipse.equinox.http.servlet.internal.ProxyServlet.processAlias(ProxyServlet.java:128)
at org.eclipse.equinox.http.servlet.internal.ProxyServlet.service(ProxyServlet.java:68)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:764)
at org.wso2.carbon.tomcat.ext.servlet.DelegationServlet.service(DelegationServlet.java:68)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:227)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:162)
at org.apache.tomcat.websocket.server.WsFilter.doFilter(WsFilter.java:53)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:189)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:162)
at org.owasp.csrfguard.CsrfGuardFilter.doFilter(CsrfGuardFilter.java:88)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:189)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:162)
at org.apache.catalina.filters.HttpHeaderSecurityFilter.doFilter(HttpHeaderSecurityFilter.java:126)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:189)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:162)
at org.wso2.carbon.tomcat.ext.filter.CharacterSetFilter.doFilter(CharacterSetFilter.java:65)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:189)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:162)
at org.apache.catalina.filters.HttpHeaderSecurityFilter.doFilter(HttpHeaderSecurityFilter.java:126)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:189)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:162)
at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:197)
at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:97)
at org.apache.catalina.authenticator.AuthenticatorBase.invoke(AuthenticatorBase.java:540)
at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:135)
at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:92)
at org.wso2.carbon.identity.context.rewrite.valve.TenantContextRewriteValve.invoke(TenantContextRewriteValve.java:117)
at org.wso2.carbon.tomcat.ext.valves.SameSiteCookieValve.invoke(SameSiteCookieValve.java:38)
at org.wso2.carbon.identity.cors.valve.CORSValve.invoke(CORSValve.java:89)
at org.wso2.carbon.identity.authz.valve.AuthorizationValve.invoke(AuthorizationValve.java:117)
at org.wso2.carbon.identity.auth.valve.AuthenticationValve.invoke(AuthenticationValve.java:117)
at org.wso2.carbon.tomcat.ext.valves.CompositeValve.continueInvocation(CompositeValve.java:106)
at org.wso2.carbon.tomcat.ext.valves.TomcatValveContainer.invokeValves(TomcatValveContainer.java:49)
at org.wso2.carbon.tomcat.ext.valves.CompositeValve.invoke(CompositeValve.java:67)
at org.wso2.carbon.tomcat.ext.valves.CarbonStuckThreadDetectionValve.invoke(CarbonStuckThreadDetectionValve.java:145)
at org.apache.catalina.valves.AbstractAccessLogValve.invoke(AbstractAccessLogValve.java:687)
at org.wso2.carbon.tomcat.ext.valves.CarbonContextCreatorValve.invoke(CarbonContextCreatorValve.java:59)
at org.wso2.carbon.tomcat.ext.valves.RequestEncodingValve.invoke(RequestEncodingValve.java:49)
at org.wso2.carbon.tomcat.ext.valves.RequestCorrelationIdValve.invoke(RequestCorrelationIdValve.java:126)
at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:78)
at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:357)
at org.apache.coyote.http11.Http11Processor.service(Http11Processor.java:382)
at org.apache.coyote.AbstractProcessorLight.process(AbstractProcessorLight.java:65)
at org.apache.coyote.AbstractProtocol$ConnectionHandler.process(AbstractProtocol.java:893)
at org.apache.tomcat.util.net.NioEndpoint$SocketProcessor.doRun(NioEndpoint.java:1726)
at org.apache.tomcat.util.net.SocketProcessorBase.run(SocketProcessorBase.java:49)
at org.apache.tomcat.util.threads.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1191)
at org.apache.tomcat.util.threads.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:659)
at org.apache.tomcat.util.threads.TaskThread$WrappingRunnable.run(TaskThread.java:61)
at java.lang.Thread.run(Thread.java:748)
Caused by: org.wso2.carbon.identity.oauth.IdentityOAuthClientException: System application update is not allowed. Client id: CONSOLE
at org.wso2.identity.apps.common.listner.AppPortalOAuthAppMgtListener.doPreUpdateConsumerApplication(AppPortalOAuthAppMgtListener.java:67)
at org.wso2.carbon.identity.oauth.OAuthAdminServiceImpl.updateConsumerApplication(OAuthAdminServiceImpl.java:432)
at org.wso2.carbon.identity.oauth.OAuthAdminService.updateConsumerApplication(OAuthAdminService.java:148)
... 81 more
[2022-01-11 12:49:17,489] [8bf00788-f253-4248-b216-1f75c077f841] ERROR {org.apache.axis2.rpc.receivers.RPCMessageReceiver} - Update of system applications are not allowed. Application name: Console java.lang.reflect.InvocationTargetException
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.axis2.rpc.receivers.RPCUtil.invokeServiceClass(RPCUtil.java:212)
at org.apache.axis2.rpc.receivers.RPCMessageReceiver.invokeBusinessLogic(RPCMessageReceiver.java:117)
at org.apache.axis2.receivers.AbstractInOutMessageReceiver.invokeBusinessLogic(AbstractInOutMessageReceiver.java:40)
at org.apache.axis2.receivers.AbstractMessageReceiver.receive(AbstractMessageReceiver.java:110)
at org.apache.axis2.engine.AxisEngine.receive(AxisEngine.java:180)
at org.apache.axis2.transport.local.LocalTransportReceiver.processMessage(LocalTransportReceiver.java:170)
at org.apache.axis2.transport.local.LocalTransportReceiver.processMessage(LocalTransportReceiver.java:82)
at org.wso2.carbon.core.transports.local.CarbonLocalTransportSender.finalizeSendWithToAddress(CarbonLocalTransportSender.java:45)
at org.apache.axis2.transport.local.LocalTransportSender.invoke(LocalTransportSender.java:77)
at org.apache.axis2.engine.AxisEngine.send(AxisEngine.java:442)
at org.apache.axis2.description.OutInAxisOperationClient.send(OutInAxisOperation.java:442)
at org.apache.axis2.description.OutInAxisOperationClient.executeImpl(OutInAxisOperation.java:228)
at org.apache.axis2.client.OperationClient.execute(OperationClient.java:149)
at org.wso2.carbon.identity.application.mgt.stub.IdentityApplicationManagementServiceStub.updateApplication(IdentityApplicationManagementServiceStub.java:1052)
at org.wso2.carbon.identity.application.mgt.ui.client.ApplicationManagementServiceClient.updateApplicationData(ApplicationManagementServiceClient.java:236)
at org.apache.jsp.application.configure_002dservice_002dprovider_002dupdate_002dajaxprocessor_jsp._jspService(configure_002dservice_002dprovider_002dupdate_002dajaxprocessor_jsp.java:219)
at org.apache.jasper.runtime.HttpJspBase.service(HttpJspBase.java:70)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:764)
at org.apache.jasper.servlet.JspServletWrapper.service(JspServletWrapper.java:466)
at org.apache.jasper.servlet.JspServlet.serviceJspFile(JspServlet.java:379)
at org.apache.jasper.servlet.JspServlet.service(JspServlet.java:327)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:764)
at org.wso2.carbon.ui.JspServlet.service(JspServlet.java:207)
at org.wso2.carbon.ui.TilesJspServlet.service(TilesJspServlet.java:80)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:764)
at org.eclipse.equinox.http.helper.ContextPathServletAdaptor.service(ContextPathServletAdaptor.java:37)
at org.eclipse.equinox.http.servlet.internal.ServletRegistration.service(ServletRegistration.java:61)
at org.eclipse.equinox.http.servlet.internal.ProxyServlet.processAlias(ProxyServlet.java:128)
at org.eclipse.equinox.http.servlet.internal.ProxyServlet.service(ProxyServlet.java:68)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:764)
at org.wso2.carbon.tomcat.ext.servlet.DelegationServlet.service(DelegationServlet.java:68)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:227)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:162)
at org.apache.tomcat.websocket.server.WsFilter.doFilter(WsFilter.java:53)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:189)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:162)
at org.owasp.csrfguard.CsrfGuardFilter.doFilter(CsrfGuardFilter.java:88)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:189)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:162)
at org.apache.catalina.filters.HttpHeaderSecurityFilter.doFilter(HttpHeaderSecurityFilter.java:126)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:189)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:162)
at org.wso2.carbon.tomcat.ext.filter.CharacterSetFilter.doFilter(CharacterSetFilter.java:65)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:189)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:162)
at org.apache.catalina.filters.HttpHeaderSecurityFilter.doFilter(HttpHeaderSecurityFilter.java:126)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:189)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:162)
at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:197)
at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:97)
at org.apache.catalina.authenticator.AuthenticatorBase.invoke(AuthenticatorBase.java:540)
at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:135)
at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:92)
at org.wso2.carbon.identity.context.rewrite.valve.TenantContextRewriteValve.invoke(TenantContextRewriteValve.java:117)
at org.wso2.carbon.tomcat.ext.valves.SameSiteCookieValve.invoke(SameSiteCookieValve.java:38)
at org.wso2.carbon.identity.cors.valve.CORSValve.invoke(CORSValve.java:89)
at org.wso2.carbon.identity.authz.valve.AuthorizationValve.invoke(AuthorizationValve.java:117)
at org.wso2.carbon.identity.auth.valve.AuthenticationValve.invoke(AuthenticationValve.java:117)
at org.wso2.carbon.tomcat.ext.valves.CompositeValve.continueInvocation(CompositeValve.java:106)
at org.wso2.carbon.tomcat.ext.valves.TomcatValveContainer.invokeValves(TomcatValveContainer.java:49)
at org.wso2.carbon.tomcat.ext.valves.CompositeValve.invoke(CompositeValve.java:67)
at org.wso2.carbon.tomcat.ext.valves.CarbonStuckThreadDetectionValve.invoke(CarbonStuckThreadDetectionValve.java:145)
at org.apache.catalina.valves.AbstractAccessLogValve.invoke(AbstractAccessLogValve.java:687)
at org.wso2.carbon.tomcat.ext.valves.CarbonContextCreatorValve.invoke(CarbonContextCreatorValve.java:59)
at org.wso2.carbon.tomcat.ext.valves.RequestEncodingValve.invoke(RequestEncodingValve.java:49)
at org.wso2.carbon.tomcat.ext.valves.RequestCorrelationIdValve.invoke(RequestCorrelationIdValve.java:126)
at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:78)
at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:357)
at org.apache.coyote.http11.Http11Processor.service(Http11Processor.java:382)
at org.apache.coyote.AbstractProcessorLight.process(AbstractProcessorLight.java:65)
at org.apache.coyote.AbstractProtocol$ConnectionHandler.process(AbstractProtocol.java:893)
at org.apache.tomcat.util.net.NioEndpoint$SocketProcessor.doRun(NioEndpoint.java:1726)
at org.apache.tomcat.util.net.SocketProcessorBase.run(SocketProcessorBase.java:49)
at org.apache.tomcat.util.threads.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1191)
at org.apache.tomcat.util.threads.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:659)
at org.apache.tomcat.util.threads.TaskThread$WrappingRunnable.run(TaskThread.java:61)
at java.lang.Thread.run(Thread.java:748)
Caused by: org.wso2.carbon.identity.application.common.IdentityApplicationManagementClientException: Update of system applications are not allowed. Application name: Console
at org.wso2.identity.apps.common.listner.AppPortalApplicationMgtListener.doPreUpdateApplication(AppPortalApplicationMgtListener.java:76)
at org.wso2.carbon.identity.application.mgt.ApplicationManagementServiceImpl.updateApplication(ApplicationManagementServiceImpl.java:624)
at org.wso2.carbon.identity.application.mgt.ApplicationManagementAdminService.updateApplication(ApplicationManagementAdminService.java:397)
... 81 more
[2022-01-11 12:49:44,192] [05131212-f16e-4a48-9979-f2350800099b] ERROR {org.apache.axis2.rpc.receivers.RPCMessageReceiver} - System application update is not allowed. Client id: CONSOLE java.lang.reflect.InvocationTargetException
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.axis2.rpc.receivers.RPCUtil.invokeServiceClass(RPCUtil.java:212)
at org.apache.axis2.rpc.receivers.RPCMessageReceiver.invokeBusinessLogic(RPCMessageReceiver.java:117)
at org.apache.axis2.receivers.AbstractInOutMessageReceiver.invokeBusinessLogic(AbstractInOutMessageReceiver.java:40)
at org.apache.axis2.receivers.AbstractMessageReceiver.receive(AbstractMessageReceiver.java:110)
at org.apache.axis2.engine.AxisEngine.receive(AxisEngine.java:180)
at org.apache.axis2.transport.local.LocalTransportReceiver.processMessage(LocalTransportReceiver.java:170)
at org.apache.axis2.transport.local.LocalTransportReceiver.processMessage(LocalTransportReceiver.java:82)
at org.wso2.carbon.core.transports.local.CarbonLocalTransportSender.finalizeSendWithToAddress(CarbonLocalTransportSender.java:45)
at org.apache.axis2.transport.local.LocalTransportSender.invoke(LocalTransportSender.java:77)
at org.apache.axis2.engine.AxisEngine.send(AxisEngine.java:442)
at org.apache.axis2.description.OutInAxisOperationClient.send(OutInAxisOperation.java:442)
at org.apache.axis2.description.OutInAxisOperationClient.executeImpl(OutInAxisOperation.java:228)
at org.apache.axis2.client.OperationClient.execute(OperationClient.java:149)
at org.wso2.carbon.identity.oauth.stub.OAuthAdminServiceStub.updateConsumerApplication(OAuthAdminServiceStub.java:3708)
at org.wso2.carbon.identity.oauth.ui.client.OAuthAdminClient.updateOAuthApplicationData(OAuthAdminClient.java:123)
at org.apache.jsp.oauth.edit_002dfinish_002dajaxprocessor_jsp._jspService(edit_002dfinish_002dajaxprocessor_jsp.java:344)
at org.apache.jasper.runtime.HttpJspBase.service(HttpJspBase.java:70)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:764)
at org.apache.jasper.servlet.JspServletWrapper.service(JspServletWrapper.java:466)
at org.apache.jasper.servlet.JspServlet.serviceJspFile(JspServlet.java:379)
at org.apache.jasper.servlet.JspServlet.service(JspServlet.java:327)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:764)
at org.wso2.carbon.ui.JspServlet.service(JspServlet.java:207)
at org.wso2.carbon.ui.TilesJspServlet.service(TilesJspServlet.java:80)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:764)
at org.eclipse.equinox.http.helper.ContextPathServletAdaptor.service(ContextPathServletAdaptor.java:37)
at org.eclipse.equinox.http.servlet.internal.ServletRegistration.service(ServletRegistration.java:61)
at org.eclipse.equinox.http.servlet.internal.ProxyServlet.processAlias(ProxyServlet.java:128)
at org.eclipse.equinox.http.servlet.internal.ProxyServlet.service(ProxyServlet.java:68)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:764)
at org.wso2.carbon.tomcat.ext.servlet.DelegationServlet.service(DelegationServlet.java:68)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:227)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:162)
at org.apache.tomcat.websocket.server.WsFilter.doFilter(WsFilter.java:53)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:189)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:162)
at org.owasp.csrfguard.CsrfGuardFilter.doFilter(CsrfGuardFilter.java:88)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:189)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:162)
at org.apache.catalina.filters.HttpHeaderSecurityFilter.doFilter(HttpHeaderSecurityFilter.java:126)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:189)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:162)
at org.wso2.carbon.tomcat.ext.filter.CharacterSetFilter.doFilter(CharacterSetFilter.java:65)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:189)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:162)
at org.apache.catalina.filters.HttpHeaderSecurityFilter.doFilter(HttpHeaderSecurityFilter.java:126)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:189)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:162)
at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:197)
at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:97)
at org.apache.catalina.authenticator.AuthenticatorBase.invoke(AuthenticatorBase.java:540)
at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:135)
at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:92)
at org.wso2.carbon.identity.context.rewrite.valve.TenantContextRewriteValve.invoke(TenantContextRewriteValve.java:117)
at org.wso2.carbon.tomcat.ext.valves.SameSiteCookieValve.invoke(SameSiteCookieValve.java:38)
at org.wso2.carbon.identity.cors.valve.CORSValve.invoke(CORSValve.java:89)
at org.wso2.carbon.identity.authz.valve.AuthorizationValve.invoke(AuthorizationValve.java:117)
at org.wso2.carbon.identity.auth.valve.AuthenticationValve.invoke(AuthenticationValve.java:117)
at org.wso2.carbon.tomcat.ext.valves.CompositeValve.continueInvocation(CompositeValve.java:106)
at org.wso2.carbon.tomcat.ext.valves.TomcatValveContainer.invokeValves(TomcatValveContainer.java:49)
at org.wso2.carbon.tomcat.ext.valves.CompositeValve.invoke(CompositeValve.java:67)
at org.wso2.carbon.tomcat.ext.valves.CarbonStuckThreadDetectionValve.invoke(CarbonStuckThreadDetectionValve.java:145)
at org.apache.catalina.valves.AbstractAccessLogValve.invoke(AbstractAccessLogValve.java:687)
at org.wso2.carbon.tomcat.ext.valves.CarbonContextCreatorValve.invoke(CarbonContextCreatorValve.java:59)
at org.wso2.carbon.tomcat.ext.valves.RequestEncodingValve.invoke(RequestEncodingValve.java:49)
at org.wso2.carbon.tomcat.ext.valves.RequestCorrelationIdValve.invoke(RequestCorrelationIdValve.java:126)
at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:78)
at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:357)
at org.apache.coyote.http11.Http11Processor.service(Http11Processor.java:382)
at org.apache.coyote.AbstractProcessorLight.process(AbstractProcessorLight.java:65)
at org.apache.coyote.AbstractProtocol$ConnectionHandler.process(AbstractProtocol.java:893)
at org.apache.tomcat.util.net.NioEndpoint$SocketProcessor.doRun(NioEndpoint.java:1726)
at org.apache.tomcat.util.net.SocketProcessorBase.run(SocketProcessorBase.java:49)
at org.apache.tomcat.util.threads.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1191)
at org.apache.tomcat.util.threads.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:659)
at org.apache.tomcat.util.threads.TaskThread$WrappingRunnable.run(TaskThread.java:61)
at java.lang.Thread.run(Thread.java:748)
Caused by: org.wso2.carbon.identity.oauth.IdentityOAuthClientException: System application update is not allowed. Client id: CONSOLE
at org.wso2.identity.apps.common.listner.AppPortalOAuthAppMgtListener.doPreUpdateConsumerApplication(AppPortalOAuthAppMgtListener.java:67)
at org.wso2.carbon.identity.oauth.OAuthAdminServiceImpl.updateConsumerApplication(OAuthAdminServiceImpl.java:432)
at org.wso2.carbon.identity.oauth.OAuthAdminService.updateConsumerApplication(OAuthAdminService.java:148)
... 81 more
[2022-01-11 12:49:44,580] [b7c4964f-adbb-4038-b231-4cd39a881c13] ERROR {org.apache.axis2.rpc.receivers.RPCMessageReceiver} - Update of system applications are not allowed. Application name: Console java.lang.reflect.InvocationTargetException
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.axis2.rpc.receivers.RPCUtil.invokeServiceClass(RPCUtil.java:212)
at org.apache.axis2.rpc.receivers.RPCMessageReceiver.invokeBusinessLogic(RPCMessageReceiver.java:117)
at org.apache.axis2.receivers.AbstractInOutMessageReceiver.invokeBusinessLogic(AbstractInOutMessageReceiver.java:40)
at org.apache.axis2.receivers.AbstractMessageReceiver.receive(AbstractMessageReceiver.java:110)
at org.apache.axis2.engine.AxisEngine.receive(AxisEngine.java:180)
at org.apache.axis2.transport.local.LocalTransportReceiver.processMessage(LocalTransportReceiver.java:170)
at org.apache.axis2.transport.local.LocalTransportReceiver.processMessage(LocalTransportReceiver.java:82)
at org.wso2.carbon.core.transports.local.CarbonLocalTransportSender.finalizeSendWithToAddress(CarbonLocalTransportSender.java:45)
at org.apache.axis2.transport.local.LocalTransportSender.invoke(LocalTransportSender.java:77)
at org.apache.axis2.engine.AxisEngine.send(AxisEngine.java:442)
at org.apache.axis2.description.OutInAxisOperationClient.send(OutInAxisOperation.java:442)
at org.apache.axis2.description.OutInAxisOperationClient.executeImpl(OutInAxisOperation.java:228)
at org.apache.axis2.client.OperationClient.execute(OperationClient.java:149)
at org.wso2.carbon.identity.application.mgt.stub.IdentityApplicationManagementServiceStub.updateApplication(IdentityApplicationManagementServiceStub.java:1052)
at org.wso2.carbon.identity.application.mgt.ui.client.ApplicationManagementServiceClient.updateApplicationData(ApplicationManagementServiceClient.java:236)
at org.apache.jsp.application.configure_002dservice_002dprovider_002dupdate_002dajaxprocessor_jsp._jspService(configure_002dservice_002dprovider_002dupdate_002dajaxprocessor_jsp.java:219)
at org.apache.jasper.runtime.HttpJspBase.service(HttpJspBase.java:70)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:764)
at org.apache.jasper.servlet.JspServletWrapper.service(JspServletWrapper.java:466)
at org.apache.jasper.servlet.JspServlet.serviceJspFile(JspServlet.java:379)
at org.apache.jasper.servlet.JspServlet.service(JspServlet.java:327)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:764)
at org.wso2.carbon.ui.JspServlet.service(JspServlet.java:207)
at org.wso2.carbon.ui.TilesJspServlet.service(TilesJspServlet.java:80)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:764)
at org.eclipse.equinox.http.helper.ContextPathServletAdaptor.service(ContextPathServletAdaptor.java:37)
at org.eclipse.equinox.http.servlet.internal.ServletRegistration.service(ServletRegistration.java:61)
at org.eclipse.equinox.http.servlet.internal.ProxyServlet.processAlias(ProxyServlet.java:128)
at org.eclipse.equinox.http.servlet.internal.ProxyServlet.service(ProxyServlet.java:68)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:764)
at org.wso2.carbon.tomcat.ext.servlet.DelegationServlet.service(DelegationServlet.java:68)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:227)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:162)
at org.apache.tomcat.websocket.server.WsFilter.doFilter(WsFilter.java:53)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:189)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:162)
at org.owasp.csrfguard.CsrfGuardFilter.doFilter(CsrfGuardFilter.java:88)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:189)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:162)
at org.apache.catalina.filters.HttpHeaderSecurityFilter.doFilter(HttpHeaderSecurityFilter.java:126)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:189)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:162)
at org.wso2.carbon.tomcat.ext.filter.CharacterSetFilter.doFilter(CharacterSetFilter.java:65)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:189)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:162)
at org.apache.catalina.filters.HttpHeaderSecurityFilter.doFilter(HttpHeaderSecurityFilter.java:126)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:189)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:162)
at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:197)
at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:97)
at org.apache.catalina.authenticator.AuthenticatorBase.invoke(AuthenticatorBase.java:540)
at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:135)
at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:92)
at org.wso2.carbon.identity.context.rewrite.valve.TenantContextRewriteValve.invoke(TenantContextRewriteValve.java:117)
at org.wso2.carbon.tomcat.ext.valves.SameSiteCookieValve.invoke(SameSiteCookieValve.java:38)
at org.wso2.carbon.identity.cors.valve.CORSValve.invoke(CORSValve.java:89)
at org.wso2.carbon.identity.authz.valve.AuthorizationValve.invoke(AuthorizationValve.java:117)
at org.wso2.carbon.identity.auth.valve.AuthenticationValve.invoke(AuthenticationValve.java:117)
at org.wso2.carbon.tomcat.ext.valves.CompositeValve.continueInvocation(CompositeValve.java:106)
at org.wso2.carbon.tomcat.ext.valves.TomcatValveContainer.invokeValves(TomcatValveContainer.java:49)
at org.wso2.carbon.tomcat.ext.valves.CompositeValve.invoke(CompositeValve.java:67)
at org.wso2.carbon.tomcat.ext.valves.CarbonStuckThreadDetectionValve.invoke(CarbonStuckThreadDetectionValve.java:145)
at org.apache.catalina.valves.AbstractAccessLogValve.invoke(AbstractAccessLogValve.java:687)
at org.wso2.carbon.tomcat.ext.valves.CarbonContextCreatorValve.invoke(CarbonContextCreatorValve.java:59)
at org.wso2.carbon.tomcat.ext.valves.RequestEncodingValve.invoke(RequestEncodingValve.java:49)
at org.wso2.carbon.tomcat.ext.valves.RequestCorrelationIdValve.invoke(RequestCorrelationIdValve.java:126)
at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:78)
at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:357)
at org.apache.coyote.http11.Http11Processor.service(Http11Processor.java:382)
at org.apache.coyote.AbstractProcessorLight.process(AbstractProcessorLight.java:65)
at org.apache.coyote.AbstractProtocol$ConnectionHandler.process(AbstractProtocol.java:893)
at org.apache.tomcat.util.net.NioEndpoint$SocketProcessor.doRun(NioEndpoint.java:1726)
at org.apache.tomcat.util.net.SocketProcessorBase.run(SocketProcessorBase.java:49)
at org.apache.tomcat.util.threads.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1191)
at org.apache.tomcat.util.threads.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:659)
at org.apache.tomcat.util.threads.TaskThread$WrappingRunnable.run(TaskThread.java:61)
at java.lang.Thread.run(Thread.java:748)
Caused by: org.wso2.carbon.identity.application.common.IdentityApplicationManagementClientException: Update of system applications are not allowed. Application name: Console
at org.wso2.identity.apps.common.listner.AppPortalApplicationMgtListener.doPreUpdateApplication(AppPortalApplicationMgtListener.java:76)
at org.wso2.carbon.identity.application.mgt.ApplicationManagementServiceImpl.updateApplication(ApplicationManagementServiceImpl.java:624)
at org.wso2.carbon.identity.application.mgt.ApplicationManagementAdminService.updateApplication(ApplicationManagementAdminService.java:397)
... 81 more
```
**Environment information**
- WSO2 Identity Server 5.12.0 alpha 9
- PostgreSQL 11.5
- Chrome 65
- JDK 1.8.0_291
- Ubuntu 20.04.3 LTS
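For context on the failure: the `Caused by` frames show that `AppPortalOAuthAppMgtListener.doPreUpdateConsumerApplication` (and the analogous application-mgt listener) rejects the update before it ever reaches `OAuthAdminServiceImpl`, because `CONSOLE` is treated as a system application. A minimal sketch of that guard, assuming a hard-coded set of portal client ids — the class and method names come from the trace above, but the set contents and exception type here are assumptions, not WSO2 source:

```java
import java.util.Arrays;
import java.util.HashSet;
import java.util.Locale;
import java.util.Set;

public class SystemAppGuardSketch {

    // Client ids of the built-in portal apps; "CONSOLE" is the one named in
    // the error above, "MY_ACCOUNT" is assumed to be protected the same way.
    static final Set<String> SYSTEM_APP_CLIENT_IDS =
            new HashSet<>(Arrays.asList("CONSOLE", "MY_ACCOUNT"));

    static boolean isSystemApp(String clientId) {
        return clientId != null
                && SYSTEM_APP_CLIENT_IDS.contains(clientId.toUpperCase(Locale.ROOT));
    }

    // Mirrors the doPreUpdateConsumerApplication guard seen in the trace:
    // the update is rejected up front, so editing the callback URL of the
    // Console app from the management console can never succeed.
    static void doPreUpdateConsumerApplication(String clientId) {
        if (isSystemApp(clientId)) {
            throw new IllegalStateException(
                    "System application update is not allowed. Client id: " + clientId);
        }
    }

    public static void main(String[] args) {
        try {
            doPreUpdateConsumerApplication("CONSOLE");
        } catch (IllegalStateException e) {
            System.out.println(e.getMessage());
        }
    }
}
```

If this reading is right, the error is the listener working as designed, and the question is whether the Console callback URL after a port-offset change should be fixed through configuration rather than through the management console UI.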
service httpservlet java at org carbon ui jspservlet service jspservlet java at org carbon ui tilesjspservlet service tilesjspservlet java at javax servlet http httpservlet service httpservlet java at org eclipse equinox http helper contextpathservletadaptor service contextpathservletadaptor java at org eclipse equinox http servlet internal servletregistration service servletregistration java at org eclipse equinox http servlet internal proxyservlet processalias proxyservlet java at org eclipse equinox http servlet internal proxyservlet service proxyservlet java at javax servlet http httpservlet service httpservlet java at org carbon tomcat ext servlet delegationservlet service delegationservlet java at org apache catalina core applicationfilterchain internaldofilter applicationfilterchain java at org apache catalina core applicationfilterchain dofilter applicationfilterchain java at org apache tomcat websocket server wsfilter dofilter wsfilter java at org apache catalina core applicationfilterchain internaldofilter applicationfilterchain java at org apache catalina core applicationfilterchain dofilter applicationfilterchain java at org owasp csrfguard csrfguardfilter dofilter csrfguardfilter java at org apache catalina core applicationfilterchain internaldofilter applicationfilterchain java at org apache catalina core applicationfilterchain dofilter applicationfilterchain java at org apache catalina filters httpheadersecurityfilter dofilter httpheadersecurityfilter java at org apache catalina core applicationfilterchain internaldofilter applicationfilterchain java at org apache catalina core applicationfilterchain dofilter applicationfilterchain java at org carbon tomcat ext filter charactersetfilter dofilter charactersetfilter java at org apache catalina core applicationfilterchain internaldofilter applicationfilterchain java at org apache catalina core applicationfilterchain dofilter applicationfilterchain java at org apache catalina filters 
httpheadersecurityfilter dofilter httpheadersecurityfilter java at org apache catalina core applicationfilterchain internaldofilter applicationfilterchain java at org apache catalina core applicationfilterchain dofilter applicationfilterchain java at org apache catalina core standardwrappervalve invoke standardwrappervalve java at org apache catalina core standardcontextvalve invoke standardcontextvalve java at org apache catalina authenticator authenticatorbase invoke authenticatorbase java at org apache catalina core standardhostvalve invoke standardhostvalve java at org apache catalina valves errorreportvalve invoke errorreportvalve java at org carbon identity context rewrite valve tenantcontextrewritevalve invoke tenantcontextrewritevalve java at org carbon tomcat ext valves samesitecookievalve invoke samesitecookievalve java at org carbon identity cors valve corsvalve invoke corsvalve java at org carbon identity authz valve authorizationvalve invoke authorizationvalve java at org carbon identity auth valve authenticationvalve invoke authenticationvalve java at org carbon tomcat ext valves compositevalve continueinvocation compositevalve java at org carbon tomcat ext valves tomcatvalvecontainer invokevalves tomcatvalvecontainer java at org carbon tomcat ext valves compositevalve invoke compositevalve java at org carbon tomcat ext valves carbonstuckthreaddetectionvalve invoke carbonstuckthreaddetectionvalve java at org apache catalina valves abstractaccesslogvalve invoke abstractaccesslogvalve java at org carbon tomcat ext valves carboncontextcreatorvalve invoke carboncontextcreatorvalve java at org carbon tomcat ext valves requestencodingvalve invoke requestencodingvalve java at org carbon tomcat ext valves requestcorrelationidvalve invoke requestcorrelationidvalve java at org apache catalina core standardenginevalve invoke standardenginevalve java at org apache catalina connector coyoteadapter service coyoteadapter java at org apache coyote service java at org 
apache coyote abstractprocessorlight process abstractprocessorlight java at org apache coyote abstractprotocol connectionhandler process abstractprotocol java at org apache tomcat util net nioendpoint socketprocessor dorun nioendpoint java at org apache tomcat util net socketprocessorbase run socketprocessorbase java at org apache tomcat util threads threadpoolexecutor runworker threadpoolexecutor java at org apache tomcat util threads threadpoolexecutor worker run threadpoolexecutor java at org apache tomcat util threads taskthread wrappingrunnable run taskthread java at java lang thread run thread java caused by org carbon identity application common identityapplicationmanagementclientexception update of system applications are not allowed application name console at org identity apps common listner appportalapplicationmgtlistener dopreupdateapplication appportalapplicationmgtlistener java at org carbon identity application mgt applicationmanagementserviceimpl updateapplication applicationmanagementserviceimpl java at org carbon identity application mgt applicationmanagementadminservice updateapplication applicationmanagementadminservice java more error org apache rpc receivers rpcmessagereceiver system application update is not allowed client id console java lang reflect invocationtargetexception at sun reflect nativemethodaccessorimpl native method at sun reflect nativemethodaccessorimpl invoke nativemethodaccessorimpl java at sun reflect delegatingmethodaccessorimpl invoke delegatingmethodaccessorimpl java at java lang reflect method invoke method java at org apache rpc receivers rpcutil invokeserviceclass rpcutil java at org apache rpc receivers rpcmessagereceiver invokebusinesslogic rpcmessagereceiver java at org apache receivers abstractinoutmessagereceiver invokebusinesslogic abstractinoutmessagereceiver java at org apache receivers abstractmessagereceiver receive abstractmessagereceiver java at org apache engine axisengine receive axisengine java at org 
apache transport local localtransportreceiver processmessage localtransportreceiver java at org apache transport local localtransportreceiver processmessage localtransportreceiver java at org carbon core transports local carbonlocaltransportsender finalizesendwithtoaddress carbonlocaltransportsender java at org apache transport local localtransportsender invoke localtransportsender java at org apache engine axisengine send axisengine java at org apache description outinaxisoperationclient send outinaxisoperation java at org apache description outinaxisoperationclient executeimpl outinaxisoperation java at org apache client operationclient execute operationclient java at org carbon identity oauth stub oauthadminservicestub updateconsumerapplication oauthadminservicestub java at org carbon identity oauth ui client oauthadminclient updateoauthapplicationdata oauthadminclient java at org apache jsp oauth edit jsp jspservice edit jsp java at org apache jasper runtime httpjspbase service httpjspbase java at javax servlet http httpservlet service httpservlet java at org apache jasper servlet jspservletwrapper service jspservletwrapper java at org apache jasper servlet jspservlet servicejspfile jspservlet java at org apache jasper servlet jspservlet service jspservlet java at javax servlet http httpservlet service httpservlet java at org carbon ui jspservlet service jspservlet java at org carbon ui tilesjspservlet service tilesjspservlet java at javax servlet http httpservlet service httpservlet java at org eclipse equinox http helper contextpathservletadaptor service contextpathservletadaptor java at org eclipse equinox http servlet internal servletregistration service servletregistration java at org eclipse equinox http servlet internal proxyservlet processalias proxyservlet java at org eclipse equinox http servlet internal proxyservlet service proxyservlet java at javax servlet http httpservlet service httpservlet java at org carbon tomcat ext servlet delegationservlet 
service delegationservlet java at org apache catalina core applicationfilterchain internaldofilter applicationfilterchain java at org apache catalina core applicationfilterchain dofilter applicationfilterchain java at org apache tomcat websocket server wsfilter dofilter wsfilter java at org apache catalina core applicationfilterchain internaldofilter applicationfilterchain java at org apache catalina core applicationfilterchain dofilter applicationfilterchain java at org owasp csrfguard csrfguardfilter dofilter csrfguardfilter java at org apache catalina core applicationfilterchain internaldofilter applicationfilterchain java at org apache catalina core applicationfilterchain dofilter applicationfilterchain java at org apache catalina filters httpheadersecurityfilter dofilter httpheadersecurityfilter java at org apache catalina core applicationfilterchain internaldofilter applicationfilterchain java at org apache catalina core applicationfilterchain dofilter applicationfilterchain java at org carbon tomcat ext filter charactersetfilter dofilter charactersetfilter java at org apache catalina core applicationfilterchain internaldofilter applicationfilterchain java at org apache catalina core applicationfilterchain dofilter applicationfilterchain java at org apache catalina filters httpheadersecurityfilter dofilter httpheadersecurityfilter java at org apache catalina core applicationfilterchain internaldofilter applicationfilterchain java at org apache catalina core applicationfilterchain dofilter applicationfilterchain java at org apache catalina core standardwrappervalve invoke standardwrappervalve java at org apache catalina core standardcontextvalve invoke standardcontextvalve java at org apache catalina authenticator authenticatorbase invoke authenticatorbase java at org apache catalina core standardhostvalve invoke standardhostvalve java at org apache catalina valves errorreportvalve invoke errorreportvalve java at org carbon identity context rewrite valve 
tenantcontextrewritevalve invoke tenantcontextrewritevalve java at org carbon tomcat ext valves samesitecookievalve invoke samesitecookievalve java at org carbon identity cors valve corsvalve invoke corsvalve java at org carbon identity authz valve authorizationvalve invoke authorizationvalve java at org carbon identity auth valve authenticationvalve invoke authenticationvalve java at org carbon tomcat ext valves compositevalve continueinvocation compositevalve java at org carbon tomcat ext valves tomcatvalvecontainer invokevalves tomcatvalvecontainer java at org carbon tomcat ext valves compositevalve invoke compositevalve java at org carbon tomcat ext valves carbonstuckthreaddetectionvalve invoke carbonstuckthreaddetectionvalve java at org apache catalina valves abstractaccesslogvalve invoke abstractaccesslogvalve java at org carbon tomcat ext valves carboncontextcreatorvalve invoke carboncontextcreatorvalve java at org carbon tomcat ext valves requestencodingvalve invoke requestencodingvalve java at org carbon tomcat ext valves requestcorrelationidvalve invoke requestcorrelationidvalve java at org apache catalina core standardenginevalve invoke standardenginevalve java at org apache catalina connector coyoteadapter service coyoteadapter java at org apache coyote service java at org apache coyote abstractprocessorlight process abstractprocessorlight java at org apache coyote abstractprotocol connectionhandler process abstractprotocol java at org apache tomcat util net nioendpoint socketprocessor dorun nioendpoint java at org apache tomcat util net socketprocessorbase run socketprocessorbase java at org apache tomcat util threads threadpoolexecutor runworker threadpoolexecutor java at org apache tomcat util threads threadpoolexecutor worker run threadpoolexecutor java at org apache tomcat util threads taskthread wrappingrunnable run taskthread java at java lang thread run thread java caused by org carbon identity oauth identityoauthclientexception system 
application update is not allowed client id console at org identity apps common listner appportaloauthappmgtlistener dopreupdateconsumerapplication appportaloauthappmgtlistener java at org carbon identity oauth oauthadminserviceimpl updateconsumerapplication oauthadminserviceimpl java at org carbon identity oauth oauthadminservice updateconsumerapplication oauthadminservice java more error org apache rpc receivers rpcmessagereceiver update of system applications are not allowed application name console java lang reflect invocationtargetexception at sun reflect nativemethodaccessorimpl native method at sun reflect nativemethodaccessorimpl invoke nativemethodaccessorimpl java at sun reflect delegatingmethodaccessorimpl invoke delegatingmethodaccessorimpl java at java lang reflect method invoke method java at org apache rpc receivers rpcutil invokeserviceclass rpcutil java at org apache rpc receivers rpcmessagereceiver invokebusinesslogic rpcmessagereceiver java at org apache receivers abstractinoutmessagereceiver invokebusinesslogic abstractinoutmessagereceiver java at org apache receivers abstractmessagereceiver receive abstractmessagereceiver java at org apache engine axisengine receive axisengine java at org apache transport local localtransportreceiver processmessage localtransportreceiver java at org apache transport local localtransportreceiver processmessage localtransportreceiver java at org carbon core transports local carbonlocaltransportsender finalizesendwithtoaddress carbonlocaltransportsender java at org apache transport local localtransportsender invoke localtransportsender java at org apache engine axisengine send axisengine java at org apache description outinaxisoperationclient send outinaxisoperation java at org apache description outinaxisoperationclient executeimpl outinaxisoperation java at org apache client operationclient execute operationclient java at org carbon identity application mgt stub identityapplicationmanagementservicestub 
updateapplication identityapplicationmanagementservicestub java at org carbon identity application mgt ui client applicationmanagementserviceclient updateapplicationdata applicationmanagementserviceclient java at org apache jsp application configure jsp jspservice configure jsp java at org apache jasper runtime httpjspbase service httpjspbase java at javax servlet http httpservlet service httpservlet java at org apache jasper servlet jspservletwrapper service jspservletwrapper java at org apache jasper servlet jspservlet servicejspfile jspservlet java at org apache jasper servlet jspservlet service jspservlet java at javax servlet http httpservlet service httpservlet java at org carbon ui jspservlet service jspservlet java at org carbon ui tilesjspservlet service tilesjspservlet java at javax servlet http httpservlet service httpservlet java at org eclipse equinox http helper contextpathservletadaptor service contextpathservletadaptor java at org eclipse equinox http servlet internal servletregistration service servletregistration java at org eclipse equinox http servlet internal proxyservlet processalias proxyservlet java at org eclipse equinox http servlet internal proxyservlet service proxyservlet java at javax servlet http httpservlet service httpservlet java at org carbon tomcat ext servlet delegationservlet service delegationservlet java at org apache catalina core applicationfilterchain internaldofilter applicationfilterchain java at org apache catalina core applicationfilterchain dofilter applicationfilterchain java at org apache tomcat websocket server wsfilter dofilter wsfilter java at org apache catalina core applicationfilterchain internaldofilter applicationfilterchain java at org apache catalina core applicationfilterchain dofilter applicationfilterchain java at org owasp csrfguard csrfguardfilter dofilter csrfguardfilter java at org apache catalina core applicationfilterchain internaldofilter applicationfilterchain java at org apache catalina core 
applicationfilterchain dofilter applicationfilterchain java at org apache catalina filters httpheadersecurityfilter dofilter httpheadersecurityfilter java at org apache catalina core applicationfilterchain internaldofilter applicationfilterchain java at org apache catalina core applicationfilterchain dofilter applicationfilterchain java at org carbon tomcat ext filter charactersetfilter dofilter charactersetfilter java at org apache catalina core applicationfilterchain internaldofilter applicationfilterchain java at org apache catalina core applicationfilterchain dofilter applicationfilterchain java at org apache catalina filters httpheadersecurityfilter dofilter httpheadersecurityfilter java at org apache catalina core applicationfilterchain internaldofilter applicationfilterchain java at org apache catalina core applicationfilterchain dofilter applicationfilterchain java at org apache catalina core standardwrappervalve invoke standardwrappervalve java at org apache catalina core standardcontextvalve invoke standardcontextvalve java at org apache catalina authenticator authenticatorbase invoke authenticatorbase java at org apache catalina core standardhostvalve invoke standardhostvalve java at org apache catalina valves errorreportvalve invoke errorreportvalve java at org carbon identity context rewrite valve tenantcontextrewritevalve invoke tenantcontextrewritevalve java at org carbon tomcat ext valves samesitecookievalve invoke samesitecookievalve java at org carbon identity cors valve corsvalve invoke corsvalve java at org carbon identity authz valve authorizationvalve invoke authorizationvalve java at org carbon identity auth valve authenticationvalve invoke authenticationvalve java at org carbon tomcat ext valves compositevalve continueinvocation compositevalve java at org carbon tomcat ext valves tomcatvalvecontainer invokevalves tomcatvalvecontainer java at org carbon tomcat ext valves compositevalve invoke compositevalve java at org carbon tomcat ext 
valves carbonstuckthreaddetectionvalve invoke carbonstuckthreaddetectionvalve java at org apache catalina valves abstractaccesslogvalve invoke abstractaccesslogvalve java at org carbon tomcat ext valves carboncontextcreatorvalve invoke carboncontextcreatorvalve java at org carbon tomcat ext valves requestencodingvalve invoke requestencodingvalve java at org carbon tomcat ext valves requestcorrelationidvalve invoke requestcorrelationidvalve java at org apache catalina core standardenginevalve invoke standardenginevalve java at org apache catalina connector coyoteadapter service coyoteadapter java at org apache coyote service java at org apache coyote abstractprocessorlight process abstractprocessorlight java at org apache coyote abstractprotocol connectionhandler process abstractprotocol java at org apache tomcat util net nioendpoint socketprocessor dorun nioendpoint java at org apache tomcat util net socketprocessorbase run socketprocessorbase java at org apache tomcat util threads threadpoolexecutor runworker threadpoolexecutor java at org apache tomcat util threads threadpoolexecutor worker run threadpoolexecutor java at org apache tomcat util threads taskthread wrappingrunnable run taskthread java at java lang thread run thread java caused by org carbon identity application common identityapplicationmanagementclientexception update of system applications are not allowed application name console at org identity apps common listner appportalapplicationmgtlistener dopreupdateapplication appportalapplicationmgtlistener java at org carbon identity application mgt applicationmanagementserviceimpl updateapplication applicationmanagementserviceimpl java at org carbon identity application mgt applicationmanagementadminservice updateapplication applicationmanagementadminservice java more environment information alpha postgres chrome jdk ubuntu lts
| 0
|
12,959
| 15,340,202,130
|
IssuesEvent
|
2021-02-27 05:48:27
|
raxod502/straight.el
|
https://api.github.com/repos/raxod502/straight.el
|
closed
|
To hide straight-process buffer by default
|
customize process buffer support waiting on response
|
Hello,
I wanted to disable the *straight-process* buffer by default and tried to customize it by adding a leading space to the name, but nothing happened. Would you mind correcting me?
```
(custom-set-variables
'(straight-process-buffer " *straight-process*"))
```
Thanks in advance!
|
1.0
|
To hide straight-process buffer by default - Hello,
I wanted to disable the *straight-process* buffer by default and tried to customize it by adding a leading space to the name, but nothing happened. Would you mind correcting me?
```
(custom-set-variables
'(straight-process-buffer " *straight-process*"))
```
Thanks in advance!
|
process
|
to hide straight process buffer by default hello i wanted to disable straight process buffer by default and tried to customize with adding a leading space to the name nothing happened would you mind correcting me custom set variables straight process buffer straight process thanks in advance
| 1
|
5,270
| 8,059,543,486
|
IssuesEvent
|
2018-08-02 22:27:17
|
edgi-govdata-archiving/web-monitoring
|
https://api.github.com/repos/edgi-govdata-archiving/web-monitoring
|
closed
|
Implement dependency monitoring
|
db processing ui
|
Implement services to monitor for package updates, dependency conflicts, security alerts, etc.
Possible services:
https://gemnasium.com/
https://libraries.io/
https://github.com/apps/greenkeeper
Should we create separate issues for each repo?
|
1.0
|
Implement dependency monitoring - Implement services to monitor for package updates, dependency conflicts, security alerts, etc.
Possible services:
https://gemnasium.com/
https://libraries.io/
https://github.com/apps/greenkeeper
Should we create separate issues for each repo?
|
process
|
implement dependency monitoring implement services to monitoring for package updates dependency conflicts security alerts etc possible services should create separate issues for each repo
| 1
|
12,889
| 15,280,393,332
|
IssuesEvent
|
2021-02-23 06:14:01
|
topcoder-platform/community-app
|
https://api.github.com/repos/topcoder-platform/community-app
|
opened
|
Placement of search bar
|
P3 ShapeupProcess challenge- recommender-tool
|
Placement of search bar has moved
<img width="1440" alt="Screenshot 2021-02-23 at 11 42 21 AM" src="https://user-images.githubusercontent.com/58783823/108808313-5da7fa80-75cc-11eb-9a0d-159b5dbfec78.png">
|
1.0
|
Placement of search bar - Placement of search bar has moved
<img width="1440" alt="Screenshot 2021-02-23 at 11 42 21 AM" src="https://user-images.githubusercontent.com/58783823/108808313-5da7fa80-75cc-11eb-9a0d-159b5dbfec78.png">
|
process
|
placement of search bar placement of search bar has moved img width alt screenshot at am src
| 1
|
299,320
| 25,896,153,384
|
IssuesEvent
|
2022-12-14 22:42:17
|
cockroachdb/cockroach
|
https://api.github.com/repos/cockroachdb/cockroach
|
closed
|
roachtest: tpcc/multiregion/survive=region/chaos=true failed
|
C-test-failure O-robot O-roachtest branch-release-22.2
|
roachtest.tpcc/multiregion/survive=region/chaos=true [failed](https://teamcity.cockroachdb.com/buildConfiguration/Cockroach_Nightlies_RoachtestNightlyGceBazel/7952917?buildTab=log) with [artifacts](https://teamcity.cockroachdb.com/buildConfiguration/Cockroach_Nightlies_RoachtestNightlyGceBazel/7952917?buildTab=artifacts#/tpcc/multiregion/survive=region/chaos=true) on release-22.2 @ [bd54db769eed3e1f2a91cf63fcbb2d36182f0901](https://github.com/cockroachdb/cockroach/commits/bd54db769eed3e1f2a91cf63fcbb2d36182f0901):
```
| github.com/cockroachdb/cockroach/pkg/roachprod/install/cluster_synced.go:1435
| github.com/cockroachdb/cockroach/pkg/roachprod/prometheus.Init
| github.com/cockroachdb/cockroach/pkg/roachprod/prometheus/prometheus.go:253
| github.com/cockroachdb/cockroach/pkg/roachprod.StartGrafana
| github.com/cockroachdb/cockroach/pkg/roachprod/roachprod.go:1400
| main.(*clusterImpl).StartGrafana
| main/pkg/cmd/roachtest/cluster.go:2441
| github.com/cockroachdb/cockroach/pkg/cmd/roachtest/tests.setupPrometheusForTPCC
| github.com/cockroachdb/cockroach/pkg/cmd/roachtest/tests/tpcc.go:1548
| github.com/cockroachdb/cockroach/pkg/cmd/roachtest/tests.runTPCC
| github.com/cockroachdb/cockroach/pkg/cmd/roachtest/tests/tpcc.go:205
| github.com/cockroachdb/cockroach/pkg/cmd/roachtest/tests.registerTPCC.func10
| github.com/cockroachdb/cockroach/pkg/cmd/roachtest/tests/tpcc.go:670
| main.(*testRunner).runTest.func2
| main/pkg/cmd/roachtest/test_runner.go:930
Wraps: (2) syncedCluster.PutString
Wraps: (3) attached stack trace
-- stack trace:
| github.com/cockroachdb/cockroach/pkg/roachprod/install.(*SyncedCluster).Put
| github.com/cockroachdb/cockroach/pkg/roachprod/install/cluster_synced.go:1679
| github.com/cockroachdb/cockroach/pkg/roachprod/install.(*SyncedCluster).PutString
| github.com/cockroachdb/cockroach/pkg/roachprod/install/cluster_synced.go:1435
| github.com/cockroachdb/cockroach/pkg/roachprod/prometheus.Init
| github.com/cockroachdb/cockroach/pkg/roachprod/prometheus/prometheus.go:253
| github.com/cockroachdb/cockroach/pkg/roachprod.StartGrafana
| github.com/cockroachdb/cockroach/pkg/roachprod/roachprod.go:1400
| main.(*clusterImpl).StartGrafana
| main/pkg/cmd/roachtest/cluster.go:2441
| github.com/cockroachdb/cockroach/pkg/cmd/roachtest/tests.setupPrometheusForTPCC
| github.com/cockroachdb/cockroach/pkg/cmd/roachtest/tests/tpcc.go:1548
| github.com/cockroachdb/cockroach/pkg/cmd/roachtest/tests.runTPCC
| github.com/cockroachdb/cockroach/pkg/cmd/roachtest/tests/tpcc.go:205
| github.com/cockroachdb/cockroach/pkg/cmd/roachtest/tests.registerTPCC.func10
| github.com/cockroachdb/cockroach/pkg/cmd/roachtest/tests/tpcc.go:670
| [...repeated from below...]
Wraps: (4) put /tmp/prometheus.yml1069566885 failed
Wraps: (5) attached stack trace
-- stack trace:
| github.com/cockroachdb/cockroach/pkg/roachprod/install.(*SyncedCluster).scp
| github.com/cockroachdb/cockroach/pkg/roachprod/install/cluster_synced.go:2146
| github.com/cockroachdb/cockroach/pkg/roachprod/install.(*SyncedCluster).Put.func3
| github.com/cockroachdb/cockroach/pkg/roachprod/install/cluster_synced.go:1588
| runtime.goexit
| GOROOT/src/runtime/asm_amd64.s:1594
Wraps: (6) ~ scp -r -C -o StrictHostKeyChecking=no -i /home/roach/.ssh/id_rsa -i /home/roach/.ssh/google_compute_engine /tmp/prometheus.yml1069566885 ubuntu@35.231.32.39:/tmp/prometheus/prometheus.yml
| Warning: Permanently added '35.231.32.39' (ECDSA) to the list of known hosts.
| client_loop: send disconnect: Broken pipe
| lost connection
Wraps: (7) exit status 1
Error types: (1) *withstack.withStack (2) *errutil.withPrefix (3) *withstack.withStack (4) *errutil.withPrefix (5) *withstack.withStack (6) *errutil.withPrefix (7) *exec.ExitError
```
<p>Parameters: <code>ROACHTEST_cloud=gce</code>
, <code>ROACHTEST_cpu=4</code>
, <code>ROACHTEST_encrypted=true</code>
, <code>ROACHTEST_fs=ext4</code>
, <code>ROACHTEST_localSSD=true</code>
, <code>ROACHTEST_ssd=0</code>
</p>
<details><summary>Help</summary>
<p>
See: [roachtest README](https://github.com/cockroachdb/cockroach/blob/master/pkg/cmd/roachtest/README.md)
See: [How To Investigate \(internal\)](https://cockroachlabs.atlassian.net/l/c/SSSBr8c7)
</p>
</details>
/cc @cockroachdb/multiregion
<sub>
[This test on roachdash](https://roachdash.crdb.dev/?filter=status:open%20t:.*tpcc/multiregion/survive=region/chaos=true.*&sort=title+created&display=lastcommented+project) | [Improve this report!](https://github.com/cockroachdb/cockroach/tree/master/pkg/cmd/internal/issues)
</sub>
Jira issue: CRDB-22442
|
2.0
|
roachtest: tpcc/multiregion/survive=region/chaos=true failed - roachtest.tpcc/multiregion/survive=region/chaos=true [failed](https://teamcity.cockroachdb.com/buildConfiguration/Cockroach_Nightlies_RoachtestNightlyGceBazel/7952917?buildTab=log) with [artifacts](https://teamcity.cockroachdb.com/buildConfiguration/Cockroach_Nightlies_RoachtestNightlyGceBazel/7952917?buildTab=artifacts#/tpcc/multiregion/survive=region/chaos=true) on release-22.2 @ [bd54db769eed3e1f2a91cf63fcbb2d36182f0901](https://github.com/cockroachdb/cockroach/commits/bd54db769eed3e1f2a91cf63fcbb2d36182f0901):
```
| github.com/cockroachdb/cockroach/pkg/roachprod/install/cluster_synced.go:1435
| github.com/cockroachdb/cockroach/pkg/roachprod/prometheus.Init
| github.com/cockroachdb/cockroach/pkg/roachprod/prometheus/prometheus.go:253
| github.com/cockroachdb/cockroach/pkg/roachprod.StartGrafana
| github.com/cockroachdb/cockroach/pkg/roachprod/roachprod.go:1400
| main.(*clusterImpl).StartGrafana
| main/pkg/cmd/roachtest/cluster.go:2441
| github.com/cockroachdb/cockroach/pkg/cmd/roachtest/tests.setupPrometheusForTPCC
| github.com/cockroachdb/cockroach/pkg/cmd/roachtest/tests/tpcc.go:1548
| github.com/cockroachdb/cockroach/pkg/cmd/roachtest/tests.runTPCC
| github.com/cockroachdb/cockroach/pkg/cmd/roachtest/tests/tpcc.go:205
| github.com/cockroachdb/cockroach/pkg/cmd/roachtest/tests.registerTPCC.func10
| github.com/cockroachdb/cockroach/pkg/cmd/roachtest/tests/tpcc.go:670
| main.(*testRunner).runTest.func2
| main/pkg/cmd/roachtest/test_runner.go:930
Wraps: (2) syncedCluster.PutString
Wraps: (3) attached stack trace
-- stack trace:
| github.com/cockroachdb/cockroach/pkg/roachprod/install.(*SyncedCluster).Put
| github.com/cockroachdb/cockroach/pkg/roachprod/install/cluster_synced.go:1679
| github.com/cockroachdb/cockroach/pkg/roachprod/install.(*SyncedCluster).PutString
| github.com/cockroachdb/cockroach/pkg/roachprod/install/cluster_synced.go:1435
| github.com/cockroachdb/cockroach/pkg/roachprod/prometheus.Init
| github.com/cockroachdb/cockroach/pkg/roachprod/prometheus/prometheus.go:253
| github.com/cockroachdb/cockroach/pkg/roachprod.StartGrafana
| github.com/cockroachdb/cockroach/pkg/roachprod/roachprod.go:1400
| main.(*clusterImpl).StartGrafana
| main/pkg/cmd/roachtest/cluster.go:2441
| github.com/cockroachdb/cockroach/pkg/cmd/roachtest/tests.setupPrometheusForTPCC
| github.com/cockroachdb/cockroach/pkg/cmd/roachtest/tests/tpcc.go:1548
| github.com/cockroachdb/cockroach/pkg/cmd/roachtest/tests.runTPCC
| github.com/cockroachdb/cockroach/pkg/cmd/roachtest/tests/tpcc.go:205
| github.com/cockroachdb/cockroach/pkg/cmd/roachtest/tests.registerTPCC.func10
| github.com/cockroachdb/cockroach/pkg/cmd/roachtest/tests/tpcc.go:670
| [...repeated from below...]
Wraps: (4) put /tmp/prometheus.yml1069566885 failed
Wraps: (5) attached stack trace
-- stack trace:
| github.com/cockroachdb/cockroach/pkg/roachprod/install.(*SyncedCluster).scp
| github.com/cockroachdb/cockroach/pkg/roachprod/install/cluster_synced.go:2146
| github.com/cockroachdb/cockroach/pkg/roachprod/install.(*SyncedCluster).Put.func3
| github.com/cockroachdb/cockroach/pkg/roachprod/install/cluster_synced.go:1588
| runtime.goexit
| GOROOT/src/runtime/asm_amd64.s:1594
Wraps: (6) ~ scp -r -C -o StrictHostKeyChecking=no -i /home/roach/.ssh/id_rsa -i /home/roach/.ssh/google_compute_engine /tmp/prometheus.yml1069566885 ubuntu@35.231.32.39:/tmp/prometheus/prometheus.yml
| Warning: Permanently added '35.231.32.39' (ECDSA) to the list of known hosts.
| client_loop: send disconnect: Broken pipe
| lost connection
Wraps: (7) exit status 1
Error types: (1) *withstack.withStack (2) *errutil.withPrefix (3) *withstack.withStack (4) *errutil.withPrefix (5) *withstack.withStack (6) *errutil.withPrefix (7) *exec.ExitError
```
<p>Parameters: <code>ROACHTEST_cloud=gce</code>
, <code>ROACHTEST_cpu=4</code>
, <code>ROACHTEST_encrypted=true</code>
, <code>ROACHTEST_fs=ext4</code>
, <code>ROACHTEST_localSSD=true</code>
, <code>ROACHTEST_ssd=0</code>
</p>
<details><summary>Help</summary>
<p>
See: [roachtest README](https://github.com/cockroachdb/cockroach/blob/master/pkg/cmd/roachtest/README.md)
See: [How To Investigate \(internal\)](https://cockroachlabs.atlassian.net/l/c/SSSBr8c7)
</p>
</details>
/cc @cockroachdb/multiregion
<sub>
[This test on roachdash](https://roachdash.crdb.dev/?filter=status:open%20t:.*tpcc/multiregion/survive=region/chaos=true.*&sort=title+created&display=lastcommented+project) | [Improve this report!](https://github.com/cockroachdb/cockroach/tree/master/pkg/cmd/internal/issues)
</sub>
Jira issue: CRDB-22442
|
non_process
|
roachtest tpcc multiregion survive region chaos true failed roachtest tpcc multiregion survive region chaos true with on release github com cockroachdb cockroach pkg roachprod install cluster synced go github com cockroachdb cockroach pkg roachprod prometheus init github com cockroachdb cockroach pkg roachprod prometheus prometheus go github com cockroachdb cockroach pkg roachprod startgrafana github com cockroachdb cockroach pkg roachprod roachprod go main clusterimpl startgrafana main pkg cmd roachtest cluster go github com cockroachdb cockroach pkg cmd roachtest tests setupprometheusfortpcc github com cockroachdb cockroach pkg cmd roachtest tests tpcc go github com cockroachdb cockroach pkg cmd roachtest tests runtpcc github com cockroachdb cockroach pkg cmd roachtest tests tpcc go github com cockroachdb cockroach pkg cmd roachtest tests registertpcc github com cockroachdb cockroach pkg cmd roachtest tests tpcc go main testrunner runtest main pkg cmd roachtest test runner go wraps syncedcluster putstring wraps attached stack trace stack trace github com cockroachdb cockroach pkg roachprod install syncedcluster put github com cockroachdb cockroach pkg roachprod install cluster synced go github com cockroachdb cockroach pkg roachprod install syncedcluster putstring github com cockroachdb cockroach pkg roachprod install cluster synced go github com cockroachdb cockroach pkg roachprod prometheus init github com cockroachdb cockroach pkg roachprod prometheus prometheus go github com cockroachdb cockroach pkg roachprod startgrafana github com cockroachdb cockroach pkg roachprod roachprod go main clusterimpl startgrafana main pkg cmd roachtest cluster go github com cockroachdb cockroach pkg cmd roachtest tests setupprometheusfortpcc github com cockroachdb cockroach pkg cmd roachtest tests tpcc go github com cockroachdb cockroach pkg cmd roachtest tests runtpcc github com cockroachdb cockroach pkg cmd roachtest tests tpcc go github com cockroachdb cockroach pkg cmd 
roachtest tests registertpcc github com cockroachdb cockroach pkg cmd roachtest tests tpcc go wraps put tmp prometheus failed wraps attached stack trace stack trace github com cockroachdb cockroach pkg roachprod install syncedcluster scp github com cockroachdb cockroach pkg roachprod install cluster synced go github com cockroachdb cockroach pkg roachprod install syncedcluster put github com cockroachdb cockroach pkg roachprod install cluster synced go runtime goexit goroot src runtime asm s wraps scp r c o stricthostkeychecking no i home roach ssh id rsa i home roach ssh google compute engine tmp prometheus ubuntu tmp prometheus prometheus yml warning permanently added ecdsa to the list of known hosts client loop send disconnect broken pipe lost connection wraps exit status error types withstack withstack errutil withprefix withstack withstack errutil withprefix withstack withstack errutil withprefix exec exiterror parameters roachtest cloud gce roachtest cpu roachtest encrypted true roachtest fs roachtest localssd true roachtest ssd help see see cc cockroachdb multiregion jira issue crdb
| 0
|
5,430
| 8,290,396,252
|
IssuesEvent
|
2018-09-19 17:14:16
|
aspnet/IISIntegration
|
https://api.github.com/repos/aspnet/IISIntegration
|
closed
|
Implement IHttpLifetimeFeature
|
cost: L enhancement in-process
|
Currently the abort logic only works for managed to native. When native sends a signal for ClientDisconnected, we need to stop reading and writing and cleanup the connection once fired.
|
1.0
|
Implement IHttpLifetimeFeature - Currently the abort logic only works for managed to native. When native sends a signal for ClientDisconnected, we need to stop reading and writing and cleanup the connection once fired.
|
process
|
implement ihttplifetimefeature currently the abort logic only works for managed to native when native sends a signal for clientdisconnected we need to stop reading and writing and cleanup the connection once fired
| 1
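The IISIntegration row above describes an abort flow: when the native layer signals ClientDisconnected, the managed side must stop reading and writing and clean the connection up exactly once. A minimal, hypothetical sketch of that once-only pattern (not the actual IISIntegration code — the class and method names here are illustrative):

```java
import java.util.concurrent.atomic.AtomicBoolean;

public class ConnectionLifetime {
    private final AtomicBoolean aborted = new AtomicBoolean(false);
    private int cleanupCount = 0;

    // Called from the native-to-managed signal path on ClientDisconnected.
    public void onClientDisconnected() {
        // compareAndSet flips false -> true at most once, so even if the
        // signal fires repeatedly, I/O is stopped and cleaned up one time.
        if (aborted.compareAndSet(false, true)) {
            stopReadsAndWrites();
            cleanup();
        }
    }

    public boolean isAborted() { return aborted.get(); }
    public int cleanupCount() { return cleanupCount; }

    private void stopReadsAndWrites() { /* cancel pending reads/writes */ }
    private void cleanup() { cleanupCount++; /* release buffers, close handles */ }
}
```

The atomic flag is one common way to make "cleanup the connection once fired" race-safe when the disconnect signal can arrive on a different thread than the I/O loop.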
|
1,102
| 3,576,051,949
|
IssuesEvent
|
2016-01-27 18:03:40
|
geneontology/go-ontology
|
https://api.github.com/repos/geneontology/go-ontology
|
opened
|
NTR: vasodilation by nitric oxide
|
BHF-UCL miRNA New term request RNA processes
|
Dear Biocurators,
I am writing to request a new GO term, which arose whilst annotating paper PMID: 21768538 (Wu et al., 2011).
It is demonstrated in Figure 6 in this paper that miR-92a expression affects the regulation of vasodilation by nitric oxide.
Therefore, I wish to request the term: ‘vasodilation by nitric oxide’
I am planning to subsequently request regulation terms via TermGenie in order to capture the effect of miR-92a on this process.
These terms would subsequently become siblings to terms such as (as well as others):
GO:0003121: regulation of vasodilation by epinephrine
GO:0003122: regulation of vasodilation by norepinephrine
GO:0003124: regulation of vasodilation by neuronal epinephrine
DbxREFs: GOC:BHF, GOC:BHF_miRNA, GOC:bc
I will look forward to hearing from you with regard to my request.
Thank you,
Barbara
cc: @RLovering
cc: @rachhuntley
|
1.0
|
NTR: vasodilation by nitric oxide - Dear Biocurators,
I am writing to request a new GO term, which arose whilst annotating paper PMID: 21768538 (Wu et al., 2011).
It is demonstrated in Figure 6 in this paper that miR-92a expression affects the regulation of vasodilation by nitric oxide.
Therefore, I wish to request the term: ‘vasodilation by nitric oxide’
I am planning to subsequently request regulation terms via TermGenie in order to capture the effect of miR-92a on this process.
These terms would subsequently become siblings to terms such as (as well as others):
GO:0003121: regulation of vasodilation by epinephrine
GO:0003122: regulation of vasodilation by norepinephrine
GO:0003124: regulation of vasodilation by neuronal epinephrine
DbxREFs: GOC:BHF, GOC:BHF_miRNA, GOC:bc
I will look forward to hearing from you with regard to my request.
Thank you,
Barbara
cc: @RLovering
cc: @rachhuntley
|
process
|
ntr vasodilation by nitric oxide dear biocurators i am writing to request a new go term which arose whilst annotating paper pmid wu et al it is demonstrated in figure in this paper that mir expression affects the regulation of vasodilation by nitric oxide therefore i wish to request the term ‘vasodilation by nitric oxide’ i am planning to subsequently request regulation terms via termgenie in order to capture the effect of mir on this process these terms would subsequently become siblings to terms such as as well as others go regulation of vasodilation by epinephrine go regulation of vasodilation by norepinephrine go regulation of vasodilation by neuronal epinephrine dbxrefs goc bhf goc bhf mirna goc bc i will look forward to hearing from you with regard to my request thank you barbara cc rlovering cc rachhuntley
| 1
|
172,289
| 6,501,382,818
|
IssuesEvent
|
2017-08-23 09:24:39
|
aleastChs/scalajs-google-charts
|
https://api.github.com/repos/aleastChs/scalajs-google-charts
|
closed
|
setOnLoadCallback
|
bug facade PRIORITY: HIGH
|
Error: GoogleChartsLoaded is undefined
Fix: Implement google.setOnLoadCallback(function...)
|
1.0
|
setOnLoadCallback - Error: GoogleChartsLoaded is undefined
Fix: Implement google.setOnLoadCallback(function...)
|
non_process
|
setonloadcallback error googlechartsloaded is undefined fix implement google setonloadcallback function
| 0
|
3,898
| 6,821,591,722
|
IssuesEvent
|
2017-11-07 17:12:13
|
ontop/ontop
|
https://api.github.com/repos/ontop/ontop
|
opened
|
Interface for mapping serializer
|
status: accepted topic: mapping processing type: enhancement
|
Design an interface for the mapping serializer, which will work both for R2RML and the Ontop native mapping language.
|
1.0
|
Interface for mapping serializer - Design an interface for the mapping serializer, which will work both for R2RML and the Ontop native mapping language.
|
process
|
interface for mapping serializer design a interface for the mapping serializer which will work both for and the ontop native mapping language
| 1
|
651,763
| 21,509,632,774
|
IssuesEvent
|
2022-04-28 02:02:14
|
wso2/product-microgateway
|
https://api.github.com/repos/wso2/product-microgateway
|
opened
|
Null "InvocationContext " in Java sample interceptor service
|
Type/Bug Priority/Normal
|
### Description:
Null is returned for "InvocationContext" from sample Java interceptor service when `invocation_context` is added in `includes` section of OAS.
#### Temporary fix
Change `@JsonProperty("InvocationContext") -> @JsonProperty("invocationContext")` in
https://github.com/wso2/product-microgateway/blob/d92ea73e255b3642e0f12eb44619b981d0013d35/samples/interceptors/java/spring-server-generated/src/main/java/io/swagger/model/RequestHandlerRequestBody.java#L31
### Steps to reproduce:
- Add sample java interceptor service
- Additionally add `invocation_context` in includes
- Check invocation_context field in the model object.
```
x-wso2-request-interceptor:
serviceURL: [http|https]://<host>[:<port>]
includes: # any of following
- request_headers
- request_body
- request_trailers
- invocation_context
```
### Affected Product Version:
Choreo Connect 1.0.0
### Environment details (with versions):
- OS:
- Client:
- Env (Docker/K8s):
---
### Optional Fields
#### Related Issues:
<!-- Any related issues from this/other repositories-->
#### Suggested Labels:
<!--Only to be used by non-members-->
#### Suggested Assignees:
<!--Only to be used by non-members-->
|
1.0
|
Null "InvocationContext " in Java sample interceptor service - ### Description:
Null is returned for "InvocationContext" from sample Java interceptor service when `invocation_context` is added in `includes` section of OAS.
#### Temporary fix
Change `@JsonProperty("InvocationContext") -> @JsonProperty("invocationContext")` in
https://github.com/wso2/product-microgateway/blob/d92ea73e255b3642e0f12eb44619b981d0013d35/samples/interceptors/java/spring-server-generated/src/main/java/io/swagger/model/RequestHandlerRequestBody.java#L31
### Steps to reproduce:
- Add sample java interceptor service
- Additionally add `invocation_context` in includes
- Check invocation_context field in the model object.
```
x-wso2-request-interceptor:
serviceURL: [http|https]://<host>[:<port>]
includes: # any of following
- request_headers
- request_body
- request_trailers
- invocation_context
```
### Affected Product Version:
Choreo Connect 1.0.0
### Environment details (with versions):
- OS:
- Client:
- Env (Docker/K8s):
---
### Optional Fields
#### Related Issues:
<!-- Any related issues from this/other repositories-->
#### Suggested Labels:
<!--Only to be used by non-members-->
#### Suggested Assignees:
<!--Only to be used by non-members-->
|
non_process
|
null invocationcontext in java sample interceptor service description null is returned for invocationcontext from sample java interceptor service when invocation context is added in includes section of oas temporary fix change jsonproperty invocationcontext jsonproperty invocationcontext in steps to reproduce add sample java interceptor service additionally add invocation context in includes check invocation context field in the model object x request interceptor serviceurl includes any of following request headers request body request trailers invocation context affected product version choreo connect environment details with versions os client env docker optional fields related issues suggested labels suggested assignees
| 0
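The wso2 row above traces the null `InvocationContext` to a JSON property name declared with the wrong casing (`@JsonProperty("InvocationContext")` vs. the payload key `invocationContext`). A stdlib-only sketch of why that mismatch yields null — this is a hypothetical binder illustrating case-sensitive name matching, not Jackson itself:

```java
import java.util.Map;

public class PropertyBindingSketch {
    // The generated model declares the JSON name with a capital "I":
    static final String DECLARED_NAME = "InvocationContext";

    // Case-sensitive lookup, mirroring how a deserializer matches the
    // declared property name against payload keys.
    static Object bind(Map<String, Object> payload, String declaredName) {
        return payload.get(declaredName);
    }

    public static void main(String[] args) {
        Map<String, Object> payload = Map.of("invocationContext", "ctx-data");
        System.out.println(bind(payload, DECLARED_NAME));        // null: casing differs
        System.out.println(bind(payload, "invocationContext"));  // found
    }
}
```

This is why the temporary fix in the issue — changing the annotation to `@JsonProperty("invocationContext")` — makes the field populate.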
|
3,153
| 6,204,674,617
|
IssuesEvent
|
2017-07-06 14:40:14
|
geneontology/go-ontology
|
https://api.github.com/repos/geneontology/go-ontology
|
closed
|
NTR: positive regulation for GO:0060236 regulation of mitotic spindle organization
|
cell cycle and DNA processes
|
Xrefs:
PMID:17576815
GOC:bhm
Example Comment: e.g. in STIL (Q8JGS1) in danre [I can't remember the syntax for the sentence now that TG is switched off - sorry]
Do you need anything else?
Thanks,
Birgit
|
1.0
|
NTR: positive regulation for GO:0060236 regulation of mitotic spindle organization - Xrefs:
PMID:17576815
GOC:bhm
Example Comment: e.g. in STIL (Q8JGS1) in danre [I can't remember the syntax for the sentence now that TG is switched off - sorry]
Do you need anything else?
Thanks,
Birgit
|
process
|
ntr positive regulation for go regulation of mitotic spindle organization xrefs pmid goc bhm example comment e g in stil in danre do you need anything else thanks birgit
| 1
|
4,810
| 7,700,702,269
|
IssuesEvent
|
2018-05-20 05:32:45
|
pelias/schema
|
https://api.github.com/repos/pelias/schema
|
closed
|
Travis CI tests intermittently fail
|
bug help wanted processed
|
This is likely due to timing issues for Elasticsearch operations such as starting up and initializing indexes, but more research is needed.
|
1.0
|
Travis CI tests intermittently fail - This is likely due to timing issues for Elasticsearch operations such as starting up and initializing indexes, but more research is needed.
|
process
|
travis ci tests intermittently fail this is likely due to timing issues for elasticsearch operations such as starting up and initializing indexes but more research is needed
| 1
|
8,288
| 11,599,236,345
|
IssuesEvent
|
2020-02-25 01:34:09
|
department-of-veterans-affairs/caseflow
|
https://api.github.com/repos/department-of-veterans-affairs/caseflow
|
closed
|
Motion to Vacate | Associate PostDecisionMotion to Appeal
|
BVA Post-AMA Requirement backend foxtrot priority-medium
|
## Description
Change the PostDecisionMotion schema so that it is associated with an appeal with type "Vacate", not a task.
## Acceptance criteria
- [ ] ~~Remove post_decision_motions.task_id~~
- [ ] Add post_decision_motions.appeal_id
- [ ] For motions to vacate, the associated appeal should be the "Vacate" stream
## Context
The PostDecisionMotion is currently associated with a task, because until recently, MTV-related tasks were all part of the task tree of the original appeal. The switch to creating a separate appeal stream with type "Vacate" now makes it easier to tell which tasks are related to the original appeal and which are related to MTV. The new appeal stream is therefore a more convenient object for the PostDecisionMotion to reference.
## Technical notes
A non-exhaustive list of places in the codebase to update:
- DB migration to remove the old column and add the new
- Appeal, wherever it deals with a PostDecisionMotion
- PostDecisionMotion and PostDecisionMotionUpdater
|
1.0
|
Motion to Vacate | Associate PostDecisionMotion to Appeal - ## Description
Change the PostDecisionMotion schema so that it is associated with an appeal with type "Vacate", not a task.
## Acceptance criteria
- [ ] ~~Remove post_decision_motions.task_id~~
- [ ] Add post_decision_motions.appeal_id
- [ ] For motions to vacate, the associated appeal should be the "Vacate" stream
## Context
The PostDecisionMotion is currently associated with a task, because until recently, MTV-related tasks were all part of the task tree of the original appeal. The switch to creating a separate appeal stream with type "Vacate" now makes it easier to tell which tasks are related to the original appeal and which are related to MTV. The new appeal stream is therefore a more convenient object for the PostDecisionMotion to reference.
## Technical notes
A non-exhaustive list of places in the codebase to update:
- DB migration to remove the old column and add the new
- Appeal, wherever it deals with a PostDecisionMotion
- PostDecisionMotion and PostDecisionMotionUpdater
|
non_process
|
motion to vacate associate postdecisionmotion to appeal description change the postdecisionmotion schema so that it is associated with an appeal with type vacate not a task acceptance criteria remove post decision motions task id add post decision motions appeal id for motions to vacate the associated appeal should be the vacate stream context the postdecisionmotion is currently associated with a task because until recently mtv related tasks were all part of the task tree of the original appeal the switch to creating a separate appeal stream with type vacate now makes it easier to tell which tasks are related to the original appeal and which are related to mtv the new appeal stream is therefore a more convenient object for the postdecisionmotion to reference technical notes a non exhaustive list of places in the codebase to update db migration to remove the old column and add the new appeal wherever it deals with a postdecisionmotion postdecisionmotion and postdecisionmotionupdater
| 0
|