| Column | Dtype | Observed range / distinct values |
|---|---|---|
| Unnamed: 0 | int64 | 0 – 832k |
| id | float64 | 2.49B – 32.1B |
| type | string | 1 distinct value |
| created_at | string | length 19 |
| repo | string | length 7 – 112 |
| repo_url | string | length 36 – 141 |
| action | string | 3 distinct values |
| title | string | length 1 – 744 |
| labels | string | length 4 – 574 |
| body | string | length 9 – 211k |
| index | string | 10 distinct values |
| text_combine | string | length 96 – 211k |
| label | string | 2 distinct values |
| text | string | length 96 – 188k |
| binary_label | int64 | 0 – 1 |
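The rows below are a preview of a tabular dump of GitHub issue events labelled `process` vs. `non_process`. As a reading aid, here is a minimal sketch of how such a table could be loaded and sanity-checked with pandas; the file name `issues.csv`, and the guess that `text` is a lower-cased, punctuation-stripped version of the title and body, are illustrative assumptions rather than facts stated in this preview.

```python
import re

import pandas as pd

# Minimal sketch (assumption: the dump is available as "issues.csv" with the
# columns listed in the schema table above; the file name is hypothetical).
df = pd.read_csv("issues.csv")

# Sanity checks against the schema summary.
print(df.dtypes)
print(df["type"].unique())         # expected: a single class, "IssuesEvent"
print(df["action"].nunique())      # expected: 3 distinct values
print(df["label"].value_counts())  # "process" vs. "non_process"

# "label" should line up with the numeric target "binary_label"
# (1 = process, 0 = non_process, judging from the sample rows).
print(df.groupby("label")["binary_label"].agg(["min", "max", "count"]))


def normalize(title: str, body: str) -> str:
    """Rough approximation of the derived "text" column: lower-case the title
    and body, drop URLs, punctuation and digits, and collapse whitespace.
    Inferred from the sample rows, not a confirmed preprocessing step."""
    s = f"{title} {body}".lower()
    s = re.sub(r"http\S+", " ", s)    # drop URLs
    s = re.sub(r"[^a-z\s]", " ", s)   # drop punctuation, digits, markup
    return re.sub(r"\s+", " ", s).strip()


# Example: rebuild an approximate "text" field for the first few rows.
approx_text = df.head().apply(lambda r: normalize(str(r["title"]), str(r["body"])), axis=1)
print(approx_text)
```

The pipe-delimited sample rows below follow the column order of the schema table above; the long text fields (body, text_combine, text) continue across multiple lines.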
| 554,989 | 16,444,785,767 | IssuesEvent | 2021-05-20 18:12:38 | LBNL-ETA/BEDES-Manager | https://api.github.com/repos/LBNL-ETA/BEDES-Manager | opened | multiple copies of same composite term should not be created upon import | bug medium priority |
When I import an application mapping using a .csv file, I can have multiple application terms that are mapped to the same composite term (e.g., multiple rows with the same values in the "BEDES Term" and "BEDES Atomic Term Mapping" columns). Currently, when I import such a .csv file, the result is that multiple copies of the same composite term are created (each with different UUIDs), and each app term is mapped to a separate copy of the composite term. Instead, only one copy of the composite term should be created, and all of the app terms should be mapped to that same copy of the composite term.
Note: This issue applies to the behavior when "BEDES Composite Term UUID" is left empty in the .csv file. Different behavior is expected when the UUID of a composite term is specified in the .csv file, but that is currently not possible (see issue #174). Once that issue is fixed, I will do more tests with UUIDs specified and open separate issues if necessary.
|
1.0
|
multiple copies of same composite term should not be created upon import - When I import an application mapping using a .csv file, I can have multiple application terms that are mapped to the same composite term (e.g., multiple rows with the same values in the "BEDES Term" and "BEDES Atomic Term Mapping" columns). Currently, when I import such a .csv file, the result is that multiple copies of the same composite term are created (each with different UUIDs), and each app term is mapped to a separate copy of the composite term. Instead, only one copy of the composite term should be created, and all of the app terms should be mapped to that same copy of the composite term.
Note: This issue applies to the behavior when "BEDES Composite Term UUID" is left empty in the .csv file. Different behavior is expected when the UUID of a composite term is specified in the .csv file, but that is currently not possible (see issue #174). Once that issue is fixed, I will do more tests with UUIDs specified and open separate issues if necessary.
|
non_process
|
multiple copies of same composite term should not be created upon import when i import an application mapping using a csv file i can have multiple application terms that are mapped to the same composite term e g multiple rows with the same values in the bedes term and bedes atomic term mapping columns currently when i import such a csv file the result is that multiple copies of the same composite term are created each with different uuids and each app term is mapped to a separate copy of the composite term instead only one copy of the composite term should be created and all of the app terms should be mapped to that same copy of the composite term note this issue applies to the behavior when bedes composite term uuid is left empty in the csv file different behavior is expected when the uuid of a composite term is specified in the csv file but that is currently not possible see issue once that issue is fixed i will do more tests with uuids specified and open separate issues if necessary
| 0
|
| 18,265 | 24,346,959,354 | IssuesEvent | 2022-10-02 12:52:03 | MicrosoftDocs/azure-docs | https://api.github.com/repos/MicrosoftDocs/azure-docs | closed | How to monitor files in azure storage account, like blobs or fileshares using this example? Currently it is just watching some windows directory on some vm? | automation/svc triaged cxp product-question process-automation/subsvc Pri2 |
How to monitor files in azure storage account, like blobs or fileshares using this example? Currently it is just watching some windows directory on some vm?
---
#### Document Details
⚠ *Do not edit this section. It is required for learn.microsoft.com ➟ GitHub issue linking.*
* ID: 73598b10-eae3-6387-682b-82a6fa0316e3
* Version Independent ID: 30d7c985-3fef-c574-acc8-b71c46460944
* Content: [Track updated files with an Azure Automation watcher task](https://learn.microsoft.com/en-us/azure/automation/automation-scenario-using-watcher-task)
* Content Source: [articles/automation/automation-scenario-using-watcher-task.md](https://github.com/MicrosoftDocs/azure-docs/blob/main/articles/automation/automation-scenario-using-watcher-task.md)
* Service: **automation**
* Sub-service: **process-automation**
* GitHub Login: @SnehaSudhirG
* Microsoft Alias: **sudhirsneha**
|
1.0
|
How to monitor files in azure storage account, like blobs or fileshares using this example? Currently it is just watching some windows directory on some vm? - How to monitor files in azure storage account, like blobs or fileshares using this example? Currently it is just watching some windows directory on some vm?
---
#### Document Details
⚠ *Do not edit this section. It is required for learn.microsoft.com ➟ GitHub issue linking.*
* ID: 73598b10-eae3-6387-682b-82a6fa0316e3
* Version Independent ID: 30d7c985-3fef-c574-acc8-b71c46460944
* Content: [Track updated files with an Azure Automation watcher task](https://learn.microsoft.com/en-us/azure/automation/automation-scenario-using-watcher-task)
* Content Source: [articles/automation/automation-scenario-using-watcher-task.md](https://github.com/MicrosoftDocs/azure-docs/blob/main/articles/automation/automation-scenario-using-watcher-task.md)
* Service: **automation**
* Sub-service: **process-automation**
* GitHub Login: @SnehaSudhirG
* Microsoft Alias: **sudhirsneha**
|
process
|
how to monitor files in azure storage account like blobs or fileshares using this example currently it is just watching some windows directory on some vm how to monitor files in azure storage account like blobs or fileshares using this example currently it is just watching some windows directory on some vm document details ⚠ do not edit this section it is required for learn microsoft com ➟ github issue linking id version independent id content content source service automation sub service process automation github login snehasudhirg microsoft alias sudhirsneha
| 1
|
| 264,902 | 28,214,106,786 | IssuesEvent | 2023-04-05 07:39:23 | hshivhare67/platform_device_renesas_kernel_v4.19.72 | https://api.github.com/repos/hshivhare67/platform_device_renesas_kernel_v4.19.72 | closed | CVE-2022-4095 (High) detected in linuxlinux-4.19.279 - autoclosed | Mend: dependency security vulnerability |
## CVE-2022-4095 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>linuxlinux-4.19.279</b></p></summary>
<p>
<p>The Linux Kernel</p>
<p>Library home page: <a href=https://mirrors.edge.kernel.org/pub/linux/kernel/v4.x/?wsslib=linux>https://mirrors.edge.kernel.org/pub/linux/kernel/v4.x/?wsslib=linux</a></p>
<p>Found in base branch: <b>main</b></p></p>
</details>
</p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Source Files (2)</summary>
<p></p>
<p>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/drivers/staging/rtl8712/rtl8712_cmd.c</b>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/drivers/staging/rtl8712/rtl8712_cmd.c</b>
</p>
</details>
<p></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
A use-after-free flaw was found in Linux kernel before 5.19.2. This issue occurs in cmd_hdl_filter in drivers/staging/rtl8712/rtl8712_cmd.c, allowing an attacker to launch a local denial of service attack and gain escalation of privileges.
<p>Publish Date: 2023-03-22
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2022-4095>CVE-2022-4095</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.8</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: Low
- Privileges Required: Low
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://www.linuxkernelcves.com/cves/CVE-2022-4095">https://www.linuxkernelcves.com/cves/CVE-2022-4095</a></p>
<p>Release Date: 2022-11-21</p>
<p>Fix Resolution: v4.9.328,v4.14.293,v4.19.258,v5.4.213,v5.10.142,v5.15.66</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
True
|
CVE-2022-4095 (High) detected in linuxlinux-4.19.279 - autoclosed - ## CVE-2022-4095 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>linuxlinux-4.19.279</b></p></summary>
<p>
<p>The Linux Kernel</p>
<p>Library home page: <a href=https://mirrors.edge.kernel.org/pub/linux/kernel/v4.x/?wsslib=linux>https://mirrors.edge.kernel.org/pub/linux/kernel/v4.x/?wsslib=linux</a></p>
<p>Found in base branch: <b>main</b></p></p>
</details>
</p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Source Files (2)</summary>
<p></p>
<p>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/drivers/staging/rtl8712/rtl8712_cmd.c</b>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/drivers/staging/rtl8712/rtl8712_cmd.c</b>
</p>
</details>
<p></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
A use-after-free flaw was found in Linux kernel before 5.19.2. This issue occurs in cmd_hdl_filter in drivers/staging/rtl8712/rtl8712_cmd.c, allowing an attacker to launch a local denial of service attack and gain escalation of privileges.
<p>Publish Date: 2023-03-22
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2022-4095>CVE-2022-4095</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.8</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: Low
- Privileges Required: Low
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://www.linuxkernelcves.com/cves/CVE-2022-4095">https://www.linuxkernelcves.com/cves/CVE-2022-4095</a></p>
<p>Release Date: 2022-11-21</p>
<p>Fix Resolution: v4.9.328,v4.14.293,v4.19.258,v5.4.213,v5.10.142,v5.15.66</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
non_process
|
cve high detected in linuxlinux autoclosed cve high severity vulnerability vulnerable library linuxlinux the linux kernel library home page a href found in base branch main vulnerable source files drivers staging cmd c drivers staging cmd c vulnerability details a use after free flaw was found in linux kernel before this issue occurs in cmd hdl filter in drivers staging cmd c allowing an attacker to launch a local denial of service attack and gain escalation of privileges publish date url a href cvss score details base score metrics exploitability metrics attack vector local attack complexity low privileges required low user interaction none scope unchanged impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution step up your open source security game with mend
| 0
|
| 20,547 | 27,197,142,728 | IssuesEvent | 2023-02-20 06:37:56 | rust-lang/rust | https://api.github.com/repos/rust-lang/rust | closed | std::process::Command::output is unacceptably slow | I-slow C-bug O-unix A-process |
`std::process::Command::output` is unacceptably slow. Consider this code:
https://play.rust-lang.org/?version=nightly&mode=release&edition=2021&gist=333b57442699553257f0273bb1bba3ac
When trying this code, make sure you have enabled release mode. On my local machine I see this output:
```text
1565 ms
100000000
103 ms
100000000
```
So `std::process::Command::output` is 15x slower than `std::io::BufReader::read_to_end()`. My rustc version is `rustc 1.69.0-nightly (75a0be98f 2023-02-05)`. My OS is Debian Stretch x86_64
|
1.0
|
std::process::Command::output is unacceptably slow - `std::process::Command::output` is unacceptably slow. Consider this code:
https://play.rust-lang.org/?version=nightly&mode=release&edition=2021&gist=333b57442699553257f0273bb1bba3ac
When trying this code, make sure you have enabled release mode. On my local machine I see this output:
```text
1565 ms
100000000
103 ms
100000000
```
So `std::process::Command::output` is 15x slower than `std::io::BufReader::read_to_end()`. My rustc version is `rustc 1.69.0-nightly (75a0be98f 2023-02-05)`. My OS is Debian Stretch x86_64
|
process
|
std process command output is unacceptably slow std process command output is unacceptably slow consider this code when trying this code make sure you have enabled release mode on my local machine i see this output text ms ms so std process command output is slower than std io bufreader read to end my rustc version is rustc nightly my os is debian stretch
| 1
|
| 6,916 | 10,073,232,030 | IssuesEvent | 2019-07-24 09:09:54 | googleapis/gax-dotnet | https://api.github.com/repos/googleapis/gax-dotnet | opened | Revisit platform integration tests | priority: p2 type: process |
- Get them working again
- Add Cloud Run support
- Ideally run in Kokoro
|
1.0
|
Revisit platform integration tests - - Get them working again
- Add Cloud Run support
- Ideally run in Kokoro
|
process
|
revisit platform integration tests get them working again add cloud run support ideally run in kokoro
| 1
|
| 21,289 | 28,485,090,640 | IssuesEvent | 2023-04-18 07:16:58 | ethereumclassic/ECIPs | https://api.github.com/repos/ethereumclassic/ECIPs | closed | Remove atoulme from ECIP editors | meta:3 process |
Hello all,
I have not been active and given my lack of contributions, I think it is time for me to step down as ECIP editor. Please let me know if this is appropriate and accept my resignation. I will follow steps per process.
|
1.0
|
Remove atoulme from ECIP editors - Hello all,
I have not been active and given my lack of contributions, I think it is time for me to step down as ECIP editor. Please let me know if this is appropriate and accept my resignation. I will follow steps per process.
|
process
|
remove atoulme from ecip editors hello all i have not been active and given my lack of contributions i think it is time for me to step down as ecip editor please let me know if this is appropriate and accept my resignation i will follow steps per process
| 1
|
| 17,811 | 23,739,771,844 | IssuesEvent | 2022-08-31 11:22:57 | cloudfoundry/korifi | https://api.github.com/repos/cloudfoundry/korifi | opened | [Feature]: Developer can push apps using the top-level `health-check-invocation-timeout` field in the manifest | Top-level process config |
### Blockers/Dependencies
_No response_
### Background
**As a** developer
**I want** top-level process configuration in manifests to be supported
**So that** I can use shortcut `cf push` flags like `-c`, `-i`, `-m` etc.
### Acceptance Criteria
* **GIVEN** I have the following node app:
```js
var http = require('http');
// https://stackoverflow.com/a/53590610
function sleep(millis) {
return new Promise(resolve => setTimeout(resolve, millis));
}
http.createServer(function (request, response) {
sleep(5000).then(() => {
response.writeHead(200, {'Content-Type': 'text/plain'});
response.end('ok');
});
}).listen(process.env.PORT);
```
with the following `manifest.yml`:
```yaml
---
applications:
- name: real-app
health-check-invocation-timeout: 6
processes:
- type: web
health-check-type: http
```
**WHEN I** `cf push`
**THEN I** see the push succeeds with an output similar to this:
```
name: test
requested state: started
routes: test.vcap.me
last uploaded: Mon 29 Aug 16:28:36 UTC 2022
stack: cflinuxfs3
buildpacks:
name version detect output buildpack name
nodejs_buildpack 1.7.61 nodejs nodejs
type: web
sidecars:
instances: 1/1
memory usage: 256M
start command: npm start
state since cpu memory disk details
#0 running 2022-08-29T16:28:54Z 1.6% 42.3M of 256M 115.7M of 1G
```
* **GIVEN** I have the same app with the following manifest:
```yaml
---
applications:
- name: my-app
health-check-invocation-timeout: 4
processes:
- type: web
health-check-type: http
health-check-invocation-timeout: 6
```
**WHEN I** `cf push`
**THEN I** see the push succeeds with the same output as above
### Dev Notes
_No response_
|
1.0
|
[Feature]: Developer can push apps using the top-level `health-check-invocation-timeout` field in the manifest - ### Blockers/Dependencies
_No response_
### Background
**As a** developer
**I want** top-level process configuration in manifests to be supported
**So that** I can use shortcut `cf push` flags like `-c`, `-i`, `-m` etc.
### Acceptance Criteria
* **GIVEN** I have the following node app:
```js
var http = require('http');
// https://stackoverflow.com/a/53590610
function sleep(millis) {
return new Promise(resolve => setTimeout(resolve, millis));
}
http.createServer(function (request, response) {
sleep(5000).then(() => {
response.writeHead(200, {'Content-Type': 'text/plain'});
response.end('ok');
});
}).listen(process.env.PORT);
```
with the following `manifest.yml`:
```yaml
---
applications:
- name: real-app
health-check-invocation-timeout: 6
processes:
- type: web
health-check-type: http
```
**WHEN I** `cf push`
**THEN I** see the push succeeds with an output similar to this:
```
name: test
requested state: started
routes: test.vcap.me
last uploaded: Mon 29 Aug 16:28:36 UTC 2022
stack: cflinuxfs3
buildpacks:
name version detect output buildpack name
nodejs_buildpack 1.7.61 nodejs nodejs
type: web
sidecars:
instances: 1/1
memory usage: 256M
start command: npm start
state since cpu memory disk details
#0 running 2022-08-29T16:28:54Z 1.6% 42.3M of 256M 115.7M of 1G
```
* **GIVEN** I have the same app with the following manifest:
```yaml
---
applications:
- name: my-app
health-check-invocation-timeout: 4
processes:
- type: web
health-check-type: http
health-check-invocation-timeout: 6
```
**WHEN I** `cf push`
**THEN I** see the push succeeds with the same output as above
### Dev Notes
_No response_
|
process
|
developer can push apps using the top level health check invocation timeout field in the manifest blockers dependencies no response background as a developer i want top level process configuration in manifests to be supported so that i can use shortcut cf push flags like c i m etc acceptance criteria given i have the following node app js var http require http function sleep millis return new promise resolve settimeout resolve millis http createserver function request response sleep then response writehead content type text plain response end ok listen process env port with the following manifest yml yaml applications name real app health check invocation timeout processes type web health check type http when i cf push then i see the push succeeds with an output similar to this name test requested state started routes test vcap me last uploaded mon aug utc stack buildpacks name version detect output buildpack name nodejs buildpack nodejs nodejs type web sidecars instances memory usage start command npm start state since cpu memory disk details running of of given i have the same app with the following manifest yaml applications name my app health check invocation timeout processes type web health check type http health check invocation timeout when i cf push then i see the push succeeds with the same output as above dev notes no response
| 1
|
| 560,916 | 16,606,328,691 | IssuesEvent | 2021-06-02 04:38:21 | alephdata/memorious | https://api.github.com/repos/alephdata/memorious | closed | Add user authentication and scraper namespacing to Memorious | priority-low |
We want other people to run their own scrapers on our platform. Ideally, they will have the permissions to see, run and update their own crawlers. We will have 3 tiers of permissions for users:
- `public`: You can see the public crawlers; but can't run/cancel them
- `authorized`: You can see, run and cancel your own crawlers
- `admin`: God mode
Scraper Namespacing: Scrapers should be part of a scraper group or namespace. We can figure out which namespace or group a user belongs to from req headers provided by Keycloak. The headers are put into the request by Keycloak Gatekeeper.
|
1.0
|
Add user authentication and scraper namespacing to Memorious - We want other people to run their own scrapers on our platform. Ideally, they will have the permissions to see, run and update their own crawlers. We will have 3 tiers of permissions for users:
- `public`: You can see the public crawlers; but can't run/cancel them
- `authorized`: You can see, run and cancel your own crawlers
- `admin`: God mode
Scraper Namespacing: Scrapers should be part of a scraper group or namespace. We can figure out which namespace or group a user belongs to from req headers provided by Keycloak. The headers are put into the request by Keycloak Gatekeeper.
|
non_process
|
add user authentication and scraper namespacing to memorious we want other people to run their own scrapers on our platform ideally they will have the permissions to see run and update their own crawlers we will have tiers of permissions for users public you can see the public crawlers but can t run cancel them authorized you can see run and cancel your own crawlers admin god mode scraper namespacing scrapers should be part of a scraper group or namespace we can figure out which namespace or group a user belongs to from req headers provided by keycloak the headers are put into the request by keycloak gatekeeper
| 0
|
| 44,363 | 12,120,791,355 | IssuesEvent | 2020-04-22 08:16:18 | cf-convention/cf-conventions | https://api.github.com/repos/cf-convention/cf-conventions | opened | udunits supports ppm, but documentation states it does not | defect |
# Title
udunits supports ppm, but documentation states it does not
# Moderator
None at the moment
# Requirement Summary
The documentation states that udunits does not support dimensionless ratios like parts-per-million. But udunits does support such units.
# Technical Proposal Summary
Change sentence to state that ppm, ppb etc are supported, or at least remove the statement that these units are not supported.
# Benefits
Every reader working with dimensionless ratios will benefit from the correction.
# Status Quo
The current working draft (1.9) states that ppm etc. are not supported by udunits.
# Detailed Proposal
The chapter "Description of the Data" - "Units" (ch03.adoc) states:
> The Udunits package defines a few dimensionless units, such as `percent`, but is lacking commonly used units such as ppm (parts per million).
The last part is wrong. UDUNITS-2 does support ppm, ppb, ppt, and ppq. In addition, percent is also supported. Other similar units, like permille are not supported. In fact, these units were supported since udunits2 version 2.1.24 from August 2011. The change occurred here:
https://github.com/Unidata/UDUNITS-2/commit/789147fd3b2e4714e27cd198040ea114ac503861
Please update the documentation to reflect the support of ppm etc.
|
1.0
|
udunits supports ppm, but documentation states it does not - # Title
udunits supports ppm, but documentation states it does not
# Moderator
None at the moment
# Requirement Summary
The documentation states that udunits does not support dimensionless ratios like parts-per-million. But udunits does support such units.
# Technical Proposal Summary
Change sentence to state that ppm, ppb etc are supported, or at least remove the statement that these units are not supported.
# Benefits
Every reader working with dimensionless ratios will benefit from the correction.
# Status Quo
The current working draft (1.9) states that ppm etc. are not supported by udunits.
# Detailed Proposal
The chapter "Description of the Data" - "Units" (ch03.adoc) states:
> The Udunits package defines a few dimensionless units, such as `percent`, but is lacking commonly used units such as ppm (parts per million).
The last part is wrong. UDUNITS-2 does support ppm, ppb, ppt, and ppq. In addition, percent is also supported. Other similar units, like permille are not supported. In fact, these units were supported since udunits2 version 2.1.24 from August 2011. The change occurred here:
https://github.com/Unidata/UDUNITS-2/commit/789147fd3b2e4714e27cd198040ea114ac503861
Please update the documentation to reflect the support of ppm etc.
|
non_process
|
udunits supports ppm but documentation states it does not title udunits supports ppm but documentation states it does not moderator none at the moment requirement summary the documentation states that udunits does not support dimensionless ratios like parts per million but udunits does support such units technical proposal summary change sentence to state that ppm ppb etc are supported or at least remove the statement that these units are not supported benefits every reader working with dimensionless ratios will benefit from the correction status quo the current working draft states that ppm etc are not supported by udunits detailed proposal the chapter description of the data units adoc states the udunits package defines a few dimensionless units such as percent but is lacking commonly used units such as ppm parts per million the last part is wrong udunits does support ppm ppb ppt and ppq in addition percent is also supported other similar units like permille are not supported in fact these units were supported since version from august the change occurred here please update the documentation to reflect the support of ppm etc
| 0
|
| 121,661 | 26,010,359,273 | IssuesEvent | 2022-12-21 00:41:04 | WordPress/openverse-catalog | https://api.github.com/repos/WordPress/openverse-catalog | opened | Alert when a non-dated DAG ingests unexpected amount of data | 🟨 priority: medium ✨ goal: improvement 💻 aspect: code |
## Problem
A non-dated DAG ingests all available data from its provider on each run. Therefore, we expect these DAGs to ingest the same amount _or more_ data on each run. If a non-dated DAG ingests _fewer_ records than it has on previous runs, this is highly unusual and worthy of investigation. (It should be noted that it's possible for data to be deleted on the provider between DagRuns, but a net decrease in records is at least a red flag).
Our DAG record reporting lets us know how many records were ingested on a particular run, but unfortunately it's unlikely that a discrepancy will be noticed unless an observer happens to be familiar with the typical amount of data for a provider. For example, Smithsonian has provided very different counts (typically ~3.8million, but sometimes as low as 60k), but this was not noticed until a thorough investigation of some unrelated errors in that DAG was performed. #929 describes a similar discrepancy with Jamendo which was also only noticed by chance.
## Description
<!-- Describe the feature and how it solves the problem. -->
We could keep track of the number of records returned by each non-dated DAG, and alert in Slack when a DagRun falls short of this number.
|
1.0
|
Alert when a non-dated DAG ingests unexpected amount of data - ## Problem
A non-dated DAG ingests all available data from its provider on each run. Therefore, we expect these DAGs to ingest the same amount _or more_ data on each run. If a non-dated DAG ingests _fewer_ records than it has on previous runs, this is highly unusual and worthy of investigation. (It should be noted that it's possible for data to be deleted on the provider between DagRuns, but a net decrease in records is at least a red flag).
Our DAG record reporting lets us know how many records were ingested on a particular run, but unfortunately it's unlikely that a discrepancy will be noticed unless an observer happens to be familiar with the typical amount of data for a provider. For example, Smithsonian has provided very different counts (typically ~3.8million, but sometimes as low as 60k), but this was not noticed until a thorough investigation of some unrelated errors in that DAG was performed. #929 describes a similar discrepancy with Jamendo which was also only noticed by chance.
## Description
<!-- Describe the feature and how it solves the problem. -->
We could keep track of the number of records returned by each non-dated DAG, and alert in Slack when a DagRun falls short of this number.
|
non_process
|
alert when a non dated dag ingests unexpected amount of data problem a non dated dag ingests all available data from its provider on each run therefore we expect these dags to ingest the same amount or more data on each run if a non dated dag ingests fewer records than it has on previous runs this is highly unusual and worthy of investigation it should be noted that it s possible for data to be deleted on the provider between dagruns but a net decrease in records is at least a red flag our dag record reporting lets us know how many records were ingested on a particular run but unfortunately it s unlikely that a discrepancy will be noticed unless an observer happens to be familiar with the typical amount of data for a provider for example smithsonian has provided very different counts typically but sometimes as low as but this was not noticed until a thorough investigation of some unrelated errors in that dag was performed describes a similar discrepancy with jamendo which was also only noticed by chance description we could keep track of the number of records returned by each non dated dag and alert in slack when a dagrun falls short of this number
| 0
|
| 4,225 | 7,181,155,887 | IssuesEvent | 2018-02-01 03:09:20 | Great-Hill-Corporation/quickBlocks | https://api.github.com/repos/Great-Hill-Corporation/quickBlocks | closed | Performance issue with blocks with a lot of transactions | status-inprocess tools-getBlock type-enhancement |
This command
getBlock -r 4200400
is much faster than this one
getBlock -c 4200400
even though the one from the cache is supposed to be significantly faster. Why?
|
1.0
|
Performance issue with blocks with a lot of transactions - This command
getBlock -r 4200400
is much faster than this one
getBlock -c 4200400
even though the one from the cache is supposed to be significantly faster. Why?
|
process
|
performance issue with blocks with a lot of transactions this command getblock r is much faster than this one getblock c even though the one from the cache is supposed to be significantly faster why
| 1
|
| 222,427 | 17,069,301,177 | IssuesEvent | 2021-07-07 11:18:53 | kaiwalyakoparkar/Lilliputian-Bot | https://api.github.com/repos/kaiwalyakoparkar/Lilliputian-Bot | opened | [Doc] Structure the commands offered by the bot. | documentation enhancement good first issue |
Create a table instead of putting one below the other and give an external link to the images. The links provided in the readme are of Imgur so can be directly linked as the external source.
|
1.0
|
[Doc] Structure the commands offered by the bot. - Create a table instead of putting one below the other and give an external link to the images. The links provided in the readme are of Imgur so can be directly linked as the external source.
|
non_process
|
structure the commands offered by the bot create a table instead of putting one below the other and give an external link to the images the links provided in the readme are of imgur so can be directly linked as the external source
| 0
|
| 10,173 | 13,044,162,751 | IssuesEvent | 2020-07-29 03:47:35 | tikv/tikv | https://api.github.com/repos/tikv/tikv | closed | UCP: Migrate scalar function `SetVar` from TiDB | challenge-program-2 component/coprocessor difficulty/easy sig/coprocessor |
## Description
Port the scalar function `SetVar` from TiDB to coprocessor.
## Score
* 50
## Mentor(s)
* @iosmanthus
## Recommended Skills
* Rust programming
## Learning Materials
Already implemented expressions ported from TiDB
- https://github.com/tikv/tikv/tree/master/components/tidb_query/src/rpn_expr)
- https://github.com/tikv/tikv/tree/master/components/tidb_query/src/expr)
|
2.0
|
UCP: Migrate scalar function `SetVar` from TiDB -
## Description
Port the scalar function `SetVar` from TiDB to coprocessor.
## Score
* 50
## Mentor(s)
* @iosmanthus
## Recommended Skills
* Rust programming
## Learning Materials
Already implemented expressions ported from TiDB
- https://github.com/tikv/tikv/tree/master/components/tidb_query/src/rpn_expr)
- https://github.com/tikv/tikv/tree/master/components/tidb_query/src/expr)
|
process
|
ucp migrate scalar function setvar from tidb description port the scalar function setvar from tidb to coprocessor score mentor s iosmanthus recommended skills rust programming learning materials already implemented expressions ported from tidb
| 1
|
| 365,291 | 25,527,711,674 | IssuesEvent | 2022-11-29 04:50:18 | howieraem/COMS4156-ASE | https://api.github.com/repos/howieraem/COMS4156-ASE | closed | [DO_NOT_DELETE] Tracker for T5 | documentation |
### About this
A simple tracker to maintain todos across our two repos, i know we can use a GH project instead, but this is easy to use.
### Instructions on using this Tracker
1. add a new item when you find one.
2. if you are owning the item, add `@yourself` in the front of the item.
3. once done, cross it out with a PR number
### TODOs
- [x] do we have this? "Implement system tests that exercise your server's final API, e.g., using some API testing tool. Your tests should exercise the multiple clients and persistent data aspects of your server. All your API tests should run automatically during CI. "
- [x] manual e2e tests from client to server; document the tests
- [x] @KenXiong123 client - request page
- [x] @jenniferduan45 client - transfer page
- [x] client "Describe how some third-party could develop and run their own client that uses your server. "
- [x] @rl3250 update server readme to include pointer to client repo
- [x] add more instructions on reading the branch & overall coverage badges in readme, CI report
- [x] static code analyzer, should also run during CI - "You should also use a static analysis bug finder tool on your entire server codebase. The static analyzer should run automatically during CI. Include the static analysis reports in your server repository. Try to fix most of the bugs found by the analyzer. "
- [x] @howieraem client - profile page
|
1.0
|
[DO_NOT_DELETE] Tracker for T5 - ### About this
A simple tracker to maintain todos across our two repos, i know we can use a GH project instead, but this is easy to use.
### Instructions on using this Tracker
1. add a new item when you find one.
2. if you are owning the item, add `@yourself` in the front of the item.
3. once done, cross it out with a PR number
### TODOs
- [x] do we have this? "Implement system tests that exercise your server's final API, e.g., using some API testing tool. Your tests should exercise the multiple clients and persistent data aspects of your server. All your API tests should run automatically during CI. "
- [x] manual e2e tests from client to server; document the tests
- [x] @KenXiong123 client - request page
- [x] @jenniferduan45 client - transfer page
- [x] client "Describe how some third-party could develop and run their own client that uses your server. "
- [x] @rl3250 update server readme to include pointer to client repo
- [x] add more instructions on reading the branch & overall coverage badges in readme, CI report
- [x] static code analyzer, should also run during CI - "You should also use a static analysis bug finder tool on your entire server codebase. The static analyzer should run automatically during CI. Include the static analysis reports in your server repository. Try to fix most of the bugs found by the analyzer. "
- [x] @howieraem client - profile page
|
non_process
|
tracker for about this a simple tracker to maintain todos across our two repos i know we can use a gh project instead but this is easy to use instructions on using this tracker add a new item when you find one if you are owning the item add yourself in the front of the item once done cross it out with a pr number todos do we have this implement system tests that exercise your server s final api e g using some api testing tool your tests should exercise the multiple clients and persistent data aspects of your server all your api tests should run automatically during ci manual tests from client to server document the tests client request page client transfer page client describe how some third party could develop and run their own client that uses your server update server readme to include pointer to client repo add more instructions on reading the branch overall coverage badges in readme ci report static code analyzer should also run during ci you should also use a static analysis bug finder tool on your entire server codebase the static analyzer should run automatically during ci include the static analysis reports in your server repository try to fix most of the bugs found by the analyzer howieraem client profile page
| 0
|
| 7,653 | 2,919,576,594 | IssuesEvent | 2015-06-24 14:51:09 | Semantic-Org/Semantic-UI | https://api.github.com/repos/Semantic-Org/Semantic-UI | closed | FormValidate: test if two fields does not Match. | Needs Test Case Read the Readme |
Hi, I'm using the formValidate on my website. My form is a sing-up form.
```Javascript
$('#form').form({
ps: {
identifier : 'ps',
rules: [
{
type : 'empty',
prompt : 'Veuillez choisir un pseudonyme.'
}
]
},
pass: {
identifier : 'pass',
rules: [
{
type : 'empty',
prompt : 'Veuillez choisir un mot de passe.'
}
]
},
pass2: {
identifier : 'pass2',
rules: [
{
type : 'empty',
prompt : 'Veuillez répéter le mot de passe.'
},
{
type : 'match[pass]',
prompt : 'Les mots de passe doivent correspondre.'
}
]
},
lname: {
identifier : 'lname',
rules: [
{
type : 'empty',
prompt : 'Veuillez renseigner votre nom.'
}
]
},
fname: {
identifier : 'fname',
rules: [
{
type : 'empty',
prompt : 'Veuillez renseigner votre prénom.'
}
]
},
gender: {
identifier : 'gender',
rules: [
{
type : 'empty',
prompt : 'Veuillez selectionner votre sexe.'
}
]
}
});
```
but when I try to compare two password fields with the "match" type it's not works. I can compare if pass and pass2 values are equals but I need to test if they are different. How to fix that ?
|
1.0
|
FormValidate: test if two fields does not Match. - Hi, I'm using the formValidate on my website. My form is a sing-up form.
```Javascript
$('#form').form({
ps: {
identifier : 'ps',
rules: [
{
type : 'empty',
prompt : 'Veuillez choisir un pseudonyme.'
}
]
},
pass: {
identifier : 'pass',
rules: [
{
type : 'empty',
prompt : 'Veuillez choisir un mot de passe.'
}
]
},
pass2: {
identifier : 'pass2',
rules: [
{
type : 'empty',
prompt : 'Veuillez répéter le mot de passe.'
},
{
type : 'match[pass]',
prompt : 'Les mots de passe doivent correspondre.'
}
]
},
lname: {
identifier : 'lname',
rules: [
{
type : 'empty',
prompt : 'Veuillez renseigner votre nom.'
}
]
},
fname: {
identifier : 'fname',
rules: [
{
type : 'empty',
prompt : 'Veuillez renseigner votre prénom.'
}
]
},
gender: {
identifier : 'gender',
rules: [
{
type : 'empty',
prompt : 'Veuillez selectionner votre sexe.'
}
]
}
});
```
but when I try to compare two password fields with the "match" type it's not works. I can compare if pass and pass2 values are equals but I need to test if they are different. How to fix that ?
|
non_process
|
formvalidate test if two fields does not match hi i m using the formvalidate on my website my form is a sing up form javascript form form ps identifier ps rules type empty prompt veuillez choisir un pseudonyme pass identifier pass rules type empty prompt veuillez choisir un mot de passe identifier rules type empty prompt veuillez répéter le mot de passe type match prompt les mots de passe doivent correspondre lname identifier lname rules type empty prompt veuillez renseigner votre nom fname identifier fname rules type empty prompt veuillez renseigner votre prénom gender identifier gender rules type empty prompt veuillez selectionner votre sexe but when i try to compare two password fields with the match type it s not works i can compare if pass and values are equals but i need to test if they are different how to fix that
| 0
|
| 12,300 | 9,682,495,843 | IssuesEvent | 2019-05-23 09:18:12 | GIScience/openrouteservice | https://api.github.com/repos/GIScience/openrouteservice | closed | Use local maven artifact repo for development | Awaiting release infrastructure |
Use local maven artifact repo in order to speed up development turnover time.
|
1.0
|
Use local maven artifact repo for development - Use local maven artifact repo in order to speed up development turnover time.
|
non_process
|
use local maven artifact repo for development use local maven artifact repo in order to speed up development turnover time
| 0
|
| 195,610 | 15,531,407,804 | IssuesEvent | 2021-03-13 23:29:05 | MUHC-DP-Project/pbrn-gateway | https://api.github.com/repos/MUHC-DP-Project/pbrn-gateway | closed | First Iteration of the SRS documentation | documentation |
# Goal
You will have to do the first iteration of the SRS documentation following the IEEE style.
The Template will be as follow:
## Section I: Introduction
1.1. Purpose ............................................................................................3
1.2. Scope ............................................................................................3
1.3. Definitions, Acronyms & Abbreviations .................................... 3-4
1.4. References ............................................................................................4
1.5. Overview ............................................................................................4
## Section II: Overall Description
2.1 Product Perspective ....................................................................5-7
2.2 Product Functions ....................................................................7-8
## Section III: Specific Requirements
3.1 System Features ..................................................................8-10
3.2 Functional Requirements................................................................10-12
3.3 Performance Requirements ...........................................................12
3.4 Design Constraints .....................................................................13
3.5 Software System Attributes ….......................................................13
|
1.0
|
First Iteration of the SRS documentation - # Goal
You will have to do the first iteration of the SRS documentation following the IEEE style.
The Template will be as follow:
## Section I: Introduction
1.1. Purpose ............................................................................................3
1.2. Scope ............................................................................................3
1.3. Definitions, Acronyms & Abbreviations .................................... 3-4
1.4. References ............................................................................................4
1.5. Overview ............................................................................................4
## Section II: Overall Description
2.1 Product Perspective ....................................................................5-7
2.2 Product Functions ....................................................................7-8
## Section III: Specific Requirements
3.1 System Features ..................................................................8-10
3.2 Functional Requirements................................................................10-12
3.3 Performance Requirements ...........................................................12
3.4 Design Constraints .....................................................................13
3.5 Software System Attributes ….......................................................13
|
non_process
|
first iteration of the srs documentation goal you will have to do the first iteration of the srs documentation following the ieee style the template will be as follow section i introduction purpose scope definitions acronyms abbreviations references overview section ii overall description product perspective product functions section iii specific requirements system features functional requirements performance requirements design constraints software system attributes …
| 0
|
| 16,015 | 9,669,748,758 | IssuesEvent | 2019-05-21 18:08:58 | PowerShell/Announcements | https://api.github.com/repos/PowerShell/Announcements | opened | Microsoft Security Advisory CVE-2019-0733 - Windows Defender Application Control Security Feature Bypass Vulnerability | PowerShell Security |
# Microsoft Security Advisory CVE-2019-0733 - Windows Defender Application Control Security Feature Bypass Vulnerability
## Executive Summary
A security feature bypass vulnerability exists in Windows Defender Application Control (WDAC) which could allow an attacker to bypass WDAC enforcement. An attacker who successfully exploited this vulnerability could circumvent Windows PowerShell Constrained Language Mode on the machine.
To exploit the vulnerability, an attacker would first have access to the local machine where PowerShell is running in Constrained Language mode. By doing that an attacker could leverage script debugging to abuse signed modules in an unintended way.
The update addresses the vulnerability by correcting how PowerShell functions in Constrained Language Mode.
System administrators are advised to update PowerShell Core to an unaffected version (see [affected software](#user-content-affected-software).)
## Discussion
Please use PowerShell/PowerShell#9644 for discussion of this advisory.
## <a name="affected-software">Affected Software</a>
The vulnerability affects PowerShell Core prior to the following versions:
| PowerShell Core Version | Fixed in |
|-------------------------|-------------------|
| 6.1 | 6.1.4 |
| 6.2 | 6.2.1 |
## Advisory FAQ
### How do I know if I am affected?
If all of the following are true:
1. Run `pwsh -v`, then, check the version in the table in [Affected Software](#user-content-affected-software) to see if your version of PowerShell Core is affected.
1. If you are running a version of PowerShell Core where the executable is not `pwsh` or `pwsh.exe`, then you are affected. This only existed for preview version of `6.0`.
### How do I update to an unaffected version?
Follow the instructions at [Installing PowerShell Core](https://docs.microsoft.com/en-us/powershell/scripting/setup/installing-powershell?view=powershell-6) to install the latest version of PowerShell Core.
## Other Information
### Reporting Security Issues
If you have found a potential security issue in PowerShell Core,
please email details to secure@microsoft.com.
### Support
You can ask questions about this issue on GitHub in the PowerShell organization.
This is located at https://github.com/PowerShell/.
The Announcements repo (https://github.com/PowerShell/Announcements)
will contain this bulletin as an issue and will include a link to a discussion issue where you can ask questions.
### What if the update breaks my script or module?
You can uninstall the newer version of PowerShell Core and install the previous version of PowerShell Core.
This should be treated as a temporary measure.
Therefore, the script or module should be updated to work with the patched version of PowerShell Core.
### Acknowledgments
Matt Graeber of [SpecterOps](https://specterops.io/)
Microsoft recognizes the efforts of those in the security community who help us protect customers through coordinated vulnerability disclosure.
See [acknowledgments](https://portal.msrc.microsoft.com/en-us/security-guidance/acknowledgments) for more information.
### External Links
[CVE-2019-0733](https://portal.msrc.microsoft.com/en-US/security-guidance/advisory/CVE-2019-0733)
### Revisions
V1.0 (May 21, 2019): Advisory published.
*Version 1.0*
*Last Updated 2019-05-21*
|
True
|
Microsoft Security Advisory CVE-2019-0733 - Windows Defender Application Control Security Feature Bypass Vulnerability - # Microsoft Security Advisory CVE-2019-0733 - Windows Defender Application Control Security Feature Bypass Vulnerability
## Executive Summary
A security feature bypass vulnerability exists in Windows Defender Application Control (WDAC) which could allow an attacker to bypass WDAC enforcement. An attacker who successfully exploited this vulnerability could circumvent Windows PowerShell Constrained Language Mode on the machine.
To exploit the vulnerability, an attacker would first have access to the local machine where PowerShell is running in Constrained Language mode. By doing that an attacker could leverage script debugging to abuse signed modules in an unintended way.
The update addresses the vulnerability by correcting how PowerShell functions in Constrained Language Mode.
System administrators are advised to update PowerShell Core to an unaffected version (see [affected software](#user-content-affected-software).)
## Discussion
Please use PowerShell/PowerShell#9644 for discussion of this advisory.
## <a name="affected-software">Affected Software</a>
The vulnerability affects PowerShell Core prior to the following versions:
| PowerShell Core Version | Fixed in |
|-------------------------|-------------------|
| 6.1 | 6.1.4 |
| 6.2 | 6.2.1 |
## Advisory FAQ
### How do I know if I am affected?
If all of the following are true:
1. Run `pwsh -v`, then, check the version in the table in [Affected Software](#user-content-affected-software) to see if your version of PowerShell Core is affected.
1. If you are running a version of PowerShell Core where the executable is not `pwsh` or `pwsh.exe`, then you are affected. This only existed for preview version of `6.0`.
### How do I update to an unaffected version?
Follow the instructions at [Installing PowerShell Core](https://docs.microsoft.com/en-us/powershell/scripting/setup/installing-powershell?view=powershell-6) to install the latest version of PowerShell Core.
## Other Information
### Reporting Security Issues
If you have found a potential security issue in PowerShell Core,
please email details to secure@microsoft.com.
### Support
You can ask questions about this issue on GitHub in the PowerShell organization.
This is located at https://github.com/PowerShell/.
The Announcements repo (https://github.com/PowerShell/Announcements)
will contain this bulletin as an issue and will include a link to a discussion issue where you can ask questions.
### What if the update breaks my script or module?
You can uninstall the newer version of PowerShell Core and install the previous version of PowerShell Core.
This should be treated as a temporary measure.
Therefore, the script or module should be updated to work with the patched version of PowerShell Core.
### Acknowledgments
Matt Graeber of [SpecterOps](https://specterops.io/)
Microsoft recognizes the efforts of those in the security community who help us protect customers through coordinated vulnerability disclosure.
See [acknowledgments](https://portal.msrc.microsoft.com/en-us/security-guidance/acknowledgments) for more information.
### External Links
[CVE-2019-0733](https://portal.msrc.microsoft.com/en-US/security-guidance/advisory/CVE-2019-0733)
### Revisions
V1.0 (May 21, 2019): Advisory published.
*Version 1.0*
*Last Updated 2019-05-21*
|
non_process
|
microsoft security advisory cve windows defender application control security feature bypass vulnerability microsoft security advisory cve windows defender application control security feature bypass vulnerability executive summary a security feature bypass vulnerability exists in windows defender application control wdac which could allow an attacker to bypass wdac enforcement an attacker who successfully exploited this vulnerability could circumvent windows powershell constrained language mode on the machine to exploit the vulnerability an attacker would first have access to the local machine where powershell is running in constrained language mode by doing that an attacker could leverage script debugging to abuse signed modules in an unintended way the update addresses the vulnerability by correcting how powershell functions in constrained language mode system administrators are advised to update powershell core to an unaffected version see user content affected software discussion please use powershell powershell for discussion of this advisory affected software the vulnerability affects powershell core prior to the following versions powershell core version fixed in advisory faq how do i know if i am affected if all of the following are true run pwsh v then check the version in the table in user content affected software to see if your version of powershell core is affected if you are running a version of powershell core where the executable is not pwsh or pwsh exe then you are affected this only existed for preview version of how do i update to an unaffected version follow the instructions at to install the latest version of powershell core other information reporting security issues if you have found a potential security issue in powershell core please email details to secure microsoft com support you can ask questions about this issue on github in the powershell organization this is located at the announcements repo will contain this bulletin as an issue and will include a link to a discussion issue where you can ask questions what if the update breaks my script or module you can uninstall the newer version of powershell core and install the previous version of powershell core this should be treated as a temporary measure therefore the script or module should be updated to work with the patched version of powershell core acknowledgments matt graeber of microsoft recognizes the efforts of those in the security community who help us protect customers through coordinated vulnerability disclosure see for more information external links revisions may advisory published version last updated
| 0
|
| 5,730 | 8,570,073,523 | IssuesEvent | 2018-11-11 16:51:32 | nodejs/node | https://api.github.com/repos/nodejs/node | closed | Non-UTF-8 env vars | feature request process |
* **Version**: v8.8.1
* **Platform**: Linux daurn-z170 4.13.7-1-ARCH #1 SMP PREEMPT Sat Oct 14 20:13:26 CEST 2017 x86_64 GNU/Linux
* **Subsystem**: process
<!-- Enter your issue details below this comment. -->
Node.js doesn't seem to have a way to read environment variables that aren't valid unicode. For both keys and values. e.g.
```
$ env -i $'F\xa5B=BAR' node -e 'console.log(process.env)'
{ 'F�B': undefined }
$ env -i $'FOO=B\xa5R' node -e 'console.log(process.env.FOO.codePointAt(1))'
65533
```
|
1.0
|
Non-UTF-8 env vars - * **Version**: v8.8.1
* **Platform**: Linux daurn-z170 4.13.7-1-ARCH #1 SMP PREEMPT Sat Oct 14 20:13:26 CEST 2017 x86_64 GNU/Linux
* **Subsystem**: process
<!-- Enter your issue details below this comment. -->
Node.js doesn't seem to have a way to read environment variables that aren't valid Unicode, for both keys and values, e.g.
```
$ env -i $'F\xa5B=BAR' node -e 'console.log(process.env)'
{ 'F�B': undefined }
$ env -i $'FOO=B\xa5R' node -e 'console.log(process.env.FOO.codePointAt(1))'
65533
```
|
process
|
non utf env vars version platform linux daurn arch smp preempt sat oct cest gnu linux subsystem process node js doesn t seem to have a way to read environment variables that aren t valid unicode for both keys and values e g env i f bar node e console log process env f�b undefined env i foo b node e console log process env foo codepointat
| 1
|
9,198
| 12,232,396,947
|
IssuesEvent
|
2020-05-04 09:36:53
|
prisma/prisma-client-js
|
https://api.github.com/repos/prisma/prisma-client-js
|
closed
|
Prisma Client removed on adding a new package: Error: "Photon is not initialized yet"
|
bug/0-needs-info kind/bug process/next-milestone team/typescript
|
This behavior can be observed in both `npm` and `yarn`: set up a Photon project and run any query with photon to ensure that it exists.
Now add any dependency with `yarn` or `npm` and re-execute the photon query. I reproduced this with `yarn add dotenv` or `npm install dotenv`.
It doesn't matter whether `dotenv` is already installed or is a new package. I could reproduce this reliably.
You would run into (error paraphrased):
`npm`: `import { Photon } from '@prisma/photon'` not found.
`yarn`: `Photon is not initialized yet. Please run prisma2 generate again`
I used the following versions:
```
divyendusingh [prisma]$ yarn --version
1.21.1
divyendusingh [prisma]$ npm --version
6.10.2
```
|
1.0
|
Prisma Client removed on adding a new package: Error: "Photon is not initialized yet" - This behavior can be observed in both `npm` and `yarn`: set up a Photon project and run any query with photon to ensure that it exists.
Now add any dependency with `yarn` or `npm` and re-execute the photon query. I reproduced this with `yarn add dotenv` or `npm install dotenv`.
It doesn't matter whether `dotenv` is already installed or is a new package. I could reproduce this reliably.
You would run into (error paraphrased):
`npm`: `import { Photon } from '@prisma/photon'` not found.
`yarn`: `Photon is not initialized yet. Please run prisma2 generate again`
I used the following versions:
```
divyendusingh [prisma]$ yarn --version
1.21.1
divyendusingh [prisma]$ npm --version
6.10.2
```
|
process
|
prisma client removed on adding a new package error photon is not initialized yet this behavior can be observed in both npm and yarn setup a photon project and run any query with photon to ensure that it exists now add any dependency in yarn npm and re execute photon query i reproduced this with yarn add dotenv or npm install dotenv it doesn t matter if dotenv is already install or is a new package i could reproduce this reliably you would run into error paraphrased npm import photon from prisma photon no found yarn photon is not initialized yet please run generate again i used the following versions divyendusingh yarn version divyendusingh npm version
| 1
|
13,918
| 16,676,385,723
|
IssuesEvent
|
2021-06-07 16:42:45
|
ESMValGroup/ESMValCore
|
https://api.github.com/repos/ESMValGroup/ESMValCore
|
closed
|
Issues with preprocessors that use fx variables
|
bug preprocessor
|
I found some small issues with the new implementation of the fx preprocessors in #999 (great job btw there @sloosvel!).
Issues
---------
1. Not directly related to #999, but do we have a list of all preprocessors for which `fx_variables` can be specified? Maybe I'm blind, but I didn't find it in the documentation. Basically the list given here: https://github.com/ESMValGroup/ESMValCore/blob/fee5d541dc871efe72ab972a6a52195c452d16eb/esmvalcore/_recipe.py#L482-L485
2. For `mask_landsea` we had checks for the shape of the fx variables in place. If the shapes did not match, Natural Earth masks were used. Right now, if the shapes do not match, an error is printed as reported by @LisaBock [here](https://github.com/ESMValGroup/ESMValTool/issues/2155#issuecomment-832060910): `ValueError: Dimensions of tas and sftof cubes do not match. Cannot broadcast cubes.` This is particularly problematic since for `mask_landsea`, the tool automatically adds the fx variables `sftlf` and `sftof` if none is specified. Since `sftlf` is always checked first, the tool is most likely to fail if an ocean variable is specified since the shapes of `sftlf` and ocean variables differ for most models. For example, the first recipe fails with the error `ValueError: Dimensions of tos and sftlf cubes do not match. Cannot broadcast cubes.`, and the second one runs fine:
<details>
<summary>Recipe 1 (fails)</summary>
```yml
preprocessors:
test:
mask_landsea:
mask_out: land
diagnostics:
test:
variables:
tos:
preprocessor: test
project: CMIP5
mip: Omon
exp: historical
start_year: 2000
end_year: 2000
ensemble: r1i1p1
additional_datasets:
- {dataset: CanESM2}
scripts: null
```
</details>
<details>
<summary>Recipe 2 (works)</summary>
```yml
preprocessors:
test:
mask_landsea:
mask_out: land
fx_variables:
sftof:
diagnostics:
test:
variables:
tos:
preprocessor: test
project: CMIP5
mip: Omon
exp: historical
start_year: 2000
end_year: 2000
ensemble: r1i1p1
additional_datasets:
- {dataset: CanESM2}
scripts: null
```
</details>
3. For `weighting_landsea_fraction`, we had similar checks. The tool tried `sftlf` and `sftof`, and only if both had different shapes, an error was printed. Right now, due to the prioritization of `sftlf`, an error is e.g. printed for every ocean variables that is not on an atmospheric grid (thus does not match the shape of `sftlf`), even though `sftof` would work just fine.
4. As far as I can tell from the code (didn't test though!) the remaining preprocessors should behave similarly to the old version (the error messages might differ, though).
5. The following message is now printed to the log. I think we should demote this to a debug message and maybe only print the `short_name` of the fx variables as info message.
<details>
<summary>Log</summary>
```
2021-05-05 08:28:29,557 UTC [14161] INFO Using fx_files: {'sftlf': {'alias': 'CanESM2',
'dataset': 'CanESM2',
'diagnostic': 'test',
'end_year': 2000,
'ensemble': 'r0i0p0',
'exp': 'historical',
'filename': '/pf/b/b309141/work/CMIP5_DKRZ/CCCma/CanESM2/historical/fx/atmos/fx/r0i0p0/v20120410/sftlf/sftlf_fx_CanESM2_historical_r0i0p0.nc',
'frequency': 'fx',
'institute': ['CCCma'],
'long_name': 'Land Area Fraction',
'mip': 'fx',
'modeling_realm': ['atmos'],
'original_short_name': 'sftlf',
'preprocessor': 'test',
'project': 'CMIP5',
'recipe_dataset_index': 0,
'short_name': 'sftlf',
'standard_name': 'land_area_fraction',
'start_year': 2000,
'units': '%',
'variable_group': 'sftlf'},
'sftof': {'alias': 'CanESM2',
'dataset': 'CanESM2',
'diagnostic': 'test',
'end_year': 2000,
'ensemble': 'r0i0p0',
'exp': 'historical',
'filename': '/pf/b/b309141/work/CMIP5_DKRZ/CCCma/CanESM2/historical/fx/ocean/fx/r0i0p0/v20130119/sftof/sftof_fx_CanESM2_historical_r0i0p0.nc',
'frequency': 'fx',
'institute': ['CCCma'],
'long_name': 'Sea Area Fraction',
'mip': 'fx',
'modeling_realm': ['ocean'],
'original_short_name': 'sftof',
'preprocessor': 'test',
'project': 'CMIP5',
'recipe_dataset_index': 0,
'short_name': 'sftof',
'standard_name': 'sea_area_fraction',
'start_year': 2000,
'units': '%',
'variable_group': 'sftof'}} for variable tos during step mask_landsea
```
</details>
Possible solution
----------------------
The main issues here are 2. and 3. I think both can be fixed if we don't raise an exception [here](https://github.com/ESMValGroup/ESMValCore/blob/fee5d541dc871efe72ab972a6a52195c452d16eb/esmvalcore/preprocessor/_ancillary_vars.py#L80-L83) and [here](https://github.com/ESMValGroup/ESMValCore/blob/fee5d541dc871efe72ab972a6a52195c452d16eb/esmvalcore/preprocessor/_ancillary_vars.py#L120-L123) if the shapes of the ancillary files do not match the cube. If a preprocessor cannot handle the case that no fx files are found/the shapes differ, the errors should be raised in the corresponding preprocessing function.
It would also be nice to add some tests for these cases, similar to the [ones we had before](https://github.com/ESMValGroup/ESMValCore/pull/999/files?file-filters%5B%5D=.py#diff-e57a19c9bf65c498803a124f8d455e21ea4d03ed5d1be8f51d6fa4570bb25623L56-L92).
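A rough sketch of the fallback behaviour suggested above; the function name and call signature are hypothetical and do not match the actual ESMValCore API (the real change would live in `_ancillary_vars.py`):
```python
# Hypothetical sketch: pick the first fx variable whose shape can be broadcast
# against the data, warn (instead of raising) for the ones that cannot, and let
# the calling preprocessor fall back (e.g. to Natural Earth masks) when nothing fits.
import logging
import numpy as np

logger = logging.getLogger(__name__)

def pick_fx_variable(data_shape, fx_shapes):
    """fx_shapes is an ordered mapping, e.g. {'sftlf': (64, 128), 'sftof': (291, 360)}."""
    for short_name, shape in fx_shapes.items():
        try:
            np.broadcast_shapes(data_shape, shape)
        except ValueError:
            logger.warning(
                "Shape %s of fx variable %s does not match data shape %s; trying the next one",
                shape, short_name, data_shape)
            continue
        return short_name
    return None  # caller decides what to do without an fx mask

# An ocean variable on its native grid: sftlf is skipped, sftof is used.
print(pick_fx_variable((12, 291, 360), {"sftlf": (64, 128), "sftof": (291, 360)}))
```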
|
1.0
|
Issues with preprocessors that use fx variables - I found some small issues with the new implementation of the fx preprocessors in #999 (great job btw there @sloosvel!).
Issues
---------
1. Not directly related to #999, but do we have a list of all preprocessors for which `fx_variables` can be specified? Maybe I'm blind, but I didn't find it in the documentation. Basically the list given here: https://github.com/ESMValGroup/ESMValCore/blob/fee5d541dc871efe72ab972a6a52195c452d16eb/esmvalcore/_recipe.py#L482-L485
2. For `mask_landsea` we had checks for the shape of the fx variables in place. If the shapes did not match, Natural Earth masks were used. Right now, if the shapes do not match, an error is printed as reported by @LisaBock [here](https://github.com/ESMValGroup/ESMValTool/issues/2155#issuecomment-832060910): `ValueError: Dimensions of tas and sftof cubes do not match. Cannot broadcast cubes.` This is particularly problematic since for `mask_landsea`, the tool automatically adds the fx variables `sftlf` and `sftof` if none is specified. Since `sftlf` is always checked first, the tool is most likely to fail if an ocean variable is specified since the shapes of `sftlf` and ocean variables differ for most models. For example, the first recipe fails with the error `ValueError: Dimensions of tos and sftlf cubes do not match. Cannot broadcast cubes.`, and the second one runs fine:
<details>
<summary>Recipe 1 (fails)</summary>
```yml
preprocessors:
test:
mask_landsea:
mask_out: land
diagnostics:
test:
variables:
tos:
preprocessor: test
project: CMIP5
mip: Omon
exp: historical
start_year: 2000
end_year: 2000
ensemble: r1i1p1
additional_datasets:
- {dataset: CanESM2}
scripts: null
```
</details>
<details>
<summary>Recipe 2 (works)</summary>
```yml
preprocessors:
test:
mask_landsea:
mask_out: land
fx_variables:
sftof:
diagnostics:
test:
variables:
tos:
preprocessor: test
project: CMIP5
mip: Omon
exp: historical
start_year: 2000
end_year: 2000
ensemble: r1i1p1
additional_datasets:
- {dataset: CanESM2}
scripts: null
```
</details>
3. For `weighting_landsea_fraction`, we had similar checks. The tool tried `sftlf` and `sftof`, and only if both had different shapes, an error was printed. Right now, due to the prioritization of `sftlf`, an error is e.g. printed for every ocean variables that is not on an atmospheric grid (thus does not match the shape of `sftlf`), even though `sftof` would work just fine.
4. As far as I can tell from the code (didn't test though!) the remaining preprocessors should behave similarly to the old version (the error messages might differ, though).
5. The following message is now printed to the log. I think we should demote this to a debug message and maybe only print the `short_name` of the fx variables as info message.
<details>
<summary>Log</summary>
```
2021-05-05 08:28:29,557 UTC [14161] INFO Using fx_files: {'sftlf': {'alias': 'CanESM2',
'dataset': 'CanESM2',
'diagnostic': 'test',
'end_year': 2000,
'ensemble': 'r0i0p0',
'exp': 'historical',
'filename': '/pf/b/b309141/work/CMIP5_DKRZ/CCCma/CanESM2/historical/fx/atmos/fx/r0i0p0/v20120410/sftlf/sftlf_fx_CanESM2_historical_r0i0p0.nc',
'frequency': 'fx',
'institute': ['CCCma'],
'long_name': 'Land Area Fraction',
'mip': 'fx',
'modeling_realm': ['atmos'],
'original_short_name': 'sftlf',
'preprocessor': 'test',
'project': 'CMIP5',
'recipe_dataset_index': 0,
'short_name': 'sftlf',
'standard_name': 'land_area_fraction',
'start_year': 2000,
'units': '%',
'variable_group': 'sftlf'},
'sftof': {'alias': 'CanESM2',
'dataset': 'CanESM2',
'diagnostic': 'test',
'end_year': 2000,
'ensemble': 'r0i0p0',
'exp': 'historical',
'filename': '/pf/b/b309141/work/CMIP5_DKRZ/CCCma/CanESM2/historical/fx/ocean/fx/r0i0p0/v20130119/sftof/sftof_fx_CanESM2_historical_r0i0p0.nc',
'frequency': 'fx',
'institute': ['CCCma'],
'long_name': 'Sea Area Fraction',
'mip': 'fx',
'modeling_realm': ['ocean'],
'original_short_name': 'sftof',
'preprocessor': 'test',
'project': 'CMIP5',
'recipe_dataset_index': 0,
'short_name': 'sftof',
'standard_name': 'sea_area_fraction',
'start_year': 2000,
'units': '%',
'variable_group': 'sftof'}} for variable tos during step mask_landsea
```
</details>
Possible solution
----------------------
The main issues here are 2. and 3. I think both can be fixed if we don't raise an exception [here](https://github.com/ESMValGroup/ESMValCore/blob/fee5d541dc871efe72ab972a6a52195c452d16eb/esmvalcore/preprocessor/_ancillary_vars.py#L80-L83) and [here](https://github.com/ESMValGroup/ESMValCore/blob/fee5d541dc871efe72ab972a6a52195c452d16eb/esmvalcore/preprocessor/_ancillary_vars.py#L120-L123) if the shapes of the ancillary files do not match the cube. If a preprocessor cannot handle the case that no fx files are found/the shapes differ, the errors should be raised in the corresponding preprocessing function.
It would also be nice to add some tests for these cases, similar to the [ones we had before](https://github.com/ESMValGroup/ESMValCore/pull/999/files?file-filters%5B%5D=.py#diff-e57a19c9bf65c498803a124f8d455e21ea4d03ed5d1be8f51d6fa4570bb25623L56-L92).
|
process
|
issues with preprocessors that use fx variables i found some small issues with the new implementation of the fx preprocessors in great job btw there sloosvel issues not directly related to but do we have a list of all preprocessors for which fx variables can be specified maybe i m blind but i didn t find it in the documentation basically the list given here for mask landsea we had checks for the shape of the fx variables in place if the shapes did not match natural earth masks were used right now if the shapes do not match an error is printed as reported by lisabock valueerror dimensions of tas and sftof cubes do not match cannot broadcast cubes this is in particular problematic since for mask landsea the tool automatically adds the fx variables sftlf and sftof if none is specified since sftlf is always checked first the tool is most likely to fail if an ocean variable is specified since the shapes of sfltf and ocean variables differ for most models for example the first recipe fails with the error valueerror dimensions of tos and sftlf cubes do not match cannot broadcast cubes and the second one runs fine recipe fails yml preprocessors test mask landsea mask out land diagnostics test variables tos preprocessor test project mip omon exp historical start year end year ensemble additional datasets dataset scripts null recipe works yml preprocessors test mask landsea mask out land fx variables sftof diagnostics test variables tos preprocessor test project mip omon exp historical start year end year ensemble additional datasets dataset scripts null for weighting landsea fraction we had similar checks the tool tried sftlf and sftof and only if both had different shapes an error was printed right now due to the prioritization of sftlf an error is e g printed for every ocean variables that is not on an atmospheric grid thus does not match the shape of sftlf even though sftof would work just fine as far as i can tell from the code didn t test though the remaining preprocessors should behave similarly to the old version the error messages might differ though the following message is now printed to the log i think we should demote this to a debug message and maybe only print the short name of the fx variables as info message log utc info using fx files sftlf alias dataset diagnostic test end year ensemble exp historical filename pf b work dkrz cccma historical fx atmos fx sftlf sftlf fx historical nc frequency fx institute long name land area fraction mip fx modeling realm original short name sftlf preprocessor test project recipe dataset index short name sftlf standard name land area fraction start year units variable group sftlf sftof alias dataset diagnostic test end year ensemble exp historical filename pf b work dkrz cccma historical fx ocean fx sftof sftof fx historical nc frequency fx institute long name sea area fraction mip fx modeling realm original short name sftof preprocessor test project recipe dataset index short name sftof standard name sea area fraction start year units variable group sftof for variable tos during step mask landsea possible solution the main issues here are and i think both can be fixed if we don t raise an exception and if the shapes of the ancilliary files do not match the cube if a preprocessor cannot handle the case that no fx files are found the shapes differ the errors should be raised in the corresponding preprocessing function it would be also nice to add some tests for these cases similiar to the
| 1
|
16,290
| 20,919,583,805
|
IssuesEvent
|
2022-03-24 16:10:13
|
scikit-learn/scikit-learn
|
https://api.github.com/repos/scikit-learn/scikit-learn
|
closed
|
sklearn.preprocessing.LabelEncoder: new feature to manage unknown value in `transform()` method
|
New Feature module:preprocessing
|
### Describe the workflow you want to enable
Hi, I'm working with `from sklearn.preprocessing import LabelEncoder` and I would find it interesting (and useful) to have an option to not get an error when there is an unknown label while using the `transform()` method, as in this toy example.
Currently:
```python
import pandas as pd
from sklearn.preprocessing import LabelEncoder
le = LabelEncoder()
train = pd.DataFrame({"x1": ["a", "b", "c"], "y": [1, 2, 3]})
test = pd.DataFrame({"x1": ["a", "c", "d"], "y": [1, 3, 4]})
print(le.fit_transform(train["x1"]))
print(le.transform(test["x1"]))
```
output is:
```
[0 1 2]
KeyError
.
.
.
ValueError: y contains previously unseen labels: 'd'
```
Wanted behaviour/feature:
```python
import pandas as pd
from sklearn.preprocessing import LabelEncoder
le = LabelEncoder()
train = pd.DataFrame({"x1": ["a", "b", "c"], "y": [1, 2, 3]})
test = pd.DataFrame({"x1": ["a", "c", "d"], "y": [1, 3, 4]})
print(le.fit_transform(train["x1"]))
print(le.transform(test["x1"], allow_unknown=True))
```
output is something like:
```
[0 1 2]
[0 2 None]
```
### Describe your proposed solution
We could do this by adding a new parameter to the `transform()` method of the `LabelEncoder()` class or to the `__init__()` method of the `LabelEncoder()` class. The parameter could also look like `replace_unknown=False` if we don't want this new feature (the default value), or if `replace_unknown != False` we will replace the unknown value (not provided during the `fit()`) with the specified value (for example: `replace_unknown=None` will replace the unknown values with `None`, or `replace_unknown=0` will replace the unknown values with `0`).
I could open a PR to propose this change
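A minimal sketch of how the proposed option could behave, built only on the existing public API; `TolerantLabelEncoder`, `replace_unknown` and `fill_value` are hypothetical names, not existing scikit-learn features:
```python
# Sketch of the requested behaviour: known labels are encoded as usual,
# unknown labels are replaced by a fill value instead of raising ValueError.
import numpy as np
from sklearn.preprocessing import LabelEncoder

class TolerantLabelEncoder(LabelEncoder):
    def transform(self, y, replace_unknown=False, fill_value=None):
        if replace_unknown is False:
            return super().transform(y)  # current strict behaviour
        known = {label: code for code, label in enumerate(self.classes_)}
        return np.array([known.get(item, fill_value) for item in y], dtype=object)

le = TolerantLabelEncoder()
print(le.fit_transform(["a", "b", "c"]))                     # [0 1 2]
print(le.transform(["a", "c", "d"], replace_unknown=True))   # [0 2 None]
```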
### Describe alternatives you've considered, if relevant
_No response_
### Additional context
_No response_
|
1.0
|
sklearn.preprocessing.LabelEncoder: new feature to manage unknown value in `transform()` method - ### Describe the workflow you want to enable
Hi, I'm working with `from sklearn.preprocessing import LabelEncoder` and I would find it interesting (and useful) to have an option to not get an error when there is an unknown label while using the `transform()` method, as in this toy example.
Currently:
```python
import pandas as pd
from sklearn.preprocessing import LabelEncoder
le = LabelEncoder()
train = pd.DataFrame({"x1": ["a", "b", "c"], "y": [1, 2, 3]})
test = pd.DataFrame({"x1": ["a", "c", "d"], "y": [1, 3, 4]})
print(le.fit_transform(train["x1"]))
print(le.transform(test["x1"]))
```
output is:
```
[0 1 2]
KeyError
.
.
.
ValueError: y contains previously unseen labels: 'd'
```
Wanted behaviour/feature:
```python
import pandas as pd
from sklearn.preprocessing import LabelEncoder
le = LabelEncoder()
train = pd.DataFrame({"x1": ["a", "b", "c"], "y": [1, 2, 3]})
test = pd.DataFrame({"x1": ["a", "c", "d"], "y": [1, 3, 4]})
print(le.fit_transform(train["x1"]))
print(le.transform(test["x1"], allow_unknown=True))
```
output is something like:
```
[0 1 2]
[0 2 None]
```
### Describe your proposed solution
We could do this by adding a new parameter to the `transform()` method of the `LabelEncoder()` class or to the `__init__()` method of the `LabelEncoder()` class. The parameter could also look like `replace_unknown=False` if we don't want this new feature (the default value), or if `replace_unknown != False` we will replace the unknown value (not provided during the `fit()`) with the specified value (for example: `replace_unknown=None` will replace the unknown values with `None`, or `replace_unknown=0` will replace the unknown values with `0`).
I could open a PR to propose this change
### Describe alternatives you've considered, if relevant
_No response_
### Additional context
_No response_
|
process
|
sklearn preprocessing labelencoder new feature to manage unknown value in transform method describe the workflow you want to enable hi i m working with from sklearn preprocessing import labelencoder and i would find interesting and useful to have an option to not get an error in case of when using the transform method there is an unknown label as in this toy example currently python import pandas as pd from sklearn preprocessing import labelencoder le labelencoder train pd dataframe y test pd dataframe y print le fit transform train print le transform test output is keyerror valueerror y contains previously unseen labels d wanted behaviour feature python import pandas as pd from sklearn preprocessing import labelencoder le labelencoder train pd dataframe y test pd dataframe y print le fit transform train print le transform test allow unkown true output is something like describe your proposed solution we could do this by adding a new parameter to the transform method of labelencoder class or in the init method of the labelencoder class the parameter could also looks like replace unknown false if we don t want this new feature default value or if replace unknown false we will replace the unknown value not provided during the fit by the specified value for example replace unknown none will replace the unknown values by none or replace unknown will replace the unknown values by i could open a pr to propose this change describe alternatives you ve considered if relevant no response additional context no response
| 1
|
244,594
| 26,436,647,761
|
IssuesEvent
|
2023-01-15 13:25:26
|
MikeGratsas/currency-converter
|
https://api.github.com/repos/MikeGratsas/currency-converter
|
closed
|
CVE-2022-22968 (Medium) detected in spring-context-5.3.12.jar - autoclosed
|
security vulnerability
|
## CVE-2022-22968 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>spring-context-5.3.12.jar</b></p></summary>
<p>Spring Context</p>
<p>Library home page: <a href="https://github.com/spring-projects/spring-framework">https://github.com/spring-projects/spring-framework</a></p>
<p>Path to dependency file: /pom.xml</p>
<p>Path to vulnerable library: /home/wss-scanner/.m2/repository/org/springframework/spring-context/5.3.12/spring-context-5.3.12.jar</p>
<p>
Dependency Hierarchy:
- spring-boot-starter-data-jpa-2.5.6.jar (Root Library)
- spring-data-jpa-2.5.6.jar
- :x: **spring-context-5.3.12.jar** (Vulnerable Library)
<p>Found in base branch: <b>main</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
In Spring Framework versions 5.3.0 - 5.3.18, 5.2.0 - 5.2.20, and older unsupported versions, the patterns for disallowedFields on a DataBinder are case sensitive which means a field is not effectively protected unless it is listed with both upper and lower case for the first character of the field, including upper and lower case for the first character of all nested fields within the property path.
<p>Publish Date: 2022-04-14
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2022-22968>CVE-2022-22968</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.3</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: Low
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://tanzu.vmware.com/security/cve-2022-22968">https://tanzu.vmware.com/security/cve-2022-22968</a></p>
<p>Release Date: 2022-04-14</p>
<p>Fix Resolution (org.springframework:spring-context): 5.3.19</p>
<p>Direct dependency fix Resolution (org.springframework.boot:spring-boot-starter-data-jpa): 2.5.13</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
True
|
CVE-2022-22968 (Medium) detected in spring-context-5.3.12.jar - autoclosed - ## CVE-2022-22968 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>spring-context-5.3.12.jar</b></p></summary>
<p>Spring Context</p>
<p>Library home page: <a href="https://github.com/spring-projects/spring-framework">https://github.com/spring-projects/spring-framework</a></p>
<p>Path to dependency file: /pom.xml</p>
<p>Path to vulnerable library: /home/wss-scanner/.m2/repository/org/springframework/spring-context/5.3.12/spring-context-5.3.12.jar</p>
<p>
Dependency Hierarchy:
- spring-boot-starter-data-jpa-2.5.6.jar (Root Library)
- spring-data-jpa-2.5.6.jar
- :x: **spring-context-5.3.12.jar** (Vulnerable Library)
<p>Found in base branch: <b>main</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
In Spring Framework versions 5.3.0 - 5.3.18, 5.2.0 - 5.2.20, and older unsupported versions, the patterns for disallowedFields on a DataBinder are case sensitive which means a field is not effectively protected unless it is listed with both upper and lower case for the first character of the field, including upper and lower case for the first character of all nested fields within the property path.
<p>Publish Date: 2022-04-14
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2022-22968>CVE-2022-22968</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.3</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: Low
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://tanzu.vmware.com/security/cve-2022-22968">https://tanzu.vmware.com/security/cve-2022-22968</a></p>
<p>Release Date: 2022-04-14</p>
<p>Fix Resolution (org.springframework:spring-context): 5.3.19</p>
<p>Direct dependency fix Resolution (org.springframework.boot:spring-boot-starter-data-jpa): 2.5.13</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
non_process
|
cve medium detected in spring context jar autoclosed cve medium severity vulnerability vulnerable library spring context jar spring context library home page a href path to dependency file pom xml path to vulnerable library home wss scanner repository org springframework spring context spring context jar dependency hierarchy spring boot starter data jpa jar root library spring data jpa jar x spring context jar vulnerable library found in base branch main vulnerability details in spring framework versions and older unsupported versions the patterns for disallowedfields on a databinder are case sensitive which means a field is not effectively protected unless it is listed with both upper and lower case for the first character of the field including upper and lower case for the first character of all nested fields within the property path publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact low availability impact none for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution org springframework spring context direct dependency fix resolution org springframework boot spring boot starter data jpa step up your open source security game with mend
| 0
|
114,722
| 14,628,377,804
|
IssuesEvent
|
2020-12-23 14:03:36
|
alpheios-project/alignment-editor-new
|
https://api.github.com/repos/alpheios-project/alignment-editor-new
|
closed
|
decide on text states storage requirements
|
design
|
@balmas and @kirlat, I suggest we discuss the overall scope of the task here.
**About handling texts:**
After examining the requirements, I believe we could define the following states of the source/translation texts:
1. input text
- it could be uploaded manually, from file, from external API (DTS API)
- it could be a plain text, TEI XML or text with already defined alignment based on Alpheios Alignment Schema
- it could have metadata in xml format (TEI XML)
2. updated text after upload
- a user could edit line breaks
- could add some words
- metadata could be edited
3. tokenized text before alignment (it could be fully or partially tokenized, according to the discussion in #1)
- it should be possible to edit the tokenized results
- add line breaks
- add words
4. text on each alignment iteration
- a user should be able to undo/redo each alignment step
- should be able to add comments to aligned groups
5. output alignment result (with various additional features)
and we could also have several translation texts for one source text
To realize such requirements, we could go one of the following ways:
- we could store the text at each state - so we would have 3 versions + all alignment steps
**advantages** - it would be easier to handle, manipulate and move between steps
**disadvantages** - it would use a lot of memory
- we could store the transactions of text changes
for example,
- plain text
- added line break
- added words
- added token to this word
- added new words
- added another token
- added alignment for these tokens
and when we need to get the text at any state, we apply all these changes
**advantages** - for big texts it would need less memory
**disadvantages** - it would need additional calculations
I like the first variant more, but I think that we could use some storage system in addition to it
- relational DB or IndexedDB
At some point we could try to store it in memory in some array/object-like structures
Or we could define which states cannot be re-derived; for example, we store only the finally edited input (with or without tokens) and the alignment steps
@balmas, given the requirements, what do you think: what amount of stored data in memory do we need?
and which states of the texts should we store for logged-in users?
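As a language-agnostic illustration of the second option above (the project itself is JavaScript), here is a small Python sketch of replaying a stored change log to rebuild the text at any step; the operation names and payloads are made up for the example:
```python
# Sketch of the "store transactions" approach: keep the initial text plus an
# ordered list of changes, and reconstruct any intermediate state by replaying
# the changes up to that point.
def apply_change(state, change):
    kind, payload = change
    if kind == "add words":
        position, words = payload
        state = state[:position] + words + state[position:]
    elif kind == "add line break":
        state = state[:payload] + "\n" + state[payload:]
    return state

def replay(initial_text, changes, upto=None):
    """Rebuild the text at any step by applying the changes up to that step."""
    state = initial_text
    for change in changes[:upto]:
        state = apply_change(state, change)
    return state

changes = [("add words", (5, " brown")), ("add line break", 15)]
print(replay("quick fox jumps over", changes))           # final state
print(replay("quick fox jumps over", changes, upto=1))   # intermediate state
```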
|
1.0
|
decide on text states storage requirements - @balmas and @kirlat, I suggest we discuss the overall scope of the task here.
**About handling texts:**
After examining the requirements, I believe we could define the following states of the source/translation texts:
1. input text
- it could be uploaded manually, from file, from external API (DTS API)
- it could be a plain text, TEI XML or text with already defined alignment based on Alpheios Alignment Schema
- it could have metadata in xml format (TEI XML)
2. updated text after upload
- a user could edit line breaks
- could add some words
- metadata could be edited
3. tokenized text before alignment (it could be fully or partially tokenized, according to the discussion in #1)
- it should be possible to edit the tokenized results
- add line breaks
- add words
4. text on each alignment iteration
- a user should be able to undo/redo each alignment step
- should be able to add comments to aligned groups
5. output alignment result (with various additional features)
and we could also have several translation texts for one source text
To realize such requirements, we could go one of the following ways:
- we could store the text at each state - so we would have 3 versions + all alignment steps
**advantages** - it would be easier to handle, manipulate and move between steps
**disadvantages** - it would use a lot of memory
- we could store the transactions of text changes
for example,
- plain text
- added line break
- added words
- added token to this word
- added new words
- added another token
- added alignment for these tokens
and when we need to get the text at any state, we apply all these changes
**advantages** - for big texts it would need less memory
**disadvantages** - it would need additional calculations
I like the first variant more, but I think that we could use some storage system in addition to it
- relational DB or IndexedDB
At some point we could try to store it in memory in some array/object-like structures
Or we could define which states cannot be re-derived; for example, we store only the finally edited input (with or without tokens) and the alignment steps
@balmas, given the requirements, what do you think: what amount of stored data in memory do we need?
and which states of the texts should we store for logged-in users?
|
non_process
|
decide on text states storage requirements balmas and kirlat i suggest to discuss overall scope of the task here about handling with texts after examinining the requirements i believe that we could define the following states of source translations text input text it could be uploaded manually from file from external api dts api it could be a plain text tei xml or text with already defined alignment based on alpheios alignment schema it could have metadata in xml format tei xml updated text after upload a user could edit line breaks could add some words metada could be editted tokenized text before alignment it could be fully tokenized or partly according to the discussion in it should be able to edit tokenized results add line breaks add words text on each alignment iteration a user should be able to undo redo each alignment step should be able to add comments to aligned groups output alignment result with various additional features and also we could have several translations texts for one source texts to realize such requirements we could go one of the following ways we could store texts on each state so we would have versions all alignment steps advantages it would be easier to handle manupulate and go between steps disadvantages it would use much memory we could store transactions of texts changing for example plain text added line break added words added token to this word added new words added another token added alignment for these tokens and when we need to to get the text on any state we apply all these changes advantages for big texts it would need less memory disadvantages it would need additional calculations i like the first variant more but i think that we could use some storage system in addition to it relational db or indexeddb at some point we could try to store it in memory in some array object like structures or we could define what states could not be re get for example we store only finally editted input with or without tokens and alignment steps balmas upon requirements what do you think what amount of stored data in memory we need and what states of texts we should store for logged in users
| 0
|
63,045
| 8,652,998,239
|
IssuesEvent
|
2018-11-27 09:41:58
|
wix/react-native-navigation
|
https://api.github.com/repos/wix/react-native-navigation
|
closed
|
Add this sample app to v2 docs
|
type: documentation user: looking for contributors
|
I don't know if this is something you'd like to include in the readme but I built this app with v2 with many features it has over v1 like SplitView etc.
https://github.com/birkir/hekla
|
1.0
|
Add this sample app to v2 docs - I don't know if this is something you'd like to include in the readme but I built this app with v2 with many features it has over v1 like SplitView etc.
https://github.com/birkir/hekla
|
non_process
|
add this sample app to docs i don t know if this is something you d like to include in the readme but i built this app with with many features it has over like splitview etc
| 0
|
28,475
| 23,284,985,478
|
IssuesEvent
|
2022-08-05 15:33:23
|
accessibility-exchange/platform
|
https://api.github.com/repos/accessibility-exchange/platform
|
closed
|
Ensure routes, config, and events are cached in production
|
help wanted infrastructure
|
These commands should be added to the deployment workflow:
```
php artisan optimize # will cache routes and config
php artisan event:cache
```
The first line above can replace these two for brevity: https://github.com/accessibility-exchange/platform/blob/897da5615b30696f7fb464b62493dfda7c6b4496/.deploy/accessibility-app/entrypoint.sh#L30-L31
|
1.0
|
Ensure routes, config, and events are cached in production - These commands should be added to the deployment workflow:
```
php artisan optimize # will cache routes and config
php artisan event:cache
```
The first line above can replace these two for brevity: https://github.com/accessibility-exchange/platform/blob/897da5615b30696f7fb464b62493dfda7c6b4496/.deploy/accessibility-app/entrypoint.sh#L30-L31
|
non_process
|
ensure routes config and events are cached in production these commands should be added to the deployment workflow php artisan optimize will cache routes and config php artisan event cache the first line above can replace these two for brevity
| 0
|
6,962
| 10,115,829,811
|
IssuesEvent
|
2019-07-30 23:10:25
|
googleapis/google-cloud-java
|
https://api.github.com/repos/googleapis/google-cloud-java
|
closed
|
Remove unused dependencies
|
type: process
|
Running `mvn dependency:analyze` shows that there are a bunch of "Used undeclared dependencies found" and "Unused declared dependencies found". This issue is to clean up the "Unused declared dependencies found"
|
1.0
|
Remove unused dependencies - Running `mvn dependency:analyze` shows that there are a bunch of "Used undeclared dependencies found" and "Unused declared dependencies found". This issue is to clean up the "Unused declared dependencies found"
|
process
|
remove unused dependencies running mvn dependency analyze shows that there are a bunch of used undeclared dependencies found and unused declared dependencies found this issue is to clean up the unused declared dependencies found
| 1
|
15,528
| 19,703,291,075
|
IssuesEvent
|
2022-01-12 18:53:56
|
googleapis/nodejs-service-usage
|
https://api.github.com/repos/googleapis/nodejs-service-usage
|
opened
|
Your .repo-metadata.json file has a problem 🤒
|
type: process repo-metadata: lint
|
You have a problem with your .repo-metadata.json file:
Result of scan 📈:
* api_shortname 'service-usage' invalid in .repo-metadata.json
☝️ Once you correct these problems, you can close this issue.
Reach out to **go/github-automation** if you have any questions.
|
1.0
|
Your .repo-metadata.json file has a problem 🤒 - You have a problem with your .repo-metadata.json file:
Result of scan 📈:
* api_shortname 'service-usage' invalid in .repo-metadata.json
☝️ Once you correct these problems, you can close this issue.
Reach out to **go/github-automation** if you have any questions.
|
process
|
your repo metadata json file has a problem 🤒 you have a problem with your repo metadata json file result of scan 📈 api shortname service usage invalid in repo metadata json ☝️ once you correct these problems you can close this issue reach out to go github automation if you have any questions
| 1
|
744,739
| 25,953,900,612
|
IssuesEvent
|
2022-12-18 00:30:24
|
zephyrproject-rtos/zephyr
|
https://api.github.com/repos/zephyrproject-rtos/zephyr
|
closed
|
use of __noinit with ecc memory hangs system
|
bug priority: low Stale
|
**Describe the bug**
The `__noinit` attribute is used primarily for the purposes listed below:
1. to preserve a section of SRAM across reboots
1. to speedup init time by reducing the size of `.bss`
Some configurations may also use `__noinit` to skip initialization of memory regions initialized by other application-specific hardware.
There have been two identified areas where memory regions marked as `__noinit` are read-from before they are written-to - specifically, the shell and logging subsystems. Issuing a read before a write might be reasonable in certain scenarios, but for some systems with ECC-enabled memory controllers, a read operation before a write operation will cause the memory controller to issue a fault, hanging the system.
**To Reproduce**
1. obtain a system with ECC memory and an ECC-capable memory controller
2. enabling logging (with deferred mode) and / or the shell
3. log messages
4. observe hang
In the above cases, the system is completely hung before the boot banner was even printed.
**Expected behavior**
The system is not hung.
**Impact**
Showstopper
**Logs and console output**
There are no logs, nor is there any output on the console. It may be possible to find a stacktrace.
**Environment (please complete the following information):**
- OS: Zephyr v2.6.0
- Toolchain: crosstool-ng / riscv64
- Commit SHA or Version used: 79a6c07536bc14583198f8e3555df6134d8822cf
**Additional context**
This problem cannot be reproduced on non-ECC systems, and specifically, the ECC memory controller must also be configured to check SRAM memory regions. It is not obvious if there are any readily-available Zephyr community boards that include an ECC memory controller. This may highlight previously unknown race conditions, but those fixes may be considerably more involved. Therefore, the linked PR is mainly for mitigation purposes.
|
1.0
|
use of __noinit with ecc memory hangs system - **Describe the bug**
The `__noinit` attribute is used primarily for the purposes listed below:
1. to preserve a section of SRAM across reboots
1. to speedup init time by reducing the size of `.bss`
Some configurations may also use `__noinit` to skip initialization of memory regions initialized by other application-specific hardware.
There have been two identified areas where memory regions marked as `__noinit` are read-from before they are written-to - specifically, the shell and logging subsystems. Issuing a read before a write might be reasonable in certain scenarios, but for some systems with ECC-enabled memory controllers, a read operation before a write operation will cause the memory controller to issue a fault, hanging the system.
**To Reproduce**
1. obtain a system with ECC memory and an ECC-capable memory controller
2. enabling logging (with deferred mode) and / or the shell
3. log messages
4. observe hang
In the above cases, the system is completely hung before the boot banner was even printed.
**Expected behavior**
The system is not hung.
**Impact**
Showstopper
**Logs and console output**
There are no logs, nor is there any output on the console. It may be possible to find a stacktrace.
**Environment (please complete the following information):**
- OS: Zephyr v2.6.0
- Toolchain: crosstool-ng / riscv64
- Commit SHA or Version used: 79a6c07536bc14583198f8e3555df6134d8822cf
**Additional context**
This problem cannot be reproduced on non-ECC systems, and specifically, the ECC memory controller must also be configured to check SRAM memory regions. It is not obvious if there are any readily-available Zephyr community boards that include an ECC memory controller. This may highlight previously unknown race conditions, but those fixes may be considerably more involved. Therefore, the linked PR is mainly for mitigation purposes.
|
non_process
|
use of noinit with ecc memory hangs system describe the bug the noinit attribute is used primarily for the purposes listed below to preserve a section of sram across reboots to speedup init time by reducing the size of bss some configurations may also use noinit to skip initialization of memory regions initialized by other application specific hardware there have been two identified areas where memory regions marked as noinit are read from before they are written to specifically the shell and logging subsystems issuing a read before a write might be reasonable in certain scenarios but for some systems with ecc enabled memory controllers a read operation before a write operation will cause the memory controller to issue a fault hanging the system to reproduce obtain a system with ecc memory and an ecc capable memory controller enabling logging with deferred mode and or the shell log messages observe hang in the above cases the system is completely hung before the boot banner was even printed expected behavior the system is not hung impact showstopper logs and console output there are no logs nor is there any output on the console it may be possible to find a stacktrace environment please complete the following information os zephyr toolchain crosstool ng commit sha or version used additional context this problem cannot be reproduced on non ecc systems and specifically the ecc memory controller must also be configured to check sram memory regions it is not obvious if there are any readily available zephyr community boards that include an ecc memory controller this may highlight previously unknown race conditions but those fixes may be considerably more involved therefore the linked pr is mainly for mitigation purposes
| 0
|
111,092
| 9,498,863,013
|
IssuesEvent
|
2019-04-24 03:47:26
|
stan-dev/math
|
https://api.github.com/repos/stan-dev/math
|
closed
|
Math repo runs no tests on Windows
|
continuous integration testing windows
|
Ideally we'd run the unit tests there. I think they might be broken, though.
#### Current Version:
v2.18.0
|
1.0
|
Math repo runs no tests on Windows - Ideally we'd run the unit tests there. I think they might be broken, though.
#### Current Version:
v2.18.0
|
non_process
|
math repo runs no tests on windows ideally we d run the unit tests there i think they might be broken though current version
| 0
|
111,673
| 9,537,933,552
|
IssuesEvent
|
2019-04-30 13:42:10
|
pints-team/pints
|
https://api.github.com/repos/pints-team/pints
|
closed
|
Functional testing results repo
|
functional-testing
|
The [functional-testing-results](https://github.com/pints-team/functional-testing-results) repo is currently 1.6Gb, increasing by the hour.
This is because we seem to be literally versioning png images every time a new test is run, and that's not really a use-case that GitHub is intended for.
We're definitely going to hit the GitHub "fair use" limits either sooner or later, so we will need a new strategy for reporting functional testing results.
|
1.0
|
Functional testing results repo - The [functional-testing-results](https://github.com/pints-team/functional-testing-results) repo is currently 1.6Gb, increasing by the hour.
This is because we seem to be literally versioning png images every time a new test is run, and that's not really a use-case that GitHub is intended for.
We're definitely going to hit the GitHub "fair use" limits either sooner or later, so we will need a new strategy for reporting functional testing results.
|
non_process
|
functional testing results repo the repo is currently increasing by the hour this is because we seem to be literally versioning png images every time a new test is run and that s not really a use case that github is intended for we re definitely going to hit the github fair use limits either sooner or later so we will need a new strategy for reporting functional testing results
| 0
|
17,910
| 23,898,037,105
|
IssuesEvent
|
2022-09-08 16:12:38
|
hashgraph/hedera-mirror-node
|
https://api.github.com/repos/hashgraph/hedera-mirror-node
|
closed
|
Release checklist 0.63
|
enhancement process
|
### Problem
We need a checklist to verify the release is rolled out successfully.
### Solution
## Preparation
- [x] Milestone field populated on relevant [issues](https://github.com/hashgraph/hedera-mirror-node/issues?q=is%3Aclosed+no%3Amilestone+sort%3Aupdated-desc)
- [x] Nothing open for [milestone](https://github.com/hashgraph/hedera-mirror-node/issues?q=is%3Aopen+sort%3Aupdated-desc+milestone%3A0.63.0)
- [x] GitHub checks for branch are passing
- [x] Automated Kubernetes deployment successful
- [x] Tag release
- [x] Upload release artifacts
- [x] Publish release
## Integration
- [x] Deploy to VM
## Performance
- [x] Deploy to Kubernetes
- [x] Deploy to VM
- [x] gRPC API performance tests
- [x] Importer performance tests
- [x] REST API performance tests
- [x] Migrations tested against mainnet clone
## Previewnet
- [x] Deploy to VM
## Staging
- [x] Deploy to Kubernetes EU
- [x] Deploy to Kubernetes NA
## Testnet
- [x] Deploy to VM
## Mainnet
- [x] Deploy to Kubernetes EU
- [x] Deploy to Kubernetes NA
- [x] Deploy to VM
- [x] Deploy to ETL
### Alternatives
_No response_
|
1.0
|
Release checklist 0.63 - ### Problem
We need a checklist to verify the release is rolled out successfully.
### Solution
## Preparation
- [x] Milestone field populated on relevant [issues](https://github.com/hashgraph/hedera-mirror-node/issues?q=is%3Aclosed+no%3Amilestone+sort%3Aupdated-desc)
- [x] Nothing open for [milestone](https://github.com/hashgraph/hedera-mirror-node/issues?q=is%3Aopen+sort%3Aupdated-desc+milestone%3A0.63.0)
- [x] GitHub checks for branch are passing
- [x] Automated Kubernetes deployment successful
- [x] Tag release
- [x] Upload release artifacts
- [x] Publish release
## Integration
- [x] Deploy to VM
## Performance
- [x] Deploy to Kubernetes
- [x] Deploy to VM
- [x] gRPC API performance tests
- [x] Importer performance tests
- [x] REST API performance tests
- [x] Migrations tested against mainnet clone
## Previewnet
- [x] Deploy to VM
## Staging
- [x] Deploy to Kubernetes EU
- [x] Deploy to Kubernetes NA
## Testnet
- [x] Deploy to VM
## Mainnet
- [x] Deploy to Kubernetes EU
- [x] Deploy to Kubernetes NA
- [x] Deploy to VM
- [x] Deploy to ETL
### Alternatives
_No response_
|
process
|
release checklist problem we need a checklist to verify the release is rolled out successfully solution preparation milestone field populated on relevant nothing open for github checks for branch are passing automated kubernetes deployment successful tag release upload release artifacts publish release integration deploy to vm performance deploy to kubernetes deploy to vm grpc api performance tests importer performance tests rest api performance tests migrations tested against mainnet clone previewnet deploy to vm staging deploy to kubernetes eu deploy to kubernetes na testnet deploy to vm mainnet deploy to kubernetes eu deploy to kubernetes na deploy to vm deploy to etl alternatives no response
| 1
|
1,016
| 3,478,620,004
|
IssuesEvent
|
2015-12-28 14:08:48
|
pwittchen/prefser
|
https://api.github.com/repos/pwittchen/prefser
|
closed
|
Release 2.0.5
|
release process
|
**Initial release notes**:
- improved error checking in `put(...)` method
- added missing annotations to some tests and reorganized tests
- added missing license info to some classes
- bumped RxAndroid version in `README.md` file
**Things to do**:
- [x] bump library version :arrow_right: https://github.com/pwittchen/prefser/commit/f8ed5563973ba41e187445791a748ac1e6f27067
- [x] upload archives to Maven Central with `./gradlew uploadArchives` command
- [x] close and release artifact on oss.sonatype.org
- [x] update `CHANGELOG.md` after Maven Sync
- [x] bump library version in `README.md`
- [x] create new GitHub release
|
1.0
|
Release 2.0.5 - **Initial release notes**:
- improved error checking in `put(...)` method
- added missing annotations to some tests and reorganized tests
- added missing license info to some classes
- bumped RxAndroid version in `README.md` file
**Things to do**:
- [x] bump library version :arrow_right: https://github.com/pwittchen/prefser/commit/f8ed5563973ba41e187445791a748ac1e6f27067
- [x] upload archives to Maven Central with `./gradlew uploadArchives` command
- [x] close and release artifact on oss.sonatype.org
- [x] update `CHANGELOG.md` after Maven Sync
- [x] bump library version in `README.md`
- [x] create new GitHub release
|
process
|
release initial release notes improved error checking in put method added missing annotations to some tests and reorganized tests added missing license info to some classes bumped rxandroid version in readme md file things to do bump library version arrow right upload archives to maven central with gradlew uploadarchives command close and release artifact on oss sonatype org update changelog md after maven sync bump library version in readme md create new github release
| 1
|
14,807
| 18,108,668,400
|
IssuesEvent
|
2021-09-22 22:46:32
|
googleapis/gax-nodejs
|
https://api.github.com/repos/googleapis/gax-nodejs
|
closed
|
Migrate to main branch on GitHub
|
type: process
|
We're in the process of migrating to using `main` as the primary branch name on GitHub.
It would be good if `gax-nodejs` was migrated, so that it's consistent with other Node.js repositories.
You will need to be mindful of any automation configured on this repository.
|
1.0
|
Migrate to main branch on GitHub - We're in the process of migrating to using `main` as the primary branch name on GitHub.
It would be good if `gax-nodejs` was migrated, so that it's consistent with other Node.js repositories.
You will need to be mindful of any automation configured on this repository.
|
process
|
migrate to main branch on github we re in the process of migrating to using main as the primary branch name on github it would be good if gax nodejs was migrated so that it s consistent with other node js repositories you will need to be mindful of any automation configured on this repository
| 1
|
40,115
| 5,277,148,908
|
IssuesEvent
|
2017-02-07 01:51:50
|
kakazuo/yimengtech
|
https://api.github.com/repos/kakazuo/yimengtech
|
closed
|
Test the settlement function
|
test 测试任务 翡翠厂
|
1. The level-1 company enters itemized incoming stock.

2. Set prices.

3. Ship goods to the level-2 company.

4. The level-2 company receives the goods and puts them into stock.

5. The level-2 company ships goods to a level-3 company or store.

6. The level-3 company or store receives the goods and puts them into stock.

7. Test the settlement function in the level-1 system administration area of the level-2 company.

(1) Select settlement



(2) Settle by document

(3) Settlement returns

Settlement query:

Settlement return query:

Conclusion: the data is correct
Time spent: 3 hours
|
1.0
|
Test the settlement function - 1. The level-1 company enters itemized incoming stock.

2. Set prices.

3. Ship goods to the level-2 company.

4. The level-2 company receives the goods and puts them into stock.

5. The level-2 company ships goods to a level-3 company or store.

6. The level-3 company or store receives the goods and puts them into stock.

7. Test the settlement function in the level-1 system administration area of the level-2 company.

(1) Select settlement



(2) Settle by document

(3) Settlement returns

Settlement query:

Settlement return query:

Conclusion: the data is correct
Time spent: 3 hours
|
non_process
|
测试结算功能 、 。 、定价。 、 。 、 ,入库。 、 。 、 ,入库。 、 。 ( )选择结算 ( )单据结算 ( )结算退货 结算查询: 结算退货查询: 结论:数据正确 用时:
| 0
|
84,421
| 10,526,595,132
|
IssuesEvent
|
2019-09-30 17:26:25
|
mozilla-lockwise/lockwise-android
|
https://api.github.com/repos/mozilla-lockwise/lockwise-android
|
opened
|
Prevent entry duplication during edit
|
feature-edit ✏️ needs-design type: enhancement
|
We should not be saving a duplicate of an existing login when editing an entry. It is possible to edit an entry by changing the username to conflict with one already saved for that `origin`/`formActionOrigin`/`httpRealm` combo.
## Context
### hostname
**has to have http:// or https://**
- We don't have such a restriction in any of the Firefox browsers so I don't see why we should have that in Lockwise. The origin field is a URI, not a URL e.g. ftp://example.com, file:///, and chrome://FirefoxAccounts are all values that we currently store and Sync.
- You should also ensure that there are no path segments i.e. https://www.example.com is valid whereas https://www.example.com/ and https://www.example.com/foo are invalid.
- We decided not to allow editing of this field on desktop for now because users can make changes to it which break autofill so you could also de-prioritize editing of the origin and only allow editing this field for new logins
### username
- should check (after text changes? on save?) that the username does not exist for that hostname already
|
1.0
|
Prevent entry duplication during edit - We should not be saving a duplicate of an existing login when editing an entry. It is possible to edit an entry by changing the username to conflict with one already saved for that `origin`/`formActionOrigin`/`httpRealm` combo.
## Context
### hostname
**has to have http:// or https://**
- We don't have such a restriction in any of the Firefox browsers so I don't see why we should have that in Lockwise. The origin field is a URI, not a URL e.g. ftp://example.com, file:///, and chrome://FirefoxAccounts are all values that we currently store and Sync.
- You should also ensure that there are no path segments i.e. https://www.example.com is valid whereas https://www.example.com/ and https://www.example.com/foo are invalid.
- We decided not to allow editing of this field on desktop for now because users can make changes to it which break autofill so you could also de-prioritize editing of the origin and only allow editing this field for new logins
### username
- should check (after text changes? on save?) that the username does not exist for that hostname already
|
non_process
|
prevent entry duplication during edit we should not be saving a duplicate of an existing login when editing an entry it is possible to edit an entry by changing the username to conflict with one already saved for that origin formactionorigin httprealm combo context hostname has to have http or we don t have such a restriction in any of the firefox browsers so i don t see why we should have that in lockwise the origin field is a uri not a url e g ftp example com file and chrome firefoxaccounts are all values that we currently store and sync you should also ensure that there are no path segments i e is valid whereas and are invalid we decided not to allow editing of this field on desktop for now because users can make changes to it which break autofill so you could also de prioritize editing of the origin and only allow editing this field for new logins username should check after text changes on save that the username does not exist for that hostname already
| 0
|
191,922
| 6,845,053,386
|
IssuesEvent
|
2017-11-13 06:04:50
|
SuperTurban/ekm_mobiilposm2ng
|
https://api.github.com/repos/SuperTurban/ekm_mobiilposm2ng
|
opened
|
Show different tabs on leaderboard prompt (7h)
|
MobApp Priority 2 UC-5.2
|
Different tabs for different games leaderboards that are changed by swiping. The first leaderboard is the general one.
[UC-5.2](https://github.com/SuperTurban/ekm_mobiilposm2ng/wiki/Use-cases#uc-52---checking-the-leaderboard-on-the-mobile-app)
Sub task of [https://github.com/SuperTurban/ekm_mobiilposm2ng/issues/86](https://github.com/SuperTurban/ekm_mobiilposm2ng/issues/86)
|
1.0
|
Show different tabs on leaderboard prompt (7h) - Different tabs for different games leaderboards that are changed by swiping. The first leaderboard is the general one.
[UC-5.2](https://github.com/SuperTurban/ekm_mobiilposm2ng/wiki/Use-cases#uc-52---checking-the-leaderboard-on-the-mobile-app)
Sub task of [https://github.com/SuperTurban/ekm_mobiilposm2ng/issues/86](https://github.com/SuperTurban/ekm_mobiilposm2ng/issues/86)
|
non_process
|
show different tabs on leaderboard prompt different tabs for different games leaderboards that are changed by swiping the first leaderboard is the general one sub task of
| 0
|
140,097
| 18,895,231,596
|
IssuesEvent
|
2021-11-15 17:08:14
|
bgoonz/searchAwesome
|
https://api.github.com/repos/bgoonz/searchAwesome
|
closed
|
CVE-2021-23341 (High) detected in prismjs-1.20.0.tgz
|
security vulnerability
|
## CVE-2021-23341 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>prismjs-1.20.0.tgz</b></p></summary>
<p>Lightweight, robust, elegant syntax highlighting. A spin-off project from Dabblet.</p>
<p>Library home page: <a href="https://registry.npmjs.org/prismjs/-/prismjs-1.20.0.tgz">https://registry.npmjs.org/prismjs/-/prismjs-1.20.0.tgz</a></p>
<p>Path to dependency file: searchAwesome/clones/awesome-wpo/website/package.json</p>
<p>Path to vulnerable library: /clones/awesome-wpo/website/node_modules/prismjs/package.json</p>
<p>
Dependency Hierarchy:
- :x: **prismjs-1.20.0.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/bgoonz/searchAwesome/commit/cb1b8421c464b43b24d4816929e575612a00cd49">cb1b8421c464b43b24d4816929e575612a00cd49</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
The package prismjs before 1.23.0 are vulnerable to Regular Expression Denial of Service (ReDoS) via the prism-asciidoc, prism-rest, prism-tap and prism-eiffel components.
<p>Publish Date: 2021-02-18
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-23341>CVE-2021-23341</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-23341">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-23341</a></p>
<p>Release Date: 2021-02-18</p>
<p>Fix Resolution: 1.23.0</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
True
|
CVE-2021-23341 (High) detected in prismjs-1.20.0.tgz - ## CVE-2021-23341 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>prismjs-1.20.0.tgz</b></p></summary>
<p>Lightweight, robust, elegant syntax highlighting. A spin-off project from Dabblet.</p>
<p>Library home page: <a href="https://registry.npmjs.org/prismjs/-/prismjs-1.20.0.tgz">https://registry.npmjs.org/prismjs/-/prismjs-1.20.0.tgz</a></p>
<p>Path to dependency file: searchAwesome/clones/awesome-wpo/website/package.json</p>
<p>Path to vulnerable library: /clones/awesome-wpo/website/node_modules/prismjs/package.json</p>
<p>
Dependency Hierarchy:
- :x: **prismjs-1.20.0.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/bgoonz/searchAwesome/commit/cb1b8421c464b43b24d4816929e575612a00cd49">cb1b8421c464b43b24d4816929e575612a00cd49</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
The package prismjs before 1.23.0 are vulnerable to Regular Expression Denial of Service (ReDoS) via the prism-asciidoc, prism-rest, prism-tap and prism-eiffel components.
<p>Publish Date: 2021-02-18
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-23341>CVE-2021-23341</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-23341">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-23341</a></p>
<p>Release Date: 2021-02-18</p>
<p>Fix Resolution: 1.23.0</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
non_process
|
cve high detected in prismjs tgz cve high severity vulnerability vulnerable library prismjs tgz lightweight robust elegant syntax highlighting a spin off project from dabblet library home page a href path to dependency file searchawesome clones awesome wpo website package json path to vulnerable library clones awesome wpo website node modules prismjs package json dependency hierarchy x prismjs tgz vulnerable library found in head commit a href found in base branch master vulnerability details the package prismjs before are vulnerable to regular expression denial of service redos via the prism asciidoc prism rest prism tap and prism eiffel components publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution step up your open source security game with whitesource
| 0
|
12,709
| 15,081,217,043
|
IssuesEvent
|
2021-02-05 12:53:03
|
emacs-ess/ESS
|
https://api.github.com/repos/emacs-ess/ESS
|
closed
|
Current buffer has no process
|
process process:remote
|
On Windows 10 / Emacs 27.1, 26.3 / ESS 20210126, I occasionally edit files on network drives, and run into some odd ESS behavior. It's hard to pin down the issue, but after working interactively with R scripts on network drives, the iESS buffer loses connection, and shows `no process` in the modeline.
In the messages buffer, I can only find the phrase `user-error: Current buffer has no process`. If I try to continue to interact with the broken process, I get the message `Error running timer ‘ess--idle-timer-function’: (wrong-type-argument stringp nil)`
When this happens I have to restart the R process, but I inevitably run into it again.
The *ESS* buffer shows the output
```
(R): ess-dialect=R, buf=JTH/v3_data_read.R, start-arg=nil
current-prefix-arg=nil
(inferior-ess: waiting for process to start (before hook)
(inferior-ess 3): waiting for process after hook(R): inferior-ess-language-start=options(STERM='iESS', str.dendrogram.last="'", editor='emacsclient', show.error.locations=TRUE)
(ess-search-list ... ) after 'search()
', point-max=294
(nil): created new alist of length 13
```
I've only run into this `no process` issue while working interactively with R scripts on network drives under Windows 10, but I'm not sure that's part of the problem.
Here is my complete ESS configuration:
```
(use-package ess
:defer t
:ensure t
:hook
(ess-help-mode . evil-normal-state)
:custom
(inferior-ess-fix-misaligned-output t)
(ess-eldoc-show-on-symbol t)
(ess-gen-proc-buffer-name-function 'ess-gen-proc-buffer-name:projectile-or-directory)
(ess-eval-visibly 'nil "Don't hog Emacs")
(ess-style 'RStudio)
(ess-use-flymake nil "Syntax checking is usually not helpful")
(ess-tab-complete-in-script nil "Do not interfere with Company")
(ess-use-ido nil "Prefer Ivy/Counsel")
(ess-history-directory (expand-file-name "ESS-history/" no-littering-var-directory))
(inferior-R-args "--no-save")
(ess-R-font-lock-keywords
(quote
((ess-R-fl-keyword:keywords . t)
(ess-R-fl-keyword:constants . t)
(ess-R-fl-keyword:modifiers . t)
(ess-R-fl-keyword:fun-defs . t)
(ess-R-fl-keyword:assign-ops . t)
(ess-R-fl-keyword:%op% . t)
(ess-fl-keyword:fun-calls . t)
(ess-fl-keyword:numbers . t)
(ess-fl-keyword:operators)
(ess-fl-keyword:delimiters)
(ess-fl-keyword:=)
(ess-R-fl-keyword:F&T))))
(ess-ask-for-ess-directory nil)
(ess-smart-S-assign-key nil)
(ess-indent-with-fancy-comments nil)
:config
(defun my/add-pipe ()
"Add a pipe operator %>% at the end of the current line.
Don't add one if the end of line already has one. Ensure one
space to the left and start a newline with indentation."
(interactive)
(end-of-line)
(unless (looking-back "%>%" nil)
(just-one-space 1)
(insert "%>%"))))
```
Maybe this is related to #1091?
|
2.0
|
Current buffer has no process - On Windows 10 / Emacs 27.1, 26.3 / ESS 20210126, I occasionally edit files on network drives, and run into some odd ESS behavior. It's hard to pin down the issue, but after working interactively with R scripts on network drives, the iESS buffer loses connection, and shows `no process` in the modeline.
In the messages buffer, I can only find the phrase `user-error: Current buffer has no process`. If I try to continue to interact with the broken process, I get the message `Error running timer ‘ess--idle-timer-function’: (wrong-type-argument stringp nil)`
When this happens I have to restart the R process, but I inevitably run into it again.
The *ESS* buffer shows the output
```
(R): ess-dialect=R, buf=JTH/v3_data_read.R, start-arg=nil
current-prefix-arg=nil
(inferior-ess: waiting for process to start (before hook)
(inferior-ess 3): waiting for process after hook(R): inferior-ess-language-start=options(STERM='iESS', str.dendrogram.last="'", editor='emacsclient', show.error.locations=TRUE)
(ess-search-list ... ) after 'search()
', point-max=294
(nil): created new alist of length 13
```
I've only run into this `no process` issue while working interactively with R scripts on network drives under Windows 10, but I'm not sure that's part of the problem.
Here is my complete ESS configuration:
```
(use-package ess
:defer t
:ensure t
:hook
(ess-help-mode . evil-normal-state)
:custom
(inferior-ess-fix-misaligned-output t)
(ess-eldoc-show-on-symbol t)
(ess-gen-proc-buffer-name-function 'ess-gen-proc-buffer-name:projectile-or-directory)
(ess-eval-visibly 'nil "Don't hog Emacs")
(ess-style 'RStudio)
(ess-use-flymake nil "Syntax checking is usually not helpful")
(ess-tab-complete-in-script nil "Do not interfere with Company")
(ess-use-ido nil "Prefer Ivy/Counsel")
(ess-history-directory (expand-file-name "ESS-history/" no-littering-var-directory))
(inferior-R-args "--no-save")
(ess-R-font-lock-keywords
(quote
((ess-R-fl-keyword:keywords . t)
(ess-R-fl-keyword:constants . t)
(ess-R-fl-keyword:modifiers . t)
(ess-R-fl-keyword:fun-defs . t)
(ess-R-fl-keyword:assign-ops . t)
(ess-R-fl-keyword:%op% . t)
(ess-fl-keyword:fun-calls . t)
(ess-fl-keyword:numbers . t)
(ess-fl-keyword:operators)
(ess-fl-keyword:delimiters)
(ess-fl-keyword:=)
(ess-R-fl-keyword:F&T))))
(ess-ask-for-ess-directory nil)
(ess-smart-S-assign-key nil)
(ess-indent-with-fancy-comments nil)
:config
(defun my/add-pipe ()
"Add a pipe operator %>% at the end of the current line.
Don't add one if the end of line already has one. Ensure one
space to the left and start a newline with indentation."
(interactive)
(end-of-line)
(unless (looking-back "%>%" nil)
(just-one-space 1)
(insert "%>%"))))
```
Maybe this is related to #1091?
|
process
|
current buffer has no process on windows emacs ess i occasionally edit files on network drives and run into some odd ess behavior it s hard to pin down the issue but after working interactively with r scripts on network drives the iess buffer loses connection and shows no process in the modeline in the messages buffer i can only find the phase user error current buffer has no process if i try to continue to interact with the broken process i get the message error running timer ‘ess idle timer function’ wrong type argument stringp nil when this happens i have to restart the r process but i inevitably run into it again the ess buffer shows the output r ess dialect r buf jth data read r start arg nil current prefix arg nil inferior ess waiting for process to start before hook inferior ess waiting for process after hook r inferior ess language start options sterm iess str dendrogram last editor emacsclient show error locations true ess search list after search point max nil created new alist of length i ve only run into this no process issue while working interactively with r scripts on network drives under windows but i m not sure that s part of the problem here is my complete ess configuration use package ess defer t ensure t hook ess help mode evil normal state custom inferior ess fix misaligned output t ess eldoc show on symbol t ess gen proc buffer name function ess gen proc buffer name projectile or directory ess eval visibly nil don t hog emacs ess style rstudio ess use flymake nil syntax checking is usually not helpful ess tab complete in script nil do not interfere with company ess use ido nil prefer ivy counsel ess history directory expand file name ess history no littering var directory inferior r args no save ess r font lock keywords quote ess r fl keyword keywords t ess r fl keyword constants t ess r fl keyword modifiers t ess r fl keyword fun defs t ess r fl keyword assign ops t ess r fl keyword op t ess fl keyword fun calls t ess fl keyword numbers t ess fl keyword operators ess fl keyword delimiters ess fl keyword ess r fl keyword f t ess ask for ess directory nil ess smart s assign key nil ess indent with fancy comments nil config defun my add pipe add a pipe operator at the end of the current line don t add one if the end of line already has one ensure one space to the left and start a newline with indentation interactive end of line unless looking back nil just one space insert maybe this is related to
| 1
|
8,511
| 11,693,662,335
|
IssuesEvent
|
2020-03-06 01:18:05
|
Ultimate-Hosts-Blacklist/whitelist
|
https://api.github.com/repos/Ultimate-Hosts-Blacklist/whitelist
|
closed
|
blocked wss on hqq.tv
|
whitelisting process
|
*@Vladislavik commented on Jan 24, 2019, 6:12 PM UTC:*
Hi, the filter blocks the p2p function on players from hqq.tv. Please remove the domains hqq.tv and hqq.watch from the list; wss is used for p2p on these players.
*This issue was moved by [funilrys](https://github.com/funilrys) from [Ultimate-Hosts-Blacklist/adblock-nocoin-list#2](https://github.com/Ultimate-Hosts-Blacklist/adblock-nocoin-list/issues/2).*
|
1.0
|
blocked wss on hqq.tv - *@Vladislavik commented on Jan 24, 2019, 6:12 PM UTC:*
Hi, the filter blocks the p2p function on players from hqq.tv. Please remove the domains hqq.tv and hqq.watch from the list; wss is used for p2p on these players.
*This issue was moved by [funilrys](https://github.com/funilrys) from [Ultimate-Hosts-Blacklist/adblock-nocoin-list#2](https://github.com/Ultimate-Hosts-Blacklist/adblock-nocoin-list/issues/2).*
|
process
|
blocked wss on hqq tv vladislavik commented on jan pm utc hi filter block function on players from hqq tv remove please domain hqq tv and hqq watch from list wss used for on this players this issue was moved by from
| 1
|
20,392
| 27,049,210,592
|
IssuesEvent
|
2023-02-13 12:03:11
|
notofonts/tamil
|
https://api.github.com/repos/notofonts/tamil
|
closed
|
NotoSerifTamilSlanted-VF.ttf and NotoSerifTamilSlanted-Italic-VF.ttf in tree/main/unhinted/variable-ttf
|
Script-Tamil Noto-Process-Issue A:Shaping
|
In tree/main/unhinted/variable-ttf we have both NotoSerifTamilSlanted-VF.ttf and NotoSerifTamilSlanted-Italic-VF.ttf. I think one of them should be removed? (Also in slim-variable-ttf )
|
1.0
|
NotoSerifTamilSlanted-VF.ttf and NotoSerifTamilSlanted-Italic-VF.ttf in tree/main/unhinted/variable-ttf - In tree/main/unhinted/variable-ttf we have both NotoSerifTamilSlanted-VF.ttf and NotoSerifTamilSlanted-Italic-VF.ttf. I think one of them should be removed? (Also in slim-variable-ttf )
|
process
|
notoseriftamilslanted vf ttf and notoseriftamilslanted italic vf ttf in tree main unhinted variable ttf in tree main unhinted variable ttf we have both notoseriftamilslanted vf ttf and notoseriftamilslanted italic vf ttf i think one of them should be removed also in slim variable ttf
| 1
|
209,951
| 23,730,980,557
|
IssuesEvent
|
2022-08-31 01:39:29
|
ChoeMinji/scikit-learn-0.23.0
|
https://api.github.com/repos/ChoeMinji/scikit-learn-0.23.0
|
closed
|
CVE-2021-41496 (High) detected in numpy-1.21.4-cp37-cp37m-manylinux_2_12_x86_64.manylinux2010_x86_64.whl - autoclosed
|
security vulnerability
|
## CVE-2021-41496 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>numpy-1.21.4-cp37-cp37m-manylinux_2_12_x86_64.manylinux2010_x86_64.whl</b></p></summary>
<p>NumPy is the fundamental package for array computing with Python.</p>
<p>Library home page: <a href="https://files.pythonhosted.org/packages/5b/0d/de55834c5ea0dd287cb1cb156c8bc120af2863c36e4d49b4dc28f174e278/numpy-1.21.4-cp37-cp37m-manylinux_2_12_x86_64.manylinux2010_x86_64.whl">https://files.pythonhosted.org/packages/5b/0d/de55834c5ea0dd287cb1cb156c8bc120af2863c36e4d49b4dc28f174e278/numpy-1.21.4-cp37-cp37m-manylinux_2_12_x86_64.manylinux2010_x86_64.whl</a></p>
<p>Path to dependency file: /sklearn/cluster</p>
<p>Path to vulnerable library: /sklearn/cluster,/sklearn,/tmp/ws-scm/scikit-learn-0.23.0</p>
<p>
Dependency Hierarchy:
- :x: **numpy-1.21.4-cp37-cp37m-manylinux_2_12_x86_64.manylinux2010_x86_64.whl** (Vulnerable Library)
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
** DISPUTED ** Buffer overflow in the array_from_pyobj function of fortranobject.c in NumPy < 1.19, which allows attackers to conduct a Denial of Service attacks by carefully constructing an array with negative values. NOTE: The vendor does not agree this is a vulnerability; the negative dimensions can only be created by an already privileged user (or internally).
<p>Publish Date: 2021-12-17
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-41496>CVE-2021-41496</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://nvd.nist.gov/vuln/detail/CVE-2021-41496">https://nvd.nist.gov/vuln/detail/CVE-2021-41496</a></p>
<p>Release Date: 2021-12-17</p>
<p>Fix Resolution: autovizwidget - 0.12.7;numpy - 1.22.0rc1;numcodecs - 0.6.2;numpy-base - 1.11.3;numpy - 1.17.4</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
True
|
CVE-2021-41496 (High) detected in numpy-1.21.4-cp37-cp37m-manylinux_2_12_x86_64.manylinux2010_x86_64.whl - autoclosed - ## CVE-2021-41496 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>numpy-1.21.4-cp37-cp37m-manylinux_2_12_x86_64.manylinux2010_x86_64.whl</b></p></summary>
<p>NumPy is the fundamental package for array computing with Python.</p>
<p>Library home page: <a href="https://files.pythonhosted.org/packages/5b/0d/de55834c5ea0dd287cb1cb156c8bc120af2863c36e4d49b4dc28f174e278/numpy-1.21.4-cp37-cp37m-manylinux_2_12_x86_64.manylinux2010_x86_64.whl">https://files.pythonhosted.org/packages/5b/0d/de55834c5ea0dd287cb1cb156c8bc120af2863c36e4d49b4dc28f174e278/numpy-1.21.4-cp37-cp37m-manylinux_2_12_x86_64.manylinux2010_x86_64.whl</a></p>
<p>Path to dependency file: /sklearn/cluster</p>
<p>Path to vulnerable library: /sklearn/cluster,/sklearn,/tmp/ws-scm/scikit-learn-0.23.0</p>
<p>
Dependency Hierarchy:
- :x: **numpy-1.21.4-cp37-cp37m-manylinux_2_12_x86_64.manylinux2010_x86_64.whl** (Vulnerable Library)
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
** DISPUTED ** Buffer overflow in the array_from_pyobj function of fortranobject.c in NumPy < 1.19, which allows attackers to conduct a Denial of Service attacks by carefully constructing an array with negative values. NOTE: The vendor does not agree this is a vulnerability; the negative dimensions can only be created by an already privileged user (or internally).
<p>Publish Date: 2021-12-17
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-41496>CVE-2021-41496</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://nvd.nist.gov/vuln/detail/CVE-2021-41496">https://nvd.nist.gov/vuln/detail/CVE-2021-41496</a></p>
<p>Release Date: 2021-12-17</p>
<p>Fix Resolution: autovizwidget - 0.12.7;numpy - 1.22.0rc1;numcodecs - 0.6.2;numpy-base - 1.11.3;numpy - 1.17.4</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
non_process
|
cve high detected in numpy manylinux whl autoclosed cve high severity vulnerability vulnerable library numpy manylinux whl numpy is the fundamental package for array computing with python library home page a href path to dependency file sklearn cluster path to vulnerable library sklearn cluster sklearn tmp ws scm scikit learn dependency hierarchy x numpy manylinux whl vulnerable library found in base branch master vulnerability details disputed buffer overflow in the array from pyobj function of fortranobject c in numpy which allows attackers to conduct a denial of service attacks by carefully constructing an array with negative values note the vendor does not agree this is a vulnerability the negative dimensions can only be created by an already privileged user or internally publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution autovizwidget numpy numcodecs numpy base numpy step up your open source security game with whitesource
| 0
|
283,030
| 21,316,019,184
|
IssuesEvent
|
2022-04-16 09:35:11
|
Bryan-BC/pe
|
https://api.github.com/repos/Bryan-BC/pe
|
opened
|
Inconsistency in help command in UG and DG
|
severity.High type.DocumentationBug
|
The DG has the format as follows (see ADD PACKAGE part):

whereas the UG mentions:

<!--session: 1650095700758-e64dc17a-3d22-4687-a9c8-48c53c052012-->
<!--Version: Web v3.4.2-->
|
1.0
|
Inconsistency in help command in UG and DG - The DG has the format as follows (see ADD PACKAGE part):

whereas the UG mentions:

<!--session: 1650095700758-e64dc17a-3d22-4687-a9c8-48c53c052012-->
<!--Version: Web v3.4.2-->
|
non_process
|
inconsistency in help command in ug and dg the dg has the format as follows see add package part whereas the ug mentions
| 0
|
75,750
| 15,444,921,395
|
IssuesEvent
|
2021-03-08 11:02:49
|
scriptex/webpack-mpa-next
|
https://api.github.com/repos/scriptex/webpack-mpa-next
|
opened
|
CVE-2020-28502 (High) detected in xmlhttprequest-ssl-1.5.5.tgz
|
security vulnerability
|
## CVE-2020-28502 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>xmlhttprequest-ssl-1.5.5.tgz</b></p></summary>
<p>XMLHttpRequest for Node</p>
<p>Library home page: <a href="https://registry.npmjs.org/xmlhttprequest-ssl/-/xmlhttprequest-ssl-1.5.5.tgz">https://registry.npmjs.org/xmlhttprequest-ssl/-/xmlhttprequest-ssl-1.5.5.tgz</a></p>
<p>Path to dependency file: webpack-mpa-next/package.json</p>
<p>Path to vulnerable library: webpack-mpa-next/node_modules/xmlhttprequest-ssl/package.json</p>
<p>
Dependency Hierarchy:
- browser-sync-2.26.14.tgz (Root Library)
- browser-sync-ui-2.26.14.tgz
- socket.io-client-2.4.0.tgz
- engine.io-client-3.5.1.tgz
- :x: **xmlhttprequest-ssl-1.5.5.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/scriptex/webpack-mpa-next/commit/6c66c3e01f160737f403c8c9e44899a73786f126">6c66c3e01f160737f403c8c9e44899a73786f126</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
This affects the package xmlhttprequest before 1.7.0; all versions of package xmlhttprequest-ssl. Provided requests are sent synchronously (async=False on xhr.open), malicious user input flowing into xhr.send could result in arbitrary code being injected and run.
<p>Publish Date: 2021-03-05
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-28502>CVE-2020-28502</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>8.1</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: High
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-28502">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-28502</a></p>
<p>Release Date: 2021-03-05</p>
<p>Fix Resolution: 1.7.0</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
True
|
CVE-2020-28502 (High) detected in xmlhttprequest-ssl-1.5.5.tgz - ## CVE-2020-28502 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>xmlhttprequest-ssl-1.5.5.tgz</b></p></summary>
<p>XMLHttpRequest for Node</p>
<p>Library home page: <a href="https://registry.npmjs.org/xmlhttprequest-ssl/-/xmlhttprequest-ssl-1.5.5.tgz">https://registry.npmjs.org/xmlhttprequest-ssl/-/xmlhttprequest-ssl-1.5.5.tgz</a></p>
<p>Path to dependency file: webpack-mpa-next/package.json</p>
<p>Path to vulnerable library: webpack-mpa-next/node_modules/xmlhttprequest-ssl/package.json</p>
<p>
Dependency Hierarchy:
- browser-sync-2.26.14.tgz (Root Library)
- browser-sync-ui-2.26.14.tgz
- socket.io-client-2.4.0.tgz
- engine.io-client-3.5.1.tgz
- :x: **xmlhttprequest-ssl-1.5.5.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/scriptex/webpack-mpa-next/commit/6c66c3e01f160737f403c8c9e44899a73786f126">6c66c3e01f160737f403c8c9e44899a73786f126</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
This affects the package xmlhttprequest before 1.7.0; all versions of package xmlhttprequest-ssl. Provided requests are sent synchronously (async=False on xhr.open), malicious user input flowing into xhr.send could result in arbitrary code being injected and run.
<p>Publish Date: 2021-03-05
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-28502>CVE-2020-28502</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>8.1</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: High
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-28502">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-28502</a></p>
<p>Release Date: 2021-03-05</p>
<p>Fix Resolution: 1.7.0</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
non_process
|
cve high detected in xmlhttprequest ssl tgz cve high severity vulnerability vulnerable library xmlhttprequest ssl tgz xmlhttprequest for node library home page a href path to dependency file webpack mpa next package json path to vulnerable library webpack mpa next node modules xmlhttprequest ssl package json dependency hierarchy browser sync tgz root library browser sync ui tgz socket io client tgz engine io client tgz x xmlhttprequest ssl tgz vulnerable library found in head commit a href found in base branch master vulnerability details this affects the package xmlhttprequest before all versions of package xmlhttprequest ssl provided requests are sent synchronously async false on xhr open malicious user input flowing into xhr send could result in arbitrary code being injected and run publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity high privileges required none user interaction none scope unchanged impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution step up your open source security game with whitesource
| 0
|
50,104
| 7,567,122,072
|
IssuesEvent
|
2018-04-22 05:53:26
|
jeffreydwalter/arlo
|
https://api.github.com/repos/jeffreydwalter/arlo
|
closed
|
Question on setting custom mode
|
documentation question
|
Great work. I see in the examples you can arm and disarm. Have you found a way to set a custom mode?
Thanks
Please answer these questions before submitting your issue. Thanks!
### What version of Python are you using (`python -V`)?
### What operating system and processor architecture are you using (`python -c 'import platform; print(platform.uname());'`)?
### Which Python packages do you have installed (run the `pip freeze` or `pip3 freeze` command and paste output)?
### Which Arlo hardware do you have (camera types - [Arlo, Pro, Q, etc.], basestation model, etc.)?
### What did you do?
If possible, provide the steps you took to reproduce the issue.
A complete runnable program is good. (don't include your user/password or any sensitive info)
### What did you expect to see?
### What did you see instead?
### Does this issue reproduce with the latest release?
|
1.0
|
Question on setting custom mode - Great work. I see in the examples you can arm and disarm. Have you found a way to set a custom mode?
Thanks
Please answer these questions before submitting your issue. Thanks!
### What version of Python are you using (`python -V`)?
### What operating system and processor architecture are you using (`python -c 'import platform; print(platform.uname());'`)?
### Which Python packages do you have installed (run the `pip freeze` or `pip3 freeze` command and paste output)?
### Which Arlo hardware do you have (camera types - [Arlo, Pro, Q, etc.], basestation model, etc.)?
### What did you do?
If possible, provide the steps you took to reproduce the issue.
A complete runnable program is good. (don't include your user/password or any sensitive info)
### What did you expect to see?
### What did you see instead?
### Does this issue reproduce with the latest release?
|
non_process
|
question on setting custom mode great work i see in the examples you can arm and disarm have you found a way to set a custom mode thanks please answer these questions before submitting your issue thanks what version of python are you using python v what operating system and processor architecture are you using python c import platform print platform uname which python packages do you have installed run the pip freeze or freeze command and paste output which arlo hardware do you have camera types basestation model etc what did you do if possible provide the steps you took to reproduce the issue a complete runnable program is good don t include your user password or any sensitive info what did you expect to see what did you see instead does this issue reproduce with the latest release
| 0
|
120,836
| 4,794,973,043
|
IssuesEvent
|
2016-10-31 22:50:28
|
pombase/curation
|
https://api.github.com/repos/pombase/curation
|
closed
|
cellular protein localization
|
annotation priority high priority
|
I added this to the "GO terms not to annotate, we should always be able to specify to where"
There are quite a lot of violations from the old annotation (in artemis) which will disappear gradually either by adding specificity or converting to phenotypes.
These are the only ones in canto
SPAC10F6.12c.1 GO:0034613 cellular protein localization 8622e3e232348e8d PomBase
SPAC10F6.12c.1 GO:0034613 cellular protein localization 8622e3e232348e8d PomBase
SPAC10F6.12c.1 GO:0034613 cellular protein localization 8622e3e232348e8d PomBase
|
2.0
|
cellular protein localization - I added this to the "GO terms not to annotate, we should always be able to specify to where"
There are quite a lot of violations from the old annotation (in artemis) which will disappear gradually either by adding specificity or converting to phenotypes.
These are the only ones in canto
SPAC10F6.12c.1 GO:0034613 cellular protein localization 8622e3e232348e8d PomBase
SPAC10F6.12c.1 GO:0034613 cellular protein localization 8622e3e232348e8d PomBase
SPAC10F6.12c.1 GO:0034613 cellular protein localization 8622e3e232348e8d PomBase
|
non_process
|
cellular protein localization i added this to the go terms not to annotate we should always be able to specify to where there are quite a lot of violations from the old annotation in artemis which will disappear gradually either by adding specificity or converting to phenotypes these are the only ones in canto go cellular protein localization pombase go cellular protein localization pombase go cellular protein localization pombase
| 0
|
4,007
| 6,935,188,952
|
IssuesEvent
|
2017-12-03 05:03:08
|
candango/myfuses
|
https://api.github.com/repos/candango/myfuses
|
closed
|
Builder is being called statically incorrectly.
|
bug core engine process
|
We are adding a builder instance in the controller during its initialization, but we never use this instance again.
On the other hand, we're calling the builder in the life cycle statically. This is not correct and generates deprecation errors when ~E_DEPRECIATED is removed from error_reporting in the php.ini.
Let's add a method to the controller to return the builder instance and use it during the life cycle execution. Also, all static references inside the builder must be removed; let's keep method calls in the builder instance scope.
|
1.0
|
Builder is being called statically incorrectly. - We are adding a builder instance in the controller during its initialization, but we never use this instance again.
On the other hand, we're calling the builder in the life cycle statically. This is not correct and generates deprecation errors when ~E_DEPRECIATED is removed from error_reporting in the php.ini.
Let's add a method to the controller to return the builder instance and use it during the life cycle execution. Also, all static references inside the builder must be removed; let's keep method calls in the builder instance scope.
|
process
|
builder is being called statically incorrectly we are adding a builder instance in the controller during it s initialization but we never use this instance again in another hand we re calling the builder in the life cycle statically this is not correct and generates depreciated errors when e depreciated is removed from error reporting in the php ini let s add a method to the controller to return the builder instance and use it during the life cycle execution also all static references inside the builder must be removed let s keep method calls on the builder instance scope
| 1
|
21,200
| 28,238,777,833
|
IssuesEvent
|
2023-04-06 04:40:59
|
MicrosoftDocs/azure-docs
|
https://api.github.com/repos/MicrosoftDocs/azure-docs
|
closed
|
How do "MS-SR-Update-MobilityServiceForA2AVirtualMachines" jobs get updated to use managed identities?
|
automation/svc triaged cxp product-question process-automation/subsvc Pri1
|
When a recovery services vault has been configured to hold ASR-replicated VMs, Site Recovery leverages an automation account to manage Site Recovery extensions on all your replicated items and keeps them up-to-date, as seen here:

<br />
This job, inside the linked automation account, runs every 24 hours, as seen here:

<br />
However, the job itself is not made visible in the list of runbooks, as seen here:

<br />
But it can be seen in the job output that these jobs require a RunAs account in the automation account, as seen here:

<br />
My issue/question is: How do these critically important jobs, which fail without a RunAs account, get converted to using system managed identities? No information is provided on this page or any other. It almost seems that perhaps this aspect of RunAs accounts being deprecated was not considered or accounted for.
---
#### Document Details
⚠ *Do not edit this section. It is required for learn.microsoft.com ➟ GitHub issue linking.*
* ID: 3eedb810-487f-bc9c-89f5-d5fdbcc5d796
* Version Independent ID: 329e9ec7-d9ea-518b-625b-d39880365e80
* Content: [Migrate from a Run As account to a managed identity](https://learn.microsoft.com/en-us/azure/automation/migrate-run-as-accounts-managed-identity?tabs=run-as-account)
* Content Source: [articles/automation/migrate-run-as-accounts-managed-identity.md](https://github.com/MicrosoftDocs/azure-docs/blob/main/articles/automation/migrate-run-as-accounts-managed-identity.md)
* Service: **automation**
* Sub-service: **process-automation**
* GitHub Login: @SnehaSudhirG
* Microsoft Alias: **sudhirsneha**
|
1.0
|
How do "MS-SR-Update-MobilityServiceForA2AVirtualMachines" jobs get updated to use managed identities? -
When a recovery services vault has been configured to hold ASR-replicated VMs, Site Recovery leverages an automation account to manage Site Recovery extensions on all your replicated items and keeps them up-to-date, as seen here:

<br />
This job, inside the linked automation account, runs every 24 hours, as seen here:

<br />
However, the job itself is not made visible in the list of runbooks, as seen here:

<br />
But it can be seen in the job output that these jobs require a RunAs account in the automation account, as seen here:

<br />
My issue/question is: How do these critically important jobs, which fail without a RunAs account, get converted to using system managed identities? No information is provided on this page or any other. It almost seems that perhaps this aspect of RunAs accounts being deprecated was not considered or accounted for.
---
#### Document Details
⚠ *Do not edit this section. It is required for learn.microsoft.com ➟ GitHub issue linking.*
* ID: 3eedb810-487f-bc9c-89f5-d5fdbcc5d796
* Version Independent ID: 329e9ec7-d9ea-518b-625b-d39880365e80
* Content: [Migrate from a Run As account to a managed identity](https://learn.microsoft.com/en-us/azure/automation/migrate-run-as-accounts-managed-identity?tabs=run-as-account)
* Content Source: [articles/automation/migrate-run-as-accounts-managed-identity.md](https://github.com/MicrosoftDocs/azure-docs/blob/main/articles/automation/migrate-run-as-accounts-managed-identity.md)
* Service: **automation**
* Sub-service: **process-automation**
* GitHub Login: @SnehaSudhirG
* Microsoft Alias: **sudhirsneha**
|
process
|
how do ms sr update jobs get updated to use managed identities when a recovery services vault has been configured to hold asr replicated vms site recovery leverages an automation account to manage site recovery extensions on all your replicated items and keeps them up to date as seen here this job inside the linked automation account runs every hours as seen here however the job itself is not made visible in the list of runbooks as seen here but it can be seen in the job output that these jobs require a runas account in the automation account as seen here my issue question is how do these critically important jobs that fail without a runas account get converted to using system managed identities no information is provided on this page or any other it almost seems that perhaps this aspect of runas accounts being depricated was not considered or accounted for document details ⚠ do not edit this section it is required for learn microsoft com ➟ github issue linking id version independent id content content source service automation sub service process automation github login snehasudhirg microsoft alias sudhirsneha
| 1
|
17,916
| 23,907,168,777
|
IssuesEvent
|
2022-09-09 02:55:19
|
zotero/zotero
|
https://api.github.com/repos/zotero/zotero
|
opened
|
Allow searching for parent item title/creator/year in Add Note dialog
|
Word Processor Integration
|
https://forums.zotero.org/discussion/99611/inserting-notes-in-word-how-the-search-bar-works
And show all of the item's notes. (Relatedly, we should show the parent item title as well, similar to how we show it in the PDF reader notes pane.)
|
1.0
|
Allow searching for parent item title/creator/year in Add Note dialog - https://forums.zotero.org/discussion/99611/inserting-notes-in-word-how-the-search-bar-works
And show all of the item's notes. (Relatedly, we should show the parent item title as well, similar to how we show it in the PDF reader notes pane.)
|
process
|
allow searching for parent item title creator year in add note dialog and show all of the item s notes relatedly we should show the parent item title as well similar to how we show it in the pdf reader notes pane
| 1
|
9,631
| 12,576,632,610
|
IssuesEvent
|
2020-06-09 08:13:57
|
varys-main/ps-tools
|
https://api.github.com/repos/varys-main/ps-tools
|
closed
|
StartUp - No local NAV installations - Error handling
|
processing
|
# User Story
- If the directory C:\NAV does not exist on the computer on which the StartUp script is run, this leads to error messages.
- These errors should be replaced by a warning.
# Task
- [x] Check whether C:\NAV exists
- [x] ~~Emit a warning~~
# Implementations
- If C:\NAV does not exist, the menu entries are simply not created, without any errors.
- No warning is emitted.
# Known Problems
|
1.0
|
StartUp - No local NAV installations - Error handling - # User Story
- If the directory C:\NAV does not exist on the computer on which the StartUp script is run, this leads to error messages.
- These errors should be replaced by a warning.
# Task
- [x] Check whether C:\NAV exists
- [x] ~~Emit a warning~~
# Implementations
- If C:\NAV does not exist, the menu entries are simply not created, without any errors.
- No warning is emitted.
# Known Problems
|
process
|
startup keine lokalen nav installationen fehlerbehandlung user story ist auf dem computer auf den das startup skript ausgeführt wird das verzeichnis c nav nicht vorhanden führt das zu fehlermeldungen diese fehler sollen durch eine warnung ersetzt werden task prüfung ob c nav vorhanden ist warnung ausgeben implementations ist c nav nicht vorhanden werden die menü einträge ohne fehler nicht erzeugt eine warnung wird nicht ausgegeben known problems
| 1
|
50,218
| 6,065,846,049
|
IssuesEvent
|
2017-06-14 17:07:31
|
openshift/origin
|
https://api.github.com/repos/openshift/origin
|
closed
|
Extended.[Conformance][registry][migration] manifest migration from etcd to registry storage registry can get access to manifest [local]
|
dependency/devicemapper kind/test-flake priority/P1
|
Flaking out as seen [here](https://ci.openshift.redhat.com/jenkins/job/test_pull_requests_origin_future/710/testReport/junit/(root)/Extended/_Conformance__registry__migration__manifest_migration_from_etcd_to_registry_storage_registry_can_get_access_to_manifest__local_/):
```
/go/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/registry/registry.go:122
Expected error:
<*docker.Error | 0xc4210b0300>: {
Status: 500,
Message: "{\"message\":\"failed to remove device 8019a36673d26e2f94ca11f122a2ad36964521755c60fb7cd0c6b6a5720b8985:Device is Busy\"}\n",
}
API error (500): {"message":"failed to remove device 8019a36673d26e2f94ca11f122a2ad36964521755c60fb7cd0c6b6a5720b8985:Device is Busy"}
not to have occurred
/go/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/registry/registry.go:121
```
Looks like it may be fallout from the change to `docker-storage-setup` or in general a failure with `devicemapper`. /cc @jwhonce @runcom @bparees @csrwng
<details>
<summary>Click for full logs</summary>
```
Stacktrace
/go/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/registry/registry.go:122
Expected error:
<*docker.Error | 0xc4210b0300>: {
Status: 500,
Message: "{\"message\":\"failed to remove device 8019a36673d26e2f94ca11f122a2ad36964521755c60fb7cd0c6b6a5720b8985:Device is Busy\"}\n",
}
API error (500): {"message":"failed to remove device 8019a36673d26e2f94ca11f122a2ad36964521755c60fb7cd0c6b6a5720b8985:Device is Busy"}
not to have occurred
/go/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/registry/registry.go:121
Standard Output
[BeforeEach] [Top Level]
/go/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:47
[BeforeEach] [Conformance][registry][migration] manifest migration from etcd to registry storage
/go/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:141
STEP: Creating a kubernetes client
Mar 1 17:23:34.963: INFO: >>> kubeConfig: /tmp/openshift/test-extended/core/openshift.local.config/master/admin.kubeconfig
STEP: Building a namespace api object
Mar 1 17:23:34.984: INFO: configPath is now "/tmp/extended-test-registry-migration-xl9xv-9gkl6-user.kubeconfig"
Mar 1 17:23:34.984: INFO: The user is now "extended-test-registry-migration-xl9xv-9gkl6-user"
Mar 1 17:23:34.984: INFO: Creating project "extended-test-registry-migration-xl9xv-9gkl6"
STEP: Waiting for a default service account to be provisioned in namespace
[It] registry can get access to manifest [local]
/go/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/registry/registry.go:122
STEP: set up policy for registry to have anonymous access to images
Mar 1 17:23:35.050: INFO: Running 'oc policy --config=/tmp/extended-test-registry-migration-xl9xv-9gkl6-user.kubeconfig --namespace=extended-test-registry-migration-xl9xv-9gkl6 add-role-to-user registry-viewer system:anonymous'
role "registry-viewer" added: "system:anonymous"
STEP: pushing image...
Step 1 : FROM scratch
--->
Step 2 : COPY data1 /data1
---> cbf641f53e1c
Removing intermediate container 9b472246d259
Successfully built cbf641f53e1c
Mar 1 17:23:37.584: INFO: Running 'oc whoami --config=/tmp/extended-test-registry-migration-xl9xv-9gkl6-user.kubeconfig --namespace=extended-test-registry-migration-xl9xv-9gkl6 -t'
The push refers to a repository [172.30.103.69:5000/extended-test-registry-migration-xl9xv-9gkl6/app]
Preparing
Pushing [====================> ] 512 B/1.28 kB
Pushing
Pushing [==================================================>] 1.792 kB
Pushing
Pushing [==================================================>] 3.072 kB
Pushing
Pushing [==================================================>] 3.072 kB
Pushing
Pushed
latest: digest: sha256:d7cf5b995be8469310e05d5f636d7c8b18d946cdc0e62503bd9fca9a48f44848 size: 1536
matching digest string
STEP: checking that the image converted...
STEP: getting image manifest from docker-registry...
STEP: restoring manifest...
STEP: checking that the manifest is present in the image...
STEP: getting image manifest from docker-registry one more time...
STEP: waiting until image is updated...
STEP: checking that the manifest was removed from the image...
STEP: getting image manifest from docker-registry to check if he's available...
STEP: pulling image...
STEP: get secret list err <nil>
STEP: secret name builder-dockercfg-l284m
STEP: docker cfg token json {"172.30.103.69:5000":{"username":"serviceaccount","password":"eyJhbGciOiJSUzI1NiIsInR5cCI6IkpXVCJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJleHRlbmRlZC10ZXN0LXJlZ2lzdHJ5LW1pZ3JhdGlvbi14bDl4di05Z2tsNiIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJidWlsZGVyLXRva2VuLW5sNWd6Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQubmFtZSI6ImJ1aWxkZXIiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC51aWQiOiJiNTliNjcwNC1mZWNkLTExZTYtYjA1Yi0wZWI5OTU2MWJkZTgiLCJzdWIiOiJzeXN0ZW06c2VydmljZWFjY291bnQ6ZXh0ZW5kZWQtdGVzdC1yZWdpc3RyeS1taWdyYXRpb24teGw5eHYtOWdrbDY6YnVpbGRlciJ9.aP_xGAOor3Y63NALamnpUTLpX3QnsJcsWOJKOCnWUWFSm2Y7r0uNBHH1_u0OAwhM8aG74neV8K0KCUSPOPzKxF1AZjSongF49qOKujfHv64m6n8Wl1C5ufP_aICA1erDk5wk61d7BndeUkbV15tdQBGEm4AdtPBl2imqduQlBkowXA7DkiGmfFG4NSOjh_nLfyQdJld5QZVogiqPVRJli5lrOqfgxv4QobDHkjrfSLVheWZ-j_jX10miJMBpmlIwTO6HcB0yNlT809-zwKf2Dkkatkkyw23XOxd_0O5-HElyxn8QlHJbFbC2KHtUBha78KO-7-WVEIOF-4UTF3YuJA","email":"serviceaccount@example.org","auth":"c2VydmljZWFjY291bnQ6ZXlKaGJHY2lPaUpTVXpJMU5pSXNJblI1Y0NJNklrcFhWQ0o5LmV5SnBjM01pT2lKcmRXSmxjbTVsZEdWekwzTmxjblpwWTJWaFkyTnZkVzUwSWl3aWEzVmlaWEp1WlhSbGN5NXBieTl6WlhKMmFXTmxZV05qYjNWdWRDOXVZVzFsYzNCaFkyVWlPaUpsZUhSbGJtUmxaQzEwWlhOMExYSmxaMmx6ZEhKNUxXMXBaM0poZEdsdmJpMTRiRGw0ZGkwNVoydHNOaUlzSW10MVltVnlibVYwWlhNdWFXOHZjMlZ5ZG1salpXRmpZMjkxYm5RdmMyVmpjbVYwTG01aGJXVWlPaUppZFdsc1pHVnlMWFJ2YTJWdUxXNXNOV2Q2SWl3aWEzVmlaWEp1WlhSbGN5NXBieTl6WlhKMmFXTmxZV05qYjNWdWRDOXpaWEoyYVdObExXRmpZMjkxYm5RdWJtRnRaU0k2SW1KMWFXeGtaWElpTENKcmRXSmxjbTVsZEdWekxtbHZMM05sY25acFkyVmhZMk52ZFc1MEwzTmxjblpwWTJVdFlXTmpiM1Z1ZEM1MWFXUWlPaUppTlRsaU5qY3dOQzFtWldOa0xURXhaVFl0WWpBMVlpMHdaV0k1T1RVMk1XSmtaVGdpTENKemRXSWlPaUp6ZVhOMFpXMDZjMlZ5ZG1salpXRmpZMjkxYm5RNlpYaDBaVzVrWldRdGRHVnpkQzF5WldkcGMzUnllUzF0YVdkeVlYUnBiMjR0ZUd3NWVIWXRPV2RyYkRZNlluVnBiR1JsY2lKOS5hUF94R0FPb3IzWTYzTkFMYW1ucFVUTHBYM1Fuc0pjc1dPSktPQ25XVVdGU20yWTdyMHVOQkhIMV91ME9Bd2hNOGFHNzRuZVY4SzBLQ1VTUE9Qekt4RjFBWmpTb25nRjQ5cU9LdWpmSHY2NG02bjhXbDFDNXVmUF9hSUNBMWVyRGs1d2s2MWQ3Qm5kZVVrYlYxNXRkUUJHRW00QWR0UEJsMmltcWR1UWxCa293WEE3RGtpR21mRkc0TlNPamhfbkxmeVFkSmxkNVFaVm9naXFQVlJKbGk1bHJPcWZneHY0UW9iREhranJmU0xWaGVXWi1qX2pYMTBtaUpNQnBtbEl3VE82SGNCMHlObFQ4MDktendLZjJEa2thdGtreXcyM1hPeGRfME81LUhFbHl4bjhRbEhKYkZiQzJLSHRVQmhhNzhLTy03LVdWRUlPRi00VVRGM1l1SkE="},"docker-registry.default.svc:5000":{"username":"serviceaccount","password":"eyJhbGciOiJSUzI1NiIsInR5cCI6IkpXVCJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJleHRlbmRlZC10ZXN0LXJlZ2lzdHJ5LW1pZ3JhdGlvbi14bDl4di05Z2tsNiIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJidWlsZGVyLXRva2VuLW5sNWd6Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQubmFtZSI6ImJ1aWxkZXIiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC51aWQiOiJiNTliNjcwNC1mZWNkLTExZTYtYjA1Yi0wZWI5OTU2MWJkZTgiLCJzdWIiOiJzeXN0ZW06c2VydmljZWFjY291bnQ6ZXh0ZW5kZWQtdGVzdC1yZWdpc3RyeS1taWdyYXRpb24teGw5eHYtOWdrbDY6YnVpbGRlciJ9.aP_xGAOor3Y63NALamnpUTLpX3QnsJcsWOJKOCnWUWFSm2Y7r0uNBHH1_u0OAwhM8aG74neV8K0KCUSPOPzKxF1AZjSongF49qOKujfHv64m6n8Wl1C5ufP_aICA1erDk5wk61d7BndeUkbV15tdQBGEm4AdtPBl2imqduQlBkowXA7DkiGmfFG4NSOjh_nLfyQdJld5QZVogiqPVRJli5lrOqfgxv4QobDHkjrfSLVheWZ-j_jX10miJMBpmlIwTO6HcB0yNlT809-zwKf2Dkkatkkyw23XOxd_0O5-HElyxn8QlHJbFbC2KHtUBha78KO-7-WVEIOF-4UTF3YuJA","email":"serviceaccount@example.org","auth":"c2VydmljZWFjY291bnQ6ZXlKaGJHY2lPaUpTVXpJMU5pSXNJblI1Y0NJNklrcFhWQ0o5LmV5SnBjM01pT2lKcmRXSmxjbTVsZEdWekwzTmxjblpwWTJWaFkyTnZ
kVzUwSWl3aWEzVmlaWEp1WlhSbGN5NXBieTl6WlhKMmFXTmxZV05qYjNWdWRDOXVZVzFsYzNCaFkyVWlPaUpsZUhSbGJtUmxaQzEwWlhOMExYSmxaMmx6ZEhKNUxXMXBaM0poZEdsdmJpMTRiRGw0ZGkwNVoydHNOaUlzSW10MVltVnlibVYwWlhNdWFXOHZjMlZ5ZG1salpXRmpZMjkxYm5RdmMyVmpjbVYwTG01aGJXVWlPaUppZFdsc1pHVnlMWFJ2YTJWdUxXNXNOV2Q2SWl3aWEzVmlaWEp1WlhSbGN5NXBieTl6WlhKMmFXTmxZV05qYjNWdWRDOXpaWEoyYVdObExXRmpZMjkxYm5RdWJtRnRaU0k2SW1KMWFXeGtaWElpTENKcmRXSmxjbTVsZEdWekxtbHZMM05sY25acFkyVmhZMk52ZFc1MEwzTmxjblpwWTJVdFlXTmpiM1Z1ZEM1MWFXUWlPaUppTlRsaU5qY3dOQzFtWldOa0xURXhaVFl0WWpBMVlpMHdaV0k1T1RVMk1XSmtaVGdpTENKemRXSWlPaUp6ZVhOMFpXMDZjMlZ5ZG1salpXRmpZMjkxYm5RNlpYaDBaVzVrWldRdGRHVnpkQzF5WldkcGMzUnllUzF0YVdkeVlYUnBiMjR0ZUd3NWVIWXRPV2RyYkRZNlluVnBiR1JsY2lKOS5hUF94R0FPb3IzWTYzTkFMYW1ucFVUTHBYM1Fuc0pjc1dPSktPQ25XVVdGU20yWTdyMHVOQkhIMV91ME9Bd2hNOGFHNzRuZVY4SzBLQ1VTUE9Qekt4RjFBWmpTb25nRjQ5cU9LdWpmSHY2NG02bjhXbDFDNXVmUF9hSUNBMWVyRGs1d2s2MWQ3Qm5kZVVrYlYxNXRkUUJHRW00QWR0UEJsMmltcWR1UWxCa293WEE3RGtpR21mRkc0TlNPamhfbkxmeVFkSmxkNVFaVm9naXFQVlJKbGk1bHJPcWZneHY0UW9iREhranJmU0xWaGVXWi1qX2pYMTBtaUpNQnBtbEl3VE82SGNCMHlObFQ4MDktendLZjJEa2thdGtreXcyM1hPeGRfME81LUhFbHl4bjhRbEhKYkZiQzJLSHRVQmhhNzhLTy03LVdWRUlPRi00VVRGM1l1SkE="}}
STEP: json unmarshal err <nil>
STEP: found auth true with auth cfg len 1
STEP: dockercfg with svrAddr 172.30.103.69:5000 user serviceaccount pass eyJhbGciOiJSUzI1NiIsInR5cCI6IkpXVCJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJleHRlbmRlZC10ZXN0LXJlZ2lzdHJ5LW1pZ3JhdGlvbi14bDl4di05Z2tsNiIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJidWlsZGVyLXRva2VuLW5sNWd6Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQubmFtZSI6ImJ1aWxkZXIiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC51aWQiOiJiNTliNjcwNC1mZWNkLTExZTYtYjA1Yi0wZWI5OTU2MWJkZTgiLCJzdWIiOiJzeXN0ZW06c2VydmljZWFjY291bnQ6ZXh0ZW5kZWQtdGVzdC1yZWdpc3RyeS1taWdyYXRpb24teGw5eHYtOWdrbDY6YnVpbGRlciJ9.aP_xGAOor3Y63NALamnpUTLpX3QnsJcsWOJKOCnWUWFSm2Y7r0uNBHH1_u0OAwhM8aG74neV8K0KCUSPOPzKxF1AZjSongF49qOKujfHv64m6n8Wl1C5ufP_aICA1erDk5wk61d7BndeUkbV15tdQBGEm4AdtPBl2imqduQlBkowXA7DkiGmfFG4NSOjh_nLfyQdJld5QZVogiqPVRJli5lrOqfgxv4QobDHkjrfSLVheWZ-j_jX10miJMBpmlIwTO6HcB0yNlT809-zwKf2Dkkatkkyw23XOxd_0O5-HElyxn8QlHJbFbC2KHtUBha78KO-7-WVEIOF-4UTF3YuJA email serviceaccount@example.org
STEP: removing image...
STEP: Deleting images and image streams in project "extended-test-registry-migration-xl9xv-9gkl6"
[AfterEach] [Conformance][registry][migration] manifest migration from etcd to registry storage
/go/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:142
STEP: Collecting events from namespace "extended-test-registry-migration-xl9xv-9gkl6".
STEP: Found 0 events.
Mar 1 17:23:39.585: INFO: POD NODE PHASE GRACE CONDITIONS
Mar 1 17:23:39.585: INFO: docker-registry-1-vx13z 172.18.4.157 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-03-01 16:59:23 -0500 EST } {Ready True 0001-01-01 00:00:00 +0000 UTC 2017-03-01 16:59:32 -0500 EST } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-03-01 16:59:23 -0500 EST }]
Mar 1 17:23:39.585: INFO: router-2-bz6p6 172.18.4.157 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-03-01 16:59:54 -0500 EST } {Ready True 0001-01-01 00:00:00 +0000 UTC 2017-03-01 17:00:14 -0500 EST } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-03-01 16:59:54 -0500 EST }]
Mar 1 17:23:39.585: INFO: test-docker-1-build 172.18.4.157 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-03-01 17:22:43 -0500 EST } {Ready True 0001-01-01 00:00:00 +0000 UTC 2017-03-01 17:22:47 -0500 EST } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-03-01 17:22:43 -0500 EST }]
Mar 1 17:23:39.585: INFO:
Mar 1 17:23:39.586: INFO:
Logging node info for node 172.18.4.157
Mar 1 17:23:39.588: INFO: Node Info: &TypeMeta{Kind:,APIVersion:,}
Mar 1 17:23:39.588: INFO:
Logging kubelet events for node 172.18.4.157
Mar 1 17:23:39.590: INFO:
Logging pods the kubelet thinks is on node 172.18.4.157
Mar 1 17:23:39.596: INFO: docker-registry-1-vx13z started at 2017-03-01 16:59:23 -0500 EST (0+1 container statuses recorded)
Mar 1 17:23:39.596: INFO: Container registry ready: true, restart count 0
Mar 1 17:23:39.596: INFO: router-2-bz6p6 started at 2017-03-01 16:59:54 -0500 EST (0+1 container statuses recorded)
Mar 1 17:23:39.596: INFO: Container router ready: true, restart count 0
Mar 1 17:23:39.596: INFO: test-docker-1-build started at 2017-03-01 17:22:43 -0500 EST (0+1 container statuses recorded)
Mar 1 17:23:39.596: INFO: Container docker-build ready: true, restart count 0
Mar 1 17:23:39.906: INFO:
Latency metrics for node 172.18.4.157
Mar 1 17:23:39.906: INFO: {Operation:sync Method:pod_worker_latency_microseconds Quantile:0.9 Latency:2m1.041255s}
Mar 1 17:23:39.906: INFO: {Operation:sync Method:pod_worker_latency_microseconds Quantile:0.99 Latency:2m1.041255s}
Mar 1 17:23:39.906: INFO: {Operation:sync Method:pod_worker_latency_microseconds Quantile:0.5 Latency:2m1.015743s}
Mar 1 17:23:39.906: INFO: {Operation: Method:pod_start_latency_microseconds Quantile:0.99 Latency:49.341606s}
Mar 1 17:23:39.906: INFO: {Operation:stop_container Method:docker_operations_latency_microseconds Quantile:0.99 Latency:30.201589s}
Mar 1 17:23:39.906: INFO: {Operation: Method:pod_start_latency_microseconds Quantile:0.9 Latency:27.722085s}
STEP: Dumping a list of prepulled images on each node
Mar 1 17:23:39.909: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "extended-test-registry-migration-xl9xv-9gkl6" for this suite.
Mar 1 17:23:44.961: INFO: namespace: extended-test-registry-migration-xl9xv-9gkl6, resource: bindings, ignored listing per whitelist
```
</details>
|
1.0
|
Extended.[Conformance][registry][migration] manifest migration from etcd to registry storage registry can get access to manifest [local] - Flaking out as seen [here](https://ci.openshift.redhat.com/jenkins/job/test_pull_requests_origin_future/710/testReport/junit/(root)/Extended/_Conformance__registry__migration__manifest_migration_from_etcd_to_registry_storage_registry_can_get_access_to_manifest__local_/):
```
/go/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/registry/registry.go:122
Expected error:
<*docker.Error | 0xc4210b0300>: {
Status: 500,
Message: "{\"message\":\"failed to remove device 8019a36673d26e2f94ca11f122a2ad36964521755c60fb7cd0c6b6a5720b8985:Device is Busy\"}\n",
}
API error (500): {"message":"failed to remove device 8019a36673d26e2f94ca11f122a2ad36964521755c60fb7cd0c6b6a5720b8985:Device is Busy"}
not to have occurred
/go/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/registry/registry.go:121
```
Looks like it may be fallout from the change to `docker-storage-setup` or in general a failure with `devicemapper`. /cc @jwhonce @runcom @bparees @csrwng
<details>
<summary>Click for full logs</summary>
```
Stacktrace
/go/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/registry/registry.go:122
Expected error:
<*docker.Error | 0xc4210b0300>: {
Status: 500,
Message: "{\"message\":\"failed to remove device 8019a36673d26e2f94ca11f122a2ad36964521755c60fb7cd0c6b6a5720b8985:Device is Busy\"}\n",
}
API error (500): {"message":"failed to remove device 8019a36673d26e2f94ca11f122a2ad36964521755c60fb7cd0c6b6a5720b8985:Device is Busy"}
not to have occurred
/go/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/registry/registry.go:121
Standard Output
[BeforeEach] [Top Level]
/go/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:47
[BeforeEach] [Conformance][registry][migration] manifest migration from etcd to registry storage
/go/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:141
STEP: Creating a kubernetes client
Mar 1 17:23:34.963: INFO: >>> kubeConfig: /tmp/openshift/test-extended/core/openshift.local.config/master/admin.kubeconfig
STEP: Building a namespace api object
Mar 1 17:23:34.984: INFO: configPath is now "/tmp/extended-test-registry-migration-xl9xv-9gkl6-user.kubeconfig"
Mar 1 17:23:34.984: INFO: The user is now "extended-test-registry-migration-xl9xv-9gkl6-user"
Mar 1 17:23:34.984: INFO: Creating project "extended-test-registry-migration-xl9xv-9gkl6"
STEP: Waiting for a default service account to be provisioned in namespace
[It] registry can get access to manifest [local]
/go/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/registry/registry.go:122
STEP: set up policy for registry to have anonymous access to images
Mar 1 17:23:35.050: INFO: Running 'oc policy --config=/tmp/extended-test-registry-migration-xl9xv-9gkl6-user.kubeconfig --namespace=extended-test-registry-migration-xl9xv-9gkl6 add-role-to-user registry-viewer system:anonymous'
role "registry-viewer" added: "system:anonymous"
STEP: pushing image...
Step 1 : FROM scratch
--->
Step 2 : COPY data1 /data1
---> cbf641f53e1c
Removing intermediate container 9b472246d259
Successfully built cbf641f53e1c
Mar 1 17:23:37.584: INFO: Running 'oc whoami --config=/tmp/extended-test-registry-migration-xl9xv-9gkl6-user.kubeconfig --namespace=extended-test-registry-migration-xl9xv-9gkl6 -t'
The push refers to a repository [172.30.103.69:5000/extended-test-registry-migration-xl9xv-9gkl6/app]
Preparing
Pushing [====================> ] 512 B/1.28 kB
Pushing
Pushing [==================================================>] 1.792 kB
Pushing
Pushing [==================================================>] 3.072 kB
Pushing
Pushing [==================================================>] 3.072 kB
Pushing
Pushed
latest: digest: sha256:d7cf5b995be8469310e05d5f636d7c8b18d946cdc0e62503bd9fca9a48f44848 size: 1536
matching digest string
STEP: checking that the image converted...
STEP: getting image manifest from docker-registry...
STEP: restoring manifest...
STEP: checking that the manifest is present in the image...
STEP: getting image manifest from docker-registry one more time...
STEP: waiting until image is updated...
STEP: checking that the manifest was removed from the image...
STEP: getting image manifest from docker-registry to check if he's available...
STEP: pulling image...
STEP: get secret list err <nil>
STEP: secret name builder-dockercfg-l284m
STEP: docker cfg token json {"172.30.103.69:5000":{"username":"serviceaccount","password":"eyJhbGciOiJSUzI1NiIsInR5cCI6IkpXVCJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJleHRlbmRlZC10ZXN0LXJlZ2lzdHJ5LW1pZ3JhdGlvbi14bDl4di05Z2tsNiIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJidWlsZGVyLXRva2VuLW5sNWd6Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQubmFtZSI6ImJ1aWxkZXIiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC51aWQiOiJiNTliNjcwNC1mZWNkLTExZTYtYjA1Yi0wZWI5OTU2MWJkZTgiLCJzdWIiOiJzeXN0ZW06c2VydmljZWFjY291bnQ6ZXh0ZW5kZWQtdGVzdC1yZWdpc3RyeS1taWdyYXRpb24teGw5eHYtOWdrbDY6YnVpbGRlciJ9.aP_xGAOor3Y63NALamnpUTLpX3QnsJcsWOJKOCnWUWFSm2Y7r0uNBHH1_u0OAwhM8aG74neV8K0KCUSPOPzKxF1AZjSongF49qOKujfHv64m6n8Wl1C5ufP_aICA1erDk5wk61d7BndeUkbV15tdQBGEm4AdtPBl2imqduQlBkowXA7DkiGmfFG4NSOjh_nLfyQdJld5QZVogiqPVRJli5lrOqfgxv4QobDHkjrfSLVheWZ-j_jX10miJMBpmlIwTO6HcB0yNlT809-zwKf2Dkkatkkyw23XOxd_0O5-HElyxn8QlHJbFbC2KHtUBha78KO-7-WVEIOF-4UTF3YuJA","email":"serviceaccount@example.org","auth":"c2VydmljZWFjY291bnQ6ZXlKaGJHY2lPaUpTVXpJMU5pSXNJblI1Y0NJNklrcFhWQ0o5LmV5SnBjM01pT2lKcmRXSmxjbTVsZEdWekwzTmxjblpwWTJWaFkyTnZkVzUwSWl3aWEzVmlaWEp1WlhSbGN5NXBieTl6WlhKMmFXTmxZV05qYjNWdWRDOXVZVzFsYzNCaFkyVWlPaUpsZUhSbGJtUmxaQzEwWlhOMExYSmxaMmx6ZEhKNUxXMXBaM0poZEdsdmJpMTRiRGw0ZGkwNVoydHNOaUlzSW10MVltVnlibVYwWlhNdWFXOHZjMlZ5ZG1salpXRmpZMjkxYm5RdmMyVmpjbVYwTG01aGJXVWlPaUppZFdsc1pHVnlMWFJ2YTJWdUxXNXNOV2Q2SWl3aWEzVmlaWEp1WlhSbGN5NXBieTl6WlhKMmFXTmxZV05qYjNWdWRDOXpaWEoyYVdObExXRmpZMjkxYm5RdWJtRnRaU0k2SW1KMWFXeGtaWElpTENKcmRXSmxjbTVsZEdWekxtbHZMM05sY25acFkyVmhZMk52ZFc1MEwzTmxjblpwWTJVdFlXTmpiM1Z1ZEM1MWFXUWlPaUppTlRsaU5qY3dOQzFtWldOa0xURXhaVFl0WWpBMVlpMHdaV0k1T1RVMk1XSmtaVGdpTENKemRXSWlPaUp6ZVhOMFpXMDZjMlZ5ZG1salpXRmpZMjkxYm5RNlpYaDBaVzVrWldRdGRHVnpkQzF5WldkcGMzUnllUzF0YVdkeVlYUnBiMjR0ZUd3NWVIWXRPV2RyYkRZNlluVnBiR1JsY2lKOS5hUF94R0FPb3IzWTYzTkFMYW1ucFVUTHBYM1Fuc0pjc1dPSktPQ25XVVdGU20yWTdyMHVOQkhIMV91ME9Bd2hNOGFHNzRuZVY4SzBLQ1VTUE9Qekt4RjFBWmpTb25nRjQ5cU9LdWpmSHY2NG02bjhXbDFDNXVmUF9hSUNBMWVyRGs1d2s2MWQ3Qm5kZVVrYlYxNXRkUUJHRW00QWR0UEJsMmltcWR1UWxCa293WEE3RGtpR21mRkc0TlNPamhfbkxmeVFkSmxkNVFaVm9naXFQVlJKbGk1bHJPcWZneHY0UW9iREhranJmU0xWaGVXWi1qX2pYMTBtaUpNQnBtbEl3VE82SGNCMHlObFQ4MDktendLZjJEa2thdGtreXcyM1hPeGRfME81LUhFbHl4bjhRbEhKYkZiQzJLSHRVQmhhNzhLTy03LVdWRUlPRi00VVRGM1l1SkE="},"docker-registry.default.svc:5000":{"username":"serviceaccount","password":"eyJhbGciOiJSUzI1NiIsInR5cCI6IkpXVCJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJleHRlbmRlZC10ZXN0LXJlZ2lzdHJ5LW1pZ3JhdGlvbi14bDl4di05Z2tsNiIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJidWlsZGVyLXRva2VuLW5sNWd6Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQubmFtZSI6ImJ1aWxkZXIiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC51aWQiOiJiNTliNjcwNC1mZWNkLTExZTYtYjA1Yi0wZWI5OTU2MWJkZTgiLCJzdWIiOiJzeXN0ZW06c2VydmljZWFjY291bnQ6ZXh0ZW5kZWQtdGVzdC1yZWdpc3RyeS1taWdyYXRpb24teGw5eHYtOWdrbDY6YnVpbGRlciJ9.aP_xGAOor3Y63NALamnpUTLpX3QnsJcsWOJKOCnWUWFSm2Y7r0uNBHH1_u0OAwhM8aG74neV8K0KCUSPOPzKxF1AZjSongF49qOKujfHv64m6n8Wl1C5ufP_aICA1erDk5wk61d7BndeUkbV15tdQBGEm4AdtPBl2imqduQlBkowXA7DkiGmfFG4NSOjh_nLfyQdJld5QZVogiqPVRJli5lrOqfgxv4QobDHkjrfSLVheWZ-j_jX10miJMBpmlIwTO6HcB0yNlT809-zwKf2Dkkatkkyw23XOxd_0O5-HElyxn8QlHJbFbC2KHtUBha78KO-7-WVEIOF-4UTF3YuJA","email":"serviceaccount@example.org","auth":"c2VydmljZWFjY291bnQ6ZXlKaGJHY2lPaUpTVXpJMU5pSXNJblI1Y0NJNklrcFhWQ0o5LmV5SnBjM01pT2lKcmRXSmxjbTVsZEdWekwzTmxjblpwWTJWaFkyTnZ
kVzUwSWl3aWEzVmlaWEp1WlhSbGN5NXBieTl6WlhKMmFXTmxZV05qYjNWdWRDOXVZVzFsYzNCaFkyVWlPaUpsZUhSbGJtUmxaQzEwWlhOMExYSmxaMmx6ZEhKNUxXMXBaM0poZEdsdmJpMTRiRGw0ZGkwNVoydHNOaUlzSW10MVltVnlibVYwWlhNdWFXOHZjMlZ5ZG1salpXRmpZMjkxYm5RdmMyVmpjbVYwTG01aGJXVWlPaUppZFdsc1pHVnlMWFJ2YTJWdUxXNXNOV2Q2SWl3aWEzVmlaWEp1WlhSbGN5NXBieTl6WlhKMmFXTmxZV05qYjNWdWRDOXpaWEoyYVdObExXRmpZMjkxYm5RdWJtRnRaU0k2SW1KMWFXeGtaWElpTENKcmRXSmxjbTVsZEdWekxtbHZMM05sY25acFkyVmhZMk52ZFc1MEwzTmxjblpwWTJVdFlXTmpiM1Z1ZEM1MWFXUWlPaUppTlRsaU5qY3dOQzFtWldOa0xURXhaVFl0WWpBMVlpMHdaV0k1T1RVMk1XSmtaVGdpTENKemRXSWlPaUp6ZVhOMFpXMDZjMlZ5ZG1salpXRmpZMjkxYm5RNlpYaDBaVzVrWldRdGRHVnpkQzF5WldkcGMzUnllUzF0YVdkeVlYUnBiMjR0ZUd3NWVIWXRPV2RyYkRZNlluVnBiR1JsY2lKOS5hUF94R0FPb3IzWTYzTkFMYW1ucFVUTHBYM1Fuc0pjc1dPSktPQ25XVVdGU20yWTdyMHVOQkhIMV91ME9Bd2hNOGFHNzRuZVY4SzBLQ1VTUE9Qekt4RjFBWmpTb25nRjQ5cU9LdWpmSHY2NG02bjhXbDFDNXVmUF9hSUNBMWVyRGs1d2s2MWQ3Qm5kZVVrYlYxNXRkUUJHRW00QWR0UEJsMmltcWR1UWxCa293WEE3RGtpR21mRkc0TlNPamhfbkxmeVFkSmxkNVFaVm9naXFQVlJKbGk1bHJPcWZneHY0UW9iREhranJmU0xWaGVXWi1qX2pYMTBtaUpNQnBtbEl3VE82SGNCMHlObFQ4MDktendLZjJEa2thdGtreXcyM1hPeGRfME81LUhFbHl4bjhRbEhKYkZiQzJLSHRVQmhhNzhLTy03LVdWRUlPRi00VVRGM1l1SkE="}}
STEP: json unmarshal err <nil>
STEP: found auth true with auth cfg len 1
STEP: dockercfg with svrAddr 172.30.103.69:5000 user serviceaccount pass eyJhbGciOiJSUzI1NiIsInR5cCI6IkpXVCJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJleHRlbmRlZC10ZXN0LXJlZ2lzdHJ5LW1pZ3JhdGlvbi14bDl4di05Z2tsNiIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJidWlsZGVyLXRva2VuLW5sNWd6Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQubmFtZSI6ImJ1aWxkZXIiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC51aWQiOiJiNTliNjcwNC1mZWNkLTExZTYtYjA1Yi0wZWI5OTU2MWJkZTgiLCJzdWIiOiJzeXN0ZW06c2VydmljZWFjY291bnQ6ZXh0ZW5kZWQtdGVzdC1yZWdpc3RyeS1taWdyYXRpb24teGw5eHYtOWdrbDY6YnVpbGRlciJ9.aP_xGAOor3Y63NALamnpUTLpX3QnsJcsWOJKOCnWUWFSm2Y7r0uNBHH1_u0OAwhM8aG74neV8K0KCUSPOPzKxF1AZjSongF49qOKujfHv64m6n8Wl1C5ufP_aICA1erDk5wk61d7BndeUkbV15tdQBGEm4AdtPBl2imqduQlBkowXA7DkiGmfFG4NSOjh_nLfyQdJld5QZVogiqPVRJli5lrOqfgxv4QobDHkjrfSLVheWZ-j_jX10miJMBpmlIwTO6HcB0yNlT809-zwKf2Dkkatkkyw23XOxd_0O5-HElyxn8QlHJbFbC2KHtUBha78KO-7-WVEIOF-4UTF3YuJA email serviceaccount@example.org
STEP: removing image...
STEP: Deleting images and image streams in project "extended-test-registry-migration-xl9xv-9gkl6"
[AfterEach] [Conformance][registry][migration] manifest migration from etcd to registry storage
/go/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:142
STEP: Collecting events from namespace "extended-test-registry-migration-xl9xv-9gkl6".
STEP: Found 0 events.
Mar 1 17:23:39.585: INFO: POD NODE PHASE GRACE CONDITIONS
Mar 1 17:23:39.585: INFO: docker-registry-1-vx13z 172.18.4.157 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-03-01 16:59:23 -0500 EST } {Ready True 0001-01-01 00:00:00 +0000 UTC 2017-03-01 16:59:32 -0500 EST } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-03-01 16:59:23 -0500 EST }]
Mar 1 17:23:39.585: INFO: router-2-bz6p6 172.18.4.157 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-03-01 16:59:54 -0500 EST } {Ready True 0001-01-01 00:00:00 +0000 UTC 2017-03-01 17:00:14 -0500 EST } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-03-01 16:59:54 -0500 EST }]
Mar 1 17:23:39.585: INFO: test-docker-1-build 172.18.4.157 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-03-01 17:22:43 -0500 EST } {Ready True 0001-01-01 00:00:00 +0000 UTC 2017-03-01 17:22:47 -0500 EST } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-03-01 17:22:43 -0500 EST }]
Mar 1 17:23:39.585: INFO:
Mar 1 17:23:39.586: INFO:
Logging node info for node 172.18.4.157
Mar 1 17:23:39.588: INFO: Node Info: &TypeMeta{Kind:,APIVersion:,}
Mar 1 17:23:39.588: INFO:
Logging kubelet events for node 172.18.4.157
Mar 1 17:23:39.590: INFO:
Logging pods the kubelet thinks is on node 172.18.4.157
Mar 1 17:23:39.596: INFO: docker-registry-1-vx13z started at 2017-03-01 16:59:23 -0500 EST (0+1 container statuses recorded)
Mar 1 17:23:39.596: INFO: Container registry ready: true, restart count 0
Mar 1 17:23:39.596: INFO: router-2-bz6p6 started at 2017-03-01 16:59:54 -0500 EST (0+1 container statuses recorded)
Mar 1 17:23:39.596: INFO: Container router ready: true, restart count 0
Mar 1 17:23:39.596: INFO: test-docker-1-build started at 2017-03-01 17:22:43 -0500 EST (0+1 container statuses recorded)
Mar 1 17:23:39.596: INFO: Container docker-build ready: true, restart count 0
Mar 1 17:23:39.906: INFO:
Latency metrics for node 172.18.4.157
Mar 1 17:23:39.906: INFO: {Operation:sync Method:pod_worker_latency_microseconds Quantile:0.9 Latency:2m1.041255s}
Mar 1 17:23:39.906: INFO: {Operation:sync Method:pod_worker_latency_microseconds Quantile:0.99 Latency:2m1.041255s}
Mar 1 17:23:39.906: INFO: {Operation:sync Method:pod_worker_latency_microseconds Quantile:0.5 Latency:2m1.015743s}
Mar 1 17:23:39.906: INFO: {Operation: Method:pod_start_latency_microseconds Quantile:0.99 Latency:49.341606s}
Mar 1 17:23:39.906: INFO: {Operation:stop_container Method:docker_operations_latency_microseconds Quantile:0.99 Latency:30.201589s}
Mar 1 17:23:39.906: INFO: {Operation: Method:pod_start_latency_microseconds Quantile:0.9 Latency:27.722085s}
STEP: Dumping a list of prepulled images on each node
Mar 1 17:23:39.909: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "extended-test-registry-migration-xl9xv-9gkl6" for this suite.
Mar 1 17:23:44.961: INFO: namespace: extended-test-registry-migration-xl9xv-9gkl6, resource: bindings, ignored listing per whitelist
```
</details>
|
non_process
|
extended manifest migration from etcd to registry storage registry can get access to manifest flaking out as seen go src github com openshift origin output local go src github com openshift origin test extended registry registry go expected error status message message failed to remove device device is busy n api error message failed to remove device device is busy not to have occurred go src github com openshift origin output local go src github com openshift origin test extended registry registry go looks like it may be fallout from the change to docker storage setup or in general a failure with devicemapper cc jwhonce runcom bparees csrwng click for full logs stacktrace go src github com openshift origin output local go src github com openshift origin test extended registry registry go expected error status message message failed to remove device device is busy n api error message failed to remove device device is busy not to have occurred go src github com openshift origin output local go src github com openshift origin test extended registry registry go standard output go src github com openshift origin output local go src github com openshift origin test extended util test go manifest migration from etcd to registry storage go src github com openshift origin output local go src github com openshift origin vendor io kubernetes test framework framework go step creating a kubernetes client mar info kubeconfig tmp openshift test extended core openshift local config master admin kubeconfig step building a namespace api object mar info configpath is now tmp extended test registry migration user kubeconfig mar info the user is now extended test registry migration user mar info creating project extended test registry migration step waiting for a default service account to be provisioned in namespace registry can get access to manifest go src github com openshift origin output local go src github com openshift origin test extended registry registry go step set up policy for registry to have anonymous access to images mar info running oc policy config tmp extended test registry migration user kubeconfig namespace extended test registry migration add role to user registry viewer system anonymous role registry viewer added system anonymous step pushing image step from scratch step copy removing intermediate container successfully built mar info running oc whoami config tmp extended test registry migration user kubeconfig namespace extended test registry migration t the push refers to a repository preparing pushing b kb pushing pushing kb pushing pushing kb pushing pushing kb pushing pushed latest digest size matching digest string step checking that the image converted step getting image manifest from docker registry step restoring manifest step checking that the manifest is present in the image step getting image manifest from docker registry one more time step waiting until image is updated step checking that the manifest was removed from the image step getting image manifest from docker registry to check if he s available step pulling image step get secret list err step secret name builder dockercfg step docker cfg token json username serviceaccount password ap j wveiof email serviceaccount example org auth docker registry default svc username serviceaccount password ap j wveiof email serviceaccount example org auth step json unmarshal err step found auth true with auth cfg len step dockercfg with svraddr user serviceaccount pass ap j wveiof email serviceaccount example org step removing 
image step deleting images and image streams in project extended test registry migration manifest migration from etcd to registry storage go src github com openshift origin output local go src github com openshift origin vendor io kubernetes test framework framework go step collecting events from namespace extended test registry migration step found events mar info pod node phase grace conditions mar info docker registry running mar info router running mar info test docker build running mar info mar info logging node info for node mar info node info typemeta kind apiversion mar info logging kubelet events for node mar info logging pods the kubelet thinks is on node mar info docker registry started at est container statuses recorded mar info container registry ready true restart count mar info router started at est container statuses recorded mar info container router ready true restart count mar info test docker build started at est container statuses recorded mar info container docker build ready true restart count mar info latency metrics for node mar info operation sync method pod worker latency microseconds quantile latency mar info operation sync method pod worker latency microseconds quantile latency mar info operation sync method pod worker latency microseconds quantile latency mar info operation method pod start latency microseconds quantile latency mar info operation stop container method docker operations latency microseconds quantile latency mar info operation method pod start latency microseconds quantile latency step dumping a list of prepulled images on each node mar info waiting up to for all but nodes to be ready step destroying namespace extended test registry migration for this suite mar info namespace extended test registry migration resource bindings ignored listing per whitelist
| 0
|
244,481
| 26,412,566,976
|
IssuesEvent
|
2023-01-13 13:29:15
|
dmyers87/amundsenfrontendlibrary
|
https://api.github.com/repos/dmyers87/amundsenfrontendlibrary
|
opened
|
CVE-2022-21191 (High) detected in global-modules-path-2.3.0.tgz
|
security vulnerability
|
## CVE-2022-21191 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>global-modules-path-2.3.0.tgz</b></p></summary>
<p>Returns path to globally installed package</p>
<p>Library home page: <a href="https://registry.npmjs.org/global-modules-path/-/global-modules-path-2.3.0.tgz">https://registry.npmjs.org/global-modules-path/-/global-modules-path-2.3.0.tgz</a></p>
<p>Path to dependency file: /amundsen_application/static/package.json</p>
<p>Path to vulnerable library: /amundsen_application/static/node_modules/global-modules-path/package.json</p>
<p>
Dependency Hierarchy:
- webpack-cli-3.1.2.tgz (Root Library)
- :x: **global-modules-path-2.3.0.tgz** (Vulnerable Library)
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Versions of the package global-modules-path before 3.0.0 are vulnerable to Command Injection due to missing input sanitization or other checks and sandboxes being employed to the getPath function.
<p>Publish Date: 2023-01-13
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2022-21191>CVE-2022-21191</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.4</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: High
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Release Date: 2023-01-13</p>
<p>Fix Resolution: global-modules-path - 3.0.0
</p>
</p>
</details>
<p></p>
|
True
|
CVE-2022-21191 (High) detected in global-modules-path-2.3.0.tgz - ## CVE-2022-21191 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>global-modules-path-2.3.0.tgz</b></p></summary>
<p>Returns path to globally installed package</p>
<p>Library home page: <a href="https://registry.npmjs.org/global-modules-path/-/global-modules-path-2.3.0.tgz">https://registry.npmjs.org/global-modules-path/-/global-modules-path-2.3.0.tgz</a></p>
<p>Path to dependency file: /amundsen_application/static/package.json</p>
<p>Path to vulnerable library: /amundsen_application/static/node_modules/global-modules-path/package.json</p>
<p>
Dependency Hierarchy:
- webpack-cli-3.1.2.tgz (Root Library)
- :x: **global-modules-path-2.3.0.tgz** (Vulnerable Library)
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Versions of the package global-modules-path before 3.0.0 are vulnerable to Command Injection due to missing input sanitization or other checks and sandboxes being employed to the getPath function.
<p>Publish Date: 2023-01-13
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2022-21191>CVE-2022-21191</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.4</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: High
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Release Date: 2023-01-13</p>
<p>Fix Resolution: global-modules-path - 3.0.0
</p>
</p>
</details>
<p></p>
|
non_process
|
cve high detected in global modules path tgz cve high severity vulnerability vulnerable library global modules path tgz returns path to globally installed package library home page a href path to dependency file amundsen application static package json path to vulnerable library amundsen application static node modules global modules path package json dependency hierarchy webpack cli tgz root library x global modules path tgz vulnerable library found in base branch master vulnerability details versions of the package global modules path before are vulnerable to command injection due to missing input sanitization or other checks and sandboxes being employed to the getpath function publish date url a href cvss score details base score metrics exploitability metrics attack vector local attack complexity high privileges required none user interaction none scope unchanged impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version release date fix resolution global modules path
| 0
|
9,303
| 12,312,501,544
|
IssuesEvent
|
2020-05-12 14:01:27
|
bazelbuild/rules_python
|
https://api.github.com/repos/bazelbuild/rules_python
|
closed
|
Clarify ownership of rules_python
|
P1 type: process
|
Following up on #290, I've added @andyscott and @thundergolfer as maintainers of rules_python. We now need to define the boundaries between community-owned pieces of this repo and pieces owned by the core Bazel team.
+Cc @lberki and @laurentlb on core Bazel, and @pstradomski who is responsible for the wheel packaging rules under //experimental.
This repo currently serves two purposes: pip integration rules written purely in Starlark, and Starlark stubs for the native Python rules. Our intent is that the community take ownership of the pip integration rules (e.g., `python/pip.bzl`, `python/wheel.bzl`, `packaging/...`, `tools/...`) while core Bazel team retains ownership of the stubs (`python/defs.bzl`, subdirectories of `python/`). For repo-global things like the WORKSPACE I think the chance of stepping on each other's toes is low so I'm not worried about explicit ownership delineations there.
Action items are to spell out this division in a little more detail (expand on #290) and add CODEOWNERS. Beyond that, Andy and Jonathan should discuss how they'd like to evolve the packaging rules -- e.g. whether to merge with an existing alternative ruleset, vs preserving the existing API.
|
1.0
|
Clarify ownership of rules_python - Following up on #290, I've added @andyscott and @thundergolfer as maintainers of rules_python. We now need to define the boundaries between community-owned pieces of this repo and pieces owned by the core Bazel team.
+Cc @lberki and @laurentlb on core Bazel, and @pstradomski who is responsible for the wheel packaging rules under //experimental.
This repo currently serves two purposes: pip integration rules written purely in Starlark, and Starlark stubs for the native Python rules. Our intent is that the community take ownership of the pip integration rules (e.g., `python/pip.bzl`, `python/wheel.bzl`, `packaging/...`, `tools/...`) while core Bazel team retains ownership of the stubs (`python/defs.bzl`, subdirectories of `python/`). For repo-global things like the WORKSPACE I think the chance of stepping on each other's toes is low so I'm not worried about explicit ownership delineations there.
Action items are to spell out this division in a little more detail (expand on #290) and add CODEOWNERS. Beyond that, Andy and Jonathan should discuss how they'd like to evolve the packaging rules -- e.g. whether to merge with an existing alternative ruleset, vs preserving the existing API.
|
process
|
clarify ownership of rules python following up on i ve added andyscott and thundergolfer as maintainers of rules python we now need to define the boundaries between community owned pieces of this repo and pieces owned by the core bazel team cc lberki and laurentlb on core bazel and pstradomski who is responsible for the wheel packaging rules under experimental this repo currently serves two purposes pip integration rules written purely in starlark and starlark stubs for the native python rules our intent is that the community take ownership of the pip integration rules e g python pip bzl python wheel bzl packaging tools while core bazel team retains ownership of the stubs python defs bzl subdirectories of python for repo global things like the workspace i think the chance of stepping on each other s toes is low so i m not worried about explicit ownership delineations there action items are to spell out this division in a little more detail expand on and add codeowners beyond that andy and jonathan should discuss how they d like to evolve the packaging rules e g whether to merge with an existing alternative ruleset vs preserving the existing api
| 1
|
81,191
| 3,587,795,083
|
IssuesEvent
|
2016-01-30 15:41:55
|
fgpv-vpgf/fgpv-vpgf
|
https://api.github.com/repos/fgpv-vpgf/fgpv-vpgf
|
opened
|
chore(build): wet sample page missing resources on fgpcloud site
|
priority: high type: chore
|
I didn't want to include wet code in our source, so I linked resourced from a lib folder. Auto-deploy script doesn't copy them over and the wet sample page is currently broken: http://fgpv.cloudapp.net/demo/develop/index-wet.html
What should we do? Include wet in source or update auto-deploy script to copy over wet folder?
Relates to #10
|
1.0
|
chore(build): wet sample page missing resources on fgpcloud site - I didn't want to include wet code in our source, so I linked resourced from a lib folder. Auto-deploy script doesn't copy them over and the wet sample page is currently broken: http://fgpv.cloudapp.net/demo/develop/index-wet.html
What should we do? Include wet in source or update auto-deploy script to copy over wet folder?
Relates to #10
|
non_process
|
chore build wet sample page missing resources on fgpcloud site i didn t want to include wet code in our source so i linked resourced from a lib folder auto deploy script doesn t copy them over and the wet sample page is currently broken what should we do include wet in source or update auto deploy script to copy over wet folder relates to
| 0
|
9,353
| 12,366,223,065
|
IssuesEvent
|
2020-05-18 10:04:52
|
DiSSCo/user-stories
|
https://api.github.com/repos/DiSSCo/user-stories
|
opened
|
to select all digitized labels from a specific collector
|
2. Collection Management 2. University/Research institute 4. Data processing ICEDIG-SURVEY Specimen level
|
As a Scientist I want to extract handwriting samples of a collector so that I can verify collection localities and collection dates of specimens of a collector for this I need to select all digitized labels from a specific collector
|
1.0
|
to select all digitized labels from a specific collector - As a Scientist I want to extract handwriting samples of a collector so that I can verify collection localities and collection dates of specimens of a collector for this I need to select all digitized labels from a specific collector
|
process
|
to select all digitized labels from a specific collector as a scientist i want to extract handwriting samples of a collector so that i can verify collection localities and collection dates of specimens of a collector for this i need to select all digitized labels from a specific collector
| 1
|
357,634
| 25,176,414,710
|
IssuesEvent
|
2022-11-11 09:39:32
|
markusteim/pe
|
https://api.github.com/repos/markusteim/pe
|
opened
|
User stories parts don't match
|
type.DocumentationBug severity.Low
|

The description of the user story does not match the name of it, it should be that a new pet can be added to the list of pets that are in the hospital, not that you could keep track of them, it sounds more like viewing the list of pets.
<!--session: 1668139587324-7f4abad3-8d1d-403e-b295-c4984fc31dc9-->
<!--Version: Web v3.4.4-->
|
1.0
|
User stories parts don't match - )
The description of the user story does not match the name of it, it should be that a new pet can be added to the list of pets that are in the hospital, not that you could keep track of them, it sounds more like viewing the list of pets.
<!--session: 1668139587324-7f4abad3-8d1d-403e-b295-c4984fc31dc9-->
<!--Version: Web v3.4.4-->
|
non_process
|
user stories parts don t match the description of the user story does not match the name of it it should be that a new pet can be added to the list of pets that are in the hospital not that you could keep track of them it sounds more like viewing the list of pets
| 0
|
22,134
| 30,679,612,032
|
IssuesEvent
|
2023-07-26 08:17:07
|
MicrosoftDocs/azure-docs
|
https://api.github.com/repos/MicrosoftDocs/azure-docs
|
closed
|
Deploy a Linux Hybrid Runbook Worker in Docker
|
automation/svc triaged assigned-to-author doc-idea process-automation/subsvc Pri2
|
I would be great to have instructions for deploying to a Linux Docker container also.
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: e38be5b8-d76d-a4f1-c014-7bf9248be2de
* Version Independent ID: 976e5e90-b28c-d7ba-0495-69d92e62ea46
* Content: [Deploy a Linux Hybrid Runbook Worker in Azure Automation](https://docs.microsoft.com/en-us/azure/automation/automation-linux-hrw-install)
* Content Source: [articles/automation/automation-linux-hrw-install.md](https://github.com/MicrosoftDocs/azure-docs/blob/master/articles/automation/automation-linux-hrw-install.md)
* Service: **automation**
* Sub-service: **process-automation**
* GitHub Login: @MGoedtel
* Microsoft Alias: **magoedte**
|
1.0
|
Deploy a Linux Hybrid Runbook Worker in Docker - I would be great to have instructions for deploying to a Linux Docker container also.
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: e38be5b8-d76d-a4f1-c014-7bf9248be2de
* Version Independent ID: 976e5e90-b28c-d7ba-0495-69d92e62ea46
* Content: [Deploy a Linux Hybrid Runbook Worker in Azure Automation](https://docs.microsoft.com/en-us/azure/automation/automation-linux-hrw-install)
* Content Source: [articles/automation/automation-linux-hrw-install.md](https://github.com/MicrosoftDocs/azure-docs/blob/master/articles/automation/automation-linux-hrw-install.md)
* Service: **automation**
* Sub-service: **process-automation**
* GitHub Login: @MGoedtel
* Microsoft Alias: **magoedte**
|
process
|
deploy a linux hybrid runbook worker in docker i would be great to have instructions for deploying to a linux docker container also document details ⚠ do not edit this section it is required for docs microsoft com ➟ github issue linking id version independent id content content source service automation sub service process automation github login mgoedtel microsoft alias magoedte
| 1
|
301,080
| 26,014,510,714
|
IssuesEvent
|
2022-12-21 07:01:12
|
brave/brave-browser
|
https://api.github.com/repos/brave/brave-browser
|
opened
|
Getting STOD (stuck triangle of doom) trying to log in to `account.brave.com` using `Private window`
|
bug privacy QA/Yes QA/Test-Plan-Specified OS/Desktop feature/vpn
|
<!-- Have you searched for similar issues? Before submitting this issue, please check the open issues and add a note before logging a new issue.
PLEASE USE THE TEMPLATE BELOW TO PROVIDE INFORMATION ABOUT THE ISSUE.
INSUFFICIENT INFO WILL GET THE ISSUE CLOSED. IT WILL ONLY BE REOPENED AFTER SUFFICIENT INFO IS PROVIDED-->
## Description
<!--Provide a brief description of the issue-->
Getting STOD (stuck triangle of doom) trying to log in to `account.brave.com` using `Private window`
(Found by @bsclifton, logged by yours truly)
## Steps to Reproduce
<!--Please add a series of steps to reproduce the issue-->
1. install `1.48.73`
2. launch Brave
3. click on the `"hamburger"` menu
4. click on `New Private Window`
5. load `account.brave.com`
6. enter a unique, throwaway `@mailinator.com` address
7. click on the `Get login link` button
8. click on the `Log in to Brave` button in the resulting email
9. wait...
## Actual result:
<!--Please add screenshots if needed-->
<img width="1316" alt="Screen Shot 2022-12-20 at 10 50 09 PM" src="https://user-images.githubusercontent.com/387249/208840675-44239a97-2f76-4090-aafb-e305bbdb683c.png">
## Expected result:
## Reproduces how often:
<!--[Easily reproduced/Intermittent issue/No steps to reproduce]-->
100%
## Brave version (brave://version info)
<!--For installed build, please copy Brave, Revision and OS from brave://version and paste here. If building from source please mention it along with brave://version details-->
Brave | 1.48.73 Chromium: 109.0.5414.46 (Official Build) nightly (x86_64)
-- | --
Revision | 6e36b77363ef3febbe792af680fa1367993ddcf0-refs/branch-heads/5414@{#709}
OS | macOS Version 11.7.2 (Build 20G1020)
## Version/Channel Information:
<!--Does this issue happen on any other channels? Or is it specific to a certain channel?-->
- Can you reproduce this issue with the current release?
- Can you reproduce this issue with the beta channel?
- Can you reproduce this issue with the nightly channel?
## Other Additional Information:
- Does the issue resolve itself when disabling Brave Shields?
- Does the issue resolve itself when disabling Brave Rewards?
- Is the issue reproducible on the latest version of Chrome?
## Miscellaneous Information:
<!--Any additional information, related issues, extra QA steps, configuration or data that might be necessary to reproduce the issue-->
/cc @bsclifton @brave/qa-team @mattmcalister @rebron @simonhong @spylogsster
|
1.0
|
Getting STOD (stuck triangle of doom) trying to log in to `account.brave.com` using `Private window` - <!-- Have you searched for similar issues? Before submitting this issue, please check the open issues and add a note before logging a new issue.
PLEASE USE THE TEMPLATE BELOW TO PROVIDE INFORMATION ABOUT THE ISSUE.
INSUFFICIENT INFO WILL GET THE ISSUE CLOSED. IT WILL ONLY BE REOPENED AFTER SUFFICIENT INFO IS PROVIDED-->
## Description
<!--Provide a brief description of the issue-->
Getting STOD (stuck triangle of doom) trying to log in to `account.brave.com` using `Private window`
(Found by @bsclifton, logged by yours truly)
## Steps to Reproduce
<!--Please add a series of steps to reproduce the issue-->
1. install `1.48.73`
2. launch Brave
3. click on the `"hamburger"` menu
4. click on `New Private Window`
5. load `account.brave.com`
6. enter a unique, throwaway `@mailinator.com` address
7. click on the `Get login link` button
8. click on the `Log in to Brave` button in the resulting email
9. wait...
## Actual result:
<!--Please add screenshots if needed-->
<img width="1316" alt="Screen Shot 2022-12-20 at 10 50 09 PM" src="https://user-images.githubusercontent.com/387249/208840675-44239a97-2f76-4090-aafb-e305bbdb683c.png">
## Expected result:
## Reproduces how often:
<!--[Easily reproduced/Intermittent issue/No steps to reproduce]-->
100%
## Brave version (brave://version info)
<!--For installed build, please copy Brave, Revision and OS from brave://version and paste here. If building from source please mention it along with brave://version details-->
Brave | 1.48.73 Chromium: 109.0.5414.46 (Official Build) nightly (x86_64)
-- | --
Revision | 6e36b77363ef3febbe792af680fa1367993ddcf0-refs/branch-heads/5414@{#709}
OS | macOS Version 11.7.2 (Build 20G1020)
## Version/Channel Information:
<!--Does this issue happen on any other channels? Or is it specific to a certain channel?-->
- Can you reproduce this issue with the current release?
- Can you reproduce this issue with the beta channel?
- Can you reproduce this issue with the nightly channel?
## Other Additional Information:
- Does the issue resolve itself when disabling Brave Shields?
- Does the issue resolve itself when disabling Brave Rewards?
- Is the issue reproducible on the latest version of Chrome?
## Miscellaneous Information:
<!--Any additional information, related issues, extra QA steps, configuration or data that might be necessary to reproduce the issue-->
/cc @bsclifton @brave/qa-team @mattmcalister @rebron @simonhong @spylogsster
|
non_process
|
getting stod stuck triangle of doom trying to log in to account brave com using private window have you searched for similar issues before submitting this issue please check the open issues and add a note before logging a new issue please use the template below to provide information about the issue insufficient info will get the issue closed it will only be reopened after sufficient info is provided description getting stod stuck triangle of doom trying to log in to account brave com using private window found by bsclifton logged by yours truly steps to reproduce install launch brave click on the hamburger menu click on new private window load account brave com enter a unique throwaway mailinator com address click on the get login link button click on the log in to brave button in the resulting email wait actual result img width alt screen shot at pm src expected result reproduces how often brave version brave version info brave chromium official build nightly revision refs branch heads os macos version build version channel information can you reproduce this issue with the current release can you reproduce this issue with the beta channel can you reproduce this issue with the nightly channel other additional information does the issue resolve itself when disabling brave shields does the issue resolve itself when disabling brave rewards is the issue reproducible on the latest version of chrome miscellaneous information cc bsclifton brave qa team mattmcalister rebron simonhong spylogsster
| 0
|
1,342
| 3,193,643,515
|
IssuesEvent
|
2015-09-30 07:15:06
|
raml-org/raml-spec
|
https://api.github.com/repos/raml-org/raml-spec
|
opened
|
Allow Defining Configurable New Security Schemes
|
v1-enhanced-security-schemes
|
Allow new security schemes to be defined that require certain configuration settings, and allow consuming them in an API definition. The processor of the RAML spec would need to support such custom schemes.
|
True
|
Allow Defining Configurable New Security Schemes - Allow new security schemes to be defined that require certain configuration settings, and allow consuming them in an API definition. The processor of the RAML spec would need to support such custom schemes.
|
non_process
|
allow defining configurable new security schemes allow new security schemes to be defined that require certain configuration settings and allow consuming them in an api definition the processor of the raml spec would need to support such custom schemes
| 0
|
19,616
| 25,970,685,475
|
IssuesEvent
|
2022-12-19 10:58:41
|
prisma/prisma
|
https://api.github.com/repos/prisma/prisma
|
opened
|
Remove DML dependency from MongoDB introspection
|
process/candidate topic: introspection topic: internal tech/engines/introspection engine team/schema topic: mongodb
|
We still use DML in MongoDB introspection. Let's get rid of that, and get something similar going what we have in the SQL side.
|
1.0
|
Remove DML dependency from MongoDB introspection - We still use DML in MongoDB introspection. Let's get rid of that, and get something similar going what we have in the SQL side.
|
process
|
remove dml dependency from mongodb introspection we still use dml in mongodb introspection let s get rid of that and get something similar going what we have in the sql side
| 1
|
12,488
| 14,952,630,936
|
IssuesEvent
|
2021-01-26 15:45:27
|
ORNL-AMO/AMO-Tools-Desktop
|
https://api.github.com/repos/ORNL-AMO/AMO-Tools-Desktop
|
closed
|
Pre-Release testing of PHAST assessment
|
Process Heating important
|
Step through creation of a couple new assessments. Import MEASUR Demo Plant B, and any other frequently used. Anything else is fair game as well.
|
1.0
|
Pre-Release testing of PHAST assessment - Step through creation of a couple new assessments. Import MEASUR Demo Plant B, and any other frequently used. Anything else is fair game as well.
|
process
|
pre release testing of phast assessment step through creation of a couple new assessments import measur demo plant b and any other frequently used anything else is fair game as well
| 1
|
302,562
| 22,830,877,431
|
IssuesEvent
|
2022-07-12 12:50:46
|
cambiatus/backend
|
https://api.github.com/repos/cambiatus/backend
|
closed
|
Update the readme to mention the exiftool dependency
|
📚 documentation
|
PR #247 introduced image metadata stripping. To do this we use an external dependency called "exiftool", which must be installed on the machine that will run the backend.
To inform new users and make collaboration easier, we should document this dependency in the directory's readme, just as we do for ImageMagick.
|
1.0
|
Update the readme to mention the exiftool dependency - PR #247 introduced image metadata stripping. To do this we use an external dependency called "exiftool", which must be installed on the machine that will run the backend.
To inform new users and make collaboration easier, we should document this dependency in the directory's readme, just as we do for ImageMagick.
|
non_process
|
atualizar readme para mencionar a dependência exiftool o pr introduziu a limpeza dos metadados de imagem pra fazer isso usamos uma dependência externa chamada exiftool que precisa ser instalada na máquina que irá rodar o backend para informar novos usuários e facilitar a colaboração devemos informar essa dependência no readme do diretório assim como fazmos para o imagemagick
| 0
|
63,022
| 8,652,109,210
|
IssuesEvent
|
2018-11-27 06:43:35
|
aopell/SchoologyPlus
|
https://api.github.com/repos/aopell/SchoologyPlus
|
opened
|
Document grade page features
|
Documentation Enhancement
|
We currently have extensive documentation on the theme editor. Given the popularity of the grade page features, we should probably have similar documentation on them.
|
1.0
|
Document grade page features - We currently have extensive documentation on the theme editor. Given the popularity of the grade page features, we should probably have similar documentation on them.
|
non_process
|
document grade page features we currently have extensive documentation on the theme editor given the popularity of the grade page features we should probably have similar documentation on them
| 0
|
31,540
| 14,987,670,627
|
IssuesEvent
|
2021-01-28 23:24:54
|
diegodlh/zotero-wikicite
|
https://api.github.com/repos/diegodlh/zotero-wikicite
|
opened
|
Alternative Wikidata search translator for citation target item metadata
|
enhancement performance wikidata
|
Methods such as `Citations.syncItemCitationsWithWikidata` rely on `Wikidata.getItems` to get citation target item metadata.
However, `Wikidata.getItems` relies on custom Wikidata API search translator, which returns a full item. Most of this information will be discarded when saving the citation (#27).
This means Wikidata API will be called more times than needed, and many unnecessary MBs will be transferred, specially for batch operations.
Consider providing an alternative Wikidata API (or SPARQL) search translator for citation target items that only fetches the information required. This alternative translator may be used by `Wikidata.getItems` when a `minimum` parameter is set to `true`, for example.
|
True
|
Alternative Wikidata search translator for citation target item metadata - Methods such as `Citations.syncItemCitationsWithWikidata` rely on `Wikidata.getItems` to get citation target item metadata.
However, `Wikidata.getItems` relies on custom Wikidata API search translator, which returns a full item. Most of this information will be discarded when saving the citation (#27).
This means Wikidata API will be called more times than needed, and many unnecessary MBs will be transferred, specially for batch operations.
Consider providing an alternative Wikidata API (or SPARQL) search translator for citation target items that only fetches the information required. This alternative translator may be used by `Wikidata.getItems` when a `minimum` parameter is set to `true`, for example.
|
non_process
|
alternative wikidata search translator for citation target item metadata methods such as citations syncitemcitationswithwikidata rely on wikidata getitems to get citation target item metadata however wikidata getitems relies on custom wikidata api search translator which returns a full item most of this information will be discarded when saving the citation this means wikidata api will be called more times than needed and many unnecessary mbs will be transferred specially for batch operations consider providing an alternative wikidata api or sparql search translator for citation target items that only fetches the information required this alternative translator may be used by wikidata getitems when a minimum parameter is set to true for example
| 0
|
14,387
| 17,403,912,199
|
IssuesEvent
|
2021-08-03 01:14:27
|
googleapis/python-resource-settings
|
https://api.github.com/repos/googleapis/python-resource-settings
|
closed
|
GA Release
|
api: resourcesettings type: process
|
[GA release template](https://github.com/googleapis/google-cloud-common/issues/287)
## Required
- [x] 28 days elapsed since last beta release with new API surface. See [release history](https://github.com/googleapis/python-resource-settings/releases).
- [x] Server API is GA. See [API Release Notes](https://cloud.google.com/resource-manager/docs/release-notes#June_08_2021).
- [x] Package API is stable, and we can commit to backward compatibility
- [x] All dependencies are GA
|
1.0
|
GA Release - [GA release template](https://github.com/googleapis/google-cloud-common/issues/287)
## Required
- [x] 28 days elapsed since last beta release with new API surface. See [release history](https://github.com/googleapis/python-resource-settings/releases).
- [x] Server API is GA. See [API Release Notes](https://cloud.google.com/resource-manager/docs/release-notes#June_08_2021).
- [x] Package API is stable, and we can commit to backward compatibility
- [x] All dependencies are GA
|
process
|
ga release required days elapsed since last beta release with new api surface see server api is ga see package api is stable and we can commit to backward compatibility all dependencies are ga
| 1
|
14,366
| 17,390,123,738
|
IssuesEvent
|
2021-08-02 05:57:17
|
geneontology/go-ontology
|
https://api.github.com/repos/geneontology/go-ontology
|
closed
|
Obsoletion notice: GO:0044825 retroviral strand transfer activity
|
multi-species process obsoletion
|
Please provide as much information as you can:
* **GO term ID and Label**
GO:0044825 retroviral strand transfer activity
* **Reason for deprecation**
This represents part of an activity - 'part of' some 'retroviral integrase activity'
* **"Replace by" term (ID and label)**
If all annotations can safely be moved to that term
* **"Consider" term(s) (ID and label)**
GO:0044823 retroviral integrase activity
* **Are there annotations to this term?**
- How many EXP: 0 annotations
* **Are there mappings and cross references to this term? (InterPro, Keywords; check QuickGO cross-references section)**
* None
* **Is this term in a subset? (check the AmiGO page for that term)**
* No
* **Any other information**
|
1.0
|
Obsoletion notice: GO:0044825 retroviral strand transfer activity - Please provide as much information as you can:
* **GO term ID and Label**
GO:0044825 retroviral strand transfer activity
* **Reason for deprecation**
This represents part of an activity - 'part of' some 'retroviral integrase activity'
* **"Replace by" term (ID and label)**
If all annotations can safely be moved to that term
* **"Consider" term(s) (ID and label)**
GO:0044823 retroviral integrase activity
* **Are there annotations to this term?**
- How many EXP: 0 annotations
* **Are there mappings and cross references to this term? (InterPro, Keywords; check QuickGO cross-references section)**
* None
* **Is this term in a subset? (check the AmiGO page for that term)**
* No
* **Any other information**
|
process
|
obsoletion notice go retroviral strand transfer activity please provide as much information as you can go term id and label go retroviral strand transfer activity reason for deprecation this represents part of an activity part of some retroviral integrase activity replace by term id and label if all annotations can safely be moved to that term consider term s id and label go retroviral integrase activity are there annotations to this term how many exp annotations are there mappings and cross references to this term interpro keywords check quickgo cross references section none is this term in a subset check the amigo page for that term no any other information
| 1
|
9,292
| 12,306,094,285
|
IssuesEvent
|
2020-05-12 00:22:55
|
bazelbuild/rules_python
|
https://api.github.com/repos/bazelbuild/rules_python
|
closed
|
Release 0.0.2
|
P1 type: process
|
Targeting Bazel 2.0 (or sooner).
This means we want compatibility with any incompatible changes in 2.0. Then we'll aim to be included in the Bazel Federation release that'll come out shortly after Bazel 2.0.
|
1.0
|
Release 0.0.2 - Targeting Bazel 2.0 (or sooner).
This means we want compatibility with any incompatible changes in 2.0. Then we'll aim to be included in the Bazel Federation release that'll come out shortly after Bazel 2.0.
|
process
|
release targeting bazel or sooner this means we want compatibility with any incompatible changes in then we ll aim to be included in the bazel federation release that ll come out shortly after bazel
| 1
|
6,866
| 9,999,249,660
|
IssuesEvent
|
2019-07-12 10:09:10
|
OI-wiki/OI-wiki
|
https://api.github.com/repos/OI-wiki/OI-wiki
|
closed
|
Inconsistent code formatting on the Fenwick tree (binary indexed tree) page
|
需要处理 / Need Processing 需要帮助修正格式 / Help needed for format
|
First of all, you are very welcome to open an issue for OI Wiki. Before submitting, please take the time to read through this template. Thank you for your cooperation!
- [x] Please confirm that you have read the [F.A.Q.](https://oi-wiki.org/intro/faq/) (after confirming, tick the option / fill it in as `[x]`)
- What is the problem? (a screenshot is best)
The code formatting on the Fenwick tree page is inconsistent: braces are sometimes placed on a new line and sometimes not.
https://github.com/24OI/OI-wiki/blob/master/docs/ds/bit.md#L72
- Are you already working on a fix?
No
- How can it be reproduced?
|
1.0
|
Inconsistent code formatting on the Fenwick tree (binary indexed tree) page - First of all, you are very welcome to open an issue for OI Wiki. Before submitting, please take the time to read through this template. Thank you for your cooperation!
- [x] Please confirm that you have read the [F.A.Q.](https://oi-wiki.org/intro/faq/) (after confirming, tick the option / fill it in as `[x]`)
- What is the problem? (a screenshot is best)
The code formatting on the Fenwick tree page is inconsistent: braces are sometimes placed on a new line and sometimes not.
https://github.com/24OI/OI-wiki/blob/master/docs/ds/bit.md#L72
- Are you already working on a fix?
No
- How can it be reproduced?
|
process
|
树状数组页面代码格式混乱 首先,十分欢迎你来给 oi wiki 开 issue,在提交之前,请花时间阅读一下这个模板的内容,谢谢合作! 请确认已经读过了 填为 ) 是出现了什么问题?(最好截图) 树状数组页面代码格式混乱,大括号时而换行,时而不换行。 你是否正在着手修复? 没有 如何复现?
| 1
|
347
| 2,793,293,628
|
IssuesEvent
|
2015-05-11 09:57:47
|
ecodistrict/IDSSDashboard
|
https://api.github.com/repos/ecodistrict/IDSSDashboard
|
closed
|
Tab should be called ‘develop variant’ instead of ‘alternatives’ (right? It is confusing ☺)
|
form feedback 09102014 process step: develop alternatives
|
Tab should be called ‘develop variant’ instead of ‘alternatives’ (right? It is confusing ☺)
|
1.0
|
Tab should be called ‘develop variant’ instead of ‘alternatives’ (right? It is confusing ☺) - Tab should be called ‘develop variant’ instead of ‘alternatives’ (right? It is confusing ☺)
|
process
|
tab should be called ‘develop variant’ instead of ‘alternatives’ right it is confusing ☺ tab should be called ‘develop variant’ instead of ‘alternatives’ right it is confusing ☺
| 1
|
22,042
| 30,564,400,832
|
IssuesEvent
|
2023-07-20 16:38:43
|
AvaloniaUI/Avalonia
|
https://api.github.com/repos/AvaloniaUI/Avalonia
|
closed
|
GlyphRun - Index was outside the bounds of the array on Android
|
bug os-android area-textprocessing
|
**Describe the bug**
Run my app on an Android mobile device.
Use Material.Avalonia.
When typing in the textbox, the app crashes.
**To Reproduce**
Steps to reproduce the behavior:
Run app on android. Print to textbox.
**Expected behavior**
text in textbox
- OS: Android 13. OneUI 5.1
- Version 11.0.0-rc1
Exception: Index was outside the bounds of the array.
```
at Avalonia.Media.GlyphRun.CreateGlyphRunMetrics() in /_/src/Avalonia.Base/Media/GlyphRun.cs:line 643
at Avalonia.Media.GlyphRun.get_Metrics() in /_/src/Avalonia.Base/Media/GlyphRun.cs:line 164
at Avalonia.Media.GlyphRun.get_BaselineOrigin() in /_/src/Avalonia.Base/Media/GlyphRun.cs:line 171
at Avalonia.Media.GlyphRun.CreateGlyphRunImpl() in /_/src/Avalonia.Base/Media/GlyphRun.cs:line 828
at Avalonia.Media.GlyphRun.get_PlatformImpl() in /_/src/Avalonia.Base/Media/GlyphRun.cs:line 220
at Avalonia.Media.GlyphRun.get_InkBounds() in /_/src/Avalonia.Base/Media/GlyphRun.cs:line 158
at Avalonia.Media.TextFormatting.TextLineImpl.CreateLineMetrics() in /_/src/Avalonia.Base/Media/TextFormatting/TextLineImpl.cs:line 1287
at Avalonia.Media.TextFormatting.TextLineImpl.FinalizeLine() in /_/src/Avalonia.Base/Media/TextFormatting/TextLineImpl.cs:line 1000
at Avalonia.Media.TextFormatting.TextFormatterImpl.PerformTextWrapping(List`1 textRuns, Boolean canReuseTextRunList, Int32 firstTextSourceIndex, Double paragraphWidth, TextParagraphProperties paragraphProperties, FlowDirection resolvedFlowDirection, TextLineBreak currentLineBreak, FormattingObjectPool objectPool) in /_/src/Avalonia.Base/Media/TextFormatting/TextFormatterImpl.cs:line 889
at Avalonia.Media.TextFormatting.TextFormatterImpl.FormatLine(ITextSource textSource, Int32 firstTextSourceIndex, Double paragraphWidth, TextParagraphProperties paragraphProperties, TextLineBreak previousLineBreak) in /_/src/Avalonia.Base/Media/TextFormatting/TextFormatterImpl.cs:line 72
at Avalonia.Media.TextFormatting.TextLayout.CreateTextLines() in /_/src/Avalonia.Base/Media/TextFormatting/TextLayout.cs:line 638
at Avalonia.Media.TextFormatting.TextLayout..ctor(String text, Typeface typeface, Double fontSize, IBrush foreground, TextAlignment textAlignment, TextWrapping textWrapping, TextTrimming textTrimming, TextDecorationCollection textDecorations, FlowDirection flowDirection, Double maxWidth, Double maxHeight, Double lineHeight, Double letterSpacing, Int32 maxLines, IReadOnlyList`1 textStyleOverrides) in /_/src/Avalonia.Base/Media/TextFormatting/TextLayout.cs:line 69
at Avalonia.Controls.Presenters.TextPresenter.CreateTextLayoutInternal(Size constraint, String text, Typeface typeface, IReadOnlyList`1 textStyleOverrides) in /_/src/Avalonia.Controls/Presenters/TextPresenter.cs:line 321
at Avalonia.Controls.Presenters.TextPresenter.CreateTextLayout() in /_/src/Avalonia.Controls/Presenters/TextPresenter.cs:line 551
at Avalonia.Controls.Presenters.TextPresenter.get_TextLayout() in /_/src/Avalonia.Controls/Presenters/TextPresenter.cs:line 248
at Avalonia.Controls.Presenters.TextPresenter.MoveCaretToTextPosition(Int32 textPosition, Boolean trailingEdge) in /_/src/Avalonia.Controls/Presenters/TextPresenter.cs:line 610
at Avalonia.Controls.Presenters.TextPresenter.OnPropertyChanged(AvaloniaPropertyChangedEventArgs change) in /_/src/Avalonia.Controls/Presenters/TextPresenter.cs:line 840
at Avalonia.AvaloniaObject.OnPropertyChangedCore(AvaloniaPropertyChangedEventArgs change) in /_/src/Avalonia.Base/AvaloniaObject.cs:line 649
at Avalonia.Animation.Animatable.OnPropertyChangedCore(AvaloniaPropertyChangedEventArgs change) in /_/src/Avalonia.Base/Animation/Animatable.cs:line 185
at Avalonia.AvaloniaObject.RaisePropertyChanged[Int32](AvaloniaProperty`1 property, Optional`1 oldValue, BindingValue`1 newValue, BindingPriority priority, Boolean isEffectiveValue) in /_/src/Avalonia.Base/AvaloniaObject.cs:line 700
at Avalonia.PropertyStore.EffectiveValue`1[[System.Int32, System.Private.CoreLib, Version=7.0.0.0, Culture=neutral, PublicKeyToken=7cec85d7bea7798e]].SetAndRaiseCore(ValueStore owner, StyledProperty`1 property, Int32 value, BindingPriority priority, Boolean isOverriddenCurrentValue, Boolean isCoercedDefaultValue) in /_/src/Avalonia.Base/PropertyStore/EffectiveValue`1.cs:line 235
at Avalonia.PropertyStore.EffectiveValue`1[[System.Int32, System.Private.CoreLib, Version=7.0.0.0, Culture=neutral, PublicKeyToken=7cec85d7bea7798e]].SetAndRaise(ValueStore owner, IValueEntry value, BindingPriority priority) in /_/src/Avalonia.Base/PropertyStore/EffectiveValue`1.cs:line 60
at Avalonia.PropertyStore.ValueStore.ReevaluateEffectiveValue(AvaloniaProperty property, EffectiveValue current, IValueEntry changedValueEntry, Boolean ignoreLocalValue) in /_/src/Avalonia.Base/PropertyStore/ValueStore.cs:line 850
at Avalonia.PropertyStore.ValueStore.OnBindingValueChanged(IValueEntry entry, BindingPriority priority) in /_/src/Avalonia.Base/PropertyStore/ValueStore.cs:line 419
at Avalonia.PropertyStore.BindingEntryBase`2[[System.Int32, System.Private.CoreLib, Version=7.0.0.0, Culture=neutral, PublicKeyToken=7cec85d7bea7798e],[System.Object, System.Private.CoreLib, Version=7.0.0.0, Culture=neutral, PublicKeyToken=7cec85d7bea7798e]].<SetValue>g__Execute|38_0(BindingEntryBase`2 instance, BindingValue`1 value) in /_/src/Avalonia.Base/PropertyStore/BindingEntryBase.cs:line 162
at Avalonia.PropertyStore.BindingEntryBase`2[[System.Int32, System.Private.CoreLib, Version=7.0.0.0, Culture=neutral, PublicKeyToken=7cec85d7bea7798e],[System.Object, System.Private.CoreLib, Version=7.0.0.0, Culture=neutral, PublicKeyToken=7cec85d7bea7798e]].SetValue(BindingValue`1 value) in /_/src/Avalonia.Base/PropertyStore/BindingEntryBase.cs:line 171
at Avalonia.PropertyStore.BindingEntryBase`2[[System.Int32, System.Private.CoreLib, Version=7.0.0.0, Culture=neutral, PublicKeyToken=7cec85d7bea7798e],[System.Object, System.Private.CoreLib, Version=7.0.0.0, Culture=neutral, PublicKeyToken=7cec85d7bea7798e]].OnNext(Object value) in /_/src/Avalonia.Base/PropertyStore/BindingEntryBase.cs:line 99
at Avalonia.Data.TemplateBinding.PublishValue() in /_/src/Avalonia.Base/Data/TemplateBinding.cs:line 175
at Avalonia.Data.TemplateBinding.TemplatedParentPropertyChanged(Object sender, AvaloniaPropertyChangedEventArgs e) in /_/src/Avalonia.Base/Data/TemplateBinding.cs:line 212
at Avalonia.AvaloniaObject.RaisePropertyChanged[Int32](AvaloniaProperty`1 property, Optional`1 oldValue, BindingValue`1 newValue, BindingPriority priority, Boolean isEffectiveValue) in /_/src/Avalonia.Base/AvaloniaObject.cs:line 705
at Avalonia.PropertyStore.EffectiveValue`1[[System.Int32, System.Private.CoreLib, Version=7.0.0.0, Culture=neutral, PublicKeyToken=7cec85d7bea7798e]].SetAndRaiseCore(ValueStore owner, StyledProperty`1 property, Int32 value, BindingPriority priority, Boolean isOverriddenCurrentValue, Boolean isCoercedDefaultValue) in /_/src/Avalonia.Base/PropertyStore/EffectiveValue`1.cs:line 235
at Avalonia.PropertyStore.EffectiveValue`1[[System.Int32, System.Private.CoreLib, Version=7.0.0.0, Culture=neutral, PublicKeyToken=7cec85d7bea7798e]].SetCurrentValueAndRaise(ValueStore owner, StyledProperty`1 property, Int32 value) in /_/src/Avalonia.Base/PropertyStore/EffectiveValue`1.cs:line 82
at Avalonia.PropertyStore.ValueStore.SetCurrentValue[Int32](StyledProperty`1 property, Int32 value) in /_/src/Avalonia.Base/PropertyStore/ValueStore.cs:line 205
at Avalonia.AvaloniaObject.SetCurrentValue[Int32](StyledProperty`1 property, Int32 value) in /_/src/Avalonia.Base/AvaloniaObject.cs:line 405
at Avalonia.Controls.TextBox.OnSelectionEndChanged(AvaloniaPropertyChangedEventArgs e) in /_/src/Avalonia.Controls/TextBox.cs:line 484
at Avalonia.Controls.TextBox.OnPropertyChanged(AvaloniaPropertyChangedEventArgs change) in /_/src/Avalonia.Controls/TextBox.cs:line 854
at Avalonia.AvaloniaObject.OnPropertyChangedCore(AvaloniaPropertyChangedEventArgs change) in /_/src/Avalonia.Base/AvaloniaObject.cs:line 649
at Avalonia.Animation.Animatable.OnPropertyChangedCore(AvaloniaPropertyChangedEventArgs change) in /_/src/Avalonia.Base/Animation/Animatable.cs:line 185
at Avalonia.AvaloniaObject.RaisePropertyChanged[Int32](AvaloniaProperty`1 property, Optional`1 oldValue, BindingValue`1 newValue, BindingPriority priority, Boolean isEffectiveValue) in /_/src/Avalonia.Base/AvaloniaObject.cs:line 700
at Avalonia.PropertyStore.EffectiveValue`1[[System.Int32, System.Private.CoreLib, Version=7.0.0.0, Culture=neutral, PublicKeyToken=7cec85d7bea7798e]].SetAndRaiseCore(ValueStore owner, StyledProperty`1 property, Int32 value, BindingPriority priority, Boolean isOverriddenCurrentValue, Boolean isCoercedDefaultValue) in /_/src/Avalonia.Base/PropertyStore/EffectiveValue`1.cs:line 235
at Avalonia.PropertyStore.EffectiveValue`1[[System.Int32, System.Private.CoreLib, Version=7.0.0.0, Culture=neutral, PublicKeyToken=7cec85d7bea7798e]].SetLocalValueAndRaise(ValueStore owner, StyledProperty`1 property, Int32 value) in /_/src/Avalonia.Base/PropertyStore/EffectiveValue`1.cs:line 74
at Avalonia.PropertyStore.ValueStore.SetLocalValue[Int32](StyledProperty`1 property, Int32 value) in /_/src/Avalonia.Base/PropertyStore/ValueStore.cs:line 220
at Avalonia.PropertyStore.ValueStore.SetValue[Int32](StyledProperty`1 property, Int32 value, BindingPriority priority) in /_/src/Avalonia.Base/PropertyStore/ValueStore.cs:line 196
at Avalonia.AvaloniaObject.SetValue[Int32](StyledProperty`1 property, Int32 value, BindingPriority priority) in /_/src/Avalonia.Base/AvaloniaObject.cs:line 336
at Avalonia.Controls.TextBox.set_SelectionEnd(Int32 value) in /_/src/Avalonia.Controls/TextBox.cs:line 474
at Avalonia.Controls.TextBoxTextInputMethodClient.TextEditable_SelectionChanged(Object sender, EventArgs e) in /_/src/Avalonia.Controls/TextBoxTextInputMethodClient.cs:line 118
at Avalonia.Android.InputEditable.EndBatchEdit() in /_/src/Android/Avalonia.Android/InputEditable.cs:line 115
at Avalonia.Android.Platform.SkiaPlatform.AvaloniaInputConnection.EndBatchEdit() in /_/src/Android/Avalonia.Android/Platform/SkiaPlatform/TopLevelImpl.cs:line 488
at Android.Views.InputMethods.BaseInputConnection.n_EndBatchEdit(IntPtr jnienv, IntPtr native__this) in /Users/runner/work/1/s/xamarin-android/src/Mono.Android/obj/Release/net7.0/android-33/mcw/Android.Views.InputMethods.BaseInputConnection.cs:line 474
at Android.Runtime.JNINativeWrapper.Wrap_JniMarshal_PP_Z(_JniMarshal_PP_Z callback, IntPtr jnienv, IntPtr klazz) in /Users/runner/work/1/s/xamarin-android/src/Mono.Android/Android.Runtime/JNINativeWrapper.g.cs:line 44
```
|
1.0
|
GlyphRun - Index was outside the bounds of the array on Android - **Describe the bug**
Run my app on an Android mobile device.
Use Material.Avalonia.
When typing in the textbox, the app crashes.
**To Reproduce**
Steps to reproduce the behavior:
Run app on android. Print to textbox.
**Expected behavior**
text in textbox
- OS: Android 13. OneUI 5.1
- Version 11.0.0-rc1
Exception: Index was outside the bounds of the array.
```
at Avalonia.Media.GlyphRun.CreateGlyphRunMetrics() in /_/src/Avalonia.Base/Media/GlyphRun.cs:line 643
at Avalonia.Media.GlyphRun.get_Metrics() in /_/src/Avalonia.Base/Media/GlyphRun.cs:line 164
at Avalonia.Media.GlyphRun.get_BaselineOrigin() in /_/src/Avalonia.Base/Media/GlyphRun.cs:line 171
at Avalonia.Media.GlyphRun.CreateGlyphRunImpl() in /_/src/Avalonia.Base/Media/GlyphRun.cs:line 828
at Avalonia.Media.GlyphRun.get_PlatformImpl() in /_/src/Avalonia.Base/Media/GlyphRun.cs:line 220
at Avalonia.Media.GlyphRun.get_InkBounds() in /_/src/Avalonia.Base/Media/GlyphRun.cs:line 158
at Avalonia.Media.TextFormatting.TextLineImpl.CreateLineMetrics() in /_/src/Avalonia.Base/Media/TextFormatting/TextLineImpl.cs:line 1287
at Avalonia.Media.TextFormatting.TextLineImpl.FinalizeLine() in /_/src/Avalonia.Base/Media/TextFormatting/TextLineImpl.cs:line 1000
at Avalonia.Media.TextFormatting.TextFormatterImpl.PerformTextWrapping(List`1 textRuns, Boolean canReuseTextRunList, Int32 firstTextSourceIndex, Double paragraphWidth, TextParagraphProperties paragraphProperties, FlowDirection resolvedFlowDirection, TextLineBreak currentLineBreak, FormattingObjectPool objectPool) in /_/src/Avalonia.Base/Media/TextFormatting/TextFormatterImpl.cs:line 889
at Avalonia.Media.TextFormatting.TextFormatterImpl.FormatLine(ITextSource textSource, Int32 firstTextSourceIndex, Double paragraphWidth, TextParagraphProperties paragraphProperties, TextLineBreak previousLineBreak) in /_/src/Avalonia.Base/Media/TextFormatting/TextFormatterImpl.cs:line 72
at Avalonia.Media.TextFormatting.TextLayout.CreateTextLines() in /_/src/Avalonia.Base/Media/TextFormatting/TextLayout.cs:line 638
at Avalonia.Media.TextFormatting.TextLayout..ctor(String text, Typeface typeface, Double fontSize, IBrush foreground, TextAlignment textAlignment, TextWrapping textWrapping, TextTrimming textTrimming, TextDecorationCollection textDecorations, FlowDirection flowDirection, Double maxWidth, Double maxHeight, Double lineHeight, Double letterSpacing, Int32 maxLines, IReadOnlyList`1 textStyleOverrides) in /_/src/Avalonia.Base/Media/TextFormatting/TextLayout.cs:line 69
at Avalonia.Controls.Presenters.TextPresenter.CreateTextLayoutInternal(Size constraint, String text, Typeface typeface, IReadOnlyList`1 textStyleOverrides) in /_/src/Avalonia.Controls/Presenters/TextPresenter.cs:line 321
at Avalonia.Controls.Presenters.TextPresenter.CreateTextLayout() in /_/src/Avalonia.Controls/Presenters/TextPresenter.cs:line 551
at Avalonia.Controls.Presenters.TextPresenter.get_TextLayout() in /_/src/Avalonia.Controls/Presenters/TextPresenter.cs:line 248
at Avalonia.Controls.Presenters.TextPresenter.MoveCaretToTextPosition(Int32 textPosition, Boolean trailingEdge) in /_/src/Avalonia.Controls/Presenters/TextPresenter.cs:line 610
at Avalonia.Controls.Presenters.TextPresenter.OnPropertyChanged(AvaloniaPropertyChangedEventArgs change) in /_/src/Avalonia.Controls/Presenters/TextPresenter.cs:line 840
at Avalonia.AvaloniaObject.OnPropertyChangedCore(AvaloniaPropertyChangedEventArgs change) in /_/src/Avalonia.Base/AvaloniaObject.cs:line 649
at Avalonia.Animation.Animatable.OnPropertyChangedCore(AvaloniaPropertyChangedEventArgs change) in /_/src/Avalonia.Base/Animation/Animatable.cs:line 185
at Avalonia.AvaloniaObject.RaisePropertyChanged[Int32](AvaloniaProperty`1 property, Optional`1 oldValue, BindingValue`1 newValue, BindingPriority priority, Boolean isEffectiveValue) in /_/src/Avalonia.Base/AvaloniaObject.cs:line 700
at Avalonia.PropertyStore.EffectiveValue`1[[System.Int32, System.Private.CoreLib, Version=7.0.0.0, Culture=neutral, PublicKeyToken=7cec85d7bea7798e]].SetAndRaiseCore(ValueStore owner, StyledProperty`1 property, Int32 value, BindingPriority priority, Boolean isOverriddenCurrentValue, Boolean isCoercedDefaultValue) in /_/src/Avalonia.Base/PropertyStore/EffectiveValue`1.cs:line 235
at Avalonia.PropertyStore.EffectiveValue`1[[System.Int32, System.Private.CoreLib, Version=7.0.0.0, Culture=neutral, PublicKeyToken=7cec85d7bea7798e]].SetAndRaise(ValueStore owner, IValueEntry value, BindingPriority priority) in /_/src/Avalonia.Base/PropertyStore/EffectiveValue`1.cs:line 60
at Avalonia.PropertyStore.ValueStore.ReevaluateEffectiveValue(AvaloniaProperty property, EffectiveValue current, IValueEntry changedValueEntry, Boolean ignoreLocalValue) in /_/src/Avalonia.Base/PropertyStore/ValueStore.cs:line 850
at Avalonia.PropertyStore.ValueStore.OnBindingValueChanged(IValueEntry entry, BindingPriority priority) in /_/src/Avalonia.Base/PropertyStore/ValueStore.cs:line 419
at Avalonia.PropertyStore.BindingEntryBase`2[[System.Int32, System.Private.CoreLib, Version=7.0.0.0, Culture=neutral, PublicKeyToken=7cec85d7bea7798e],[System.Object, System.Private.CoreLib, Version=7.0.0.0, Culture=neutral, PublicKeyToken=7cec85d7bea7798e]].<SetValue>g__Execute|38_0(BindingEntryBase`2 instance, BindingValue`1 value) in /_/src/Avalonia.Base/PropertyStore/BindingEntryBase.cs:line 162
at Avalonia.PropertyStore.BindingEntryBase`2[[System.Int32, System.Private.CoreLib, Version=7.0.0.0, Culture=neutral, PublicKeyToken=7cec85d7bea7798e],[System.Object, System.Private.CoreLib, Version=7.0.0.0, Culture=neutral, PublicKeyToken=7cec85d7bea7798e]].SetValue(BindingValue`1 value) in /_/src/Avalonia.Base/PropertyStore/BindingEntryBase.cs:line 171
at Avalonia.PropertyStore.BindingEntryBase`2[[System.Int32, System.Private.CoreLib, Version=7.0.0.0, Culture=neutral, PublicKeyToken=7cec85d7bea7798e],[System.Object, System.Private.CoreLib, Version=7.0.0.0, Culture=neutral, PublicKeyToken=7cec85d7bea7798e]].OnNext(Object value) in /_/src/Avalonia.Base/PropertyStore/BindingEntryBase.cs:line 99
at Avalonia.Data.TemplateBinding.PublishValue() in /_/src/Avalonia.Base/Data/TemplateBinding.cs:line 175
at Avalonia.Data.TemplateBinding.TemplatedParentPropertyChanged(Object sender, AvaloniaPropertyChangedEventArgs e) in /_/src/Avalonia.Base/Data/TemplateBinding.cs:line 212
at Avalonia.AvaloniaObject.RaisePropertyChanged[Int32](AvaloniaProperty`1 property, Optional`1 oldValue, BindingValue`1 newValue, BindingPriority priority, Boolean isEffectiveValue) in /_/src/Avalonia.Base/AvaloniaObject.cs:line 705
at Avalonia.PropertyStore.EffectiveValue`1[[System.Int32, System.Private.CoreLib, Version=7.0.0.0, Culture=neutral, PublicKeyToken=7cec85d7bea7798e]].SetAndRaiseCore(ValueStore owner, StyledProperty`1 property, Int32 value, BindingPriority priority, Boolean isOverriddenCurrentValue, Boolean isCoercedDefaultValue) in /_/src/Avalonia.Base/PropertyStore/EffectiveValue`1.cs:line 235
at Avalonia.PropertyStore.EffectiveValue`1[[System.Int32, System.Private.CoreLib, Version=7.0.0.0, Culture=neutral, PublicKeyToken=7cec85d7bea7798e]].SetCurrentValueAndRaise(ValueStore owner, StyledProperty`1 property, Int32 value) in /_/src/Avalonia.Base/PropertyStore/EffectiveValue`1.cs:line 82
at Avalonia.PropertyStore.ValueStore.SetCurrentValue[Int32](StyledProperty`1 property, Int32 value) in /_/src/Avalonia.Base/PropertyStore/ValueStore.cs:line 205
at Avalonia.AvaloniaObject.SetCurrentValue[Int32](StyledProperty`1 property, Int32 value) in /_/src/Avalonia.Base/AvaloniaObject.cs:line 405
at Avalonia.Controls.TextBox.OnSelectionEndChanged(AvaloniaPropertyChangedEventArgs e) in /_/src/Avalonia.Controls/TextBox.cs:line 484
at Avalonia.Controls.TextBox.OnPropertyChanged(AvaloniaPropertyChangedEventArgs change) in /_/src/Avalonia.Controls/TextBox.cs:line 854
at Avalonia.AvaloniaObject.OnPropertyChangedCore(AvaloniaPropertyChangedEventArgs change) in /_/src/Avalonia.Base/AvaloniaObject.cs:line 649
at Avalonia.Animation.Animatable.OnPropertyChangedCore(AvaloniaPropertyChangedEventArgs change) in /_/src/Avalonia.Base/Animation/Animatable.cs:line 185
at Avalonia.AvaloniaObject.RaisePropertyChanged[Int32](AvaloniaProperty`1 property, Optional`1 oldValue, BindingValue`1 newValue, BindingPriority priority, Boolean isEffectiveValue) in /_/src/Avalonia.Base/AvaloniaObject.cs:line 700
at Avalonia.PropertyStore.EffectiveValue`1[[System.Int32, System.Private.CoreLib, Version=7.0.0.0, Culture=neutral, PublicKeyToken=7cec85d7bea7798e]].SetAndRaiseCore(ValueStore owner, StyledProperty`1 property, Int32 value, BindingPriority priority, Boolean isOverriddenCurrentValue, Boolean isCoercedDefaultValue) in /_/src/Avalonia.Base/PropertyStore/EffectiveValue`1.cs:line 235
at Avalonia.PropertyStore.EffectiveValue`1[[System.Int32, System.Private.CoreLib, Version=7.0.0.0, Culture=neutral, PublicKeyToken=7cec85d7bea7798e]].SetLocalValueAndRaise(ValueStore owner, StyledProperty`1 property, Int32 value) in /_/src/Avalonia.Base/PropertyStore/EffectiveValue`1.cs:line 74
at Avalonia.PropertyStore.ValueStore.SetLocalValue[Int32](StyledProperty`1 property, Int32 value) in /_/src/Avalonia.Base/PropertyStore/ValueStore.cs:line 220
at Avalonia.PropertyStore.ValueStore.SetValue[Int32](StyledProperty`1 property, Int32 value, BindingPriority priority) in /_/src/Avalonia.Base/PropertyStore/ValueStore.cs:line 196
at Avalonia.AvaloniaObject.SetValue[Int32](StyledProperty`1 property, Int32 value, BindingPriority priority) in /_/src/Avalonia.Base/AvaloniaObject.cs:line 336
at Avalonia.Controls.TextBox.set_SelectionEnd(Int32 value) in /_/src/Avalonia.Controls/TextBox.cs:line 474
at Avalonia.Controls.TextBoxTextInputMethodClient.TextEditable_SelectionChanged(Object sender, EventArgs e) in /_/src/Avalonia.Controls/TextBoxTextInputMethodClient.cs:line 118
at Avalonia.Android.InputEditable.EndBatchEdit() in /_/src/Android/Avalonia.Android/InputEditable.cs:line 115
at Avalonia.Android.Platform.SkiaPlatform.AvaloniaInputConnection.EndBatchEdit() in /_/src/Android/Avalonia.Android/Platform/SkiaPlatform/TopLevelImpl.cs:line 488
at Android.Views.InputMethods.BaseInputConnection.n_EndBatchEdit(IntPtr jnienv, IntPtr native__this) in /Users/runner/work/1/s/xamarin-android/src/Mono.Android/obj/Release/net7.0/android-33/mcw/Android.Views.InputMethods.BaseInputConnection.cs:line 474
at Android.Runtime.JNINativeWrapper.Wrap_JniMarshal_PP_Z(_JniMarshal_PP_Z callback, IntPtr jnienv, IntPtr klazz) in /Users/runner/work/1/s/xamarin-android/src/Mono.Android/Android.Runtime/JNINativeWrapper.g.cs:line 44
```
|
process
|
glyphrun index was outside the bounds of the array on android describe the bug run my app on android mobile device use material avalonia when print in textbox app crush to reproduce steps to reproduce the behavior run app on android print to textbox expected behavior text in textbox os android oneui version exception index was outside the bounds of the array at avalonia media glyphrun createglyphrunmetrics in src avalonia base media glyphrun cs line at avalonia media glyphrun get metrics in src avalonia base media glyphrun cs line at avalonia media glyphrun get baselineorigin in src avalonia base media glyphrun cs line at avalonia media glyphrun createglyphrunimpl in src avalonia base media glyphrun cs line at avalonia media glyphrun get platformimpl in src avalonia base media glyphrun cs line at avalonia media glyphrun get inkbounds in src avalonia base media glyphrun cs line at avalonia media textformatting textlineimpl createlinemetrics in src avalonia base media textformatting textlineimpl cs line at avalonia media textformatting textlineimpl finalizeline in src avalonia base media textformatting textlineimpl cs line at avalonia media textformatting textformatterimpl performtextwrapping list textruns boolean canreusetextrunlist firsttextsourceindex double paragraphwidth textparagraphproperties paragraphproperties flowdirection resolvedflowdirection textlinebreak currentlinebreak formattingobjectpool objectpool in src avalonia base media textformatting textformatterimpl cs line at avalonia media textformatting textformatterimpl formatline itextsource textsource firsttextsourceindex double paragraphwidth textparagraphproperties paragraphproperties textlinebreak previouslinebreak in src avalonia base media textformatting textformatterimpl cs line at avalonia media textformatting textlayout createtextlines in src avalonia base media textformatting textlayout cs line at avalonia media textformatting textlayout ctor string text typeface typeface double fontsize ibrush foreground textalignment textalignment textwrapping textwrapping texttrimming texttrimming textdecorationcollection textdecorations flowdirection flowdirection double maxwidth double maxheight double lineheight double letterspacing maxlines ireadonlylist textstyleoverrides in src avalonia base media textformatting textlayout cs line at avalonia controls presenters textpresenter createtextlayoutinternal size constraint string text typeface typeface ireadonlylist textstyleoverrides in src avalonia controls presenters textpresenter cs line at avalonia controls presenters textpresenter createtextlayout in src avalonia controls presenters textpresenter cs line at avalonia controls presenters textpresenter get textlayout in src avalonia controls presenters textpresenter cs line at avalonia controls presenters textpresenter movecarettotextposition textposition boolean trailingedge in src avalonia controls presenters textpresenter cs line at avalonia controls presenters textpresenter onpropertychanged avaloniapropertychangedeventargs change in src avalonia controls presenters textpresenter cs line at avalonia avaloniaobject onpropertychangedcore avaloniapropertychangedeventargs change in src avalonia base avaloniaobject cs line at avalonia animation animatable onpropertychangedcore avaloniapropertychangedeventargs change in src avalonia base animation animatable cs line at avalonia avaloniaobject raisepropertychanged avaloniaproperty property optional oldvalue bindingvalue newvalue bindingpriority priority boolean iseffectivevalue in 
src avalonia base avaloniaobject cs line at avalonia propertystore effectivevalue setandraisecore valuestore owner styledproperty property value bindingpriority priority boolean isoverriddencurrentvalue boolean iscoerceddefaultvalue in src avalonia base propertystore effectivevalue cs line at avalonia propertystore effectivevalue setandraise valuestore owner ivalueentry value bindingpriority priority in src avalonia base propertystore effectivevalue cs line at avalonia propertystore valuestore reevaluateeffectivevalue avaloniaproperty property effectivevalue current ivalueentry changedvalueentry boolean ignorelocalvalue in src avalonia base propertystore valuestore cs line at avalonia propertystore valuestore onbindingvaluechanged ivalueentry entry bindingpriority priority in src avalonia base propertystore valuestore cs line at avalonia propertystore bindingentrybase g execute bindingentrybase instance bindingvalue value in src avalonia base propertystore bindingentrybase cs line at avalonia propertystore bindingentrybase setvalue bindingvalue value in src avalonia base propertystore bindingentrybase cs line at avalonia propertystore bindingentrybase onnext object value in src avalonia base propertystore bindingentrybase cs line at avalonia data templatebinding publishvalue in src avalonia base data templatebinding cs line at avalonia data templatebinding templatedparentpropertychanged object sender avaloniapropertychangedeventargs e in src avalonia base data templatebinding cs line at avalonia avaloniaobject raisepropertychanged avaloniaproperty property optional oldvalue bindingvalue newvalue bindingpriority priority boolean iseffectivevalue in src avalonia base avaloniaobject cs line at avalonia propertystore effectivevalue setandraisecore valuestore owner styledproperty property value bindingpriority priority boolean isoverriddencurrentvalue boolean iscoerceddefaultvalue in src avalonia base propertystore effectivevalue cs line at avalonia propertystore effectivevalue setcurrentvalueandraise valuestore owner styledproperty property value in src avalonia base propertystore effectivevalue cs line at avalonia propertystore valuestore setcurrentvalue styledproperty property value in src avalonia base propertystore valuestore cs line at avalonia avaloniaobject setcurrentvalue styledproperty property value in src avalonia base avaloniaobject cs line at avalonia controls textbox onselectionendchanged avaloniapropertychangedeventargs e in src avalonia controls textbox cs line at avalonia controls textbox onpropertychanged avaloniapropertychangedeventargs change in src avalonia controls textbox cs line at avalonia avaloniaobject onpropertychangedcore avaloniapropertychangedeventargs change in src avalonia base avaloniaobject cs line at avalonia animation animatable onpropertychangedcore avaloniapropertychangedeventargs change in src avalonia base animation animatable cs line at avalonia avaloniaobject raisepropertychanged avaloniaproperty property optional oldvalue bindingvalue newvalue bindingpriority priority boolean iseffectivevalue in src avalonia base avaloniaobject cs line at avalonia propertystore effectivevalue setandraisecore valuestore owner styledproperty property value bindingpriority priority boolean isoverriddencurrentvalue boolean iscoerceddefaultvalue in src avalonia base propertystore effectivevalue cs line at avalonia propertystore effectivevalue setlocalvalueandraise valuestore owner styledproperty property value in src avalonia base propertystore effectivevalue cs line at 
avalonia propertystore valuestore setlocalvalue styledproperty property value in src avalonia base propertystore valuestore cs line at avalonia propertystore valuestore setvalue styledproperty property value bindingpriority priority in src avalonia base propertystore valuestore cs line at avalonia avaloniaobject setvalue styledproperty property value bindingpriority priority in src avalonia base avaloniaobject cs line at avalonia controls textbox set selectionend value in src avalonia controls textbox cs line at avalonia controls textboxtextinputmethodclient texteditable selectionchanged object sender eventargs e in src avalonia controls textboxtextinputmethodclient cs line at avalonia android inputeditable endbatchedit in src android avalonia android inputeditable cs line at avalonia android platform skiaplatform avaloniainputconnection endbatchedit in src android avalonia android platform skiaplatform toplevelimpl cs line at android views inputmethods baseinputconnection n endbatchedit intptr jnienv intptr native this in users runner work s xamarin android src mono android obj release android mcw android views inputmethods baseinputconnection cs line at android runtime jninativewrapper wrap jnimarshal pp z jnimarshal pp z callback intptr jnienv intptr klazz in users runner work s xamarin android src mono android android runtime jninativewrapper g cs line
| 1
|
17,387
| 23,205,965,840
|
IssuesEvent
|
2022-08-02 05:21:24
|
u4gbot/status.webodm.net
|
https://api.github.com/repos/u4gbot/status.webodm.net
|
closed
|
spark1.webodm.net was unavailable for 10 minutes on 8/2/22 1am EST for a planned storage upgrade
|
status processing-network-spark1
|
It is now back online.
|
1.0
|
spark1.webodm.net was unavailable for 10 minutes on 8/2/22 1am EST for a planned storage upgrade - It is now back online.
|
process
|
webodm net was unavailable for minutes on est for a planned storage upgrade it is now back online
| 1
|
433,100
| 30,312,318,029
|
IssuesEvent
|
2023-07-10 13:34:09
|
dotnet/runtime
|
https://api.github.com/repos/dotnet/runtime
|
closed
|
Example of building and testing libraries doesn't work
|
documentation area-Infrastructure-libraries needs-further-triage in-pr
|
https://github.com/dotnet/runtime/blob/66def3a7024d8c860c047a9b1681085c591d46ca/docs/workflow/building/libraries/README.md?plain=1#L40-L48
There are two problems:
1. AFAIK `command1 && command2` runs sequentially but `command1 & command2` runs simultaneously. So the last line would not work as expected.
2. `System.Text.RegularExpressions` is unusual in that there is no xxx.csproj in its `tests` directory:
https://github.com/dotnet/runtime/tree/66def3a7024d8c860c047a9b1681085c591d46ca/src/libraries/System.Text.RegularExpressions/tests
|
1.0
|
Example of building and testing libraries doesn't work - https://github.com/dotnet/runtime/blob/66def3a7024d8c860c047a9b1681085c591d46ca/docs/workflow/building/libraries/README.md?plain=1#L40-L48
There are two problems:
1. AFAIK `command1 && command2` runs sequentially but `command1 & command2` runs simultaneously. So the last line would not work as expected.
2. `System.Text.RegularExpressions` is unusual in that there is no xxx.csproj in its `tests` directory:
https://github.com/dotnet/runtime/tree/66def3a7024d8c860c047a9b1681085c591d46ca/src/libraries/System.Text.RegularExpressions/tests
|
non_process
|
example of building and testing libraries doesn t work there are two problems afaik runs sequentially but runs simultaneously so the last line would not work as expected system text regularexpressions is uncommon that there s not a xxx csproj in directory tests
| 0
|
648,042
| 21,163,775,128
|
IssuesEvent
|
2022-04-07 11:50:35
|
woocommerce/woocommerce-ios
|
https://api.github.com/repos/woocommerce/woocommerce-ios
|
closed
|
Remove `axisOfTwoSubviews` helper
|
type: task good first issue priority: low
|
unrelated: this doesn't seem to be used in the codebase anymore, we can delete this at some point 🙂
_Originally posted by @jaclync in https://github.com/woocommerce/woocommerce-ios/pull/5656#discussion_r784494673_
|
1.0
|
Remove `axisOfTwoSubviews` helper - unrelated: this doesn't seem to be used in the codebase anymore, we can delete this at some point 🙂
_Originally posted by @jaclync in https://github.com/woocommerce/woocommerce-ios/pull/5656#discussion_r784494673_
|
non_process
|
remove axisoftwosubviews helper unrelated this doesn t seem to be used in the codebase anymore we can delete this at some point 🙂 originally posted by jaclync in
| 0
|
8,441
| 11,610,587,283
|
IssuesEvent
|
2020-02-26 03:33:50
|
metabase/metabase
|
https://api.github.com/repos/metabase/metabase
|
closed
|
Druid v0.17.0 Error for a simple query
|
Database/Druid Priority:P1 Querying/GUI Querying/Processor Type:Bug
|
java.lang.Exception: Cannot construct instance of `org.apache.druid.query.select.SelectQuery`, problem: The 'select' query has been removed, use 'scan' instead. See https://druid.apache.org/docs/latest/querying/select-query.html for more details. at [Source: (org.eclipse.jetty.server.HttpInputOverHTTP); line: 1, column: 32] Error class:


what should I do?
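For context on what the error message is asking for, the sketch below shows a native Druid "scan" query body (the replacement for the removed "select" query type) posted to a Broker; the datasource name, interval, columns, and port are assumptions for illustration only — in Metabase's case the query is generated by the Druid driver, so this only shows the target format.
```python
# Illustrative sketch only: a native Druid "scan" query, which replaces the
# removed "select" query type. Datasource, interval, columns and the broker
# port are made-up values for this example.
import json
import requests

scan_query = {
    "queryType": "scan",
    "dataSource": "my_datasource",             # assumed datasource name
    "intervals": ["2020-01-01/2020-02-01"],    # assumed time interval
    "columns": ["__time", "some_dimension", "some_metric"],
    "resultFormat": "compactedList",
    "limit": 100,
}

# Native queries are POSTed to the Broker (default port 8082).
resp = requests.post(
    "http://localhost:8082/druid/v2/",
    data=json.dumps(scan_query),
    headers={"Content-Type": "application/json"},
    timeout=30,
)
resp.raise_for_status()
print(resp.json())
```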
|
1.0
|
Druid v0.17.0 Error for a simple query - java.lang.Exception: Cannot construct instance of `org.apache.druid.query.select.SelectQuery`, problem: The 'select' query has been removed, use 'scan' instead. See https://druid.apache.org/docs/latest/querying/select-query.html for more details. at [Source: (org.eclipse.jetty.server.HttpInputOverHTTP); line: 1, column: 32] Error class:


what should I do?
|
process
|
druid error for a simple query java lang exception cannot construct instance of org apache druid query select selectquery problem the select query has been removed use scan instead see for more details at error class what should i do?
| 1
|
263,833
| 19,977,361,215
|
IssuesEvent
|
2022-01-29 09:57:28
|
bussardrobbie/DDL-KB-Kanban
|
https://api.github.com/repos/bussardrobbie/DDL-KB-Kanban
|
closed
|
Optimize Knowledge Base Navigation
|
documentation High Priority
|
The Navigation in the subsection is not in the least user friendly and is at the moment completely useless.
Subnav points are not shown to the left in the vertical menu.
If I click on "Managing Multiple Vectors on Escape Pod" in the navigation to the left the nav point "Developer Tools" is marked bold, even if the user is not on that point in the documentation. An "active" link should indicate a page the user is on at the moment. This is basic html/UX knowledge that is freely available.
Aside from that the Escape Pod documentation is highly unorganized (as is the rest of Vector documentation), it looks as if the person doing the documentation has either no idea how to present a user friendly and clearly organized documentation, or the software used is not able to provide this. Or both.
As I suggested a year ago: Use a Wiki (Mediawiki, Dokuwiki) or at least an industry-proven knowledgebase software instead of that unusable and unprofessional tinker-solution. If this actually is helpscout, as the page source suggests, it was implemented wrong.
|
1.0
|
Optimize Knowledge Base Navigation - The Navigation in the subsection is not in the least user friendly and is at the moment completely useless.
Subnav points are not shown to the left in the vertical menu.
If I click on "Managing Multiple Vectors on Escape Pod" in the navigation to the left the nav point "Developer Tools" is marked bold, even if the user is not on that point in the documentation. An "active" link should indicate a page the user is on at the moment. This is basic html/UX knowledge that is freely available.
Aside from that the Escape Pod documentation is highly unorganized (as is the rest of Vector documentation), it looks as if the person doing the documentation has either no idea how to present a user friendly and clearly organized documentation, or the software used is not able to provide this. Or both.
As I suggested a year ago: Use a Wiki (Mediawiki, Dokuwiki) or at least an industry-proven knowledgebase software instead of that unusable and unprofessional tinker-solution. If this actually is helpscout, as the page source suggests, it was implemented wrong.
|
non_process
|
optimize knowledge base navigation the navigation in the subsection is not the least user friendly and at the moment completely useless subnav points are not shown to the left in the vertical menu if i click on managing multiple vectors on escape pod in the navigation to the left the nav point developer tools is marked bold even if the user is not on that point in the documentation an active link should indicate a page the user is on at the moment this is basic html ux knowledge that is freely available aside from that the escape pod documentation is highly unorganized as is the rest of vector documentation it looks as if the person doing the documentation has either no idea how to present a user friendly and clearly organized documentation or the software used is not able to provide this or both as i suggested a year ago use a wiki mediawiki dokuwiki or at least a industry proven knowledgebase software instead of that unusable and unprofessional tinker solution if this actually is helpscout as the page source suggests it was implemented wrong
| 0
|
39,206
| 9,311,679,078
|
IssuesEvent
|
2019-03-25 22:05:26
|
CenturyLinkCloud/mdw
|
https://api.github.com/repos/CenturyLinkCloud/mdw
|
closed
|
EL expression is ignored in composite string containing PlaceHolder values
|
defect
|
The expression will not be evaluated if it appears AFTER the placeholder. For example:
DW-GEE : MasterRequestID-{$MasterRequestID}: Process Instance Id - ${process.ownerId}
As a workaround, user can use only expressions instead of mixing placeholders with expressions.
DW-GEE : MasterRequestID-${masterRequestId}: Process Instance Id - ${process.ownerId}
|
1.0
|
EL expression is ignored in composite string containing PlaceHolder values - The expression will not be evaluated if it appears AFTER the placeholder. For example:
DW-GEE : MasterRequestID-{$MasterRequestID}: Process Instance Id - ${process.ownerId}
As a workaround, user can use only expressions instead of mixing placeholders with expressions.
DW-GEE : MasterRequestID-${masterRequestId}: Process Instance Id - ${process.ownerId}
|
non_process
|
el expression is ignored in composite string containing placeholder values the expression will not be evaluated if it appears after the placeholder for example dw gee masterrequestid masterrequestid process instance id process ownerid as a workaround user can use only expressions instead of mixing placeholders with expressions dw gee masterrequestid masterrequestid process instance id process ownerid
| 0
|
4,054
| 6,988,520,409
|
IssuesEvent
|
2017-12-14 13:16:05
|
nlbdev/pipeline
|
https://api.github.com/repos/nlbdev/pipeline
|
closed
|
Automatically reposition <em>
|
enhancement pre-processing Priority:1 - Low
|
*by @matskober from Trello:*
- `em` should end right before a period, not right after.
- `em` should also be *inside* quotes / double angles, not outside.
**Wrong:**
```
<p><em>«Bryt.»</em> Niklas skuttet seg.
«Eller <em>ødelegg.</em> Vi får håpe det ikke virker.»</p>
```
**Correct:**
```
<p>«<em>Bryt.</em>» Niklas skuttet seg.
«Eller <em>ødelegg</em>. Vi får håpe det ikke virker.»</p>
```
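A minimal sketch of how these two rules could be applied, assuming simple regex passes over an XHTML string (a real pre-processing step would more likely walk the DOM and handle nesting); the function name and the exact regexes are illustrative only:
```python
import re

def reposition_em(html: str) -> str:
    """Apply the two repositioning rules from this issue (sketch only)."""
    # Rule 1: a period directly before </em> moves outside the em.
    #   <em>ødelegg.</em>  ->  <em>ødelegg</em>.
    html = re.sub(r"\.</em>", "</em>.", html)
    # Rule 2: an em wrapped around guillemets moves inside them.
    #   <em>«Bryt.»</em>  ->  «<em>Bryt.</em>»
    html = re.sub(r"<em>«(.*?)»</em>", r"«<em>\1</em>»", html)
    return html

example = ("<p><em>«Bryt.»</em> Niklas skuttet seg.\n"
           "«Eller <em>ødelegg.</em> Vi får håpe det ikke virker.»</p>")
print(reposition_em(example))  # produces the "Correct" version shown above
```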
|
1.0
|
Automatically reposition <em> - *by @matskober from Trello:*
- `em` should end right before a period, not right after.
- `em` should also be *inside* quotes / double angles, not outside.
**Wrong:**
```
<p><em>«Bryt.»</em> Niklas skuttet seg.
«Eller <em>ødelegg.</em> Vi får håpe det ikke virker.»</p>
```
**Correct:**
```
<p>«<em>Bryt.</em>» Niklas skuttet seg.
«Eller <em>ødelegg</em>. Vi får håpe det ikke virker.»</p>
```
|
process
|
automatically reposition by matskober from trello em should end right before a period not right after em should also be inside quites double angles not outside wrong «bryt » niklas skuttet seg «eller ødelegg vi får håpe det ikke virker » correct « bryt » niklas skuttet seg «eller ødelegg vi får håpe det ikke virker »
| 1
|
721,835
| 24,839,498,790
|
IssuesEvent
|
2022-10-26 11:35:54
|
AY2223S1-CS2103-F14-3/tp
|
https://api.github.com/repos/AY2223S1-CS2103-F14-3/tp
|
reopened
|
As a user, I can view my applications statistics
|
type.Task priority.Medium
|
... so that I can review my application response rate from the companies.
**Todo: Add application statistics feature**
- Include number of applications, counts of the different statuses
- Current number of interviews
|
1.0
|
As a user, I can view my applications statistics - ... so that I can review my application response rate from the companies.
**Todo: Add application statistics feature**
- Include number of applications, counts of the different statuses
- Current number of interviews
|
non_process
|
as a user i can view my applications statistics so that i can have a review on my application responds rate from the companies todo add application statistics feature include number of applications counts of the different statuses current number of interviews
| 0
|
9,948
| 12,976,698,740
|
IssuesEvent
|
2020-07-21 19:15:42
|
dotnet/runtime
|
https://api.github.com/repos/dotnet/runtime
|
closed
|
ProcessStartInfo.Environment does not work on UAP
|
area-System.Diagnostics.Process bug disabled-test
|
I'm adding this test, but it fails on UAP. It appears that CreateProcess in the UAP sandbox does not pass environment variables specified in lpEnvironment.
```c#
[Fact]
[ActiveIssue(xxx, TargetFrameworkMonikers.UapNotUapAot)]
public void TestSetEnvironmentOnChildProcess()
{
const string name = "b5a715d3-d74f-465d-abb7-2abe844750c9";
Environment.SetEnvironmentVariable(name, "parent-process-value");
Process p = CreateProcess(() =>
{
if (Environment.GetEnvironmentVariable(name) != "child-process-value")
return 1;
return SuccessExitCode;
});
p.StartInfo.Environment.Add(name, "child-process-value");
p.Start();
Assert.True(p.WaitForExit(WaitInMS));
Assert.Equal(SuccessExitCode, p.ExitCode);
}
```
|
1.0
|
ProcessStartInfo.Environment does not work on UAP - I'm adding this test, but it fails on UAP. It appears that CreateProcess in the UAP sandbox does not pass environment variables specified in lpEnvironment.
```c#
[Fact]
[ActiveIssue(xxx, TargetFrameworkMonikers.UapNotUapAot)]
public void TestSetEnvironmentOnChildProcess()
{
const string name = "b5a715d3-d74f-465d-abb7-2abe844750c9";
Environment.SetEnvironmentVariable(name, "parent-process-value");
Process p = CreateProcess(() =>
{
if (Environment.GetEnvironmentVariable(name) != "child-process-value")
return 1;
return SuccessExitCode;
});
p.StartInfo.Environment.Add(name, "child-process-value");
p.Start();
Assert.True(p.WaitForExit(WaitInMS));
Assert.Equal(SuccessExitCode, p.ExitCode);
}
```
|
process
|
processstartinfo environment does not work on uap i m adding this test but it fails on uap it appears that createprocess in the uap sandbox does not pass environment variables specified in lpenvironment c public void testsetenvironmentonchildprocess const string name environment setenvironmentvariable name parent process value process p createprocess if environment getenvironmentvariable name child process value return return successexitcode p startinfo environment add name child process value p start assert true p waitforexit waitinms assert equal successexitcode p exitcode
| 1
|
113,322
| 24,399,418,498
|
IssuesEvent
|
2022-10-04 22:59:55
|
fwouts/previewjs
|
https://api.github.com/repos/fwouts/previewjs
|
closed
|
Components with forwardRef are not rendered
|
bug fix merged vscode fix shipped
|
### Describe the bug
Trying to preview a component built with forwardRef in Preview.js results in various error messages.
### Reproduction
(Using my example Repo as reference here)
1. Open `index.tsx`
2. Open Preview for `Input` component
3. See error in previewjs log
### Preview.js version
v1.13.0
### Framework
React 18.2.0
### System Info
```shell
System:
OS: Linux 5.15 Ubuntu 22.04.1 LTS 22.04.1 LTS (Jammy Jellyfish)
CPU: (12) x64 Intel(R) Core(TM) i7-8750H CPU @ 2.20GHz
Memory: 22.85 GB / 30.82 GB
Container: Yes
Shell: 5.8.1 - /bin/zsh
Binaries:
Node: 18.7.0 - ~/.nvm/versions/node/v18.7.0/bin/node
Yarn: 1.22.19 - ~/.nvm/versions/node/v18.7.0/bin/yarn
npm: 8.19.2 - ~/.nvm/versions/node/v18.7.0/bin/npm
IDEs:
Nano: 6.2 - /usr/bin/nano
VSCode: 1.71.0 - /snap/bin/code
Vim: 8.2 - /usr/bin/vim
Browsers:
Chrome: 105.0.5195.125
Firefox: 104.0.2
```
### Used Package Manager
npm
### Extension logs (useful for crashes)
_No response_
### Preview logs (useful for rendering errors)
```shell
[8:25:15 AM] Warning: Unexpected ref object provided for select. Use either a ref-setter function or React.createRef().
at select
at http://localhost:3140/preview/node_modules/.previewjs/v7.0.0/vite/deps/chunk-S4HW63PM.js?v=34ee91d6:3908:46
at SelectField2 (http://localhost:3140/preview/node_modules/.previewjs/v7.0.0/vite/deps/chunk-S4HW63PM.js?v=34ee91d6:26652:11)
at div
at http://localhost:3140/preview/node_modules/.previewjs/v7.0.0/vite/deps/chunk-S4HW63PM.js?v=34ee91d6:3908:46
at http://localhost:3140/preview/node_modules/.previewjs/v7.0.0/vite/deps/chunk-S4HW63PM.js?v=34ee91d6:26664:19
at div
at http://localhost:3140/preview/node_modules/.previewjs/v7.0.0/vite/deps/chunk-S4HW63PM.js?v=34ee91d6:3908:46
at FormControl2 (http://localhost:3140/preview/node_modules/.previewjs/v7.0.0/vite/deps/chunk-S4HW63PM.js?v=34ee91d6:16974:19)
at FormControlWrapper (http://localhost:3140/preview/src/components/form-control-wrapper.tsx:10:3)
at http://localhost:3140/preview/src/components/form-controls.tsx:116:5
at EnvironmentProvider (http://localhost:3140/preview/node_modules/.previewjs/v7.0.0/vite/deps/chunk-S4HW63PM.js?v=34ee91d6:20429:11)
at ColorModeProvider (http://localhost:3140/preview/node_modules/.previewjs/v7.0.0/vite/deps/chunk-S4HW63PM.js?v=34ee91d6:1047:5)
at ThemeProvider2 (http://localhost:3140/preview/node_modules/.previewjs/v7.0.0/vite/deps/chunk-S4HW63PM.js?v=34ee91d6:3935:45)
at ThemeProvider3 (http://localhost:3140/preview/node_modules/.previewjs/v7.0.0/vite/deps/chunk-S4HW63PM.js?v=34ee91d6:5515:11)
at ChakraProvider (http://localhost:3140/preview/node_modules/.previewjs/v7.0.0/vite/deps/chunk-S4HW63PM.js?v=34ee91d6:34899:5)
at ChakraProvider2 (http://localhost:3140/preview/node_modules/.previewjs/v7.0.0/vite/deps/chunk-S4HW63PM.js?v=34ee91d6:34923:3)
at IntlProvider3 (http://localhost:3140/preview/node_modules/.previewjs/v7.0.0/vite/deps/react-intl.js?v=34ee91d6:4087:43)
at PreviewWrapper (http://localhost:3140/preview/src/preview-wrapper.jsx:6:27)
at Renderer
[8:25:15 AM] TypeError: Cannot add property current, object is not extensible
at commitAttachRef (http://localhost:3140/preview/node_modules/.previewjs/v7.0.0/vite/deps/chunk-LGNAA2TQ.js?v=34ee91d6:16823:27)
at commitLayoutEffectOnFiber (http://localhost:3140/preview/node_modules/.previewjs/v7.0.0/vite/deps/chunk-LGNAA2TQ.js?v=34ee91d6:16693:17)
at commitLayoutMountEffects_complete (http://localhost:3140/preview/node_modules/.previewjs/v7.0.0/vite/deps/chunk-LGNAA2TQ.js?v=34ee91d6:17503:17)
at commitLayoutEffects_begin (http://localhost:3140/preview/node_modules/.previewjs/v7.0.0/vite/deps/chunk-LGNAA2TQ.js?v=34ee91d6:17492:15)
at commitLayoutEffects (http://localhost:3140/preview/node_modules/.previewjs/v7.0.0/vite/deps/chunk-LGNAA2TQ.js?v=34ee91d6:17444:11)
at commitRootImpl (http://localhost:3140/preview/node_modules/.previewjs/v7.0.0/vite/deps/chunk-LGNAA2TQ.js?v=34ee91d6:18848:13)
at commitRoot (http://localhost:3140/preview/node_modules/.previewjs/v7.0.0/vite/deps/chunk-LGNAA2TQ.js?v=34ee91d6:18772:13)
at finishConcurrentRender (http://localhost:3140/preview/node_modules/.previewjs/v7.0.0/vite/deps/chunk-LGNAA2TQ.js?v=34ee91d6:18301:15)
at performConcurrentWorkOnRoot (http://localhost:3140/preview/node_modules/.previewjs/v7.0.0/vite/deps/chunk-LGNAA2TQ.js?v=34ee91d6:18215:15)
at workLoop (http://localhost:3140/preview/node_modules/.previewjs/v7.0.0/vite/deps/chunk-LGNAA2TQ.js?v=34ee91d6:197:42)
[8:25:15 AM] TypeError: Cannot add property current, object is not extensible
at safelyDetachRef (http://localhost:3140/preview/node_modules/.previewjs/v7.0.0/vite/deps/chunk-LGNAA2TQ.js?v=34ee91d6:16265:27)
at commitDeletionEffectsOnFiber (http://localhost:3140/preview/node_modules/.previewjs/v7.0.0/vite/deps/chunk-LGNAA2TQ.js?v=34ee91d6:17025:17)
at recursivelyTraverseDeletionEffects (http://localhost:3140/preview/node_modules/.previewjs/v7.0.0/vite/deps/chunk-LGNAA2TQ.js?v=34ee91d6:17016:13)
at commitDeletionEffectsOnFiber (http://localhost:3140/preview/node_modules/.previewjs/v7.0.0/vite/deps/chunk-LGNAA2TQ.js?v=34ee91d6:17107:15)
at recursivelyTraverseDeletionEffects (http://localhost:3140/preview/node_modules/.previewjs/v7.0.0/vite/deps/chunk-LGNAA2TQ.js?v=34ee91d6:17016:13)
at commitDeletionEffectsOnFiber (http://localhost:3140/preview/node_modules/.previewjs/v7.0.0/vite/deps/chunk-LGNAA2TQ.js?v=34ee91d6:17107:15)
at recursivelyTraverseDeletionEffects (http://localhost:3140/preview/node_modules/.previewjs/v7.0.0/vite/deps/chunk-LGNAA2TQ.js?v=34ee91d6:17016:13)
at commitDeletionEffectsOnFiber (http://localhost:3140/preview/node_modules/.previewjs/v7.0.0/vite/deps/chunk-LGNAA2TQ.js?v=34ee91d6:17033:17)
at recursivelyTraverseDeletionEffects (http://localhost:3140/preview/node_modules/.previewjs/v7.0.0/vite/deps/chunk-LGNAA2TQ.js?v=34ee91d6:17016:13)
at commitDeletionEffectsOnFiber (http://localhost:3140/preview/node_modules/.previewjs/v7.0.0/vite/deps/chunk-LGNAA2TQ.js?v=34ee91d6:17107:15)
[8:25:15 AM] The above error occurred in the <select> component:
at select
at http://localhost:3140/preview/node_modules/.previewjs/v7.0.0/vite/deps/chunk-S4HW63PM.js?v=34ee91d6:3908:46
at SelectField2 (http://localhost:3140/preview/node_modules/.previewjs/v7.0.0/vite/deps/chunk-S4HW63PM.js?v=34ee91d6:26652:11)
at div
at http://localhost:3140/preview/node_modules/.previewjs/v7.0.0/vite/deps/chunk-S4HW63PM.js?v=34ee91d6:3908:46
at http://localhost:3140/preview/node_modules/.previewjs/v7.0.0/vite/deps/chunk-S4HW63PM.js?v=34ee91d6:26664:19
at div
at http://localhost:3140/preview/node_modules/.previewjs/v7.0.0/vite/deps/chunk-S4HW63PM.js?v=34ee91d6:3908:46
at FormControl2 (http://localhost:3140/preview/node_modules/.previewjs/v7.0.0/vite/deps/chunk-S4HW63PM.js?v=34ee91d6:16974:19)
at FormControlWrapper (http://localhost:3140/preview/src/components/form-control-wrapper.tsx:10:3)
at http://localhost:3140/preview/src/components/form-controls.tsx:116:5
at EnvironmentProvider (http://localhost:3140/preview/node_modules/.previewjs/v7.0.0/vite/deps/chunk-S4HW63PM.js?v=34ee91d6:20429:11)
at ColorModeProvider (http://localhost:3140/preview/node_modules/.previewjs/v7.0.0/vite/deps/chunk-S4HW63PM.js?v=34ee91d6:1047:5)
at ThemeProvider2 (http://localhost:3140/preview/node_modules/.previewjs/v7.0.0/vite/deps/chunk-S4HW63PM.js?v=34ee91d6:3935:45)
at ThemeProvider3 (http://localhost:3140/preview/node_modules/.previewjs/v7.0.0/vite/deps/chunk-S4HW63PM.js?v=34ee91d6:5515:11)
at ChakraProvider (http://localhost:3140/preview/node_modules/.previewjs/v7.0.0/vite/deps/chunk-S4HW63PM.js?v=34ee91d6:34899:5)
at ChakraProvider2 (http://localhost:3140/preview/node_modules/.previewjs/v7.0.0/vite/deps/chunk-S4HW63PM.js?v=34ee91d6:34923:3)
at IntlProvider3 (http://localhost:3140/preview/node_modules/.previewjs/v7.0.0/vite/deps/react-intl.js?v=34ee91d6:4087:43)
at PreviewWrapper (http://localhost:3140/preview/src/preview-wrapper.jsx:6:27)
at Renderer
Consider adding an error boundary to your tree to customize error handling behavior.
Visit https://reactjs.org/link/error-boundaries to learn more about error boundaries.
[8:25:15 AM] The above error occurred in the <Renderer> component:
at Renderer
Consider adding an error boundary to your tree to customize error handling behavior.
Visit https://reactjs.org/link/error-boundaries to learn more about error boundaries.
```
### Repo link (if available)
https://github.com/trigo-at/previewjs-forwardref-error
(See `Input` component in `index.tsx`)
### Anything else?
_No response_
|
1.0
|
Components with forwardRef are not rendered - ### Describe the bug
Trying to preview a component built with forwardRef in Preview.js results in various error messages.
### Reproduction
(Using my example Repo as reference here)
1. Open `index.tsx`
2. Open Preview for `Input` component
3. See error in previewjs log
### Preview.js version
v1.13.0
### Framework
React 18.2.0
### System Info
```shell
System:
OS: Linux 5.15 Ubuntu 22.04.1 LTS 22.04.1 LTS (Jammy Jellyfish)
CPU: (12) x64 Intel(R) Core(TM) i7-8750H CPU @ 2.20GHz
Memory: 22.85 GB / 30.82 GB
Container: Yes
Shell: 5.8.1 - /bin/zsh
Binaries:
Node: 18.7.0 - ~/.nvm/versions/node/v18.7.0/bin/node
Yarn: 1.22.19 - ~/.nvm/versions/node/v18.7.0/bin/yarn
npm: 8.19.2 - ~/.nvm/versions/node/v18.7.0/bin/npm
IDEs:
Nano: 6.2 - /usr/bin/nano
VSCode: 1.71.0 - /snap/bin/code
Vim: 8.2 - /usr/bin/vim
Browsers:
Chrome: 105.0.5195.125
Firefox: 104.0.2
```
### Used Package Manager
npm
### Extension logs (useful for crashes)
_No response_
### Preview logs (useful for rendering errors)
```shell
[8:25:15 AM] Warning: Unexpected ref object provided for select. Use either a ref-setter function or React.createRef().
at select
at http://localhost:3140/preview/node_modules/.previewjs/v7.0.0/vite/deps/chunk-S4HW63PM.js?v=34ee91d6:3908:46
at SelectField2 (http://localhost:3140/preview/node_modules/.previewjs/v7.0.0/vite/deps/chunk-S4HW63PM.js?v=34ee91d6:26652:11)
at div
at http://localhost:3140/preview/node_modules/.previewjs/v7.0.0/vite/deps/chunk-S4HW63PM.js?v=34ee91d6:3908:46
at http://localhost:3140/preview/node_modules/.previewjs/v7.0.0/vite/deps/chunk-S4HW63PM.js?v=34ee91d6:26664:19
at div
at http://localhost:3140/preview/node_modules/.previewjs/v7.0.0/vite/deps/chunk-S4HW63PM.js?v=34ee91d6:3908:46
at FormControl2 (http://localhost:3140/preview/node_modules/.previewjs/v7.0.0/vite/deps/chunk-S4HW63PM.js?v=34ee91d6:16974:19)
at FormControlWrapper (http://localhost:3140/preview/src/components/form-control-wrapper.tsx:10:3)
at http://localhost:3140/preview/src/components/form-controls.tsx:116:5
at EnvironmentProvider (http://localhost:3140/preview/node_modules/.previewjs/v7.0.0/vite/deps/chunk-S4HW63PM.js?v=34ee91d6:20429:11)
at ColorModeProvider (http://localhost:3140/preview/node_modules/.previewjs/v7.0.0/vite/deps/chunk-S4HW63PM.js?v=34ee91d6:1047:5)
at ThemeProvider2 (http://localhost:3140/preview/node_modules/.previewjs/v7.0.0/vite/deps/chunk-S4HW63PM.js?v=34ee91d6:3935:45)
at ThemeProvider3 (http://localhost:3140/preview/node_modules/.previewjs/v7.0.0/vite/deps/chunk-S4HW63PM.js?v=34ee91d6:5515:11)
at ChakraProvider (http://localhost:3140/preview/node_modules/.previewjs/v7.0.0/vite/deps/chunk-S4HW63PM.js?v=34ee91d6:34899:5)
at ChakraProvider2 (http://localhost:3140/preview/node_modules/.previewjs/v7.0.0/vite/deps/chunk-S4HW63PM.js?v=34ee91d6:34923:3)
at IntlProvider3 (http://localhost:3140/preview/node_modules/.previewjs/v7.0.0/vite/deps/react-intl.js?v=34ee91d6:4087:43)
at PreviewWrapper (http://localhost:3140/preview/src/preview-wrapper.jsx:6:27)
at Renderer
[8:25:15 AM] TypeError: Cannot add property current, object is not extensible
at commitAttachRef (http://localhost:3140/preview/node_modules/.previewjs/v7.0.0/vite/deps/chunk-LGNAA2TQ.js?v=34ee91d6:16823:27)
at commitLayoutEffectOnFiber (http://localhost:3140/preview/node_modules/.previewjs/v7.0.0/vite/deps/chunk-LGNAA2TQ.js?v=34ee91d6:16693:17)
at commitLayoutMountEffects_complete (http://localhost:3140/preview/node_modules/.previewjs/v7.0.0/vite/deps/chunk-LGNAA2TQ.js?v=34ee91d6:17503:17)
at commitLayoutEffects_begin (http://localhost:3140/preview/node_modules/.previewjs/v7.0.0/vite/deps/chunk-LGNAA2TQ.js?v=34ee91d6:17492:15)
at commitLayoutEffects (http://localhost:3140/preview/node_modules/.previewjs/v7.0.0/vite/deps/chunk-LGNAA2TQ.js?v=34ee91d6:17444:11)
at commitRootImpl (http://localhost:3140/preview/node_modules/.previewjs/v7.0.0/vite/deps/chunk-LGNAA2TQ.js?v=34ee91d6:18848:13)
at commitRoot (http://localhost:3140/preview/node_modules/.previewjs/v7.0.0/vite/deps/chunk-LGNAA2TQ.js?v=34ee91d6:18772:13)
at finishConcurrentRender (http://localhost:3140/preview/node_modules/.previewjs/v7.0.0/vite/deps/chunk-LGNAA2TQ.js?v=34ee91d6:18301:15)
at performConcurrentWorkOnRoot (http://localhost:3140/preview/node_modules/.previewjs/v7.0.0/vite/deps/chunk-LGNAA2TQ.js?v=34ee91d6:18215:15)
at workLoop (http://localhost:3140/preview/node_modules/.previewjs/v7.0.0/vite/deps/chunk-LGNAA2TQ.js?v=34ee91d6:197:42)
[8:25:15 AM] TypeError: Cannot add property current, object is not extensible
at safelyDetachRef (http://localhost:3140/preview/node_modules/.previewjs/v7.0.0/vite/deps/chunk-LGNAA2TQ.js?v=34ee91d6:16265:27)
at commitDeletionEffectsOnFiber (http://localhost:3140/preview/node_modules/.previewjs/v7.0.0/vite/deps/chunk-LGNAA2TQ.js?v=34ee91d6:17025:17)
at recursivelyTraverseDeletionEffects (http://localhost:3140/preview/node_modules/.previewjs/v7.0.0/vite/deps/chunk-LGNAA2TQ.js?v=34ee91d6:17016:13)
at commitDeletionEffectsOnFiber (http://localhost:3140/preview/node_modules/.previewjs/v7.0.0/vite/deps/chunk-LGNAA2TQ.js?v=34ee91d6:17107:15)
at recursivelyTraverseDeletionEffects (http://localhost:3140/preview/node_modules/.previewjs/v7.0.0/vite/deps/chunk-LGNAA2TQ.js?v=34ee91d6:17016:13)
at commitDeletionEffectsOnFiber (http://localhost:3140/preview/node_modules/.previewjs/v7.0.0/vite/deps/chunk-LGNAA2TQ.js?v=34ee91d6:17107:15)
at recursivelyTraverseDeletionEffects (http://localhost:3140/preview/node_modules/.previewjs/v7.0.0/vite/deps/chunk-LGNAA2TQ.js?v=34ee91d6:17016:13)
at commitDeletionEffectsOnFiber (http://localhost:3140/preview/node_modules/.previewjs/v7.0.0/vite/deps/chunk-LGNAA2TQ.js?v=34ee91d6:17033:17)
at recursivelyTraverseDeletionEffects (http://localhost:3140/preview/node_modules/.previewjs/v7.0.0/vite/deps/chunk-LGNAA2TQ.js?v=34ee91d6:17016:13)
at commitDeletionEffectsOnFiber (http://localhost:3140/preview/node_modules/.previewjs/v7.0.0/vite/deps/chunk-LGNAA2TQ.js?v=34ee91d6:17107:15)
[8:25:15 AM] The above error occurred in the <select> component:
at select
at http://localhost:3140/preview/node_modules/.previewjs/v7.0.0/vite/deps/chunk-S4HW63PM.js?v=34ee91d6:3908:46
at SelectField2 (http://localhost:3140/preview/node_modules/.previewjs/v7.0.0/vite/deps/chunk-S4HW63PM.js?v=34ee91d6:26652:11)
at div
at http://localhost:3140/preview/node_modules/.previewjs/v7.0.0/vite/deps/chunk-S4HW63PM.js?v=34ee91d6:3908:46
at http://localhost:3140/preview/node_modules/.previewjs/v7.0.0/vite/deps/chunk-S4HW63PM.js?v=34ee91d6:26664:19
at div
at http://localhost:3140/preview/node_modules/.previewjs/v7.0.0/vite/deps/chunk-S4HW63PM.js?v=34ee91d6:3908:46
at FormControl2 (http://localhost:3140/preview/node_modules/.previewjs/v7.0.0/vite/deps/chunk-S4HW63PM.js?v=34ee91d6:16974:19)
at FormControlWrapper (http://localhost:3140/preview/src/components/form-control-wrapper.tsx:10:3)
at http://localhost:3140/preview/src/components/form-controls.tsx:116:5
at EnvironmentProvider (http://localhost:3140/preview/node_modules/.previewjs/v7.0.0/vite/deps/chunk-S4HW63PM.js?v=34ee91d6:20429:11)
at ColorModeProvider (http://localhost:3140/preview/node_modules/.previewjs/v7.0.0/vite/deps/chunk-S4HW63PM.js?v=34ee91d6:1047:5)
at ThemeProvider2 (http://localhost:3140/preview/node_modules/.previewjs/v7.0.0/vite/deps/chunk-S4HW63PM.js?v=34ee91d6:3935:45)
at ThemeProvider3 (http://localhost:3140/preview/node_modules/.previewjs/v7.0.0/vite/deps/chunk-S4HW63PM.js?v=34ee91d6:5515:11)
at ChakraProvider (http://localhost:3140/preview/node_modules/.previewjs/v7.0.0/vite/deps/chunk-S4HW63PM.js?v=34ee91d6:34899:5)
at ChakraProvider2 (http://localhost:3140/preview/node_modules/.previewjs/v7.0.0/vite/deps/chunk-S4HW63PM.js?v=34ee91d6:34923:3)
at IntlProvider3 (http://localhost:3140/preview/node_modules/.previewjs/v7.0.0/vite/deps/react-intl.js?v=34ee91d6:4087:43)
at PreviewWrapper (http://localhost:3140/preview/src/preview-wrapper.jsx:6:27)
at Renderer
Consider adding an error boundary to your tree to customize error handling behavior.
Visit https://reactjs.org/link/error-boundaries to learn more about error boundaries.
[8:25:15 AM] The above error occurred in the <Renderer> component:
at Renderer
Consider adding an error boundary to your tree to customize error handling behavior.
Visit https://reactjs.org/link/error-boundaries to learn more about error boundaries.
```
### Repo link (if available)
https://github.com/trigo-at/previewjs-forwardref-error
(See `Input` component in `index.tsx`)
### Anything else?
_No response_
|
non_process
|
components with forwardref are not rendered describe the bug trying to a component build with forwardref in preview js results in various errors messages reproduction using my example repo as reference here open index tsx open preview for input component see error in previewjs log preview js version framework react system info shell system os linux ubuntu lts lts jammy jellyfish cpu intel r core tm cpu memory gb gb container yes shell bin zsh binaries node nvm versions node bin node yarn nvm versions node bin yarn npm nvm versions node bin npm ides nano usr bin nano vscode snap bin code vim usr bin vim browsers chrome firefox used package manager npm extension logs useful for crashes no response preview logs useful for rendering errors shell warning unexpected ref object provided for select use either a ref setter function or react createref at select at at at div at at at div at at at formcontrolwrapper at at environmentprovider at colormodeprovider at at at chakraprovider at at at previewwrapper at renderer typeerror cannot add property current object is not extensible at commitattachref at commitlayouteffectonfiber at commitlayoutmounteffects complete at commitlayouteffects begin at commitlayouteffects at commitrootimpl at commitroot at finishconcurrentrender at performconcurrentworkonroot at workloop typeerror cannot add property current object is not extensible at safelydetachref at commitdeletioneffectsonfiber at recursivelytraversedeletioneffects at commitdeletioneffectsonfiber at recursivelytraversedeletioneffects at commitdeletioneffectsonfiber at recursivelytraversedeletioneffects at commitdeletioneffectsonfiber at recursivelytraversedeletioneffects at commitdeletioneffectsonfiber the above error occurred in the component at select at at at div at at at div at at at formcontrolwrapper at at environmentprovider at colormodeprovider at at at chakraprovider at at at previewwrapper at renderer consider adding an error boundary to your tree to customize error handling behavior visit to learn more about error boundaries the above error occurred in the component at renderer consider adding an error boundary to your tree to customize error handling behavior visit to learn more about error boundaries repo link if available see input component in index tsx anything else no response
| 0
|
10,505
| 8,069,813,552
|
IssuesEvent
|
2018-08-06 07:37:01
|
globaleaks/GlobaLeaks
|
https://api.github.com/repos/globaleaks/GlobaLeaks
|
opened
|
Slow down login by requiring a proof of work on authentication handlers
|
C: Backend C: Client F: Security T: Enhancement
|
This ticket proposes improving the existing anti-bruteforce mechanisms on authentication handlers by requiring a proof of work.
The current implementation applies a bruteforce slowdown by slowing down the response to login requests: https://github.com/globaleaks/GlobaLeaks/issues/112
|
True
|
Slow down login by requiring a proof of work on authentication handlers - This ticket proposes improving the existing anti-bruteforce mechanisms on authentication handlers by requiring a proof of work.
The current implementation applies a bruteforce slowdown by slowing down the response to login requests: https://github.com/globaleaks/GlobaLeaks/issues/112
|
non_process
|
slow down login by requiring a proof of work on authentication handlers this ticket is to propose to improve the existing anti bruteforce mechanisms on authentication handlers by requiring a proof of work curent implementation apply a bruteforce slowdown by slowing down response to login requests
| 0
|
493,698
| 14,236,984,722
|
IssuesEvent
|
2020-11-18 16:39:05
|
craftercms/craftercms
|
https://api.github.com/repos/craftercms/craftercms
|
closed
|
[Studio-ui] all pages are showing up as disabled in search
|
bug priority: low
|
Original JIRA Fix Versions:
2.5.2, Original JIRA Components:
Studio,
----------
Original JIRA Description: :
----------
Original JIRA Comments:
Original JIRA:
http://issues.craftercms.org/browse/CRAFTERCMS-1990
----------
|
1.0
|
[Studio-ui] all pages are showing up as disabled in search - Original JIRA Fix Versions:
2.5.2, Original JIRA Components:
Studio,
----------
Original JIRA Description: :
----------
Original JIRA Comments:
Original JIRA:
http://issues.craftercms.org/browse/CRAFTERCMS-1990
----------
|
non_process
|
all pages are showing up as disabled in search original jira fix versions original jira components studio original jira description original jira comments original jira
| 0
|
52,380
| 7,760,756,490
|
IssuesEvent
|
2018-06-01 07:29:41
|
prometheus/alertmanager
|
https://api.github.com/repos/prometheus/alertmanager
|
closed
|
Update the Architecture Diagram
|
help wanted kind/documentation low hanging fruit
|
The hand drawn one in the README doesn't exactly look great...
|
1.0
|
Update the Architecture Diagram - The hand drawn one in the README doesn't exactly look great...
|
non_process
|
update the architecture diagram the hand drawn one in the readme doesn t exactly look great
| 0
|
15,572
| 19,703,506,195
|
IssuesEvent
|
2022-01-12 19:08:10
|
googleapis/nodejs-cloud-rad
|
https://api.github.com/repos/googleapis/nodejs-cloud-rad
|
opened
|
Your .repo-metadata.json file has a problem 🤒
|
type: process repo-metadata: lint
|
You have a problem with your .repo-metadata.json file:
Result of scan 📈:
* must have required property 'library_type' in .repo-metadata.json
* must have required property 'release_level' in .repo-metadata.json
* must have required property 'client_documentation' in .repo-metadata.json
☝️ Once you correct these problems, you can close this issue.
Reach out to **go/github-automation** if you have any questions.
|
1.0
|
Your .repo-metadata.json file has a problem 🤒 - You have a problem with your .repo-metadata.json file:
Result of scan 📈:
* must have required property 'library_type' in .repo-metadata.json
* must have required property 'release_level' in .repo-metadata.json
* must have required property 'client_documentation' in .repo-metadata.json
☝️ Once you correct these problems, you can close this issue.
Reach out to **go/github-automation** if you have any questions.
|
process
|
your repo metadata json file has a problem 🤒 you have a problem with your repo metadata json file result of scan 📈 must have required property library type in repo metadata json must have required property release level in repo metadata json must have required property client documentation in repo metadata json ☝️ once you correct these problems you can close this issue reach out to go github automation if you have any questions
| 1
|
32,156
| 6,721,893,513
|
IssuesEvent
|
2017-10-16 13:27:44
|
zotonic/zotonic
|
https://api.github.com/repos/zotonic/zotonic
|
opened
|
Access control rules: default rules are missing
|
admin-ui defect
|
From Gitter:
> There should at least be “good defaults” in place so that the site works as expected and that you can learn from the existing settings.
> There is actually some "default rules install", but it doesn't seem to be installed.
|
1.0
|
Access control rules: default rules are missing - From Gitter:
> There should at least be “good defaults” in place so that the site works as expected and that you can learn from the existing settings.
> There is actually some "default rules install", but it doesn't seem to be installed.
|
non_process
|
access control rules default rules are missing from gitter there should at least be “good defaults” in place so that the site works as expected and that you can learn from the existing settings there is actually some default rules install but it doesn t seem to be installed
| 0
|
204,891
| 7,092,044,410
|
IssuesEvent
|
2018-01-12 15:16:53
|
guardianproject/haven
|
https://api.github.com/repos/guardianproject/haven
|
closed
|
Alarms dismissed
|
bug low-priority
|
Have there been any issues with this app dismissing Android alarms? My alarms were all dismissed this morning. This is the only new app on my phone. How can I resolve that without deleting the app if it is what caused this? Thanks
|
1.0
|
Alarms dismissed - Have there been any issues with this app dismissing Android alarms? My alarms were all dismissed this morning. This is the only new app on my phone. How can I resolve that without deleting the app if it is what caused this? Thanks
|
non_process
|
alarms dismissed have there been any issues with this app dismissing android alarms my alarms were all dismissed this morning this is the only new app on my phone how can i resolve that without deleting the app if it is what caused this thanks
| 0
|
2,459
| 5,240,735,184
|
IssuesEvent
|
2017-01-31 14:00:07
|
rubberduck-vba/Rubberduck
|
https://api.github.com/repos/rubberduck-vba/Rubberduck
|
closed
|
Controls on UserForms should resolve the AsTypeName to a coclass instead of an interface.
|
enhancement parse-tree-processing quality-control
|
Currently, RD is querying the `IDispatch` interface for controls on UserForms and using the default interface for the type of the `Declaration`:

While this is _technically_ correct from a purely COM perspective and allows for member resolution, VBA treats them as instances of the coclass, and it would probably be more meaningful to users to treat them the same way:
```
Private Sub UserForm_Activate()
Debug.Print TypeName(Label1) '<--Prints "Label"
End Sub
```
In fact in fm20.dll at least, the default interfaces for the coclasses they're associated with are hidden (that's why you won't find `ILabelControl` in the Object Browser).
RD should resolve these as coclasses, probably via a default interface->coclass lookup table.
|
1.0
|
Controls on UserForms should resolve the AsTypeName to a coclass instead of an interface. - Currently, RD is querying the `IDispatch` interface for controls on UserForms and using the default interface for the type of the `Declaration`:

While this is _technically_ correct from a purely COM perspective and allows for member resolution, VBA treats them as instances of the coclass, and it would probably be more meaningful to users to treat them the same way:
```
Private Sub UserForm_Activate()
Debug.Print TypeName(Label1) '<--Prints "Label"
End Sub
```
In fact in fm20.dll at least, the default interfaces for the coclasses they're associated with are hidden (that's why you won't find `ILabelControl` in the Object Browser).
RD should resolve these as coclasses, probably via a default interface->coclass lookup table.
|
process
|
controls on userforms should resolve the astypename to a coclass instead of an interface currently rd is querying the idispatch interface for controls on userforms and using the default interface for the type of the declaration while this is technically correct from a purely com perspective and allows for member resolution vba treats them as instances of the coclass and it would probably be more meaningful to users to treat them the same way private sub userform activate debug print typename prints label end sub in fact in dll at least the default interfaces for the coclasses they re associated with are hidden that s why you won t find ilabelcontrol in the object browser rd should resolve these as coclasses probably via a default interface coclass lookup table
| 1
|
8,719
| 11,855,647,818
|
IssuesEvent
|
2020-03-25 05:03:34
|
kubeflow/testing
|
https://api.github.com/repos/kubeflow/testing
|
closed
|
Setup KCC to manage kubeflow.org projects
|
area/engprod kind/process priority/p1
|
We are creating more and more GCP projects and GKE clusters to use for administering kubeflow.org.
In order to continue to scale and grow we need to manage this more declaratively and move to GitOps. This will eventually make it easier for Kubeflow contributors to create infrastructure they need just by submitting PRs.
The latest example is that @scottilee is trying to set up infrastructure to auto-sync events to the Kubeflow calendar kubeflow/community#288.
To do that we need
1. A new GCP project or reuse an existing GCP project.
1. A GKE cluster
1. A GCP service account
I think we should start using [GCP Cloud Connector](https://cloud.google.com/config-connector/docs/overview) to set this up.
Here's how this should work.
1. All KCC managed projects should be in the GCP folder kf-kcc
* This folder already exists.
1. The project kf-kcc-admin is where we should create a GKE cluster with cloud connector installed
1. GCP projects created via KCC should exist in the subfolder kf-kcc-admin > users
* This sub folder already exists as well.
1. Resources created by KCC should be checked into source control inside kubeflow/testing
1. We should create a google group in https://github.com/kubeflow/internal-acls for folks with permissions to administer/setup KCC
So first we need to setup KCC
1. Create the Google Group
1. Grant the google group appropriate permissions on the folder
1. Setup KCC in project kf-kcc-admin
* There is already a cluster named kf-kcc in that project I think we might already have an older version of KCC installed and setup on it.
* We can try reusing that or starting a clean installation.
Once KCC is setup we should use it to setup the resources for the community calendar
1. Create a GCP project for community management
* Doesn't look like KCC supports project creation so we should do that manually (GoogleCloudPlatform/k8s-config-connector#32)
1. Use KCC to create the appropriate [service account](https://github.com/GoogleCloudPlatform/k8s-config-connector/tree/master/resources/iamserviceaccount) to be used to do the calendar sync.
1. Use KCC to define an IAM policy to give folks work on community maintenance access to that project (e.g. make them project editors)
1. Either use KCC to setup a GKE cluster in that project or use kfctl to deploy a KF cluster with IAP in that project
* We should use GKE workload identity to allow processes in that cluster to authenticate as a google service account
1. Deploy the calendar sync app in that cluster
@derekhh and @scottilee Any interest in trying to take this on?
|
1.0
|
Setup KCC to manage kubeflow.org projects - We are creating more and more GCP projects and GKE clusters to use for administering kubeflow.org.
In order to continue to scale and grow we need to manage this more declaratively and move to GitOps. This will eventually make it easier for Kubeflow contributors to create infrastructure they need just by submitting PRs.
The latest example is that @scottilee is trying to set up infrastructure to auto-sync events to the Kubeflow calendar kubeflow/community#288.
To do that we need
1. A new GCP project or reuse an existing GCP project.
1. A GKE cluster
1. A GCP service account
I think we should start using [GCP Cloud Connector](https://cloud.google.com/config-connector/docs/overview) to set this up.
Here's how this should work.
1. All KCC managed projects should be in the GCP folder kf-kcc
* This folder already exists.
1. The project kf-kcc-admin is where we should create a GKE cluster with cloud connector installed
1. GCP projects created via KCC should exist in the subfolder kf-kcc-admin > users
* This sub folder already exists as well.
1. Resources created by KCC should be checked into source control inside kubeflow/testing
1. We should create a google group in https://github.com/kubeflow/internal-acls for folks with permissions to administer/setup KCC
So first we need to setup KCC
1. Create the Google Group
1. Grant the google group appropriate permissions on the folder
1. Setup KCC in project kf-kcc-admin
* There is already a cluster named kf-kcc in that project I think we might already have an older version of KCC installed and setup on it.
* We can try reusing that or starting a clean installation.
Once KCC is setup we should use it to setup the resources for the community calendar
1. Create a GCP project for community management
* Doesn't look like KCC supports project creation so we should do that manually (GoogleCloudPlatform/k8s-config-connector#32)
1. Use KCC to create the appropriate [service account](https://github.com/GoogleCloudPlatform/k8s-config-connector/tree/master/resources/iamserviceaccount) to be used to do the calendar sync.
1. Use KCC to define an IAM policy to give folks work on community maintenance access to that project (e.g. make them project editors)
1. Either use KCC to setup a GKE cluster in that project or use kfctl to deploy a KF cluster with IAP in that project
* We should use GKE workload identity to allow processes in that cluster to authenticate as a google service account
1. Deploy the calendar sync app in that cluster
@derekhh and @scottilee Any interest in trying to take this on?
|
process
|
setup kcc to manage kubeflow org projects we are creating more and more gcp projects and gke clusters to use for administering kubeflow org in order to continue to scale and grow we need to manage this more declaratively and move to gitops this will eventually make it easier for kubeflow contributors to create infrastructure they need just by submitting prs the latest example is scottilee is trying to setup infrastructure to auto sync events to the kubeflow calendar kubeflow community to do that we need a new gcp project or reuse an existing gcp project a gke cluster a gcp service account i think we should start using to set this up here s how this should work all kcc managed projects should be in the gcp folder kf kcc this folder already exists the project kf kcc admin is where we should create a gke cluster with cloud connector installed gcp projects created via kcc should exist in the subfolder kf kcc admin users this sub folder already exists as well resources created by kcc should be checked into source control inside kubeflow testing we should create a google group in for folks with permissions to administer setup kcc so first we need to setup kcc create the google group grant the google group appropriate permissions on the folder setup kcc in project kf kcc admin there is already a cluster named kf kcc in that project i think we might already have an older version of kcc installed and setup on it we can try reusing that or starting a clean installation once kcc is setup we should use it to setup the resources for the community calendar create a gcp project for community management doesn t look like kcc supports project creation so we should do that manually googlecloudplatform config connector use kcc to create the appropriate to be used to do the calendar sync use kcc to define an iam policy to give folks work on community maintenance access to that project e g make them project editors either use kcc to setup a gke cluster in that project or use kfctl to deploy a kf cluster with iap in that project we should use gke workload identity to allow processes in that cluster to authenticate as a google service account deploy the calendar sync app in that cluster derekhh and scottilee any interest in trying to take this on
| 1
|
45,864
| 18,872,116,052
|
IssuesEvent
|
2021-11-13 11:16:19
|
microsoft/vscode-cpptools
|
https://api.github.com/repos/microsoft/vscode-cpptools
|
closed
|
Extension causes high cpu load
|
Language Service more info needed
|
- Issue Type: `Performance`
- Extension Name: `cpptools`
- Extension Version: `1.6.0`
- OS Version: `Windows_NT x64 10.0.19043`
- VS Code version: `1.60.0`
:warning: Make sure to **attach** this file from your *home*-directory:
:warning:`c:\Users\PHILLI~1\AppData\Local\Temp\ms-vscode.cpptools-unresponsive.cpuprofile.txt`
Find more details here: https://github.com/microsoft/vscode/wiki/Explain-extension-causes-high-cpu-load
|
1.0
|
Extension causes high cpu load - - Issue Type: `Performance`
- Extension Name: `cpptools`
- Extension Version: `1.6.0`
- OS Version: `Windows_NT x64 10.0.19043`
- VS Code version: `1.60.0`
:warning: Make sure to **attach** this file from your *home*-directory:
:warning:`c:\Users\PHILLI~1\AppData\Local\Temp\ms-vscode.cpptools-unresponsive.cpuprofile.txt`
Find more details here: https://github.com/microsoft/vscode/wiki/Explain-extension-causes-high-cpu-load
|
non_process
|
extension causes high cpu load issue type performance extension name cpptools extension version os version windows nt vs code version warning make sure to attach this file from your home directory warning c users philli appdata local temp ms vscode cpptools unresponsive cpuprofile txt find more details here
| 0
|
66,480
| 8,941,078,344
|
IssuesEvent
|
2019-01-24 02:37:35
|
chocolatemelt/kadopon-server
|
https://api.github.com/repos/chocolatemelt/kadopon-server
|
closed
|
Review game architecture
|
documentation task
|
Since it's been about half a year since the last time work has been done, we should review the current game architecture as well as any other issues that might be a problem if we don't squash them early. The goal is to avoid tech debt at all costs while we're still in the earliest stage of planning.
Game state and game information management are the primary topics to be reviewed.
|
1.0
|
Review game architecture - Since it's been about half a year since the last time work has been done, we should review the current game architecture as well as any other issues that might be a problem if we don't squash them early. The goal is to avoid tech debt at all costs while we're still in the earliest stage of planning.
Game state and game information management are the primary topics to be reviewed.
|
non_process
|
review game architecture since it s been about half a year since the last time work has been done we should review the current game architecture as well as any other issues that might be a problem if we don t squash them early the goal is to avoid tech debt at all costs while we re still in the earliest stage of planning game state and game information management are the primary topics to be reviewed
| 0
|
5,481
| 8,355,839,703
|
IssuesEvent
|
2018-10-02 16:44:51
|
HumanCellAtlas/dcp-community
|
https://api.github.com/repos/HumanCellAtlas/dcp-community
|
opened
|
Updates to RFC process from review
|
rfc-process
|
- [ ] **Summary** edits for "consensus-driven" and "egalitarian" from @briandoconnor
- [ ] **Motivation** edits for [leads tracking feature decision making process](https://github.com/HumanCellAtlas/dcp-community/pull/27/files#r219857033) from @briandoconnor
- [ ] **When is an RFC required?** add suggestions from @briandoconnor and @tburdett
|
1.0
|
Updates to RFC process from review - - [ ] **Summary** edits for "consensus-driven" and "egalitarian" from @briandoconnor
- [ ] **Motivation** edits for [leads tracking feature decision making process](https://github.com/HumanCellAtlas/dcp-community/pull/27/files#r219857033) from @briandoconnor
- [ ] **When is an RFC required?** add suggestions from @briandoconnor and @tburdett
|
process
|
updates to rfc process from review summary edits for consensus driven and egalitarian from briandoconnor motivation edits for from briandoconnor when is an rfc required add suggestions from briandoconnor and tburdett
| 1
|
22,599
| 31,820,779,284
|
IssuesEvent
|
2023-09-14 02:00:08
|
lizhihao6/get-daily-arxiv-noti
|
https://api.github.com/repos/lizhihao6/get-daily-arxiv-noti
|
opened
|
New submissions for Thu, 14 Sep 23
|
event camera white balance isp compression image signal processing image signal process raw raw image events camera color contrast events AWB
|
## Keyword: events
### Dynamic Spectrum Mixer for Visual Recognition
- **Authors:** Zhiqiang Hu, Tao Yu
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Artificial Intelligence (cs.AI)
- **Arxiv link:** https://arxiv.org/abs/2309.06721
- **Pdf link:** https://arxiv.org/pdf/2309.06721
- **Abstract**
Recently, MLP-based vision backbones have achieved promising performance in several visual recognition tasks. However, the existing MLP-based methods directly aggregate tokens with static weights, leaving the adaptability to different images untouched. Moreover, Recent research demonstrates that MLP-Transformer is great at creating long-range dependencies but ineffective at catching high frequencies that primarily transmit local information, which prevents it from applying to the downstream dense prediction tasks, such as semantic segmentation. To address these challenges, we propose a content-adaptive yet computationally efficient structure, dubbed Dynamic Spectrum Mixer (DSM). The DSM represents token interactions in the frequency domain by employing the Discrete Cosine Transform, which can learn long-term spatial dependencies with log-linear complexity. Furthermore, a dynamic spectrum weight generation layer is proposed as the spectrum bands selector, which could emphasize the informative frequency bands while diminishing others. To this end, the technique can efficiently learn detailed features from visual input that contains both high- and low-frequency information. Extensive experiments show that DSM is a powerful and adaptable backbone for a range of visual recognition tasks. Particularly, DSM outperforms previous transformer-based and MLP-based models, on image classification, object detection, and semantic segmentation tasks, such as 83.8 \% top-1 accuracy on ImageNet, and 49.9 \% mIoU on ADE20K.
### Tracking Particles Ejected From Active Asteroid Bennu With Event-Based Vision
- **Authors:** Loïc J. Azzalini, Dario Izzo
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Robotics (cs.RO)
- **Arxiv link:** https://arxiv.org/abs/2309.06819
- **Pdf link:** https://arxiv.org/pdf/2309.06819
- **Abstract**
Early detection and tracking of ejecta in the vicinity of small solar system bodies is crucial to guarantee spacecraft safety and support scientific observation. During the visit of active asteroid Bennu, the OSIRIS-REx spacecraft relied on the analysis of images captured by onboard navigation cameras to detect particle ejection events, which ultimately became one of the mission's scientific highlights. To increase the scientific return of similar time-constrained missions, this work proposes an event-based solution that is dedicated to the detection and tracking of centimetre-sized particles. Unlike a standard frame-based camera, the pixels of an event-based camera independently trigger events indicating whether the scene brightness has increased or decreased at that time and location in the sensor plane. As a result of the sparse and asynchronous spatiotemporal output, event cameras combine very high dynamic range and temporal resolution with low-power consumption, which could complement existing onboard imaging techniques. This paper motivates the use of a scientific event camera by reconstructing the particle ejection episodes reported by the OSIRIS-REx mission in a photorealistic scene generator and in turn, simulating event-based observations. The resulting streams of spatiotemporal data support future work on event-based multi-object tracking.
## Keyword: event camera
### Tracking Particles Ejected From Active Asteroid Bennu With Event-Based Vision
- **Authors:** Loïc J. Azzalini, Dario Izzo
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Robotics (cs.RO)
- **Arxiv link:** https://arxiv.org/abs/2309.06819
- **Pdf link:** https://arxiv.org/pdf/2309.06819
- **Abstract**
Early detection and tracking of ejecta in the vicinity of small solar system bodies is crucial to guarantee spacecraft safety and support scientific observation. During the visit of active asteroid Bennu, the OSIRIS-REx spacecraft relied on the analysis of images captured by onboard navigation cameras to detect particle ejection events, which ultimately became one of the mission's scientific highlights. To increase the scientific return of similar time-constrained missions, this work proposes an event-based solution that is dedicated to the detection and tracking of centimetre-sized particles. Unlike a standard frame-based camera, the pixels of an event-based camera independently trigger events indicating whether the scene brightness has increased or decreased at that time and location in the sensor plane. As a result of the sparse and asynchronous spatiotemporal output, event cameras combine very high dynamic range and temporal resolution with low-power consumption, which could complement existing onboard imaging techniques. This paper motivates the use of a scientific event camera by reconstructing the particle ejection episodes reported by the OSIRIS-REx mission in a photorealistic scene generator and in turn, simulating event-based observations. The resulting streams of spatiotemporal data support future work on event-based multi-object tracking.
## Keyword: events camera
There is no result
## Keyword: white balance
There is no result
## Keyword: color contrast
There is no result
## Keyword: AWB
### Contrast-Phys+: Unsupervised and Weakly-supervised Video-based Remote Physiological Measurement via Spatiotemporal Contrast
- **Authors:** Zhaodong Sun, Xiaobai Li
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV)
- **Arxiv link:** https://arxiv.org/abs/2309.06924
- **Pdf link:** https://arxiv.org/pdf/2309.06924
- **Abstract**
Video-based remote physiological measurement utilizes facial videos to measure the blood volume change signal, which is also called remote photoplethysmography (rPPG). Supervised methods for rPPG measurements have been shown to achieve good performance. However, the drawback of these methods is that they require facial videos with ground truth (GT) physiological signals, which are often costly and difficult to obtain. In this paper, we propose Contrast-Phys+, a method that can be trained in both unsupervised and weakly-supervised settings. We employ a 3DCNN model to generate multiple spatiotemporal rPPG signals and incorporate prior knowledge of rPPG into a contrastive loss function. We further incorporate the GT signals into contrastive learning to adapt to partial or misaligned labels. The contrastive loss encourages rPPG/GT signals from the same video to be grouped together, while pushing those from different videos apart. We evaluate our methods on five publicly available datasets that include both RGB and Near-infrared videos. Contrast-Phys+ outperforms the state-of-the-art supervised methods, even when using partially available or misaligned GT signals, or no labels at all. Additionally, we highlight the advantages of our methods in terms of computational efficiency, noise robustness, and generalization.
## Keyword: ISP
### GelFlow: Self-supervised Learning of Optical Flow for Vision-Based Tactile Sensor Displacement Measurement
- **Authors:** Zhiyuan Zhang, Hua Yang, Zhouping Yin
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV)
- **Arxiv link:** https://arxiv.org/abs/2309.06735
- **Pdf link:** https://arxiv.org/pdf/2309.06735
- **Abstract**
High-resolution multi-modality information acquired by vision-based tactile sensors can support more dexterous manipulations for robot fingers. Optical flow is low-level information directly obtained by vision-based tactile sensors, which can be transformed into other modalities like force, geometry and depth. Current vision-tactile sensors employ optical flow methods from OpenCV to estimate the deformation of markers in gels. However, these methods need to be more precise for accurately measuring the displacement of markers during large elastic deformation of the gel, as this can significantly impact the accuracy of downstream tasks. This study proposes a self-supervised optical flow method based on deep learning to achieve high accuracy in displacement measurement for vision-based tactile sensors. The proposed method employs a coarse-to-fine strategy to handle large deformations by constructing a multi-scale feature pyramid from the input image. To better deal with the elastic deformation caused by the gel, the Helmholtz velocity decomposition constraint combined with the elastic deformation constraint are adopted to address the distortion rate and area change rate, respectively. A local flow fusion module is designed to smooth the optical flow, taking into account the prior knowledge of the blurred effect of gel deformation. We trained the proposed self-supervised network using an open-source dataset and compared it with traditional and deep learning-based optical flow methods. The results show that the proposed method achieved the highest displacement measurement accuracy, thereby demonstrating its potential for enabling more precise measurement of downstream tasks using vision-based tactile sensors.
## Keyword: image signal processing
There is no result
## Keyword: image signal process
There is no result
## Keyword: compression
### Differentiable JPEG: The Devil is in the Details
- **Authors:** Christoph Reich, Biplob Debnath, Deep Patel, Srimat Chakradhar
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Multimedia (cs.MM)
- **Arxiv link:** https://arxiv.org/abs/2309.06978
- **Pdf link:** https://arxiv.org/pdf/2309.06978
- **Abstract**
JPEG remains one of the most widespread lossy image coding methods. However, the non-differentiable nature of JPEG restricts the application in deep learning pipelines. Several differentiable approximations of JPEG have recently been proposed to address this issue. This paper conducts a comprehensive review of existing diff. JPEG approaches and identifies critical details that have been missed by previous methods. To this end, we propose a novel diff. JPEG approach, overcoming previous limitations. Our approach is differentiable w.r.t. the input image, the JPEG quality, the quantization tables, and the color conversion parameters. We evaluate the forward and backward performance of our diff. JPEG approach against existing methods. Additionally, extensive ablations are performed to evaluate crucial design choices. Our proposed diff. JPEG resembles the (non-diff.) reference implementation best, significantly surpassing the recent-best diff. approach by $3.47$dB (PSNR) on average. For strong compression rates, we can even improve PSNR by $9.51$dB. Strong adversarial attack results are yielded by our diff. JPEG, demonstrating the effective gradient approximation. Our code is available at https://github.com/necla-ml/Diff-JPEG.
## Keyword: RAW
### Contrast-Phys+: Unsupervised and Weakly-supervised Video-based Remote Physiological Measurement via Spatiotemporal Contrast
- **Authors:** Zhaodong Sun, Xiaobai Li
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV)
- **Arxiv link:** https://arxiv.org/abs/2309.06924
- **Pdf link:** https://arxiv.org/pdf/2309.06924
- **Abstract**
Video-based remote physiological measurement utilizes facial videos to measure the blood volume change signal, which is also called remote photoplethysmography (rPPG). Supervised methods for rPPG measurements have been shown to achieve good performance. However, the drawback of these methods is that they require facial videos with ground truth (GT) physiological signals, which are often costly and difficult to obtain. In this paper, we propose Contrast-Phys+, a method that can be trained in both unsupervised and weakly-supervised settings. We employ a 3DCNN model to generate multiple spatiotemporal rPPG signals and incorporate prior knowledge of rPPG into a contrastive loss function. We further incorporate the GT signals into contrastive learning to adapt to partial or misaligned labels. The contrastive loss encourages rPPG/GT signals from the same video to be grouped together, while pushing those from different videos apart. We evaluate our methods on five publicly available datasets that include both RGB and Near-infrared videos. Contrast-Phys+ outperforms the state-of-the-art supervised methods, even when using partially available or misaligned GT signals, or no labels at all. Additionally, we highlight the advantages of our methods in terms of computational efficiency, noise robustness, and generalization.
### Tree-Structured Shading Decomposition
- **Authors:** Chen Geng, Hong-Xing Yu, Sharon Zhang, Maneesh Agrawala, Jiajun Wu
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Graphics (cs.GR)
- **Arxiv link:** https://arxiv.org/abs/2309.07122
- **Pdf link:** https://arxiv.org/pdf/2309.07122
- **Abstract**
We study inferring a tree-structured representation from a single image for object shading. Prior work typically uses the parametric or measured representation to model shading, which is neither interpretable nor easily editable. We propose using the shade tree representation, which combines basic shading nodes and compositing methods to factorize object surface shading. The shade tree representation enables novice users who are unfamiliar with the physical shading process to edit object shading in an efficient and intuitive manner. A main challenge in inferring the shade tree is that the inference problem involves both the discrete tree structure and the continuous parameters of the tree nodes. We propose a hybrid approach to address this issue. We introduce an auto-regressive inference model to generate a rough estimation of the tree structure and node parameters, and then we fine-tune the inferred shade tree through an optimization algorithm. We show experiments on synthetic images, captured reflectance, real images, and non-realistic vector drawings, allowing downstream applications such as material editing, vectorized shading, and relighting. Project website: https://chen-geng.com/inv-shade-trees
## Keyword: raw image
There is no result
|
2.0
|
New submissions for Thu, 14 Sep 23 - ## Keyword: events
### Dynamic Spectrum Mixer for Visual Recognition
- **Authors:** Zhiqiang Hu, Tao Yu
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Artificial Intelligence (cs.AI)
- **Arxiv link:** https://arxiv.org/abs/2309.06721
- **Pdf link:** https://arxiv.org/pdf/2309.06721
- **Abstract**
Recently, MLP-based vision backbones have achieved promising performance in several visual recognition tasks. However, the existing MLP-based methods directly aggregate tokens with static weights, leaving the adaptability to different images untouched. Moreover, Recent research demonstrates that MLP-Transformer is great at creating long-range dependencies but ineffective at catching high frequencies that primarily transmit local information, which prevents it from applying to the downstream dense prediction tasks, such as semantic segmentation. To address these challenges, we propose a content-adaptive yet computationally efficient structure, dubbed Dynamic Spectrum Mixer (DSM). The DSM represents token interactions in the frequency domain by employing the Discrete Cosine Transform, which can learn long-term spatial dependencies with log-linear complexity. Furthermore, a dynamic spectrum weight generation layer is proposed as the spectrum bands selector, which could emphasize the informative frequency bands while diminishing others. To this end, the technique can efficiently learn detailed features from visual input that contains both high- and low-frequency information. Extensive experiments show that DSM is a powerful and adaptable backbone for a range of visual recognition tasks. Particularly, DSM outperforms previous transformer-based and MLP-based models, on image classification, object detection, and semantic segmentation tasks, such as 83.8 \% top-1 accuracy on ImageNet, and 49.9 \% mIoU on ADE20K.
### Tracking Particles Ejected From Active Asteroid Bennu With Event-Based Vision
- **Authors:** Loïc J. Azzalini, Dario Izzo
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Robotics (cs.RO)
- **Arxiv link:** https://arxiv.org/abs/2309.06819
- **Pdf link:** https://arxiv.org/pdf/2309.06819
- **Abstract**
Early detection and tracking of ejecta in the vicinity of small solar system bodies is crucial to guarantee spacecraft safety and support scientific observation. During the visit of active asteroid Bennu, the OSIRIS-REx spacecraft relied on the analysis of images captured by onboard navigation cameras to detect particle ejection events, which ultimately became one of the mission's scientific highlights. To increase the scientific return of similar time-constrained missions, this work proposes an event-based solution that is dedicated to the detection and tracking of centimetre-sized particles. Unlike a standard frame-based camera, the pixels of an event-based camera independently trigger events indicating whether the scene brightness has increased or decreased at that time and location in the sensor plane. As a result of the sparse and asynchronous spatiotemporal output, event cameras combine very high dynamic range and temporal resolution with low-power consumption, which could complement existing onboard imaging techniques. This paper motivates the use of a scientific event camera by reconstructing the particle ejection episodes reported by the OSIRIS-REx mission in a photorealistic scene generator and in turn, simulating event-based observations. The resulting streams of spatiotemporal data support future work on event-based multi-object tracking.
## Keyword: event camera
### Tracking Particles Ejected From Active Asteroid Bennu With Event-Based Vision
- **Authors:** Loïc J. Azzalini, Dario Izzo
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Robotics (cs.RO)
- **Arxiv link:** https://arxiv.org/abs/2309.06819
- **Pdf link:** https://arxiv.org/pdf/2309.06819
- **Abstract**
Early detection and tracking of ejecta in the vicinity of small solar system bodies is crucial to guarantee spacecraft safety and support scientific observation. During the visit of active asteroid Bennu, the OSIRIS-REx spacecraft relied on the analysis of images captured by onboard navigation cameras to detect particle ejection events, which ultimately became one of the mission's scientific highlights. To increase the scientific return of similar time-constrained missions, this work proposes an event-based solution that is dedicated to the detection and tracking of centimetre-sized particles. Unlike a standard frame-based camera, the pixels of an event-based camera independently trigger events indicating whether the scene brightness has increased or decreased at that time and location in the sensor plane. As a result of the sparse and asynchronous spatiotemporal output, event cameras combine very high dynamic range and temporal resolution with low-power consumption, which could complement existing onboard imaging techniques. This paper motivates the use of a scientific event camera by reconstructing the particle ejection episodes reported by the OSIRIS-REx mission in a photorealistic scene generator and in turn, simulating event-based observations. The resulting streams of spatiotemporal data support future work on event-based multi-object tracking.
## Keyword: events camera
There is no result
## Keyword: white balance
There is no result
## Keyword: color contrast
There is no result
## Keyword: AWB
### Contrast-Phys+: Unsupervised and Weakly-supervised Video-based Remote Physiological Measurement via Spatiotemporal Contrast
- **Authors:** Zhaodong Sun, Xiaobai Li
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV)
- **Arxiv link:** https://arxiv.org/abs/2309.06924
- **Pdf link:** https://arxiv.org/pdf/2309.06924
- **Abstract**
Video-based remote physiological measurement utilizes facial videos to measure the blood volume change signal, which is also called remote photoplethysmography (rPPG). Supervised methods for rPPG measurements have been shown to achieve good performance. However, the drawback of these methods is that they require facial videos with ground truth (GT) physiological signals, which are often costly and difficult to obtain. In this paper, we propose Contrast-Phys+, a method that can be trained in both unsupervised and weakly-supervised settings. We employ a 3DCNN model to generate multiple spatiotemporal rPPG signals and incorporate prior knowledge of rPPG into a contrastive loss function. We further incorporate the GT signals into contrastive learning to adapt to partial or misaligned labels. The contrastive loss encourages rPPG/GT signals from the same video to be grouped together, while pushing those from different videos apart. We evaluate our methods on five publicly available datasets that include both RGB and Near-infrared videos. Contrast-Phys+ outperforms the state-of-the-art supervised methods, even when using partially available or misaligned GT signals, or no labels at all. Additionally, we highlight the advantages of our methods in terms of computational efficiency, noise robustness, and generalization.
## Keyword: ISP
### GelFlow: Self-supervised Learning of Optical Flow for Vision-Based Tactile Sensor Displacement Measurement
- **Authors:** Zhiyuan Zhang, Hua Yang, Zhouping Yin
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV)
- **Arxiv link:** https://arxiv.org/abs/2309.06735
- **Pdf link:** https://arxiv.org/pdf/2309.06735
- **Abstract**
High-resolution multi-modality information acquired by vision-based tactile sensors can support more dexterous manipulations for robot fingers. Optical flow is low-level information directly obtained by vision-based tactile sensors, which can be transformed into other modalities like force, geometry and depth. Current vision-tactile sensors employ optical flow methods from OpenCV to estimate the deformation of markers in gels. However, these methods need to be more precise for accurately measuring the displacement of markers during large elastic deformation of the gel, as this can significantly impact the accuracy of downstream tasks. This study proposes a self-supervised optical flow method based on deep learning to achieve high accuracy in displacement measurement for vision-based tactile sensors. The proposed method employs a coarse-to-fine strategy to handle large deformations by constructing a multi-scale feature pyramid from the input image. To better deal with the elastic deformation caused by the gel, the Helmholtz velocity decomposition constraint combined with the elastic deformation constraint are adopted to address the distortion rate and area change rate, respectively. A local flow fusion module is designed to smooth the optical flow, taking into account the prior knowledge of the blurred effect of gel deformation. We trained the proposed self-supervised network using an open-source dataset and compared it with traditional and deep learning-based optical flow methods. The results show that the proposed method achieved the highest displacement measurement accuracy, thereby demonstrating its potential for enabling more precise measurement of downstream tasks using vision-based tactile sensors.
## Keyword: image signal processing
There is no result
## Keyword: image signal process
There is no result
## Keyword: compression
### Differentiable JPEG: The Devil is in the Details
- **Authors:** Christoph Reich, Biplob Debnath, Deep Patel, Srimat Chakradhar
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Multimedia (cs.MM)
- **Arxiv link:** https://arxiv.org/abs/2309.06978
- **Pdf link:** https://arxiv.org/pdf/2309.06978
- **Abstract**
JPEG remains one of the most widespread lossy image coding methods. However, the non-differentiable nature of JPEG restricts the application in deep learning pipelines. Several differentiable approximations of JPEG have recently been proposed to address this issue. This paper conducts a comprehensive review of existing diff. JPEG approaches and identifies critical details that have been missed by previous methods. To this end, we propose a novel diff. JPEG approach, overcoming previous limitations. Our approach is differentiable w.r.t. the input image, the JPEG quality, the quantization tables, and the color conversion parameters. We evaluate the forward and backward performance of our diff. JPEG approach against existing methods. Additionally, extensive ablations are performed to evaluate crucial design choices. Our proposed diff. JPEG resembles the (non-diff.) reference implementation best, significantly surpassing the recent-best diff. approach by $3.47$dB (PSNR) on average. For strong compression rates, we can even improve PSNR by $9.51$dB. Strong adversarial attack results are yielded by our diff. JPEG, demonstrating the effective gradient approximation. Our code is available at https://github.com/necla-ml/Diff-JPEG.
## Keyword: RAW
### Contrast-Phys+: Unsupervised and Weakly-supervised Video-based Remote Physiological Measurement via Spatiotemporal Contrast
- **Authors:** Zhaodong Sun, Xiaobai Li
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV)
- **Arxiv link:** https://arxiv.org/abs/2309.06924
- **Pdf link:** https://arxiv.org/pdf/2309.06924
- **Abstract**
Video-based remote physiological measurement utilizes facial videos to measure the blood volume change signal, which is also called remote photoplethysmography (rPPG). Supervised methods for rPPG measurements have been shown to achieve good performance. However, the drawback of these methods is that they require facial videos with ground truth (GT) physiological signals, which are often costly and difficult to obtain. In this paper, we propose Contrast-Phys+, a method that can be trained in both unsupervised and weakly-supervised settings. We employ a 3DCNN model to generate multiple spatiotemporal rPPG signals and incorporate prior knowledge of rPPG into a contrastive loss function. We further incorporate the GT signals into contrastive learning to adapt to partial or misaligned labels. The contrastive loss encourages rPPG/GT signals from the same video to be grouped together, while pushing those from different videos apart. We evaluate our methods on five publicly available datasets that include both RGB and Near-infrared videos. Contrast-Phys+ outperforms the state-of-the-art supervised methods, even when using partially available or misaligned GT signals, or no labels at all. Additionally, we highlight the advantages of our methods in terms of computational efficiency, noise robustness, and generalization.
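The core contrastive idea can be sketched in a few lines: rPPG snippets drawn from the same video should have similar power spectra, while snippets from different videos should not. The loss, names, and distance choice below are illustrative assumptions, not the paper's exact formulation.
```python
# Toy spectral contrastive loss for rPPG snippets (expects N >= 2 snippets per video,
# all of the same length T).
import torch
import torch.nn.functional as F

def norm_psd(signal: torch.Tensor) -> torch.Tensor:
    """Normalized power spectral density of batched 1-D signals (N, T)."""
    spec = torch.fft.rfft(signal, dim=-1).abs() ** 2
    return spec / spec.sum(dim=-1, keepdim=True)

def contrastive_rppg_loss(same_video: torch.Tensor, other_video: torch.Tensor) -> torch.Tensor:
    p_same, p_other = norm_psd(same_video), norm_psd(other_video)
    pull = F.mse_loss(p_same[:-1], p_same[1:])   # attract spectra within a video
    push = -F.mse_loss(p_same, p_other)          # repel spectra across videos
    return pull + push
```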
### Tree-Structured Shading Decomposition
- **Authors:** Chen Geng, Hong-Xing Yu, Sharon Zhang, Maneesh Agrawala, Jiajun Wu
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Graphics (cs.GR)
- **Arxiv link:** https://arxiv.org/abs/2309.07122
- **Pdf link:** https://arxiv.org/pdf/2309.07122
- **Abstract**
We study inferring a tree-structured representation from a single image for object shading. Prior work typically uses the parametric or measured representation to model shading, which is neither interpretable nor easily editable. We propose using the shade tree representation, which combines basic shading nodes and compositing methods to factorize object surface shading. The shade tree representation enables novice users who are unfamiliar with the physical shading process to edit object shading in an efficient and intuitive manner. A main challenge in inferring the shade tree is that the inference problem involves both the discrete tree structure and the continuous parameters of the tree nodes. We propose a hybrid approach to address this issue. We introduce an auto-regressive inference model to generate a rough estimation of the tree structure and node parameters, and then we fine-tune the inferred shade tree through an optimization algorithm. We show experiments on synthetic images, captured reflectance, real images, and non-realistic vector drawings, allowing downstream applications such as material editing, vectorized shading, and relighting. Project website: https://chen-geng.com/inv-shade-trees
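A shade tree can be represented very simply as a recursive structure whose leaves hold base shading maps and whose internal nodes composite their children. The toy sketch below only illustrates that representation (the node fields, maps, and operators are made up), not the paper's inference procedure.
```python
# Minimal shade-tree data structure: leaves carry base maps, internal nodes composite.
from dataclasses import dataclass, field
from typing import Callable, List, Optional
import numpy as np

@dataclass
class ShadeNode:
    op: Optional[Callable[[List[np.ndarray]], np.ndarray]] = None  # compositing fn
    children: List["ShadeNode"] = field(default_factory=list)
    value: Optional[np.ndarray] = None                              # leaf payload

    def render(self) -> np.ndarray:
        if not self.children:
            return self.value
        return self.op([c.render() for c in self.children])

# Example: multiply an albedo map by a light map, then add a specular highlight.
albedo = ShadeNode(value=np.full((4, 4), 0.6))
light = ShadeNode(value=np.full((4, 4), 0.8))
highlight = ShadeNode(value=np.full((4, 4), 0.1))
diffuse = ShadeNode(op=lambda xs: xs[0] * xs[1], children=[albedo, light])
final = ShadeNode(op=lambda xs: np.clip(xs[0] + xs[1], 0.0, 1.0), children=[diffuse, highlight])
print(final.render().mean())   # composited shading for a 4x4 toy "object"
```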
## Keyword: raw image
There is no result
|
process
|
new submissions for thu sep keyword events dynamic spectrum mixer for visual recognition authors zhiqiang hu tao yu subjects computer vision and pattern recognition cs cv artificial intelligence cs ai arxiv link pdf link abstract recently mlp based vision backbones have achieved promising performance in several visual recognition tasks however the existing mlp based methods directly aggregate tokens with static weights leaving the adaptability to different images untouched moreover recent research demonstrates that mlp transformer is great at creating long range dependencies but ineffective at catching high frequencies that primarily transmit local information which prevents it from applying to the downstream dense prediction tasks such as semantic segmentation to address these challenges we propose a content adaptive yet computationally efficient structure dubbed dynamic spectrum mixer dsm the dsm represents token interactions in the frequency domain by employing the discrete cosine transform which can learn long term spatial dependencies with log linear complexity furthermore a dynamic spectrum weight generation layer is proposed as the spectrum bands selector which could emphasize the informative frequency bands while diminishing others to this end the technique can efficiently learn detailed features from visual input that contains both high and low frequency information extensive experiments show that dsm is a powerful and adaptable backbone for a range of visual recognition tasks particularly dsm outperforms previous transformer based and mlp based models on image classification object detection and semantic segmentation tasks such as top accuracy on imagenet and miou on tracking particles ejected from active asteroid bennu with event based vision authors loïc j azzalini dario izzo subjects computer vision and pattern recognition cs cv robotics cs ro arxiv link pdf link abstract early detection and tracking of ejecta in the vicinity of small solar system bodies is crucial to guarantee spacecraft safety and support scientific observation during the visit of active asteroid bennu the osiris rex spacecraft relied on the analysis of images captured by onboard navigation cameras to detect particle ejection events which ultimately became one of the mission s scientific highlights to increase the scientific return of similar time constrained missions this work proposes an event based solution that is dedicated to the detection and tracking of centimetre sized particles unlike a standard frame based camera the pixels of an event based camera independently trigger events indicating whether the scene brightness has increased or decreased at that time and location in the sensor plane as a result of the sparse and asynchronous spatiotemporal output event cameras combine very high dynamic range and temporal resolution with low power consumption which could complement existing onboard imaging techniques this paper motivates the use of a scientific event camera by reconstructing the particle ejection episodes reported by the osiris rex mission in a photorealistic scene generator and in turn simulating event based observations the resulting streams of spatiotemporal data support future work on event based multi object tracking keyword event camera tracking particles ejected from active asteroid bennu with event based vision authors loïc j azzalini dario izzo subjects computer vision and pattern recognition cs cv robotics cs ro arxiv link pdf link abstract early detection and tracking of ejecta in 
the vicinity of small solar system bodies is crucial to guarantee spacecraft safety and support scientific observation during the visit of active asteroid bennu the osiris rex spacecraft relied on the analysis of images captured by onboard navigation cameras to detect particle ejection events which ultimately became one of the mission s scientific highlights to increase the scientific return of similar time constrained missions this work proposes an event based solution that is dedicated to the detection and tracking of centimetre sized particles unlike a standard frame based camera the pixels of an event based camera independently trigger events indicating whether the scene brightness has increased or decreased at that time and location in the sensor plane as a result of the sparse and asynchronous spatiotemporal output event cameras combine very high dynamic range and temporal resolution with low power consumption which could complement existing onboard imaging techniques this paper motivates the use of a scientific event camera by reconstructing the particle ejection episodes reported by the osiris rex mission in a photorealistic scene generator and in turn simulating event based observations the resulting streams of spatiotemporal data support future work on event based multi object tracking keyword events camera there is no result keyword white balance there is no result keyword color contrast there is no result keyword awb contrast phys unsupervised and weakly supervised video based remote physiological measurement via spatiotemporal contrast authors zhaodong sun xiaobai li subjects computer vision and pattern recognition cs cv arxiv link pdf link abstract video based remote physiological measurement utilizes facial videos to measure the blood volume change signal which is also called remote photoplethysmography rppg supervised methods for rppg measurements have been shown to achieve good performance however the drawback of these methods is that they require facial videos with ground truth gt physiological signals which are often costly and difficult to obtain in this paper we propose contrast phys a method that can be trained in both unsupervised and weakly supervised settings we employ a model to generate multiple spatiotemporal rppg signals and incorporate prior knowledge of rppg into a contrastive loss function we further incorporate the gt signals into contrastive learning to adapt to partial or misaligned labels the contrastive loss encourages rppg gt signals from the same video to be grouped together while pushing those from different videos apart we evaluate our methods on five publicly available datasets that include both rgb and near infrared videos contrast phys outperforms the state of the art supervised methods even when using partially available or misaligned gt signals or no labels at all additionally we highlight the advantages of our methods in terms of computational efficiency noise robustness and generalization keyword isp gelflow self supervised learning of optical flow for vision based tactile sensor displacement measurement authors zhiyuan zhang hua yang zhouping yin subjects computer vision and pattern recognition cs cv arxiv link pdf link abstract high resolution multi modality information acquired by vision based tactile sensors can support more dexterous manipulations for robot fingers optical flow is low level information directly obtained by vision based tactile sensors which can be transformed into other modalities like force geometry and depth current 
vision tactile sensors employ optical flow methods from opencv to estimate the deformation of markers in gels however these methods need to be more precise for accurately measuring the displacement of markers during large elastic deformation of the gel as this can significantly impact the accuracy of downstream tasks this study proposes a self supervised optical flow method based on deep learning to achieve high accuracy in displacement measurement for vision based tactile sensors the proposed method employs a coarse to fine strategy to handle large deformations by constructing a multi scale feature pyramid from the input image to better deal with the elastic deformation caused by the gel the helmholtz velocity decomposition constraint combined with the elastic deformation constraint are adopted to address the distortion rate and area change rate respectively a local flow fusion module is designed to smooth the optical flow taking into account the prior knowledge of the blurred effect of gel deformation we trained the proposed self supervised network using an open source dataset and compared it with traditional and deep learning based optical flow methods the results show that the proposed method achieved the highest displacement measurement accuracy thereby demonstrating its potential for enabling more precise measurement of downstream tasks using vision based tactile sensors keyword image signal processing there is no result keyword image signal process there is no result keyword compression differentiable jpeg the devil is in the details authors christoph reich biplob debnath deep patel srimat chakradhar subjects computer vision and pattern recognition cs cv multimedia cs mm arxiv link pdf link abstract jpeg remains one of the most widespread lossy image coding methods however the non differentiable nature of jpeg restricts the application in deep learning pipelines several differentiable approximations of jpeg have recently been proposed to address this issue this paper conducts a comprehensive review of existing diff jpeg approaches and identifies critical details that have been missed by previous methods to this end we propose a novel diff jpeg approach overcoming previous limitations our approach is differentiable w r t the input image the jpeg quality the quantization tables and the color conversion parameters we evaluate the forward and backward performance of our diff jpeg approach against existing methods additionally extensive ablations are performed to evaluate crucial design choices our proposed diff jpeg resembles the non diff reference implementation best significantly surpassing the recent best diff approach by db psnr on average for strong compression rates we can even improve psnr by db strong adversarial attack results are yielded by our diff jpeg demonstrating the effective gradient approximation our code is available at keyword raw contrast phys unsupervised and weakly supervised video based remote physiological measurement via spatiotemporal contrast authors zhaodong sun xiaobai li subjects computer vision and pattern recognition cs cv arxiv link pdf link abstract video based remote physiological measurement utilizes facial videos to measure the blood volume change signal which is also called remote photoplethysmography rppg supervised methods for rppg measurements have been shown to achieve good performance however the drawback of these methods is that they require facial videos with ground truth gt physiological signals which are often costly and difficult to 
obtain in this paper we propose contrast phys a method that can be trained in both unsupervised and weakly supervised settings we employ a model to generate multiple spatiotemporal rppg signals and incorporate prior knowledge of rppg into a contrastive loss function we further incorporate the gt signals into contrastive learning to adapt to partial or misaligned labels the contrastive loss encourages rppg gt signals from the same video to be grouped together while pushing those from different videos apart we evaluate our methods on five publicly available datasets that include both rgb and near infrared videos contrast phys outperforms the state of the art supervised methods even when using partially available or misaligned gt signals or no labels at all additionally we highlight the advantages of our methods in terms of computational efficiency noise robustness and generalization tree structured shading decomposition authors chen geng hong xing yu sharon zhang maneesh agrawala jiajun wu subjects computer vision and pattern recognition cs cv graphics cs gr arxiv link pdf link abstract we study inferring a tree structured representation from a single image for object shading prior work typically uses the parametric or measured representation to model shading which is neither interpretable nor easily editable we propose using the shade tree representation which combines basic shading nodes and compositing methods to factorize object surface shading the shade tree representation enables novice users who are unfamiliar with the physical shading process to edit object shading in an efficient and intuitive manner a main challenge in inferring the shade tree is that the inference problem involves both the discrete tree structure and the continuous parameters of the tree nodes we propose a hybrid approach to address this issue we introduce an auto regressive inference model to generate a rough estimation of the tree structure and node parameters and then we fine tune the inferred shade tree through an optimization algorithm we show experiments on synthetic images captured reflectance real images and non realistic vector drawings allowing downstream applications such as material editing vectorized shading and relighting project website keyword raw image there is no result
| 1
|
3,755
| 6,733,154,234
|
IssuesEvent
|
2017-10-18 14:00:40
|
york-region-tpss/stp
|
https://api.github.com/repos/york-region-tpss/stp
|
closed
|
Price record management trigger.
|
process workflow
|
Update the price record when the contract item details is processed (#12).
|
1.0
|
Price record management trigger. - Update the price record when the contract item details is processed (#12).
|
process
|
price record management trigger update the price record when the contract item details is processed
| 1
|
19,720
| 26,073,828,414
|
IssuesEvent
|
2022-12-24 07:06:19
|
pyanodon/pybugreports
|
https://api.github.com/repos/pyanodon/pybugreports
|
closed
|
Incompatibility with Modular Chests and Production Scrap 2 - dependency loop
|
mod:pypostprocessing postprocess-fail compatibility
|
### Mod source
Factorio Mod Portal
### Which mod are you having an issue with?
- [ ] pyalienlife
- [ ] pyalternativeenergy
- [ ] pycoalprocessing
- [ ] pyfusionenergy
- [ ] pyhightech
- [ ] pyindustry
- [ ] pypetroleumhandling
- [X] pypostprocessing
- [ ] pyrawores
### Operating system
>=Windows 10
### What kind of issue is this?
- [X] Compatibility
- [ ] Locale (names, descriptions, unknown keys)
- [ ] Graphical
- [ ] Crash
- [ ] Progression
- [ ] Balance
- [X] Pypostprocessing failure
- [ ] Other
### What is the problem?
Error on load with Modular Chests and/or Production Scrap 2.

### Steps to reproduce
1. Install all Pyanodon mods and stlib.
2. Install Modular Chests and/or Production Scrap 2
3. Experience circular dependency.
### Additional context
_No response_
### Log file
[factorio-current.log](https://github.com/pyanodon/pybugreports/files/9791508/factorio-current.log)
|
2.0
|
Incompatibility with Modular Chests and Production Scrap 2 - dependency loop - ### Mod source
Factorio Mod Portal
### Which mod are you having an issue with?
- [ ] pyalienlife
- [ ] pyalternativeenergy
- [ ] pycoalprocessing
- [ ] pyfusionenergy
- [ ] pyhightech
- [ ] pyindustry
- [ ] pypetroleumhandling
- [X] pypostprocessing
- [ ] pyrawores
### Operating system
>=Windows 10
### What kind of issue is this?
- [X] Compatibility
- [ ] Locale (names, descriptions, unknown keys)
- [ ] Graphical
- [ ] Crash
- [ ] Progression
- [ ] Balance
- [X] Pypostprocessing failure
- [ ] Other
### What is the problem?
Error on load with Modular Chests and/or Production Scrap 2.

### Steps to reproduce
1. Install all Pyanodon mods and stlib.
2. Install Modular Chests and/or Production Scrap 2
3. Experience circular dependency.
### Additional context
_No response_
### Log file
[factorio-current.log](https://github.com/pyanodon/pybugreports/files/9791508/factorio-current.log)
|
process
|
incompatibility with modular chests and production scrap dependency loop mod source factorio mod portal which mod are you having an issue with pyalienlife pyalternativeenergy pycoalprocessing pyfusionenergy pyhightech pyindustry pypetroleumhandling pypostprocessing pyrawores operating system windows what kind of issue is this compatibility locale names descriptions unknown keys graphical crash progression balance pypostprocessing failure other what is the problem error on load with modular chests and or production scrap steps to reproduce install all pyanodon mods and stlib install modular chests and or production scrap experience circular dependency additional context no response log file
| 1
|
17,268
| 23,051,022,605
|
IssuesEvent
|
2022-07-24 16:24:55
|
lynnandtonic/nestflix.fun
|
https://api.github.com/repos/lynnandtonic/nestflix.fun
|
closed
|
Add The Gout Lovers from “Mystic Pop-Up Bar” (Screenshots and Poster Added)
|
suggested title in process
|
Please add as much of the following info as you can:
Title: The Gout Lovers
Type (film/tv show): Film - romantic drama (foreign - Korean)
Film or show in which it appears: Mystic Pop-Up Bar
Is the parent film/show streaming anywhere? Yes - Netflix
About when in the parent film/show does it appear? starting midway through episode 1x06
Actual footage of the film/show can be seen (yes/no)? Yes - starting at 33:30 until 35:53
Tagline: Why does love have to hurt the closer we get?
Synopsis (written by me): Two young lovers are forced to face the crossroads of their relationship, as the thing that brought them together is the very thing that’s keeping them apart: their severe gout.
***ETA: Reddit says the cast is: Yoon Park and Ha Si-eun.
|
1.0
|
Add The Gout Lovers from “Mystic Pop-Up Bar” (Screenshots and Poster Added) - Please add as much of the following info as you can:
Title: The Gout Lovers
Type (film/tv show): Film - romantic drama (foreign - Korean)
Film or show in which it appears: Mystic Pop-Up Bar
Is the parent film/show streaming anywhere? Yes - Netflix
About when in the parent film/show does it appear? starting midway through episode 1x06
Actual footage of the film/show can be seen (yes/no)? Yes - starting at 33:30 until 35:53
Tagline: Why does love have to hurt the closer we get?
Synopsis (written by me): Two young lovers are forced to face the crossroads of their relationship, as the thing that brought them together is the very thing that’s keeping them apart: their severe gout.
***ETA: Reddit says the cast is: Yoon Park and Ha Si-eun.
|
process
|
add the gout lovers from “mystic pop up bar” screenshots and poster added please add as much of the following info as you can title the gout lovers type film tv show film romantic drama foreign korean film or show in which it appears mystic pop up bar is the parent film show streaming anywhere yes netflix about when in the parent film show does it appear starting midway through episode actual footage of the film show can be seen yes no yes starting at until tagline why does love have to hurt the closer we get synopsis written by me two young lovers are forced to face the crossroads of their relationship as the thing that brought them together is the very thing that’s keeping them apart their severe gout eta reddit says the cast is yoon park and ha si eun
| 1
|
258,356
| 27,563,921,399
|
IssuesEvent
|
2023-03-08 01:16:08
|
jtimberlake/pacbot
|
https://api.github.com/repos/jtimberlake/pacbot
|
opened
|
CVE-2018-11307 (High) detected in multiple libraries
|
security vulnerability
|
## CVE-2018-11307 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>jackson-databind-2.8.7.jar</b>, <b>jackson-databind-2.9.4.jar</b>, <b>jackson-databind-2.6.7.2.jar</b></p></summary>
<p>
<details><summary><b>jackson-databind-2.8.7.jar</b></p></summary>
<p>General data-binding functionality for Jackson: works on core streaming API</p>
<p>Library home page: <a href="http://github.com/FasterXML/jackson">http://github.com/FasterXML/jackson</a></p>
<p>Path to dependency file: /jobs/recommendation-enricher/pom.xml</p>
<p>Path to vulnerable library: /canner/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.8.7/jackson-databind-2.8.7.jar,/canner/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.8.7/jackson-databind-2.8.7.jar</p>
<p>
Dependency Hierarchy:
- :x: **jackson-databind-2.8.7.jar** (Vulnerable Library)
</details>
<details><summary><b>jackson-databind-2.9.4.jar</b></p></summary>
<p>General data-binding functionality for Jackson: works on core streaming API</p>
<p>Library home page: <a href="http://github.com/FasterXML/jackson">http://github.com/FasterXML/jackson</a></p>
<p>Path to dependency file: /jobs/azure-discovery/pom.xml</p>
<p>Path to vulnerable library: /canner/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.9.4/jackson-databind-2.9.4.jar</p>
<p>
Dependency Hierarchy:
- :x: **jackson-databind-2.9.4.jar** (Vulnerable Library)
</details>
<details><summary><b>jackson-databind-2.6.7.2.jar</b></p></summary>
<p>General data-binding functionality for Jackson: works on core streaming API</p>
<p>Library home page: <a href="http://github.com/FasterXML/jackson">http://github.com/FasterXML/jackson</a></p>
<p>Path to dependency file: /jobs/pacman-cloud-discovery/pom.xml</p>
<p>Path to vulnerable library: /home/wss-scanner/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.6.7.2/jackson-databind-2.6.7.2.jar,/home/wss-scanner/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.6.7.2/jackson-databind-2.6.7.2.jar</p>
<p>
Dependency Hierarchy:
- aws-java-sdk-efs-1.11.636.jar (Root Library)
- aws-java-sdk-core-1.11.636.jar
- :x: **jackson-databind-2.6.7.2.jar** (Vulnerable Library)
</details>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
An issue was discovered in FasterXML jackson-databind 2.0.0 through 2.9.5. Use of Jackson default typing along with a gadget class from iBatis allows exfiltration of content. Fixed in 2.7.9.4, 2.8.11.2, and 2.9.6.
<p>Publish Date: 2019-07-09
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2018-11307>CVE-2018-11307</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>9.8</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Release Date: 2019-07-09</p>
<p>Fix Resolution (com.fasterxml.jackson.core:jackson-databind): 2.6.7.4</p>
<p>Direct dependency fix Resolution (com.amazonaws:aws-java-sdk-efs): 1.11.903</p>
</p>
</details>
<p></p>
***
:rescue_worker_helmet: Automatic Remediation is available for this issue
|
True
|
CVE-2018-11307 (High) detected in multiple libraries - ## CVE-2018-11307 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>jackson-databind-2.8.7.jar</b>, <b>jackson-databind-2.9.4.jar</b>, <b>jackson-databind-2.6.7.2.jar</b></p></summary>
<p>
<details><summary><b>jackson-databind-2.8.7.jar</b></p></summary>
<p>General data-binding functionality for Jackson: works on core streaming API</p>
<p>Library home page: <a href="http://github.com/FasterXML/jackson">http://github.com/FasterXML/jackson</a></p>
<p>Path to dependency file: /jobs/recommendation-enricher/pom.xml</p>
<p>Path to vulnerable library: /canner/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.8.7/jackson-databind-2.8.7.jar,/canner/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.8.7/jackson-databind-2.8.7.jar</p>
<p>
Dependency Hierarchy:
- :x: **jackson-databind-2.8.7.jar** (Vulnerable Library)
</details>
<details><summary><b>jackson-databind-2.9.4.jar</b></p></summary>
<p>General data-binding functionality for Jackson: works on core streaming API</p>
<p>Library home page: <a href="http://github.com/FasterXML/jackson">http://github.com/FasterXML/jackson</a></p>
<p>Path to dependency file: /jobs/azure-discovery/pom.xml</p>
<p>Path to vulnerable library: /canner/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.9.4/jackson-databind-2.9.4.jar</p>
<p>
Dependency Hierarchy:
- :x: **jackson-databind-2.9.4.jar** (Vulnerable Library)
</details>
<details><summary><b>jackson-databind-2.6.7.2.jar</b></p></summary>
<p>General data-binding functionality for Jackson: works on core streaming API</p>
<p>Library home page: <a href="http://github.com/FasterXML/jackson">http://github.com/FasterXML/jackson</a></p>
<p>Path to dependency file: /jobs/pacman-cloud-discovery/pom.xml</p>
<p>Path to vulnerable library: /home/wss-scanner/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.6.7.2/jackson-databind-2.6.7.2.jar,/home/wss-scanner/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.6.7.2/jackson-databind-2.6.7.2.jar</p>
<p>
Dependency Hierarchy:
- aws-java-sdk-efs-1.11.636.jar (Root Library)
- aws-java-sdk-core-1.11.636.jar
- :x: **jackson-databind-2.6.7.2.jar** (Vulnerable Library)
</details>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
An issue was discovered in FasterXML jackson-databind 2.0.0 through 2.9.5. Use of Jackson default typing along with a gadget class from iBatis allows exfiltration of content. Fixed in 2.7.9.4, 2.8.11.2, and 2.9.6.
<p>Publish Date: 2019-07-09
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2018-11307>CVE-2018-11307</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>9.8</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Release Date: 2019-07-09</p>
<p>Fix Resolution (com.fasterxml.jackson.core:jackson-databind): 2.6.7.4</p>
<p>Direct dependency fix Resolution (com.amazonaws:aws-java-sdk-efs): 1.11.903</p>
</p>
</details>
<p></p>
***
:rescue_worker_helmet: Automatic Remediation is available for this issue
|
non_process
|
cve high detected in multiple libraries cve high severity vulnerability vulnerable libraries jackson databind jar jackson databind jar jackson databind jar jackson databind jar general data binding functionality for jackson works on core streaming api library home page a href path to dependency file jobs recommendation enricher pom xml path to vulnerable library canner repository com fasterxml jackson core jackson databind jackson databind jar canner repository com fasterxml jackson core jackson databind jackson databind jar dependency hierarchy x jackson databind jar vulnerable library jackson databind jar general data binding functionality for jackson works on core streaming api library home page a href path to dependency file jobs azure discovery pom xml path to vulnerable library canner repository com fasterxml jackson core jackson databind jackson databind jar dependency hierarchy x jackson databind jar vulnerable library jackson databind jar general data binding functionality for jackson works on core streaming api library home page a href path to dependency file jobs pacman cloud discovery pom xml path to vulnerable library home wss scanner repository com fasterxml jackson core jackson databind jackson databind jar home wss scanner repository com fasterxml jackson core jackson databind jackson databind jar dependency hierarchy aws java sdk efs jar root library aws java sdk core jar x jackson databind jar vulnerable library found in base branch master vulnerability details an issue was discovered in fasterxml jackson databind through use of jackson default typing along with a gadget class from ibatis allows exfiltration of content fixed in and publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version release date fix resolution com fasterxml jackson core jackson databind direct dependency fix resolution com amazonaws aws java sdk efs rescue worker helmet automatic remediation is available for this issue
| 0
|
12,813
| 15,189,087,430
|
IssuesEvent
|
2021-02-15 15:56:42
|
Geonovum/disgeo-arch
|
https://api.github.com/repos/Geonovum/disgeo-arch
|
closed
|
Scope also covers processes
|
In Behandeling In behandeling - voorstel processen e.d. Processen Functies Componenten
|
Scope also covers processes
The title “Architectuurbeschrijving Voorzieningen” does not fully cover the content. Besides data management / information management of basic geo data, the document also names functions and processes. The document thereby touches on the processes (from the Organisation layer).
|
2.0
|
Scope also covers processes - Scope also covers processes
The title “Architectuurbeschrijving Voorzieningen” does not fully cover the content. Besides data management / information management of basic geo data, the document also names functions and processes. The document thereby touches on the processes (from the Organisation layer).
|
process
|
scope also covers processes scope also covers processes the title “architectuurbeschrijving voorzieningen” does not fully cover the content besides data management information management of basic geo data the document also names functions and processes the document thereby touches on the processes from the organisation layer
| 1
|
53,473
| 22,821,023,327
|
IssuesEvent
|
2022-07-12 02:19:28
|
Azure/AppConfiguration
|
https://api.github.com/repos/Azure/AppConfiguration
|
closed
|
Locking a feature flag / key leads to invalid state
|
bug service
|
To reproduce:
- Go to Feature manager
- Right click a feature / key -> Lock
- Everything seems fine (Success message appears)
- Refresh page
- You see an error "Some of your feature flags are not valid. Please navigate to Advanced Edit for more details."
The name + id of the flag are missing in the list.
If you open it in Advanced edit you see random (hashed?) characters like
"BGNCL2sUEmbC8Uxx [...] rtB66"
|
1.0
|
Locking a feature flag / key leads to invalid state - To reproduce:
- Go to Feature manager
- Right click a feature / key -> Lock
- Everything seems fine (Success message appears)
- Refresh page
- You see an error "Some of your feature flags are not valid. Please navigate to Advanced Edit for more details."
The name + id of the flag are missing in the list.
If you open it in Advanced edit you see random (hashed?) characters like
"BGNCL2sUEmbC8Uxx [...] rtB66"
|
non_process
|
locking a feature flag key leads to invalid state to reproduce go to feature manager right click a feature key lock everything seems fine success message appears refresh page you an error some of your feature flags are not valid please navigate to advanced edit for more details the name id of the flag are missing in the list if you open it in advanced edit you see random hashed characters like
| 0
|
20,004
| 26,479,242,605
|
IssuesEvent
|
2023-01-17 13:33:23
|
firebase/firebase-cpp-sdk
|
https://api.github.com/repos/firebase/firebase-cpp-sdk
|
closed
|
[C++] Nightly Integration Testing Report for Firestore
|
type: process nightly-testing
|
<hidden value="integration-test-status-comment"></hidden>
### ✅ [build against repo] Integration test succeeded!
Requested by @sunmou99 on commit 45f8e3268c2adbabca165ed0a937835f18930d2f
Last updated: Tue Jan 17 03:48 PST 2023
**[View integration test log & download artifacts](https://github.com/firebase/firebase-cpp-sdk/actions/runs/3938105139)**
<hidden value="integration-test-status-comment"></hidden>
***
### ❌ [build against SDK] Integration test FAILED
Requested by @firebase-workflow-trigger[bot] on commit 45f8e3268c2adbabca165ed0a937835f18930d2f
Last updated: Mon Jan 16 05:47 PST 2023
**[View integration test log & download artifacts](https://github.com/firebase/firebase-cpp-sdk/actions/runs/3930317548)**
| Failures | Configs |
|----------|---------|
| firestore | [BUILD] [ERROR] [Android] [1/3 os: windows]<br/> |
Add flaky tests to **[go/fpl-cpp-flake-tracker](http://go/fpl-cpp-flake-tracker)**
<hidden value="integration-test-status-comment"></hidden>
***
### ✅ [build against tip] Integration test succeeded!
Requested by @sunmou99 on commit 45f8e3268c2adbabca165ed0a937835f18930d2f
Last updated: Tue Jan 17 03:45 PST 2023
**[View integration test log & download artifacts](https://github.com/firebase/firebase-cpp-sdk/actions/runs/3938563311)**
|
1.0
|
[C++] Nightly Integration Testing Report for Firestore -
<hidden value="integration-test-status-comment"></hidden>
### ✅ [build against repo] Integration test succeeded!
Requested by @sunmou99 on commit 45f8e3268c2adbabca165ed0a937835f18930d2f
Last updated: Tue Jan 17 03:48 PST 2023
**[View integration test log & download artifacts](https://github.com/firebase/firebase-cpp-sdk/actions/runs/3938105139)**
<hidden value="integration-test-status-comment"></hidden>
***
### ❌ [build against SDK] Integration test FAILED
Requested by @firebase-workflow-trigger[bot] on commit 45f8e3268c2adbabca165ed0a937835f18930d2f
Last updated: Mon Jan 16 05:47 PST 2023
**[View integration test log & download artifacts](https://github.com/firebase/firebase-cpp-sdk/actions/runs/3930317548)**
| Failures | Configs |
|----------|---------|
| firestore | [BUILD] [ERROR] [Android] [1/3 os: windows]<br/> |
Add flaky tests to **[go/fpl-cpp-flake-tracker](http://go/fpl-cpp-flake-tracker)**
<hidden value="integration-test-status-comment"></hidden>
***
### ✅ [build against tip] Integration test succeeded!
Requested by @sunmou99 on commit 45f8e3268c2adbabca165ed0a937835f18930d2f
Last updated: Tue Jan 17 03:45 PST 2023
**[View integration test log & download artifacts](https://github.com/firebase/firebase-cpp-sdk/actions/runs/3938563311)**
|
process
|
nightly integration testing report for firestore ✅ nbsp integration test succeeded requested by on commit last updated tue jan pst ❌ nbsp integration test failed requested by firebase workflow trigger on commit last updated mon jan pst failures configs firestore add flaky tests to ✅ nbsp integration test succeeded requested by on commit last updated tue jan pst
| 1
|
14,745
| 18,015,671,157
|
IssuesEvent
|
2021-09-16 13:41:13
|
spring-projects/spring-hateoas
|
https://api.github.com/repos/spring-projects/spring-hateoas
|
closed
|
Duplicate key error when 2 record attributes have names ending with 'is'
|
type: bug process: waiting for feedback in: core
|
This bug is similar to #1402.
Let be a request payload:
```java
record Payload(@JsonProperty("fooBasis") String fooBasis, @JsonProperty("barBasis") String barBasis) {
@JsonCreator
Payload{}
}
```
Let be a controller:
```java
@Controller
class MyController {
@GetMapping("/{id}")
ResponseEntity<?> getById(@PathVariable("id") String id){
return ResponseEntity.status(HttpStatus.NOT_IMPLEMENTED).build();
}
@PutMapping("/{id}")
ResponseEntity<?> put(@PathVariable("id") String id, @RequestBody Payload payload){
return ResponseEntity.status(HttpStatus.NOT_IMPLEMENTED).build();
}
}
```
When constructing the affordance to the PUT method:
```java
linkTo(methodOn(MyController.class).getById("foo"))
.withSelfRel()
.andAffordance(
afford(methodOn(PhaseController.class).put("foo", null)));
```
It will fail with:
```java
Duplicate key (attempted merging values org.springframework.hateoas.mediatype.PropertyUtils$Jsr303AwarePropertyMetadata@6d532813 and org.springframework.hateoas.mediatype.PropertyUtils$Jsr303AwarePropertyMetadata@4b51cb8f)
java.lang.IllegalStateException: Duplicate key (attempted merging values org.springframework.hateoas.mediatype.PropertyUtils$Jsr303AwarePropertyMetadata@6d532813 and org.springframework.hateoas.mediatype.PropertyUtils$Jsr303AwarePropertyMetadata@4b51cb8f)
at java.base/java.util.stream.Collectors.duplicateKeyException(Collectors.java:135)
at java.base/java.util.stream.Collectors.lambda$uniqKeysMapAccumulator$1(Collectors.java:182)
at java.base/java.util.stream.ReduceOps$3ReducingSink.accept(ReduceOps.java:169)
at java.base/java.util.stream.ReferencePipeline$3$1.accept(ReferencePipeline.java:197)
at java.base/java.util.stream.ReferencePipeline$3$1.accept(ReferencePipeline.java:197)
at java.base/java.util.stream.ReferencePipeline$2$1.accept(ReferencePipeline.java:179)
at java.base/java.util.stream.ReferencePipeline$2$1.accept(ReferencePipeline.java:179)
at java.base/java.util.stream.ReferencePipeline$2$1.accept(ReferencePipeline.java:179)
at java.base/java.util.stream.ReferencePipeline$2$1.accept(ReferencePipeline.java:179)
at java.base/java.util.Spliterators$ArraySpliterator.forEachRemaining(Spliterators.java:992)
at java.base/java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:509)
at java.base/java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:499)
at java.base/java.util.stream.ReduceOps$ReduceOp.evaluateSequential(ReduceOps.java:921)
at java.base/java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:234)
at java.base/java.util.stream.ReferencePipeline.collect(ReferencePipeline.java:682)
at org.springframework.hateoas.mediatype.TypeBasedPayloadMetadata.<init>(TypeBasedPayloadMetadata.java:46)
at org.springframework.hateoas.mediatype.PropertyUtils.lambda$getExposedProperties$2(PropertyUtils.java:140)
at java.base/java.util.concurrent.ConcurrentMap.computeIfAbsent(ConcurrentMap.java:330)
at org.springframework.hateoas.mediatype.PropertyUtils.getExposedProperties(PropertyUtils.java:133)
at org.springframework.hateoas.mediatype.Affordances$AffordanceBuilder.withInput(Affordances.java:171)
at org.springframework.hateoas.server.core.SpringAffordanceBuilder.lambda$create$1(SpringAffordanceBuilder.java:104)
at java.base/java.util.stream.ReferencePipeline$7$1.accept(ReferencePipeline.java:273)
at java.base/java.util.ArrayList$ArrayListSpliterator.forEachRemaining(ArrayList.java:1625)
at java.base/java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:509)
at java.base/java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:499)
at java.base/java.util.stream.ReduceOps$ReduceOp.evaluateSequential(ReduceOps.java:921)
at java.base/java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:234)
at java.base/java.util.stream.ReferencePipeline.collect(ReferencePipeline.java:682)
at org.springframework.hateoas.server.core.SpringAffordanceBuilder.lambda$create$2(SpringAffordanceBuilder.java:111)
at org.springframework.hateoas.server.core.SpringAffordanceBuilder.getAffordances(SpringAffordanceBuilder.java:69)
at org.springframework.hateoas.server.core.WebHandler.lambda$linkTo$0(WebHandler.java:166)
at org.springframework.hateoas.server.core.WebHandler.linkTo(WebHandler.java:85)
at org.springframework.hateoas.server.mvc.WebMvcLinkBuilderFactory.linkTo(WebMvcLinkBuilderFactory.java:127)
at org.springframework.hateoas.server.mvc.WebMvcLinkBuilder.linkTo(WebMvcLinkBuilder.java:167)
at org.springframework.hateoas.server.mvc.WebMvcLinkBuilder.afford(WebMvcLinkBuilder.java:186)
```
This is caused by `org.springframework.core.convert.Property#resolveName`:
```java
private String resolveName() {
if (this.readMethod != null) { // true
int index = this.readMethod.getName().indexOf("get"); // -1
if (index != -1) { // false
index += 3;
}
else {
index = this.readMethod.getName().indexOf("is"); // 6
if (index != -1) { // true
index += 2; // 8
}
else {
// Record-style plain accessor method, e.g. name()
index = 0;
}
}
return StringUtils.uncapitalize(this.readMethod.getName().substring(index)); // ""
}
else if (this.writeMethod != null) {
int index = this.writeMethod.getName().indexOf("set");
if (index == -1) {
throw new IllegalArgumentException("Not a setter method");
}
index += 3;
return StringUtils.uncapitalize(this.writeMethod.getName().substring(index));
}
else {
throw new IllegalStateException("Property is neither readable nor writeable");
}
}
```
When the property name ends with `get` or `is`, the resolved name is an empty String. This leads to two properties both named "", which causes the duplicate key exception.
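To see the collision concretely, here is a small Python re-enactment of the same name-resolution logic (an approximation of the Java method above, not Spring code):
```python
# Approximate re-implementation of Property#resolveName for record-style accessors.
def resolve_name(method_name: str) -> str:
    if (i := method_name.find("get")) != -1:
        i += 3
    elif (i := method_name.find("is")) != -1:
        i += 2
    else:
        i = 0                                # record-style plain accessor, e.g. name()
    name = method_name[i:]
    return name[:1].lower() + name[1:]       # StringUtils.uncapitalize equivalent

print(repr(resolve_name("fooBasis")))        # '' -- "is" is found at index 6
print(repr(resolve_name("barBasis")))        # '' -- second empty key -> duplicate key
```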
I was going for a PR in https://github.com/spring-projects/spring-framework/, but was blocked by https://github.com/spring-projects/spring-framework/issues/27420. Anyway, I guess this will lead to update `spring-framework` dependency in `spring-hateoas`.
|
1.0
|
Duplicate key error when 2 record attributes have names ending with 'is' - This bug is similar to #1402.
Let be a request payload:
```java
record Payload(@JsonProperty("fooBasis") String fooBasis, @JsonProperty("barBasis") String barBasis) {
@JsonCreator
Payload{}
}
```
Let be a controller:
```java
@Controller
class MyController {
@GetMapping("/{id}")
ResponseEntity<?> getById(@PathVariable("id") String id){
return ResponseEntity.status(HttpStatus.NOT_IMPLEMENTED).build();
}
@PutMapping("/{id}")
ResponseEntity<?> put(@PathVariable("id") String id, @RequestBody Payload payload){
return ResponseEntity.status(HttpStatus.NOT_IMPLEMENTED).build();
}
}
```
When constructing the affordance to the PUT method:
```java
linkTo(methodOn(MyController.class).getById("foo"))
.withSelfRel()
.andAffordance(
afford(methodOn(PhaseController.class).put("foo", null)));
```
It will fail with:
```java
Duplicate key (attempted merging values org.springframework.hateoas.mediatype.PropertyUtils$Jsr303AwarePropertyMetadata@6d532813 and org.springframework.hateoas.mediatype.PropertyUtils$Jsr303AwarePropertyMetadata@4b51cb8f)
java.lang.IllegalStateException: Duplicate key (attempted merging values org.springframework.hateoas.mediatype.PropertyUtils$Jsr303AwarePropertyMetadata@6d532813 and org.springframework.hateoas.mediatype.PropertyUtils$Jsr303AwarePropertyMetadata@4b51cb8f)
at java.base/java.util.stream.Collectors.duplicateKeyException(Collectors.java:135)
at java.base/java.util.stream.Collectors.lambda$uniqKeysMapAccumulator$1(Collectors.java:182)
at java.base/java.util.stream.ReduceOps$3ReducingSink.accept(ReduceOps.java:169)
at java.base/java.util.stream.ReferencePipeline$3$1.accept(ReferencePipeline.java:197)
at java.base/java.util.stream.ReferencePipeline$3$1.accept(ReferencePipeline.java:197)
at java.base/java.util.stream.ReferencePipeline$2$1.accept(ReferencePipeline.java:179)
at java.base/java.util.stream.ReferencePipeline$2$1.accept(ReferencePipeline.java:179)
at java.base/java.util.stream.ReferencePipeline$2$1.accept(ReferencePipeline.java:179)
at java.base/java.util.stream.ReferencePipeline$2$1.accept(ReferencePipeline.java:179)
at java.base/java.util.Spliterators$ArraySpliterator.forEachRemaining(Spliterators.java:992)
at java.base/java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:509)
at java.base/java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:499)
at java.base/java.util.stream.ReduceOps$ReduceOp.evaluateSequential(ReduceOps.java:921)
at java.base/java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:234)
at java.base/java.util.stream.ReferencePipeline.collect(ReferencePipeline.java:682)
at org.springframework.hateoas.mediatype.TypeBasedPayloadMetadata.<init>(TypeBasedPayloadMetadata.java:46)
at org.springframework.hateoas.mediatype.PropertyUtils.lambda$getExposedProperties$2(PropertyUtils.java:140)
at java.base/java.util.concurrent.ConcurrentMap.computeIfAbsent(ConcurrentMap.java:330)
at org.springframework.hateoas.mediatype.PropertyUtils.getExposedProperties(PropertyUtils.java:133)
at org.springframework.hateoas.mediatype.Affordances$AffordanceBuilder.withInput(Affordances.java:171)
at org.springframework.hateoas.server.core.SpringAffordanceBuilder.lambda$create$1(SpringAffordanceBuilder.java:104)
at java.base/java.util.stream.ReferencePipeline$7$1.accept(ReferencePipeline.java:273)
at java.base/java.util.ArrayList$ArrayListSpliterator.forEachRemaining(ArrayList.java:1625)
at java.base/java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:509)
at java.base/java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:499)
at java.base/java.util.stream.ReduceOps$ReduceOp.evaluateSequential(ReduceOps.java:921)
at java.base/java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:234)
at java.base/java.util.stream.ReferencePipeline.collect(ReferencePipeline.java:682)
at org.springframework.hateoas.server.core.SpringAffordanceBuilder.lambda$create$2(SpringAffordanceBuilder.java:111)
at org.springframework.hateoas.server.core.SpringAffordanceBuilder.getAffordances(SpringAffordanceBuilder.java:69)
at org.springframework.hateoas.server.core.WebHandler.lambda$linkTo$0(WebHandler.java:166)
at org.springframework.hateoas.server.core.WebHandler.linkTo(WebHandler.java:85)
at org.springframework.hateoas.server.mvc.WebMvcLinkBuilderFactory.linkTo(WebMvcLinkBuilderFactory.java:127)
at org.springframework.hateoas.server.mvc.WebMvcLinkBuilder.linkTo(WebMvcLinkBuilder.java:167)
at org.springframework.hateoas.server.mvc.WebMvcLinkBuilder.afford(WebMvcLinkBuilder.java:186)
```
This is caused by `org.springframework.core.convert.Property#resolveName`:
```java
private String resolveName() {
if (this.readMethod != null) { // true
int index = this.readMethod.getName().indexOf("get"); // -1
if (index != -1) { // false
index += 3;
}
else {
index = this.readMethod.getName().indexOf("is"); // 6
if (index != -1) { // true
index += 2; // 8
}
else {
// Record-style plain accessor method, e.g. name()
index = 0;
}
}
return StringUtils.uncapitalize(this.readMethod.getName().substring(index)); // ""
}
else if (this.writeMethod != null) {
int index = this.writeMethod.getName().indexOf("set");
if (index == -1) {
throw new IllegalArgumentException("Not a setter method");
}
index += 3;
return StringUtils.uncapitalize(this.writeMethod.getName().substring(index));
}
else {
throw new IllegalStateException("Property is neither readable nor writeable");
}
}
```
When the property name ends with `get` or `is`, the resolved name is an empty String. This leads to two properties both named "", which causes the duplicate key exception.
I was going for a PR in https://github.com/spring-projects/spring-framework/, but was blocked by https://github.com/spring-projects/spring-framework/issues/27420. Anyway, I guess this will lead to update `spring-framework` dependency in `spring-hateoas`.
|
process
|
duplicate key error when record attribute have names ending with is this bug is similar to let be a request payload java record payload jsonproperty foobasis string foobasis jsonproperty barbasis string barbasis jsoncreator payload let be a controller java controller class mycontroller getmapping id responseentity getbyid pathvariable id string id return responseentity status httpstatus not implemented build putmapping id responseentity put pathvariable id string id requestbody payload payload return responseentity status httpstatus not implemented build when constructing the affordance to the put method java linkto methodon mycontroller class getbyid foo withselfrel andaffordance afford methodon phasecontroller class put foo null it will fail with java duplicate key attempted merging values org springframework hateoas mediatype propertyutils and org springframework hateoas mediatype propertyutils java lang illegalstateexception duplicate key attempted merging values org springframework hateoas mediatype propertyutils and org springframework hateoas mediatype propertyutils at java base java util stream collectors duplicatekeyexception collectors java at java base java util stream collectors lambda uniqkeysmapaccumulator collectors java at java base java util stream reduceops accept reduceops java at java base java util stream referencepipeline accept referencepipeline java at java base java util stream referencepipeline accept referencepipeline java at java base java util stream referencepipeline accept referencepipeline java at java base java util stream referencepipeline accept referencepipeline java at java base java util stream referencepipeline accept referencepipeline java at java base java util stream referencepipeline accept referencepipeline java at java base java util spliterators arrayspliterator foreachremaining spliterators java at java base java util stream abstractpipeline copyinto abstractpipeline java at java base java util stream abstractpipeline wrapandcopyinto abstractpipeline java at java base java util stream reduceops reduceop evaluatesequential reduceops java at java base java util stream abstractpipeline evaluate abstractpipeline java at java base java util stream referencepipeline collect referencepipeline java at org springframework hateoas mediatype typebasedpayloadmetadata typebasedpayloadmetadata java at org springframework hateoas mediatype propertyutils lambda getexposedproperties propertyutils java at java base java util concurrent concurrentmap computeifabsent concurrentmap java at org springframework hateoas mediatype propertyutils getexposedproperties propertyutils java at org springframework hateoas mediatype affordances affordancebuilder withinput affordances java at org springframework hateoas server core springaffordancebuilder lambda create springaffordancebuilder java at java base java util stream referencepipeline accept referencepipeline java at java base java util arraylist arraylistspliterator foreachremaining arraylist java at java base java util stream abstractpipeline copyinto abstractpipeline java at java base java util stream abstractpipeline wrapandcopyinto abstractpipeline java at java base java util stream reduceops reduceop evaluatesequential reduceops java at java base java util stream abstractpipeline evaluate abstractpipeline java at java base java util stream referencepipeline collect referencepipeline java at org springframework hateoas server core springaffordancebuilder lambda create springaffordancebuilder java at org 
springframework hateoas server core springaffordancebuilder getaffordances springaffordancebuilder java at org springframework hateoas server core webhandler lambda linkto webhandler java at org springframework hateoas server core webhandler linkto webhandler java at org springframework hateoas server mvc webmvclinkbuilderfactory linkto webmvclinkbuilderfactory java at org springframework hateoas server mvc webmvclinkbuilder linkto webmvclinkbuilder java at org springframework hateoas server mvc webmvclinkbuilder afford webmvclinkbuilder java this is caused by org springframework core convert property resolvename java private string resolvename if this readmethod null true int index this readmethod getname indexof get if index false index else index this readmethod getname indexof is if index true index else record style plain accessor method e g name index return stringutils uncapitalize this readmethod getname substring index else if this writemethod null int index this writemethod getname indexof set if index throw new illegalargumentexception not a setter method index return stringutils uncapitalize this writemethod getname substring index else throw new illegalstateexception property is neither readable nor writeable when the property name ends with get or is the resolved name is an empty string this leads to properties named leading to the duplicate key exception i was going for a pr in but was blocked by anyway i guess this will lead to update spring framework dependency in spring hateoas
| 1
|