The preview has the following columns and summary statistics:

| Column | Dtype | Summary |
|---|---|---|
| Unnamed: 0 | int64 | 0 to 832k |
| id | float64 | 2.49B to 32.1B |
| type | string | 1 distinct value (IssuesEvent) |
| created_at | string | lengths 19 to 19 |
| repo | string | lengths 7 to 112 |
| repo_url | string | lengths 36 to 141 |
| action | string | 3 distinct values |
| title | string | lengths 1 to 744 |
| labels | string | lengths 4 to 574 |
| body | string | lengths 9 to 211k |
| index | string | 10 distinct values |
| text_combine | string | lengths 96 to 211k |
| label | string | 2 distinct values (process, non_process) |
| text | string | lengths 96 to 188k |
| binary_label | int64 | 0 or 1 |

Sample rows follow, one field per line.
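
Before the sample rows: a minimal sketch of how data with this schema might be loaded and tallied. It assumes the data is stored as CSV and read with the `csv` crate; both the file name and the serialization are assumptions, not something the preview states.

```rust
// Minimal sketch: count rows per binary_label in a CSV with the 15 columns
// above. "issues.csv" is a hypothetical file name.
use std::error::Error;

fn main() -> Result<(), Box<dyn Error>> {
    let mut reader = csv::Reader::from_path("issues.csv")?;
    let mut counts = [0usize; 2]; // [non_process, process]
    for row in reader.records() {
        let row = row?;
        // binary_label is the last of the 15 columns and is always 0 or 1
        // in the preview; a malformed row would panic here, which is
        // acceptable for a sketch.
        let label: usize = row.get(14).unwrap_or("0").trim().parse()?;
        counts[label] += 1;
    }
    println!("non_process: {}  process: {}", counts[0], counts[1]);
    Ok(())
}
```
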

---

**Unnamed: 0:** 8,111
**id:** 11,301,007,350
**type:** IssuesEvent
**created_at:** 2020-01-17 14:46:44
**repo:** geneontology/go-ontology
**repo_url:** https://api.github.com/repos/geneontology/go-ontology
**action:** closed
**title:** entry into host through natural portals
**labels:** multi-species process
**body:**
Penetration by a symbiont into a host organism via naturally occurring openings in the host organism. The host is defined as the larger of the organisms involved in a symbiotic interaction.
Secondary IDs
I can't immediately think of any genes in a symbiont that would be annotated to this. Does anyone have any examples?
If not I suggest obsoletion.
There is a single annotation from
The intestinal resident Candida glabrata opportunistically infects humans. However few genetic factors for adaptation in the intestine are identified in this fungus. Here we describe the C. glabrata CYB2 gene encoding lactate dehydrogenase as an adaptation factor for survival in the intestine. CYB2 was identified as a virulence factor by a silkworm infection study. To determine the function of CYB2, we analysed in vitro phenotypes of the mutant Δcyb2. The Δcyb2 mutant grew well in glucose medium under aerobic and anaerobic conditions, was not supersensitive to nitric oxide which has fungicidal-effect in phagocytes, and had normal levels of general virulence factors protease, lipase and adherence activities. A previous report suggested that Cyb2p is responsible for lactate assimilation. Additionally, it was speculated that lactate assimilation was required for Candida virulence because Candida must synthesize glucose via gluconeogenesis under glucose-limited conditions such as in the host. Indeed, the Δcyb2 mutant could not grow on lactate medium in which lactate is the sole carbon source in the absence of glucose, indicating that Cyb2p plays a role in lactate assimilation. We hypothesized that Cyb2p-mediated lactate assimilation is necessary for proliferation in the intestinal tract, as the intestine is rich in lactate produced by bacteria flora, but not glucose. The Δcyb2 mutant showed 100-fold decreased adaptation and few cells of Saccharomyces cerevisiae can adapt in mouse ceca. Interestingly, C. glabrata could assimilate lactate under hypoxic conditions, dependent on CYB2, but not yeast S. cerevisiae. Because accessible oxygen is limited in the intestine, the ability for lactate assimilation in hypoxic conditions may provide an advantage for a pathogenic yeast. From those results, we conclude that Cyb2p-mediated lactate assimilation is an intestinal adaptation factor of C. glabrata.
Supporting Data
I don't get this annotation from this paper?
**index:** 1.0
**text_combine:**
entry into host through natural portals - Penetration by a symbiont into a host organism via naturally occurring openings in the host organism. The host is defined as the larger of the organisms involved in a symbiotic interaction.
Secondary IDs
I can't immediately think of any genes in a symbiont that would be annotated to this. Does anyone have any examples?
If not I suggest obsoletion.
There is a single annotation from
The intestinal resident Candida glabrata opportunistically infects humans. However few genetic factors for adaptation in the intestine are identified in this fungus. Here we describe the C. glabrata CYB2 gene encoding lactate dehydrogenase as an adaptation factor for survival in the intestine. CYB2 was identified as a virulence factor by a silkworm infection study. To determine the function of CYB2, we analysed in vitro phenotypes of the mutant Δcyb2. The Δcyb2 mutant grew well in glucose medium under aerobic and anaerobic conditions, was not supersensitive to nitric oxide which has fungicidal-effect in phagocytes, and had normal levels of general virulence factors protease, lipase and adherence activities. A previous report suggested that Cyb2p is responsible for lactate assimilation. Additionally, it was speculated that lactate assimilation was required for Candida virulence because Candida must synthesize glucose via gluconeogenesis under glucose-limited conditions such as in the host. Indeed, the Δcyb2 mutant could not grow on lactate medium in which lactate is the sole carbon source in the absence of glucose, indicating that Cyb2p plays a role in lactate assimilation. We hypothesized that Cyb2p-mediated lactate assimilation is necessary for proliferation in the intestinal tract, as the intestine is rich in lactate produced by bacteria flora, but not glucose. The Δcyb2 mutant showed 100-fold decreased adaptation and few cells of Saccharomyces cerevisiae can adapt in mouse ceca. Interestingly, C. glabrata could assimilate lactate under hypoxic conditions, dependent on CYB2, but not yeast S. cerevisiae. Because accessible oxygen is limited in the intestine, the ability for lactate assimilation in hypoxic conditions may provide an advantage for a pathogenic yeast. From those results, we conclude that Cyb2p-mediated lactate assimilation is an intestinal adaptation factor of C. glabrata.
Supporting Data
I don't get this annotation from this paper?
**label:** process
**text:**
entry into host through natural portals penetration by a symbiont into a host organism via naturally occurring openings in the host organism the host is defined as the larger of the organisms involved in a symbiotic interaction secondary ids i can t immediately think of any genes in a symbiont that would be annotated to this does anyone have any examples if not i suggest obsoletion there is a single annotation from the intestinal resident candida glabrata opportunistically infects humans however few genetic factors for adaptation in the intestine are identified in this fungus here we describe the c glabrata gene encoding lactate dehydrogenase as an adaptation factor for survival in the intestine was identified as a virulence factor by a silkworm infection study to determine the function of we analysed in vitro phenotypes of the mutant the mutant grew well in glucose medium under aerobic and anaerobic conditions was not supersensitive to nitric oxide which has fungicidal effect in phagocytes and had normal levels of general virulence factors protease lipase and adherence activities a previous report suggested that is responsible for lactate assimilation additionally it was speculated that lactate assimilation was required for candida virulence because candida must synthesize glucose via gluconeogenesis under glucose limited conditions such as in the host indeed the mutant could not grow on lactate medium in which lactate is the sole carbon source in the absence of glucose indicating that plays a role in lactate assimilation we hypothesized that mediated lactate assimilation is necessary for proliferation in the intestinal tract as the intestine is rich in lactate produced by bacteria flora but not glucose the mutant showed fold decreased adaptation and few cells of saccharomyces cerevisiae can adapt in mouse ceca interestingly c glabrata could assimilate lactate under hypoxic conditions dependent on but not yeast s cerevisiae because accessible oxygen is limited in the intestine the ability for lactate assimilation in hypoxic conditions may provide an advantage for a pathogenic yeast from those results we conclude that mediated lactate assimilation is an intestinal adaptation factor of c glabrata supporting data i don t get this annotation from this paper
**binary_label:** 1
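
Comparing this row's `text_combine` and `text` fields suggests how the cleaned `text` column was derived: everything is lowercased, tokens are split on non-alphanumeric characters, and any token containing a digit is dropped (which is why `CYB2`, `Δcyb2`, and `100` vanish above). The actual pipeline is not documented; the following is a plausible reconstruction only.

```rust
// Plausible reconstruction (an assumption, not the dataset's documented
// pipeline) of the `text_combine` -> `text` cleaning step.
fn clean(text_combine: &str) -> String {
    text_combine
        // split on every character that is not an ASCII letter or digit
        .split(|c: char| !c.is_ascii_alphanumeric())
        // keep only purely alphabetic tokens ("cyb2" and "100" are dropped)
        .filter(|tok| !tok.is_empty() && tok.chars().all(|c| c.is_ascii_alphabetic()))
        .map(|tok| tok.to_ascii_lowercase())
        .collect::<Vec<_>>()
        .join(" ")
}

fn main() {
    // Matches the corresponding fragment of the row's `text` field.
    assert_eq!(
        clean("The Δcyb2 mutant showed 100-fold decreased adaptation"),
        "the mutant showed fold decreased adaptation"
    );
}
```
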

---

**Unnamed: 0:** 708,110
**id:** 24,330,708,988
**type:** IssuesEvent
**created_at:** 2022-09-30 19:06:57
**repo:** o3de/o3de
**repo_url:** https://api.github.com/repos/o3de/o3de
**action:** closed
**title:** Project Manager: Template download is incorrectly using the default project folder
**labels:** kind/bug needs-triage sig/content priority/critical feature/project-manager
**body:**
Downloading a template sets the download location to the O3DE project folder. Project Manager provides no way to retrieve the folder a template was installed to, so there is no easy way to find it on disk, especially since it ends up in an unexpected location.
**index:** 1.0
**text_combine:**
Project Manager: Template download is incorrectly using the default project folder - Downloading a template sets the download location to the O3DE project folder. Project Manager provides no way to retrieve the folder a template was installed to, so there is no easy way to find it on disk, especially since it ends up in an unexpected location.
**label:** non_process
**text:**
project manager template download is incorrectly using the default project folder downloading a template sets the download location to the project folder there is no way of getting the folder a template is located within project manager after it is installed so there is no easy way to find this on disk due to it being in an unexpected location
**binary_label:** 0

---

**Unnamed: 0:** 75,447
**id:** 9,855,138,560
**type:** IssuesEvent
**created_at:** 2019-06-19 18:37:14
**repo:** planetlabs/planet-client-python
**repo_url:** https://api.github.com/repos/planetlabs/planet-client-python
**action:** closed
**title:** Docs referencing non-existing "await" method of Response class
**labels:** documentation
**body:**
[Here](http://planetlabs.github.io/planet-client-python/api/reference.html#planet.api.models.Response.await) the response object is said to have the method `await` instead of `wait`, as defined [here](https://github.com/planetlabs/planet-client-python/blob/47f2db53e5ae7429f55a263fed83e97d8cf93a64/planet/api/models.py#L66).
I assume this happened as the wiki was not re-generated after [this commit](https://github.com/planetlabs/planet-client-python/commit/bbf63a7a85bbe071d6e8af2e94b89ba1a44be1ec#diff-edf2d4beca0941e2e0c5162c561c0f61).
**index:** 1.0
**text_combine:**
Docs referencing non-existing "await" method of Response class - [Here](http://planetlabs.github.io/planet-client-python/api/reference.html#planet.api.models.Response.await) the response object is said to have the method `await` instead of `wait`, as defined [here](https://github.com/planetlabs/planet-client-python/blob/47f2db53e5ae7429f55a263fed83e97d8cf93a64/planet/api/models.py#L66).
I assume this happened as the wiki was not re-generated after [this commit](https://github.com/planetlabs/planet-client-python/commit/bbf63a7a85bbe071d6e8af2e94b89ba1a44be1ec#diff-edf2d4beca0941e2e0c5162c561c0f61).
**label:** non_process
**text:**
docs referencing non existing await method of response class the response object is said to have the method await instead of wait as defined i assume this happened as the wiki was not re generated after
**binary_label:** 0

---

**Unnamed: 0:** 3,397
**id:** 6,517,783,899
**type:** IssuesEvent
**created_at:** 2017-08-28 03:15:40
**repo:** Great-Hill-Corporation/quickBlocks
**repo_url:** https://api.github.com/repos/Great-Hill-Corporation/quickBlocks
**action:** closed
**title:** With getBlock -s:c blockNum, if blockNum is not found in cache, error out.
**labels:** status-inprocess tools-getBlock type-enhancement
**body:**
Currently it simply reports an empty block.
**index:** 1.0
**text_combine:**
With getBlock -s:c blockNum, if blockNum is not found in cache, error out. - Currently it simply reports an empty block.
**label:** process
**text:**
with getblock s c blocknum if blocknum is not found in cache error out currently it simply reports an empty block
**binary_label:** 1

---

**Unnamed: 0:** 13,705
**id:** 16,463,985,511
**type:** IssuesEvent
**created_at:** 2021-05-22 03:04:45
**repo:** gfx-rs/naga
**repo_url:** https://api.github.com/repos/gfx-rs/naga
**action:** closed
**title:** [wgsl-in] panic on return without location
**labels:** area: processing kind: bug
**body:**
This code:
```
[[stage(vertex)]]
fn main() -> f32 {
return 0.0;
}
```
should error because the return has no location, but produces this panic, using naga 0.4.0:
```
thread 'main' panicked at 'called `Option::unwrap()` on a `None` value', \.cargo\registry\src\github.com-1ecc6299db9ec823\naga-0.4.0\src\proc\interpolator.rs:90:52
stack backtrace:
0: std::panicking::begin_panic_handler
at /rustc/2fd73fabe469357a12c2c974c140f67e7cdd76d0\/library\std\src\panicking.rs:493
1: core::panicking::panic_fmt
at /rustc/2fd73fabe469357a12c2c974c140f67e7cdd76d0\/library\core\src\panicking.rs:92
2: core::panicking::panic
at /rustc/2fd73fabe469357a12c2c974c140f67e7cdd76d0\/library\core\src\panicking.rs:50
3: core::option::Option<mut naga::Binding*>::unwrap<mut naga::Binding*>
at \.rustup\toolchains\stable-x86_64-pc-windows-msvc\lib\rustlib\src\rust\library\core\src\option.rs:386
4: naga::proc::interpolator::{{impl}}::apply_common_default_interpolation::default_binding_or_struct
at \.cargo\registry\src\github.com-1ecc6299db9ec823\naga-0.4.0\src\proc\interpolator.rs:90
5: naga::Module::apply_common_default_interpolation
at \.cargo\registry\src\github.com-1ecc6299db9ec823\naga-0.4.0\src\proc\interpolator.rs:135
6: naga::front::wgsl::Parser::parse
at \.cargo\registry\src\github.com-1ecc6299db9ec823\naga-0.4.0\src\front\wgsl\mod.rs:2745
7: naga::front::wgsl::parse_str
at \.cargo\registry\src\github.com-1ecc6299db9ec823\naga-0.4.0\src\front\wgsl\mod.rs:2754
8: build_script_build::main
at .\build.rs:24
9: core::ops::function::FnOnce::call_once<fn(),tuple<>>
at \.rustup\toolchains\stable-x86_64-pc-windows-msvc\lib\rustlib\src\rust\library\core\src\ops\function.rs:227
```
If the return value is given a location attribute, parsing succeeds as expected:
```
// no panic
[[stage(vertex)]]
fn main() -> [[location(0)]] f32 {
return 0.0;
}
```
This also applies to fields on returned structs without a location:
```
// also panics
struct Output {
[[location(0)]] ok: f32;
bad: f32;
};
[[stage(vertex)]]
fn main() -> Output {
var output: Output;
output.ok = 0.0;
output.bad = 0.0;
return output;
}
```
**index:** 1.0
**text_combine:**
[wgsl-in] panic on return without location - This code:
```
[[stage(vertex)]]
fn main() -> f32 {
return 0.0;
}
```
should error because the return has no location, but produces this panic, using naga 0.4.0:
```
thread 'main' panicked at 'called `Option::unwrap()` on a `None` value', \.cargo\registry\src\github.com-1ecc6299db9ec823\naga-0.4.0\src\proc\interpolator.rs:90:52
stack backtrace:
0: std::panicking::begin_panic_handler
at /rustc/2fd73fabe469357a12c2c974c140f67e7cdd76d0\/library\std\src\panicking.rs:493
1: core::panicking::panic_fmt
at /rustc/2fd73fabe469357a12c2c974c140f67e7cdd76d0\/library\core\src\panicking.rs:92
2: core::panicking::panic
at /rustc/2fd73fabe469357a12c2c974c140f67e7cdd76d0\/library\core\src\panicking.rs:50
3: core::option::Option<mut naga::Binding*>::unwrap<mut naga::Binding*>
at \.rustup\toolchains\stable-x86_64-pc-windows-msvc\lib\rustlib\src\rust\library\core\src\option.rs:386
4: naga::proc::interpolator::{{impl}}::apply_common_default_interpolation::default_binding_or_struct
at \.cargo\registry\src\github.com-1ecc6299db9ec823\naga-0.4.0\src\proc\interpolator.rs:90
5: naga::Module::apply_common_default_interpolation
at \.cargo\registry\src\github.com-1ecc6299db9ec823\naga-0.4.0\src\proc\interpolator.rs:135
6: naga::front::wgsl::Parser::parse
at \.cargo\registry\src\github.com-1ecc6299db9ec823\naga-0.4.0\src\front\wgsl\mod.rs:2745
7: naga::front::wgsl::parse_str
at \.cargo\registry\src\github.com-1ecc6299db9ec823\naga-0.4.0\src\front\wgsl\mod.rs:2754
8: build_script_build::main
at .\build.rs:24
9: core::ops::function::FnOnce::call_once<fn(),tuple<>>
at \.rustup\toolchains\stable-x86_64-pc-windows-msvc\lib\rustlib\src\rust\library\core\src\ops\function.rs:227
```
If the return value is given a location attribute, parsing succeeds as expected:
```
// no panic
[[stage(vertex)]]
fn main() -> [[location(0)]] f32 {
return 0.0;
}
```
This also applies to fields on returned structs without a location:
```
// also panics
struct Output {
[[location(0)]] ok: f32;
bad: f32;
};
[[stage(vertex)]]
fn main() -> Output {
var output: Output;
output.ok = 0.0;
output.bad = 0.0;
return output;
}
```
**label:** process
**text:**
panic on return without location this code fn main return should error because the return has no location but produces this panic using naga thread main panicked at called option unwrap on a none value cargo registry src github com naga src proc interpolator rs stack backtrace std panicking begin panic handler at rustc library std src panicking rs core panicking panic fmt at rustc library core src panicking rs core panicking panic at rustc library core src panicking rs core option option unwrap at rustup toolchains stable pc windows msvc lib rustlib src rust library core src option rs naga proc interpolator impl apply common default interpolation default binding or struct at cargo registry src github com naga src proc interpolator rs naga module apply common default interpolation at cargo registry src github com naga src proc interpolator rs naga front wgsl parser parse at cargo registry src github com naga src front wgsl mod rs naga front wgsl parse str at cargo registry src github com naga src front wgsl mod rs build script build main at build rs core ops function fnonce call once at rustup toolchains stable pc windows msvc lib rustlib src rust library core src ops function rs if the return value is given a location attribute parsing succeeds as expected no panic fn main return this also applies to fields on returned structs without a location also panics struct output ok bad fn main output var output output output ok output bad return output
**binary_label:** 1

---

**Unnamed: 0:** 249,328
**id:** 26,911,949,391
**type:** IssuesEvent
**created_at:** 2023-02-07 01:01:50
**repo:** jerusdp/lambda-rust
**repo_url:** https://api.github.com/repos/jerusdp/lambda-rust
**action:** opened
**title:** CVE-2023-22466 (Medium) detected in tokio-1.14.0.crate
**labels:** security vulnerability
**body:**
## CVE-2023-22466 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>tokio-1.14.0.crate</b></p></summary>
<p>An event-driven, non-blocking I/O platform for writing asynchronous I/O
backed applications.
</p>
<p>Library home page: <a href="https://crates.io/api/v1/crates/tokio/1.14.0/download">https://crates.io/api/v1/crates/tokio/1.14.0/download</a></p>
<p>
Dependency Hierarchy:
- :x: **tokio-1.14.0.crate** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/jerusdp/lambda-rust/commit/7565f209acfb2245001469561ca5db54d7fa7c27">7565f209acfb2245001469561ca5db54d7fa7c27</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Tokio is a runtime for writing applications with Rust. Starting with version 1.7.0 and prior to versions 1.18.4, 1.20.3, and 1.23.1, when configuring a Windows named pipe server, setting `pipe_mode` will reset `reject_remote_clients` to `false`. If the application has previously configured `reject_remote_clients` to `true`, this effectively undoes the configuration. Remote clients may only access the named pipe if the named pipe's associated path is accessible via a publicly shared folder (SMB). Versions 1.23.1, 1.20.3, and 1.18.4 have been patched. The fix will also be present in all releases starting from version 1.24.0. Named pipes were introduced to Tokio in version 1.7.0, so releases older than 1.7.0 are not affected. As a workaround, ensure that `pipe_mode` is set first after initializing a `ServerOptions`.
<p>Publish Date: 2023-01-04
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2023-22466>CVE-2023-22466</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.4</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: Low
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: Low
- Integrity Impact: Low
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/tokio-rs/tokio/security/advisories/GHSA-7rrj-xr53-82p7">https://github.com/tokio-rs/tokio/security/advisories/GHSA-7rrj-xr53-82p7</a></p>
<p>Release Date: 2023-01-04</p>
<p>Fix Resolution: tokio - 1.18.4,1.20.3,1.23.1</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
**index:** True
**text_combine:**
CVE-2023-22466 (Medium) detected in tokio-1.14.0.crate - ## CVE-2023-22466 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>tokio-1.14.0.crate</b></p></summary>
<p>An event-driven, non-blocking I/O platform for writing asynchronous I/O
backed applications.
</p>
<p>Library home page: <a href="https://crates.io/api/v1/crates/tokio/1.14.0/download">https://crates.io/api/v1/crates/tokio/1.14.0/download</a></p>
<p>
Dependency Hierarchy:
- :x: **tokio-1.14.0.crate** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/jerusdp/lambda-rust/commit/7565f209acfb2245001469561ca5db54d7fa7c27">7565f209acfb2245001469561ca5db54d7fa7c27</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Tokio is a runtime for writing applications with Rust. Starting with version 1.7.0 and prior to versions 1.18.4, 1.20.3, and 1.23.1, when configuring a Windows named pipe server, setting `pipe_mode` will reset `reject_remote_clients` to `false`. If the application has previously configured `reject_remote_clients` to `true`, this effectively undoes the configuration. Remote clients may only access the named pipe if the named pipe's associated path is accessible via a publicly shared folder (SMB). Versions 1.23.1, 1.20.3, and 1.18.4 have been patched. The fix will also be present in all releases starting from version 1.24.0. Named pipes were introduced to Tokio in version 1.7.0, so releases older than 1.7.0 are not affected. As a workaround, ensure that `pipe_mode` is set first after initializing a `ServerOptions`.
<p>Publish Date: 2023-01-04
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2023-22466>CVE-2023-22466</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.4</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: Low
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: Low
- Integrity Impact: Low
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/tokio-rs/tokio/security/advisories/GHSA-7rrj-xr53-82p7">https://github.com/tokio-rs/tokio/security/advisories/GHSA-7rrj-xr53-82p7</a></p>
<p>Release Date: 2023-01-04</p>
<p>Fix Resolution: tokio - 1.18.4,1.20.3,1.23.1</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
**label:** non_process
**text:**
cve medium detected in tokio crate cve medium severity vulnerability vulnerable library tokio crate an event driven non blocking i o platform for writing asynchronous i o backed applications library home page a href dependency hierarchy x tokio crate vulnerable library found in head commit a href found in base branch master vulnerability details tokio is a runtime for writing applications with rust starting with version and prior to versions and when configuring a windows named pipe server setting pipe mode will reset reject remote clients to false if the application has previously configured reject remote clients to true this effectively undoes the configuration remote clients may only access the named pipe if the named pipe s associated path is accessible via a publicly shared folder smb versions and have been patched the fix will also be present in all releases starting from version named pipes were introduced to tokio in version so releases older than are not affected as a workaround ensure that pipe mode is set first after initializing a serveroptions publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required low user interaction none scope unchanged impact metrics confidentiality impact low integrity impact low availability impact none for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution tokio step up your open source security game with mend
**binary_label:** 0
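
The advisory quoted in this row recommends setting `pipe_mode` before `reject_remote_clients` as the workaround. A hedged sketch of that ordering against tokio's Windows named-pipe builder follows; the pipe name is invented for illustration.

```rust
// Sketch of the CVE-2023-22466 workaround described above. Windows-only API;
// the pipe name is a placeholder.
#[cfg(windows)]
fn build_server() -> std::io::Result<tokio::net::windows::named_pipe::NamedPipeServer> {
    use tokio::net::windows::named_pipe::{PipeMode, ServerOptions};
    ServerOptions::new()
        .pipe_mode(PipeMode::Message)   // set the mode first...
        .reject_remote_clients(true)    // ...so affected versions cannot silently reset this flag
        .create(r"\\.\pipe\example")
}
```
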

---

**Unnamed: 0:** 295,062
**id:** 9,072,141,733
**type:** IssuesEvent
**created_at:** 2019-02-15 01:30:46
**repo:** matsengrp/olmsted
**repo_url:** https://api.github.com/repos/matsengrp/olmsted
**action:** closed
**title:** Add pie charts for timepoint multiplicities
**labels:** data-in high-priority vega-viz
**body:**
There are analogs of `multiplicity` and `cluster_multiplicity` which account for timepoints: `multiplicities` and `cluster_multiplicities`. These plural attributes point to `:` separated lists of multiplicities, ordered per timepoint in the `timepoints` column (also `:` separated). This is a little nonstandard, and would maybe be better as json, but it follows the partis output convention.
We can use Vega's arc marks to create these little pie charts. We can even use `innerRadius` to make these pies hollow in the center, so that they overlap more gracefully. Vega docs:
https://vega.github.io/vega/docs/marks/arc/
Note that this will mean some preprocessing, as here the datum level of the mark is unique combinations of `(minadcl_representative, timepoint)`. So we'll have to "flatten" this information out of the nested `multiplicities` and `cluster_multiplicities` cols.
**index:** 1.0
**text_combine:**
Add pie charts for timepoint multiplicities - There are analogs of `multiplicity` and `cluster_multiplicity` which account for timepoints: `multiplicities` and `cluster_multiplicities`. These plural attributes point to `:` separated lists of multiplicities, ordered per timepoint in the `timepoints` column (also `:` separated). This is a little nonstandard, and would maybe be better as json, but it follows the partis output convention.
We can use Vega's arc marks to create these little pie charts. We can even use `innerRadius` to make these pies hollow in the center, so that they overlap more gracefully. Vega docs:
https://vega.github.io/vega/docs/marks/arc/
Note that this will mean some preprocessing, as here the datum level of the mark is unique combinations of `(minadcl_representative, timepoint)`. So we'll have to "flatten" this information out of the nested `multiplicities` and `cluster_multiplicities` cols.
**label:** non_process
**text:**
add pie charts for timepoint multiplicities there are analogs of multiplicity and cluster multiplicity which account for timepoints multiplicities and cluster multiplicities these plural attributes point to separated lists of multiplicities ordered per timepoint in the timepoints column also separated this is a little nonstandard and would maybe be better as json but it follows the partis output convention we can use vega s arc marks to create these little pie charts we can even use innerradius to make these pies hollow in the center so that they overlap more gracefully vega docs note that this will mean some preprocessing as here the datum level of the mark is unique combinations of minadcl representative timepoint so we ll have to flatten this information out of the nested multiplicities and cluster multiplicities cols
**binary_label:** 0

---

**Unnamed: 0:** 4,587
**id:** 7,225,656,926
**type:** IssuesEvent
**created_at:** 2018-02-10 00:05:49
**repo:** rust-lang-nursery/futures-rs
**repo_url:** https://api.github.com/repos/rust-lang-nursery/futures-rs
**action:** closed
**title:** Rename `poll_complete` to `poll_flush`
**labels:** 0.1-incompatible 0.2-cleanup
**body:**
Clarify the semantics of `poll_complete` by renaming to `poll_flush`.
**index:** True
**text_combine:**
Rename `poll_complete` to `poll_flush` - Clarify the semantics of `poll_complete` by renaming to `poll_flush`.
**label:** non_process
**text:**
rename poll complete to poll flush clarify the semantics of poll complete by renaming to poll flush
**binary_label:** 0

---

**Unnamed: 0:** 8,957
**id:** 12,067,927,487
**type:** IssuesEvent
**created_at:** 2020-04-16 14:05:41
**repo:** MHRA/products
**repo_url:** https://api.github.com/repos/MHRA/products
**action:** closed
**title:** Long authentication time for SFTP
**labels:** BUG :bug: EPIC - Auto Batch Process :oncoming_automobile:
**body:**
**Describe the bug**
Currently takes around 30s - 1min to authenticate with Sentinel. This time is infeasible if many files need to be processed in a timely manner. It should only take 1-2 seconds.
**To Reproduce**
1. Create request to check-in file
2. Observe logs for job
3. See that authentication takes a long time
**Expected behavior**
Authentication should take 1-2 seconds.
**Screenshots**
N/A
**Additional context**
A credible fix could be switching to use keys instead of username/password auth
**index:** 1.0
**text_combine:**
Long authentication time for SFTP - **Describe the bug**
Currently takes around 30s - 1min to authenticate with Sentinel. This time is infeasible if many files need to be processed in a timely manner. It should only take 1-2 seconds.
**To Reproduce**
1. Create request to check-in file
2. Observe logs for job
3. See that authentication takes a long time
**Expected behavior**
Authentication should take 1-2 seconds.
**Screenshots**
N/A
**Additional context**
A credible fix could be switching to use keys instead of username/password auth
**label:** process
**text:**
long authentication time for sftp describe the bug currently takes around to authenticate with sentinel this time is infeasible if many files need to be processed in a timely manner it should only take seconds to reproduce create request to check in file observe logs for job see that authentication takes a long time expected behavior authentication should take seconds screenshots n a additional context a credible fix could be switching to use keys instead of username password auth
**binary_label:** 1
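
The row above ends by suggesting key-based authentication as a credible fix. A sketch of what that might look like with the `ssh2` crate; the crate choice, host, user, and key path are all assumptions, not from the issue.

```rust
// Hedged sketch: authenticate an SFTP session with a key pair instead of a
// username/password exchange. All identifiers below are placeholders.
use std::net::TcpStream;
use std::path::Path;

fn open_sftp() -> Result<ssh2::Sftp, Box<dyn std::error::Error>> {
    let tcp = TcpStream::connect("sentinel.example.com:22")?; // hypothetical host
    let mut session = ssh2::Session::new()?;
    session.set_tcp_stream(tcp);
    session.handshake()?;
    // Public-key auth avoids the slow password round-trips described above.
    session.userauth_pubkey_file(
        "batch-user",                       // hypothetical user
        None,                               // derive the public key from the private key
        Path::new("/etc/keys/id_ed25519"),  // hypothetical key path
        None,                               // no passphrase
    )?;
    Ok(session.sftp()?)
}
```
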

---

**Unnamed: 0:** 6,230
**id:** 9,179,030,341
**type:** IssuesEvent
**created_at:** 2019-03-05 01:25:46
**repo:** pelias/pelias
**repo_url:** https://api.github.com/repos/pelias/pelias
**action:** closed
**title:** Alternate Names for POIs
**labels:** processed
**body:**
When folks in Portland search for things named PDX, PCC and MLK, they are looking for Portland's Airport, Portland Community College and SE Martin Luther King Blvd.
In TriMet's current geocoder, we have a list of synonyms that currently get loaded into SOLR: https://github.com/OpenTransitTools/loader/blob/master/ott/loader/solr/conf/synonyms.txt
The instance of Pelias that will replace SOLR for TriMet will need to be loaded with a similar set of synonyms and alternate names:
https://mapzen.com/products/search/?query=PDX&endpoint=autocomplete
So having a way to define alternative naming would be good. I can see multiple data sources supplying the original POI (e.g., think "Portland International Airport" is from both OSM and OA; we also have a lot of transit landmarks that have common alternative names / abbreviations).
Question: should alt naming come from the source, or a separate synonyms file?
**index:** 1.0
**text_combine:**
Alternate Names for POIs - When folks in Portland search for things named PDX, PCC and MLK, they are looking for Portland's Airport, Portland Community College and SE Martin Luther King Blvd.
In TriMet's current geocoder, we have a list of synonyms that currently get loaded into SOLR: https://github.com/OpenTransitTools/loader/blob/master/ott/loader/solr/conf/synonyms.txt
The instance of Pelias that will replace SOLR for TriMet will need to be loaded with a similar set of synonyms and alternate names:
https://mapzen.com/products/search/?query=PDX&endpoint=autocomplete
So having a way to define alternative naming would be good. I can see multiple data sources supplying the original POI (e.g., think "Portland International Airport" is from both OSM and OA; we also have a lot of transit landmarks that have common alternative names / abbreviations).
Question: should alt naming come from the source, or a separate synonyms file?
**label:** process
**text:**
alternate names for pois when folks in portland search for things named pdx pcc and mlk they are looking for portland s airport portland community college and se martin luther king blvd in trimet s current geocoder we have a list of synonyms that currently get loaded into solr the instance of pelias that trimet will replace solr will need to be loaded with a similar set of synonyms and alternate names so having a way to define alternative naming would be good i can see multiple data sources supplying the original poi e g think portland international airport is from both osm and oa we also have a lot of transit landmarks that have common alternative names abbreviations question should alt naming come from the source or a separate synonyms file
**binary_label:** 1

---

**Unnamed: 0:** 7
**id:** 2,496,208,264
**type:** IssuesEvent
**created_at:** 2015-01-06 17:49:10
**repo:** vivo-isf/vivo-isf-ontology
**repo_url:** https://api.github.com/repos/vivo-isf/vivo-isf-ontology
**action:** closed
**title:** circadian rhythm
**labels:** biological_process imported
**body:**
_From [fcold...@eagle-i.org](https://code.google.com/u/113677139039624182507/) on October 09, 2012 08:19:31_
**** Use the form below to request a new term ****
**** Scroll down to see a term request example ****
Please indicate the label for the proposed term:
circadian rhythm
Please provide a textual definition (with source):
Circadian rhythm is the regular recurrence, in cycles of about 24 hours, of biological processes or activities, such as sensitivity to drugs and stimuli, hormone secretion, sleeping, and feeding.
from MeSH: http://www.ncbi.nlm.nih.gov/mesh/68002940
Please add an example of usage for proposed term:
describing resources (via biological process field) that are used to investigate this type of chronobiology phenomena
Please provide any additional optional information below. (e.g. desired asserted SuperClass in ERO hierarchy or Reference Branch)
[ ] Instrument
[X] Biological process
[ ] Disease
[ ] Human studies
[ ] Instrument
[ ] Organism
[ ] Reagent
[ ] Software
[ ] Technique
[ ] Organization
_Original issue: http://code.google.com/p/eagle-i/issues/detail?id=124_
**index:** 1.0
**text_combine:**
circadian rhythm - _From [fcold...@eagle-i.org](https://code.google.com/u/113677139039624182507/) on October 09, 2012 08:19:31_
**** Use the form below to request a new term ****
**** Scroll down to see a term request example ****
Please indicate the label for the proposed term:
circadian rhythm
Please provide a textual definition (with source):
Circadian rhythm is the regular recurrence, in cycles of about 24 hours, of biological processes or activities, such as sensitivity to drugs and stimuli, hormone secretion, sleeping, and feeding.
from MeSH: http://www.ncbi.nlm.nih.gov/mesh/68002940
Please add an example of usage for proposed term:
describing resources (via biological process field) that are used to investigate this type of chronobiology phenomena
Please provide any additional optional information below. (e.g. desired asserted SuperClass in ERO hierarchy or Reference Branch)
[ ] Instrument
[X] Biological process
[ ] Disease
[ ] Human studies
[ ] Instrument
[ ] Organism
[ ] Reagent
[ ] Software
[ ] Technique
[ ] Organization
_Original issue: http://code.google.com/p/eagle-i/issues/detail?id=124_
**label:** process
**text:**
circadian rhythm from on october use the form below to request a new term scroll down to see a term request example please indicate the label for the proposed term circadian rhythm please provide a textual definition with source circadian rhythm is the regular recurrence in cycles of about hours of biological processes or activities such as sensitivity to drugs and stimuli hormone secretion sleeping and feeding from mesh please add an example of usage for proposed term describing resources via biological process field that are used to investigate this type of chronobiology phenomena please provide any additional optional information below e g desired asserted superclass in ero hierarchy or reference branch instrument biological process disease human studies instrument organism reagent software technique organization original issue
**binary_label:** 1

---

**Unnamed: 0:** 322,313
**id:** 23,901,997,524
**type:** IssuesEvent
**created_at:** 2022-09-08 19:43:00
**repo:** ubeac/svelte
**repo_url:** https://api.github.com/repos/ubeac/svelte
**action:** closed
**title:** Source Code Preview Component
**labels:** documentation enhancement core Preview
**body:**
usage:
```svelte
<Preview>
<ButtonColors />
</Preview>
<!-- we should write a preprocessor which finds <ButtonColors> component's source code and insert code sections as prop for Preview component -->
<Preview markup={`
<Button color={color}>Color Example</Button>
`}
script={`
let color = red;\n$: classes= "btn-" + color;\n
`}
style={`
.btn { background-color: red; }
`}>
<ButtonColors/>
</Preview>
```
or simpler (similar to storybook) usage:
```svelte
<Preview component={ButtonColors} />
```
or from file url:
```svelte
<Preview src="./example/button-colors.svelte" />
<!-- should import button-colors.svelte file and render inside Preview component (default slot) -->
```
**index:** 1.0
**text_combine:**
Source Code Preview Component - usage:
```svelte
<Preview>
<ButtonColors />
</Preview>
<!-- we should write a preprocessor which finds <ButtonColors> component's source code and insert code sections as prop for Preview component -->
<Preview markup={`
<Button color={color}>Color Example</Button>
`}
script={`
let color = red;\n$: classes= "btn-" + color;\n
`}
style={`
.btn { background-color: red; }
`}>
<ButtonColors/>
</Preview>
```
or simpler (similar to storybook) usage:
```svelte
<Preview component={ButtonColors} />
```
or from file url:
```svelte
<Preview src="./example/button-colors.svelte" />
<!-- should import button-colors.svelte file and render inside Preview component (default slot) -->
```
**label:** non_process
**text:**
source code preview component usage svelte component s source code and insert code sections as prop for preview component preview markup color example script let color red n classes btn color n style btn background color red or simpler similar to storybook usage svelte or from file url svelte
**binary_label:** 0

---

**Unnamed: 0:** 19,743
**id:** 26,098,525,334
**type:** IssuesEvent
**created_at:** 2022-12-27 02:00:06
**repo:** lizhihao6/get-daily-arxiv-noti
**repo_url:** https://api.github.com/repos/lizhihao6/get-daily-arxiv-noti
**action:** opened
**title:** New submissions for Tue, 27 Dec 22
**labels:** event camera white balance isp compression image signal processing image signal process raw raw image events camera color contrast events AWB
**body:**
## Keyword: events
There is no result
## Keyword: event camera
There is no result
## Keyword: events camera
There is no result
## Keyword: white balance
There is no result
## Keyword: color contrast
There is no result
## Keyword: AWB
### HandsOff: Labeled Dataset Generation With No Additional Human Annotations
- **Authors:** Austin Xu, Mariya I. Vasileva, Achal Dave, Arjun Seshadri
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Machine Learning (cs.LG)
- **Arxiv link:** https://arxiv.org/abs/2212.12645
- **Pdf link:** https://arxiv.org/pdf/2212.12645
- **Abstract**
Recent work leverages the expressive power of generative adversarial networks (GANs) to generate labeled synthetic datasets. These dataset generation methods often require new annotations of synthetic images, which forces practitioners to seek out annotators, curate a set of synthetic images, and ensure the quality of generated labels. We introduce the HandsOff framework, a technique capable of producing an unlimited number of synthetic images and corresponding labels after being trained on less than 50 pre-existing labeled images. Our framework avoids the practical drawbacks of prior work by unifying the field of GAN inversion with dataset generation. We generate datasets with rich pixel-wise labels in multiple challenging domains such as faces, cars, full-body human poses, and urban driving scenes. Our method achieves state-of-the-art performance in semantic segmentation, keypoint detection, and depth estimation compared to prior dataset generation approaches and transfer learning baselines. We additionally showcase its ability to address broad challenges in model development which stem from fixed, hand-annotated datasets, such as the long-tail problem in semantic segmentation.
### TexPose: Neural Texture Learning for Self-Supervised 6D Object Pose Estimation
- **Authors:** Hanzhi Chen, Fabian Manhardt, Nassir Navab, Benjamin Busam
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV)
- **Arxiv link:** https://arxiv.org/abs/2212.12902
- **Pdf link:** https://arxiv.org/pdf/2212.12902
- **Abstract**
In this paper, we introduce neural texture learning for 6D object pose estimation from synthetic data and a few unlabelled real images. Our major contribution is a novel learning scheme which removes the drawbacks of previous works, namely the strong dependency on co-modalities or additional refinement. These have been previously necessary to provide training signals for convergence. We formulate such a scheme as two sub-optimisation problems on texture learning and pose learning. We separately learn to predict realistic texture of objects from real image collections and learn pose estimation from pixel-perfect synthetic data. Combining these two capabilities allows then to synthesise photorealistic novel views to supervise the pose estimator with accurate geometry. To alleviate pose noise and segmentation imperfection present during the texture learning phase, we propose a surfel-based adversarial training loss together with texture regularisation from synthetic data. We demonstrate that the proposed approach significantly outperforms the recent state-of-the-art methods without ground-truth pose annotations and demonstrates substantial generalisation improvements towards unseen scenes. Remarkably, our scheme improves the adopted pose estimators substantially even when initialised with much inferior performance.
## Keyword: ISP
### Assessing thermal imagery integration into object detection methods on ground-based and air-based collection platforms
- **Authors:** James Gallagher, Edward Oughton
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Machine Learning (cs.LG)
- **Arxiv link:** https://arxiv.org/abs/2212.12616
- **Pdf link:** https://arxiv.org/pdf/2212.12616
- **Abstract**
Object detection models commonly deployed on uncrewed aerial systems (UAS) focus on identifying objects in the visible spectrum using Red-Green-Blue (RGB) imagery. However, there is growing interest in fusing RGB with thermal long wave infrared (LWIR) images to increase the performance of object detection machine learning (ML) models. Currently LWIR ML models have received less research attention, especially for both ground- and air-based platforms, leading to a lack of baseline performance metrics evaluating LWIR, RGB and LWIR-RGB fused object detection models. Therefore, this research contributes such quantitative metrics to the literature .The results found that the ground-based blended RGB-LWIR model exhibited superior performance compared to the RGB or LWIR approaches, achieving a mAP of 98.4%. Additionally, the blended RGB-LWIR model was also the only object detection model to work in both day and night conditions, providing superior operational capabilities. This research additionally contributes a novel labelled training dataset of 12,600 images for RGB, LWIR, and RGB-LWIR fused imagery, collected from ground-based and air-based platforms, enabling further multispectral machine-driven object detection research.
## Keyword: image signal processing
There is no result
## Keyword: image signal process
There is no result
## Keyword: compression
### Hyperspherical Quantization: Toward Smaller and More Accurate Models
- **Authors:** Dan Liu, Xi Chen, Chen Ma, Xue Liu
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV)
- **Arxiv link:** https://arxiv.org/abs/2212.12653
- **Pdf link:** https://arxiv.org/pdf/2212.12653
- **Abstract**
Model quantization enables the deployment of deep neural networks under resource-constrained devices. Vector quantization aims at reducing the model size by indexing model weights with full-precision embeddings, i.e., codewords, while the index needs to be restored to 32-bit during computation. Binary and other low-precision quantization methods can reduce the model size up to 32$\times$, however, at the cost of a considerable accuracy drop. In this paper, we propose an efficient framework for ternary quantization to produce smaller and more accurate compressed models. By integrating hyperspherical learning, pruning and reinitialization, our proposed Hyperspherical Quantization (HQ) method reduces the cosine distance between the full-precision and ternary weights, thus reducing the bias of the straight-through gradient estimator during ternary quantization. Compared with existing work at similar compression levels ($\sim$30$\times$, $\sim$40$\times$), our method significantly improves the test accuracy and reduces the model size.
### DDH-QA: A Dynamic Digital Humans Quality Assessment Database
- **Authors:** Zicheng Zhang, Yingjie Zhou, Wei Sun, Wei Lu, Xiongkuo Min, Yu Wang, Guangtao Zhai
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV)
- **Arxiv link:** https://arxiv.org/abs/2212.12734
- **Pdf link:** https://arxiv.org/pdf/2212.12734
- **Abstract**
In recent years, large amounts of effort have been put into pushing forward the real-world application of dynamic digital human (DDH). However, most current quality assessment research focuses on evaluating static 3D models and usually ignores motion distortions. Therefore, in this paper, we construct a large-scale dynamic digital human quality assessment (DDH-QA) database with diverse motion content as well as multiple distortions to comprehensively study the perceptual quality of DDHs. Both model-based distortion (noise, compression) and motion-based distortion (binding error, motion unnaturalness) are taken into consideration. Ten types of common motion are employed to drive the DDHs and a total of 800 DDHs are generated in the end. Afterward, we render the video sequences of the distorted DDHs as the evaluation media and carry out a well-controlled subjective experiment. Then a benchmark experiment is conducted with the state-of-the-art video quality assessment (VQA) methods and the experimental results show that existing VQA methods are limited in assessing the perceptual loss of DDHs. The database will be made publicly available to facilitate future research.
### BD-KD: Balancing the Divergences for Online Knowledge Distillation
- **Authors:** Ibtihel Amara, Nazanin Sepahvand, Brett H. Meyer, Warren J. Gross, James J. Clark
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV)
- **Arxiv link:** https://arxiv.org/abs/2212.12965
- **Pdf link:** https://arxiv.org/pdf/2212.12965
- **Abstract**
Knowledge distillation (KD) has gained a lot of attention in the field of model compression for edge devices thanks to its effectiveness in compressing large powerful networks into smaller lower-capacity models. Online distillation, in which both the teacher and the student are learning collaboratively, has also gained much interest due to its ability to improve on the performance of the networks involved. The Kullback-Leibler (KL) divergence ensures the proper knowledge transfer between the teacher and student. However, most online KD techniques present some bottlenecks under the network capacity gap. By cooperatively and simultaneously training, the models the KL distance becomes incapable of properly minimizing the teacher's and student's distributions. Alongside accuracy, critical edge device applications are in need of well-calibrated compact networks. Confidence calibration provides a sensible way of getting trustworthy predictions. We propose BD-KD: Balancing of Divergences for online Knowledge Distillation. We show that adaptively balancing between the reverse and forward divergences shifts the focus of the training strategy to the compact student network without limiting the teacher network's learning process. We demonstrate that, by performing this balancing design at the level of the student distillation loss, we improve upon both performance accuracy and calibration of the compact student network. We conducted extensive experiments using a variety of network architectures and show improvements on multiple datasets including CIFAR-10, CIFAR-100, Tiny-ImageNet, and ImageNet. We illustrate the effectiveness of our approach through comprehensive comparisons and ablations with current state-of-the-art online and offline KD techniques.
## Keyword: RAW
### HandsOff: Labeled Dataset Generation With No Additional Human Annotations
- **Authors:** Austin Xu, Mariya I. Vasileva, Achal Dave, Arjun Seshadri
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Machine Learning (cs.LG)
- **Arxiv link:** https://arxiv.org/abs/2212.12645
- **Pdf link:** https://arxiv.org/pdf/2212.12645
- **Abstract**
Recent work leverages the expressive power of generative adversarial networks (GANs) to generate labeled synthetic datasets. These dataset generation methods often require new annotations of synthetic images, which forces practitioners to seek out annotators, curate a set of synthetic images, and ensure the quality of generated labels. We introduce the HandsOff framework, a technique capable of producing an unlimited number of synthetic images and corresponding labels after being trained on less than 50 pre-existing labeled images. Our framework avoids the practical drawbacks of prior work by unifying the field of GAN inversion with dataset generation. We generate datasets with rich pixel-wise labels in multiple challenging domains such as faces, cars, full-body human poses, and urban driving scenes. Our method achieves state-of-the-art performance in semantic segmentation, keypoint detection, and depth estimation compared to prior dataset generation approaches and transfer learning baselines. We additionally showcase its ability to address broad challenges in model development which stem from fixed, hand-annotated datasets, such as the long-tail problem in semantic segmentation.
### TexPose: Neural Texture Learning for Self-Supervised 6D Object Pose Estimation
- **Authors:** Hanzhi Chen, Fabian Manhardt, Nassir Navab, Benjamin Busam
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV)
- **Arxiv link:** https://arxiv.org/abs/2212.12902
- **Pdf link:** https://arxiv.org/pdf/2212.12902
- **Abstract**
In this paper, we introduce neural texture learning for 6D object pose estimation from synthetic data and a few unlabelled real images. Our major contribution is a novel learning scheme which removes the drawbacks of previous works, namely the strong dependency on co-modalities or additional refinement. These have been previously necessary to provide training signals for convergence. We formulate such a scheme as two sub-optimisation problems on texture learning and pose learning. We separately learn to predict realistic texture of objects from real image collections and learn pose estimation from pixel-perfect synthetic data. Combining these two capabilities allows then to synthesise photorealistic novel views to supervise the pose estimator with accurate geometry. To alleviate pose noise and segmentation imperfection present during the texture learning phase, we propose a surfel-based adversarial training loss together with texture regularisation from synthetic data. We demonstrate that the proposed approach significantly outperforms the recent state-of-the-art methods without ground-truth pose annotations and demonstrates substantial generalisation improvements towards unseen scenes. Remarkably, our scheme improves the adopted pose estimators substantially even when initialised with much inferior performance.
### Unsupervised Representation Learning from Pre-trained Diffusion Probabilistic Models
- **Authors:** Zijian Zhang, Zhou Zhao, Zhijie Lin
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Artificial Intelligence (cs.AI)
- **Arxiv link:** https://arxiv.org/abs/2212.12990
- **Pdf link:** https://arxiv.org/pdf/2212.12990
- **Abstract**
Diffusion Probabilistic Models (DPMs) have shown a powerful capacity of generating high-quality image samples. Recently, diffusion autoencoders (Diff-AE) have been proposed to explore DPMs for representation learning via autoencoding. Their key idea is to jointly train an encoder for discovering meaningful representations from images and a conditional DPM as the decoder for reconstructing images. Considering that training DPMs from scratch will take a long time and there have existed numerous pre-trained DPMs, we propose \textbf{P}re-trained \textbf{D}PM \textbf{A}uto\textbf{E}ncoding (\textbf{PDAE}), a general method to adapt existing pre-trained DPMs to the decoders for image reconstruction, with better training efficiency and performance than Diff-AE. Specifically, we find that the reason that pre-trained DPMs fail to reconstruct an image from its latent variables is due to the information loss of forward process, which causes a gap between their predicted posterior mean and the true one. From this perspective, the classifier-guided sampling method can be explained as computing an extra mean shift to fill the gap, reconstructing the lost class information in samples. These imply that the gap corresponds to the lost information of the image, and we can reconstruct the image by filling the gap. Drawing inspiration from this, we employ a trainable model to predict a mean shift according to encoded representation and train it to fill as much gap as possible, in this way, the encoder is forced to learn as much information as possible from images to help the filling. By reusing a part of network of pre-trained DPMs and redesigning the weighting scheme of diffusion loss, PDAE can learn meaningful representations from images efficiently. Extensive experiments demonstrate the effectiveness, efficiency and flexibility of PDAE.
## Keyword: raw image
There is no result
**index:** 2.0
**text_combine:**
New submissions for Tue, 27 Dec 22 - ## Keyword: events
There is no result
## Keyword: event camera
There is no result
## Keyword: events camera
There is no result
## Keyword: white balance
There is no result
## Keyword: color contrast
There is no result
## Keyword: AWB
### HandsOff: Labeled Dataset Generation With No Additional Human Annotations
- **Authors:** Austin Xu, Mariya I. Vasileva, Achal Dave, Arjun Seshadri
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Machine Learning (cs.LG)
- **Arxiv link:** https://arxiv.org/abs/2212.12645
- **Pdf link:** https://arxiv.org/pdf/2212.12645
- **Abstract**
Recent work leverages the expressive power of generative adversarial networks (GANs) to generate labeled synthetic datasets. These dataset generation methods often require new annotations of synthetic images, which forces practitioners to seek out annotators, curate a set of synthetic images, and ensure the quality of generated labels. We introduce the HandsOff framework, a technique capable of producing an unlimited number of synthetic images and corresponding labels after being trained on less than 50 pre-existing labeled images. Our framework avoids the practical drawbacks of prior work by unifying the field of GAN inversion with dataset generation. We generate datasets with rich pixel-wise labels in multiple challenging domains such as faces, cars, full-body human poses, and urban driving scenes. Our method achieves state-of-the-art performance in semantic segmentation, keypoint detection, and depth estimation compared to prior dataset generation approaches and transfer learning baselines. We additionally showcase its ability to address broad challenges in model development which stem from fixed, hand-annotated datasets, such as the long-tail problem in semantic segmentation.
### TexPose: Neural Texture Learning for Self-Supervised 6D Object Pose Estimation
- **Authors:** Hanzhi Chen, Fabian Manhardt, Nassir Navab, Benjamin Busam
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV)
- **Arxiv link:** https://arxiv.org/abs/2212.12902
- **Pdf link:** https://arxiv.org/pdf/2212.12902
- **Abstract**
In this paper, we introduce neural texture learning for 6D object pose estimation from synthetic data and a few unlabelled real images. Our major contribution is a novel learning scheme which removes the drawbacks of previous works, namely the strong dependency on co-modalities or additional refinement. These have been previously necessary to provide training signals for convergence. We formulate such a scheme as two sub-optimisation problems on texture learning and pose learning. We separately learn to predict realistic texture of objects from real image collections and learn pose estimation from pixel-perfect synthetic data. Combining these two capabilities then allows us to synthesise photorealistic novel views to supervise the pose estimator with accurate geometry. To alleviate pose noise and segmentation imperfection present during the texture learning phase, we propose a surfel-based adversarial training loss together with texture regularisation from synthetic data. We demonstrate that the proposed approach significantly outperforms the recent state-of-the-art methods without ground-truth pose annotations and demonstrates substantial generalisation improvements towards unseen scenes. Remarkably, our scheme improves the adopted pose estimators substantially even when initialised with much inferior performance.
## Keyword: ISP
### Assessing thermal imagery integration into object detection methods on ground-based and air-based collection platforms
- **Authors:** James Gallagher, Edward Oughton
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Machine Learning (cs.LG)
- **Arxiv link:** https://arxiv.org/abs/2212.12616
- **Pdf link:** https://arxiv.org/pdf/2212.12616
- **Abstract**
Object detection models commonly deployed on uncrewed aerial systems (UAS) focus on identifying objects in the visible spectrum using Red-Green-Blue (RGB) imagery. However, there is growing interest in fusing RGB with thermal long wave infrared (LWIR) images to increase the performance of object detection machine learning (ML) models. Currently, LWIR ML models have received less research attention, especially for both ground- and air-based platforms, leading to a lack of baseline performance metrics evaluating LWIR, RGB and LWIR-RGB fused object detection models. Therefore, this research contributes such quantitative metrics to the literature. The results show that the ground-based blended RGB-LWIR model exhibited superior performance compared to the RGB or LWIR approaches, achieving a mAP of 98.4%. Additionally, the blended RGB-LWIR model was also the only object detection model to work in both day and night conditions, providing superior operational capabilities. This research additionally contributes a novel labelled training dataset of 12,600 images for RGB, LWIR, and RGB-LWIR fused imagery, collected from ground-based and air-based platforms, enabling further multispectral machine-driven object detection research.
## Keyword: image signal processing
There is no result
## Keyword: image signal process
There is no result
## Keyword: compression
### Hyperspherical Quantization: Toward Smaller and More Accurate Models
- **Authors:** Dan Liu, Xi Chen, Chen Ma, Xue Liu
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV)
- **Arxiv link:** https://arxiv.org/abs/2212.12653
- **Pdf link:** https://arxiv.org/pdf/2212.12653
- **Abstract**
Model quantization enables the deployment of deep neural networks under resource-constrained devices. Vector quantization aims at reducing the model size by indexing model weights with full-precision embeddings, i.e., codewords, while the index needs to be restored to 32-bit during computation. Binary and other low-precision quantization methods can reduce the model size by up to 32$\times$; however, this comes at the cost of a considerable accuracy drop. In this paper, we propose an efficient framework for ternary quantization to produce smaller and more accurate compressed models. By integrating hyperspherical learning, pruning and reinitialization, our proposed Hyperspherical Quantization (HQ) method reduces the cosine distance between the full-precision and ternary weights, thus reducing the bias of the straight-through gradient estimator during ternary quantization. Compared with existing work at similar compression levels ($\sim$30$\times$, $\sim$40$\times$), our method significantly improves the test accuracy and reduces the model size.
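As a rough illustration of the general idea, the sketch below ternarizes a weight tensor by zeroing small magnitudes and scaling the survivors. The threshold rule and `threshold_ratio` are hypothetical stand-ins for illustration, not the paper's actual HQ procedure.

```python
import numpy as np

def ternarize(w, threshold_ratio=0.05):
    """Map weights to {-s, 0, +s} (a generic ternary quantizer, not HQ itself)."""
    delta = threshold_ratio * np.max(np.abs(w))  # hypothetical magnitude threshold
    mask = np.abs(w) > delta                     # weights that survive pruning to zero
    scale = np.abs(w[mask]).mean() if mask.any() else 0.0
    return np.sign(w) * mask * scale

w = np.random.randn(256, 256)
w_t = ternarize(w)
print(np.unique(np.round(w_t, 6)))  # three levels: -s, 0.0, +s
```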
### DDH-QA: A Dynamic Digital Humans Quality Assessment Database
- **Authors:** Zicheng Zhang, Yingjie Zhou, Wei Sun, Wei Lu, Xiongkuo Min, Yu Wang, Guangtao Zhai
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV)
- **Arxiv link:** https://arxiv.org/abs/2212.12734
- **Pdf link:** https://arxiv.org/pdf/2212.12734
- **Abstract**
In recent years, large amounts of effort have been put into pushing forward the real-world application of dynamic digital human (DDH). However, most current quality assessment research focuses on evaluating static 3D models and usually ignores motion distortions. Therefore, in this paper, we construct a large-scale dynamic digital human quality assessment (DDH-QA) database with diverse motion content as well as multiple distortions to comprehensively study the perceptual quality of DDHs. Both model-based distortion (noise, compression) and motion-based distortion (binding error, motion unnaturalness) are taken into consideration. Ten types of common motion are employed to drive the DDHs and a total of 800 DDHs are generated in the end. Afterward, we render the video sequences of the distorted DDHs as the evaluation media and carry out a well-controlled subjective experiment. Then a benchmark experiment is conducted with the state-of-the-art video quality assessment (VQA) methods and the experimental results show that existing VQA methods are limited in assessing the perceptual loss of DDHs. The database will be made publicly available to facilitate future research.
### BD-KD: Balancing the Divergences for Online Knowledge Distillation
- **Authors:** Ibtihel Amara, Nazanin Sepahvand, Brett H. Meyer, Warren J. Gross, James J. Clark
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV)
- **Arxiv link:** https://arxiv.org/abs/2212.12965
- **Pdf link:** https://arxiv.org/pdf/2212.12965
- **Abstract**
Knowledge distillation (KD) has gained a lot of attention in the field of model compression for edge devices thanks to its effectiveness in compressing large powerful networks into smaller lower-capacity models. Online distillation, in which both the teacher and the student are learning collaboratively, has also gained much interest due to its ability to improve on the performance of the networks involved. The Kullback-Leibler (KL) divergence ensures the proper knowledge transfer between the teacher and student. However, most online KD techniques present some bottlenecks under the network capacity gap. When the models are trained cooperatively and simultaneously, the KL distance becomes incapable of properly minimizing the divergence between the teacher's and student's distributions. Alongside accuracy, critical edge device applications are in need of well-calibrated compact networks. Confidence calibration provides a sensible way of getting trustworthy predictions. We propose BD-KD: Balancing of Divergences for online Knowledge Distillation. We show that adaptively balancing between the reverse and forward divergences shifts the focus of the training strategy to the compact student network without limiting the teacher network's learning process. We demonstrate that, by performing this balancing design at the level of the student distillation loss, we improve upon both performance accuracy and calibration of the compact student network. We conducted extensive experiments using a variety of network architectures and show improvements on multiple datasets including CIFAR-10, CIFAR-100, Tiny-ImageNet, and ImageNet. We illustrate the effectiveness of our approach through comprehensive comparisons and ablations with current state-of-the-art online and offline KD techniques.
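To make the balancing idea concrete, here is a minimal sketch of a loss that mixes the forward and reverse KL divergences between teacher and student distributions. The fixed weight `alpha` is a hypothetical simplification; BD-KD balances the two terms adaptively.

```python
import numpy as np

def kl(p, q, eps=1e-8):
    """Forward KL divergence KL(p || q) for discrete distributions."""
    return float(np.sum(p * (np.log(p + eps) - np.log(q + eps))))

def balanced_kd_loss(student_probs, teacher_probs, alpha=0.5):
    # Mix of forward KL(teacher || student) and reverse KL(student || teacher).
    forward = kl(teacher_probs, student_probs)
    reverse = kl(student_probs, teacher_probs)
    return alpha * forward + (1.0 - alpha) * reverse

t = np.array([0.7, 0.2, 0.1])  # teacher class probabilities
s = np.array([0.5, 0.3, 0.2])  # student class probabilities
print(balanced_kd_loss(s, t))
```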
## Keyword: RAW
### HandsOff: Labeled Dataset Generation With No Additional Human Annotations
- **Authors:** Austin Xu, Mariya I. Vasileva, Achal Dave, Arjun Seshadri
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Machine Learning (cs.LG)
- **Arxiv link:** https://arxiv.org/abs/2212.12645
- **Pdf link:** https://arxiv.org/pdf/2212.12645
- **Abstract**
Recent work leverages the expressive power of generative adversarial networks (GANs) to generate labeled synthetic datasets. These dataset generation methods often require new annotations of synthetic images, which forces practitioners to seek out annotators, curate a set of synthetic images, and ensure the quality of generated labels. We introduce the HandsOff framework, a technique capable of producing an unlimited number of synthetic images and corresponding labels after being trained on less than 50 pre-existing labeled images. Our framework avoids the practical drawbacks of prior work by unifying the field of GAN inversion with dataset generation. We generate datasets with rich pixel-wise labels in multiple challenging domains such as faces, cars, full-body human poses, and urban driving scenes. Our method achieves state-of-the-art performance in semantic segmentation, keypoint detection, and depth estimation compared to prior dataset generation approaches and transfer learning baselines. We additionally showcase its ability to address broad challenges in model development which stem from fixed, hand-annotated datasets, such as the long-tail problem in semantic segmentation.
### TexPose: Neural Texture Learning for Self-Supervised 6D Object Pose Estimation
- **Authors:** Hanzhi Chen, Fabian Manhardt, Nassir Navab, Benjamin Busam
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV)
- **Arxiv link:** https://arxiv.org/abs/2212.12902
- **Pdf link:** https://arxiv.org/pdf/2212.12902
- **Abstract**
In this paper, we introduce neural texture learning for 6D object pose estimation from synthetic data and a few unlabelled real images. Our major contribution is a novel learning scheme which removes the drawbacks of previous works, namely the strong dependency on co-modalities or additional refinement. These have been previously necessary to provide training signals for convergence. We formulate such a scheme as two sub-optimisation problems on texture learning and pose learning. We separately learn to predict realistic texture of objects from real image collections and learn pose estimation from pixel-perfect synthetic data. Combining these two capabilities then allows us to synthesise photorealistic novel views to supervise the pose estimator with accurate geometry. To alleviate pose noise and segmentation imperfection present during the texture learning phase, we propose a surfel-based adversarial training loss together with texture regularisation from synthetic data. We demonstrate that the proposed approach significantly outperforms the recent state-of-the-art methods without ground-truth pose annotations and demonstrates substantial generalisation improvements towards unseen scenes. Remarkably, our scheme improves the adopted pose estimators substantially even when initialised with much inferior performance.
### Unsupervised Representation Learning from Pre-trained Diffusion Probabilistic Models
- **Authors:** Zijian Zhang, Zhou Zhao, Zhijie Lin
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Artificial Intelligence (cs.AI)
- **Arxiv link:** https://arxiv.org/abs/2212.12990
- **Pdf link:** https://arxiv.org/pdf/2212.12990
- **Abstract**
Diffusion Probabilistic Models (DPMs) have shown a powerful capacity for generating high-quality image samples. Recently, diffusion autoencoders (Diff-AE) have been proposed to explore DPMs for representation learning via autoencoding. Their key idea is to jointly train an encoder for discovering meaningful representations from images and a conditional DPM as the decoder for reconstructing images. Considering that training DPMs from scratch takes a long time and numerous pre-trained DPMs already exist, we propose \textbf{P}re-trained \textbf{D}PM \textbf{A}uto\textbf{E}ncoding (\textbf{PDAE}), a general method to adapt existing pre-trained DPMs into decoders for image reconstruction, with better training efficiency and performance than Diff-AE. Specifically, we find that pre-trained DPMs fail to reconstruct an image from its latent variables because of the information loss of the forward process, which causes a gap between their predicted posterior mean and the true one. From this perspective, the classifier-guided sampling method can be explained as computing an extra mean shift to fill the gap, reconstructing the lost class information in samples. This implies that the gap corresponds to the lost information of the image, and that we can reconstruct the image by filling the gap. Drawing inspiration from this, we employ a trainable model to predict a mean shift according to the encoded representation and train it to fill as much of the gap as possible; in this way, the encoder is forced to learn as much information as possible from images to help the filling. By reusing part of the network of pre-trained DPMs and redesigning the weighting scheme of the diffusion loss, PDAE can learn meaningful representations from images efficiently. Extensive experiments demonstrate the effectiveness, efficiency and flexibility of PDAE.
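Read as pseudocode, the key mechanism is adding a learned shift to the frozen model's posterior mean. A minimal toy sketch, where `pretrained_posterior_mean` and `shift_model` are hypothetical stand-ins for the frozen DPM's prediction and the trainable shift network:

```python
import numpy as np

def pretrained_posterior_mean(x_t, t):
    # Stand-in for the frozen DPM's predicted posterior mean (hypothetical).
    return 0.9 * x_t

def shift_model(x_t, t, z):
    # Stand-in for the trainable mean-shift network conditioned on z (hypothetical).
    return 0.1 * z

def guided_posterior_mean(x_t, t, z):
    # The learned shift fills the gap between the pre-trained mean and the true
    # posterior mean, re-injecting the information captured in the representation z.
    return pretrained_posterior_mean(x_t, t) + shift_model(x_t, t, z)

x_t = np.random.randn(4)  # noisy latent at step t
z = np.random.randn(4)    # encoder output for the clean image
print(guided_posterior_mean(x_t, t=10, z=z))
```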
## Keyword: raw image
There is no result
|
process
|
new submissions for tue dec keyword events there is no result keyword event camera there is no result keyword events camera there is no result keyword white balance there is no result keyword color contrast there is no result keyword awb handsoff labeled dataset generation with no additional human annotations authors austin xu mariya i vasileva achal dave arjun seshadri subjects computer vision and pattern recognition cs cv machine learning cs lg arxiv link pdf link abstract recent work leverages the expressive power of generative adversarial networks gans to generate labeled synthetic datasets these dataset generation methods often require new annotations of synthetic images which forces practitioners to seek out annotators curate a set of synthetic images and ensure the quality of generated labels we introduce the handsoff framework a technique capable of producing an unlimited number of synthetic images and corresponding labels after being trained on less than pre existing labeled images our framework avoids the practical drawbacks of prior work by unifying the field of gan inversion with dataset generation we generate datasets with rich pixel wise labels in multiple challenging domains such as faces cars full body human poses and urban driving scenes our method achieves state of the art performance in semantic segmentation keypoint detection and depth estimation compared to prior dataset generation approaches and transfer learning baselines we additionally showcase its ability to address broad challenges in model development which stem from fixed hand annotated datasets such as the long tail problem in semantic segmentation texpose neural texture learning for self supervised object pose estimation authors hanzhi chen fabian manhardt nassir navab benjamin busam subjects computer vision and pattern recognition cs cv arxiv link pdf link abstract in this paper we introduce neural texture learning for object pose estimation from synthetic data and a few unlabelled real images our major contribution is a novel learning scheme which removes the drawbacks of previous works namely the strong dependency on co modalities or additional refinement these have been previously necessary to provide training signals for convergence we formulate such a scheme as two sub optimisation problems on texture learning and pose learning we separately learn to predict realistic texture of objects from real image collections and learn pose estimation from pixel perfect synthetic data combining these two capabilities allows then to synthesise photorealistic novel views to supervise the pose estimator with accurate geometry to alleviate pose noise and segmentation imperfection present during the texture learning phase we propose a surfel based adversarial training loss together with texture regularisation from synthetic data we demonstrate that the proposed approach significantly outperforms the recent state of the art methods without ground truth pose annotations and demonstrates substantial generalisation improvements towards unseen scenes remarkably our scheme improves the adopted pose estimators substantially even when initialised with much inferior performance keyword isp assessing thermal imagery integration into object detection methods on ground based and air based collection platforms authors james gallagher edward oughton subjects computer vision and pattern recognition cs cv machine learning cs lg arxiv link pdf link abstract object detection models commonly deployed on uncrewed aerial systems uas focus 
on identifying objects in the visible spectrum using red green blue rgb imagery however there is growing interest in fusing rgb with thermal long wave infrared lwir images to increase the performance of object detection machine learning ml models currently lwir ml models have received less research attention especially for both ground and air based platforms leading to a lack of baseline performance metrics evaluating lwir rgb and lwir rgb fused object detection models therefore this research contributes such quantitative metrics to the literature the results found that the ground based blended rgb lwir model exhibited superior performance compared to the rgb or lwir approaches achieving a map of additionally the blended rgb lwir model was also the only object detection model to work in both day and night conditions providing superior operational capabilities this research additionally contributes a novel labelled training dataset of images for rgb lwir and rgb lwir fused imagery collected from ground based and air based platforms enabling further multispectral machine driven object detection research keyword image signal processing there is no result keyword image signal process there is no result keyword compression hyperspherical quantization toward smaller and more accurate models authors dan liu xi chen chen ma xue liu subjects computer vision and pattern recognition cs cv arxiv link pdf link abstract model quantization enables the deployment of deep neural networks under resource constrained devices vector quantization aims at reducing the model size by indexing model weights with full precision embeddings i e codewords while the index needs to be restored to bit during computation binary and other low precision quantization methods can reduce the model size up to times however at the cost of a considerable accuracy drop in this paper we propose an efficient framework for ternary quantization to produce smaller and more accurate compressed models by integrating hyperspherical learning pruning and reinitialization our proposed hyperspherical quantization hq method reduces the cosine distance between the full precision and ternary weights thus reducing the bias of the straight through gradient estimator during ternary quantization compared with existing work at similar compression levels sim times sim times our method significantly improves the test accuracy and reduces the model size ddh qa a dynamic digital humans quality assessment database authors zicheng zhang yingjie zhou wei sun wei lu xiongkuo min yu wang guangtao zhai subjects computer vision and pattern recognition cs cv arxiv link pdf link abstract in recent years large amounts of effort have been put into pushing forward the real world application of dynamic digital human ddh however most current quality assessment research focuses on evaluating static models and usually ignores motion distortions therefore in this paper we construct a large scale dynamic digital human quality assessment ddh qa database with diverse motion content as well as multiple distortions to comprehensively study the perceptual quality of ddhs both model based distortion noise compression and motion based distortion binding error motion unnaturalness are taken into consideration ten types of common motion are employed to drive the ddhs and a total of ddhs are generated in the end afterward we render the video sequences of the distorted ddhs as the evaluation media and carry out a well controlled subjective experiment then a benchmark experiment is 
conducted with the state of the art video quality assessment vqa methods and the experimental results show that existing vqa methods are limited in assessing the perceptual loss of ddhs the database will be made publicly available to facilitate future research bd kd balancing the divergences for online knowledge distillation authors ibtihel amara nazanin sepahvand brett h meyer warren j gross james j clark subjects computer vision and pattern recognition cs cv arxiv link pdf link abstract knowledge distillation kd has gained a lot of attention in the field of model compression for edge devices thanks to its effectiveness in compressing large powerful networks into smaller lower capacity models online distillation in which both the teacher and the student are learning collaboratively has also gained much interest due to its ability to improve on the performance of the networks involved the kullback leibler kl divergence ensures the proper knowledge transfer between the teacher and student however most online kd techniques present some bottlenecks under the network capacity gap by cooperatively and simultaneously training the models the kl distance becomes incapable of properly minimizing the teacher s and student s distributions alongside accuracy critical edge device applications are in need of well calibrated compact networks confidence calibration provides a sensible way of getting trustworthy predictions we propose bd kd balancing of divergences for online knowledge distillation we show that adaptively balancing between the reverse and forward divergences shifts the focus of the training strategy to the compact student network without limiting the teacher network s learning process we demonstrate that by performing this balancing design at the level of the student distillation loss we improve upon both performance accuracy and calibration of the compact student network we conducted extensive experiments using a variety of network architectures and show improvements on multiple datasets including cifar cifar tiny imagenet and imagenet we illustrate the effectiveness of our approach through comprehensive comparisons and ablations with current state of the art online and offline kd techniques keyword raw handsoff labeled dataset generation with no additional human annotations authors austin xu mariya i vasileva achal dave arjun seshadri subjects computer vision and pattern recognition cs cv machine learning cs lg arxiv link pdf link abstract recent work leverages the expressive power of generative adversarial networks gans to generate labeled synthetic datasets these dataset generation methods often require new annotations of synthetic images which forces practitioners to seek out annotators curate a set of synthetic images and ensure the quality of generated labels we introduce the handsoff framework a technique capable of producing an unlimited number of synthetic images and corresponding labels after being trained on less than pre existing labeled images our framework avoids the practical drawbacks of prior work by unifying the field of gan inversion with dataset generation we generate datasets with rich pixel wise labels in multiple challenging domains such as faces cars full body human poses and urban driving scenes our method achieves state of the art performance in semantic segmentation keypoint detection and depth estimation compared to prior dataset generation approaches and transfer learning baselines we additionally showcase its ability to address broad challenges in model 
development which stem from fixed hand annotated datasets such as the long tail problem in semantic segmentation texpose neural texture learning for self supervised object pose estimation authors hanzhi chen fabian manhardt nassir navab benjamin busam subjects computer vision and pattern recognition cs cv arxiv link pdf link abstract in this paper we introduce neural texture learning for object pose estimation from synthetic data and a few unlabelled real images our major contribution is a novel learning scheme which removes the drawbacks of previous works namely the strong dependency on co modalities or additional refinement these have been previously necessary to provide training signals for convergence we formulate such a scheme as two sub optimisation problems on texture learning and pose learning we separately learn to predict realistic texture of objects from real image collections and learn pose estimation from pixel perfect synthetic data combining these two capabilities allows then to synthesise photorealistic novel views to supervise the pose estimator with accurate geometry to alleviate pose noise and segmentation imperfection present during the texture learning phase we propose a surfel based adversarial training loss together with texture regularisation from synthetic data we demonstrate that the proposed approach significantly outperforms the recent state of the art methods without ground truth pose annotations and demonstrates substantial generalisation improvements towards unseen scenes remarkably our scheme improves the adopted pose estimators substantially even when initialised with much inferior performance unsupervised representation learning from pre trained diffusion probabilistic models authors zijian zhang zhou zhao zhijie lin subjects computer vision and pattern recognition cs cv artificial intelligence cs ai arxiv link pdf link abstract diffusion probabilistic models dpms have shown a powerful capacity of generating high quality image samples recently diffusion autoencoders diff ae have been proposed to explore dpms for representation learning via autoencoding their key idea is to jointly train an encoder for discovering meaningful representations from images and a conditional dpm as the decoder for reconstructing images considering that training dpms from scratch will take a long time and there have existed numerous pre trained dpms we propose textbf p re trained textbf d pm textbf a uto textbf e ncoding textbf pdae a general method to adapt existing pre trained dpms to the decoders for image reconstruction with better training efficiency and performance than diff ae specifically we find that the reason that pre trained dpms fail to reconstruct an image from its latent variables is due to the information loss of forward process which causes a gap between their predicted posterior mean and the true one from this perspective the classifier guided sampling method can be explained as computing an extra mean shift to fill the gap reconstructing the lost class information in samples these imply that the gap corresponds to the lost information of the image and we can reconstruct the image by filling the gap drawing inspiration from this we employ a trainable model to predict a mean shift according to encoded representation and train it to fill as much gap as possible in this way the encoder is forced to learn as much information as possible from images to help the filling by reusing a part of network of pre trained dpms and redesigning the weighting scheme of diffusion 
loss pdae can learn meaningful representations from images efficiently extensive experiments demonstrate the effectiveness efficiency and flexibility of pdae keyword raw image there is no result
| 1
|
93,438
| 8,415,594,443
|
IssuesEvent
|
2018-10-13 16:18:33
|
junit-team/junit5
|
https://api.github.com/repos/junit-team/junit5
|
closed
|
Support hexadecimal values in argument converters
|
component: Jupiter status: in progress theme: parameterized tests type: enhancement
|
## Overview
It's a feature request: support converting argument strings containing integral literals in hexadecimal to the corresponding types, in addition to decimal.
### Description
Currently one cannot use hexadecimal integral values when specifying arguments, e.g., this won't work:
```java
@CsvSource({
"1",
"0x1", // Won't be converted
}) // CSV source for illustrative purposes: ValueSource would suit here better
@ParameterizedTest
void test(int v) {
    // ...
}
```
The cases when one needs to use hex include copying values in hex from some specification; when binary representation (i.e., which bit in the value is set) matters.
### Workaround
Use String as argument type and convert yourself with, e.g., Integer::decode.
## Deliverables
- [ ] ...
|
1.0
|
Support hexadecimal values in argument converters - ## Overview
It's a feature request: support converting argument strings containing integral literals in hexadecimal to the corresponding types, in addition to decimal.
### Description
Currently one cannot use hexadecimal integral values when specifying arguments, e.g., this won't work:
```java
@CsvSource({
"1",
"0x1", // Won't be converted
}) // CSV source for illustrative purposes: ValueSource would suit here better
@ParameterizedTest
void test(int v) {
    // ...
}
```
The cases when one needs to use hex include copying values in hex from some specification; when binary representation (i.e., which bit in the value is set) matters.
### Workaround
Use String as argument type and convert yourself with, e.g., Integer::decode.
## Deliverables
- [ ] ...
|
non_process
|
support hexadecimal values in arguments converters overview itβs a feature request support converting argument strings containing integral literals in hexadecimal to the corresponding types on top of decimal description currently one cannot use hexadecimal integral values when specifying arguments e g this won t work java csvsource won t be converted csv source for illustrative purposes valuesource would suite here better parameterizedtest void test int v the cases when one needs to use hex include copying values in hex from some specification when binary representation i e which bit in the value is set matters workaround use string as argument type and convert yourself with e g integer decode deliverables
| 0
|
6,528
| 9,621,912,938
|
IssuesEvent
|
2019-05-14 11:51:31
|
geneontology/go-ontology
|
https://api.github.com/repos/geneontology/go-ontology
|
closed
|
Query GO:0022885 bacteriocin transmembrane transporter activity and children
|
multi-species process
|
GO:0022885 bacteriocin transmembrane transporter activity is one of these 'hijacked' activities: "Colicins are protein toxins produced by Escherichia coli to kill related bacteria. They must cross the target cell outer membrane (OM), and some must also cross the inner membrane (IM). To accomplish cellular import, colicins have parasitized E. coli nutrient transporters as well as IM and periplasmic proteins normally used to maintain cell wall integrity or provide energy for nutrient uptake through transporters." (https://www.annualreviews.org/doi/10.1146/annurev-genet-110711-155427)
So, the normal function of transporter of the target bacteria is **not** to import these toxins.
@jimhu-tamu @mgiglio99 @cmungall @pmasson55 @nsuvarnaiari @jrr-cptdyl27
@ValWood
How do we want to capture that?
(Note: There are 4 children of GO:0022885 bacteriocin transmembrane transporter activity and children; only GO:0042912 has annotations)
GO:0022885 bacteriocin transmembrane transporter activity
- GO:0043214 bacteriocin-transporting ATPase activity
- GO:0042912 colicin transmembrane transporter activity
-- GO:0042913 group A colicin transmembrane transporter activity
Thanks, Pascale
|
1.0
|
Query GO:0022885 bacteriocin transmembrane transporter activity and children - GO:0022885 bacteriocin transmembrane transporter activity is one of these 'hijacked' activities: "Colicins are protein toxins produced by Escherichia coli to kill related bacteria. They must cross the target cell outer membrane (OM), and some must also cross the inner membrane (IM). To accomplish cellular import, colicins have parasitized E. coli nutrient transporters as well as IM and periplasmic proteins normally used to maintain cell wall integrity or provide energy for nutrient uptake through transporters." (https://www.annualreviews.org/doi/10.1146/annurev-genet-110711-155427)
So, the normal function of transporter of the target bacteria is **not** to import these toxins.
@jimhu-tamu @mgiglio99 @cmungall @pmasson55 @nsuvarnaiari @jrr-cptdyl27
@ValWood
How do we want to capture that?
(Note: There are 4 children of GO:0022885 bacteriocin transmembrane transporter activity and children; only GO:0042912 has annotations)
GO:0022885 bacteriocin transmembrane transporter activity
- GO:0043214 bacteriocin-transporting ATPase activity
- GO:0042912 colicin transmembrane transporter activity
-- GO:0042913 group A colicin transmembrane transporter activity
Thanks, Pascale
|
process
|
query go bacteriocin transmembrane transporter activity and children go bacteriocin transmembrane transporter activity is one of these hijacked activities colicins are protein toxins produced by escherichia coli to kill related bacteria they must cross the target cell outer membrane om and some must also cross the inner membrane im to accomplish cellular import colicins have parasitized e coli nutrient transporters as well as im and periplasmic proteins normally used to maintain cell wall integrity or provide energy for nutrient uptake through transporters so the normal function of transporter of the target bacteria is not to import these toxins jimhu tamu cmungall nsuvarnaiari jrr valwood how do we want to capture that note there are children of go bacteriocin transmembrane transporter activity and children only go has annotations go bacteriocin transmembrane transporter activity go bacteriocin transporting atpase activity go colicin transmembrane transporter activity go group a colicin transmembrane transporter activity thanks pascale
| 1
|
115,106
| 17,272,679,668
|
IssuesEvent
|
2021-07-22 22:25:46
|
kapseliboi/pix
|
https://api.github.com/repos/kapseliboi/pix
|
opened
|
CVE-2020-7598 (Medium) detected in minimist-1.2.0.tgz, minimist-0.0.8.tgz
|
security vulnerability
|
## CVE-2020-7598 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>minimist-1.2.0.tgz</b>, <b>minimist-0.0.8.tgz</b></p></summary>
<p>
<details><summary><b>minimist-1.2.0.tgz</b></p></summary>
<p>parse argument options</p>
<p>Library home page: <a href="https://registry.npmjs.org/minimist/-/minimist-1.2.0.tgz">https://registry.npmjs.org/minimist/-/minimist-1.2.0.tgz</a></p>
<p>
Dependency Hierarchy:
- ember-auto-import-1.11.3.tgz (Root Library)
- core-7.7.7.tgz
- json5-2.1.1.tgz
- :x: **minimist-1.2.0.tgz** (Vulnerable Library)
</details>
<details><summary><b>minimist-0.0.8.tgz</b></p></summary>
<p>parse argument options</p>
<p>Library home page: <a href="https://registry.npmjs.org/minimist/-/minimist-0.0.8.tgz">https://registry.npmjs.org/minimist/-/minimist-0.0.8.tgz</a></p>
<p>
Dependency Hierarchy:
- ember-auto-import-1.11.3.tgz (Root Library)
- mkdirp-0.5.1.tgz
- :x: **minimist-0.0.8.tgz** (Vulnerable Library)
</details>
<p>Found in HEAD commit: <a href="https://github.com/kapseliboi/pix/commit/ab3d784d8d71f1e71696b31b35fcf310c7a6d96a">ab3d784d8d71f1e71696b31b35fcf310c7a6d96a</a></p>
<p>Found in base branch: <b>dev</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
minimist before 1.2.2 could be tricked into adding or modifying properties of Object.prototype using a "constructor" or "__proto__" payload.
<p>Publish Date: 2020-03-11
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-7598>CVE-2020-7598</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.6</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: High
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: Low
- Integrity Impact: Low
- Availability Impact: Low
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/substack/minimist/commit/63e7ed05aa4b1889ec2f3b196426db4500cbda94">https://github.com/substack/minimist/commit/63e7ed05aa4b1889ec2f3b196426db4500cbda94</a></p>
<p>Release Date: 2020-03-11</p>
<p>Fix Resolution: minimist - 0.2.1,1.2.3</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
True
|
CVE-2020-7598 (Medium) detected in minimist-1.2.0.tgz, minimist-0.0.8.tgz - ## CVE-2020-7598 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>minimist-1.2.0.tgz</b>, <b>minimist-0.0.8.tgz</b></p></summary>
<p>
<details><summary><b>minimist-1.2.0.tgz</b></p></summary>
<p>parse argument options</p>
<p>Library home page: <a href="https://registry.npmjs.org/minimist/-/minimist-1.2.0.tgz">https://registry.npmjs.org/minimist/-/minimist-1.2.0.tgz</a></p>
<p>
Dependency Hierarchy:
- ember-auto-import-1.11.3.tgz (Root Library)
- core-7.7.7.tgz
- json5-2.1.1.tgz
- :x: **minimist-1.2.0.tgz** (Vulnerable Library)
</details>
<details><summary><b>minimist-0.0.8.tgz</b></p></summary>
<p>parse argument options</p>
<p>Library home page: <a href="https://registry.npmjs.org/minimist/-/minimist-0.0.8.tgz">https://registry.npmjs.org/minimist/-/minimist-0.0.8.tgz</a></p>
<p>
Dependency Hierarchy:
- ember-auto-import-1.11.3.tgz (Root Library)
- mkdirp-0.5.1.tgz
- :x: **minimist-0.0.8.tgz** (Vulnerable Library)
</details>
<p>Found in HEAD commit: <a href="https://github.com/kapseliboi/pix/commit/ab3d784d8d71f1e71696b31b35fcf310c7a6d96a">ab3d784d8d71f1e71696b31b35fcf310c7a6d96a</a></p>
<p>Found in base branch: <b>dev</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
minimist before 1.2.2 could be tricked into adding or modifying properties of Object.prototype using a "constructor" or "__proto__" payload.
<p>Publish Date: 2020-03-11
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-7598>CVE-2020-7598</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.6</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: High
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: Low
- Integrity Impact: Low
- Availability Impact: Low
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/substack/minimist/commit/63e7ed05aa4b1889ec2f3b196426db4500cbda94">https://github.com/substack/minimist/commit/63e7ed05aa4b1889ec2f3b196426db4500cbda94</a></p>
<p>Release Date: 2020-03-11</p>
<p>Fix Resolution: minimist - 0.2.1,1.2.3</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
non_process
|
cve medium detected in minimist tgz minimist tgz cve medium severity vulnerability vulnerable libraries minimist tgz minimist tgz minimist tgz parse argument options library home page a href dependency hierarchy ember auto import tgz root library core tgz tgz x minimist tgz vulnerable library minimist tgz parse argument options library home page a href dependency hierarchy ember auto import tgz root library mkdirp tgz x minimist tgz vulnerable library found in head commit a href found in base branch dev vulnerability details minimist before could be tricked into adding or modifying properties of object prototype using a constructor or proto payload publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity high privileges required none user interaction none scope unchanged impact metrics confidentiality impact low integrity impact low availability impact low for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution minimist step up your open source security game with whitesource
| 0
|
209,076
| 23,681,700,030
|
IssuesEvent
|
2022-08-28 22:11:01
|
meramsey/user-alias
|
https://api.github.com/repos/meramsey/user-alias
|
closed
|
jdom-1.1.3.jar: 1 vulnerabilities (highest severity is: 7.5)
|
security vulnerability
|
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>jdom-1.1.3.jar</b></p></summary>
<p>A complete, Java-based solution for accessing, manipulating,
and outputting XML data</p>
<p>Library home page: <a href="http://www.jdom.org">http://www.jdom.org</a></p>
<p>Path to dependency file: /extension/pom.xml</p>
<p>Path to vulnerable library: /repository/org/jdom/jdom/1.1.3/jdom-1.1.3.jar</p>
<p>
<p>Found in HEAD commit: <a href="https://github.com/meramsey/user-alias/commit/f2be907aeaa8c6567433543468dc2c648ee24183">f2be907aeaa8c6567433543468dc2c648ee24183</a></p></details>
## Vulnerabilities
| CVE | Severity | <img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS | Dependency | Type | Fixed in | Remediation Available |
| ------------- | ------------- | ----- | ----- | ----- | --- | --- |
| [CVE-2021-33813](https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-33813) | <img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> High | 7.5 | jdom-1.1.3.jar | Direct | org.apache.servicemix.bundles:org.apache.servicemix.bundles.jdom - 2.0.5_1;org.jdom:jdom2:2.0.6.1 | ❌ |
## Details
<details>
<summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> CVE-2021-33813</summary>
### Vulnerable Library - <b>jdom-1.1.3.jar</b></p>
<p>A complete, Java-based solution for accessing, manipulating,
and outputting XML data</p>
<p>Library home page: <a href="http://www.jdom.org">http://www.jdom.org</a></p>
<p>Path to dependency file: /extension/pom.xml</p>
<p>Path to vulnerable library: /repository/org/jdom/jdom/1.1.3/jdom-1.1.3.jar</p>
<p>
Dependency Hierarchy:
- :x: **jdom-1.1.3.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/meramsey/user-alias/commit/f2be907aeaa8c6567433543468dc2c648ee24183">f2be907aeaa8c6567433543468dc2c648ee24183</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
<p></p>
### Vulnerability Details
<p>
An XXE issue in SAXBuilder in JDOM through 2.0.6 allows attackers to cause a denial of service via a crafted HTTP request.
<p>Publish Date: 2021-06-16
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-33813>CVE-2021-33813</a></p>
</p>
<p></p>
### CVSS 3 Score Details (<b>7.5</b>)
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
<p></p>
### Suggested Fix
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://nvd.nist.gov/vuln/detail/CVE-2021-33813">https://nvd.nist.gov/vuln/detail/CVE-2021-33813</a></p>
<p>Release Date: 2021-06-16</p>
<p>Fix Resolution: org.apache.servicemix.bundles:org.apache.servicemix.bundles.jdom - 2.0.5_1;org.jdom:jdom2:2.0.6.1</p>
</p>
<p></p>
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
</details>
|
True
|
jdom-1.1.3.jar: 1 vulnerabilities (highest severity is: 7.5) - <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>jdom-1.1.3.jar</b></p></summary>
<p>A complete, Java-based solution for accessing, manipulating,
and outputting XML data</p>
<p>Library home page: <a href="http://www.jdom.org">http://www.jdom.org</a></p>
<p>Path to dependency file: /extension/pom.xml</p>
<p>Path to vulnerable library: /repository/org/jdom/jdom/1.1.3/jdom-1.1.3.jar</p>
<p>
<p>Found in HEAD commit: <a href="https://github.com/meramsey/user-alias/commit/f2be907aeaa8c6567433543468dc2c648ee24183">f2be907aeaa8c6567433543468dc2c648ee24183</a></p></details>
## Vulnerabilities
| CVE | Severity | <img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS | Dependency | Type | Fixed in | Remediation Available |
| ------------- | ------------- | ----- | ----- | ----- | --- | --- |
| [CVE-2021-33813](https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-33813) | <img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> High | 7.5 | jdom-1.1.3.jar | Direct | org.apache.servicemix.bundles:org.apache.servicemix.bundles.jdom - 2.0.5_1;org.jdom:jdom2:2.0.6.1 | ❌ |
## Details
<details>
<summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> CVE-2021-33813</summary>
### Vulnerable Library - <b>jdom-1.1.3.jar</b></p>
<p>A complete, Java-based solution for accessing, manipulating,
and outputting XML data</p>
<p>Library home page: <a href="http://www.jdom.org">http://www.jdom.org</a></p>
<p>Path to dependency file: /extension/pom.xml</p>
<p>Path to vulnerable library: /repository/org/jdom/jdom/1.1.3/jdom-1.1.3.jar</p>
<p>
Dependency Hierarchy:
- :x: **jdom-1.1.3.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/meramsey/user-alias/commit/f2be907aeaa8c6567433543468dc2c648ee24183">f2be907aeaa8c6567433543468dc2c648ee24183</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
<p></p>
### Vulnerability Details
<p>
An XXE issue in SAXBuilder in JDOM through 2.0.6 allows attackers to cause a denial of service via a crafted HTTP request.
<p>Publish Date: 2021-06-16
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-33813>CVE-2021-33813</a></p>
</p>
<p></p>
### CVSS 3 Score Details (<b>7.5</b>)
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
<p></p>
### Suggested Fix
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://nvd.nist.gov/vuln/detail/CVE-2021-33813">https://nvd.nist.gov/vuln/detail/CVE-2021-33813</a></p>
<p>Release Date: 2021-06-16</p>
<p>Fix Resolution: org.apache.servicemix.bundles:org.apache.servicemix.bundles.jdom - 2.0.5_1;org.jdom:jdom2:2.0.6.1</p>
</p>
<p></p>
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
</details>
|
non_process
|
jdom jar vulnerabilities highest severity is vulnerable library jdom jar a complete java based solution for accessing manipulating and outputting xml data library home page a href path to dependency file extension pom xml path to vulnerable library repository org jdom jdom jdom jar found in head commit a href vulnerabilities cve severity cvss dependency type fixed in remediation available high jdom jar direct org apache servicemix bundles org apache servicemix bundles jdom org jdom details cve vulnerable library jdom jar a complete java based solution for accessing manipulating and outputting xml data library home page a href path to dependency file extension pom xml path to vulnerable library repository org jdom jdom jdom jar dependency hierarchy x jdom jar vulnerable library found in head commit a href found in base branch master vulnerability details an xxe issue in saxbuilder in jdom through allows attackers to cause a denial of service via a crafted http request publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution org apache servicemix bundles org apache servicemix bundles jdom org jdom step up your open source security game with mend
| 0
|
7,032
| 10,192,792,825
|
IssuesEvent
|
2019-08-12 12:09:16
|
linnovate/root
|
https://api.github.com/repos/linnovate/root
|
closed
|
waiting for approval status isn't saved and it deletes the partner list
|
2.0.9 Documents Meetings Process bug Projects Settings Tasks
|
create an entity
assign another user to it
the other user presses waiting for approval
the partner list is deleted and the waiting for approval status isn't saved

after refresh:

|
1.0
|
waiting for approval status isn't saved and it deletes the partner list - create an entity
assign another user to it
the other user presses waiting for approval
the partner list is deleted and the waiting for approval status isn't saved

after refresh:

|
process
|
waiting for approval status isnt saved and it deletes the partner list create an entity assign another user to it the other user presses waiting for approval the partner list is deleted and the waiting for approval status isnt saved after refresh
| 1
|
20,563
| 27,223,787,564
|
IssuesEvent
|
2023-02-21 08:11:58
|
bazelbuild/bazel
|
https://api.github.com/repos/bazelbuild/bazel
|
reopened
|
`aquery` of `CppLink` outputs extraneous param file path
|
type: support / not a bug (process) team-Rules-CPP
|
### Description of the bug:
In `aquery` outputs, by default command line arguments are expanded, but for the `CppLink` action, the result contains both the expanded arguments and the param file path.
### What's the simplest, easiest way to reproduce this bug? Please provide a minimal example if possible.
foo.cc
```cpp
int main() { return 0; }
```
BUILD
```
cc_binary(
name='foo',
srcs=['foo.cc']
)
```
Run:
```
bazel aquery --output=text 'mnemonic(CppLink, //:foo)'
```
```
action 'Linking foo'
Mnemonic: CppLink
Target: //:foo
Configuration: darwin-fastbuild
Execution platform: @local_config_platform//:host
ActionKey: 3b8a3654f4da51780786de7e6765419b258a366ceaf520aedcd03ac382f7366a
Inputs: [bazel-out/darwin-fastbuild/bin/_objs/foo/foo.o, bazel-out/darwin-fastbuild/bin/foo-2.params, external/local_config_cc/cc_wrapper.sh, external/local_config_cc/libtool, external/local_config_cc/libtool_check_unique, external/local_config_cc/make_hashed_objlist.py, external/local_config_cc/wrapped_clang, external/local_config_cc/wrapped_clang_pp, external/local_config_cc/xcrunwrapper.sh]
Outputs: [bazel-out/darwin-fastbuild/bin/foo]
ExecutionInfo: {requires-darwin: '', supports-xcode-requirements-set: ''}
Command Line: (exec external/local_config_cc/cc_wrapper.sh \
-lc++ \
-fobjc-link-runtime \
-Wl,-S \
-o \
bazel-out/darwin-fastbuild/bin/foo \
bazel-out/darwin-fastbuild/bin/_objs/foo/foo.o \
-headerpad_max_install_names \
-no-canonical-prefixes \
-target \
x86_64-apple-macosx12.3 \
-Xlinker \
-no_deduplicate \
-lc++ \
-target \
x86_64-apple-macosx12.3 \
@bazel-out/darwin-fastbuild/bin/foo-2.params)
# Configuration: a42135a42aad3da7e3af209ce54745fb0d0306dc29e1f3dc84d7d58372421fc9
# Execution platform: @local_config_platform//:host
ExecutionInfo: {requires-darwin: '', supports-xcode-requirements-set: ''}
```
The command line arguments from the param file are already expanded, but there is still a `@bazel-out/darwin-fastbuild/bin/foo-2.params` argument at the end.
This doesn't reproduce for param files created by Starlark rules.
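Until the duplication is fixed, one possible workaround is to drop the trailing param-file reference when post-processing the text output. This is a hypothetical filtering sketch, not an official Bazel option:

```python
def strip_param_file_args(args):
    """Drop '@...params' references whose contents aquery has already expanded."""
    return [a for a in args if not (a.startswith("@") and a.endswith(".params"))]

cmd = ["-lc++", "-o", "bazel-out/darwin-fastbuild/bin/foo",
       "@bazel-out/darwin-fastbuild/bin/foo-2.params"]
print(strip_param_file_args(cmd))  # ['-lc++', '-o', 'bazel-out/darwin-fastbuild/bin/foo']
```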
### Which operating system are you running Bazel on?
macOS and Linux
### What is the output of `bazel info release`?
release 6.0.0
### If `bazel info release` returns `development version` or `(@non-git)`, tell us how you built Bazel.
_No response_
### What's the output of `git remote get-url origin; git rev-parse master; git rev-parse HEAD` ?
_No response_
### Have you found anything relevant by searching the web?
_No response_
### Any other information, logs, or outputs that you want to share?
_No response_
|
1.0
|
`aquery` of `CppLink` outputs extraneous param file path - ### Description of the bug:
In `aquery` outputs, by default command line arguments are expanded, but for the `CppLink` action, the result contains both the expanded arguments and the param file path.
### What's the simplest, easiest way to reproduce this bug? Please provide a minimal example if possible.
foo.cc
```cpp
int main() { return 0; }
```
BUILD
```
cc_binary(
name='foo',
srcs=['foo.cc']
)
```
Run:
```
bazel aquery --output=text 'mnemonic(CppLink, //:foo)'
```
```
action 'Linking foo'
Mnemonic: CppLink
Target: //:foo
Configuration: darwin-fastbuild
Execution platform: @local_config_platform//:host
ActionKey: 3b8a3654f4da51780786de7e6765419b258a366ceaf520aedcd03ac382f7366a
Inputs: [bazel-out/darwin-fastbuild/bin/_objs/foo/foo.o, bazel-out/darwin-fastbuild/bin/foo-2.params, external/local_config_cc/cc_wrapper.sh, external/local_config_cc/libtool, external/local_config_cc/libtool_check_unique, external/local_config_cc/make_hashed_objlist.py, external/local_config_cc/wrapped_clang, external/local_config_cc/wrapped_clang_pp, external/local_config_cc/xcrunwrapper.sh]
Outputs: [bazel-out/darwin-fastbuild/bin/foo]
ExecutionInfo: {requires-darwin: '', supports-xcode-requirements-set: ''}
Command Line: (exec external/local_config_cc/cc_wrapper.sh \
-lc++ \
-fobjc-link-runtime \
-Wl,-S \
-o \
bazel-out/darwin-fastbuild/bin/foo \
bazel-out/darwin-fastbuild/bin/_objs/foo/foo.o \
-headerpad_max_install_names \
-no-canonical-prefixes \
-target \
x86_64-apple-macosx12.3 \
-Xlinker \
-no_deduplicate \
-lc++ \
-target \
x86_64-apple-macosx12.3 \
@bazel-out/darwin-fastbuild/bin/foo-2.params)
# Configuration: a42135a42aad3da7e3af209ce54745fb0d0306dc29e1f3dc84d7d58372421fc9
# Execution platform: @local_config_platform//:host
ExecutionInfo: {requires-darwin: '', supports-xcode-requirements-set: ''}
```
The command line arguments from the param file are already expanded, but there is still a `@bazel-out/darwin-fastbuild/bin/foo-2.params` argument at the end.
This doesn't reproduce for param files created by Starlark rules.
### Which operating system are you running Bazel on?
macOS and Linux
### What is the output of `bazel info release`?
release 6.0.0
### If `bazel info release` returns `development version` or `(@non-git)`, tell us how you built Bazel.
_No response_
### What's the output of `git remote get-url origin; git rev-parse master; git rev-parse HEAD` ?
_No response_
### Have you found anything relevant by searching the web?
_No response_
### Any other information, logs, or outputs that you want to share?
_No response_
|
process
|
aquery of cpplink outputs extraneous param file path description of the bug in aquery outputs by default command line arguments are expanded but for cpplink action the result contains both the expanded arguments and the param file path what s the simplest easiest way to reproduce this bug please provide a minimal example if possible foo cc cpp int main return build cc binary name foo srcs run bazel aquery output text mnemonic cpplink foo action linking foo mnemonic cpplink target foo configuration darwin fastbuild execution platform local config platform host actionkey inputs outputs executioninfo requires darwin supports xcode requirements set command line exec external local config cc cc wrapper sh lc fobjc link runtime wl s o bazel out darwin fastbuild bin foo bazel out darwin fastbuild bin objs foo foo o headerpad max install names no canonical prefixes target apple xlinker no deduplicate lc target apple bazel out darwin fastbuild bin foo params configuration execution platform local config platform host executioninfo requires darwin supports xcode requirements set the command line arguments from the param file are already expanded but there is still a bazel out darwin fastbuild bin foo params argument at the end this doesn t reproduce for param files created by starlark rules which operating system are you running bazel on macos and linux what is the output of bazel info release release if bazel info release returns development version or non git tell us how you built bazel no response what s the output of git remote get url origin git rev parse master git rev parse head no response have you found anything relevant by searching the web no response any other information logs or outputs that you want to share no response
| 1
|
544
| 3,004,602,890
|
IssuesEvent
|
2015-07-26 03:47:15
|
open-app/holodex
|
https://api.github.com/repos/open-app/holodex
|
closed
|
[Process Improvement] Give issues a time estimate
|
process
|
Time estimates may be entirely inaccurate and soft, but they may help communicate to others what may happen by when.
|
1.0
|
[Process Improvement] Give issues a time estimate - Time estimates may be entirely inaccurate and soft, but they may help communicate to others what may happen by when.
|
process
|
give issues a time estimate time estimates may be entirely inaccurate and soft but they may help communicate to others what may happen by when
| 1
|
184,102
| 14,969,329,168
|
IssuesEvent
|
2021-01-27 18:00:01
|
nilearn/nilearn
|
https://api.github.com/repos/nilearn/nilearn
|
closed
|
High Variance Confounds - add memory parameter
|
Documentation
|
I think it would be practical for users if nilearn.image.high_variance_confounds had an option to assign a memory, like in the niftilabelsmasker.
Of course it is possible to use joblib, but this may not be so easy or accessible for everyone, and, just like e.g. extracting timeseries, calculating high variance confounds can take some time.
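For reference, my current workaround with joblib looks roughly like the sketch below (run on a synthetic image here; this is not nilearn code, and a `memory=` parameter would fold this caching into nilearn itself):
```python
# A minimal sketch of the joblib workaround for caching
# high_variance_confounds results.
import numpy as np
import nibabel as nib
from joblib import Memory
from nilearn.image import high_variance_confounds

# synthetic 4D image standing in for real fMRI data
img = nib.Nifti1Image(np.random.rand(4, 4, 4, 20), affine=np.eye(4))

memory = Memory("nilearn_cache", verbose=0)
cached_hvc = memory.cache(high_variance_confounds)
confounds = cached_hvc(img, n_confounds=5)  # recomputed only for new inputs
print(confounds.shape)  # (20, 5): one row per volume, one column per confound
```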
|
1.0
|
High Variance Confounds - add memory parameter - I think it would be practical for users if nilearn.image.high_variance_confounds had an option to assign a memory, like in the niftilabelsmasker.
Of course it is possible to use joblib, but this may not be so easy or accessible for everyone, and, just like e.g. extracting timeseries, calculating high variance confounds can take some time.
|
non_process
|
high variance confounds add memory parameter i think it would be practical for users if nilearn image high variance confounds had an option to assign a memory like in the niftilabelsmasker of course it is possible to use joblib but this may not be so easy or accessible for everyone and just like e g extracting timeseries calculating high variance confounds can take some time
| 0
|
51,211
| 26,965,217,792
|
IssuesEvent
|
2023-02-08 21:42:07
|
mawenzy/p5control
|
https://api.github.com/repos/mawenzy/p5control
|
opened
|
can rpyc be operated asynchronously?
|
enhancement performance
|
Using `async_`, calls can be executed asynchronously, but there is no direct signal-slot mechanism to integrate them into the qt event loop. Maybe there is a way to make the `rpyc` calls async in order to dispatch the request and pick up the result at the point when it is there.
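A polling-based sketch of what this could look like, assuming a running rpyc service with a hypothetical `get_data` method and an existing Qt application/event loop (PyQt5 is used here only for illustration):
```python
# A minimal sketch, not p5control code: dispatch an rpyc call with async_
# and pick the result up from the Qt event loop via a QTimer poll.
import rpyc
from PyQt5.QtCore import QTimer

conn = rpyc.connect("localhost", 18861)           # hypothetical host/port
get_data_async = rpyc.async_(conn.root.get_data)  # `get_data` is hypothetical

result = get_data_async()  # returns immediately with an AsyncResult

def handle(value):
    print("result arrived:", value)

def poll():
    if result.ready:       # non-blocking check on the AsyncResult
        timer.stop()
        handle(result.value)

timer = QTimer()
timer.timeout.connect(poll)
timer.start(50)  # re-check every 50 ms without blocking the event loop
```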
|
True
|
can rpyc be operated asynchronously? - Using `async_`, calls can be executed asynchronously, but there is no direct signal-slot mechanism to integrate them into the qt event loop. Maybe there is a way to make the `rpyc` calls async in order to dispatch the request and pick up the result at the point when it is there.
|
non_process
|
can rpyc be operated asynchronously using async calls can be executed asynchronously but there is no direct signal slot mechanism to integrate them into the qt event loop maybe there is a way to make the rpyc calls async in order to dispatch the request and pick up the result at the point when it is there
| 0
|
14,015
| 10,085,205,415
|
IssuesEvent
|
2019-07-25 17:31:07
|
MicrosoftDocs/azure-docs
|
https://api.github.com/repos/MicrosoftDocs/azure-docs
|
closed
|
GenerateAnswer with context: {} returns 'No good match found in KB.'
|
assigned-to-author cognitive-services/svc product-question triaged
|
When I call generateanswer with the following payload (similar to the example above - https://docs.microsoft.com/en-us/azure/cognitive-services/qnamaker/how-to/multiturn-conversation#json-request-to-return-initial-answer-and-follow-up-prompts)
{
"question": "who are you",
"context": {},
"top": 5,
"scoreThreshold": 50
}
QnA Maker returns No good match found in KB. However, when I changed the payload to
{
"question": "who are you",
"context": null,
"top": 5,
"scoreThreshold": 50
}
QnA Maker returns the expected result. The only difference is 'context': {} vs 'context': null. My test kb only has items from chit chat professional. Please update the documentation in the link above to remove 'context': {}.
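For completeness, a sketch of the working call shape, with placeholder values for the runtime endpoint, knowledge base id and endpoint key:
```python
# A minimal sketch; replace the placeholder endpoint, kb id and key.
import requests

url = ("https://<your-resource>.azurewebsites.net"
       "/qnamaker/knowledgebases/<kb-id>/generateAnswer")
headers = {"Authorization": "EndpointKey <endpoint-key>"}
payload = {
    "question": "who are you",
    "context": None,  # works; "context": {} yields 'No good match found in KB.'
    "top": 5,
    "scoreThreshold": 50,
}
print(requests.post(url, headers=headers, json=payload).json())
```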
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: 38f6b983-0c19-6439-78df-ea7bdab3f94e
* Version Independent ID: 15dc8197-fed8-a94f-4143-2799d5e6a002
* Content: [Multi-turn conversations - Azure Cognitive Services](https://docs.microsoft.com/en-us/azure/cognitive-services/qnamaker/how-to/multiturn-conversation#json-request-to-return-initial-answer-and-follow-up-prompts)
* Content Source: [articles/cognitive-services/QnAMaker/How-To/multiturn-conversation.md](https://github.com/Microsoft/azure-docs/blob/master/articles/cognitive-services/QnAMaker/How-To/multiturn-conversation.md)
* Service: **cognitive-services**
* GitHub Login: @diberry
* Microsoft Alias: **diberry**
|
1.0
|
GenerateAnswer with context: {} returns 'No good match found in KB.' - When I call generateanswer with the following payload (similar to the example above - https://docs.microsoft.com/en-us/azure/cognitive-services/qnamaker/how-to/multiturn-conversation#json-request-to-return-initial-answer-and-follow-up-prompts)
{
"question": "who are you",
"context": {},
"top": 5,
"scoreThreshold": 50
}
QnA Maker returns No good match found in KB. However, when I changed the payload to
{
"question": "who are you",
"context": null,
"top": 5,
"scoreThreshold": 50
}
QnA Maker returns the expected result. The only difference is 'context': {} vs 'context': null. My test kb only has items from chit chat professional. Please update the documentation in the link above to remove 'context': {}.
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: 38f6b983-0c19-6439-78df-ea7bdab3f94e
* Version Independent ID: 15dc8197-fed8-a94f-4143-2799d5e6a002
* Content: [Multi-turn conversations - Azure Cognitive Services](https://docs.microsoft.com/en-us/azure/cognitive-services/qnamaker/how-to/multiturn-conversation#json-request-to-return-initial-answer-and-follow-up-prompts)
* Content Source: [articles/cognitive-services/QnAMaker/How-To/multiturn-conversation.md](https://github.com/Microsoft/azure-docs/blob/master/articles/cognitive-services/QnAMaker/How-To/multiturn-conversation.md)
* Service: **cognitive-services**
* GitHub Login: @diberry
* Microsoft Alias: **diberry**
|
non_process
|
generateanswer with context return no good match found in kb when i call generateanswer with the following payload similar to example above question who are you context top scorethreshold qna maker returns no good match found in kb however when i changed the payload to question who are you context null top scorethreshold qna maker returns the expected result the only difference is context vs context null my test kb only has items from chit chat professional please update the documentations in the link above to remove context document details β do not edit this section it is required for docs microsoft com β github issue linking id version independent id content content source service cognitive services github login diberry microsoft alias diberry
| 0
|
128,830
| 12,392,702,508
|
IssuesEvent
|
2020-05-20 14:23:03
|
codesquad-member-2020/baseball-08
|
https://api.github.com/repos/codesquad-member-2020/baseball-08
|
closed
|
Project mid-term review
|
documentation
|
This is impressive. Short and clear scrum notes! (not bad,. very good...)
https://github.com/codesquad-member-2020/baseball-08/wiki/Scrum
Please write TypeScript, not Type Script, haha.
The remote discussions are really good~
https://github.com/codesquad-member-2020/baseball-08/issues/5
Most features (issues) have subtasks, and keeping the maximum at around 4 seems good.
The smaller an issue can be split, the better.
Overall there seem to be a lot of rules, so I'm curious how they feel during actual collaborative development (this could be a bit tiring). I'll drop by and ask.
|
1.0
|
Project mid-term review - This is impressive. Short and clear scrum notes! (not bad,. very good...)
https://github.com/codesquad-member-2020/baseball-08/wiki/Scrum
Please write TypeScript, not Type Script, haha.
The remote discussions are really good~
https://github.com/codesquad-member-2020/baseball-08/issues/5
Most features (issues) have subtasks, and keeping the maximum at around 4 seems good.
The smaller an issue can be split, the better.
Overall there seem to be a lot of rules, so I'm curious how they feel during actual collaborative development (this could be a bit tiring). I'll drop by and ask.
|
non_process
|
project mid term review this is impressive short and clear scrum notes not bad very good please write typescript not type script haha the remote discussions are really good most features issues have subtasks and keeping the maximum at around seems good the smaller an issue can be split the better overall there seem to be a lot of rules so i m curious how they feel during actual collaborative development this could be a bit tiring i ll drop by and ask
| 0
|
14,367
| 17,390,631,640
|
IssuesEvent
|
2021-08-02 06:49:14
|
arcus-azure/arcus.messaging
|
https://api.github.com/repos/arcus-azure/arcus.messaging
|
closed
|
Provide easy way to test user-defined message handlers
|
area:message-processing enhancement testing
|
### Discussed in https://github.com/arcus-azure/arcus.messaging/discussions/155
<div type='discussions-op-text'>
<sup>Originally posted by **stijnmoreels** February 19, 2021</sup>
After the routing is determined in #141, we could maybe rethink our integration tests, as they all rely on external dependencies (Azure Service Bus & Event Grid) to test functionality that is sometimes not dependency-related but rather message handling.
We could change some of our tests to only test the necessary functionality by sending our own test messages directly into the router instead of going via the Azure Service Bus. Of course we can still have some tests that do go through Azure Service Bus and the message pump, but if we also extract some of these message handling tests, we can test more extensively by sending more 'extreme' kinds of messages, with maybe malicious inputs, which we cannot do right now. </div>
|
1.0
|
Provide easy way to test user-defined message handlers - ### Discussed in https://github.com/arcus-azure/arcus.messaging/discussions/155
<div type='discussions-op-text'>
<sup>Originally posted by **stijnmoreels** February 19, 2021</sup>
After the routing is determined in #141, we could maybe rethink our integration tests, as they all rely on external dependencies (Azure Service Bus & Event Grid) to test functionality that is sometimes not dependency-related but rather message handling.
We could change some of our tests to only test the necessary functionality by sending our own test messages directly into the router instead of going via the Azure Service Bus. Of course we can still have some tests that do go through Azure Service Bus and the message pump, but if we also extract some of these message handling tests, we can test more extensively by sending more 'extreme' kinds of messages, with maybe malicious inputs, which we cannot do right now. </div>
|
process
|
provide easy way to test user defined message handlers discussed in originally posted by stijnmoreels february after the routing is determined we could maybe rethink our integration tests as they are all relying on external dependencies azure service bus event grid to test sometimes not dependency related stuff but more message handling functionality we could change some of our tests to only test the necessary functionality by sending our own test messages directly in the router instead of going via the azure service bus of course we can still have some tests that do go through azure service bus and the message pump but if we extract also some of these message handling tests we can more extensively test by sending more extreme kind of messages with maybe malicious inputs which we cannot do right now
| 1
|
2,525
| 5,288,350,405
|
IssuesEvent
|
2017-02-08 14:56:17
|
mesosphere/marathon
|
https://api.github.com/repos/mesosphere/marathon
|
closed
|
#5085 scale test recovering from group deploy timeout
|
Epic:Improve CI and Release Process sprint-3 task
|
The group scale test was ignored because it often failed. It was added back in, but it has a failure mode that fails the build. Let's fix it.
|
1.0
|
#5085 scale test recovering from group deploy timeout - The group scale test was ignored because it often failed. It was added back in, but it has a failure mode that fails the build. Let's fix it.
|
process
|
scale test recovering from group deploy timeout group was ignored because it often failed it was added back in but has a failure mode that fails the build let s fix it
| 1
|
136,343
| 11,046,944,650
|
IssuesEvent
|
2019-12-09 17:53:35
|
web-platform-tests/wpt
|
https://api.github.com/repos/web-platform-tests/wpt
|
closed
|
test_driver.set_permission() needs to take a descriptor, not a permission name
|
infra priority:roadmap testdriver.js
|
The new API added in #20461 is insufficient to handle permission descriptor types that take extra fields (e.g. `MidiPermissionDescriptor` or `PushPermissionDescriptor`), as its first parameter is a string rather than a `PermissionDescriptor` object.
|
1.0
|
test_driver.set_permission() needs to take a descriptor, not a permission name - The new API added in #20461 is insufficient to handle permission descriptor types that take extra fields (e.g. `MidiPermissionDescriptor` or `PushPermissionDescriptor`), as its first parameter is a string rather than a `PermissionDescriptor` object.
|
non_process
|
test driver set permission needs to take a descriptor not a permission name the new api added in is insufficient to handle permission descriptor types that take extra fields e g midipermissiondescriptor or pushpermissiondescriptor as its first parameter is a string rather than a permissiondescriptor object
| 0
|
11,320
| 14,138,370,901
|
IssuesEvent
|
2020-11-10 08:19:38
|
unicode-org/icu4x
|
https://api.github.com/repos/unicode-org/icu4x
|
closed
|
There should be an easy way to run *exactly* the same checks locally as on github workflow
|
C-process T-enhancement
|
For example, running `cargo clippy` locally won't do, because what we actually run as a workflow is this check:
```
/usr/share/rust/.cargo/bin/cargo clippy --message-format=json --all-targets --all-features -- -D warnings
```
So your `cargo clippy` run may be fine, but if you wait on github to run its version, you can get errors that you missed.
We should have a way to say, preferably in a single command: "run locally *exactly* the same checks that we're running on the test infra", as a way to streamline fixing error checks. Having to keep a mental account of the exact command lines to use, or waiting on the github workflow, is error prone and lengthy.
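Such a command could be as thin as replaying the pinned invocation. A sketch, not a proposal for the actual tooling:
```python
# A minimal sketch: replay the exact clippy invocation the workflow uses,
# so the local run and the CI run cannot drift apart.
import subprocess
import sys

CI_CLIPPY = [
    "cargo", "clippy", "--message-format=json",
    "--all-targets", "--all-features", "--", "-D", "warnings",
]
sys.exit(subprocess.run(CI_CLIPPY).returncode)
```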
|
1.0
|
There should be an easy way to run *exactly* the same checks locally as on github workflow - For example, running `cargo clippy` locally won't do, because what we actually run as a workflow is this check:
```
/usr/share/rust/.cargo/bin/cargo clippy --message-format=json --all-targets --all-features -- -D warnings
```
So your `cargo clippy` run may be fine, but if you wait on github to run its version, you can get errors that you missed.
We should have a way to say, preferably in a single command: "run locally *exactly* the same checks that we're running on the test infra", as a way to streamline fixing error checks. Having to keep a mental account of the exact command lines to use, or waiting on the github workflow, is error prone and lengthy.
|
process
|
there should be an easy way to run exactly the same checks locally as on github workflow for example running cargo clippy locally won t do because what we actually run as a workflow is this check usr share rust cargo bin cargo clippy message format json all targets all features d warnings so your cargo clippy run may be fine but if you wait on github to run its version you can get errors that you missed we should have a way to say preferably in a single command run locally exactly the same checks that we re running on the test infra as a way to streamline fixing error checks having to keep mental accounting on exact command lines to use or waiting on github workflow are error prone and lengthy
| 1
|
278,613
| 24,163,208,407
|
IssuesEvent
|
2022-09-22 13:16:01
|
lbryio/lbry-sdk
|
https://api.github.com/repos/lbryio/lbry-sdk
|
opened
|
test_es_sync_utility: TypeError: argument should be bytes, buffer or ASCII string, not 'NoneType'
|
area: tests
|
https://github.com/moodyjon/lbry-sdk/actions/runs/3105406501/jobs/5030990064
Transaction 'b01ca9a00c494dc971d0c0a1af3d32db242fd9c09a11b1a903b274234e7b9901' lookup is failing, and we get back None instead of raw TX bytes.
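A guard at the call site in `_single_batch` (see the traceback below) would at least surface the failure explicitly. This is a minimal sketch, not the actual lbry-sdk code; `parse_raw_tx` and `tx_factory` are hypothetical stand-ins:
```python
# A minimal sketch of a defensive guard around the failing call;
# `tx_factory` stands in for lbry's Transaction class.
from binascii import unhexlify

def parse_raw_tx(raw_hex, height, txid, tx_factory):
    if raw_hex is None:
        # the server answered [None, {'block_height': -1}] for this txid
        raise LookupError(f"server returned no raw bytes for tx {txid}")
    return tx_factory(unhexlify(raw_hex), height=height)
```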
```
2022-09-22 12:49:48,178 - lbry.wallet.network - DEBUG - send blockchain.transaction.get_batch('b01ca9a00c494dc971d0c0a1af3d32db242fd9c09a11b1a903b274234e7b9901',) to localhost:50002 (30 timeout)
2022-09-22 12:49:48,331 - lbry.wallet.network - DEBUG - response blockchain.transaction.get_batch('b01ca9a00c494dc971d0c0a1af3d32db242fd9c09a11b1a903b274234e7b9901',) from localhost:50002 (30 timeout) -> {'b01ca9a00c494dc971d0c0a1af3d32db242fd9c09a11b1a903b274234e7b9901': [None, {'block_height': -1}]}
2022-09-22 12:49:48,333 - asyncio - ERROR - Task exception was never retrieved
future: <Task finished coro=<Ledger.update_history() done, defined at /home/runner/work/lbry-sdk/lbry-sdk/lbry/wallet/ledger.py:513> exception=TypeError("argument should be bytes, buffer or ASCII string, not 'NoneType'") created at /home/runner/work/lbry-sdk/lbry-sdk/lbry/wallet/tasks.py:16>
source_traceback: Object created at (most recent call last):
  File "/home/runner/work/lbry-sdk/lbry-sdk/.tox/blockchain/bin/coverage", line 8, in <module>
    sys.exit(main())
  File "/home/runner/work/lbry-sdk/lbry-sdk/.tox/blockchain/lib/python3.7/site-packages/coverage/cmdline.py", line 943, in main
    status = CoverageScript().command_line(argv)
  File "/home/runner/work/lbry-sdk/lbry-sdk/.tox/blockchain/lib/python3.7/site-packages/coverage/cmdline.py", line 659, in command_line
    return self.do_run(options, args)
  File "/home/runner/work/lbry-sdk/lbry-sdk/.tox/blockchain/lib/python3.7/site-packages/coverage/cmdline.py", line 830, in do_run
    runner.run()
  File "/home/runner/work/lbry-sdk/lbry-sdk/.tox/blockchain/lib/python3.7/site-packages/coverage/execfile.py", line 199, in run
    exec(code, main_mod.__dict__)
  File "/opt/hostedtoolcache/Python/3.7.13/x64/lib/python3.7/unittest/__main__.py", line 18, in <module>
    main(module=None)
  File "/opt/hostedtoolcache/Python/3.7.13/x64/lib/python3.7/unittest/main.py", line 101, in __init__
    self.runTests()
  File "/opt/hostedtoolcache/Python/3.7.13/x64/lib/python3.7/unittest/main.py", line 271, in runTests
    self.result = testRunner.run(self.test)
  File "/opt/hostedtoolcache/Python/3.7.13/x64/lib/python3.7/unittest/runner.py", line 176, in run
    test(result)
  File "/opt/hostedtoolcache/Python/3.7.13/x64/lib/python3.7/unittest/suite.py", line 84, in __call__
    return self.run(*args, **kwds)
  File "/opt/hostedtoolcache/Python/3.7.13/x64/lib/python3.7/unittest/suite.py", line 122, in run
    test(result)
  File "/opt/hostedtoolcache/Python/3.7.13/x64/lib/python3.7/unittest/suite.py", line 84, in __call__
    return self.run(*args, **kwds)
  File "/opt/hostedtoolcache/Python/3.7.13/x64/lib/python3.7/unittest/suite.py", line 122, in run
    test(result)
  File "/opt/hostedtoolcache/Python/3.7.13/x64/lib/python3.7/unittest/suite.py", line 84, in __call__
    return self.run(*args, **kwds)
  File "/opt/hostedtoolcache/Python/3.7.13/x64/lib/python3.7/unittest/suite.py", line 122, in run
    test(result)
  File "/opt/hostedtoolcache/Python/3.7.13/x64/lib/python3.7/unittest/case.py", line 676, in __call__
    return self.run(*args, **kwds)
  File "/home/runner/work/lbry-sdk/lbry-sdk/lbry/testcase.py", line 145, in run
    self.loop.run_until_complete(maybe_coroutine)
  File "/opt/hostedtoolcache/Python/3.7.13/x64/lib/python3.7/asyncio/base_events.py", line 574, in run_until_complete
    self.run_forever()
  File "/opt/hostedtoolcache/Python/3.7.13/x64/lib/python3.7/asyncio/base_events.py", line 541, in run_forever
    self._run_once()
  File "/opt/hostedtoolcache/Python/3.7.13/x64/lib/python3.7/asyncio/base_events.py", line 1778, in _run_once
    handle._run()
  File "/opt/hostedtoolcache/Python/3.7.13/x64/lib/python3.7/asyncio/events.py", line 88, in _run
    self._context.run(self._callback, *self._args)
  File "/home/runner/work/lbry-sdk/lbry-sdk/lbry/wallet/rpc/session.py", line 433, in _handle_request
    result = await self.handle_request(request)
  File "/home/runner/work/lbry-sdk/lbry-sdk/lbry/wallet/network.py", line 145, in handle_request
    controller.add(request.args)
  File "/home/runner/work/lbry-sdk/lbry-sdk/lbry/wallet/stream.py", line 84, in add
    lambda subscription: None if skip else subscription._add(event)
  File "/home/runner/work/lbry-sdk/lbry-sdk/lbry/wallet/stream.py", line 70, in _notify_and_ensure_future
    maybe_coroutine = notify(subscription)
  File "/home/runner/work/lbry-sdk/lbry-sdk/lbry/wallet/stream.py", line 84, in <lambda>
    lambda subscription: None if skip else subscription._add(event)
  File "/home/runner/work/lbry-sdk/lbry-sdk/lbry/wallet/stream.py", line 32, in _add
    return self._on_data(data)
  File "/home/runner/work/lbry-sdk/lbry-sdk/lbry/wallet/ledger.py", line 511, in process_status_update
    self._update_tasks.add(self.update_history(address, remote_status))
  File "/home/runner/work/lbry-sdk/lbry-sdk/lbry/wallet/tasks.py", line 16, in add
    task = self._loop.create_task(coro)
Traceback (most recent call last):
  File "/home/runner/work/lbry-sdk/lbry-sdk/lbry/wallet/ledger.py", line 559, in update_history
    async for tx in self.request_synced_transactions(to_request, remote_history_txids, address):
  File "/home/runner/work/lbry-sdk/lbry-sdk/lbry/wallet/ledger.py", line 667, in request_synced_transactions
    async for txs in self.request_transactions(((txid, height) for txid, height in to_request.values())):
  File "/home/runner/work/lbry-sdk/lbry-sdk/lbry/wallet/ledger.py", line 660, in request_transactions
    txs = await self._single_batch(batch, remote_heights)
  File "/home/runner/work/lbry-sdk/lbry-sdk/lbry/wallet/ledger.py", line 679, in _single_batch
    tx = Transaction(unhexlify(raw), height=remote_height)
TypeError: argument should be bytes, buffer or ASCII string, not 'NoneType'
```
This happens several times, then the test fails.
```
======================================================================
ERROR: test_es_sync_utility (integration.blockchain.test_wallet_server_sessions.TestESSync)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/home/runner/work/lbry-sdk/lbry-sdk/lbry/testcase.py", line 145, in run
    self.loop.run_until_complete(maybe_coroutine)
  File "/opt/hostedtoolcache/Python/3.7.13/x64/lib/python3.7/asyncio/base_events.py", line 587, in run_until_complete
    return future.result()
  File "/home/runner/work/lbry-sdk/lbry-sdk/tests/integration/blockchain/test_wallet_server_sessions.py", line 144, in test_es_sync_utility
    self.assertEqual(11, len(await self.claim_search(order_by=['height'])))
  File "/home/runner/work/lbry-sdk/lbry-sdk/lbry/testcase.py", line 708, in claim_search
    return (await self.out(self.daemon.jsonrpc_claim_search(**kwargs)))['items']
  File "/home/runner/work/lbry-sdk/lbry-sdk/lbry/testcase.py", line 554, in out
    return json.loads(jsonrpc_dumps_pretty(await awaitable, ledger=self.ledger))['result']
  File "/home/runner/work/lbry-sdk/lbry-sdk/lbry/extras/daemon/daemon.py", line 2644, in jsonrpc_claim_search
    txos, blocked, _, total = await self.ledger.claim_search(wallet.accounts, **kwargs)
  File "/home/runner/work/lbry-sdk/lbry-sdk/lbry/wallet/ledger.py", line 909, in claim_search
    include_is_my_output=include_is_my_output
  File "/home/runner/work/lbry-sdk/lbry-sdk/lbry/wallet/ledger.py", line 801, in _inflate_outputs
    async for tx in self.request_transactions(tuple(outputs.txs), cached=True):
  File "/home/runner/work/lbry-sdk/lbry-sdk/lbry/wallet/ledger.py", line 660, in request_transactions
    txs = await self._single_batch(batch, remote_heights)
  File "/home/runner/work/lbry-sdk/lbry-sdk/lbry/wallet/ledger.py", line 679, in _single_batch
    tx = Transaction(unhexlify(raw), height=remote_height)
TypeError: argument should be bytes, buffer or ASCII string, not 'NoneType'
----------------------------------------------------------------------
```
|
1.0
|
test_es_sync_utility: TypeError: argument should be bytes, buffer or ASCII string, not 'NoneType' - https://github.com/moodyjon/lbry-sdk/actions/runs/3105406501/jobs/5030990064
Transaction 'b01ca9a00c494dc971d0c0a1af3d32db242fd9c09a11b1a903b274234e7b9901' lookup is failing, and we get back None instead of raw TX bytes.
```
2022-09-22 12:49:48,178 - lbry.wallet.network - DEBUG - send blockchain.transaction.get_batch('b01ca9a00c494dc971d0c0a1af3d32db242fd9c09a11b1a903b274234e7b9901',) to localhost:50002 (30 timeout)
2022-09-22 12:49:48,331 - lbry.wallet.network - DEBUG - response blockchain.transaction.get_batch('b01ca9a00c494dc971d0c0a1af3d32db242fd9c09a11b1a903b274234e7b9901',) from localhost:50002 (30 timeout) -> {'b01ca9a00c494dc971d0c0a1af3d32db242fd9c09a11b1a903b274234e7b9901': [None, {'block_height': -1}]}
2022-09-22 12:49:48,333 - asyncio - ERROR - Task exception was never retrieved
future: <Task finished coro=<Ledger.update_history() done, defined at /home/runner/work/lbry-sdk/lbry-sdk/lbry/wallet/ledger.py:513> exception=TypeError("argument should be bytes, buffer or ASCII string, not 'NoneType'") created at /home/runner/work/lbry-sdk/lbry-sdk/lbry/wallet/tasks.py:16>
source_traceback: Object created at (most recent call last):
  File "/home/runner/work/lbry-sdk/lbry-sdk/.tox/blockchain/bin/coverage", line 8, in <module>
    sys.exit(main())
  File "/home/runner/work/lbry-sdk/lbry-sdk/.tox/blockchain/lib/python3.7/site-packages/coverage/cmdline.py", line 943, in main
    status = CoverageScript().command_line(argv)
  File "/home/runner/work/lbry-sdk/lbry-sdk/.tox/blockchain/lib/python3.7/site-packages/coverage/cmdline.py", line 659, in command_line
    return self.do_run(options, args)
  File "/home/runner/work/lbry-sdk/lbry-sdk/.tox/blockchain/lib/python3.7/site-packages/coverage/cmdline.py", line 830, in do_run
    runner.run()
  File "/home/runner/work/lbry-sdk/lbry-sdk/.tox/blockchain/lib/python3.7/site-packages/coverage/execfile.py", line 199, in run
    exec(code, main_mod.__dict__)
  File "/opt/hostedtoolcache/Python/3.7.13/x64/lib/python3.7/unittest/__main__.py", line 18, in <module>
    main(module=None)
  File "/opt/hostedtoolcache/Python/3.7.13/x64/lib/python3.7/unittest/main.py", line 101, in __init__
    self.runTests()
  File "/opt/hostedtoolcache/Python/3.7.13/x64/lib/python3.7/unittest/main.py", line 271, in runTests
    self.result = testRunner.run(self.test)
  File "/opt/hostedtoolcache/Python/3.7.13/x64/lib/python3.7/unittest/runner.py", line 176, in run
    test(result)
  File "/opt/hostedtoolcache/Python/3.7.13/x64/lib/python3.7/unittest/suite.py", line 84, in __call__
    return self.run(*args, **kwds)
  File "/opt/hostedtoolcache/Python/3.7.13/x64/lib/python3.7/unittest/suite.py", line 122, in run
    test(result)
  File "/opt/hostedtoolcache/Python/3.7.13/x64/lib/python3.7/unittest/suite.py", line 84, in __call__
    return self.run(*args, **kwds)
  File "/opt/hostedtoolcache/Python/3.7.13/x64/lib/python3.7/unittest/suite.py", line 122, in run
    test(result)
  File "/opt/hostedtoolcache/Python/3.7.13/x64/lib/python3.7/unittest/suite.py", line 84, in __call__
    return self.run(*args, **kwds)
  File "/opt/hostedtoolcache/Python/3.7.13/x64/lib/python3.7/unittest/suite.py", line 122, in run
    test(result)
  File "/opt/hostedtoolcache/Python/3.7.13/x64/lib/python3.7/unittest/case.py", line 676, in __call__
    return self.run(*args, **kwds)
  File "/home/runner/work/lbry-sdk/lbry-sdk/lbry/testcase.py", line 145, in run
    self.loop.run_until_complete(maybe_coroutine)
  File "/opt/hostedtoolcache/Python/3.7.13/x64/lib/python3.7/asyncio/base_events.py", line 574, in run_until_complete
    self.run_forever()
  File "/opt/hostedtoolcache/Python/3.7.13/x64/lib/python3.7/asyncio/base_events.py", line 541, in run_forever
    self._run_once()
  File "/opt/hostedtoolcache/Python/3.7.13/x64/lib/python3.7/asyncio/base_events.py", line 1778, in _run_once
    handle._run()
  File "/opt/hostedtoolcache/Python/3.7.13/x64/lib/python3.7/asyncio/events.py", line 88, in _run
    self._context.run(self._callback, *self._args)
  File "/home/runner/work/lbry-sdk/lbry-sdk/lbry/wallet/rpc/session.py", line 433, in _handle_request
    result = await self.handle_request(request)
  File "/home/runner/work/lbry-sdk/lbry-sdk/lbry/wallet/network.py", line 145, in handle_request
    controller.add(request.args)
  File "/home/runner/work/lbry-sdk/lbry-sdk/lbry/wallet/stream.py", line 84, in add
    lambda subscription: None if skip else subscription._add(event)
  File "/home/runner/work/lbry-sdk/lbry-sdk/lbry/wallet/stream.py", line 70, in _notify_and_ensure_future
    maybe_coroutine = notify(subscription)
  File "/home/runner/work/lbry-sdk/lbry-sdk/lbry/wallet/stream.py", line 84, in <lambda>
    lambda subscription: None if skip else subscription._add(event)
  File "/home/runner/work/lbry-sdk/lbry-sdk/lbry/wallet/stream.py", line 32, in _add
    return self._on_data(data)
  File "/home/runner/work/lbry-sdk/lbry-sdk/lbry/wallet/ledger.py", line 511, in process_status_update
    self._update_tasks.add(self.update_history(address, remote_status))
  File "/home/runner/work/lbry-sdk/lbry-sdk/lbry/wallet/tasks.py", line 16, in add
    task = self._loop.create_task(coro)
Traceback (most recent call last):
  File "/home/runner/work/lbry-sdk/lbry-sdk/lbry/wallet/ledger.py", line 559, in update_history
    async for tx in self.request_synced_transactions(to_request, remote_history_txids, address):
  File "/home/runner/work/lbry-sdk/lbry-sdk/lbry/wallet/ledger.py", line 667, in request_synced_transactions
    async for txs in self.request_transactions(((txid, height) for txid, height in to_request.values())):
  File "/home/runner/work/lbry-sdk/lbry-sdk/lbry/wallet/ledger.py", line 660, in request_transactions
    txs = await self._single_batch(batch, remote_heights)
  File "/home/runner/work/lbry-sdk/lbry-sdk/lbry/wallet/ledger.py", line 679, in _single_batch
    tx = Transaction(unhexlify(raw), height=remote_height)
TypeError: argument should be bytes, buffer or ASCII string, not 'NoneType'
```
This happens several times, then the test fails.
```
======================================================================
ERROR: test_es_sync_utility (integration.blockchain.test_wallet_server_sessions.TestESSync)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/home/runner/work/lbry-sdk/lbry-sdk/lbry/testcase.py", line 145, in run
    self.loop.run_until_complete(maybe_coroutine)
  File "/opt/hostedtoolcache/Python/3.7.13/x64/lib/python3.7/asyncio/base_events.py", line 587, in run_until_complete
    return future.result()
  File "/home/runner/work/lbry-sdk/lbry-sdk/tests/integration/blockchain/test_wallet_server_sessions.py", line 144, in test_es_sync_utility
    self.assertEqual(11, len(await self.claim_search(order_by=['height'])))
  File "/home/runner/work/lbry-sdk/lbry-sdk/lbry/testcase.py", line 708, in claim_search
    return (await self.out(self.daemon.jsonrpc_claim_search(**kwargs)))['items']
  File "/home/runner/work/lbry-sdk/lbry-sdk/lbry/testcase.py", line 554, in out
    return json.loads(jsonrpc_dumps_pretty(await awaitable, ledger=self.ledger))['result']
  File "/home/runner/work/lbry-sdk/lbry-sdk/lbry/extras/daemon/daemon.py", line 2644, in jsonrpc_claim_search
    txos, blocked, _, total = await self.ledger.claim_search(wallet.accounts, **kwargs)
  File "/home/runner/work/lbry-sdk/lbry-sdk/lbry/wallet/ledger.py", line 909, in claim_search
    include_is_my_output=include_is_my_output
  File "/home/runner/work/lbry-sdk/lbry-sdk/lbry/wallet/ledger.py", line 801, in _inflate_outputs
    async for tx in self.request_transactions(tuple(outputs.txs), cached=True):
  File "/home/runner/work/lbry-sdk/lbry-sdk/lbry/wallet/ledger.py", line 660, in request_transactions
    txs = await self._single_batch(batch, remote_heights)
  File "/home/runner/work/lbry-sdk/lbry-sdk/lbry/wallet/ledger.py", line 679, in _single_batch
    tx = Transaction(unhexlify(raw), height=remote_height)
TypeError: argument should be bytes, buffer or ASCII string, not 'NoneType'
----------------------------------------------------------------------
```
|
non_process
|
test es sync utility typeerror argument should be bytes buffer or ascii string not nonetype transaction lookup is failing and we get back none instead of raw tx bytes lbry wallet network debug send blockchain transaction get batch to localhost timeout lbry wallet network debug response blockchain transaction get batch from localhost timeout asyncio error task exception was never retrieved future exception typeerror argument should be bytes buffer or ascii string not nonetype created at home runner work lbry sdk lbry sdk lbry wallet tasks py source traceback object created at most recent call last file home runner work lbry sdk lbry sdk tox blockchain bin coverage line in sys exit main file home runner work lbry sdk lbry sdk tox blockchain lib site packages coverage cmdline py line in main status coveragescript command line argv file home runner work lbry sdk lbry sdk tox blockchain lib site packages coverage cmdline py line in command line return self do run options args file home runner work lbry sdk lbry sdk tox blockchain lib site packages coverage cmdline py line in do run runner run file home runner work lbry sdk lbry sdk tox blockchain lib site packages coverage execfile py line in run exec code main mod dict file opt hostedtoolcache python lib unittest main py line in main module none file opt hostedtoolcache python lib unittest main py line in init self runtests file opt hostedtoolcache python lib unittest main py line in runtests self result testrunner run self test file opt hostedtoolcache python lib unittest runner py line in run test result file opt hostedtoolcache python lib unittest suite py line in call return self run args kwds file opt hostedtoolcache python lib unittest suite py line in run test result file opt hostedtoolcache python lib unittest suite py line in call return self run args kwds file opt hostedtoolcache python lib unittest suite py line in run test result file opt hostedtoolcache python lib unittest suite py line in call return self run args kwds file opt hostedtoolcache python lib unittest suite py line in run test result file opt hostedtoolcache python lib unittest case py line in call return self run args kwds file home runner work lbry sdk lbry sdk lbry testcase py line in run self loop run until complete maybe coroutine file opt hostedtoolcache python lib asyncio base events py line in run until complete self run forever file opt hostedtoolcache python lib asyncio base events py line in run forever self run once file opt hostedtoolcache python lib asyncio base events py line in run once handle run file opt hostedtoolcache python lib asyncio events py line in run self context run self callback self args file home runner work lbry sdk lbry sdk lbry wallet rpc session py line in handle request result await self handle request request file home runner work lbry sdk lbry sdk lbry wallet network py line in handle request controller add request args file home runner work lbry sdk lbry sdk lbry wallet stream py line in add lambda subscription none if skip else subscription add event file home runner work lbry sdk lbry sdk lbry wallet stream py line in notify and ensure future maybe coroutine notify subscription file home runner work lbry sdk lbry sdk lbry wallet stream py line in lambda subscription none if skip else subscription add event file home runner work lbry sdk lbry sdk lbry wallet stream py line in add return self on data data file home runner work lbry sdk lbry sdk lbry wallet ledger py line in process status update self update tasks add self update 
history address remote status file home runner work lbry sdk lbry sdk lbry wallet tasks py line in add task self loop create task coro traceback most recent call last file home runner work lbry sdk lbry sdk lbry wallet ledger py line in update history async for tx in self request synced transactions to request remote history txids address file home runner work lbry sdk lbry sdk lbry wallet ledger py line in request synced transactions async for txs in self request transactions txid height for txid height in to request values file home runner work lbry sdk lbry sdk lbry wallet ledger py line in request transactions txs await self single batch batch remote heights file home runner work lbry sdk lbry sdk lbry wallet ledger py line in single batch tx transaction unhexlify raw height remote height typeerror argument should be bytes buffer or ascii string not nonetype this happens several times then the test fails error test es sync utility integration blockchain test wallet server sessions testessync traceback most recent call last file home runner work lbry sdk lbry sdk lbry testcase py line in run self loop run until complete maybe coroutine file opt hostedtoolcache python lib asyncio base events py line in run until complete return future result file home runner work lbry sdk lbry sdk tests integration blockchain test wallet server sessions py line in test es sync utility self assertequal len await self claim search order by file home runner work lbry sdk lbry sdk lbry testcase py line in claim search return await self out self daemon jsonrpc claim search kwargs file home runner work lbry sdk lbry sdk lbry testcase py line in out return json loads jsonrpc dumps pretty await awaitable ledger self ledger file home runner work lbry sdk lbry sdk lbry extras daemon daemon py line in jsonrpc claim search txos blocked total await self ledger claim search wallet accounts kwargs file home runner work lbry sdk lbry sdk lbry wallet ledger py line in claim search include is my output include is my output file home runner work lbry sdk lbry sdk lbry wallet ledger py line in inflate outputs async for tx in self request transactions tuple outputs txs cached true file home runner work lbry sdk lbry sdk lbry wallet ledger py line in request transactions txs await self single batch batch remote heights file home runner work lbry sdk lbry sdk lbry wallet ledger py line in single batch tx transaction unhexlify raw height remote height typeerror argument should be bytes buffer or ascii string not nonetype
| 0
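The crash in the record above reduces to `unhexlify` receiving `None`: a timed-out `blockchain.transaction.get_batch` lookup hands back `None` instead of raw transaction hex. A minimal defensive sketch in Python (a hypothetical helper, not the actual lbry-sdk code):

```python
from binascii import unhexlify

def decode_batch(batch_results):
    """Decode raw hex transactions, skipping entries whose lookup failed.

    `batch_results` maps txid -> (raw_hex_or_None, remote_height).
    """
    decoded = {}
    for txid, (raw, remote_height) in batch_results.items():
        if raw is None:
            # A timed-out lookup yields None instead of raw tx hex;
            # unhexlify(None) raises exactly the TypeError seen in the log.
            continue
        decoded[txid] = (unhexlify(raw), remote_height)
    return decoded
```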
|
6,349
| 9,398,651,321
|
IssuesEvent
|
2019-04-08 12:48:34
|
linnovate/root
|
https://api.github.com/repos/linnovate/root
|
reopened
|
Inbox shows incorrect order
|
2.0.6.1 Fixed Process bug bug critical
|
The inbox shows the updates with the oldest dates at the top and the newest updates at the bottom.
It needs to be ordered from newest to oldest, so the most recent update appears first, at the top.
|
1.0
|
Inbox shows incorrect order - The inbox shows the updates with the oldest dates at the top and the newest updates at the bottom.
It needs to be ordered from newest to oldest, so the most recent update appears first, at the top.
|
process
|
inbox show incorrect order the inbox show in the top the updates that were done in the oldest date and in the most down it s the newest updates it need s to be from the newest to the oldest so in the top the first will be the most new update
| 1
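For reference, ordering an inbox newest-first is a descending sort on the update timestamp. A minimal Python sketch with assumed field names (the project's actual schema may differ):

```python
from datetime import datetime

updates = [
    {"text": "older update", "updated_at": datetime(2019, 1, 5)},
    {"text": "newest update", "updated_at": datetime(2019, 4, 1)},
]

# Newest first: sort descending on the timestamp.
inbox = sorted(updates, key=lambda u: u["updated_at"], reverse=True)
assert inbox[0]["text"] == "newest update"
```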
|
81,286
| 10,118,853,894
|
IssuesEvent
|
2019-07-31 09:59:47
|
ushahidi/opendesign
|
https://api.github.com/repos/ushahidi/opendesign
|
closed
|
Pull quote template
|
Design Highest Priority Social Media
|
Design a template for pull quotes for social media.
.psd or .xd format for @thesocialdetail to be able to edit/amend quickly for socials.
|
1.0
|
Pull quote template - Design a template for pull quotes for social media.
.psd or .xd format for @thesocialdetail to be able to edit/amend quickly for socials.
|
non_process
|
pull quote template design template for pull quotes for social medias psd or xd format for thesocialdetail to be able to edit amend quickly for socials
| 0
|
290,199
| 25,042,422,806
|
IssuesEvent
|
2022-11-04 22:42:43
|
nrwl/nx
|
https://api.github.com/repos/nrwl/nx
|
closed
|
Cannot setup cypress component testing for a lib or app via the generator
|
type: bug blocked: repro needed scope: testing tools
|
<!-- Please do your best to fill out all of the sections below! -->
## Current Behavior
Cannot setup cypress component testing for a lib or app via the generator
I get the error
```
bmathew@Blessans-MacBook-Pro-4 personio-web % nx g @nrwl/react:cypress-component-configuration --project=my-new-lib --verbose
> NX Generating @nrwl/react:cypress-component-configuration
✔ Automatically generate tests for components declared in this project? (y/N) · false
> NX Cannot read properties of undefined (reading 'config')
TypeError: Cannot read properties of undefined (reading 'config')
at /Users/bmathew/Desktop/repos/personio-web/node_modules/.pnpm/@nrwl+react@14.8.4_734a1c5f3d4ccdfc9ca0aa1d480d891c/node_modules/@nrwl/react/src/generators/cypress-component-configuration/lib/update-configs.js:14:32
at Generator.next (<anonymous>)
at fulfilled (/Users/bmathew/Desktop/repos/personio-web/node_modules/.pnpm/tslib@2.4.0/node_modules/tslib/tslib.js:115:62)
at processTicksAndRejections (node:internal/process/task_queues:96:5)
```
## Expected Behavior
Cypress correctly setup
## Steps to Reproduce
I'm not sure how, because it works fine for a fresh nx workspace.
### Environment
```
Node : 16.10.0
OS : darwin arm64
pnpm : 6.32.9
nx : 14.8.4
@nrwl/angular : Not Found
@nrwl/cypress : 14.8.4
@nrwl/detox : Not Found
@nrwl/devkit : 14.8.4
@nrwl/esbuild : Not Found
@nrwl/eslint-plugin-nx : 14.8.4
@nrwl/expo : Not Found
@nrwl/express : Not Found
@nrwl/jest : 14.8.4
@nrwl/js : 14.8.4
@nrwl/linter : 14.8.4
@nrwl/nest : 14.8.4
@nrwl/next : 14.8.4
@nrwl/node : 14.8.4
@nrwl/nx-cloud : Not Found
@nrwl/nx-plugin : 14.8.4
@nrwl/react : 14.8.4
@nrwl/react-native : Not Found
@nrwl/rollup : 14.8.4
@nrwl/schematics : Not Found
@nrwl/storybook : 14.8.4
@nrwl/web : 14.8.4
@nrwl/webpack : 14.8.4
@nrwl/workspace : 14.8.4
typescript : 4.4.4
---------------------------------------
Local workspace plugins:
@personio-web/nx-plugin
---------------------------------------
Community plugins:
@nx-tools/nx-docker: 2.3.0
```
|
1.0
|
Cannot setup cypress component testing for a lib or app via the generator - <!-- Please do your best to fill out all of the sections below! -->
## Current Behavior
Cannot setup cypress component testing for a lib or app via the generator
I get the error
```
bmathew@Blessans-MacBook-Pro-4 personio-web % nx g @nrwl/react:cypress-component-configuration --project=my-new-lib --verbose
> NX Generating @nrwl/react:cypress-component-configuration
✔ Automatically generate tests for components declared in this project? (y/N) · false
> NX Cannot read properties of undefined (reading 'config')
TypeError: Cannot read properties of undefined (reading 'config')
at /Users/bmathew/Desktop/repos/personio-web/node_modules/.pnpm/@nrwl+react@14.8.4_734a1c5f3d4ccdfc9ca0aa1d480d891c/node_modules/@nrwl/react/src/generators/cypress-component-configuration/lib/update-configs.js:14:32
at Generator.next (<anonymous>)
at fulfilled (/Users/bmathew/Desktop/repos/personio-web/node_modules/.pnpm/tslib@2.4.0/node_modules/tslib/tslib.js:115:62)
at processTicksAndRejections (node:internal/process/task_queues:96:5)
```
## Expected Behavior
Cypress correctly setup
## Steps to Reproduce
I'm not sure how, because it works fine for a fresh nx workspace.
### Environment
```
Node : 16.10.0
OS : darwin arm64
pnpm : 6.32.9
nx : 14.8.4
@nrwl/angular : Not Found
@nrwl/cypress : 14.8.4
@nrwl/detox : Not Found
@nrwl/devkit : 14.8.4
@nrwl/esbuild : Not Found
@nrwl/eslint-plugin-nx : 14.8.4
@nrwl/expo : Not Found
@nrwl/express : Not Found
@nrwl/jest : 14.8.4
@nrwl/js : 14.8.4
@nrwl/linter : 14.8.4
@nrwl/nest : 14.8.4
@nrwl/next : 14.8.4
@nrwl/node : 14.8.4
@nrwl/nx-cloud : Not Found
@nrwl/nx-plugin : 14.8.4
@nrwl/react : 14.8.4
@nrwl/react-native : Not Found
@nrwl/rollup : 14.8.4
@nrwl/schematics : Not Found
@nrwl/storybook : 14.8.4
@nrwl/web : 14.8.4
@nrwl/webpack : 14.8.4
@nrwl/workspace : 14.8.4
typescript : 4.4.4
---------------------------------------
Local workspace plugins:
@personio-web/nx-plugin
---------------------------------------
Community plugins:
@nx-tools/nx-docker: 2.3.0
```
|
non_process
|
cannot setup cypress component testing for a lib or app via the generator current behavior cannot setup cypress component testing for a lib or app via the generator i get the error bmathew blessans macbook pro personio web nx g nrwl react cypress component configuration project my new lib verbose nx generating nrwl react cypress component configuration β automatically generate tests for components declared in this project y n Β· false nx cannot read properties of undefined reading config typeerror cannot read properties of undefined reading config at users bmathew desktop repos personio web node modules pnpm nrwl react node modules nrwl react src generators cypress component configuration lib update configs js at generator next at fulfilled users bmathew desktop repos personio web node modules pnpm tslib node modules tslib tslib js at processticksandrejections node internal process task queues expected behavior cypress correctly setup steps to reproduce im not sure how cause it works fine for a fresh nx workspace environment node os darwin pnpm nx nrwl angular not found nrwl cypress nrwl detox not found nrwl devkit nrwl esbuild not found nrwl eslint plugin nx nrwl expo not found nrwl express not found nrwl jest nrwl js nrwl linter nrwl nest nrwl next nrwl node nrwl nx cloud not found nrwl nx plugin nrwl react nrwl react native not found nrwl rollup nrwl schematics not found nrwl storybook nrwl web nrwl webpack nrwl workspace typescript local workspace plugins personio web nx plugin community plugins nx tools nx docker
| 0
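The stack trace in the record above points at `update-configs.js` dereferencing `.config` on a build target the project apparently lacks. As a language-neutral illustration of the defensive pattern (a Python sketch, not the @nrwl/react source), the lookup should be checked before it is dereferenced:

```python
def get_build_config(project: dict) -> dict:
    """Return the project's build-target config, failing loudly if absent."""
    build = (project.get("targets") or {}).get("build")
    if build is None:
        # Equivalent of the JS crash: reading `.config` of undefined.
        raise ValueError("project defines no 'build' target; cannot read its config")
    return build.get("config", {})
```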
|
14,972
| 18,471,719,971
|
IssuesEvent
|
2021-10-17 21:07:04
|
ncbo/bioportal-project
|
https://api.github.com/repos/ncbo/bioportal-project
|
closed
|
EPSO failed to process with status Error Rdf Labels
|
ontology processing problem
|
Received notification on the support list that [EPSO](https://bioportal.bioontology.org/ontologies/EPSO) isn't processing properly. Status on the summary page indicates that processing fails during label generation. Full stack trace from production log:
```
I, [2021-06-21T12:32:43.707538 #17470] INFO -- : ["OWLAPI Java command: parsing finished successfully."]
I, [2021-06-21T12:32:43.707700 #17470] INFO -- : ["Output size 1345338 in `/srv/ncbo/repository/EPSO/4/owlapi.xrdf`"]
I, [2021-06-21T12:32:44.148764 #17470] INFO -- : ["Triples /srv/ncbo/repository/EPSO/4/owlapi.xrdf appended in <http://data.bioontology.org/ontologies/EPSO/submissions/4>"]
E, [2021-06-21T12:32:44.220770 #17470] ERROR -- : ["Exception: Rapper cannot parse turtle file at /tmp/data_triple_store20210621-17470-17dzhud: rapper: Parsing URI file:///tmp/data_triple_store20210621-17470-17dzhud with parser turtle
rapper: Serializing with serializer ntriples
rapper: Error - URI file:///tmp/data_triple_store20210621-17470-17dzhud:5 - syntax error at '<'
rapper: Parsing returned 4 triples
/srv/ncbo/ncbo_cron/vendor/bundle/ruby/2.6.0/bundler/gems/goo-b0a542f93804/lib/goo/sparql/client.rb:68:in `bnodes_filter_file'
/srv/ncbo/ncbo_cron/vendor/bundle/ruby/2.6.0/bundler/gems/goo-b0a542f93804/lib/goo/sparql/client.rb:90:in `append_triples_no_bnodes'
/srv/ncbo/ncbo_cron/vendor/bundle/ruby/2.6.0/bundler/gems/goo-b0a542f93804/lib/goo/sparql/client.rb:141:in `append_data_triples'
/srv/ncbo/ncbo_cron/vendor/bundle/ruby/2.6.0/bundler/gems/goo-b0a542f93804/lib/goo/sparql/client.rb:153:in `append_triples'
/srv/ncbo/ncbo_cron/vendor/bundle/ruby/2.6.0/bundler/gems/ontologies_linked_data-de150ab9388c/lib/ontologies_linked_data/models/ontology_submission.rb:632:in `generate_missing_labels_pre'
/srv/ncbo/ncbo_cron/vendor/bundle/ruby/2.6.0/bundler/gems/ontologies_linked_data-de150ab9388c/lib/ontologies_linked_data/models/ontology_submission.rb:550:in `call'
/srv/ncbo/ncbo_cron/vendor/bundle/ruby/2.6.0/bundler/gems/ontologies_linked_data-de150ab9388c/lib/ontologies_linked_data/models/ontology_submission.rb:550:in `block (2 levels) in loop_classes'
/srv/ncbo/ncbo_cron/vendor/bundle/ruby/2.6.0/bundler/gems/ontologies_linked_data-de150ab9388c/lib/ontologies_linked_data/models/ontology_submission.rb:504:in `block in process_callbacks'
/srv/ncbo/ncbo_cron/vendor/bundle/ruby/2.6.0/bundler/gems/ontologies_linked_data-de150ab9388c/lib/ontologies_linked_data/models/ontology_submission.rb:500:in `delete_if'
/srv/ncbo/ncbo_cron/vendor/bundle/ruby/2.6.0/bundler/gems/ontologies_linked_data-de150ab9388c/lib/ontologies_linked_data/models/ontology_submission.rb:500:in `process_callbacks'
/srv/ncbo/ncbo_cron/vendor/bundle/ruby/2.6.0/bundler/gems/ontologies_linked_data-de150ab9388c/lib/ontologies_linked_data/models/ontology_submission.rb:549:in `block in loop_classes'
/usr/local/rbenv/versions/2.6.6/lib/ruby/2.6.0/benchmark.rb:308:in `realtime'
/srv/ncbo/ncbo_cron/vendor/bundle/ruby/2.6.0/bundler/gems/ontologies_linked_data-de150ab9388c/lib/ontologies_linked_data/models/ontology_submission.rb:531:in `loop_classes'
/srv/ncbo/ncbo_cron/vendor/bundle/ruby/2.6.0/bundler/gems/ontologies_linked_data-de150ab9388c/lib/ontologies_linked_data/models/ontology_submission.rb:1002:in `process_submission'
/srv/ncbo/ncbo_cron/lib/ncbo_cron/ontology_submission_parser.rb:177:in `process_submission'
/srv/ncbo/ncbo_cron/lib/ncbo_cron/ontology_submission_parser.rb:47:in `block in process_queue_submissions'
/srv/ncbo/ncbo_cron/lib/ncbo_cron/ontology_submission_parser.rb:41:in `each'
/srv/ncbo/ncbo_cron/lib/ncbo_cron/ontology_submission_parser.rb:41:in `process_queue_submissions'
/srv/ncbo/ncbo_cron/bin/ncbo_cron:246:in `block (3 levels) in <main>'
/srv/ncbo/ncbo_cron/lib/ncbo_cron/scheduler.rb:65:in `block (3 levels) in scheduled_locking_job'
/srv/ncbo/ncbo_cron/lib/ncbo_cron/scheduler.rb:51:in `fork'
/srv/ncbo/ncbo_cron/lib/ncbo_cron/scheduler.rb:51:in `block (2 levels) in scheduled_locking_job'
/srv/ncbo/ncbo_cron/vendor/bundle/ruby/2.6.0/gems/mlanett-redis-lock-0.2.7/lib/redis-lock.rb:43:in `lock'
/srv/ncbo/ncbo_cron/vendor/bundle/ruby/2.6.0/gems/mlanett-redis-lock-0.2.7/lib/redis-lock.rb:234:in `lock'
/srv/ncbo/ncbo_cron/lib/ncbo_cron/scheduler.rb:50:in `block in scheduled_locking_job'
/srv/ncbo/ncbo_cron/vendor/bundle/ruby/2.6.0/gems/rufus-scheduler-2.0.24/lib/rufus/sc/jobs.rb:230:in `trigger_block'
/srv/ncbo/ncbo_cron/vendor/bundle/ruby/2.6.0/gems/rufus-scheduler-2.0.24/lib/rufus/sc/jobs.rb:204:in `block in trigger'
/srv/ncbo/ncbo_cron/vendor/bundle/ruby/2.6.0/gems/rufus-scheduler-2.0.24/lib/rufus/sc/scheduler.rb:430:in `block in trigger_job'"]
```
|
1.0
|
EPSO failed to process with status Error Rdf Labels - Received notification on the support list that [EPSO](https://bioportal.bioontology.org/ontologies/EPSO) isn't processing properly. Status on the summary page indicates that processing fails during label generation. Full stack trace from production log:
```
I, [2021-06-21T12:32:43.707538 #17470] INFO -- : ["OWLAPI Java command: parsing finished successfully."]
I, [2021-06-21T12:32:43.707700 #17470] INFO -- : ["Output size 1345338 in `/srv/ncbo/repository/EPSO/4/owlapi.xrdf`"]
I, [2021-06-21T12:32:44.148764 #17470] INFO -- : ["Triples /srv/ncbo/repository/EPSO/4/owlapi.xrdf appended in <http://data.bioontology.org/ontologies/EPSO/submissions/4>"]
E, [2021-06-21T12:32:44.220770 #17470] ERROR -- : ["Exception: Rapper cannot parse turtle file at /tmp/data_triple_store20210621-17470-17dzhud: rapper: Parsing URI file:///tmp/data_triple_store20210621-17470-17dzhud with parser turtle
rapper: Serializing with serializer ntriples
rapper: Error - URI file:///tmp/data_triple_store20210621-17470-17dzhud:5 - syntax error at '<'
rapper: Parsing returned 4 triples
/srv/ncbo/ncbo_cron/vendor/bundle/ruby/2.6.0/bundler/gems/goo-b0a542f93804/lib/goo/sparql/client.rb:68:in `bnodes_filter_file'
/srv/ncbo/ncbo_cron/vendor/bundle/ruby/2.6.0/bundler/gems/goo-b0a542f93804/lib/goo/sparql/client.rb:90:in `append_triples_no_bnodes'
/srv/ncbo/ncbo_cron/vendor/bundle/ruby/2.6.0/bundler/gems/goo-b0a542f93804/lib/goo/sparql/client.rb:141:in `append_data_triples'
/srv/ncbo/ncbo_cron/vendor/bundle/ruby/2.6.0/bundler/gems/goo-b0a542f93804/lib/goo/sparql/client.rb:153:in `append_triples'
/srv/ncbo/ncbo_cron/vendor/bundle/ruby/2.6.0/bundler/gems/ontologies_linked_data-de150ab9388c/lib/ontologies_linked_data/models/ontology_submission.rb:632:in `generate_missing_labels_pre'
/srv/ncbo/ncbo_cron/vendor/bundle/ruby/2.6.0/bundler/gems/ontologies_linked_data-de150ab9388c/lib/ontologies_linked_data/models/ontology_submission.rb:550:in `call'
/srv/ncbo/ncbo_cron/vendor/bundle/ruby/2.6.0/bundler/gems/ontologies_linked_data-de150ab9388c/lib/ontologies_linked_data/models/ontology_submission.rb:550:in `block (2 levels) in loop_classes'
/srv/ncbo/ncbo_cron/vendor/bundle/ruby/2.6.0/bundler/gems/ontologies_linked_data-de150ab9388c/lib/ontologies_linked_data/models/ontology_submission.rb:504:in `block in process_callbacks'
/srv/ncbo/ncbo_cron/vendor/bundle/ruby/2.6.0/bundler/gems/ontologies_linked_data-de150ab9388c/lib/ontologies_linked_data/models/ontology_submission.rb:500:in `delete_if'
/srv/ncbo/ncbo_cron/vendor/bundle/ruby/2.6.0/bundler/gems/ontologies_linked_data-de150ab9388c/lib/ontologies_linked_data/models/ontology_submission.rb:500:in `process_callbacks'
/srv/ncbo/ncbo_cron/vendor/bundle/ruby/2.6.0/bundler/gems/ontologies_linked_data-de150ab9388c/lib/ontologies_linked_data/models/ontology_submission.rb:549:in `block in loop_classes'
/usr/local/rbenv/versions/2.6.6/lib/ruby/2.6.0/benchmark.rb:308:in `realtime'
/srv/ncbo/ncbo_cron/vendor/bundle/ruby/2.6.0/bundler/gems/ontologies_linked_data-de150ab9388c/lib/ontologies_linked_data/models/ontology_submission.rb:531:in `loop_classes'
/srv/ncbo/ncbo_cron/vendor/bundle/ruby/2.6.0/bundler/gems/ontologies_linked_data-de150ab9388c/lib/ontologies_linked_data/models/ontology_submission.rb:1002:in `process_submission'
/srv/ncbo/ncbo_cron/lib/ncbo_cron/ontology_submission_parser.rb:177:in `process_submission'
/srv/ncbo/ncbo_cron/lib/ncbo_cron/ontology_submission_parser.rb:47:in `block in process_queue_submissions'
/srv/ncbo/ncbo_cron/lib/ncbo_cron/ontology_submission_parser.rb:41:in `each'
/srv/ncbo/ncbo_cron/lib/ncbo_cron/ontology_submission_parser.rb:41:in `process_queue_submissions'
/srv/ncbo/ncbo_cron/bin/ncbo_cron:246:in `block (3 levels) in <main>'
/srv/ncbo/ncbo_cron/lib/ncbo_cron/scheduler.rb:65:in `block (3 levels) in scheduled_locking_job'
/srv/ncbo/ncbo_cron/lib/ncbo_cron/scheduler.rb:51:in `fork'
/srv/ncbo/ncbo_cron/lib/ncbo_cron/scheduler.rb:51:in `block (2 levels) in scheduled_locking_job'
/srv/ncbo/ncbo_cron/vendor/bundle/ruby/2.6.0/gems/mlanett-redis-lock-0.2.7/lib/redis-lock.rb:43:in `lock'
/srv/ncbo/ncbo_cron/vendor/bundle/ruby/2.6.0/gems/mlanett-redis-lock-0.2.7/lib/redis-lock.rb:234:in `lock'
/srv/ncbo/ncbo_cron/lib/ncbo_cron/scheduler.rb:50:in `block in scheduled_locking_job'
/srv/ncbo/ncbo_cron/vendor/bundle/ruby/2.6.0/gems/rufus-scheduler-2.0.24/lib/rufus/sc/jobs.rb:230:in `trigger_block'
/srv/ncbo/ncbo_cron/vendor/bundle/ruby/2.6.0/gems/rufus-scheduler-2.0.24/lib/rufus/sc/jobs.rb:204:in `block in trigger'
/srv/ncbo/ncbo_cron/vendor/bundle/ruby/2.6.0/gems/rufus-scheduler-2.0.24/lib/rufus/sc/scheduler.rb:430:in `block in trigger_job'"]
```
|
process
|
epso failed to process with status error rdf labels received notification on the support list that isn t processing properly status on the summary page indicates that processing fails during label generation full stack trace from production log i info i info i info e error exception rapper cannot parse turtle file at tmp data triple rapper parsing uri file tmp data triple with parser turtle rapper serializing with serializer ntriples rapper error uri file tmp data triple syntax error at rapper parsing returned triples srv ncbo ncbo cron vendor bundle ruby bundler gems goo lib goo sparql client rb in bnodes filter file srv ncbo ncbo cron vendor bundle ruby bundler gems goo lib goo sparql client rb in append triples no bnodes srv ncbo ncbo cron vendor bundle ruby bundler gems goo lib goo sparql client rb in append data triples srv ncbo ncbo cron vendor bundle ruby bundler gems goo lib goo sparql client rb in append triples srv ncbo ncbo cron vendor bundle ruby bundler gems ontologies linked data lib ontologies linked data models ontology submission rb in generate missing labels pre srv ncbo ncbo cron vendor bundle ruby bundler gems ontologies linked data lib ontologies linked data models ontology submission rb in call srv ncbo ncbo cron vendor bundle ruby bundler gems ontologies linked data lib ontologies linked data models ontology submission rb in block levels in loop classes srv ncbo ncbo cron vendor bundle ruby bundler gems ontologies linked data lib ontologies linked data models ontology submission rb in block in process callbacks srv ncbo ncbo cron vendor bundle ruby bundler gems ontologies linked data lib ontologies linked data models ontology submission rb in delete if srv ncbo ncbo cron vendor bundle ruby bundler gems ontologies linked data lib ontologies linked data models ontology submission rb in process callbacks srv ncbo ncbo cron vendor bundle ruby bundler gems ontologies linked data lib ontologies linked data models ontology submission rb in block in loop classes usr local rbenv versions lib ruby benchmark rb in realtime srv ncbo ncbo cron vendor bundle ruby bundler gems ontologies linked data lib ontologies linked data models ontology submission rb in loop classes srv ncbo ncbo cron vendor bundle ruby bundler gems ontologies linked data lib ontologies linked data models ontology submission rb in process submission srv ncbo ncbo cron lib ncbo cron ontology submission parser rb in process submission srv ncbo ncbo cron lib ncbo cron ontology submission parser rb in block in process queue submissions srv ncbo ncbo cron lib ncbo cron ontology submission parser rb in each srv ncbo ncbo cron lib ncbo cron ontology submission parser rb in process queue submissions srv ncbo ncbo cron bin ncbo cron in block levels in srv ncbo ncbo cron lib ncbo cron scheduler rb in block levels in scheduled locking job srv ncbo ncbo cron lib ncbo cron scheduler rb in fork srv ncbo ncbo cron lib ncbo cron scheduler rb in block levels in scheduled locking job srv ncbo ncbo cron vendor bundle ruby gems mlanett redis lock lib redis lock rb in lock srv ncbo ncbo cron vendor bundle ruby gems mlanett redis lock lib redis lock rb in lock srv ncbo ncbo cron lib ncbo cron scheduler rb in block in scheduled locking job srv ncbo ncbo cron vendor bundle ruby gems rufus scheduler lib rufus sc jobs rb in trigger block srv ncbo ncbo cron vendor bundle ruby gems rufus scheduler lib rufus sc jobs rb in block in trigger srv ncbo ncbo cron vendor bundle ruby gems rufus scheduler lib rufus sc scheduler rb in block in 
trigger job
| 1
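The underlying failure in this record is malformed Turtle being handed to rapper during label generation. As a hedged illustration (not the ncbo_cron code), the generated file could be validated with rdflib before its triples are appended; the same kind of syntax error then surfaces as a parse exception:

```python
import rdflib

def turtle_is_valid(path: str) -> bool:
    """Return True if `path` parses as Turtle, False on a syntax error."""
    graph = rdflib.Graph()
    try:
        graph.parse(path, format="turtle")
    except Exception as exc:  # rdflib raises parser-specific subclasses
        print(f"invalid Turtle in {path}: {exc}")
        return False
    return True
```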
|
10,033
| 13,044,161,490
|
IssuesEvent
|
2020-07-29 03:47:23
|
tikv/tikv
|
https://api.github.com/repos/tikv/tikv
|
closed
|
UCP: Migrate scalar function `Convert` from TiDB
|
challenge-program-2 component/coprocessor difficulty/easy sig/coprocessor
|
## Description
Port the scalar function `Convert` from TiDB to coprocessor.
## Score
* 50
## Mentor(s)
* @andylokandy
## Recommended Skills
* Rust programming
## Learning Materials
Already implemented expressions ported from TiDB
- https://github.com/tikv/tikv/tree/master/components/tidb_query/src/rpn_expr
- https://github.com/tikv/tikv/tree/master/components/tidb_query/src/expr
|
2.0
|
UCP: Migrate scalar function `Convert` from TiDB -
## Description
Port the scalar function `Convert` from TiDB to coprocessor.
## Score
* 50
## Mentor(s)
* @andylokandy
## Recommended Skills
* Rust programming
## Learning Materials
Already implemented expressions ported from TiDB
- https://github.com/tikv/tikv/tree/master/components/tidb_query/src/rpn_expr
- https://github.com/tikv/tikv/tree/master/components/tidb_query/src/expr
|
process
|
ucp migrate scalar function convert from tidb description port the scalar function convert from tidb to coprocessor score mentor s andylokandy recommended skills rust programming learning materials already implemented expressions ported from tidb
| 1
|
319,922
| 23,795,547,114
|
IssuesEvent
|
2022-09-02 19:11:57
|
hytest-org/hytest
|
https://api.github.com/repos/hytest-org/hytest
|
closed
|
Find function for standard suite in existing Python library or libraries
|
documentation enhancement
|
From Sydney:
| Metric | Existing Python Lib | Reference |
| :----------- | :----------- | :----------- |
| Nash-Sutcliffe efficiency (NSE) | [hydroeval](https://pypi.org/project/hydroeval/), _nse_ |Nash, J. E., & Sutcliffe, J. V. (1970). River flow forecasting through conceptual models part I – A discussion of principles. Journal of hydrology, 10(3), 282-290. |
| Kling-Gupta efficiency (KGE) | [hydroeval](https://pypi.org/project/hydroeval/), _kge_ | Gupta, H. V., Kling, H., Yilmaz, K. K., & Martinez, G. F. (2009). Decomposition of the mean squared error and NSE performance criteria: Implications for improving hydrological modelling. Journal of hydrology, 377(1-2), 80-91. |
| logNSE | [hydroeval](https://pypi.org/project/hydroeval/), _nse_ , using transform='log' | Oudin, L., Andréassian, V., Mathevet, T., Perrin, C., & Michel, C. (2006). Dynamic averaging of rainfall-runoff model simulations from complementary model parameterizations. Water Resources Research, 42(7). |
| percent bias | [hydroeval](https://pypi.org/project/hydroeval/), _pbias_ | a measure of the mean tendency of simulated values to be greater or less than associated observed values, units of percent |
| ratio of standard deviation | [Statistics](https://www.geeksforgeeks.org/statistical-functions-in-python-set-2-measure-of-spread/) module in Python provides a function known as stdev(); this metric is just stdev(simulated values) divided by stdev(observed values) |standard deviation of simulated values divided by the standard deviation of observed values|
| Pearson Correlation | [pearsonr() SciPy function](https://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.pearsonr.html) | K. Pearson (1896, 1900, 1920) |
|Spearman Correlation | [spearmanr() SciPy function](https://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.spearmanr.html) | Charles Spearman (1904, 1910) |
|percent bias in midsegment slope of the flow-duration curve (FDC) between Q20-Q70 | exists in R package, hydroGOF, _[pbiasfdc](https://www.rdocumentation.org/packages/hydroGOF/versions/0.4-0/topics/pbiasfdc)_ |Yilmaz, K. K., Gupta, H. V., & Wagener, T. (2008). A process-based diagnostic approach to model evaluation: Application to the NWS distributed hydrologic model. Water Resources Research, 44(9). |
|percent bias in FDC low-segment volume (Q0-Q30)| see below * |Yilmaz, K. K., Gupta, H. V., & Wagener, T. (2008). A process-based diagnostic approach to model evaluation: Application to the NWS distributed hydrologic model. Water Resources Research, 44(9). |
|percent bias in FDC high-segment volume (Q98-Q100) | see below * |Yilmaz, K. K., Gupta, H. V., & Wagener, T. (2008). A process-based diagnostic approach to model evaluation: Application to the NWS distributed hydrologic model. Water Resources Research, 44(9). |
* The last two metrics from Yilmaz et al. (2008) are difficult to find online anywhere (in R or any language). These were coded originally by Erin (on our team) in R. The current calculation of all the Yilmaz metrics in the analysis/stats evaluation notebook needs to be double-checked against the R code, because I was getting different values and I believe the Python version may not be correct.
|
1.0
|
Find function for standard suite in existing Python library or libraries - From Sydney:
| Metric | Existing Python Lib | Reference |
| :----------- | :----------- | :----------- |
| Nash-Sutcliffe efficiency (NSE) | [hydroeval](https://pypi.org/project/hydroeval/), _nse_ |Nash, J. E., & Sutcliffe, J. V. (1970). River flow forecasting through conceptual models part I – A discussion of principles. Journal of hydrology, 10(3), 282-290. |
| Kling-Gupta efficiency (KGE) | [hydroeval](https://pypi.org/project/hydroeval/), _kge_ | Gupta, H. V., Kling, H., Yilmaz, K. K., & Martinez, G. F. (2009). Decomposition of the mean squared error and NSE performance criteria: Implications for improving hydrological modelling. Journal of hydrology, 377(1-2), 80-91. |
| logNSE | [hydroeval](https://pypi.org/project/hydroeval/), _nse_ , using transform='log' | Oudin, L., Andréassian, V., Mathevet, T., Perrin, C., & Michel, C. (2006). Dynamic averaging of rainfall-runoff model simulations from complementary model parameterizations. Water Resources Research, 42(7). |
| percent bias | [hydroeval](https://pypi.org/project/hydroeval/), _pbias_ | a measure of the mean tendency of simulated values to be greater or less than associated observed values, units of percent |
| ratio of standard deviation | [Statistics](https://www.geeksforgeeks.org/statistical-functions-in-python-set-2-measure-of-spread/) module in Python provides a function known as stdev(); this metric is just stdev(simulated values) divided by stdev(observed values) |standard deviation of simulated values divided by the standard deviation of observed values|
| Pearson Correlation | [pearsonr() SciPy function](https://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.pearsonr.html) | K. Pearson (1896, 1900, 1920) |
|Spearman Correlation | [spearmanr() SciPy function](https://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.spearmanr.html) | Charles Spearman (1904, 1910) |
|percent bias in midsegment slope of the flow-duration curve (FDC) between Q20-Q70 | exists in R package, hydroGOF, _[pbiasfdc](https://www.rdocumentation.org/packages/hydroGOF/versions/0.4-0/topics/pbiasfdc)_ |Yilmaz, K. K., Gupta, H. V., & Wagener, T. (2008). A process-based diagnostic approach to model evaluation: Application to the NWS distributed hydrologic model. Water Resources Research, 44(9). |
|percent bias in FDC low-segment volume (Q0-Q30)| see below * |Yilmaz, K. K., Gupta, H. V., & Wagener, T. (2008). A process-based diagnostic approach to model evaluation: Application to the NWS distributed hydrologic model. Water Resources Research, 44(9). |
|percent bias in FDC high-segment volume (Q98-Q100) | see below * |Yilmaz, K. K., Gupta, H. V., & Wagener, T. (2008). A process-based diagnostic approach to model evaluation: Application to the NWS distributed hydrologic model. Water Resources Research, 44(9). |
* The last two metrics from Yilmaz et al. (2008) are difficult to find online anywhere (in R or any language). These were coded originally by Erin (on our team) in R. The current calculation of all the Yilmaz metrics in the analysis/stats evaluation notebook needs to be double-checked against the R code, because I was getting different values and I believe the Python version may not be correct.
|
non_process
|
find function for standard suite in existing python library or libraries from sydney metric existing python lib reference nash sutcliffe efficiency nse nse nash j e sutcliffe j v river flow forecasting through conceptual models part iβa discussion of principles journal of hydrology kling gupta efficiency kge kge gupta h v kling h yilmaz k k martinez g f decomposition of the mean squared error and nse performance criteria implications for improving hydrological modelling journal of hydrology lognse nse using transform βlogβ oudin l andrΓ©assian v mathevet t perrin c michel c dynamic averaging of rainfallβrunoff model simulations from complementary model parameterizations water resources research percent bias pbias a measure of the mean tendency of simulated values to be greater or less than associated observed values units of percent ratio of standard deviation module in python provides a function known as stdev this is just stdev simulated values divided by the stdev observed values standard deviation of simulated values divided by the standard deviation of observed values pearson correlation k pearson spearman correlation charles spearman percent bias in midsegment slope of the flow duration curve fdc between exists in r package hydrogof yilmaz k k gupta h v wagener t a processβbased diagnostic approach to model evaluation application to the nws distributed hydrologic model water resources research percent bias in fdc low segment volume see below yilmaz k k gupta h v wagener t a processβbased diagnostic approach to model evaluation application to the nws distributed hydrologic model water resources research percent bias in fdc high segment volume see below yilmaz k k gupta h v wagener t a processβbased diagnostic approach to model evaluation application to the nws distributed hydrologic model water resources research the last two metrics from yilmaz et al are difficult to find online anywhere in r or any language these were coded originally by erin on our team in r the current calculation of all the yilmaz metrics in the analysis stats evaluation notebook needs to be double checked against the r code because i was getting different values and i believe the python version may not be correct
| 0
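For the metrics in the table with simple closed forms, a minimal NumPy sketch using the standard textbook definitions (not checked against the team's R code) might look like this:

```python
import numpy as np

def nse(sim, obs):
    """Nash-Sutcliffe efficiency (Nash & Sutcliffe, 1970)."""
    sim, obs = np.asarray(sim, float), np.asarray(obs, float)
    return 1.0 - np.sum((sim - obs) ** 2) / np.sum((obs - obs.mean()) ** 2)

def kge(sim, obs):
    """Kling-Gupta efficiency (Gupta et al., 2009)."""
    sim, obs = np.asarray(sim, float), np.asarray(obs, float)
    r = np.corrcoef(sim, obs)[0, 1]   # Pearson correlation
    alpha = sim.std() / obs.std()     # ratio of standard deviations
    beta = sim.mean() / obs.mean()    # bias ratio
    return 1.0 - np.sqrt((r - 1) ** 2 + (alpha - 1) ** 2 + (beta - 1) ** 2)

def pbias(sim, obs):
    """Percent bias: mean tendency of sim to exceed obs, in percent."""
    sim, obs = np.asarray(sim, float), np.asarray(obs, float)
    return 100.0 * np.sum(sim - obs) / np.sum(obs)
```

logNSE is then just `nse(np.log(sim), np.log(obs))`, per the Oudin et al. transform.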
|
21,031
| 27,969,940,020
|
IssuesEvent
|
2023-03-25 00:20:22
|
darktable-org/darktable
|
https://api.github.com/repos/darktable-org/darktable
|
closed
|
define pipeline-wise scene-referred white value
|
feature: new scope: image processing no-issue-activity
|
**Context**
Scene-referred workflow means white can have any luminance value > 100%. However, some functions, like power, exhibit a particular behaviour on [0; 1] that requires normalization by the white value to get the expected behaviour. Such normalizations happen:
* in color balance RGB (white fulcrum),
* in tone equalizer (mask exposure compensation),
* in all LUT-based modules (3D LUTs and tone curves; they don't happen there yet, but they should),
Then, filmic makes use of the scene white corrected by all earlier modules to define the upper bound of its tone mapping.
**Problem**
When editing pictures in batch, only one of them is fully edited (look and corrections); the edit is then transferred to the other pictures of the same series.
Minor global-exposure tweaks are, however, necessary on a per-picture basis. In that case, every scene-white setting used for normalization needs to be manually re-adjusted in each module that uses it, which is unnecessarily time-consuming.
**Solution**
1. Create a "global exposure" module. This one would apply the base exposure that is the default in the scene-referred workflow, would not allow multiple instances, and would also define a scene-referred white exposure for the whole pipeline.
2. Create a `compute_white()` method in the IOP API. This will use, as input, a virtual 9×9 image with the center 3×3 pixels initialized to the global exposure scene-referred white. Each module implementing that method will run its own `process()` method with current params on top of that image. At the end, the central white value will serve as an update of the scene-referred white at this stage of the pipe. If the module instance is using a drawn mask, `compute_white()` will be bypassed.
3. Allow "auto" or "pipeline" setting for all modules using scene white as a normalization, reading the updated white at the current place in the pipe.
The reason for using an image rather than a single value is to support modules applying blurs and local filters. This also allows a completely blind evaluation of the white point, which doesn't require a priori knowledge of the internals of the module.
|
1.0
|
define pipeline-wise scene-referred white value - **Context**
Scene-referred workflow means white can have any luminance value > 100%. However, some functions, like power, exhibit a particular behaviour on [0; 1] that requires normalization by the white value to get the expected behaviour. Such normalizations happen:
* in color balance RGB (white fulcrum),
* in tone equalizer (mask exposure compensation),
* in all LUT-based modules (3D LUTs and tone curves; they don't happen there yet, but they should),
Then, filmic makes use of the scene white corrected by all earlier modules to define the upper bound of its tone mapping.
**Problem**
When editing pictures in batch, only one of them is fully edited (look and corrections); the edit is then transferred to the other pictures of the same series.
Minor global-exposure tweaks are, however, necessary on a per-picture basis. In that case, every scene-white setting used for normalization needs to be manually re-adjusted in each module that uses it, which is unnecessarily time-consuming.
**Solution**
1. Create a "global exposure" module. This one would apply the base exposure that is the default in the scene-referred workflow, would not allow multiple instances, and would also define a scene-referred white exposure for the whole pipeline.
2. Create a `compute_white()` method in the IOP API. This will use, as input, a virtual 9×9 image with the center 3×3 pixels initialized to the global exposure scene-referred white. Each module implementing that method will run its own `process()` method with current params on top of that image. At the end, the central white value will serve as an update of the scene-referred white at this stage of the pipe. If the module instance is using a drawn mask, `compute_white()` will be bypassed.
3. Allow "auto" or "pipeline" setting for all modules using scene white as a normalization, reading the updated white at the current place in the pipe.
The reason for using an image rather than a single value is to support modules applying blurs and local filters. This also allows a completely blind evaluation of the white point, which doesn't require a priori knowledge of the internals of the module.
|
process
|
define pipeline wise scene referred white value context scene referred workflow means white can have any luminance value however some functions like power exhibit a particular behaviour in that require normalization by the white value to get the expected behaviour such normalizations happen in color balance rgb white fulcrum in tone equalizer mask exposure compensation in all lut based modules luts and tonecurves β they don t happen yet but they should then filmic makes use of the scene white corrected by all earlier modules to define the upper bound of its tone mapping problem when editing pictures in batch only one of them is fully edited look and corrections then the edit is transferred to the other pictures of the same series global exposure minor tweaks are however necessary on a picture wise basis in this case all settings of scene white for normalization purposes need to be manually re adjusted in all the modules that use it it is unnecessarily time consuming solution create a global exposure module this one would apply the base exposure that is default in scene referred would not be allowed multiple instances but will also define a scene referred white exposure for the whole pipeline create a compute white method in the iop api this will use as input a virtual Γ image with the center Γ pixels inited to the global exposure scene referred white each module implementing that method will run its own process method with current params on top of that image at the end the central white value will serve as an update of the scene referred white at this stage of the pipe if the module instance is using drawn mask compute white will be bypassed allow auto or pipeline setting for all modules using scene white as a normalization reading the updated white at the current place in the pipe the reason for using an image rather than a single value is to support modules applying blurs and local filters this also allows a completely blind evaluation of the white point that doesn t require a priori knowledge of the internals of the module
| 1
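The `compute_white()` proposal in step 2 of the record above is easy to prototype: build a probe image, push it through the module's pixel transform, and read back the centre. A hypothetical NumPy sketch of the idea (not darktable's IOP API):

```python
import numpy as np

def compute_white(process, scene_white: float) -> float:
    """Blindly propagate the scene-referred white through one module.

    `process` is the module's pixel transform (image -> image).
    """
    probe = np.zeros((9, 9), dtype=np.float32)
    probe[3:6, 3:6] = scene_white   # centre 3x3 pixels at current white
    out = process(probe)
    return float(out[4, 4])         # centre pixel = updated white

# Example: a module applying +1 EV of exposure doubles the white value.
assert compute_white(lambda img: img * 2.0, scene_white=4.0) == 8.0
```

Because the probe is an image rather than a scalar, modules that blur or filter locally still report a meaningful centre value, which is exactly the blind-evaluation property the proposal argues for.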
|
327,396
| 24,133,334,720
|
IssuesEvent
|
2022-09-21 09:14:33
|
paddy-exe/GDExtensionSummator
|
https://api.github.com/repos/paddy-exe/GDExtensionSummator
|
closed
|
Running vscode tasks generates `.x86_64` libs, but `.gdextension` file looks for `.64` (Windows)
|
bug documentation
|
As the title mentions, when running the tasks in vscode on Windows, libraries are automatically compiled and named `libgdsummator.windows.debug.x86_64.*`, while the `.gdextension` file in this repository looks for `.64` files.
It could be useful to mention that the `.gdextension` file should be modified based on the system type, or to give some extra information about the setup in the README.

**Environment**
*Godot-cpp version:* the one recursively cloned from this template
*Godot Editor version:* 4.0 beta 1
*Windows:* Microsoft Windows 11 PRO Build 22622 x64
|
1.0
|
Running vscode tasks generates `.x86_64` libs, but `.gdextension` file looks for `.64` (Windows) - As the title mentions, when running the tasks in vscode on Windows, libraries are automatically compiled and named `libgdsummator.windows.debug.x86_64.*`, while the `.gdextension` file in this repository looks for `.64` files.
It could be useful to mention that the `.gdextension` file should be modified based on the system type, or to give some extra information about the setup in the README.

**Environment**
*Godot-cpp version:* the one recursively cloned from this template
*Godot Editor version:* 4.0 beta 1
*Windows:* Microsoft Windows 11 PRO Build 22622 x64
|
non_process
|
running vscode tasks generates libs but gdextension file looks for windows as the titles mentions when running the tasks in vscode on windows libraries are automatically compiled and named to libgdsummator windows debug while gdextension file in this repository looks for files could be useful mentioning that the gdextension file should be modified based on the system type or giving some extra information about the setup in the readme environment godot cpp version the one recursively cloned from this template godot editor version beta windows microsoft windows pro build
| 0
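For illustration, the `[libraries]` entries in the `.gdextension` file just need to match the artifact names SCons actually produces. Assuming the template keeps its binaries under `bin/` (paths and keys here are an assumption, not verified against the repo), a corrected Windows entry might look like:

```ini
[libraries]
windows.debug.x86_64 = "res://bin/libgdsummator.windows.debug.x86_64.dll"
windows.release.x86_64 = "res://bin/libgdsummator.windows.release.x86_64.dll"
```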
|
9,708
| 12,704,561,357
|
IssuesEvent
|
2020-06-23 01:49:42
|
googleapis/python-phishingprotection
|
https://api.github.com/repos/googleapis/python-phishingprotection
|
closed
|
Beta release
|
api: phishingprotection type: process
|
Package name: **google-cloud-phishing-protection**
Current release: **alpha**
Proposed release: **beta**
## Instructions
Check the lists below, adding tests / documentation as required. Once all the "required" boxes are ticked, please create a release and close this issue.
## Required
- [x] Server API is beta
- [x] Service API is public
- [x] Client surface is mostly stable (no known issues that could significantly change the surface)
- [x] All manual types and methods have comment documentation
- [x] Package name is idiomatic for the platform
- [ ] At least one integration/smoke test is defined and passing
- [x] Central GitHub README lists and points to the per-API README
- [x] Per-API README links to product page on cloud.google.com
- [ ] Manual code has been reviewed for API stability by repo owner
## Optional
- [ ] Most common / important scenarios have descriptive samples
- [ ] Public manual methods have at least one usage sample each (excluding overloads)
- [ ] Per-API README includes a full description of the API
- [ ] Per-API README contains at least one "getting started" sample using the most common API scenario
- [ ] Manual code has been reviewed by API producer
- [ ] Manual code has been reviewed by a DPE responsible for samples
- [ ] 'Client LIbraries' page is added to the product documentation in 'APIs & Reference' section of the product's documentation on Cloud Site
|
1.0
|
Beta release - Package name: **google-cloud-phishing-protection**
Current release: **alpha**
Proposed release: **beta**
## Instructions
Check the lists below, adding tests / documentation as required. Once all the "required" boxes are ticked, please create a release and close this issue.
## Required
- [x] Server API is beta
- [x] Service API is public
- [x] Client surface is mostly stable (no known issues that could significantly change the surface)
- [x] All manual types and methods have comment documentation
- [x] Package name is idiomatic for the platform
- [ ] At least one integration/smoke test is defined and passing
- [x] Central GitHub README lists and points to the per-API README
- [x] Per-API README links to product page on cloud.google.com
- [ ] Manual code has been reviewed for API stability by repo owner
## Optional
- [ ] Most common / important scenarios have descriptive samples
- [ ] Public manual methods have at least one usage sample each (excluding overloads)
- [ ] Per-API README includes a full description of the API
- [ ] Per-API README contains at least one "getting started" sample using the most common API scenario
- [ ] Manual code has been reviewed by API producer
- [ ] Manual code has been reviewed by a DPE responsible for samples
- [ ] 'Client LIbraries' page is added to the product documentation in 'APIs & Reference' section of the product's documentation on Cloud Site
|
process
|
beta release package name google cloud phishing protection current release alpha proposed release beta instructions check the lists below adding tests documentation as required once all the required boxes are ticked please create a release and close this issue required server api is beta service api is public client surface is mostly stable no known issues that could significantly change the surface all manual types and methods have comment documentation package name is idiomatic for the platform at least one integration smoke test is defined and passing central github readme lists and points to the per api readme per api readme links to product page on cloud google com manual code has been reviewed for api stability by repo owner optional most common important scenarios have descriptive samples public manual methods have at least one usage sample each excluding overloads per api readme includes a full description of the api per api readme contains at least one βgetting startedβ sample using the most common api scenario manual code has been reviewed by api producer manual code has been reviewed by a dpe responsible for samples client libraries page is added to the product documentation in apis reference section of the product s documentation on cloud site
| 1
|
20,609
| 27,276,103,122
|
IssuesEvent
|
2023-02-23 05:26:43
|
sebastianbergmann/phpunit
|
https://api.github.com/repos/sebastianbergmann/phpunit
|
closed
|
Constants defined in configuration file are not defined in bootstrap file when test is run in separate process
|
type/bug feature/test-runner feature/process-isolation version/10
|
<!--
- Please do not report an issue for a version of PHPUnit that is no longer supported. A list of currently supported versions of PHPUnit is available at https://phpunit.de/supported-versions.html.
- Please do not report an issue if you are using a version of PHP that is not supported by the version of PHPUnit you are using. A list that shows which version of PHP is supported by which version of PHPUnit is available at https://phpunit.de/supported-versions.html.
- Please do not report an issue if you are not using PHPUnit directly, but rather a wrapper around it such as Symfony's PHPUnit Bridge
- Please fill in this template according to your issue.
- Please keep the table shown below at the top of your issue.
- Please include the output of "composer info | sort" if you installed PHPUnit using Composer.
- Please post code as text (using proper markup). Do not post screenshots of code.
- Visit https://phpunit.de/support.html if you are looking for support.
- Please remove this comment before submitting your issue.
-->
| Q | A
| --------------------| ---------------
| PHPUnit version | 10.0.11
| PHP version | 8.1.16
| Installation Method | Composer
#### Summary
Constants defined in the configuration file are not defined in the bootstrap file when running in a separate process.
#### Current behavior
Show the error: Test was run in child process and ended unexpectedly
```
PHPUnit 10.0.11 by Sebastian Bergmann and contributors.
Runtime: PHP 8.1.16
Configuration: phpunit-10.xml
.E 2 / 2 (100%)
Time: 00:00.064, Memory: 8.00 MB
There was 1 error:
1) FooTest::testFoo2
Test was run in child process and ended unexpectedly
ERRORS!
Tests: 2, Assertions: 1, Errors: 1.
```
#### How to reproduce
POC: https://github.com/MauricioFauth/phpunit-10-test
`phpunit.xml`:
```xml
<?xml version="1.0" encoding="UTF-8"?>
<phpunit xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:noNamespaceSchemaLocation="https://schema.phpunit.de/10.0/phpunit.xsd"
bootstrap="tests/bootstrap.php"
cacheDirectory=".phpunit10.cache"
executionOrder="depends,defects"
beStrictAboutOutputDuringTests="true"
failOnRisky="true"
failOnWarning="true"
colors="true">
<testsuites>
<testsuite name="default">
<directory>tests</directory>
</testsuite>
</testsuites>
<coverage>
<include>
<directory suffix=".php">src</directory>
</include>
</coverage>
<php>
<const name="CONST_TEST" value="1"/>
</php>
</phpunit>
```
`tests/bootstrap.php`:
```php
<?php
if (! defined('CONST_TEST')) {
exit;
}
require dirname(__DIR__) . '/vendor/autoload.php';
```
`src/function.php`:
```php
<?php
declare(strict_types=1);
function foo(): string {
return 'foo';
}
```
`tests/FooTest.php`:
```php
<?php
declare(strict_types=1);
use PHPUnit\Framework\TestCase;
class FooTest extends TestCase
{
public function testFoo(): void
{
$this->assertSame('foo', foo());
}
/**
* @runInSeparateProcess
* @preserveGlobalState disabled
*/
public function testFoo2(): void
{
$this->assertSame('foo', foo());
}
}
```
#### Expected behavior
Constants should be available.
```
PHPUnit 9.6.3 by Sebastian Bergmann and contributors.
Runtime: PHP 8.0.28
Configuration: phpunit-9.xml
.. 2 / 2 (100%)
Time: 00:00.088, Memory: 6.00 MB
OK (2 tests, 2 assertions)
```
|
1.0
|
Constants defined in configuration file are not defined in bootstrap file when test is run in separate process - <!--
- Please do not report an issue for a version of PHPUnit that is no longer supported. A list of currently supported versions of PHPUnit is available at https://phpunit.de/supported-versions.html.
- Please do not report an issue if you are using a version of PHP that is not supported by the version of PHPUnit you are using. A list that shows which version of PHP is supported by which version of PHPUnit is available at https://phpunit.de/supported-versions.html.
- Please do not report an issue if you are not using PHPUnit directly, but rather a wrapper around it such as Symfony's PHPUnit Bridge
- Please fill in this template according to your issue.
- Please keep the table shown below at the top of your issue.
- Please include the output of "composer info | sort" if you installed PHPUnit using Composer.
- Please post code as text (using proper markup). Do not post screenshots of code.
- Visit https://phpunit.de/support.html if you are looking for support.
- Please remove this comment before submitting your issue.
-->
| Q | A
| --------------------| ---------------
| PHPUnit version | 10.0.11
| PHP version | 8.1.16
| Installation Method | Composer
#### Summary
Constants defined in the configuration file are not defined in the bootstrap file when running in a separate process.
#### Current behavior
Show the error: Test was run in child process and ended unexpectedly
```
PHPUnit 10.0.11 by Sebastian Bergmann and contributors.
Runtime: PHP 8.1.16
Configuration: phpunit-10.xml
.E 2 / 2 (100%)
Time: 00:00.064, Memory: 8.00 MB
There was 1 error:
1) FooTest::testFoo2
Test was run in child process and ended unexpectedly
ERRORS!
Tests: 2, Assertions: 1, Errors: 1.
```
#### How to reproduce
POC: https://github.com/MauricioFauth/phpunit-10-test
`phpunit.xml`:
```xml
<?xml version="1.0" encoding="UTF-8"?>
<phpunit xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:noNamespaceSchemaLocation="https://schema.phpunit.de/10.0/phpunit.xsd"
bootstrap="tests/bootstrap.php"
cacheDirectory=".phpunit10.cache"
executionOrder="depends,defects"
beStrictAboutOutputDuringTests="true"
failOnRisky="true"
failOnWarning="true"
colors="true">
<testsuites>
<testsuite name="default">
<directory>tests</directory>
</testsuite>
</testsuites>
<coverage>
<include>
<directory suffix=".php">src</directory>
</include>
</coverage>
<php>
<const name="CONST_TEST" value="1"/>
</php>
</phpunit>
```
`tests/bootstrap.php`:
```php
<?php
if (! defined('CONST_TEST')) {
exit;
}
require dirname(__DIR__) . '/vendor/autoload.php';
```
`src/function.php`:
```php
<?php
declare(strict_types=1);
function foo(): string {
return 'foo';
}
```
`tests/FooTest.php`:
```php
<?php
declare(strict_types=1);
use PHPUnit\Framework\TestCase;
class FooTest extends TestCase
{
public function testFoo(): void
{
$this->assertSame('foo', foo());
}
/**
* @runInSeparateProcess
* @preserveGlobalState disabled
*/
public function testFoo2(): void
{
$this->assertSame('foo', foo());
}
}
```
#### Expected behavior
Constants should be available.
```
PHPUnit 9.6.3 by Sebastian Bergmann and contributors.
Runtime: PHP 8.0.28
Configuration: phpunit-9.xml
.. 2 / 2 (100%)
Time: 00:00.088, Memory: 6.00 MB
OK (2 tests, 2 assertions)
```
|
process
|
constants defined in configuration file are not defined in bootstrap file when test is run in separate process please do not report an issue for a version of phpunit that is no longer supported a list of currently supported versions of phpunit is available at please do not report an issue if you are using a version of php that is not supported by the version of phpunit you are using a list that shows which version of php is supported by which version of phpunit is available at please do not report an issue if you are not using phpunit directly but rather a wrapper around it such as symfony s phpunit bridge please fill in this template according to your issue please keep the table shown below at the top of your issue please include the output of composer info sort if you installed phpunit using composer please post code as text using proper markup do not post screenshots of code visit if you are looking for support please remove this comment before submitting your issue q a phpunit version php version installation method composer summary constants defined in configuration file are not defined in the bootstrap file when running in a separated process current behavior show the error test was run in child process and ended unexpectedly phpunit by sebastian bergmann and contributors runtime php configuration phpunit xml e time memory mb there was error footest test was run in child process and ended unexpectedly errors tests assertions errors how to reproduce poc phpunit xml xml phpunit xmlns xsi xsi nonamespaceschemalocation bootstrap tests bootstrap php cachedirectory cache executionorder depends defects bestrictaboutoutputduringtests true failonrisky true failonwarning true colors true tests src tests bootstrap php php php if defined const test exit require dirname dir vendor autoload php src function php php php declare strict types function foo string return foo tests footest php php php declare strict types use phpunit framework testcase class footest extends testcase public function testfoo void this assertsame foo foo runinseparateprocess preserveglobalstate disabled public function void this assertsame foo foo expected behavior constants should be available phpunit by sebastian bergmann and contributors runtime php configuration phpunit xml time memory mb ok tests assertions
| 1
|
73,691
| 9,693,252,310
|
IssuesEvent
|
2019-05-24 15:37:14
|
networktocode/yangify
|
https://api.github.com/repos/networktocode/yangify
|
closed
|
Step 6 typo
|
bug documentation
|
cd docs/tutorial/parsing-quickstart/
This should be:
cd docs/tutorials/parsing-quickstart/
|
1.0
|
Step 6 typo - cd docs/tutorial/parsing-quickstart/
This should be:
cd docs/tutorials/parsing-quickstart/
|
non_process
|
step typo cd docs tutorial parsing quickstart this should be cd docs tutorials parsing quickstart
| 0
|
263,457
| 23,059,497,919
|
IssuesEvent
|
2022-07-25 08:38:55
|
elastic/kibana
|
https://api.github.com/repos/elastic/kibana
|
closed
|
[test-failed]: Chrome UI Functional Tests.test/functional/apps/visualize/group3/_pie_chart·ts - visualize app pie chart other bucket should apply correct filter on other bucket by clicking on a legend
|
Team:VisEditors failed-test test-cloud v8.3.3
|
**Version: 8.3.3**
**Class: Chrome UI Functional Tests.test/functional/apps/visualize/group3/_pie_chart·ts**
**Stack Trace:**
```
Error: retry.try timeout: Error: expected [ 'ios', 'win 7', 'win 8', 'win xp' ] to sort of equal [ 'Missing', 'osx' ]
at Assertion.assert (node_modules/@kbn/expect/expect.js:100:11)
at Assertion.eql (node_modules/@kbn/expect/expect.js:244:8)
at /opt/local-ssd/buildkite/builds/kb-n2-4-4c9b4a1f1306317e/elastic/estf-cloud-kibana-functional-tests/kibana/test/functional/services/visualizations/pie_chart.ts:242:33
at runMicrotasks (<anonymous>)
at processTicksAndRejections (node:internal/process/task_queues:96:5)
at runAttempt (test/common/services/retry/retry_for_success.ts:29:15)
at retryForSuccess (test/common/services/retry/retry_for_success.ts:68:21)
at RetryService.try (test/common/services/retry/retry.ts:31:12)
at PieChartService.expectPieChartLabels (test/functional/services/visualizations/pie_chart.ts:240:5)
at Context.<anonymous> (test/functional/apps/visualize/group3/_pie_chart.ts:127:9)
at onFailure (test/common/services/retry/retry_for_success.ts:17:9)
at retryForSuccess (test/common/services/retry/retry_for_success.ts:59:13)
at RetryService.try (test/common/services/retry/retry.ts:31:12)
at PieChartService.expectPieChartLabels (test/functional/services/visualizations/pie_chart.ts:240:5)
at Context.<anonymous> (test/functional/apps/visualize/group3/_pie_chart.ts:127:9)
at Object.apply (node_modules/@kbn/test/target_node/functional_test_runner/lib/mocha/wrap_function.js:87:16)
```
**Other test failures:**
- visualize app pie chart other bucket should show two levels of other buckets
_Test Report: https://buildkite.com/elastic/estf-cloud-kibana-functional-tests/builds/432_
|
2.0
|
[test-failed]: Chrome UI Functional Tests.test/functional/apps/visualize/group3/_pie_chart·ts - visualize app pie chart other bucket should apply correct filter on other bucket by clicking on a legend - **Version: 8.3.3**
**Class: Chrome UI Functional Tests.test/functional/apps/visualize/group3/_pie_chart·ts**
**Stack Trace:**
```
Error: retry.try timeout: Error: expected [ 'ios', 'win 7', 'win 8', 'win xp' ] to sort of equal [ 'Missing', 'osx' ]
at Assertion.assert (node_modules/@kbn/expect/expect.js:100:11)
at Assertion.eql (node_modules/@kbn/expect/expect.js:244:8)
at /opt/local-ssd/buildkite/builds/kb-n2-4-4c9b4a1f1306317e/elastic/estf-cloud-kibana-functional-tests/kibana/test/functional/services/visualizations/pie_chart.ts:242:33
at runMicrotasks (<anonymous>)
at processTicksAndRejections (node:internal/process/task_queues:96:5)
at runAttempt (test/common/services/retry/retry_for_success.ts:29:15)
at retryForSuccess (test/common/services/retry/retry_for_success.ts:68:21)
at RetryService.try (test/common/services/retry/retry.ts:31:12)
at PieChartService.expectPieChartLabels (test/functional/services/visualizations/pie_chart.ts:240:5)
at Context.<anonymous> (test/functional/apps/visualize/group3/_pie_chart.ts:127:9)
at onFailure (test/common/services/retry/retry_for_success.ts:17:9)
at retryForSuccess (test/common/services/retry/retry_for_success.ts:59:13)
at RetryService.try (test/common/services/retry/retry.ts:31:12)
at PieChartService.expectPieChartLabels (test/functional/services/visualizations/pie_chart.ts:240:5)
at Context.<anonymous> (test/functional/apps/visualize/group3/_pie_chart.ts:127:9)
at Object.apply (node_modules/@kbn/test/target_node/functional_test_runner/lib/mocha/wrap_function.js:87:16)
```
**Other test failures:**
- visualize app pie chart other bucket should show two levels of other buckets
_Test Report: https://buildkite.com/elastic/estf-cloud-kibana-functional-tests/builds/432_
|
non_process
|
chrome ui functional tests test functional apps visualize pie chartΒ·ts visualize app pie chart other bucket should apply correct filter on other bucket by clicking on a legend version class chrome ui functional tests test functional apps visualize pie chartΒ·ts stack trace error retry try timeout error expected to sort of equal at assertion assert node modules kbn expect expect js at assertion eql node modules kbn expect expect js at opt local ssd buildkite builds kb elastic estf cloud kibana functional tests kibana test functional services visualizations pie chart ts at runmicrotasks at processticksandrejections node internal process task queues at runattempt test common services retry retry for success ts at retryforsuccess test common services retry retry for success ts at retryservice try test common services retry retry ts at piechartservice expectpiechartlabels test functional services visualizations pie chart ts at context test functional apps visualize pie chart ts at onfailure test common services retry retry for success ts at retryforsuccess test common services retry retry for success ts at retryservice try test common services retry retry ts at piechartservice expectpiechartlabels test functional services visualizations pie chart ts at context test functional apps visualize pie chart ts at object apply node modules kbn test target node functional test runner lib mocha wrap function js other test failures visualize app pie chart other bucket should show two levels of other buckets test report
| 0
|
6,608
| 9,693,630,036
|
IssuesEvent
|
2019-05-24 16:35:47
|
googleapis/google-cloud-python
|
https://api.github.com/repos/googleapis/google-cloud-python
|
opened
|
Spanner: 'test_reload_instance' systest flakes.
|
api: spanner flaky testing type: process
|
From [this Kokoro run](https://source.cloud.google.com/results/invocations/9c28a3d8-da69-4bca-9b82-db7166c03522/targets/cloud-devrel%2Fclient-libraries%2Fgoogle-cloud-python%2Fpresubmit%2Fspanner/log)
```python
self = <tests.system.test_system.TestInstanceAdminAPI testMethod=test_reload_instance>
def test_reload_instance(self):
# Use same arguments as Config.INSTANCE (created in `setUpModule`)
# so we can use reload() on a fresh instance.
instance = Config.CLIENT.instance(INSTANCE_ID)
# Make sure metadata unset before reloading.
instance.display_name = None
instance.reload()
> self.assertEqual(instance.display_name, Config.INSTANCE.display_name)
E AssertionError: 'GCP System Tests' != 'Foo Bar Baz'
E - GCP System Tests
E + Foo Bar Baz
```
Maybe somebody tweaked the title in the console while the test was running? Because it is "Foo Bar Baz" now.
|
1.0
|
Spanner: 'test_reload_instance' systest flakes. - From [this Kokoro run](https://source.cloud.google.com/results/invocations/9c28a3d8-da69-4bca-9b82-db7166c03522/targets/cloud-devrel%2Fclient-libraries%2Fgoogle-cloud-python%2Fpresubmit%2Fspanner/log)
```python
self = <tests.system.test_system.TestInstanceAdminAPI testMethod=test_reload_instance>
def test_reload_instance(self):
# Use same arguments as Config.INSTANCE (created in `setUpModule`)
# so we can use reload() on a fresh instance.
instance = Config.CLIENT.instance(INSTANCE_ID)
# Make sure metadata unset before reloading.
instance.display_name = None
instance.reload()
> self.assertEqual(instance.display_name, Config.INSTANCE.display_name)
E AssertionError: 'GCP System Tests' != 'Foo Bar Baz'
E - GCP System Tests
E + Foo Bar Baz
```
Maybe somebody tweaked the title in the console while the test was running? Because it is "Foo Bar Baz" now.
|
process
|
spanner test reload instance systest flakes from python self def test reload instance self use same arguments as config instance created in setupmodule so we can use reload on a fresh instance instance config client instance instance id make sure metadata unset before reloading instance display name none instance reload self assertequal instance display name config instance display name e assertionerror gcp system tests foo bar baz e gcp system tests e foo bar baz maybe somebody tweaked the title in the console while the test was running because it is foo bar baz now
| 1
|
16,876
| 22,156,447,767
|
IssuesEvent
|
2022-06-03 23:37:51
|
0xffset/rOSt
|
https://api.github.com/repos/0xffset/rOSt
|
opened
|
Add Thread Local Storage
|
memory processes
|
When threads are created in a process, each one should get a dedicated section of memory for thread-local data. A pointer to this section should be stored in the FS base segment when the thread is dispatched.
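For illustration only, a minimal Python sketch of the contract this issue asks for: each thread sees its own copy of the data. The kernel itself would implement this against the FS base register rather than with a language runtime; `threading.local` is just the closest user-space analogy.
```python
import threading

# Each thread gets its own view of `tls.value`, analogous to a dedicated
# per-thread memory section reached through the FS base segment register.
tls = threading.local()

def worker(n):
    tls.value = n  # lands in this thread's storage only
    print(threading.current_thread().name, tls.value)

threads = [threading.Thread(target=worker, args=(i,)) for i in range(3)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```
Each thread printing its own value is the user-space effect the FS-based section is meant to provide.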
|
1.0
|
Add Thread Local Storage - When threads are created in a process, each one should get a dedicated section of memory for thread-local data. A pointer to this section should be stored in the FS base segment when the thread is dispatched.
|
process
|
add thread local storage when threads are created in a process each one should get a dedicated section of memory for thread local data a pointer to this section should be stored in the fs base segment when the thread is dispatched
| 1
|
11,671
| 14,530,945,260
|
IssuesEvent
|
2020-12-14 20:01:43
|
pacificclimate/quail
|
https://api.github.com/repos/pacificclimate/quail
|
closed
|
Monthly Minimum of Daily Maximum Temperature
|
process
|
## Description
This function takes a climdexInput object as input and computes the monthly or annual minimum of daily maximum temperature.
## Function to wrap
[`climdex.txn`](https://github.com/pacificclimate/climdex.pcic/blob/master/R/climdex.r#L901)
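For orientation, a minimal pandas sketch of the statistic being wrapped, using synthetic data (the real computation is done by the R function linked above):
```python
import numpy as np
import pandas as pd

# One year of synthetic daily maximum temperatures.
dates = pd.date_range("2020-01-01", "2020-12-31", freq="D")
tmax = pd.Series(15 + 10 * np.sin(np.arange(len(dates)) / 58.0), index=dates)

# Monthly minimum of daily maximum temperature (TXn); resampling by year
# instead of month would give the annual statistic.
monthly_txn = tmax.resample("MS").min()
print(monthly_txn.head())
```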
|
1.0
|
Monthly Minimum of Daily Maximum Temperature - ## Description
This function takes a climdexInput object as input and computes the monthly or annual minimum of daily maximum temperature.
## Function to wrap
[`climdex.txn`](https://github.com/pacificclimate/climdex.pcic/blob/master/R/climdex.r#L901)
|
process
|
monthly minimum of daily maximum temperature description this function takes a climdexinput object as input and computes the monthly or annual minimum of daily maximum temperature function to wrap
| 1
|
115,568
| 9,805,043,657
|
IssuesEvent
|
2019-06-12 08:02:25
|
Elgg/Elgg
|
https://api.github.com/repos/Elgg/Elgg
|
closed
|
Admin-only set of pages for manual integration testing
|
dev usability tests
|
This could exercise various JS loading techniques (#6726), test presence of all expected JS APIs and values, input widgets, etc. Manual testing is better than no testing.
|
1.0
|
Admin-only set of pages for manual integration testing - This could exercise various JS loading techniques (#6726), test presence of all expected JS APIs and values, input widgets, etc. Manual testing is better than no testing.
|
non_process
|
admin only set of pages for manual integration testing this could exercise various js loading techniques test presence of all expected js apis and values input widgets etc manual testing is better than no testing
| 0
|
20,113
| 26,652,779,804
|
IssuesEvent
|
2023-01-25 14:49:48
|
open-telemetry/opentelemetry-collector-contrib
|
https://api.github.com/repos/open-telemetry/opentelemetry-collector-contrib
|
closed
|
Root Span in tailsampling processor will pass directly through the processor and go to the exporter?
|
bug processor/tailsampling needs triage
|
### Component(s)
processor/tailsampling
### What happened?
## Description
When I use the tailsampling processor to tail-sample traces in this case:
the config
```
tail_sampling:
decision_wait: 60s
num_traces: 5000
expected_new_traces_per_sec: 10
policies:
[
{
name: errors-policy,
type: numeric_attribute,
numeric_attribute: {key: http.status_code, min_value: 499, max_value: 600}
},
{
name: latency-5s,
type: latency,
latency: {threshold_ms: 5000}
},
]
```
the logic is: parent span --> child span sleeps 6s --> parent span sleeps 240s:
```
with tracer.start_span('TestSpan') as span:
span.log_kv({'event': 'test message', 'life': 42})
with tracer.start_span('ChildSpan', child_of=span) as child_span:
time.sleep(6)
child_span.set_tag('CTag', '1')
child_span.log_kv({'event': 'down below'})
logging.info(u'child is done')
time.sleep(240)
logging.info(u'parent is done')
```
I observed the following behavior for the root span:
- 1) 60s (decision_wait) after the child span finished, tailsampling printed the log below and I got the trace (without the parent span) in Jaeger, which is expected:
```
2022-12-28T12:58:20.422Z debug tailsamplingprocessor@v0.68.0/processor.go:204 Sampling policy evaluation completed {"kind": "processor", "name": "tail_sampling", "pipeline": "traces", "batch.len": 1, "sampled": 2, "notSampled": 0, "droppedPriorToEvaluation": 0, "policyEvaluationErrors": 0}
```
- 2) Jaeger showed a warning on the UI: "invalid parent span IDs=ed541f049ba40b85; skipping clock skew adjustment"
- 3) after 240s, the parent span finished
- 4) tailsampling printed nothing, and the exporter printed "2022-12-28T13:01:19.734Z info TracesExporter {"kind": "exporter", "data_type": "traces", "name": "logging", "#spans": 1}"
- 5) I got the whole trace, both parent and child, in Jaeger
Is this correct? How can it be explained given the tailsampling processor's configuration?
Thanks very much.
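The behavior described above is consistent with the processor caching its sampling decision: once the trace was sampled at decision_wait (the child's 6s latency matched the latency-5s policy), a span arriving later for the same trace ID is forwarded straight to the exporter without a second policy evaluation, which is why the late parent span produced an exporter log but no new "Sampling policy evaluation completed" line. A toy Python sketch of that logic, offered as an interpretation rather than the collector's actual Go code:
```python
# Toy model of tail-sampling decision caching (illustrative only).
decisions = {}  # trace_id -> bool: cached sampling decision
pending = {}    # trace_id -> spans buffered until decision_wait elapses

def export(spans):
    print("exporting", spans)

def on_span(trace_id, span):
    if trace_id in decisions:
        # Late-arriving span: released using the cached decision,
        # with no new policy evaluation and no new evaluation log.
        if decisions[trace_id]:
            export([span])
        return
    pending.setdefault(trace_id, []).append(span)

def on_decision_wait_elapsed(trace_id, policies):
    # Runs once per trace, decision_wait seconds after its first span.
    spans = pending.pop(trace_id, [])
    decisions[trace_id] = any(policy(spans) for policy in policies)
    if decisions[trace_id]:
        export(spans)

# The child's 6s duration satisfies the latency-5s policy, so the trace
# is sampled; the root span arriving later is exported immediately.
latency_over_5s = lambda spans: any(s["duration_s"] > 5 for s in spans)
on_span("t1", {"name": "ChildSpan", "duration_s": 6})
on_decision_wait_elapsed("t1", [latency_over_5s])
on_span("t1", {"name": "TestSpan", "duration_s": 246})  # late root span
```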
## Steps to Reproduce
## Expected Result
## Actual Result
### Collector version
0.68.0
### Environment information
## Environment
OS: (e.g., "Ubuntu 20.04")
Compiler(if manually compiled): (e.g., "go 14.2")
### OpenTelemetry Collector configuration
_No response_
### Log output
_No response_
### Additional context
_No response_
|
1.0
|
Root Span in tailsampling processor will pass directly through the processor and go to the exporter? - ### Component(s)
processor/tailsampling
### What happened?
## Description
When I use the tailsampling processor to tail-sample traces in this case:
the config
```
tail_sampling:
decision_wait: 60s
num_traces: 5000
expected_new_traces_per_sec: 10
policies:
[
{
name: errors-policy,
type: numeric_attribute,
numeric_attribute: {key: http.status_code, min_value: 499, max_value: 600}
},
{
name: latency-5s,
type: latency,
latency: {threshold_ms: 5000}
},
]
```
the logic is: parent span --> child span sleeps 6s --> parent span sleeps 240s:
```
with tracer.start_span('TestSpan') as span:
span.log_kv({'event': 'test message', 'life': 42})
with tracer.start_span('ChildSpan', child_of=span) as child_span:
time.sleep(6)
child_span.set_tag('CTag', '1')
child_span.log_kv({'event': 'down below'})
logging.info(u'child is done')
time.sleep(240)
logging.info(u'parent is done')
```
I observed the following behavior for the root span:
- 1) 60s (decision_wait) after the child span finished, tailsampling printed the log below and I got the trace (without the parent span) in Jaeger, which is expected:
```
2022-12-28T12:58:20.422Z debug tailsamplingprocessor@v0.68.0/processor.go:204 Sampling policy evaluation completed {"kind": "processor", "name": "tail_sampling", "pipeline": "traces", "batch.len": 1, "sampled": 2, "notSampled": 0, "droppedPriorToEvaluation": 0, "policyEvaluationErrors": 0}
```
- 2) Jaeger showed a warning on the UI: "invalid parent span IDs=ed541f049ba40b85; skipping clock skew adjustment"
- 3) after 240s, the parent span finished
- 4) tailsampling printed nothing, and the exporter printed "2022-12-28T13:01:19.734Z info TracesExporter {"kind": "exporter", "data_type": "traces", "name": "logging", "#spans": 1}"
- 5) I got the whole trace, both parent and child, in Jaeger
Is this correct? How can it be explained given the tailsampling processor's configuration?
Thanks very much.
## Steps to Reproduce
## Expected Result
## Actual Result
### Collector version
0.68.0
### Environment information
## Environment
OS: (e.g., "Ubuntu 20.04")
Compiler(if manually compiled): (e.g., "go 14.2")
### OpenTelemetry Collector configuration
_No response_
### Log output
_No response_
### Additional context
_No response_
|
process
|
root span in tailsampling processor will directly through the processor and go to exporter component s processor tailsampling what happened description when i use tailsampling processor to tailsampling traces in that case the config tail sampling decision wait num traces expected new traces per sec policies name errors policy type numeric attribute numeric attribute key http status code min value max value name latency type latency latency threshold ms the logic is parent span child span and sleep parant span sleep with tracer start span testspan as span span log kv event test message life with tracer start span childspan child of span as child span time sleep child span set tag ctag child span log kv event down below logging info u child is done time sleep logging info u parent is done i found the root span s phenomenon after decision wait of child done tailsampling print and i can got the trace without in jaeger it is right debug tailsamplingprocessor processor go sampling policy evaluation completed kind processor name tail sampling pipeline traces batch len sampled notsampled droppedpriortoevaluation policyevaluationerrors β the jaeger got warning one the ui said invalid parent span ids skipping clock skew adjustmentβ after the parent span done the tailsampling print nothing and exporter print info tracesexporter kind exporter data type traces name logging spans i get the whole trace both parent and child in jaeger is it right how to explain with tailsampling processor configration thanks very much steps to reproduce expected result actual result collector version environment information environment os e g ubuntu compiler if manually compiled e g go opentelemetry collector configuration no response log output no response additional context no response
| 1
|
93,946
| 27,080,016,424
|
IssuesEvent
|
2023-02-14 13:24:11
|
micrometer-metrics/micrometer
|
https://api.github.com/repos/micrometer-metrics/micrometer
|
closed
|
Upgrade CI build to Java 19
|
type: task build
|
Once Gradle `7.6.1` is released we should also upgrade to Java 19, basically reverting this: https://github.com/micrometer-metrics/micrometer/issues/3612
_Originally posted by @jonatan-ivanov in https://github.com/micrometer-metrics/micrometer/issues/3549#issuecomment-1409669482_
We've upgraded to Gradle 8 instead, which should unblock us.
|
1.0
|
Upgrade CI build to Java 19 - Once Gradle `7.6.1` is released we should also upgrade to Java 19, basically reverting this: https://github.com/micrometer-metrics/micrometer/issues/3612
_Originally posted by @jonatan-ivanov in https://github.com/micrometer-metrics/micrometer/issues/3549#issuecomment-1409669482_
We've upgraded to Gradle 8 instead, which should unblock us.
|
non_process
|
upgrade ci build to java once gradle we should also upgrade to java basically reverting this originally posted by jonatan ivanov in we ve upgraded to gradle instead which should unblock us
| 0
|
15,540
| 19,703,300,242
|
IssuesEvent
|
2022-01-12 18:54:33
|
googleapis/signet
|
https://api.github.com/repos/googleapis/signet
|
opened
|
Your .repo-metadata.json file has a problem 🤖
|
type: process repo-metadata: lint
|
You have a problem with your .repo-metadata.json file:
Result of scan:
* must have required property 'release_level' in .repo-metadata.json
* must have required property 'client_documentation' in .repo-metadata.json
Once you correct these problems, you can close this issue.
Reach out to **go/github-automation** if you have any questions.
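For reference, a minimal sketch of the check the bot is reporting, assuming the file sits at the repository root (the actual linter lives in Google's github-automation tooling):
```python
import json

REQUIRED = ["release_level", "client_documentation"]

with open(".repo-metadata.json") as f:
    metadata = json.load(f)

for key in REQUIRED:
    if key not in metadata:
        print(f"must have required property '{key}' in .repo-metadata.json")
```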
|
1.0
|
Your .repo-metadata.json file has a problem 🤖 - You have a problem with your .repo-metadata.json file:
Result of scan:
* must have required property 'release_level' in .repo-metadata.json
* must have required property 'client_documentation' in .repo-metadata.json
Once you correct these problems, you can close this issue.
Reach out to **go/github-automation** if you have any questions.
|
process
|
your repo metadata json file has a problem you have a problem with your repo metadata json file result of scan must have required property release level in repo metadata json must have required property client documentation in repo metadata json once you correct these problems you can close this issue reach out to go github automation if you have any questions
| 1
|
7,093
| 10,239,620,051
|
IssuesEvent
|
2019-08-19 18:42:03
|
andywalden/gsuite2mfe
|
https://api.github.com/repos/andywalden/gsuite2mfe
|
closed
|
Missing quickstart.py
|
In Process question
|
Just installing from fresh and it looks like quickstart.py is now missing from the repo.
`(gsuite2mfe) [root@XXXX gsuite2mfe]# python quickstart.py --noauth_local_webserver
python: can't open file 'quickstart.py': [Errno 2] No such file or directory
`
|
1.0
|
Missing quickstart.py - Just installing from fresh and it looks like quickstart.py is now missing from the repo.
`(gsuite2mfe) [root@XXXX gsuite2mfe]# python quickstart.py --noauth_local_webserver
python: can't open file 'quickstart.py': [Errno 2] No such file or directory
`
|
process
|
missing quickstart py just installing from fresh and it looks like quickstart py is now missing from the repo python quickstart py noauth local webserver python can t open file quickstart py no such file or directory
| 1
|
9,490
| 12,483,874,791
|
IssuesEvent
|
2020-05-30 11:50:07
|
metabase/metabase
|
https://api.github.com/repos/metabase/metabase
|
closed
|
Allow an array of manually entered values for SQL Variables
|
Querying/Native Querying/Parameters & Variables Querying/Processor Type:New Feature
|
Fairly self-explanatory. It would be nice to have an option for SQL params to provide a simple array of values for drop-down population (*not* type-ahead). This opens up some clever and arbitrary uses for static params that are currently difficult.
Simple Ex. ` [[order by {{order_field}} {{dir}} ]]`
Variable UI somewhere:
order_field =` ["upvotes", "downvotes", "issues"]`
dir = `["asc", "desc"]`
Also Interesting: Allowing the value of a dynamic Question to power this list (would help #4509 as well).
The allowable values would be Question-1234 or `select distinct region from sales;` etc. 'Field Filter' somewhat covers that, albeit in a specific way that isn't really applicable to the above example, I don't believe.
|
1.0
|
Allow an array of manually entered values for SQL Variables - Fairly self-explanatory. It would be nice to have an option for SQL params to provide a simple array of values for drop-down population (*not* type-ahead). This opens up some clever and arbitrary uses for static params that are currently difficult.
Simple Ex. ` [[order by {{order_field}} {{dir}} ]]`
Variable UI somewhere:
order_field =` ["upvotes", "downvotes", "issues"]`
dir = `["asc", "desc"]`
Also Interesting: Allowing the value of a dynamic Question to power this list (would help #4509 as well).
The allowable values would be Question-1234 or `select distinct region from sales;` etc. 'Field Filter' somewhat covers that, albeit in a specific way that isn't really applicable to the above example, I don't believe.
|
process
|
allow an array of manually entered values for sql variables fairly self explanatory would be nice to have an option of sql params to provide a simple array of values for drop down population not type ahead this opens up some clever and arbitrary uses for static params that are currently difficult simple ex variable ui somewhere order field dir also interesting allowing the value of a dynamic question to power this list would help as well the allowable values would be question or select distinct region from sales etc field filter somewhat covers that albeit in a specific way that isn t really applicable to the above example i don t believe
| 1
|
9,662
| 12,644,040,504
|
IssuesEvent
|
2020-06-16 10:52:06
|
kubeflow/pipelines
|
https://api.github.com/repos/kubeflow/pipelines
|
closed
|
Process: using GCP eX (e.g. e2) machine type instead of nX (e.g. n2) for test infra
|
area/engprod kind/process priority/p1 size/S status/triaged
|
Currently we are using the nX machine type. Another team has suggested and asked that we migrate to eX.
1. cloud build machine type
2. test scripts which create clusters
3. possible samples / docs
|
1.0
|
Process: using GCP eX (e.g. e2) machine type instead of nX (e.g. n2) for test infra - Currently we are using the nX machine type. Another team has suggested and asked that we migrate to eX.
1. cloud build machine type
2. test scripts which create clusters
3. possible samples / docs
|
process
|
process using gcp ex e x machine type instead of nx e x for test infra currently we using nx machine type it s suggested and asked by other team that better we migrate to ex cloud build machine type test scripts which create clusters possible samples docs
| 1
|
16,200
| 20,712,205,517
|
IssuesEvent
|
2022-03-12 04:03:00
|
ethereum/EIPs
|
https://api.github.com/repos/ethereum/EIPs
|
closed
|
Archival of Abandoned/Withdrawn EIPs
|
type: EIP1 (Process) stale
|
The Abandoned (*name may change, see #2941*) status is introduced for EIPs which are no longer pursued due to various reasons.
There have been some concerns that the current process, which is designed to merge drafts as soon as possible (*even if we fail at that*), will eventually result in a lot of abandoned EIPs.
In order to clean up eips.ethereum.org, one possible solution could be considering a process of "archival": after a certain time period, Abandoned EIPs are archived. Archived EIPs:
- are listed under a separate section called "Archive" on eips.ethereum.org and do not show up under the respective categories nor under "All"
- their title and header is displayed unchanged
- their body is replaced with a text explaining they are archived and can be found in github, also it would explain how to revive them (-> mark them draft)
|
1.0
|
Archival of Abandoned/Withdrawn EIPs - The Abandoned (*name may change, see #2941*) status is introduced for EIPs which are no longer pursued due to various reasons.
There have been some concerns that the current process, which is designed to merge drafts as soon as possible (*even if we fail at that*), will eventually result in a lot of abandoned EIPs.
In order to clean up eips.ethereum.org, one possible solution could be considering a process of "archival": after a certain time period, Abandoned EIPs are archived. Archived EIPs:
- are listed under a separate section called "Archive" on eips.ethereum.org and do not show up under the respective categories nor under "All"
- their title and header is displayed unchanged
- their body is replaced with a text explaining they are archived and can be found in github, also it would explain how to revive them (-> mark them draft)
|
process
|
archival of abandoned withdrawn eips the abandoned name may change see status is introduced for eips which are no longer pursued due to various reasons there has been some concerns that the current process is designed at merging drafts as soon as possible even if we fail at that will eventually result in a lot abandoned eips in order to clean up eips ethereum org one possible solution could be considering a process of archival after a certain time period abandoned eips are archived archived eips are listed under a separate section called archive on eips ethereum org and do not show up under the respective categories nor under all their title and header is displayed unchanged their body is replaced with a text explaining they are archived and can be found in github also it would explain how to revive them mark them draft
| 1
|
472,533
| 13,626,477,250
|
IssuesEvent
|
2020-09-24 11:05:48
|
Lookyloo/lookyloo
|
https://api.github.com/repos/Lookyloo/lookyloo
|
closed
|
SVG interactions
|
Low priority UI Improvements
|
Main hostname tree:
* [ ] click on icon (i.e. JS) -> displays box with all URLs loading a JS
* [x] click on hostname -> display all the related URLs (same format as hostnames: line 1: URL, Line 2: icons)
Overlay box:
* [x] click on icon (i.e. JS) -> download the content
|
1.0
|
SVG interactions - Main hostname tree:
* [ ] click on icon (i.e. JS) -> displays box with all URLs loading a JS
* [x] click on hostname -> display all the related URLs (same format as hostnames: line 1: URL, Line 2: icons)
Overlay box:
* [x] click on icon (i.e. JS) -> download the content
|
non_process
|
svg interactions main hostname tree click on icon i e js displays box with all urls loading a js click on hostname display all the related urls same format as hostnames line url line icons overlay box click on icon i e js download the content
| 0
|
20,673
| 27,336,154,840
|
IssuesEvent
|
2023-02-26 08:21:53
|
austinlake04/desiforecast
|
https://api.github.com/repos/austinlake04/desiforecast
|
closed
|
Write data to and read from FITS files
|
preprocessing low priority
|
- converting pandas DataFrame directly to astropy.io.fits object
> verify datetime index information is retained
- acquiring data with query script and storing data directly into FITS file
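A rough sketch of the round trip being asked for, assuming pandas and astropy; serializing the datetime index as ISO strings is one way to retain it, since FITS has no native datetime column type:
```python
import pandas as pd
from astropy.table import Table

# Synthetic data with a datetime index (for illustration only).
df = pd.DataFrame(
    {"temp": [11.2, 12.5, 9.8]},
    index=pd.date_range("2023-01-01", periods=3, freq="D"),
)

# FITS has no native datetime column, so serialize the index as strings.
out = df.reset_index().rename(columns={"index": "time"})
out["time"] = out["time"].astype(str)
Table.from_pandas(out).write("observations.fits", overwrite=True)

# Read back and verify the datetime index information is retained.
back = Table.read("observations.fits").to_pandas()
back["time"] = pd.to_datetime(back["time"].astype(str))
print(back.set_index("time"))
```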
|
1.0
|
Write data to and read from FITS files - - converting pandas DataFrame directly to astropy.io.fits object
> verify datetime index information is retained
- acquiring data with query script and storing data directly into FITS file
|
process
|
write data to and read from fits files converting pandas dataframe directly to astropy io fits object verify datetime index information is retained acquiring data with query script and storing data directly into fits file
| 1
|
189,953
| 15,213,145,377
|
IssuesEvent
|
2021-02-17 11:22:37
|
xmos/lib_mic_array
|
https://api.github.com/repos/xmos/lib_mic_array
|
opened
|
examples/app_vu Example not compliant
|
type:documentation
|
<!--- Update the title with the path of the example -->
<!--- Update the issues below -->
**Issues:**
The app_vu example is incomplete as an App Note. It is missing a README.rst file and the doc directory, including typical contents such as the App Note text itself (i.e., ANddddd*.rst), any necessary diagrams, and an xdoc.conf file.
I have not checked the functional operation of the app_vu example.
|
1.0
|
examples/app_vu Example not compliant - <!--- Update the title with the path of the example -->
<!--- Update the issues below -->
**Issues:**
The app_vu example is incomplete as an App Note. It is missing a README.rst file and the doc directory, including typical contents such as the App Note text itself (i.e., ANddddd*.rst), any necessary diagrams, and an xdoc.conf file.
I have not checked the functional operation of the app_vu example.
|
non_process
|
examples app vu example not compliant issues the app vu example is incomplete as an app note it is missing a readme rst file and the doc directory including typical contents such as the app note text itself i e anddddd rst any necessary diagrams and an xdoc conf file i have not checked the functional operation of the app vu example
| 0
|
128,332
| 5,053,304,229
|
IssuesEvent
|
2016-12-21 07:32:07
|
xcat2/xcat-core
|
https://api.github.com/repos/xcat2/xcat-core
|
opened
|
[FVT]:xcatprobe does not give dhcp information when mac attribute has several values.
|
component:xcatprobe priority:normal
|
xCAT 2.13.0
```
[root@c910f03c05k23 ~]# lsdef -l c910f03c05k07
Object name: c910f03c05k07
arch=ppc64le
cons=kvm
currchain=boot
currstate=install sles12.1-ppc64le-compute
groups=all
initrd=xcat/osimage/sles12.1-ppc64le-install-compute/initrd
installnic=eth0
kcmdline=quiet autoyast=http://!myipfn!:80/install/autoinst/c910f03c05k07 install=http://!myipfn!:80/install/sles12.1/ppc64le/1 netdevice=eth0 Loghost=!myipfn! console=tty0 console=hvc0,115200 dhcptimeout=150
kernel=xcat/osimage/sles12.1-ppc64le-install-compute/linux
mac=42:08:0a:03:05:07!c910f03c05k07-pri|42:08:0a:03:05:11!c910f03c05k07-pub
mgt=kvm
netboot=grub2
nichostnamesuffixes.eth1=-pub
nichostnamesuffixes.eth0=-pri
nicips.eth1=50.3.5.7
nicips.eth0=10.3.5.7
os=sles12.1
postbootscripts=otherpkgs
postscripts=syslog,remoteshell,syncfiles
profile=compute
provmethod=sles12.1-ppc64le-install-compute
serialport=0
serialspeed=115200
status=powering-on
statustime=12-21-2016 01:31:39
updatestatus=synced
updatestatustime=11-24-2015 04:24:31
vmcpus=1
vmhost=c910f03c05
vmmemory=4096
vmnicnicmodel=virtio
vmnics=br10
vmstorage=dir:///var/lib/libvirt/images/
[root@c910f03c05k23 ~]# nodeset c910f03c05k07 osimage=sles12.1-ppc64le-install-compute
c910f03c05k07: install sles12.1-ppc64le-compute
[root@c910f03c05k23 ~]# rpower c910f03c05k07 boot
c910f03c05k07: on reset
```
On another screen, check the status
```
[root@c910f03c05k23 subcmds]# xcatprobe -w osdeploy -n c910f03c05k07
The install NIC in current server is eth0 [INFO]
All nodes to be deployed are valid [ OK ]
-------------------------------------------------------------
Start capturing every message during OS provision process....
-------------------------------------------------------------
[c910f03c05k07] Use command rpower to reboot node c910f03c05k07
[c910f03c05k07] Node status is changed to powering-on
[c910f03c05k07] Via TFTP download /boot/grub2/grub2-c910f03c05k07
[c910f03c05k07] Via TFTP download /boot/grub2/powerpc-ieee1275/normal.mod
[c910f03c05k07] Via TFTP download /boot/grub2/powerpc-ieee1275/terminal.mod
[c910f03c05k07] Via TFTP download /boot/grub2/powerpc-ieee1275/crypto.mod
[c910f03c05k07] Via TFTP download /boot/grub2/powerpc-ieee1275/gettext.mod
[c910f03c05k07] Via TFTP download /boot/grub2/powerpc-ieee1275/gzio.mod
[c910f03c05k07] Via TFTP download //boot/grub2/grub.cfg-01-42-08-0a-03-05-07
[c910f03c05k07] Via TFTP download /boot/grub2/powerpc-ieee1275/command.lst
[c910f03c05k07] Via TFTP download /boot/grub2/powerpc-ieee1275/command.lst
[c910f03c05k07] Via TFTP download /boot/grub2/powerpc-ieee1275/fs.lst
[c910f03c05k07] Via TFTP download /boot/grub2/powerpc-ieee1275/fs.lst
[c910f03c05k07] Via TFTP download /boot/grub2/powerpc-ieee1275/crypto.lst
[c910f03c05k07] Via TFTP download /boot/grub2/powerpc-ieee1275/crypto.lst
[c910f03c05k07] Via TFTP download /boot/grub2/powerpc-ieee1275/terminal.lst
[c910f03c05k07] Via TFTP download /boot/grub2/powerpc-ieee1275/terminal.lst
[c910f03c05k07] Via TFTP download //boot/grub2/grub.cfg-01-42-08-0a-03-05-07
[c910f03c05k07] Via TFTP download /boot/grub2/powerpc-ieee1275/http.mod
[c910f03c05k07] Via TFTP download /boot/grub2/powerpc-ieee1275/http.mod
[c910f03c05k07] Via TFTP download /boot/grub2/powerpc-ieee1275/echo.mod
[c910f03c05k07] Via TFTP download /boot/grub2/powerpc-ieee1275/echo.mod
[c910f03c05k07] Via TFTP download /boot/grub2/powerpc-ieee1275/linux.mod
[c910f03c05k07] Via TFTP download /boot/grub2/powerpc-ieee1275/elf.mod
[c910f03c05k07] Via TFTP download /xcat/osimage/sles12.1-ppc64le-install-compute/linux
[c910f03c05k07] Via TFTP download /xcat/osimage/sles12.1-ppc64le-install-compute/initrd
[c910f03c05k07] Via HTTP get /install/sles12.1/ppc64le/1/content
[c910f03c05k07] Via HTTP get /install/sles12.1/ppc64le/1/content.asc
[c910f03c05k07] Via HTTP get /install/sles12.1/ppc64le/1/boot/ppc64le/config
[c910f03c05k07] Via HTTP get /install/sles12.1/ppc64le/1/boot/ppc64le/common
[c910f03c05k07] Via HTTP get /install/sles12.1/ppc64le/1/boot/ppc64le/root
[c910f03c05k07] Via HTTP get /install/sles12.1/ppc64le/1/boot/ppc64le/cracklib-dict-full.rpm
[c910f03c05k07] Via HTTP get /install/sles12.1/ppc64le/1/boot/ppc64le/bind
[c910f03c05k07] Via HTTP get /install/sles12.1/ppc64le/1/boot/ppc64le/yast2-trans-en_US.rpm
[c910f03c05k07] Via HTTP get /install/sles12.1/ppc64le/1/media.1/info.txt
[c910f03c05k07] Via HTTP get /install/sles12.1/ppc64le/1/license.tar.gz
[c910f03c05k07] Via HTTP get /install/sles12.1/ppc64le/1/part.info
[c910f03c05k07] Via HTTP get /install/sles12.1/ppc64le/1/control.xml
[c910f03c05k07] Via HTTP get /install/sles12.1/ppc64le/1/autoinst.xml
[c910f03c05k07] Via HTTP get /install/sles12.1/ppc64le/1/driverupdate
```
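For reference, the multi-valued mac attribute above packs `mac!hostname` pairs separated by `|`. A sketch of parsing it (inferred from the lsdef output, not xCAT's actual code), which is roughly what the probe would need to do to resolve DHCP information per interface:
```python
def parse_mac_attribute(mac_attr):
    """Parse an xCAT mac attribute such as
    '42:08:0a:03:05:07!host-pri|42:08:0a:03:05:11!host-pub'
    into a {mac: hostname} mapping; entries without '!' get an empty name."""
    entries = {}
    for part in mac_attr.split("|"):
        mac, _, host = part.partition("!")
        entries[mac.lower()] = host
    return entries

print(parse_mac_attribute(
    "42:08:0a:03:05:07!c910f03c05k07-pri|42:08:0a:03:05:11!c910f03c05k07-pub"
))
```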
|
1.0
|
[FVT]:xcatprobe does not give dhcp information when mac attribute has several values. - xCAT 2.13.0
```
[root@c910f03c05k23 ~]# lsdef -l c910f03c05k07
Object name: c910f03c05k07
arch=ppc64le
cons=kvm
currchain=boot
currstate=install sles12.1-ppc64le-compute
groups=all
initrd=xcat/osimage/sles12.1-ppc64le-install-compute/initrd
installnic=eth0
kcmdline=quiet autoyast=http://!myipfn!:80/install/autoinst/c910f03c05k07 install=http://!myipfn!:80/install/sles12.1/ppc64le/1 netdevice=eth0 Loghost=!myipfn! console=tty0 console=hvc0,115200 dhcptimeout=150
kernel=xcat/osimage/sles12.1-ppc64le-install-compute/linux
mac=42:08:0a:03:05:07!c910f03c05k07-pri|42:08:0a:03:05:11!c910f03c05k07-pub
mgt=kvm
netboot=grub2
nichostnamesuffixes.eth1=-pub
nichostnamesuffixes.eth0=-pri
nicips.eth1=50.3.5.7
nicips.eth0=10.3.5.7
os=sles12.1
postbootscripts=otherpkgs
postscripts=syslog,remoteshell,syncfiles
profile=compute
provmethod=sles12.1-ppc64le-install-compute
serialport=0
serialspeed=115200
status=powering-on
statustime=12-21-2016 01:31:39
updatestatus=synced
updatestatustime=11-24-2015 04:24:31
vmcpus=1
vmhost=c910f03c05
vmmemory=4096
vmnicnicmodel=virtio
vmnics=br10
vmstorage=dir:///var/lib/libvirt/images/
[root@c910f03c05k23 ~]# nodeset c910f03c05k07 osimage=sles12.1-ppc64le-install-compute
c910f03c05k07: install sles12.1-ppc64le-compute
[root@c910f03c05k23 ~]# rpower c910f03c05k07 boot
c910f03c05k07: on reset
```
On another screen, check the status
```
[root@c910f03c05k23 subcmds]# xcatprobe -w osdeploy -n c910f03c05k07
The install NIC in current server is eth0 [INFO]
All nodes to be deployed are valid [ OK ]
-------------------------------------------------------------
Start capturing every message during OS provision process....
-------------------------------------------------------------
[c910f03c05k07] Use command rpower to reboot node c910f03c05k07
[c910f03c05k07] Node status is changed to powering-on
[c910f03c05k07] Via TFTP download /boot/grub2/grub2-c910f03c05k07
[c910f03c05k07] Via TFTP download /boot/grub2/powerpc-ieee1275/normal.mod
[c910f03c05k07] Via TFTP download /boot/grub2/powerpc-ieee1275/terminal.mod
[c910f03c05k07] Via TFTP download /boot/grub2/powerpc-ieee1275/crypto.mod
[c910f03c05k07] Via TFTP download /boot/grub2/powerpc-ieee1275/gettext.mod
[c910f03c05k07] Via TFTP download /boot/grub2/powerpc-ieee1275/gzio.mod
[c910f03c05k07] Via TFTP download //boot/grub2/grub.cfg-01-42-08-0a-03-05-07
[c910f03c05k07] Via TFTP download /boot/grub2/powerpc-ieee1275/command.lst
[c910f03c05k07] Via TFTP download /boot/grub2/powerpc-ieee1275/command.lst
[c910f03c05k07] Via TFTP download /boot/grub2/powerpc-ieee1275/fs.lst
[c910f03c05k07] Via TFTP download /boot/grub2/powerpc-ieee1275/fs.lst
[c910f03c05k07] Via TFTP download /boot/grub2/powerpc-ieee1275/crypto.lst
[c910f03c05k07] Via TFTP download /boot/grub2/powerpc-ieee1275/crypto.lst
[c910f03c05k07] Via TFTP download /boot/grub2/powerpc-ieee1275/terminal.lst
[c910f03c05k07] Via TFTP download /boot/grub2/powerpc-ieee1275/terminal.lst
[c910f03c05k07] Via TFTP download //boot/grub2/grub.cfg-01-42-08-0a-03-05-07
[c910f03c05k07] Via TFTP download /boot/grub2/powerpc-ieee1275/http.mod
[c910f03c05k07] Via TFTP download /boot/grub2/powerpc-ieee1275/http.mod
[c910f03c05k07] Via TFTP download /boot/grub2/powerpc-ieee1275/echo.mod
[c910f03c05k07] Via TFTP download /boot/grub2/powerpc-ieee1275/echo.mod
[c910f03c05k07] Via TFTP download /boot/grub2/powerpc-ieee1275/linux.mod
[c910f03c05k07] Via TFTP download /boot/grub2/powerpc-ieee1275/elf.mod
[c910f03c05k07] Via TFTP download /xcat/osimage/sles12.1-ppc64le-install-compute/linux
[c910f03c05k07] Via TFTP download /xcat/osimage/sles12.1-ppc64le-install-compute/initrd
[c910f03c05k07] Via HTTP get /install/sles12.1/ppc64le/1/content
[c910f03c05k07] Via HTTP get /install/sles12.1/ppc64le/1/content.asc
[c910f03c05k07] Via HTTP get /install/sles12.1/ppc64le/1/boot/ppc64le/config
[c910f03c05k07] Via HTTP get /install/sles12.1/ppc64le/1/boot/ppc64le/common
[c910f03c05k07] Via HTTP get /install/sles12.1/ppc64le/1/boot/ppc64le/root
[c910f03c05k07] Via HTTP get /install/sles12.1/ppc64le/1/boot/ppc64le/cracklib-dict-full.rpm
[c910f03c05k07] Via HTTP get /install/sles12.1/ppc64le/1/boot/ppc64le/bind
[c910f03c05k07] Via HTTP get /install/sles12.1/ppc64le/1/boot/ppc64le/yast2-trans-en_US.rpm
[c910f03c05k07] Via HTTP get /install/sles12.1/ppc64le/1/media.1/info.txt
[c910f03c05k07] Via HTTP get /install/sles12.1/ppc64le/1/license.tar.gz
[c910f03c05k07] Via HTTP get /install/sles12.1/ppc64le/1/part.info
[c910f03c05k07] Via HTTP get /install/sles12.1/ppc64le/1/control.xml
[c910f03c05k07] Via HTTP get /install/sles12.1/ppc64le/1/autoinst.xml
[c910f03c05k07] Via HTTP get /install/sles12.1/ppc64le/1/driverupdate
```
|
non_process
|
xcatprobe does not give dhcp information when mac attribute has several values xcat lsdef l object name arch cons kvm currchain boot currstate install compute groups all initrd xcat osimage install compute initrd installnic kcmdline quiet autoyast install netdevice loghost myipfn console console dhcptimeout kernel xcat osimage install compute linux mac pri pub mgt kvm netboot nichostnamesuffixes pub nichostnamesuffixes pri nicips nicips os postbootscripts otherpkgs postscripts syslog remoteshell syncfiles profile compute provmethod install compute serialport serialspeed status powering on statustime updatestatus synced updatestatustime vmcpus vmhost vmmemory vmnicnicmodel virtio vmnics vmstorage dir var lib libvirt images nodeset osimage install compute install compute rpower boot on reset on another screen check the status xcatprobe w osdeploy n the install nic in current server is all nodes to be deployed are valid start capturing every message during os provision process use command rpower to reboot node node status is changed to powering on via tftp download boot via tftp download boot powerpc normal mod via tftp download boot powerpc terminal mod via tftp download boot powerpc crypto mod via tftp download boot powerpc gettext mod via tftp download boot powerpc gzio mod via tftp download boot grub cfg via tftp download boot powerpc command lst via tftp download boot powerpc command lst via tftp download boot powerpc fs lst via tftp download boot powerpc fs lst via tftp download boot powerpc crypto lst via tftp download boot powerpc crypto lst via tftp download boot powerpc terminal lst via tftp download boot powerpc terminal lst via tftp download boot grub cfg via tftp download boot powerpc http mod via tftp download boot powerpc http mod via tftp download boot powerpc echo mod via tftp download boot powerpc echo mod via tftp download boot powerpc linux mod via tftp download boot powerpc elf mod via tftp download xcat osimage install compute linux via tftp download xcat osimage install compute initrd via http get install content via http get install content asc via http get install boot config via http get install boot common via http get install boot root via http get install boot cracklib dict full rpm via http get install boot bind via http get install boot trans en us rpm via http get install media info txt via http get install license tar gz via http get install part info via http get install control xml via http get install autoinst xml via http get install driverupdate
| 0
|
552,441
| 16,240,724,630
|
IssuesEvent
|
2021-05-07 09:12:48
|
webcompat/web-bugs
|
https://api.github.com/repos/webcompat/web-bugs
|
closed
|
google.com - see bug description
|
browser-fenix engine-gecko priority-critical
|
<!-- @browser: Firefox Mobile 89.0 -->
<!-- @ua_header: Mozilla/5.0 (Android 11; Mobile; rv:89.0) Gecko/89.0 Firefox/89.0 -->
<!-- @reported_with: android-components-reporter -->
<!-- @public_url: https://github.com/webcompat/web-bugs/issues/72524 -->
<!-- @extra_labels: browser-fenix -->
**URL**: https://google.com
**Browser / Version**: Firefox Mobile 89.0
**Operating System**: Android 11
**Tested Another Browser**: Yes Other
**Problem type**: Something else
**Description**: The top menu bar keeps popping in and out when scrolling down
**Steps to Reproduce**:
When scrolling down the web address bar/ menu pops in and out rather than smoothly staying away as intended. This did not happen on previous versions of the android firefox browser
<details>
<summary>Browser Configuration</summary>
<ul>
<li>gfx.webrender.all: false</li><li>gfx.webrender.blob-images: true</li><li>gfx.webrender.enabled: false</li><li>image.mem.shared: true</li><li>buildID: 20210427185821</li><li>channel: beta</li><li>hasTouchScreen: true</li><li>mixed active content blocked: false</li><li>mixed passive content blocked: false</li><li>tracking content blocked: false</li>
</ul>
</details>
[View console log messages](https://webcompat.com/console_logs/2021/5/e810cbc6-5477-4f39-8cfb-0b10d7b81d21)
_From [webcompat.com](https://webcompat.com/) with ❤️_
|
1.0
|
google.com - see bug description - <!-- @browser: Firefox Mobile 89.0 -->
<!-- @ua_header: Mozilla/5.0 (Android 11; Mobile; rv:89.0) Gecko/89.0 Firefox/89.0 -->
<!-- @reported_with: android-components-reporter -->
<!-- @public_url: https://github.com/webcompat/web-bugs/issues/72524 -->
<!-- @extra_labels: browser-fenix -->
**URL**: https://google.com
**Browser / Version**: Firefox Mobile 89.0
**Operating System**: Android 11
**Tested Another Browser**: Yes Other
**Problem type**: Something else
**Description**: The top menu bar keeps popping in and out when scrolling down
**Steps to Reproduce**:
When scrolling down the web address bar/ menu pops in and out rather than smoothly staying away as intended. This did not happen on previous versions of the android firefox browser
<details>
<summary>Browser Configuration</summary>
<ul>
<li>gfx.webrender.all: false</li><li>gfx.webrender.blob-images: true</li><li>gfx.webrender.enabled: false</li><li>image.mem.shared: true</li><li>buildID: 20210427185821</li><li>channel: beta</li><li>hasTouchScreen: true</li><li>mixed active content blocked: false</li><li>mixed passive content blocked: false</li><li>tracking content blocked: false</li>
</ul>
</details>
[View console log messages](https://webcompat.com/console_logs/2021/5/e810cbc6-5477-4f39-8cfb-0b10d7b81d21)
_From [webcompat.com](https://webcompat.com/) with ❤️_
|
non_process
|
google com see bug description url browser version firefox mobile operating system android tested another browser yes other problem type something else description the top meny bar keeps popping in and out when scrolling down steps to reproduce when scrolling down the web address bar menu pops in and out rather than smoothly staying away as intended this did not happen on previous versions of the android firefox browser browser configuration gfx webrender all false gfx webrender blob images true gfx webrender enabled false image mem shared true buildid channel beta hastouchscreen true mixed active content blocked false mixed passive content blocked false tracking content blocked false from with β€οΈ
| 0
|
8,261
| 11,425,738,145
|
IssuesEvent
|
2020-02-03 20:26:49
|
hashicorp/packer
|
https://api.github.com/repos/hashicorp/packer
|
closed
|
Compress post-process does not compress image
|
bug need-more-info post-processor/compress waiting-reply
|
I'm not sure what's causing this behavior, but I get no compressed artifact when adding a compress post-processor; the result is a successful build and a 22kb zip.
```
{
"post-processors": [
[
{
"type": "manifest",
"output": "manifest.json"
},
{
"type": "compress",
"keep_input_artifact": true,
"output": "{{user `vm_name`}}.zip"
}
]
]
}
```
The builder is `hyperv-iso` and it's a Windows 10 system for both host and guest. Any ideas? Nothing stands out in the output:
```
==> hyperv-iso: Running post-processor: manifest
==> hyperv-iso: Running post-processor: compress
==> hyperv-iso (compress): Zipping machine-xyz.zip
==> hyperv-iso (compress): Archive machine-xyz.zip completed
Build 'hyperv-iso' finished.
```
|
1.0
|
Compress post-process does not compress image - I'm not sure what's causing this behavior, but I get no compressed artifact when adding a compress post-processor; the result is a successful build and a 22kb zip.
```
{
"post-processors": [
[
{
"type": "manifest",
"output": "manifest.json"
},
{
"type": "compress",
"keep_input_artifact": true,
"output": "{{user `vm_name`}}.zip"
}
]
]
}
```
The builder is `hyperv-iso` and it's a Windows 10 system for both host and guest. Any ideas? Nothing stands out in the output:
```
==> hyperv-iso: Running post-processor: manifest
==> hyperv-iso: Running post-processor: compress
==> hyperv-iso (compress): Zipping machine-xyz.zip
==> hyperv-iso (compress): Archive machine-xyz.zip completed
Build 'hyperv-iso' finished.
```
|
process
|
compress post process does not compress image i m not sure what s causing my behavior but i get no compressed artifact when adding a compress post process the result is a successful build and a zip post processors type manifest output manifest json type compress keep input artifact true output user vm name zip the builder is hyperv iso and it s a windows system for both host and guest any ideas nothing stands out in the output hyperv iso running post processor manifest hyperv iso running post processor compress hyperv iso compress zipping machine xyz zip hyperv iso compress archive machine xyz zip completed build hyperv iso finished
| 1
|
30,024
| 5,719,678,430
|
IssuesEvent
|
2017-04-19 22:43:43
|
connectivedx/fuzzy-chainsaw
|
https://api.github.com/repos/connectivedx/fuzzy-chainsaw
|
closed
|
What's buggin' you?
|
documentation enhancement question
|
@drolsen @pgregorova @stoff @elseloop @nicmarson @kamsar @speeQr
Hello good folks. If you've already started a project using Fuzzy Chainsaw, you've probably found some workflow issues or daily annoyances. I'd like to open up this ticket to discuss ongoing improvements that could be made to remove development friction.
Some examples I've found so far:
- I'd like to be able to create an example with a dark background in the style guide. (lead to #136)
- In practice, it seems like using PascalCase in css to match JSX style might be an easier mental model to consume.
|
1.0
|
What's buggin' you? - @drolsen @pgregorova @stoff @elseloop @nicmarson @kamsar @speeQr
Hello good folks. If you've already started a project using Fuzzy Chainsaw, you've probably found some workflow issues or daily annoyances. I'd like to open up this ticket to discuss ongoing improvements that could be made to remove development friction.
Some examples I've found so far:
- I'd like to be able to create an example with a dark background in the style guide. (lead to #136)
- In practice, it seems like using PascalCase in css to match JSX style might be an easier mental model to consume.
|
non_process
|
what s buggin you drolsen pgregorova stoff elseloop nicmarson kamsar speeqr hello good folks if you ve already started a project using fuzzy chainsaw you ve probably found some workflow issues or daily annoyances i d like to open up this ticket to discuss ongoing improvements that could be made to remove development friction some examples i ve found so far i d like to be able to create an example with a dark background in the style guide lead to in practice it seems like using pascalcase in css to match jsx style might be an easier mental model to consume
| 0
|
11,773
| 14,610,415,976
|
IssuesEvent
|
2020-12-22 00:18:47
|
kubernetes/minikube
|
https://api.github.com/repos/kubernetes/minikube
|
closed
|
Release: Jenkins does not update the release notes on github
|
kind/cleanup kind/process priority/important-soon
|
For the past two releases
https://github.com/kubernetes/minikube/releases/tag/v1.14.0
and
https://github.com/kubernetes/minikube/releases/tag/v1.14.0-beta.0
we have been updating the release notes manually.
I ran the command in the script manually and it seems that it is not matching
[hack/jenkins/release_github_page.sh](https://github.com/medyagh/minikube/blob/b09ee50ec047410326a85435f4d99026f9c4f5c4/hack/jenkins/release_github_page.sh#L43-L44)
```
medya@~/workspace/minikube (master) $
medya@~/workspace/minikube (master) $ RELEASE_NOTES=$(perl -e "\$p=0; while(<>) { if(/^## Version ${VERSION} -/) { \$p=1 } elsif (/^##/) { \$p=0 }; if (\$p) { print }}" < CHANGELOG.md)
medya@~/workspace/minikube (master) $
medya@~/workspace/minikube (master) $ echo $RELEASE_NOTES
```
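A hedged Python equivalent of the perl one-liner, handy for debugging why `$RELEASE_NOTES` comes back empty, for example when `${VERSION}` does not exactly match the `## Version X.Y.Z -` heading in CHANGELOG.md:
```python
import re
import sys

def release_notes(changelog, version):
    """Extract the '## Version <version> - ...' section, mirroring the
    perl one-liner in hack/jenkins/release_github_page.sh."""
    out, capture = [], False
    for line in changelog.splitlines():
        if re.match(rf"^## Version {re.escape(version)} -", line):
            capture = True
        elif line.startswith("##"):
            capture = False
        if capture:
            out.append(line)
    return "\n".join(out)

if __name__ == "__main__":
    version = sys.argv[1]  # e.g. 1.14.0
    print(release_notes(open("CHANGELOG.md").read(), version))
```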
|
1.0
|
Release: Jenkins does not update the release notes on github - For the past two releases
https://github.com/kubernetes/minikube/releases/tag/v1.14.0
and
https://github.com/kubernetes/minikube/releases/tag/v1.14.0-beta.0
we have been updating the release notes manually.
I ran the command in the script manually and it seems that it is not matching
[hack/jenkins/release_github_page.sh](https://github.com/medyagh/minikube/blob/b09ee50ec047410326a85435f4d99026f9c4f5c4/hack/jenkins/release_github_page.sh#L43-L44)
```
medya@~/workspace/minikube (master) $
medya@~/workspace/minikube (master) $ RELEASE_NOTES=$(perl -e "\$p=0; while(<>) { if(/^## Version ${VERSION} -/) { \$p=1 } elsif (/^##/) { \$p=0 }; if (\$p) { print }}" < CHANGELOG.md)
medya@~/workspace/minikube (master) $
medya@~/workspace/minikube (master) $ echo $RELEASE_NOTES
```
|
process
|
release jenkins does not update the release notes on github for past two releases and we have been updating the release notes manually i ran the command in the script manually and it seems that it is not matching medya workspace minikube master medya workspace minikube master release notes perl e p while if version version p elsif p if p print changelog md medya workspace minikube master medya workspace minikube master echo release notes
| 1
|
18,387
| 24,515,234,023
|
IssuesEvent
|
2022-10-11 04:00:13
|
f5devcentral/container-egress-service
|
https://api.github.com/repos/f5devcentral/container-egress-service
|
closed
|
CVE-2022-1996
|
fixed processing
|
[CVE-2022-1996](https://github.com/advisories/GHSA-r48q-9g5r-8q2h) Published: June 08, 2022; 9:15:07 AM -0400 V3.1: 9.1 **CRITICAL** V2.0: 6.4 MEDIUM
> Authorization Bypass Through User-Controlled Key in GitHub repository emicklei/go-restful prior to v3.8.0.
**To Reproduce**
[github.com/emicklei/go-restful version 2.15.0+incompatible ](https://github.com/f5devcentral/container-egress-service/blob/3e8f64bb9249ae60325fa7cd71e77b078abcfef2/go.mod#L6)
**Expected behavior**
github.com/emicklei/go-restful version 2.16.0 or higher
**Additional context**
[Authorization Bypass Through User-Controlled Key in go-restful](https://github.com/advisories/GHSA-r48q-9g5r-8q2h)
|
1.0
|
CVE-2022-1996 - [CVE-2022-1996](https://github.com/advisories/GHSA-r48q-9g5r-8q2h) Published: June 08, 2022; 9:15:07 AM -0400 V3.1: 9.1 **CRITICAL** V2.0: 6.4 MEDIUM
> Authorization Bypass Through User-Controlled Key in GitHub repository emicklei/go-restful prior to v3.8.0.
**To Reproduce**
[github.com/emicklei/go-restful version 2.15.0+incompatible ](https://github.com/f5devcentral/container-egress-service/blob/3e8f64bb9249ae60325fa7cd71e77b078abcfef2/go.mod#L6)
**Expected behavior**
github.com/emicklei/go-restful version 2.16.0 or higher
**Additional context**
[Authorization Bypass Through User-Controlled Key in go-restful](https://github.com/advisories/GHSA-r48q-9g5r-8q2h)
|
process
|
cve published june am critical medium authorization bypass through user controlled key in github repository emicklei go restful prior to to reproduce expected behavior github com emicklei go restful version or higher additional context
| 1
|
579,426
| 17,191,199,168
|
IssuesEvent
|
2021-07-16 11:13:36
|
thoth-station/kebechet
|
https://api.github.com/repos/thoth-station/kebechet
|
closed
|
Create a manager that automatically updates generated swagger clients
|
good first issue hacktoberfest kind/feature lifecycle/rotten priority/backlog
|
I, as Thoth, would like to keep my swagger clients up to date. If there is any change in the swagger specification, I would like to automatically propagate these changes to libraries - swagger clients - (and potentially release them automatically).
An example:
I changed Amun API swagger specification. On merge to master a bot finds out that the swagger specification has changed and automatically generates swagger client in amun-client.
The same applies for Thoth itself and its Thamos library.
Before the release is done, tests are run automatically (bots open PRs so changes are checked against test suites). This way we know if there is some breaking change in swagger clients even before we push services to production with their generated libraries.
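One minimal way such a manager could notice that a specification changed (a sketch under an assumed file layout, not Kebechet's actual implementation) is to compare a content hash of the spec against the hash recorded when the client was last generated:
```python
import hashlib
import pathlib

def spec_digest(path):
    """SHA-256 of the swagger/OpenAPI specification file."""
    return hashlib.sha256(pathlib.Path(path).read_bytes()).hexdigest()

def needs_regeneration(spec_path, recorded_digest):
    """True when the spec changed since the client was last generated."""
    return spec_digest(spec_path) != recorded_digest

# The recorded digest would be stored alongside the generated client;
# the file name below is an assumption for illustration:
# if needs_regeneration("swagger.yaml", recorded_digest):
#     regenerate_client_and_open_pull_request()
```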
|
1.0
|
Create a manager that automatically updates generated swagger clients - I, as Thoth, would like to keep my swagger clients up to date. If there is any change in the swagger specification, I would like to automatically propagate these changes to libraries - swagger clients - (and potentially release them automatically).
An example:
I changed Amun API swagger specification. On merge to master a bot finds out that the swagger specification has changed and automatically generates swagger client in amun-client.
The same applies for Thoth itself and its Thamos library.
Before the release is done there are automatically performed tests (bots open PRs so changes are checked against testsuites). This way we know if there is some breaking change in swagger clients even before we push services to production with their generated libraries.
|
non_process
|
create a manager that automatically updates generated swagger clients i as thoth would like to keep my swagger clients if there is any change in the swagger specification i would like to automatically populate these changes to libraries swagger clients and potentially automatically release them an example i changed amun api swagger specification on merge to master a bot finds out that the swagger specification has changed and automatically generates swagger client in amun client the same applies for thoth itself and its thamos library before the release is done there are automatically performed tests bots open prs so changes are checked against testsuites this way we know if there is some breaking change in swagger clients even before we push services to production with their generated libraries
| 0
|
7,634
| 2,603,739,225
|
IssuesEvent
|
2015-02-24 17:40:38
|
chrsmith/bwapi
|
https://api.github.com/repos/chrsmith/bwapi
|
closed
|
Building inside creep/pylon range.
|
auto-migrated Maintainability Priority-High Type-Enhancement Usability
|
```
Don't attempt to build outside of these ranges.
```
-----
Original issue reported on code.google.com by `AHeinerm` on 12 Feb 2009 at 2:55
|
1.0
|
Building inside creep/pylon range. - ```
Don't attempt to build outside of these ranges.
```
-----
Original issue reported on code.google.com by `AHeinerm` on 12 Feb 2009 at 2:55
|
non_process
|
building inside creep pylon range don t attempt to build outside of these ranges original issue reported on code google com by aheinerm on feb at
| 0
|
16,603
| 21,658,036,922
|
IssuesEvent
|
2022-05-06 15:59:03
|
ORNL-AMO/AMO-Tools-Desktop
|
https://api.github.com/repos/ORNL-AMO/AMO-Tools-Desktop
|
closed
|
Highlight differences from BL - PHAST
|
Process Heating
|
There are instances (especially with newer fields) where MEASUR doesn't register a difference between the baseline and the modification in assessments - this affects both individual field highlighting and badge changes.
Below is a list of known fields, but there may be more.
Basically any CO2 emission factor field (or the associated dropdowns):

New cost fields in Process Heating - Electrotechnology - Electric Arc Furnace

I checked the rest of process heating - CO2 related fields and 2 new fields need the blue




steam


This one may have been an artifact of being an older assessment...

I haven't looked at Pumps, Fans or wastewater. They will need to be checked quickly
Compressed Air doesn't have an Expert mode, so only the View/Add Scenario button needs to be tested
|
1.0
|
Highlight differences from BL - PHAST - There are instances (especially with newer fields) where MEASUR doesn't register a difference between the baseline and the modification in assessments - this affects both individual field highlighting and badge changes.
Below is a list of known fields, but there may be more.
Basically any CO2 emission factor field (or the associated dropdowns):

New cost fields in Process Heating - Electrotechnology - Electric Arc Furnace

I checked the rest of process heating - CO2 related fields and 2 new fields need the blue




steam


This one may have been an artifact of being an older assessment...

I haven't looked at Pumps, Fans or wastewater. They will need to be checked quickly
Compressed Air doesn't have an Expert mode, so only the View/Add Scenario button needs to be tested
|
process
|
highlight differences from bl phast there are instances especially with newer fields where measur doesn t register a difference between the baseline and mod in assessments both individual field highlighting and badge changes below is a list of known fields but there may be more basically any emission factor field or the associated dropdowns new cost fields in process heating electrotechnology electric arc furnace i checked the rest of process heating related fields and new fields need the blue steam this one may have been an artifact of being an older assessment i haven t looked at pumps fans or wastewater they will need to be checked quickly compressed air doesn t have an expert mode so only the view add scenario button needs to be tested
| 1
|
9,123
| 12,197,930,995
|
IssuesEvent
|
2020-04-29 21:44:19
|
googleapis/nodejs-game-servers
|
https://api.github.com/repos/googleapis/nodejs-game-servers
|
closed
|
Add CRUD samples for Game Server clusters
|
type: process
|
Add the following region tags:
- cloud_game_servers_create_cluster
- cloud_game_servers_get_cluster
- cloud_game_servers_list_clusters
- cloud_game_servers_delete_cluster
|
1.0
|
Add CRUD samples for Game Server clusters - Add the following region tags:
- cloud_game_servers_create_cluster
- cloud_game_servers_get_cluster
- cloud_game_servers_list_clusters
- cloud_game_servers_delete_cluster
|
process
|
add crud samples for game server clusters add the following region tags cloud game servers create cluster cloud game servers get cluster cloud game servers list clusters cloud game servers delete cluster
| 1
|
172,938
| 14,396,235,467
|
IssuesEvent
|
2020-12-03 05:49:55
|
TonySchaufelberger/Python-Calculus-Quiz-Code
|
https://api.github.com/repos/TonySchaufelberger/Python-Calculus-Quiz-Code
|
closed
|
Functionality
|
documentation relevant implications
|
Relevant Implications relating to Functionality:
Every aspect of the program works well and as intended. This matters because these features exist to enhance the user experience; if they did not work as intended, they would be ineffective and would instead detract from the experience, leaving the user confused or frustrated as to why it's not working.
The program's functionality also fulfills all end-user requirements. The various users who tested this program all used the quiz to revise their knowledge and responded positively to the options given to the user. Overall, this shows that the program is effective in its purpose and does what it was designed to do.
Rigorous version testing, combined with feedback from users at each stage, contributed heavily to the development of functional and effective features of the program. This bug-testing ensures that there are no prevalent bugs in the program.
Experimental features have also been added to the program. These have been greyed out until development completes, as partially complete features do not make for good functionality.
|
1.0
|
Functionality - Relevant Implications relating to Functionality:
Every aspect of the program works well and as intended. This matters because these features exist to enhance the user experience; if they did not work as intended, they would be ineffective and would instead detract from the experience, leaving the user confused or frustrated as to why it's not working.
The program's functionality also fulfills all end-user requirements. The various users who tested this program all used the quiz to revise their knowledge and responded positively to the options given to the user. Overall, this shows that the program is effective in its purpose and does what it was designed to do.
Rigorous version testing, combined with feedback from users at each stage, contributed heavily to the development of functional and effective features of the program. This bug-testing ensures that there are no prevalent bugs in the program.
Experimental features have also been added to the program. These have been greyed out until development completes, as partially complete features do not make for good functionality.
|
non_process
|
functionality relevant implications relating to functionality every aspect of the program works well and as intended this is important as these are implemented in order to enhance the user experience if they did not work as intended then they would be ineffective and would detract from the user experience instead as the user could be confused or frustrated as to why it s not working the program s functionality also fulfills all end user requirements when asking various users who have tested this program they have all used the quiz for revising their knowledge and responded positively to the options given to the user overall this shows that the program is effective in its purpose and what it was designed to do rigorous version testing combined with the feedback from users at each stage contributed heavily to the development of functional and effective features of the program this bug testing ensures that there are no prevalent bugs in the program there have also been experimental features added to the program these have been greyed out until development completes as partial completion does not form good functionality
| 0
|
289,447
| 31,932,997,403
|
IssuesEvent
|
2023-09-19 08:41:09
|
Trinadh465/linux-4.1.15_CVE-2023-4128
|
https://api.github.com/repos/Trinadh465/linux-4.1.15_CVE-2023-4128
|
opened
|
CVE-2018-20836 (High) detected in linux-stable-rtv4.1.33
|
Mend: dependency security vulnerability
|
## CVE-2018-20836 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>linux-stable-rtv4.1.33</b></p></summary>
<p>
<p>Julia Cartwright's fork of linux-stable-rt.git</p>
<p>Library home page: <a href=https://git.kernel.org/pub/scm/linux/kernel/git/julia/linux-stable-rt.git>https://git.kernel.org/pub/scm/linux/kernel/git/julia/linux-stable-rt.git</a></p>
<p>Found in HEAD commit: <a href="https://github.com/Trinadh465/linux-4.1.15_CVE-2023-4128/commit/0c6c8d8c809f697cd5fc581c6c08e9ad646c55a8">0c6c8d8c809f697cd5fc581c6c08e9ad646c55a8</a></p>
<p>Found in base branch: <b>master</b></p></p>
</details>
</p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Source Files (3)</summary>
<p></p>
<p>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/drivers/scsi/libsas/sas_expander.c</b>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/drivers/scsi/libsas/sas_expander.c</b>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/drivers/scsi/libsas/sas_expander.c</b>
</p>
</details>
<p></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png?' width=19 height=20> Vulnerability Details</summary>
<p>
An issue was discovered in the Linux kernel before 4.20. There is a race condition in smp_task_timedout() and smp_task_done() in drivers/scsi/libsas/sas_expander.c, leading to a use-after-free.
<p>Publish Date: 2019-05-07
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2018-20836>CVE-2018-20836</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>8.1</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: High
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2018-20836">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2018-20836</a></p>
<p>Release Date: 2019-05-07</p>
<p>Fix Resolution: v4.20-rc1</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
True
|
CVE-2018-20836 (High) detected in linux-stable-rtv4.1.33 - ## CVE-2018-20836 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>linux-stable-rtv4.1.33</b></p></summary>
<p>
<p>Julia Cartwright's fork of linux-stable-rt.git</p>
<p>Library home page: <a href=https://git.kernel.org/pub/scm/linux/kernel/git/julia/linux-stable-rt.git>https://git.kernel.org/pub/scm/linux/kernel/git/julia/linux-stable-rt.git</a></p>
<p>Found in HEAD commit: <a href="https://github.com/Trinadh465/linux-4.1.15_CVE-2023-4128/commit/0c6c8d8c809f697cd5fc581c6c08e9ad646c55a8">0c6c8d8c809f697cd5fc581c6c08e9ad646c55a8</a></p>
<p>Found in base branch: <b>master</b></p></p>
</details>
</p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Source Files (3)</summary>
<p></p>
<p>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/drivers/scsi/libsas/sas_expander.c</b>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/drivers/scsi/libsas/sas_expander.c</b>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/drivers/scsi/libsas/sas_expander.c</b>
</p>
</details>
<p></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png?' width=19 height=20> Vulnerability Details</summary>
<p>
An issue was discovered in the Linux kernel before 4.20. There is a race condition in smp_task_timedout() and smp_task_done() in drivers/scsi/libsas/sas_expander.c, leading to a use-after-free.
<p>Publish Date: 2019-05-07
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2018-20836>CVE-2018-20836</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>8.1</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: High
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2018-20836">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2018-20836</a></p>
<p>Release Date: 2019-05-07</p>
<p>Fix Resolution: v4.20-rc1</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
non_process
|
cve high detected in linux stable cve high severity vulnerability vulnerable library linux stable julia cartwright s fork of linux stable rt git library home page a href found in head commit a href found in base branch master vulnerable source files drivers scsi libsas sas expander c drivers scsi libsas sas expander c drivers scsi libsas sas expander c vulnerability details an issue was discovered in the linux kernel before there is a race condition in smp task timedout and smp task done in drivers scsi libsas sas expander c leading to a use after free publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity high privileges required none user interaction none scope unchanged impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution step up your open source security game with mend
| 0
|
15,930
| 20,149,023,165
|
IssuesEvent
|
2022-02-09 10:30:13
|
digitalmethodsinitiative/4cat
|
https://api.github.com/repos/digitalmethodsinitiative/4cat
|
closed
|
Allow processor results to auto-delete
|
enhancement processors
|
Currently automatic deletion of datasets after a given amount of time is possible for top-level datasets. This would also be a useful feature to have for some processors, if they collect new information that could be sensitive (e.g. photos) or take a lot of disk space (e.g. photos ;)
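A minimal sketch of such an expiry pass, assuming results are plain files in a directory and using mtime as the age signal (4CAT's actual dataset model is not reflected here):
```python
# Hypothetical TTL sweep for processor result files; intended to run from a
# scheduled job, not a drop-in for 4CAT's own dataset expiration machinery.
import time
from pathlib import Path

def expire_results(result_dir: Path, ttl_seconds: int) -> int:
    """Delete result files older than ttl_seconds; return how many were removed."""
    now = time.time()
    removed = 0
    for path in result_dir.glob("*"):
        if path.is_file() and now - path.stat().st_mtime > ttl_seconds:
            path.unlink()
            removed += 1
    return removed

# Example: expire photo results after one week.
# expire_results(Path("data/processor_results"), ttl_seconds=7 * 24 * 3600)
```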
|
1.0
|
Allow processor results to auto-delete - Currently automatic deletion of datasets after a given amount of time is possible for top-level datasets. This would also be a useful feature to have for some processors, if they collect new information that could be sensitive (e.g. photos) or take a lot of disk space (e.g. photos ;)
|
process
|
allow processor results to auto delete currently automatic deletion of datasets after a given amount of time is possible for top level datasets this would also be a useful feature to have for some processors if they collect new information that could be sensitive e g photos or take a lot of disk space e g photos
| 1
|
10,383
| 13,194,928,413
|
IssuesEvent
|
2020-08-13 17:42:37
|
pacificclimate/osprey
|
https://api.github.com/repos/pacificclimate/osprey
|
closed
|
Wrap RVIC parameters module into wps
|
enhancement process
|
The [`parameter`](https://github.com/UW-Hydro/RVIC/blob/master/rvic/parameters.py) module is responsible for the "development of impulse response functions using input datasets such as a flow direction grid, outlet locations, etc." Wrap this into a `process` that can handle either a `file` or `dict` object as input. Follow the [`script`](https://github.com/UW-Hydro/RVIC/blob/master/scripts/rvic) for other inputs that may be needed.
Things we need to add:
- The `process`
- A `pytest` suite
- A `notebook` demo (can be an individual demo for now)
Ensure that you:
- Test both locally and on the `docker` server
- Make note of any changes required in `RVIC` in this thread
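As a starting point, a hedged sketch of such a PyWPS `Process` wrapping `rvic.parameters` (identifiers, the config input shape, and the output path are assumptions, not osprey's actual implementation; verify the `parameters()` signature against the RVIC version in use):
```python
# Sketch of a PyWPS process around RVIC's parameters module.
from pywps import Process, ComplexInput, ComplexOutput, Format, FORMATS
from rvic.parameters import parameters  # entry point used by RVIC's CLI script

class RVICParameters(Process):
    def __init__(self):
        inputs = [
            ComplexInput(
                "config", "Parameters configuration (INI file)",
                supported_formats=[Format("text/plain")],
            ),
        ]
        outputs = [
            ComplexOutput(
                "output", "Generated impulse response parameter file",
                supported_formats=[FORMATS.NETCDF], as_reference=True,
            ),
        ]
        super().__init__(
            self._handler,
            identifier="parameters",
            title="Run RVIC parameters module",
            inputs=inputs,
            outputs=outputs,
            store_supported=True,
            status_supported=True,
        )

    def _handler(self, request, response):
        config = request.inputs["config"][0].file  # path to the uploaded config
        parameters(config, numofproc=1)  # signature as used by scripts/rvic; verify
        # Placeholder output path; the real process would read it from the config.
        response.outputs["output"].file = "rvic_params.nc"
        return response
```
Handling a `dict` input as well would mean serializing it to the INI layout RVIC expects before calling `parameters()`.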
|
1.0
|
Wrap RVIC parameters module into wps - The [`parameter`](https://github.com/UW-Hydro/RVIC/blob/master/rvic/parameters.py) module is responsible for the "development of impulse response functions using input datasets such as a flow direction grid, outlet locations, etc." Wrap this into a `process` that can handle either a `file` or `dict` object as input. Follow the [`script`](https://github.com/UW-Hydro/RVIC/blob/master/scripts/rvic) for other inputs that may be needed.
Things we need to add:
- The `process`
- A `pytest` suite
- A `notebook` demo (can be an individual demo for now)
Ensure that you:
- Test both locally and on the `docker` server
- Make note of any changes required in `RVIC` in this thread
|
process
|
wrap rvic parameters module into wps the module is responsible for the development of impulse response functions using input datasets such as a flow direction grid outlet locations etc wrap this into a process that can handle either a file or dict object as input follow the for other inputs that may be needed things we need to add the process a pytest suite a notebook demo can be an individual demo for now ensure that you test both locally and on the docker server make note of any changes required in rvic in this thread
| 1
|
671,559
| 22,766,661,143
|
IssuesEvent
|
2022-07-08 05:33:23
|
bcgov/entity
|
https://api.github.com/repos/bcgov/entity
|
closed
|
SBC Common Components: header dropdown account list doesn't scroll
|
bug Priority3 Relationships
|
Profile BCREG0001 in Dev has many accounts... so many, in fact, that they don't all display on a page. Unfortunately, the list/page doesn't scroll, so it's not possible to select the bottom ones without making the page taller (or the font smaller), which is not always possible.
Example:

^^ the drop-down list doesn't scroll, so I can't select any item below this.
|
1.0
|
SBC Common Components: header dropdown account list doesn't scroll - Profile BCREG0001 in Dev has many accounts... so many, in fact, that they don't all display on a page. Unfortunately, the list/page doesn't scroll, so it's not possible to select the bottom ones without making the page taller (or the font smaller), which is not always possible.
Example:

^^ the drop-down list doesn't scroll, so I can't select any item below this.
|
non_process
|
sbc common components header dropdown account list doesn t scroll profile in dev has many accounts so many in fact that they don t all display on a page unfortunately the list page doesn t scroll so it s not possible to select the bottom ones without making the page taller or the font smaller which is not always possible example the drop down list doesn t scroll so i can t select any item below this
| 0
|
22,705
| 32,030,779,065
|
IssuesEvent
|
2023-09-22 12:12:00
|
h4sh5/pypi-auto-scanner
|
https://api.github.com/repos/h4sh5/pypi-auto-scanner
|
opened
|
hpcflow-new2 0.2.0a106 has 2 GuardDog issues
|
guarddog exec-base64 silent-process-execution
|
https://pypi.org/project/hpcflow-new2
https://inspector.pypi.io/project/hpcflow-new2
```json
{
  "dependency": "hpcflow-new2",
  "version": "0.2.0a106",
  "result": {
    "issues": 2,
    "errors": {},
    "results": {
      "silent-process-execution": [
        {
          "location": "hpcflow_new2-0.2.0a106/hpcflow/sdk/helper/helper.py:111",
          "code": " proc = subprocess.Popen(\n args=args,\n stdin=subprocess.DEVNULL,\n stdout=subprocess.DEVNULL,\n stderr=subprocess.DEVNULL,\n **kwargs,\n )",
          "message": "This package is silently executing an external binary, redirecting stdout, stderr and stdin to /dev/null"
        }
      ],
      "exec-base64": [
        {
          "location": "hpcflow_new2-0.2.0a106/hpcflow/sdk/submission/jobscript.py:990",
          "code": " init_proc = subprocess.Popen(\n args=args,\n cwd=str(self.workflow.path),\n creationflags=subprocess.CREATE_NO_WINDOW,\n )",
          "message": "This package contains a call to the `eval` function with a `base64` encoded string as argument.\nThis is a common method used to hide a malicious payload in a module as static analysis will not decode the\nstring.\n"
        }
      ]
    },
    "path": "/tmp/tmp5k41pr1l/hpcflow-new2"
  }
}
```
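For context, a finding like the above can be reproduced locally with GuardDog's CLI (the `pypi scan` subcommand exists in the tool; the output layout may differ between versions):
```
guarddog pypi scan hpcflow-new2
```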
|
1.0
|
hpcflow-new2 0.2.0a106 has 2 GuardDog issues - https://pypi.org/project/hpcflow-new2
https://inspector.pypi.io/project/hpcflow-new2
```json
{
  "dependency": "hpcflow-new2",
  "version": "0.2.0a106",
  "result": {
    "issues": 2,
    "errors": {},
    "results": {
      "silent-process-execution": [
        {
          "location": "hpcflow_new2-0.2.0a106/hpcflow/sdk/helper/helper.py:111",
          "code": " proc = subprocess.Popen(\n args=args,\n stdin=subprocess.DEVNULL,\n stdout=subprocess.DEVNULL,\n stderr=subprocess.DEVNULL,\n **kwargs,\n )",
          "message": "This package is silently executing an external binary, redirecting stdout, stderr and stdin to /dev/null"
        }
      ],
      "exec-base64": [
        {
          "location": "hpcflow_new2-0.2.0a106/hpcflow/sdk/submission/jobscript.py:990",
          "code": " init_proc = subprocess.Popen(\n args=args,\n cwd=str(self.workflow.path),\n creationflags=subprocess.CREATE_NO_WINDOW,\n )",
          "message": "This package contains a call to the `eval` function with a `base64` encoded string as argument.\nThis is a common method used to hide a malicious payload in a module as static analysis will not decode the\nstring.\n"
        }
      ]
    },
    "path": "/tmp/tmp5k41pr1l/hpcflow-new2"
  }
}
```
|
process
|
hpcflow has guarddog issues dependency hpcflow version result issues errors results silent process execution location hpcflow hpcflow sdk helper helper py code proc subprocess popen n args args n stdin subprocess devnull n stdout subprocess devnull n stderr subprocess devnull n kwargs n message this package is silently executing an external binary redirecting stdout stderr and stdin to dev null exec location hpcflow hpcflow sdk submission jobscript py code init proc subprocess popen n args args n cwd str self workflow path n creationflags subprocess create no window n message this package contains a call to the eval function with a encoded string as argument nthis is a common method used to hide a malicious payload in a module as static analysis will not decode the nstring n path tmp hpcflow
| 1
|
57,447
| 15,786,274,933
|
IssuesEvent
|
2021-04-01 17:30:42
|
scipy/scipy
|
https://api.github.com/repos/scipy/scipy
|
closed
|
reproducibility of scipy.sparse.linalg.svds
|
defect scipy.sparse.linalg
|
`scipy.sparse.linalg.svds` sometimes gives slightly different results when calling it for the second time. It is not easy to reproduce this behavior for me. I need to use a special input array that I uploaded here: https://sharing.crt-dresden.de/index.php/s/zkgBd6MQj2hd2zq/download
The data I uploaded is single cell data. So far, I didn't manage to create a random sparse matrix showing the same issue.
I run the code below in a JupyterLab notebook. It is important that I restart the Python kernel and then run the code; that is when the reproducibility issue appears. If I run the code again without restarting the kernel, successive calls usually give the same result.
I came across the issue when using scanpy, where `svds` is used to do principal components analysis (https://github.com/theislab/scanpy/issues/1749).
#### Reproducing code example:
<!--
If you place your code between the triple backticks below,
it will be rendered as a code block.
-->
```python
import numpy as np
from scipy.sparse import load_npz
from scipy.sparse.linalg import LinearOperator, svds

X = load_npz('X.npz')
np.random.seed(42)
v0 = np.random.rand(2700)

def linear_operator(X):
    mu = X.mean(0).A.flatten()[None, :]
    mdot = mu.dot
    mmat = mdot
    mhdot = mu.T.dot
    mhmat = mu.T.dot
    Xdot = X.dot
    Xmat = Xdot
    XHdot = X.T.conj().dot
    XHmat = XHdot
    ones = np.ones(X.shape[0])[None, :].dot

    def matvec(x):
        return Xdot(x) - mdot(x)

    def matmat(x):
        return Xmat(x) - mmat(x)

    def rmatvec(x):
        return XHdot(x) - mhdot(ones(x))

    XL = LinearOperator(
        matvec=matvec,
        dtype=X.dtype,
        matmat=matmat,
        shape=X.shape,
        rmatvec=rmatvec
    )
    return XL

XL = linear_operator(X)
_, s0, _ = svds(XL, solver='arpack', k=50, v0=v0)
_, s, _ = svds(XL, solver='arpack', k=50, v0=v0)
print(np.array_equal(s0, s))
print(s0)
print(s)
```
#### Output:
<!-- If any, paste the *full* error message inside a code block
as above (starting from line Traceback)
-->
I would expect a `True` in the first line and the exact same values for the two arrays. The values are only approximately the same.
```
False
[ 278.30255 281.411 284.99527 287.63443 291.7871 295.71402
298.26492 299.56052 303.0501 309.24493 310.1185 312.63315
315.82254 316.74927 319.27026 319.75082 326.31787 328.99042
334.83572 336.02936 337.44098 346.24493 353.1684 362.29474
365.06183 373.71198 376.8522 382.48376 387.80975 409.9564
416.49933 428.44943 438.2156 456.71884 473.25833 480.61197
486.87122 493.66934 504.4133 533.55255 581.6367 725.28375
799.1953 907.3372 1107.4529 1161.2765 1378.3331 1664.9464
3205.3774 3608.9746 ]
[ 278.30252 281.41095 284.9953 287.63443 291.7871 295.714
298.26505 299.5605 303.05014 309.245 310.11847 312.63306
315.82245 316.74924 319.2703 319.75064 326.3178 328.9904
334.8356 336.02936 337.44092 346.24493 353.1683 362.29456
365.0619 373.71204 376.85217 382.48367 387.8097 409.9564
416.49933 428.44934 438.2155 456.71884 473.25833 480.61185
486.8711 493.66928 504.4132 533.5527 581.63684 725.2845
799.19574 907.3374 1107.4526 1161.2762 1378.3326 1664.9454
3205.3777 3608.9749 ]
```
#### Scipy/Numpy/Python version information:
<!-- You can simply run the following and paste the result in a code block
```
import sys, scipy, numpy; print(scipy.__version__, numpy.__version__, sys.version_info)
```
-->
```
1.6.0 1.20.1 sys.version_info(major=3, minor=8, micro=5, releaselevel='final', serial=0)
```
I am running the code in this singularity container: https://singularity-hub.org/collections/5095
|
1.0
|
reproducibility of scipy.sparse.linalg.svds - `scipy.sparse.linalg.svds` sometimes gives slightly different results when calling it for the second time. It is not easy to reproduce this behavior for me. I need to use a special input array that I uploaded here: https://sharing.crt-dresden.de/index.php/s/zkgBd6MQj2hd2zq/download
The data I uploaded is single cell data. So far, I didn't manage to create a random sparse matrix showing the same issue.
I run the code below in a JupyterLab notebook. It is important that I restart the Python kernel and then run the code; that is when the reproducibility issue appears. If I run the code again without restarting the kernel, successive calls usually give the same result.
I came across the issue when using scanpy, where `svds` is used to do principal components analysis (https://github.com/theislab/scanpy/issues/1749).
#### Reproducing code example:
<!--
If you place your code between the triple backticks below,
it will be rendered as a code block.
-->
```python
import numpy as np
from scipy.sparse import load_npz
from scipy.sparse.linalg import LinearOperator, svds

X = load_npz('X.npz')
np.random.seed(42)
v0 = np.random.rand(2700)

def linear_operator(X):
    mu = X.mean(0).A.flatten()[None, :]
    mdot = mu.dot
    mmat = mdot
    mhdot = mu.T.dot
    mhmat = mu.T.dot
    Xdot = X.dot
    Xmat = Xdot
    XHdot = X.T.conj().dot
    XHmat = XHdot
    ones = np.ones(X.shape[0])[None, :].dot

    def matvec(x):
        return Xdot(x) - mdot(x)

    def matmat(x):
        return Xmat(x) - mmat(x)

    def rmatvec(x):
        return XHdot(x) - mhdot(ones(x))

    XL = LinearOperator(
        matvec=matvec,
        dtype=X.dtype,
        matmat=matmat,
        shape=X.shape,
        rmatvec=rmatvec
    )
    return XL

XL = linear_operator(X)
_, s0, _ = svds(XL, solver='arpack', k=50, v0=v0)
_, s, _ = svds(XL, solver='arpack', k=50, v0=v0)
print(np.array_equal(s0, s))
print(s0)
print(s)
```
#### Output:
<!-- If any, paste the *full* error message inside a code block
as above (starting from line Traceback)
-->
I would expect a `True` in the first line and the exact same values for the two arrays. The values are only approximately the same.
```
False
[ 278.30255 281.411 284.99527 287.63443 291.7871 295.71402
298.26492 299.56052 303.0501 309.24493 310.1185 312.63315
315.82254 316.74927 319.27026 319.75082 326.31787 328.99042
334.83572 336.02936 337.44098 346.24493 353.1684 362.29474
365.06183 373.71198 376.8522 382.48376 387.80975 409.9564
416.49933 428.44943 438.2156 456.71884 473.25833 480.61197
486.87122 493.66934 504.4133 533.55255 581.6367 725.28375
799.1953 907.3372 1107.4529 1161.2765 1378.3331 1664.9464
3205.3774 3608.9746 ]
[ 278.30252 281.41095 284.9953 287.63443 291.7871 295.714
298.26505 299.5605 303.05014 309.245 310.11847 312.63306
315.82245 316.74924 319.2703 319.75064 326.3178 328.9904
334.8356 336.02936 337.44092 346.24493 353.1683 362.29456
365.0619 373.71204 376.85217 382.48367 387.8097 409.9564
416.49933 428.44934 438.2155 456.71884 473.25833 480.61185
486.8711 493.66928 504.4132 533.5527 581.63684 725.2845
799.19574 907.3374 1107.4526 1161.2762 1378.3326 1664.9454
3205.3777 3608.9749 ]
```
#### Scipy/Numpy/Python version information:
<!-- You can simply run the following and paste the result in a code block
```
import sys, scipy, numpy; print(scipy.__version__, numpy.__version__, sys.version_info)
```
-->
```
1.6.0 1.20.1 sys.version_info(major=3, minor=8, micro=5, releaselevel='final', serial=0)
```
I am running the code in this singularity container: https://singularity-hub.org/collections/5095
|
non_process
|
reproducibility of scipy sparse linalg svds scipy sparse linalg svds sometimes gives slightly different results when calling it for the second time it is not easy to reproduce this behavior for me i need to use a special input array that i uploaded here the data i uploaded is single cell data so far i didn t manage to create a random sparse matrix showing the same issue i do run the code below in a jupyter lab notebook it is important that i restart the python kernel and run the code then the reproducibility issue will happen if i run the code again without restarting the kernel successive calls usually give the same result i came across the issue when using scanpy where svds is used to do principal components analysis reproducing code example if you place your code between the triple backticks below it will be rendered as a code block python import numpy as np from scipy sparse import load npz from scipy sparse linalg import linearoperator svds x load npz x npz np random seed np random rand def linear operator x mu x mean a flatten mdot mu dot mmat mdot mhdot mu t dot mhmat mu t dot xdot x dot xmat xdot xhdot x t conj dot xhmat xhdot ones np ones x shape dot def matvec x return xdot x mdot x def matmat x return xmat x mmat x def rmatvec x return xhdot x mhdot ones x xl linearoperator matvec matvec dtype x dtype matmat matmat shape x shape rmatvec rmatvec return xl xl linear operator x svds xl solver arpack k s svds xl solver arpack k print np array equal s print print s output if any paste the full error message inside a code block as above starting from line traceback i would expect a true in the first line and the exact same values for the two arrays the values are only approximately the same false scipy numpy python version information you can simply run the following and paste the result in a code block import sys scipy numpy print scipy version numpy version sys version info sys version info major minor micro releaselevel final serial i am running the code in this singularity container
| 0
|
20,964
| 27,818,512,298
|
IssuesEvent
|
2023-03-19 00:08:53
|
parcel-bundler/parcel
|
https://api.github.com/repos/parcel-bundler/parcel
|
closed
|
The postcss-sprites plugin requires additional configuration and cannot get postcss-sprites.config.js normally
|
CSS Preprocessing Stale
|

Through the example of the official documentation, create postcss-sprites.config.js for configuration, but it seems that the configuration of postcss-sprites.config.js is not obtained.
Using postcss.config.js, promptοΌ
```
@parcel/transformer-postcss: WARNING: Using a JavaScript PostCSS config file means losing out on caching features of
Parcel. Use a .postcssrc(.json) file whenever possible.
```
I forked postcss-sprites: additional postcss-sprites.config.js loading is added through fs, but it seems that saving postcss-sprites.config.js does not take effect. The documentation prompt says to avoid trying to get information from other sources, especially from the file system. How should I do this?
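For what it's worth, the JSON form the warning asks for might look like this (`spritePath` is a documented postcss-sprites option; the value here is illustrative):
```json
{
  "plugins": {
    "postcss-sprites": {
      "spritePath": "./dist/sprites"
    }
  }
}
```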
|
1.0
|
The postcss-sprites plugin requires additional configuration and cannot get postcss-sprites.config.js normally - [screenshot]
Following the example in the official documentation, I created postcss-sprites.config.js for configuration, but the configuration from postcss-sprites.config.js does not seem to be picked up.
Using postcss.config.js instead, the prompt is:
```
@parcel/transformer-postcss: WARNING: Using a JavaScript PostCSS config file means losing out on caching features of
Parcel. Use a .postcssrc(.json) file whenever possible.
```
I forked postcss-sprites: additional postcss-sprites.config.js loading is added through fs, but it seems that saving postcss-sprites.config.js does not take effect. The documentation prompt says to avoid trying to get information from other sources, especially from the file system. How should I do this?
|
process
|
the postcss sprites plugin requires additional configuration and cannot get postcss sprites config js normally through the example of the official documentation create postcss sprites config js for configuration but it seems that the configuration of postcss sprites config js is not obtained using postcss config js prompt parcel transformer postcss warning using a javascript postcss config file means losing out on caching features of parcel use a postcssrc json file whenever possible fork postcss sprites additional postcss sprites config js is added through fs but it seems that saving postcss sprites config js does not take effect see the documentation prompt avoid trying to get information from other sources especially from the file system i how should i do it
| 1
|
55,619
| 14,597,328,698
|
IssuesEvent
|
2020-12-20 19:38:17
|
scipy/scipy
|
https://api.github.com/repos/scipy/scipy
|
closed
|
`scipy.ncx2.sf` should be monotone decreasing
|
defect scipy.special scipy.stats
|
I'm getting unexpected behavior from `scipy.ncx2.sf`. The survival function should eventually go to zero for large enough inputs, but instead I'm seeing a kind of "foot", i.e., it is bottoming out at a positive value. The following interactive I/O demonstrates the issue.
### Reproducing code example:
```
(Pdb) v_T_norm[::50]
array([  4.99515382e+00,   1.07617327e+01,   2.31854502e+01,
         4.99515382e+01,   1.07617327e+02,   2.31854502e+02,
         4.99515382e+02,   1.07617327e+03,   2.31854502e+03,
         4.99515382e+03,   1.07617327e+04,   2.31854502e+04,
         4.99515382e+04])
(Pdb) nu
20.0
(Pdb) lam
499.51538166556196
(Pdb) ncx2.sf(v_T_norm[::50], df=nu, nc=lam)
array([  1.00000000e+00,   1.00000000e+00,   1.00000000e+00,
         1.00000000e+00,   1.00000000e+00,   1.00000000e+00,
         6.64666099e-01,   4.16325934e-05,   4.16325934e-05,
         4.16325934e-05,   4.16325934e-05,   4.16325934e-05,
         4.16325934e-05])
```
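The same session as a self-contained script (same `df`, `nc`, and sample points as above):
```python
# Reproduction of the reported plateau in ncx2.sf for large x.
import numpy as np
from scipy.stats import ncx2

nu, lam = 20.0, 499.51538166556196
x = np.array([4.99515382e+00, 1.07617327e+01, 2.31854502e+01,
              4.99515382e+01, 1.07617327e+02, 2.31854502e+02,
              4.99515382e+02, 1.07617327e+03, 2.31854502e+03,
              4.99515382e+03, 1.07617327e+04, 2.31854502e+04,
              4.99515382e+04])

sf = ncx2.sf(x, df=nu, nc=lam)
print(sf)
# The survival function should decay toward 0 for large x;
# the report instead sees it bottom out near 4.16e-05.
print("tail value:", sf[-1])
```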
|
1.0
|
`scipy.ncx2.sf` should be monotone decreasing - I'm getting unexpected behavior from `scipy.ncx2.sf`. The survival function should eventually go to zero for large enough inputs, but instead I'm seeing a kind of "foot", i.e., it is bottoming out at a positive value. The following interactive I/O demonstrates the issue.
### Reproducing code example:
```
(Pdb) v_T_norm[::50]
array([  4.99515382e+00,   1.07617327e+01,   2.31854502e+01,
         4.99515382e+01,   1.07617327e+02,   2.31854502e+02,
         4.99515382e+02,   1.07617327e+03,   2.31854502e+03,
         4.99515382e+03,   1.07617327e+04,   2.31854502e+04,
         4.99515382e+04])
(Pdb) nu
20.0
(Pdb) lam
499.51538166556196
(Pdb) ncx2.sf(v_T_norm[::50], df=nu, nc=lam)
array([  1.00000000e+00,   1.00000000e+00,   1.00000000e+00,
         1.00000000e+00,   1.00000000e+00,   1.00000000e+00,
         6.64666099e-01,   4.16325934e-05,   4.16325934e-05,
         4.16325934e-05,   4.16325934e-05,   4.16325934e-05,
         4.16325934e-05])
```
|
non_process
|
scipy sf should be monotone decreasing i m getting unexpected behavior from scipy sf the survival function should eventually go to zero for large enough inputs but instead i m seeing a kind of foot i e it is bottoming out at a positive value the following interactive i o demonstrates the issue reproducing code example pdb v t norm array pdb nu pdb lam pdb sf v t norm df nu nc lam array
| 0
|
57,100
| 15,687,979,328
|
IssuesEvent
|
2021-03-25 14:14:00
|
openzfs/zfs
|
https://api.github.com/repos/openzfs/zfs
|
opened
|
Race between appending AIO DIO and fallocate
|
Status: Triage Needed Type: Defect
|
<!-- Please fill out the following template, which will help other contributors address your issue. -->
<!--
Thank you for reporting an issue.
*IMPORTANT* - Please check our issue tracker before opening a new issue.
Additional valuable information can be found in the OpenZFS documentation
and mailing list archives.
Please fill in as much of the template as possible.
-->
### System information
<!-- add version after "|" character -->
Type | Version/Name
--- | ---
Distribution Name |
Distribution Version |
Linux Kernel | 5.11
Architecture |
ZFS Version | 2.0.0-rc1_466_g8a915ba1f66e
SPL Version | 2.0.0-rc1_466_g8a915ba1f66e
<!--
Commands to find ZFS/SPL versions:
modinfo zfs | grep -iw version
modinfo spl | grep -iw version
-->
### Describe the problem you're observing
This problem was detected with generic/586, from the fstest suite. The test description is:
> Race an appending aio dio write to the second block of a file while
> simultaneously fallocating to the first block. Make sure that we end up
> with a two-block file.
There's also the [mailing-list thread](https://lore.kernel.org/linux-xfs/20191029100342.GA41131@bfoster/T) for the XFS fix for this race.
After looking at the code and running some debug code, I think that `zpl_fallocate_common()` races against `zpl_iter_write()`:
- `zpl_iter_write()` calls `zfs_write()`
- `zpl_fallocate_common()` starts executing and gets 0 for olen (`olen = i_size_read(ip);`).
- However, in `zfs_freesp()`, after the sa_lookup, `zp->z_size` will be already updated from the `zfs_write()` above.
I tried to come up with a fix, but I'm sure anyone familiar with this code will have that fix ready a few months before I do, so I decided to create an issue instead :-)
### Describe how to reproduce the problem
Simply run test generic/586 from the fstest suite. An easier alternative that doesn't require hacking the fstests to run with ZFS, is to manually execute:
```
<fstest-dir>/src/aio-dio-regress/aio-dio-append-write-fallocate-race /storage/fs0/testfile 1
```
### Include any warning/errors/backtraces from the system logs
<!--
*IMPORTANT* - Please mark logs and text output from terminal commands
or else Github will not display them correctly.
An example is provided below.
Example:
```
this is an example how log text should be marked (wrap it with ```)
```
-->
|
1.0
|
Race between appending AIO DIO and fallocate - <!-- Please fill out the following template, which will help other contributors address your issue. -->
<!--
Thank you for reporting an issue.
*IMPORTANT* - Please check our issue tracker before opening a new issue.
Additional valuable information can be found in the OpenZFS documentation
and mailing list archives.
Please fill in as much of the template as possible.
-->
### System information
<!-- add version after "|" character -->
Type | Version/Name
--- | ---
Distribution Name |
Distribution Version |
Linux Kernel | 5.11
Architecture |
ZFS Version | 2.0.0-rc1_466_g8a915ba1f66e
SPL Version | 2.0.0-rc1_466_g8a915ba1f66e
<!--
Commands to find ZFS/SPL versions:
modinfo zfs | grep -iw version
modinfo spl | grep -iw version
-->
### Describe the problem you're observing
This problem was detected with generic/586, from the fstest suite. The test description is:
> Race an appending aio dio write to the second block of a file while
> simultaneously fallocating to the first block. Make sure that we end up
> with a two-block file.
There's also the [mailing-list thread](https://lore.kernel.org/linux-xfs/20191029100342.GA41131@bfoster/T) for the XFS fix for this race.
After looking at the code and running some debug code, I think that `zpl_fallocate_common()` races against `zpl_iter_write()`:
- `zpl_iter_write()` calls `zfs_write()`
- `zpl_fallocate_common()` starts executing and gets 0 for olen (`olen = i_size_read(ip);`).
- However, in `zfs_freesp()`, after the sa_lookup, `zp->z_size` will be already updated from the `zfs_write()` above.
I tried to come up with a fix, but I'm sure anyone familiar with this code will have that fix ready a few months before I do, so I decided to create an issue instead :-)
### Describe how to reproduce the problem
Simply run test generic/586 from the fstest suite. An easier alternative that doesn't require hacking the fstests to run with ZFS, is to manually execute:
```
<fstest-dir>/src/aio-dio-regress/aio-dio-append-write-fallocate-race /storage/fs0/testfile 1
```
### Include any warning/errors/backtraces from the system logs
<!--
*IMPORTANT* - Please mark logs and text output from terminal commands
or else Github will not display them correctly.
An example is provided below.
Example:
```
this is an example how log text should be marked (wrap it with ```)
```
-->
|
non_process
|
race between appending aio dio and fallocate thank you for reporting an issue important please check our issue tracker before opening a new issue additional valuable information can be found in the openzfs documentation and mailing list archives please fill in as much of the template as possible system information type version name distribution name distribution version linux kernel architecture zfs version spl version commands to find zfs spl versions modinfo zfs grep iw version modinfo spl grep iw version describe the problem you re observing this problem was detected with generic from the fstest suite the test description is race an appending aio dio write to the second block of a file while simultaneously fallocating to the first block make sure that we end up with a two block file there s also the for the xfs fix for this race after looking at the code and running some debug code i think that zpl fallocate common races against zpl iter write zpl iter write calls zfs write zpl fallocate common starts executing and gets for olen olen i size read ip however in zfs freesp after the sa lookup zp z size will be already updated from the zfs write above i tried to come up with a fix but i m sure anyone familiar with this code will have that fix ready a few months before i do so i decided to create an issue instead describe how to reproduce the problem simply run test generic from the fstest suite an easier alternative that doesn t require hacking the fstests to run with zfs is to manually execute src aio dio regress aio dio append write fallocate race storage testfile include any warning errors backtraces from the system logs important please mark logs and text output from terminal commands or else github will not display them correctly an example is provided below example this is an example how log text should be marked wrap it with
| 0
|
223,877
| 17,645,059,867
|
IssuesEvent
|
2021-08-20 04:00:07
|
elastic/uptime
|
https://api.github.com/repos/elastic/uptime
|
reopened
|
Uptime not handling different sized screenshots (mobile) particularly well
|
bug polish test-plan v7.14.0
|
Kibana 7.14 (local, checked with build just now)
When using mobile device emulation in a Synthetics test, for example with:
```
playwrightOptions: {
  userAgent: 'Mozilla/5.0 (iPhone; CPU iPhone OS 11_0 like Mac OS X) AppleWebKit/604.1.38 (KHTML, like Gecko) Version/11.0 Mobile/15A372 Safari/604.1',
  viewport: {
    width: 375,
    height: 667,
  },
  deviceScaleFactor: 2,
  isMobile: true,
  hasTouch: true,
}
```
Uptime has trouble showing the different sized image in some places.
Hovers and clicks to see the big (non thumbnail) image seems to be ok
<details>
<summary>Click to expand</summary>

</details>
Test result history seems ok:
<details>
<summary>Click to expand</summary>

</details>
Clicking through to a test result seems ok
<details>
<summary>Click to expand</summary>

</details>
However, expanding the step does not give enough space for the image:

(In this animation, I have scrolled all the way to the bottom of the screen, but the image is being chopped off.)
|
1.0
|
Uptime not handling different sized screenshots (mobile) particularly well - Kibana 7.14 (local, checked with build just now)
When using mobile device emulation in a Synthetics test, for example with:
```
playwrightOptions: {
  userAgent: 'Mozilla/5.0 (iPhone; CPU iPhone OS 11_0 like Mac OS X) AppleWebKit/604.1.38 (KHTML, like Gecko) Version/11.0 Mobile/15A372 Safari/604.1',
  viewport: {
    width: 375,
    height: 667,
  },
  deviceScaleFactor: 2,
  isMobile: true,
  hasTouch: true,
}
```
Uptime has trouble showing the different sized image in some places.
Hovers and clicks to see the big (non thumbnail) image seems to be ok
<details>
<summary>Click to expand</summary>

</details>
Test result history seems ok:
<details>
<summary>Click to expand</summary>

</details>
Clicking through to a test result seems ok
<details>
<summary>Click to expand</summary>

</details>
However, expanding the step does not give enough space for the image:

(In this animation, I have scrolled all the way to the bottom of the screen, but the image is being chopped off.)
|
non_process
|
uptime not handling different sized screenshots mobile particularly well kibana local checked with build just now when using mobile device emulation in a synthetics test for example with playwrightoptions useragent mozilla iphone cpu iphone os like mac os x applewebkit khtml like gecko version mobile safari viewport width height devicescalefactor ismobile true hastouch true uptime has trouble showing the different sized image in some places hovers and clicks to see the big non thumbnail image seems to be ok click to expand test result history seems ok click to expand clicking through to a test result seems ok click to expand however expanding the step does not give enough space for the image in this animation i have scrolled all the way to the bottom of the screen but the image is being chopped off
| 0
|
16,916
| 22,264,973,957
|
IssuesEvent
|
2022-06-10 06:27:42
|
bazelbuild/bazel
|
https://api.github.com/repos/bazelbuild/bazel
|
closed
|
bazel Windows: Error: Unable to access jarfile C:\\SPB_Data\\_bazel_...\\A-server.jar
|
more data needed type: support / not a bug (process) area-Windows team-OSS
|
### Description of the bug:
Hi,
I installed bazel v 4.2.2 on Windows by downloading the exe file. when I run `bazel version`, I have this error:
```
Error: Unable to access jarfile C:\\SPB_Data\\_bazel_User???\\install\\6ece1c08582de19efc011e4c994a5b03\\A-server.jar
```
Please note the `???` in the path `SPB_Data\\_bazel_User???\\`; I guess it's because of some unicode characters in my Windows user name.
I can't change my Windows user name. Is there a way to change the default path setting so that Bazel can work with it?
### Which operating system are you running Bazel on?
Windows 10
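One workaround to try (a suggestion, not a confirmed fix for this report): Bazel's `--output_user_root` startup option can move the output tree to an ASCII-only path, which sidesteps the non-ASCII user-name directory:
```
bazel --output_user_root=C:/bazel_root version
```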
|
1.0
|
bazel Windows: Error: Unable to access jarfile C:\\SPB_Data\\_bazel_...\\A-server.jar - ### Description of the bug:
Hi,
I installed Bazel v4.2.2 on Windows by downloading the exe file. When I run `bazel version`, I get this error:
```
Error: Unable to access jarfile C:\\SPB_Data\\_bazel_User???\\install\\6ece1c08582de19efc011e4c994a5b03\\A-server.jar
```
Please note the `???` in the path `SPB_Data\\_bazel_User???\\`; I guess it's because of some unicode characters in my Windows user name.
I can't change my Windows user name. Is there a way to change the default path setting so that Bazel can work with it?
### Which operating system are you running Bazel on?
Windows 10
|
process
|
bazel windows error unable to access jarfile c spb data bazel a server jar description of the bug hi i installed bazel v on windows by downloading the exe file when i run bazel version i have this error error unable to access jarfile c spb data bazel user install a server jar please note the in the path of spb data bazel user and i guess it s because of some unicode char in my windows user name i can t change my windows user name is there a way to change the default path setting so that bazel can work in it which operating system are you running bazel on windows
| 1
|
55,390
| 14,008,845,954
|
IssuesEvent
|
2020-10-29 00:45:52
|
mwilliams7197/web-stories-wp
|
https://api.github.com/repos/mwilliams7197/web-stories-wp
|
opened
|
CVE-2020-11022 (Medium) detected in multiple libraries
|
security vulnerability
|
## CVE-2020-11022 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>jquery-1.9.0.min.js</b>, <b>jquery-1.11.3.js</b>, <b>jquery-2.0.3.min.js</b>, <b>jquery-1.9.1.js</b>, <b>jquery-3.4.1.min.js</b>, <b>jquery-1.8.1.min.js</b></p></summary>
<p>
<details><summary><b>jquery-1.9.0.min.js</b></p></summary>
<p>JavaScript library for DOM operations</p>
<p>Library home page: <a href="https://cdnjs.cloudflare.com/ajax/libs/jquery/1.9.0/jquery.min.js">https://cdnjs.cloudflare.com/ajax/libs/jquery/1.9.0/jquery.min.js</a></p>
<p>Path to dependency file: web-stories-wp/node_modules/es6-shim/test-sham/index.html</p>
<p>Path to vulnerable library: web-stories-wp/node_modules/es6-shim/test-sham/index.html</p>
<p>
Dependency Hierarchy:
- :x: **jquery-1.9.0.min.js** (Vulnerable Library)
</details>
<details><summary><b>jquery-1.11.3.js</b></p></summary>
<p>JavaScript library for DOM operations</p>
<p>Library home page: <a href="https://cdnjs.cloudflare.com/ajax/libs/jquery/1.11.3/jquery.js">https://cdnjs.cloudflare.com/ajax/libs/jquery/1.11.3/jquery.js</a></p>
<p>Path to dependency file: web-stories-wp/node_modules/es6-shim/test/index.html</p>
<p>Path to vulnerable library: web-stories-wp/node_modules/es6-shim/test/index.html</p>
<p>
Dependency Hierarchy:
- :x: **jquery-1.11.3.js** (Vulnerable Library)
</details>
<details><summary><b>jquery-2.0.3.min.js</b></p></summary>
<p>JavaScript library for DOM operations</p>
<p>Library home page: <a href="https://cdnjs.cloudflare.com/ajax/libs/jquery/2.0.3/jquery.min.js">https://cdnjs.cloudflare.com/ajax/libs/jquery/2.0.3/jquery.min.js</a></p>
<p>Path to dependency file: web-stories-wp/node_modules/@egjs/list-differ/doc/eg.ListDiffer.html</p>
<p>Path to vulnerable library: web-stories-wp/node_modules/@egjs/list-differ/doc/scripts/jquery.min.js</p>
<p>
Dependency Hierarchy:
- :x: **jquery-2.0.3.min.js** (Vulnerable Library)
</details>
<details><summary><b>jquery-1.9.1.js</b></p></summary>
<p>JavaScript library for DOM operations</p>
<p>Library home page: <a href="https://cdnjs.cloudflare.com/ajax/libs/jquery/1.9.1/jquery.js">https://cdnjs.cloudflare.com/ajax/libs/jquery/1.9.1/jquery.js</a></p>
<p>Path to dependency file: web-stories-wp/node_modules/tinycolor2/test/index.html</p>
<p>Path to vulnerable library: web-stories-wp/node_modules/tinycolor2/test/../demo/jquery-1.9.1.js,web-stories-wp/node_modules/tinycolor2/demo/jquery-1.9.1.js</p>
<p>
Dependency Hierarchy:
- :x: **jquery-1.9.1.js** (Vulnerable Library)
</details>
<details><summary><b>jquery-3.4.1.min.js</b></p></summary>
<p>JavaScript library for DOM operations</p>
<p>Library home page: <a href="https://cdnjs.cloudflare.com/ajax/libs/jquery/3.4.1/jquery.min.js">https://cdnjs.cloudflare.com/ajax/libs/jquery/3.4.1/jquery.min.js</a></p>
<p>Path to dependency file: web-stories-wp/node_modules/colorthief/async.html</p>
<p>Path to vulnerable library: web-stories-wp/node_modules/colorthief/async.html</p>
<p>
Dependency Hierarchy:
- :x: **jquery-3.4.1.min.js** (Vulnerable Library)
</details>
<details><summary><b>jquery-1.8.1.min.js</b></p></summary>
<p>JavaScript library for DOM operations</p>
<p>Library home page: <a href="https://cdnjs.cloudflare.com/ajax/libs/jquery/1.8.1/jquery.min.js">https://cdnjs.cloudflare.com/ajax/libs/jquery/1.8.1/jquery.min.js</a></p>
<p>Path to dependency file: web-stories-wp/node_modules/redeyed/examples/browser/index.html</p>
<p>Path to vulnerable library: web-stories-wp/node_modules/redeyed/examples/browser/index.html</p>
<p>
Dependency Hierarchy:
- :x: **jquery-1.8.1.min.js** (Vulnerable Library)
</details>
<p>Found in HEAD commit: <a href="https://github.com/mwilliams7197/web-stories-wp/commit/a160d2c1c4cd141d457a26d3ae95abd45c42ce62">a160d2c1c4cd141d457a26d3ae95abd45c42ce62</a></p>
<p>Found in base branch: <b>main</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
In jQuery versions greater than or equal to 1.2 and before 3.5.0, passing HTML from untrusted sources - even after sanitizing it - to one of jQuery's DOM manipulation methods (i.e. .html(), .append(), and others) may execute untrusted code. This problem is patched in jQuery 3.5.0.
<p>Publish Date: 2020-04-29
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-11022>CVE-2020-11022</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>6.1</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: Required
- Scope: Changed
- Impact Metrics:
- Confidentiality Impact: Low
- Integrity Impact: Low
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://blog.jquery.com/2020/04/10/jquery-3-5-0-released/">https://blog.jquery.com/2020/04/10/jquery-3-5-0-released/</a></p>
<p>Release Date: 2020-04-29</p>
<p>Fix Resolution: jQuery - 3.5.0</p>
</p>
</details>
<p></p>
<!-- <REMEDIATE>{"isOpenPROnVulnerability":true,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"JavaScript","packageName":"jquery","packageVersion":"1.9.0","isTransitiveDependency":false,"dependencyTree":"jquery:1.9.0","isMinimumFixVersionAvailable":true,"minimumFixVersion":"jQuery - 3.5.0"},{"packageType":"JavaScript","packageName":"jquery","packageVersion":"1.11.3","isTransitiveDependency":false,"dependencyTree":"jquery:1.11.3","isMinimumFixVersionAvailable":true,"minimumFixVersion":"jQuery - 3.5.0"},{"packageType":"JavaScript","packageName":"jquery","packageVersion":"2.0.3","isTransitiveDependency":false,"dependencyTree":"jquery:2.0.3","isMinimumFixVersionAvailable":true,"minimumFixVersion":"jQuery - 3.5.0"},{"packageType":"JavaScript","packageName":"jquery","packageVersion":"1.9.1","isTransitiveDependency":false,"dependencyTree":"jquery:1.9.1","isMinimumFixVersionAvailable":true,"minimumFixVersion":"jQuery - 3.5.0"},{"packageType":"JavaScript","packageName":"jquery","packageVersion":"3.4.1","isTransitiveDependency":false,"dependencyTree":"jquery:3.4.1","isMinimumFixVersionAvailable":true,"minimumFixVersion":"jQuery - 3.5.0"},{"packageType":"JavaScript","packageName":"jquery","packageVersion":"1.8.1","isTransitiveDependency":false,"dependencyTree":"jquery:1.8.1","isMinimumFixVersionAvailable":true,"minimumFixVersion":"jQuery - 3.5.0"}],"vulnerabilityIdentifier":"CVE-2020-11022","vulnerabilityDetails":"In jQuery versions greater than or equal to 1.2 and before 3.5.0, passing HTML from untrusted sources - even after sanitizing it - to one of jQuery\u0027s DOM manipulation methods (i.e. .html(), .append(), and others) may execute untrusted code. This problem is patched in jQuery 3.5.0.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-11022","cvss3Severity":"medium","cvss3Score":"6.1","cvss3Metrics":{"A":"None","AC":"Low","PR":"None","S":"Changed","C":"Low","UI":"Required","AV":"Network","I":"Low"},"extraData":{}}</REMEDIATE> -->
|
True
|
CVE-2020-11022 (Medium) detected in multiple libraries - ## CVE-2020-11022 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>jquery-1.9.0.min.js</b>, <b>jquery-1.11.3.js</b>, <b>jquery-2.0.3.min.js</b>, <b>jquery-1.9.1.js</b>, <b>jquery-3.4.1.min.js</b>, <b>jquery-1.8.1.min.js</b></p></summary>
<p>
<details><summary><b>jquery-1.9.0.min.js</b></p></summary>
<p>JavaScript library for DOM operations</p>
<p>Library home page: <a href="https://cdnjs.cloudflare.com/ajax/libs/jquery/1.9.0/jquery.min.js">https://cdnjs.cloudflare.com/ajax/libs/jquery/1.9.0/jquery.min.js</a></p>
<p>Path to dependency file: web-stories-wp/node_modules/es6-shim/test-sham/index.html</p>
<p>Path to vulnerable library: web-stories-wp/node_modules/es6-shim/test-sham/index.html</p>
<p>
Dependency Hierarchy:
- :x: **jquery-1.9.0.min.js** (Vulnerable Library)
</details>
<details><summary><b>jquery-1.11.3.js</b></p></summary>
<p>JavaScript library for DOM operations</p>
<p>Library home page: <a href="https://cdnjs.cloudflare.com/ajax/libs/jquery/1.11.3/jquery.js">https://cdnjs.cloudflare.com/ajax/libs/jquery/1.11.3/jquery.js</a></p>
<p>Path to dependency file: web-stories-wp/node_modules/es6-shim/test/index.html</p>
<p>Path to vulnerable library: web-stories-wp/node_modules/es6-shim/test/index.html</p>
<p>
Dependency Hierarchy:
- :x: **jquery-1.11.3.js** (Vulnerable Library)
</details>
<details><summary><b>jquery-2.0.3.min.js</b></p></summary>
<p>JavaScript library for DOM operations</p>
<p>Library home page: <a href="https://cdnjs.cloudflare.com/ajax/libs/jquery/2.0.3/jquery.min.js">https://cdnjs.cloudflare.com/ajax/libs/jquery/2.0.3/jquery.min.js</a></p>
<p>Path to dependency file: web-stories-wp/node_modules/@egjs/list-differ/doc/eg.ListDiffer.html</p>
<p>Path to vulnerable library: web-stories-wp/node_modules/@egjs/list-differ/doc/scripts/jquery.min.js</p>
<p>
Dependency Hierarchy:
- :x: **jquery-2.0.3.min.js** (Vulnerable Library)
</details>
<details><summary><b>jquery-1.9.1.js</b></p></summary>
<p>JavaScript library for DOM operations</p>
<p>Library home page: <a href="https://cdnjs.cloudflare.com/ajax/libs/jquery/1.9.1/jquery.js">https://cdnjs.cloudflare.com/ajax/libs/jquery/1.9.1/jquery.js</a></p>
<p>Path to dependency file: web-stories-wp/node_modules/tinycolor2/test/index.html</p>
<p>Path to vulnerable library: web-stories-wp/node_modules/tinycolor2/test/../demo/jquery-1.9.1.js,web-stories-wp/node_modules/tinycolor2/demo/jquery-1.9.1.js</p>
<p>
Dependency Hierarchy:
- :x: **jquery-1.9.1.js** (Vulnerable Library)
</details>
<details><summary><b>jquery-3.4.1.min.js</b></p></summary>
<p>JavaScript library for DOM operations</p>
<p>Library home page: <a href="https://cdnjs.cloudflare.com/ajax/libs/jquery/3.4.1/jquery.min.js">https://cdnjs.cloudflare.com/ajax/libs/jquery/3.4.1/jquery.min.js</a></p>
<p>Path to dependency file: web-stories-wp/node_modules/colorthief/async.html</p>
<p>Path to vulnerable library: web-stories-wp/node_modules/colorthief/async.html</p>
<p>
Dependency Hierarchy:
- :x: **jquery-3.4.1.min.js** (Vulnerable Library)
</details>
<details><summary><b>jquery-1.8.1.min.js</b></p></summary>
<p>JavaScript library for DOM operations</p>
<p>Library home page: <a href="https://cdnjs.cloudflare.com/ajax/libs/jquery/1.8.1/jquery.min.js">https://cdnjs.cloudflare.com/ajax/libs/jquery/1.8.1/jquery.min.js</a></p>
<p>Path to dependency file: web-stories-wp/node_modules/redeyed/examples/browser/index.html</p>
<p>Path to vulnerable library: web-stories-wp/node_modules/redeyed/examples/browser/index.html</p>
<p>
Dependency Hierarchy:
- :x: **jquery-1.8.1.min.js** (Vulnerable Library)
</details>
<p>Found in HEAD commit: <a href="https://github.com/mwilliams7197/web-stories-wp/commit/a160d2c1c4cd141d457a26d3ae95abd45c42ce62">a160d2c1c4cd141d457a26d3ae95abd45c42ce62</a></p>
<p>Found in base branch: <b>main</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
In jQuery versions greater than or equal to 1.2 and before 3.5.0, passing HTML from untrusted sources - even after sanitizing it - to one of jQuery's DOM manipulation methods (i.e. .html(), .append(), and others) may execute untrusted code. This problem is patched in jQuery 3.5.0.
<p>Publish Date: 2020-04-29
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-11022>CVE-2020-11022</a></p>
</p>
</details>
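To make the class of bug concrete, here is a minimal illustration of the pattern the advisory describes (hypothetical page; payload shape based on public write-ups of the jQuery 3.5.0 fix, exact payloads vary):
```html
<!-- Assumes a page loading a jQuery version below 3.5.0, e.g. 3.4.1 -->
<div id="target"></div>
<script src="https://cdnjs.cloudflare.com/ajax/libs/jquery/3.4.1/jquery.min.js"></script>
<script>
  // Even "sanitized" HTML could be mutated by jQuery's old htmlPrefilter,
  // which expanded self-closing tags and could move markup out of a safe
  // context (here, out of the <style> element).
  var untrusted = "<style><style/><img src=x onerror=alert(document.domain)>";
  $("#target").html(untrusted); // may execute the onerror handler before 3.5.0
</script>
```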
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>6.1</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: Required
- Scope: Changed
- Impact Metrics:
- Confidentiality Impact: Low
- Integrity Impact: Low
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://blog.jquery.com/2020/04/10/jquery-3-5-0-released/">https://blog.jquery.com/2020/04/10/jquery-3-5-0-released/</a></p>
<p>Release Date: 2020-04-29</p>
<p>Fix Resolution: jQuery - 3.5.0</p>
</p>
</details>
<p></p>
<!-- <REMEDIATE>{"isOpenPROnVulnerability":true,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"JavaScript","packageName":"jquery","packageVersion":"1.9.0","isTransitiveDependency":false,"dependencyTree":"jquery:1.9.0","isMinimumFixVersionAvailable":true,"minimumFixVersion":"jQuery - 3.5.0"},{"packageType":"JavaScript","packageName":"jquery","packageVersion":"1.11.3","isTransitiveDependency":false,"dependencyTree":"jquery:1.11.3","isMinimumFixVersionAvailable":true,"minimumFixVersion":"jQuery - 3.5.0"},{"packageType":"JavaScript","packageName":"jquery","packageVersion":"2.0.3","isTransitiveDependency":false,"dependencyTree":"jquery:2.0.3","isMinimumFixVersionAvailable":true,"minimumFixVersion":"jQuery - 3.5.0"},{"packageType":"JavaScript","packageName":"jquery","packageVersion":"1.9.1","isTransitiveDependency":false,"dependencyTree":"jquery:1.9.1","isMinimumFixVersionAvailable":true,"minimumFixVersion":"jQuery - 3.5.0"},{"packageType":"JavaScript","packageName":"jquery","packageVersion":"3.4.1","isTransitiveDependency":false,"dependencyTree":"jquery:3.4.1","isMinimumFixVersionAvailable":true,"minimumFixVersion":"jQuery - 3.5.0"},{"packageType":"JavaScript","packageName":"jquery","packageVersion":"1.8.1","isTransitiveDependency":false,"dependencyTree":"jquery:1.8.1","isMinimumFixVersionAvailable":true,"minimumFixVersion":"jQuery - 3.5.0"}],"vulnerabilityIdentifier":"CVE-2020-11022","vulnerabilityDetails":"In jQuery versions greater than or equal to 1.2 and before 3.5.0, passing HTML from untrusted sources - even after sanitizing it - to one of jQuery\u0027s DOM manipulation methods (i.e. .html(), .append(), and others) may execute untrusted code. This problem is patched in jQuery 3.5.0.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-11022","cvss3Severity":"medium","cvss3Score":"6.1","cvss3Metrics":{"A":"None","AC":"Low","PR":"None","S":"Changed","C":"Low","UI":"Required","AV":"Network","I":"Low"},"extraData":{}}</REMEDIATE> -->
|
non_process
|
cve medium detected in multiple libraries cve medium severity vulnerability vulnerable libraries jquery min js jquery js jquery min js jquery js jquery min js jquery min js jquery min js javascript library for dom operations library home page a href path to dependency file web stories wp node modules shim test sham index html path to vulnerable library web stories wp node modules shim test sham index html dependency hierarchy x jquery min js vulnerable library jquery js javascript library for dom operations library home page a href path to dependency file web stories wp node modules shim test index html path to vulnerable library web stories wp node modules shim test index html dependency hierarchy x jquery js vulnerable library jquery min js javascript library for dom operations library home page a href path to dependency file web stories wp node modules egjs list differ doc eg listdiffer html path to vulnerable library web stories wp node modules egjs list differ doc scripts jquery min js dependency hierarchy x jquery min js vulnerable library jquery js javascript library for dom operations library home page a href path to dependency file web stories wp node modules test index html path to vulnerable library web stories wp node modules test demo jquery js web stories wp node modules demo jquery js dependency hierarchy x jquery js vulnerable library jquery min js javascript library for dom operations library home page a href path to dependency file web stories wp node modules colorthief async html path to vulnerable library web stories wp node modules colorthief async html dependency hierarchy x jquery min js vulnerable library jquery min js javascript library for dom operations library home page a href path to dependency file web stories wp node modules redeyed examples browser index html path to vulnerable library web stories wp node modules redeyed examples browser index html dependency hierarchy x jquery min js vulnerable library found in head commit a href found in base branch main vulnerability details in jquery versions greater than or equal to and before passing html from untrusted sources even after sanitizing it to one of jquery s dom manipulation methods i e html append and others may execute untrusted code this problem is patched in jquery publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction required scope changed impact metrics confidentiality impact low integrity impact low availability impact none for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution jquery isopenpronvulnerability true ispackagebased true isdefaultbranch true packages vulnerabilityidentifier cve vulnerabilitydetails in jquery versions greater than or equal to and before passing html from untrusted sources even after sanitizing it to one of jquery dom manipulation methods i e html append and others may execute untrusted code this problem is patched in jquery vulnerabilityurl
| 0
|
337,788
| 10,220,176,361
|
IssuesEvent
|
2019-08-15 20:36:01
|
webcompat/web-bugs
|
https://api.github.com/repos/webcompat/web-bugs
|
closed
|
www.adslzone.net - see bug description
|
browser-fenix engine-gecko priority-normal
|
<!-- @browser: Firefox Mobile 69.0 -->
<!-- @ua_header: Mozilla/5.0 (Android 9; Mobile; rv:69.0) Gecko/69.0 Firefox/69.0 -->
<!-- @reported_with: -->
<!-- @extra_labels: browser-fenix -->
**URL**: https://www.adslzone.net/
**Browser / Version**: Firefox Mobile 69.0
**Operating System**: Android
**Tested Another Browser**: Yes
**Problem type**: Something else
**Description**: Different design on Firefox
**Steps to Reproduce**:
The site seems completely different on Firefox compared with a Chrome-based browser
[Screenshot](https://webcompat.com/uploads/2019/8/def27873-12d6-483f-a6f6-129e069aa9d7.jpg)
<details>
<summary>Browser Configuration</summary>
<ul>
<li>None</li>
</ul>
</details>
_From [webcompat.com](https://webcompat.com/) with ❤️_
|
1.0
|
www.adslzone.net - see bug description - <!-- @browser: Firefox Mobile 69.0 -->
<!-- @ua_header: Mozilla/5.0 (Android 9; Mobile; rv:69.0) Gecko/69.0 Firefox/69.0 -->
<!-- @reported_with: -->
<!-- @extra_labels: browser-fenix -->
**URL**: https://www.adslzone.net/
**Browser / Version**: Firefox Mobile 69.0
**Operating System**: Android
**Tested Another Browser**: Yes
**Problem type**: Something else
**Description**: Different design on Firefox
**Steps to Reproduce**:
The site seems completely different on Firefox compared with a Chrome-based browser
[Screenshot](https://webcompat.com/uploads/2019/8/def27873-12d6-483f-a6f6-129e069aa9d7.jpg)
<details>
<summary>Browser Configuration</summary>
<ul>
<li>None</li>
</ul>
</details>
_From [webcompat.com](https://webcompat.com/) with ❤️_
|
non_process
|
see bug description url browser version firefox mobile operating system android tested another browser yes problem type something else description diferent design on firefox steps to reproduce the site seems completely different on firefox compared with a chrome based browser browser configuration none from with ❤️
| 0
|
5,836
| 8,666,533,357
|
IssuesEvent
|
2018-11-29 04:39:23
|
nodejs/node
|
https://api.github.com/repos/nodejs/node
|
closed
|
process: setupProcessWarnings() is not called when process.noDeprecation is true
|
process question
|
* **Version**: master
* **Platform**: all
* **Subsystem**: process
I came across this issue while trying to improve test coverage of the `process` subsystem.
The line which is not covered:
https://github.com/nodejs/node/blob/5df47d0b062d4d01980c56b14906b7eb20312546/lib/internal/process/warning.js#L83
I noticed that when `process.noDeprecation` is set to true and `warning.name` is `DeprecationWarning`, `process.emit('warning', warning)` is not called:
https://github.com/nodejs/node/blob/5df47d0b062d4d01980c56b14906b7eb20312546/lib/internal/process/warning.js#L139-L144
Is this a bug, or should "DeprecationWarning" be explicitly emitted using `process.emit('warning', warning)` and not `process.emitWarning`?
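For context, a simplified sketch of the guard in question (paraphrased from `lib/internal/process/warning.js`, not the exact source):
```js
// Paraphrased guard: DeprecationWarnings are silently dropped when
// --no-deprecation / process.noDeprecation is in effect, so the
// 'warning' event never fires for them.
function doEmitWarning(warning) {
  if (process.noDeprecation && warning.name === 'DeprecationWarning') {
    return;
  }
  process.emit('warning', warning);
}
```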
|
1.0
|
process: setupProcessWarnings() is not called when process.noDeprecation is true - * **Version**: master
* **Platform**: all
* **Subsystem**: process
I came across this issue while trying to improve test coverage of the `process` subsystem.
The line which is not covered:
https://github.com/nodejs/node/blob/5df47d0b062d4d01980c56b14906b7eb20312546/lib/internal/process/warning.js#L83
I noticed that when `process.noDeprecation` is set to true and `warning.name` is `DeprecationWarning`, `process.emit('warning', warning)` is not called:
https://github.com/nodejs/node/blob/5df47d0b062d4d01980c56b14906b7eb20312546/lib/internal/process/warning.js#L139-L144
Is this a bug, or should "DeprecationWarning" be explicitly emitted using `process.emit('warning', warning)` and not `process.emitWarning`?
|
process
|
process setupprocesswarnings is not called when process nodeprecation is true version master platform all subsystem process i came across this issue while trying to improve test coverage of process subsystem the line which is not covered i noticed that process nodeprecation is set to true and warning name is deprecationwarning process emit warning warning is not called is this a bug or should deprecationwarning be explicitly thrown using process emit warning warning and not process emitwarning
| 1
|
184,715
| 14,988,044,033
|
IssuesEvent
|
2021-01-29 00:17:02
|
solid/node-solid-server
|
https://api.github.com/repos/solid/node-solid-server
|
opened
|
Invalid contact email address in README
|
documentation
|
The README.MD still includes the following bullet point at the bottom:
- Reach out to Jackson at jacksonm@inrupt.com to become more involved in maintaining Node Solid Server
As this email address is no longer valid, I recommend updating it to an appropriate solidproject.org email or URL.
cc @jaxoncreed
|
1.0
|
Invalid contact email address in README - The README.MD still includes the following bullet point at the bottom:
- Reach out to Jackson at jacksonm@inrupt.com to become more involved in maintaining Node Solid Server
As this email address is no longer valid, I recommend updating it to an appropriate solidproject.org email or URL.
cc @jaxoncreed
|
non_process
|
invalid contact email address in readme the readme md still includes the following bullet point at the bottom reach out to jackson at jacksonm inrupt com to become more involved in maintaining node solid server as this email address is not longer valid recommend updating to an appropriate solidproject org email or url cc jaxoncreed
| 0
|
21,582
| 29,953,572,918
|
IssuesEvent
|
2023-06-23 05:02:41
|
python/cpython
|
https://api.github.com/repos/python/cpython
|
closed
|
Regression in multiprocessing on Windows under coverage and ctypes
|
type-bug OS-windows 3.12 topic-multiprocessing
|
In https://github.com/jaraco/keyring/issues/634#issuecomment-1597346621, I finally managed to replicate an issue encountered in keyring. Starting sometime between 3.12a7 and 3.12b1, the keyring test suite started failing on Windows tests running under multiprocessing when pytest-cov is involved.
I'm not yet convinced that the issue doesn't lie with pytest or pytest-cov or ctypes, but given that the issue started with a beta release of Python, it's the most likely culprit.
I'll try to continue to distill the issue, but I wanted to register this report early, as it may be a release blocker.
|
1.0
|
Regression in multiprocessing on Windows under coverage and ctypes - In https://github.com/jaraco/keyring/issues/634#issuecomment-1597346621, I finally managed to replicate an issue encountered in keyring. Starting sometime between 3.12a7 and 3.12b1, the keyring test suite started failing on Windows tests running under multiprocessing when pytest-cov is involved.
I'm not yet convinced that the issue doesn't lie with pytest or pytest-cov or ctypes, but given that the issue started with a beta release of Python, it's the most likely culprit.
I'll try to continue to distill the issue, but I wanted to register this report early, as it may be a release blocker.
|
process
|
regression in multiprocessing on windows under coverage and ctypes in i finally managed to replicate an issue encountered in keyring starting sometime between and the keyring test suite started failing on windows tests running under multiprocessing when pytest cov is involved i m not yet convinced that the issue doesn t lie with pytest or pytest cov or ctypes but given that the issue started with a beta release of python it s the most likely culprit i ll try to continue to distill the issue but i wanted to register this report early as it may be a release blocker
| 1
|
2,929
| 5,289,570,092
|
IssuesEvent
|
2017-02-08 17:40:50
|
Harmonus/HMS_SlicerModule
|
https://api.github.com/repos/Harmonus/HMS_SlicerModule
|
opened
|
Network security
|
requirement
|
All communication of patient data to and from the software shall be compliant with hospital network security protocols.
|
1.0
|
Network security - All communication of patient data to and from the software shall be compliant with hospital network security protocols.
|
non_process
|
network security all communication of patient data to and from the software shall be compliant with hospital network security protocols
| 0
|
8,596
| 11,758,897,838
|
IssuesEvent
|
2020-03-13 16:14:12
|
nltk/nltk
|
https://api.github.com/repos/nltk/nltk
|
closed
|
can't use NLTK functions in parallel programs
|
inactive multithread / multiprocessing
|
When I want to use the NLTK stemmer and lemmatizer in a multithreaded program, only one thread works fine and the others throw this exception:
Exception in thread 2:
Traceback (most recent call last):
File "C:\Users\Danial\AppData\Local\Programs\Python\Python35\lib\threading.py", line 914, in _bootstrap_inner
self.run()
File "C:\Users\Danial\AppData\Local\Programs\Python\Python35\lib\threading.py", line 862, in run
self._target(*self._args, **self._kwargs)
File "D:/iaun/0_Tez/141095-TFATOwithdiscriminative/01/11_MultiThreadedDoctTermInfo.py", line 21, in myfunc
lemma = lemmatizer.lemmatize(stem)
File "C:\Users\Danial\AppData\Local\Programs\Python\Python35\lib\site-packages\nltk\stem\wordnet.py", line 40, in lemmatize
lemmas = wordnet._morphy(word, pos)
File "C:\Users\Danial\AppData\Local\Programs\Python\Python35\lib\site-packages\nltk\corpus\util.py", line 99, in __getattr__
self.__load()
File "C:\Users\Danial\AppData\Local\Programs\Python\Python35\lib\site-packages\nltk\corpus\util.py", line 73, in __load
args, kwargs = self.__args, self.__kwargs
AttributeError: 'WordNetCorpusReader' object has no attribute '_LazyCorpusLoader__args'
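A commonly suggested workaround (a sketch, assuming the WordNet corpus is already installed) is to force the lazy corpus loader to resolve on the main thread before spawning workers:
```python
import threading

from nltk.corpus import wordnet
from nltk.stem import WordNetLemmatizer

# Resolve the LazyCorpusLoader once, up front, so worker threads never
# race on the swap from the loader proxy to the real WordNetCorpusReader.
wordnet.ensure_loaded()

lemmatizer = WordNetLemmatizer()

def worker(word):
    print(lemmatizer.lemmatize(word))

threads = [threading.Thread(target=worker, args=(w,)) for w in ("geese", "corpora")]
for t in threads:
    t.start()
for t in threads:
    t.join()
```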
|
1.0
|
can't use NLTK functions in parallel programs - When I want to use the NLTK stemmer and lemmatizer in a multithreaded program, only one thread works fine and the others throw this exception:
Exception in thread 2:
Traceback (most recent call last):
File "C:\Users\Danial\AppData\Local\Programs\Python\Python35\lib\threading.py", line 914, in _bootstrap_inner
self.run()
File "C:\Users\Danial\AppData\Local\Programs\Python\Python35\lib\threading.py", line 862, in run
self._target(*self._args, **self._kwargs)
File "D:/iaun/0_Tez/141095-TFATOwithdiscriminative/01/11_MultiThreadedDoctTermInfo.py", line 21, in myfunc
lemma = lemmatizer.lemmatize(stem)
File "C:\Users\Danial\AppData\Local\Programs\Python\Python35\lib\site-packages\nltk\stem\wordnet.py", line 40, in lemmatize
lemmas = wordnet._morphy(word, pos)
File "C:\Users\Danial\AppData\Local\Programs\Python\Python35\lib\site-packages\nltk\corpus\util.py", line 99, in __getattr__
self.__load()
File "C:\Users\Danial\AppData\Local\Programs\Python\Python35\lib\site-packages\nltk\corpus\util.py", line 73, in __load
args, kwargs = self.__args, self.__kwargs
AttributeError: 'WordNetCorpusReader' object has no attribute '_LazyCorpusLoader__args'
|
process
|
cant use nltk functions in parallel programs when i want to use nltk stemmer and lemmatizer in a multithreaded program only one thread works fine and other ones throw this exception exception in thread traceback most recent call last file c users danial appdata local programs python lib threading py line in bootstrap inner self run file c users danial appdata local programs python lib threading py line in run self target self args self kwargs file d iaun tez tfatowithdiscriminative multithreadeddoctterminfo py line in myfunc lemma lemmatizer lemmatize stem file c users danial appdata local programs python lib site packages nltk stem wordnet py line in lemmatize lemmas wordnet morphy word pos file c users danial appdata local programs python lib site packages nltk corpus util py line in getattr self load file c users danial appdata local programs python lib site packages nltk corpus util py line in load args kwargs self args self kwargs attributeerror wordnetcorpusreader object has no attribute lazycorpusloader args
| 1
|
74,388
| 14,241,957,288
|
IssuesEvent
|
2020-11-19 00:36:11
|
phetsims/axon
|
https://api.github.com/repos/phetsims/axon
|
closed
|
PropertyStateHandler.propertiesInOrderDependencies should be a Set
|
dev:code-review dev:phet-io
|
From https://github.com/phetsims/axon/issues/316, the phet-io team sees that `Object.<phetioID, {true}>` can be rewritten as a Set.
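A minimal sketch of the proposed rewrite (hypothetical variable and phetioID names, following PhET's whitespace style):
```js
const phetioID = 'sim.general.model.someProperty'; // hypothetical phetioID

// Before: a plain object abused as a set of phetioIDs
{
  const orderDependencies = {};
  orderDependencies[ phetioID ] = true;
  console.log( !!orderDependencies[ phetioID ] ); // true
}

// After: a Set states the intent directly, with no dummy `true` values
{
  const orderDependencies = new Set();
  orderDependencies.add( phetioID );
  console.log( orderDependencies.has( phetioID ) ); // true
}
```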
|
1.0
|
PropertyStateHandler.propertiesInOrderDependencies should be a Set - From https://github.com/phetsims/axon/issues/316, the phet-io team sees that `Object.<phetioID, {true}>` can be rewritten as a Set.
|
non_process
|
propertystatehandler propertiesinorderdependencies should be a set from the phet io team sees that object can be rewritten as a set
| 0
|
5,125
| 7,894,621,506
|
IssuesEvent
|
2018-06-28 22:15:08
|
hashicorp/packer
|
https://api.github.com/repos/hashicorp/packer
|
closed
|
Atlas post-processor problem when using the whole set of Amazon regions
|
bug post-processor/atlas
|
Hi,
I am using the packer:1.0.2 container and I have a problem with the Atlas post-processor.
If I build images in at most 9 regions, everything works fine. Example set:
```
"ap-northeast-2",
"eu-central-1",
"us-east-1",
"sa-east-1",
"us-west-1",
"us-west-2",
"us-east-2",
"ca-central-1",
"eu-west-2",
"ap-south-1"
```
But if I use the whole set of Amazon regions, or at least 10 regions, then I get an error. Example set:
```
"ap-southeast-1",
"ap-southeast-2",
"eu-central-1",
"ap-northeast-1",
"ap-northeast-2",
"us-east-1",
"sa-east-1",
"us-west-1",
"us-west-2",
"us-east-2",
"ca-central-1",
"eu-west-2",
"ap-south-1"
```
I don't really think this is related to the number of regions, but it is very strange and maybe you have some thoughts on this.
- Packer version: hashicorp/packer:1.0.2 docker container
- Host platform: Amazon
- Post processor:
```
"post-processors": [
{
"type": "atlas",
"artifact": "mycompany/{{ user `atlas_artifact` }}",
"token": "{{user `atlas_token`}}",
"artifact_type": "{{ user `atlas_artifact_type` }}.image",
"metadata": {
"created_at": "{{timestamp}}",
"created": "{{ isotime \"2006-01-02 15:04:05 -0700\" }}",
"git_rev": "{{user `git_rev`}}",
"git_branch": "{{user `git_branch`}}",
"git_tag": "{{user `git_tag`}}",
"os_user": "{{user `os_user` }}"
}
}
]
```
- Example metadata:
```
ami_id = ami-173648d7,ami-174648d7,ami-173548d7,ami-173748d7,ami-173848d7,ami-173948d7,ami-173448d7,ami-113648d7,ami-123648d7,ami-133648d7
region = ap-northeast-2,ap-south-1,ca-central-1,eu-west-1,eu-west-2,sa-east-1,us-east-1,us-east-2,us-west-1,us-west-2
created = 2017-07-18 20:13:44 +0000
git_rev =
git_tag =
os_user = osuser
created_at = 1500408824
git_branch =
hdp_repoid =
image_name = imagename-secondname--1707182213
prometheus = true
hdp_baseurl =
hdp_version =
orchestrator = salt
ambari_gpgkey =
ambari_baseurl =
ambari_version =
hdputil_repoid =
hdputil_baseurl =
hdputil_version =
region.eu-west-1 = ami-173648d7
region.eu-west-2 = ami-174648d7
region.sa-east-1 = ami-173548d7
region.us-east-1 = ami-173748d7
region.us-east-2 = ami-173848d7
region.us-west-1 = ami-173948d7
region.us-west-2 = ami-173448d7
region.ap-south-1 = ami-113648d7
region.ca-central-1 = ami-123648d7
region.ap-northeast-2 = ami-133648d7
```
- Debug log output:
```
==> aws-centos7: Running post-processor: atlas
2017/07/19 08:10:20 packer: 2017/07/19 08:10:20 [INFO] request: GET /api/v1/artifacts/mycompany/mytool
2017/07/19 08:10:20 packer: 2017/07/19 08:10:20 [DEBUG] request: appending token (xyz*** (masked))
2017/07/19 08:10:20 packer: 2017/07/19 08:10:20 [DEBUG] raw request: &http.Request{Method:"GET", URL:(*url.URL)(0xc420052180), Proto:"HTTP/1.1", ProtoMajor:1, ProtoMinor:1, Header:http.Header{"User-Agent":[]string{"AtlasGo/1.0 (+https://github.com/hashicorp/atlas-go; go1.8.3)"}, "X-Atlas-Token":[]string{"******"}}, Body:io.ReadCloser(nil), GetBody:(func() (io.ReadCloser, error))(nil), ContentLength:0, TransferEncoding:[]string(nil), Close:false, Host:"atlas.hashicorp.com", Form:url.Values(nil), PostForm:url.Values(nil), MultipartForm:(*multipart.Form)(nil), Trailer:http.Header(nil), RemoteAddr:"", RequestURI:"", TLS:(*tls.ConnectionState)(nil), Cancel:(<-chan struct {})(nil), Response:(*http.Response)(nil), ctx:context.Context(nil)}
2017/07/19 08:10:21 packer: 2017/07/19 08:10:21 [INFO] response: 200 (200 OK)
2017/07/19 08:10:21 packer: 2017/07/19 08:10:21 [DEBUG] response: {"username":"mycompany","name":"mytool","tag":"mycompany/mytool"}
aws-centos7 (atlas): Uploading artifact (0 bytes)
2017/07/19 08:10:21 ui: aws-centos7 (atlas): Uploading artifact (0 bytes)
2017/07/19 08:10:21 packer: 2017/07/19 08:10:21 [INFO] uploading artifact: mycompany/mytool (amazon.image)
2017/07/19 08:10:21 packer: 2017/07/19 08:10:21 [INFO] request: POST /api/v1/artifacts/mycompany/mytool/amazon.image
2017/07/19 08:10:21 packer: 2017/07/19 08:10:21 [DEBUG] request: appending token (xyz*** (masked))
2017/07/19 08:10:21 packer: 2017/07/19 08:10:21 [DEBUG] raw request: &http.Request{Method:"POST", URL:(*url.URL)(0xc420053100), Proto:"HTTP/1.1", ProtoMajor:1, ProtoMinor:1, Header:http.Header{"Content-Type":[]string{"application/json"}, "User-Agent":[]string{"AtlasGo/1.0 (+https://github.com/hashicorp/atlas-go; go1.8.3)"}, "X-Atlas-Token":[]string{"*****"}}, Body:ioutil.nopCloser{Reader:(*bytes.Reader)(0xc420365ce0)}, GetBody:(func() (io.ReadCloser, error))(0x6be300), ContentLength:1111, TransferEncoding:[]string(nil), Close:false, Host:"atlas.hashicorp.com", Form:url.Values(nil), PostForm:url.Values(nil), MultipartForm:(*multipart.Form)(nil), Trailer:http.Header(nil), RemoteAddr:"", RequestURI:"", TLS:(*tls.ConnectionState)(nil), Cancel:(<-chan struct {})(nil), Response:(*http.Response)(nil), ctx:context.Context(nil)}
2017/07/19 08:10:22 packer: 2017/07/19 08:10:22 [INFO] response: 500 (500 Internal Server Error)
2017/07/19 08:10:22 packer: 2017/07/19 08:10:22 [DEBUG] response: {"status":"500","error":"Internal Server Error"}
2017/07/19 08:10:22 [INFO] (telemetry) ending atlas
2017/07/19 08:10:22 [INFO] (telemetry) found error: Error uploading (0 bytes): client: 500 Internal Server Error
2017/07/19 08:10:22 Deleting original artifact for build 'aws-centos7'
```
|
1.0
|
Atlas post-processor problem when using the whole set of Amazon regions - Hi,
I am using the packer:1.0.2 container and I have a problem with the Atlas post-processor.
If I build images in at most 9 regions, everything works fine. Example set:
```
"ap-northeast-2",
"eu-central-1",
"us-east-1",
"sa-east-1",
"us-west-1",
"us-west-2",
"us-east-2",
"ca-central-1",
"eu-west-2",
"ap-south-1"
```
But if I use the whole set of Amazon regions, or at least 10 regions, then I get an error. Example set:
```
"ap-southeast-1",
"ap-southeast-2",
"eu-central-1",
"ap-northeast-1",
"ap-northeast-2",
"us-east-1",
"sa-east-1",
"us-west-1",
"us-west-2",
"us-east-2",
"ca-central-1",
"eu-west-2",
"ap-south-1"
```
I don't really think this is related to the number of regions, but it is very strange and maybe you have some thoughts on this.
- Packer version: hashicorp/packer:1.0.2 docker container
- Host platform: Amazon
- Post processor:
```
"post-processors": [
{
"type": "atlas",
"artifact": "mycompany/{{ user `atlas_artifact` }}",
"token": "{{user `atlas_token`}}",
"artifact_type": "{{ user `atlas_artifact_type` }}.image",
"metadata": {
"created_at": "{{timestamp}}",
"created": "{{ isotime \"2006-01-02 15:04:05 -0700\" }}",
"git_rev": "{{user `git_rev`}}",
"git_branch": "{{user `git_branch`}}",
"git_tag": "{{user `git_tag`}}",
"os_user": "{{user `os_user` }}"
}
}
]
```
- Example metadata:
```
ami_id = ami-173648d7,ami-174648d7,ami-173548d7,ami-173748d7,ami-173848d7,ami-173948d7,ami-173448d7,ami-113648d7,ami-123648d7,ami-133648d7
region = ap-northeast-2,ap-south-1,ca-central-1,eu-west-1,eu-west-2,sa-east-1,us-east-1,us-east-2,us-west-1,us-west-2
created = 2017-07-18 20:13:44 +0000
git_rev =
git_tag =
os_user = osuser
created_at = 1500408824
git_branch =
hdp_repoid =
image_name = imagename-secondname--1707182213
prometheus = true
hdp_baseurl =
hdp_version =
orchestrator = salt
ambari_gpgkey =
ambari_baseurl =
ambari_version =
hdputil_repoid =
hdputil_baseurl =
hdputil_version =
region.eu-west-1 = ami-173648d7
region.eu-west-2 = ami-174648d7
region.sa-east-1 = ami-173548d7
region.us-east-1 = ami-173748d7
region.us-east-2 = ami-173848d7
region.us-west-1 = ami-173948d7
region.us-west-2 = ami-173448d7
region.ap-south-1 = ami-113648d7
region.ca-central-1 = ami-123648d7
region.ap-northeast-2 = ami-133648d7
```
- Debug log output:
```
==> aws-centos7: Running post-processor: atlas
2017/07/19 08:10:20 packer: 2017/07/19 08:10:20 [INFO] request: GET /api/v1/artifacts/mycompany/mytool
2017/07/19 08:10:20 packer: 2017/07/19 08:10:20 [DEBUG] request: appending token (xyz*** (masked))
2017/07/19 08:10:20 packer: 2017/07/19 08:10:20 [DEBUG] raw request: &http.Request{Method:"GET", URL:(*url.URL)(0xc420052180), Proto:"HTTP/1.1", ProtoMajor:1, ProtoMinor:1, Header:http.Header{"User-Agent":[]string{"AtlasGo/1.0 (+https://github.com/hashicorp/atlas-go; go1.8.3)"}, "X-Atlas-Token":[]string{"******"}}, Body:io.ReadCloser(nil), GetBody:(func() (io.ReadCloser, error))(nil), ContentLength:0, TransferEncoding:[]string(nil), Close:false, Host:"atlas.hashicorp.com", Form:url.Values(nil), PostForm:url.Values(nil), MultipartForm:(*multipart.Form)(nil), Trailer:http.Header(nil), RemoteAddr:"", RequestURI:"", TLS:(*tls.ConnectionState)(nil), Cancel:(<-chan struct {})(nil), Response:(*http.Response)(nil), ctx:context.Context(nil)}
2017/07/19 08:10:21 packer: 2017/07/19 08:10:21 [INFO] response: 200 (200 OK)
2017/07/19 08:10:21 packer: 2017/07/19 08:10:21 [DEBUG] response: {"username":"mycompany","name":"mytool","tag":"mycompany/mytool"}
aws-centos7 (atlas): Uploading artifact (0 bytes)
2017/07/19 08:10:21 ui: aws-centos7 (atlas): Uploading artifact (0 bytes)
2017/07/19 08:10:21 packer: 2017/07/19 08:10:21 [INFO] uploading artifact: mycompany/mytool (amazon.image)
2017/07/19 08:10:21 packer: 2017/07/19 08:10:21 [INFO] request: POST /api/v1/artifacts/mycompany/mytool/amazon.image
2017/07/19 08:10:21 packer: 2017/07/19 08:10:21 [DEBUG] request: appending token (xyz*** (masked))
2017/07/19 08:10:21 packer: 2017/07/19 08:10:21 [DEBUG] raw request: &http.Request{Method:"POST", URL:(*url.URL)(0xc420053100), Proto:"HTTP/1.1", ProtoMajor:1, ProtoMinor:1, Header:http.Header{"Content-Type":[]string{"application/json"}, "User-Agent":[]string{"AtlasGo/1.0 (+https://github.com/hashicorp/atlas-go; go1.8.3)"}, "X-Atlas-Token":[]string{"*****"}}, Body:ioutil.nopCloser{Reader:(*bytes.Reader)(0xc420365ce0)}, GetBody:(func() (io.ReadCloser, error))(0x6be300), ContentLength:1111, TransferEncoding:[]string(nil), Close:false, Host:"atlas.hashicorp.com", Form:url.Values(nil), PostForm:url.Values(nil), MultipartForm:(*multipart.Form)(nil), Trailer:http.Header(nil), RemoteAddr:"", RequestURI:"", TLS:(*tls.ConnectionState)(nil), Cancel:(<-chan struct {})(nil), Response:(*http.Response)(nil), ctx:context.Context(nil)}
2017/07/19 08:10:22 packer: 2017/07/19 08:10:22 [INFO] response: 500 (500 Internal Server Error)
2017/07/19 08:10:22 packer: 2017/07/19 08:10:22 [DEBUG] response: {"status":"500","error":"Internal Server Error"}
2017/07/19 08:10:22 [INFO] (telemetry) ending atlas
2017/07/19 08:10:22 [INFO] (telemetry) found error: Error uploading (0 bytes): client: 500 Internal Server Error
2017/07/19 08:10:22 Deleting original artifact for build 'aws-centos7'
```
|
process
|
atlas postprocessor problem when using whole set of amazon region hi i using packer container and i have some problem with the atlas processor if i build images in maximum region then it totally works fine example set ap northeast eu central us east sa east us west us west us east ca central eu west ap south but if i using the whole set of amazon regions or at least region then i get an error example set ap southeast ap southeast eu central ap northeast ap northeast us east sa east us west us west us east ca central eu west ap south i realy dont think that this is related to the number of the region but this is very strange and maybe you have some thoughts on this packer version hashicorp packer docker container host platform amazon post processor post processors type atlas artifact mycompany user atlas artifact token user atlas token artifact type user atlas artifact type image metadata created at timestamp created isotime git rev user git rev git branch user git branch git tag user git tag os user user os user example metadata ami id ami ami ami ami ami ami ami ami ami ami region ap northeast ap south ca central eu west eu west sa east us east us east us west us west created git rev git tag os user osuser created at git branch hdp repoid image name imagename secondname prometheus true hdp baseurl hdp version orchestrator salt ambari gpgkey ambari baseurl ambari version hdputil repoid hdputil baseurl hdputil version region eu west ami region eu west ami region sa east ami region us east ami region us east ami region us west ami region us west ami region ap south ami region ca central ami region ap northeast ami debug log output aws running post processor atlas packer request get api artifacts mycompany mytool packer request appending token xyz masked packer raw request http request method get url url url proto http protomajor protominor header http header user agent string atlasgo x atlas token string body io readcloser nil getbody func io readcloser error nil contentlength transferencoding string nil close false host atlas hashicorp com form url values nil postform url values nil multipartform multipart form nil trailer http header nil remoteaddr requesturi tls tls connectionstate nil cancel chan struct nil response http response nil ctx context context nil packer response ok packer response username mycompany name mytool tag mycompany mytool aws atlas uploading artifact bytes ui aws atlas uploading artifact bytes packer uploading artifact mycompany mytool amazon image packer request post api artifacts mycompany mytool amazon image packer request appending token xyz masked packer raw request http request method post url url url proto http protomajor protominor header http header content type string application json user agent string atlasgo x atlas token string body ioutil nopcloser reader bytes reader getbody func io readcloser error contentlength transferencoding string nil close false host atlas hashicorp com form url values nil postform url values nil multipartform multipart form nil trailer http header nil remoteaddr requesturi tls tls connectionstate nil cancel chan struct nil response http response nil ctx context context nil packer response internal server error packer response status error internal server error telemetry ending atlas telemetry found error error uploading bytes client internal server error deleting original artifact for build aws
| 1
|
3,406
| 6,520,364,366
|
IssuesEvent
|
2017-08-28 16:12:05
|
w3c/w3process
|
https://api.github.com/repos/w3c/w3process
|
closed
|
Do we need a superseded status for older specs?
|
Process2018Candidate
|
We have some older recommendations that are still published as they are important references (e.g. HTML 3, 4). The technology in them has been wholly subsumed by later versions, and we no longer want to maintain them actively. Do we need a process to mark them as superseded?
|
1.0
|
Do we need a superseded status for older specs? - We have some older recommendations that are still published as they are important references (e.g. HTML 3, 4). The technology in them has been wholly subsumed by later versions, and we no longer want to maintain them actively. Do we need a process to mark them as superseded?
|
process
|
do we need a superseded status for older specs we have some older recommendations that are still published as they are important references e g html the technology in them has been wholly subsumed by later versions and we no longer want to maintain them actively do we need a process to mark them as superseded
| 1
|
86,962
| 10,854,981,745
|
IssuesEvent
|
2019-11-13 17:25:15
|
kubernetes/kubeadm
|
https://api.github.com/repos/kubernetes/kubeadm
|
closed
|
kubeadm should leverage kubelet automatic client cert rotation on nodes created with `kubeadm init`
|
area/security kind/design kind/feature lifecycle/active priority/important-soon
|
<!-- Thanks for filing an issue! Before hitting the button, please answer these questions.-->
## Is this a BUG REPORT or FEATURE REQUEST?
FEATURE REQUEST
<!--
If this is a BUG REPORT, please:
- Fill in as much of the template below as you can. If you leave out information, we can't help you as well.
If this is a FEATURE REQUEST, please:
- Describe *in detail* the feature/behavior/change you'd like to see.
In both cases, be ready for followup questions, and please respond in a timely
manner. If we can't reproduce a bug or think a feature already exists, we
might close your issue. If we're wrong, PLEASE feel free to reopen it and
explain why.
-->
## Versions
**kubeadm version** `kubeadm version: &version.Info{Major:"1", Minor:"15", GitVersion:"v1.15.3", GitCommit:"2d3c76f9091b6bec110a5e63777c332469e0cba2", GitTreeState:"clean", BuildDate:"2019-08-26T15:59:52Z", GoVersion:"go1.12.5", Compiler:"gc", Platform:"darwin/amd64"}`
## What happened?
On master nodes, `/etc/kubernetes/kubelet.conf` gets created with a "hardcoded" `client-certificate`/`client-key` instead of pointing to `/var/lib/kubelet/pki/kubelet-client-current.pem` as is done on worker (minion) nodes.
## What you expected to happen?
I expected `/etc/kubernetes/kubelet.conf` to point to `/var/lib/kubelet/pki/kubelet-client-current.pem` to leverage automatic kubelet client certificate rotation that is configured by `kubeadm`
## How to reproduce it (as minimally and precisely as possible)?
`kubeadm init && cat /etc/kubernetes/kubelet.conf`
## Anything else we need to know?
I already know this is a chicken-and-egg problem, but I think it would be really nice if the first master, after initialising the control plane, could make use of `/var/lib/kubelet/pki/kubelet-client-current.pem` to further streamline the certificate rotation process and avoid having to use `kubeadm init kubeconfig kubelet` just on the master nodes to renew the kubelet's client certificate.
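For reference, the rotation-friendly shape that `kubeadm join` already produces on worker nodes (and that this request asks for on the masters) looks roughly like this in `/etc/kubernetes/kubelet.conf`; a sketch showing only the `users` section:
```yaml
users:
- name: default-auth
  user:
    client-certificate: /var/lib/kubelet/pki/kubelet-client-current.pem
    client-key: /var/lib/kubelet/pki/kubelet-client-current.pem
```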
|
1.0
|
kubeadm should leverage kubelet automatic client cert rotation on nodes created with `kubeadm init` - <!-- Thanks for filing an issue! Before hitting the button, please answer these questions.-->
## Is this a BUG REPORT or FEATURE REQUEST?
FEATURE REQUEST
<!--
If this is a BUG REPORT, please:
- Fill in as much of the template below as you can. If you leave out information, we can't help you as well.
If this is a FEATURE REQUEST, please:
- Describe *in detail* the feature/behavior/change you'd like to see.
In both cases, be ready for followup questions, and please respond in a timely
manner. If we can't reproduce a bug or think a feature already exists, we
might close your issue. If we're wrong, PLEASE feel free to reopen it and
explain why.
-->
## Versions
**kubeadm version** `kubeadm version: &version.Info{Major:"1", Minor:"15", GitVersion:"v1.15.3", GitCommit:"2d3c76f9091b6bec110a5e63777c332469e0cba2", GitTreeState:"clean", BuildDate:"2019-08-26T15:59:52Z", GoVersion:"go1.12.5", Compiler:"gc", Platform:"darwin/amd64"}`
## What happened?
On master nodes, `/etc/kubernetes/kubelet.conf` gets created with a "hardcoded" `client-certificate`/`client-key` instead of pointing to `/var/lib/kubelet/pki/kubelet-client-current.pem` as is done on worker (minion) nodes.
## What you expected to happen?
I expected `/etc/kubernetes/kubelet.conf` to point to `/var/lib/kubelet/pki/kubelet-client-current.pem` to leverage automatic kubelet client certificate rotation that is configured by `kubeadm`
## How to reproduce it (as minimally and precisely as possible)?
`kubeadm init && cat /etc/kubernetes/kubelet.conf`
## Anything else we need to know?
I already know this is a chicken-and-egg problem, but I think it would be really nice if the first master, after initialising the control plane, could make use of `/var/lib/kubelet/pki/kubelet-client-current.pem` to further streamline the certificate rotation process and avoid having to use `kubeadm init kubeconfig kubelet` just on the master nodes to renew the kubelet's client certificate.
|
non_process
|
kubeadm should leverage kubelet automatic client cert rotation on nodes created with kubeadm init is this a bug report or feature request feature request if this is a bug report please fill in as much of the template below as you can if you leave out information we can t help you as well if this is a feature request please describe in detail the feature behavior change you d like to see in both cases be ready for followup questions and please respond in a timely manner if we can t reproduce a bug or think a feature already exists we might close your issue if we re wrong please feel free to reopen it and explain why versions kubeadm version kubeadm version version info major minor gitversion gitcommit gittreestate clean builddate goversion compiler gc platform darwin what happened on master nodes etc kubernetes kubelet conf gets created with hardcoded client certificate client key instead of pointing to var lib kubelet pki kubelet client current pem as done on minions node what you expected to happen i expected etc kubernetes kubelet conf to point to var lib kubelet pki kubelet client current pem to leverage automatic kubelet client certificate rotation that is configured by kubeadm how to reproduce it as minimally and precisely as possible kubeadm init cat etc kubernetes kubelet conf anything else we need to know i already know this a chicken and egg problem but i think it would be really nice if the first master after initialising the control plane could make use of var lib kubelet pki kubelet client current pem to further streamline the certificates rotation process and avoid having to use kubeadm init kubeconfig kubelet just on the master nodes to renew kubelet s client certificate
| 0
|
12,311
| 14,861,133,794
|
IssuesEvent
|
2021-01-18 22:04:47
|
modi-w/AutoVersionsDB
|
https://api.github.com/repos/modi-w/AutoVersionsDB
|
closed
|
Run DevDummyData files when virtual execution on the development environment
|
area-Core good first issue process-ready-for-implementation type-enhancement up-for-grab
|
**The Problem**
When we download a database from production, and we want to work with it in the development environment, we need to run the "virtual" execution.
But the virtual execution does not run the DevDummyData scripts. This is a problem when we want to keep working and developing on this DB, because the next time we run the "sync" process, it will run the DevDummyData scripts and may ruin the database state.
**Solution**
When we run the virtual process on the development environment and the target script file is the last script file, run the virtual execution on the DevDummyData files too.
**Updates**
1.
|
1.0
|
Run DevDummyData files during virtual execution on the development environment - **The Problem**
When we download a database from production, and we want to work with it in the development environment, we need to run the "virtual" execution.
But the virtual execution does not run the DevDummyData scripts. This is a problem when we want to keep working and developing on this DB, because the next time we run the "sync" process, it will run the DevDummyData scripts and may ruin the database state.
**Solution**
When we run the virtual process on the development environment and the target script file is the last script file, run the virtual execution on the DevDummyData files too.
**Updates**
1.
|
process
|
run devdummydata files when virtual execution on the development environment the problem when we download a database from production and we want to work with it in the development environment we need to run the virtual execution but the virtual execution does not run the devdummydata scripts this is a problem when we want to keep working and develop on this db because the next time we will run the sync process it will run the devdummydata scripts and maybe ruin the database state solution when we run the virtual process on the development environment and the target script file is the last script file then run virtual execution on the devdummydatafiles too updates
| 1
|
21,797
| 30,310,171,011
|
IssuesEvent
|
2023-07-10 12:18:57
|
kitspace/kitspace-v2
|
https://api.github.com/repos/kitspace/kitspace-v2
|
closed
|
S3 assets versioning wrt processor version
|
enhancement processor
|
Currently, if we make a change in the processor, we have two options to reprocess the projects:
1. Delete the S3 bucket, delete the data volumes, and re-import the projects again.
2. Add a new commit to all repos (probably out of our control).
I think we need a way to support versioning the assets with respect to the processor version. Maybe add the processor version to the file paths in S3?
This will be useful for rollbacks.
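A minimal sketch of what version-prefixed keys could look like (hypothetical helper and environment variable, not the actual processor code):
```ts
// Prefix every S3 key with the processor version so that a new processor
// writes into a fresh namespace and a rollback just points at the old prefix.
const PROCESSOR_VERSION = process.env.PROCESSOR_VERSION ?? 'v0.0.0'; // assumed env var

function assetKey(projectId: string, assetPath: string): string {
  return `${PROCESSOR_VERSION}/${projectId}/${assetPath}`;
}

// assetKey('user/repo', 'images/top.svg') === 'v0.0.0/user/repo/images/top.svg'
```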
|
1.0
|
S3 assets versioning wrt processor version - Currently, if we make a change in the processor, we have two options to reprocess the projects:
1. Delete the S3 bucket, delete the data volumes, and re-import the projects again.
2. Add a new commit to all repos (probably out of our control).
I think we need a way to support versioning the assets with respect to the processor version. Maybe add the processor version to the file paths in S3?
This will be useful for rollbacks.
|
process
|
assets versioning wrt processor version currently if we make a change in the processor to reprocess the projects we have to options delete the bucket delete the data volumes and re import the projects again add a new commit to all repos probably out of our control i think we need a way to support versioning the assets with respect to the processor version maybe add the processor version to the file paths in this will be useful for rollbacks
| 1
|
10,046
| 13,044,161,646
|
IssuesEvent
|
2020-07-29 03:47:25
|
tikv/tikv
|
https://api.github.com/repos/tikv/tikv
|
closed
|
UCP: Migrate scalar function `SubDateDatetimeInt` from TiDB
|
challenge-program-2 component/coprocessor difficulty/easy sig/coprocessor
|
## Description
Port the scalar function `SubDateDatetimeInt` from TiDB to the coprocessor.
## Score
* 50
## Mentor(s)
* @iosmanthus
## Recommended Skills
* Rust programming
## Learning Materials
Already implemented expressions ported from TiDB
- https://github.com/tikv/tikv/tree/master/components/tidb_query/src/rpn_expr
- https://github.com/tikv/tikv/tree/master/components/tidb_query/src/expr
|
2.0
|
UCP: Migrate scalar function `SubDateDatetimeInt` from TiDB -
## Description
Port the scalar function `SubDateDatetimeInt` from TiDB to the coprocessor.
## Score
* 50
## Mentor(s)
* @iosmanthus
## Recommended Skills
* Rust programming
## Learning Materials
Already implemented expressions ported from TiDB
- https://github.com/tikv/tikv/tree/master/components/tidb_query/src/rpn_expr
- https://github.com/tikv/tikv/tree/master/components/tidb_query/src/expr
|
process
|
ucp migrate scalar function subdatedatetimeint from tidb description port the scalar function subdatedatetimeint from tidb to coprocessor score mentor s iosmanthus recommended skills rust programming learning materials already implemented expressions ported from tidb
| 1
|
143,733
| 11,576,706,548
|
IssuesEvent
|
2020-02-21 12:34:41
|
navikt/tiltaksgjennomforing-varsel
|
https://api.github.com/repos/navikt/tiltaksgjennomforing-varsel
|
closed
|
Build of ny-branch-test
|
deploy ny-branch-test
|
Comment with
>/deploy ny-branch-test
to deploy to dev-fss.
Commit: d9323b0b5006d2fd6b008740c03e9ee148b1f00b
|
1.0
|
Build of ny-branch-test - Comment with
>/deploy ny-branch-test
to deploy to dev-fss.
Commit: d9323b0b5006d2fd6b008740c03e9ee148b1f00b
|
non_process
|
bygg av ny branch test kommenter med deploy ny branch test for å deploye til dev fss commit
| 0
|
2,100
| 4,937,639,972
|
IssuesEvent
|
2016-11-29 08:35:31
|
itsyouonline/identityserver
|
https://api.github.com/repos/itsyouonline/identityserver
|
closed
|
iyo-ImprovedIntegration: Unable to use the OAuth 2.0 support in Postman against Itsyou.online
|
process_wontfix state_inprogress type_bug
|
After a successful 2FA authentication you get this:

|
1.0
|
iyo-ImprovedIntegration: Unable to use the OAuth 2.0 support in Postman against Itsyou.online - After a successful 2FA authentication you get this:

|
process
|
iyo improvedintegration unable to use the oauth support in postman against itsyou online after a successful authentication you get this
| 1
|
21,734
| 30,247,368,988
|
IssuesEvent
|
2023-07-06 17:33:41
|
UnitTestBot/UTBotJava
|
https://api.github.com/repos/UnitTestBot/UTBotJava
|
closed
|
ClassNotFoundException for TestConfiguration inside a test class during Spring unit test generation
|
ctg-bug comp-instrumented-process comp-spring
|
**Description**
ClassNotFoundException for TestConfiguration defined inside CrashControllerIntegrationTests in the instrumented process.
**To Reproduce**
1. Install [UnitTestBot plugin built from main](https://github.com/UnitTestBot/UTBotJava/actions/runs/5277843850) in IntelliJ IDEA
2. Open spring-petclinic project
3. Set JDK 17
4. Generate tests for Owner entity-class:
select `TestConfiguration`, `Unit tests` and leave defaults for all other settings.
**Expected behavior**
Inner class TestConfiguration can be used for test generation.
No exception is expected.
**Actual behavior**
Error test is generated.
A ClassNotFoundException is thrown in utbot-engine-current.log.
**Screenshots, logs**
~~~java
///region Test suites for executable org.springframework.samples.petclinic.owner.Owner.toString
///region Errors report for toString
public void testToString_errors() {
// Couldn't generate some tests. List of errors:
//
// 1 occurrences of:
// <Throwable with empty message>
}
///endregion
///endregion
~~~
~~~java
17:10:46.739 | INFO | EngineProcessMain | -----------------------------------------------------------------------
17:10:46.743 | INFO | EngineProcessMain | -------------------NEW ENGINE PROCESS STARTED--------------------------
17:10:46.744 | INFO | EngineProcessMain | -----------------------------------------------------------------------
17:10:47.364 | INFO | SpringAnalyzerProcess | Spring Analyzer process started with PID = 88618
17:10:48.071 | INFO | SpringAnalyzerProcess | RdCategory: SpringAnalyzerProcessMain | -----------------------------------------------------------------------
17:10:48.072 | INFO | SpringAnalyzerProcess | RdCategory: SpringAnalyzerProcessMain | ------------------NEW SPRING ANALYZER PROCESS STARTED------------------
17:10:48.072 | INFO | SpringAnalyzerProcess | RdCategory: SpringAnalyzerProcessMain | -----------------------------------------------------------------------
17:10:48.086 | INFO | SpringAnalyzerProcess | RdCategory: SourceFinder | Using java Spring configuration
17:10:48.116 | ERROR | EngineProcessMain | Spring Analyzer crashed, resorting to using empty bean list
com.jetbrains.rd.util.reactive.RdFault: org.springframework.samples.petclinic.system.CrashControllerIntegrationTests.TestConfiguration, reason: java.lang.ClassNotFoundException: org.springframework.samples.petclinic.system.CrashControllerIntegrationTests.TestConfiguration
at java.base/jdk.internal.loader.BuiltinClassLoader.loadClass(BuiltinClassLoader.java:641)
at java.base/jdk.internal.loader.ClassLoaders$AppClassLoader.loadClass(ClassLoaders.java:188)
at java.base/java.lang.ClassLoader.loadClass(ClassLoader.java:520)
at org.utbot.spring.utils.SourceFinder.findSources(SourceFinder.kt:30)
at org.utbot.spring.analyzer.SpringApplicationAnalyzer.getBeanDefinitions(SpringApplicationAnalyzer.kt:13)
at org.utbot.spring.process.SpringAnalyzerProcessMainKt$setup$1.invoke(SpringAnalyzerProcessMain.kt:51)
at org.utbot.spring.process.SpringAnalyzerProcessMainKt$setup$1.invoke(SpringAnalyzerProcessMain.kt:43)
at org.utbot.rd.IdleWatchdog$measureTimeForActiveCall$1$2$1.invoke(ClientProcessUtil.kt:115)
at org.utbot.rd.IdleWatchdog.wrapActive(ClientProcessUtil.kt:88)
at org.utbot.rd.IdleWatchdog$measureTimeForActiveCall$1.invoke(ClientProcessUtil.kt:114)
at com.jetbrains.rd.framework.IRdEndpoint$set$1.invoke(TaskInterfaces.kt:182)
at com.jetbrains.rd.framework.IRdEndpoint$set$1.invoke(TaskInterfaces.kt:182)
at com.jetbrains.rd.framework.impl.RdCall.onWireReceived(RdTask.kt:362)
at com.jetbrains.rd.framework.MessageBroker$invoke$2$2.invoke(MessageBroker.kt:57)
at com.jetbrains.rd.framework.MessageBroker$invoke$2$2.invoke(MessageBroker.kt:56)
at com.jetbrains.rd.framework.impl.ProtocolContexts.readMessageContextAndInvoke(ProtocolContexts.kt:148)
at com.jetbrains.rd.framework.MessageBroker$invoke$2.invoke(MessageBroker.kt:56)
at com.jetbrains.rd.framework.MessageBroker$invoke$2.invoke(MessageBroker.kt:54)
at com.jetbrains.rd.util.threading.SingleThreadSchedulerBase.queue$lambda-3(SingleThreadScheduler.kt:41)
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1136)
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635)
at java.base/java.lang.Thread.run(Thread.java:833)
at com.jetbrains.rd.framework.RdTaskResult$Companion.read(TaskInterfaces.kt:30) ~[rd-framework-2023.1.2.jar:?]
at com.jetbrains.rd.framework.impl.CallSiteWiredRdTask.onWireReceived(RdTask.kt:106) ~[rd-framework-2023.1.2.jar:?]
at com.jetbrains.rd.framework.MessageBroker$invoke$2$2.invoke(MessageBroker.kt:57) ~[rd-framework-2023.1.2.jar:?]
at com.jetbrains.rd.framework.MessageBroker$invoke$2$2.invoke(MessageBroker.kt:56) ~[rd-framework-2023.1.2.jar:?]
at com.jetbrains.rd.framework.impl.ProtocolContexts.readMessageContextAndInvoke(ProtocolContexts.kt:148) ~[rd-framework-2023.1.2.jar:?]
at com.jetbrains.rd.framework.MessageBroker$invoke$2.invoke(MessageBroker.kt:56) ~[rd-framework-2023.1.2.jar:?]
at com.jetbrains.rd.framework.MessageBroker$invoke$2.invoke(MessageBroker.kt:54) ~[rd-framework-2023.1.2.jar:?]
at com.jetbrains.rd.framework.impl.RdCall$createResponseScheduler$1.queue$execute(RdTask.kt:280) ~[rd-framework-2023.1.2.jar:?]
at com.jetbrains.rd.framework.impl.RdCall$createResponseScheduler$1.access$queue$execute(RdTask.kt:269) ~[rd-framework-2023.1.2.jar:?]
at com.jetbrains.rd.framework.impl.RdCall$createResponseScheduler$1$queue$1.invokeSuspend(RdTask.kt:289) ~[rd-framework-2023.1.2.jar:?]
at kotlin.coroutines.jvm.internal.BaseContinuationImpl.resumeWith(ContinuationImpl.kt:33) ~[kotlin-stdlib-1.8.10.jar:1.8.10-release-430(1.8.10)]
at kotlinx.coroutines.DispatchedTask.run(DispatchedTask.kt:106) ~[kotlinx-coroutines-core-jvm-1.6.3.jar:?]
at kotlinx.coroutines.EventLoopImplBase.processNextEvent(EventLoop.common.kt:284) ~[kotlinx-coroutines-core-jvm-1.6.3.jar:?]
at kotlinx.coroutines.BlockingCoroutine.joinBlocking(Builders.kt:85) ~[kotlinx-coroutines-core-jvm-1.6.3.jar:?]
at kotlinx.coroutines.BuildersKt__BuildersKt.runBlocking(Builders.kt:59) ~[kotlinx-coroutines-core-jvm-1.6.3.jar:?]
at kotlinx.coroutines.BuildersKt.runBlocking(Unknown Source) ~[kotlinx-coroutines-core-jvm-1.6.3.jar:?]
at kotlinx.coroutines.BuildersKt__BuildersKt.runBlocking$default(Builders.kt:38) ~[kotlinx-coroutines-core-jvm-1.6.3.jar:?]
at kotlinx.coroutines.BuildersKt.runBlocking$default(Unknown Source) ~[kotlinx-coroutines-core-jvm-1.6.3.jar:?]
at org.utbot.rd.UtRdUtilKt.startBlocking(UtRdUtil.kt:31) ~[utbot-rd-main-2023.6.4608.jar:?]
at org.utbot.spring.process.SpringAnalyzerProcess.getBeanDefinitions(SpringAnalyzerProcess.kt:97) ~[utbot-spring-analyzer-main-2023.6.4608.jar:?]
at org.utbot.framework.process.EngineProcessMainKt$setup$2.invoke(EngineProcessMain.kt:85) ~[utbot-framework-main-2023.6.4608.jar:?]
at org.utbot.framework.process.EngineProcessMainKt$setup$2.invoke(EngineProcessMain.kt:81) ~[utbot-framework-main-2023.6.4608.jar:?]
at org.utbot.rd.IdleWatchdog$measureTimeForActiveCall$1$2$1.invoke(ClientProcessUtil.kt:115) ~[utbot-rd-main-2023.6.4608.jar:?]
at org.utbot.rd.IdleWatchdog.wrapActive(ClientProcessUtil.kt:88) ~[utbot-rd-main-2023.6.4608.jar:?]
at org.utbot.rd.IdleWatchdog$measureTimeForActiveCall$1.invoke(ClientProcessUtil.kt:114) ~[utbot-rd-main-2023.6.4608.jar:?]
at com.jetbrains.rd.framework.IRdEndpoint$set$1.invoke(TaskInterfaces.kt:182) ~[rd-framework-2023.1.2.jar:?]
at com.jetbrains.rd.framework.IRdEndpoint$set$1.invoke(TaskInterfaces.kt:182) ~[rd-framework-2023.1.2.jar:?]
at com.jetbrains.rd.framework.impl.RdCall.onWireReceived(RdTask.kt:362) ~[rd-framework-2023.1.2.jar:?]
at com.jetbrains.rd.framework.MessageBroker$invoke$2$2.invoke(MessageBroker.kt:57) ~[rd-framework-2023.1.2.jar:?]
at com.jetbrains.rd.framework.MessageBroker$invoke$2$2.invoke(MessageBroker.kt:56) ~[rd-framework-2023.1.2.jar:?]
at com.jetbrains.rd.framework.impl.ProtocolContexts.readMessageContextAndInvoke(ProtocolContexts.kt:148) ~[rd-framework-2023.1.2.jar:?]
at com.jetbrains.rd.framework.MessageBroker$invoke$2.invoke(MessageBroker.kt:56) ~[rd-framework-2023.1.2.jar:?]
at com.jetbrains.rd.framework.MessageBroker$invoke$2.invoke(MessageBroker.kt:54) ~[rd-framework-2023.1.2.jar:?]
at com.jetbrains.rd.util.threading.SingleThreadSchedulerBase.queue$lambda-3(SingleThreadScheduler.kt:41) ~[rd-core-2023.1.2.jar:?]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1136) [?:?]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635) [?:?]
at java.lang.Thread.run(Thread.java:833) [?:?]
~~~
**Environment**
OS - macOS Ventura 13.2.1 (22D68)
IntelliJ IDEA version - 2023.1.2 **Community edition**
Project - spring-petclinic
JDK - 17
|
1.0
|
ClassNotFoundException TestConfiguration inside Test class for Spring unit test generation - **Description**
ClassNotFoundException for TestConfiguration defined inside CrashControllerIntegrationTests in instrumented process.
**To Reproduce**
1. Install [UnitTestBot plugin built from main](https://github.com/UnitTestBot/UTBotJava/actions/runs/5277843850) in IntelliJ IDEA
2. Open spring-petclinic project
3. Set JDK 17
4. Generate tests for Owner entity-class:
select `TestConfiguration`, `Unit tests` and leave defaults for all other settings.
**Expected behavior**
Inner class TestConfiguration can be used for test generation.
No exception is expected.
**Actual behavior**
An error test is generated.
A ClassNotFoundException is thrown in utbot-engine-current.log, likely because the nested class is looked up by its dotted source name (`CrashControllerIntegrationTests.TestConfiguration`) rather than its JVM binary name (`CrashControllerIntegrationTests$TestConfiguration`), which the class loader cannot resolve.
**Screenshots, logs**
~~~java
///region Test suites for executable org.springframework.samples.petclinic.owner.Owner.toString
///region Errors report for toString
public void testToString_errors() {
// Couldn't generate some tests. List of errors:
//
// 1 occurrences of:
// <Throwable with empty message>
}
///endregion
///endregion
~~~
~~~java
17:10:46.739 | INFO | EngineProcessMain | -----------------------------------------------------------------------
17:10:46.743 | INFO | EngineProcessMain | -------------------NEW ENGINE PROCESS STARTED--------------------------
17:10:46.744 | INFO | EngineProcessMain | -----------------------------------------------------------------------
17:10:47.364 | INFO | SpringAnalyzerProcess | Spring Analyzer process started with PID = 88618
17:10:48.071 | INFO | SpringAnalyzerProcess | RdCategory: SpringAnalyzerProcessMain | -----------------------------------------------------------------------
17:10:48.072 | INFO | SpringAnalyzerProcess | RdCategory: SpringAnalyzerProcessMain | ------------------NEW SPRING ANALYZER PROCESS STARTED------------------
17:10:48.072 | INFO | SpringAnalyzerProcess | RdCategory: SpringAnalyzerProcessMain | -----------------------------------------------------------------------
17:10:48.086 | INFO | SpringAnalyzerProcess | RdCategory: SourceFinder | Using java Spring configuration
17:10:48.116 | ERROR | EngineProcessMain | Spring Analyzer crashed, resorting to using empty bean list
com.jetbrains.rd.util.reactive.RdFault: org.springframework.samples.petclinic.system.CrashControllerIntegrationTests.TestConfiguration, reason: java.lang.ClassNotFoundException: org.springframework.samples.petclinic.system.CrashControllerIntegrationTests.TestConfiguration
at java.base/jdk.internal.loader.BuiltinClassLoader.loadClass(BuiltinClassLoader.java:641)
at java.base/jdk.internal.loader.ClassLoaders$AppClassLoader.loadClass(ClassLoaders.java:188)
at java.base/java.lang.ClassLoader.loadClass(ClassLoader.java:520)
at org.utbot.spring.utils.SourceFinder.findSources(SourceFinder.kt:30)
at org.utbot.spring.analyzer.SpringApplicationAnalyzer.getBeanDefinitions(SpringApplicationAnalyzer.kt:13)
at org.utbot.spring.process.SpringAnalyzerProcessMainKt$setup$1.invoke(SpringAnalyzerProcessMain.kt:51)
at org.utbot.spring.process.SpringAnalyzerProcessMainKt$setup$1.invoke(SpringAnalyzerProcessMain.kt:43)
at org.utbot.rd.IdleWatchdog$measureTimeForActiveCall$1$2$1.invoke(ClientProcessUtil.kt:115)
at org.utbot.rd.IdleWatchdog.wrapActive(ClientProcessUtil.kt:88)
at org.utbot.rd.IdleWatchdog$measureTimeForActiveCall$1.invoke(ClientProcessUtil.kt:114)
at com.jetbrains.rd.framework.IRdEndpoint$set$1.invoke(TaskInterfaces.kt:182)
at com.jetbrains.rd.framework.IRdEndpoint$set$1.invoke(TaskInterfaces.kt:182)
at com.jetbrains.rd.framework.impl.RdCall.onWireReceived(RdTask.kt:362)
at com.jetbrains.rd.framework.MessageBroker$invoke$2$2.invoke(MessageBroker.kt:57)
at com.jetbrains.rd.framework.MessageBroker$invoke$2$2.invoke(MessageBroker.kt:56)
at com.jetbrains.rd.framework.impl.ProtocolContexts.readMessageContextAndInvoke(ProtocolContexts.kt:148)
at com.jetbrains.rd.framework.MessageBroker$invoke$2.invoke(MessageBroker.kt:56)
at com.jetbrains.rd.framework.MessageBroker$invoke$2.invoke(MessageBroker.kt:54)
at com.jetbrains.rd.util.threading.SingleThreadSchedulerBase.queue$lambda-3(SingleThreadScheduler.kt:41)
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1136)
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635)
at java.base/java.lang.Thread.run(Thread.java:833)
at com.jetbrains.rd.framework.RdTaskResult$Companion.read(TaskInterfaces.kt:30) ~[rd-framework-2023.1.2.jar:?]
at com.jetbrains.rd.framework.impl.CallSiteWiredRdTask.onWireReceived(RdTask.kt:106) ~[rd-framework-2023.1.2.jar:?]
at com.jetbrains.rd.framework.MessageBroker$invoke$2$2.invoke(MessageBroker.kt:57) ~[rd-framework-2023.1.2.jar:?]
at com.jetbrains.rd.framework.MessageBroker$invoke$2$2.invoke(MessageBroker.kt:56) ~[rd-framework-2023.1.2.jar:?]
at com.jetbrains.rd.framework.impl.ProtocolContexts.readMessageContextAndInvoke(ProtocolContexts.kt:148) ~[rd-framework-2023.1.2.jar:?]
at com.jetbrains.rd.framework.MessageBroker$invoke$2.invoke(MessageBroker.kt:56) ~[rd-framework-2023.1.2.jar:?]
at com.jetbrains.rd.framework.MessageBroker$invoke$2.invoke(MessageBroker.kt:54) ~[rd-framework-2023.1.2.jar:?]
at com.jetbrains.rd.framework.impl.RdCall$createResponseScheduler$1.queue$execute(RdTask.kt:280) ~[rd-framework-2023.1.2.jar:?]
at com.jetbrains.rd.framework.impl.RdCall$createResponseScheduler$1.access$queue$execute(RdTask.kt:269) ~[rd-framework-2023.1.2.jar:?]
at com.jetbrains.rd.framework.impl.RdCall$createResponseScheduler$1$queue$1.invokeSuspend(RdTask.kt:289) ~[rd-framework-2023.1.2.jar:?]
at kotlin.coroutines.jvm.internal.BaseContinuationImpl.resumeWith(ContinuationImpl.kt:33) ~[kotlin-stdlib-1.8.10.jar:1.8.10-release-430(1.8.10)]
at kotlinx.coroutines.DispatchedTask.run(DispatchedTask.kt:106) ~[kotlinx-coroutines-core-jvm-1.6.3.jar:?]
at kotlinx.coroutines.EventLoopImplBase.processNextEvent(EventLoop.common.kt:284) ~[kotlinx-coroutines-core-jvm-1.6.3.jar:?]
at kotlinx.coroutines.BlockingCoroutine.joinBlocking(Builders.kt:85) ~[kotlinx-coroutines-core-jvm-1.6.3.jar:?]
at kotlinx.coroutines.BuildersKt__BuildersKt.runBlocking(Builders.kt:59) ~[kotlinx-coroutines-core-jvm-1.6.3.jar:?]
at kotlinx.coroutines.BuildersKt.runBlocking(Unknown Source) ~[kotlinx-coroutines-core-jvm-1.6.3.jar:?]
at kotlinx.coroutines.BuildersKt__BuildersKt.runBlocking$default(Builders.kt:38) ~[kotlinx-coroutines-core-jvm-1.6.3.jar:?]
at kotlinx.coroutines.BuildersKt.runBlocking$default(Unknown Source) ~[kotlinx-coroutines-core-jvm-1.6.3.jar:?]
at org.utbot.rd.UtRdUtilKt.startBlocking(UtRdUtil.kt:31) ~[utbot-rd-main-2023.6.4608.jar:?]
at org.utbot.spring.process.SpringAnalyzerProcess.getBeanDefinitions(SpringAnalyzerProcess.kt:97) ~[utbot-spring-analyzer-main-2023.6.4608.jar:?]
at org.utbot.framework.process.EngineProcessMainKt$setup$2.invoke(EngineProcessMain.kt:85) ~[utbot-framework-main-2023.6.4608.jar:?]
at org.utbot.framework.process.EngineProcessMainKt$setup$2.invoke(EngineProcessMain.kt:81) ~[utbot-framework-main-2023.6.4608.jar:?]
at org.utbot.rd.IdleWatchdog$measureTimeForActiveCall$1$2$1.invoke(ClientProcessUtil.kt:115) ~[utbot-rd-main-2023.6.4608.jar:?]
at org.utbot.rd.IdleWatchdog.wrapActive(ClientProcessUtil.kt:88) ~[utbot-rd-main-2023.6.4608.jar:?]
at org.utbot.rd.IdleWatchdog$measureTimeForActiveCall$1.invoke(ClientProcessUtil.kt:114) ~[utbot-rd-main-2023.6.4608.jar:?]
at com.jetbrains.rd.framework.IRdEndpoint$set$1.invoke(TaskInterfaces.kt:182) ~[rd-framework-2023.1.2.jar:?]
at com.jetbrains.rd.framework.IRdEndpoint$set$1.invoke(TaskInterfaces.kt:182) ~[rd-framework-2023.1.2.jar:?]
at com.jetbrains.rd.framework.impl.RdCall.onWireReceived(RdTask.kt:362) ~[rd-framework-2023.1.2.jar:?]
at com.jetbrains.rd.framework.MessageBroker$invoke$2$2.invoke(MessageBroker.kt:57) ~[rd-framework-2023.1.2.jar:?]
at com.jetbrains.rd.framework.MessageBroker$invoke$2$2.invoke(MessageBroker.kt:56) ~[rd-framework-2023.1.2.jar:?]
at com.jetbrains.rd.framework.impl.ProtocolContexts.readMessageContextAndInvoke(ProtocolContexts.kt:148) ~[rd-framework-2023.1.2.jar:?]
at com.jetbrains.rd.framework.MessageBroker$invoke$2.invoke(MessageBroker.kt:56) ~[rd-framework-2023.1.2.jar:?]
at com.jetbrains.rd.framework.MessageBroker$invoke$2.invoke(MessageBroker.kt:54) ~[rd-framework-2023.1.2.jar:?]
at com.jetbrains.rd.util.threading.SingleThreadSchedulerBase.queue$lambda-3(SingleThreadScheduler.kt:41) ~[rd-core-2023.1.2.jar:?]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1136) [?:?]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635) [?:?]
at java.lang.Thread.run(Thread.java:833) [?:?]
~~~
**Environment**
OS - macOS Ventura 13.2.1 (22D68)
IntelliJ IDEA version - 2023.1.2 **Community edition**
Project - spring-petclinic
JDK - 17
|
process
|
classnotfoundexception testconfiguration inside test class for a spring unit tests generation description classnotfoundexception for testconfiguration defined inside crashcontrollerintegrationtests in instrumented process to reproduce install in intellij idea open spring petclinic project set jdk generate tests for owner entity class select testconfiguration unit tests and leave defaults for all other settings expected behavior inner class testconfiguration can be used for test generation no exception is expected actual behavior error test is generated classnotfound exception is thrown in utbot engine current log screenshots logs java region test suites for executable org springframework samples petclinic owner owner tostring region errors report for tostring public void testtostring errors couldn t generate some tests list of errors occurrences of endregion endregion java info engineprocessmain info engineprocessmain new engine process started info engineprocessmain info springanalyzerprocess spring analyzer process started with pid info springanalyzerprocess rdcategory springanalyzerprocessmain info springanalyzerprocess rdcategory springanalyzerprocessmain new spring analyzer process started info springanalyzerprocess rdcategory springanalyzerprocessmain info springanalyzerprocess rdcategory sourcefinder using java spring configuration error engineprocessmain spring analyzer crashed resorting to using empty bean list com jetbrains rd util reactive rdfault org springframework samples petclinic system crashcontrollerintegrationtests testconfiguration reason java lang classnotfoundexception org springframework samples petclinic system crashcontrollerintegrationtests testconfiguration at java base jdk internal loader builtinclassloader loadclass builtinclassloader java at java base jdk internal loader classloaders appclassloader loadclass classloaders java at java base java lang classloader loadclass classloader java at org utbot spring utils sourcefinder findsources sourcefinder kt at org utbot spring analyzer springapplicationanalyzer getbeandefinitions springapplicationanalyzer kt at org utbot spring process springanalyzerprocessmainkt setup invoke springanalyzerprocessmain kt at org utbot spring process springanalyzerprocessmainkt setup invoke springanalyzerprocessmain kt at org utbot rd idlewatchdog measuretimeforactivecall invoke clientprocessutil kt at org utbot rd idlewatchdog wrapactive clientprocessutil kt at org utbot rd idlewatchdog measuretimeforactivecall invoke clientprocessutil kt at com jetbrains rd framework irdendpoint set invoke taskinterfaces kt at com jetbrains rd framework irdendpoint set invoke taskinterfaces kt at com jetbrains rd framework impl rdcall onwirereceived rdtask kt at com jetbrains rd framework messagebroker invoke invoke messagebroker kt at com jetbrains rd framework messagebroker invoke invoke messagebroker kt at com jetbrains rd framework impl protocolcontexts readmessagecontextandinvoke protocolcontexts kt at com jetbrains rd framework messagebroker invoke invoke messagebroker kt at com jetbrains rd framework messagebroker invoke invoke messagebroker kt at com jetbrains rd util threading singlethreadschedulerbase queue lambda singlethreadscheduler kt at java base java util concurrent threadpoolexecutor runworker threadpoolexecutor java at java base java util concurrent threadpoolexecutor worker run threadpoolexecutor java at java base java lang thread run thread java at com jetbrains rd framework rdtaskresult companion read taskinterfaces kt at com 
jetbrains rd framework impl callsitewiredrdtask onwirereceived rdtask kt at com jetbrains rd framework messagebroker invoke invoke messagebroker kt at com jetbrains rd framework messagebroker invoke invoke messagebroker kt at com jetbrains rd framework impl protocolcontexts readmessagecontextandinvoke protocolcontexts kt at com jetbrains rd framework messagebroker invoke invoke messagebroker kt at com jetbrains rd framework messagebroker invoke invoke messagebroker kt at com jetbrains rd framework impl rdcall createresponsescheduler queue execute rdtask kt at com jetbrains rd framework impl rdcall createresponsescheduler access queue execute rdtask kt at com jetbrains rd framework impl rdcall createresponsescheduler queue invokesuspend rdtask kt at kotlin coroutines jvm internal basecontinuationimpl resumewith continuationimpl kt at kotlinx coroutines dispatchedtask run dispatchedtask kt at kotlinx coroutines eventloopimplbase processnextevent eventloop common kt at kotlinx coroutines blockingcoroutine joinblocking builders kt at kotlinx coroutines builderskt builderskt runblocking builders kt at kotlinx coroutines builderskt runblocking unknown source at kotlinx coroutines builderskt builderskt runblocking default builders kt at kotlinx coroutines builderskt runblocking default unknown source at org utbot rd utrdutilkt startblocking utrdutil kt at org utbot spring process springanalyzerprocess getbeandefinitions springanalyzerprocess kt at org utbot framework process engineprocessmainkt setup invoke engineprocessmain kt at org utbot framework process engineprocessmainkt setup invoke engineprocessmain kt at org utbot rd idlewatchdog measuretimeforactivecall invoke clientprocessutil kt at org utbot rd idlewatchdog wrapactive clientprocessutil kt at org utbot rd idlewatchdog measuretimeforactivecall invoke clientprocessutil kt at com jetbrains rd framework irdendpoint set invoke taskinterfaces kt at com jetbrains rd framework irdendpoint set invoke taskinterfaces kt at com jetbrains rd framework impl rdcall onwirereceived rdtask kt at com jetbrains rd framework messagebroker invoke invoke messagebroker kt at com jetbrains rd framework messagebroker invoke invoke messagebroker kt at com jetbrains rd framework impl protocolcontexts readmessagecontextandinvoke protocolcontexts kt at com jetbrains rd framework messagebroker invoke invoke messagebroker kt at com jetbrains rd framework messagebroker invoke invoke messagebroker kt at com jetbrains rd util threading singlethreadschedulerbase queue lambda singlethreadscheduler kt at java util concurrent threadpoolexecutor runworker threadpoolexecutor java at java util concurrent threadpoolexecutor worker run threadpoolexecutor java at java lang thread run thread java environment os macos ventura intellij idea version community edition project spring petclinic jdk
| 1
|
12,703
| 15,078,579,319
|
IssuesEvent
|
2021-02-05 08:56:40
|
cypress-io/cypress
|
https://api.github.com/repos/cypress-io/cypress
|
closed
|
Have the Cypress GitHub bot handle issues that were opened without changing the issue template
|
process: contributing stage: ready for work type: chore
|
### Current behavior:
We frequently have users that open issues where they do not edit the issue template at all. These issues are obviously not providing enough detail and end up being closed by us manually.
<img width="755" alt="Screen Shot 2019-06-19 at 10 53 16 PM" src="https://user-images.githubusercontent.com/1271364/59825241-7e30a380-9359-11e9-9a58-bd06b7791c94.png">
<img width="788" alt="Screen Shot 2019-06-20 at 12 47 15 PM" src="https://user-images.githubusercontent.com/1271364/59825258-8f79b000-9359-11e9-87d6-1166e8cd4ddf.png">
### Desired behavior:
If an issue is opened without changing the issue template - the issue should be automatically closed.
IDEALLY, I would prefer the bot do the following:
- Scan new issues to see if the content of the original comment is any different from our [ISSUE_TEMPLATE.md](https://github.com/cypress-io/cypress/blob/develop/ISSUE_TEMPLATE.md)
- If NO differences - CLOSE the issue.
- Post a comment saying -
>Unfortunately we have to close this issue as there is not enough information to reproduce the problem.
>
>Please comment in this issue with a reproducible example and we will reopen the issue.
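A minimal sketch of the requested check, in Python (hypothetical, not Cypress's actual bot): it compares a newly opened issue's body against the template and, if nothing was changed, posts the comment quoted above and closes the issue via the GitHub REST API. The repo slug and token are placeholder assumptions.
```python
# Hypothetical sketch of the requested bot behavior (not Cypress's actual bot).
# Assumptions: a GitHub token with repo scope, and the standard REST endpoints
# for commenting on and closing an issue.
import requests

API = "https://api.github.com"
REPO = "cypress-io/cypress"                            # placeholder
HEADERS = {"Authorization": "token <GITHUB_TOKEN>"}    # placeholder

CLOSE_COMMENT = (
    "Unfortunately we have to close this issue as there is not enough "
    "information to reproduce the problem.\n\n"
    "Please comment in this issue with a reproducible example and we will "
    "reopen the issue."
)

def handle_new_issue(issue_number: int, issue_body: str, template: str) -> None:
    # Normalize line endings so a CRLF/LF difference doesn't count as an edit.
    if issue_body.replace("\r\n", "\n").strip() != template.strip():
        return  # the reporter edited the template; nothing to do
    base = f"{API}/repos/{REPO}/issues/{issue_number}"
    requests.post(f"{base}/comments", json={"body": CLOSE_COMMENT}, headers=HEADERS)
    requests.patch(base, json={"state": "closed"}, headers=HEADERS)
```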
|
1.0
|
Have the Cypress GitHub bot handle issues that were opened without changing the issue template - ### Current behavior:
We frequently have users that open issues where they do not edit the issue template at all. These issues are obviously not providing enough detail and end up being closed by us manually.
<img width="755" alt="Screen Shot 2019-06-19 at 10 53 16 PM" src="https://user-images.githubusercontent.com/1271364/59825241-7e30a380-9359-11e9-9a58-bd06b7791c94.png">
<img width="788" alt="Screen Shot 2019-06-20 at 12 47 15 PM" src="https://user-images.githubusercontent.com/1271364/59825258-8f79b000-9359-11e9-87d6-1166e8cd4ddf.png">
### Desired behavior:
If an issue is opened without changing the issue template - the issue should be automatically closed.
IDEALLY, I would prefer the bot do the following:
- Scan new issues to see if the content of the original comment is any different from our [ISSUE_TEMPLATE.md](https://github.com/cypress-io/cypress/blob/develop/ISSUE_TEMPLATE.md)
- If NO differences - CLOSE the issue.
- Post a comment saying -
>Unfortunately we have to close this issue as there is not enough information to reproduce the problem.
>
>Please comment in this issue with a reproducible example and we will reopen the issue.
|
process
|
have the cypress github bot handle issues that were opened without changing the issue template current behavior we frequently have users that open issues where they do not edit the issue template at all these issues are obviously not providing enough detail and end up being closed by us manually img width alt screen shot at pm src img width alt screen shot at pm src desired behavior if an issue is opened without changing the issue template the issue should be automatically closed ideally i would prefer the bot do the following scan new issues to see if the content of the original comment is any different from our if no differences close the issue post a comment saying unfortunately we have to close this issue as there is not enough information to reproduce the problem please comment in this issue with a reproducible example and we will reopen the issue
| 1
|
2,141
| 4,982,867,518
|
IssuesEvent
|
2016-12-07 12:52:04
|
our-city-app/oca-backend
|
https://api.github.com/repos/our-city-app/oca-backend
|
closed
|
Signing up a trial service doesn't work
|
process_duplicate type_bug
|
Method "service.trial_signup" does not exist

|
1.0
|
Signing up a trial service doesn't work - Method "service.trial_signup" does not exist

|
process
|
signing up a trial service doesn t work method service trial signup does not exist
| 1
|
17,488
| 23,302,706,513
|
IssuesEvent
|
2022-08-07 15:08:07
|
Battle-s/battle-school-backend
|
https://api.github.com/repos/Battle-s/battle-school-backend
|
closed
|
[FEAT] Create School Entity
|
feature :computer: processing :hourglass_flowing_sand:
|
## Description
> Describe the issue. It is best if the assignee writes this together with the team.
## Checklist
> List, as checkboxes, the conditions required to close this issue.
- Create the School Entity
## References
> Add any reference material needed to resolve this issue.
## Related discussion
> If this issue was discussed, briefly summarize the discussion here.
|
1.0
|
[FEAT] Create School Entity - ## Description
> Describe the issue. It is best if the assignee writes this together with the team.
## Checklist
> List, as checkboxes, the conditions required to close this issue.
- Create the School Entity
## References
> Add any reference material needed to resolve this issue.
## Related discussion
> If this issue was discussed, briefly summarize the discussion here.
|
process
|
create school entity description describe the issue it is best if the assignee writes this together with the team checklist list as checkboxes the conditions required to close this issue create the school entity references add any reference material needed to resolve this issue related discussion if this issue was discussed briefly summarize the discussion here
| 1
|
15,034
| 18,755,331,506
|
IssuesEvent
|
2021-11-05 10:00:15
|
ClearcodeHQ/pytest-postgresql
|
https://api.github.com/repos/ClearcodeHQ/pytest-postgresql
|
closed
|
postgresql_noproc with pytest-xdist and each worker using a separate database in the shared process
|
enhancement help wanted noprocess
|
### What action do you want to perform
I want to run a larger test suite using pytest-postgresql with the `postgresql_noproc` fixture (so all tests use PostgreSQL in a single Docker container) in parallel with pytest-xdist.
### What are the results
It fails due to all test processes attempting to concurrently create or drop the same test database specified via `dbname`.
### What are the expected results
Tests succeed, operating on separate databases (similar to how, with `postgresql_proc`, we run separate processes for the databases). This can easily be worked around by using the executor and `DatabaseJanitor` directly, without the fixtures, and passing a database name containing the value of the `worker_id` fixture.
I have two ideas for how this could be implemented:
* either support a fixture for overriding the `dbname` parameter (so the user could provide a session-scoped fixture using `worker_id`), as sketched below,
* or detect xdist and adjust database names automatically, though this might be too fragile and break other use cases.
In any case, this still needs support for overriding `dbname` or just making it unique: in my test suite a single worker might mix tests using different database templates (one has tables used by tests and the other is used for testing migrations on an empty database).
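A minimal sketch of the first idea, assuming pytest-postgresql's `DatabaseJanitor` and the connection attributes (`user`, `host`, `port`, `version`, `password`) that recent releases expose on the `postgresql_noproc` fixture; the fixture name and the `tests_` prefix are illustrative:
```python
# Sketch only: one database per pytest-xdist worker on the shared
# noproc-managed PostgreSQL instance. Requires pytest-xdist (worker_id)
# and assumes the DatabaseJanitor signature of recent pytest-postgresql.
import pytest
from pytest_postgresql.janitor import DatabaseJanitor

@pytest.fixture(scope="session")
def per_worker_database(postgresql_noproc, worker_id):
    # worker_id is "master" without xdist, "gw0"/"gw1"/... with it.
    dbname = f"tests_{worker_id}"
    janitor = DatabaseJanitor(
        user=postgresql_noproc.user,
        host=postgresql_noproc.host,
        port=postgresql_noproc.port,
        dbname=dbname,
        version=postgresql_noproc.version,
        password=postgresql_noproc.password,
    )
    janitor.init()   # CREATE DATABASE tests_<worker_id>
    yield dbname
    janitor.drop()   # DROP DATABASE at session end
```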
|
1.0
|
postgresql_noproc with pytest-xdist and each worker using a separate database in the shared process - ### What action do you want to perform
I want to run a larger test suite using pytest-postgresql with the `postgresql_noproc` fixture (so all tests use PostgreSQL in a single Docker container) in parallel with pytest-xdist.
### What are the results
It fails due to all test processes attempting to concurrently create or drop the same test database specified via `dbname`.
### What are the expected results
Tests succeed, operating on separate databases (similar to how, with `postgresql_proc`, we run separate processes for the databases). This can easily be worked around by using the executor and `DatabaseJanitor` directly, without the fixtures, and passing a database name containing the value of the `worker_id` fixture.
I have two ideas for how this could be implemented:
* either support a fixture for overriding the `dbname` parameter (so the user could provide a session-scoped fixture using `worker_id`),
* or detect xdist and adjust database names automatically, though this might be too fragile and break other use cases.
In any case, this still needs support for overriding `dbname` or just making it unique: in my test suite a single worker might mix tests using different database templates (one has tables used by tests and the other is used for testing migrations on an empty database).
|
process
|
postgresql noproc with pytest xdist and each worker using a separate database in the shared process what action do you want to perform i want to run a larger test suite using pytest postgresql with the postgresql noproc fixture so all tests use postgresql in a single docker container in parallel with pytest xdist what are the results it fails due to all test processes attempting to concurrently create or drop the same test database specified via dbname what are the expected results tests succeed operating on separate databases similarly how with postgresql proc we run separate processes for the databases this can be easily worked around by using the executor and databasejanitor directly without the fixtures and passing a database name containing the value of the worker id fixture i have two ideas how this can be implemented either support a fixture for overriding the dbname parameter so the user could provide a session scoped fixture using worker id or detect xdist and adjust database names but this might be too fragile to not break other use cases in any case this still needs support for overriding dbname or just making it unique in my test suite a single worker might mix tests using different database templates one has tables used by tests and the other one is used for testing migrations on an empty database
| 1
|
13,327
| 15,788,296,398
|
IssuesEvent
|
2021-04-01 20:34:06
|
apache/trafficcontrol
|
https://api.github.com/repos/apache/trafficcontrol
|
opened
|
`release.pl` exits successfully when it fails to update the VERSION file
|
bug process
|
## I'm submitting a ...
- bug report
## Traffic Control components affected ...
None
## Current behavior:
If the `release.pl` script fails to edit the VERSION file, it will plow ahead anyway. Or, presumably, if any step fails it will continue to do things, possibly incorrectly.
## Expected behavior:
If part of the process of creating a release (or candidate) fails, the script should exit immediately, printing what went wrong (ideally to stderr) and use a non-zero exit code.
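A minimal sketch of the fail-fast behavior being requested, in Python rather than the script's Perl: every step either succeeds or aborts the release with the error on stderr and a non-zero exit code.
```python
# Hypothetical fail-fast helper: stop the release at the first failed step
# instead of plowing ahead.
import subprocess
import sys

def run_step(*cmd: str) -> None:
    result = subprocess.run(cmd, capture_output=True, text=True)
    if result.returncode != 0:
        print(f"Failed to run: {' '.join(cmd)}\n{result.stderr}", file=sys.stderr)
        sys.exit(result.returncode or 1)

if __name__ == "__main__":
    run_step("git", "commit", "-m", "RELEASE: Syncing VERSION file", "VERSION")
```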
## Minimal reproduction of the problem with instructions:
I think if you try to create an existing release it won't have any negative side effects. So checkout e.g. the RELEASE-5.0.0-RC1 tag and do `./misc/release.pl --gpg-key={{some key}} --release-no=RELEASE-5.0.0-RC1 cut`.
## Anything else:
You can see it happening in the script's output:
```
Failed to run:git commit -m 'RELEASE: Syncing VERSION file' VERSION
Updating 'VERSION' file
Everything up-to-date
Signing new tag based upon your gpg key
```
|
1.0
|
`release.pl` exits successfully when it fails to update the VERSION file - ## I'm submitting a ...
- bug report
## Traffic Control components affected ...
None
## Current behavior:
If the `release.pl` script fails to edit the VERSION file, it will plow ahead anyway. Or, presumably, if any step fails it will continue to do things, possibly incorrectly.
## Expected behavior:
If part of the process of creating a release (or candidate) fails, the script should exit immediately, printing what went wrong (ideally to stderr) and use a non-zero exit code.
## Minimal reproduction of the problem with instructions:
I think if you try to create an existing release it won't have any negative side effects. So checkout e.g. the RELEASE-5.0.0-RC1 tag and do `./misc/release.pl --gpg-key={{some key}} --release-no=RELEASE-5.0.0-RC1 cut`.
## Anything else:
You can see it happening in the script's output:
```
Failed to run:git commit -m 'RELEASE: Syncing VERSION file' VERSION
Updating 'VERSION' file
Everything up-to-date
Signing new tag based upon your gpg key
```
|
process
|
release pl exits success when it fails to update the version file i m submitting a bug report traffic control components affected none current behavior if the release pl script fails to edit the version file it will plow ahead anyway or presumably if any step fails it will continue to do things possibly incorrectly expected behavior if part of the process of creating a release or candidate fails the script should exit immediately printing what went wrong ideally to stderr and use a non zero exit code minimal reproduction of the problem with instructions i think if you try to create an existing release it won t have any negative side effects so checkout e g the release tag and do misc release pl gpg key some key release no release cut anything else you can see it happening in the script s output failed to run git commit m release syncing version file version updating version file everything up to date signing new tag based upon your gpg key
| 1
|
657
| 2,577,888,600
|
IssuesEvent
|
2015-02-12 19:44:28
|
jasonhall/google-styleguide
|
https://api.github.com/repos/jasonhall/google-styleguide
|
opened
|
Making cpplint Sonar friendly
|
auto-migrated Priority-Medium Type-Defect
|
```
Hi,
First of all thank you for this great script.
Let me introduce myself: my name is Jorge Costa and I am one of the developers
currently updating the C++ plugin community edition. We now have a new
feature that allows the import of any static analysis tool; cpplint is the one I am
using as a reference case. However, in order to get it into Sonar I needed to
modify the script in order to differentiate the different rules. I've
created a script for that; see
http://docs.codehaus.org/display/SONAR/Usage+of+Non+Supported+Tools
Would this be something that you could improve on your side, so that the rules
are more easily exported into sonar?
Thanks in advance
Best Regards
Jorge Costa
```
-----
Original issue reported on code.google.com by JMECo...@gmail.com on 11 Feb 2013 at 1:18
|
1.0
|
Making cpplint Sonar friendly - ```
Hi,
First of all thank you for this great script.
Let me introduce myself: my name is Jorge Costa and I am one of the developers
currently updating the C++ plugin community edition. We now have a new
feature that allows the import of any static analysis tool; cpplint is the one I am
using as a reference case. However, in order to get it into Sonar I needed to
modify the script in order to differentiate the different rules. I've
created a script for that; see
http://docs.codehaus.org/display/SONAR/Usage+of+Non+Supported+Tools
Would this be something that you could improve on your side, so that the rules
are more easily exported into sonar?
Thanks in advance
Best Regards
Jorge Costa
```
-----
Original issue reported on code.google.com by JMECo...@gmail.com on 11 Feb 2013 at 1:18
|
non_process
|
making cpplint sonar friendly hi first of all thank you for this great script let me introduce myself my name is jorge costa and im am one of the developers that is currently updating the c plugin community edition we have now a new feature that allows the import of any static analysis tool cpplint is one im using has a reference case however in order to get it into sonar i had the need to modify the script in order to differentiate the different rules ive create a script for that see would this be something that you could improve on your side so that the rules are more easily exported into sonar thanks in advance best regards jorge costa original issue reported on code google com by jmeco gmail com on feb at
| 0
|