Dataset columns (name, dtype, value/length stats), in the order they appear in each row below, where cells are separated by `|` lines:

- `Unnamed: 0` (int64): 0 to 832k
- `id` (float64): 2.49B to 32.1B
- `type` (string): 1 class
- `created_at` (string): lengths 19 to 19
- `repo` (string): lengths 7 to 112
- `repo_url` (string): lengths 36 to 141
- `action` (string): 3 classes
- `title` (string): lengths 1 to 744
- `labels` (string): lengths 4 to 574
- `body` (string): lengths 9 to 211k
- `index` (string): 10 classes
- `text_combine` (string): lengths 96 to 211k
- `label` (string): 2 classes
- `text` (string): lengths 96 to 188k
- `binary_label` (int64): 0 to 1
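The relationship between the `label` and `binary_label` columns is visible in the sample rows, where `process` maps to 1 and `non_process` to 0. A minimal sketch of deriving one from the other (the mapping is inferred from the sample rows, not stated in the schema):

```python
import pandas as pd

# Sketch: derive binary_label from label, assuming the observed
# mapping process -> 1, non_process -> 0.
df = pd.DataFrame({"label": ["process", "non_process", "process"]})
df["binary_label"] = (df["label"] == "process").astype("int64")
```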
1,044
| 3,511,845,757
|
IssuesEvent
|
2016-01-10 16:13:29
|
pwittchen/ReactiveNetwork
|
https://api.github.com/repos/pwittchen/ReactiveNetwork
|
opened
|
Release 0.1.5
|
release process
|
**Initial release notes**:
TBD.
**Things to do**:
- [ ] bump library version to 0.1.5
- [ ] upload Archives to Maven Central Repository
- [ ] bump library version in `README.md` after Maven Sync
- [ ] update `CHANGELOG.md` after Maven Sync
- [ ] create new GitHub release
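Steps like the README and CHANGELOG version bumps above are mechanical enough to script. A minimal sketch (the `bump_version` helper and the dependency string are hypothetical illustrations, not part of this repository's actual release process):

```python
import re

def bump_version(text: str, new_version: str) -> str:
    # Replace any x.y.z version string; a real script would anchor this
    # to the specific dependency line instead of matching globally.
    return re.sub(r"\b\d+\.\d+\.\d+\b", new_version, text)

line = "compile 'com.github.pwittchen:reactivenetwork:0.1.4'"
bump_version(line, "0.1.5")
# -> "compile 'com.github.pwittchen:reactivenetwork:0.1.5'"
```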
|
1.0
|
Release 0.1.5 - **Initial release notes**:
TBD.
**Things to do**:
- [ ] bump library version to 0.1.5
- [ ] upload Archives to Maven Central Repository
- [ ] bump library version in `README.md` after Maven Sync
- [ ] update `CHANGELOG.md` after Maven Sync
- [ ] create new GitHub release
|
process
|
release initial release notes tbd things to do bump library version to upload archives to maven central repository bump library version in readme md after maven sync update changelog md after maven sync create new github release
| 1
|
30,716
| 5,842,090,417
|
IssuesEvent
|
2017-05-10 04:11:26
|
KSP-KOS/KOS
|
https://api.github.com/repos/KSP-KOS/KOS
|
closed
|
unable to differentiate similar-named part modules
|
documentation enhancement
|
As seen in the attached screenshot, I have three part modules all named ModuleScienceExperiment, and I am only able to call the first one on the list. Given how you can get a List of part module names, perhaps it would be better to just reference them by index rather than a string? Or at least in addition to a string?

|
1.0
|
unable to differentiate similar-named part modules - As seen in the attached screenshot, I have three part modules all named ModuleScienceExperiment, and I am only able to call the first one on the list. Given how you can get a List of part module names, perhaps it would be better to just reference them by index rather than a string? Or at least in addition to a string?

|
non_process
|
unable to differentiate similar named part modules as seen in the attached screenshot i have three part modules all named modulescienceexperiment and i am only able to call the first one on the list given how you can get a list of part module names perhaps it would be better to just reference them by index rather than a string or at least in addition to a string
| 0
|
784,357
| 27,567,578,276
|
IssuesEvent
|
2023-03-08 06:01:33
|
njtierney/conmat
|
https://api.github.com/repos/njtierney/conmat
|
closed
|
How to effectively compare age groups
|
Priority 2
|
Currently within `apply_vaccination` we just check that certain dimensions match. I think it would be better to check that the age breaks are the same; in this case I believe they are the same, but I'm not sure what sort of method we would use to compare them: is `80+` the same as `c(80, Inf)`, and is `0-5, 5-10` the same as `0-4, 5-11`?
``` r
library(conmat)
ngm_nsw <- generate_ngm_oz(
state_name = "NSW",
age_breaks = c(seq(0, 80, by = 5), Inf),
R_target = 1.5
)
age_breaks(ngm_nsw$home)
#> [1] "[0,5)" "[5,10)" "[10,15)" "[15,20)" "[20,25)" "[25,30)"
#> [7] "[30,35)" "[35,40)" "[40,45)" "[45,50)" "[50,55)" "[55,60)"
#> [13] "[60,65)" "[65,70)" "[70,75)" "[75,80)" "[80,Inf)"
vaccination_effect_example_data
#> # A tibble: 17 × 4
#> age_band coverage acquisition transmission
#> <chr> <dbl> <dbl> <dbl>
#> 1 0-4 0 0 0
#> 2 5-11 0.782 0.583 0.254
#> 3 12-15 0.997 0.631 0.295
#> 4 16-19 0.965 0.786 0.469
#> 5 20-24 0.861 0.774 0.453
#> 6 25-29 0.997 0.778 0.458
#> 7 30-34 0.998 0.803 0.493
#> 8 35-39 0.998 0.829 0.533
#> 9 40-44 0.999 0.841 0.551
#> 10 45-49 0.993 0.847 0.562
#> 11 50-54 0.999 0.857 0.579
#> 12 55-59 0.996 0.864 0.591
#> 13 60-64 0.998 0.858 0.581
#> 14 65-69 0.999 0.864 0.591
#> 15 70-74 0.999 0.867 0.597
#> 16 75-79 0.999 0.866 0.595
#> 17 80+ 0.999 0.844 0.556
# these two data sources ^^ are used in `apply_vaccination` below:
# Apply vaccination effect to next generation matrices
ngm_nsw_vacc <- apply_vaccination(
ngm = ngm_nsw,
data = vaccination_effect_example_data,
coverage_col = coverage,
acquisition_col = acquisition,
transmission_col = transmission
)
```
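One possible comparison method (a Python sketch of the idea, not conmat API; `parse_band` is a hypothetical helper) is to normalise every label into a numeric `(lower, upper)` interval with an exclusive upper bound, after which comparing breaks is just tuple comparison:

```python
import math

def parse_band(band: str) -> tuple:
    # Interval notation, e.g. '[0,5)' or '[80,Inf)': upper bound already exclusive.
    if band.startswith("["):
        lo, hi = band.strip("[)").split(",")
        return (float(lo), math.inf if hi == "Inf" else float(hi))
    # Open-ended band, e.g. '80+'.
    if band.endswith("+"):
        return (float(band[:-1]), math.inf)
    # Inclusive integer range, e.g. '0-4', which covers the same ages as [0, 5).
    lo, hi = band.split("-")
    return (float(lo), float(hi) + 1)

parse_band("80+") == parse_band("[80,Inf)")  # True: same interval
parse_band("0-4") == parse_band("[0,5)")     # True: same interval
parse_band("5-11") == parse_band("[5,10)")   # False: the breaks genuinely differ
```

Under this scheme `80+` and `c(80, Inf)` compare equal, while the `5-11`-style bands in the example data genuinely differ from the model's 5-year breaks.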
<sup>Created on 2023-01-17 with [reprex v2.0.2](https://reprex.tidyverse.org)</sup>
<details style="margin-bottom:10px;">
<summary>
Session info
</summary>
``` r
sessioninfo::session_info()
#> ─ Session info ───────────────────────────────────────────────────────────────
#> setting value
#> version R version 4.2.1 (2022-06-23)
#> os macOS Monterey 12.3.1
#> system aarch64, darwin20
#> ui X11
#> language (EN)
#> collate en_US.UTF-8
#> ctype en_US.UTF-8
#> tz Australia/Brisbane
#> date 2023-01-17
#> pandoc 2.19.2 @ /Applications/RStudio.app/Contents/Resources/app/quarto/bin/tools/ (via rmarkdown)
#>
#> ─ Packages ───────────────────────────────────────────────────────────────────
#> package * version date (UTC) lib source
#> assertthat 0.2.1 2019-03-21 [1] CRAN (R 4.2.0)
#> cli 3.4.1 2022-09-23 [1] CRAN (R 4.2.0)
#> codetools 0.2-18 2020-11-04 [1] CRAN (R 4.2.1)
#> colorspace 2.0-3 2022-02-21 [1] CRAN (R 4.2.0)
#> conmat * 0.0.1.9000 2023-01-16 [1] local
#> countrycode 1.4.0 2022-05-04 [1] CRAN (R 4.2.0)
#> curl 4.3.3 2022-10-06 [1] CRAN (R 4.2.0)
#> data.table 1.14.6 2022-11-16 [1] CRAN (R 4.2.0)
#> DBI 1.1.3 2022-06-18 [1] CRAN (R 4.2.0)
#> digest 0.6.30 2022-10-18 [1] CRAN (R 4.2.0)
#> dotCall64 1.0-2 2022-10-03 [1] CRAN (R 4.2.0)
#> dplyr 1.0.10 2022-09-01 [1] CRAN (R 4.2.0)
#> ellipsis 0.3.2 2021-04-29 [1] CRAN (R 4.2.0)
#> evaluate 0.18 2022-11-07 [1] CRAN (R 4.2.0)
#> fansi 1.0.3 2022-03-24 [1] CRAN (R 4.2.0)
#> fastmap 1.1.0 2021-01-25 [1] CRAN (R 4.2.0)
#> fields 14.1 2022-08-12 [1] CRAN (R 4.2.0)
#> fs 1.5.2 2021-12-08 [1] CRAN (R 4.2.0)
#> furrr 0.3.1 2022-08-15 [1] CRAN (R 4.2.0)
#> future 1.29.0 2022-11-06 [1] CRAN (R 4.2.0)
#> generics 0.1.3 2022-07-05 [1] CRAN (R 4.2.0)
#> ggplot2 3.4.0 2022-11-04 [1] CRAN (R 4.2.0)
#> globals 0.16.2 2022-11-21 [1] CRAN (R 4.2.1)
#> glue 1.6.2 2022-02-24 [1] CRAN (R 4.2.0)
#> gridExtra 2.3 2017-09-09 [1] CRAN (R 4.2.0)
#> gtable 0.3.1 2022-09-01 [1] CRAN (R 4.2.0)
#> highr 0.9 2021-04-16 [1] CRAN (R 4.2.0)
#> hms 1.1.2 2022-08-19 [1] CRAN (R 4.2.0)
#> htmltools 0.5.3 2022-07-18 [1] CRAN (R 4.2.0)
#> httr 1.4.4 2022-08-17 [1] CRAN (R 4.2.0)
#> jsonlite 1.8.3 2022-10-21 [1] CRAN (R 4.2.0)
#> knitr 1.41 2022-11-18 [1] CRAN (R 4.2.0)
#> lattice 0.20-45 2021-09-22 [1] CRAN (R 4.2.1)
#> lifecycle 1.0.3 2022-10-07 [1] CRAN (R 4.2.0)
#> listenv 0.8.0 2019-12-05 [1] CRAN (R 4.2.0)
#> lubridate 1.9.0 2022-11-06 [1] CRAN (R 4.2.0)
#> magrittr 2.0.3 2022-03-30 [1] CRAN (R 4.2.0)
#> maps 3.4.1 2022-10-30 [1] CRAN (R 4.2.0)
#> Matrix 1.5-3 2022-11-11 [1] CRAN (R 4.2.0)
#> mgcv 1.8-41 2022-10-21 [1] CRAN (R 4.2.0)
#> munsell 0.5.0 2018-06-12 [1] CRAN (R 4.2.0)
#> nlme 3.1-160 2022-10-10 [1] CRAN (R 4.2.0)
#> oai 0.4.0 2022-11-10 [1] CRAN (R 4.2.0)
#> parallelly 1.32.1 2022-07-21 [1] CRAN (R 4.2.0)
#> pillar 1.8.1 2022-08-19 [1] CRAN (R 4.2.0)
#> pkgconfig 2.0.3 2019-09-22 [1] CRAN (R 4.2.0)
#> plyr 1.8.8 2022-11-11 [1] CRAN (R 4.2.0)
#> purrr * 0.3.5 2022-10-06 [1] CRAN (R 4.2.0)
#> R.cache 0.16.0 2022-07-21 [1] CRAN (R 4.2.0)
#> R.methodsS3 1.8.2 2022-06-13 [1] CRAN (R 4.2.0)
#> R.oo 1.25.0 2022-06-12 [1] CRAN (R 4.2.0)
#> R.utils 2.12.2 2022-11-11 [1] CRAN (R 4.2.0)
#> R6 2.5.1 2021-08-19 [1] CRAN (R 4.2.0)
#> Rcpp 1.0.9 2022-07-08 [1] CRAN (R 4.2.0)
#> readr 2.1.3 2022-10-01 [1] CRAN (R 4.2.0)
#> reprex 2.0.2 2022-08-17 [1] CRAN (R 4.2.0)
#> rlang 1.0.6 2022-09-24 [1] CRAN (R 4.2.0)
#> rmarkdown 2.18 2022-11-09 [1] CRAN (R 4.2.0)
#> rstudioapi 0.14 2022-08-22 [1] CRAN (R 4.2.0)
#> scales 1.2.1 2022-08-20 [1] CRAN (R 4.2.0)
#> sessioninfo 1.2.2 2021-12-06 [1] CRAN (R 4.2.0)
#> socialmixr 0.2.0 2022-10-27 [1] CRAN (R 4.2.0)
#> spam 2.9-1 2022-08-07 [1] CRAN (R 4.2.0)
#> stringi 1.7.8 2022-07-11 [1] CRAN (R 4.2.0)
#> stringr 1.5.0 2022-12-02 [1] CRAN (R 4.2.0)
#> styler 1.8.1 2022-11-07 [1] CRAN (R 4.2.0)
#> tibble 3.1.8 2022-07-22 [1] CRAN (R 4.2.0)
#> tidyr 1.2.1 2022-09-08 [1] CRAN (R 4.2.0)
#> tidyselect 1.2.0 2022-10-10 [1] CRAN (R 4.2.0)
#> timechange 0.1.1 2022-11-04 [1] CRAN (R 4.2.0)
#> tzdb 0.3.0 2022-03-28 [1] CRAN (R 4.2.0)
#> utf8 1.2.2 2021-07-24 [1] CRAN (R 4.2.0)
#> vctrs 0.5.1 2022-11-16 [1] CRAN (R 4.2.0)
#> viridis 0.6.2 2021-10-13 [1] CRAN (R 4.2.0)
#> viridisLite 0.4.1 2022-08-22 [1] CRAN (R 4.2.0)
#> withr 2.5.0 2022-03-03 [1] CRAN (R 4.2.0)
#> wpp2017 1.2-3 2020-02-10 [1] CRAN (R 4.2.0)
#> xfun 0.35 2022-11-16 [1] CRAN (R 4.2.0)
#> xml2 1.3.3 2021-11-30 [1] CRAN (R 4.2.0)
#> yaml 2.3.6 2022-10-18 [1] CRAN (R 4.2.0)
#>
#> [1] /Library/Frameworks/R.framework/Versions/4.2-arm64/Resources/library
#>
#> ──────────────────────────────────────────────────────────────────────────────
```
</details>
|
1.0
|
How to effectively compare age groups - Currently within `apply_vaccination` we just check that certain dimensions match. I think it would be better to check that the age breaks are the same; in this case I believe they are the same, but I'm not sure what sort of method we would use to compare them: is `80+` the same as `c(80, Inf)`, and is `0-5, 5-10` the same as `0-4, 5-11`?
``` r
library(conmat)
ngm_nsw <- generate_ngm_oz(
state_name = "NSW",
age_breaks = c(seq(0, 80, by = 5), Inf),
R_target = 1.5
)
age_breaks(ngm_nsw$home)
#> [1] "[0,5)" "[5,10)" "[10,15)" "[15,20)" "[20,25)" "[25,30)"
#> [7] "[30,35)" "[35,40)" "[40,45)" "[45,50)" "[50,55)" "[55,60)"
#> [13] "[60,65)" "[65,70)" "[70,75)" "[75,80)" "[80,Inf)"
vaccination_effect_example_data
#> # A tibble: 17 × 4
#> age_band coverage acquisition transmission
#> <chr> <dbl> <dbl> <dbl>
#> 1 0-4 0 0 0
#> 2 5-11 0.782 0.583 0.254
#> 3 12-15 0.997 0.631 0.295
#> 4 16-19 0.965 0.786 0.469
#> 5 20-24 0.861 0.774 0.453
#> 6 25-29 0.997 0.778 0.458
#> 7 30-34 0.998 0.803 0.493
#> 8 35-39 0.998 0.829 0.533
#> 9 40-44 0.999 0.841 0.551
#> 10 45-49 0.993 0.847 0.562
#> 11 50-54 0.999 0.857 0.579
#> 12 55-59 0.996 0.864 0.591
#> 13 60-64 0.998 0.858 0.581
#> 14 65-69 0.999 0.864 0.591
#> 15 70-74 0.999 0.867 0.597
#> 16 75-79 0.999 0.866 0.595
#> 17 80+ 0.999 0.844 0.556
# these two data sources ^^ are used in `apply_vaccination` below:
# Apply vaccination effect to next generation matrices
ngm_nsw_vacc <- apply_vaccination(
ngm = ngm_nsw,
data = vaccination_effect_example_data,
coverage_col = coverage,
acquisition_col = acquisition,
transmission_col = transmission
)
```
<sup>Created on 2023-01-17 with [reprex v2.0.2](https://reprex.tidyverse.org)</sup>
<details style="margin-bottom:10px;">
<summary>
Session info
</summary>
``` r
sessioninfo::session_info()
#> ─ Session info ───────────────────────────────────────────────────────────────
#> setting value
#> version R version 4.2.1 (2022-06-23)
#> os macOS Monterey 12.3.1
#> system aarch64, darwin20
#> ui X11
#> language (EN)
#> collate en_US.UTF-8
#> ctype en_US.UTF-8
#> tz Australia/Brisbane
#> date 2023-01-17
#> pandoc 2.19.2 @ /Applications/RStudio.app/Contents/Resources/app/quarto/bin/tools/ (via rmarkdown)
#>
#> ─ Packages ───────────────────────────────────────────────────────────────────
#> package * version date (UTC) lib source
#> assertthat 0.2.1 2019-03-21 [1] CRAN (R 4.2.0)
#> cli 3.4.1 2022-09-23 [1] CRAN (R 4.2.0)
#> codetools 0.2-18 2020-11-04 [1] CRAN (R 4.2.1)
#> colorspace 2.0-3 2022-02-21 [1] CRAN (R 4.2.0)
#> conmat * 0.0.1.9000 2023-01-16 [1] local
#> countrycode 1.4.0 2022-05-04 [1] CRAN (R 4.2.0)
#> curl 4.3.3 2022-10-06 [1] CRAN (R 4.2.0)
#> data.table 1.14.6 2022-11-16 [1] CRAN (R 4.2.0)
#> DBI 1.1.3 2022-06-18 [1] CRAN (R 4.2.0)
#> digest 0.6.30 2022-10-18 [1] CRAN (R 4.2.0)
#> dotCall64 1.0-2 2022-10-03 [1] CRAN (R 4.2.0)
#> dplyr 1.0.10 2022-09-01 [1] CRAN (R 4.2.0)
#> ellipsis 0.3.2 2021-04-29 [1] CRAN (R 4.2.0)
#> evaluate 0.18 2022-11-07 [1] CRAN (R 4.2.0)
#> fansi 1.0.3 2022-03-24 [1] CRAN (R 4.2.0)
#> fastmap 1.1.0 2021-01-25 [1] CRAN (R 4.2.0)
#> fields 14.1 2022-08-12 [1] CRAN (R 4.2.0)
#> fs 1.5.2 2021-12-08 [1] CRAN (R 4.2.0)
#> furrr 0.3.1 2022-08-15 [1] CRAN (R 4.2.0)
#> future 1.29.0 2022-11-06 [1] CRAN (R 4.2.0)
#> generics 0.1.3 2022-07-05 [1] CRAN (R 4.2.0)
#> ggplot2 3.4.0 2022-11-04 [1] CRAN (R 4.2.0)
#> globals 0.16.2 2022-11-21 [1] CRAN (R 4.2.1)
#> glue 1.6.2 2022-02-24 [1] CRAN (R 4.2.0)
#> gridExtra 2.3 2017-09-09 [1] CRAN (R 4.2.0)
#> gtable 0.3.1 2022-09-01 [1] CRAN (R 4.2.0)
#> highr 0.9 2021-04-16 [1] CRAN (R 4.2.0)
#> hms 1.1.2 2022-08-19 [1] CRAN (R 4.2.0)
#> htmltools 0.5.3 2022-07-18 [1] CRAN (R 4.2.0)
#> httr 1.4.4 2022-08-17 [1] CRAN (R 4.2.0)
#> jsonlite 1.8.3 2022-10-21 [1] CRAN (R 4.2.0)
#> knitr 1.41 2022-11-18 [1] CRAN (R 4.2.0)
#> lattice 0.20-45 2021-09-22 [1] CRAN (R 4.2.1)
#> lifecycle 1.0.3 2022-10-07 [1] CRAN (R 4.2.0)
#> listenv 0.8.0 2019-12-05 [1] CRAN (R 4.2.0)
#> lubridate 1.9.0 2022-11-06 [1] CRAN (R 4.2.0)
#> magrittr 2.0.3 2022-03-30 [1] CRAN (R 4.2.0)
#> maps 3.4.1 2022-10-30 [1] CRAN (R 4.2.0)
#> Matrix 1.5-3 2022-11-11 [1] CRAN (R 4.2.0)
#> mgcv 1.8-41 2022-10-21 [1] CRAN (R 4.2.0)
#> munsell 0.5.0 2018-06-12 [1] CRAN (R 4.2.0)
#> nlme 3.1-160 2022-10-10 [1] CRAN (R 4.2.0)
#> oai 0.4.0 2022-11-10 [1] CRAN (R 4.2.0)
#> parallelly 1.32.1 2022-07-21 [1] CRAN (R 4.2.0)
#> pillar 1.8.1 2022-08-19 [1] CRAN (R 4.2.0)
#> pkgconfig 2.0.3 2019-09-22 [1] CRAN (R 4.2.0)
#> plyr 1.8.8 2022-11-11 [1] CRAN (R 4.2.0)
#> purrr * 0.3.5 2022-10-06 [1] CRAN (R 4.2.0)
#> R.cache 0.16.0 2022-07-21 [1] CRAN (R 4.2.0)
#> R.methodsS3 1.8.2 2022-06-13 [1] CRAN (R 4.2.0)
#> R.oo 1.25.0 2022-06-12 [1] CRAN (R 4.2.0)
#> R.utils 2.12.2 2022-11-11 [1] CRAN (R 4.2.0)
#> R6 2.5.1 2021-08-19 [1] CRAN (R 4.2.0)
#> Rcpp 1.0.9 2022-07-08 [1] CRAN (R 4.2.0)
#> readr 2.1.3 2022-10-01 [1] CRAN (R 4.2.0)
#> reprex 2.0.2 2022-08-17 [1] CRAN (R 4.2.0)
#> rlang 1.0.6 2022-09-24 [1] CRAN (R 4.2.0)
#> rmarkdown 2.18 2022-11-09 [1] CRAN (R 4.2.0)
#> rstudioapi 0.14 2022-08-22 [1] CRAN (R 4.2.0)
#> scales 1.2.1 2022-08-20 [1] CRAN (R 4.2.0)
#> sessioninfo 1.2.2 2021-12-06 [1] CRAN (R 4.2.0)
#> socialmixr 0.2.0 2022-10-27 [1] CRAN (R 4.2.0)
#> spam 2.9-1 2022-08-07 [1] CRAN (R 4.2.0)
#> stringi 1.7.8 2022-07-11 [1] CRAN (R 4.2.0)
#> stringr 1.5.0 2022-12-02 [1] CRAN (R 4.2.0)
#> styler 1.8.1 2022-11-07 [1] CRAN (R 4.2.0)
#> tibble 3.1.8 2022-07-22 [1] CRAN (R 4.2.0)
#> tidyr 1.2.1 2022-09-08 [1] CRAN (R 4.2.0)
#> tidyselect 1.2.0 2022-10-10 [1] CRAN (R 4.2.0)
#> timechange 0.1.1 2022-11-04 [1] CRAN (R 4.2.0)
#> tzdb 0.3.0 2022-03-28 [1] CRAN (R 4.2.0)
#> utf8 1.2.2 2021-07-24 [1] CRAN (R 4.2.0)
#> vctrs 0.5.1 2022-11-16 [1] CRAN (R 4.2.0)
#> viridis 0.6.2 2021-10-13 [1] CRAN (R 4.2.0)
#> viridisLite 0.4.1 2022-08-22 [1] CRAN (R 4.2.0)
#> withr 2.5.0 2022-03-03 [1] CRAN (R 4.2.0)
#> wpp2017 1.2-3 2020-02-10 [1] CRAN (R 4.2.0)
#> xfun 0.35 2022-11-16 [1] CRAN (R 4.2.0)
#> xml2 1.3.3 2021-11-30 [1] CRAN (R 4.2.0)
#> yaml 2.3.6 2022-10-18 [1] CRAN (R 4.2.0)
#>
#> [1] /Library/Frameworks/R.framework/Versions/4.2-arm64/Resources/library
#>
#> ──────────────────────────────────────────────────────────────────────────────
```
</details>
|
non_process
|
how to effectively compare age groups currently within apply vaccination we just check that certain dimensions match i think it is better to check that the age breaks are the same but for this case i think that these are the same but i m not sure what sort of method we would use to compare them is the same as c inf and is the same as r library conmat ngm nsw generate ngm oz state name nsw age breaks c seq by inf r target age breaks ngm nsw home inf vaccination effect example data a tibble × age band coverage acquisition transmission these two data sources are used in apply vaccination below apply vaccination effect to next generation matrices ngm nsw vacc apply vaccination ngm ngm nsw data vaccination effect example data coverage col coverage acquisition col acquisition transmission col transmission created on with session info r sessioninfo session info ─ session info ─────────────────────────────────────────────────────────────── setting value version r version os macos monterey system ui language en collate en us utf ctype en us utf tz australia brisbane date pandoc applications rstudio app contents resources app quarto bin tools via rmarkdown ─ packages ─────────────────────────────────────────────────────────────────── package version date utc lib source assertthat cran r cli cran r codetools cran r colorspace cran r conmat local countrycode cran r curl cran r data table cran r dbi cran r digest cran r cran r dplyr cran r ellipsis cran r evaluate cran r fansi cran r fastmap cran r fields cran r fs cran r furrr cran r future cran r generics cran r cran r globals cran r glue cran r gridextra cran r gtable cran r highr cran r hms cran r htmltools cran r httr cran r jsonlite cran r knitr cran r lattice cran r lifecycle cran r listenv cran r lubridate cran r magrittr cran r maps cran r matrix cran r mgcv cran r munsell cran r nlme cran r oai cran r parallelly cran r pillar cran r pkgconfig cran r plyr cran r purrr cran r r cache cran r r cran r r oo cran r r utils 
cran r cran r rcpp cran r readr cran r reprex cran r rlang cran r rmarkdown cran r rstudioapi cran r scales cran r sessioninfo cran r socialmixr cran r spam cran r stringi cran r stringr cran r styler cran r tibble cran r tidyr cran r tidyselect cran r timechange cran r tzdb cran r cran r vctrs cran r viridis cran r viridislite cran r withr cran r cran r xfun cran r cran r yaml cran r library frameworks r framework versions resources library ──────────────────────────────────────────────────────────────────────────────
| 0
|
13,989
| 16,763,408,421
|
IssuesEvent
|
2021-06-14 05:02:55
|
qgis/QGIS
|
https://api.github.com/repos/qgis/QGIS
|
closed
|
QGIS crash after click on expression button of model input
|
Bug Crash/Data Corruption Processing
|
<!--
Bug fixing and feature development is a community responsibility, and not the responsibility of the QGIS project alone.
If this bug report or feature request is high-priority for you, we suggest engaging a QGIS developer or support organisation and financially sponsoring a fix
Checklist before submitting
- [ ] Search through existing issue reports and gis.stackexchange.com to check whether the issue already exists
- [ ] Test with a [clean new user profile](https://docs.qgis.org/testing/en/docs/user_manual/introduction/qgis_configuration.html?highlight=profile#working-with-user-profiles).
- [ ] Create a light and self-contained sample dataset and project file which demonstrates the issue
-->
**Describe the bug**
QGIS crashes after clicking the expression button of a model input.
## Report Details
**Crash ID**: [3fee5ff78a3e1a8843b5c9ef31dc3a80c00865a7](https://github.com/qgis/QGIS/search?q=3fee5ff78a3e1a8843b5c9ef31dc3a80c00865a7&type=Issues)
**Stack Trace**
<pre>
QListData::begin :
QgsExpressionContext::QgsExpressionContext :
QgsExpressionLineEdit::editExpression :
QMetaObject::activate :
QAbstractButton::clicked :
QAbstractButton::click :
QAbstractButton::mouseReleaseEvent :
QToolButton::mouseReleaseEvent :
QWidget::event :
QApplicationPrivate::notify_helper :
QApplication::notify :
QgsApplication::notify :
QCoreApplication::notifyInternal2 :
QApplicationPrivate::sendMouseEvent :
QSizePolicy::QSizePolicy :
QSizePolicy::QSizePolicy :
QApplicationPrivate::notify_helper :
QApplication::notify :
QgsApplication::notify :
QCoreApplication::notifyInternal2 :
QGuiApplicationPrivate::processMouseEvent :
QWindowSystemInterface::sendWindowSystemEvents :
QEventDispatcherWin32::processEvents :
UserCallWinProcCheckWow :
DispatchMessageWorker :
QEventDispatcherWin32::processEvents :
qt_plugin_query_metadata :
QEventLoop::exec :
QCoreApplication::exec :
main :
BaseThreadInitThunk :
RtlUserThreadStart :
</pre>
**QGIS Info**
QGIS Version: 3.18.1-Zürich
QGIS code revision: 202f1bf7e5
Compiled against Qt: 5.11.2
Running against Qt: 5.11.2
Compiled against GDAL: 3.1.4
Running against GDAL: 3.1.4
**System Info**
CPU Type: x86_64
Kernel Type: winnt
Kernel Version: 10.0.17763
**How to Reproduce**
<!-- Steps, sample datasets and qgis project file to reproduce the behavior. Screencasts or screenshots welcome -->
1. Execute QGIS model [import-photos.zip](https://github.com/qgis/QGIS/files/6638695/import-photos.zip) via QGIS Browser
2. Click expression button (DeleteTableRow input field)

3. QGIS crashes
|
1.0
|
QGIS crash after click on expression button of model input - <!--
Bug fixing and feature development is a community responsibility, and not the responsibility of the QGIS project alone.
If this bug report or feature request is high-priority for you, we suggest engaging a QGIS developer or support organisation and financially sponsoring a fix
Checklist before submitting
- [ ] Search through existing issue reports and gis.stackexchange.com to check whether the issue already exists
- [ ] Test with a [clean new user profile](https://docs.qgis.org/testing/en/docs/user_manual/introduction/qgis_configuration.html?highlight=profile#working-with-user-profiles).
- [ ] Create a light and self-contained sample dataset and project file which demonstrates the issue
-->
**Describe the bug**
QGIS crashes after clicking the expression button of a model input.
## Report Details
**Crash ID**: [3fee5ff78a3e1a8843b5c9ef31dc3a80c00865a7](https://github.com/qgis/QGIS/search?q=3fee5ff78a3e1a8843b5c9ef31dc3a80c00865a7&type=Issues)
**Stack Trace**
<pre>
QListData::begin :
QgsExpressionContext::QgsExpressionContext :
QgsExpressionLineEdit::editExpression :
QMetaObject::activate :
QAbstractButton::clicked :
QAbstractButton::click :
QAbstractButton::mouseReleaseEvent :
QToolButton::mouseReleaseEvent :
QWidget::event :
QApplicationPrivate::notify_helper :
QApplication::notify :
QgsApplication::notify :
QCoreApplication::notifyInternal2 :
QApplicationPrivate::sendMouseEvent :
QSizePolicy::QSizePolicy :
QSizePolicy::QSizePolicy :
QApplicationPrivate::notify_helper :
QApplication::notify :
QgsApplication::notify :
QCoreApplication::notifyInternal2 :
QGuiApplicationPrivate::processMouseEvent :
QWindowSystemInterface::sendWindowSystemEvents :
QEventDispatcherWin32::processEvents :
UserCallWinProcCheckWow :
DispatchMessageWorker :
QEventDispatcherWin32::processEvents :
qt_plugin_query_metadata :
QEventLoop::exec :
QCoreApplication::exec :
main :
BaseThreadInitThunk :
RtlUserThreadStart :
</pre>
**QGIS Info**
QGIS Version: 3.18.1-Zürich
QGIS code revision: 202f1bf7e5
Compiled against Qt: 5.11.2
Running against Qt: 5.11.2
Compiled against GDAL: 3.1.4
Running against GDAL: 3.1.4
**System Info**
CPU Type: x86_64
Kernel Type: winnt
Kernel Version: 10.0.17763
**How to Reproduce**
<!-- Steps, sample datasets and qgis project file to reproduce the behavior. Screencasts or screenshots welcome -->
1. Execute QGIS model [import-photos.zip](https://github.com/qgis/QGIS/files/6638695/import-photos.zip) via QGIS Browser
2. Click expression button (DeleteTableRow input field)

3. QGIS crashes
|
process
|
qgis crash after click on expression button of model input bug fixing and feature development is a community responsibility and not the responsibility of the qgis project alone if this bug report or feature request is high priority for you we suggest engaging a qgis developer or support organisation and financially sponsoring a fix checklist before submitting search through existing issue reports and gis stackexchange com to check whether the issue already exists test with a create a light and self contained sample dataset and project file which demonstrates the issue describe the bug qgis crash after click on expression button of model input report details crash id stack trace qlistdata begin qgsexpressioncontext qgsexpressioncontext qgsexpressionlineedit editexpression qmetaobject activate qabstractbutton clicked qabstractbutton click qabstractbutton mousereleaseevent qtoolbutton mousereleaseevent qwidget event qapplicationprivate notify helper qapplication notify qgsapplication notify qcoreapplication qapplicationprivate sendmouseevent qsizepolicy qsizepolicy qsizepolicy qsizepolicy qapplicationprivate notify helper qapplication notify qgsapplication notify qcoreapplication qguiapplicationprivate processmouseevent qwindowsysteminterface sendwindowsystemevents processevents usercallwinproccheckwow dispatchmessageworker processevents qt plugin query metadata qeventloop exec qcoreapplication exec main basethreadinitthunk rtluserthreadstart qgis info qgis version zürich qgis code revision compiled against qt running against qt compiled against gdal running against gdal system info cpu type kernel type winnt kernel version how to reproduce execute qgis model via qgis browser click expression button deletetablerow input field qgis crashes
| 1
|
11,454
| 14,274,147,284
|
IssuesEvent
|
2020-11-22 01:57:21
|
tdwg/chrono
|
https://api.github.com/repos/tdwg/chrono
|
opened
|
Update Term List Abstract
|
Process - under public review
|
Following the recommendation by @gkampmeier in issue #15, update the abstract on the term list document to say:
"The Chronometric Age Vocabulary is a standard for transmitting information about chronometric ages - the processes and results of an assay used to determine the age of a sample. This document lists all terms in namespaces currently used in the vocabulary."
|
1.0
|
Update Term List Abstract - Following the recommendation by @gkampmeier in issue #15, update the abstract on the term list document to say:
"The Chronometric Age Vocabulary is a standard for transmitting information about chronometric ages - the processes and results of an assay used to determine the age of a sample. This document lists all terms in namespaces currently used in the vocabulary."
|
process
|
update term list abstract following recommendation by gkampmeier in issue update abstract on term list document to say the chronometric age vocabulary is standard for transmitting information about chronometric ages the processes and results of an assay used to determine the age of a sample this document lists all terms in namespaces currently used in the vocabulary
| 1
|
14,858
| 18,262,943,610
|
IssuesEvent
|
2021-10-04 03:13:25
|
quark-engine/quark-engine
|
https://api.github.com/repos/quark-engine/quark-engine
|
closed
|
Methods from the "Extended Class"
|
work-in-progress issue-processing-state-06
|
Hi all, my friends (@Dil3mm3 and @3aglew0) and I are working to implement new quark rules (version 21.3.2) for a university semester project (our supervisor is @cryptax). We were analyzing Brazking malware (hash SHA256 be3d8500df167b9aaf21c5f76df61c466808b8fdf60e4a7da8d6057d476282b6, let us know if you want the sample).
In a nutshell, the problem we have encountered is the following: *we noticed Quark is not able to detect all the APIs called from an object of a class which extends a known Android class. The root cause is that the signature of the API in the smali code appears with the name of the child class.*
To explain this problem better, we provide the following example: the Acessibilidade class is a custom Brazking class which extends the Android AccessibilityService class.
```
package com.gservice.autobot;
...
public class Acessibilidade extends AccessibilityService{
...
}
```
Acessibilidade is widely used by this malware to perform accessibility service actions, such as the one below:
```
public void Clicar_Pos(int i, int i2) {
    Acessibilidade acessibilidade = this.Contexto;
    if (acessibilidade != null) {
        try {
            Clica(i, i2, acessibilidade.getRootInActiveWindow(), 0);
        } catch (Exception unused) {
        }
    }
}
```
The offending line of code is the one that calls the method `getRootInActiveWindow`, which in the smali code appears as follows:
`invoke-virtual {v0}, Lcom/gservice/autobot/Acessibilidade;->getRootInActiveWindow()Landroid/view/accessibility/AccessibilityNodeInfo;`
In the context of Quark rules it makes sense to link this API with the `performAction` API of the AccessibilityNodeInfo class:
```
public void Clica(int i, int i2, AccessibilityNodeInfo accessibilityNodeInfo, int i3) {
    ...
    accessibilityNodeInfo.performAction(16);
    ...
}
```
Finally, we have created the following rule (note that the first API is written with the signature of the AccessibilityService class and not Acessibilidade, since it would not make sense to make a rule so specific to a single malware sample):
```
{
    "crime": "Use accessibility service to perform action getting root in active window",
    "permission": [],
    "api": [
        {
            "class": "Landroid/accessibilityservice/AccessibilityService;",
            "method": "getRootInActiveWindow",
            "descriptor": "()Landroid/view/accessibility/AccessibilityNodeInfo;"
        },
        {
            "class": "Landroid/view/accessibility/AccessibilityNodeInfo;",
            "method": "performAction",
            "descriptor": "(I)Z"
        }
    ],
    "score": 1,
    "label": [
        "accessibility service",
        "perform action"
    ]
}
```
This behaviour may have repercussions on Quark's functionality: launching this rule against the Brazking malware, the score we obtain is 40%, since the first API is not caught.
To sum up, we think that if a class extends another Android class (as in the case of BrazKing for the accessibility service, see the class *Acessibilidade* that extends *AccessibilityService*), and a method **M** of the superclass is called, it appears in the smali code with the signature of the custom class. If we define in Quark a rule with that method **M**, from what we have observed, Quark is not able to detect that **M** is actually a method of the superclass.
Our worry is the following: if a malware author writes a class which extends a general Android class (as happens in Brazking for AccessibilityService), Quark will never detect the methods of the superclass, since in the smali code they appear with the signature of the child class.
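One possible direction for a fix (a Python sketch of the idea, not Quark's actual internals; the `SUPERCLASS` and `DECLARED` tables are hypothetical stand-ins for information recoverable from the smali and the Android framework) is to walk the class hierarchy when matching a rule's API signature:

```python
# child class -> superclass, as recoverable from each class's smali 'extends'.
SUPERCLASS = {
    "Lcom/gservice/autobot/Acessibilidade;":
        "Landroid/accessibilityservice/AccessibilityService;",
}

# (class, method) pairs known to declare the method, e.g. from the framework.
DECLARED = {
    ("Landroid/accessibilityservice/AccessibilityService;", "getRootInActiveWindow"),
}

def resolve(cls, method):
    # Walk up the hierarchy until some class actually declares the method.
    while cls is not None:
        if (cls, method) in DECLARED:
            return cls
        cls = SUPERCLASS.get(cls)
    return None

resolve("Lcom/gservice/autobot/Acessibilidade;", "getRootInActiveWindow")
# -> "Landroid/accessibilityservice/AccessibilityService;"
```

With resolution like this, a call written against the child class in smali would still match a rule defined on the superclass's signature.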
Thanks in advance; we hope to hear from you soon.
|
1.0
|
Methods from the "Extended Class" - Hi all, my friends (@Dil3mm3 and @3aglew0) and I are working to implement new quark rules (version 21.3.2) for a university semester project (our supervisor is @cryptax). We were analyzing Brazking malware (hash SHA256 be3d8500df167b9aaf21c5f76df61c466808b8fdf60e4a7da8d6057d476282b6, let us know if you want the sample).
In a nutshell, the problem we have encountered is the following: *we noticed Quark is not able to detect all the APIs called from an object of a class which extends a known Android class. The root cause is that the signature of the API in the smali code appears with the name of the child class.*
To explain this problem better, we provide the following example: the Acessibilidade class is a custom Brazking class which extends the Android AccessibilityService class.
```
package com.gservice.autobot;
...
public class Acessibilidade extends AccessibilityService{
...
}
```
Acessibilidade is widely used by this malware to perform accessibility service actions, such as the one below:
```
public void Clicar_Pos(int i, int i2) {
    Acessibilidade acessibilidade = this.Contexto;
    if (acessibilidade != null) {
        try {
            Clica(i, i2, acessibilidade.getRootInActiveWindow(), 0);
        } catch (Exception unused) {
        }
    }
}
```
The offending line of code is the one that calls the method `getRootInActiveWindow`, which in the smali code appears as follows:
`invoke-virtual {v0}, Lcom/gservice/autobot/Acessibilidade;->getRootInActiveWindow()Landroid/view/accessibility/AccessibilityNodeInfo;`
In the context of Quark rules it makes sense to link this API with the `performAction` API of the AccessibilityNodeInfo class:
```
public void Clica(int i, int i2, AccessibilityNodeInfo accessibilityNodeInfo, int i3) {
    ...
    accessibilityNodeInfo.performAction(16);
    ...
}
```
Finally we have created the following rule (note that the first API is written with the signature of AccessibilityService class and not Acessibilidade since it would not have sense make a rule so specific for a single malware)
```
{
"crime": "Use accessibility service to perform action getting root in active window",
"permission": [],
"api": [
{
"class": "Landroid/accessibilityservice/AccessibilityService;",
"method": "getRootInActiveWindow",
"descriptor": "()Landroid/view/accessibility/AccessibilityNodeInfo;"
},
{
"class": "Landroid/view/accessibility/AccessibilityNodeInfo;",
"method": "performAction",
"descriptor": "(I)Z"
}
],
"score": 1,
"label": [
"accessibility service",
"perform action"
]
}
```
This behaviour may have some repercussions on the functionalities of Quark: launching this rule over Brazking malware the score we obtain is 40% since the first API is not caught.
To sum up, we think that if a class extends another Android class (as in the case of BrazKing for the accessibility service, see class *Acessibilidade* that extends *AccessibilityService*), and a method **M** of the super class is called, it appears in the smali code with the signature of the custom class. If we defined in Quark a rule with that method **M**, from what we have noticed, Quark is not able to detect that **M** is actually a method of the super class.
Our worry is the following: if the malware author writes a class which extends an android general class (as it happens in Brazking for AccessibilityService), quark will never detect all the methods of the super class since in the smali code they appear with the signature of the child class.
Thanks in advance, hope to hear you soon
|
process
|
methods from the extended class hi all my friends and and i are working to implement new quark rules version for a university semester project our supervisor is cryptax we were analyzing brazking malware hash let us know if you want the sample in a nutshell the problem we have encountered is the following we noticed quark is not able to detect all the api called from an object of a class which extends a noticed android class the root of cause comes from the signature of the api that in the smali code appears with the name of the child class to explain better this problem we provide the following example acessibilidade class is a custom brazking class which extends android accesibilityservice class package com gservice autobot public class acessibilidade extends accessibilityservice acessibilidade is widely used by this malware to perform accessibilty service actions as the one below public void clicar pos int i int acessibilidade acessibilidade this contexto if acessibilidade null try clica i acessibilidade getrootinactivewindow catch exception unused the incriminant line of code is the one where it is called the method getrootinactionwindow which in the smali code appears as following invoke virtual lcom gservice autobot acessibilidade getrootinactivewindow landroid view accessibility accessibilitynodeinfo in the context of quark rules it makes sense link this api with the performaction api of accessibilitynodeinfo class public void clica int i int accessibilitynodeinfo accessibilitynodeinfo int accessibilitynodeinfo performaction finally we have created the following rule note that the first api is written with the signature of accessibilityservice class and not acessibilidade since it would not have sense make a rule so specific for a single malware crime use accessibility service to perform action getting root in active window permission api class landroid accessibilityservice accessibilityservice method getrootinactivewindow descriptor landroid view accessibility accessibilitynodeinfo class landroid view accessibility accessibilitynodeinfo method performaction descriptor i z score label accessibility service perform action this behaviour may have some repercussions on the functionalities of quark launching this rule over brazking malware the score we obtain is since the first api is not caught to sum up we think that if a class extends another android class as in the case of brazking for the accessibility service see class acessibilidade that extends accessibilityservice and a method m of the super class is called it appears in the smali code with the signature of the custom class if we defined in quark a rule with that method m from what we have noticed quark is not able to detect that m is actually a method of the super class our worry is the following if the malware author writes a class which extends an android general class as it happens in brazking for accessibilityservice quark will never detect all the methods of the super class since in the smali code they appear with the signature of the child class thanks in advance hope to hear you soon
| 1
|
226,147
| 17,313,930,347
|
IssuesEvent
|
2021-07-27 01:32:19
|
microsoft/MixedRealityToolkit-Unity
|
https://api.github.com/repos/microsoft/MixedRealityToolkit-Unity
|
closed
|
Using AR Foundation documentation needs to be updated
|
Documentation
|
## Describe the issue
Bug #6646 no longer effects building on MacOS Big Sur with Xcode 12. The step for un-checking the 'Strip Engine Code' check box in Project Settings > Player > Other Settings > Optimization is no longer necessary to successfully build the MRTK for iOS.
## Feature area
iOS/AR Kit
## Existing doc link
https://github.com/microsoft/MixedRealityToolkit-Unity/blob/mrtk_development/Documentation/CrossPlatform/UsingARFoundation.md
## Additional context
Enabling the Unity AR camera settings provider should also be updated to include removing the WMR camera settings provider if using a profile that has that camera settings provider.
|
1.0
|
Using AR Foundation documentation needs to be updated - ## Describe the issue
Bug #6646 no longer effects building on MacOS Big Sur with Xcode 12. The step for un-checking the 'Strip Engine Code' check box in Project Settings > Player > Other Settings > Optimization is no longer necessary to successfully build the MRTK for iOS.
## Feature area
iOS/AR Kit
## Existing doc link
https://github.com/microsoft/MixedRealityToolkit-Unity/blob/mrtk_development/Documentation/CrossPlatform/UsingARFoundation.md
## Additional context
Enabling the Unity AR camera settings provider should also be updated to include removing the WMR camera settings provider if using a profile that has that camera settings provider.
|
non_process
|
using ar foundation documentation needs to be updated describe the issue bug no longer effects building on macos big sur with xcode the step for un checking the strip engine code check box in project settings player other settings optimization is no longer necessary to successfully build the mrtk for ios feature area ios ar kit existing doc link additional context enabling the unity ar camera settings provider should also be updated to include removing the wmr camera settings provider if using a profile that has that camera settings provider
| 0
|
16,551
| 21,568,599,171
|
IssuesEvent
|
2022-05-02 04:17:57
|
lynnandtonic/nestflix.fun
|
https://api.github.com/repos/lynnandtonic/nestflix.fun
|
closed
|
Diamond Jim
|
suggested title in process
|
Please add as much of the following info as you can:
Title: Bonjour, Diamond Jim
Type (film/tv show): Film
Film or show in which it appears: Tim and Eric's Billion Dollar Movie
Is the parent film/show streaming anywhere? Yes
About when in the parent film/show does it appear? Start
Actual footage of the film/show can be seen (yes/no)? Yes
https://www.youtube.com/watch?v=nUpvDOxg8as
|
1.0
|
Diamond Jim - Please add as much of the following info as you can:
Title: Bonjour, Diamond Jim
Type (film/tv show): Film
Film or show in which it appears: Tim and Eric's Billion Dollar Movie
Is the parent film/show streaming anywhere? Yes
About when in the parent film/show does it appear? Start
Actual footage of the film/show can be seen (yes/no)? Yes
https://www.youtube.com/watch?v=nUpvDOxg8as
|
process
|
diamond jim please add as much of the following info as you can title bonjour diamond jim type film tv show film film or show in which it appears tim and eric s billion dollar movie is the parent film show streaming anywhere yes about when in the parent film show does it appear start actual footage of the film show can be seen yes no yes
| 1
|
20,393
| 27,050,747,754
|
IssuesEvent
|
2023-02-13 13:05:07
|
bitfocus/companion-module-requests
|
https://api.github.com/repos/bitfocus/companion-module-requests
|
opened
|
Thomann t.racks 8x8 Matrix
|
NOT YET PROCESSED
|
- [X ] **I have researched the list of existing Companion modules and requests and have determined this has not yet been requested**
The name of the device, hardware, or software you would like to control:
Thomann's the t.racks 8x8 Matrix
What you would like to be able to make it do from Companion:
Routing Inputs to defines Outputs, Swithcing In/Outputs ON/OFF (Muting them)
Direct links or attachments to the ethernet control protocol or API:
https://images.static-thomann.de/pics/atg/atgdata/document/manual/490507_c_490507_r1_en_online.pdf
Sadly they just document RS232 but it seems like it's just RS232 via Ethernet/TCP Bridge.
|
1.0
|
Thomann t.racks 8x8 Matrix - - [X ] **I have researched the list of existing Companion modules and requests and have determined this has not yet been requested**
The name of the device, hardware, or software you would like to control:
Thomann's the t.racks 8x8 Matrix
What you would like to be able to make it do from Companion:
Routing Inputs to defines Outputs, Swithcing In/Outputs ON/OFF (Muting them)
Direct links or attachments to the ethernet control protocol or API:
https://images.static-thomann.de/pics/atg/atgdata/document/manual/490507_c_490507_r1_en_online.pdf
Sadly they just document RS232 but it seems like it's just RS232 via Ethernet/TCP Bridge.
|
process
|
thomann t racks matrix i have researched the list of existing companion modules and requests and have determined this has not yet been requested the name of the device hardware or software you would like to control thomann s the t racks matrix what you would like to be able to make it do from companion routing inputs to defines outputs swithcing in outputs on off muting them direct links or attachments to the ethernet control protocol or api sadly they just document but it seems like it s just via ethernet tcp bridge
| 1
|
6,609
| 9,694,244,392
|
IssuesEvent
|
2019-05-24 18:22:38
|
openopps/openopps-platform
|
https://api.github.com/repos/openopps/openopps-platform
|
closed
|
Bug: Applicant able to select save and continue without having 3 selections
|
Apply Process State Dept.
|
Environment: UAT
Browser: Chrome
Issue: During the apply process where user selects 3 internships, Save and continue shouldn’t be active until 3 choices are selected AND if I only select 2 and click save and continue, I get a red bar on the 3rd selection but no error message.
Steps to Reproduce:
1) Select apply on an internship
2) Select a second internship to apply to but not a third
3) Select Save and continue
- Red bar displays with no error message
Resolution:
- Disable save & continue until 3 options are selected
Issue found during 5/9/19 bug bash with State
|
1.0
|
Bug: Applicant able to select save and continue without having 3 selections - Environment: UAT
Browser: Chrome
Issue: During the apply process where user selects 3 internships, Save and continue shouldn’t be active until 3 choices are selected AND if I only select 2 and click save and continue, I get a red bar on the 3rd selection but no error message.
Steps to Reproduce:
1) Select apply on an internship
2) Select a second internship to apply to but not a third
3) Select Save and continue
- Red bar displays with no error message
Resolution:
- Disable save & continue until 3 options are selected
Issue found during 5/9/19 bug bash with State
|
process
|
bug applicant able to select save and continue without having selections environment uat browser chrome issue during the apply process where user selects internships save and continue shouldn’t be active until choices are selected and if i only select and click save and continue i get a red bar on the selection but no error message steps to reproduce select apply on an internship select a second internship to apply to but not a third select save and continue red bar displays with no error message resolution disable save continue until options are selected issue found during bug bash with state
| 1
|
14,922
| 18,359,528,586
|
IssuesEvent
|
2021-10-09 01:45:38
|
DevExpress/testcafe-hammerhead
|
https://api.github.com/repos/DevExpress/testcafe-hammerhead
|
closed
|
Cannot switch to an rewritten iframe
|
TYPE: bug AREA: client FREQUENCY: level 1 SYSTEM: iframe processing STATE: Stale
|
<!--
If you have all reproduction steps with a complete sample app, please share as many details as possible in the sections below.
Make sure that you tried using the latest Hammerhead version (https://github.com/DevExpress/testcafe-hammerhead/releases), where this behavior might have been already addressed.
Before submitting an issue, please check existing issues in this repository (https://github.com/DevExpress/testcafe-hammerhead/issues) in case a similar issue exists or was already addressed. This may save your time (and ours).
-->
### What is your Scenario?
Execute actions in iframe rewritten via `document.open`.
### What is the Current behavior?
An `Iframe content is not loaded` error shown when trying to switch to an iframe.
### What is the Expected behavior?
TestCafe should switch to an iframe and execute test actions.
### What is your public web site URL?
<!-- Share a public accessible link to your web site or provide a simple app which we can run. -->
Your website URL (or attach your complete example):
<details>
<summary>Your complete app code (or attach your test files):</summary>
<!-- Paste your app code here: -->
```js
const http = require('http');
http
.createServer((req, res) => {
switch (req.url) {
case '/':
res.writeHead(200, { 'content-type': 'text/html' });
res.end(`<iframe src="http://localhost:2201/"></iframe>`);
break;
default:
res.end();
}
})
.listen(2200);
http
.createServer((req, res) => {
switch (req.url) {
case '/':
res.writeHead(200, { 'content-type': 'text/html' });
res.end(`
<iframe src="/iframe"></iframe>
<script>
var iframe = document.querySelector('iframe');
iframe.contentDocument.open();
iframe.contentDocument.write('<div id="helloworld">Hello world</div>');
iframe.contentDocument.close();
</script>
`);
break;
case '/iframe':
res.writeHead(200, { 'content-type': 'text/html' });
res.end(``);
break;
default:
res.end();
}
})
.listen(2201);
```
</details>
<details>
<summary>Test:</summary>
<!-- If applicable, add screenshots to help explain the issue. -->
```js
fixture`fixture`
.page('http://localhost:2200/');
import { Selector } from 'testcafe';
test('test', async t => {
await t
.switchToIframe('iframe');
await t.debug(1000);
await t.switchToIframe(() => document.querySelector('iframe'));
const selector = Selector('div');
await s();
console.log("This is never logged as it's stuck");
});
```
</details>
### Steps to Reproduce:
<!-- Describe what we should do to reproduce the behavior you encountered. -->
1. Go to: ...
2. Execute this command: ...
3. See the error: ...
### Your Environment details:
* node.js version: 12.14.1
* browser name and version: Chrome 79
* platform and version: Windows 10 1909
* other:
|
1.0
|
Cannot switch to an rewritten iframe - <!--
If you have all reproduction steps with a complete sample app, please share as many details as possible in the sections below.
Make sure that you tried using the latest Hammerhead version (https://github.com/DevExpress/testcafe-hammerhead/releases), where this behavior might have been already addressed.
Before submitting an issue, please check existing issues in this repository (https://github.com/DevExpress/testcafe-hammerhead/issues) in case a similar issue exists or was already addressed. This may save your time (and ours).
-->
### What is your Scenario?
Execute actions in iframe rewritten via `document.open`.
### What is the Current behavior?
An `Iframe content is not loaded` error shown when trying to switch to an iframe.
### What is the Expected behavior?
TestCafe should switch to an iframe and execute test actions.
### What is your public web site URL?
<!-- Share a public accessible link to your web site or provide a simple app which we can run. -->
Your website URL (or attach your complete example):
<details>
<summary>Your complete app code (or attach your test files):</summary>
<!-- Paste your app code here: -->
```js
const http = require('http');
http
.createServer((req, res) => {
switch (req.url) {
case '/':
res.writeHead(200, { 'content-type': 'text/html' });
res.end(`<iframe src="http://localhost:2201/"></iframe>`);
break;
default:
res.end();
}
})
.listen(2200);
http
.createServer((req, res) => {
switch (req.url) {
case '/':
res.writeHead(200, { 'content-type': 'text/html' });
res.end(`
<iframe src="/iframe"></iframe>
<script>
var iframe = document.querySelector('iframe');
iframe.contentDocument.open();
iframe.contentDocument.write('<div id="helloworld">Hello world</div>');
iframe.contentDocument.close();
</script>
`);
break;
case '/iframe':
res.writeHead(200, { 'content-type': 'text/html' });
res.end(``);
break;
default:
res.end();
}
})
.listen(2201);
```
</details>
<details>
<summary>Test:</summary>
<!-- If applicable, add screenshots to help explain the issue. -->
```js
fixture`fixture`
.page('http://localhost:2200/');
import { Selector } from 'testcafe';
test('test', async t => {
await t
.switchToIframe('iframe');
await t.debug(1000);
await t.switchToIframe(() => document.querySelector('iframe'));
const selector = Selector('div');
await s();
console.log("This is never logged as it's stuck");
});
```
</details>
### Steps to Reproduce:
<!-- Describe what we should do to reproduce the behavior you encountered. -->
1. Go to: ...
2. Execute this command: ...
3. See the error: ...
### Your Environment details:
* node.js version: 12.14.1
* browser name and version: Chrome 79
* platform and version: Windows 10 1909
* other:
|
process
|
cannot switch to an rewritten iframe if you have all reproduction steps with a complete sample app please share as many details as possible in the sections below make sure that you tried using the latest hammerhead version where this behavior might have been already addressed before submitting an issue please check existing issues in this repository in case a similar issue exists or was already addressed this may save your time and ours what is your scenario execute actions in iframe rewritten via document open what is the current behavior an iframe content is not loaded error shown when trying to switch to an iframe what is the expected behavior testcafe should switch to an iframe and execute test actions what is your public web site url your website url or attach your complete example your complete app code or attach your test files js const http require http http createserver req res switch req url case res writehead content type text html res end iframe src break default res end listen http createserver req res switch req url case res writehead content type text html res end var iframe document queryselector iframe iframe contentdocument open iframe contentdocument write hello world iframe contentdocument close break case iframe res writehead content type text html res end break default res end listen test js fixture fixture page import selector from testcafe test test async t await t switchtoiframe iframe await t debug await t switchtoiframe document queryselector iframe const selector selector div await s console log this is never logged as it s stuck steps to reproduce go to execute this command see the error your environment details node js version browser name and version chrome platform and version windows other
| 1
|
268,793
| 20,361,469,425
|
IssuesEvent
|
2022-02-20 18:51:42
|
ChristopherSzczyglowski/python_package_template
|
https://api.github.com/repos/ChristopherSzczyglowski/python_package_template
|
closed
|
Move python package management to `pipenv`
|
documentation enhancement
|
I like the functionality of the `Pipfile` and `Pipfile.lock` - let's bring it in.
This is made necessary by the fact that CI was hanging because it could not resolve my dependencies so I had to use `pip freeze`. Once you have a frozen pip file you may as well got the whole hog and use a `Pipfile.lock`.
Tasks
* [ ] Need to update `README.md` with instructions on how to install `pipenv`
* [ ] Convert `requirements.txt` and `requirements-dev.txt` into `Pipfile`
* [ ] Update recipes in `Makefile`
* [ ] Update `install-deps` job in CI pipeline
|
1.0
|
Move python package management to `pipenv` - I like the functionality of the `Pipfile` and `Pipfile.lock` - let's bring it in.
This is made necessary by the fact that CI was hanging because it could not resolve my dependencies so I had to use `pip freeze`. Once you have a frozen pip file you may as well got the whole hog and use a `Pipfile.lock`.
Tasks
* [ ] Need to update `README.md` with instructions on how to install `pipenv`
* [ ] Convert `requirements.txt` and `requirements-dev.txt` into `Pipfile`
* [ ] Update recipes in `Makefile`
* [ ] Update `install-deps` job in CI pipeline
|
non_process
|
move python package management to pipenv i like the functionality of the pipfile and pipfile lock let s bring it in this is made necessary by the fact that ci was hanging because it could not resolve my dependencies so i had to use pip freeze once you have a frozen pip file you may as well got the whole hog and use a pipfile lock tasks need to update readme md with instructions on how to install pipenv convert requirements txt and requirements dev txt into pipfile update recipes in makefile update install deps job in ci pipeline
| 0
|
7,691
| 10,778,934,184
|
IssuesEvent
|
2019-11-04 09:27:31
|
lutraconsulting/qgis-crayfish-plugin
|
https://api.github.com/repos/lutraconsulting/qgis-crayfish-plugin
|
closed
|
Create contours (area/line) from mesh
|
enhancement processing
|
Adding support (like old Crayfish) to generate contour (areas or lines)
|
1.0
|
Create contours (area/line) from mesh - Adding support (like old Crayfish) to generate contour (areas or lines)
|
process
|
create contours area line from mesh adding support like old crayfish to generate contour areas or lines
| 1
|
18,081
| 24,096,568,872
|
IssuesEvent
|
2022-09-19 19:19:18
|
ankidroid/Anki-Android
|
https://api.github.com/repos/ankidroid/Anki-Android
|
closed
|
[Bug] Publish / release script can partially publish
|
Bug Keep Open Dev Release process
|
###### Reproduction Steps
1. Attempt to publish (using the release script called by the publish.yml workflow) while there is a translation error that will fail on lintRelease - https://github.com/ankidroid/Anki-Android/runs/2108464284?check_suite_focus=true
2.Try to publish again after fixing the build error (it will fail because the first publish did not fail completely, it only failed partially) https://github.com/ankidroid/Anki-Android/runs/2109306966?check_suite_focus=true
3.
###### Expected Result
A clean failure, no artifacts published anywhere, no git commits
###### Actual Result
It actually will publish to Google Play (thus consuming the version code because Google Play will now have APKs with the new version code) but it will then fail on the lintRelease error and not actually commit the new `AnkiDroid/build.gradle` edit with the incremented version code
The only resolution at that point is to manually bump the version information in build.gradle then you can publish again once you sort out the build error
This is obviously a serious failure in the automated release process but at the same time it has an easy fix and should not have happened in the first place (I rushed an alpha publish, my fault), so it is not a super high priority
But the next time I go into the github workflows to polish - something I do every month or so to add a new feature to them), I'll clean this up if no one beats me to it
###### Debug info
Refer to the [support page](https://ankidroid.org/docs/help.html) if you are unsure where to get the "debug info".
###### Research
*Enter an [x] character to confirm the points below:*
- [ ] I have read the [support page](https://ankidroid.org/docs/help.html) and am reporting a bug or enhancement request specific to AnkiDroid
- [ ] I have checked the [manual](https://ankidroid.org/docs/manual.html) and the [FAQ](https://github.com/ankidroid/Anki-Android/wiki/FAQ) and could not find a solution to my issue
- [ ] I have searched for similar existing issues here and on the user forum
- [ ] (Optional) I have confirmed the issue is not resolved in the latest alpha release ([instructions](https://docs.ankidroid.org/manual.html#betaTesting))
|
1.0
|
[Bug] Publish / release script can partially publish - ###### Reproduction Steps
1. Attempt to publish (using the release script called by the publish.yml workflow) while there is a translation error that will fail on lintRelease - https://github.com/ankidroid/Anki-Android/runs/2108464284?check_suite_focus=true
2.Try to publish again after fixing the build error (it will fail because the first publish did not fail completely, it only failed partially) https://github.com/ankidroid/Anki-Android/runs/2109306966?check_suite_focus=true
3.
###### Expected Result
A clean failure, no artifacts published anywhere, no git commits
###### Actual Result
It actually will publish to Google Play (thus consuming the version code because Google Play will now have APKs with the new version code) but it will then fail on the lintRelease error and not actually commit the new `AnkiDroid/build.gradle` edit with the incremented version code
The only resolution at that point is to manually bump the version information in build.gradle then you can publish again once you sort out the build error
This is obviously a serious failure in the automated release process but at the same time it has an easy fix and should not have happened in the first place (I rushed an alpha publish, my fault), so it is not a super high priority
But the next time I go into the github workflows to polish - something I do every month or so to add a new feature to them), I'll clean this up if no one beats me to it
###### Debug info
Refer to the [support page](https://ankidroid.org/docs/help.html) if you are unsure where to get the "debug info".
###### Research
*Enter an [x] character to confirm the points below:*
- [ ] I have read the [support page](https://ankidroid.org/docs/help.html) and am reporting a bug or enhancement request specific to AnkiDroid
- [ ] I have checked the [manual](https://ankidroid.org/docs/manual.html) and the [FAQ](https://github.com/ankidroid/Anki-Android/wiki/FAQ) and could not find a solution to my issue
- [ ] I have searched for similar existing issues here and on the user forum
- [ ] (Optional) I have confirmed the issue is not resolved in the latest alpha release ([instructions](https://docs.ankidroid.org/manual.html#betaTesting))
|
process
|
publish release script can partially publish reproduction steps attempt to publish using the release script called by the publish yml workflow while there is a translation error that will fail on lintrelease try to publish again after fixing the build error it will fail because the first publish did not fail completely it only failed partially expected result a clean failure no artifacts published anywhere no git commits actual result it actually will publish to google play thus consuming the version code because google play will now have apks with the new version code but it will then fail on the lintrelease error and not actually commit the new ankidroid build gradle edit with the incremented version code the only resolution at that point is to manually bump the version information in build gradle then you can publish again once you sort out the build error this is obviously a serious failure in the automated release process but at the same time it has an easy fix and should not have happened in the first place i rushed an alpha publish my fault so it is not a super high priority but the next time i go into the github workflows to polish something i do every month or so to add a new feature to them i ll clean this up if no one beats me to it debug info refer to the if you are unsure where to get the debug info research enter an character to confirm the points below i have read the and am reporting a bug or enhancement request specific to ankidroid i have checked the and the and could not find a solution to my issue i have searched for similar existing issues here and on the user forum optional i have confirmed the issue is not resolved in the latest alpha release
| 1
|
17,810
| 23,738,783,174
|
IssuesEvent
|
2022-08-31 10:28:29
|
Tencent/tdesign-miniprogram
|
https://api.github.com/repos/Tencent/tdesign-miniprogram
|
closed
|
dialog组件内 overlay组件z-index问题
|
enhancement good first issue in process
|
### tdesign-miniprogram 版本
0.18.0
### 重现链接
_No response_
### 重现步骤
popup组件和dialog组件同时使用时 dialog设置z-index dialog里的overlay组件的z-index还是默认值
### 期望结果
组件内的z-index跟随 dialogz-index
### 实际结果
_No response_
### 框架版本
_No response_
### 浏览器版本
_No response_
### 系统版本
_No response_
### Node版本
_No response_
### 补充说明
_No response_
|
1.0
|
dialog组件内 overlay组件z-index问题 - ### tdesign-miniprogram 版本
0.18.0
### 重现链接
_No response_
### 重现步骤
popup组件和dialog组件同时使用时 dialog设置z-index dialog里的overlay组件的z-index还是默认值
### 期望结果
组件内的z-index跟随 dialogz-index
### 实际结果
_No response_
### 框架版本
_No response_
### 浏览器版本
_No response_
### 系统版本
_No response_
### Node版本
_No response_
### 补充说明
_No response_
|
process
|
dialog组件内 overlay组件z index问题 tdesign miniprogram 版本 重现链接 no response 重现步骤 popup组件和dialog组件同时使用时 dialog设置z index dialog里的overlay组件的z index还是默认值 期望结果 组件内的z index跟随 dialogz index 实际结果 no response 框架版本 no response 浏览器版本 no response 系统版本 no response node版本 no response 补充说明 no response
| 1
|
39,555
| 5,102,400,775
|
IssuesEvent
|
2017-01-04 18:11:38
|
18F/crime-data-api
|
https://api.github.com/repos/18F/crime-data-api
|
closed
|
Consolidate Visual Design Direction
|
design
|
- [x] Review existing visual design directions from discovery sprints and pick out best attributes and components to move forward with
- [x] Gather opinions/general feeling on existing directions from team (which and why)
- [ ] Combine selected attributes into one style
|
1.0
|
Consolidate Visual Design Direction - - [x] Review existing visual design directions from discovery sprints and pick out best attributes and components to move forward with
- [x] Gather opinions/general feeling on existing directions from team (which and why)
- [ ] Combine selected attributes into one style
|
non_process
|
consolidate visual design direction review existing visual design directions from discovery sprints and pick out best attributes and components to move forward with gather opinions general feeling on existing directions from team which and why combine selected attributes into one style
| 0
|
71,731
| 23,778,392,412
|
IssuesEvent
|
2022-09-02 00:04:28
|
CorfuDB/CorfuDB
|
https://api.github.com/repos/CorfuDB/CorfuDB
|
closed
|
Get rid of LogData deserialization dependency on CorfuRuntime
|
defect
|
## Overview
Seems like we used to serialize the CorfuObject to the global log data, which introduced a dependency on CorfuRuntime when deserializing the payload from log data. We need to remove this dependency as we're not serialize the whole CorfuObject now.
|
1.0
|
Get rid of LogData deserialization dependency on CorfuRuntime - ## Overview
Seems like we used to serialize the CorfuObject to the global log data, which introduced a dependency on CorfuRuntime when deserializing the payload from log data. We need to remove this dependency as we're not serialize the whole CorfuObject now.
|
non_process
|
get rid of logdata deserialization dependency on corfuruntime overview seems like we used to serialize the corfuobject to the global log data which introduced a dependency on corfuruntime when deserializing the payload from log data we need to remove this dependency as we re not serialize the whole corfuobject now
| 0
|
12,445
| 14,934,361,360
|
IssuesEvent
|
2021-01-25 10:25:57
|
prisma/prisma
|
https://api.github.com/repos/prisma/prisma
|
closed
|
Implicit many-to-many relations don't work with native types
|
bug/2-confirmed kind/bug process/candidate team/migrations topic: migrate topic: native database types
|
Hi Prisma Team! Prisma Migrate just crashed.
## Versions
| Name | Version |
|-------------|--------------------|
| Platform | darwin |
| Node | v14.3.0 |
| Prisma CLI | 2.15.0 |
| Binary | e51dc3b5a9ee790a07104bec1c9477d51740fe54|
## Error
```
Error: Error in migration engine.
Reason: [migration-engine/connectors/sql-migration-connector/src/sql_schema_calculator.rs:290:41] not yet implemented
Please create an issue in the migrate repo with
your `schema.prisma` and the prisma command you tried to use 🙏:
https://github.com/prisma/prisma/issues/new
```
|
1.0
|
Implicit many-to-many relations don't work with native types - Hi Prisma Team! Prisma Migrate just crashed.
## Versions
| Name | Version |
|-------------|--------------------|
| Platform | darwin |
| Node | v14.3.0 |
| Prisma CLI | 2.15.0 |
| Binary | e51dc3b5a9ee790a07104bec1c9477d51740fe54|
## Error
```
Error: Error in migration engine.
Reason: [migration-engine/connectors/sql-migration-connector/src/sql_schema_calculator.rs:290:41] not yet implemented
Please create an issue in the migrate repo with
your `schema.prisma` and the prisma command you tried to use 🙏:
https://github.com/prisma/prisma/issues/new
```
|
process
|
implicit many to many relations don t work with native types hi prisma team prisma migrate just crashed versions name version platform darwin node prisma cli binary error error error in migration engine reason not yet implemented please create an issue in the migrate repo with your schema prisma and the prisma command you tried to use 🙏
| 1
|
7,956
| 7,161,042,363
|
IssuesEvent
|
2018-01-28 09:21:38
|
aseba-community/aseba
|
https://api.github.com/repos/aseba-community/aseba
|
opened
|
Create a minimalistic web site
|
Infrastructure Wish
|
We need a minimalistic web site for aseba.io. Some months ago @davidjsherman and I discussed a plan for a full-featured web site. While I still think it is a good target, I would like to take an intermediate step so that the identity of Aseba can be clarified and people can easily understand what it is.
Therefore I propose the following structure:
* An image, we could use [this one](https://github.com/aseba-community/aseba/blob/master/docsource/wiki/aseba-studio-overlay.xcf) as a start before we have better.
* A selection of languages.
* A first paragraph describing Aseba, taking from what we already have.
* A second paragraph giving a brief history and linking to the author list.
* A set of three boxes with each one link:
* Download installers (link to a download page)
* Read the documentation (link to the user manual on read the doc)
* Compile the source code (link to https://github.com/aseba-community/aseba#supported-platforms)
For me the main uncertainty is the download page. @davidjsherman and I discussed some times ago whether the idea to create github release for nightly builds and tags, and upload the binaries for at least Windows and macOS there. I contacted Github's team and they confirmed that it is the right way of doing it. @davidjsherman found a [script to upload releases on github](https://github.com/aktau/github-release). The question remains for Linux and embedded hosts, so we might want a separate download page that provide the following links:
- Latest and official github releases with Windows and macOS binaries
- Informations for Linux hosts (PPA for Ubuntu, official packages for Debian, Suse farm's builds for RPM, etc.)
We could put these in the main page but I worry for clutter. We could also try to detect the host system and provide by default a link for this system, but we would still need a page for alternative downloads. Anyway the requirements should be:
- Easy to use for the user
- Easy to maintain for us and contributors
Technically I suggest to use [Jekyll](https://jekyllrb.com/) as it is natively supported by github.
|
1.0
|
Create a minimalistic web site - We need a minimalistic web site for aseba.io. Some months ago @davidjsherman and I discussed a plan for a full-featured web site. While I still think it is a good target, I would like to take an intermediate step so that the identity of Aseba can be clarified and people can easily understand what it is.
Therefore I propose the following structure:
* An image, we could use [this one](https://github.com/aseba-community/aseba/blob/master/docsource/wiki/aseba-studio-overlay.xcf) as a start before we have better.
* A selection of languages.
* A first paragraph describing Aseba, taking from what we already have.
* A second paragraph giving a brief history and linking to the author list.
* A set of three boxes with each one link:
* Download installers (link to a download page)
* Read the documentation (link to the user manual on read the doc)
* Compile the source code (link to https://github.com/aseba-community/aseba#supported-platforms)
For me the main uncertainty is the download page. @davidjsherman and I discussed some times ago whether the idea to create github release for nightly builds and tags, and upload the binaries for at least Windows and macOS there. I contacted Github's team and they confirmed that it is the right way of doing it. @davidjsherman found a [script to upload releases on github](https://github.com/aktau/github-release). The question remains for Linux and embedded hosts, so we might want a separate download page that provide the following links:
- Latest and official github releases with Windows and macOS binaries
- Informations for Linux hosts (PPA for Ubuntu, official packages for Debian, Suse farm's builds for RPM, etc.)
We could put these in the main page but I worry for clutter. We could also try to detect the host system and provide by default a link for this system, but we would still need a page for alternative downloads. Anyway the requirements should be:
- Easy to use for the user
- Easy to maintain for us and contributors
Technically I suggest to use [Jekyll](https://jekyllrb.com/) as it is natively supported by github.
|
non_process
|
create a minimalistic web site we need a minimalistic web site for aseba io some months ago davidjsherman and i discussed a plan for a full featured web site while i still think it is a good target i would like to take an intermediate step so that the identity of aseba can be clarified and people can easily understand what it is therefore i propose the following structure an image we could use as a start before we have better a selection of languages a first paragraph describing aseba taking from what we already have a second paragraph giving a brief history and linking to the author list a set of three boxes with each one link download installers link to a download page read the documentation link to the user manual on read the doc compile the source code link to for me the main uncertainty is the download page davidjsherman and i discussed some times ago whether the idea to create github release for nightly builds and tags and upload the binaries for at least windows and macos there i contacted github s team and they confirmed that it is the right way of doing it davidjsherman found a the question remains for linux and embedded hosts so we might want a separate download page that provide the following links latest and official github releases with windows and macos binaries informations for linux hosts ppa for ubuntu official packages for debian suse farm s builds for rpm etc we could put these in the main page but i worry for clutter we could also try to detect the host system and provide by default a link for this system but we would still need a page for alternative downloads anyway the requirements should be easy to use for the user easy to maintain for us and contributors technically i suggest to use as it is natively supported by github
| 0
|
263,910
| 28,074,248,625
|
IssuesEvent
|
2023-03-29 21:40:48
|
dagger/dagger
|
https://api.github.com/repos/dagger/dagger
|
closed
|
🐞 secrets with whitespace are not scrubbed (yet)
|
kind/dx kind/security kind/user-request kind/ox
|
### What is the issue?
When I use an env variable as a secret, I would like to not get it leaked.
If the secret contains some whitespace, a simple `echo` would output it and break those characters which will defeat the secret scrub first implementation (which base itself on perfect string match). We need to improve it.
### Log output
*First execution*
```
DBGTHE: {"query":"query{host{envVariable(name:\"MY_SECRET_ID\"){secret{id}}}}","operationName":""}
DBGTHE: {"query":"query{container{from(address:\"alpine:latest\"){withEnvVariable(value:\"2023-02-15 18:10:32.602423457 +0100 CET m=+2.044853212\", name:\"CACHEBUSTER\"){withSecretVariable(name:\"MY_SECRET_ID\", secret:\"eyJob3N0X2VudiI6Ik1ZX1NFQ1JFVF9JRCJ9\"){withExec(args:[\"sh\",\"-c\",\"echo super secret: $MY_SECRET_ID; date;\"]){stdout}}}}}}","operationName":""}
#1 resolve image config for docker.io/library/alpine:latest
#1 DONE 0.7s
#2 importing cache manifest from dagger:10686922502337221602
#2 DONE 0.0s
#3
#3 DONE 0.0s
#4 from alpine:latest
#4 resolve docker.io/library/alpine:latest
#4 resolve docker.io/library/alpine:latest 0.1s done
#4 CACHED
#3
2023/02/15 18:10:33 LOGSTDOUT: super secret: my secret value
Wed Feb 15 17:10:33 UTC 2023
```
*second execution after changing var*
```
DBGTHE: {"query":"query{host{envVariable(name:\"MY_SECRET_ID\"){secret{id}}}}","operationName":""}
DBGTHE: {"query":"query{container{from(address:\"alpine:latest\"){withEnvVariable(name:\"CACHEBUSTER\", value:\"2023-02-15 18:11:00.37254732 +0100 CET m=+2.402145986\"){withSecretVariable(name:\"MY_SECRET_ID\", secret:\"eyJob3N0X2VudiI6Ik1ZX1NFQ1JFVF9JRCJ9\"){withExec(args:[\"sh\",\"-c\",\"echo super secret: $MY_SECRET_ID; date;\"]){stdout}}}}}}","operationName":""}
#1 resolve image config for docker.io/library/alpine:latest
#1 DONE 0.4s
#2 importing cache manifest from dagger:10686922502337221602
#2 DONE 0.0s
#3
#3 DONE 0.0s
#4 from alpine:latest
#4 resolve docker.io/library/alpine:latest
#4 resolve docker.io/library/alpine:latest 0.2s done
#4 CACHED
#3
2023/02/15 18:11:01 LOGSTDOUT: super secret: ***
Wed Feb 15 17:11:01 UTC 2023
```
In the query, we can see the `withSecretVariable.secret` has the same value, so some caching is not basing itself on the value, but more on the name.
### Steps to reproduce
```go
package main
import (
"context"
"log"
"os"
"dagger.io/dagger"
)
func main() {
// Change this value and reexecute.
err := os.Setenv("MY_SECRET_ID", "my secret value ")
if err != nil {
panic(err)
}
ctx := context.Background()
c, err := dagger.Connect(ctx, dagger.WithLogOutput(os.Stdout))
secret := c.Host().EnvVariable("MY_SECRET_ID").Secret()
stdout, err := c.Container().
From("alpine:latest").
//WithEnvVariable("CACHEBUSTER", time.Now().String()).
WithSecretVariable("MY_SECRET_ID", secret).
WithExec([]string{"sh", "-c", "echo super secret: $MY_SECRET_ID; date;"}).
Stdout(ctx)
log.Println("LOGSTDOUT:", stdout)
}
```
### SDK version
Go SDK v0.4.5
### OS version
Ubuntu 20.04
|
True
|
🐞 secrets with whitespace are not scrubbed (yet) - ### What is the issue?
When I use an env variable as a secret, I would like to not get it leaked.
If the secret contains some whitespace, a simple `echo` would output it and break those characters which will defeat the secret scrub first implementation (which base itself on perfect string match). We need to improve it.
### Log output
*First execution*
```
DBGTHE: {"query":"query{host{envVariable(name:\"MY_SECRET_ID\"){secret{id}}}}","operationName":""}
DBGTHE: {"query":"query{container{from(address:\"alpine:latest\"){withEnvVariable(value:\"2023-02-15 18:10:32.602423457 +0100 CET m=+2.044853212\", name:\"CACHEBUSTER\"){withSecretVariable(name:\"MY_SECRET_ID\", secret:\"eyJob3N0X2VudiI6Ik1ZX1NFQ1JFVF9JRCJ9\"){withExec(args:[\"sh\",\"-c\",\"echo super secret: $MY_SECRET_ID; date;\"]){stdout}}}}}}","operationName":""}
#1 resolve image config for docker.io/library/alpine:latest
#1 DONE 0.7s
#2 importing cache manifest from dagger:10686922502337221602
#2 DONE 0.0s
#3
#3 DONE 0.0s
#4 from alpine:latest
#4 resolve docker.io/library/alpine:latest
#4 resolve docker.io/library/alpine:latest 0.1s done
#4 CACHED
#3
2023/02/15 18:10:33 LOGSTDOUT: super secret: my secret value
Wed Feb 15 17:10:33 UTC 2023
```
*second execution after changing var*
```
DBGTHE: {"query":"query{host{envVariable(name:\"MY_SECRET_ID\"){secret{id}}}}","operationName":""}
DBGTHE: {"query":"query{container{from(address:\"alpine:latest\"){withEnvVariable(name:\"CACHEBUSTER\", value:\"2023-02-15 18:11:00.37254732 +0100 CET m=+2.402145986\"){withSecretVariable(name:\"MY_SECRET_ID\", secret:\"eyJob3N0X2VudiI6Ik1ZX1NFQ1JFVF9JRCJ9\"){withExec(args:[\"sh\",\"-c\",\"echo super secret: $MY_SECRET_ID; date;\"]){stdout}}}}}}","operationName":""}
#1 resolve image config for docker.io/library/alpine:latest
#1 DONE 0.4s
#2 importing cache manifest from dagger:10686922502337221602
#2 DONE 0.0s
#3
#3 DONE 0.0s
#4 from alpine:latest
#4 resolve docker.io/library/alpine:latest
#4 resolve docker.io/library/alpine:latest 0.2s done
#4 CACHED
#3
2023/02/15 18:11:01 LOGSTDOUT: super secret: ***
Wed Feb 15 17:11:01 UTC 2023
```
In the query, we can see the `withSecretVariable.secret` has the same value, so some caching is not basing itself on the value, but more on the name.
### Steps to reproduce
```go
package main
import (
"context"
"log"
"os"
"dagger.io/dagger"
)
func main() {
// Change this value and reexecute.
err := os.Setenv("MY_SECRET_ID", "my secret value ")
if err != nil {
panic(err)
}
ctx := context.Background()
c, err := dagger.Connect(ctx, dagger.WithLogOutput(os.Stdout))
secret := c.Host().EnvVariable("MY_SECRET_ID").Secret()
stdout, err := c.Container().
From("alpine:latest").
//WithEnvVariable("CACHEBUSTER", time.Now().String()).
WithSecretVariable("MY_SECRET_ID", secret).
WithExec([]string{"sh", "-c", "echo super secret: $MY_SECRET_ID; date;"}).
Stdout(ctx)
log.Println("LOGSTDOUT:", stdout)
}
```
### SDK version
Go SDK v0.4.5
### OS version
Ubuntu 20.04
|
non_process
|
🐞 secrets with whitespace are not scrubbed yet what is the issue when i use an env variable as a secret i would like to not get it leaked if the secret contains some whitespace a simple echo would output it and break those characters which will defeat the secret scrub first implementation which base itself on perfect string match we need to improve it log output first execution dbgthe query query host envvariable name my secret id secret id operationname dbgthe query query container from address alpine latest withenvvariable value cet m name cachebuster withsecretvariable name my secret id secret withexec args stdout operationname resolve image config for docker io library alpine latest done importing cache manifest from dagger done done from alpine latest resolve docker io library alpine latest resolve docker io library alpine latest done cached logstdout super secret my secret value wed feb utc second execution after changing var dbgthe query query host envvariable name my secret id secret id operationname dbgthe query query container from address alpine latest withenvvariable name cachebuster value cet m withsecretvariable name my secret id secret withexec args stdout operationname resolve image config for docker io library alpine latest done importing cache manifest from dagger done done from alpine latest resolve docker io library alpine latest resolve docker io library alpine latest done cached logstdout super secret wed feb utc in the query we can see the withsecretvariable secret has the same value so some caching is not basing itself on the value but more on the name steps to reproduce go package main import context log os dagger io dagger func main change this value and reexecute err os setenv my secret id my secret value if err nil panic err ctx context background c err dagger connect ctx dagger withlogoutput os stdout secret c host envvariable my secret id secret stdout err c container from alpine latest withenvvariable cachebuster time now string withsecretvariable my secret id secret withexec string sh c echo super secret my secret id date stdout ctx log println logstdout stdout sdk version go sdk os version ubuntu
| 0
|
4,701
| 7,542,782,268
|
IssuesEvent
|
2018-04-17 13:55:41
|
dita-ot/dita-ot
|
https://api.github.com/repos/dita-ot/dita-ot
|
closed
|
Copy-to with key links to original rather than to copy
|
P2 bug preprocess/keyref
|
## Expected Behavior
If I have these key definitions:
```
<topicref href="alsocopykey.dita" keys="also1"/>
<topicref href="alsocopykey.dita" copy-to="alsocopykey-newname.dita" keys="also2"/>
```
Key references to `also1` should resolve to `alsocopykey.dita`, while key references to `also2` should resolve to the copied version, `alsocopykey-newname.dita`.
## Actual Behavior
Key references to both `also1` and `also2` resolve to the original topic, `alsocopykey.dita`. When I generate HTML5, two topics are generated but all links are to the original; when I generate PDF, the topic appears twice but all links are to the original.
I also have a reference that appears *only* with copy-to, and is never referenced without the copy-to:
`<topicref href="onlycopykey.dita" copy-to="onlycopykey-newname.dita" keys="only"/>`
For that case, the HTML5 build generates both `onlycopykey.html` and `onlycopykey-newname.html`, but all uses of `keyref="only"` resolve to the original name. In the PDF, `<xref keyref="only"/>` results in blue text but no link, because the original file `onlycopykey.dita` is not in the PDF to provide a link destination.
## Possible Solution
When the `keyref` value is resolved for key definitions that set `@copy-to`, the link should go to the copy specified in `@copy-to`. At this point don't yet know if that change is simple or would require a giant architectural change.
## Steps to Reproduce
Sample map includes both variants described above. Tested with DITA-OT 3.0 and 2.5.
[copykey.zip](https://github.com/dita-ot/dita-ot/files/1459173/copykey.zip)
This is related to #2841 (and is probably the preferred way to get the same result). Not sure if this overlaps #2826 at all, as my sample doesn't use branch filtering but does have a key that resolves to copy-to value.
|
1.0
|
Copy-to with key links to original rather than to copy - ## Expected Behavior
If I have these key definitions:
```
<topicref href="alsocopykey.dita" keys="also1"/>
<topicref href="alsocopykey.dita" copy-to="alsocopykey-newname.dita" keys="also2"/>
```
Key references to `also1` should resolve to `alsocopykey.dita`, while key references to `also2` should resolve to the copied version, `alsocopykey-newname.dita`.
## Actual Behavior
Key references to both `also1` and `also2` resolve to the original topic, `alsocopykey.dita`. When I generate HTML5, two topics are generated but all links are to the original; when I generate PDF, the topic appears twice but all links are to the original.
I also have a reference that appears *only* with copy-to, and is never referenced without the copy-to:
`<topicref href="onlycopykey.dita" copy-to="onlycopykey-newname.dita" keys="only"/>`
For that case, the HTML5 build generates both `onlycopykey.html` and `onlycopykey-newname.html`, but all uses of `keyref="only"` resolve to the original name. In the PDF, `<xref keyref="only"/>` results in blue text but no link, because the original file `onlycopykey.dita` is not in the PDF to provide a link destination.
## Possible Solution
When the `keyref` value is resolved for key definitions that set `@copy-to`, the link should go to the copy specified in `@copy-to`. At this point don't yet know if that change is simple or would require a giant architectural change.
## Steps to Reproduce
Sample map includes both variants described above. Tested with DITA-OT 3.0 and 2.5.
[copykey.zip](https://github.com/dita-ot/dita-ot/files/1459173/copykey.zip)
This is related to #2841 (and is probably the preferred way to get the same result). Not sure if this overlaps #2826 at all, as my sample doesn't use branch filtering but does have a key that resolves to copy-to value.
|
process
|
copy to with key links to original rather than to copy expected behavior if i have these key definitions key references to should resolve to alsocopykey dita while key references to should resolve to the copied version alsocopykey newname dita actual behavior key references to both and resolve to the original topic alsocopykey dita when i generate two topics are generated but all links are to the original when i generate pdf the topic appears twice but all links are to the original i also have a reference that appears only with copy to and is never referenced without the copy to for that case the build generates both onlycopykey html and onlycopykey newname html but all uses of keyref only resolve to the original name in the pdf results in blue text but no link because the original file onlycopykey dita is not in the pdf to provide a link destination possible solution when the keyref value is resolved for key definitions that set copy to the link should go to the copy specified in copy to at this point don t yet know if that change is simple or would require a giant architectural change steps to reproduce sample map includes both variants described above tested with dita ot and this is related to and is probably the preferred way to get the same result not sure if this overlaps at all as my sample doesn t use branch filtering but does have a key that resolves to copy to value
| 1
|
16,782
| 21,969,817,928
|
IssuesEvent
|
2022-05-25 01:52:18
|
quark-engine/quark-rules
|
https://api.github.com/repos/quark-engine/quark-rules
|
closed
|
Introduce Detection Rules Viewer in README
|
issue-processing-state-06
|
The team recently released a new tool called Detection Rules Viewer. With this tool, users can use labels and keywords to search rules in this repo. However, README hasn't introduced this tool to users yet. Thus, we need to update the file.
|
1.0
|
Introduce Detection Rules Viewer in README - The team recently released a new tool called Detection Rules Viewer. With this tool, users can use labels and keywords to search rules in this repo. However, README hasn't introduced this tool to users yet. Thus, we need to update the file.
|
process
|
introduce detection rules viewer in readme the team recently released a new tool called detection rules viewer with this tool users can use labels and keywords to search rules in this repo however readme hasn t introduced this tool to users yet thus we need to update the file
| 1
|
50,641
| 6,418,071,789
|
IssuesEvent
|
2017-08-08 18:08:53
|
18F/openFEC
|
https://api.github.com/repos/18F/openFEC
|
opened
|
Design solution for financial summaries of committees who file multiple form types
|
Work: Design
|
Over the course of a two-year period, the same committee might file reports on different form types (for example, a form 3 for congressional candidate committees and a form 3X for PACs and parties). This could either be because of mistake or because the committee changed types. But users need to be able to see financial summary totals in this case, which they currently can't do.
---
This issue is to solve the logic and figure out how we want to solve this problem. How do we show data from multiple form types in the financial summary tab since the different forms have different subfields?
This is a precursor to https://github.com/18F/openFEC/issues/2547.
**Completion criteria:**
- [ ] Write out logic for how this works
- [ ] Design screens
|
1.0
|
Design solution for financial summaries of committees who file multiple form types - Over the course of a two-year period, the same committee might file reports on different form types (for example, a form 3 for congressional candidate committees and a form 3X for PACs and parties). This could either be because of mistake or because the committee changed types. But users need to be able to see financial summary totals in this case, which they currently can't do.
---
This issue is to solve the logic and figure out how we want to solve this problem. How do we show data from multiple form types in the financial summary tab since the different forms have different subfields?
This is a precursor to https://github.com/18F/openFEC/issues/2547.
**Completion criteria:**
- [ ] Write out logic for how this works
- [ ] Design screens
|
non_process
|
design solution for financial summaries of committees who file multiple form types over the course of a two year period the same committee might file reports on different form types for example a form for congressional candidate committees and a form for pacs and parties this could either be because of mistake or because the committee changed types but users need to be able to see financial summary totals in this case which they currently can t do this issue is to solve the logic and figure out how we want to solve this problem how do we show data from multiple form types in the financial summary tab since the different forms have different subfields this is a precursor to completion criteria write out logic for how this works design screens
| 0
|
21,479
| 29,513,579,772
|
IssuesEvent
|
2023-06-04 08:09:55
|
parcel-bundler/parcel
|
https://api.github.com/repos/parcel-bundler/parcel
|
opened
|
CSS module composes triggers `Got unexpected null` in ScopeHoistingPackager
|
:bug: Bug CSS Preprocessing 🌳 Scope Hoisting
|
# 🐛 bug report
CSS module with composes triggers `Got unexpected null`
## 🤔 Expected Behavior
Works
## 😯 Current Behavior
```
Error: Got unexpected null
at nullthrows (parcel/node_modules/nullthrows/nullthrows.js:7:15)
at every (parcel/packages/packagers/js/src/ScopeHoistingPackager.js:1081:35)
at Array.every (<anonymous>)
at filter (parcel/packages/packagers/js/src/ScopeHoistingPackager.js:1080:35)
at Array.filter (<anonymous>)
at ScopeHoistingPackager.buildAssetPrelude (parcel/packages/packagers/js/src/ScopeHoistingPackager.js:1069:60)
at ScopeHoistingPackager.buildAsset (parcel/packages/packagers/js/src/ScopeHoistingPackager.js:448:48)
at ScopeHoistingPackager.visitAsset (parcel/packages/packagers/js/src/ScopeHoistingPackager.js:383:17)
at processAsset (parcel/packages/packagers/js/src/ScopeHoistingPackager.js:143:40)
at visit (parcel/packages/packagers/js/src/ScopeHoistingPackager.js:170:7)
at visit (parcel/packages/core/graph/src/Graph.js:504:16)
at visit (parcel/packages/core/graph/src/Graph.js:504:16)
at enter (parcel/packages/core/graph/src/Graph.js:504:16)
at walk (parcel/packages/core/graph/src/Graph.js:346:26)
at walk (parcel/packages/core/graph/src/Graph.js:367:22)
at ContentGraph.dfs (parcel/packages/core/graph/src/Graph.js:395:12)
at BundleGraph.traverseBundle (parcel/packages/core/core/src/BundleGraph.js:1321:24)
at BundleGraph.traverseAssets (parcel/packages/core/core/src/BundleGraph.js:1117:17)
at NamedBundle.traverseAssets (parcel/packages/core/core/src/public/Bundle.js:193:30)
at ScopeHoistingPackager.package (parcel/packages/packagers/js/src/ScopeHoistingPackager.js:164:17)
at processTicksAndRejections (node:internal/process/task_queues:95:5)
at Object.package (parcel/packages/packagers/js/src/index.js:62:26)
```
## 💁 Possible Solution
Something about the JS generated for the CSS module has to trigger this?
## 🔦 Context
https://github.com/parcel-bundler/parcel/issues/9058
## 💻 Code Sample
[src.zip](https://github.com/parcel-bundler/parcel/files/11645155/src.zip)
## 🌍 Your Environment
| Software | Version(s) |
| ---------------- | ---------- |
| Parcel | 0645f644238f6dbdadb8e01fa63be6ed2152830b
|
1.0
|
CSS module composes triggers `Got unexpected null` in ScopeHoistingPackager -
# 🐛 bug report
CSS module with composes triggers `Got unexpected null`
## 🤔 Expected Behavior
Works
## 😯 Current Behavior
```
Error: Got unexpected null
at nullthrows (parcel/node_modules/nullthrows/nullthrows.js:7:15)
at every (parcel/packages/packagers/js/src/ScopeHoistingPackager.js:1081:35)
at Array.every (<anonymous>)
at filter (parcel/packages/packagers/js/src/ScopeHoistingPackager.js:1080:35)
at Array.filter (<anonymous>)
at ScopeHoistingPackager.buildAssetPrelude (parcel/packages/packagers/js/src/ScopeHoistingPackager.js:1069:60)
at ScopeHoistingPackager.buildAsset (parcel/packages/packagers/js/src/ScopeHoistingPackager.js:448:48)
at ScopeHoistingPackager.visitAsset (parcel/packages/packagers/js/src/ScopeHoistingPackager.js:383:17)
at processAsset (parcel/packages/packagers/js/src/ScopeHoistingPackager.js:143:40)
at visit (parcel/packages/packagers/js/src/ScopeHoistingPackager.js:170:7)
at visit (parcel/packages/core/graph/src/Graph.js:504:16)
at visit (parcel/packages/core/graph/src/Graph.js:504:16)
at enter (parcel/packages/core/graph/src/Graph.js:504:16)
at walk (parcel/packages/core/graph/src/Graph.js:346:26)
at walk (parcel/packages/core/graph/src/Graph.js:367:22)
at ContentGraph.dfs (parcel/packages/core/graph/src/Graph.js:395:12)
at BundleGraph.traverseBundle (parcel/packages/core/core/src/BundleGraph.js:1321:24)
at BundleGraph.traverseAssets (parcel/packages/core/core/src/BundleGraph.js:1117:17)
at NamedBundle.traverseAssets (parcel/packages/core/core/src/public/Bundle.js:193:30)
at ScopeHoistingPackager.package (parcel/packages/packagers/js/src/ScopeHoistingPackager.js:164:17)
at processTicksAndRejections (node:internal/process/task_queues:95:5)
at Object.package (parcel/packages/packagers/js/src/index.js:62:26)
```
## 💁 Possible Solution
Something about the JS generated for the CSS module has to trigger this?
## 🔦 Context
https://github.com/parcel-bundler/parcel/issues/9058
## 💻 Code Sample
[src.zip](https://github.com/parcel-bundler/parcel/files/11645155/src.zip)
## 🌍 Your Environment
| Software | Version(s) |
| ---------------- | ---------- |
| Parcel | 0645f644238f6dbdadb8e01fa63be6ed2152830b
|
process
|
css module composes triggers got unexpected null in scopehoistingpackager 🐛 bug report css module with composes triggers got unexpected null 🤔 expected behavior works 😯 current behavior error got unexpected null at nullthrows parcel node modules nullthrows nullthrows js at every parcel packages packagers js src scopehoistingpackager js at array every at filter parcel packages packagers js src scopehoistingpackager js at array filter at scopehoistingpackager buildassetprelude parcel packages packagers js src scopehoistingpackager js at scopehoistingpackager buildasset parcel packages packagers js src scopehoistingpackager js at scopehoistingpackager visitasset parcel packages packagers js src scopehoistingpackager js at processasset parcel packages packagers js src scopehoistingpackager js at visit parcel packages packagers js src scopehoistingpackager js at visit parcel packages core graph src graph js at visit parcel packages core graph src graph js at enter parcel packages core graph src graph js at walk parcel packages core graph src graph js at walk parcel packages core graph src graph js at contentgraph dfs parcel packages core graph src graph js at bundlegraph traversebundle parcel packages core core src bundlegraph js at bundlegraph traverseassets parcel packages core core src bundlegraph js at namedbundle traverseassets parcel packages core core src public bundle js at scopehoistingpackager package parcel packages packagers js src scopehoistingpackager js at processticksandrejections node internal process task queues at object package parcel packages packagers js src index js 💁 possible solution something about the js generated for the css module has to trigger this 🔦 context 💻 code sample 🌍 your environment software version s parcel
| 1
|
100,279
| 4,082,257,217
|
IssuesEvent
|
2016-05-31 12:13:27
|
seiyria/deck.zone
|
https://api.github.com/repos/seiyria/deck.zone
|
closed
|
add autocomplete for snippets
|
feature:standard priority:anytime
|
one for each function type (it should show the zones and placeholder the arguments if possible)
additionally, when hitting enter after a loop line, it should insert an `endloop` afterwards.
|
1.0
|
add autocomplete for snippets - one for each function type (it should show the zones and placeholder the arguments if possible)
additionally, when hitting enter after a loop line, it should insert an `endloop` afterwards.
|
non_process
|
add autocomplete for snippets one for each function type it should show the zones and placeholder the arguments if possible additionally when hitting enter after a loop line it should insert an endloop afterwards
| 0
|
219,000
| 16,812,770,555
|
IssuesEvent
|
2021-06-17 01:28:53
|
Quansight/qhub
|
https://api.github.com/repos/Quansight/qhub
|
closed
|
Supporting multiple environments for deployment
|
Stale documentation
|
Talking with @aktech and @tylerpotts, we arrived at a solution after quite a bit of discussion, with these goals:
- minimize changes to the code
- be as straightforward as possible
We will document that for multiple deployment environments:
- separate repo per deployment
- project_id will reflect the environment e.g. `<project>-<environment>`
- we will remove the `environment` variable from variables.tf to prevent the confusion https://github.com/Quansight/qhub/blob/master/qhub/template/%7B%7B%20cookiecutter.repo_directory%20%7D%7D/infrastructure/variables.tf#L6-L9
|
1.0
|
Supporting multiple environments for deployment - Talking with @aktech and @tylerpotts, we arrived at a solution after quite a bit of discussion, with these goals:
- minimize changes to the code
- be as straightforward as possible
We will document that for multiple deployment environments:
- separate repo per deployment
- project_id will reflect the environment e.g. `<project>-<environment>`
- we will remove the `environment` variable from variables.tf to prevent the confusion https://github.com/Quansight/qhub/blob/master/qhub/template/%7B%7B%20cookiecutter.repo_directory%20%7D%7D/infrastructure/variables.tf#L6-L9
|
non_process
|
supporting multiple environments for deployment talking with aktech and tylerpotts we arrived at a solution after quite a bit of discussion with goals of minimize changes to the code be as straightforward as possible we will document that for multiple deployment environments separate repo per deployment project id will reflect the environment e g we will remove the environment variable from variables tf to prevent the confusion
| 0
|
35,586
| 12,360,812,949
|
IssuesEvent
|
2020-05-17 16:50:14
|
CephalonTobran/web
|
https://api.github.com/repos/CephalonTobran/web
|
closed
|
CVE-2018-19839 (Medium) detected in CSS::Sass-v3.4.11
|
security vulnerability
|
## CVE-2018-19839 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>CSS::Sass-v3.4.11</b></p></summary>
<p>
<p>Library home page: <a href=https://metacpan.org/pod/CSS::Sass>https://metacpan.org/pod/CSS::Sass</a></p>
<p>Found in HEAD commit: <a href="https://github.com/CephalonTobran/web/commit/6481ee08faa914d71937f6c7abc92e6a96c8bca0">6481ee08faa914d71937f6c7abc92e6a96c8bca0</a></p>
</p>
</details>
</p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Library Source Files (60)</summary>
<p></p>
<p> * The source files were matched to this source library based on a best effort match. Source libraries are selected from a list of probable public libraries.</p>
<p>
- /web/node_modules/node-sass/src/libsass/src/color_maps.cpp
- /web/node_modules/node-sass/src/libsass/src/sass_util.hpp
- /web/node_modules/node-sass/src/libsass/src/utf8/unchecked.h
- /web/node_modules/node-sass/src/libsass/src/output.hpp
- /web/node_modules/node-sass/src/libsass/src/b64/cencode.h
- /web/node_modules/node-sass/src/libsass/src/source_map.cpp
- /web/node_modules/node-sass/src/libsass/src/lexer.cpp
- /web/node_modules/node-sass/src/libsass/src/utf8.h
- /web/node_modules/node-sass/src/libsass/test/test_node.cpp
- /web/node_modules/node-sass/src/libsass/src/utf8_string.cpp
- /web/node_modules/node-sass/src/libsass/src/plugins.cpp
- /web/node_modules/node-sass/src/libsass/src/node.hpp
- /web/node_modules/node-sass/src/libsass/include/sass/base.h
- /web/node_modules/node-sass/src/libsass/src/json.hpp
- /web/node_modules/node-sass/src/libsass/src/environment.cpp
- /web/node_modules/node-sass/src/libsass/src/position.hpp
- /web/node_modules/node-sass/src/libsass/src/extend.hpp
- /web/node_modules/node-sass/src/libsass/src/subset_map.hpp
- /web/node_modules/node-sass/src/libsass/src/remove_placeholders.cpp
- /web/node_modules/node-sass/src/libsass/src/sass_context.hpp
- /web/node_modules/node-sass/src/libsass/src/sass.hpp
- /web/node_modules/node-sass/src/libsass/src/ast_fwd_decl.cpp
- /web/node_modules/node-sass/src/libsass/contrib/plugin.cpp
- /web/node_modules/node-sass/src/libsass/src/utf8/core.h
- /web/node_modules/node-sass/src/libsass/include/sass/functions.h
- /web/node_modules/node-sass/src/libsass/test/test_superselector.cpp
- /web/node_modules/node-sass/src/libsass/src/sass_functions.cpp
- /web/node_modules/node-sass/src/libsass/src/utf8_string.hpp
- /web/node_modules/node-sass/src/libsass/src/node.cpp
- /web/node_modules/node-sass/src/libsass/src/subset_map.cpp
- /web/node_modules/node-sass/src/libsass/src/base64vlq.cpp
- /web/node_modules/node-sass/src/libsass/src/listize.cpp
- /web/node_modules/node-sass/src/libsass/src/c99func.c
- /web/node_modules/node-sass/src/libsass/src/position.cpp
- /web/node_modules/node-sass/src/libsass/src/remove_placeholders.hpp
- /web/node_modules/node-sass/src/libsass/src/sass_functions.hpp
- /web/node_modules/node-sass/src/libsass/test/test_subset_map.cpp
- /web/node_modules/node-sass/src/libsass/src/sass2scss.cpp
- /web/node_modules/node-sass/src/libsass/src/memory/SharedPtr.cpp
- /web/node_modules/node-sass/src/libsass/src/paths.hpp
- /web/node_modules/node-sass/src/libsass/include/sass/context.h
- /web/node_modules/node-sass/src/libsass/src/color_maps.hpp
- /web/node_modules/node-sass/src/libsass/test/test_unification.cpp
- /web/node_modules/node-sass/src/libsass/src/sass_util.cpp
- /web/node_modules/node-sass/src/libsass/script/test-leaks.pl
- /web/node_modules/node-sass/src/libsass/src/source_map.hpp
- /web/node_modules/node-sass/src/libsass/src/lexer.hpp
- /web/node_modules/node-sass/src/libsass/src/memory/SharedPtr.hpp
- /web/node_modules/node-sass/src/libsass/src/json.cpp
- /web/node_modules/node-sass/src/libsass/src/units.cpp
- /web/node_modules/node-sass/src/libsass/src/to_c.hpp
- /web/node_modules/node-sass/src/libsass/src/units.hpp
- /web/node_modules/node-sass/src/libsass/src/b64/encode.h
- /web/node_modules/node-sass/src/libsass/src/file.hpp
- /web/node_modules/node-sass/src/libsass/src/environment.hpp
- /web/node_modules/node-sass/src/libsass/src/utf8/checked.h
- /web/node_modules/node-sass/src/libsass/src/plugins.hpp
- /web/node_modules/node-sass/src/libsass/src/listize.hpp
- /web/node_modules/node-sass/src/libsass/src/debug.hpp
- /web/node_modules/node-sass/src/libsass/include/sass2scss.h
</p>
</details>
<p></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
In LibSass prior to 3.5.5, the function handle_error in sass_context.cpp allows attackers to cause a denial-of-service resulting from a heap-based buffer over-read via a crafted sass file.
<p>Publish Date: 2018-12-04
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2018-19839>CVE-2018-19839</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>6.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: Required
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2018-19839">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2018-19839</a></p>
<p>Release Date: 2020-03-20</p>
<p>Fix Resolution: LibSass - 3.5.5</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
True
|
CVE-2018-19839 (Medium) detected in CSS::Sass-v3.4.11 - ## CVE-2018-19839 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>CSS::Sass-v3.4.11</b></p></summary>
<p>
<p>Library home page: <a href=https://metacpan.org/pod/CSS::Sass>https://metacpan.org/pod/CSS::Sass</a></p>
<p>Found in HEAD commit: <a href="https://github.com/CephalonTobran/web/commit/6481ee08faa914d71937f6c7abc92e6a96c8bca0">6481ee08faa914d71937f6c7abc92e6a96c8bca0</a></p>
</p>
</details>
</p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Library Source Files (60)</summary>
<p></p>
<p> * The source files were matched to this source library based on a best effort match. Source libraries are selected from a list of probable public libraries.</p>
<p>
- /web/node_modules/node-sass/src/libsass/src/color_maps.cpp
- /web/node_modules/node-sass/src/libsass/src/sass_util.hpp
- /web/node_modules/node-sass/src/libsass/src/utf8/unchecked.h
- /web/node_modules/node-sass/src/libsass/src/output.hpp
- /web/node_modules/node-sass/src/libsass/src/b64/cencode.h
- /web/node_modules/node-sass/src/libsass/src/source_map.cpp
- /web/node_modules/node-sass/src/libsass/src/lexer.cpp
- /web/node_modules/node-sass/src/libsass/src/utf8.h
- /web/node_modules/node-sass/src/libsass/test/test_node.cpp
- /web/node_modules/node-sass/src/libsass/src/utf8_string.cpp
- /web/node_modules/node-sass/src/libsass/src/plugins.cpp
- /web/node_modules/node-sass/src/libsass/src/node.hpp
- /web/node_modules/node-sass/src/libsass/include/sass/base.h
- /web/node_modules/node-sass/src/libsass/src/json.hpp
- /web/node_modules/node-sass/src/libsass/src/environment.cpp
- /web/node_modules/node-sass/src/libsass/src/position.hpp
- /web/node_modules/node-sass/src/libsass/src/extend.hpp
- /web/node_modules/node-sass/src/libsass/src/subset_map.hpp
- /web/node_modules/node-sass/src/libsass/src/remove_placeholders.cpp
- /web/node_modules/node-sass/src/libsass/src/sass_context.hpp
- /web/node_modules/node-sass/src/libsass/src/sass.hpp
- /web/node_modules/node-sass/src/libsass/src/ast_fwd_decl.cpp
- /web/node_modules/node-sass/src/libsass/contrib/plugin.cpp
- /web/node_modules/node-sass/src/libsass/src/utf8/core.h
- /web/node_modules/node-sass/src/libsass/include/sass/functions.h
- /web/node_modules/node-sass/src/libsass/test/test_superselector.cpp
- /web/node_modules/node-sass/src/libsass/src/sass_functions.cpp
- /web/node_modules/node-sass/src/libsass/src/utf8_string.hpp
- /web/node_modules/node-sass/src/libsass/src/node.cpp
- /web/node_modules/node-sass/src/libsass/src/subset_map.cpp
- /web/node_modules/node-sass/src/libsass/src/base64vlq.cpp
- /web/node_modules/node-sass/src/libsass/src/listize.cpp
- /web/node_modules/node-sass/src/libsass/src/c99func.c
- /web/node_modules/node-sass/src/libsass/src/position.cpp
- /web/node_modules/node-sass/src/libsass/src/remove_placeholders.hpp
- /web/node_modules/node-sass/src/libsass/src/sass_functions.hpp
- /web/node_modules/node-sass/src/libsass/test/test_subset_map.cpp
- /web/node_modules/node-sass/src/libsass/src/sass2scss.cpp
- /web/node_modules/node-sass/src/libsass/src/memory/SharedPtr.cpp
- /web/node_modules/node-sass/src/libsass/src/paths.hpp
- /web/node_modules/node-sass/src/libsass/include/sass/context.h
- /web/node_modules/node-sass/src/libsass/src/color_maps.hpp
- /web/node_modules/node-sass/src/libsass/test/test_unification.cpp
- /web/node_modules/node-sass/src/libsass/src/sass_util.cpp
- /web/node_modules/node-sass/src/libsass/script/test-leaks.pl
- /web/node_modules/node-sass/src/libsass/src/source_map.hpp
- /web/node_modules/node-sass/src/libsass/src/lexer.hpp
- /web/node_modules/node-sass/src/libsass/src/memory/SharedPtr.hpp
- /web/node_modules/node-sass/src/libsass/src/json.cpp
- /web/node_modules/node-sass/src/libsass/src/units.cpp
- /web/node_modules/node-sass/src/libsass/src/to_c.hpp
- /web/node_modules/node-sass/src/libsass/src/units.hpp
- /web/node_modules/node-sass/src/libsass/src/b64/encode.h
- /web/node_modules/node-sass/src/libsass/src/file.hpp
- /web/node_modules/node-sass/src/libsass/src/environment.hpp
- /web/node_modules/node-sass/src/libsass/src/utf8/checked.h
- /web/node_modules/node-sass/src/libsass/src/plugins.hpp
- /web/node_modules/node-sass/src/libsass/src/listize.hpp
- /web/node_modules/node-sass/src/libsass/src/debug.hpp
- /web/node_modules/node-sass/src/libsass/include/sass2scss.h
</p>
</details>
<p></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
In LibSass prior to 3.5.5, the function handle_error in sass_context.cpp allows attackers to cause a denial-of-service resulting from a heap-based buffer over-read via a crafted sass file.
<p>Publish Date: 2018-12-04
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2018-19839>CVE-2018-19839</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>6.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: Required
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2018-19839">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2018-19839</a></p>
<p>Release Date: 2020-03-20</p>
<p>Fix Resolution: LibSass - 3.5.5</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
non_process
|
cve medium detected in css sass cve medium severity vulnerability vulnerable library css library home page a href found in head commit a href library source files the source files were matched to this source library based on a best effort match source libraries are selected from a list of probable public libraries web node modules node sass src libsass src color maps cpp web node modules node sass src libsass src sass util hpp web node modules node sass src libsass src unchecked h web node modules node sass src libsass src output hpp web node modules node sass src libsass src cencode h web node modules node sass src libsass src source map cpp web node modules node sass src libsass src lexer cpp web node modules node sass src libsass src h web node modules node sass src libsass test test node cpp web node modules node sass src libsass src string cpp web node modules node sass src libsass src plugins cpp web node modules node sass src libsass src node hpp web node modules node sass src libsass include sass base h web node modules node sass src libsass src json hpp web node modules node sass src libsass src environment cpp web node modules node sass src libsass src position hpp web node modules node sass src libsass src extend hpp web node modules node sass src libsass src subset map hpp web node modules node sass src libsass src remove placeholders cpp web node modules node sass src libsass src sass context hpp web node modules node sass src libsass src sass hpp web node modules node sass src libsass src ast fwd decl cpp web node modules node sass src libsass contrib plugin cpp web node modules node sass src libsass src core h web node modules node sass src libsass include sass functions h web node modules node sass src libsass test test superselector cpp web node modules node sass src libsass src sass functions cpp web node modules node sass src libsass src string hpp web node modules node sass src libsass src node cpp web node modules node sass src libsass src 
subset map cpp web node modules node sass src libsass src cpp web node modules node sass src libsass src listize cpp web node modules node sass src libsass src c web node modules node sass src libsass src position cpp web node modules node sass src libsass src remove placeholders hpp web node modules node sass src libsass src sass functions hpp web node modules node sass src libsass test test subset map cpp web node modules node sass src libsass src cpp web node modules node sass src libsass src memory sharedptr cpp web node modules node sass src libsass src paths hpp web node modules node sass src libsass include sass context h web node modules node sass src libsass src color maps hpp web node modules node sass src libsass test test unification cpp web node modules node sass src libsass src sass util cpp web node modules node sass src libsass script test leaks pl web node modules node sass src libsass src source map hpp web node modules node sass src libsass src lexer hpp web node modules node sass src libsass src memory sharedptr hpp web node modules node sass src libsass src json cpp web node modules node sass src libsass src units cpp web node modules node sass src libsass src to c hpp web node modules node sass src libsass src units hpp web node modules node sass src libsass src encode h web node modules node sass src libsass src file hpp web node modules node sass src libsass src environment hpp web node modules node sass src libsass src checked h web node modules node sass src libsass src plugins hpp web node modules node sass src libsass src listize hpp web node modules node sass src libsass src debug hpp web node modules node sass src libsass include h vulnerability details in libsass prior to the function handle error in sass context cpp allows attackers to cause a denial of service resulting from a heap based buffer over read via a crafted sass file publish date url a href cvss score details base score metrics exploitability metrics attack vector network 
attack complexity low privileges required none user interaction required scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution libsass step up your open source security game with whitesource
| 0
|
480,572
| 13,854,254,026
|
IssuesEvent
|
2020-10-15 09:16:08
|
AY2021S1-CS2113T-F12-4/tp
|
https://api.github.com/repos/AY2021S1-CS2113T-F12-4/tp
|
closed
|
As a user, view breakdown of my time spent on each module
|
priority.High type.Story
|
... so that I can pinpoint in detail where I spend most of my time
|
1.0
|
As a user, view breakdown of my time spent on each module - ... so that I can pinpoint in detail where I spend most of my time
|
non_process
|
as a user view breakdown of my time spent on each module so that i can pinpoint in detail where i spend most of my time
| 0
|
5,262
| 2,610,184,310
|
IssuesEvent
|
2015-02-26 18:58:36
|
chrsmith/quchuseban
|
https://api.github.com/repos/chrsmith/quchuseban
|
opened
|
力荐去色斑的产品
|
auto-migrated Priority-Medium Type-Defect
|
```
《摘要》
岁月忽已远,且随流年散,思思长长无日是尽头,情情深深无夜是尾声,不若肯爱红尘轻一笑,如一花一世界,不若肯爱时间淡流光,如一方一净土,不若肯爱尘世步清风,如一木一浮生。轻弹古琴,轻奏流年,淡听荣辱,淡临湖海,浅吟清茶,浅问年华,流光溢彩,流光岁月,独步往昔,独步清风,静看云卷,静观花落。岁月雕琢的美景,时光堆砌的伤痕,我们总要一一领略过,才能真的身随心动,凌波万里路,我们也总要一一思虑过,才能真的身心合一,含笑忘忧人。你还在为你脸上的色斑黯然伤神吗?是不是找了很多的药物之后都觉得没有效果呢?现在最有效的祛斑产品黛芙薇尔祛斑胶囊彻底治愈色斑,你还在等什么那去色斑的产品,
《客户案例》
作为八十年代的女强人,我一直在忙忙碌碌经营我的事��
�,可近年来发现自己脸上和手上长了许多的老年斑出来,皮�
��也变的粗糙、灰暗起来,这可给了我很大的打击,我才五十
多岁呢,怎么可以就步入老年人的行列,可是对着自己一脸��
�老年斑,我不服老也不行。为了把脸上的老年斑去掉,我买�
��很多最高档的化妆品和护肤品来用,而且只要一听说能祛斑
的产品我就用,就,可总是没法去掉脸上的老年斑。<br>
今年三月份的时候我在网上偶然见到了「黛芙薇尔精华��
�」,我一看是外用的祛斑产品,感觉应该比内服的祛斑产品�
��全多了,因为我一直在用外敷的祛斑产品,不但祛斑效果慢
,而且会在皮肤上留下色印子。使用「黛芙薇尔精华液」前��
�我咨询过「黛芙薇尔精华液」祛斑专家,专家说「黛芙薇尔�
��华液」主要是通过全面调节人体内分泌,加速皮肤的色斑色
素代谢来达到祛斑的目的,我听专家说的专业而且在理,当��
�就在公司网站上定购了一个周期的产品。货到付款,很快快�
��人员就把我定购的套装送到家里来了。<br>
使用前几天感觉不出什么效果,脸上的色斑什么变化都��
�有,但是也没有什么不舒服的症状出现,没有“不耐受”“�
��敏”之类的问题。半个月以后,就发现颧骨附近的斑颜色变
淡了,脸上没有以前那么干燥,皮肤看起来也没那么灰暗了��
�使用半个月以后我就感觉自己的精神状况都好了,记忆情况�
��改善了,不会丢三落四动不动感觉疲倦。一个月过去了,我
体验到的使用效果就更明显了,首先是皮肤得到了改善,变��
�不再粗糙,还白了许多,斑也开始缩小变淡,鼻梁上芝麻粒�
��小的斑已经基本见不到了,大块一点的斑摸上去也没有硬块
,颜色从褐色变成了浅褐色。我就又订购了一个周期的,第��
�个周期使用完以后,脸上的斑点差不多都消失了,下巴和鼻�
��附近的斑大部分都消失了,颧骨附近还有一些没有消退,但
是颜色已经变得很淡,不仔细看看不出来。「黛芙薇尔精华��
�」祛斑专家说我吸收好,体质也还可以,所以祛斑的效果特�
��显著,此外,我皮肤还变好了,也变白了不少。
后来我打电话咨询了一下专家,问她像我这种情况还需要使��
�多久的产品才能彻底清除脸上的斑点,专家说只要我巩固治�
��一下就可以了,于是我又买了第三个周期继续使用,结果非
常理想,我脸上的斑已经见不到了,褐色颜色的斑似乎在一��
�之间被皮肤抚平了,斑印、斑痕也不见了,皮肤变得白净,�
��好高兴,老年斑再不会困扰我了,我又做回当初那个女强人
。
阅读了去色斑的产品,再看脸上容易长斑的原因:
《色斑形成原因》
内部因素
一、压力
当人受到压力时,就会分泌肾上腺素,为对付压力而做��
�备。如果长期受到压力,人体新陈代谢的平衡就会遭到破坏�
��皮肤所需的营养供应趋于缓慢,色素母细胞就会变得很活跃
。
二、荷尔蒙分泌失调
避孕药里所含的女性荷尔蒙雌激素,会刺激麦拉宁细胞��
�分泌而形成不均匀的斑点,因避孕药而形成的斑点,虽然在�
��药中断后会停止,但仍会在皮肤上停留很长一段时间。怀孕
中因女性荷尔蒙雌激素的增加,从怀孕4—5个月开始会容易出
现斑,这时候出现的斑点在产后大部分会消失。可是,新陈��
�谢不正常、肌肤裸露在强烈的紫外线下、精神上受到压力等�
��因,都会使斑加深。有时新长出的斑,产后也不会消失,所
以需要更加注意。
三、新陈代谢缓慢
肝的新陈代谢功能不正常或卵巢功能减退时也会出现斑��
�因为新陈代谢不顺畅、或内分泌失调,使身体处于敏感状态�
��,从而加剧色素问题。我们常说的便秘会形成斑,其实就是
内分泌失调导致过敏体质而形成的。另外,身体状态不正常��
�时候,紫外线的照射也会加速斑的形成。
四、错误的使用化妆品
使用了不适合自己皮肤的化妆品,会导致皮肤过敏。在��
�疗的过程中如过量照射到紫外线,皮肤会为了抵御外界的侵�
��,在有炎症的部位聚集麦拉宁色素,这样会出现色素沉着的
问题。
外部因素
一、紫外线
照射紫外线的时候,人体为了保护皮肤,会在基底层产��
�很多麦拉宁色素。所以为了保护皮肤,会在敏感部位聚集更�
��的色素。经常裸露在强烈的阳光底下不仅促进皮肤的老化,
还会引起黑斑、雀斑等色素沉着的皮肤疾患。
二、不良的清洁习惯
因强烈的清洁习惯使皮肤变得敏感,这样会刺激皮肤。��
�皮肤敏感时,人体为了保护皮肤,黑色素细胞会分泌很多麦�
��宁色素,当色素过剩时就出现了斑、瑕疵等皮肤色素沉着的
问题。
三、遗传基因
父母中有长斑的,则本人长斑的概率就很高,这种情况��
�一定程度上就可判定是遗传基因的作用。所以家里特别是长�
��有长斑的人,要注意避免引发长斑的重要因素之一——紫外
线照射,这是预防斑必须注意的。
《有疑问帮你解决》
1,黛芙薇尔精华液真的有效果吗?真的可以把脸上的黄褐��
�去掉吗?
答:黛芙薇尔精华液DNA精华能够有效的修复周围难以触��
�的色斑,其独有的纳豆成分为皮肤的美白与靓丽,提供了必�
��可少的营养物质,可以有效的去除黄褐斑,黄褐斑,黄褐斑
,蝴蝶斑,晒斑、妊娠斑等。它它完全突破了传统的美肤时��
�,宛如在皮肤中注入了一杯兼具活化、再生、滋养等功效的�
��尾酒,同时为脸部提供大量有机维生素精华,脸部的改变显
而易见。自产品上市以来,老顾客纷纷介绍新顾客,71%的新��
�客都是通过老顾客介绍而来,口碑由此而来!
2,服用黛芙薇尔美白,会伤身体吗?有副作用吗?
答:黛芙薇尔精华液应用了精纯复合配方和领先的分类��
�斑科技,并将“DNA美肤系统”疗法应用到了该产品中,能彻�
��祛除黄褐斑,蝴蝶斑,妊娠斑,晒斑,黄褐斑,老年斑,有
效淡化黄褐斑至接近肤色。黛芙薇尔通过法国、美国、台湾��
�地的专家通力协作,超过10年的研究以全新的DNA肌肤修复技��
�,挑战传统化学护肤理念,不懈追寻发现破译大自然的美丽�
��迹,令每一位爱美的女性都能享受到科技创新所带来的自然
之美。
专为亚洲女性肤质研制,精心呵护女性美丽,多年来,为数��
�百万计的女性解除了黄褐斑困扰。深得广大女性朋友的信赖!
3,去除黄褐斑之后,会反弹吗?
答:很多曾经长了黄褐斑的人士,自从选择了黛芙薇尔��
�白,就一劳永逸。这款祛斑产品是经过数十位权威祛斑专家�
��据斑的形成原因精心研制而成用事实说话,让消费者打分。
树立权威品牌!我们的很多新客户都是老客户介绍而来,请问�
��如果效果不好,会有客户转介绍吗?
4,你们的价格有点贵,能不能便宜一点?
答:如果您使用西药最少需要2000元,煎服的药最少需要3
000元,做手术最少是5000元,而这些毫无疑问,不会对彻底去�
��你的斑点有任何帮助!一分价钱,一份价值,我们现在做的��
�是一个口碑,一个品牌,价钱并不高。如果花这点钱把你的�
��褐斑彻底去除,你还会觉得贵吗?你还会再去花那么多冤枉��
�,不但斑没去掉,还把自己的皮肤弄的越来越糟吗
5,我适合用黛芙薇尔精华液吗?
答:黛芙薇尔适用人群:
1、生理紊乱引起的黄褐斑人群
2、生育引起的妊娠斑人群
3、年纪增长引起的老年斑人群
4、化妆品色素沉积、辐射斑人群
5、长期日照引起的日晒斑人群
6、肌肤暗淡急需美白的人群
《祛斑小方法》
去色斑的产品,同时为您分享祛斑小方法
1、维生素类:维生素A、B1、B2、B6、C、D、E,各有其用途。
2、芦荟:味苦寒,水解后产生芦荟大黄蒽苷,含丰富的维生�
��B2、B6、B12和多种氨基酸,内含大量的芦荟苷,不论外用或��
�服,均有十分奇特的美容效果。
```
-----
Original issue reported on code.google.com by `additive...@gmail.com` on 1 Jul 2014 at 3:18
|
1.0
|
力荐去色斑的产品 - ```
《摘要》
岁月忽已远,且随流年散,思思长长无日是尽头,情情深深无夜是尾声,不若肯爱红尘轻一笑,如一花一世界,不若肯爱时间淡流光,如一方一净土,不若肯爱尘世步清风,如一木一浮生。轻弹古琴,轻奏流年,淡听荣辱,淡临湖海,浅吟清茶,浅问年华,流光溢彩,流光岁月,独步往昔,独步清风,静看云卷,静观花落。岁月雕琢的美景,时光堆砌的伤痕,我们总要一一领略过,才能真的身随心动,凌波万里路,我们也总要一一思虑过,才能真的身心合一,含笑忘忧人。你还在为你脸上的色斑黯然伤神吗?是不是找了很多的药物之后都觉得没有效果呢?现在最有效的祛斑产品黛芙薇尔祛斑胶囊彻底治愈色斑,你还在等什么那去色斑的产品,
《客户案例》
作为八十年代的女强人,我一直在忙忙碌碌经营我的事��
�,可近年来发现自己脸上和手上长了许多的老年斑出来,皮�
��也变的粗糙、灰暗起来,这可给了我很大的打击,我才五十
多岁呢,怎么可以就步入老年人的行列,可是对着自己一脸��
�老年斑,我不服老也不行。为了把脸上的老年斑去掉,我买�
��很多最高档的化妆品和护肤品来用,而且只要一听说能祛斑
的产品我就用,就,可总是没法去掉脸上的老年斑。<br>
今年三月份的时候我在网上偶然见到了「黛芙薇尔精华��
�」,我一看是外用的祛斑产品,感觉应该比内服的祛斑产品�
��全多了,因为我一直在用外敷的祛斑产品,不但祛斑效果慢
,而且会在皮肤上留下色印子。使用「黛芙薇尔精华液」前��
�我咨询过「黛芙薇尔精华液」祛斑专家,专家说「黛芙薇尔�
��华液」主要是通过全面调节人体内分泌,加速皮肤的色斑色
素代谢来达到祛斑的目的,我听专家说的专业而且在理,当��
�就在公司网站上定购了一个周期的产品。货到付款,很快快�
��人员就把我定购的套装送到家里来了。<br>
使用前几天感觉不出什么效果,脸上的色斑什么变化都��
�有,但是也没有什么不舒服的症状出现,没有“不耐受”“�
��敏”之类的问题。半个月以后,就发现颧骨附近的斑颜色变
淡了,脸上没有以前那么干燥,皮肤看起来也没那么灰暗了��
�使用半个月以后我就感觉自己的精神状况都好了,记忆情况�
��改善了,不会丢三落四动不动感觉疲倦。一个月过去了,我
体验到的使用效果就更明显了,首先是皮肤得到了改善,变��
�不再粗糙,还白了许多,斑也开始缩小变淡,鼻梁上芝麻粒�
��小的斑已经基本见不到了,大块一点的斑摸上去也没有硬块
,颜色从褐色变成了浅褐色。我就又订购了一个周期的,第��
�个周期使用完以后,脸上的斑点差不多都消失了,下巴和鼻�
��附近的斑大部分都消失了,颧骨附近还有一些没有消退,但
是颜色已经变得很淡,不仔细看看不出来。「黛芙薇尔精华��
�」祛斑专家说我吸收好,体质也还可以,所以祛斑的效果特�
��显著,此外,我皮肤还变好了,也变白了不少。
后来我打电话咨询了一下专家,问她像我这种情况还需要使��
�多久的产品才能彻底清除脸上的斑点,专家说只要我巩固治�
��一下就可以了,于是我又买了第三个周期继续使用,结果非
常理想,我脸上的斑已经见不到了,褐色颜色的斑似乎在一��
�之间被皮肤抚平了,斑印、斑痕也不见了,皮肤变得白净,�
��好高兴,老年斑再不会困扰我了,我又做回当初那个女强人
。
阅读了去色斑的产品,再看脸上容易长斑的原因:
《色斑形成原因》
内部因素
一、压力
当人受到压力时,就会分泌肾上腺素,为对付压力而做��
�备。如果长期受到压力,人体新陈代谢的平衡就会遭到破坏�
��皮肤所需的营养供应趋于缓慢,色素母细胞就会变得很活跃
。
二、荷尔蒙分泌失调
避孕药里所含的女性荷尔蒙雌激素,会刺激麦拉宁细胞��
�分泌而形成不均匀的斑点,因避孕药而形成的斑点,虽然在�
��药中断后会停止,但仍会在皮肤上停留很长一段时间。怀孕
中因女性荷尔蒙雌激素的增加,从怀孕4—5个月开始会容易出
现斑,这时候出现的斑点在产后大部分会消失。可是,新陈��
�谢不正常、肌肤裸露在强烈的紫外线下、精神上受到压力等�
��因,都会使斑加深。有时新长出的斑,产后也不会消失,所
以需要更加注意。
三、新陈代谢缓慢
肝的新陈代谢功能不正常或卵巢功能减退时也会出现斑��
�因为新陈代谢不顺畅、或内分泌失调,使身体处于敏感状态�
��,从而加剧色素问题。我们常说的便秘会形成斑,其实就是
内分泌失调导致过敏体质而形成的。另外,身体状态不正常��
�时候,紫外线的照射也会加速斑的形成。
四、错误的使用化妆品
使用了不适合自己皮肤的化妆品,会导致皮肤过敏。在��
�疗的过程中如过量照射到紫外线,皮肤会为了抵御外界的侵�
��,在有炎症的部位聚集麦拉宁色素,这样会出现色素沉着的
问题。
外部因素
一、紫外线
照射紫外线的时候,人体为了保护皮肤,会在基底层产��
�很多麦拉宁色素。所以为了保护皮肤,会在敏感部位聚集更�
��的色素。经常裸露在强烈的阳光底下不仅促进皮肤的老化,
还会引起黑斑、雀斑等色素沉着的皮肤疾患。
二、不良的清洁习惯
因强烈的清洁习惯使皮肤变得敏感,这样会刺激皮肤。��
�皮肤敏感时,人体为了保护皮肤,黑色素细胞会分泌很多麦�
��宁色素,当色素过剩时就出现了斑、瑕疵等皮肤色素沉着的
问题。
三、遗传基因
父母中有长斑的,则本人长斑的概率就很高,这种情况��
�一定程度上就可判定是遗传基因的作用。所以家里特别是长�
��有长斑的人,要注意避免引发长斑的重要因素之一——紫外
线照射,这是预防斑必须注意的。
《有疑问帮你解决》
1,黛芙薇尔精华液真的有效果吗?真的可以把脸上的黄褐��
�去掉吗?
答:黛芙薇尔精华液DNA精华能够有效的修复周围难以触��
�的色斑,其独有的纳豆成分为皮肤的美白与靓丽,提供了必�
��可少的营养物质,可以有效的去除黄褐斑,黄褐斑,黄褐斑
,蝴蝶斑,晒斑、妊娠斑等。它它完全突破了传统的美肤时��
�,宛如在皮肤中注入了一杯兼具活化、再生、滋养等功效的�
��尾酒,同时为脸部提供大量有机维生素精华,脸部的改变显
而易见。自产品上市以来,老顾客纷纷介绍新顾客,71%的新��
�客都是通过老顾客介绍而来,口碑由此而来!
2,服用黛芙薇尔美白,会伤身体吗?有副作用吗?
答:黛芙薇尔精华液应用了精纯复合配方和领先的分类��
�斑科技,并将“DNA美肤系统”疗法应用到了该产品中,能彻�
��祛除黄褐斑,蝴蝶斑,妊娠斑,晒斑,黄褐斑,老年斑,有
效淡化黄褐斑至接近肤色。黛芙薇尔通过法国、美国、台湾��
�地的专家通力协作,超过10年的研究以全新的DNA肌肤修复技��
�,挑战传统化学护肤理念,不懈追寻发现破译大自然的美丽�
��迹,令每一位爱美的女性都能享受到科技创新所带来的自然
之美。
专为亚洲女性肤质研制,精心呵护女性美丽,多年来,为数��
�百万计的女性解除了黄褐斑困扰。深得广大女性朋友的信赖!
3,去除黄褐斑之后,会反弹吗?
答:很多曾经长了黄褐斑的人士,自从选择了黛芙薇尔��
�白,就一劳永逸。这款祛斑产品是经过数十位权威祛斑专家�
��据斑的形成原因精心研制而成用事实说话,让消费者打分。
树立权威品牌!我们的很多新客户都是老客户介绍而来,请问�
��如果效果不好,会有客户转介绍吗?
4,你们的价格有点贵,能不能便宜一点?
答:如果您使用西药最少需要2000元,煎服的药最少需要3
000元,做手术最少是5000元,而这些毫无疑问,不会对彻底去�
��你的斑点有任何帮助!一分价钱,一份价值,我们现在做的��
�是一个口碑,一个品牌,价钱并不高。如果花这点钱把你的�
��褐斑彻底去除,你还会觉得贵吗?你还会再去花那么多冤枉��
�,不但斑没去掉,还把自己的皮肤弄的越来越糟吗
5,我适合用黛芙薇尔精华液吗?
答:黛芙薇尔适用人群:
1、生理紊乱引起的黄褐斑人群
2、生育引起的妊娠斑人群
3、年纪增长引起的老年斑人群
4、化妆品色素沉积、辐射斑人群
5、长期日照引起的日晒斑人群
6、肌肤暗淡急需美白的人群
《祛斑小方法》
去色斑的产品,同时为您分享祛斑小方法
1、维生素类:维生素A、B1、B2、B6、C、D、E,各有其用途。
2、芦荟:味苦寒,水解后产生芦荟大黄蒽苷,含丰富的维生�
��B2、B6、B12和多种氨基酸,内含大量的芦荟苷,不论外用或��
�服,均有十分奇特的美容效果。
```
-----
Original issue reported on code.google.com by `additive...@gmail.com` on 1 Jul 2014 at 3:18
|
non_process
|
力荐去色斑的产品 《摘要》 岁月忽已远,且随流年散,思思长长无日是尽头,情情深深无夜是尾声,不若肯爱红尘轻一笑,如一花一世界,不若肯爱时间淡流光,如一方一净土,不若肯爱尘世步清风,如一木一浮生。轻弹古琴,轻奏流年,淡听荣辱,淡临湖海,浅吟清茶,浅问年华,流光溢彩,流光岁月,独步往昔,独步清风,静看云卷,静观花落。岁月雕琢的美景,时光堆砌的伤痕,我们总要一一领略过,才能真的身随心动,凌波万里路,我们也总要一一思虑过,才能真的身心合一,含笑忘忧人。你还在为你脸上的色斑黯然伤神吗?是不是找了很多的药物之后都觉得没有效果呢?现在最有效的祛斑产品黛芙薇尔祛斑胶囊彻底治愈色斑,你还在等什么那去色斑的产品, 《客户案例》 作为八十年代的女强人,我一直在忙忙碌碌经营我的事�� �,可近年来发现自己脸上和手上长了许多的老年斑出来,皮� ��也变的粗糙、灰暗起来,这可给了我很大的打击,我才五十 多岁呢,怎么可以就步入老年人的行列,可是对着自己一脸�� �老年斑,我不服老也不行。为了把脸上的老年斑去掉,我买� ��很多最高档的化妆品和护肤品来用,而且只要一听说能祛斑 的产品我就用,就,可总是没法去掉脸上的老年斑。 今年三月份的时候我在网上偶然见到了「黛芙薇尔精华�� �」,我一看是外用的祛斑产品,感觉应该比内服的祛斑产品� ��全多了,因为我一直在用外敷的祛斑产品,不但祛斑效果慢 ,而且会在皮肤上留下色印子。使用「黛芙薇尔精华液」前�� �我咨询过「黛芙薇尔精华液」祛斑专家,专家说「黛芙薇尔� ��华液」主要是通过全面调节人体内分泌,加速皮肤的色斑色 素代谢来达到祛斑的目的,我听专家说的专业而且在理,当�� �就在公司网站上定购了一个周期的产品。货到付款,很快快� ��人员就把我定购的套装送到家里来了。 使用前几天感觉不出什么效果,脸上的色斑什么变化都�� �有,但是也没有什么不舒服的症状出现,没有“不耐受”“� ��敏”之类的问题。半个月以后,就发现颧骨附近的斑颜色变 淡了,脸上没有以前那么干燥,皮肤看起来也没那么灰暗了�� �使用半个月以后我就感觉自己的精神状况都好了,记忆情况� ��改善了,不会丢三落四动不动感觉疲倦。一个月过去了,我 体验到的使用效果就更明显了,首先是皮肤得到了改善,变�� �不再粗糙,还白了许多,斑也开始缩小变淡,鼻梁上芝麻粒� ��小的斑已经基本见不到了,大块一点的斑摸上去也没有硬块 ,颜色从褐色变成了浅褐色。我就又订购了一个周期的,第�� �个周期使用完以后,脸上的斑点差不多都消失了,下巴和鼻� ��附近的斑大部分都消失了,颧骨附近还有一些没有消退,但 是颜色已经变得很淡,不仔细看看不出来。「黛芙薇尔精华�� �」祛斑专家说我吸收好,体质也还可以,所以祛斑的效果特� ��显著,此外,我皮肤还变好了,也变白了不少。 后来我打电话咨询了一下专家,问她像我这种情况还需要使�� �多久的产品才能彻底清除脸上的斑点,专家说只要我巩固治� ��一下就可以了,于是我又买了第三个周期继续使用,结果非 常理想,我脸上的斑已经见不到了,褐色颜色的斑似乎在一�� �之间被皮肤抚平了,斑印、斑痕也不见了,皮肤变得白净,� ��好高兴,老年斑再不会困扰我了,我又做回当初那个女强人 。 阅读了去色斑的产品,再看脸上容易长斑的原因: 《色斑形成原因》 内部因素 一、压力 当人受到压力时,就会分泌肾上腺素,为对付压力而做�� �备。如果长期受到压力,人体新陈代谢的平衡就会遭到破坏� ��皮肤所需的营养供应趋于缓慢,色素母细胞就会变得很活跃 。 二、荷尔蒙分泌失调 避孕药里所含的女性荷尔蒙雌激素,会刺激麦拉宁细胞�� �分泌而形成不均匀的斑点,因避孕药而形成的斑点,虽然在� ��药中断后会停止,但仍会在皮肤上停留很长一段时间。怀孕 中因女性荷尔蒙雌激素的增加, — 现斑,这时候出现的斑点在产后大部分会消失。可是,新陈�� �谢不正常、肌肤裸露在强烈的紫外线下、精神上受到压力等� ��因,都会使斑加深。有时新长出的斑,产后也不会消失,所 以需要更加注意。 三、新陈代谢缓慢 肝的新陈代谢功能不正常或卵巢功能减退时也会出现斑�� �因为新陈代谢不顺畅、或内分泌失调,使身体处于敏感状态� ��,从而加剧色素问题。我们常说的便秘会形成斑,其实就是 内分泌失调导致过敏体质而形成的。另外,身体状态不正常�� �时候,紫外线的照射也会加速斑的形成。 四、错误的使用化妆品 使用了不适合自己皮肤的化妆品,会导致皮肤过敏。在�� �疗的过程中如过量照射到紫外线,皮肤会为了抵御外界的侵� ��,在有炎症的部位聚集麦拉宁色素,这样会出现色素沉着的 问题。 外部因素 一、紫外线 照射紫外线的时候,人体为了保护皮肤,会在基底层产�� 
�很多麦拉宁色素。所以为了保护皮肤,会在敏感部位聚集更� ��的色素。经常裸露在强烈的阳光底下不仅促进皮肤的老化, 还会引起黑斑、雀斑等色素沉着的皮肤疾患。 二、不良的清洁习惯 因强烈的清洁习惯使皮肤变得敏感,这样会刺激皮肤。�� �皮肤敏感时,人体为了保护皮肤,黑色素细胞会分泌很多麦� ��宁色素,当色素过剩时就出现了斑、瑕疵等皮肤色素沉着的 问题。 三、遗传基因 父母中有长斑的,则本人长斑的概率就很高,这种情况�� �一定程度上就可判定是遗传基因的作用。所以家里特别是长� ��有长斑的人,要注意避免引发长斑的重要因素之一——紫外 线照射,这是预防斑必须注意的。 《有疑问帮你解决》 黛芙薇尔精华液真的有效果吗 真的可以把脸上的黄褐�� �去掉吗 答:黛芙薇尔精华液dna精华能够有效的修复周围难以触�� �的色斑,其独有的纳豆成分为皮肤的美白与靓丽,提供了必� ��可少的营养物质,可以有效的去除黄褐斑,黄褐斑,黄褐斑 ,蝴蝶斑,晒斑、妊娠斑等。它它完全突破了传统的美肤时�� �,宛如在皮肤中注入了一杯兼具活化、再生、滋养等功效的� ��尾酒,同时为脸部提供大量有机维生素精华,脸部的改变显 而易见。自产品上市以来,老顾客纷纷介绍新顾客, 的新�� �客都是通过老顾客介绍而来,口碑由此而来 ,服用黛芙薇尔美白,会伤身体吗 有副作用吗 答:黛芙薇尔精华液应用了精纯复合配方和领先的分类�� �斑科技,并将“dna美肤系统”疗法应用到了该产品中,能彻� ��祛除黄褐斑,蝴蝶斑,妊娠斑,晒斑,黄褐斑,老年斑,有 效淡化黄褐斑至接近肤色。黛芙薇尔通过法国、美国、台湾�� �地的专家通力协作, �� �,挑战传统化学护肤理念,不懈追寻发现破译大自然的美丽� ��迹,令每一位爱美的女性都能享受到科技创新所带来的自然 之美。 专为亚洲女性肤质研制,精心呵护女性美丽,多年来,为数�� �百万计的女性解除了黄褐斑困扰。深得广大女性朋友的信赖 ,去除黄褐斑之后,会反弹吗 答:很多曾经长了黄褐斑的人士,自从选择了黛芙薇尔�� �白,就一劳永逸。这款祛斑产品是经过数十位权威祛斑专家� ��据斑的形成原因精心研制而成用事实说话,让消费者打分。 树立权威品牌 我们的很多新客户都是老客户介绍而来,请问� ��如果效果不好,会有客户转介绍吗 ,你们的价格有点贵,能不能便宜一点 答: , , ,而这些毫无疑问,不会对彻底去� ��你的斑点有任何帮助 一分价钱,一份价值,我们现在做的�� �是一个口碑,一个品牌,价钱并不高。如果花这点钱把你的� ��褐斑彻底去除,你还会觉得贵吗 你还会再去花那么多冤枉�� �,不但斑没去掉,还把自己的皮肤弄的越来越糟吗 ,我适合用黛芙薇尔精华液吗 答:黛芙薇尔适用人群: 、生理紊乱引起的黄褐斑人群 、生育引起的妊娠斑人群 、年纪增长引起的老年斑人群 、化妆品色素沉积、辐射斑人群 、长期日照引起的日晒斑人群 、肌肤暗淡急需美白的人群 《祛斑小方法》 去色斑的产品,同时为您分享祛斑小方法 、维生素类:维生素a、 、 、 、c、d、e,各有其用途。 、芦荟:味苦寒,水解后产生芦荟大黄蒽苷,含丰富的维生� �� 、 、 ,内含大量的芦荟苷,不论外用或�� �服,均有十分奇特的美容效果。 original issue reported on code google com by additive gmail com on jul at
| 0
|
65,477
| 8,815,684,205
|
IssuesEvent
|
2018-12-29 21:59:12
|
skalpel-tech/skpcrm
|
https://api.github.com/repos/skalpel-tech/skpcrm
|
closed
|
Please review installation steps
|
documentation
|
Please review installation steps as there seems to be a step missing, as multiple modules are missing when we try "invoke app.run":
<img width="681" alt="screenshot 2018-12-26 16 31 59" src="https://user-images.githubusercontent.com/23227651/50460596-44dc2580-092c-11e9-8b50-c6d94730ff88.png">
|
1.0
|
Please review installation steps - Please review installation steps as there seems to be a step missing, as multiple modules are missing when we try "invoke app.run":
<img width="681" alt="screenshot 2018-12-26 16 31 59" src="https://user-images.githubusercontent.com/23227651/50460596-44dc2580-092c-11e9-8b50-c6d94730ff88.png">
|
non_process
|
please review installation steps please review installation steps as there seems to be a step missing as multiple modules are missing when we try invoke app run img width alt screenshot src
| 0
|
73,100
| 7,327,132,857
|
IssuesEvent
|
2018-03-04 05:49:30
|
rancher/rancher
|
https://api.github.com/repos/rancher/rancher
|
closed
|
Ingress - "503 Service Temporarily Unavailable" when providing targets from other namespace (other than were ingress is created).
|
area/loadbalancer area/ui kind/bug status/resolved status/to-test version/2.0
|
Steps to reproduce the problem:
Create a project with 2 namespaces , n1 and n2.
Create ingress in n1 with targets pointing to workloads in n2.
Accessing ingress url results in "503 Service Temporarily Unavailable".
```
Sangeethas-MBP:testwget sangeethahariharan1$ wget --header="host: hello.com" http:<ip>//name.html
--2018-02-26 14:27:57-- http://<ip>/name.html
Connecting to 159.65.160.243:80... connected.
HTTP request sent, awaiting response... 503 Service Temporarily Unavailable
2018-02-26 14:27:57 ERROR 503: Service Temporarily Unavailable.
Sangeethas-MBP:testwget sangeethahariharan1$
```
Should we not allow for creating rules with targets from different namespace ?
|
1.0
|
Ingress - "503 Service Temporarily Unavailable" when providing targets from other namespace (other than were ingress is created). - Steps to reproduce the problem:
Create a project with 2 namespaces , n1 and n2.
Create ingress in n1 with targets pointing to workloads in n2.
Accessing ingress url results in "503 Service Temporarily Unavailable".
```
Sangeethas-MBP:testwget sangeethahariharan1$ wget --header="host: hello.com" http:<ip>//name.html
--2018-02-26 14:27:57-- http://<ip>/name.html
Connecting to 159.65.160.243:80... connected.
HTTP request sent, awaiting response... 503 Service Temporarily Unavailable
2018-02-26 14:27:57 ERROR 503: Service Temporarily Unavailable.
Sangeethas-MBP:testwget sangeethahariharan1$
```
Should we not allow for creating rules with targets from different namespace ?
|
non_process
|
ingress service temporarily unavailable when providing targets from other namespace other than were ingress is created steps to reproduce the problem create a project with namespaces and create ingress in with targets pointing to workloads in accessing ingress url results in service temporarily unavailable sangeethas mbp testwget wget header host hello com http name html connecting to connected http request sent awaiting response service temporarily unavailable error service temporarily unavailable sangeethas mbp testwget should we not allow for creating rules with targets from different namespace
| 0
|
214,981
| 7,285,350,039
|
IssuesEvent
|
2018-02-23 03:32:41
|
llmhyy/microbat
|
https://api.github.com/repos/llmhyy/microbat
|
closed
|
[Instrumentation] Missing ControlScope in Dump File
|
high priority of priority
|
hi @lylytran
It seems that you forget to record the controlScope field of breakpoint of each trace node. I have computed such information by invoking constructControlDomianceRelation() in microbat.instrumentation.Agent.shutdown(StopTimer timer). Would you please kindly have a check?
|
2.0
|
[Instrumentation] Missing ControlScope in Dump File - hi @lylytran
It seems that you forget to record the controlScope field of breakpoint of each trace node. I have computed such information by invoking constructControlDomianceRelation() in microbat.instrumentation.Agent.shutdown(StopTimer timer). Would you please kindly have a check?
|
non_process
|
missing controlscope in dump file hi lylytran it seems that you forget to record the controlscope field of breakpoint of each trace node i have computed such information by invoking constructcontroldomiancerelation in microbat instrumentation agent shutdown stoptimer timer would you please kindly have a check
| 0
|
388,764
| 11,492,354,031
|
IssuesEvent
|
2020-02-11 20:49:23
|
openshift/odo
|
https://api.github.com/repos/openshift/odo
|
closed
|
EXPERIMENTAL feature flag
|
area/devfile kind/user-story priority/Medium
|
## User story
- As a user, I have an expectation that all features in odo are stable and well tested.
- As an advanced user, I'm interested in new uncomplete features and I'm aware that that there might not be stable but I want to test them.
## Acceptance Criteria
- odo should not show any commands or flags that are marked as EXPERIMENTAL if experimental features are not enabled
- odo should show additional flags or commands that are not complete or marked as EXPERIMENTAL if experimental features are enabled
## Background
We do time-based releases every sprint, and we will need to merge something that is not yet fully implemented in order to allow early adopters to test it and give us feedback.
It would be nice to have the option to hide partially implemented stuff from users, but at the same time allow them to test it if they really want.
We could either use the environment variable `ODO_EXPERIMENTAL=true` environment variable or we could do something like `odo preference set odo-experimental true` and use odo preference file to enable or disable experimental features.
`odo preference` might be slightly better as it will be more platform-independent and easier to use for tools build on top of odo (like ide plugins).
We will use this with the Devfile implementation.
The idea is that all The Devfile related functionality will be hidden by default unless the user enables it by running `odo preference set odo-experimental true` (or `export ODO_EXPERIMENTAL=true`).
This will allow us to merge even partially functional Devfile implementation.
Once the Devfile implementation is more complete and functional we can enable it by default.
## How this will be used in Devfile implementation
### No experimental features enabled:
```
$ odo create -h
# --devfile flag is not mentioned anywhere
$ odo create java --devfile
Uknown flag --devfile
$ ls
src
.odo
devfile.yaml
$ odo push
# "old style (s2i) " component defined in .odo will be pushed
```
### Experimental features enabled `odo preference set odo-experimental true` or `export ODO_EXPERIMENTAL=true`
```
$ odo create -h
# help output includes --devfile flag and its description
$ odo create java --devfile
# will create new devfile.yaml file in the current directory (To Be Defined)
$ ls
src
.odo
devfile.yaml
$ odo push
# odo will use devfile.yaml definition to create and push component
# More details will be defined in https://github.com/openshift/odo/issues/2470
```
/kind user-story
/priority medium
/area devfile
|
1.0
|
EXPERIMENTAL feature flag - ## User story
- As a user, I have an expectation that all features in odo are stable and well tested.
- As an advanced user, I'm interested in new uncomplete features and I'm aware that that there might not be stable but I want to test them.
## Acceptance Criteria
- odo should not show any commands or flags that are marked as EXPERIMENTAL if experimental features are not enabled
- odo should show additional flags or commands that are not complete or marked as EXPERIMENTAL if experimental features are enabled
## Background
We do time-based releases every sprint, and we will need to merge something that is not yet fully implemented in order to allow early adopters to test it and give us feedback.
It would be nice to have the option to hide partially implemented stuff from users, but at the same time allow them to test it if they really want.
We could either use the environment variable `ODO_EXPERIMENTAL=true` environment variable or we could do something like `odo preference set odo-experimental true` and use odo preference file to enable or disable experimental features.
`odo preference` might be slightly better as it will be more platform-independent and easier to use for tools build on top of odo (like ide plugins).
We will use this with the Devfile implementation.
The idea is that all The Devfile related functionality will be hidden by default unless the user enables it by running `odo preference set odo-experimental true` (or `export ODO_EXPERIMENTAL=true`).
This will allow us to merge even partially functional Devfile implementation.
Once the Devfile implementation is more complete and functional we can enable it by default.
## How this will be used in Devfile implementation
### No experimental features enabled:
```
$ odo create -h
# --devfile flag is not mentioned anywhere
$ odo create java --devfile
Uknown flag --devfile
$ ls
src
.odo
devfile.yaml
$ odo push
# "old style (s2i) " component defined in .odo will be pushed
```
### Experimental features enabled `odo preference set odo-experimental true` or `export ODO_EXPERIMENTAL=true`
```
$ odo create -h
# help output includes --devfile flag and its description
$ odo create java --devfile
# will create new devfile.yaml file in the current directory (To Be Defined)
$ ls
src
.odo
devfile.yaml
$ odo push
# odo will use devfile.yaml definition to create and push component
# More details will be defined in https://github.com/openshift/odo/issues/2470
```
/kind user-story
/priority medium
/area devfile
|
non_process
|
experimental feature flag user story as a user i have an expectation that all features in odo are stable and well tested as an advanced user i m interested in new uncomplete features and i m aware that that there might not be stable but i want to test them acceptance criteria odo should not show any commands or flags that are marked as experimental if experimental features are not enabled odo should show additional flags or commands that are not complete or marked as experimental if experimental features are enabled background we do time based releases every sprint and we will need to merge something that is not yet fully implemented in order to allow early adopters to test it and give us feedback it would be nice to have the option to hide partially implemented stuff from users but at the same time allow them to test it if they really want we could either use the environment variable odo experimental true environment variable or we could do something like odo preference set odo experimental true and use odo preference file to enable or disable experimental features odo preference might be slightly better as it will be more platform independent and easier to use for tools build on top of odo like ide plugins we will use this with the devfile implementation the idea is that all the devfile related functionality will be hidden by default unless the user enables it by running odo preference set odo experimental true or export odo experimental true this will allow us to merge even partially functional devfile implementation once the devfile implementation is more complete and functional we can enable it by default how this will be used in devfile implementation no experimental features enabled odo create h devfile flag is not mentioned anywhere odo create java devfile uknown flag devfile ls src odo devfile yaml odo push old style component defined in odo will be pushed experimental features enabled odo preference set odo experimental true or export odo experimental true odo create h help output includes devfile flag and its description odo create java devfile will create new devfile yaml file in the current directory to be defined ls src odo devfile yaml odo push odo will use devfile yaml definition to create and push component more details will be defined in kind user story priority medium area devfile
| 0
|
4,122
| 7,065,237,340
|
IssuesEvent
|
2018-01-06 17:34:59
|
pwittchen/ReactiveWiFi
|
https://api.github.com/repos/pwittchen/ReactiveWiFi
|
closed
|
Release 0.3.0
|
release process
|
**Initial release notes**:
- fixed bug: `Error receiving broadcast Intent...` #28
- migrated library to RxJava2.x as a separate artifact
- bumped Gradle to v. 3.0
- updated project dependencies
- added Retrolambda to sample java app
**Things to do**:
- [x] RxJava1.x branch
- [x] update JavaDoc on gh-pages
- [x] bump library version to 0.3.0
- [x] upload Archives to Maven Central Repository
- [x] close and release artifact on Nexus
- [x] update `CHANGELOG.md` after Maven Sync
- [x] update download section in `README.md` after Maven Sync
- [x] create new GitHub release
- [x] RxJava2.x branch
- [x] update JavaDoc on gh-pages
- [x] bump library version to 0.3.0
- [x] upload Archives to Maven Central Repository
- [x] close and release artifact on Nexus
- [x] update `CHANGELOG.md` after Maven Sync
- [x] update download section in `README.md` after Maven Sync
- [x] create new GitHub release
|
1.0
|
Release 0.3.0 - **Initial release notes**:
- fixed bug: `Error receiving broadcast Intent...` #28
- migrated library to RxJava2.x as a separate artifact
- bumped Gradle to v. 3.0
- updated project dependencies
- added Retrolambda to sample java app
**Things to do**:
- [x] RxJava1.x branch
- [x] update JavaDoc on gh-pages
- [x] bump library version to 0.3.0
- [x] upload Archives to Maven Central Repository
- [x] close and release artifact on Nexus
- [x] update `CHANGELOG.md` after Maven Sync
- [x] update download section in `README.md` after Maven Sync
- [x] create new GitHub release
- [x] RxJava2.x branch
- [x] update JavaDoc on gh-pages
- [x] bump library version to 0.3.0
- [x] upload Archives to Maven Central Repository
- [x] close and release artifact on Nexus
- [x] update `CHANGELOG.md` after Maven Sync
- [x] update download section in `README.md` after Maven Sync
- [x] create new GitHub release
|
process
|
release initial release notes fixed bug error receiving broadcast intent migrated library to x as a separate artifact bumped gradle to v updated project dependencies added retrolambda to sample java app things to do x branch update javadoc on gh pages bump library version to upload archives to maven central repository close and release artifact on nexus update changelog md after maven sync update download section in readme md after maven sync create new github release x branch update javadoc on gh pages bump library version to upload archives to maven central repository close and release artifact on nexus update changelog md after maven sync update download section in readme md after maven sync create new github release
| 1
|
19,554
| 25,876,313,036
|
IssuesEvent
|
2022-12-14 08:09:30
|
bazelbuild/bazel
|
https://api.github.com/repos/bazelbuild/bazel
|
closed
|
MacOS debug not work with bazel
|
more data needed type: support / not a bug (process)
|
# Status of Bazel 5.1.1
### clion and vscode can't debug executable file ;
### bazel's functionality and third-party package management are friendly and hopefully better for debugging and use
run the following code with macos
```
#include <ctime>
#include <string>
#include <iostream>
std::string get_greet(const std::string& who) {
return "Hello " + who;
}
void print_localtime() {
std::time_t result = std::time(nullptr);
std::cout << std::asctime(std::localtime(&result));
}
int main(int argc, char** argv) {
std::string who = "world";
if (argc > 1) {
who = argv[1];
}
std::cout << get_greet(who) << std::endl;
print_localtime();
return 0;
}
```
# g++ build cpp file then I can debug using vscode
g++ hello-world.cpp -g

# bazel build cpp file ; can`t debug cpp file, Breakpoint not effective
bazel build -c dbg //main:hello-world.cpp

|
1.0
|
MacOS debug not work with bazel - # Status of Bazel 5.1.1
### clion and vscode can't debug executable file ;
### bazel's functionality and third-party package management are friendly and hopefully better for debugging and use
run the following code with macos
```
#include <ctime>
#include <string>
#include <iostream>
std::string get_greet(const std::string& who) {
return "Hello " + who;
}
void print_localtime() {
std::time_t result = std::time(nullptr);
std::cout << std::asctime(std::localtime(&result));
}
int main(int argc, char** argv) {
std::string who = "world";
if (argc > 1) {
who = argv[1];
}
std::cout << get_greet(who) << std::endl;
print_localtime();
return 0;
}
```
# g++ build cpp file then I can debug using vscode
g++ hello-world.cpp -g

# bazel build cpp file ; can`t debug cpp file, Breakpoint not effective
bazel build -c dbg //main:hello-world.cpp

|
process
|
macos debug not work with bazel status of bazel clion and vscode can t debug executable file bazel s functionality and third party package management are friendly and hopefully better for debugging and use run the following code with macos include include include std string get greet const std string who return hello who void print localtime std time t result std time nullptr std cout std asctime std localtime result int main int argc char argv std string who world if argc who argv std cout get greet who std endl print localtime return g build cpp file then i can debug using vscode g hello world cpp g bazel build cpp file can t debug cpp file breakpoint not effective bazel build c dbg main hello world cpp
| 1
|
148,544
| 19,534,398,251
|
IssuesEvent
|
2021-12-31 01:33:28
|
panasalap/linux-4.1.15
|
https://api.github.com/repos/panasalap/linux-4.1.15
|
opened
|
CVE-2017-17853 (High) detected in linux-stable-rtv4.1.33
|
security vulnerability
|
## CVE-2017-17853 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>linux-stable-rtv4.1.33</b></p></summary>
<p>
<p>Julia Cartwright's fork of linux-stable-rt.git</p>
<p>Library home page: <a href=https://git.kernel.org/pub/scm/linux/kernel/git/julia/linux-stable-rt.git>https://git.kernel.org/pub/scm/linux/kernel/git/julia/linux-stable-rt.git</a></p>
<p>Found in base branch: <b>master</b></p></p>
</details>
</p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Source Files (2)</summary>
<p></p>
<p>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/kernel/bpf/verifier.c</b>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/kernel/bpf/verifier.c</b>
</p>
</details>
<p></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
kernel/bpf/verifier.c in the Linux kernel through 4.14.8 allows local users to cause a denial of service (memory corruption) or possibly have unspecified other impact by leveraging incorrect BPF_RSH signed bounds calculations.
<p>Publish Date: 2017-12-27
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2017-17853>CVE-2017-17853</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.8</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: Low
- Privileges Required: Low
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="http://web.nvd.nist.gov/view/vuln/detail?vulnId=CVE-2017-17853">http://web.nvd.nist.gov/view/vuln/detail?vulnId=CVE-2017-17853</a></p>
<p>Release Date: 2017-12-27</p>
<p>Fix Resolution: v4.15-rc5</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
True
|
CVE-2017-17853 (High) detected in linux-stable-rtv4.1.33 - ## CVE-2017-17853 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>linux-stable-rtv4.1.33</b></p></summary>
<p>
<p>Julia Cartwright's fork of linux-stable-rt.git</p>
<p>Library home page: <a href=https://git.kernel.org/pub/scm/linux/kernel/git/julia/linux-stable-rt.git>https://git.kernel.org/pub/scm/linux/kernel/git/julia/linux-stable-rt.git</a></p>
<p>Found in base branch: <b>master</b></p></p>
</details>
</p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Source Files (2)</summary>
<p></p>
<p>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/kernel/bpf/verifier.c</b>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/kernel/bpf/verifier.c</b>
</p>
</details>
<p></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
kernel/bpf/verifier.c in the Linux kernel through 4.14.8 allows local users to cause a denial of service (memory corruption) or possibly have unspecified other impact by leveraging incorrect BPF_RSH signed bounds calculations.
<p>Publish Date: 2017-12-27
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2017-17853>CVE-2017-17853</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.8</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: Low
- Privileges Required: Low
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="http://web.nvd.nist.gov/view/vuln/detail?vulnId=CVE-2017-17853">http://web.nvd.nist.gov/view/vuln/detail?vulnId=CVE-2017-17853</a></p>
<p>Release Date: 2017-12-27</p>
<p>Fix Resolution: v4.15-rc5</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
non_process
|
cve high detected in linux stable cve high severity vulnerability vulnerable library linux stable julia cartwright s fork of linux stable rt git library home page a href found in base branch master vulnerable source files kernel bpf verifier c kernel bpf verifier c vulnerability details kernel bpf verifier c in the linux kernel through allows local users to cause a denial of service memory corruption or possibly have unspecified other impact by leveraging incorrect bpf rsh signed bounds calculations publish date url a href cvss score details base score metrics exploitability metrics attack vector local attack complexity low privileges required low user interaction none scope unchanged impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution step up your open source security game with whitesource
| 0
|
47,088
| 24,862,711,505
|
IssuesEvent
|
2022-10-27 09:29:49
|
Artelnics/opennn
|
https://api.github.com/repos/Artelnics/opennn
|
closed
|
Problems with Regularization Methods
|
question type:performance
|
I've begun to work with your Library since version 5.0.5 Now passing to your new version 6.0, but the Regularization methods seems not working anymore. If you test any std network (Approximation, with any learning method) with constant seed initialization parameters, and sequential splitting training and selection datasets, you will see that the network behavement in terms of net convergence results will be exactly the same, regardless what regularization methods you choose or regularization weight, or without any regularization. I've seen you've done deep changes in this part of code inside classes and methods in files "loss_index".
In my opinion seems that the Regularization error gradient is not propagated to the network. May be you should have a look.
|
True
|
Problems with Regularization Methods - I've begun to work with your Library since version 5.0.5 Now passing to your new version 6.0, but the Regularization methods seems not working anymore. If you test any std network (Approximation, with any learning method) with constant seed initialization parameters, and sequential splitting training and selection datasets, you will see that the network behavement in terms of net convergence results will be exactly the same, regardless what regularization methods you choose or regularization weight, or without any regularization. I've seen you've done deep changes in this part of code inside classes and methods in files "loss_index".
In my opinion seems that the Regularization error gradient is not propagated to the network. May be you should have a look.
|
non_process
|
problems with regularization methods i ve begun to work with your library since version now passing to your new version but the regularization methods seems not working anymore if you test any std network approximation with any learning method with constant seed initialization parameters and sequential splitting training and selection datasets you will see that the network behavement in terms of net convergence results will be exactly the same regardless what regularization methods you choose or regularization weight or without any regularization i ve seen you ve done deep changes in this part of code inside classes and methods in files loss index in my opinion seems that the regularization error gradient is not propagated to the network may be you should have a look
| 0
|
9,961
| 12,992,200,073
|
IssuesEvent
|
2020-07-23 06:14:41
|
dotnet/runtime
|
https://api.github.com/repos/dotnet/runtime
|
closed
|
Add parameter InheritHandles to System.Diagnostics.ProcessStartInfo class
|
api-suggestion area-System.Diagnostics.Process untriaged
|
## Background and Motivation
When working with C++ DLLs that create file handles, for example log files, using the MFC CFile class or CStdioFile class, the DLL holds a file handle to the file opened. If this DLL is loaded into the address space of a .Net Framework application and this application starts a new process using the ProcessStartInfo.UseShellExecute parameter set to FALSE, all file handles are inherited to all child processes. This results in open (and possible write and read locked) files even if the file is closed in the C++ DLL which originally opened it! This leads to issues especially if the C++ DLL attempts to re-open the log file again - it will receive a sharing violation.
Currently, the only way to work around this issue is to set ProcessStartInfo.UseShellExecute=TRUE but then you loose the Redirect stdout feature of the ProcessStart (which is only supported if ProcessStartInfo.UseShellExecute=FALSE).
## Proposed API
Add a new bool parameter to ProcessStartInfo: InheritHandles (Default: true)
```
public bool InheritHandles { get; set; } = true;
```
Then, when creating the process using CreateProcess() use the parameter (see `//handle inheritance` flag below)
```
fixed (char* environmentBlockPtr = environmentBlock)
{
retVal = Interop.Kernel32.CreateProcess(
null, // we don't need this since all the info is in commandLine
commandLine, // pointer to the command line string
ref unused_SecAttrs, // address to process security attributes, we don't need to inherit the handle
ref unused_SecAttrs, // address to thread security attributes.
_inheritHandles, // handle inheritance flag
creationFlags, // creation flags
(IntPtr)environmentBlockPtr, // pointer to new environment block
workingDirectory, // pointer to current directory name
ref startupInfo, // pointer to STARTUPINFO
ref processInfo // pointer to PROCESS_INFORMATION
);
if (!retVal)
errorCode = Marshal.GetLastWin32Error();
}
```
## Risks
If the default of the new property is set to true, there are no risks because the current behavior is unchanged.
|
1.0
|
Add parameter InheritHandles to System.Diagnostics.ProcessStartInfo class - ## Background and Motivation
When working with C++ DLLs that create file handles, for example log files, using the MFC CFile class or CStdioFile class, the DLL holds a file handle to the file opened. If this DLL is loaded into the address space of a .Net Framework application and this application starts a new process using the ProcessStartInfo.UseShellExecute parameter set to FALSE, all file handles are inherited to all child processes. This results in open (and possible write and read locked) files even if the file is closed in the C++ DLL which originally opened it! This leads to issues especially if the C++ DLL attempts to re-open the log file again - it will receive a sharing violation.
Currently, the only way to work around this issue is to set ProcessStartInfo.UseShellExecute=TRUE but then you loose the Redirect stdout feature of the ProcessStart (which is only supported if ProcessStartInfo.UseShellExecute=FALSE).
## Proposed API
Add a new bool parameter to ProcessStartInfo: InheritHandles (Default: true)
```
public bool InheritHandles { get; set; } = true;
```
Then, when creating the process using CreateProcess() use the parameter (see `//handle inheritance` flag below)
```
fixed (char* environmentBlockPtr = environmentBlock)
{
retVal = Interop.Kernel32.CreateProcess(
null, // we don't need this since all the info is in commandLine
commandLine, // pointer to the command line string
ref unused_SecAttrs, // address to process security attributes, we don't need to inherit the handle
ref unused_SecAttrs, // address to thread security attributes.
_inheritHandles, // handle inheritance flag
creationFlags, // creation flags
(IntPtr)environmentBlockPtr, // pointer to new environment block
workingDirectory, // pointer to current directory name
ref startupInfo, // pointer to STARTUPINFO
ref processInfo // pointer to PROCESS_INFORMATION
);
if (!retVal)
errorCode = Marshal.GetLastWin32Error();
}
```
## Risks
If the default of the new property is set to true, there are no risks because the current behavior is unchanged.
|
process
|
add parameter inherithandles to system diagnostics processstartinfo class background and motivation when working with c dlls that create file handles for example log files using the mfc cfile class or cstdiofile class the dll holds a file handle to the file opened if this dll is loaded into the address space of a net framework application and this application starts a new process using the processstartinfo useshellexecute parameter set to false all file handles are inherited to all child processes this results in open and possible write and read locked files even if the file is closed in the c dll which originally opened it this leads to issues especially if the c dll attempts to re open the log file again it will receive a sharing violation currently the only way to work around this issue is to set processstartinfo useshellexecute true but then you loose the redirect stdout feature of the processstart which is only supported if processstartinfo useshellexecute false proposed api add a new bool parameter to processstartinfo inherithandles default true public bool inherithandles get set true then when creating the process using createprocess use the parameter see handle inheritance flag below fixed char environmentblockptr environmentblock retval interop createprocess null we don t need this since all the info is in commandline commandline pointer to the command line string ref unused secattrs address to process security attributes we don t need to inherit the handle ref unused secattrs address to thread security attributes inherithandles handle inheritance flag creationflags creation flags intptr environmentblockptr pointer to new environment block workingdirectory pointer to current directory name ref startupinfo pointer to startupinfo ref processinfo pointer to process information if retval errorcode marshal risks if the default of the new property is set to true there are no risks because the current behavior is unchanged
| 1
|
102,678
| 8,852,242,663
|
IssuesEvent
|
2019-01-08 17:47:09
|
phetsims/circuit-construction-kit-dc
|
https://api.github.com/repos/phetsims/circuit-construction-kit-dc
|
closed
|
CT: value failed isValidValue
|
type:automated-testing
|
```
circuit-construction-kit-dc : fuzz : require.js : run
Uncaught Error: Assertion failed: value failed isValidValue: [object Object]
Error: Assertion failed: value failed isValidValue: [object Object]
at window.assertions.assertFunction (https://bayes.colorado.edu/continuous-testing/snapshot-1546923822816/assert/js/assert.js:22:13)
at Object.validate (https://bayes.colorado.edu/continuous-testing/snapshot-1546923822816/axon/js/Validator.js?bust=1546930661649:75:33)
at Emitter.assertEmittingValidValues (https://bayes.colorado.edu/continuous-testing/snapshot-1546923822816/axon/js/Emitter.js?bust=1546930661649:80:21)
at Emitter.emit (https://bayes.colorado.edu/continuous-testing/snapshot-1546923822816/axon/js/Emitter.js?bust=1546930661649:190:72)
at SimpleDragHandler.endDrag (https://bayes.colorado.edu/continuous-testing/snapshot-1546923822816/scenery/js/input/SimpleDragHandler.js?bust=1546930661649:317:29)
at SimpleDragHandler.interrupt (https://bayes.colorado.edu/continuous-testing/snapshot-1546923822816/scenery/js/input/SimpleDragHandler.js?bust=1546930661649:333:14)
at VertexNode.interruptInput (https://bayes.colorado.edu/continuous-testing/snapshot-1546923822816/scenery/js/nodes/Node.js?bust=1546930661649:1920:40)
at VertexNode.blur (https://bayes.colorado.edu/continuous-testing/snapshot-1546923822816/scenery/js/accessibility/Accessibility.js?bust=1546930661649:532:18)
at Function.set focus [as focus] (https://bayes.colorado.edu/continuous-testing/snapshot-1546923822816/scenery/js/display/Display.js?bust=1546930661649:1718:25)
at Input.downEvent (https://bayes.colorado.edu/continuous-testing/snapshot-1546923822816/scenery/js/input/Input.js?bust=1546930661649:1524:33)
id: Bayes Chrome
Approximately 1/7/2019, 10:03:42 PM
circuit-construction-kit-dc : fuzz : require.js : run
Uncaught Error: Assertion failed: value failed isValidValue: [object Object]
Error: Assertion failed: value failed isValidValue: [object Object]
at window.assertions.assertFunction (https://bayes.colorado.edu/continuous-testing/snapshot-1546923822816/assert/js/assert.js:22:13)
at Object.validate (https://bayes.colorado.edu/continuous-testing/snapshot-1546923822816/axon/js/Validator.js?bust=1546945691610:75:33)
at Emitter.assertEmittingValidValues (https://bayes.colorado.edu/continuous-testing/snapshot-1546923822816/axon/js/Emitter.js?bust=1546945691610:80:21)
at Emitter.emit (https://bayes.colorado.edu/continuous-testing/snapshot-1546923822816/axon/js/Emitter.js?bust=1546945691610:190:72)
at SimpleDragHandler.endDrag (https://bayes.colorado.edu/continuous-testing/snapshot-1546923822816/scenery/js/input/SimpleDragHandler.js?bust=1546945691610:317:29)
at SimpleDragHandler.interrupt (https://bayes.colorado.edu/continuous-testing/snapshot-1546923822816/scenery/js/input/SimpleDragHandler.js?bust=1546945691610:333:14)
at WireNode.interruptInput (https://bayes.colorado.edu/continuous-testing/snapshot-1546923822816/scenery/js/nodes/Node.js?bust=1546945691610:1920:40)
at WireNode.blur (https://bayes.colorado.edu/continuous-testing/snapshot-1546923822816/scenery/js/accessibility/Accessibility.js?bust=1546945691610:532:18)
at Function.set focus [as focus] (https://bayes.colorado.edu/continuous-testing/snapshot-1546923822816/scenery/js/display/Display.js?bust=1546945691610:1718:25)
at Input.downEvent (https://bayes.colorado.edu/continuous-testing/snapshot-1546923822816/scenery/js/input/Input.js?bust=1546945691610:1524:33)
id: Bayes Chrome
Approximately 1/7/2019, 10:03:42 PM
circuit-construction-kit-dc : fuzz : require.js : run
Uncaught Error: Assertion failed: value failed isValidValue: [object Object]
Error: Assertion failed: value failed isValidValue: [object Object]
at window.assertions.assertFunction (https://bayes.colorado.edu/continuous-testing/snapshot-1546923822816/assert/js/assert.js:22:13)
at Object.validate (https://bayes.colorado.edu/continuous-testing/snapshot-1546923822816/axon/js/Validator.js?bust=1546951324235:75:33)
at Emitter.assertEmittingValidValues (https://bayes.colorado.edu/continuous-testing/snapshot-1546923822816/axon/js/Emitter.js?bust=1546951324235:80:21)
at Emitter.emit (https://bayes.colorado.edu/continuous-testing/snapshot-1546923822816/axon/js/Emitter.js?bust=1546951324235:190:72)
at SimpleDragHandler.endDrag (https://bayes.colorado.edu/continuous-testing/snapshot-1546923822816/scenery/js/input/SimpleDragHandler.js?bust=1546951324235:317:29)
at SimpleDragHandler.interrupt (https://bayes.colorado.edu/continuous-testing/snapshot-1546923822816/scenery/js/input/SimpleDragHandler.js?bust=1546951324235:333:14)
at WireNode.interruptInput (https://bayes.colorado.edu/continuous-testing/snapshot-1546923822816/scenery/js/nodes/Node.js?bust=1546951324235:1920:40)
at WireNode.blur (https://bayes.colorado.edu/continuous-testing/snapshot-1546923822816/scenery/js/accessibility/Accessibility.js?bust=1546951324235:532:18)
at Function.set focus [as focus] (https://bayes.colorado.edu/continuous-testing/snapshot-1546923822816/scenery/js/display/Display.js?bust=1546951324235:1718:25)
at Input.downEvent (https://bayes.colorado.edu/continuous-testing/snapshot-1546923822816/scenery/js/input/Input.js?bust=1546951324235:1524:33)
id: Bayes Chrome
Approximately 1/7/2019, 10:03:42 PM
circuit-construction-kit-dc : fuzz : require.js-canvas : run
Uncaught Error: Assertion failed: value failed isValidValue: [object Object]
Error: Assertion failed: value failed isValidValue: [object Object]
at window.assertions.assertFunction (https://bayes.colorado.edu/continuous-testing/snapshot-1546923822816/assert/js/assert.js:22:13)
at Object.validate (https://bayes.colorado.edu/continuous-testing/snapshot-1546923822816/axon/js/Validator.js?bust=1546927501168:75:33)
at Emitter.assertEmittingValidValues (https://bayes.colorado.edu/continuous-testing/snapshot-1546923822816/axon/js/Emitter.js?bust=1546927501168:80:21)
at Emitter.emit (https://bayes.colorado.edu/continuous-testing/snapshot-1546923822816/axon/js/Emitter.js?bust=1546927501168:190:72)
at SimpleDragHandler.endDrag (https://bayes.colorado.edu/continuous-testing/snapshot-1546923822816/scenery/js/input/SimpleDragHandler.js?bust=1546927501168:317:29)
at SimpleDragHandler.interrupt (https://bayes.colorado.edu/continuous-testing/snapshot-1546923822816/scenery/js/input/SimpleDragHandler.js?bust=1546927501168:333:14)
at WireNode.interruptInput (https://bayes.colorado.edu/continuous-testing/snapshot-1546923822816/scenery/js/nodes/Node.js?bust=1546927501168:1920:40)
at WireNode.blur (https://bayes.colorado.edu/continuous-testing/snapshot-1546923822816/scenery/js/accessibility/Accessibility.js?bust=1546927501168:532:18)
at Function.set focus [as focus] (https://bayes.colorado.edu/continuous-testing/snapshot-1546923822816/scenery/js/display/Display.js?bust=1546927501168:1718:25)
at Input.downEvent (https://bayes.colorado.edu/continuous-testing/snapshot-1546923822816/scenery/js/input/Input.js?bust=1546927501168:1524:33)
id: Bayes Chrome
Approximately 1/7/2019, 10:03:42 PM
circuit-construction-kit-dc : fuzz : require.js-canvas : run
Uncaught Error: Assertion failed: value failed isValidValue: [object Object]
Error: Assertion failed: value failed isValidValue: [object Object]
at window.assertions.assertFunction (https://bayes.colorado.edu/continuous-testing/snapshot-1546923822816/assert/js/assert.js:22:13)
at Object.validate (https://bayes.colorado.edu/continuous-testing/snapshot-1546923822816/axon/js/Validator.js?bust=1546949566719:75:33)
at Emitter.assertEmittingValidValues (https://bayes.colorado.edu/continuous-testing/snapshot-1546923822816/axon/js/Emitter.js?bust=1546949566719:80:21)
at Emitter.emit (https://bayes.colorado.edu/continuous-testing/snapshot-1546923822816/axon/js/Emitter.js?bust=1546949566719:190:72)
at SimpleDragHandler.endDrag (https://bayes.colorado.edu/continuous-testing/snapshot-1546923822816/scenery/js/input/SimpleDragHandler.js?bust=1546949566719:317:29)
at SimpleDragHandler.interrupt (https://bayes.colorado.edu/continuous-testing/snapshot-1546923822816/scenery/js/input/SimpleDragHandler.js?bust=1546949566719:333:14)
at WireNode.interruptInput (https://bayes.colorado.edu/continuous-testing/snapshot-1546923822816/scenery/js/nodes/Node.js?bust=1546949566719:1920:40)
at WireNode.blur (https://bayes.colorado.edu/continuous-testing/snapshot-1546923822816/scenery/js/accessibility/Accessibility.js?bust=1546949566719:532:18)
at Function.set focus [as focus] (https://bayes.colorado.edu/continuous-testing/snapshot-1546923822816/scenery/js/display/Display.js?bust=1546949566719:1718:25)
at Input.downEvent (https://bayes.colorado.edu/continuous-testing/snapshot-1546923822816/scenery/js/input/Input.js?bust=1546949566719:1524:33)
id: Bayes Chrome
Approximately 1/7/2019, 10:03:42 PM
circuit-construction-kit-dc : fuzz : require.js-canvas : run
Uncaught Error: Assertion failed: value failed isValidValue: [object Object]
Error: Assertion failed: value failed isValidValue: [object Object]
at window.assertions.assertFunction (https://bayes.colorado.edu/continuous-testing/snapshot-1546923822816/assert/js/assert.js:22:13)
at Object.validate (https://bayes.colorado.edu/continuous-testing/snapshot-1546923822816/axon/js/Validator.js?bust=1546957290306:75:33)
at Emitter.assertEmittingValidValues (https://bayes.colorado.edu/continuous-testing/snapshot-1546923822816/axon/js/Emitter.js?bust=1546957290306:80:21)
at Emitter.emit (https://bayes.colorado.edu/continuous-testing/snapshot-1546923822816/axon/js/Emitter.js?bust=1546957290306:190:72)
at SimpleDragHandler.endDrag (https://bayes.colorado.edu/continuous-testing/snapshot-1546923822816/scenery/js/input/SimpleDragHandler.js?bust=1546957290306:317:29)
at SimpleDragHandler.interrupt (https://bayes.colorado.edu/continuous-testing/snapshot-1546923822816/scenery/js/input/SimpleDragHandler.js?bust=1546957290306:333:14)
at WireNode.interruptInput (https://bayes.colorado.edu/continuous-testing/snapshot-1546923822816/scenery/js/nodes/Node.js?bust=1546957290306:1920:40)
at WireNode.blur (https://bayes.colorado.edu/continuous-testing/snapshot-1546923822816/scenery/js/accessibility/Accessibility.js?bust=1546957290306:532:18)
at Function.set focus [as focus] (https://bayes.colorado.edu/continuous-testing/snapshot-1546923822816/scenery/js/display/Display.js?bust=1546957290306:1718:25)
at Input.downEvent (https://bayes.colorado.edu/continuous-testing/snapshot-1546923822816/scenery/js/input/Input.js?bust=1546957290306:1524:33)
id: Bayes Chrome
Approximately 1/7/2019, 10:03:42 PM
circuit-construction-kit-dc : xss-fuzz : run
Uncaught Error: Assertion failed: value failed isValidValue: [object Object]
Error: Assertion failed: value failed isValidValue: [object Object]
at window.assertions.assertFunction (https://bayes.colorado.edu/continuous-testing/snapshot-1546923822816/assert/js/assert.js:22:13)
at Object.validate (https://bayes.colorado.edu/continuous-testing/snapshot-1546923822816/axon/js/Validator.js?bust=1546937168669:75:33)
at Emitter.assertEmittingValidValues (https://bayes.colorado.edu/continuous-testing/snapshot-1546923822816/axon/js/Emitter.js?bust=1546937168669:80:21)
at Emitter.emit (https://bayes.colorado.edu/continuous-testing/snapshot-1546923822816/axon/js/Emitter.js?bust=1546937168669:190:72)
at SimpleDragHandler.endDrag (https://bayes.colorado.edu/continuous-testing/snapshot-1546923822816/scenery/js/input/SimpleDragHandler.js?bust=1546937168669:317:29)
at SimpleDragHandler.interrupt (https://bayes.colorado.edu/continuous-testing/snapshot-1546923822816/scenery/js/input/SimpleDragHandler.js?bust=1546937168669:333:14)
at VertexNode.interruptInput (https://bayes.colorado.edu/continuous-testing/snapshot-1546923822816/scenery/js/nodes/Node.js?bust=1546937168669:1920:40)
at VertexNode.blur (https://bayes.colorado.edu/continuous-testing/snapshot-1546923822816/scenery/js/accessibility/Accessibility.js?bust=1546937168669:532:18)
at Function.set focus [as focus] (https://bayes.colorado.edu/continuous-testing/snapshot-1546923822816/scenery/js/display/Display.js?bust=1546937168669:1718:25)
at Input.downEvent (https://bayes.colorado.edu/continuous-testing/snapshot-1546923822816/scenery/js/input/Input.js?bust=1546937168669:1524:33)
id: Bayes Chrome
Approximately 1/7/2019, 10:03:42 PM
circuit-construction-kit-dc : xss-fuzz : run
Uncaught Error: Assertion failed: value failed isValidValue: [object Object]
Error: Assertion failed: value failed isValidValue: [object Object]
at window.assertions.assertFunction (https://bayes.colorado.edu/continuous-testing/snapshot-1546923822816/assert/js/assert.js:22:13)
at Object.validate (https://bayes.colorado.edu/continuous-testing/snapshot-1546923822816/axon/js/Validator.js?bust=1546957075159:75:33)
at Emitter.assertEmittingValidValues (https://bayes.colorado.edu/continuous-testing/snapshot-1546923822816/axon/js/Emitter.js?bust=1546957075159:80:21)
at Emitter.emit (https://bayes.colorado.edu/continuous-testing/snapshot-1546923822816/axon/js/Emitter.js?bust=1546957075159:190:72)
at SimpleDragHandler.endDrag (https://bayes.colorado.edu/continuous-testing/snapshot-1546923822816/scenery/js/input/SimpleDragHandler.js?bust=1546957075159:317:29)
at SimpleDragHandler.interrupt (https://bayes.colorado.edu/continuous-testing/snapshot-1546923822816/scenery/js/input/SimpleDragHandler.js?bust=1546957075159:333:14)
at VertexNode.interruptInput (https://bayes.colorado.edu/continuous-testing/snapshot-1546923822816/scenery/js/nodes/Node.js?bust=1546957075159:1920:40)
at VertexNode.blur (https://bayes.colorado.edu/continuous-testing/snapshot-1546923822816/scenery/js/accessibility/Accessibility.js?bust=1546957075159:532:18)
at Function.set focus [as focus] (https://bayes.colorado.edu/continuous-testing/snapshot-1546923822816/scenery/js/display/Display.js?bust=1546957075159:1718:25)
at Input.downEvent (https://bayes.colorado.edu/continuous-testing/snapshot-1546923822816/scenery/js/input/Input.js?bust=1546957075159:1524:33)
id: Bayes Chrome
Approximately 1/7/2019, 10:03:42 PM
```
|
1.0
|
CT: value failed isValidValue - ```
circuit-construction-kit-dc : fuzz : require.js : run
Uncaught Error: Assertion failed: value failed isValidValue: [object Object]
Error: Assertion failed: value failed isValidValue: [object Object]
at window.assertions.assertFunction (https://bayes.colorado.edu/continuous-testing/snapshot-1546923822816/assert/js/assert.js:22:13)
at Object.validate (https://bayes.colorado.edu/continuous-testing/snapshot-1546923822816/axon/js/Validator.js?bust=1546930661649:75:33)
at Emitter.assertEmittingValidValues (https://bayes.colorado.edu/continuous-testing/snapshot-1546923822816/axon/js/Emitter.js?bust=1546930661649:80:21)
at Emitter.emit (https://bayes.colorado.edu/continuous-testing/snapshot-1546923822816/axon/js/Emitter.js?bust=1546930661649:190:72)
at SimpleDragHandler.endDrag (https://bayes.colorado.edu/continuous-testing/snapshot-1546923822816/scenery/js/input/SimpleDragHandler.js?bust=1546930661649:317:29)
at SimpleDragHandler.interrupt (https://bayes.colorado.edu/continuous-testing/snapshot-1546923822816/scenery/js/input/SimpleDragHandler.js?bust=1546930661649:333:14)
at VertexNode.interruptInput (https://bayes.colorado.edu/continuous-testing/snapshot-1546923822816/scenery/js/nodes/Node.js?bust=1546930661649:1920:40)
at VertexNode.blur (https://bayes.colorado.edu/continuous-testing/snapshot-1546923822816/scenery/js/accessibility/Accessibility.js?bust=1546930661649:532:18)
at Function.set focus [as focus] (https://bayes.colorado.edu/continuous-testing/snapshot-1546923822816/scenery/js/display/Display.js?bust=1546930661649:1718:25)
at Input.downEvent (https://bayes.colorado.edu/continuous-testing/snapshot-1546923822816/scenery/js/input/Input.js?bust=1546930661649:1524:33)
id: Bayes Chrome
Approximately 1/7/2019, 10:03:42 PM
circuit-construction-kit-dc : fuzz : require.js : run
Uncaught Error: Assertion failed: value failed isValidValue: [object Object]
Error: Assertion failed: value failed isValidValue: [object Object]
at window.assertions.assertFunction (https://bayes.colorado.edu/continuous-testing/snapshot-1546923822816/assert/js/assert.js:22:13)
at Object.validate (https://bayes.colorado.edu/continuous-testing/snapshot-1546923822816/axon/js/Validator.js?bust=1546945691610:75:33)
at Emitter.assertEmittingValidValues (https://bayes.colorado.edu/continuous-testing/snapshot-1546923822816/axon/js/Emitter.js?bust=1546945691610:80:21)
at Emitter.emit (https://bayes.colorado.edu/continuous-testing/snapshot-1546923822816/axon/js/Emitter.js?bust=1546945691610:190:72)
at SimpleDragHandler.endDrag (https://bayes.colorado.edu/continuous-testing/snapshot-1546923822816/scenery/js/input/SimpleDragHandler.js?bust=1546945691610:317:29)
at SimpleDragHandler.interrupt (https://bayes.colorado.edu/continuous-testing/snapshot-1546923822816/scenery/js/input/SimpleDragHandler.js?bust=1546945691610:333:14)
at WireNode.interruptInput (https://bayes.colorado.edu/continuous-testing/snapshot-1546923822816/scenery/js/nodes/Node.js?bust=1546945691610:1920:40)
at WireNode.blur (https://bayes.colorado.edu/continuous-testing/snapshot-1546923822816/scenery/js/accessibility/Accessibility.js?bust=1546945691610:532:18)
at Function.set focus [as focus] (https://bayes.colorado.edu/continuous-testing/snapshot-1546923822816/scenery/js/display/Display.js?bust=1546945691610:1718:25)
at Input.downEvent (https://bayes.colorado.edu/continuous-testing/snapshot-1546923822816/scenery/js/input/Input.js?bust=1546945691610:1524:33)
id: Bayes Chrome
Approximately 1/7/2019, 10:03:42 PM
circuit-construction-kit-dc : fuzz : require.js : run
Uncaught Error: Assertion failed: value failed isValidValue: [object Object]
Error: Assertion failed: value failed isValidValue: [object Object]
at window.assertions.assertFunction (https://bayes.colorado.edu/continuous-testing/snapshot-1546923822816/assert/js/assert.js:22:13)
at Object.validate (https://bayes.colorado.edu/continuous-testing/snapshot-1546923822816/axon/js/Validator.js?bust=1546951324235:75:33)
at Emitter.assertEmittingValidValues (https://bayes.colorado.edu/continuous-testing/snapshot-1546923822816/axon/js/Emitter.js?bust=1546951324235:80:21)
at Emitter.emit (https://bayes.colorado.edu/continuous-testing/snapshot-1546923822816/axon/js/Emitter.js?bust=1546951324235:190:72)
at SimpleDragHandler.endDrag (https://bayes.colorado.edu/continuous-testing/snapshot-1546923822816/scenery/js/input/SimpleDragHandler.js?bust=1546951324235:317:29)
at SimpleDragHandler.interrupt (https://bayes.colorado.edu/continuous-testing/snapshot-1546923822816/scenery/js/input/SimpleDragHandler.js?bust=1546951324235:333:14)
at WireNode.interruptInput (https://bayes.colorado.edu/continuous-testing/snapshot-1546923822816/scenery/js/nodes/Node.js?bust=1546951324235:1920:40)
at WireNode.blur (https://bayes.colorado.edu/continuous-testing/snapshot-1546923822816/scenery/js/accessibility/Accessibility.js?bust=1546951324235:532:18)
at Function.set focus [as focus] (https://bayes.colorado.edu/continuous-testing/snapshot-1546923822816/scenery/js/display/Display.js?bust=1546951324235:1718:25)
at Input.downEvent (https://bayes.colorado.edu/continuous-testing/snapshot-1546923822816/scenery/js/input/Input.js?bust=1546951324235:1524:33)
id: Bayes Chrome
Approximately 1/7/2019, 10:03:42 PM
circuit-construction-kit-dc : fuzz : require.js-canvas : run
Uncaught Error: Assertion failed: value failed isValidValue: [object Object]
Error: Assertion failed: value failed isValidValue: [object Object]
at window.assertions.assertFunction (https://bayes.colorado.edu/continuous-testing/snapshot-1546923822816/assert/js/assert.js:22:13)
at Object.validate (https://bayes.colorado.edu/continuous-testing/snapshot-1546923822816/axon/js/Validator.js?bust=1546927501168:75:33)
at Emitter.assertEmittingValidValues (https://bayes.colorado.edu/continuous-testing/snapshot-1546923822816/axon/js/Emitter.js?bust=1546927501168:80:21)
at Emitter.emit (https://bayes.colorado.edu/continuous-testing/snapshot-1546923822816/axon/js/Emitter.js?bust=1546927501168:190:72)
at SimpleDragHandler.endDrag (https://bayes.colorado.edu/continuous-testing/snapshot-1546923822816/scenery/js/input/SimpleDragHandler.js?bust=1546927501168:317:29)
at SimpleDragHandler.interrupt (https://bayes.colorado.edu/continuous-testing/snapshot-1546923822816/scenery/js/input/SimpleDragHandler.js?bust=1546927501168:333:14)
at WireNode.interruptInput (https://bayes.colorado.edu/continuous-testing/snapshot-1546923822816/scenery/js/nodes/Node.js?bust=1546927501168:1920:40)
at WireNode.blur (https://bayes.colorado.edu/continuous-testing/snapshot-1546923822816/scenery/js/accessibility/Accessibility.js?bust=1546927501168:532:18)
at Function.set focus [as focus] (https://bayes.colorado.edu/continuous-testing/snapshot-1546923822816/scenery/js/display/Display.js?bust=1546927501168:1718:25)
at Input.downEvent (https://bayes.colorado.edu/continuous-testing/snapshot-1546923822816/scenery/js/input/Input.js?bust=1546927501168:1524:33)
id: Bayes Chrome
Approximately 1/7/2019, 10:03:42 PM
circuit-construction-kit-dc : fuzz : require.js-canvas : run
Uncaught Error: Assertion failed: value failed isValidValue: [object Object]
Error: Assertion failed: value failed isValidValue: [object Object]
at window.assertions.assertFunction (https://bayes.colorado.edu/continuous-testing/snapshot-1546923822816/assert/js/assert.js:22:13)
at Object.validate (https://bayes.colorado.edu/continuous-testing/snapshot-1546923822816/axon/js/Validator.js?bust=1546949566719:75:33)
at Emitter.assertEmittingValidValues (https://bayes.colorado.edu/continuous-testing/snapshot-1546923822816/axon/js/Emitter.js?bust=1546949566719:80:21)
at Emitter.emit (https://bayes.colorado.edu/continuous-testing/snapshot-1546923822816/axon/js/Emitter.js?bust=1546949566719:190:72)
at SimpleDragHandler.endDrag (https://bayes.colorado.edu/continuous-testing/snapshot-1546923822816/scenery/js/input/SimpleDragHandler.js?bust=1546949566719:317:29)
at SimpleDragHandler.interrupt (https://bayes.colorado.edu/continuous-testing/snapshot-1546923822816/scenery/js/input/SimpleDragHandler.js?bust=1546949566719:333:14)
at WireNode.interruptInput (https://bayes.colorado.edu/continuous-testing/snapshot-1546923822816/scenery/js/nodes/Node.js?bust=1546949566719:1920:40)
at WireNode.blur (https://bayes.colorado.edu/continuous-testing/snapshot-1546923822816/scenery/js/accessibility/Accessibility.js?bust=1546949566719:532:18)
at Function.set focus [as focus] (https://bayes.colorado.edu/continuous-testing/snapshot-1546923822816/scenery/js/display/Display.js?bust=1546949566719:1718:25)
at Input.downEvent (https://bayes.colorado.edu/continuous-testing/snapshot-1546923822816/scenery/js/input/Input.js?bust=1546949566719:1524:33)
id: Bayes Chrome
Approximately 1/7/2019, 10:03:42 PM
circuit-construction-kit-dc : fuzz : require.js-canvas : run
Uncaught Error: Assertion failed: value failed isValidValue: [object Object]
Error: Assertion failed: value failed isValidValue: [object Object]
at window.assertions.assertFunction (https://bayes.colorado.edu/continuous-testing/snapshot-1546923822816/assert/js/assert.js:22:13)
at Object.validate (https://bayes.colorado.edu/continuous-testing/snapshot-1546923822816/axon/js/Validator.js?bust=1546957290306:75:33)
at Emitter.assertEmittingValidValues (https://bayes.colorado.edu/continuous-testing/snapshot-1546923822816/axon/js/Emitter.js?bust=1546957290306:80:21)
at Emitter.emit (https://bayes.colorado.edu/continuous-testing/snapshot-1546923822816/axon/js/Emitter.js?bust=1546957290306:190:72)
at SimpleDragHandler.endDrag (https://bayes.colorado.edu/continuous-testing/snapshot-1546923822816/scenery/js/input/SimpleDragHandler.js?bust=1546957290306:317:29)
at SimpleDragHandler.interrupt (https://bayes.colorado.edu/continuous-testing/snapshot-1546923822816/scenery/js/input/SimpleDragHandler.js?bust=1546957290306:333:14)
at WireNode.interruptInput (https://bayes.colorado.edu/continuous-testing/snapshot-1546923822816/scenery/js/nodes/Node.js?bust=1546957290306:1920:40)
at WireNode.blur (https://bayes.colorado.edu/continuous-testing/snapshot-1546923822816/scenery/js/accessibility/Accessibility.js?bust=1546957290306:532:18)
at Function.set focus [as focus] (https://bayes.colorado.edu/continuous-testing/snapshot-1546923822816/scenery/js/display/Display.js?bust=1546957290306:1718:25)
at Input.downEvent (https://bayes.colorado.edu/continuous-testing/snapshot-1546923822816/scenery/js/input/Input.js?bust=1546957290306:1524:33)
id: Bayes Chrome
Approximately 1/7/2019, 10:03:42 PM
circuit-construction-kit-dc : xss-fuzz : run
Uncaught Error: Assertion failed: value failed isValidValue: [object Object]
Error: Assertion failed: value failed isValidValue: [object Object]
at window.assertions.assertFunction (https://bayes.colorado.edu/continuous-testing/snapshot-1546923822816/assert/js/assert.js:22:13)
at Object.validate (https://bayes.colorado.edu/continuous-testing/snapshot-1546923822816/axon/js/Validator.js?bust=1546937168669:75:33)
at Emitter.assertEmittingValidValues (https://bayes.colorado.edu/continuous-testing/snapshot-1546923822816/axon/js/Emitter.js?bust=1546937168669:80:21)
at Emitter.emit (https://bayes.colorado.edu/continuous-testing/snapshot-1546923822816/axon/js/Emitter.js?bust=1546937168669:190:72)
at SimpleDragHandler.endDrag (https://bayes.colorado.edu/continuous-testing/snapshot-1546923822816/scenery/js/input/SimpleDragHandler.js?bust=1546937168669:317:29)
at SimpleDragHandler.interrupt (https://bayes.colorado.edu/continuous-testing/snapshot-1546923822816/scenery/js/input/SimpleDragHandler.js?bust=1546937168669:333:14)
at VertexNode.interruptInput (https://bayes.colorado.edu/continuous-testing/snapshot-1546923822816/scenery/js/nodes/Node.js?bust=1546937168669:1920:40)
at VertexNode.blur (https://bayes.colorado.edu/continuous-testing/snapshot-1546923822816/scenery/js/accessibility/Accessibility.js?bust=1546937168669:532:18)
at Function.set focus [as focus] (https://bayes.colorado.edu/continuous-testing/snapshot-1546923822816/scenery/js/display/Display.js?bust=1546937168669:1718:25)
at Input.downEvent (https://bayes.colorado.edu/continuous-testing/snapshot-1546923822816/scenery/js/input/Input.js?bust=1546937168669:1524:33)
id: Bayes Chrome
Approximately 1/7/2019, 10:03:42 PM
circuit-construction-kit-dc : xss-fuzz : run
Uncaught Error: Assertion failed: value failed isValidValue: [object Object]
Error: Assertion failed: value failed isValidValue: [object Object]
at window.assertions.assertFunction (https://bayes.colorado.edu/continuous-testing/snapshot-1546923822816/assert/js/assert.js:22:13)
at Object.validate (https://bayes.colorado.edu/continuous-testing/snapshot-1546923822816/axon/js/Validator.js?bust=1546957075159:75:33)
at Emitter.assertEmittingValidValues (https://bayes.colorado.edu/continuous-testing/snapshot-1546923822816/axon/js/Emitter.js?bust=1546957075159:80:21)
at Emitter.emit (https://bayes.colorado.edu/continuous-testing/snapshot-1546923822816/axon/js/Emitter.js?bust=1546957075159:190:72)
at SimpleDragHandler.endDrag (https://bayes.colorado.edu/continuous-testing/snapshot-1546923822816/scenery/js/input/SimpleDragHandler.js?bust=1546957075159:317:29)
at SimpleDragHandler.interrupt (https://bayes.colorado.edu/continuous-testing/snapshot-1546923822816/scenery/js/input/SimpleDragHandler.js?bust=1546957075159:333:14)
at VertexNode.interruptInput (https://bayes.colorado.edu/continuous-testing/snapshot-1546923822816/scenery/js/nodes/Node.js?bust=1546957075159:1920:40)
at VertexNode.blur (https://bayes.colorado.edu/continuous-testing/snapshot-1546923822816/scenery/js/accessibility/Accessibility.js?bust=1546957075159:532:18)
at Function.set focus [as focus] (https://bayes.colorado.edu/continuous-testing/snapshot-1546923822816/scenery/js/display/Display.js?bust=1546957075159:1718:25)
at Input.downEvent (https://bayes.colorado.edu/continuous-testing/snapshot-1546923822816/scenery/js/input/Input.js?bust=1546957075159:1524:33)
id: Bayes Chrome
Approximately 1/7/2019, 10:03:42 PM
```
|
non_process
|
ct value failed isvalidvalue circuit construction kit dc fuzz require js run uncaught error assertion failed value failed isvalidvalue error assertion failed value failed isvalidvalue at window assertions assertfunction at object validate at emitter assertemittingvalidvalues at emitter emit at simpledraghandler enddrag at simpledraghandler interrupt at vertexnode interruptinput at vertexnode blur at function set focus at input downevent id bayes chrome approximately pm circuit construction kit dc fuzz require js run uncaught error assertion failed value failed isvalidvalue error assertion failed value failed isvalidvalue at window assertions assertfunction at object validate at emitter assertemittingvalidvalues at emitter emit at simpledraghandler enddrag at simpledraghandler interrupt at wirenode interruptinput at wirenode blur at function set focus at input downevent id bayes chrome approximately pm circuit construction kit dc fuzz require js run uncaught error assertion failed value failed isvalidvalue error assertion failed value failed isvalidvalue at window assertions assertfunction at object validate at emitter assertemittingvalidvalues at emitter emit at simpledraghandler enddrag at simpledraghandler interrupt at wirenode interruptinput at wirenode blur at function set focus at input downevent id bayes chrome approximately pm circuit construction kit dc fuzz require js canvas run uncaught error assertion failed value failed isvalidvalue error assertion failed value failed isvalidvalue at window assertions assertfunction at object validate at emitter assertemittingvalidvalues at emitter emit at simpledraghandler enddrag at simpledraghandler interrupt at wirenode interruptinput at wirenode blur at function set focus at input downevent id bayes chrome approximately pm circuit construction kit dc fuzz require js canvas run uncaught error assertion failed value failed isvalidvalue error assertion failed value failed isvalidvalue at window assertions 
assertfunction at object validate at emitter assertemittingvalidvalues at emitter emit at simpledraghandler enddrag at simpledraghandler interrupt at wirenode interruptinput at wirenode blur at function set focus at input downevent id bayes chrome approximately pm circuit construction kit dc fuzz require js canvas run uncaught error assertion failed value failed isvalidvalue error assertion failed value failed isvalidvalue at window assertions assertfunction at object validate at emitter assertemittingvalidvalues at emitter emit at simpledraghandler enddrag at simpledraghandler interrupt at wirenode interruptinput at wirenode blur at function set focus at input downevent id bayes chrome approximately pm circuit construction kit dc xss fuzz run uncaught error assertion failed value failed isvalidvalue error assertion failed value failed isvalidvalue at window assertions assertfunction at object validate at emitter assertemittingvalidvalues at emitter emit at simpledraghandler enddrag at simpledraghandler interrupt at vertexnode interruptinput at vertexnode blur at function set focus at input downevent id bayes chrome approximately pm circuit construction kit dc xss fuzz run uncaught error assertion failed value failed isvalidvalue error assertion failed value failed isvalidvalue at window assertions assertfunction at object validate at emitter assertemittingvalidvalues at emitter emit at simpledraghandler enddrag at simpledraghandler interrupt at vertexnode interruptinput at vertexnode blur at function set focus at input downevent id bayes chrome approximately pm
| 0
|
227,726
| 18,096,113,570
|
IssuesEvent
|
2021-09-22 09:11:18
|
input-output-hk/ouroboros-network
|
https://api.github.com/repos/input-output-hk/ouroboros-network
|
opened
|
Network simulator should allow to specify BearerInfo scripts per connection
|
networking peer2peer testing net-sim
|
Currently `BearerInfo`'s are specified as a list which is stepped over for all
connections. If we do that per connection we will get better shrinking
behaviour for network simulations.
|
1.0
|
Network simulator should allow to specify BearerInfo scripts per connection - Currently `BearerInfo`'s are specified as a list which is stepped over for all
connections. If we do that per connection we will get better shrinking
behaviour for network simulations.
|
non_process
|
network simulator should allow to specify bearerinfo scripts per connection currently bearerinfo s are specified as a list which is stepped over for all connections if we do that per connection we will get better shrinking behaviour for network simulations
| 0
|
604,551
| 18,714,520,524
|
IssuesEvent
|
2021-11-03 01:24:33
|
bcgov/entity
|
https://api.github.com/repos/bcgov/entity
|
closed
|
Legal API: HTTP 500 when trying to fetch a pending correction filing
|
bug Priority2 ENTITY
|
**To reproduce:**
1. login as IDIR
2. go to a business
3. create an AR correction filing (or COA or COD)
4. file it (no fee is OK)
5. observe console or network tab in dev tools
**Explanation:**
As the correction filing is not processed right away, it remains in the Todo List with status PENDING_CORRECTION, ie, it is being processed. Meanwhile, the dashboard retries several times in the background to see if this filing happens to finish processing and appear in the ledger (and if so, the dashboard will be fully reloaded to show the new state). It is this background "get filing X" that is experiencing a HTTP 500.
(This worked previously.)
|
1.0
|
Legal API: HTTP 500 when trying to fetch a pending correction filing - **To reproduce:**
1. login as IDIR
2. go to a business
3. create an AR correction filing (or COA or COD)
4. file it (no fee is OK)
5. observe console or network tab in dev tools
**Explanation:**
As the correction filing is not processed right away, it remains in the Todo List with status PENDING_CORRECTION, ie, it is being processed. Meanwhile, the dashboard retries several times in the background to see if this filing happens to finish processing and appear in the ledger (and if so, the dashboard will be fully reloaded to show the new state). It is this background "get filing X" that is experiencing a HTTP 500.
(This worked previously.)
|
non_process
|
legal api http when trying to fetch a pending correction filing to reproduce login as idir go to a business create an ar correction filing or coa or cod file it no fee is ok observe console or network tab in dev tools explanation as the correction filing is not processed right away it remains in the todo list with status pending correction ie it is being processed meanwhile the dashboard retries several times in the background to see if this filing happens to finish processing and appear in the ledger and if so the dashboard will be fully reloaded to show the new state it is this background get filing x that is experiencing a http this worked previously
| 0
|
416,202
| 28,072,080,652
|
IssuesEvent
|
2023-03-29 19:54:18
|
SchlossLab/mikropml
|
https://api.github.com/repos/SchlossLab/mikropml
|
opened
|
New vignette for visualizing performance and feature importance
|
documentation
|
Currently, a lot of plots are buried in the `parallel` vignette. We should break them out into a separate vignette.
- plots for a single model
- ROC and PRC
- Feature importance with 95% CI
- plots for multiple models on same dataset
- Mean ROC and PRC
- Performance boxplots with AUROC and AUPRC
- Feature importance with 95% CI
- plots for model(s) on different datasets with class imbalances
- Mean balanced PRC
- Boxplot with AUBPRC
|
1.0
|
New vignette for visualizing performance and feature importance - Currently, a lot of plots are buried in the `parallel` vignette. We should break them out into a separate vignette.
- plots for a single model
- ROC and PRC
- Feature importance with 95% CI
- plots for multiple models on same dataset
- Mean ROC and PRC
- Performance boxplots with AUROC and AUPRC
- Feature importance with 95% CI
- plots for model(s) on different datasets with class imbalances
- Mean balanced PRC
- Boxplot with AUBPRC
|
non_process
|
new vignette for visualizing performance and feature importance currently a lot of plots are buried in the parallel vignette we should break them out into a separate vignette plots for a single model roc and prc feature importance with ci plots for multiple models on same dataset mean roc and prc performance boxplots with auroc and auprc feature importance with ci plots for model s on different datasets with class imbalances mean balanced prc boxplot with aubprc
| 0
|
1,506
| 4,098,232,426
|
IssuesEvent
|
2016-06-03 07:22:16
|
e-government-ua/iBP
|
https://api.github.com/repos/e-government-ua/iBP
|
closed
|
Дніпропетровська область - Видача довідки про встановлення режиму роботи об’єкта
|
In process of testing in work test
|
створити послугу на наступні міста Дніпропетровської області:
- [x] Дніпродзержинськ
- [x] Тернівка
- [x] Першотравенськ
https://igov.org.ua/service/1380/general - серая на портале
контакти відповідальних осіб у [файлі](https://docs.google.com/spreadsheets/d/10epKJ_lkok-hCNzbTkU-7G8GbWGs5mzjgGFWBl-ONPQ/edit#gid=0)
інфо-карти знахдяться на офіційному [сайті](http://e-services.dp.gov.ua/_layouts/Information/pgServices.aspx)
приклад
http://e-services.dp.gov.ua/_layouts/Information/pgCardServices.aspx?ID=4997&TypeAg=0&Filter=%u0440%u0435%u0436%u0438%u043C&IsO=1&IsV=1
|
1.0
|
Дніпропетровська область - Видача довідки про встановлення режиму роботи об’єкта - створити послугу на наступні міста Дніпропетровської області:
- [x] Дніпродзержинськ
- [x] Тернівка
- [x] Першотравенськ
https://igov.org.ua/service/1380/general - серая на портале
контакти відповідальних осіб у [файлі](https://docs.google.com/spreadsheets/d/10epKJ_lkok-hCNzbTkU-7G8GbWGs5mzjgGFWBl-ONPQ/edit#gid=0)
інфо-карти знахдяться на офіційному [сайті](http://e-services.dp.gov.ua/_layouts/Information/pgServices.aspx)
приклад
http://e-services.dp.gov.ua/_layouts/Information/pgCardServices.aspx?ID=4997&TypeAg=0&Filter=%u0440%u0435%u0436%u0438%u043C&IsO=1&IsV=1
|
process
|
дніпропетровська область видача довідки про встановлення режиму роботи об’єкта створити послугу на наступні міста дніпропетровської області дніпродзержинськ тернівка першотравенськ серая на портале контакти відповідальних осіб у інфо карти знахдяться на офіційному приклад
| 1
|
234,397
| 17,953,563,704
|
IssuesEvent
|
2021-09-13 03:02:37
|
ccodwg/Covid19CanadaArchive
|
https://api.github.com/repos/ccodwg/Covid19CanadaArchive
|
closed
|
All data in README should be stored in datasets.json
|
documentation
|
README should be constructed from datasets.json.
|
1.0
|
All data in README should be stored in datasets.json - README should be constructed from datasets.json.
|
non_process
|
all data in readme should be stored in datasets json readme should be constructed from datasets json
| 0
|
3,002
| 5,997,230,770
|
IssuesEvent
|
2017-06-03 21:48:18
|
capnkirok/animania
|
https://api.github.com/repos/capnkirok/animania
|
closed
|
New animals should be added to Curse overview page
|
enhancement in process priority: high
|
Currently the new animals from the latest update (Draft horses, etc) haven't been added to the [Curse overview page](https://minecraft.curseforge.com/projects/animania).
I'm also not too sure how Draft Horses work. Could a description be provided on the same page, or is there documentation elsewhere?
Much thanks! Excited to play with the new content.
|
1.0
|
New animals should be added to Curse overview page - Currently the new animals from the latest update (Draft horses, etc) haven't been added to the [Curse overview page](https://minecraft.curseforge.com/projects/animania).
I'm also not too sure how Draft Horses work. Could a description be provided on the same page, or is there documentation elsewhere?
Much thanks! Excited to play with the new content.
|
process
|
new animals should be added to curse overview page currently the new animals from the latest update draft horses etc haven t been added to the i m also not too sure how draft horses work could a description be provided on the same page or is there documentation elsewhere much thanks excited to play with the new content
| 1
|
17,423
| 23,246,201,450
|
IssuesEvent
|
2022-08-03 20:26:34
|
Ultimate-Hosts-Blacklist/whitelist
|
https://api.github.com/repos/Ultimate-Hosts-Blacklist/whitelist
|
closed
|
brightcove.map.fastly.net
|
whitelisting process
|
**Domains or links**
brightcove.map.fastly.net
**More Information**
How did you discover your web site or domain was listed here?
Brightcove player on New Zealand news website stuff.co.nz won't play any content unless the above link is whitelisted.
**Have you requested removal from other sources?**
No, the link is not blacklisted in any other blacklist I use on pihole.
**Additional context**
Add any other context about the problem here.
:exclamation:
We understand being listed on a list like this can be frustrating and embarrassing for many web site owners. The first step is to remain calm. The second step is to rest assured one of our maintainers will address your issue as soon as possible. Please make sure you have provided as much information as possible to help speed up the process.
|
1.0
|
brightcove.map.fastly.net - **Domains or links**
brightcove.map.fastly.net
**More Information**
How did you discover your web site or domain was listed here?
Brightcove player on New Zealand news website stuff.co.nz won't play any content unless the above link is whitelisted.
**Have you requested removal from other sources?**
No, the link is not blacklisted in any other blacklist I use on pihole.
**Additional context**
Add any other context about the problem here.
:exclamation:
We understand being listed on a list like this can be frustrating and embarrassing for many web site owners. The first step is to remain calm. The second step is to rest assured one of our maintainers will address your issue as soon as possible. Please make sure you have provided as much information as possible to help speed up the process.
|
process
|
brightcove map fastly net domains or links brightcove map fastly net more information how did you discover your web site or domain was listed here brightcove player on new zealand news website stuff co nz won t play any content unless the above link is whitelisted have you requested removal from other sources no the link is not blacklisted in any other blacklist i use on pihole additional context add any other context about the problem here exclamation we understand being listed on a list like this can be frustrating and embarrassing for many web site owners the first step is to remain calm the second step is to rest assured one of our maintainers will address your issue as soon as possible please make sure you have provided as much information as possible to help speed up the process
| 1
|
131,446
| 18,286,825,039
|
IssuesEvent
|
2021-10-05 11:15:27
|
smartcoop/design
|
https://api.github.com/repos/smartcoop/design
|
opened
|
Example issue
|
design-library
|
Example issue tagged with design library.
Make issues like this, when it's about improving the design library (link: https://www.figma.com/file/xzzLf5aoluyutuX6Vhpt47/SmartCoop-Library?node-id=0%3A1 ).
|
1.0
|
Example issue - Example issue tagged with design library.
Make issues like this, when it's about improving the design library (link: https://www.figma.com/file/xzzLf5aoluyutuX6Vhpt47/SmartCoop-Library?node-id=0%3A1 ).
|
non_process
|
example issue example issue tagged with design library make issues like this when it s about improving the design library link
| 0
|
145,970
| 22,836,948,139
|
IssuesEvent
|
2022-07-12 17:36:26
|
practice-uffs/programa
|
https://api.github.com/repos/practice-uffs/programa
|
closed
|
Revisão do design para o app de boas-vindas
|
interno:produção P3 equipe:con-design ajuda:con-conteúdo
|
Essa issue está relacionada com as tarefas #1079 e #1149 , e refere-se a revisão das imagens, e demais elementos visuais no conteúdo presente no aplicativo de Boas-Vindas.
Equipe responsável: @practice-uffs/con-design
|
1.0
|
Revisão do design para o app de boas-vindas - Essa issue está relacionada com as tarefas #1079 e #1149 , e refere-se a revisão das imagens, e demais elementos visuais no conteúdo presente no aplicativo de Boas-Vindas.
Equipe responsável: @practice-uffs/con-design
|
non_process
|
revisão do design para o app de boas vindas essa issue está relacionada com as tarefas e e refere se a revisão das imagens e demais elementos visuais no conteúdo presente no aplicativo de boas vindas equipe responsável practice uffs con design
| 0
|
4,128
| 7,086,154,488
|
IssuesEvent
|
2018-01-11 13:41:36
|
rogerthat-platform/rogerthat-ios-client
|
https://api.github.com/repos/rogerthat-platform/rogerthat-ios-client
|
closed
|
Apps with custom home screen
|
priority_critical process_duplicate type_feature
|
- [ ] Use homescreen_style "branding" in `build.yaml`
- [ ] After registration, show progress bar until main service, all js embeddings and home branding are available.
- [ ] Implement api to read news items from inside a branding
- Security: return all news items to the main service. Other services should only receive their news items.
- [ ] Triggering of badge number callbacks
|
1.0
|
Apps with custom home screen - - [ ] Use homescreen_style "branding" in `build.yaml`
- [ ] After registration, show progress bar until main service, all js embeddings and home branding are available.
- [ ] Implement api to read news items from inside a branding
- Security: return all news items to the main service. Other services should only receive their news items.
- [ ] Triggering of badge number callbacks
|
process
|
apps with custom home screen use homescreen style branding in build yaml after registration show progress bar until main service all js embeddings and home branding are available implement api to read news items from inside a branding security return all news items to the main service other services should only receive their news items triggering of badge number callbacks
| 1
|
13,545
| 16,088,313,038
|
IssuesEvent
|
2021-04-26 13:55:52
|
geneontology/go-ontology
|
https://api.github.com/repos/geneontology/go-ontology
|
closed
|
host cell apoplast question
|
multi-species process quick fix
|
host cell apoplast -> host apoplast
(it isn't a cellular thing?)
|
1.0
|
host cell apoplast question -
host cell apoplast -> host apoplast
(it isn't a cellular thing?)
|
process
|
host cell apoplast question host cell apoplast host apoplast it isn t a cellular thing
| 1
|
251,954
| 21,529,176,575
|
IssuesEvent
|
2022-04-28 21:56:00
|
osmosis-labs/osmosis
|
https://api.github.com/repos/osmosis-labs/osmosis
|
closed
|
test: switch e2e test setup to creating genesis and configs via Dockertest
|
blocked T:tests T:CI T:task ⚙️
|
**Background**
This is part of #1235 epic
The preceding tasks include #1293 and #1330. This issue is blocked on these being merged.
Assuming that both are merged, we should be able to initialize genesis and config data via:
```
docker run -v < path >:/tmp/osmo-test osmosis-e2e-chain-init:debug --data-dir=/tmp/osmo-test
All chain data is placed at the given < path > that is mounted as a volume on the container. In addition, this PR introduces documentation about the current state of the e2e tests.
```
For the next step, we need to convert `func (s *IntegrationTestSuite) configureChain(chainId string)` in `e2e_setup_test.go` to initialize genesis and configs via Dockertest by using the newly created `osmosis-e2e-chain-init:debug` and mount its output on a volume.
Then, we should mount `osmosis:debug` containers that run validators on these mounted configs.
Please note that currently the chain id in the script used by the `osmosis-e2e-chain-init:debug` container [is hardcoded](https://github.com/osmosis-labs/osmosis/blob/2763946b9fc875c8368f1f9f3414b4d5ea6dd2f4/tests/e2e/chain_init/main.go#L24). Instead, it should be passed to the container via Dockertest and read from the script as a flag.
**Acceptance Criteria**
- chain initialization in e2e tests uses Dockertest and the `osmosis-e2e-chain-init:debug` container
- e2e tests continue to pass
|
1.0
|
test: switch e2e test setup to creating genesis and configs via Dockertest - **Background**
This is part of #1235 epic
The preceding tasks include #1293 and #1330. This issue is blocked on these being merged.
Assuming that both are merged, we should be able to initialize genesis and config data via:
```
docker run -v < path >:/tmp/osmo-test osmosis-e2e-chain-init:debug --data-dir=/tmp/osmo-test
All chain data is placed at the given < path > that is mounted as a volume on the container. In addition, this PR introduces documentation about the current state of the e2e tests.
```
For the next step, we need to convert `func (s *IntegrationTestSuite) configureChain(chainId string)` in `e2e_setup_test.go` to initialize genesis and configs via Dockertest by using the newly created `osmosis-e2e-chain-init:debug` and mount its output on a volume.
Then, we should mount `osmosis:debug` containers that run validators on these mounted configs.
Please note that currently the chain id in the script used by the `osmosis-e2e-chain-init:debug` container [is hardcoded](https://github.com/osmosis-labs/osmosis/blob/2763946b9fc875c8368f1f9f3414b4d5ea6dd2f4/tests/e2e/chain_init/main.go#L24). Instead, it should be passed to the container via Dockertest and read from the script as a flag.
**Acceptance Criteria**
- chain initialization in e2e tests uses Dockertest and the `osmosis-e2e-chain-init:debug` container
- e2e tests continue to pass
|
non_process
|
test switch test setup to creating genesis and configs via dockertest background this is part of epic the preceding tasks include and this issue is blocked on these being merged assuming that both are merged we should be able to initialize genesis and config data via docker run v tmp osmo test osmosis chain init debug data dir tmp osmo test all chain data is placed at the given that is mounted as a volume on the container in addition this pr introduces documentation about the current state of the tests for the next step we need to convert func s integrationtestsuite configurechain chainid string in setup test go to initialize genesis and configs via dockertest by using the newly created osmosis chain init debug and mount its output on a volume then we should mount osmosis debug containers that run validators on these mounted configs please note that currently the chain id in the script used by the osmosis chain init debug container instead it should be passed to the container via dockertest and read from the script as a flag acceptance criteria chain initialization in tests uses dockertest and the osmosis chain init debug container tests continue to pass
| 0
|
260,573
| 22,632,864,188
|
IssuesEvent
|
2022-06-30 16:01:43
|
knative/serving
|
https://api.github.com/repos/knative/serving
|
closed
|
e2e tests are failing for older release (1.2 to 1.4) due to backlevel dependency on knative/hack
|
kind/bug area/test-and-release
|
The serving e2e *release* tests are failing due to a backlevel dependency on `knative/hack`.
With a recent infrastructure update, the Go version has been bumped to a newer version which doesn't support `go get` outside of modules anymore (deprecated in favor of `go install`). Older serving releases (branches: `release-1.2`, `release-1.3`, `release-1.4`) do have a dependency on `knative/hack` which is still using `go get` to install third-party test dependencies. Please see here for the prow job status for the serving release e2e tests: <https://prow.knative.dev/?job=*release*serving*>
The relevant `knative/hack` update that replaces the `go get` call with `go install` was done in this PR: https://github.com/knative/hack/pull/169
The serving branches `release-1.2`, `release-1.3` and `release-1.4` should be updated with the corresponding `knative/hack` dependency that includes this PR. To avoid other (unwanted) Go dependency updates, the `knative/hack` dependency should probably be cherrypicked.
## Which area is affected?
/area test-and-release
## What version of Knative?
1.2.x
1.3.x
1.4.x
## Expected Behavior
The serving e2e tests for the affected branches are running successfully.
## Actual Behavior
The serving e2e tests for the affected branches are not running successfully due to a mismatch between the used Go version and the helper function sourced from the dependency `knative/hack`. See also: <https://prow.knative.dev/?job=*release*serving*>
|
1.0
|
e2e tests are failing for older release (1.2 to 1.4) due to backlevel dependency on knative/hack - The serving e2e *release* tests are failing due to a backlevel dependency on `knative/hack`.
With a recent infrastructure update, the Go version has been bumped to a newer version which doesn't support `go get` outside of modules anymore (deprecated in favor of `go install`). Older serving releases (branches: `release-1.2`, `release-1.3`, `release-1.4`) do have a dependency on `knative/hack` which is still using `go get` to install third-party test dependencies. Please see here for the prow job status for the serving release e2e tests: <https://prow.knative.dev/?job=*release*serving*>
The relevant `knative/hack` update that replaces the `go get` call with `go install` was done in this PR: https://github.com/knative/hack/pull/169
The serving branches `release-1.2`, `release-1.3` and `release-1.4` should be updated with the corresponding `knative/hack` dependency that includes this PR. To avoid other (unwanted) Go dependency updates, the `knative/hack` dependency should probably be cherrypicked.
## Which area is affected?
/area test-and-release
## What version of Knative?
1.2.x
1.3.x
1.4.x
## Expected Behavior
The serving e2e tests for the affected branches are running successfully.
## Actual Behavior
The serving e2e tests for the affected branches are not running successfully due to a mismatch between the used Go version and the helper function sourced from the dependency `knative/hack`. See also: <https://prow.knative.dev/?job=*release*serving*>
|
non_process
|
tests are failing for older release to due to backlevel dependency on knative hack the serving release tests are failing due to a backlevel dependency on knative hack with a recent infrastructure update the go version has been bumped to a newer version which doesn t support go get outside of modules anymore deprecated in favor of go install older serving releases branches release release release do have a dependency on knative hack which is still using go get to install third party test dependencies please see here for the prow job status for the serving release tests the relevant knative hack update that replaces the go get call with go install was done in this pr the serving branches release release and release should be updated with the corresponding knative hack dependency that includes this pr to avoid other unwanted go dependency updates the knative hack dependency should probably be cherrypicked which area is affected area test and release what version of knative x x x expected behavior the serving tests for the affected branches are running successfully actual behavior the serving tests for the affected branches are not running successfully due to a mismatch between the used go version and the helper function sourced from the dependency knative hack see also
| 0
|
291,397
| 25,143,280,434
|
IssuesEvent
|
2022-11-10 01:43:59
|
Azure/azure-sdk-tools
|
https://api.github.com/repos/Azure/azure-sdk-tools
|
opened
|
Publish "Test Template" Projects for the test-proxy
|
Test-Proxy
|
Right now, we have some `sample usage`, but that is very specifically just console apps that:
- start recording
- run single request
- stop recording
- start playback
- run single request (playback)
- stop
We need to extend these to be actually full test projects for use in production applications.
Currently requested from @xboxeer :
For `.net`:
- [x] Publish an `xunit` and `unit` test project examples that start record/playback in setup function -> run test.
For `go`:
- [x] Consult with @benbp and get the test framework. We'll probably end up ripping the go test framework almost verbatim.
|
1.0
|
Publish "Test Template" Projects for the test-proxy - Right now, we have some `sample usage`, but that is very specifically just console apps that:
- start recording
- run single request
- stop recording
- start playback
- run single request (playback)
- stop
We need to extend these to be actually full test projects for use in production applications.
Currently requested from @xboxeer :
For `.net`:
- [x] Publish an `xunit` and `unit` test project examples that start record/playback in setup function -> run test.
For `go`:
- [x] Consult with @benbp and get the test framework. We'll probably end up ripping the go test framework almost verbatim.
|
non_process
|
publish test template projects for the test proxy right now we have some sample usage but that is very specifically just console apps that start recording run single request stop recording start playback run single request playback stop we need to extend these to be actually full test projects for use in production applications currently requested from xboxeer for net publish an xunit and unit test project examples that start record playback in setup function run test for go consult with benbp and get the test framework we ll probably end up ripping the go test framework almost verbatim
| 0
|
10,213
| 13,078,800,614
|
IssuesEvent
|
2020-08-01 00:46:01
|
SethSterling/CPW213-eCommerceSite
|
https://api.github.com/repos/SethSterling/CPW213-eCommerceSite
|
closed
|
Add CI Pipeline
|
developer process
|
Add continuous integration that will check to make sure code in pull request compiles successfully.
|
1.0
|
Add CI Pipeline - Add continuous integration that will check to make sure code in pull request compiles successfully.
|
process
|
add ci pipeline add continuous integration that will check to make sure code in pull request compiles successfully
| 1
|
22,272
| 30,822,472,626
|
IssuesEvent
|
2023-08-01 17:21:44
|
lambdaclass/cairo_native
|
https://api.github.com/repos/lambdaclass/cairo_native
|
opened
|
Ensure that all benchmarks verify results
|
process
|
Assert expected and correct values are returned by the benchmarks, as incorrect benchmarks are worthless.
|
1.0
|
Ensure that all benchmarks verify results - Assert expected and correct values are returned by the benchmarks, as incorrect benchmarks are worthless.
|
process
|
ensure that all benchmarks verify results assert expected and correct values are returned by the benchmarks as incorrect benchmarks are worthless
| 1
|
6,223
| 9,161,748,334
|
IssuesEvent
|
2019-03-01 11:20:27
|
JudicialAppointmentsCommission/documentation
|
https://api.github.com/repos/JudicialAppointmentsCommission/documentation
|
closed
|
Transition to paid lastpass for the commission
|
process
|
See #48.
# Background
There are several different ways of maintaining passwords in the commission. We need to settle on a single way and ensure everyone knows how to use it.
We want a master repo from which we share necessary details with individuals, who all have their own logins.
## Done when
- [ ] We have budget approval for LastPass for all users
- [ ] All users have had minimal necessary training to use it.
- [ ] All password sources are consolidated to it.
|
1.0
|
Transition to paid lastpass for the commission - See #48.
# Background
There are several different ways of maintaining passwords in the commission. We need to settle on a single way and ensure everyone knows how to use it.
We want a master repo from which we share necessary details with individuals, who all have their own logins.
## Done when
- [ ] We have budget approval for LastPass for all users
- [ ] All users have had minimal necessary training to use it.
- [ ] All password sources are consolidated to it.
|
process
|
transition to paid lastpass for the commission see background there are several different ways of maintaining passwords in the commission we need to settle on a single way and ensure everyone knows how to use it we want a master repo from which we share necessary details with individuals who all have their own logins done when we have budget approval for lastpass for all users all users have had minimal necessary training to use it all password sources are consolidated to it
| 1
|
34,955
| 6,396,635,824
|
IssuesEvent
|
2017-08-04 15:58:58
|
intelsdi-x/snap
|
https://api.github.com/repos/intelsdi-x/snap
|
closed
|
REST API V1 documentation needs to be updated
|
tracked type/bug type/documentation-needed
|
Documentation for REST API needs a refresh:
* several methods are missing: `/plugins/:type/:name/:version/config` DELETE/ PUT,
* there's no mention of interface for loading signed plugins (while client program, `snapteld` clearly achieves it),
* format of REST responses was slightly changed, and now includes `href`, e.g.: response to GET a list of tasks:
```
{
"meta": {
"code": 200,
"message": "Scheduled tasks retrieved",
"type": "scheduled_task_list_returned",
"version": 1
},
"body": {
"ScheduledTasks": [
{
"id": "1ff5b7f0-83cc-47c1-8f74-e83fc5b9a45b",
"name": "Task-1ff5b7f0-83cc-47c1-8f74-e83fc5b9a45b",
"deadline": "5s",
"creation_timestamp": 1494330427,
"last_run_timestamp": -1,
"task_state": "Stopped",
"href": "http://localhost:8181/v1/tasks/1ff5b7f0-83cc-47c1-8f74-e83fc5b9a45b"
}
]
}
}
```
|
1.0
|
REST API V1 documentation needs to be updated - Documentation for REST API needs a refresh:
* several methods are missing: `/plugins/:type/:name/:version/config` DELETE/ PUT,
* there's no mention of interface for loading signed plugins (while client program, `snapteld` clearly achieves it),
* format of REST responses was slightly changed, and now includes `href`, e.g.: response to GET a list of tasks:
```
{
"meta": {
"code": 200,
"message": "Scheduled tasks retrieved",
"type": "scheduled_task_list_returned",
"version": 1
},
"body": {
"ScheduledTasks": [
{
"id": "1ff5b7f0-83cc-47c1-8f74-e83fc5b9a45b",
"name": "Task-1ff5b7f0-83cc-47c1-8f74-e83fc5b9a45b",
"deadline": "5s",
"creation_timestamp": 1494330427,
"last_run_timestamp": -1,
"task_state": "Stopped",
"href": "http://localhost:8181/v1/tasks/1ff5b7f0-83cc-47c1-8f74-e83fc5b9a45b"
}
]
}
}
```
|
non_process
|
rest api documentation needs to be updated documentation for rest api needs a refresh several methods are missing plugins type name version config delete put there s no mention of interface for loading signed plugins while client program snapteld clearly achieves it format of rest responses was slightly changed and now includes href e g response to get a list of tasks meta code message scheduled tasks retrieved type scheduled task list returned version body scheduledtasks id name task deadline creation timestamp last run timestamp task state stopped href
| 0
|
46,210
| 11,799,887,088
|
IssuesEvent
|
2020-03-18 16:36:21
|
googleapis/gcs-resumable-upload
|
https://api.github.com/repos/googleapis/gcs-resumable-upload
|
closed
|
Mocha Tests: should resume an interrupted upload failed
|
api: storage buildcop: issue priority: p1 type: bug
|
This test failed!
To configure my behavior, see [the Build Cop Bot documentation](https://github.com/googleapis/repo-automation-bots/tree/master/packages/buildcop).
If I'm commenting on this issue too often, add the `buildcop: quiet` label and
I will stop commenting.
---
buildID: 0f2c73c94d03168e66e7ae44fe454568b7e5863e
buildURL: [Build Status](https://source.cloud.google.com/results/invocations/e08d178f-7c17-4d65-ac77-9376f7b54903), [Sponge](http://sponge2/e08d178f-7c17-4d65-ac77-9376f7b54903)
status: failed
|
1.0
|
Mocha Tests: should resume an interrupted upload failed - This test failed!
To configure my behavior, see [the Build Cop Bot documentation](https://github.com/googleapis/repo-automation-bots/tree/master/packages/buildcop).
If I'm commenting on this issue too often, add the `buildcop: quiet` label and
I will stop commenting.
---
buildID: 0f2c73c94d03168e66e7ae44fe454568b7e5863e
buildURL: [Build Status](https://source.cloud.google.com/results/invocations/e08d178f-7c17-4d65-ac77-9376f7b54903), [Sponge](http://sponge2/e08d178f-7c17-4d65-ac77-9376f7b54903)
status: failed
|
non_process
|
mocha tests should resume an interrupted upload failed this test failed to configure my behavior see if i m commenting on this issue too often add the buildcop quiet label and i will stop commenting buildid buildurl status failed
| 0
|
2,699
| 5,542,042,821
|
IssuesEvent
|
2017-03-22 14:15:45
|
DynareTeam/dynare
|
https://api.github.com/repos/DynareTeam/dynare
|
opened
|
make commands independent of one another
|
enhancement preprocessor
|
Allow users to bypass the current situation where an option set in one command is perpetuated into subsequent commands when the user doesn't explicitly pass the option again. e.g. In the following case, the second call to `command` will have options 1, 2, and 3 set even though only 1 and 3 were passed:
```
command(option1, option2);
command(option1, option3);
```
Introduce a new syntax such as
```
command(option1, option2);
command!(option1, option3);
```
which would tell the preprocessor to reset all command-specific options to their defaults before writing output. To do this, every commands options must be local to a substructure of `options_` (i.e. `options_.command.option1`, `options_.command.option2`, etc.)
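The proposed reset semantics can be sketched in Python, with a hypothetical option store standing in for `options_.command` (illustrative only, not Dynare internals):

```python
# Hypothetical sketch: a plain `command` call keeps options set by earlier
# calls, while `command!` (modeled by reset=True) restores defaults first.
defaults = {"option1": False, "option2": False, "option3": False}
state = dict(defaults)

def command(reset=False, **passed):
    """Simulate one preprocessor command; reset=True models `command!`."""
    global state
    if reset:
        state = dict(defaults)
    state.update(passed)
    return dict(state)

command(option1=True, option2=True)                     # sets 1 and 2
persisted = command(option1=True, option3=True)         # 2 persists -> 1, 2, 3 set

state = dict(defaults)                                  # fresh run
command(option1=True, option2=True)
reset_call = command(True, option1=True, option3=True)  # `command!`: only 1 and 3

print(persisted)
print(reset_call)
```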
|
1.0
|
make commands independent of one another - Allow users to bypass the current situation where an option set in one command is perpetuated into subsequent commands when the user doesn't explicitly pass the option again. e.g. In the following case, the second call to `command` will have options 1, 2, and 3 set even though only 1 and 3 were passed:
```
command(option1, option2);
command(option1, option3);
```
Introduce a new syntax such as
```
command(option1, option2);
command!(option1, option3);
```
which would tell the preprocessor to reset all command-specific options to their defaults before writing output. To do this, every commands options must be local to a substructure of `options_` (i.e. `options_.command.option1`, `options_.command.option2`, etc.)
|
process
|
make commands independent of one another allow users the possibility to bypass the current situation where an option set in one command is perpetuated into other commands when the user doesn t explicitly pass the option again e g in the following case the second call to command will have options and set even though only and were passed command command introduce a new syntax such as command command which would tell the preprocessor to reset all command specific options to their defaults before writing output to do this every commands options must be local to a substructure of options i e options command options command etc
| 1
|
51,813
| 6,198,478,856
|
IssuesEvent
|
2017-07-05 19:15:29
|
UV-CDAT/vcdat
|
https://api.github.com/repos/UV-CDAT/vcdat
|
closed
|
DOE Demo
|
Question Testing
|
* Script Demo with @williams13, @doutriaux1, and Jerry
* 1D plots
* 3D plots
* multiple plots
* box fill editor
* template editor
* work in progress
* Schedule Demo
|
1.0
|
DOE Demo - * Script Demo with @williams13, @doutriaux1, and Jerry
* 1D plots
* 3D plots
* multiple plots
* box fill editor
* template editor
* work in progress
* Schedule Demo
|
non_process
|
doe demo script demo with and jerry plots plots multiple plots box fill editor template editor work in progress schedule demo
| 0
|
15,921
| 20,122,071,240
|
IssuesEvent
|
2022-02-08 04:11:07
|
createwithrani/superlist
|
https://api.github.com/repos/createwithrani/superlist
|
closed
|
Should build files be excluded from version control?
|
Process
|
## The Situation
I had never considered that someone else might want to contribute to the plugin, although I know it's a possibility and I have contributed to other plugins on occasion myself! To that end, if build files are committed then there will always be a conflict (if just of the date and time in the asset files), which will make merging more problematic.
## To Ponder
So, should build files be excluded the way they are in Gutenberg? 🤔
|
1.0
|
Should build files be excluded from version control? - ## The Situation
I had never considered that someone else might want to contribute to the plugin, although I know it's a possibility and I have contributed to other plugins on occasion myself! To that end, if build files are committed then there will always be a conflict (if just of the date and time in the asset files), which will make merging more problematic.
## To Ponder
So, should build files be excluded the way they are in Gutenberg? 🤔
|
process
|
should build files be excluded from version control the situation i had never considered that someone else might want to contribute to the plugin although i know it s a possibility and i have contributed to other plugins on occasion myself to that end if build files are committed then there will always be a conflict if just of the date and time in the asset files which will make merging more problematic to ponder so should build files be excluded the way they are in gutenberg 🤔
| 1
|
14,327
| 17,362,331,338
|
IssuesEvent
|
2021-07-29 23:00:58
|
qgis/QGIS
|
https://api.github.com/repos/qgis/QGIS
|
closed
|
Need QgsProcessing::SourceType::TypePluginLayer for custom layers Processing Algorithms
|
Feature Request Processing PyQGIS
|
Feature description.
----------------------
<!-- A clear and concise description of what you want to happen. Ex. QGIS would rock even more if [...] -->
QGIS allows to create plugins and their own `QgsPluginLayer`/`QgsPluginLayerType` ...
but one cannot use such layers in the processing toolbox *(or at least I didn't find out how to)*
**In the current state, there is no way to handle a custom layer (aka `QgsPlugin`) as a `QgsProcessingAlgorithm` parameter...**
It would be awesome to get a way to *at least* add a restriction on `QgsPluginLayerTypes` to `QgsProcessingParameterMapLayer.QgsProcessingParameterLimitedDataTypes` ... and allow to emit an appropriate error/warning at runtime...
|
1.0
|
Need QgsProcessing::SourceType::TypePluginLayer for custom layers Processing Algorithms - Feature description.
----------------------
<!-- A clear and concise description of what you want to happen. Ex. QGIS would rock even more if [...] -->
QGIS allows to create plugins and their own `QgsPluginLayer`/`QgsPluginLayerType` ...
but one cannot use such layers in the processing toolbox *(or at least I didn't find out how to)*
**In the current state, there is no way to handle a custom layer (aka `QgsPlugin`) as a `QgsProcessingAlgorithm` parameter...**
It would be awesome to get a way to *at least* add a restriction on `QgsPluginLayerTypes` to `QgsProcessingParameterMapLayer.QgsProcessingParameterLimitedDataTypes` ... and allow to emit an appropriate error/warning at runtime...
|
process
|
need qgsprocessing sourcetype typepluginlayer for custom layers processing algorithms feature description qgis allows to create plugins and their own qgspluginlayer qgspluginlayertype but one cannot use such layers in the processing toolbox or at least i didn t find out howto in the current state there is no way to handle a custom layer aka qgsplugin as a qgsprocessingalgorithm parameter it would be awesome to get a way to at least add a restriction on qgspluginlayertypes to qgsprocessingparametermaplayer qgsprocessingparameterlimiteddatatypes and allow to emit an apropriated error warning at runtime
| 1
|
1,220
| 3,749,917,005
|
IssuesEvent
|
2016-03-11 02:41:50
|
mapbox/mapbox-gl-js
|
https://api.github.com/repos/mapbox/mapbox-gl-js
|
opened
|
Speed up or remove "Camera flyTo ascends" test
|
testing & release process
|
This test accounts for a vast majority of the test runtime
https://github.com/mapbox/mapbox-gl-js/blob/master/test/js/ui/camera.test.js#L810
|
1.0
|
Speed up or remove "Camera flyTo ascends" test - This test accounts for a vast majority of the test runtime
https://github.com/mapbox/mapbox-gl-js/blob/master/test/js/ui/camera.test.js#L810
|
process
|
speed up or remove camera flyto ascends test this test accounts for a vast majority of the test runtime
| 1
|
58,099
| 14,279,595,011
|
IssuesEvent
|
2020-11-23 03:21:36
|
jump-dev/Ipopt.jl
|
https://api.github.com/repos/jump-dev/Ipopt.jl
|
closed
|
Error linking IPOPT with HSL built on Windows
|
Custom build
|
Following the [instructions]( https://coin-or.github.io/Ipopt/INSTALL.html) to build `Ipopt.jl` on Windows (see more details below), I wanted to run `Ipopt.jl` with the custom binaries.
But running `Pkg.build("Ipopt")` fails at https://github.com/jump-dev/Ipopt.jl/blob/a7231ce420a7616cd51278f7196b94037a41c91c/deps/build.jl#L118
with the error message
```julia
┌ Error: Error building `Ipopt`:
│ ERROR: LoadError: Could not install custom libraries from D:/Libraries/msys64/mingw64/bin and D:/Libraries/msys64/mingw64/lib.
│ To fall back to BinaryProvider call delete!(ENV,"JULIA_IPOPT_LIBRARY_PATH");delete!(ENV,"JULIA_IPOPT_EXECUTABLE_PATH") and run build
again.
│ Stacktrace:
│ [1] error(::String) at .\error.jl:33
│ [2] top-level scope at D:\Libraries\Julia\v1.x\packages\Ipopt\bYzBL\deps\build.jl:122
│ [3] include(::String) at .\client.jl:457
│ [4] top-level scope at none:5
│ in expression starting at D:\Libraries\Julia\v1.x\packages\Ipopt\bYzBL\deps\build.jl:116
└ @ Pkg.Operations D:\buildbot\worker\package_win64\build\usr\share\julia\stdlib\v1.5\Pkg\src\Operations.jl:949
```
I am a bit lost at the moment. Any pointers on how to proceed? Did I miss something?
One problem that I noticed is that if I run `Libdl.dlopen("D:/Libraries/msys64/mingw64/bin/libipopt-3.dll")`, it kills the Julia session most of the time. Except if I do it with Julia 1.3.1, then I get
```julia
julia> Libdl.dlopen("D:/Libraries/msys64/mingw64/bin/libipopt-3.dll")
Ptr{Nothing} @0x0000000065180000
```
<details>
<summary>Installation procedure:</summary>
### Install [msys2](https://www.msys2.org/)
1. `pacman -Syu`
2. `pacman -Su`
3. Install required dependencies
- `pacman -S binutils diffutils git grep make patch pkg-config`
- `pacman -S mingw-w64-x86_64-gcc`
- `pacman -S mingw-w64-x86_64-gfortran`
=> did not work, replaced by `pacman -S gcc-fortran`
- `pacman -S mingw-w64-x86_64-lapack mingw-w64-x86_64-metis`
4. Install other dependencies (might not be needed)
- `pacman -S mingw-w64-x86_64-gcc`
- `pacman -S mingw-w64-x86_64-toolchain `
- `pacman -S autoconf make libtool automake git`
### open `Mingw-w64 64 bit`
1. `cd D:/Libraries/Ipopt_msys2`
### ASL
1. `git clone https://github.com/coin-or-tools/ThirdParty-ASL.git`
2. `cd ThirdParty-ASL`
3. `./get.ASL` # adds solvers
4. `./configure`
5. `make`
6. `make install`
7. `cd ..`
### HSL
1. `git clone https://github.com/coin-or-tools/ThirdParty-HSL.git`
2. `cd ThirdParty-HSL`
3. add the `coinhsl` folder
4. `./configure`
5. `make`
6. `make install`
7. `cd ..`
### Ipopt
1. `git clone https://github.com/coin-or/Ipopt.git`
2. `cd Ipopt`
3. `mkdir build`
4. `cd build`
5. `../configure`
   - `checking for package HSL... yes`
   - `checking for package ASL... yes`
6. `make`
   * libtool: warning: 'D:/Libraries/msys64/mingw64/lib/libcoinhsl.la' seems to be moved
   * libtool: warning: '-version-info' is ignored for programs
7. `make test`
   * Test passed!
8. `make install`
### Ipopt.jl
```julia
ENV["JULIA_IPOPT_LIBRARY_PATH"] = "D:/Libraries/msys64/mingw64/bin"
ENV["JULIA_IPOPT_EXECUTABLE_PATH"] = "D:/Libraries/msys64/mingw64/lib"
import Pkg; Pkg.build("Ipopt")
Ipopt → `D:\Libraries\Julia\v1.x\packages\Ipopt\bYzBL\deps\build.log`
┌ Error: Error building `Ipopt`:
│ ERROR: LoadError: Could not install custom libraries from and .
│ To fall back to BinaryProvider call delete!(ENV,"JULIA_IPOPT_LIBRARY_PATH");delete!(ENV,"JULIA_IPOPT_EXECUTABLE_PATH") and run build
again.
│ Stacktrace:
│ [1] error(::String) at .\error.jl:33
│ [2] top-level scope at D:\Libraries\Julia\v1.x\packages\Ipopt\bYzBL\deps\build.jl:122
│ [3] include(::String) at .\client.jl:457
│ [4] top-level scope at none:5
│ in expression starting at D:\Libraries\Julia\v1.x\packages\Ipopt\bYzBL\deps\build.jl:116
└ @ Pkg.Operations D:\buildbot\worker\package_win64\build\usr\share\julia\stdlib\v1.5\Pkg\src\Operations.jl:949
```
</details>
<details>
<summary>File Structure</summary>
- `D:\Libraries\msys64\mingw64\bin`

- `D:\Libraries\msys64\mingw64\lib`

</details>
<details>
<summary>System specifications</summary>
```julia
julia> versioninfo()
Julia Version 1.5.2
Commit 539f3ce943 (2020-09-23 23:17 UTC)
Platform Info:
OS: Windows (x86_64-w64-mingw32)
CPU: Intel(R) Xeon(R) Platinum 8168 CPU @ 2.70GHz
WORD_SIZE: 64
LIBM: libopenlibm
LLVM: libLLVM-9.0.1 (ORCJIT, skylake-avx512)
```
</details>
Thanks for any help in advance!
Side note: This compilation was done because I want to use the ma27 solver with `Ipopt.jl`
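Before pointing `Ipopt.jl` at the custom build, it can help to check that the DLL loads at all from outside Julia. A small ctypes sketch (the path is the one from this report; substitute your own build location):

```python
import ctypes
import os

def try_load(path):
    """Return True if the shared library at `path` can be dlopen'ed."""
    if not os.path.exists(path):
        return False
    try:
        ctypes.CDLL(path)
        return True
    except OSError:
        return False

# Path taken from the report above; adjust for your machine.
print(try_load("D:/Libraries/msys64/mingw64/bin/libipopt-3.dll"))
```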
|
1.0
|
Error linking IPOPT with HSL built on Windows - Following the [instructions]( https://coin-or.github.io/Ipopt/INSTALL.html) to build `Ipopt.jl` on Windows (see more details below), I wanted to run `Ipopt.jl` with the custom binaries.
But running `Pkg.build("Ipopt")` fails at https://github.com/jump-dev/Ipopt.jl/blob/a7231ce420a7616cd51278f7196b94037a41c91c/deps/build.jl#L118
with the error message
```julia
┌ Error: Error building `Ipopt`:
│ ERROR: LoadError: Could not install custom libraries from D:/Libraries/msys64/mingw64/bin and D:/Libraries/msys64/mingw64/lib.
│ To fall back to BinaryProvider call delete!(ENV,"JULIA_IPOPT_LIBRARY_PATH");delete!(ENV,"JULIA_IPOPT_EXECUTABLE_PATH") and run build
again.
│ Stacktrace:
│ [1] error(::String) at .\error.jl:33
│ [2] top-level scope at D:\Libraries\Julia\v1.x\packages\Ipopt\bYzBL\deps\build.jl:122
│ [3] include(::String) at .\client.jl:457
│ [4] top-level scope at none:5
│ in expression starting at D:\Libraries\Julia\v1.x\packages\Ipopt\bYzBL\deps\build.jl:116
└ @ Pkg.Operations D:\buildbot\worker\package_win64\build\usr\share\julia\stdlib\v1.5\Pkg\src\Operations.jl:949
```
I am a bit lost at the moment. Any pointers on how to proceed? Did I miss something?
One problem that I noticed is that if I run `Libdl.dlopen("D:/Libraries/msys64/mingw64/bin/libipopt-3.dll")`, it kills the Julia session most of the time. Except if I do it with Julia 1.3.1, then I get
```julia
julia> Libdl.dlopen("D:/Libraries/msys64/mingw64/bin/libipopt-3.dll")
Ptr{Nothing} @0x0000000065180000
```
<details>
<summary>Installation procedure:</summary>
### Install [msys2](https://www.msys2.org/)
1. `pacman -Syu`
2. `pacman -Su`
3. Install required dependencies
- `pacman -S binutils diffutils git grep make patch pkg-config`
- `pacman -S mingw-w64-x86_64-gcc`
- `pacman -S mingw-w64-x86_64-gfortran`
=> did not work, replaced by `pacman -S gcc-fortran`
- `pacman -S mingw-w64-x86_64-lapack mingw-w64-x86_64-metis`
4. Install other dependencies (might not be needed)
- `pacman -S mingw-w64-x86_64-gcc`
- `pacman -S mingw-w64-x86_64-toolchain `
- `pacman -S autoconf make libtool automake git`
### open `Mingw-w64 64 bit`
1. `cd D:/Libraries/Ipopt_msys2`
### ASL
1. `git clone https://github.com/coin-or-tools/ThirdParty-ASL.git`
2. `cd ThirdParty-ASL`
3. `./get.ASL` # adds solvers
4. `./configure`
5. `make`
6. `make install`
7. `cd ..`
### HSL
1. `git clone https://github.com/coin-or-tools/ThirdParty-HSL.git`
2. `cd ThirdParty-HSL`
3. add the `coinhsl` folder
4. `./configure`
5. `make`
6. `make install`
7. `cd ..`
### Ipopt
1. `git clone https://github.com/coin-or/Ipopt.git`
2. `cd Ipopt`
3. `mkdir build`
4. `cd build`
5. `../configure`
   - `checking for package HSL... yes`
   - `checking for package ASL... yes`
6. `make`
   * libtool: warning: 'D:/Libraries/msys64/mingw64/lib/libcoinhsl.la' seems to be moved
   * libtool: warning: '-version-info' is ignored for programs
7. `make test`
   * Test passed!
8. `make install`
### Ipopt.jl
```julia
ENV["JULIA_IPOPT_LIBRARY_PATH"] = "D:/Libraries/msys64/mingw64/bin"
ENV["JULIA_IPOPT_EXECUTABLE_PATH"] = "D:/Libraries/msys64/mingw64/lib"
import Pkg; Pkg.build("Ipopt")
Ipopt → `D:\Libraries\Julia\v1.x\packages\Ipopt\bYzBL\deps\build.log`
┌ Error: Error building `Ipopt`:
│ ERROR: LoadError: Could not install custom libraries from and .
│ To fall back to BinaryProvider call delete!(ENV,"JULIA_IPOPT_LIBRARY_PATH");delete!(ENV,"JULIA_IPOPT_EXECUTABLE_PATH") and run build
again.
│ Stacktrace:
│ [1] error(::String) at .\error.jl:33
│ [2] top-level scope at D:\Libraries\Julia\v1.x\packages\Ipopt\bYzBL\deps\build.jl:122
│ [3] include(::String) at .\client.jl:457
│ [4] top-level scope at none:5
│ in expression starting at D:\Libraries\Julia\v1.x\packages\Ipopt\bYzBL\deps\build.jl:116
└ @ Pkg.Operations D:\buildbot\worker\package_win64\build\usr\share\julia\stdlib\v1.5\Pkg\src\Operations.jl:949
```
</details>
<details>
<summary>File Structure</summary>
- `D:\Libraries\msys64\mingw64\bin`

- `D:\Libraries\msys64\mingw64\lib`

</details>
<details>
<summary>System specifications</summary>
```julia
julia> versioninfo()
Julia Version 1.5.2
Commit 539f3ce943 (2020-09-23 23:17 UTC)
Platform Info:
OS: Windows (x86_64-w64-mingw32)
CPU: Intel(R) Xeon(R) Platinum 8168 CPU @ 2.70GHz
WORD_SIZE: 64
LIBM: libopenlibm
LLVM: libLLVM-9.0.1 (ORCJIT, skylake-avx512)
```
</details>
Thanks for any help in advance!
Side note: This compilation was done because I want to use the ma27 solver with `Ipopt.jl`
|
non_process
|
error linking ipopt with hsl built on windows following the to build ipopt jl onwindows see more details below i wanted to run ipopt jl with the custom binaries but running pkg build ipopt failes at with the error message julia ┌ error error building ipopt │ error loaderror could not install custom libraries from d libraries bin and d libraries lib │ to fall back to binaryprovider call delete env julia ipopt library path delete env julia ipopt executable path and run build again │ stacktrace │ error string at error jl │ top level scope at d libraries julia x packages ipopt byzbl deps build jl │ include string at client jl │ top level scope at none │ in expression starting at d libraries julia x packages ipopt byzbl deps build jl └ pkg operations d buildbot worker package build usr share julia stdlib pkg src operations jl i am a bit lost at the moment any pointers on how to proceed did i miss something one problem that i noticed is that if a run libdl dlopen d libraries bin libipopt dll it kills the julia session most of the time except if i do it with julia then i get julia julia libdl dlopen d libraries bin libipopt dll ptr nothing installation procedure install pacman syu pacman su install required dependencies pacman s binutils diffutils git grep make patch pkg config pacman s mingw gcc pacman s mingw gfortran did not work replaced by pacman s gcc fortran pacman s mingw lapack mingw metis install other dependencies might not be needed pacman s mingw gcc pacman s mingw toolchain pacman s autoconf make libtool automake git open mingw bit cd d libraries ipopt asl git clone cd thirdparty asl get asl adds solvers configure make make install cd hsl git clone cd thirdparty hsl add the coinhsl folder configure make make install cd ipopt git clone cd ipopt mkdir build cd build configure checking for package hsl yes checking for package asl yes make libtool warning d libraries lib libcoinhsl la seems to be moved libtool warning version info is ignored for programs make 
test test passed make install ipopt jl julia env d libraries bin env d libraries lib import pkg pkg build ipopt ipopt → d libraries julia x packages ipopt byzbl deps build log ┌ error error building ipopt │ error loaderror could not install custom libraries from and │ to fall back to binaryprovider call delete env julia ipopt library path delete env julia ipopt executable path and run build again │ stacktrace │ error string at error jl │ top level scope at d libraries julia x packages ipopt byzbl deps build jl │ include string at client jl │ top level scope at none │ in expression starting at d libraries julia x packages ipopt byzbl deps build jl └ pkg operations d buildbot worker package build usr share julia stdlib pkg src operations jl file structure d libraries bin d libraries lib system specifications julia julia versioninfo julia version commit utc platform info os windows cpu intel r xeon r platinum cpu word size libm libopenlibm llvm libllvm orcjit skylake thanks for any help in advance side note this compilation was done because i want to use the solver with ipopt jl
| 0
|
831
| 3,297,049,081
|
IssuesEvent
|
2015-11-02 05:14:08
|
ViDA-NYU/genotet
|
https://api.github.com/repos/ViDA-NYU/genotet
|
opened
|
Add binding data upload
|
data processing functionality server
|
The user needs to be able to upload binding data through the user interface. The system takes the file and run necessary pre-processing and stores the resulting file & data structure on the server disk.
|
1.0
|
Add binding data upload - The user needs to be able to upload binding data through the user interface. The system takes the file and run necessary pre-processing and stores the resulting file & data structure on the server disk.
|
process
|
add binding data upload the user needs to be able to upload binding data through the user interface the system takes the file and run necessary pre processing and stores the resulting file data structure on the server disk
| 1
|
12,627
| 15,016,007,725
|
IssuesEvent
|
2021-02-01 09:02:36
|
GoogleCloudPlatform/fda-mystudies
|
https://api.github.com/repos/GoogleCloudPlatform/fda-mystudies
|
closed
|
Change the text from 'Email ID' to 'Email'
|
Bug P2 Participant manager Process: Fixed Process: Tested QA Process: Tested dev
|
For consistency change the text from 'Email ID' to 'Email' for the following screens
1. Add new admin screen
2. Edit admin details screen
3. My account screen

|
3.0
|
Change the text from 'Email ID' to 'Email' - For consistency change the text from 'Email ID' to 'Email' for the following screens
1. Add new admin screen
2. Edit admin details screen
3. My account screen

|
process
|
change the text from email id to email for consistency change the text from email id to email for the following screens add new admin screen edit admin details screen my account screen
| 1
|
224,104
| 17,660,311,944
|
IssuesEvent
|
2021-08-21 11:05:51
|
log2timeline/plaso
|
https://api.github.com/repos/log2timeline/plaso
|
closed
|
Invalid source mapping shown in tests
|
testing output
|
Invalid source mapping shown in tests
```
2021-08-21 08:00:12,060 [ERROR] (MainProcess) PID:252247 <formatting_helper> Invalid source mapping: ['fish:history:command', 'LOG Fish History']
```
|
1.0
|
Invalid source mapping shown in tests - Invalid source mapping shown in tests
```
2021-08-21 08:00:12,060 [ERROR] (MainProcess) PID:252247 <formatting_helper> Invalid source mapping: ['fish:history:command', 'LOG Fish History']
```
|
non_process
|
invalid source mapping shown in tests invalid source mapping shown in tests mainprocess pid invalid source mapping
| 0
|
17,212
| 22,797,774,566
|
IssuesEvent
|
2022-07-11 00:09:40
|
parcel-bundler/parcel
|
https://api.github.com/repos/parcel-bundler/parcel
|
closed
|
CSS Modules stop working after adding PostCSS config
|
:bug: Bug CSS Preprocessing Stale
|
# 🐛 bug report
When setting up a starter project I've noticed that CSS Modules worked fine at first, but when I add a postcss config to the project, CSS modules file detection stops working. Adding `modules: true` to the config enforces modules for all files in that case but that still gets applied only for the files located in the product and not in the library itself.
## 🎛 Configuration (.babelrc, package.json, cli command)
No additional configuration
## 🤔 Expected Behavior
Color from index.module.css styles get applied to the rendered text
## 😯 Current Behavior
CSS Modules import returns no classNames and text is rendered with the default browser color
## 🔦 Context
I've been trying to create an example of using my component library [Arcade together with Parcel](https://github.com/arcade-design/community/tree/master/examples). It heavily relies on 2 ideas of working with CSS:
- It exposes a postcss config that should be used on the product side by re-exporting it from postcss.config.js
- It works with CSS Modules which stay in the ESM build of the library and also can be used in the product code
## 💻 Code Sample
The issue can be reproduced here:
https://github.com/blvdmitry/parcel-postcss-issue
## 🌍 Your Environment
| Software | Version(s) |
| ---------------- | ---------- |
| Parcel | 2.0.1
| Node | 16.13.0
| npm/Yarn | Yarn 1.22.11
| Operating System | Mac OS Big Sur 11.6.1
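Parcel's default detection, as this report describes, keys off the `*.module.css` filename suffix unless a PostCSS config with `modules: true` forces module treatment everywhere. A minimal sketch of that decision (illustrative only, not Parcel's actual code):

```python
# Sketch of the suffix-based CSS Modules detection described above.
def is_css_module(filename, postcss_forces_modules=False):
    """True if a stylesheet should be compiled as a CSS module."""
    return postcss_forces_modules or filename.endswith(".module.css")

print(is_css_module("index.module.css"))   # detected by suffix
print(is_css_module("global.css"))         # plain stylesheet
print(is_css_module("global.css", True))   # forced by `modules: true`
```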
|
1.0
|
CSS Modules stop working after adding PostCSS config - # 🐛 bug report
When setting up a starter project I've noticed that CSS Modules worked fine at first, but when I add a postcss config to the project, CSS modules file detection stops working. Adding `modules: true` to the config enforces modules for all files in that case but that still gets applied only for the files located in the product and not in the library itself.
## 🎛 Configuration (.babelrc, package.json, cli command)
No additional configuration
## 🤔 Expected Behavior
Color from index.module.css styles get applied to the rendered text
## 😯 Current Behavior
CSS Modules import returns no classNames and text is rendered with the default browser color
## 🔦 Context
I've been trying to create an example of using my component library [Arcade together with Parcel](https://github.com/arcade-design/community/tree/master/examples). It heavily relies on 2 ideas of working with CSS:
- It exposes a postcss config that should be used on the product side by re-exporting it from postcss.config.js
- It works with CSS Modules which stay in the ESM build of the library and also can be used in the product code
## 💻 Code Sample
The issue can be reproduced here:
https://github.com/blvdmitry/parcel-postcss-issue
## 🌍 Your Environment
| Software | Version(s) |
| ---------------- | ---------- |
| Parcel | 2.0.1
| Node | 16.13.0
| npm/Yarn | Yarn 1.22.11
| Operating System | Mac OS Big Sur 11.6.1
|
process
|
css modules stop working after adding postcss config 🐛 bug report when setting up a starter project i ve noticed that css modules worked fine at first but when i add a postcss config to the project css modules file detection stops working adding modules true to the config enforces modules for all files in that case but that still gets applied only for the files located in the product and not in the library itself 🎛 configuration babelrc package json cli command no additional configuration 🤔 expected behavior color from index module css styles get applied to the rendered text 😯 current behavior css modules import returns no classnames and text is rendered with the default browser color 🔦 context i ve been trying to create an example of using my component library it heavily relies on ideas of working with css it exposes a postcss config that should be used on the product side by re exporting it from postcss config js it works with css modules which stay in the esm build of the library and also can be used in the product code 💻 code sample the issue can be reproduced here 🌍 your environment software version s parcel node npm yarn yarn operating system mac os big sur
| 1
|
15,497
| 19,703,237,156
|
IssuesEvent
|
2022-01-12 18:50:19
|
googleapis/google-cloud-php-common-protos
|
https://api.github.com/repos/googleapis/google-cloud-php-common-protos
|
opened
|
Your .repo-metadata.json file has a problem 🤒
|
type: process repo-metadata: lint
|
You have a problem with your .repo-metadata.json file:
Result of scan 📈:
* client_documentation must match pattern "^https://.*" in .repo-metadata.json
* release_level must be equal to one of the allowed values in .repo-metadata.json
☝️ Once you correct these problems, you can close this issue.
Reach out to **go/github-automation** if you have any questions.
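The two reported checks can be reproduced locally with a small validator. A sketch, where the set of allowed release levels is an assumption rather than the bot's actual schema:

```python
import json
import re

# Assumed allowed values; the bot's real schema may differ.
ALLOWED_RELEASE_LEVELS = {"stable", "preview"}

def lint_repo_metadata(raw):
    """Apply the two checks from the report to a .repo-metadata.json document."""
    meta = json.loads(raw)
    problems = []
    if not re.match(r"^https://.*", meta.get("client_documentation", "")):
        problems.append('client_documentation must match pattern "^https://.*"')
    if meta.get("release_level") not in ALLOWED_RELEASE_LEVELS:
        problems.append("release_level must be equal to one of the allowed values")
    return problems

bad = '{"client_documentation": "cloud.google.com/php", "release_level": "ga"}'
good = '{"client_documentation": "https://cloud.google.com/php", "release_level": "stable"}'
print(lint_repo_metadata(bad))   # both checks fire
print(lint_repo_metadata(good))  # no problems
```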
|
1.0
|
Your .repo-metadata.json file has a problem 🤒 - You have a problem with your .repo-metadata.json file:
Result of scan 📈:
* client_documentation must match pattern "^https://.*" in .repo-metadata.json
* release_level must be equal to one of the allowed values in .repo-metadata.json
☝️ Once you correct these problems, you can close this issue.
Reach out to **go/github-automation** if you have any questions.
|
process
|
your repo metadata json file has a problem 🤒 you have a problem with your repo metadata json file result of scan 📈 client documentation must match pattern in repo metadata json release level must be equal to one of the allowed values in repo metadata json ☝️ once you correct these problems you can close this issue reach out to go github automation if you have any questions
| 1
|
571,586
| 17,023,328,613
|
IssuesEvent
|
2021-07-03 01:27:29
|
tomhughes/trac-tickets
|
https://api.github.com/repos/tomhughes/trac-tickets
|
closed
|
Patch to allow osm2pgsql to create sqlite databases
|
Component: osm2pgsql Priority: minor Resolution: invalid Type: enhancement
|
**[Submitted to the original trac issue database at 3.41pm, Tuesday, 2nd December 2008]**
Patch to add an additional output layer to osm2pgsql which creates sqlite databases.
sqlite does not support geographic data-types (POINT/LINE/POLYGON etc.) so these are modelled loosely in separate tables.
|
1.0
|
Patch to allow osm2pgsql to create sqlite databases - **[Submitted to the original trac issue database at 3.41pm, Tuesday, 2nd December 2008]**
Patch to add an additional output layer to osm2pgsql which creates sqlite databases.
sqlite does not support geographic data-types (POINT/LINE/POLYGON etc.) so these are modelled loosely in separate tables.
|
non_process
|
patch to allow to create sqlite databases patch to add an additional output layer to which creates sqlite databases sqlite does not support geographic data types point line polygon etc so these are modelled loosely in seperate tables
| 0
|
17,418
| 5,401,246,913
|
IssuesEvent
|
2017-02-28 00:28:57
|
mrr0088/Python_DataClassification
|
https://api.github.com/repos/mrr0088/Python_DataClassification
|
closed
|
Upload of multiple PDFs
|
code enhancement
|
Improve the script that allowed attaching a PDF by adding the ability to upload multiple PDFs and, in addition, inserting them into the database in a way similar to the news items from web pages
|
1.0
|
Upload of multiple PDFs - Improve the script that allowed attaching a PDF by adding the ability to upload multiple PDFs and, in addition, inserting them into the database in a way similar to the news items from web pages
|
non_process
|
subida de múltiples pdf mejorar el script que permitía adjuntar un pdf añadiéndole la posibilidad de subir múltiples pdf y además insertándolos en la base de datos de forma similar a las noticias de páginas web
| 0
|
20,957
| 27,817,217,222
|
IssuesEvent
|
2023-03-18 20:31:48
|
cse442-at-ub/project_s23-cinco
|
https://api.github.com/repos/cse442-at-ub/project_s23-cinco
|
closed
|
Implement php backend into dev branch
|
Processing Task Sprint 2
|
Test:
Locate the two php files Login.php and Register.php within the server folder in src.

Move those to your htdocs folder which can be found in your XAMPP directory.
Start up the Apache Webserver by opening up the XAMPP control panel or manager.osx, as it's called on Mac.

Cd into the root directory of the app and run "npm start".
A website should pop up like this one, or make your way to localhost:3000.

Go into Login and verify the login form works as intended by following issue #51
Go into Signup and verify the signup form works as intended by following issue #51
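As a sanity check on the steps above, one can probe that both servers answer before testing the forms. A sketch; the hosts and paths are the ones assumed in these steps:

```python
from urllib.request import urlopen
from urllib.error import URLError

def is_up(url, timeout=2.0):
    """True if the URL responds with a non-server-error status."""
    try:
        with urlopen(url, timeout=timeout) as resp:
            return resp.status < 500
    except (URLError, OSError):
        return False

# React dev server and one of the PHP endpoints from the steps above.
for url in ("http://localhost:3000", "http://localhost/Login.php"):
    print(url, "up" if is_up(url) else "down")
```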
|
1.0
|
Implement php backend into dev branch - Test:
Locate the two php files Login.php and Register.php within the server folder in src.

Move those to your htdocs folder which can be found in your XAMPP directory.
Start up the Apache Webserver by opening up the XAMPP control panel or manager.osx as its called on mac.

Cd into the root directory of the app and run "npm start".
A website should popup like this one or make your way to localhost:3000.

Go into Login and verify the login form works as intended by following issue #51
Go into Signup and verify the signup form works as intended by following issue #51
|
process
|
implement php backend into dev branch test locate the two php files login php and register php within the server folder in src move those to your htdocs folder which can be found in your xampp directory start up the apache webserver by opening up the xampp control panel or manager osx as its called on mac cd into the root directory of the app and run npm start a website should popup like this one or make your way to localhost go into login and verify the login form works as intended by following issue go into signup and verify the signup form works as intended by following issue
| 1
|
12,451
| 14,935,197,303
|
IssuesEvent
|
2021-01-25 11:35:44
|
Jeffail/benthos
|
https://api.github.com/repos/Jeffail/benthos
|
opened
|
Detect branch misalignment issues
|
annoying processors
|
Currently if the processors that you execute within a [`branch`](https://www.benthos.dev/docs/components/processors/branch) change the number of resulting messages then the result map fails. However, if the _ordering_ of messages changes (due to `group_by`, etc), then we can potentially also catch this issue by adding context to the messages similar to the switch processor: https://github.com/Jeffail/benthos/blob/master/lib/processor/switch.go
|
1.0
|
Detect branch misalignment issues - Currently if the processors that you execute within a [`branch`](https://www.benthos.dev/docs/components/processors/branch) change the number of resulting messages then the result map fails. However, if the _ordering_ of messages changes (due to `group_by`, etc), then we can potentially also catch this issue by adding context to the messages similar to the switch processor: https://github.com/Jeffail/benthos/blob/master/lib/processor/switch.go
|
process
|
detect branch misalignment issues currently if the processors that you execute within a change the number of resulting messages then the result map fails however if the ordering of messages changes due to group by etc then we can potentially also catch this issue by adding context to the messages similar to the switch processor
| 1
|
18,609
| 24,579,175,594
|
IssuesEvent
|
2022-10-13 14:27:13
|
GoogleCloudPlatform/fda-mystudies
|
https://api.github.com/repos/GoogleCloudPlatform/fda-mystudies
|
closed
|
[Consent API] [Mobile Apps] Unable to withdraw from the study in the mobile apps
|
Bug P0 iOS Android Process: Fixed Process: Tested dev
|
**Steps:**
1. Enroll into the study.
2. Submitting response or without submitting any response, Click on Leave study and Observe.
**Note:**
1. Unable to deactivate the app also
2. Can refer below study name to enroll in the mobile apps:
* demo_test_on_consent
* Copy of demo_test_on_consent
* CONSENT_1
**AR:** Getting error as attached in the below screenshots.
1.

2.

|
2.0
|
[Consent API] [Mobile Apps] Unable to withdraw from the study in the mobile apps - **Steps:**
1. Enroll into the study.
2. Submitting response or without submitting any response, Click on Leave study and Observe.
**Note:**
1. Unable to deactivate the app also
2. Can refer below study name to enroll in the mobile apps:
* demo_test_on_consent
* Copy of demo_test_on_consent
* CONSENT_1
**AR:** Getting error as attached in the below screenshots.
1.

2.

|
process
|
unable to withdraw from the study in the mobile apps steps enroll into the study submitting response or without submitting any response click on leave study and observe note unable to deactivate the app also can refer below study name to enroll in the mobile apps demo test on consent copy of demo test on consent consent ar getting error as attached in the below screenshots
| 1
|
105,422
| 13,184,387,279
|
IssuesEvent
|
2020-08-12 19:18:48
|
jupyterlab/jupyterlab-git
|
https://api.github.com/repos/jupyterlab/jupyterlab-git
|
closed
|
Git Init Menu
|
pkg:Frontend tag:Design and UX type:Enhancement
|
The menu has "Git Init" in it. I suggest two separate things:
1. Remove git init when inside a git repo already (its rare to have one git repo in another)
2. Allow users, via a config option, to change the default command from `git init` to something else.
The reason for 2, is I typically want `git init && setup_notebook_filters`.
What do people think?
|
1.0
|
Git Init Menu - The menu has "Git Init" in it. I suggest two separate things:
1. Remove git init when inside a git repo already (its rare to have one git repo in another)
2. Allow users, via a config option, to change the default command from `git init` to something else.
The reason for 2, is I typically want `git init && setup_notebook_filters`.
What do people think?
|
non_process
|
git init menu the menu has git init in it i suggest two separate things remove git init when inside a git repo already its rare to have one git repo in another allow users via a config option to change the default command from git init to something else the reason for is i typically want git init setup notebook filters what do people think
| 0
|
804,669
| 29,496,986,711
|
IssuesEvent
|
2023-06-02 17:52:50
|
tyler-technologies-oss/forge
|
https://api.github.com/repos/tyler-technologies-oss/forge
|
closed
|
[select] typing to filter with caps lock on is not working
|
bug priority: medium complexity: low
|
**Describe the bug:**
If a select dropdown is visible, typing the first letter of the list items does not work when CAPS lock (or shift) are on.
**To Reproduce:**
Steps to reproduce the behavior:
1. Enable caps lock on your keyboard (or hold shift while typing)
2. Focus the select field
3. Open the dropdown
4. Attempt to type the first character of the option to move focus to that option
5. Nothing happens
**Expected behavior:**
The focus activation on the list items should work with both lowercase and uppercase letters
**Please complete the following information:**
- Forge version: ^2.0.0
- I have searched existing issues before creating this report? Y
- Browser: All
- Platform: All
- OS: All
|
1.0
|
[select] typing to filter with caps lock on is not working - **Describe the bug:**
If a select dropdown is visible, typing the first letter of the list items does not work when CAPS lock (or shift) are on.
**To Reproduce:**
Steps to reproduce the behavior:
1. Enable caps lock on your keyboard (or hold shift while typing)
2. Focus the select field
3. Open the dropdown
4. Attempt to type the first character of the option to move focus to that option
5. Nothing happens
**Expected behavior:**
The focus activation on the list items should work with both lowercase and uppercase letters
**Please complete the following information:**
- Forge version: ^2.0.0
- I have searched existing issues before creating this report? Y
- Browser: All
- Platform: All
- OS: All
|
non_process
|
typing to filter with caps lock on is not working describe the bug if a select dropdown is visible typing the first letter of the list items does not work when caps lock or shift are on to reproduce steps to reproduce the behavior enable caps lock on your keyboard or hold shift while typing focus the select field open the dropdown attempt to type the first character of the option to move focus to that option nothing happens expected behavior the focus activation on the list items should work with both lowercase and uppercase letters please complete the following information forge version i have searched existing issues before creating this report y browser all platform all os all
| 0
|
9,755
| 6,981,379,286
|
IssuesEvent
|
2017-12-13 07:44:23
|
pingcap/tikv
|
https://api.github.com/repos/pingcap/tikv
|
opened
|
Reduce prometheus metrics on performance critical path
|
enhancement performance scheduler
|
Currently, the bottleneck of RawKV read is the scheduler thread, it runs 100% CPU while the raftstore runs 60%, the network, and disks are not full either.
Based on the flame graph[0] of the scheduler, we can easily find out that Prometheus consumes about 10% CPU time. 10% is not big but definitely not small, there is more room for the read performance!

[0] https://gist.github.com/overvenus/eb3033344df4fd4a2cf707708a2832b4
|
True
|
Reduce prometheus metrics on performance critical path - Currently, the bottleneck of RawKV read is the scheduler thread, it runs 100% CPU while the raftstore runs 60%, the network, and disks are not full either.
Based on the flame graph[0] of the scheduler, we can easily find out that Prometheus consumes about 10% CPU time. 10% is not big but definitely not small, there is more room for the read performance!

[0] https://gist.github.com/overvenus/eb3033344df4fd4a2cf707708a2832b4
|
non_process
|
reduce prometheus metrics on performance critical path currently the bottleneck of rawkv read is the scheduler thread it runs cpu while the raftstore runs the network and disks are not full either based on the flame graph of the scheduler we can easily find out that prometheus consumes about cpu time is not big but definitely not small there is more room for the read performance
| 0
|
17,174
| 22,747,051,194
|
IssuesEvent
|
2022-07-07 10:03:57
|
siisltd/Curiosity.Utils
|
https://api.github.com/repos/siisltd/Curiosity.Utils
|
closed
|
Make filtration options generic
|
enhancement RequestProcessing
|
1. Remove project od, client Id from options
2. Make options generic to allowed user make their own filtration rules
|
1.0
|
Make filtration options generic - 1. Remove project od, client Id from options
2. Make options generic to allowed user make their own filtration rules
|
process
|
make filtration options generic remove project od client id from options make options generic to allowed user make their own filtration rules
| 1
|
15,977
| 20,188,184,240
|
IssuesEvent
|
2022-02-11 01:16:07
|
savitamittalmsft/WAS-SEC-TEST
|
https://api.github.com/repos/savitamittalmsft/WAS-SEC-TEST
|
opened
|
Implement Conditional Access Policies
|
WARP-Import WAF FEB 2021 Security Performance and Scalability Capacity Management Processes Security & Compliance Authentication and authorization
|
<a href="https://docs.microsoft.com/azure/architecture/framework/security/design-identity-authentication#enable-conditional-access">Implement Conditional Access Policies</a>
<p><b>Why Consider This?</b></p>
Modern cloud-based applications are typically accessible over the internet, making network location-based access inflexible and single-factor passwords a liability. Conditional Access describes your authentication policy for an access decision. For example, if a user is connecting from an InTune managed corporate PC, they might not be challenged for MFA every time, but if the user suddenly connects from a different device in a different geography, MFA is required.
<p><b>Context</b></p>
<p><b>Suggested Actions</b></p>
<p><span>Implement Conditional Access Policies for this workload</span></p>
<p><b>Learn More</b></p>
<p><a href="https://docs.microsoft.com/en-us/azure/active-directory/conditional-access/" target="_blank"><span>https://docs.microsoft.com/en-us/azure/active-directory/conditional-access/</span></a><span /></p>
<p><b>Why Consider This?</b></p>
Authentication for all users should include measurement and enforcement of key security attributes to support a Zero Trust strategy. Although most organizations will apply these types of recommendations for privileged users first, the same recommendations apply to all user accounts. It can also reduce use of passwords by applications using Managed Identities to grant access to resources in Azure.
<p><b>Context</b></p>
<p><span>The modern security perimeter now extends beyond an organization's network to include user and device identity. Organizations can utilize these identity signals as part of their access control decisions.</span></p><p><span>Conditional Access is the tool used by Azure Active Directory to bring signals together, to make decisions, and enforce organizational policies. Conditional Access is at the heart of the new identity driven control plane.</span></p><p><span>Conditional Access policies at their simplest are if-then statements, if a user wants to access a resource, then they must complete an action. Example: A payroll manager wants to access the payroll application and is required to perform multi-factor authentication to access it.</span></p><p><span>Administrators are faced with two primary goals:</span></p><ul style="list-style-type:disc"><li value="1" style="text-indent: 0px;"><span>Empower users to be productive wherever and whenever</span></li><li value="2" style="margin-right: 0px;text-indent: 0px;"><span>Protect the organization's assets</span></li></ul><p><span>By using Conditional Access policies, you can apply the right access controls when needed to keep your organization secure and stay out of your user's way when not needed.</span></p>
<p><b>Suggested Actions</b></p>
<p><span>Identify all externally accessible applications and evaluate how a conditional access strategy can be instituted.</span></p>
<p><b>Learn More</b></p>
<p><a href="https://docs.microsoft.com/en-us/azure/active-directory/conditional-access/overview" target="_blank"><span>What is Conditional Access?</span></a><span /></p>
|
1.0
|
Implement Conditional Access Policies - <a href="https://docs.microsoft.com/azure/architecture/framework/security/design-identity-authentication#enable-conditional-access">Implement Conditional Access Policies</a>
<p><b>Why Consider This?</b></p>
Modern cloud-based applications are typically accessible over the internet, making network location-based access inflexible and single-factor passwords a liability. Conditional Access describes your authentication policy for an access decision. For example, if a user is connecting from an InTune managed corporate PC, they might not be challenged for MFA every time, but if the user suddenly connects from a different device in a different geography, MFA is required.
<p><b>Context</b></p>
<p><b>Suggested Actions</b></p>
<p><span>Implement Conditional Access Policies for this workload</span></p>
<p><b>Learn More</b></p>
<p><a href="https://docs.microsoft.com/en-us/azure/active-directory/conditional-access/" target="_blank"><span>https://docs.microsoft.com/en-us/azure/active-directory/conditional-access/</span></a><span /></p>
<p><b>Why Consider This?</b></p>
Authentication for all users should include measurement and enforcement of key security attributes to support a Zero Trust strategy. Although most organizations will apply these types of recommendations for privileged users first, the same recommendations apply to all user accounts. It can also reduce use of passwords by applications using Managed Identities to grant access to resources in Azure.
<p><b>Context</b></p>
<p><span>The modern security perimeter now extends beyond an organization's network to include user and device identity. Organizations can utilize these identity signals as part of their access control decisions.</span></p><p><span>Conditional Access is the tool used by Azure Active Directory to bring signals together, to make decisions, and enforce organizational policies. Conditional Access is at the heart of the new identity driven control plane.</span></p><p><span>Conditional Access policies at their simplest are if-then statements, if a user wants to access a resource, then they must complete an action. Example: A payroll manager wants to access the payroll application and is required to perform multi-factor authentication to access it.</span></p><p><span>Administrators are faced with two primary goals:</span></p><ul style="list-style-type:disc"><li value="1" style="text-indent: 0px;"><span>Empower users to be productive wherever and whenever</span></li><li value="2" style="margin-right: 0px;text-indent: 0px;"><span>Protect the organization's assets</span></li></ul><p><span>By using Conditional Access policies, you can apply the right access controls when needed to keep your organization secure and stay out of your user's way when not needed.</span></p>
<p><b>Suggested Actions</b></p>
<p><span>Identify all externally accessible applications and evaluate how a conditional access strategy can be instituted.</span></p>
<p><b>Learn More</b></p>
<p><a href="https://docs.microsoft.com/en-us/azure/active-directory/conditional-access/overview" target="_blank"><span>What is Conditional Access?</span></a><span /></p>
|
process
|
implement conditional access policies why consider this modern cloud based applications are typically accessible over the internet making network location based access inflexible and single factor passwords a liability conditional access describes your authentication policy for an access decision for example if a user is connecting from an intune managed corporate pc they might not be challenged for mfa every time but if the user suddenly connects from a different device in a different geography mfa is required context suggested actions implement conditional access policies for this workload learn more why consider this authentication for all users should include measurement and enforcement of key security attributes to support a zero trust strategy although most organizations will apply these types of recommendations for privileged users first the same recommendations apply to all user accounts it can also reduce use of passwords by applications using managed identities to grant access to resources in azure context the modern security perimeter now extends beyond an organization s network to include user and device identity organizations can utilize these identity signals as part of their access control decisions conditional access is the tool used by azure active directory to bring signals together to make decisions and enforce organizational policies conditional access is at the heart of the new identity driven control plane conditional access policies at their simplest are if then statements if a user wants to access a resource then they must complete an action example a payroll manager wants to access the payroll application and is required to perform multi factor authentication to access it administrators are faced with two primary goals empower users to be productive wherever and whenever protect the organization s assets by using conditional access policies you can apply the right access controls when needed to keep your organization secure and stay out of your user s way when not needed suggested actions identify all externally accessible applications and evaluate how a conditional access strategy can be instituted learn more what is conditional access
| 1
|
215,456
| 16,603,868,632
|
IssuesEvent
|
2021-06-01 23:56:28
|
Agoric/agoric-sdk
|
https://api.github.com/repos/Agoric/agoric-sdk
|
opened
|
document best practices for validators to manage database size
|
documentation enhancement
|
## What is the Problem Being Solved?
#3073 and #3235 are about giving validator operators tools to control how much disk space their nodes will use. One part of their maintenance checklist will be to monitor how much disk space is being used by the swingset database. If this begins to approach the max DB size limit, they must:
1: ensure additional disk space is available (moving the validator state directory to a larger disk if necessary)
2: change the config file to enable a larger max DB size
3: restart the node
We'll need to provide some guidance and notes in the Validators Guide, to make operators aware of the config file (to be implemented in #3235) and what directory to monitor. Once we get more experience with the production chain, we should publish some notes on current DB size and predicted growth rates, to help validators anticipate usage patterns and provision enough disk space to meet their needs.
We won't be able to fill in the details until #3235 is done (which will give a name to the config file where the max DB size can be set), and our ability to provide guidance on the actual numbers will be limited until we have more experience with the chain.
cc @tyg @michaelfig @FUDCo
|
1.0
|
document best practices for validators to manage database size - ## What is the Problem Being Solved?
#3073 and #3235 are about giving validator operators tools to control how much disk space their nodes will use. One part of their maintenance checklist will be to monitor how much disk space is being used by the swingset database. If this begins to approach the max DB size limit, they must:
1: ensure additional disk space is available (moving the validator state directory to a larger disk if necessary)
2: change the config file to enable a larger max DB size
3: restart the node
We'll need to provide some guidance and notes in the Validators Guide, to make operators aware of the config file (to be implemented in #3235) and what directory to monitor. Once we get more experience with the production chain, we should publish some notes on current DB size and predicted growth rates, to help validators anticipate usage patterns and provision enough disk space to meet their needs.
We won't be able to fill in the details until #3235 is done (which will give a name to the config file where the max DB size can be set), and our ability to provide guidance on the actual numbers will be limited until we have more experience with the chain.
cc @tyg @michaelfig @FUDCo
|
non_process
|
document best practices for validators to manage database size what is the problem being solved and are about giving validator operators tools to control how much disk space their nodes will use one part of their maintenance checklist will be to monitor how much disk space is being used by the swingset database if this begins to approach the max db size limit they must ensure additional disk space is available moving the validator state directory to a larger disk if necessary change the config file to enable a larger max db size restart the node we ll need to provide some guidance and notes in the validators guide to make operators aware of the config file to be implemented in and what directory to monitor once we get more experience with the production chain we should publish some notes on current db size and predicted growth rates to help validators anticipate usage patterns and provision enough disk space to meet their needs we won t be able to fill in the details until is done which will give a name to the config file where the max db size can be set and our ability to provide guidance on the actual numbers will be limited until we have more experience with the chain cc tyg michaelfig fudco
| 0
|
10,995
| 13,785,995,814
|
IssuesEvent
|
2020-10-09 00:27:29
|
cbrennanpoole/Qualitative-Self
|
https://api.github.com/repos/cbrennanpoole/Qualitative-Self
|
closed
|
Why Design Thinking Works
|
Creative Strategy process implementation
|

## Harvard
What was America's oldest college original slogan?<br>
### Business Review
Why has the entire continent taken to secular?<br>
#### with Wind
Comparative analysis of *human - centered* **design - thinking** and its inevitable rise to *innovation supremacy * with **total - quality management** and the similar success of the 80s starting in the **manufacturing industry**.<br>
[Original Link | story by Jeanne Liedtka | UVA BBA Professor](https://hbr.org/2018/09/why-design-thinking-works)<br>
[curation with Wind](https://linkedin.com/company/the-wind)
---
**Source URL**:
[https://hbr.org/2018/09/why-design-thinking-works](https://hbr.org/2018/09/why-design-thinking-works)
<table><tr><td><strong>Browser</strong></td><td>Chrome 84.0.4147.68</td></tr><tr><td><strong>OS</strong></td><td>Windows 10 64-bit</td></tr><tr><td><strong>Screen Size</strong></td><td>2560x1080</td></tr><tr><td><strong>Viewport Size</strong></td><td>2560x937</td></tr><tr><td><strong>Pixel Ratio</strong></td><td>@1x</td></tr><tr><td><strong>Zoom Level</strong></td><td>100%</td></tr></table>
|
1.0
|
Why Design Thinking Works - 
## Harvard
What was America's oldest college original slogan?<br>
### Business Review
Why has the entire continent taken to secular?<br>
#### with Wind
Comparative analysis of *human - centered* **design - thinking** and its inevitable rise to *innovation supremacy * with **total - quality management** and the similar success of the 80s starting in the **manufacturing industry**.<br>
[Original Link | story by Jeanne Liedtka | UVA BBA Professor](https://hbr.org/2018/09/why-design-thinking-works)<br>
[curation with Wind](https://linkedin.com/company/the-wind)
---
**Source URL**:
[https://hbr.org/2018/09/why-design-thinking-works](https://hbr.org/2018/09/why-design-thinking-works)
<table><tr><td><strong>Browser</strong></td><td>Chrome 84.0.4147.68</td></tr><tr><td><strong>OS</strong></td><td>Windows 10 64-bit</td></tr><tr><td><strong>Screen Size</strong></td><td>2560x1080</td></tr><tr><td><strong>Viewport Size</strong></td><td>2560x937</td></tr><tr><td><strong>Pixel Ratio</strong></td><td>@1x</td></tr><tr><td><strong>Zoom Level</strong></td><td>100%</td></tr></table>
|
process
|
why design thinking works harvard what was america s oldest college original slogan business review why has the entire continent taken to secular with wind comparative analysis of human centered design thinking and its inevitable rise to innovation supremacy with total quality management and the similar success of the starting in the manufacturing industry source url browser chrome os windows bit screen size viewport size pixel ratio zoom level
| 1
|
221,232
| 24,601,066,829
|
IssuesEvent
|
2022-10-14 12:33:30
|
samq-wsdemo/AdvanceAutoParts-datafastlane
|
https://api.github.com/repos/samq-wsdemo/AdvanceAutoParts-datafastlane
|
opened
|
CVE-2022-3171 (High) detected in protobuf-java-2.5.0.jar
|
security vulnerability
|
## CVE-2022-3171 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>protobuf-java-2.5.0.jar</b></p></summary>
<p>Protocol Buffers are a way of encoding structured data in an efficient yet
extensible format.</p>
<p>Library home page: <a href="http://code.google.com/p/protobuf">http://code.google.com/p/protobuf</a></p>
<p>Path to dependency file: /pom.xml</p>
<p>Path to vulnerable library: /home/wss-scanner/.m2/repository/com/google/protobuf/protobuf-java/2.5.0/protobuf-java-2.5.0.jar</p>
<p>
Dependency Hierarchy:
- spark-sql_2.12-3.0.1.jar (Root Library)
- orc-core-1.5.10.jar
- :x: **protobuf-java-2.5.0.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/samq-wsdemo/AdvanceAutoParts-datafastlane/commit/64d6742a4f73310d068953856a8d4defc8feefdb">64d6742a4f73310d068953856a8d4defc8feefdb</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
A parsing issue with binary data in protobuf-java core and lite versions prior to 3.21.7, 3.20.3, 3.19.6 and 3.16.3 can lead to a denial of service attack. Inputs containing multiple instances of non-repeated embedded messages with repeated or unknown fields causes objects to be converted back-n-forth between mutable and immutable forms, resulting in potentially long garbage collection pauses. We recommend updating to the versions mentioned above.
<p>Publish Date: 2022-10-12
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2022-3171>CVE-2022-3171</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/advisories/GHSA-h4h5-3hr4-j3g2">https://github.com/advisories/GHSA-h4h5-3hr4-j3g2</a></p>
<p>Release Date: 2022-10-12</p>
<p>Fix Resolution (com.google.protobuf:protobuf-java): 4.0.0-rc-1</p>
<p>Direct dependency fix Resolution (org.apache.spark:spark-sql_2.12): 3.1.2</p>
</p>
</details>
<p></p>
***
:rescue_worker_helmet: Automatic Remediation is available for this issue
|
True
|
CVE-2022-3171 (High) detected in protobuf-java-2.5.0.jar - ## CVE-2022-3171 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>protobuf-java-2.5.0.jar</b></p></summary>
<p>Protocol Buffers are a way of encoding structured data in an efficient yet
extensible format.</p>
<p>Library home page: <a href="http://code.google.com/p/protobuf">http://code.google.com/p/protobuf</a></p>
<p>Path to dependency file: /pom.xml</p>
<p>Path to vulnerable library: /home/wss-scanner/.m2/repository/com/google/protobuf/protobuf-java/2.5.0/protobuf-java-2.5.0.jar</p>
<p>
Dependency Hierarchy:
- spark-sql_2.12-3.0.1.jar (Root Library)
- orc-core-1.5.10.jar
- :x: **protobuf-java-2.5.0.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/samq-wsdemo/AdvanceAutoParts-datafastlane/commit/64d6742a4f73310d068953856a8d4defc8feefdb">64d6742a4f73310d068953856a8d4defc8feefdb</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
A parsing issue with binary data in protobuf-java core and lite versions prior to 3.21.7, 3.20.3, 3.19.6 and 3.16.3 can lead to a denial of service attack. Inputs containing multiple instances of non-repeated embedded messages with repeated or unknown fields causes objects to be converted back-n-forth between mutable and immutable forms, resulting in potentially long garbage collection pauses. We recommend updating to the versions mentioned above.
<p>Publish Date: 2022-10-12
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2022-3171>CVE-2022-3171</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/advisories/GHSA-h4h5-3hr4-j3g2">https://github.com/advisories/GHSA-h4h5-3hr4-j3g2</a></p>
<p>Release Date: 2022-10-12</p>
<p>Fix Resolution (com.google.protobuf:protobuf-java): 4.0.0-rc-1</p>
<p>Direct dependency fix Resolution (org.apache.spark:spark-sql_2.12): 3.1.2</p>
</p>
</details>
<p></p>
***
:rescue_worker_helmet: Automatic Remediation is available for this issue
|
non_process
|
cve high detected in protobuf java jar cve high severity vulnerability vulnerable library protobuf java jar protocol buffers are a way of encoding structured data in an efficient yet extensible format library home page a href path to dependency file pom xml path to vulnerable library home wss scanner repository com google protobuf protobuf java protobuf java jar dependency hierarchy spark sql jar root library orc core jar x protobuf java jar vulnerable library found in head commit a href found in base branch master vulnerability details a parsing issue with binary data in protobuf java core and lite versions prior to and can lead to a denial of service attack inputs containing multiple instances of non repeated embedded messages with repeated or unknown fields causes objects to be converted back n forth between mutable and immutable forms resulting in potentially long garbage collection pauses we recommend updating to the versions mentioned above publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution com google protobuf protobuf java rc direct dependency fix resolution org apache spark spark sql rescue worker helmet automatic remediation is available for this issue
| 0
|
161,621
| 20,154,168,110
|
IssuesEvent
|
2022-02-09 15:04:15
|
kapseliboi/mimic
|
https://api.github.com/repos/kapseliboi/mimic
|
opened
|
WS-2018-0650 (High) detected in useragent-2.1.13.tgz
|
security vulnerability
|
## WS-2018-0650 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>useragent-2.1.13.tgz</b></p></summary>
<p>Fastest, most accurate & effecient user agent string parser, uses Browserscope's research for parsing</p>
<p>Library home page: <a href="https://registry.npmjs.org/useragent/-/useragent-2.1.13.tgz">https://registry.npmjs.org/useragent/-/useragent-2.1.13.tgz</a></p>
<p>Path to dependency file: /package.json</p>
<p>Path to vulnerable library: /node_modules/useragent/package.json</p>
<p>
Dependency Hierarchy:
- karma-1.7.0.tgz (Root Library)
- :x: **useragent-2.1.13.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/kapseliboi/mimic/commit/6d4fe404335bf56c57080e4ab1425b65bbe3ac2f">6d4fe404335bf56c57080e4ab1425b65bbe3ac2f</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Regular Expression Denial of Service (ReDoS) vulnerability was found in useragent through 2.3.0.
<p>Publish Date: 2018-02-27
<p>URL: <a href=https://hackerone.com/reports/320159>WS-2018-0650</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://nvd.nist.gov/vuln/detail/WS-2018-0650">https://nvd.nist.gov/vuln/detail/WS-2018-0650</a></p>
<p>Release Date: 2018-02-27</p>
<p>Fix Resolution: NorDroN.AngularTemplate - 0.1.6;dotnetng.template - 1.0.0.4;JetBrains.Rider.Frontend5 - 213.0.20211008.154703-eap03;MIDIator.WebClient - 1.0.105</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
True
|
WS-2018-0650 (High) detected in useragent-2.1.13.tgz - ## WS-2018-0650 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>useragent-2.1.13.tgz</b></p></summary>
<p>Fastest, most accurate & effecient user agent string parser, uses Browserscope's research for parsing</p>
<p>Library home page: <a href="https://registry.npmjs.org/useragent/-/useragent-2.1.13.tgz">https://registry.npmjs.org/useragent/-/useragent-2.1.13.tgz</a></p>
<p>Path to dependency file: /package.json</p>
<p>Path to vulnerable library: /node_modules/useragent/package.json</p>
<p>
Dependency Hierarchy:
- karma-1.7.0.tgz (Root Library)
- :x: **useragent-2.1.13.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/kapseliboi/mimic/commit/6d4fe404335bf56c57080e4ab1425b65bbe3ac2f">6d4fe404335bf56c57080e4ab1425b65bbe3ac2f</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Regular Expression Denial of Service (ReDoS) vulnerability was found in useragent through 2.3.0.
<p>Publish Date: 2018-02-27
<p>URL: <a href=https://hackerone.com/reports/320159>WS-2018-0650</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://nvd.nist.gov/vuln/detail/WS-2018-0650">https://nvd.nist.gov/vuln/detail/WS-2018-0650</a></p>
<p>Release Date: 2018-02-27</p>
<p>Fix Resolution: NorDroN.AngularTemplate - 0.1.6;dotnetng.template - 1.0.0.4;JetBrains.Rider.Frontend5 - 213.0.20211008.154703-eap03;MIDIator.WebClient - 1.0.105</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
non_process
|
ws high detected in useragent tgz ws high severity vulnerability vulnerable library useragent tgz fastest most accurate effecient user agent string parser uses browserscope s research for parsing library home page a href path to dependency file package json path to vulnerable library node modules useragent package json dependency hierarchy karma tgz root library x useragent tgz vulnerable library found in head commit a href found in base branch master vulnerability details regular expression denial of service redos vulnerability was found in useragent through publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution nordron angulartemplate dotnetng template jetbrains rider midiator webclient step up your open source security game with whitesource
| 0
|
130,433
| 18,074,218,136
|
IssuesEvent
|
2021-09-21 08:02:54
|
owncloud/core
|
https://api.github.com/repos/owncloud/core
|
closed
|
Workspaces for groups
|
enhancement design status/STALE
|
It would be great if you could dedicate a shared folder to a group and assign a data quota to it.
Until now you can do that by creating a folder in the owncloud data root and mounting it for a group, but without a quota, it is not configurable from inside owncloud, there is no visible creator of files, and the folder is not marked differently so the user can see that it is for a whole group. Creating/managing such 'workspaces' would be a great admin function.
|
1.0
|
Workspaces for groups - It would be great if you could dedicate a shared folder to a group and assign a data quota to it.
Until now you can do that by creating a folder in the owncloud data root and mounting it for a group, but without a quota, it is not configurable from inside owncloud, there is no visible creator of files, and the folder is not marked differently so the user can see that it is for a whole group. Creating/managing such 'workspaces' would be a great admin function.
|
non_process
|
workspaces for groups it would be great if you could dedicate a shared folder to a group and assign a data quota to it until now you can do that by creating a folder in the owncloud data root and mount it for a group but without quota not configurable from inside owncloud no visible creator of files also the folder is not marked different so the user can see that it is for a whole group creating managing such workspaces would be a great admin function
| 0
|
406,492
| 27,567,541,228
|
IssuesEvent
|
2023-03-08 05:58:55
|
supabase/supabase
|
https://api.github.com/repos/supabase/supabase
|
opened
|
Update operation does not have code documentation
|
documentation
|
# Improve documentation
Update documentation is `undefined`
## Link
Add a link to the page which needs improvement (if relevant)
https://supabase.com/docs/reference/dart/v0/insert
## Describe the problem
The documentation code example is missing
## Describe the improvement
Nothing much to say. Just put the example there and that is it.
## Additional context
No context
|
1.0
|
Update operation does not have code documentation - # Improve documentation
Update documentation is `undefined`
## Link
Add a link to the page which needs improvement (if relevant)
https://supabase.com/docs/reference/dart/v0/insert
## Describe the problem
The documentation code example is missing
## Describe the improvement
Nothing much to say. Just put the example there and that is it.
## Additional context
No context
|
non_process
|
update operation does not have code documentation improve documentation update documentation is undefined link add a link to the page which needs improvement if relevant describe the problem the documentation code example is missing describe the improvement nothing much to say just put the example there and that is it additional context no context
| 0
|
10,705
| 13,501,853,645
|
IssuesEvent
|
2020-09-13 04:59:44
|
amor71/LiuAlgoTrader
|
https://api.github.com/repos/amor71/LiuAlgoTrader
|
closed
|
select data stream events in configuration file
|
in-process
|
configure in the tradeplan TOML file the kind of messages the producer would listen to.
|
1.0
|
select data stream events in configuration file - configure in the tradeplan TOML file the kind of messages the producer would listen to.
|
process
|
select data stream events in configuration file configure in the tradeplan toml file the kind of messages the producer would listen to
| 1
|
218,583
| 7,331,814,937
|
IssuesEvent
|
2018-03-05 14:40:23
|
NCEAS/metacat
|
https://api.github.com/repos/NCEAS/metacat
|
closed
|
Fix URL in "revision notification" email sent to ESA moderators
|
Category: registry Component: Bugzilla-Id Priority: Normal Status: Resolved Tracker: Bug
|
---
Author Name: **Jim Regetz** (Jim Regetz)
Original Redmine Issue: 4839, https://projects.ecoinformatics.org/ecoinfo/issues/4839
Original Date: 2010-02-24
Original Assignee: Michael Daigle
---
The "revise document" notification that gets sent to ESA moderators after requesting a revision contains a URL like this:
http://esa-dev.nceas.ucsb.edu/esa/cgi-bin/register-dataset.cgi?stage=modify&cfg=esa&docid=esa.65
It probably shouldn't specify stage=modify, because the moderator is unlikely to want to modify the document at this point. A simple view action for the document would be more appropriate.
Alternatively, it would probably be sufficient to omit the URL altogether, and instead just indicate the docid and include a link to the ESA registry home page as a convenience.
|
1.0
|
Fix URL in "revision notification" email sent to ESA moderators - ---
Author Name: **Jim Regetz** (Jim Regetz)
Original Redmine Issue: 4839, https://projects.ecoinformatics.org/ecoinfo/issues/4839
Original Date: 2010-02-24
Original Assignee: Michael Daigle
---
The "revise document" notification that gets sent to ESA moderators after requesting a revision contains a URL like this:
http://esa-dev.nceas.ucsb.edu/esa/cgi-bin/register-dataset.cgi?stage=modify&cfg=esa&docid=esa.65
It probably shouldn't specify stage=modify, because the moderator is unlikely to want to modify the document at this point. A simple view action for the document would be more appropriate.
Alternatively, it would probably be sufficient to omit the URL altogether, and instead just indicate the docid and include a link to the ESA registry home page as a convenience.
|
non_process
|
fix url in revision notification email sent to esa moderators author name jim regetz jim regetz original redmine issue original date original assignee michael daigle the revise document notification that gets sent to esa moderators after requesting a revision contains a url like this it probably shouldn t specify stage modify because the moderator is unlikely to want to modify the document at this point a simple view action for the document would be more appropriate alternatively it would probably be sufficient to omit the url altogether and instead just indicate the docid and include a link to the esa registry home page as a convenience
| 0
|
8,433
| 11,596,721,784
|
IssuesEvent
|
2020-02-24 19:30:06
|
mauricioaniche/predicting-refactoring-ml
|
https://api.github.com/repos/mauricioaniche/predicting-refactoring-ml
|
closed
|
Process metrics: Track renames
|
data-collection enhancement process-metrics
|
For now, we are not properly tracking renames. So, when a file is renamed, the new file has all its process metrics starting from zero again.
Keep a track of renames, so that we can "transfer" the number from the old files to the new files.
|
1.0
|
Process metrics: Track renames - For now, we are not properly tracking renames. So, when a file is renamed, the new file has all its process metrics starting from zero again.
Keep a track of renames, so that we can "transfer" the number from the old files to the new files.
|
process
|
process metrics track renames for now we are not properly tracking renames so when a file is renamed the new file has all its process metrics starting from zero again keep a track of renames so that we can transfer the number from the old files to the new files
| 1
|
826,285
| 31,563,717,134
|
IssuesEvent
|
2023-09-03 15:03:23
|
WonderfulToolchain/wf-issues
|
https://api.github.com/repos/WonderfulToolchain/wf-issues
|
opened
|
Simplify installation process
|
enhancement help wanted low priority
|
- [ ] The Windows target could have an installer tool or script written - it is much more consistent than Linux installations.
- [ ] Maybe the Linux target could have a partial script too.
|
1.0
|
Simplify installation process - - [ ] The Windows target could have an installer tool or script written - it is much more consistent than Linux installations.
- [ ] Maybe the Linux target could have a partial script too.
|
non_process
|
simplify installation process the windows target could have an installer tool or script written it is much more consistent than linux installations maybe the linux target could have a partial script too
| 0
|
6,476
| 9,551,624,508
|
IssuesEvent
|
2019-05-02 14:50:50
|
jakobib/hypertext2019
|
https://api.github.com/repos/jakobib/hypertext2019
|
closed
|
Add missing URLs in references
|
bug document processing
|
For instances https://www.wikidata.org/wiki/Q62087795 uses P856 (official website) to specify the URL, so the URL is not included in CSL JSON. This needs to be fixed in citation-js, see https://github.com/citation-js/citation-js/pull/34.
|
1.0
|
Add missing URLs in references - For instances https://www.wikidata.org/wiki/Q62087795 uses P856 (official website) to specify the URL, so the URL is not included in CSL JSON. This needs to be fixed in citation-js, see https://github.com/citation-js/citation-js/pull/34.
|
process
|
add missing urls in references for instances uses official website to specify the url so the url is not included in csl json this needs to be fixed in citation js see
| 1
|
5,690
| 8,560,128,222
|
IssuesEvent
|
2018-11-08 23:44:45
|
knative/serving
|
https://api.github.com/repos/knative/serving
|
closed
|
Downgrade testing
|
area/API area/test-and-release kind/feature kind/process
|
<!--
/area API
/area test-and-release
/kind feature
/kind process
/assign @jonjohnsonjr
-->
We should have testing that verifies that downgrading from HEAD to our last release works.
@jonjohnsonjr to fill in caveats
|
1.0
|
Downgrade testing - <!--
/area API
/area test-and-release
/kind feature
/kind process
/assign @jonjohnsonjr
-->
We should have testing that verifies that downgrading from HEAD to our last release works.
@jonjohnsonjr to fill in caveats
|
process
|
downgrade testing area api area test and release kind feature kind process assign jonjohnsonjr we should have testing that verifies that downgrading from head to our last release works jonjohnsonjr to fill in caveats
| 1
|
9,232
| 12,261,013,185
|
IssuesEvent
|
2020-05-06 19:18:40
|
cranec-project/Covid-19
|
https://api.github.com/repos/cranec-project/Covid-19
|
opened
|
connectors for prone patients
|
At overwhelm stage Critical ICU process Need Priority:High Specific need Tech:Mechanics Tech:Plastics Ventilation
|
Patients with ARDS often benefit from being in a prone position for extended periods of time. The problem is that most ICU connectors are not designed for a prone position:
1. not getting disconnected when moving patient
2. monitorable
3. avoid pressure wounds to patient
|
1.0
|
connectors for prone patients - Patients with ARDS often benefit from being in a prone position for extended periods of time. The problem is that most ICU connectors are not designed for a prone position:
1. not getting disconnected when moving patient
2. monitorable
3. avoid pressure wounds to patient
|
process
|
connectors for prone patients patients with ards often benefit from being in a prone position for extended periods of time the problem is that most icu connectors are not designed for a prone position not getting disconnected when moving patient monitorable avoid pressure wounds to patient
| 1
|
179,211
| 6,622,533,488
|
IssuesEvent
|
2017-09-22 00:30:43
|
leo-project/leofs
|
https://api.github.com/repos/leo-project/leofs
|
closed
|
[leo_object_storage] Take much time to open with lots of AVS files
|
Improve Priority-MIDDLE v1.4
|
https://github.com/leo-project/leo_object_storage/blob/1.3.17/src/leo_object_storage_haystack.erl#L430 causes opening an AVS to take at least 100ms, so it could add non-negligible time with lots of AVSs. Instead, timer:sleep should be invoked only in abnormal cases.
|
1.0
|
[leo_object_storage] Take much time to open with lots of AVS files - https://github.com/leo-project/leo_object_storage/blob/1.3.17/src/leo_object_storage_haystack.erl#L430 causes opening an AVS to take at least 100ms, so it could add non-negligible time with lots of AVSs. Instead, timer:sleep should be invoked only in abnormal cases.
|
non_process
|
take much time to open with lots of avs files cause opening an avs to take at least so it could be non negligible time with lots of avss instead timer sleep should be invoked only in abnormal cases
| 0
|
40,372
| 8,781,644,216
|
IssuesEvent
|
2018-12-19 21:10:14
|
phetsims/wave-interference
|
https://api.github.com/repos/phetsims/wave-interference
|
opened
|
SlitsControlPanel is too broad
|
dev:code-review
|
Related to code review #259:
> - [ ] Is there any unnecessary coupling? (e.g., by passing large objects to constructors, or exposing unnecessary properties/functions)
SlitsControlPanel does not need the entire model. It needs scenePropery, waterScene, soundScene, and lightScene.
|
1.0
|
SlitsControlPanel is too broad - Related to code review #259:
> - [ ] Is there any unnecessary coupling? (e.g., by passing large objects to constructors, or exposing unnecessary properties/functions)
SlitsControlPanel does not need the entire model. It needs scenePropery, waterScene, soundScene, and lightScene.
|
non_process
|
slitscontrolpanel is too broad related to code review is there any unnecessary coupling e g by passing large objects to constructors or exposing unnecessary properties functions slitscontrolpanel does not need the entire model it needs scenepropery waterscene soundscene and lightscene
| 0
|
624,440
| 19,697,679,099
|
IssuesEvent
|
2022-01-12 13:49:35
|
weaveio/woll-forum
|
https://api.github.com/repos/weaveio/woll-forum
|
closed
|
Setting same "value" forcefully select all the applicable "displays "
|
Fixed New-feature Priority1
|
How can we set them to have the same values without being selected together for display purposes?
These 3 should have the same value, 3, in order to be used later for computing a score.
<img width="1183" alt="スクリーンショット 2021-10-12 16 00 45" src="https://user-images.githubusercontent.com/61481039/136907785-30c09e98-ae8b-48fc-a035-d6ab9b31de1d.png">
<br>
<br>
But setting them so forces users to choose all the applicable choices if they choose any of 3
<img width="928" alt="スクリーンショット 2021-10-12 16 01 22" src="https://user-images.githubusercontent.com/61481039/136907920-2a82736c-78ac-4bc5-a998-7aa453dcd5ff.png">
|
1.0
|
Setting same "value" forcefully select all the applicable "displays " - How can we set them to have same values but not being selected together for display purpose?
These 3 should have same value, 3 in order to be used for computing later to create score.
<img width="1183" alt="スクリーンショット 2021-10-12 16 00 45" src="https://user-images.githubusercontent.com/61481039/136907785-30c09e98-ae8b-48fc-a035-d6ab9b31de1d.png">
<br>
<br>
But setting them so forces users to choose all the applicable choices if they choose any of 3
<img width="928" alt="スクリーンショット 2021-10-12 16 01 22" src="https://user-images.githubusercontent.com/61481039/136907920-2a82736c-78ac-4bc5-a998-7aa453dcd5ff.png">
|
non_process
|
setting same value forcefully select all the applicable displays how can we set them to have same values but not being selected together for display purpose these should have same value in order to be used for computing later to create score img width alt スクリーンショット src but setting them so forces users to choose all the applicable choices if they choose any of img width alt スクリーンショット src
| 0
|
33,285
| 7,695,107,312
|
IssuesEvent
|
2018-05-18 11:02:36
|
joomla/joomla-cms
|
https://api.github.com/repos/joomla/joomla-cms
|
closed
|
[4.0] Investigate Dropping Caption.js in favour of native HTML5 figure tag
|
J4 Issue No Code Attached Yet
|
Follow up out of https://github.com/joomla/joomla-cms/pull/15956
This is more complicated than it seems because we probably need to offer a tool to our users to upgrade existing articles (we shouldn't try and modify articles ourselves on upgrade as it's a recipe for a disaster)
|
1.0
|
[4.0] Investigate Dropping Caption.js in favour of native HTML5 figure tag - Follow up out of https://github.com/joomla/joomla-cms/pull/15956
This is more complicated than it seems because we probably need to offer a tool to our users to upgrade existing articles (we shouldn't try and modify articles ourselves on upgrade as it's a recipe for a disaster)
|
non_process
|
investigate dropping caption js in favour of native figure tag follow up out of this is more complicated than it seems because we probably need to offer a tool to our users to upgrade existing articles we shouldn t try and modify articles ourselves on upgrade as it s a recipe for a disaster
| 0
|
8,755
| 11,874,057,428
|
IssuesEvent
|
2020-03-26 18:19:56
|
jyn514/rcc
|
https://api.github.com/repos/jyn514/rcc
|
closed
|
[ICE] More cycle detection gone wrong
|
ICE bug preprocessor
|
**Expected behavior**
RCC should detect the cycle and replace `f` only one time.
**Actual Behavior**
RCC recurses forever and segfaults.
**Code**
<!-- The code that caused the panic goes here.
This should also include the error message you got. -->
```c
#define f(a) f(1 + 2)
f(1)
thread 'main' has overflowed its stack
fatal runtime error: stack overflow
Aborted (core dumped)
```
<details><summary>Backtrace</summary>
<!-- The output of `RUST_BACKTRACE=1 cargo run` goes here. -->
```
(gdb) where
#0 0x00005555557f2e1a in rcc::lex::cpp::PreProcessor::replace_function (
self=<error reading variable: Cannot access memory at address 0x7fffff7feb30>,
name=<error reading variable: Cannot access memory at address 0x7fffff7feb38>,
start=<error reading variable: Cannot access memory at address 0x7fffff7feb3c>)
at src/lex/cpp.rs:595
#1 0x00005555557f2c13 in rcc::lex::cpp::PreProcessor::replace_id (
self=0x7fffffffa1f8, name=..., location=...) at src/lex/cpp.rs:593
#2 0x00005555557ef4cd in rcc::lex::cpp::PreProcessor::handle_token (
self=0x7fffffffa1f8, token=..., location=...) at src/lex/cpp.rs:243
#3 0x00005555557edac3 in <rcc::lex::cpp::PreProcessor as core::iter::traits::iterator::Iterator>::next (self=0x7fffffffa1f8) at src/lex/cpp.rs:141
#4 0x00005555557f38f7 in rcc::lex::cpp::PreProcessor::replace_function (
self=0x7fffffffa1f8, name=..., start=26) at src/lex/cpp.rs:657
#5 0x00005555557f2c13 in rcc::lex::cpp::PreProcessor::replace_id (
self=0x7fffffffa1f8, name=..., location=...) at src/lex/cpp.rs:593
#6 0x00005555557ef4cd in rcc::lex::cpp::PreProcessor::handle_token (
self=0x7fffffffa1f8, token=..., location=...) at src/lex/cpp.rs:243
...
#3946 0x00005555557ef4cd in rcc::lex::cpp::PreProcessor::handle_token (
self=0x7fffffffa1f8, token=..., location=...) at src/lex/cpp.rs:243
#3947 0x00005555557ee074 in <rcc::lex::cpp::PreProcessor as core::iter::traits::iterator::Iterator>::next (self=0x7fffffffa1f8) at src/lex/cpp.rs:154
#3948 0x00005555557a2402 in rcc::compile (buf=..., opt=0x7fffffffd538, file=...,
files=0x7fffffffd920) at src/lib.rs:161
#3949 0x00005555555faa88 in rcc::real_main (buf=..., file_db=0x7fffffffd920,
file_id=..., opt=0x7fffffffd538, output=0x55555612cdb0) at src/main.rs:110
#3950 0x00005555555fc312 in rcc::main () at src/main.rs:176
```
</details>
|
1.0
|
[ICE] More cycle detection gone wrong - **Expected behavior**
RCC should detect the cycle and replace `f` only one time.
**Actual Behavior**
RCC recurses forever and segfaults.
**Code**
<!-- The code that caused the panic goes here.
This should also include the error message you got. -->
```c
#define f(a) f(1 + 2)
f(1)
thread 'main' has overflowed its stack
fatal runtime error: stack overflow
Aborted (core dumped)
```
<details><summary>Backtrace</summary>
<!-- The output of `RUST_BACKTRACE=1 cargo run` goes here. -->
```
(gdb) where
#0 0x00005555557f2e1a in rcc::lex::cpp::PreProcessor::replace_function (
self=<error reading variable: Cannot access memory at address 0x7fffff7feb30>,
name=<error reading variable: Cannot access memory at address 0x7fffff7feb38>,
start=<error reading variable: Cannot access memory at address 0x7fffff7feb3c>)
at src/lex/cpp.rs:595
#1 0x00005555557f2c13 in rcc::lex::cpp::PreProcessor::replace_id (
self=0x7fffffffa1f8, name=..., location=...) at src/lex/cpp.rs:593
#2 0x00005555557ef4cd in rcc::lex::cpp::PreProcessor::handle_token (
self=0x7fffffffa1f8, token=..., location=...) at src/lex/cpp.rs:243
#3 0x00005555557edac3 in <rcc::lex::cpp::PreProcessor as core::iter::traits::iterator::Iterator>::next (self=0x7fffffffa1f8) at src/lex/cpp.rs:141
#4 0x00005555557f38f7 in rcc::lex::cpp::PreProcessor::replace_function (
self=0x7fffffffa1f8, name=..., start=26) at src/lex/cpp.rs:657
#5 0x00005555557f2c13 in rcc::lex::cpp::PreProcessor::replace_id (
self=0x7fffffffa1f8, name=..., location=...) at src/lex/cpp.rs:593
#6 0x00005555557ef4cd in rcc::lex::cpp::PreProcessor::handle_token (
self=0x7fffffffa1f8, token=..., location=...) at src/lex/cpp.rs:243
...
#3946 0x00005555557ef4cd in rcc::lex::cpp::PreProcessor::handle_token (
self=0x7fffffffa1f8, token=..., location=...) at src/lex/cpp.rs:243
#3947 0x00005555557ee074 in <rcc::lex::cpp::PreProcessor as core::iter::traits::iterator::Iterator>::next (self=0x7fffffffa1f8) at src/lex/cpp.rs:154
#3948 0x00005555557a2402 in rcc::compile (buf=..., opt=0x7fffffffd538, file=...,
files=0x7fffffffd920) at src/lib.rs:161
#3949 0x00005555555faa88 in rcc::real_main (buf=..., file_db=0x7fffffffd920,
file_id=..., opt=0x7fffffffd538, output=0x55555612cdb0) at src/main.rs:110
#3950 0x00005555555fc312 in rcc::main () at src/main.rs:176
```
</details>
|
process
|
more cycle detection gone wrong expected behavior rcc should detect the cycle and replace f only one time actual behavior rcc recurses forever and segfaults code the code that caused the panic goes here this should also include the error message you got c define f a f f thread main has overflowed its stack fatal runtime error stack overflow aborted core dumped backtrace gdb where in rcc lex cpp preprocessor replace function self name start at src lex cpp rs in rcc lex cpp preprocessor replace id self name location at src lex cpp rs in rcc lex cpp preprocessor handle token self token location at src lex cpp rs in next self at src lex cpp rs in rcc lex cpp preprocessor replace function self name start at src lex cpp rs in rcc lex cpp preprocessor replace id self name location at src lex cpp rs in rcc lex cpp preprocessor handle token self token location at src lex cpp rs in rcc lex cpp preprocessor handle token self token location at src lex cpp rs in next self at src lex cpp rs in rcc compile buf opt file files at src lib rs in rcc real main buf file db file id opt output at src main rs in rcc main at src main rs
| 1
|
73,218
| 15,252,868,785
|
IssuesEvent
|
2021-02-20 05:01:05
|
gate5/struts-2.3.20
|
https://api.github.com/repos/gate5/struts-2.3.20
|
closed
|
CVE-2013-1624 Medium Severity Vulnerability detected by WhiteSource - autoclosed
|
security vulnerability
|
## CVE-2013-1624 - Medium Severity Vulnerability
<details><summary><img src='https://www.whitesourcesoftware.com/wp-content/uploads/2018/10/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>bcprov-jdk14-136.jar</b></p></summary>
<p>The Bouncy Castle Crypto package is a Java implementation of cryptographic algorithms. The package is organised so that it contains a light-weight API suitable for use in any environment (including the newly released J2ME) with the additional infrastructure to conform the algorithms to the JCE framework.</p>
<p>path: /root/.m2/repository/bouncycastle/bcprov-jdk14/136/bcprov-jdk14-136.jar</p>
<p>
<p>Library home page: <a href=http://www.bouncycastle.org/java.html>http://www.bouncycastle.org/java.html</a></p>
Dependency Hierarchy:
- jasperreports-3.1.2.jar (Root Library)
- itext-2.1.0.jar
- :x: **bcprov-jdk14-136.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/gate5/struts-2.3.20/commit/1d3a9da2b49a075b9122e05e19a483fc66b5aaf4">1d3a9da2b49a075b9122e05e19a483fc66b5aaf4</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://www.whitesourcesoftware.com/wp-content/uploads/2018/10/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
The TLS implementation in the Bouncy Castle Java library before 1.48 and C# library before 1.8 does not properly consider timing side-channel attacks on a noncompliant MAC check operation during the processing of malformed CBC padding, which allows remote attackers to conduct distinguishing attacks and plaintext-recovery attacks via statistical analysis of timing data for crafted packets, a related issue to CVE-2013-0169.
<p>Publish Date: 2013-02-08
<p>URL: <a href=https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2013-1624>CVE-2013-1624</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://www.whitesourcesoftware.com/wp-content/uploads/2018/10/cvss3.png' width=19 height=20> CVSS 2 Score Details (<b>4.0</b>)</summary>
<p>
Base Score Metrics not available</p>
</p>
</details>
<p></p>
<details><summary><img src='https://www.whitesourcesoftware.com/wp-content/uploads/2018/10/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://nvd.nist.gov/vuln/detail/CVE-2013-1624">https://nvd.nist.gov/vuln/detail/CVE-2013-1624</a></p>
<p>Release Date: 2013-02-08</p>
<p>Fix Resolution: 1.48,1.8</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
True
|
CVE-2013-1624 Medium Severity Vulnerability detected by WhiteSource - autoclosed - ## CVE-2013-1624 - Medium Severity Vulnerability
<details><summary><img src='https://www.whitesourcesoftware.com/wp-content/uploads/2018/10/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>bcprov-jdk14-136.jar</b></p></summary>
<p>The Bouncy Castle Crypto package is a Java implementation of cryptographic algorithms. The package is organised so that it contains a light-weight API suitable for use in any environment (including the newly released J2ME) with the additional infrastructure to conform the algorithms to the JCE framework.</p>
<p>path: /root/.m2/repository/bouncycastle/bcprov-jdk14/136/bcprov-jdk14-136.jar</p>
<p>
<p>Library home page: <a href=http://www.bouncycastle.org/java.html>http://www.bouncycastle.org/java.html</a></p>
Dependency Hierarchy:
- jasperreports-3.1.2.jar (Root Library)
- itext-2.1.0.jar
- :x: **bcprov-jdk14-136.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/gate5/struts-2.3.20/commit/1d3a9da2b49a075b9122e05e19a483fc66b5aaf4">1d3a9da2b49a075b9122e05e19a483fc66b5aaf4</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://www.whitesourcesoftware.com/wp-content/uploads/2018/10/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
The TLS implementation in the Bouncy Castle Java library before 1.48 and C# library before 1.8 does not properly consider timing side-channel attacks on a noncompliant MAC check operation during the processing of malformed CBC padding, which allows remote attackers to conduct distinguishing attacks and plaintext-recovery attacks via statistical analysis of timing data for crafted packets, a related issue to CVE-2013-0169.
<p>Publish Date: 2013-02-08
<p>URL: <a href=https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2013-1624>CVE-2013-1624</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://www.whitesourcesoftware.com/wp-content/uploads/2018/10/cvss3.png' width=19 height=20> CVSS 2 Score Details (<b>4.0</b>)</summary>
<p>
Base Score Metrics not available</p>
</p>
</details>
<p></p>
<details><summary><img src='https://www.whitesourcesoftware.com/wp-content/uploads/2018/10/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://nvd.nist.gov/vuln/detail/CVE-2013-1624">https://nvd.nist.gov/vuln/detail/CVE-2013-1624</a></p>
<p>Release Date: 2013-02-08</p>
<p>Fix Resolution: 1.48,1.8</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
non_process
|
cve medium severity vulnerability detected by whitesource autoclosed cve medium severity vulnerability vulnerable library bcprov jar the bouncy castle crypto package is a java implementation of cryptographic algorithms the package is organised so that it contains a light weight api suitable for use in any environment including the newly released with the additional infrastructure to conform the algorithms to the jce framework path root repository bouncycastle bcprov bcprov jar library home page a href dependency hierarchy jasperreports jar root library itext jar x bcprov jar vulnerable library found in head commit a href vulnerability details the tls implementation in the bouncy castle java library before and c library before does not properly consider timing side channel attacks on a noncompliant mac check operation during the processing of malformed cbc padding which allows remote attackers to conduct distinguishing attacks and plaintext recovery attacks via statistical analysis of timing data for crafted packets a related issue to cve publish date url a href cvss score details base score metrics not available suggested fix type upgrade version origin a href release date fix resolution step up your open source security game with whitesource
| 0
|
11,436
| 14,258,400,829
|
IssuesEvent
|
2020-11-20 06:12:03
|
panther-labs/panther
|
https://api.github.com/repos/panther-labs/panther
|
closed
|
Box onboarding backend error
|
bug p0 team:data processing
|
### Describe the bug
An error occurs when trying to onboard Box, requires user to start over
### Steps to reproduce
Steps to reproduce the behavior:
1. Go to Box onboarding flow in Sources
2. Attempt to configure and set credentials for Box
3. See error- you'll be forced to Start Over
### Expected behavior
User should be able to successfully set up Box after successfully entering in right credentials.
### Environment
- Panther version or commit: v1.12.1-release-1.13-01aa240ce
### Screenshots

|
1.0
|
Box onboarding backend error - ### Describe the bug
An error occurs when trying to onboard Box, requires user to start over
### Steps to reproduce
Steps to reproduce the behavior:
1. Go to Box onboarding flow in Sources
2. Attempt to configure and set credentials for Box
3. See error- you'll be forced to Start Over
### Expected behavior
User should be able to successfully set up Box after successfully entering in right credentials.
### Environment
- Panther version or commit: v1.12.1-release-1.13-01aa240ce
### Screenshots

|
process
|
box onboarding backend error describe the bug an error occurs when trying to onboard box requires user to start over steps to reproduce steps to reproduce the behavior go to box onboarding flow in sources attempt to configure and set credentials for box see error you ll be forced to start over expected behavior user should be able to successfully set up box after successfully entering in right credentials environment panther version or commit release screenshots
| 1
|
147,998
| 5,657,140,205
|
IssuesEvent
|
2017-04-10 05:44:38
|
k0shk0sh/FastHub
|
https://api.github.com/repos/k0shk0sh/FastHub
|
closed
|
Add an mark as read action to notifications in the notification panel
|
Priority: Medium Status: Accepted Type: Enhancement Type: Feature Request
|
There are times where I don't want to have to open the notification. A mark As read action would be helpful.
|
1.0
|
Add an mark as read action to notifications in the notification panel - There are times where I don't want to have to open the notification. A mark As read action would be helpful.
|
non_process
|
add an mark as read action to notifications in the notification panel there are times where i don t want to have to open the notification a mark as read action would be helpful
| 0
|