Column schema (name, dtype, observed range or cardinality): `Unnamed: 0` int64 (0 to 832k) | `id` float64 (2.49B to 32.1B) | `type` string (1 class) | `created_at` string (length 19) | `repo` string (length 7 to 112) | `repo_url` string (length 36 to 141) | `action` string (3 classes) | `title` string (length 1 to 744) | `labels` string (length 4 to 574) | `body` string (length 9 to 211k) | `index` string (10 classes) | `text_combine` string (length 96 to 211k) | `label` string (2 classes) | `text` string (length 96 to 188k) | `binary_label` int64 (0 or 1)

| Unnamed: 0 | id | type | created_at | repo | repo_url | action | title | labels | body | index | text_combine | label | text | binary_label |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
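Given the schema above, a minimal sketch of how one row maps onto a plain Python record. The field values are copied from the first data row below; the dict and function names are illustrative, not part of the dataset, and the `LABEL_TO_BINARY` mapping is inferred from the `label`/`binary_label` pairs visible in this preview.

```python
# Mapping inferred from the previewed rows: label "process" pairs with
# binary_label 1, "non_process" with 0.
LABEL_TO_BINARY = {"process": 1, "non_process": 0}

# One row of the preview as a plain record (text columns omitted for brevity).
record = {
    "Unnamed: 0": 963,
    "id": 3_421_717_607,
    "type": "IssuesEvent",
    "created_at": "2015-12-08 19:55:48",
    "repo": "nodejs/node",
    "repo_url": "https://api.github.com/repos/nodejs/node",
    "action": "closed",
    "title": "Rename nextTick",
    "labels": "feature request process",
    "label": "process",
    "binary_label": 1,
}

def consistent(rec: dict) -> bool:
    """True iff the row's `label` and `binary_label` agree under the mapping."""
    return LABEL_TO_BINARY[rec["label"]] == rec["binary_label"]

print(consistent(record))  # prints True
```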
| 963 | 3,421,717,607 | IssuesEvent | 2015-12-08 19:55:48 | nodejs/node | https://api.github.com/repos/nodejs/node | closed | Rename nextTick | feature request process | The `nextTick` function doesn't do what the name states, as it is more of like "current tick", or "from C++ into JS land". How about renaming it to something better? | 1.0 | Rename nextTick - The `nextTick` function doesn't do what the name states, as it is more of like "current tick", or "from C++ into JS land". How about renaming it to something better? | process | rename nexttick the nexttick function doesn t do what the name states as it is more of like current tick or from c into js land how about renaming it to something better | 1 |
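The `text` column appears to be a normalized form of `text_combine`. A sketch of that normalization, as reconstructed from the rows (lowercase, then collapse every run of non-letters to a single space); the URL and markdown-link stripping visible in some later rows is not reproduced here. On the first row it reproduces the stored `text` value exactly.

```python
import re

def clean(text_combine: str) -> str:
    """Approximate the text_combine -> text normalization seen in this preview:
    lowercase, then replace each run of non-letter characters with one space."""
    return re.sub(r"[^a-z]+", " ", text_combine.lower()).strip()

combined = ('Rename nextTick - The `nextTick` function doesn\'t do what the name '
            'states, as it is more of like "current tick", or "from C++ into JS '
            'land". How about renaming it to something better?')
print(clean(combined))
```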
| 35,973 | 17,371,472,803 | IssuesEvent | 2021-07-30 14:32:15 | handsontable/handsontable | https://api.github.com/repos/handsontable/handsontable | opened | `TextRender` incorrectly marks `0` as an empty cell if the `placeholder` option is defined. | Cell type: base / text / password Type: Bug hyperformula | ### Description<br>For spreadsheet-like use-cases where the cell's value depends on the used formula, the intuitive option is to use the `text` cell type. Unfortunately, if we define the `placeholder` option at the same time, `TextRenderer` in this condition:<br>https://github.com/handsontable/handsontable/blob/master/src/renderers/textRenderer/textRenderer.js#L23-L25<br>will treat `0` (number) as an empty cell.<br>It happens if we use mathematical/statistical formulas because the result from HyperFormula will be a number.<br>There is a solution for the structured data to use `type: 'numeric'` - then NumericRenderer will use placeholder only if cell's value is `null` or `undefined`.<br>### Steps to reproduce<br>1. Go to the demo<br>2. Compare the dataset definition with what Handsontable rendered.<br>### Demo<br>https://jsfiddle.net/6nmLyrbq/<br>### Your environment<br>* Handsontable version: 9.0.0+<br>* Browser Name and version: any<br>* Operating System: any | True | `TextRender` incorrectly marks `0` as an empty cell if the `placeholder` option is defined. - ### Description<br>For spreadsheet-like use-cases where the cell's value depends on the used formula, the intuitive option is to use the `text` cell type. Unfortunately, if we define the `placeholder` option at the same time, `TextRenderer` in this condition:<br>https://github.com/handsontable/handsontable/blob/master/src/renderers/textRenderer/textRenderer.js#L23-L25<br>will treat `0` (number) as an empty cell.<br>It happens if we use mathematical/statistical formulas because the result from HyperFormula will be a number.<br>There is a solution for the structured data to use `type: 'numeric'` - then NumericRenderer will use placeholder only if cell's value is `null` or `undefined`.<br>### Steps to reproduce<br>1. Go to the demo<br>2. Compare the dataset definition with what Handsontable rendered.<br>### Demo<br>https://jsfiddle.net/6nmLyrbq/<br>### Your environment<br>* Handsontable version: 9.0.0+<br>* Browser Name and version: any<br>* Operating System: any | non_process | textrender incorrectly marks as an empty cell if the placeholder option is defined description for spreadsheet like use cases where the cell s value depends on the used formula the intuitive option is to use the text cell type unfortunately if we define the placeholder option at the same time textrenderer in this condition will treat number as an empty cell it happens if we use mathematical statistical formulas because the result from hyperformula will be a number there is a solution for the structured data to use type numeric then numericrenderer will use placeholder only if cell s value is null or undefined steps to reproduce go to the demo compare the dataset definition with what handsontable rendered demo your environment handsontable version browser name and version any operating system any | 0 |
| 4,728 | 7,571,774,505 | IssuesEvent | 2018-04-23 13:16:35 | aiidateam/aiida_core | https://api.github.com/repos/aiidateam/aiida_core | closed | builder.launch() doesn't work inside a workchain | priority/important topic/JobCalculationAndProcess topic/Workflows | The reason for this is that ``builder.launch`` will not hook into the correct event loop. For this to work, it would need to have access to the executing workchain.<br>Suggested solution: Remove the ``launch`` method, and instead allow for ``run(builder)`` and ``submit(builder)``. To make this easier to do from the command line, one could add ``run`` and / or ``submit`` to the automatically imported names in the verdi shell.<br>Pros:<br>- The ``ProcessBuilder`` is now purely for inputs, and there is no name clash with the ``launch`` method.<br>- There are no "two ways of doing things" for running / submitting calculations -- everything goes through the same methods.<br>Cons:<br>- The discoverability of the ``run`` and ``submit`` functions is a bit worse than having the ``launch`` method, since it cannot directly be tab-completed from the builder.<br>- In the ``submit`` / ``run`` method, we might have to do a switch on the type of input given. | 1.0 | builder.launch() doesn't work inside a workchain - The reason for this is that ``builder.launch`` will not hook into the correct event loop. For this to work, it would need to have access to the executing workchain.<br>Suggested solution: Remove the ``launch`` method, and instead allow for ``run(builder)`` and ``submit(builder)``. To make this easier to do from the command line, one could add ``run`` and / or ``submit`` to the automatically imported names in the verdi shell.<br>Pros:<br>- The ``ProcessBuilder`` is now purely for inputs, and there is no name clash with the ``launch`` method.<br>- There are no "two ways of doing things" for running / submitting calculations -- everything goes through the same methods.<br>Cons:<br>- The discoverability of the ``run`` and ``submit`` functions is a bit worse than having the ``launch`` method, since it cannot directly be tab-completed from the builder.<br>- In the ``submit`` / ``run`` method, we might have to do a switch on the type of input given. | process | builder launch doesn t work inside a workchain the reason for this is that builder launch will not hook into the correct event loop for this to work it would need to have access to the executing workchain suggested solution remove the launch method and instead allow for run builder and submit builder to make this easier to do from the command line one could add run and or submit to the automatically imported names in the verdi shell pros the processbuilder is now purely for inputs and there is no name clash with the launch method there are no two ways of doing things for running submitting calculations everything goes through the same methods cons the discoverability of the run and submit functions is a bit worse than having the launch method since it cannot directly be tab completed from the builder in the submit run method we might have to do a switch on the type of input given | 1 |
| 3,709 | 6,731,569,444 | IssuesEvent | 2017-10-18 08:10:46 | nlbdev/pipeline | https://api.github.com/repos/nlbdev/pipeline | closed | Switch position of generated "Om boka" and TOC | enhancement pre-processing Priority:2 - Medium | *from Trello:*<br>TOC first, then "Om boka".<br>@matskober correct? | 1.0 | Switch position of generated "Om boka" and TOC - *from Trello:*<br>TOC first, then "Om boka".<br>@matskober correct? | process | switch position of generated om boka and toc from trello toc first then om boka matskober correct | 1 |
| 1,969 | 4,790,456,356 | IssuesEvent | 2016-10-31 08:40:27 | opentrials/opentrials | https://api.github.com/repos/opentrials/opentrials | closed | Consolidate the logging format on collectors/processors | 3. In Development Collectors Processors | It should be easy to query for logs on a specific collector or processor, errors, etc. | 1.0 | Consolidate the logging format on collectors/processors - It should be easy to query for logs on a specific collector or processor, errors, etc. | process | consolidate the logging format on collectors processors it should be easy to query for logs on a specific collector or processor errors etc | 1 |
| 10,495 | 13,259,434,012 | IssuesEvent | 2020-08-20 16:42:32 | pystatgen/sgkit | https://api.github.com/repos/pystatgen/sgkit | opened | Recommend using branches in developer's fork | process + tools | I think we should make it a rule that developers use branches in their own fork for PRs rather than create personal branches in the upstream repo. It's confusing to have lots of branches in the upstream and it makes it much harder to keep the repo tidy. I would vote that (unless there's a good reason not to) all PR branches should come from a users fork.<br>Any objections? | 1.0 | Recommend using branches in developer's fork - I think we should make it a rule that developers use branches in their own fork for PRs rather than create personal branches in the upstream repo. It's confusing to have lots of branches in the upstream and it makes it much harder to keep the repo tidy. I would vote that (unless there's a good reason not to) all PR branches should come from a users fork.<br>Any objections? | process | recommend using branches in developer s fork i think we should make it a rule that developers use branches in their own fork for prs rather than create personal branches in the upstream repo it s confusing to have lots of branches in the upstream and it makes it much harder to keep the repo tidy i would vote that unless there s a good reason not to all pr branches should come from a users fork any objections | 1 |
| 129,257 | 10,567,791,287 | IssuesEvent | 2019-10-06 07:53:16 | measurement-kit/measurement-kit | https://api.github.com/repos/measurement-kit/measurement-kit | closed | Add HTTP keyword filtering test | new test wishlist | This test should perform HTTP requests containing potential censorship keywords on the given vantage point. Similar to the [HTTP keyword filtering test](https://github.com/TheTorProject/ooni-probe/blob/master/ooni/nettests/experimental/http_keyword_filtering.py). | 1.0 | Add HTTP keyword filtering test - This test should perform HTTP requests containing potential censorship keywords on the given vantage point. Similar to the [HTTP keyword filtering test](https://github.com/TheTorProject/ooni-probe/blob/master/ooni/nettests/experimental/http_keyword_filtering.py). | non_process | add http keyword filtering test this test should perform http requests containing potential censorship keywords on the given vantage point similar to the | 0 |
| 8,917 | 12,017,849,924 | IssuesEvent | 2020-04-10 19:22:11 | qgis/QGIS | https://api.github.com/repos/qgis/QGIS | closed | Input issue GRASS v.in.lidar | Bug Processing | Hi there,<br>The function v.in.lidar states that it accepts both las and laz. But the file filter only show .las and the input shows an error when providing a laz file.<br>I might provide a fix but if someone else can do quick fix please tell me.<br>Thanks, | 1.0 | Input issue GRASS v.in.lidar - Hi there,<br>The function v.in.lidar states that it accepts both las and laz. But the file filter only show .las and the input shows an error when providing a laz file.<br>I might provide a fix but if someone else can do quick fix please tell me.<br>Thanks, | process | input issue grass v in lidar hi there the function v in lidar states that it accepts both las and laz but the file filter only show las and the input shows an error when providing a laz file i might provide a fix but if someone else can do quick fix please tell me thanks | 1 |
| 626,928 | 19,847,330,096 | IssuesEvent | 2022-01-21 08:20:36 | vincetiu8/zombie-game | https://api.github.com/repos/vincetiu8/zombie-game | closed | Zombies fly all over the place | type/bug area/network area/map size/s priority/medium | When objects are instantiated with `PhotonNetwork.Instantiate`, we should immediately set their coordinates instead of waiting. This is to prevent the current effect where we have zombies flying around the place as they are spawned in. | 1.0 | Zombies fly all over the place - When objects are instantiated with `PhotonNetwork.Instantiate`, we should immediately set their coordinates instead of waiting. This is to prevent the current effect where we have zombies flying around the place as they are spawned in. | non_process | zombies fly all over the place when objects are instantiated with photonnetwork instantiate we should immediately set their coordinates instead of waiting this is to prevent the current effect where we have zombies flying around the place as they are spawned in | 0 |
| 5,976 | 8,795,669,739 | IssuesEvent | 2018-12-22 18:36:15 | shirou/gopsutil | https://api.github.com/repos/shirou/gopsutil | closed | How to get the CPU usage of the background process on the windows platform. | os:windows package:process | The 'process.CPUPercent()' method works for front process, but it does not works for background process. The program prompts 'could not get CreationDate: Access is denied'.<br>How to get the CPU usage of the background process on the windows platform. | 1.0 | How to get the CPU usage of the background process on the windows platform. - The 'process.CPUPercent()' method works for front process, but it does not works for background process. The program prompts 'could not get CreationDate: Access is denied'.<br>How to get the CPU usage of the background process on the windows platform. | process | how to get the cpu usage of the background process on the windows platform the process cpupercent method works for front process but it does not works for background process the program prompts could not get creationdate access is denied how to get the cpu usage of the background process on the windows platform | 1 |
| 272,012 | 23,646,342,208 | IssuesEvent | 2022-08-25 22:48:22 | lowRISC/opentitan | https://api.github.com/repos/lowRISC/opentitan | opened | [test-triage] chip_sw_otbn_randomness | Component:TestTriage | ### Hierarchy of regression failure<br>Chip Level<br>### Failure Description<br>Test chip_sw_otbn_randomness has 1 failures.<br>0.chip_sw_otbn_randomness.3823033481<br>Line 401, in log /container/opentitan-public/scratch/os_regression/chip_earlgrey_asic-sim-vcs/0.chip_sw_otbn_randomness/latest/run.log<br>Offending '((req_i & $past(req_i)) == $past(req_i))'<br>UVM_ERROR @ 3701.039724 us: (prim_arbiter_ppc.sv:171) [ASSERT FAILED] ReqStaysHighUntilGranted0_M<br>UVM_INFO @ 3701.039724 us: (uvm_report_catcher.svh:705) [UVM/REPORT/CATCHER]<br>--- UVM Report catcher Summary ---<br>### Steps to Reproduce<br>- Commit hash where failure was observed [514f19969](https://github.com/lowrisc/opentitan/tree/514f199692cad015dd32e3d313c228324380b8b0)<br>- ./util/dvsim/dvsim.py hw/top_earlgrey/dv/chip_sim_cfg.hjson -i chip_sw_lc_ctrl_transition --fixed-seed 3823033481 --build-seed 408324269 --waves -v h<br>- Kokoro build number if applicable<br>### Tests with similar or related failures<br>_No response_ | 1.0 | [test-triage] chip_sw_otbn_randomness - ### Hierarchy of regression failure<br>Chip Level<br>### Failure Description<br>Test chip_sw_otbn_randomness has 1 failures.<br>0.chip_sw_otbn_randomness.3823033481<br>Line 401, in log /container/opentitan-public/scratch/os_regression/chip_earlgrey_asic-sim-vcs/0.chip_sw_otbn_randomness/latest/run.log<br>Offending '((req_i & $past(req_i)) == $past(req_i))'<br>UVM_ERROR @ 3701.039724 us: (prim_arbiter_ppc.sv:171) [ASSERT FAILED] ReqStaysHighUntilGranted0_M<br>UVM_INFO @ 3701.039724 us: (uvm_report_catcher.svh:705) [UVM/REPORT/CATCHER]<br>--- UVM Report catcher Summary ---<br>### Steps to Reproduce<br>- Commit hash where failure was observed [514f19969](https://github.com/lowrisc/opentitan/tree/514f199692cad015dd32e3d313c228324380b8b0)<br>- ./util/dvsim/dvsim.py hw/top_earlgrey/dv/chip_sim_cfg.hjson -i chip_sw_lc_ctrl_transition --fixed-seed 3823033481 --build-seed 408324269 --waves -v h<br>- Kokoro build number if applicable<br>### Tests with similar or related failures<br>_No response_ | non_process | chip sw otbn randomness hierarchy of regression failure chip level failure description test chip sw otbn randomness has failures chip sw otbn randomness line in log container opentitan public scratch os regression chip earlgrey asic sim vcs chip sw otbn randomness latest run log offending req i past req i past req i uvm error us prim arbiter ppc sv m uvm info us uvm report catcher svh uvm report catcher summary steps to reproduce commit hash where failure was observed util dvsim dvsim py hw top earlgrey dv chip sim cfg hjson i chip sw lc ctrl transition fixed seed build seed waves v h kokoro build number if applicable tests with similar or related failures no response | 0 |
| 185,988 | 15,039,406,730 | IssuesEvent | 2021-02-02 18:36:55 | Calmanning/so_thirsty | https://api.github.com/repos/Calmanning/so_thirsty | closed | "/:user/plant/:plant" PAGE FIX - Plant images | documentation | As a User, when I add a photo, the most recent photo shows up next to the previous photos.<br>Currently the photos append beneath each other.<br>- [x] New photos on the page will append next to the previous photo<br>- [x] add the date to the photo card with Handlebars notation (do we need to modify the time-stamp format?).<br>- [x] ask group if we want the "I have enough water" phrase in each of the photo cards or if it should go somewhere else (like as text in the "just watered" button that will toggle based on the "needs water" condition?) | 1.0 | "/:user/plant/:plant" PAGE FIX - Plant images - As a User, when I add a photo, the most recent photo shows up next to the previous photos.<br>Currently the photos append beneath each other.<br>- [x] New photos on the page will append next to the previous photo<br>- [x] add the date to the photo card with Handlebars notation (do we need to modify the time-stamp format?).<br>- [x] ask group if we want the "I have enough water" phrase in each of the photo cards or if it should go somewhere else (like as text in the "just watered" button that will toggle based on the "needs water" condition?) | non_process | user plant plant page fix plant images as a user when i add a photo the most recent photo shows up next to the previous photos currently the photos append beneath each other new photos on the page will append next to the previous photo add the date to the photo card with handlebars notation do we need to modify the time stamp format ask group if we want the i have enough water phrase in each of the photo cards or if it should go somewhere else like as text in the just watered button that will toggle based on the needs water condition | 0 |
18,778
| 24,680,519,100
|
IssuesEvent
|
2022-10-18 20:52:01
|
geneontology/go-ontology
|
https://api.github.com/repos/geneontology/go-ontology
|
closed
|
Add donotannotate to renal system process
|
organism-level process do_not_annotate_flag
|
See #16143 where we decided to do this for "renal system development"
surely we also want this for
```yaml
id: GO:0003014
name: renal system process
namespace: biological_process
def: "A organ system process carried out by any of the organs or tissues of the renal system. The renal system maintains fluid balance, and contributes to electrolyte balance, acid/base balance, and disposal of nitrogenous waste products. In humans, the renal system comprises a pair of kidneys, a pair of ureters, urinary bladder, urethra, sphincter muscle and associated blood vessels; in other species, the renal system may comprise related structures (e.g., nephrocytes and malpighian tubules in Drosophila)." [GOC:cjm, GOC:mtg_cardio, GOC:mtg_kidney_jan10]
synonym: "excretory system process" EXACT []
synonym: "kidney system process" RELATED []
is_a: GO:0003008 ! system process
```
note that the ones below likely refer to the vertebrate renal system, possibly specifically the upper urinary tract
```yaml
id: GO:0072048
name: renal system pattern specification
namespace: biological_process
def: "Any developmental process that results in the creation of defined areas or spaces within an organism to which cells respond and eventually are instructed to differentiate into the anatomical structures of the renal system." [GOC:mtg_kidney_jan10]
synonym: "renal system pattern formation" RELATED [GOC:mtg_kidney_jan10]
is_a: GO:0007389 ! pattern specification process
relationship: part_of GO:0072001 ! renal system development
id: GO:0001977
name: renal system process involved in regulation of blood volume
namespace: biological_process
def: "A slow mechanism of blood pressure regulation that responds to changes in pressure resulting from fluid and salt intake by modulating the quantity of blood in the circulatory system." [GOC:dph, GOC:tb, ISBN:0721643949]
synonym: "renal blood volume control of blood pressure" RELATED []
synonym: "renal regulation of blood volume" RELATED [GOC:dph, GOC:tb]
is_a: GO:0003071 ! renal system process involved in regulation of systemic arterial blood pressure
relationship: part_of GO:0050878 ! regulation of body fluid levels
```
|
1.0
|
Add donotannotate to renal system process - See #16143 where we decided to do this for "renal system development"
surely we also want this for
```yaml
id: GO:0003014
name: renal system process
namespace: biological_process
def: "A organ system process carried out by any of the organs or tissues of the renal system. The renal system maintains fluid balance, and contributes to electrolyte balance, acid/base balance, and disposal of nitrogenous waste products. In humans, the renal system comprises a pair of kidneys, a pair of ureters, urinary bladder, urethra, sphincter muscle and associated blood vessels; in other species, the renal system may comprise related structures (e.g., nephrocytes and malpighian tubules in Drosophila)." [GOC:cjm, GOC:mtg_cardio, GOC:mtg_kidney_jan10]
synonym: "excretory system process" EXACT []
synonym: "kidney system process" RELATED []
is_a: GO:0003008 ! system process
```
note that the ones below likely refer to the vertebrate renal system, possibly specifically the upper urinary tract
```yaml
id: GO:0072048
name: renal system pattern specification
namespace: biological_process
def: "Any developmental process that results in the creation of defined areas or spaces within an organism to which cells respond and eventually are instructed to differentiate into the anatomical structures of the renal system." [GOC:mtg_kidney_jan10]
synonym: "renal system pattern formation" RELATED [GOC:mtg_kidney_jan10]
is_a: GO:0007389 ! pattern specification process
relationship: part_of GO:0072001 ! renal system development
id: GO:0001977
name: renal system process involved in regulation of blood volume
namespace: biological_process
def: "A slow mechanism of blood pressure regulation that responds to changes in pressure resulting from fluid and salt intake by modulating the quantity of blood in the circulatory system." [GOC:dph, GOC:tb, ISBN:0721643949]
synonym: "renal blood volume control of blood pressure" RELATED []
synonym: "renal regulation of blood volume" RELATED [GOC:dph, GOC:tb]
is_a: GO:0003071 ! renal system process involved in regulation of systemic arterial blood pressure
relationship: part_of GO:0050878 ! regulation of body fluid levels
```
|
process
|
add donotannotate to renal system process see where we decided to do this for renal system development surely we also want this for yaml id go name renal system process namespace biological process def a organ system process carried out by any of the organs or tissues of the renal system the renal system maintains fluid balance and contributes to electrolyte balance acid base balance and disposal of nitrogenous waste products in humans the renal system comprises a pair of kidneys a pair of ureters urinary bladder urethra sphincter muscle and associated blood vessels in other species the renal system may comprise related structures e g nephrocytes and malpighian tubules in drosophila synonym excretory system process exact synonym kidney system process related is a go system process note that the ones below likely refer to the vertebrate renal system possibly specifically the upper urinary tract yaml id go name renal system pattern specification namespace biological process def any developmental process that results in the creation of defined areas or spaces within an organism to which cells respond and eventually are instructed to differentiate into the anatomical structures of the renal system synonym renal system pattern formation related is a go pattern specification process relationship part of go renal system development id go name renal system process involved in regulation of blood volume namespace biological process def a slow mechanism of blood pressure regulation that responds to changes in pressure resulting from fluid and salt intake by modulating the quantity of blood in the circulatory system synonym renal blood volume control of blood pressure related synonym renal regulation of blood volume related is a go renal system process involved in regulation of systemic arterial blood pressure relationship part of go regulation of body fluid levels
| 1
|
| 21,799 | 30,312,473,643 | IssuesEvent | 2023-07-10 13:39:21 | USGS-WiM/StreamStats | https://api.github.com/repos/USGS-WiM/StreamStats | closed | BP: loader bug | Batch Processor | Follow-up to #1615 (missed this bug)<br>Steps to recreate:<br>1. Do not select a State / Region<br>2. Check "Compute Flow Statistics" and "Compute Basin Characteristics"<br>3. The loading spinners appear in the "Select Flow Statistics" and "Select Basin Characteristics" areas<br>![image](https://user-images.githubusercontent.com/6961296/252346803-8793faa2-c53b-445c-a62a-f9d06d3a53c4.png)<br>When a State / Region is not selected:<br>- Hide the "Select All Flow Statistics" and "Select All Basin Characteristics" buttons<br>- Hide the spinner<br>- Show text that says "Please select a State / Region to select Flow Statistics" and "Please select a State / Region to select Basin Characteristics" | 1.0 | BP: loader bug - Follow-up to #1615 (missed this bug)<br>Steps to recreate:<br>1. Do not select a State / Region<br>2. Check "Compute Flow Statistics" and "Compute Basin Characteristics"<br>3. The loading spinners appear in the "Select Flow Statistics" and "Select Basin Characteristics" areas<br>![image](https://user-images.githubusercontent.com/6961296/252346803-8793faa2-c53b-445c-a62a-f9d06d3a53c4.png)<br>When a State / Region is not selected:<br>- Hide the "Select All Flow Statistics" and "Select All Basin Characteristics" buttons<br>- Hide the spinner<br>- Show text that says "Please select a State / Region to select Flow Statistics" and "Please select a State / Region to select Basin Characteristics" | process | bp loader bug follow up to missed this bug steps to recreate do not select a state region check compute flow statistics and compute basin characteristics the loading spinners appear in the select flow statistics and select basin characteristics areas when a state region is not selected hide the select all flow statistics and select all basin characteristics buttons hide the spinner show text that says please select a state region to select flow statistics and please select a state region to select basin characteristics | 1 |
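The rows above with a visible `binary_label` give a rough sense of the class balance in this preview. A sketch of the majority-class baseline over just these 14 previewed rows (not the full 832k-row dataset); the label list is transcribed from the rows in order.

```python
from collections import Counter

# binary_label values of the 14 fully shown rows above, in row order
# (1 = process, 0 = non_process).
labels = [1, 0, 1, 1, 1, 1, 0, 1, 0, 1, 0, 0, 1, 1]

counts = Counter(labels)
majority_class, majority_count = counts.most_common(1)[0]
baseline_accuracy = majority_count / len(labels)  # accuracy of always guessing the majority
print(majority_class, majority_count, len(labels))  # prints: 1 9 14
```

So within this preview, always predicting "process" is right 9 times out of 14; any classifier trained on the `text` column should beat that.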
80,639
| 23,266,998,312
|
IssuesEvent
|
2022-08-04 18:25:27
|
angular/angular-cli
|
https://api.github.com/repos/angular/angular-cli
|
closed
|
Double quoted url() arguments are parsed incorrectly
|
type: bug/fix freq1: low severity3: broken comp: devkit/build-angular devkit/build-angular: browser
|
# 🐞 Bug report
### Command (mark with an `x`)
<!-- Can you pin-point the command or commands that are effected by this bug? -->
<!-- ✍️edit: -->
- [ ] new
- [x] build
- [ ] serve
- [ ] test
- [ ] e2e
- [ ] generate
- [ ] add
- [ ] update
- [ ] lint
- [ ] extract-i18n
- [ ] run
- [ ] config
- [ ] help
- [ ] version
- [ ] doc
### Is this a regression?
<!-- Did this behavior use to work in the previous version? -->
unknown
### Description
<!-- ✍️--> A clear and concise description of the problem...
The issue was discovered [here](https://github.com/swagger-api/swagger-ui/issues/8116) and explored [here](https://github.com/webpack-contrib/postcss-loader/issues/595) with `swagger-ui-dist` package
A change in the npm package swagger-ui-dist changed the quotes around its url() arguments in swagger-ui.css from single to double. For example:
v4.12.0
`background:url('data:image/svg+xml;charset=utf-8,<svg xmlns="http://www.w3.org/2000/svg" width="16" height="15" aria-hidden="true"><path fill="%23fff" fill-rule="evenodd" d="M4 12h4v1H4v-1zm5-6H4v1h5V6zm2 3V7l-3 3 3 3v-2h5V9h-5zM6.5 8H4v1h2.5V8zM4 11h2.5v-1H4v1zm9 1h1v2c-.02.28-.11.52-.3.7-.19.18-.42.28-.7.3H3c-.55 0-1-.45-1-1V3c0-.55.45-1 1-1h3c0-1.11.89-2 2-2 1.11 0 2 .89 2 2h3c.55 0 1 .45 1 1v5h-1V5H3v9h10v-2zM4 4h8c0-.55-.45-1-1-1h-1c-.55 0-1-.45-1-1s-.45-1-1-1-1 .45-1 1-.45 1-1 1H5c-.55 0-1 .45-1 1z"/></svg>') 50% no-repeat;`
v4.13.0
`background:url("data:image/svg+xml; charset=utf-8,<svg xmlns=\"http://www.w3.org/2000/svg\" width=\"16\" height=\"15\" aria-hidden=\"true\"><path fill=\"%23fff\" fill-rule=\"evenodd\" d=\"M4 12h4v1H4v-1zm5-6H4v1h5V6zm2 3V7l-3 3 3 3v-2h5V9h-5zM6.5 8H4v1h2.5V8zM4 11h2.5v-1H4v1zm9 1h1v2c-.02.28-.11.52-.3.7-.19.18-.42.28-.7.3H3c-.55 0-1-.45-1-1V3c0-.55.45-1 1-1h3c0-1.11.89-2 2-2 1.11 0 2 .89 2 2h3c.55 0 1 .45 1 1v5h-1V5H3v9h10v-2zM4 4h8c0-.55-.45-1-1-1h-1c-.55 0-1-.45-1-1s-.45-1-1-1-1 .45-1 1-.45 1-1 1H5c-.55 0-1 .45-1 1z\"/></svg>") 50% no-repeat;`
## 🔬 Minimal Reproduction
Here is a demo project: https://github.com/nspire909/swagger-test
<!--
Simple steps to reproduce this bug.
Please include: commands run (including args), packages added, related code changes.
If reproduction steps are not enough for reproduction of your issue, please create a minimal GitHub repository with the reproduction of the issue.
A good way to make a minimal reproduction is to create a new app via `ng new repro-app` and add the minimum possible code to show the problem.
Share the link to the repo below along with step-by-step instructions to reproduce the problem, as well as expected and actual behavior.
Issues that don't have enough info and can't be reproduced will be closed.
You can read more about issue submission guidelines here: https://github.com/angular/angular-cli/blob/main/CONTRIBUTING.md#-submitting-an-issue
-->
## 🔥 Exception or Error
<!-- If the issue is accompanied by an exception or an error, please share it below: -->
<!-- ✍️-->
```
./node_modules/swagger-ui-dist/swagger-ui.css - Error: Module Error (from ./node_modules/@angular-devkit/build-angular/node_modules/postcss-loader/dist/
cjs.js):
<css input>:34:8: Can't resolve '"data:image/svg+xml;charset=utf-8,<svg xmlns=/"http://www.w3.org/2000/svg/" width=/"16/" height=/"15/" aria-hidden=/"tr
ue/"><path fill=/"%23fff/" fill-rule=/"evenodd/" d=/"M4 12h4v1H4v-1zm5-6H4v1h5V6zm2 3V7l-3 3 3 3v-2h5V9h-5zM6.5 8H4v1h2.5V8zM4 11h2.5v-1H4v1zm9 1h1v2c-.
02.28-.11.52-.3.7-.19.18-.42.28-.7.3H3c-.55 0-1-.45-1-1V3c0-.55.45-1 1-1h3c0-1.11.89-2 2-2 1.11 0 2 .89 2 2h3c.55 0 1 .45 1 1v5h-1V5H3v9h10v-2zM4 4h8c0-
.55-.45-1-1-1h-1c-.55 0-1-.45-1-1s-.45-1-1-1-1 .45-1 1-.45 1-1 1H5c-.55 0-1 .45-1 1z/"/></svg>"'
```
## 🌍 Your Environment
<pre><code>
<!-- run `ng version` and paste output below -->
<!-- ✍️-->
_ _ ____ _ ___
/ \ _ __ __ _ _ _| | __ _ _ __ / ___| | |_ _|
/ △ \ | '_ \ / _` | | | | |/ _` | '__| | | | | | |
/ ___ \| | | | (_| | |_| | | (_| | | | |___| |___ | |
/_/ \_\_| |_|\__, |\__,_|_|\__,_|_| \____|_____|___|
|___/
Angular CLI: 13.3.9
Node: 16.14.2
Package Manager: npm 7.14.0
OS: win32 x64
Angular: 13.3.11
... animations, common, compiler, compiler-cli, core, forms
... language-service, platform-browser, platform-browser-dynamic
... router, service-worker
Package Version
---------------------------------------------------------
@angular-devkit/architect 0.1303.9
@angular-devkit/build-angular 13.3.9
@angular-devkit/core 13.3.9
@angular-devkit/schematics 13.3.9
@angular/cli 13.3.9
@schematics/angular 13.3.9
rxjs 7.5.6
typescript 4.6.2
</code></pre>
|
2.0
|
Double quoted url() arguments are parsed incorrectly - # 🐞 Bug report
### Command (mark with an `x`)
<!-- Can you pin-point the command or commands that are effected by this bug? -->
<!-- ✍️edit: -->
- [ ] new
- [x] build
- [ ] serve
- [ ] test
- [ ] e2e
- [ ] generate
- [ ] add
- [ ] update
- [ ] lint
- [ ] extract-i18n
- [ ] run
- [ ] config
- [ ] help
- [ ] version
- [ ] doc
### Is this a regression?
<!-- Did this behavior use to work in the previous version? -->
unknown
### Description
<!-- ✍️--> A clear and concise description of the problem...
The issue was discovered [here](https://github.com/swagger-api/swagger-ui/issues/8116) and explored [here](https://github.com/webpack-contrib/postcss-loader/issues/595) with `swagger-ui-dist` package
A change in the npm package swagger-ui-dist changed the quotes around its url() arguments in swagger-ui.css from single to double. For example:
v4.12.0
`background:url('data:image/svg+xml;charset=utf-8,<svg xmlns="http://www.w3.org/2000/svg" width="16" height="15" aria-hidden="true"><path fill="%23fff" fill-rule="evenodd" d="M4 12h4v1H4v-1zm5-6H4v1h5V6zm2 3V7l-3 3 3 3v-2h5V9h-5zM6.5 8H4v1h2.5V8zM4 11h2.5v-1H4v1zm9 1h1v2c-.02.28-.11.52-.3.7-.19.18-.42.28-.7.3H3c-.55 0-1-.45-1-1V3c0-.55.45-1 1-1h3c0-1.11.89-2 2-2 1.11 0 2 .89 2 2h3c.55 0 1 .45 1 1v5h-1V5H3v9h10v-2zM4 4h8c0-.55-.45-1-1-1h-1c-.55 0-1-.45-1-1s-.45-1-1-1-1 .45-1 1-.45 1-1 1H5c-.55 0-1 .45-1 1z"/></svg>') 50% no-repeat;`
v4.13.0
`background:url("data:image/svg+xml; charset=utf-8,<svg xmlns=\"http://www.w3.org/2000/svg\" width=\"16\" height=\"15\" aria-hidden=\"true\"><path fill=\"%23fff\" fill-rule=\"evenodd\" d=\"M4 12h4v1H4v-1zm5-6H4v1h5V6zm2 3V7l-3 3 3 3v-2h5V9h-5zM6.5 8H4v1h2.5V8zM4 11h2.5v-1H4v1zm9 1h1v2c-.02.28-.11.52-.3.7-.19.18-.42.28-.7.3H3c-.55 0-1-.45-1-1V3c0-.55.45-1 1-1h3c0-1.11.89-2 2-2 1.11 0 2 .89 2 2h3c.55 0 1 .45 1 1v5h-1V5H3v9h10v-2zM4 4h8c0-.55-.45-1-1-1h-1c-.55 0-1-.45-1-1s-.45-1-1-1-1 .45-1 1-.45 1-1 1H5c-.55 0-1 .45-1 1z\"/></svg>") 50% no-repeat;`
## 🔬 Minimal Reproduction
Here is a demo project: https://github.com/nspire909/swagger-test
<!--
Simple steps to reproduce this bug.
Please include: commands run (including args), packages added, related code changes.
If reproduction steps are not enough for reproduction of your issue, please create a minimal GitHub repository with the reproduction of the issue.
A good way to make a minimal reproduction is to create a new app via `ng new repro-app` and add the minimum possible code to show the problem.
Share the link to the repo below along with step-by-step instructions to reproduce the problem, as well as expected and actual behavior.
Issues that don't have enough info and can't be reproduced will be closed.
You can read more about issue submission guidelines here: https://github.com/angular/angular-cli/blob/main/CONTRIBUTING.md#-submitting-an-issue
-->
## 🔥 Exception or Error
<!-- If the issue is accompanied by an exception or an error, please share it below: -->
<!-- ✍️-->
```
./node_modules/swagger-ui-dist/swagger-ui.css - Error: Module Error (from ./node_modules/@angular-devkit/build-angular/node_modules/postcss-loader/dist/
cjs.js):
<css input>:34:8: Can't resolve '"data:image/svg+xml;charset=utf-8,<svg xmlns=/"http://www.w3.org/2000/svg/" width=/"16/" height=/"15/" aria-hidden=/"tr
ue/"><path fill=/"%23fff/" fill-rule=/"evenodd/" d=/"M4 12h4v1H4v-1zm5-6H4v1h5V6zm2 3V7l-3 3 3 3v-2h5V9h-5zM6.5 8H4v1h2.5V8zM4 11h2.5v-1H4v1zm9 1h1v2c-.
02.28-.11.52-.3.7-.19.18-.42.28-.7.3H3c-.55 0-1-.45-1-1V3c0-.55.45-1 1-1h3c0-1.11.89-2 2-2 1.11 0 2 .89 2 2h3c.55 0 1 .45 1 1v5h-1V5H3v9h10v-2zM4 4h8c0-
.55-.45-1-1-1h-1c-.55 0-1-.45-1-1s-.45-1-1-1-1 .45-1 1-.45 1-1 1H5c-.55 0-1 .45-1 1z/"/></svg>"'
```
## 🌍 Your Environment
<pre><code>
<!-- run `ng version` and paste output below -->
<!-- ✍️-->
     _                      _                 ____ _     ___
    / \   _ __   __ _ _   _| | __ _ _ __     / ___| |   |_ _|
   / △ \ | '_ \ / _` | | | | |/ _` | '__|   | |   | |    | |
  / ___ \| | | | (_| | |_| | | (_| | |      | |___| |___ | |
 /_/   \_\_| |_|\__, |\__,_|_|\__,_|_|       \____|_____|___|
                |___/
Angular CLI: 13.3.9
Node: 16.14.2
Package Manager: npm 7.14.0
OS: win32 x64
Angular: 13.3.11
... animations, common, compiler, compiler-cli, core, forms
... language-service, platform-browser, platform-browser-dynamic
... router, service-worker
Package Version
---------------------------------------------------------
@angular-devkit/architect 0.1303.9
@angular-devkit/build-angular 13.3.9
@angular-devkit/core 13.3.9
@angular-devkit/schematics 13.3.9
@angular/cli 13.3.9
@schematics/angular 13.3.9
rxjs 7.5.6
typescript 4.6.2
</code></pre>
|
non_process
|
double quoted url arguments are parsed incorrectly 🐞 bug report command mark with an x new build serve test generate add update lint extract run config help version doc is this a regression unknown description a clear and concise description of the problem the issue was discovered and explored with swagger ui dist package a change in the npm package swagger ui dist changed the quotes around its url arguments in swagger ui css from single to double for example background url data image svg xml charset utf no repeat background url data image svg xml charset utf no repeat 🔬 minimal reproduction here is a demo project simple steps to reproduce this bug please include commands run including args packages added related code changes if reproduction steps are not enough for reproduction of your issue please create a minimal github repository with the reproduction of the issue a good way to make a minimal reproduction is to create a new app via ng new repro app and add the minimum possible code to show the problem share the link to the repo below along with step by step instructions to reproduce the problem as well as expected and actual behavior issues that don t have enough info and can t be reproduced will be closed you can read more about issue submission guidelines here 🔥 exception or error node modules swagger ui dist swagger ui css error module error from node modules angular devkit build angular node modules postcss loader dist cjs js can t resolve data image svg xml charset utf svg xmlns width height aria hidden tr ue path fill fill rule evenodd d 🌍 your environment △ angular cli node package manager npm os angular animations common compiler compiler cli core forms language service platform browser platform browser dynamic router service worker package version angular devkit architect angular devkit build angular angular devkit core angular devkit schematics angular cli schematics angular rxjs typescript
| 0
|
13,703
| 16,459,028,685
|
IssuesEvent
|
2021-05-21 16:07:56
|
googleapis/repo-automation-bots
|
https://api.github.com/repos/googleapis/repo-automation-bots
|
closed
|
test: add end 2 end authentication test for getAuthenticatedOctokit
|
priority: p2 type: process
|
Background is #1825
My guess is the recent gcf-utils are broken for scheduler authentication.
It's better if we have end 2 end test for getAuthenticatedOctokit to prevent similar things to happen.
|
1.0
|
test: add end 2 end authentication test for getAuthenticatedOctokit - Background is #1825
My guess is the recent gcf-utils are broken for scheduler authentication.
It's better if we have end 2 end test for getAuthenticatedOctokit to prevent similar things to happen.
|
process
|
test add end end authentication test for getauthenticatedoctokit background is my guess is the recent gcf utils are broken for scheduler authentication it s better if we have end end test for getauthenticatedoctokit to prevent similar things to happen
| 1
|
558,098
| 16,526,208,144
|
IssuesEvent
|
2021-05-26 20:28:35
|
penrose/penrose
|
https://api.github.com/repos/penrose/penrose
|
closed
|
Look into migrating from Haskell to browser-native language
|
kind:engineering priority:open-ended
|
Candidates: Elm, Purescript, Typescript, Javascript, maybe using d3, Svelte, Snap, React.
Figure out the state of web support in the following:
- Sophisticated static type systems and typechecking
- Autodifferentiation
- Optimization, linear algebra, and numerical computation
- Parser combinators and generators
- DSL definition, design, metaprogramming, reflection, language workbenches
- Property-based testing and unit testing
- UI libraries for prodirect manipulation
Figure out which parts of the backend to leave in Haskell.
|
1.0
|
Look into migrating from Haskell to browser-native language - Candidates: Elm, Purescript, Typescript, Javascript, maybe using d3, Svelte, Snap, React.
Figure out the state of web support in the following:
- Sophisticated static type systems and typechecking
- Autodifferentiation
- Optimization, linear algebra, and numerical computation
- Parser combinators and generators
- DSL definition, design, metaprogramming, reflection, language workbenches
- Property-based testing and unit testing
- UI libraries for prodirect manipulation
Figure out which parts of the backend to leave in Haskell.
|
non_process
|
look into migrating from haskell to browser native language candidates elm purescript typescript javascript maybe using svelte snap react figure out the state of web support in the following sophisticated static type systems and typechecking autodifferentiation optimization linear algebra and numerical computation parser combinators and generators dsl definition design metaprogramming reflection language workbenches property based testing and unit testing ui libraries for prodirect manipulation figure out which parts of the backend to leave in haskell
| 0
|
52,158
| 13,724,722,579
|
IssuesEvent
|
2020-10-03 15:24:25
|
ScratchAddons/ScratchAddons
|
https://api.github.com/repos/ScratchAddons/ScratchAddons
|
closed
|
We will reject addons using innerHTML or triple curly from now on
|
addon specific announcement discussion security
|
Please do not use innerHTML. Use appendChild or textContent or innerText. Existing addons will be changed to use these methods. This includes Vue `{{{`.
|
True
|
We will reject addons using innerHTML or triple curly from now on - Please do not use innerHTML. Use appendChild or textContent or innerText. Existing addons will be changed to use these methods. This includes Vue `{{{`.
|
non_process
|
we will reject addons using innerhtml or triple curly from now on please do not use innerhtml use appendchild or textcontent or innertext existing addons will be changed to use these methods this includes vue
| 0
|
310,855
| 23,357,690,202
|
IssuesEvent
|
2022-08-10 08:54:26
|
software-mansion/protostar
|
https://api.github.com/repos/software-mansion/protostar
|
opened
|
Document various flavours of testing
|
documentation
|
Add a section to our documentation:
Testing
- Flavours (01, move other pages further)
- Unit testing (01)
- Integration testing (02)
- E2E testing (03)
In each page describe the theory of each approach and then showcase on an example how to do it in Protostar.
The flavours page should simply be an intro of what's going on here and put this there:
```markdown
```mdx-code-block
import DocCardList from '@theme/DocCardList';
import {useCurrentSidebarCategory} from '@docusaurus/theme-common';
<DocCardList items={useCurrentSidebarCategory().items}/>
```
```
|
1.0
|
Document various flavours of testing - Add a section to our documentation:
Testing
- Flavours (01, move other pages further)
- Unit testing (01)
- Integration testing (02)
- E2E testing (03)
In each page describe the theory of each approach and then showcase on an example how to do it in Protostar.
The flavours page should simply be an intro of what's going on here and put this there:
```markdown
```mdx-code-block
import DocCardList from '@theme/DocCardList';
import {useCurrentSidebarCategory} from '@docusaurus/theme-common';
<DocCardList items={useCurrentSidebarCategory().items}/>
```
```
|
non_process
|
document various flavours of testing add a section to our documentation testing flavours move other pages further unit testing integration testing testing in each page describe the theory of each approach and then showcase on an example how to do it in protostar the flavours page should simply be an intro of what s going on here and put this there markdown mdx code block import doccardlist from theme doccardlist import usecurrentsidebarcategory from docusaurus theme common
| 0
|
2,560
| 12,280,206,112
|
IssuesEvent
|
2020-05-08 13:43:59
|
ibm-cloud-architecture/cloudpak8s
|
https://api.github.com/repos/ibm-cloud-architecture/cloudpak8s
|
closed
|
Hide unfinished sections of CP for Automation
|
cp4automation
|
Request from Tech Sales leader. I have just commented out the unfinished parts, so they are hidden but not deleted.
|
1.0
|
Hide unfinished sections of CP for Automation - Request from Tech Sales leader. I have just commented out the unfinished parts, so they are hidden but not deleted.
|
non_process
|
hide unfinished sections of cp for automation request from tech sales leader i have just commented out the unfinished parts so they are hidden but not deleted
| 0
|
1,554
| 4,155,939,710
|
IssuesEvent
|
2016-06-16 16:20:43
|
altoxml/schema
|
https://api.github.com/repos/altoxml/schema
|
closed
|
Add Processing to replace OCRProcessing
|
2 discussion processing history
|
The current process recording elements are fixed with OCR and on the other hand bit redundand. I think it would make sense to change *OCRProcessing* to *Processing* and the *preProcessingStep*,*ocrProcessingStep*, *postProcessingStep* to generic *processingStep* with *processingStepType* element to record the type of processing performed.
Currently:
```XML
<OCRProcessing ID="OCRPROCESSING_1">
<preProcessingStep>
<processingDateTime>2009-10-19</processingDateTime>
<processingAgency>CCS Content Conversion Specialists GmbH,
</processingAgency>
<processingStepDescription>align</processingStepDescription>
<processingStepSettings>CCS OCR Processing Filter</processingStepSettings>
<processingSoftware>
<softwareCreator>CCS Content Conversion Specialists GmbH,Germany</softwareCreator>
<softwareName>CCS docWORKS</softwareName>
<softwareVersion>6.3-0.91</softwareVersion>
<applicationDescription/>
</processingSoftware>
</preProcessingStep>
<ocrProcessingStep>
<processingSoftware>
<softwareCreator>ABBYY (BIT Software), Russia</softwareCreator>
<softwareName>FineReader</softwareName>
<softwareVersion>8.1</softwareVersion>
</processingSoftware>
</ocrProcessingStep>
</OCRProcessing>
```
Suggestion
```XML
<Processing>
<ProcessingStep ID="01">
<processingDateTime>2009-10-19T10:10:10+05:00</processingDateTime>
<processingStepType>image processing</processingStepType>
<processingAgency>ACME Processing</processingAgency>
<processingStepDescription>align</processingStepDescription>
<processingStepSettings>ACME OCR Processing Filter</processingStepSettings>
<processingSoftware>
<softwareCreator>CCS Content Conversion Specialists GmbH, Germany</softwareCreator>
<softwareName>CCS docWORKS</softwareName>
<softwareVersion>6.3-0.91</softwareVersion>
<softwareDescription/>
</processingSoftware>
</ProcessingStep>
<ProcessingStep ID="02">
<processingDateTime>2009-10-19T10:21:14+05:00</processingDateTime>
<processingStepType>OCR</processingStepType>
<processingAgency>CCS Content Conversion Specialists GmbH, www.content-conversion.com</processingAgency>
<processingStepDescription></processingStepDescription>
<processingStepSettings></processingStepSettings>
<processingSoftware>
<softwareCreator>ABBYY (BIT Software), Russia</softwareCreator>
<softwareName>FineReader</softwareName>
<softwareVersion>8.1</softwareVersion>
<softwareDescription/>
</processingSoftware>
</ProcessingStep>
<ProcessingStep ID="03">
<processingDateTime>2009-10-19T15:28:30+05:00</processingDateTime>
<processingStepType>Proofreading</processingStepType>
<processingAgency>ACME Corp.</processingAgency>
<processingStepDescription></processingStepDescription>
<processingStepSettings></processingStepSettings>
<processingSoftware>
<softwareCreator>ACME</softwareCreator>
<softwareName>Proofreader</softwareName>
<softwareVersion>9.9</softwareVersion>
<softwareDescription/>
</processingSoftware>
</ProcessingStep>
</Processing>
```
Schema changes:
```XML
<xsd:element name="OCRProcessing" minOccurs="0" maxOccurs="unbounded">
+ <xsd:annotation>
+ <xsd:documentation>DEPRECATED: Processing element should be used instead.
+ </xsd:documentation>
<xsd:complexType>
<xsd:complexContent>
<xsd:extension base="ocrProcessingType">
<xsd:attribute name="ID" type="xsd:ID" use="required"/>
</xsd:extension>
</xsd:complexContent>
</xsd:complexType>
+<xsd:element name="Processing" minOccurs="0" maxOccurs="unbounded">
+ <xsd:complexType>
+ <xsd:complexContent>
+ <xsd:extension base="ProcessingStepType">
+ <xsd:attribute name="ID" type="xsd:ID" use="required"/>
+ </xsd:extension>
+ </xsd:complexContent>
+ </xsd:complexType>
<xsd:complexType name="ProcessingStepType">
<xsd:annotation>
<xsd:documentation>A processing step.</xsd:documentation>
</xsd:annotation>
<xsd:sequence>
+ <xsd:element name="processingStepType" type="xsd:string" minOccurs="0">
+ <xsd:annotation>
+ <xsd:documentation>Type of processing step</xsd:documentation>
+ </xsd:annotation>
+ </xsd:element>
<xsd:element name="processingDateTime" type="dateTimeType" minOccurs="0">
<xsd:annotation>
<xsd:documentation>Date or DateTime the image was processed.</xsd:documentation>
</xsd:annotation>
</xsd:element>
<xsd:element name="processingAgency" type="xsd:string" minOccurs="0">
<xsd:annotation>
<xsd:documentation>Identifies the organizationlevel producer(s) of the
processed image.</xsd:documentation>
</xsd:annotation>
</xsd:element>
<xsd:element name="processingStepDescription" type="xsd:string" minOccurs="0" maxOccurs="unbounded">
<xsd:annotation>
<xsd:documentation>An ordinal listing of the image processing steps performed.
For example, "image despeckling."</xsd:documentation>
</xsd:annotation>
</xsd:element>
<xsd:element name="processingStepSettings" type="xsd:string" minOccurs="0">
<xsd:annotation>
<xsd:documentation>A description of any setting of the processing application.
For example, for a multi-engine OCR application this might include the
engines which were used. Ideally, this description should be adequate so
that someone else using the same application can produce identical
results.</xsd:documentation>
</xsd:annotation>
</xsd:element>
<xsd:element name="processingSoftware" type="processingSoftwareType" minOccurs="0"/>
</xsd:sequence>
</xsd:complexType>
```
|
1.0
|
Add Processing to replace OCRProcessing - The current process recording elements are fixed with OCR and on the other hand bit redundand. I think it would make sense to change *OCRProcessing* to *Processing* and the *preProcessingStep*,*ocrProcessingStep*, *postProcessingStep* to generic *processingStep* with *processingStepType* element to record the type of processing performed.
Currently:
```XML
<OCRProcessing ID="OCRPROCESSING_1">
<preProcessingStep>
<processingDateTime>2009-10-19</processingDateTime>
<processingAgency>CCS Content Conversion Specialists GmbH,
</processingAgency>
<processingStepDescription>align</processingStepDescription>
<processingStepSettings>CCS OCR Processing Filter</processingStepSettings>
<processingSoftware>
<softwareCreator>CCS Content Conversion Specialists GmbH,Germany</softwareCreator>
<softwareName>CCS docWORKS</softwareName>
<softwareVersion>6.3-0.91</softwareVersion>
<applicationDescription/>
</processingSoftware>
</preProcessingStep>
<ocrProcessingStep>
<processingSoftware>
<softwareCreator>ABBYY (BIT Software), Russia</softwareCreator>
<softwareName>FineReader</softwareName>
<softwareVersion>8.1</softwareVersion>
</processingSoftware>
</ocrProcessingStep>
</OCRProcessing>
```
Suggestion
```XML
<Processing>
<ProcessingStep ID="01">
<processingDateTime>2009-10-19T10:10:10+05:00</processingDateTime>
<processingStepType>image processing</processingStepType>
<processingAgency>ACME Processing</processingAgency>
<processingStepDescription>align</processingStepDescription>
<processingStepSettings>ACME OCR Processing Filter</processingStepSettings>
<processingSoftware>
<softwareCreator>CCS Content Conversion Specialists GmbH, Germany</softwareCreator>
<softwareName>CCS docWORKS</softwareName>
<softwareVersion>6.3-0.91</softwareVersion>
<softwareDescription/>
</processingSoftware>
</ProcessingStep>
<ProcessingStep ID="02">
<processingDateTime>2009-10-19T10:21:14+05:00</processingDateTime>
<processingStepType>OCR</processingStepType>
<processingAgency>CCS Content Conversion Specialists GmbH, www.content-conversion.com</processingAgency>
<processingStepDescription></processingStepDescription>
<processingStepSettings></processingStepSettings>
<processingSoftware>
<softwareCreator>ABBYY (BIT Software), Russia</softwareCreator>
<softwareName>FineReader</softwareName>
<softwareVersion>8.1</softwareVersion>
<softwareDescription/>
</processingSoftware>
</ProcessingStep>
<ProcessingStep ID="03">
<processingDateTime>2009-10-19T15:28:30+05:00</processingDateTime>
<processingStepType>Proofreading</processingStepType>
<processingAgency>ACME Corp.</processingAgency>
<processingStepDescription></processingStepDescription>
<processingStepSettings></processingStepSettings>
<processingSoftware>
<softwareCreator>ACME</softwareCreator>
<softwareName>Proofreader</softwareName>
<softwareVersion>9.9</softwareVersion>
<softwareDescription/>
</processingSoftware>
</ProcessingStep>
</Processing>
```
Schema changes:
```XML
<xsd:element name="OCRProcessing" minOccurs="0" maxOccurs="unbounded">
+ <xsd:annotation>
+ <xsd:documentation>DEPRECATED: Processing element should be used instead.
+ </xsd:documentation>
<xsd:complexType>
<xsd:complexContent>
<xsd:extension base="ocrProcessingType">
<xsd:attribute name="ID" type="xsd:ID" use="required"/>
</xsd:extension>
</xsd:complexContent>
</xsd:complexType>
+<xsd:element name="Processing" minOccurs="0" maxOccurs="unbounded">
+ <xsd:complexType>
+ <xsd:complexContent>
+ <xsd:extension base="ProcessingStepType">
+ <xsd:attribute name="ID" type="xsd:ID" use="required"/>
+ </xsd:extension>
+ </xsd:complexContent>
+ </xsd:complexType>
<xsd:complexType name="ProcessingStepType">
<xsd:annotation>
<xsd:documentation>A processing step.</xsd:documentation>
</xsd:annotation>
<xsd:sequence>
+ <xsd:element name="processingStepType" type="xsd:string" minOccurs="0">
+ <xsd:annotation>
+ <xsd:documentation>Type of processing step</xsd:documentation>
+ </xsd:annotation>
+ </xsd:element>
<xsd:element name="processingDateTime" type="dateTimeType" minOccurs="0">
<xsd:annotation>
<xsd:documentation>Date or DateTime the image was processed.</xsd:documentation>
</xsd:annotation>
</xsd:element>
<xsd:element name="processingAgency" type="xsd:string" minOccurs="0">
<xsd:annotation>
<xsd:documentation>Identifies the organizationlevel producer(s) of the
processed image.</xsd:documentation>
</xsd:annotation>
</xsd:element>
<xsd:element name="processingStepDescription" type="xsd:string" minOccurs="0" maxOccurs="unbounded">
<xsd:annotation>
<xsd:documentation>An ordinal listing of the image processing steps performed.
For example, "image despeckling."</xsd:documentation>
</xsd:annotation>
</xsd:element>
<xsd:element name="processingStepSettings" type="xsd:string" minOccurs="0">
<xsd:annotation>
<xsd:documentation>A description of any setting of the processing application.
For example, for a multi-engine OCR application this might include the
engines which were used. Ideally, this description should be adequate so
that someone else using the same application can produce identical
results.</xsd:documentation>
</xsd:annotation>
</xsd:element>
<xsd:element name="processingSoftware" type="processingSoftwareType" minOccurs="0"/>
</xsd:sequence>
</xsd:complexType>
```
|
process
|
add processing to replace ocrprocessing the current process recording elements are fixed with ocr and on the other hand bit redundand i think it would make sense to change ocrprocessing to processing and the preprocessingstep ocrprocessingstep postprocessingstep to generic processingstep with processingsteptype element to record the type of processing performed currently xml ccs content conversion specialists gmbh align ccs ocr processing filter ccs content conversion specialists gmbh germany ccs docworks abbyy bit software russia finereader suggestion xml image processing acme processing align acme ocr processing filter ccs content conversion specialists gmbh germany ccs docworks ocr ccs content conversion specialists gmbh abbyy bit software russia finereader proofreading acme corp acme proofreader schema changes xml deprecated processing element should be used instead a processing step type of processing step date or datetime the image was processed identifies the organizationlevel producer s of the processed image an ordinal listing of the image processing steps performed for example image despeckling a description of any setting of the processing application for example for a multi engine ocr application this might include the engines which were used ideally this description should be adequate so that someone else using the same application can produce identical results
| 1
|
17,380
| 23,200,399,092
|
IssuesEvent
|
2022-08-01 20:48:37
|
Ultimate-Hosts-Blacklist/whitelist
|
https://api.github.com/repos/Ultimate-Hosts-Blacklist/whitelist
|
closed
|
[FALSE-POSITIVE?] `bre.is`
|
whitelisting process
|
**Domains or links**
`bre.is`
**More Information**
It's a short URL service(Homepage: `https://bre.is/`), like other short URL services, very common to be leveraged to spread some bad things behind, not sure if it's the case can or should be whitelisted here?
I found this domain blocked when someone shared a short URL with me.
References:
- https://otx.alienvault.com/indicator/domain/bre.is
- https://www.urlvoid.com/scan/bre.is/
**Have you requested removal from other sources?**
https://github.com/mitchellkrogza/Phishing.Database/issues/288
|
1.0
|
[FALSE-POSITIVE?] `bre.is` - **Domains or links**
`bre.is`
**More Information**
It's a short URL service(Homepage: `https://bre.is/`), like other short URL services, very common to be leveraged to spread some bad things behind, not sure if it's the case can or should be whitelisted here?
I found this domain blocked when someone shared a short URL with me.
References:
- https://otx.alienvault.com/indicator/domain/bre.is
- https://www.urlvoid.com/scan/bre.is/
**Have you requested removal from other sources?**
https://github.com/mitchellkrogza/Phishing.Database/issues/288
|
process
|
bre is domains or links bre is more information it s a short url service homepage like other short url services very common to be leveraged to spread some bad things behind not sure if it s the case can or should be whitelisted here i found this domain blocked when someone shared a short url with me references have you requested removal from other sources
| 1
|
270
| 2,700,367,357
|
IssuesEvent
|
2015-04-04 02:49:42
|
tomchristie/django-rest-framework
|
https://api.github.com/repos/tomchristie/django-rest-framework
|
closed
|
Maintainers for Q2, 2015.
|
Process
|
This issue is for determining the maintenance team for the Q2, 2015 period.
Please see the [Project management](http://www.django-rest-framework.org/topics/project-management/) section of our documentation for more details.
---
#### Renewing existing members.
The following people are the current maintenance team. Please checkmark your name if you wish to continue to have write permission on the repository for the Q2, 2015 period.
- [x] @tomchristie
- [x] @xordoquy
- [x] @carltongibson
- [x] @kevin-brown
- [x] @jpadilla
---
#### New members.
If you wish to be considered for this or a future date, please comment against this or subsequent issues.
To modify this process for future maintenance cycles make a pull request to the [project management](http://www.django-rest-framework.org/topics/project-management/) documentation.
|
1.0
|
Maintainers for Q2, 2015. - This issue is for determining the maintenance team for the Q2, 2015 period.
Please see the [Project management](http://www.django-rest-framework.org/topics/project-management/) section of our documentation for more details.
---
#### Renewing existing members.
The following people are the current maintenance team. Please checkmark your name if you wish to continue to have write permission on the repository for the Q2, 2015 period.
- [x] @tomchristie
- [x] @xordoquy
- [x] @carltongibson
- [x] @kevin-brown
- [x] @jpadilla
---
#### New members.
If you wish to be considered for this or a future date, please comment against this or subsequent issues.
To modify this process for future maintenance cycles make a pull request to the [project management](http://www.django-rest-framework.org/topics/project-management/) documentation.
|
process
|
maintainers for this issue is for determining the maintenance team for the period please see the section of our documentation for more details renewing existing members the following people are the current maintenance team please checkmark your name if you wish to continue to have write permission on the repository for the period tomchristie xordoquy carltongibson kevin brown jpadilla new members if you wish to be considered for this or a future date please comment against this or subsequent issues to modify this process for future maintenance cycles make a pull request to the documentation
| 1
|
4,244
| 7,187,135,894
|
IssuesEvent
|
2018-02-02 03:09:29
|
Great-Hill-Corporation/quickBlocks
|
https://api.github.com/repos/Great-Hill-Corporation/quickBlocks
|
closed
|
lastBlock.txt in monitors is fragile. Should just read to the end of teh file and find the last block
|
monitors-all status-inprocess type-enhancement
|
Fix this when we fix the binary file issue #226
|
1.0
|
lastBlock.txt in monitors is fragile. Should just read to the end of teh file and find the last block - Fix this when we fix the binary file issue #226
|
process
|
lastblock txt in monitors is fragile should just read to the end of teh file and find the last block fix this when we fix the binary file issue
| 1
|
577,093
| 17,103,336,923
|
IssuesEvent
|
2021-07-09 14:17:39
|
jpmorganchase/modular
|
https://api.github.com/repos/jpmorganchase/modular
|
closed
|
Global coverage vs local package coverage.
|
discussion medium priority
|
Since adding my partial test change back to Credit, times have improved a lot, (a recent build took 12mins as oppose to 1h 30mins) but now instead of reporting an average coverage in the changed package, we now report the average coverage of the changed file(s).
First complaint from someone with a failing build, "but I didn't create this file, I just needed to make a small change, now you say I have to get the coverage above 90%?!?!"
Devs were relying on these averages by testing some files really high (above 90%) so they could get around other files (probably ones they find/are actually tricky to test).
Now, for example, if they have a branch with a change to just this uncovered file, it will fail against the global jest.config.js coverage threshold.
With this in mind and the Sonar requirements for a repo:
Sonar only supports a coverage at the repo level.
50% coverage report for a develop/master.
70% on 90 day old code and 50% on rest.
I imagine in the short-term we will need a global jest to be sure we meet the above requirements with no single package dragging the rest down.
@NMinhNguyen pinged me some threads that will help the discussion here:
https://github.com/nrwl/nx/issues/622
https://github.com/facebook/jest/issues/2418
Quote form the nx issue: **"Having one report doesn't really work for most organizations. Imagine you have two teams building two apps in the same repo. The fact that one team failed to meet a certain threshold shouldn't affect the other team.
So we can have a report per team (app+all its libs), but then the situation tricky if you have a lot of shared libs."**
In relation to above, we are stuck at the moment with one report and if we enforce a global level coverage prior to PR merge ,teams can't impact each other.
So, I was wondering if we have already discussed ideas around this? and if not, maybe we could bake something in here so projects in the repo are certain to meet the tollgate requirements.
|
1.0
|
Global coverage vs local package coverage. - Since adding my partial test change back to Credit, times have improved a lot, (a recent build took 12mins as oppose to 1h 30mins) but now instead of reporting an average coverage in the changed package, we now report the average coverage of the changed file(s).
First complaint from someone with a failing build, "but I didn't create this file, I just needed to make a small change, now you say I have to get the coverage above 90%?!?!"
Devs were relying on these averages by testing some files really high (above 90%) so they could get around other files (probably ones they find/are actually tricky to test).
Now, for example, if they have a branch with a change to just this uncovered file, it will fail against the global jest.config.js coverage threshold.
With this in mind and the Sonar requirements for a repo:
Sonar only supports a coverage at the repo level.
50% coverage report for a develop/master.
70% on 90 day old code and 50% on rest.
I imagine in the short-term we will need a global jest to be sure we meet the above requirements with no single package dragging the rest down.
@NMinhNguyen pinged me some threads that will help the discussion here:
https://github.com/nrwl/nx/issues/622
https://github.com/facebook/jest/issues/2418
Quote from the nx issue: **"Having one report doesn't really work for most organizations. Imagine you have two teams building two apps in the same repo. The fact that one team failed to meet a certain threshold shouldn't affect the other team.
So we can have a report per team (app+all its libs), but then the situation tricky if you have a lot of shared libs."**
In relation to above, we are stuck at the moment with one report and if we enforce a global level coverage prior to PR merge ,teams can't impact each other.
So, I was wondering if we have already discussed ideas around this? and if not, maybe we could bake something in here so projects in the repo are certain to meet the tollgate requirements.
|
non_process
|
global coverage vs local package coverage since adding my partial test change back to credit times have improved a lot a recent build took as oppose to but now instead of reporting an average coverage in the changed package we now report the average coverage of the changed file s first complaint from someone with a failing build but i didn t create this file i just needed to make a small change now you say i have to get the coverage above devs were relying on these averages by testing some files really high above so they could get around other files probably ones they find are actually tricky to test now for example if they have a branch with a change to just this uncovered file it will fail against the global jest config js coverage threshold with this in mind and the sonar requirements for a repo sonar only supports a coverage at the repo level coverage report for a develop master on day old code and on rest i imagine in the short term we will need a global jest to be sure we meet the above requirements with no single package dragging the rest down nminhnguyen pinged me some threads that will help the discussion here quote from the nx issue having one report doesn t really work for most organizations imagine you have two teams building two apps in the same repo the fact that one team failed to meet a certain threshold shouldn t affect the other team so we can have a report per team app all its libs but then the situation tricky if you have a lot of shared libs in relation to above we are stuck at the moment with one report and if we enforce a global level coverage prior to pr merge teams can t impact each other so i was wondering if we have already discussed ideas around this and if not maybe we could bake something in here so projects in the repo are certain to meet the tollgate requirements
| 0
|
34,881
| 12,304,215,875
|
IssuesEvent
|
2020-05-11 20:06:10
|
three11/vuejs-template
|
https://api.github.com/repos/three11/vuejs-template
|
closed
|
WS-2020-0070 (High) detected in lodash-4.17.15.tgz
|
security vulnerability
|
## WS-2020-0070 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>lodash-4.17.15.tgz</b></p></summary>
<p>Lodash modular utilities.</p>
<p>Library home page: <a href="https://registry.npmjs.org/lodash/-/lodash-4.17.15.tgz">https://registry.npmjs.org/lodash/-/lodash-4.17.15.tgz</a></p>
<p>Path to dependency file: /tmp/ws-scm/vuejs-template/package.json</p>
<p>Path to vulnerable library: /vuejs-template/node_modules/lodash/package.json</p>
<p>
Dependency Hierarchy:
- :x: **lodash-4.17.15.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/three11/vuejs-template/commit/2a98588429acc4ab7e986d66a4c49d3ce2b56a50">2a98588429acc4ab7e986d66a4c49d3ce2b56a50</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
a prototype pollution vulnerability in lodash. It allows an attacker to inject properties on Object.prototype
<p>Publish Date: 2020-04-28
<p>URL: <a href=https://hackerone.com/reports/712065>WS-2020-0070</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>8.1</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: High
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
True
|
WS-2020-0070 (High) detected in lodash-4.17.15.tgz - ## WS-2020-0070 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>lodash-4.17.15.tgz</b></p></summary>
<p>Lodash modular utilities.</p>
<p>Library home page: <a href="https://registry.npmjs.org/lodash/-/lodash-4.17.15.tgz">https://registry.npmjs.org/lodash/-/lodash-4.17.15.tgz</a></p>
<p>Path to dependency file: /tmp/ws-scm/vuejs-template/package.json</p>
<p>Path to vulnerable library: /vuejs-template/node_modules/lodash/package.json</p>
<p>
Dependency Hierarchy:
- :x: **lodash-4.17.15.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/three11/vuejs-template/commit/2a98588429acc4ab7e986d66a4c49d3ce2b56a50">2a98588429acc4ab7e986d66a4c49d3ce2b56a50</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
a prototype pollution vulnerability in lodash. It allows an attacker to inject properties on Object.prototype
<p>Publish Date: 2020-04-28
<p>URL: <a href=https://hackerone.com/reports/712065>WS-2020-0070</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>8.1</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: High
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
non_process
|
ws high detected in lodash tgz ws high severity vulnerability vulnerable library lodash tgz lodash modular utilities library home page a href path to dependency file tmp ws scm vuejs template package json path to vulnerable library vuejs template node modules lodash package json dependency hierarchy x lodash tgz vulnerable library found in head commit a href vulnerability details a prototype pollution vulnerability in lodash it allows an attacker to inject properties on object prototype publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity high privileges required none user interaction none scope unchanged impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href step up your open source security game with whitesource
| 0
|
17,929
| 23,923,950,957
|
IssuesEvent
|
2022-09-09 20:00:38
|
open-telemetry/opentelemetry-collector-contrib
|
https://api.github.com/repos/open-telemetry/opentelemetry-collector-contrib
|
closed
|
Can Multiple Filter Processors Be Used For POD Level Metrics
|
question processor/filter
|
Hi,
Can **multiple** `Filter Processors` be used for POD level metrics? What I want to do is to include metrics `pod_network_rx_bytes` for the POD `mysql-db` and metrics `pod_memory_utilization_over_pod_limit` for POD `microService`.
I am using the following config:
```
processors:
filter/db-pod-net-metrics:
# any names NOT matching filters are excluded from remainder of pipeline
metrics:
include:
match_type: regexp
metric_names:
# re2 regexp patterns
- ^pod_network_rx_bytes
resource_attributes:
- Key: PodName
Value: mysql-db
filter/ms-pod-mem-metrics:
# any names NOT matching filters are excluded from remainder of pipeline
metrics:
include:
match_type: regexp
metric_names:
# re2 regexp patterns
- ^pod_memory_utilization_over_pod_limit
resource_attributes:
- Key: PodName
Value: microService
```
This configuration does not work. When I remove one of the above filters, it starts scraping metrics nicely.
I am using [AWS Distro for OpenTelemetry](https://aws.amazon.com/blogs/containers/cost-savings-by-customizing-metrics-sent-by-container-insights-in-amazon-eks/) (ADOT) for sending metrics to CloudWatch from my EKS cluster.
If anyone could assist me with this, that would be great. Are there any other workarounds if this is not supported.
|
1.0
|
Can Multiple Filter Processors Be Used For POD Level Metrics - Hi,
Can **multiple** `Filter Processors` be used for POD level metrics? What I want to do is to include metrics `pod_network_rx_bytes` for the POD `mysql-db` and metrics `pod_memory_utilization_over_pod_limit` for POD `microService`.
I am using the following config:
```
processors:
filter/db-pod-net-metrics:
# any names NOT matching filters are excluded from remainder of pipeline
metrics:
include:
match_type: regexp
metric_names:
# re2 regexp patterns
- ^pod_network_rx_bytes
resource_attributes:
- Key: PodName
Value: mysql-db
filter/ms-pod-mem-metrics:
# any names NOT matching filters are excluded from remainder of pipeline
metrics:
include:
match_type: regexp
metric_names:
# re2 regexp patterns
- ^pod_memory_utilization_over_pod_limit
resource_attributes:
- Key: PodName
Value: microService
```
This configuration does not work. When I remove one of the above filters, it starts scraping metrics nicely.
I am using [AWS Distro for OpenTelemetry](https://aws.amazon.com/blogs/containers/cost-savings-by-customizing-metrics-sent-by-container-insights-in-amazon-eks/) (ADOT) for sending metrics to CloudWatch from my EKS cluster.
If anyone could assist me with this, that would be great. Are there any other workarounds if this is not supported.
|
process
|
can multiple filter processors be used for pod level metrics hi can multiple filter processors be used for pod level metrics what i want to do is to include metrics pod network rx bytes for the pod mysql db and metrics pod memory utilization over pod limit for pod microservice i am using the following config processors filter db pod net metrics any names not matching filters are excluded from remainder of pipeline metrics include match type regexp metric names regexp patterns pod network rx bytes resource attributes key podname value mysql db filter ms pod mem metrics any names not matching filters are excluded from remainder of pipeline metrics include match type regexp metric names regexp patterns pod memory utilization over pod limit resource attributes key podname value microservice this configuration does not work when i remove one of the above filters it starts scraping metrics nicely i am using adot for sending metrics to cloudwatch from my eks cluster if anyone could assist me with this that would be great are there any other workarounds if this is not supported
| 1
|
5,178
| 7,960,904,690
|
IssuesEvent
|
2018-07-13 08:56:08
|
Open-EO/openeo-api
|
https://api.github.com/repos/Open-EO/openeo-api
|
closed
|
filter_daterange: ISO8601 date/time ranges for temporal extents
|
data discovery processes
|
All temporal references should simply follow (a subset of) ISO8601.
For example, the time attribute for datasets should be changed from
```
"time": {
"from": "2016-01-01",
"to": "2017-10-01"
},
```
to
` "time": "2016-01-01/2017-10-01",`
and so on...
For filtering, we probably need to think about an extension to the ISO standard, e.g. get all files starting from 2016-01-01 (could be `2016-01-01/`?) or get all files prior to 2017-10-01 (could be `/2017-10-01`). STAC is thinking about something similar, maybe we can check what they use, but at the moment it seems to be something like a literal, e.g. gt, eq, lte, ... Related to #62 .
Temporal extents should not be restricted on days, but could also be in some other unit
|
1.0
|
filter_daterange: ISO8601 date/time ranges for temporal extents - All temporal references should simply follow (a subset of) ISO8601.
For example, the time attribute for datasets should be changed from
```
"time": {
"from": "2016-01-01",
"to": "2017-10-01"
},
```
to
` "time": "2016-01-01/2017-10-01",`
and so on...
For filtering, we probably need to think about an extension to the ISO standard, e.g. get all files starting from 2016-01-01 (could be `2016-01-01/`?) or get all files prior to 2017-10-01 (could be `/2017-10-01`). STAC is thinking about something similar, maybe we can check what they use, but at the moment it seems to be something like a literal, e.g. gt, eq, lte, ... Related to #62 .
Temporal extents should not be restricted on days, but could also be in some other unit
|
process
|
filter daterange date time ranges for temporal extents all temporal references should simply follow a subset of for example the time attribute for datasets should be changed from time from to to time and so on for filtering we probably need to think about an extension to the iso standard e g get all files starting from could be or get all files prior to could be stac is thinking about something similar maybe we can check what they use but at the moment it seems to be something like a literal e g gt eq lte related to temporal extents should not be restricted on days but could also be in some other unit
| 1
|
7,563
| 10,681,963,450
|
IssuesEvent
|
2019-10-22 03:10:07
|
dotnet/corefx
|
https://api.github.com/repos/dotnet/corefx
|
closed
|
Process.Start() stuck forever on Alpine Linux when UserName is set in StartInfo
|
area-System.Diagnostics.Process
|
Code example:
```csharp
using System;
using System.Diagnostics;
namespace test
{
class Program
{
static void Main(string[] args)
{
var p = new Process
{
StartInfo = new ProcessStartInfo
{
FileName = "/bin/sh",
UserName = "root",
RedirectStandardInput = true,
RedirectStandardOutput = true,
RedirectStandardError = true,
UseShellExecute = false
},
EnableRaisingEvents = true
};
Console.WriteLine("Trying to start /bin/sh");
p.Start();
Console.WriteLine("/bin/sh started successfully");
}
}
}
```
Publish the project with `dotnet publish -c Release -r linux-musl-x64 --self-contained` and copy the output to a fresh-installed Alpine Linux VM.
```
localhost:~/test# ./test
Trying to start /bin/sh
```
(output stuck here)
The process tree looks like this

```
localhost:~# cat /proc/2421/stack
[<0>] _do_fork+0x21c/0x2fe
[<0>] do_syscall_64+0x50/0xeb
[<0>] entry_SYSCALL_64_after_hwframe+0x44/0xa9
[<0>] 0xffffffffffffffff
localhost:~# cat /proc/2429/stack
[<0>] futex_wait_queue_me+0xbc/0x101
[<0>] futex_wait+0xd7/0x1f1
[<0>] do_futex+0x131/0x9a5
[<0>] __se_sys_futex+0x139/0x15e
[<0>] do_syscall_64+0x50/0xeb
[<0>] entry_SYSCALL_64_after_hwframe+0x44/0xa9
[<0>] 0xffffffffffffffff
```
It seems that the parent process already called vfork() but child has not yet exec or exit, leaving the parent in the uninterruptible sleep state. Related framework code could possibly be [here](https://github.com/dotnet/corefx/blob/5c83394112febe1b481ab1c0b61a45c850677165/src/Native/Unix/System.Native/pal_process.c#L342)
Environment info
.NET Core SDK version: 3.0.100
OS: Alpine Linux 3.10 latest
|
1.0
|
Process.Start() stuck forever on Alpine Linux when UserName is set in StartInfo - Code example:
```csharp
using System;
using System.Diagnostics;
namespace test
{
class Program
{
static void Main(string[] args)
{
var p = new Process
{
StartInfo = new ProcessStartInfo
{
FileName = "/bin/sh",
UserName = "root",
RedirectStandardInput = true,
RedirectStandardOutput = true,
RedirectStandardError = true,
UseShellExecute = false
},
EnableRaisingEvents = true
};
Console.WriteLine("Trying to start /bin/sh");
p.Start();
Console.WriteLine("/bin/sh started successfully");
}
}
}
```
Publish the project with `dotnet publish -c Release -r linux-musl-x64 --self-contained` and copy the output to a fresh-installed Alpine Linux VM.
```
localhost:~/test# ./test
Trying to start /bin/sh
```
(output stuck here)
The process tree looks like this

```
localhost:~# cat /proc/2421/stack
[<0>] _do_fork+0x21c/0x2fe
[<0>] do_syscall_64+0x50/0xeb
[<0>] entry_SYSCALL_64_after_hwframe+0x44/0xa9
[<0>] 0xffffffffffffffff
localhost:~# cat /proc/2429/stack
[<0>] futex_wait_queue_me+0xbc/0x101
[<0>] futex_wait+0xd7/0x1f1
[<0>] do_futex+0x131/0x9a5
[<0>] __se_sys_futex+0x139/0x15e
[<0>] do_syscall_64+0x50/0xeb
[<0>] entry_SYSCALL_64_after_hwframe+0x44/0xa9
[<0>] 0xffffffffffffffff
```
It seems that the parent process already called vfork() but child has not yet exec or exit, leaving the parent in the uninterruptible sleep state. Related framework code could possibly be [here](https://github.com/dotnet/corefx/blob/5c83394112febe1b481ab1c0b61a45c850677165/src/Native/Unix/System.Native/pal_process.c#L342)
Environment info
.NET Core SDK version: 3.0.100
OS: Alpine Linux 3.10 latest
|
process
|
process start stuck forever on alpine linux when username is set in startinfo code example csharp using system using system diagnostics namespace test class program static void main string args var p new process startinfo new processstartinfo filename bin sh username root redirectstandardinput true redirectstandardoutput true redirectstandarderror true useshellexecute false enableraisingevents true console writeline trying to start bin sh p start console writeline bin sh started successfully publish the project with dotnet publish c release r linux musl self contained and copy the output to a fresh installed alpine linux vm localhost test test trying to start bin sh output stuck here the process tree looks like this localhost cat proc stack do fork do syscall entry syscall after hwframe localhost cat proc stack futex wait queue me futex wait do futex se sys futex do syscall entry syscall after hwframe it seems that the parent process already called vfork but child has not yet exec or exit leaving the parent in the uninterruptible sleep state related framework code could possibly be environment info net core sdk version os alpine linux latest
| 1
|
70,684
| 23,282,639,990
|
IssuesEvent
|
2022-08-05 13:34:08
|
galasa-dev/projectmanagement
|
https://api.github.com/repos/galasa-dev/projectmanagement
|
closed
|
Test engine continues after an interrupted exception is hit, leaving Jenkins job running indefinitely
|
defect
|
For example, run ID j17263 hits an interrupted exception:
`20/05/2021 14:42:09.258 ERROR c.i.c.c.b.m.i.HBankManagerImpl - Terminal signoff failed
dev.galasa.zos3270.TextNotFoundException: Unable to find a field containing 'Bank of Hursley Park Main Menu'
at dev.galasa.zos3270.spi.Screen.waitForTextInField(Screen.java:945)
at dev.galasa.zos3270.spi.Screen.waitForTextInField(Screen.java:938)
at dev.galasa.zos3270.spi.Terminal.waitForTextInField(Terminal.java:211)
at com.ibm.cics.cip.bank.manager.internal.HBankTerminalImpl.goToMainMenu(HBankTerminalImpl.java:125)
at com.ibm.cics.cip.bank.manager.internal.HBankTerminalImpl.signOff(HBankTerminalImpl.java:80)
at com.ibm.cics.cip.bank.manager.internal.HBankManagerImpl.provisionDiscard(HBankManagerImpl.java:346)
at dev.galasa.framework.TestRunManagers.provisionDiscard(TestRunManagers.java:501)
at dev.galasa.framework.TestRunner.discardEnvironment(TestRunner.java:481)
at dev.galasa.framework.TestRunner.createEnvironment(TestRunner.java:470)
at dev.galasa.framework.TestRunner.generateEnvironment(TestRunner.java:438)
at dev.galasa.framework.TestRunner.runTest(TestRunner.java:356)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at dev.galasa.boot.felix.FelixFramework.runTest(FelixFramework.java:219)
at dev.galasa.boot.Launcher.launch(Launcher.java:163)
at dev.galasa.boot.Launcher.main(Launcher.java:117)
20/05/2021 14:42:09.317 ERROR dev.galasa.boot.Launcher.launch - Unable to run test class
dev.galasa.boot.LauncherException: dev.galasa.framework.TestRunException: Failed to update status
at dev.galasa.boot.felix.FelixFramework.runTest(FelixFramework.java:221)
at dev.galasa.boot.Launcher.launch(Launcher.java:163)
at dev.galasa.boot.Launcher.main(Launcher.java:117)
Caused by: dev.galasa.framework.TestRunException: Failed to update status
at dev.galasa.framework.TestRunner.updateStatus(TestRunner.java:595)
at dev.galasa.framework.TestRunner.runTest(TestRunner.java:363)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at dev.galasa.boot.felix.FelixFramework.runTest(FelixFramework.java:219)
... 2 more
Caused by: dev.galasa.framework.spi.DynamicStatusStoreException: Could not put key-value
at dev.galasa.cps.etcd.internal.Etcd3DynamicStatusStore.put(Etcd3DynamicStatusStore.java:101)
at dev.galasa.framework.internal.dss.FrameworkDynamicStoreKeyAccess.put(FrameworkDynamicStoreKeyAccess.java:60)
at dev.galasa.framework.TestRunner.updateStatus(TestRunner.java:590)
... 8 more
Caused by: java.lang.InterruptedException
at java.util.concurrent.CompletableFuture.reportGet(CompletableFuture.java:347)
at java.util.concurrent.CompletableFuture.get(CompletableFuture.java:1908)
at dev.galasa.cps.etcd.internal.Etcd3DynamicStatusStore.put(Etcd3DynamicStatusStore.java:98)
... 10 more
20/05/2021 14:42:09.318 DEBUG dev.galasa.boot.felix.FelixFramework.stopFramework - Stopping Felix framework
Exception in thread "main" java.lang.InterruptedException
at java.lang.Object.wait(Native Method)
at org.apache.felix.framework.util.ThreadGate.await(ThreadGate.java:81)
at org.apache.felix.framework.Felix.waitForStop(Felix.java:1222)
at dev.galasa.boot.felix.FelixFramework.stopFramework(FelixFramework.java:740)
at dev.galasa.boot.Launcher.launch(Launcher.java:191)
at dev.galasa.boot.Launcher.main(Launcher.java:117)
20/05/2021 14:42:09.338 INFO d.g.f.Framework - Framework service deactivated`
I cancelled my Jenkins job manually after noticing it had taken about 30 minutes to run.
|
1.0
|
Test engine continues after an interrupted exception is hit, leaving Jenkins job running indefinitely - For example, run ID j17263 hits an interrupted exception:
`20/05/2021 14:42:09.258 ERROR c.i.c.c.b.m.i.HBankManagerImpl - Terminal signoff failed
dev.galasa.zos3270.TextNotFoundException: Unable to find a field containing 'Bank of Hursley Park Main Menu'
at dev.galasa.zos3270.spi.Screen.waitForTextInField(Screen.java:945)
at dev.galasa.zos3270.spi.Screen.waitForTextInField(Screen.java:938)
at dev.galasa.zos3270.spi.Terminal.waitForTextInField(Terminal.java:211)
at com.ibm.cics.cip.bank.manager.internal.HBankTerminalImpl.goToMainMenu(HBankTerminalImpl.java:125)
at com.ibm.cics.cip.bank.manager.internal.HBankTerminalImpl.signOff(HBankTerminalImpl.java:80)
at com.ibm.cics.cip.bank.manager.internal.HBankManagerImpl.provisionDiscard(HBankManagerImpl.java:346)
at dev.galasa.framework.TestRunManagers.provisionDiscard(TestRunManagers.java:501)
at dev.galasa.framework.TestRunner.discardEnvironment(TestRunner.java:481)
at dev.galasa.framework.TestRunner.createEnvironment(TestRunner.java:470)
at dev.galasa.framework.TestRunner.generateEnvironment(TestRunner.java:438)
at dev.galasa.framework.TestRunner.runTest(TestRunner.java:356)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at dev.galasa.boot.felix.FelixFramework.runTest(FelixFramework.java:219)
at dev.galasa.boot.Launcher.launch(Launcher.java:163)
at dev.galasa.boot.Launcher.main(Launcher.java:117)
20/05/2021 14:42:09.317 ERROR dev.galasa.boot.Launcher.launch - Unable to run test class
dev.galasa.boot.LauncherException: dev.galasa.framework.TestRunException: Failed to update status
at dev.galasa.boot.felix.FelixFramework.runTest(FelixFramework.java:221)
at dev.galasa.boot.Launcher.launch(Launcher.java:163)
at dev.galasa.boot.Launcher.main(Launcher.java:117)
Caused by: dev.galasa.framework.TestRunException: Failed to update status
at dev.galasa.framework.TestRunner.updateStatus(TestRunner.java:595)
at dev.galasa.framework.TestRunner.runTest(TestRunner.java:363)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at dev.galasa.boot.felix.FelixFramework.runTest(FelixFramework.java:219)
... 2 more
Caused by: dev.galasa.framework.spi.DynamicStatusStoreException: Could not put key-value
at dev.galasa.cps.etcd.internal.Etcd3DynamicStatusStore.put(Etcd3DynamicStatusStore.java:101)
at dev.galasa.framework.internal.dss.FrameworkDynamicStoreKeyAccess.put(FrameworkDynamicStoreKeyAccess.java:60)
at dev.galasa.framework.TestRunner.updateStatus(TestRunner.java:590)
... 8 more
Caused by: java.lang.InterruptedException
at java.util.concurrent.CompletableFuture.reportGet(CompletableFuture.java:347)
at java.util.concurrent.CompletableFuture.get(CompletableFuture.java:1908)
at dev.galasa.cps.etcd.internal.Etcd3DynamicStatusStore.put(Etcd3DynamicStatusStore.java:98)
... 10 more
20/05/2021 14:42:09.318 DEBUG dev.galasa.boot.felix.FelixFramework.stopFramework - Stopping Felix framework
Exception in thread "main" java.lang.InterruptedException
at java.lang.Object.wait(Native Method)
at org.apache.felix.framework.util.ThreadGate.await(ThreadGate.java:81)
at org.apache.felix.framework.Felix.waitForStop(Felix.java:1222)
at dev.galasa.boot.felix.FelixFramework.stopFramework(FelixFramework.java:740)
at dev.galasa.boot.Launcher.launch(Launcher.java:191)
at dev.galasa.boot.Launcher.main(Launcher.java:117)
20/05/2021 14:42:09.338 INFO d.g.f.Framework - Framework service deactivated`
I cancelled my Jenkins job manually after noticing it had taken about 30 minutes to run.
|
non_process
|
test engine continues after an interrupted exception is hit leaving jenkins job running indefinitely for example run id hits an interrupted exception error c i c c b m i hbankmanagerimpl terminal signoff failed dev galasa textnotfoundexception unable to find a field containing bank of hursley park main menu at dev galasa spi screen waitfortextinfield screen java at dev galasa spi screen waitfortextinfield screen java at dev galasa spi terminal waitfortextinfield terminal java at com ibm cics cip bank manager internal hbankterminalimpl gotomainmenu hbankterminalimpl java at com ibm cics cip bank manager internal hbankterminalimpl signoff hbankterminalimpl java at com ibm cics cip bank manager internal hbankmanagerimpl provisiondiscard hbankmanagerimpl java at dev galasa framework testrunmanagers provisiondiscard testrunmanagers java at dev galasa framework testrunner discardenvironment testrunner java at dev galasa framework testrunner createenvironment testrunner java at dev galasa framework testrunner generateenvironment testrunner java at dev galasa framework testrunner runtest testrunner java at sun reflect nativemethodaccessorimpl native method at sun reflect nativemethodaccessorimpl invoke nativemethodaccessorimpl java at sun reflect delegatingmethodaccessorimpl invoke delegatingmethodaccessorimpl java at java lang reflect method invoke method java at dev galasa boot felix felixframework runtest felixframework java at dev galasa boot launcher launch launcher java at dev galasa boot launcher main launcher java error dev galasa boot launcher launch unable to run test class dev galasa boot launcherexception dev galasa framework testrunexception failed to update status at dev galasa boot felix felixframework runtest felixframework java at dev galasa boot launcher launch launcher java at dev galasa boot launcher main launcher java caused by dev galasa framework testrunexception failed to update status at dev galasa framework testrunner updatestatus testrunner java at dev galasa framework testrunner runtest testrunner java at sun reflect nativemethodaccessorimpl native method at sun reflect nativemethodaccessorimpl invoke nativemethodaccessorimpl java at sun reflect delegatingmethodaccessorimpl invoke delegatingmethodaccessorimpl java at java lang reflect method invoke method java at dev galasa boot felix felixframework runtest felixframework java more caused by dev galasa framework spi dynamicstatusstoreexception could not put key value at dev galasa cps etcd internal put java at dev galasa framework internal dss frameworkdynamicstorekeyaccess put frameworkdynamicstorekeyaccess java at dev galasa framework testrunner updatestatus testrunner java more caused by java lang interruptedexception at java util concurrent completablefuture reportget completablefuture java at java util concurrent completablefuture get completablefuture java at dev galasa cps etcd internal put java more debug dev galasa boot felix felixframework stopframework stopping felix framework exception in thread main java lang interruptedexception at java lang object wait native method at org apache felix framework util threadgate await threadgate java at org apache felix framework felix waitforstop felix java at dev galasa boot felix felixframework stopframework felixframework java at dev galasa boot launcher launch launcher java at dev galasa boot launcher main launcher java info d g f framework framework service deactivated i cancelled my jenkins job manually after noticing it had taken about minutes to run
| 0
|
5,188
| 7,966,420,302
|
IssuesEvent
|
2018-07-14 21:53:46
|
dita-ot/dita-ot
|
https://api.github.com/repos/dita-ot/dita-ot
|
closed
|
Build result when @conkeyref attribute to reference external content is possibly incorrect for the html5 transform
|
bug preprocess/conref
|
## Overview
DITA builds using the html5 transform do not handle @conkeyref attribute references outside of the directory hierarchy underneath the map referenced by args.input. The path to this file is miscalculated. This problem has a few follow-on consequences:
- Content in t-prompt.dita is not pulled into the output.
- @conkeyref references for shared content in a file in the same directory as the map are not pulled into the output.
- Paths to the CSS file are miscalculated.
Below is an overview of the content source setup. File src/shared_content/t-prompt.dita will not be seen if this is built:
```
src/
shared_content/
t-prompt.dita
topicset/
m-map.ditamap
c-top_doc.dita
t-conkeyref-test.dita
to-data.dita
```
In the example, m-map.ditamap has keyrefs:
<keyref keys="prompts" href="../shared_content/t-prompt.dita" processing-role="resource-only"/>
<keyref keys="ts" href="to-data.dita" processing-role="resource-only"/>
The file t-conkeyref-test.dita has a @conkeyref set like this:
<systemoutput conkeyref="prompts/python"/>
The result is that the path to shared content calculated by the transform appears to be incorrect. Impacts include path to to-data.dita being miscalculated, path to CSS file being miscalculated as well as path to t-prompt.dita being miscalculated.
## Steps to Replicate
1. Unzip the attached file.
2. Start a Windows command shell with the appropriate path setup.
3. Navigate to Example/DPAD_component.
4. Build the content by running the render.cmd script.
5. Find the errors in output.log.
6. Look at the source code for t-conkeyref-test.html, noting the path discrepancies to the CSS file.
## Actual Results
- Titles are missing
- Items referenced through @conkeyref are missing
- CSS path in HTML metadata is incorrect.
## Expected Results
To my way of thinking, @conkeyrefs would be handled as they are in 1.x.
## Platform and Build
Windows 7
DITA 3.0.3
## Additional Platforms and Builds
I have seen this with all of the 2.x versions as well, also with Windows 7.
## Additional Information
I have tried xhtml and eclipsehelp both of which have the same problem as the html5 transform.
Notably this problem does not occur with the pdf transform.
|
1.0
|
Build result when @conkeyref attribute to reference external content is possibly incorrect for the html5 transform - ## Overview
DITA builds using the html5 transform do not handle @conkeyref attribute references outside of the directory hierarchy underneath the map referenced by args.input. The path to this file is miscalculated. This problem has a few follow-on consequences:
- Content in t-prompt.dita is not pulled into the output.
- @conkeyref references for shared content in a file in the same directory as the map are not pulled into the output.
- Paths to the CSS file are miscalculated.
Below is an overview of the content source setup. File src/shared_content/t-prompt.dita will not be seen if this is built:
```
src/
shared_content/
t-prompt.dita
topicset/
m-map.ditamap
c-top_doc.dita
t-conkeyref-test.dita
to-data.dita
```
In the example, m-map.ditamap has keyrefs:
<keyref keys="prompts" href="../shared_content/t-prompt.dita" processing-role="resource-only"/>
<keyref keys="ts" href="to-data.dita" processing-role="resource-only"/>
The file t-conkeyref-test.dita has a @conkeyref set like this:
<systemoutput conkeyref="prompts/python"/>
The result is that the path to shared content calculated by the transform appears to be incorrect. Impacts include path to to-data.dita being miscalculated, path to CSS file being miscalculated as well as path to t-prompt.dita being miscalculated.
## Steps to Replicate
1. Unzip the attached file.
2. Start a Windows command shell with the appropriate path setup.
3. Navigate to Example/DPAD_component.
4. Build the content by running the render.cmd script.
5. Find the errors in output.log.
6. Look at the source code for t-conkeyref-test.html, noting the path discrepancies to the CSS file.
## Actual Results
- Titles are missing
- Items referenced through @conkeyref are missing
- CSS path in HTML metadata is incorrect.
## Expected Results
To my way of thinking, @conkeyrefs would be handled as they are in 1.x.
## Platform and Build
Windows 7
DITA 3.0.3
## Additional Platforms and Builds
I have seen this with all of the 2.x versions as well, also with Windows 7.
## Additional Information
I have tried xhtml and eclipsehelp both of which have the same problem as the html5 transform.
Notably this problem does not occur with the pdf transform.
|
process
|
build result when conkeyref attribute to reference external content is possibly incorrect for the transform overview dita builds using the transform do not handle conkeyref attribute references outside of the directory hierarchy underneath the map referenced by args input the path to this file is miscalculated this problem has a few follow on consequences content in t prompt dita is not pulled into the output conkeyref references for shared content in a file in the same directory as the map are not pulled into the output paths to the css file are miscalculated below is an overview of the content source setup file src shared content t prompt dita will not be seen if this is built src shared content t prompt dita topicset m map ditamap c top doc dita t conkeyref test dita to data dita in the example m map ditamap has keyrefs the file t conkeyref test dita has a conkeyref set like this the result is that the path to shared content calculated by the transform appears to be incorrect impacts include path to to data dita being miscalculated path to css file being miscalculated as well as path to t prompt dita being miscalculated steps to replicate unzip the attached file start a windows command shell with the appropriate path setup navigate to example dpad component build the content by running the render cmd script find the errors in output log look at the source code for t conkeyref test html noting the path discrepancies to the css file actual results titles are missing items referenced through conkeyref are missing css path in html metadata is incorrect expected results to my way of thinking conkeyrefs would be handled as they are in x platform and build windows dita additional platforms and builds i have seen this with all of the x versions as well also with windows additional information i have tried xhtml and eclipsehelp both of which have the same problem as the transform notably this problem does not occur with the pdf transform
| 1
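The path arithmetic the record above expects can be sketched as follows (illustrative Python, not DITA-OT code; the directory names mirror the example layout, and the "correct" behaviour shown is an assumption based on how the reporter says 1.x handled it):

```python
import posixpath

# Resolve the key's href relative to the map's directory, then
# re-relativize it against the referencing topic's directory.
map_dir = "src/topicset"
keyref_href = "../shared_content/t-prompt.dita"  # as written in m-map.ditamap
topic_dir = "src/topicset"                       # dir of t-conkeyref-test.dita

resolved = posixpath.normpath(posixpath.join(map_dir, keyref_href))
expected = posixpath.relpath(resolved, topic_dir)
print(resolved)  # → src/shared_content/t-prompt.dita
print(expected)  # → ../shared_content/t-prompt.dita
```

The reported bug is that the transform's equivalent of this calculation goes wrong once the target sits outside the map's directory hierarchy, which also throws off the CSS and to-data.dita paths.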
|
1,511
| 4,103,769,741
|
IssuesEvent
|
2016-06-04 22:28:51
|
kerubistan/kerub
|
https://api.github.com/repos/kerubistan/kerub
|
closed
|
lvmcreate waiting for interaction
|
bug component:data processing priority: high
|
lvcreate wants to interact with the user when there was a volume in this vg already before
```
WARNING: iso9660 signature detected on /dev/test/test2 at offset 32769. Wipe it? [y/n]:
```
|
1.0
|
lvmcreate waiting for interaction - lvcreate want to interact with the user when there was a volume in this vg already before
```
WARNING: iso9660 signature detected on /dev/test/test2 at offset 32769. Wipe it? [y/n]:
```
|
process
|
lvmcreate waiting for interaction lvcreate wants to interact with the user when there was a volume in this vg already before warning signature detected on dev test at offset wipe it
| 1
|
17,758
| 23,672,980,499
|
IssuesEvent
|
2022-08-27 16:54:24
|
apache/arrow-rs
|
https://api.github.com/repos/apache/arrow-rs
|
closed
|
Unreleased Object Store Fails To Compile With Only GCP Feature
|
bug development-process
|
**Describe the bug**
<!--
A clear and concise description of what the bug is.
-->
Following #2509 the object_store crate no longer compiles with just the GCP feature enabled
**To Reproduce**
<!--
Steps to reproduce the behavior:
-->
**Expected behavior**
<!--
A clear and concise description of what you expected to happen.
-->
It should compile and this should have been caught in CI
**Additional context**
<!--
Add any other context about the problem here.
-->
|
1.0
|
Unreleased Object Store Fails To Compile With Only GCP Feature - **Describe the bug**
<!--
A clear and concise description of what the bug is.
-->
Following #2509 the object_store crate no longer compiles with just the GCP feature enabled
**To Reproduce**
<!--
Steps to reproduce the behavior:
-->
**Expected behavior**
<!--
A clear and concise description of what you expected to happen.
-->
It should compile and this should have been caught in CI
**Additional context**
<!--
Add any other context about the problem here.
-->
|
process
|
unreleased object store fails to compile with only gcp feature describe the bug a clear and concise description of what the bug is following the object store crate no longer compiles with just the gcp feature enabled to reproduce steps to reproduce the behavior expected behavior a clear and concise description of what you expected to happen it should compile and this should have been caught in ci additional context add any other context about the problem here
| 1
|
111,169
| 9,516,072,059
|
IssuesEvent
|
2019-04-26 07:52:53
|
elastic/elasticsearch
|
https://api.github.com/repos/elastic/elasticsearch
|
opened
|
[CI] :plugins:discovery-gce:qa:gce:integTestCluster#wait failing
|
:Distributed/Discovery-Plugins >test-failure
|
Failed on my PR build here:
https://elasticsearch-ci.elastic.co/job/elastic+elasticsearch+pull-request-1/12781/
Primary cause seems to be:
```
| [2019-04-25T19:29:58,824][WARN ][o.e.c.g.GceInstancesServiceImpl] [node-0] Problem fetching instance list for zone test-zone
| com.google.api.client.googleapis.json.GoogleJsonResponseException: 500 Internal Server Error
| /var/lib/jenkins/workspace/elastic+elasticsearch+pull-request-1/plugins/discovery-gce/qa/gce/build/generated-resources/nodes.uri
| at com.google.api.client.googleapis.json.GoogleJsonResponseException.from(GoogleJsonResponseException.java:146) ~[google-api-client-1.23.0.jar:1.23.0]
| at com.google.api.client.googleapis.services.json.AbstractGoogleJsonClientRequest.newExceptionOnError(AbstractGoogleJsonClientRequest.java:113) ~[google-api-client-1.23.0.
```
Found a previous similar 6.x only issue here, unsure if related: #34272.
|
1.0
|
[CI] :plugins:discovery-gce:qa:gce:integTestCluster#wait failing - Failed on my PR build here:
https://elasticsearch-ci.elastic.co/job/elastic+elasticsearch+pull-request-1/12781/
Primary cause seems to be:
```
| [2019-04-25T19:29:58,824][WARN ][o.e.c.g.GceInstancesServiceImpl] [node-0] Problem fetching instance list for zone test-zone
| com.google.api.client.googleapis.json.GoogleJsonResponseException: 500 Internal Server Error
| /var/lib/jenkins/workspace/elastic+elasticsearch+pull-request-1/plugins/discovery-gce/qa/gce/build/generated-resources/nodes.uri
| at com.google.api.client.googleapis.json.GoogleJsonResponseException.from(GoogleJsonResponseException.java:146) ~[google-api-client-1.23.0.jar:1.23.0]
| at com.google.api.client.googleapis.services.json.AbstractGoogleJsonClientRequest.newExceptionOnError(AbstractGoogleJsonClientRequest.java:113) ~[google-api-client-1.23.0.
```
Found a previous similar 6.x only issue here, unsure if related: #34272.
|
non_process
|
plugins discovery gce qa gce integtestcluster wait failing failed on my pr build here primary cause seems to be problem fetching instance list for zone test zone com google api client googleapis json googlejsonresponseexception internal server error var lib jenkins workspace elastic elasticsearch pull request plugins discovery gce qa gce build generated resources nodes uri at com google api client googleapis json googlejsonresponseexception from googlejsonresponseexception java at com google api client googleapis services json abstractgooglejsonclientrequest newexceptiononerror abstractgooglejsonclientrequest java google api client found a previous similar x only issue here unsure if related
| 0
|
19,032
| 25,041,589,051
|
IssuesEvent
|
2022-11-04 21:28:32
|
nerfstudio-project/nerfstudio
|
https://api.github.com/repos/nerfstudio-project/nerfstudio
|
closed
|
Horizontal and vertical camera treated as different by colmap
|
bug data processing
|
**Describe the bug**
If you create a dataset with vertical and horizontal images, colmap will treat this as two different cameras. This breaks the dataloader.
**To Reproduce**
Take vertical and horizontal images and process them with `ns-process-data`. The model can not train on the resulting dataset.
**Expected behavior**
The method should be invariant to camera capture orientation.
|
1.0
|
Horizontal and vertical camera treated as different by colmap - **Describe the bug**
If you create a dataset with vertical and horizontal images, colmap will treat this as two different cameras. This breaks the dataloader.
**To Reproduce**
Take vertical and horizontal images and process them with `ns-process-data`. The model can not train on the resulting dataset.
**Expected behavior**
The method should be invariant to camera capture orientation.
|
process
|
horizontal and vertical camera treated as different by colmap describe the bug if you create a dataset with vertical and horizontal images colmap will treat this as two different cameras this breaks the dataloader to reproduce take vertical and horizontal images and process them with ns process data the model can not train on the resulting dataset expected behavior the method should be invariant to camera capture orientation
| 1
|
19,681
| 26,032,277,266
|
IssuesEvent
|
2022-12-21 22:51:24
|
GoogleCloudPlatform/cloud-sql-go-connector
|
https://api.github.com/repos/GoogleCloudPlatform/cloud-sql-go-connector
|
closed
|
Connection failure due to Cloud SQL Admin API per user/minute quota
|
type: docs priority: p0 type: process
|
Hi,
we're experiencing an issue with the go connector that I just can't make sense of.
At some point, we hit the Cloud SQL Admin API per user/minute quota (in our case 180) for no apparent reason. Of course, once the quota is reached, refresh attempts do not succeed and no connection to cloudsql can be established, because the calls to `google.cloud.sql.v1beta4.SqlConnectService.GenerateEphemeralCert` and `google.cloud.sql.v1beta4.SqlConnectService.GetConnectSettings` fail with a 429 error. The only way to resolve this seems to be to kill the deployment for at least 1 minute, so that the quota resets. Then everything is fine again, but who knows for how long.
We use IAM authentication and authorization. The service account is only used for one service, so the api calls all originate from that one specific service using the `cloud-sql-go-connector`.
Has anyone else experienced this kind of behavior?
Thanks!
|
1.0
|
Connection failure due to Cloud SQL Admin API per user/minute quota - Hi,
we're experiencing an issue with the go connector that I just can't make sense of.
At some point, we hit the Cloud SQL Admin API per user/minute quota (in our case 180) for no apparent reason. Of course, once the quota is reached, refresh attempts do not succeed and no connection to cloudsql can be established, because the calls to `google.cloud.sql.v1beta4.SqlConnectService.GenerateEphemeralCert` and `google.cloud.sql.v1beta4.SqlConnectService.GetConnectSettings` fail with a 429 error. The only way to resolve this seems to be to kill the deployment for at least 1 minute, so that the quota resets. Then everything is fine again, but who knows for how long.
We use IAM authentication and authorization. The service account is only used for one service, so the api calls all originate from that one specific service using the `cloud-sql-go-connector`.
Has anyone else experienced this kind of behavior?
Thanks!
|
process
|
connection failure due to cloud sql admin api per user minute quota hi we re experiencing an issue with the go connector that i just can t make sense of at some point we hit the cloud sql admin api per user minute quota in our case for no apparent reason of course once the quota is reached refresh attempts do not succeed and no connection to cloudsql can be established because the calls to google cloud sql sqlconnectservice generateephemeralcert and google cloud sql sqlconnectservice getconnectsettings fail with a error the only way to resolve this seems to be to kill the deployment for at least minute so that the quota resets then everything is fine again but who knows for how long we use iam authentication and authorization the service account is only used for one service so the api calls all originate from that one specific service using the cloud sql go connector has anyone else experienced this kind of behavior thanks
| 1
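A client-side guard against the per-user/minute quota described in the record above can be sketched as follows (illustrative Python, not the actual `cloud-sql-go-connector` logic; the 180 calls/min figure is taken from the report, and the tiny limit in the demo is only to show the behaviour):

```python
import time
from collections import deque

class PerMinuteQuota:
    """Sliding-window guard that refuses to exceed a per-minute call quota,
    so the caller can back off instead of collecting 429 errors."""

    def __init__(self, limit_per_minute, clock=time.monotonic):
        self.limit = limit_per_minute
        self.clock = clock
        self.calls = deque()  # timestamps of calls made in the last 60 s

    def try_acquire(self):
        now = self.clock()
        # Drop timestamps that have aged out of the quota window.
        while self.calls and now - self.calls[0] >= 60.0:
            self.calls.popleft()
        if len(self.calls) < self.limit:
            self.calls.append(now)
            return True
        return False  # over quota: caller should wait, not call the API

q = PerMinuteQuota(3, clock=lambda: 0.0)  # fixed clock: all calls at t=0
print([q.try_acquire() for _ in range(4)])  # → [True, True, True, False]
```

In the reported scenario such a guard would make the connector stop issuing `GenerateEphemeralCert`/`GetConnectSettings` calls once the window fills, rather than failing until the server-side quota resets.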
|
29,647
| 11,765,905,499
|
IssuesEvent
|
2020-03-14 19:35:47
|
Zeus-HelpDesk/Zeus
|
https://api.github.com/repos/Zeus-HelpDesk/Zeus
|
opened
|
CVE-2012-6708 (Medium) detected in jquery-1.7.1.min.js
|
security vulnerability
|
## CVE-2012-6708 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>jquery-1.7.1.min.js</b></p></summary>
<p>JavaScript library for DOM operations</p>
<p>Library home page: <a href="https://cdnjs.cloudflare.com/ajax/libs/jquery/1.7.1/jquery.min.js">https://cdnjs.cloudflare.com/ajax/libs/jquery/1.7.1/jquery.min.js</a></p>
<p>Path to dependency file: /tmp/ws-scm/Zeus/node_modules/vm-browserify/example/run/index.html</p>
<p>Path to vulnerable library: /Zeus/node_modules/vm-browserify/example/run/index.html,/Zeus/node_modules/sockjs/examples/echo/index.html,/Zeus/node_modules/sockjs/examples/express-3.x/index.html,/Zeus/node_modules/sockjs/examples/multiplex/index.html,/Zeus/node_modules/sockjs/examples/express/index.html,/Zeus/node_modules/sockjs/examples/hapi/html/index.html</p>
<p>
Dependency Hierarchy:
- :x: **jquery-1.7.1.min.js** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/Zeus-HelpDesk/Zeus/commit/9f78b5d2815d2585fb44264d62ed8cf125b2d90d">9f78b5d2815d2585fb44264d62ed8cf125b2d90d</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
jQuery before 1.9.0 is vulnerable to Cross-site Scripting (XSS) attacks. The jQuery(strInput) function does not differentiate selectors from HTML in a reliable fashion. In vulnerable versions, jQuery determined whether the input was HTML by looking for the '<' character anywhere in the string, giving attackers more flexibility when attempting to construct a malicious payload. In fixed versions, jQuery only deems the input to be HTML if it explicitly starts with the '<' character, limiting exploitability only to attackers who can control the beginning of a string, which is far less common.
<p>Publish Date: 2018-01-18
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2012-6708>CVE-2012-6708</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>6.1</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: Required
- Scope: Changed
- Impact Metrics:
- Confidentiality Impact: Low
- Integrity Impact: Low
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://nvd.nist.gov/vuln/detail/CVE-2012-6708">https://nvd.nist.gov/vuln/detail/CVE-2012-6708</a></p>
<p>Release Date: 2018-01-18</p>
<p>Fix Resolution: jQuery - v1.9.0</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
True
|
CVE-2012-6708 (Medium) detected in jquery-1.7.1.min.js - ## CVE-2012-6708 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>jquery-1.7.1.min.js</b></p></summary>
<p>JavaScript library for DOM operations</p>
<p>Library home page: <a href="https://cdnjs.cloudflare.com/ajax/libs/jquery/1.7.1/jquery.min.js">https://cdnjs.cloudflare.com/ajax/libs/jquery/1.7.1/jquery.min.js</a></p>
<p>Path to dependency file: /tmp/ws-scm/Zeus/node_modules/vm-browserify/example/run/index.html</p>
<p>Path to vulnerable library: /Zeus/node_modules/vm-browserify/example/run/index.html,/Zeus/node_modules/sockjs/examples/echo/index.html,/Zeus/node_modules/sockjs/examples/express-3.x/index.html,/Zeus/node_modules/sockjs/examples/multiplex/index.html,/Zeus/node_modules/sockjs/examples/express/index.html,/Zeus/node_modules/sockjs/examples/hapi/html/index.html</p>
<p>
Dependency Hierarchy:
- :x: **jquery-1.7.1.min.js** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/Zeus-HelpDesk/Zeus/commit/9f78b5d2815d2585fb44264d62ed8cf125b2d90d">9f78b5d2815d2585fb44264d62ed8cf125b2d90d</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
jQuery before 1.9.0 is vulnerable to Cross-site Scripting (XSS) attacks. The jQuery(strInput) function does not differentiate selectors from HTML in a reliable fashion. In vulnerable versions, jQuery determined whether the input was HTML by looking for the '<' character anywhere in the string, giving attackers more flexibility when attempting to construct a malicious payload. In fixed versions, jQuery only deems the input to be HTML if it explicitly starts with the '<' character, limiting exploitability only to attackers who can control the beginning of a string, which is far less common.
<p>Publish Date: 2018-01-18
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2012-6708>CVE-2012-6708</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>6.1</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: Required
- Scope: Changed
- Impact Metrics:
- Confidentiality Impact: Low
- Integrity Impact: Low
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://nvd.nist.gov/vuln/detail/CVE-2012-6708">https://nvd.nist.gov/vuln/detail/CVE-2012-6708</a></p>
<p>Release Date: 2018-01-18</p>
<p>Fix Resolution: jQuery - v1.9.0</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
non_process
|
cve medium detected in jquery min js cve medium severity vulnerability vulnerable library jquery min js javascript library for dom operations library home page a href path to dependency file tmp ws scm zeus node modules vm browserify example run index html path to vulnerable library zeus node modules vm browserify example run index html zeus node modules sockjs examples echo index html zeus node modules sockjs examples express x index html zeus node modules sockjs examples multiplex index html zeus node modules sockjs examples express index html zeus node modules sockjs examples hapi html index html dependency hierarchy x jquery min js vulnerable library found in head commit a href vulnerability details jquery before is vulnerable to cross site scripting xss attacks the jquery strinput function does not differentiate selectors from html in a reliable fashion in vulnerable versions jquery determined whether the input was html by looking for the character anywhere in the string giving attackers more flexibility when attempting to construct a malicious payload in fixed versions jquery only deems the input to be html if it explicitly starts with the character limiting exploitability only to attackers who can control the beginning of a string which is far less common publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction required scope changed impact metrics confidentiality impact low integrity impact low availability impact none for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution jquery step up your open source security game with whitesource
| 0
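The heuristic difference described in the CVE record above can be sketched as follows (a simplified Python model of the advisory's wording, not jQuery's actual source — real jQuery uses a regular expression for this check):

```python
def old_looks_like_html(s):
    # Pre-1.9.0 behaviour per the advisory: a '<' anywhere in the
    # string causes the input to be parsed as HTML.
    return "<" in s

def fixed_looks_like_html(s):
    # Fixed behaviour: only input that explicitly starts with '<'
    # is treated as HTML; everything else is a selector.
    return s.startswith("<")

# Attacker-controlled string appended to a selector.
payload = "#id <img src=x onerror=alert(1)>"
print(old_looks_like_html(payload))    # → True  (parsed as HTML, XSS possible)
print(fixed_looks_like_html(payload))  # → False (treated as a selector)
```

This is why the fix limits exploitability to attackers who control the very beginning of the string passed to `jQuery()`.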
|
18,508
| 24,551,455,230
|
IssuesEvent
|
2022-10-12 12:55:36
|
GoogleCloudPlatform/fda-mystudies
|
https://api.github.com/repos/GoogleCloudPlatform/fda-mystudies
|
closed
|
[PM] Participants are navigating to change password screen, when signed in with the temporary password in the following scenario
|
Bug Blocker P0 Participant manager Process: Fixed Process: Tested QA Process: Tested dev
|
**Steps:**
1. Sign in with the temporary password
2. Reload / Refresh the set up password screen and Verify
**AR:** Participants are navigating to change password screen in the following scenario
**ER:** Participants should not navigate to any other screen in the following scenario
[screen-capture (90).webm](https://user-images.githubusercontent.com/86007179/183926471-59a84477-e610-40e2-9516-8e067f0ea396.webm)
|
3.0
|
[PM] Participants are navigating to change password screen, when signed in with the temporary password in the following scenario - **Steps:**
1. Sign in with the temporary password
2. Reload / Refresh the set up password screen and Verify
**AR:** Participants are navigating to change password screen in the following scenario
**ER:** Participants should not navigate to any other screen in the following scenario
[screen-capture (90).webm](https://user-images.githubusercontent.com/86007179/183926471-59a84477-e610-40e2-9516-8e067f0ea396.webm)
|
process
|
participants are navigating to change password screen when signed in with the temporary password in the following scenario steps sign in with the temporary password reload refresh the set up password screen and verify ar participants are navigating to change password screen in the following scenario er participants should not navigate to any other screen in the following scenario
| 1
|
175,574
| 13,564,274,244
|
IssuesEvent
|
2020-09-18 09:47:47
|
LittleWhoDev/HelpingAngel-Web
|
https://api.github.com/repos/LittleWhoDev/HelpingAngel-Web
|
opened
|
Flow: view nearby posts
|
testing
|
- [ ] Select the "donor" option and be able to log in or proceed anonymously
- [ ] Select the map view or view it by default if not logged in #4 #12
- [ ] Use map filters (distance, category) #6
- [ ] View post details #8
|
1.0
|
Flow: view nearby posts - - [ ] Select the "donor" option and be able to log in or proceed anonymously
- [ ] Select the map view or view it by default if not logged in #4 #12
- [ ] Use map filters (distance, category) #6
- [ ] View post details #8
|
non_process
|
flow view nearby posts select the donor option and be able to log in or proceed anonymously select the map view or view it by default if not logged in use map filters distance category view post details
| 0
|
677,826
| 23,176,952,063
|
IssuesEvent
|
2022-07-31 15:04:39
|
Amulet-Team/Amulet-Map-Editor
|
https://api.github.com/repos/Amulet-Team/Amulet-Map-Editor
|
closed
|
[Bug Report] Import .construction into 1.13.2 World
|
type: bug priority: critical
|
**Describe the bug**
Importing a `.construction` file into a 1.13.2 world causes chunks to reset/regenerate to their original generator state.
**Expected behavior**
The `.construction` file successfully imports into world and does not cause chunks to reset/regenerate.
**To Reproduce**
Steps to reproduce the behavior:
1. Open 1.13.2 world named _city_ in Amulet
2. Create selection using 3D Editor and export as _building.construction_
3. Open 1.13.2 world named _test_ in Amulet
4. Import file _building.construction_ and save/close world
5. Re-open world _test_ in Amulet to confirm the import was successful
6. Launch Minecraft, load world _test_
7. Minecraft logs errors about not being able to read chunks and regenerates chunks where .construction file was imported
**Screenshots**
3D Selection being exported

.construction file being imported to world _test_

Reopen world _test_ to confirm import succeeded

Open world _test_ in minecraft to see chunks have been reset/regenerated

**Environment**
OS: Windows 10, Update 21H1
Minecraft Platform: Java
Minecraft Version: 1.13.2
Amulet Version: 0.9.6
**Additional context**
My client log can be found [here](https://cdn.itsdevil.com/uploads/error-log.txt)
The relevant portion is the following stacktrace:
```
[07:05:55] [Server thread/ERROR]: Couldn't load chunk
java.lang.IllegalArgumentException: The value 0 is not in the specified inclusive range of 1 to 32
at org.apache.commons.lang3.Validate.inclusiveBetween(Validate.java:1032) ~[commons-lang3-3.5.jar:3.5]
at xd.<init>(SourceFile:19) ~[1.13.2.jar:?]
at bnq.a(SourceFile:177) ~[1.13.2.jar:?]
at bnv.a(SourceFile:680) ~[1.13.2.jar:?]
at bnv.a(SourceFile:496) ~[1.13.2.jar:?]
at bnv.a(SourceFile:189) ~[1.13.2.jar:?]
at bnv.a(SourceFile:141) ~[1.13.2.jar:?]
at tc.a(SourceFile:102) [1.13.2.jar:?]
at tc.a(SourceFile:146) [1.13.2.jar:?]
at net.minecraft.server.MinecraftServer.a(SourceFile:429) [1.13.2.jar:?]
at dgh.a(SourceFile:92) [1.13.2.jar:?]
at dgh.d(SourceFile:108) [1.13.2.jar:?]
at net.minecraft.server.MinecraftServer.run(SourceFile:566) [1.13.2.jar:?]
at java.lang.Thread.run(Thread.java:745) [?:1.8.0_51]
```
Another similar stacktrace happens later on in the log file:
```
[07:05:55] [WorldGen-Scheduler-1/ERROR]: Couldn't load protochunk
java.lang.IllegalArgumentException: The value 0 is not in the specified inclusive range of 1 to 32
at org.apache.commons.lang3.Validate.inclusiveBetween(Validate.java:1032) ~[commons-lang3-3.5.jar:3.5]
at xd.<init>(SourceFile:19) ~[1.13.2.jar:?]
at bnq.a(SourceFile:177) ~[1.13.2.jar:?]
at bnv.a(SourceFile:680) ~[1.13.2.jar:?]
at bnv.a(SourceFile:496) ~[1.13.2.jar:?]
at bnv.a(SourceFile:189) ~[1.13.2.jar:?]
at bnv.b(SourceFile:208) ~[1.13.2.jar:?]
at bnv.b(SourceFile:165) ~[1.13.2.jar:?]
at tx.a(SourceFile:59) [1.13.2.jar:?]
at tx$$Lambda$1592/1318121054.apply(Unknown Source) [1.13.2.jar:?]
at it.unimi.dsi.fastutil.longs.Long2ObjectOpenHashMap.computeIfAbsent(Long2ObjectOpenHashMap.java:479) [fastutil-8.2.1.jar:?]
at tx.a(SourceFile:54) [1.13.2.jar:?]
at tx.a(SourceFile:25) [1.13.2.jar:?]
at acu.b(SourceFile:80) [1.13.2.jar:?]
at acu.a(SourceFile:61) [1.13.2.jar:?]
at acu$$Lambda$1588/746203069.get(Unknown Source) [1.13.2.jar:?]
at java.util.concurrent.CompletableFuture$AsyncSupply.run(CompletableFuture.java:1582) [?:1.8.0_51]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) [?:1.8.0_51]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) [?:1.8.0_51]
at java.lang.Thread.run(Thread.java:745) [?:1.8.0_51]
```
**World Files**
[city](https://cdn.itsdevil.com/uploads/city.zip)
[test](https://cdn.itsdevil.com/uploads/test.zip)
|
1.0
|
[Bug Report] Import .construction into 1.13.2 World - **Describe the bug**
Importing a `.construction` file into a 1.13.2 world causes chunks to reset/regenerate to their original generator state.
**Expected behavior**
The `.construction` file successfully imports into world and does not cause chunks to reset/regenerate.
**To Reproduce**
Steps to reproduce the behavior:
1. Open 1.13.2 world named _city_ in Amulet
2. Create selection using 3D Editor and export as _building.construction_
3. Open 1.13.2 world named _test_ in Amulet
4. Import file _building.construction_ and save/close world
5. Re-open world _test_ in Amulet to confirm the import was successful
6. Launch Minecraft, load world _test_
7. Minecraft logs errors about not being able to read chunks and regenerates chunks where .construction file was imported
**Screenshots**
3D Selection being exported

.construction file being imported to world _test_

Reopen world _test_ to confirm import succeeded

Open world _test_ in minecraft to see chunks have been reset/regenerated

**Environment**
OS: Windows 10, Update 21H1
Minecraft Platform: Java
Minecraft Version: 1.13.2
Amulet Version: 0.9.6
**Additional context**
My client log can be found [here](https://cdn.itsdevil.com/uploads/error-log.txt)
The relevant portion is the following stacktrace:
```
[07:05:55] [Server thread/ERROR]: Couldn't load chunk
java.lang.IllegalArgumentException: The value 0 is not in the specified inclusive range of 1 to 32
at org.apache.commons.lang3.Validate.inclusiveBetween(Validate.java:1032) ~[commons-lang3-3.5.jar:3.5]
at xd.<init>(SourceFile:19) ~[1.13.2.jar:?]
at bnq.a(SourceFile:177) ~[1.13.2.jar:?]
at bnv.a(SourceFile:680) ~[1.13.2.jar:?]
at bnv.a(SourceFile:496) ~[1.13.2.jar:?]
at bnv.a(SourceFile:189) ~[1.13.2.jar:?]
at bnv.a(SourceFile:141) ~[1.13.2.jar:?]
at tc.a(SourceFile:102) [1.13.2.jar:?]
at tc.a(SourceFile:146) [1.13.2.jar:?]
at net.minecraft.server.MinecraftServer.a(SourceFile:429) [1.13.2.jar:?]
at dgh.a(SourceFile:92) [1.13.2.jar:?]
at dgh.d(SourceFile:108) [1.13.2.jar:?]
at net.minecraft.server.MinecraftServer.run(SourceFile:566) [1.13.2.jar:?]
at java.lang.Thread.run(Thread.java:745) [?:1.8.0_51]
```
Another similar stacktrace happens later on in the log file:
```
[07:05:55] [WorldGen-Scheduler-1/ERROR]: Couldn't load protochunk
java.lang.IllegalArgumentException: The value 0 is not in the specified inclusive range of 1 to 32
at org.apache.commons.lang3.Validate.inclusiveBetween(Validate.java:1032) ~[commons-lang3-3.5.jar:3.5]
at xd.<init>(SourceFile:19) ~[1.13.2.jar:?]
at bnq.a(SourceFile:177) ~[1.13.2.jar:?]
at bnv.a(SourceFile:680) ~[1.13.2.jar:?]
at bnv.a(SourceFile:496) ~[1.13.2.jar:?]
at bnv.a(SourceFile:189) ~[1.13.2.jar:?]
at bnv.b(SourceFile:208) ~[1.13.2.jar:?]
at bnv.b(SourceFile:165) ~[1.13.2.jar:?]
at tx.a(SourceFile:59) [1.13.2.jar:?]
at tx$$Lambda$1592/1318121054.apply(Unknown Source) [1.13.2.jar:?]
at it.unimi.dsi.fastutil.longs.Long2ObjectOpenHashMap.computeIfAbsent(Long2ObjectOpenHashMap.java:479) [fastutil-8.2.1.jar:?]
at tx.a(SourceFile:54) [1.13.2.jar:?]
at tx.a(SourceFile:25) [1.13.2.jar:?]
at acu.b(SourceFile:80) [1.13.2.jar:?]
at acu.a(SourceFile:61) [1.13.2.jar:?]
at acu$$Lambda$1588/746203069.get(Unknown Source) [1.13.2.jar:?]
at java.util.concurrent.CompletableFuture$AsyncSupply.run(CompletableFuture.java:1582) [?:1.8.0_51]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) [?:1.8.0_51]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) [?:1.8.0_51]
at java.lang.Thread.run(Thread.java:745) [?:1.8.0_51]
```
**World Files**
[city](https://cdn.itsdevil.com/uploads/city.zip)
[test](https://cdn.itsdevil.com/uploads/test.zip)
|
non_process
|
import construction into world describe the bug importing a construction file into a world causes chunks to reset regenerate to their original generator state expected behavior the construction file successfully imports into world and does not cause chunks to reset regenerate to reproduce steps to reproduce the behavior open world named city in amulet create selection using editor and export as building construction open world named test in amulet import file building construction and save close world re open world test in amulet to confirm the import was successful launch minecraft load world test minecraft logs errors about not being able to read chunks and regenerates chunks where construction file was imported screenshots selection being exported construction file being imported to world test reopen world test to confirm import succeeded open world test in minecraft to see chunks have been reset regenerated environment os windows update minecraft platform java minecraft version amulet version additional context my client log can be found the relevant portion is the following stacktrace couldn t load chunk java lang illegalargumentexception the value is not in the specified inclusive range of to at org apache commons validate inclusivebetween validate java at xd sourcefile at bnq a sourcefile at bnv a sourcefile at bnv a sourcefile at bnv a sourcefile at bnv a sourcefile at tc a sourcefile at tc a sourcefile at net minecraft server minecraftserver a sourcefile at dgh a sourcefile at dgh d sourcefile at net minecraft server minecraftserver run sourcefile at java lang thread run thread java another similar stacktrace happens later on in the log file couldn t load protochunk java lang illegalargumentexception the value is not in the specified inclusive range of to at org apache commons validate inclusivebetween validate java at xd sourcefile at bnq a sourcefile at bnv a sourcefile at bnv a sourcefile at bnv a sourcefile at bnv b sourcefile at bnv b sourcefile at tx a sourcefile at tx lambda apply unknown source at it unimi dsi fastutil longs computeifabsent java at tx a sourcefile at tx a sourcefile at acu b sourcefile at acu a sourcefile at acu lambda get unknown source at java util concurrent completablefuture asyncsupply run completablefuture java at java util concurrent threadpoolexecutor runworker threadpoolexecutor java at java util concurrent threadpoolexecutor worker run threadpoolexecutor java at java lang thread run thread java world files
| 0
|
101,102
| 16,490,895,237
|
IssuesEvent
|
2021-05-25 03:32:38
|
ekediala/honest-parrot
|
https://api.github.com/repos/ekediala/honest-parrot
|
opened
|
WS-2019-0424 (Medium) detected in elliptic-6.5.0.tgz
|
security vulnerability
|
## WS-2019-0424 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>elliptic-6.5.0.tgz</b></p></summary>
<p>EC cryptography</p>
<p>Library home page: <a href="https://registry.npmjs.org/elliptic/-/elliptic-6.5.0.tgz">https://registry.npmjs.org/elliptic/-/elliptic-6.5.0.tgz</a></p>
<p>Path to dependency file: /honest-parrot/package.json</p>
<p>Path to vulnerable library: honest-parrot/node_modules/elliptic/package.json</p>
<p>
Dependency Hierarchy:
- nuxt-2.8.1.tgz (Root Library)
- webpack-2.8.1.tgz
- webpack-4.39.1.tgz
- node-libs-browser-2.2.1.tgz
- crypto-browserify-3.12.0.tgz
- browserify-sign-4.0.4.tgz
- :x: **elliptic-6.5.0.tgz** (Vulnerable Library)
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
all versions of elliptic are vulnerable to Timing Attack through side-channels.
<p>Publish Date: 2019-11-13
<p>URL: <a href=https://github.com/indutny/elliptic/commit/ec735edde187a43693197f6fa3667ceade751a3a>WS-2019-0424</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.9</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Adjacent
- Attack Complexity: High
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: Low
- Integrity Impact: High
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
True
|
WS-2019-0424 (Medium) detected in elliptic-6.5.0.tgz - ## WS-2019-0424 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>elliptic-6.5.0.tgz</b></p></summary>
<p>EC cryptography</p>
<p>Library home page: <a href="https://registry.npmjs.org/elliptic/-/elliptic-6.5.0.tgz">https://registry.npmjs.org/elliptic/-/elliptic-6.5.0.tgz</a></p>
<p>Path to dependency file: /honest-parrot/package.json</p>
<p>Path to vulnerable library: honest-parrot/node_modules/elliptic/package.json</p>
<p>
Dependency Hierarchy:
- nuxt-2.8.1.tgz (Root Library)
- webpack-2.8.1.tgz
- webpack-4.39.1.tgz
- node-libs-browser-2.2.1.tgz
- crypto-browserify-3.12.0.tgz
- browserify-sign-4.0.4.tgz
- :x: **elliptic-6.5.0.tgz** (Vulnerable Library)
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
all versions of elliptic are vulnerable to Timing Attack through side-channels.
<p>Publish Date: 2019-11-13
<p>URL: <a href=https://github.com/indutny/elliptic/commit/ec735edde187a43693197f6fa3667ceade751a3a>WS-2019-0424</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.9</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Adjacent
- Attack Complexity: High
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: Low
- Integrity Impact: High
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
non_process
|
ws medium detected in elliptic tgz ws medium severity vulnerability vulnerable library elliptic tgz ec cryptography library home page a href path to dependency file honest parrot package json path to vulnerable library honest parrot node modules elliptic package json dependency hierarchy nuxt tgz root library webpack tgz webpack tgz node libs browser tgz crypto browserify tgz browserify sign tgz x elliptic tgz vulnerable library vulnerability details all versions of elliptic are vulnerable to timing attack through side channels publish date url a href cvss score details base score metrics exploitability metrics attack vector adjacent attack complexity high privileges required none user interaction none scope unchanged impact metrics confidentiality impact low integrity impact high availability impact none for more information on scores click a href step up your open source security game with whitesource
| 0
|
249,404
| 26,927,883,586
|
IssuesEvent
|
2023-02-07 14:57:22
|
ManageIQ/kubeclient
|
https://api.github.com/repos/ManageIQ/kubeclient
|
closed
|
CVE-2022-44571 (High) detected in rack-2.2.3.gem - autoclosed
|
security vulnerability
|
## CVE-2022-44571 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>rack-2.2.3.gem</b></p></summary>
<p>Rack provides a minimal, modular and adaptable interface for developing
web applications in Ruby. By wrapping HTTP requests and responses in
the simplest way possible, it unifies and distills the API for web
servers, web frameworks, and software in between (the so-called
middleware) into a single method call.
</p>
<p>Library home page: <a href="https://rubygems.org/gems/rack-2.2.3.gem">https://rubygems.org/gems/rack-2.2.3.gem</a></p>
<p>Path to dependency file: /Gemfile.lock</p>
<p>Path to vulnerable library: /home/wss-scanner/.gem/ruby/2.7.0/cache/rack-2.2.3.gem</p>
<p>
Dependency Hierarchy:
- openid_connect-1.3.0.gem (Root Library)
- rack-oauth2-1.19.0.gem
- :x: **rack-2.2.3.gem** (Vulnerable Library)
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
There is a denial of service vulnerability in the Content-Disposition parsing component of Rack. Carefully crafted input can cause Content-Disposition header parsing in Rack to take an unexpected amount of time, possibly resulting in a denial of service attack vector. This header is typically used in multipart parsing. Fixed Versions: 2.0.9.2, 2.1.4.2, 2.2.6.2, 3.0.4.1.
<p>Publish Date: 2022-11-02
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2022-44571>CVE-2022-44571</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/advisories/GHSA-93pm-5p5f-3ghx">https://github.com/advisories/GHSA-93pm-5p5f-3ghx</a></p>
<p>Release Date: 2022-11-02</p>
<p>Fix Resolution: rack - 2.0.9.2,2.1.4.2,2.2.6.2,3.0.4.1</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
True
|
CVE-2022-44571 (High) detected in rack-2.2.3.gem - autoclosed - ## CVE-2022-44571 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>rack-2.2.3.gem</b></p></summary>
<p>Rack provides a minimal, modular and adaptable interface for developing
web applications in Ruby. By wrapping HTTP requests and responses in
the simplest way possible, it unifies and distills the API for web
servers, web frameworks, and software in between (the so-called
middleware) into a single method call.
</p>
<p>Library home page: <a href="https://rubygems.org/gems/rack-2.2.3.gem">https://rubygems.org/gems/rack-2.2.3.gem</a></p>
<p>Path to dependency file: /Gemfile.lock</p>
<p>Path to vulnerable library: /home/wss-scanner/.gem/ruby/2.7.0/cache/rack-2.2.3.gem</p>
<p>
Dependency Hierarchy:
- openid_connect-1.3.0.gem (Root Library)
- rack-oauth2-1.19.0.gem
- :x: **rack-2.2.3.gem** (Vulnerable Library)
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
There is a denial of service vulnerability in the Content-Disposition parsing component of Rack. Carefully crafted input can cause Content-Disposition header parsing in Rack to take an unexpected amount of time, possibly resulting in a denial of service attack vector. This header is typically used in multipart parsing. Fixed Versions: 2.0.9.2, 2.1.4.2, 2.2.6.2, 3.0.4.1.
<p>Publish Date: 2022-11-02
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2022-44571>CVE-2022-44571</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/advisories/GHSA-93pm-5p5f-3ghx">https://github.com/advisories/GHSA-93pm-5p5f-3ghx</a></p>
<p>Release Date: 2022-11-02</p>
<p>Fix Resolution: rack - 2.0.9.2,2.1.4.2,2.2.6.2,3.0.4.1</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
non_process
|
cve high detected in rack gem autoclosed cve high severity vulnerability vulnerable library rack gem rack provides a minimal modular and adaptable interface for developing web applications in ruby by wrapping http requests and responses in the simplest way possible it unifies and distills the api for web servers web frameworks and software in between the so called middleware into a single method call library home page a href path to dependency file gemfile lock path to vulnerable library home wss scanner gem ruby cache rack gem dependency hierarchy openid connect gem root library rack gem x rack gem vulnerable library found in base branch master vulnerability details there is a denial of service vulnerability in the content disposition parsing component of rack carefully crafted input can cause content disposition header parsing in rack to take an unexpected amount of time possibly resulting in a denial of service attack vector this header is used typically used in multipart parsing fixed versions publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution rack step up your open source security game with mend
| 0
|
831,994
| 32,068,314,911
|
IssuesEvent
|
2023-09-25 05:59:27
|
Greenstand/treetracker-admin-client
|
https://api.github.com/repos/Greenstand/treetracker-admin-client
|
opened
|
Bug: When listing less than 24 images the default view does not show images.
|
type: bug tool: Verify priority tool: Captures
|
When trying to show less than 24 images In verify and in captures module filter the result comes up empty. When filtering (verify tool) with 96 images or more it works again.

|
1.0
|
Bug: When listing less than 24 images the default view does not show images. - When trying to show less than 24 images In verify and in captures module filter the result comes up empty. When filtering (verify tool) with 96 images or more it works again.

|
non_process
|
bug when listing less than images the default view does not show images when trying to show less than images in verify and in captures module filter the result comes up empty when filtering verify tool with images or more it works again
| 0
|
239,776
| 7,800,015,940
|
IssuesEvent
|
2018-06-09 03:30:25
|
tine20/Tine-2.0-Open-Source-Groupware-and-CRM
|
https://api.github.com/repos/tine20/Tine-2.0-Open-Source-Groupware-and-CRM
|
closed
|
0006716: default favorite "me" is not resolved properly
|
Bug Mantis Tinebase high priority
|
**Reported by cweiss on 9 Jul 2012 12:04**
**Version:** Milan (2012.03.5)
e.g. Calendar: I'm Organizer
Server sends id of organizer but not the organizers contact data.
|
1.0
|
0006716: default favorite "me" is not resolved properly - **Reported by cweiss on 9 Jul 2012 12:04**
**Version:** Milan (2012.03.5)
e.g. Calendar: I'm Organizer
Server sends id of organizer but not the organizers contact data.
|
non_process
|
default favorite me is not resolved properly reported by cweiss on jul version milan e g calendar i m organizer server sends id of organizer but not the organizers contact data
| 0
|
11,302
| 14,105,823,976
|
IssuesEvent
|
2020-11-06 14:04:23
|
paul-buerkner/brms
|
https://api.github.com/repos/paul-buerkner/brms
|
closed
|
Programmatically check diagnostics
|
feature post-processing
|
I would like to programmatically check all diagnostics for models fit with `brms` in unit tests. Unless I missed it, I found nothing relevant inside the `brmsfit` object or an existing function in the manual that does this. Of course, [`rstan` has all the necessary functions to do that](http://mc-stan.org/rstan/reference/check_hmc_diagnostics.html), but it would be more convenient if `brms` had a handy little function that checks all of them on behalf of the user on demand. I have in mind something as simple as this:
https://github.com/betanalpha/knitr_case_studies/blob/master/rstan_workflow/stan_utility.R
I could submit a PR if you are interested unless you want to do it better and faster yourself.
|
1.0
|
Programmatically check diagnostics - I would like to programmatically check all diagnostics for models fit with `brms` in unit tests. Unless I missed it, I found nothing relevant inside the `brmsfit` object or an existing function in the manual that does this. Of course, [`rstan` has all the necessary functions to do that](http://mc-stan.org/rstan/reference/check_hmc_diagnostics.html), but it would be more convenient if `brms` had a handy little function that checks all of them on behalf of the user on demand. I have in mind something as simple as this:
https://github.com/betanalpha/knitr_case_studies/blob/master/rstan_workflow/stan_utility.R
I could submit a PR if you are interested unless you want to do it better and faster yourself.
|
process
|
programmatically check diagnostics i would like to programmatically check all diagnostics for models fit with brms in unit tests unless i missed it i found nothing relevant inside the brmsfit object or an existing function in the manual that does this of course but it would be more convenient if brms had a handy little function that checks all of them on behalf of the user on demand i have in mind something as simple as this i could submit a pr if you are interested unless you want to do it better and faster yourself
| 1
|
20,356
| 27,014,167,988
|
IssuesEvent
|
2023-02-10 17:46:52
|
AcademySoftwareFoundation/OpenCue
|
https://api.github.com/repos/AcademySoftwareFoundation/OpenCue
|
closed
|
[cuegui] Upgrade to PySide6
|
process
|
Subissue of #1204.
**Describe the process**
PySide2 is deprecated and no longer publishes wheels for newer platforms. I'm unable to `pip install PySide2` on my M1 macbook for example. This is blocking me from working on various other issues.
We should upgrade to PySide6, which is the currently supported version.
|
1.0
|
[cuegui] Upgrade to PySide6 - Subissue of #1204.
**Describe the process**
PySide2 is deprecated and no longer publishes wheels for newer platforms. I'm unable to `pip install PySide2` on my M1 macbook for example. This is blocking me from working on various other issues.
We should upgrade to PySide6, which is the currently supported version.
|
process
|
upgrade to subissue of describe the process is deprecated and no longer publishes wheels for newer platforms i m unable to pip install on my macbook for example this is blocking me from working on various other issues we should upgrade to which is the currently supported version
| 1
|
82,146
| 3,603,460,687
|
IssuesEvent
|
2016-02-03 19:08:45
|
umts/pvta-multiplatform
|
https://api.github.com/repos/umts/pvta-multiplatform
|
closed
|
Search Drops Routes
|
bug high-priority
|
Somewhere, in the last couple of updates to master, a bug as been introduced that is isolated to the SearchController.
When it must download all routes, it does so (and properly so, according to my testing). At some time between when the ```$resource``` callback fires and when we enter the for loop in ```prepareRoutes```, a few routes get lost, namely the **30 and 33** (and one or two others I believe).
There should be 44 routes returned from ```[avail]/getallroutes```, and my initial testing shows that they are all present at the moment the callback fires, but are suddenly not once we get a few lines into ```prepareRoutes```. At the moment, I'm dumbfounded as to why.
Interestingly, if I introduce more than one call to ```prepareRoutes``` in the ```$resource``` callback, all calls **after the first** seem to properly handle every route in ```prepareRoutes```, which first led me to believe that it's an async error where the callback hasn't populated the entire ```routes``` variable before we send it off to ```prepareRoutes```. But, then I remember that testing the number of routes on the **first** line in the callback produces the correct result, so I'm confused once more.
**Steps to reproduce**
1. Checkout master and be sure you're up to date.
2. Fire up a server.
3. Navigate from My Buses to Search.
4. Attempt to search for the 30 or 33.
|
1.0
|
Search Drops Routes - Somewhere, in the last couple of updates to master, a bug has been introduced that is isolated to the SearchController.
When it must download all routes, it does so (and properly so, according to my testing). At some time between when the ```$resource``` callback fires and when we enter the for loop in ```prepareRoutes```, a few routes get lost, namely the **30 and 33** (and one or two others I believe).
There should be 44 routes returned from ```[avail]/getallroutes```, and my initial testing shows that they are all present at the moment the callback fires, but are suddenly not once we get a few lines into ```prepareRoutes```. At the moment, I'm dumbfounded as to why.
Interestingly, if I introduce more than one call to ```prepareRoutes``` in the ```$resource``` callback, all calls **after the first** seem to properly handle every route in ```prepareRoutes```, which first led me to believe that it's an async error where the callback hasn't populated the entire ```routes``` variable before we send it off to ```prepareRoutes```. But, then I remember that testing the number of routes on the **first** line in the callback produces the correct result, so I'm confused once more.
**Steps to reproduce**
1. Checkout master and be sure you're up to date.
2. Fire up a server.
3. Navigate from My Buses to Search.
4. Attempt to search for the 30 or 33.
|
non_process
|
search drops routes somewhere in the last couple of updates to master a bug as been introduced that is isolated to the searchcontroller when it must download all routes it does so and properly so according to my testing at some time between when the resource callback fires and when we enter the for loop in prepareroutes a few routes get lost namely the and and one or two others i believe there should be routes returned from getallroutes and my initial testing shows that they are all present at the moment the callback fires but are suddenly not once we get a few lines into prepareroutes at the moment i m dumbfounded as to why interestingly if i introduce more than one call to prepareroutes in the resource callback all calls after the first seem to properly handle every route in prepareroutes which first lead me to believe that it s a asynch error where the callback hasn t populated the entire routes variable before we send it off to prepareroutes but then i remember that testing the number of routes on the first line in the callback produces the correct result so i m confused once more steps to reproduce checkout master and be sure you re up to date fire up a server navigate from my buses to search attempt to search for the or
| 0
|
22,579
| 31,805,417,816
|
IssuesEvent
|
2023-09-13 13:41:23
|
GSA/EDX
|
https://api.github.com/repos/GSA/EDX
|
closed
|
Update personal access token for GitHub Workflow (September 2023)
|
process
|
For the EDXPROJECT_TOKEN to automate the issue workflow (adding it to EDX's Inbox in its Kanban board)
Instructions:
- Click on your user icon at the top right
- Click settings
- Scroll to bottom, click "Developer Settings"
- Under personal access tokens, click tokens classic
- You want to update the EDXPROJECT_TOKEN one with your updated API Key
- Go back to the EDX GitHub repo
- Go to settings tab --> secrets and variables --> actions --> update the EDXPROJECT_TOKEN --> paste in the API key
|
1.0
|
Update personal access token for GitHub Workflow (September 2023) - For the EDXPROJECT_TOKEN to automate the issue workflow (adding it to EDX's Inbox in its Kanban board)
Instructions:
- Click on your user icon at the top right
- Click settings
- Scroll to bottom, click "Developer Settings"
- Under personal access tokens, click tokens classic
- You want to update the EDXPROJECT_TOKEN one with your updated API Key
- Go back to the EDX GitHub repo
- Go to settings tab --> secrets and variables --> actions --> update the EDXPROJECT_TOKEN --> paste in the API key
|
process
|
update personal access token for github workflow september for the edxproject token to automate the issue workflow adding it to edx s inbox in its kanban board instructions click on your user icon at the top right click settings scroll to bottom click developer settings under personal access tokens click tokens classic you want to update the edxproject token one with your updated api key go back to the edx github repo go to settings tab secrets and variables actions update the edxproject token paste in the api key
| 1
|
86,657
| 8,042,451,967
|
IssuesEvent
|
2018-07-31 08:12:01
|
ClassicWoW/Nefarian_1.12.1_Bugtracker
|
https://api.github.com/repos/ClassicWoW/Nefarian_1.12.1_Bugtracker
|
closed
|
Fall damage without falling
|
Core Sonstiges Mehr Input/Recherche/Tests nötig
|
After recently hearing about this twice in TeamSpeak (once from a player somewhere in Mulgore, after he had picked up the Darkmoon Faire buff) — people taking fall damage out of the blue — I have now found such a spot myself.
In the Badlands at 45,67 I died twice today within just a few (7-8) laps.
So the bug is reproducible there, presumably with a bit of luck.
I have done those laps dozens of times in recent months while farming Elemental Earth without dying.
Otherwise this little problem has not occurred to me anywhere in the last 2.5 years.
http://imgur.com/a/gTQr3
Without searching for sources on this, I assume it should not behave this way.
|
1.0
|
Fall damage without falling - After recently hearing about this twice in TeamSpeak (once from a player somewhere in Mulgore, after he had picked up the Darkmoon Faire buff) — people taking fall damage out of the blue — I have now found such a spot myself.
In the Badlands at 45,67 I died twice today within just a few (7-8) laps.
So the bug is reproducible there, presumably with a bit of luck.
I have done those laps dozens of times in recent months while farming Elemental Earth without dying.
Otherwise this little problem has not occurred to me anywhere in the last 2.5 years.
http://imgur.com/a/gTQr3
Without searching for sources on this, I assume it should not behave this way.
|
non_process
|
fallschaden ohne fallen nachdem ich im ts kürzlich mal davon gehört hab x davon war ein spieler irgendwo in mulgore nachdem er sich den jahrmarktbuff geholt hatte dass leute aus dem heiteren himmel fallschaden bekommen haben habe ich nun auch eine solche stellte gefunden im ödland bei bin ich heute innerhalb weniger runden mal gestorben somit ist der bug dort vermutlich mit etwas glück reproduzierbar die runden dort habe ich in den letzten monaten schon dutzendfach gedreht beim elementarerde farmen ohne zu sterben auch ansonsten ist mir dieses problemchen in den letzten jahren nirgends vorgekommen ohne nach quellen dazu zu suchen gehe ich davon aus dass es sich so nicht verhalten sollte
| 0
|
154,013
| 5,907,066,037
|
IssuesEvent
|
2017-05-19 16:39:53
|
18F/web-design-standards
|
https://api.github.com/repos/18F/web-design-standards
|
closed
|
Interview the team at archives.gov about WDS adoption
|
[Priority] Minor [Skill] Content [Skill] User Experience [Type] Communication
|
Interview the team at archives.gov about their recent launch using the Standards, and post it on the 18F blog and the Standards News and Updates.
## Description
- [x] Reach out to them to see if they're game
- [x] Set an interview date and time
- [ ] Interview and take notes
- [ ] Send to them for their edits/comms approval (A future task will be created for final publishing after they return the q&a)
|
1.0
|
Interview the team at archives.gov about WDS adoption - Interview the team at archives.gov about their recent launch using the Standards, and post it on the 18F blog and the Standards News and Updates.
## Description
- [x] Reach out to them to see if they're game
- [x] Set an interview date and time
- [ ] Interview and take notes
- [ ] Send to them for their edits/comms approval (A future task will be created for final publishing after they return the q&a)
|
non_process
|
interview the team at archives gov about wds adoption interview the team at archives gov about their recent launch using the standards and post it on the blog and the standards news and updates description reach out to them to see if they re game set an interview date and time interview and take notes send to them for their edits comms approval a future task will be created for final publishing after they return the q a
| 0
|
21,471
| 29,504,362,802
|
IssuesEvent
|
2023-06-03 05:35:57
|
bazelbuild/bazel
|
https://api.github.com/repos/bazelbuild/bazel
|
closed
|
Release X.Y.Z - $MONTH $YEAR
|
P1 type: process release team-OSS
|
# Status of Bazel X.Y.Z
- Expected first release candidate date: [date]
- Expected release date: [date]
- [List of release blockers](link-to-milestone)
To report a release-blocking bug, please add a comment with the text `@bazel-io flag` to the issue. A release manager will triage it and add it to the milestone.
To cherry-pick a mainline commit into X.Y.Z, simply send a PR against the `release-X.Y.Z` branch.
**Task list:**
<!-- The first item is only needed for major releases (X.0.0) -->
- [x] Pick release baseline: [link to base commit]
- [x] Create release candidate: X.Y.Zrc1
- [x] Check downstream projects
- [x] Create [draft release announcement](https://docs.google.com/document/d/1pu2ARPweOCTxPsRR8snoDtkC9R51XWRyBXeiC6Ql5so/edit) <!-- Note that there should be a new Bazel Release Announcement document for every major release. For minor and patch releases, use the latest open doc. -->
- [x] Send the release announcement PR for review: [link to bazel-blog PR] <!-- Only for major releases. -->
- [x] Push the release and notify package maintainers: [link to comment notifying package maintainers]
- [x] Update the documentation
- [x] Push the blog post: [link to blog post] <!-- Only for major releases. -->
- [x] Update the [release page](https://github.com/bazelbuild/bazel/releases/)
|
1.0
|
Release X.Y.Z - $MONTH $YEAR - # Status of Bazel X.Y.Z
- Expected first release candidate date: [date]
- Expected release date: [date]
- [List of release blockers](link-to-milestone)
To report a release-blocking bug, please add a comment with the text `@bazel-io flag` to the issue. A release manager will triage it and add it to the milestone.
To cherry-pick a mainline commit into X.Y.Z, simply send a PR against the `release-X.Y.Z` branch.
**Task list:**
<!-- The first item is only needed for major releases (X.0.0) -->
- [x] Pick release baseline: [link to base commit]
- [x] Create release candidate: X.Y.Zrc1
- [x] Check downstream projects
- [x] Create [draft release announcement](https://docs.google.com/document/d/1pu2ARPweOCTxPsRR8snoDtkC9R51XWRyBXeiC6Ql5so/edit) <!-- Note that there should be a new Bazel Release Announcement document for every major release. For minor and patch releases, use the latest open doc. -->
- [x] Send the release announcement PR for review: [link to bazel-blog PR] <!-- Only for major releases. -->
- [x] Push the release and notify package maintainers: [link to comment notifying package maintainers]
- [x] Update the documentation
- [x] Push the blog post: [link to blog post] <!-- Only for major releases. -->
- [x] Update the [release page](https://github.com/bazelbuild/bazel/releases/)
|
process
|
release x y z month year status of bazel x y z expected first release candidate date expected release date link to milestone to report a release blocking bug please add a comment with the text bazel io flag to the issue a release manager will triage it and add it to the milestone to cherry pick a mainline commit into x y z simply send a pr against the release x y z branch task list pick release baseline create release candidate x y check downstream projects create send the release announcement pr for review push the release and notify package maintainers update the documentation push the blog post update the
| 1
|
626,831
| 19,844,590,266
|
IssuesEvent
|
2022-01-21 03:38:35
|
openmsupply/mobile
|
https://api.github.com/repos/openmsupply/mobile
|
opened
|
Tracking mobile last login date/time
|
Feature Priority: unconfirmed
|
## Is your feature request related to a problem? Please describe.
Just putting this in here as it's been requested/asked about (although not confirmed as needed for anyone just yet).
User(s) have requested to see user logins on Dashboard, which includes both Desktop and mobile.
However - it doesn't look like we currently track/log this anywhere for mobile.
## Describe the solution you'd like
TBD
## Implementation
TBD
## Describe alternatives you've considered
Not implementing
## Additional context
- There is a lastLogin field for users, although this currently isn't populated and doesn't get synced to Desktop.
- Mobile does both online and offline logins (in the case of connection failure with Desktop) - though this probably doesn't matter to end users...
|
1.0
|
Tracking mobile last login date/time - ## Is your feature request related to a problem? Please describe.
Just putting this in here as it's been requested/asked about (although not confirmed as needed for anyone just yet).
User(s) have requested to see user logins on Dashboard, which includes both Desktop and mobile.
However - it doesn't look like we currently track/log this anywhere for mobile.
## Describe the solution you'd like
TBD
## Implementation
TBD
## Describe alternatives you've considered
Not implementing
## Additional context
- There is a lastLogin field for users, although this currently isn't populated and doesn't get synced to Desktop.
- Mobile does both online and offline logins (in the case of connection failure with Desktop) - though this probably doesn't matter to end users...
|
non_process
|
tracking mobile last login date time is your feature request related to a problem please describe just putting this in here as it s been requested asked about although not confirmed as needed for anyone just yet user s have requested to see user logins on dashboard which includes both desktop and mobile however it doesn t look like we currently track log this anywhere for mobile describe the solution you d like tbd implementation tbd describe alternatives you ve considered not implementing additional context there is a lastlogin field for users although this currently isn t populated and doesn t get synced to desktop mobile does both online and offline logins in the case of connection failure with desktop though this probably doesn t matter to end users
| 0
|
19,238
| 25,390,364,055
|
IssuesEvent
|
2022-11-22 03:06:38
|
bazelbuild/bazel
|
https://api.github.com/repos/bazelbuild/bazel
|
closed
|
[Mirror] zulu jdk 17s
|
P2 type: process team-OSS mirror request
|
### Please list the URLs of the archives you'd like to mirror:
https://cdn.azul.com/zulu/bin/zulu17.38.21-ca-jdk17.0.5-win_aarch64.zip
https://cdn.azul.com/zulu/bin/zulu17.38.21-ca-jdk17.0.5-linux_x64.tar.gz
https://cdn.azul.com/zulu/bin/zulu17.38.21-ca-jdk17.0.5-linux_aarch64.tar.gz
https://cdn.azul.com/zulu/bin/zulu17.38.21-ca-jdk17.0.5-macosx_x64.tar.gz
https://cdn.azul.com/zulu/bin/zulu17.38.21-ca-jdk17.0.5-macosx_aarch64.tar.gz
https://cdn.azul.com/zulu/bin/zulu17.38.21-ca-jdk17.0.5-win_x64.zip
|
1.0
|
[Mirror] zulu jdk 17s - ### Please list the URLs of the archives you'd like to mirror:
https://cdn.azul.com/zulu/bin/zulu17.38.21-ca-jdk17.0.5-win_aarch64.zip
https://cdn.azul.com/zulu/bin/zulu17.38.21-ca-jdk17.0.5-linux_x64.tar.gz
https://cdn.azul.com/zulu/bin/zulu17.38.21-ca-jdk17.0.5-linux_aarch64.tar.gz
https://cdn.azul.com/zulu/bin/zulu17.38.21-ca-jdk17.0.5-macosx_x64.tar.gz
https://cdn.azul.com/zulu/bin/zulu17.38.21-ca-jdk17.0.5-macosx_aarch64.tar.gz
https://cdn.azul.com/zulu/bin/zulu17.38.21-ca-jdk17.0.5-win_x64.zip
|
process
|
zulu jdk please list the urls of the archives you d like to mirror
| 1
|
94,059
| 10,789,824,242
|
IssuesEvent
|
2019-11-05 12:49:32
|
xvitaly/srcrepair
|
https://api.github.com/repos/xvitaly/srcrepair
|
closed
|
Add documentation for cleanup.xml
|
documentation
|
Describe whatever you want to be implemented in SRC Repair in future:
Add documentation for cleanup.xml.
|
1.0
|
Add documentation for cleanup.xml - Describe whatever you want to be implemented in SRC Repair in future:
Add documentation for cleanup.xml.
|
non_process
|
add documentation for cleanup xml describe whatever you want to be implemented in src repair in future add documentation for cleanup xml
| 0
|
255,541
| 19,306,813,201
|
IssuesEvent
|
2021-12-13 12:26:32
|
localstack/localstack
|
https://api.github.com/repos/localstack/localstack
|
closed
|
Documentation Update for WebUI env var
|
documentation pr-requested
|
<!-- Love localstack? Please consider supporting our collective:
:point_right: https://opencollective.com/localstack/donate -->
# Type of request: Documentation
[x] bug report
[ ] feature request
# Detailed description
The following in [README](https://github.com/localstack/localstack/blob/master/README.md)
```
PORT_WEB_UI: Port for the Web user interface / dashboard (default: 8080). Note that the Web UI is now deprecated, and requires to use the localstack/localstack-full Docker image.
```
States about web UI, for while now a different image has to be used
and
the following
```
START_WEB: Flag to control whether the Web UI should be started in Docker (values: 0/1; default: 1).
```
this line forms some ambiguity.
## Expected behavior
Documentation may require an update for the above change, as in: is the env var `START_WEB` even required when we aren't using localstack-full
## Actual behavior
# Steps to reproduce NA
## Command used to start LocalStack
NA
## Client code (AWS SDK code snippet, or sequence of "awslocal" commands)
## Note - Would like to heartily thank all the devs and contributors you guys rock cheers
┆Issue is synchronized with this [Jira Bug](https://localstack.atlassian.net/browse/LOC-290) by [Unito](https://www.unito.io/learn-more)
|
1.0
|
Documentation Update for WebUI env var - <!-- Love localstack? Please consider supporting our collective:
:point_right: https://opencollective.com/localstack/donate -->
# Type of request: Documentation
[x] bug report
[ ] feature request
# Detailed description
The following in [README](https://github.com/localstack/localstack/blob/master/README.md)
```
PORT_WEB_UI: Port for the Web user interface / dashboard (default: 8080). Note that the Web UI is now deprecated, and requires to use the localstack/localstack-full Docker image.
```
States about web UI, for while now a different image has to be used
and
the following
```
START_WEB: Flag to control whether the Web UI should be started in Docker (values: 0/1; default: 1).
```
this line forms some ambiguity.
## Expected behavior
Documentation may require an update for the above change, as in: is the env var `START_WEB` even required when we aren't using localstack-full
## Actual behavior
# Steps to reproduce NA
## Command used to start LocalStack
NA
## Client code (AWS SDK code snippet, or sequence of "awslocal" commands)
## Note - Would like to heartily thank all the devs and contributors you guys rock cheers
┆Issue is synchronized with this [Jira Bug](https://localstack.atlassian.net/browse/LOC-290) by [Unito](https://www.unito.io/learn-more)
|
non_process
|
documentation update for webui env var love localstack please consider supporting our collective point right type of request documentation bug report feature request detailed description the following in port web ui port for the web user interface dashboard default note that the web ui is now deprecated and requires to use the localstack localstack full docker image states about web ui for while now a different image has to be used and the following start web flag to control whether the web ui should be started in docker values default this line forms some ambiguity expected behavior documentation may require an update for the above change as in the env var start web is it even required when we aren t using localstack full actual behavior steps to reproduce na command used to start localstack na client code aws sdk code snippet or sequence of awslocal commands note would like to heartily thank all the devs and contributors you guys rock cheers ┆issue is synchronized with this by
| 0
|
10,792
| 13,609,042,300
|
IssuesEvent
|
2020-09-23 04:06:15
|
googleapis/java-cloud-bom
|
https://api.github.com/repos/googleapis/java-cloud-bom
|
closed
|
Dependency Dashboard
|
type: process
|
This issue contains a list of Renovate updates and their statuses.
## Open
These updates have all been created already. Click a checkbox below to force a retry/rebase of any.
- [ ] <!-- rebase-branch=renovate/com.google.googlejavaformat-google-java-format-1.x -->deps: update dependency com.google.googlejavaformat:google-java-format to v1.9
---
- [ ] <!-- manual job -->Check this box to trigger a request for Renovate to run again on this repository
|
1.0
|
Dependency Dashboard - This issue contains a list of Renovate updates and their statuses.
## Open
These updates have all been created already. Click a checkbox below to force a retry/rebase of any.
- [ ] <!-- rebase-branch=renovate/com.google.googlejavaformat-google-java-format-1.x -->deps: update dependency com.google.googlejavaformat:google-java-format to v1.9
---
- [ ] <!-- manual job -->Check this box to trigger a request for Renovate to run again on this repository
|
process
|
dependency dashboard this issue contains a list of renovate updates and their statuses open these updates have all been created already click a checkbox below to force a retry rebase of any deps update dependency com google googlejavaformat google java format to check this box to trigger a request for renovate to run again on this repository
| 1
|
6,508
| 9,595,510,491
|
IssuesEvent
|
2019-05-09 16:13:11
|
pytorch/pytorch
|
https://api.github.com/repos/pytorch/pytorch
|
closed
|
Sharing GPU tensors among different processes using billiard
|
awaiting response (this tag is deprecated) module: multiprocessing triaged
|
## 🚀 Feature
<!-- A clear and concise description of the feature proposal -->
Currently, pytorch only supports sharing tensors among multiple GPUs using `torch.multiprocessing`. Could you extend this support to `billiard` ( since the taskpool of `billiard` is often found to be more sophisticated and used widely -- ex. `celery` )?
## Motivation
<!-- Please outline the motivation for the proposal. Is your feature request related to a problem? e.g., I'm always frustrated when [...]. If this is related to another GitHub issue, please link here too -->
Being able to use `celery` -- which uses `billiard` -- s.t. processes in the task pool read from shared CUDA tensor, instead of each having a unique copy.
## Pitch
<!-- A clear and concise description of what you want to happen. -->
## Alternatives
<!-- A clear and concise description of any alternative solutions or features you've considered, if any. -->
## Additional context
<!-- Add any other context or screenshots about the feature request here. -->
|
1.0
|
Sharing GPU tensors among different processes using billiard - ## 🚀 Feature
<!-- A clear and concise description of the feature proposal -->
Currently, pytorch only supports sharing tensors among multiple GPUs using `torch.multiprocessing`. Could you extend this support to `billiard` ( since the taskpool of `billiard` is often found to be more sophisticated and used widely -- ex. `celery` )?
## Motivation
<!-- Please outline the motivation for the proposal. Is your feature request related to a problem? e.g., I'm always frustrated when [...]. If this is related to another GitHub issue, please link here too -->
Being able to use `celery` -- which uses `billiard` -- s.t. processes in the task pool read from shared CUDA tensor, instead of each having a unique copy.
## Pitch
<!-- A clear and concise description of what you want to happen. -->
## Alternatives
<!-- A clear and concise description of any alternative solutions or features you've considered, if any. -->
## Additional context
<!-- Add any other context or screenshots about the feature request here. -->
|
process
|
sharing gpu tensors among different processes using billiard 🚀 feature currently pytorch only supports sharing tensors among multiple gpus using torch multiprocessing could you extend this support to billiard since the taskpool of billiard is often found to be more sophisticated and used widely ex celery motivation being able to use celery which uses billiard s t processes in the task pool read from shared cuda tensor instead of each having a unique copy pitch alternatives additional context
| 1
|
20,086
| 26,586,391,152
|
IssuesEvent
|
2023-01-23 02:00:08
|
lizhihao6/get-daily-arxiv-noti
|
https://api.github.com/repos/lizhihao6/get-daily-arxiv-noti
|
opened
|
New submissions for Mon, 23 Jan 23
|
event camera white balance isp compression image signal processing image signal process raw raw image events camera color contrast events AWB
|
## Keyword: events
There is no result
## Keyword: event camera
There is no result
## Keyword: events camera
There is no result
## Keyword: white balance
There is no result
## Keyword: color contrast
There is no result
## Keyword: AWB
There is no result
## Keyword: ISP
### Unsupervised Light Field Depth Estimation via Multi-view Feature Matching with Occlusion Prediction
- **Authors:** Shansi Zhang, Nan Meng, Edmund Y. Lam
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV)
- **Arxiv link:** https://arxiv.org/abs/2301.08433
- **Pdf link:** https://arxiv.org/pdf/2301.08433
- **Abstract**
Depth estimation from light field (LF) images is a fundamental step for some applications. Recently, learning-based methods have achieved higher accuracy and efficiency than the traditional methods. However, it is costly to obtain sufficient depth labels for supervised training. In this paper, we propose an unsupervised framework to estimate depth from LF images. First, we design a disparity estimation network (DispNet) with a coarse-to-fine structure to predict disparity maps from different view combinations by performing multi-view feature matching to learn the correspondences more effectively. As occlusions may cause the violation of photo-consistency, we design an occlusion prediction network (OccNet) to predict the occlusion maps, which are used as the element-wise weights of photometric loss to solve the occlusion issue and assist the disparity learning. With the disparity maps estimated by multiple input combinations, we propose a disparity fusion strategy based on the estimated errors with effective occlusion handling to obtain the final disparity map. Experimental results demonstrate that our method achieves superior performance on both the dense and sparse LF images, and also has better generalization ability to the real-world LF images.
## Keyword: image signal processing
There is no result
## Keyword: image signal process
There is no result
## Keyword: compression
There is no result
## Keyword: RAW
There is no result
## Keyword: raw image
There is no result
|
2.0
|
New submissions for Mon, 23 Jan 23 - ## Keyword: events
There is no result
## Keyword: event camera
There is no result
## Keyword: events camera
There is no result
## Keyword: white balance
There is no result
## Keyword: color contrast
There is no result
## Keyword: AWB
There is no result
## Keyword: ISP
### Unsupervised Light Field Depth Estimation via Multi-view Feature Matching with Occlusion Prediction
- **Authors:** Shansi Zhang, Nan Meng, Edmund Y. Lam
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV)
- **Arxiv link:** https://arxiv.org/abs/2301.08433
- **Pdf link:** https://arxiv.org/pdf/2301.08433
- **Abstract**
Depth estimation from light field (LF) images is a fundamental step for some applications. Recently, learning-based methods have achieved higher accuracy and efficiency than the traditional methods. However, it is costly to obtain sufficient depth labels for supervised training. In this paper, we propose an unsupervised framework to estimate depth from LF images. First, we design a disparity estimation network (DispNet) with a coarse-to-fine structure to predict disparity maps from different view combinations by performing multi-view feature matching to learn the correspondences more effectively. As occlusions may cause the violation of photo-consistency, we design an occlusion prediction network (OccNet) to predict the occlusion maps, which are used as the element-wise weights of photometric loss to solve the occlusion issue and assist the disparity learning. With the disparity maps estimated by multiple input combinations, we propose a disparity fusion strategy based on the estimated errors with effective occlusion handling to obtain the final disparity map. Experimental results demonstrate that our method achieves superior performance on both the dense and sparse LF images, and also has better generalization ability to the real-world LF images.
## Keyword: image signal processing
There is no result
## Keyword: image signal process
There is no result
## Keyword: compression
There is no result
## Keyword: RAW
There is no result
## Keyword: raw image
There is no result
|
process
|
new submissions for mon jan keyword events there is no result keyword event camera there is no result keyword events camera there is no result keyword white balance there is no result keyword color contrast there is no result keyword awb there is no result keyword isp unsupervised light field depth estimation via multi view feature matching with occlusion prediction authors shansi zhang nan meng edmund y lam subjects computer vision and pattern recognition cs cv arxiv link pdf link abstract depth estimation from light field lf images is a fundamental step for some applications recently learning based methods have achieved higher accuracy and efficiency than the traditional methods however it is costly to obtain sufficient depth labels for supervised training in this paper we propose an unsupervised framework to estimate depth from lf images first we design a disparity estimation network dispnet with a coarse to fine structure to predict disparity maps from different view combinations by performing multi view feature matching to learn the correspondences more effectively as occlusions may cause the violation of photo consistency we design an occlusion prediction network occnet to predict the occlusion maps which are used as the element wise weights of photometric loss to solve the occlusion issue and assist the disparity learning with the disparity maps estimated by multiple input combinations we propose a disparity fusion strategy based on the estimated errors with effective occlusion handling to obtain the final disparity map experimental results demonstrate that our method achieves superior performance on both the dense and sparse lf images and also has better generalization ability to the real world lf images keyword image signal processing there is no result keyword image signal process there is no result keyword compression there is no result keyword raw there is no result keyword raw image there is no result
| 1
|
5,261
| 8,056,267,844
|
IssuesEvent
|
2018-08-02 12:09:41
|
aiidateam/aiida_core
|
https://api.github.com/repos/aiidateam/aiida_core
|
closed
|
Implement an automatic exponential backoff retry mechanism for transport tasks
|
priority/critical-blocking topic/JobCalculationAndProcess topic/Workflows type/accepted feature
|
This is a sub issue of making `JobProcess` more robust with respect to exceptions occurring in tasks that run over a transport (see #1814).
|
1.0
|
Implement an automatic exponential backoff retry mechanism for transport tasks - This is a sub issue of making `JobProcess` more robust with respect to exceptions occurring in tasks that run over a transport (see #1814).
|
process
|
implement an automatic exponential backoff retry mechanism for transport tasks this is a sub issue of making jobprocess more robust with respect to exceptions occurring in tasks that run over a transport see
| 1
|
288,648
| 31,861,534,778
|
IssuesEvent
|
2023-09-15 11:17:31
|
nidhi7598/linux-v4.19.72_CVE-2022-3564
|
https://api.github.com/repos/nidhi7598/linux-v4.19.72_CVE-2022-3564
|
opened
|
WS-2021-0334 (High) detected in linuxlinux-4.19.294
|
Mend: dependency security vulnerability
|
## WS-2021-0334 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>linuxlinux-4.19.294</b></p></summary>
<p>
<p>The Linux Kernel</p>
<p>Library home page: <a href=https://mirrors.edge.kernel.org/pub/linux/kernel/v4.x/?wsslib=linux>https://mirrors.edge.kernel.org/pub/linux/kernel/v4.x/?wsslib=linux</a></p>
<p>Found in HEAD commit: <a href="https://github.com/nidhi7598/linux-v4.19.72_CVE-2022-3564/commit/9ffee08efa44c7887e2babb8f304df0fa1094efb">9ffee08efa44c7887e2babb8f304df0fa1094efb</a></p>
<p>Found in base branch: <b>main</b></p></p>
</details>
</p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Source Files (2)</summary>
<p></p>
<p>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/net/netfilter/nf_synproxy_core.c</b>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/net/netfilter/nf_synproxy_core.c</b>
</p>
</details>
<p></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png?' width=19 height=20> Vulnerability Details</summary>
<p>
Linux/Kernel in versions v5.13-rc1 to v5.13-rc6 is vulnerable to out of bounds when parsing TCP options
<p>Publish Date: 2021-05-31
<p>URL: <a href=https://github.com/gregkh/linux/commit/6defc77d48eff74075b80ad5925061b2fc010d98>WS-2021-0334</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>8.4</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://osv.dev/vulnerability/UVI-2021-1000919">https://osv.dev/vulnerability/UVI-2021-1000919</a></p>
<p>Release Date: 2021-05-31</p>
<p>Fix Resolution: v5.4.128</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
True
|
WS-2021-0334 (High) detected in linuxlinux-4.19.294 - ## WS-2021-0334 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>linuxlinux-4.19.294</b></p></summary>
<p>
<p>The Linux Kernel</p>
<p>Library home page: <a href=https://mirrors.edge.kernel.org/pub/linux/kernel/v4.x/?wsslib=linux>https://mirrors.edge.kernel.org/pub/linux/kernel/v4.x/?wsslib=linux</a></p>
<p>Found in HEAD commit: <a href="https://github.com/nidhi7598/linux-v4.19.72_CVE-2022-3564/commit/9ffee08efa44c7887e2babb8f304df0fa1094efb">9ffee08efa44c7887e2babb8f304df0fa1094efb</a></p>
<p>Found in base branch: <b>main</b></p></p>
</details>
</p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Source Files (2)</summary>
<p></p>
<p>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/net/netfilter/nf_synproxy_core.c</b>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/net/netfilter/nf_synproxy_core.c</b>
</p>
</details>
<p></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png?' width=19 height=20> Vulnerability Details</summary>
<p>
Linux/Kernel in versions v5.13-rc1 to v5.13-rc6 is vulnerable to out of bounds when parsing TCP options
<p>Publish Date: 2021-05-31
<p>URL: <a href=https://github.com/gregkh/linux/commit/6defc77d48eff74075b80ad5925061b2fc010d98>WS-2021-0334</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>8.4</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://osv.dev/vulnerability/UVI-2021-1000919">https://osv.dev/vulnerability/UVI-2021-1000919</a></p>
<p>Release Date: 2021-05-31</p>
<p>Fix Resolution: v5.4.128</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
non_process
|
ws high detected in linuxlinux ws high severity vulnerability vulnerable library linuxlinux the linux kernel library home page a href found in head commit a href found in base branch main vulnerable source files net netfilter nf synproxy core c net netfilter nf synproxy core c vulnerability details linux kernel in versions to is vulnerable to out of bounds when parsing tcp options publish date url a href cvss score details base score metrics exploitability metrics attack vector local attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution step up your open source security game with mend
| 0
|
191,981
| 14,597,911,793
|
IssuesEvent
|
2020-12-20 22:19:08
|
github-vet/rangeloop-pointer-findings
|
https://api.github.com/repos/github-vet/rangeloop-pointer-findings
|
closed
|
knative/serving-operator: pkg/reconciler/knativeserving/common/certs_test.go; 5 LoC
|
fresh test tiny
|
Found a possible issue in [knative/serving-operator](https://www.github.com/knative/serving-operator) at [pkg/reconciler/knativeserving/common/certs_test.go](https://github.com/knative/serving-operator/blob/1699811963d09d6597ec335222e28d539fa555d7/pkg/reconciler/knativeserving/common/certs_test.go#L111-L115)
Below is the message reported by the analyzer for this snippet of code. Beware that the analyzer only reports the first
issue it finds, so please do not limit your consideration to the contents of the below message.
> function call which takes a reference to tt at line 113 may start a goroutine
[Click here to see the code in its original context.](https://github.com/knative/serving-operator/blob/1699811963d09d6597ec335222e28d539fa555d7/pkg/reconciler/knativeserving/common/certs_test.go#L111-L115)
<details>
<summary>Click here to show the 5 line(s) of Go which triggered the analyzer.</summary>
```go
for _, tt := range customCertsTests {
t.Run(tt.name, func(t *testing.T) {
runCustomCertsTransformTest(t, &tt)
})
}
```
</details>
Leave a reaction on this issue to contribute to the project by classifying this instance as a **Bug** :-1:, **Mitigated** :+1:, or **Desirable Behavior** :rocket:
See the descriptions of the classifications [here](https://github.com/github-vet/rangeclosure-findings#how-can-i-help) for more information.
commit ID: 1699811963d09d6597ec335222e28d539fa555d7
|
1.0
|
knative/serving-operator: pkg/reconciler/knativeserving/common/certs_test.go; 5 LoC -
Found a possible issue in [knative/serving-operator](https://www.github.com/knative/serving-operator) at [pkg/reconciler/knativeserving/common/certs_test.go](https://github.com/knative/serving-operator/blob/1699811963d09d6597ec335222e28d539fa555d7/pkg/reconciler/knativeserving/common/certs_test.go#L111-L115)
Below is the message reported by the analyzer for this snippet of code. Beware that the analyzer only reports the first
issue it finds, so please do not limit your consideration to the contents of the below message.
> function call which takes a reference to tt at line 113 may start a goroutine
[Click here to see the code in its original context.](https://github.com/knative/serving-operator/blob/1699811963d09d6597ec335222e28d539fa555d7/pkg/reconciler/knativeserving/common/certs_test.go#L111-L115)
<details>
<summary>Click here to show the 5 line(s) of Go which triggered the analyzer.</summary>
```go
for _, tt := range customCertsTests {
t.Run(tt.name, func(t *testing.T) {
runCustomCertsTransformTest(t, &tt)
})
}
```
</details>
Leave a reaction on this issue to contribute to the project by classifying this instance as a **Bug** :-1:, **Mitigated** :+1:, or **Desirable Behavior** :rocket:
See the descriptions of the classifications [here](https://github.com/github-vet/rangeclosure-findings#how-can-i-help) for more information.
commit ID: 1699811963d09d6597ec335222e28d539fa555d7
|
non_process
|
knative serving operator pkg reconciler knativeserving common certs test go loc found a possible issue in at below is the message reported by the analyzer for this snippet of code beware that the analyzer only reports the first issue it finds so please do not limit your consideration to the contents of the below message function call which takes a reference to tt at line may start a goroutine click here to show the line s of go which triggered the analyzer go for tt range customcertstests t run tt name func t testing t runcustomcertstransformtest t tt leave a reaction on this issue to contribute to the project by classifying this instance as a bug mitigated or desirable behavior rocket see the descriptions of the classifications for more information commit id
| 0
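The analyzer finding in the record above — a closure capturing the range variable `tt` and possibly outliving its iteration via a goroutine — is the classic range-loop capture pitfall (in Go before 1.22, commonly fixed by shadowing the loop variable, `tt := tt`). The same late-binding behavior, and the analogous per-iteration binding fix, can be sketched in Python; this example is purely illustrative and not taken from the knative code:

```python
# Late binding: every closure reads the loop variable when called,
# so all of them see its final value (2).
callbacks = [lambda: i for i in range(3)]
assert [f() for f in callbacks] == [2, 2, 2]

# Fix: bind the current value at each iteration via a default argument,
# analogous to shadowing the loop variable ("tt := tt") in Go.
fixed = [lambda i=i: i for i in range(3)]
assert [f() for f in fixed] == [0, 1, 2]
```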
|
3,757
| 6,733,486,620
|
IssuesEvent
|
2017-10-18 14:58:06
|
cypress-io/cypress
|
https://api.github.com/repos/cypress-io/cypress
|
opened
|
Update contributing with instructions on how to run tests in the driver package
|
first-timers-only process: contributing
|
*General note:* The best source of truth in figuring out how to run tests for each package is our `circle.yml` file found in the main `cypress` directory. The tasks defined in our `cypress.yml` are all run before anything is deployed.
*To run end-to-end tests for the `driver` package from the Cypress Test Runner:*
- In the `cypress` directory, run `npm install` & `npm start`.
- When the Cypress Test Runner opens, manually add the directory `cypress/packages/driver/test`.
- In the `cypress/packages/driver` directory, run `npm start`.
- Click into the `test` directory from the Cypress Test Runner.
- Select any test file you want to run.
*To run end-to-end tests in the `driver` package from the terminal:*
- In the `cypress` directory: run `npm install`.
- In the `cypress/packages/driver` directory, run `npm start` & `npm run test-integration`.
- The Cypress Test Runner should spawn and run through each test file individually.
|
1.0
|
Update contributing with instructions on how to run tests in the driver package - *General note:* The best source of truth in figuring out how to run tests for each package is our `circle.yml` file found in the main `cypress` directory. The tasks defined in our `cypress.yml` are all run before anything is deployed.
*To run end-to-end tests for the `driver` package from the Cypress Test Runner:*
- In the `cypress` directory, run `npm install` & `npm start`.
- When the Cypress Test Runner opens, manually add the directory `cypress/packages/driver/test`.
- In the `cypress/packages/driver` directory, run `npm start`.
- Click into the `test` directory from the Cypress Test Runner.
- Select any test file you want to run.
*To run end-to-end tests in the `driver` package from the terminal:*
- In the `cypress` directory: run `npm install`.
- In the `cypress/packages/driver` directory, run `npm start` & `npm run test-integration`.
- The Cypress Test Runner should spawn and run through each test file individually.
|
process
|
update contributing with instructions on how to run tests in the driver package general note the best source of truth in figuring out how to run tests for each package is our circle yml file found in the main cypress directory the tasks defined in our cypress yml are all run before anything is deployed to run end to end tests for the driver package from the cypress test runner in the cypress directory run npm install npm start when the cypress test runner opens manually add the directory cypress packages driver test in the cypress packages driver directory run npm start click into the test directory from the cypress test runner select any test file you want to run to run end to end tests in the driver package from the terminal in the cypress directory run npm install in the cypress packages driver directory run npm start npm run test integration the cypress test runner should spawn and run through each test file individually
| 1
|
11,354
| 14,172,858,732
|
IssuesEvent
|
2020-11-12 17:30:32
|
googleapis/python-asset
|
https://api.github.com/repos/googleapis/python-asset
|
closed
|
Asset: 'test_export_assets' flakes
|
api: cloudasset type: process
|
/cc @gaogaogiraffe (test added in PR googleapis/google-cloud-python#8613, configuration updated in googleapis/google-cloud-python#8627)
From [this Kokoro failure today](https://source.cloud.google.com/results/invocations/0ddf2264-a3dd-4e44-956b-594c87bb1add/targets/cloud-devrel%2Fclient-libraries%2Fgoogle-cloud-python%2Fpresubmit%2Fasset/log):
```python
___________________ TestVPCServiceControl.test_export_assets ___________________
self = <test_vpcsc.TestVPCServiceControl object at 0x7fe8f53fa9d0>
@pytest.mark.skipif(
PROJECT_INSIDE is None, reason="Missing environment variable: PROJECT_ID"
)
@pytest.mark.skipif(
PROJECT_OUTSIDE is None,
reason="Missing environment variable: GOOGLE_CLOUD_TESTS_VPCSC_OUTSIDE_PERIMETER_PROJECT",
)
def test_export_assets(self):
client = asset_v1.AssetServiceClient()
output_config = {}
parent_inside = "projects/" + PROJECT_INSIDE
delayed_inside = lambda: client.export_assets(parent_inside, output_config)
parent_outside = "projects/" + PROJECT_OUTSIDE
delayed_outside = lambda: client.export_assets(parent_outside, output_config)
> TestVPCServiceControl._do_test(delayed_inside, delayed_outside)
tests/system/test_vpcsc.py:67:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
delayed_inside = <function <lambda> at 0x7fe8f5348b90>
delayed_outside = <function <lambda> at 0x7fe8f5348c08>
@staticmethod
def _do_test(delayed_inside, delayed_outside):
if IS_INSIDE_VPCSC.lower() == "true":
> assert TestVPCServiceControl._is_rejected(delayed_outside)
E assert False
E + where False = <function _is_rejected at 0x7fe8f3031de8>(<function <lambda> at 0x7fe8f5348c08>)
E + where <function _is_rejected at 0x7fe8f3031de8> = TestVPCServiceControl._is_rejected
tests/system/test_vpcsc.py:47: AssertionError
```
|
1.0
|
Asset: 'test_export_assets' flakes - /cc @gaogaogiraffe (test added in PR googleapis/google-cloud-python#8613, configuration updated in googleapis/google-cloud-python#8627)
From [this Kokoro failure today](https://source.cloud.google.com/results/invocations/0ddf2264-a3dd-4e44-956b-594c87bb1add/targets/cloud-devrel%2Fclient-libraries%2Fgoogle-cloud-python%2Fpresubmit%2Fasset/log):
```python
___________________ TestVPCServiceControl.test_export_assets ___________________
self = <test_vpcsc.TestVPCServiceControl object at 0x7fe8f53fa9d0>
@pytest.mark.skipif(
PROJECT_INSIDE is None, reason="Missing environment variable: PROJECT_ID"
)
@pytest.mark.skipif(
PROJECT_OUTSIDE is None,
reason="Missing environment variable: GOOGLE_CLOUD_TESTS_VPCSC_OUTSIDE_PERIMETER_PROJECT",
)
def test_export_assets(self):
client = asset_v1.AssetServiceClient()
output_config = {}
parent_inside = "projects/" + PROJECT_INSIDE
delayed_inside = lambda: client.export_assets(parent_inside, output_config)
parent_outside = "projects/" + PROJECT_OUTSIDE
delayed_outside = lambda: client.export_assets(parent_outside, output_config)
> TestVPCServiceControl._do_test(delayed_inside, delayed_outside)
tests/system/test_vpcsc.py:67:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
delayed_inside = <function <lambda> at 0x7fe8f5348b90>
delayed_outside = <function <lambda> at 0x7fe8f5348c08>
@staticmethod
def _do_test(delayed_inside, delayed_outside):
if IS_INSIDE_VPCSC.lower() == "true":
> assert TestVPCServiceControl._is_rejected(delayed_outside)
E assert False
E + where False = <function _is_rejected at 0x7fe8f3031de8>(<function <lambda> at 0x7fe8f5348c08>)
E + where <function _is_rejected at 0x7fe8f3031de8> = TestVPCServiceControl._is_rejected
tests/system/test_vpcsc.py:47: AssertionError
```
|
process
|
asset test export assets flakes cc gaogaogiraffe test added in pr googleapis google cloud python configuration updated in googleapis google cloud python from python testvpcservicecontrol test export assets self pytest mark skipif project inside is none reason missing environment variable project id pytest mark skipif project outside is none reason missing environment variable google cloud tests vpcsc outside perimeter project def test export assets self client asset assetserviceclient output config parent inside projects project inside delayed inside lambda client export assets parent inside output config parent outside projects project outside delayed outside lambda client export assets parent outside output config testvpcservicecontrol do test delayed inside delayed outside tests system test vpcsc py delayed inside at delayed outside at staticmethod def do test delayed inside delayed outside if is inside vpcsc lower true assert testvpcservicecontrol is rejected delayed outside e assert false e where false at e where testvpcservicecontrol is rejected tests system test vpcsc py assertionerror
| 1
|
730,254
| 25,165,624,515
|
IssuesEvent
|
2022-11-10 20:33:23
|
webcompat/web-bugs
|
https://api.github.com/repos/webcompat/web-bugs
|
closed
|
mp3cut.net - Unsupported feature using Firefox
|
browser-firefox browser-safari priority-normal severity-important type-unsupported action-needssitepatch engine-gecko version100 diagnosis-priority-p2
|
<!-- @browser: Firefox 100 -->
<!-- @ua_header: Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/101.0.4951.41 Safari/537.36 Edg/101.0.1210.32 -->
<!-- @reported_with: unknown -->
<!-- @public_url: https://github.com/webcompat/web-bugs/issues/103844 -->
**URL**: https://mp3cut.net/es/
**Browser / Version**: Firefox 100
**Operating System**: Windows 11
**Tested Another Browser**: Yes Firefox
**Problem type**: Something else
**Description**: Audio pitch dont work in Firefox Browser marked as an "old" Browser
**Steps to Reproduce**:
This website allow you to edit mp3 files and change many paramenter one of them are the audio pitch, this work on any Chromium web browser but not on Firefox. they say "This browser dont support this feature"
<details>
<summary>View the screenshot</summary>
<img alt="Screenshot" src="https://webcompat.com/uploads/2022/5/19292291-7180-4762-b045-d45256a39ba4.jpg">
</details>
<details>
<summary>Browser Configuration</summary>
<ul>
<li>None</li>
</ul>
</details>
_From [webcompat.com](https://webcompat.com/) with ❤️_
|
2.0
|
mp3cut.net - Unsupported feature using Firefox - <!-- @browser: Firefox 100 -->
<!-- @ua_header: Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/101.0.4951.41 Safari/537.36 Edg/101.0.1210.32 -->
<!-- @reported_with: unknown -->
<!-- @public_url: https://github.com/webcompat/web-bugs/issues/103844 -->
**URL**: https://mp3cut.net/es/
**Browser / Version**: Firefox 100
**Operating System**: Windows 11
**Tested Another Browser**: Yes Firefox
**Problem type**: Something else
**Description**: Audio pitch dont work in Firefox Browser marked as an "old" Browser
**Steps to Reproduce**:
This website allow you to edit mp3 files and change many paramenter one of them are the audio pitch, this work on any Chromium web browser but not on Firefox. they say "This browser dont support this feature"
<details>
<summary>View the screenshot</summary>
<img alt="Screenshot" src="https://webcompat.com/uploads/2022/5/19292291-7180-4762-b045-d45256a39ba4.jpg">
</details>
<details>
<summary>Browser Configuration</summary>
<ul>
<li>None</li>
</ul>
</details>
_From [webcompat.com](https://webcompat.com/) with ❤️_
|
non_process
|
net unsupported feature using firefox url browser version firefox operating system windows tested another browser yes firefox problem type something else description audio pitch dont work in firefox browser marked as an old browser steps to reproduce this website allow you to edit files and change many paramenter one of them are the audio pitch this work on any chromium web browser but not on firefox they say this browser dont support this feature view the screenshot img alt screenshot src browser configuration none from with ❤️
| 0
|
6,983
| 10,131,472,165
|
IssuesEvent
|
2019-08-01 19:38:05
|
toggl/mobileapp
|
https://api.github.com/repos/toggl/mobileapp
|
closed
|
Create a manual PR template (to copy and paste) to submit full translation PRs
|
process
|
Each translation should have one issue that starts it (format specified on #4835) and one PR with the format specified on this issue. (To be included in the docs folder or somewhere else, maybe a translations folder).
The PR template should contain a checklist with some tasks to be done, both on android & iOS, including some actual testing.
The checklist should make sure file is in the right format, check usage of whitespaces, there's no contents in the entries comments, follows the guide defined on #4834, and maybe something more.
The PR should also make sure that the right people are notified and the translation has permission to enter our code base.
**The PR template is only a file that will exist in our docs or translations folder and will be copy/pasted to create the PR for the translations**
|
1.0
|
Create a manual PR template (to copy and paste) to submit full translation PRs - Each translation should have one issue that starts it (format specified on #4835) and one PR with the format specified on this issue. (To be included in the docs folder or somewhere else, maybe a translations folder).
The PR template should contain a checklist with some tasks to be done, both on android & iOS, including some actual testing.
The checklist should make sure file is in the right format, check usage of whitespaces, there's no contents in the entries comments, follows the guide defined on #4834, and maybe something more.
The PR should also make sure that the right people are notified and the translation has permission to enter our code base.
**The PR template is only a file that will exist in our docs or translations folder and will be copy/pasted to create the PR for the translations**
|
process
|
create a manual pr template to copy and paste to submit full translation prs each translation should have one issue that starts it format specified on and one pr with the format specified on this issue to be included in the docs folder or somewhere else maybe a translations folder the pr template should contain a checklist with some tasks to be done both on android ios including some actual testing the checklist should make sure file is in the right format check usage of whitespaces there s no contents in the entries comments follows the guide defined on and maybe something more the pr should also make sure that the right people are notified and the translation has permission to enter our code base the pr template is only a file that will exist in our docs or translations folder and will be copy pasted to create the pr for the translations
| 1
|
10,875
| 13,644,944,285
|
IssuesEvent
|
2020-09-25 19:50:20
|
GenderMagProject/GenderMagRecordersAssistant
|
https://api.github.com/repos/GenderMagProject/GenderMagRecordersAssistant
|
closed
|
Back Button
|
Enhancement Learning by Process vs. by Tinkering Medium Priority
|
* **Description** (Example: feature to add exit button)
Add back button to every step
* **Describe the feature to add (or improvement)** (Example: there is no right way to exit from the application)
Add a back button to every step of the process (especially during subgoal stage in order to go back and edit scenario) so that the user can go back at any point.
* **New feature fit for the project: how does your idea fit with the aim/scope of the project?** (Example: feature gives the workflow)
Allows user to backtrack if they make a wrong move
* **Merits: what are the merits of the feature?** (Example: prevent data loss)
Reduces chance of user giving up right away
* **Screenshot, if you can show something similar or a sketch.**

* **Any other information.**
This issue aligns with the GenderMag facet of Learning: by Process vs. by Tinkering because a tinkerer would want to backtrack in order to fix any mistakes.
|
1.0
|
Back Button - * **Description** (Example: feature to add exit button)
Add back button to every step
* **Describe the feature to add (or improvement)** (Example: there is no right way to exit from the application)
Add a back button to every step of the process (especially during subgoal stage in order to go back and edit scenario) so that the user can go back at any point.
* **New feature fit for the project: how does your idea fit with the aim/scope of the project?** (Example: feature gives the workflow)
Allows user to backtrack if they make a wrong move
* **Merits: what are the merits of the feature?** (Example: prevent data loss)
Reduces chance of user giving up right away
* **Screenshot, if you can show something similar or a sketch.**

* **Any other information.**
This issue aligns with the GenderMag facet of Learning: by Process vs. by Tinkering because a tinkerer would want to backtrack in order to fix any mistakes.
|
process
|
back button description example feature to add exit button add back button to every step describe the feature to add or improvement example there is no right way to exit from the application add a back button to every step of the process especially during subgoal stage in order to go back and edit scenario so that the user can go back at any point new feature fit for the project how does your idea fit with the aim scope of the project example feature gives the workflow allows user to backtrack if they make a wrong move merits what are the merits of the feature example prevent data loss reduces chance of user giving up right away screenshot if you can show something similar or a sketch any other information this issue aligns with the gendermag facet of learning by process vs by tinkering because a tinkerer would want to backtrack in order to fix any mistakes
| 1
|
1,042
| 3,322,734,414
|
IssuesEvent
|
2015-11-09 15:43:45
|
HTBox/allReady
|
https://api.github.com/repos/HTBox/allReady
|
closed
|
As an anomyous user, I want to be able to volunteer for an activity, so I can help the overall campaign
|
P2 requirement volunteer workflow web app
|
Needs to be fully documented and then issues logged for any gaps
|
1.0
|
As an anomyous user, I want to be able to volunteer for an activity, so I can help the overall campaign - Needs to be fully documented and then issues logged for any gaps
|
non_process
|
as an anomyous user i want to be able to volunteer for an activity so i can help the overall campaign needs to be fully documented and then issues logged for any gaps
| 0
|
3,975
| 6,905,624,108
|
IssuesEvent
|
2017-11-27 08:06:19
|
Hurence/logisland
|
https://api.github.com/repos/Hurence/logisland
|
closed
|
Processor to inject in Solr/SolrCloud
|
feature processor
|
Solr is also widely used as a powerful search engine. Would be good to also support it for a wider Logisland adoption.
|
1.0
|
Processor to inject in Solr/SolrCloud - Solr is also widely used as a powerful search engine. Would be good to also support it for a wider Logisland adoption.
|
process
|
processor to inject in solr solrcloud solr is also widely used as a powerful search engine would be good to also support it for a wider logisland adoption
| 1
|
40,080
| 20,512,435,005
|
IssuesEvent
|
2022-03-01 08:18:41
|
JuliaMolSim/DFTK.jl
|
https://api.github.com/repos/JuliaMolSim/DFTK.jl
|
closed
|
Improve density computation
|
performance
|
In #434 it became clear that we should take a look at getting density computations faster.
|
True
|
Improve density computation - In #434 it became clear that we should take a look at getting density computations faster.
|
non_process
|
improve density computation in it became clear that we should take a look at getting density computations faster
| 0
|
29,678
| 11,768,043,099
|
IssuesEvent
|
2020-03-15 08:09:46
|
efremropelato/cesium-coreui-react
|
https://api.github.com/repos/efremropelato/cesium-coreui-react
|
closed
|
WS-2019-0333 (Medium) detected in handlebars-4.1.2.tgz
|
security vulnerability
|
## WS-2019-0333 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>handlebars-4.1.2.tgz</b></p></summary>
<p>Handlebars provides the power necessary to let you build semantic templates effectively with no frustration</p>
<p>Library home page: <a href="https://registry.npmjs.org/handlebars/-/handlebars-4.1.2.tgz">https://registry.npmjs.org/handlebars/-/handlebars-4.1.2.tgz</a></p>
<p>Path to dependency file: /tmp/ws-scm/cesium-coreui-react/frontend/package.json</p>
<p>Path to vulnerable library: /tmp/ws-scm/cesium-coreui-react/frontend/node_modules/handlebars/package.json</p>
<p>
Dependency Hierarchy:
- jest-24.7.1.tgz (Root Library)
- jest-cli-24.9.0.tgz
- core-24.9.0.tgz
- reporters-24.9.0.tgz
- istanbul-reports-2.2.6.tgz
- :x: **handlebars-4.1.2.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/efremropelato/cesium-coreui-react/commit/8c766291aacf616f0f787849563a1a20989f6c61">8c766291aacf616f0f787849563a1a20989f6c61</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Prototype Pollution vulnerability found in handlebars 1.0.6 before 4.5.3. It is possible to add or modify properties to the Object prototype through a malicious template. Attacker may crash the application or execute Arbitrary Code in specific conditions.
<p>Publish Date: 2019-12-05
<p>URL: <a href=https://github.com/wycats/handlebars.js/commit/f7f05d7558e674856686b62a00cde5758f3b7a08>WS-2019-0333</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 2 Score Details (<b>5.0</b>)</summary>
<p>
Base Score Metrics not available</p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://www.npmjs.com/advisories/1325">https://www.npmjs.com/advisories/1325</a></p>
<p>Release Date: 2019-12-05</p>
<p>Fix Resolution: handlebars - 4.5.3</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
True
|
WS-2019-0333 (Medium) detected in handlebars-4.1.2.tgz - ## WS-2019-0333 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>handlebars-4.1.2.tgz</b></p></summary>
<p>Handlebars provides the power necessary to let you build semantic templates effectively with no frustration</p>
<p>Library home page: <a href="https://registry.npmjs.org/handlebars/-/handlebars-4.1.2.tgz">https://registry.npmjs.org/handlebars/-/handlebars-4.1.2.tgz</a></p>
<p>Path to dependency file: /tmp/ws-scm/cesium-coreui-react/frontend/package.json</p>
<p>Path to vulnerable library: /tmp/ws-scm/cesium-coreui-react/frontend/node_modules/handlebars/package.json</p>
<p>
Dependency Hierarchy:
- jest-24.7.1.tgz (Root Library)
- jest-cli-24.9.0.tgz
- core-24.9.0.tgz
- reporters-24.9.0.tgz
- istanbul-reports-2.2.6.tgz
- :x: **handlebars-4.1.2.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/efremropelato/cesium-coreui-react/commit/8c766291aacf616f0f787849563a1a20989f6c61">8c766291aacf616f0f787849563a1a20989f6c61</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Prototype Pollution vulnerability found in handlebars 1.0.6 before 4.5.3. It is possible to add or modify properties to the Object prototype through a malicious template. Attacker may crash the application or execute Arbitrary Code in specific conditions.
<p>Publish Date: 2019-12-05
<p>URL: <a href=https://github.com/wycats/handlebars.js/commit/f7f05d7558e674856686b62a00cde5758f3b7a08>WS-2019-0333</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 2 Score Details (<b>5.0</b>)</summary>
<p>
Base Score Metrics not available</p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://www.npmjs.com/advisories/1325">https://www.npmjs.com/advisories/1325</a></p>
<p>Release Date: 2019-12-05</p>
<p>Fix Resolution: handlebars - 4.5.3</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
non_process
|
ws medium detected in handlebars tgz ws medium severity vulnerability vulnerable library handlebars tgz handlebars provides the power necessary to let you build semantic templates effectively with no frustration library home page a href path to dependency file tmp ws scm cesium coreui react frontend package json path to vulnerable library tmp ws scm cesium coreui react frontend node modules handlebars package json dependency hierarchy jest tgz root library jest cli tgz core tgz reporters tgz istanbul reports tgz x handlebars tgz vulnerable library found in head commit a href vulnerability details prototype pollution vulnerability found in handlebars before it is possible to add or modify properties to the object prototype through a malicious template attacker may crash the application or execute arbitrary code in specific conditions publish date url a href cvss score details base score metrics not available suggested fix type upgrade version origin a href release date fix resolution handlebars step up your open source security game with whitesource
| 0
|
3,032
| 6,037,552,144
|
IssuesEvent
|
2017-06-09 18:58:10
|
yahoo/fili
|
https://api.github.com/repos/yahoo/fili
|
opened
|
Indicate external vs. internal interfaces via annotation
|
EXTENSIBILITY IDEA ONBOARDING PROCESS TOOLING
|
Similar to `@ForTesting`, we can annotate the external code interfaces (ie. the ones we expect people to call and we'll try not to mess with) in some fashion. Gradle does something similar to this, and possibly so does Guava.
What the annotation is, I do not know, but some sort of annotation.
|
1.0
|
Indicate external vs. internal interfaces via annotation - Similar to `@ForTesting`, we can annotate the external code interfaces (ie. the ones we expect people to call and we'll try not to mess with) in some fashion. Gradle does something similar to this, and possibly so does Guava.
What the annotation is, I do not know, but some sort of annotation.
|
process
|
indicate external vs internal interfaces via annotation similar to fortesting we can annotate the external code interfaces ie the ones we expect people to call and we ll try not to mess with in some fashion gradle does something similar to this and possibly so does guava what the annotation is i do not know but some sort of annotation
| 1
|
248,803
| 18,858,120,124
|
IssuesEvent
|
2021-11-12 09:24:26
|
tenebrius1/pe
|
https://api.github.com/repos/tenebrius1/pe
|
opened
|
[UG] Missing a glossary for non-technical users
|
type.DocumentationBug severity.Low
|
I believe that a glossary would be good for users who are not technically inclined, perhaps for phrases like "System clipboard" mentioned in the share command section.

<!--session: 1636703577616-b03902ef-aacc-4641-8676-e6800c2f1eaa-->
<!--Version: Web v3.4.1-->
|
1.0
|
[UG] Missing a glossary for non-technical users - I believe that a glossary would be good for users who are not technically inclined, perhaps for phrases like "System clipboard" mentioned in the share command section.

<!--session: 1636703577616-b03902ef-aacc-4641-8676-e6800c2f1eaa-->
<!--Version: Web v3.4.1-->
|
non_process
|
missing a glossary for non technical users i believe that a glossary would be good for users who are not technically inclined perhaps for phrases like system clipboard mentioned in the share command section
| 0
|
241,240
| 18,437,485,728
|
IssuesEvent
|
2021-10-14 14:26:45
|
CommonsBuild/Gravity
|
https://api.github.com/repos/CommonsBuild/Gravity
|
opened
|
Creating tickets for conflict management
|
documentation
|
**Organization**
TEC
**Start Date. (D/M/Y)**
13/11/2020
**Input mechanism**
Typeform (CS)
**Graviton in charge**
Juan
**Transformational action considered**
Promote communication to solve the misunderstanding
**Dispute continued?**
N
**Notes**
Misunderstanding around Book clubs
**Additional actions**
No
**Evidence**
Private conversations
|
1.0
|
Creating tickets for conflict management - **Organization**
TEC
**Start Date. (D/M/Y)**
13/11/2020
**Input mechanism**
Typeform (CS)
**Graviton in charge**
Juan
**Transformational action considered**
Promote communication to solve the misunderstanding
**Dispute continued?**
N
**Notes**
Misunderstanding around Book clubs
**Additional actions**
No
**Evidence**
Private conversations
|
non_process
|
creating tickets for conflict management organization tec start date d m y input mechanism typeform cs graviton in charge juan transformational action considered promote communication to solve the misunderstanding dispute continued n notes misunderstanding around book clubs additional actions no evidence private conversations
| 0
|
388,198
| 11,484,616,434
|
IssuesEvent
|
2020-02-11 04:29:00
|
jcgood/kpaamcam
|
https://api.github.com/repos/jcgood/kpaamcam
|
closed
|
Gatekeepers
|
low-priority
|
Implement Gatekeepers to be able to turn on and off specific functionality to allow for a more stable master release.
|
1.0
|
Gatekeepers - Implement Gatekeepers to be able to turn on and off specific functionality to allow for a more stable master release.
|
non_process
|
gatekeepers implement gatekeepers to be able to turn on and off specific functionality to allow for a more stable master release
| 0
|
380,727
| 26,429,652,059
|
IssuesEvent
|
2023-01-14 16:54:29
|
Perl/perl5
|
https://api.github.com/repos/Perl/perl5
|
closed
|
[doc] sort docs say the compare function must return an integer, but it doesn't
|
documentation
|
**Where**
<!-- What module, script or perldoc URL needs to be fixed? -->
perldoc -f sort
**Description**
<!-- Please describe the documentation issue here. -->
`perldoc -f sort` says
> If SUBNAME is specified, it gives the name of a subroutine that returns an integer less than, equal to, or greater than 0, depending on how the elements of the list are to be ordered.
As far as I can tell, it only needs to be a numeric value, and does not need to be an integer.
I'll be glad to submit a PR if that's easiest.
----
Demo to show to myself that random non-integers work just fine.
```
use List::Util qw( shuffle );
my @x = shuffle 1..500;
say join( ', ', @x );
@x = sort floatcompare @x;
say join( ', ', @x );
sub floatcompare {
if ( $a < $b ) {
return -rand(1000) * 0.01;
}
elsif ( $a > $b ) {
return rand(1000) * 0.01;
}
return 0;
}
```
|
1.0
|
[doc] sort docs say the compare function must return an integer, but it doesn't - **Where**
<!-- What module, script or perldoc URL needs to be fixed? -->
perldoc -f sort
**Description**
<!-- Please describe the documentation issue here. -->
`perldoc -f sort` says
> If SUBNAME is specified, it gives the name of a subroutine that returns an integer less than, equal to, or greater than 0, depending on how the elements of the list are to be ordered.
As far as I can tell, it only needs to be a numeric value, and does not need to be an integer.
I'll be glad to submit a PR if that's easiest.
----
Demo to show to myself that random non-integers work just fine.
```
use List::Util qw( shuffle );
my @x = shuffle 1..500;
say join( ', ', @x );
@x = sort floatcompare @x;
say join( ', ', @x );
sub floatcompare {
if ( $a < $b ) {
return -rand(1000) * 0.01;
}
elsif ( $a > $b ) {
return rand(1000) * 0.01;
}
return 0;
}
```
|
non_process
|
sort docs say the compare function must return an integer but it doesn t where perldoc f sort description perldoc f sort says if subname is specified it gives the name of a subroutine that returns an integer less than equal to or greater than depending on how the elements of the list are to be ordered as far as i can tell it only needs to be a numeric value and does not need to be an integer i ll be glad to submit a pr if that s easiest demo to show to myself that random non integers work just fine use list util qw shuffle my x shuffle say join x x sort floatcompare x say join x sub floatcompare if a b return rand elsif a b return rand return
| 0
|
48,893
| 6,114,469,518
|
IssuesEvent
|
2017-06-22 01:20:45
|
quicwg/base-drafts
|
https://api.github.com/repos/quicwg/base-drafts
|
closed
|
Version fields in transport parameters for NewSessionTicket
|
-transport design
|
There were a few errors in #512. Foremost of those was that the presence or absence of version fields in the transport parameters wasn't defined for NewSessionTicket. We don't need version fields in that message.
|
1.0
|
Version fields in transport parameters for NewSessionTicket - There were a few errors in #512. Foremost of those was that the presence or absence of version fields in the transport parameters wasn't defined for NewSessionTicket. We don't need version fields in that message.
|
non_process
|
version fields in transport parameters for newsessionticket there were a few errors in foremost of those was that the presence or absence of version fields in the transport parameters wasn t defined for newsessionticket we don t need version fields in that message
| 0
|
2,885
| 5,848,820,988
|
IssuesEvent
|
2017-05-10 21:51:33
|
qgis/QGIS-Documentation
|
https://api.github.com/repos/qgis/QGIS-Documentation
|
closed
|
[FEATURE] Make processing dissolve algorithm accept multiple fields
|
HackFest Processing Text User Manual
|
Original commit: https://github.com/qgis/QGIS/commit/7853aa1861521091ed3b15e9bfa77548cc275e02 by nyalldawson
This allows you to dissolve based on more than one field value
(cherry-picked from bb54b4f41a726737d5d28a71632ed29a9a8045df)
|
1.0
|
[FEATURE] Make processing dissolve algorithm accept multiple fields - Original commit: https://github.com/qgis/QGIS/commit/7853aa1861521091ed3b15e9bfa77548cc275e02 by nyalldawson
This allows you to dissolve based on more than one field value
(cherry-picked from bb54b4f41a726737d5d28a71632ed29a9a8045df)
|
process
|
make processing dissolve algorithm accept multiple fields original commit by nyalldawson this allows you to dissolve based on more than one field value cherry picked from
| 1
|
4,024
| 6,955,861,790
|
IssuesEvent
|
2017-12-07 09:28:32
|
ElliotAOram/GhostPyramid
|
https://api.github.com/repos/ElliotAOram/GhostPyramid
|
closed
|
Feature 2: Background subtraction
|
Image Processing
|
# Feature Description
This feature will remove the background from the videofeed to ensure that there is a black background for the hologram and only the foreground is of interest.
# Tasks
* Entry tasks:
* Spike work into how to perform background subtraction
* Design feature:
* [ ] Consult overall model and decide if change is required
* [ ] correct method stub for background subtraction in VideoProcessor
* Build by feature:
* [ ] Write tests for feature
* [ ] Write code to pass tests
* [ ] Refactor where required
|
1.0
|
Feature 2: Background subtraction - # Feature Description
This feature will remove the background from the videofeed to ensure that there is a black background for the hologram and only the foreground is of interest.
# Tasks
* Entry tasks:
* Spike work into how to perform background subtraction
* Design feature:
* [ ] Consult overall model and decide if change is required
* [ ] correct method stub for background subtraction in VideoProcessor
* Build by feature:
* [ ] Write tests for feature
* [ ] Write code to pass tests
* [ ] Refactor where required
|
process
|
feature background subtraction feature description this feature will remove the background from the videofeed to ensure that there is a black background for the hologram and only the foreground is of interest tasks entry tasks spike work into how to perform background subtraction design feature consult overall model and decide if change is required correct method stub for background subtraction in videoprocessor build by feature write tests for feature write code to pass tests refactor where required
| 1
|
102,699
| 16,579,046,122
|
IssuesEvent
|
2021-05-31 09:10:51
|
AlexRogalskiy/javascript-tools
|
https://api.github.com/repos/AlexRogalskiy/javascript-tools
|
opened
|
CVE-2021-33623 (Medium) detected in trim-newlines-3.0.0.tgz
|
security vulnerability
|
## CVE-2021-33623 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>trim-newlines-3.0.0.tgz</b></p></summary>
<p>Trim newlines from the start and/or end of a string</p>
<p>Library home page: <a href="https://registry.npmjs.org/trim-newlines/-/trim-newlines-3.0.0.tgz">https://registry.npmjs.org/trim-newlines/-/trim-newlines-3.0.0.tgz</a></p>
<p>Path to dependency file: javascript-tools/package.json</p>
<p>Path to vulnerable library: javascript-tools/node_modules/trim-newlines/package.json</p>
<p>
Dependency Hierarchy:
- release-notes-generator-9.0.1.tgz (Root Library)
- conventional-changelog-writer-4.1.0.tgz
- meow-8.1.2.tgz
- :x: **trim-newlines-3.0.0.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/AlexRogalskiy/javascript-tools/commit/1b604aed156e21b63797eedfaacc70cd23cf15c8">1b604aed156e21b63797eedfaacc70cd23cf15c8</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
The trim-newlines package before 3.0.1 and 4.x before 4.0.1 for Node.js has an issue related to regular expression denial-of-service (ReDoS) for the .end() method.
<p>Publish Date: 2021-05-28
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-33623>CVE-2021-33623</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.3</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: Low
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-33623">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-33623</a></p>
<p>Release Date: 2021-05-28</p>
<p>Fix Resolution: trim-newlines - 3.0.1, 4.0.1</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
True
|
CVE-2021-33623 (Medium) detected in trim-newlines-3.0.0.tgz - ## CVE-2021-33623 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>trim-newlines-3.0.0.tgz</b></p></summary>
<p>Trim newlines from the start and/or end of a string</p>
<p>Library home page: <a href="https://registry.npmjs.org/trim-newlines/-/trim-newlines-3.0.0.tgz">https://registry.npmjs.org/trim-newlines/-/trim-newlines-3.0.0.tgz</a></p>
<p>Path to dependency file: javascript-tools/package.json</p>
<p>Path to vulnerable library: javascript-tools/node_modules/trim-newlines/package.json</p>
<p>
Dependency Hierarchy:
- release-notes-generator-9.0.1.tgz (Root Library)
- conventional-changelog-writer-4.1.0.tgz
- meow-8.1.2.tgz
- :x: **trim-newlines-3.0.0.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/AlexRogalskiy/javascript-tools/commit/1b604aed156e21b63797eedfaacc70cd23cf15c8">1b604aed156e21b63797eedfaacc70cd23cf15c8</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
The trim-newlines package before 3.0.1 and 4.x before 4.0.1 for Node.js has an issue related to regular expression denial-of-service (ReDoS) for the .end() method.
<p>Publish Date: 2021-05-28
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-33623>CVE-2021-33623</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.3</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: Low
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-33623">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-33623</a></p>
<p>Release Date: 2021-05-28</p>
<p>Fix Resolution: trim-newlines - 3.0.1, 4.0.1</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
non_process
|
cve medium detected in trim newlines tgz cve medium severity vulnerability vulnerable library trim newlines tgz trim newlines from the start and or end of a string library home page a href path to dependency file javascript tools package json path to vulnerable library javascript tools node modules trim newlines package json dependency hierarchy release notes generator tgz root library conventional changelog writer tgz meow tgz x trim newlines tgz vulnerable library found in head commit a href vulnerability details the trim newlines package before and x before for node js has an issue related to regular expression denial of service redos for the end method publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact low availability impact none for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution trim newlines step up your open source security game with whitesource
| 0
|
15,543
| 19,703,501,368
|
IssuesEvent
|
2022-01-12 19:07:52
|
googleapis/java-grafeas
|
https://api.github.com/repos/googleapis/java-grafeas
|
opened
|
Your .repo-metadata.json file has a problem 🤒
|
type: process repo-metadata: lint
|
You have a problem with your .repo-metadata.json file:
Result of scan 📈:
* release_level must be equal to one of the allowed values in .repo-metadata.json
* api_shortname 'grafeas' invalid in .repo-metadata.json
☝️ Once you correct these problems, you can close this issue.
Reach out to **go/github-automation** if you have any questions.
|
1.0
|
Your .repo-metadata.json file has a problem 🤒 - You have a problem with your .repo-metadata.json file:
Result of scan 📈:
* release_level must be equal to one of the allowed values in .repo-metadata.json
* api_shortname 'grafeas' invalid in .repo-metadata.json
☝️ Once you correct these problems, you can close this issue.
Reach out to **go/github-automation** if you have any questions.
|
process
|
your repo metadata json file has a problem 🤒 you have a problem with your repo metadata json file result of scan 📈 release level must be equal to one of the allowed values in repo metadata json api shortname grafeas invalid in repo metadata json ☝️ once you correct these problems you can close this issue reach out to go github automation if you have any questions
| 1
|
18,056
| 24,067,639,172
|
IssuesEvent
|
2022-09-17 18:22:23
|
COS301-SE-2022/Pure-LoRa-Tracking
|
https://api.github.com/repos/COS301-SE-2022/Pure-LoRa-Tracking
|
closed
|
(data): service to feed AI, data from DB
|
(system) Database (system) Server (role) data engineer (system) AI (bus) processing
|
On the CRON confirming data is ready for the AI, extract the data and send to the AI for further processing
|
1.0
|
(data): service to feed AI, data from DB - On the CRON confirming data is ready for the AI, extract the data and send to the AI for further processing
|
process
|
data service to feed ai data from db on the cron confirming data is ready for the ai extract the data and send to the ai for further processing
| 1
|
16,411
| 21,191,486,633
|
IssuesEvent
|
2022-04-08 17:57:48
|
cypress-io/cypress
|
https://api.github.com/repos/cypress-io/cypress
|
closed
|
Replace deprecated dependencies
|
stage: work in progress type: chore process: dependencies
|
### Current behavior:
Upon install of packages when developing in Cypress there are some deprecation warning that should probably be addressed (not really covered with renovatebot)
```
npm WARN deprecated coffee-script@1.12.5: CoffeeScript on NPM has moved to "coffeescript" (no hyphen)
npm WARN deprecated core-js@2.6.11: core-js@<3 is no longer maintained and not recommended for usage due to the number of issues. Please, upgrade your dependencies to the actual version of core-js@3.
npm WARN deprecated fs-promise@1.0.0: Use mz or fs-extra^3.0 with Promise Support
npm WARN deprecated coffee-script@1.11.1: CoffeeScript on NPM has moved to "coffeescript" (no hyphen)
npm WARN deprecated nomnom@1.8.1: Package no longer supported. Contact support@npmjs.com for more info.
npm WARN deprecated babel-preset-es2015@6.24.1: 🙌 Thanks for using Babel: we recommend using babel-preset-env now: please read babeljs.io/env to update!
npm WARN deprecated minimatch@2.0.10: Please update to minimatch 3.0.2 or higher to avoid a RegExp DoS issue
npm WARN deprecated coffee-script@1.9.3: CoffeeScript on NPM has moved to "coffeescript" (no hyphen)
npm WARN deprecated coffee-script@1.12.7: CoffeeScript on NPM has moved to "coffeescript" (no hyphen)
npm WARN deprecated github@11.0.0: 'github' has been renamed to '@octokit/rest' (https://git.io/vNB11)
npm WARN deprecated popper.js@1.16.1: You can find the new Popper v2 at @popperjs/core, this package is dedicated to the legacy v1
npm WARN deprecated jade@0.26.3: Jade has been renamed to pug, please install the latest version of pug instead of jade
npm WARN deprecated tar.gz@0.1.1: ⚠️ WARNING ⚠️ tar.gz module has been deprecated and your application is vulnerable. Please use tar module instead: https://npmjs.com/tar
npm WARN deprecated node-uuid@1.4.8: Use uuid module instead
npm WARN deprecated hawk@3.1.3: This module moved to @hapi/hawk. Please make sure to switch over as this distribution is no longer supported and may contain bugs and critical security issues.
npm WARN deprecated cross-spawn-async@2.2.5: cross-spawn no longer requires a build toolchain, use it instead
npm WARN deprecated circular-json@0.5.9: CircularJSON is in maintenance only, flatted is its successor.
npm WARN deprecated request@2.88.2: request has been deprecated, see https://github.com/request/request/issues/3142
npm WARN deprecated samsam@1.1.3: This package has been deprecated in favour of @sinonjs/samsam
npm WARN deprecated postinstall-build@2.1.3: postinstall-build's behavior is now built into npm! You should migrate off of postinstall-build and use the new `prepare` lifecycle script with npm 5.0.0 or greater.
npm WARN deprecated gulp-util@3.0.8: gulp-util is deprecated - replace it, following the guidelines at https://medium.com/gulpjs/gulp-util-ca3b1f9f9ac5
npm WARN deprecated superagent@3.8.3: Please note that v5.0.1+ of superagent removes User-Agent header by default, therefore you may need to add it yourself (e.g. GitHub blocks requests without a User-Agent header). This notice will go away with v5.0.2+ once it is released.
```
<img width="1415" alt="Screen Shot 2020-03-16 at 11 37 26 AM" src="https://user-images.githubusercontent.com/1271364/76724859-cfe84c00-677a-11ea-9eb4-036fade560d5.png">
### Desired behavior:
We should not use deprecated packages
### Versions
4.1.0
|
1.0
|
Replace deprecated dependencies - ### Current behavior:
Upon install of packages when developing in Cypress there are some deprecation warning that should probably be addressed (not really covered with renovatebot)
```
npm WARN deprecated coffee-script@1.12.5: CoffeeScript on NPM has moved to "coffeescript" (no hyphen)
npm WARN deprecated core-js@2.6.11: core-js@<3 is no longer maintained and not recommended for usage due to the number of issues. Please, upgrade your dependencies to the actual version of core-js@3.
npm WARN deprecated fs-promise@1.0.0: Use mz or fs-extra^3.0 with Promise Support
npm WARN deprecated coffee-script@1.11.1: CoffeeScript on NPM has moved to "coffeescript" (no hyphen)
npm WARN deprecated nomnom@1.8.1: Package no longer supported. Contact support@npmjs.com for more info.
npm WARN deprecated babel-preset-es2015@6.24.1: 🙌 Thanks for using Babel: we recommend using babel-preset-env now: please read babeljs.io/env to update!
npm WARN deprecated minimatch@2.0.10: Please update to minimatch 3.0.2 or higher to avoid a RegExp DoS issue
npm WARN deprecated coffee-script@1.9.3: CoffeeScript on NPM has moved to "coffeescript" (no hyphen)
npm WARN deprecated coffee-script@1.12.7: CoffeeScript on NPM has moved to "coffeescript" (no hyphen)
npm WARN deprecated github@11.0.0: 'github' has been renamed to '@octokit/rest' (https://git.io/vNB11)
npm WARN deprecated popper.js@1.16.1: You can find the new Popper v2 at @popperjs/core, this package is dedicated to the legacy v1
npm WARN deprecated jade@0.26.3: Jade has been renamed to pug, please install the latest version of pug instead of jade
npm WARN deprecated tar.gz@0.1.1: ⚠️ WARNING ⚠️ tar.gz module has been deprecated and your application is vulnerable. Please use tar module instead: https://npmjs.com/tar
npm WARN deprecated node-uuid@1.4.8: Use uuid module instead
npm WARN deprecated hawk@3.1.3: This module moved to @hapi/hawk. Please make sure to switch over as this distribution is no longer supported and may contain bugs and critical security issues.
npm WARN deprecated cross-spawn-async@2.2.5: cross-spawn no longer requires a build toolchain, use it instead
npm WARN deprecated circular-json@0.5.9: CircularJSON is in maintenance only, flatted is its successor.
npm WARN deprecated request@2.88.2: request has been deprecated, see https://github.com/request/request/issues/3142
npm WARN deprecated samsam@1.1.3: This package has been deprecated in favour of @sinonjs/samsam
npm WARN deprecated postinstall-build@2.1.3: postinstall-build's behavior is now built into npm! You should migrate off of postinstall-build and use the new `prepare` lifecycle script with npm 5.0.0 or greater.
npm WARN deprecated gulp-util@3.0.8: gulp-util is deprecated - replace it, following the guidelines at https://medium.com/gulpjs/gulp-util-ca3b1f9f9ac5
npm WARN deprecated superagent@3.8.3: Please note that v5.0.1+ of superagent removes User-Agent header by default, therefore you may need to add it yourself (e.g. GitHub blocks requests without a User-Agent header). This notice will go away with v5.0.2+ once it is released.
```
<img width="1415" alt="Screen Shot 2020-03-16 at 11 37 26 AM" src="https://user-images.githubusercontent.com/1271364/76724859-cfe84c00-677a-11ea-9eb4-036fade560d5.png">
### Desired behavior:
We should not use deprecated packages
### Versions
4.1.0
|
process
|
replace deprecated dependencies current behavior upon install of packages when developing in cypress there are some deprecation warning that should probably be addressed not really covered with renovatebot npm warn deprecated coffee script coffeescript on npm has moved to coffeescript no hyphen npm warn deprecated core js core js is no longer maintained and not recommended for usage due to the number of issues please upgrade your dependencies to the actual version of core js npm warn deprecated fs promise use mz or fs extra with promise support npm warn deprecated coffee script coffeescript on npm has moved to coffeescript no hyphen npm warn deprecated nomnom package no longer supported contact support npmjs com for more info npm warn deprecated babel preset 🙌 thanks for using babel we recommend using babel preset env now please read babeljs io env to update npm warn deprecated minimatch please update to minimatch or higher to avoid a regexp dos issue npm warn deprecated coffee script coffeescript on npm has moved to coffeescript no hyphen npm warn deprecated coffee script coffeescript on npm has moved to coffeescript no hyphen npm warn deprecated github github has been renamed to octokit rest npm warn deprecated popper js you can find the new popper at popperjs core this package is dedicated to the legacy npm warn deprecated jade jade has been renamed to pug please install the latest version of pug instead of jade npm warn deprecated tar gz ⚠️ warning ⚠️ tar gz module has been deprecated and your application is vulnerable please use tar module instead npm warn deprecated node uuid use uuid module instead npm warn deprecated hawk this module moved to hapi hawk please make sure to switch over as this distribution is no longer supported and may contain bugs and critical security issues npm warn deprecated cross spawn async cross spawn no longer requires a build toolchain use it instead npm warn deprecated circular json circularjson is in maintenance only flatted is its successor npm warn deprecated request request has been deprecated see npm warn deprecated samsam this package has been deprecated in favour of sinonjs samsam npm warn deprecated postinstall build postinstall build s behavior is now built into npm you should migrate off of postinstall build and use the new prepare lifecycle script with npm or greater npm warn deprecated gulp util gulp util is deprecated replace it following the guidelines at npm warn deprecated superagent please note that of superagent removes user agent header by default therefore you may need to add it yourself e g github blocks requests without a user agent header this notice will go away with once it is released img width alt screen shot at am src desired behavior we should not use deprecated packages versions
| 1
|
71,208
| 18,522,995,351
|
IssuesEvent
|
2021-10-20 16:58:42
|
golang/go
|
https://api.github.com/repos/golang/go
|
opened
|
x/build/dashboard: remove OpenBSD 6.4 builders
|
OS-OpenBSD Builders NeedsFix
|
At this time, OpenBSD 7.0 (released 6 days ago) and 6.9 are the supported releases of OpenBSD per their support policy of maintaining the last 2 releases. OpenBSD 6.4 stopped being supported on October 17, 2019.
Go's OpenBSD support policy matches that of OpenBSD (https://golang.org/wiki/OpenBSD#longterm-support), so it's time to remove the OpenBSD 6.4 builders (for 386/amd64 archs). We'll have coverage from remaining OpenBSD 6.8 builders for same archs, plus ARM/MIPS ones, and any newer OpenBSD builders that are added.
CC @golang/release, @4a6f656c.
|
1.0
|
x/build/dashboard: remove OpenBSD 6.4 builders - At this time, OpenBSD 7.0 (released 6 days ago) and 6.9 are the supported releases of OpenBSD per their support policy of maintaining the last 2 releases. OpenBSD 6.4 stopped being supported on October 17, 2019.
Go's OpenBSD support policy matches that of OpenBSD (https://golang.org/wiki/OpenBSD#longterm-support), so it's time to remove the OpenBSD 6.4 builders (for 386/amd64 archs). We'll have coverage from remaining OpenBSD 6.8 builders for same archs, plus ARM/MIPS ones, and any newer OpenBSD builders that are added.
CC @golang/release, @4a6f656c.
|
non_process
|
x build dashboard remove openbsd builders at this time openbsd released days ago and are the supported releases of openbsd per their support policy of maintaining the last releases openbsd stopped being supported on october go s openbsd support policy matches that of openbsd so it s time to remove the openbsd builders for archs we ll have coverage from remaining openbsd builders for same archs plus arm mips ones and any newer openbsd builders that are added cc golang release
| 0
|
220,206
| 16,892,208,068
|
IssuesEvent
|
2021-06-23 10:37:20
|
handsontable/handsontable
|
https://api.github.com/repos/handsontable/handsontable
|
closed
|
New docs: proofread and correct README files
|
Type: Change Type: Documentation
|
Let's make sure that our README files help our users understand what our documentation is and what they can expect from it. The files to be examined are listed below:
https://github.com/handsontable/handsontable/blob/feature/issue-7624/docs/README.md
https://github.com/handsontable/handsontable/blob/feature/issue-7624/docs/README-EDITING.md
https://github.com/handsontable/handsontable/blob/feature/issue-7624/docs/README-DEPLOYMENT.md
|
1.0
|
New docs: proofread and correct README files - Let's make sure that our README files help our users understand what our documentation is and what they can expect from it. The files to be examined are listed below:
https://github.com/handsontable/handsontable/blob/feature/issue-7624/docs/README.md
https://github.com/handsontable/handsontable/blob/feature/issue-7624/docs/README-EDITING.md
https://github.com/handsontable/handsontable/blob/feature/issue-7624/docs/README-DEPLOYMENT.md
|
non_process
|
new docs proofread and correct readme files let s make sure that our readme files help our users understand what our documentation is and what they can expect from it the files to be examined are listed below
| 0
|
97,006
| 8,639,845,025
|
IssuesEvent
|
2018-11-23 22:01:48
|
SilentChaos512/ScalingHealth
|
https://api.github.com/repos/SilentChaos512/ScalingHealth
|
closed
|
[Feature Request] Configurable loot tables based on difficulty.
|
enhancement needs testing wontfix
|
Would love for difficult mobs and such to have bigger and better drops, even drops that are non-standard. Would love to use this in conjunction with lootbags and megaloot.
((Imagine an extreme difficulty mob finally being killed and having it drop a bag of loot containing nether stars, or a philosophers stone from projecte, or maybe awakened draconium.))
|
1.0
|
[Feature Request] Configurable loot tables based on difficulty. - Would love for difficult mobs and such to have bigger and better drops, even drops that are non-standard. Would love to use this in conjunction with lootbags and megaloot.
((Imagine an extreme difficulty mob finally being killed and having it drop a bag of loot containing nether stars, or a philosophers stone from projecte, or maybe awakened draconium.))
|
non_process
|
configurable loot tables based on difficulty would love for difficult mobs and such to have bigger and better drops even drops that are non standard would love to use this in conjunction with lootbags and megaloot imagine an extreme difficulty mob finally being killed and having it drop a bag of loot containing nether stars or a philosophers stone from projecte or maybe awakened draconium
| 0
|
65,190
| 19,253,871,294
|
IssuesEvent
|
2021-12-09 09:13:04
|
vector-im/element-web
|
https://api.github.com/repos/vector-im/element-web
|
closed
|
Poll Create dialog in high contrast theme: delete answer button is a circle
|
T-Defect S-Minor A-Appearance A-Themes-Official O-Occasional A-Polls Z-Labs
|
### Steps to reproduce
1. Turn on Polls in Labs settings
2. Choose the high contrast theme
3. Create a poll
### Outcome
#### What did you expect?
The delete answer buttons should look like "X".
#### What happened instead?
Instead they are filled circles:

### Operating system
Ubuntu 21.10
### Browser information
Firefox 94.0 (64-bit)
### URL for webapp
https://develop.element.io
### Application version
Element version: 10e121a5143f-react-6d3865bdd542-js-b33b01df0f32 Olm version: 3.2.3
### Homeserver
matrix.org
### Will you send logs?
No
|
1.0
|
Poll Create dialog in high contrast theme: delete answer button is a circle - ### Steps to reproduce
1. Turn on Polls in Labs settings
2. Choose the high contrast theme
3. Create a poll
### Outcome
#### What did you expect?
The delete answer buttons should look like "X".
#### What happened instead?
Instead they are filled circles:

### Operating system
Ubuntu 21.10
### Browser information
Firefox 94.0 (64-bit)
### URL for webapp
https://develop.element.io
### Application version
Element version: 10e121a5143f-react-6d3865bdd542-js-b33b01df0f32 Olm version: 3.2.3
### Homeserver
matrix.org
### Will you send logs?
No
|
non_process
|
poll create dialog in high contrast theme delete answer button is a circle steps to reproduce turn on polls in labs settings choose the high contrast theme create a poll outcome what did you expect the delete answer buttons should look like x what happened instead instead they are filled circles operating system ubuntu browser information firefox bit url for webapp application version element version react js olm version homeserver matrix org will you send logs no
| 0
|
12,672
| 15,043,171,085
|
IssuesEvent
|
2021-02-03 00:07:12
|
bisq-network/bisq
|
https://api.github.com/repos/bisq-network/bisq
|
closed
|
Write a trading bot using the Bisq API
|
$BSQ bounty a:feature in:trade-process
|
Write a trading bot using the Bisq API. Best to build on existing trading bot frameworks.
|
1.0
|
Write a trading bot using the Bisq API - Write a trading bot using the Bisq API. Best to build on existing trading bot frameworks.
|
process
|
write a trading bot using the bisq api write a trading bot using the bisq api best to build on existing trading bot frameworks
| 1
|
38,478
| 2,847,877,601
|
IssuesEvent
|
2015-05-29 19:27:38
|
phetsims/joist
|
https://api.github.com/repos/phetsims/joist
|
closed
|
Navbar screen titles text vertically offset in Firefox/OSX 10.9.4
|
high-priority Summer 2015 redeploy
|
In most browsers (e.g. Chrome) there is a buffer space between the screen icon and the screen title, e.g.

However, in Firefox (v31.0 on Mac OSX 10.9.4), the screen titles are vertically set higher and run into / overlap the icons, e.g.

|
1.0
|
Navbar screen titles text vertically offset in Firefox/OSX 10.9.4 - In most browsers (e.g. Chrome) there is a buffer space between the screen icon and the screen title, e.g.

However, in Firefox (v31.0 on Mac OSX 10.9.4), the screen titles are vertically set higher and run into / overlap the icons, e.g.

|
non_process
|
navbar screen titles text vertically offset in firefox osx in most browsers e g chrome there is a buffer space between the screen icon and the screen title e g however in firefox on mac osx the screen titles are vertically set higher and run into overlap the icons e g
| 0
|
15,233
| 19,103,044,827
|
IssuesEvent
|
2021-11-30 01:59:13
|
allinurl/goaccess
|
https://api.github.com/repos/allinurl/goaccess
|
closed
|
process nginx access logs for 1 month
|
question log-processing
|
I have nginx access logs for a period of 1 month. Is there a way to generate OVERALL ANALYZED REQUESTS for a period of 1 month from Sept1 to Sept 30 as an example? The number of access logs stored are 30 files (30 days) Thanks in Advance.
|
1.0
|
process nginx access logs for 1 month - I have nginx access logs for a period of 1 month. Is there a way to generate OVERALL ANALYZED REQUESTS for a period of 1 month from Sept1 to Sept 30 as an example? The number of access logs stored are 30 files (30 days) Thanks in Advance.
|
process
|
process nginx access logs for month i have nginx access logs for a period of month is there a way to generate overall analyzed requests for a period of month from to sept as an example the number of access logs stored are files days thanks in advance
| 1
|
13,489
| 16,018,557,585
|
IssuesEvent
|
2021-04-20 19:17:07
|
anlsys/aml
|
https://api.github.com/repos/anlsys/aml
|
closed
|
ECP milestone STPR19-4
|
focus:dev process:prototype status:ongoing
|
In GitLab by @perarnau on Aug 24, 2020, 09:56
Original issue here: https://jira.exascaleproject.org/browse/STPR19-4
Goal: build a duplicator, a high-level tool that can be used to duplicate data across similar areas, using locality information.
Current state:
- duplicator in https://xgitlab.cels.anl.gov/argo/aml/-/blob/replicaset/include/aml/replicaset/hwloc.h
- hwloc areas merged
- excit merged
Steps:
- figure out duplicator API (merge the replicaset branch)
- aim for xsbench integration
- side-step full topology iterator API for now.
|
1.0
|
ECP milestone STPR19-4 - In GitLab by @perarnau on Aug 24, 2020, 09:56
Original issue here: https://jira.exascaleproject.org/browse/STPR19-4
Goal: build a duplicator, a high-level tool that can be used to duplicate data across similar areas, using locality information.
Current state:
- duplicator in https://xgitlab.cels.anl.gov/argo/aml/-/blob/replicaset/include/aml/replicaset/hwloc.h
- hwloc areas merged
- excit merged
Steps:
- figure out duplicator API (merge the replicaset branch)
- aim for xsbench integration
- side-step full topology iterator API for now.
|
process
|
ecp milestone in gitlab by perarnau on aug original issue here goal build a duplicator a high level tool that can be used to duplicate data across similar areas using locality information current state duplicator in hwloc areas merged excit merged steps figure out duplicator api merge the replicaset branch aim for xsbench integration side step full topology iterator api for now
| 1
|
12,121
| 14,740,720,323
|
IssuesEvent
|
2021-01-07 09:31:41
|
kdjstudios/SABillingGitlab
|
https://api.github.com/repos/kdjstudios/SABillingGitlab
|
closed
|
Prevent Multiple Billing Cycles creations at the same time
|
anc-process anp-0.5 ant-bug ant-parent/primary
|
In GitLab by @kdjstudios on Nov 29, 2018, 13:17
**Submitted by:** Kyle
**Helpdesk:** NA
**Server:** ALL
**Client/Site:** ALL
**Account:** ALL
**Issue:**
As found in #1259 a single user and/or two users can access the billing cycles page and create multiple instances at the same time for one billing cycle.
I believe we had already developed a fix for this, but do not recall if it was tested and released. If I can find it it will link the ticket here.
|
1.0
|
Prevent Multiple Billing Cycles creations at the same time - In GitLab by @kdjstudios on Nov 29, 2018, 13:17
**Submitted by:** Kyle
**Helpdesk:** NA
**Server:** ALL
**Client/Site:** ALL
**Account:** ALL
**Issue:**
As found in #1259 a single user and/or two users can access the billing cycles page and create multiple instances at the same time for one billing cycle.
I believe we had already developed a fix for this, but do not recall if it was tested and released. If I can find it it will link the ticket here.
|
process
|
prevent multiple billing cycles creations at the same time in gitlab by kdjstudios on nov submitted by kyle helpdesk na server all client site all account all issue as found in a single user and or two users can access the billing cycles page and create multiple instances at the same time for one billing cycle i believe we had already developed a fix for this but do not recall if it was tested and released if i can find it it will link the ticket here
| 1
|
278,458
| 8,641,647,559
|
IssuesEvent
|
2018-11-24 20:02:27
|
richelbilderbeek/djog_unos_2018
|
https://api.github.com/repos/richelbilderbeek/djog_unos_2018
|
closed
|
Add 'get_texture' to sfml_resources class
|
medium priority
|
**Is your feature request related to a problem? Please describe.**
Currently, to get a texture of -say- a cow, one needs to call `sfml_resources::get_cow_texture`. Per `agent_type` there is one member function. This does not scale.
**Describe the solution you'd like**
Create a member function `sfml_resources::get_texture(const agent_type a)` that returns a texture based on the `agent_type`
There's a test that needs fixing:
```c++
//#define FIX_ISSUE_225
#ifdef FIX_ISSUE_225
// Can get the sprite of an agent_type
{
assert(resources.get_texture(agent_type::bacteria).getSize().x > 0);
assert(resources.get_texture(agent_type::cow).getSize().x > 0);
assert(resources.get_texture(agent_type::crocodile).getSize().x > 0);
assert(resources.get_texture(agent_type::fish).getSize().x > 0);
assert(resources.get_texture(agent_type::grass).getSize().x > 0);
}
#endif // FIX_ISSUE_225
```
Remove the preprocessor directive when done.
**Describe alternatives you've considered**
None.
**Additional context**
None.
|
1.0
|
Add 'get_texture' to sfml_resources class - **Is your feature request related to a problem? Please describe.**
Currently, to get a texture of -say- a cow, one needs to call `sfml_resources::get_cow_texture`. Per `agent_type` there is one member function. This does not scale.
**Describe the solution you'd like**
Create a member function `sfml_resources::get_texture(const agent_type a)` that returns a texture based on the `agent_type`
There's a test that needs fixing:
```c++
//#define FIX_ISSUE_225
#ifdef FIX_ISSUE_225
// Can get the sprite of an agent_type
{
assert(resources.get_texture(agent_type::bacteria).getSize().x > 0);
assert(resources.get_texture(agent_type::cow).getSize().x > 0);
assert(resources.get_texture(agent_type::crocodile).getSize().x > 0);
assert(resources.get_texture(agent_type::fish).getSize().x > 0);
assert(resources.get_texture(agent_type::grass).getSize().x > 0);
}
#endif // FIX_ISSUE_225
```
Remove the preprocessor directive when done.
**Describe alternatives you've considered**
None.
**Additional context**
None.
|
non_process
|
add get texture to sfml resources class is your feature request related to a problem please describe currently to get a texture of say a cow one needs to call sfml resources get cow texture per agent type there is one member function this does not scale describe the solution you d like create a member function sfml resources get texture const agent type a that returns a texture based on the agent type there s a test that needs fixing c define fix issue ifdef fix issue can get the sprite of an agent type assert resources get texture agent type bacteria getsize x assert resources get texture agent type cow getsize x assert resources get texture agent type crocodile getsize x assert resources get texture agent type fish getsize x assert resources get texture agent type grass getsize x endif fix issue remove the preprocessor directive when done describe alternatives you ve considered none additional context none
| 0
|
65,078
| 19,089,974,167
|
IssuesEvent
|
2021-11-29 10:58:36
|
vector-im/element-web
|
https://api.github.com/repos/vector-im/element-web
|
closed
|
Replies in the parent of thread are missing padding
|
T-Defect S-Major O-Occasional A-Threads Z-ThreadsP0
|
### Steps to reproduce
1. Send a message
2. Reply to the message
3. Start a thread from the reply
### Outcome
#### What did you expect?
The thread root message (the reply from step 2) should have all its components left-aligned in a similar way to how they are in the main timeline.
#### What happened instead?
The parent message in the thread root is hanging out to the left, unaligned from the rest of the thread.
<img src="https://user-images.githubusercontent.com/279572/140364761-96464911-e9aa-45d0-8824-6127e3ce7e33.png" width="400">
### Operating system
macOS
### Browser information
Firefox 96
### URL for webapp
develop.element.io
### Application version
Element version: ef87da52f4aa-react-9c8e1d32e205-js-195498e9dbf7 Olm version: 3.2.3
### Homeserver
matrix.org
### Will you send logs?
No
|
1.0
|
Replies in the parent of thread are missing padding - ### Steps to reproduce
1. Send a message
2. Reply to the message
3. Start a thread from the reply
### Outcome
#### What did you expect?
The thread root message (the reply from step 2) should have all its components left-aligned in a similar way to how they are in the main timeline.
#### What happened instead?
The parent message in the thread root is hanging out to the left, unaligned from the rest of the thread.
<img src="https://user-images.githubusercontent.com/279572/140364761-96464911-e9aa-45d0-8824-6127e3ce7e33.png" width="400">
### Operating system
macOS
### Browser information
Firefox 96
### URL for webapp
develop.element.io
### Application version
Element version: ef87da52f4aa-react-9c8e1d32e205-js-195498e9dbf7 Olm version: 3.2.3
### Homeserver
matrix.org
### Will you send logs?
No
|
non_process
|
replies in the parent of thread are missing padding steps to reproduce send a message reply to the message start a thread from the reply outcome what did you expect the thread root message the reply from step should have all its components left aligned in a similar way to how they are in the main timeline what happened instead the parent message in the thread root is hanging out to the left unaligned from the rest of the thread operating system macos browser information firefox url for webapp develop element io application version element version react js olm version homeserver matrix org will you send logs no
| 0
|
80,123
| 3,550,997,785
|
IssuesEvent
|
2016-01-21 00:37:30
|
eustasy/phoenix
|
https://api.github.com/repos/eustasy/phoenix
|
opened
|
De-duplicate test initialization and database initialization.
|
Priority: Medium Status: Confirmed
|
It makes sense that `admin.php` and `_onces/phoenix/once.test.initialise.php` both need to install the database. That should definitely be a function, and one that checks the global setting itself.
|
1.0
|
De-duplicate test initialization and database initialization. - It makes sense that `admin.php` and `_onces/phoenix/once.test.initialise.php` both need to install the database. That should definitely be a function, and one that checks the global setting itself.
|
non_process
|
de duplicate test initialization and database initialization it makes sense that admin php and onces phoenix once test initialise php both need to install the database that should definitely be a function and one that checks the global setting itself
| 0
|
649,773
| 21,320,546,186
|
IssuesEvent
|
2022-04-17 02:09:25
|
bossbuwi/reality
|
https://api.github.com/repos/bossbuwi/reality
|
opened
|
Create systems ui
|
enhancement ui high priority
|
Create a UI for the systems. No logic is required. It must contain the items below.
- A table showing a list of the systems currently registered on the app.
- A popup showing a system's details if an item on the list is clicked.
- (Optional) A button that will open a form to add a system. Note that this is tagged as optional but it would surely be added later.
No logic is needed for the above items. Logic would be added under a different task/issue.
|
1.0
|
Create systems ui - Create a UI for the systems. No logic is required. It must contain the items below.
- A table showing a list of the systems currently registered on the app.
- A popup showing a system's details if an item on the list is clicked.
- (Optional) A button that will open a form to add a system. Note that this is tagged as optional but it would surely be added later.
No logic is needed for the above items. Logic would be added under a different task/issue.
|
non_process
|
create systems ui create a ui for the systems no logic is required it must contain the items below a table showing a list of the systems currently registered on the app a popup showing a system s details if an item on the list is clicked optional a button that will open a form to add a system note that this is tagged as optional but it would surely be added later no logic is needed for the above items logic would be added under a different task issue
| 0
|
18,569
| 24,556,010,172
|
IssuesEvent
|
2022-10-12 15:55:25
|
GoogleCloudPlatform/fda-mystudies
|
https://api.github.com/repos/GoogleCloudPlatform/fda-mystudies
|
closed
|
[Android] [Offline indicator] 'You are offline' error message is not getting displayed in the eligibility screen
|
Bug P1 Android Process: Fixed Process: Tested QA Process: Tested dev
|
**Steps:**
1. Sign in and complete the passcode process
2. Click on 'Closed' study
3. Click on 'Participate' button
4. Turn off the internet/mobile data and Verify the eligibility screen
**AR:** 'You are offline' error message is not getting displayed in the eligibility screen
**ER:** 'You are offline' error message should get displayed in the eligibility screen

|
3.0
|
[Android] [Offline indicator] 'You are offline' error message is not getting displayed in the eligibility screen - **Steps:**
1. Sign in and complete the passcode process
2. Click on 'Closed' study
3. Click on 'Participate' button
4. Turn off the internet/mobile data and Verify the eligibility screen
**AR:** 'You are offline' error message is not getting displayed in the eligibility screen
**ER:** 'You are offline' error message should get displayed in the eligibility screen

|
process
|
you are offline error message is not getting displayed in the eligibility screen steps sign in and complete the passcode process click on closed study click on participate button turn off the internet mobile data and verify the eligibility screen ar you are offline error message is not getting displayed in the eligibility screen er you are offline error message should get displayed in the eligibility screen
| 1
|
208,757
| 16,136,513,876
|
IssuesEvent
|
2021-04-29 12:33:54
|
PostHog/posthog
|
https://api.github.com/repos/PostHog/posthog
|
closed
|
PostHog Development on Apple Silicon
|
documentation
|
All of this will probably work with Rosetta 2, but to get it running natively on ARM, here's the situation.
1. [Homebrew](https://brew.sh/) [works well with ARM](https://github.com/Homebrew/brew/issues/10152) now. Some packages are broken, but most work.
2. Homebrew can be used to install a native version python 3.9 (`brew install python@3.9`)
3. NodeJS v15 works natively on ARM if installed via [nvm](https://github.com/nvm-sh/nvm) (`nvm use v15`)
4. ~~For postgres I still use the intel-based [postgres.app](https://postgresapp.com/)~~
[2020-01-24] Intel based postgres.app doesn't work because it share some library with psycopg2. Install via `brew install postgresql`
5. ~~Redis via homebrew should work (didn't verify as I still have an old install via macports)~~
[2020-01-24] `brew install redis`
6. Docker (Clickhouse, etc) --> not yet. There's a [tech preview](https://www.docker.com/blog/download-and-try-the-tech-preview-of-docker-desktop-for-m1/) out, but I haven't tried it yet. It's probably still broken.
7. Not all python packages install. Here's a list of holdouts:
- `grpcio` - 1.34.0 doesn't work yet - [track here](https://github.com/grpc/grpc/issues/25082)
- `pandas` - 1.2.0 doesn't work yet
- `numpy` - 1.19.5 doesn't work yet
- `cffi` - 1.14.0 fails, but 1.14.4 works, just needs to be bumped
It seems like `grpcio` can be patched to work and there's an issue for [removing numpy and pandas](https://github.com/PostHog/posthog/issues/2248), so technically with a bit of work it should be possible to develop posthog on M1 macs natively.
Working on EE/Clickhouse is another question and will need to wait until the Apple Silicon Docker matures. Considering how much money is being poured into that, I'm sure it will just be a matter of time.
For development itself, both vscode and pycharm have ARM builds, so there's nothing blocking there.
|
1.0
|
PostHog Development on Apple Silicon - All of this will probably work with Rosetta 2, but to get it running natively on ARM, here's the situation.
1. [Homebrew](https://brew.sh/) [works well with ARM](https://github.com/Homebrew/brew/issues/10152) now. Some packages are broken, but most work.
2. Homebrew can be used to install a native version python 3.9 (`brew install python@3.9`)
3. NodeJS v15 works natively on ARM if installed via [nvm](https://github.com/nvm-sh/nvm) (`nvm use v15`)
4. ~~For postgres I still use the intel-based [postgres.app](https://postgresapp.com/)~~
[2020-01-24] Intel based postgres.app doesn't work because it share some library with psycopg2. Install via `brew install postgresql`
5. ~~Redis via homebrew should work (didn't verify as I still have an old install via macports)~~
[2020-01-24] `brew install redis`
6. Docker (Clickhouse, etc) --> not yet. There's a [tech preview](https://www.docker.com/blog/download-and-try-the-tech-preview-of-docker-desktop-for-m1/) out, but I haven't tried it yet. It's probably still broken.
7. Not all python packages install. Here's a list of holdouts:
- `grpcio` - 1.34.0 doesn't work yet - [track here](https://github.com/grpc/grpc/issues/25082)
- `pandas` - 1.2.0 doesn't work yet
- `numpy` - 1.19.5 doesn't work yet
- `cffi` - 1.14.0 fails, but 1.14.4 works, just needs to be bumped
It seems like `grpcio` can be patched to work and there's an issue for [removing numpy and pandas](https://github.com/PostHog/posthog/issues/2248), so technically with a bit of work it should be possible to develop posthog on M1 macs natively.
Working on EE/Clickhouse is another question and will need to wait until the Apple Silicon Docker matures. Considering how much money is being poured into that, I'm sure it will just be a matter of time.
For development itself, both vscode and pycharm have ARM builds, so there's nothing blocking there.
|
non_process
|
posthog development on apple silicon all of this will probably work with rosetta but to get it running natively on arm here s the situation now some packages are broken but most work homebrew can be used to install a native version python brew install python nodejs works natively on arm if installed via nvm use for postgres i still use the intel based intel based postgres app doesn t work because it share some library with install via brew install postgresql redis via homebrew should work didn t verify as i still have an old install via macports brew install redis docker clickhouse etc not yet there s a out but i haven t tried it yet it s probably still broken not all python packages install here s a list of holdouts grpcio doesn t work yet pandas doesn t work yet numpy doesn t work yet cffi fails but works just needs to be bumped it seems like grpcio can be patched to work and there s an issue for so technically with a bit of work it should be possible to develop posthog on macs natively working on ee clickhouse is another question and will need to wait until the apple silicon docker matures considering how much money is being poured into that i m sure it will just be a matter of time for development itself both vscode and pycharm have arm builds so there s nothing blocking there
| 0
|
2,824
| 5,773,465,768
|
IssuesEvent
|
2017-04-28 02:12:58
|
gaocegege/maintainer
|
https://api.github.com/repos/gaocegege/maintainer
|
opened
|
Support custom order in contributor subcommand
|
process/not claimed type/feature
|
Now maintainer is in time order, docker/docker is in lexicographical order
|
1.0
|
Support custom order in contributor subcommand - Now maintainer is in time order, docker/docker is in lexicographical order
|
process
|
support custom order in contributor subcommand now maintainer is in time order docker docker is in lexicographical order
| 1
|
182,176
| 14,906,464,163
|
IssuesEvent
|
2021-01-22 00:39:32
|
carlostrevisan1/iot-generic-control
|
https://api.github.com/repos/carlostrevisan1/iot-generic-control
|
closed
|
Deletar linhas de código comentadas
|
documentation help wanted
|
Todo mundo olha os próprios codigos e apaguem as linhas de código comentadas que não estão mais sendo usadas
|
1.0
|
Deletar linhas de código comentadas - Todo mundo olha os próprios codigos e apaguem as linhas de código comentadas que não estão mais sendo usadas
|
non_process
|
deletar linhas de código comentadas todo mundo olha os próprios codigos e apaguem as linhas de código comentadas que não estão mais sendo usadas
| 0
|
285,728
| 31,155,503,728
|
IssuesEvent
|
2023-08-16 12:54:04
|
nidhi7598/linux-4.1.15_CVE-2018-5873
|
https://api.github.com/repos/nidhi7598/linux-4.1.15_CVE-2018-5873
|
opened
|
CVE-2018-11508 (Medium) detected in linuxlinux-4.1.52
|
Mend: dependency security vulnerability
|
## CVE-2018-11508 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>linuxlinux-4.1.52</b></p></summary>
<p>
<p>The Linux Kernel</p>
<p>Library home page: <a href=https://mirrors.edge.kernel.org/pub/linux/kernel/v4.x/?wsslib=linux>https://mirrors.edge.kernel.org/pub/linux/kernel/v4.x/?wsslib=linux</a></p>
<p>Found in base branch: <b>master</b></p></p>
</details>
</p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Source Files (1)</summary>
<p></p>
<p>
</p>
</details>
<p></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png?' width=19 height=20> Vulnerability Details</summary>
<p>
The compat_get_timex function in kernel/compat.c in the Linux kernel before 4.16.9 allows local users to obtain sensitive information from kernel memory via adjtimex.
<p>Publish Date: 2018-05-28
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2018-11508>CVE-2018-11508</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: Low
- Privileges Required: Low
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: None
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://nvd.nist.gov/vuln/detail/CVE-2018-11508">https://nvd.nist.gov/vuln/detail/CVE-2018-11508</a></p>
<p>Release Date: 2018-05-28</p>
<p>Fix Resolution: 4.16.9</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
True
|
CVE-2018-11508 (Medium) detected in linuxlinux-4.1.52 - ## CVE-2018-11508 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>linuxlinux-4.1.52</b></p></summary>
<p>
<p>The Linux Kernel</p>
<p>Library home page: <a href=https://mirrors.edge.kernel.org/pub/linux/kernel/v4.x/?wsslib=linux>https://mirrors.edge.kernel.org/pub/linux/kernel/v4.x/?wsslib=linux</a></p>
<p>Found in base branch: <b>master</b></p></p>
</details>
</p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Source Files (1)</summary>
<p></p>
<p>
</p>
</details>
<p></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png?' width=19 height=20> Vulnerability Details</summary>
<p>
The compat_get_timex function in kernel/compat.c in the Linux kernel before 4.16.9 allows local users to obtain sensitive information from kernel memory via adjtimex.
<p>Publish Date: 2018-05-28
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2018-11508>CVE-2018-11508</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: Low
- Privileges Required: Low
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: None
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://nvd.nist.gov/vuln/detail/CVE-2018-11508">https://nvd.nist.gov/vuln/detail/CVE-2018-11508</a></p>
<p>Release Date: 2018-05-28</p>
<p>Fix Resolution: 4.16.9</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
non_process
|
cve medium detected in linuxlinux cve medium severity vulnerability vulnerable library linuxlinux the linux kernel library home page a href found in base branch master vulnerable source files vulnerability details the compat get timex function in kernel compat c in the linux kernel before allows local users to obtain sensitive information from kernel memory via adjtimex publish date url a href cvss score details base score metrics exploitability metrics attack vector local attack complexity low privileges required low user interaction none scope unchanged impact metrics confidentiality impact high integrity impact none availability impact none for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution step up your open source security game with mend
| 0
|