Column schema (15 columns; `string` stats report min/max lengths, `classes` report the number of distinct values):

| Column | Dtype | Stats |
|---|---|---|
| Unnamed: 0 | int64 | 0 to 832k |
| id | float64 | 2.49B to 32.1B |
| type | string | 1 class |
| created_at | string | length 19 |
| repo | string | length 7 to 112 |
| repo_url | string | length 36 to 141 |
| action | string | 3 classes |
| title | string | length 1 to 744 |
| labels | string | length 4 to 574 |
| body | string | length 9 to 211k |
| index | string | 10 classes |
| text_combine | string | length 96 to 211k |
| label | string | 2 classes |
| text | string | length 96 to 188k |
| binary_label | int64 | 0 to 1 |

Sample rows:
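From the schema, `label` has two classes (`process`/`non_process`) and `binary_label` is its 0/1 encoding. A minimal sketch of that correspondence, using hypothetical in-memory rows shaped like the records below (not the actual dataset loading code, which this dump does not show):

```python
from collections import Counter

# Hypothetical sample rows mirroring the schema above; in the real
# dataset each row also carries id, repo, title, body, text, etc.
rows = [
    {"type": "IssuesEvent", "label": "process", "binary_label": 1},
    {"type": "IssuesEvent", "label": "non_process", "binary_label": 0},
    {"type": "IssuesEvent", "label": "process", "binary_label": 1},
]

# Sanity check: binary_label should be 1 exactly when label == "process",
# matching the pairing visible in the sample rows.
assert all((r["label"] == "process") == (r["binary_label"] == 1) for r in rows)

# Class balance of this toy sample.
counts = Counter(r["label"] for r in rows)
print(counts)  # Counter({'process': 2, 'non_process': 1})
```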
7,397
| 10,523,131,038
|
IssuesEvent
|
2019-09-30 10:15:33
|
didi/mpx
|
https://api.github.com/repos/didi/mpx
|
closed
|
百度小程序for循环:更新渲染数据无法视图更新
|
processing
|
如题,微信和头条正常,
"@mpxjs/api-proxy": "^2.2.27",
"@mpxjs/core": "^2.2.27",
"@mpxjs/webpack-plugin": "^2.2.29",
|
1.0
|
百度小程序for循环:更新渲染数据无法视图更新 - 如题,微信和头条正常,
"@mpxjs/api-proxy": "^2.2.27",
"@mpxjs/core": "^2.2.27",
"@mpxjs/webpack-plugin": "^2.2.29",
|
process
|
百度小程序for循环:更新渲染数据无法视图更新 如题,微信和头条正常, mpxjs api proxy mpxjs core mpxjs webpack plugin
| 1
|
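Comparing `text_combine` with `text` in the row above, the `text` column looks like a lowercased version of title + body with URLs, punctuation, digits, and version strings stripped. A guess at that preprocessing (this is an assumed reconstruction, not the dataset's actual pipeline; the `normalize` function and its regexes are illustrative):

```python
import re

def normalize(text: str) -> str:
    # Approximate the `text` column: drop URLs, keep only letters
    # (Latin plus CJK, since the sample rows retain Chinese text),
    # collapse whitespace, lowercase. Digits and punctuation vanish,
    # which is why version numbers like "2.2.27" disappear.
    text = re.sub(r"https?://\S+", " ", text)
    text = re.sub(r"[^a-zA-Z\u4e00-\u9fff]+", " ", text)
    return re.sub(r"\s+", " ", text).strip().lower()

print(normalize("Real-time daemon seems to ingore --keep-last"))
# -> real time daemon seems to ingore keep last
```

The output matches the start of the `text` field in the goaccess row below, including the preserved "ingore" typo from the original issue title.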
11,340
| 14,163,692,484
|
IssuesEvent
|
2020-11-12 03:01:32
|
allinurl/goaccess
|
https://api.github.com/repos/allinurl/goaccess
|
closed
|
Real-time daemon seems to ingore --keep-last
|
bug log-processing on-disk
|
I am running the following command as a `systemd` service on my server:
```
goaccess /var/log/apache2/access.log.1 /var/log/apache2/access.log --restore --persist --db-path /var/spool/goaccess/realtime --real-time-html --keep-last=10 -o /var/lib/goaccess/realtime.html --daemonize --pid-file=/var/run/goaccess.pid
```
However, it seems, that it does ignore the 10-day window altogether:
<img width="1127" alt="image" src="https://user-images.githubusercontent.com/1670136/97718202-9243f380-1abd-11eb-96a6-753bd0beb27f.png">
|
1.0
|
Real-time daemon seems to ingore --keep-last - I am running the following command as a `systemd` service on my server:
```
goaccess /var/log/apache2/access.log.1 /var/log/apache2/access.log --restore --persist --db-path /var/spool/goaccess/realtime --real-time-html --keep-last=10 -o /var/lib/goaccess/realtime.html --daemonize --pid-file=/var/run/goaccess.pid
```
However, it seems, that it does ignore the 10-day window altogether:
<img width="1127" alt="image" src="https://user-images.githubusercontent.com/1670136/97718202-9243f380-1abd-11eb-96a6-753bd0beb27f.png">
|
process
|
real time daemon seems to ingore keep last i am running the following command as a systemd service on my server goaccess var log access log var log access log restore persist db path var spool goaccess realtime real time html keep last o var lib goaccess realtime html daemonize pid file var run goaccess pid however it seems that it does ignore the day window altogether img width alt image src
| 1
|
4,874
| 5,310,312,727
|
IssuesEvent
|
2017-02-12 19:00:21
|
catapult-project/catapult
|
https://api.github.com/repos/catapult-project/catapult
|
opened
|
Catapult roll failing at "Switch clients to new JavaScript API (batch 5)"
|
Infrastructure Telemetry
|
[First failing roll](https://codereview.chromium.org/2687073004/) has just [[Telemetry] Switch clients to new JavaScript API (batch 5)](https://codereview.chromium.org/2687773003) in the commit list.
From [the log](https://luci-logdog.appspot.com/v/?s=chromium%2Fbb%2Ftryserver.chromium.win%2Fwin_chromium_x64_rel_ng%2F364903%2F%2B%2Frecipes%2Fsteps%2Ftelemetry_perf_unittests__with_patch_%2F0%2Fstdout):
```
[21/26] measurements.v8_detached_context_age_in_gc_unittest.V8DetachedContextAgeInGCTests.testWithNoData failed unexpectedly 0.0020s:
Traceback (most recent call last):
File "e:\b\swarm_slave\w\irka5vex\tools\perf\core\path_util.pyc\..\..\measurements\v8_detached_context_age_in_gc_unittest.py", line 78, in testWithNoData
results = _MeasureFakePage(histograms)
File "e:\b\swarm_slave\w\irka5vex\tools\perf\core\path_util.pyc\..\..\measurements\v8_detached_context_age_in_gc_unittest.py", line 46, in _MeasureFakePage
metric.DidNavigateToPage(page, tab)
File "e:\b\swarm_slave\w\irka5vex\third_party\catapult\common\py_trace_event\py_trace_event\trace_event_impl\decorators.py", line 52, in traced_function
return func(*args, **kwargs)
File "e:\b\swarm_slave\w\irka5vex\tools\perf\core\path_util.pyc\..\..\measurements\v8_detached_context_age_in_gc.py", line 40, in DidNavigateToPage
self._data_start = histogram_util.GetHistogram(_TYPE, _NAME, tab)
File "e:\b\swarm_slave\w\irka5vex\third_party\catapult\telemetry\telemetry\value\histogram_util.py", line 113, in GetHistogram
histogram_json = tab.EvaluateJavaScript2(
AttributeError: 'FakeTab' object has no attribute 'EvaluateJavaScript2'
```
This codepath [was changed in the cl](https://codereview.chromium.org/2687773003/diff/40001/telemetry/telemetry/value/histogram_util.py)
I'm reverting for now; assigning to @perezju to reland as time permits. @nedn FYI
|
1.0
|
Catapult roll failing at "Switch clients to new JavaScript API (batch 5)" - [First failing roll](https://codereview.chromium.org/2687073004/) has just [[Telemetry] Switch clients to new JavaScript API (batch 5)](https://codereview.chromium.org/2687773003) in the commit list.
From [the log](https://luci-logdog.appspot.com/v/?s=chromium%2Fbb%2Ftryserver.chromium.win%2Fwin_chromium_x64_rel_ng%2F364903%2F%2B%2Frecipes%2Fsteps%2Ftelemetry_perf_unittests__with_patch_%2F0%2Fstdout):
```
[21/26] measurements.v8_detached_context_age_in_gc_unittest.V8DetachedContextAgeInGCTests.testWithNoData failed unexpectedly 0.0020s:
Traceback (most recent call last):
File "e:\b\swarm_slave\w\irka5vex\tools\perf\core\path_util.pyc\..\..\measurements\v8_detached_context_age_in_gc_unittest.py", line 78, in testWithNoData
results = _MeasureFakePage(histograms)
File "e:\b\swarm_slave\w\irka5vex\tools\perf\core\path_util.pyc\..\..\measurements\v8_detached_context_age_in_gc_unittest.py", line 46, in _MeasureFakePage
metric.DidNavigateToPage(page, tab)
File "e:\b\swarm_slave\w\irka5vex\third_party\catapult\common\py_trace_event\py_trace_event\trace_event_impl\decorators.py", line 52, in traced_function
return func(*args, **kwargs)
File "e:\b\swarm_slave\w\irka5vex\tools\perf\core\path_util.pyc\..\..\measurements\v8_detached_context_age_in_gc.py", line 40, in DidNavigateToPage
self._data_start = histogram_util.GetHistogram(_TYPE, _NAME, tab)
File "e:\b\swarm_slave\w\irka5vex\third_party\catapult\telemetry\telemetry\value\histogram_util.py", line 113, in GetHistogram
histogram_json = tab.EvaluateJavaScript2(
AttributeError: 'FakeTab' object has no attribute 'EvaluateJavaScript2'
```
This codepath [was changed in the cl](https://codereview.chromium.org/2687773003/diff/40001/telemetry/telemetry/value/histogram_util.py)
I'm reverting for now; assigning to @perezju to reland as time permits. @nedn FYI
|
non_process
|
catapult roll failing at switch clients to new javascript api batch has just switch clients to new javascript api batch in the commit list from measurements detached context age in gc unittest testwithnodata failed unexpectedly traceback most recent call last file e b swarm slave w tools perf core path util pyc measurements detached context age in gc unittest py line in testwithnodata results measurefakepage histograms file e b swarm slave w tools perf core path util pyc measurements detached context age in gc unittest py line in measurefakepage metric didnavigatetopage page tab file e b swarm slave w third party catapult common py trace event py trace event trace event impl decorators py line in traced function return func args kwargs file e b swarm slave w tools perf core path util pyc measurements detached context age in gc py line in didnavigatetopage self data start histogram util gethistogram type name tab file e b swarm slave w third party catapult telemetry telemetry value histogram util py line in gethistogram histogram json tab attributeerror faketab object has no attribute this codepath i m reverting for now assigning to perezju to reland as time permits nedn fyi
| 0
|
22,746
| 32,063,395,849
|
IssuesEvent
|
2023-09-24 22:36:41
|
hsmusic/hsmusic-wiki
|
https://api.github.com/repos/hsmusic/hsmusic-wiki
|
opened
|
Automatically warn about tracks without any artists at all
|
scope: data processing type: dev friendliness
|
I.e, none in `Artists` on the artist nor on the album. I guess it's possible some wikis could want to use tracks without artists, but the wiki doesn't properly support that yet, and it's always a mistake on HSMusic!
This should check the update values, not computed values, for `artistContribs` (on album/track), so it doesn't report extraneous messages when the issue is just that present references are currently failing (due to ambiguity or missing artists).
I don't tracks without artists need to be filtered out of the actual track dataset - it doesn't *break* anything (badly), it's just a data error that should be addressed.
|
1.0
|
Automatically warn about tracks without any artists at all - I.e, none in `Artists` on the artist nor on the album. I guess it's possible some wikis could want to use tracks without artists, but the wiki doesn't properly support that yet, and it's always a mistake on HSMusic!
This should check the update values, not computed values, for `artistContribs` (on album/track), so it doesn't report extraneous messages when the issue is just that present references are currently failing (due to ambiguity or missing artists).
I don't tracks without artists need to be filtered out of the actual track dataset - it doesn't *break* anything (badly), it's just a data error that should be addressed.
|
process
|
automatically warn about tracks without any artists at all i e none in artists on the artist nor on the album i guess it s possible some wikis could want to use tracks without artists but the wiki doesn t properly support that yet and it s always a mistake on hsmusic this should check the update values not computed values for artistcontribs on album track so it doesn t report extraneous messages when the issue is just that present references are currently failing due to ambiguity or missing artists i don t tracks without artists need to be filtered out of the actual track dataset it doesn t break anything badly it s just a data error that should be addressed
| 1
|
6,469
| 9,546,672,709
|
IssuesEvent
|
2019-05-01 20:38:22
|
openopps/openopps-platform
|
https://api.github.com/repos/openopps/openopps-platform
|
closed
|
Internship Opportunity - update description under logo
|
Apply Process Approved Opportunity Create Requirements Ready State Dept.
|
Who: Internship viewers
What: View U.S. in front of Dept of State
Why: State has requested we always display U.S. in front of their program name.
Acceptance Criteria:
The text under the logo on all views of the internship opportunity (creator view and applicant) should be updated to read, "U.S. Department of State"
Mock:
https://opm.invisionapp.com/share/ZEPNZR09Q54#/320303454_State_-_Opportunity_-Desktop-
|
1.0
|
Internship Opportunity - update description under logo - Who: Internship viewers
What: View U.S. in front of Dept of State
Why: State has requested we always display U.S. in front of their program name.
Acceptance Criteria:
The text under the logo on all views of the internship opportunity (creator view and applicant) should be updated to read, "U.S. Department of State"
Mock:
https://opm.invisionapp.com/share/ZEPNZR09Q54#/320303454_State_-_Opportunity_-Desktop-
|
process
|
internship opportunity update description under logo who internship viewers what view u s in front of dept of state why state has requested we always display u s in front of their program name acceptance criteria the text under the logo on all views of the internship opportunity creator view and applicant should be updated to read u s department of state mock
| 1
|
22,577
| 31,805,275,024
|
IssuesEvent
|
2023-09-13 13:36:49
|
GSA/EDX
|
https://api.github.com/repos/GSA/EDX
|
opened
|
Update personal access token for GitHub Workflow (September 2023)
|
process
|
For the EDXPROJECT_TOKEN to automate the issue workflow (adding it to EDX's Inbox in its Kanban board)
Instructions:
- Click on your user icon at the top right
- Click settings
- Scroll to bottom, click "Developer Settings"
- Under personal access tokens, click tokens classic
- You want to update the EDXPROJECT_TOKEN one with your updated API Key
|
1.0
|
Update personal access token for GitHub Workflow (September 2023) - For the EDXPROJECT_TOKEN to automate the issue workflow (adding it to EDX's Inbox in its Kanban board)
Instructions:
- Click on your user icon at the top right
- Click settings
- Scroll to bottom, click "Developer Settings"
- Under personal access tokens, click tokens classic
- You want to update the EDXPROJECT_TOKEN one with your updated API Key
|
process
|
update personal access token for github workflow september for the edxproject token to automate the issue workflow adding it to edx s inbox in its kanban board instructions click on your user icon at the top right click settings scroll to bottom click developer settings under personal access tokens click tokens classic you want to update the edxproject token one with your updated api key
| 1
|
579,369
| 17,189,917,244
|
IssuesEvent
|
2021-07-16 09:26:50
|
vortexntnu/Vortex-AUV
|
https://api.github.com/repos/vortexntnu/Vortex-AUV
|
closed
|
Create node for controlling gripper and lights
|
High priority
|
Should be fairly simple. The gripper is controlled as on/off via a single GPIO pin. Same story with the lights.
Should be toggleable from joystick.
|
1.0
|
Create node for controlling gripper and lights - Should be fairly simple. The gripper is controlled as on/off via a single GPIO pin. Same story with the lights.
Should be toggleable from joystick.
|
non_process
|
create node for controlling gripper and lights should be fairly simple the gripper is controlled as on off via a single gpio pin same story with the lights should be toggleable from joystick
| 0
|
218,102
| 16,749,691,420
|
IssuesEvent
|
2021-06-11 20:46:38
|
FragSoc/esports-bot
|
https://api.github.com/repos/FragSoc/esports-bot
|
closed
|
Formal minimum dependencies versions?
|
bug documentation
|
I'm pretty sure the bot needs at least python `3.8`, but I'm not too sure about that. We should formalise this.
|
1.0
|
Formal minimum dependencies versions? - I'm pretty sure the bot needs at least python `3.8`, but I'm not too sure about that. We should formalise this.
|
non_process
|
formal minimum dependencies versions i m pretty sure the bot needs at least python but i m not too sure about that we should formalise this
| 0
|
19,012
| 25,013,112,305
|
IssuesEvent
|
2022-11-03 16:37:43
|
MPMG-DCC-UFMG/C01
|
https://api.github.com/repos/MPMG-DCC-UFMG/C01
|
opened
|
Debug: Modo `Headless = False` e Gravação de vídeos
|
[2] Alta Prioridade [1] Requisito [0] Desenvolvimento [3] Processamento Dinâmico
|
## Comportamento Esperado
Ferramentas de depuração de coletores se mostram necessárias para a correção de erros, principalmente do novo sistema distribuído.
## Comportamento Atual
Atualmente, não temos muitas ferramentas de debug integradas ao sistema. Além do Trace Viewer (#4798), outras features podem ajudar com a depuração dos coletores e do sistema: execução de navegadores em modo headful e geração de vídeos da execução de coletores.
## Passos para reproduzir o erro
Não se aplica.
## Especificações da Coleta
Não se aplica
|
1.0
|
Debug: Modo `Headless = False` e Gravação de vídeos - ## Comportamento Esperado
Ferramentas de depuração de coletores se mostram necessárias para a correção de erros, principalmente do novo sistema distribuído.
## Comportamento Atual
Atualmente, não temos muitas ferramentas de debug integradas ao sistema. Além do Trace Viewer (#4798), outras features podem ajudar com a depuração dos coletores e do sistema: execução de navegadores em modo headful e geração de vídeos da execução de coletores.
## Passos para reproduzir o erro
Não se aplica.
## Especificações da Coleta
Não se aplica
|
process
|
debug modo headless false e gravação de vídeos comportamento esperado ferramentas de depuração de coletores se mostram necessárias para a correção de erros principalmente do novo sistema distribuído comportamento atual atualmente não temos muitas ferramentas de debug integradas ao sistema além do trace viewer outras features podem ajudar com a depuração dos coletores e do sistema execução de navegadores em modo headful e geração de vídeos da execução de coletores passos para reproduzir o erro não se aplica especificações da coleta não se aplica
| 1
|
4,691
| 7,526,671,763
|
IssuesEvent
|
2018-04-13 14:41:23
|
nodejs/node
|
https://api.github.com/repos/nodejs/node
|
closed
|
stdio: cannot write to stdin after #1233
|
help wanted process
|
Continuing from https://github.com/nodejs/node/issues/9201#issuecomment-255103708. The change from #1233 makes it impossible to write to stdin, something that works in v0.10 and v0.12.
Test case:
``` js
var spawn = require('child_process').spawn;
var args = ['-e', 'process.stdin.write("ok\\n")'];
var proc = spawn(process.execPath, args, { stdio: ['pipe'] });
proc.stdin.pipe(process.stdout);
```
Trace:
```
$ strace -s1024 -fe write out/Release/node tmp/bug9201.js
...
[pid 25306] write(2, "events.js:160\n throw er; // Unhandled 'error' event\n ^\n\nError: write after end\n at writeAfterEnd (_stream_writable.js:192:12)\n at Socket.Writable.write (_stream_writable.js:243:5)\n at Socket.write (net.js:661:40)\n at [eval]:1:15\n at ContextifyScript.Script.runInThisContext (vm.js:25:33)\n at Object.exports.runInThisContext (vm.js:77:17)\n at Object.<anonymous> ([eval]-wrapper:6:22)\n at Module._compile (module.js:582:32)\n at bootstrap_node.js:345:29\n at _combinedTickCallback (internal/process/next_tick.js:67:7)\n", 554) = 554
...
```
cc @indutny
|
1.0
|
stdio: cannot write to stdin after #1233 - Continuing from https://github.com/nodejs/node/issues/9201#issuecomment-255103708. The change from #1233 makes it impossible to write to stdin, something that works in v0.10 and v0.12.
Test case:
``` js
var spawn = require('child_process').spawn;
var args = ['-e', 'process.stdin.write("ok\\n")'];
var proc = spawn(process.execPath, args, { stdio: ['pipe'] });
proc.stdin.pipe(process.stdout);
```
Trace:
```
$ strace -s1024 -fe write out/Release/node tmp/bug9201.js
...
[pid 25306] write(2, "events.js:160\n throw er; // Unhandled 'error' event\n ^\n\nError: write after end\n at writeAfterEnd (_stream_writable.js:192:12)\n at Socket.Writable.write (_stream_writable.js:243:5)\n at Socket.write (net.js:661:40)\n at [eval]:1:15\n at ContextifyScript.Script.runInThisContext (vm.js:25:33)\n at Object.exports.runInThisContext (vm.js:77:17)\n at Object.<anonymous> ([eval]-wrapper:6:22)\n at Module._compile (module.js:582:32)\n at bootstrap_node.js:345:29\n at _combinedTickCallback (internal/process/next_tick.js:67:7)\n", 554) = 554
...
```
cc @indutny
|
process
|
stdio cannot write to stdin after continuing from the change from makes it impossible to write to stdin something that works in and test case js var spawn require child process spawn var args var proc spawn process execpath args stdio proc stdin pipe process stdout trace strace fe write out release node tmp js write events js n throw er unhandled error event n n nerror write after end n at writeafterend stream writable js n at socket writable write stream writable js n at socket write net js n at n at contextifyscript script runinthiscontext vm js n at object exports runinthiscontext vm js n at object wrapper n at module compile module js n at bootstrap node js n at combinedtickcallback internal process next tick js n cc indutny
| 1
|
15,906
| 2,611,532,830
|
IssuesEvent
|
2015-02-27 06:03:53
|
chrsmith/hedgewars
|
https://api.github.com/repos/chrsmith/hedgewars
|
opened
|
master chef and elements
|
auto-migrated Component-Lua Priority-Low Type-Enhancement
|
```
So, an old style I proposed to mikade and a new one, both open for discussion!
- elemental mode
Four type of teams, each with a natural element that boost a given subset of
weapons.
For example we could think of Fire, Water, Earth and Wind: the first one has a
boost for explosion weapons, the second can walk on water, the third can dig
through the earth and the last one is invulnerable to fall damage.
There could be even another boost, given only to one of the hogs (aka champion)
who could spit fire or swim or cause earthquakes or fly (respectively)
- master chef
So we prepare a few random boxes with a subset of the weapons, not too
dissimilar to random weapons. Then each team picks one and has to play with
those weapons for all the match.
```
Original issue reported on code.google.com by `vittorio...@gmail.com` on 10 Jun 2013 at 12:35
|
1.0
|
master chef and elements - ```
So, an old style I proposed to mikade and a new one, both open for discussion!
- elemental mode
Four type of teams, each with a natural element that boost a given subset of
weapons.
For example we could think of Fire, Water, Earth and Wind: the first one has a
boost for explosion weapons, the second can walk on water, the third can dig
through the earth and the last one is invulnerable to fall damage.
There could be even another boost, given only to one of the hogs (aka champion)
who could spit fire or swim or cause earthquakes or fly (respectively)
- master chef
So we prepare a few random boxes with a subset of the weapons, not too
dissimilar to random weapons. Then each team picks one and has to play with
those weapons for all the match.
```
Original issue reported on code.google.com by `vittorio...@gmail.com` on 10 Jun 2013 at 12:35
|
non_process
|
master chef and elements so an old style i proposed to mikade and a new one both open for discussion elemental mode four type of teams each with a natural element that boost a given subset of weapons for example we could think of fire water earth and wind the first one has a boost for explosion weapons the second can walk on water the third can dig through the earth and the last one is invulnerable to fall damage there could be even another boost given only to one of the hogs aka champion who could spit fire or swim or cause earthquakes or fly respectively master chef so we prepare a few random boxes with a subset of the weapons not too dissimilar to random weapons then each team picks one and has to play with those weapons for all the match original issue reported on code google com by vittorio gmail com on jun at
| 0
|
21,904
| 30,352,332,385
|
IssuesEvent
|
2023-07-11 20:01:29
|
StormSurgeLive/asgs
|
https://api.github.com/repos/StormSurgeLive/asgs
|
opened
|
Improve swan max file support in `generateXDMF.f90`
|
enhancement incremental improvement postprocessing
|
It appears that this utility does not recognize `swan_DIR_max.63.nc`, `swan_TM01_max.63.nc`, `swan_TM02_max.63.nc`, or `swan_TMM10_max.63.nc`.
|
1.0
|
Improve swan max file support in `generateXDMF.f90` - It appears that this utility does not recognize `swan_DIR_max.63.nc`, `swan_TM01_max.63.nc`, `swan_TM02_max.63.nc`, or `swan_TMM10_max.63.nc`.
|
process
|
improve swan max file support in generatexdmf it appears that this utility does not recognize swan dir max nc swan max nc swan max nc or swan max nc
| 1
|
385,078
| 11,412,236,718
|
IssuesEvent
|
2020-02-01 11:48:13
|
islos-efe-eme/auto-news
|
https://api.github.com/repos/islos-efe-eme/auto-news
|
opened
|
Add scheduler for the news (Slack bot)
|
priority:high slack
|
Depends on https://github.com/islos-efe-eme/auto-news/issues/3
- [ ] Add scheduler for posting the news in the channel `gta-news` once every day.
|
1.0
|
Add scheduler for the news (Slack bot) - Depends on https://github.com/islos-efe-eme/auto-news/issues/3
- [ ] Add scheduler for posting the news in the channel `gta-news` once every day.
|
non_process
|
add scheduler for the news slack bot depends on add scheduler for posting the news in the channel gta news once every day
| 0
|
6,772
| 3,054,334,702
|
IssuesEvent
|
2015-08-13 01:24:54
|
facebook/osquery
|
https://api.github.com/repos/facebook/osquery
|
closed
|
documentation uses old added/removed format in using osqueryd docs
|
documentation
|
> https://github.com/facebook/osquery/blob/master/docs/wiki/introduction/using-osqueryd.md
Each query represents a monitored view of your operating system. The first time a scheduled query runs it logs every row in the resulting table with the "added" action. In this example, on an OS X laptop, after the first 60 seconds it would log:
```json
[
{"model":"XHCI Root Hub SS Simulation","vendor":"Apple Inc."},
{"model":"XHCI Root Hub USB 2.0 Simulation","vendor":"Apple Inc."},
{"model":"BRCM20702 Hub","vendor":"Apple Inc."},
{"model":"Internal Memory Card Reader","vendor":"Apple"},
{"model":"Apple Internal Keyboard \/ Trackpad","vendor":"Apple Inc."},
{"model":"Bluetooth USB Host Controller","vendor":"Apple Inc."}
]
```
If there are no USB devices added or removed to the laptop this query would never log a result again. The query would still run every 60 seconds but the results would match the previous run and thus no state change would be detected. If a USB memory stick was inserted and left in the laptop for 60 seconds the daemon would log:
```
[
{"model":"U3 Cruzer Micro","vendor":"SanDisk Corporation"}
]
```
Each line in the results is decorated with a bit more information as described in the [logging](../deployment/logging.md) guide. This includes time, hostname, added or removed action, etc.
|
1.0
|
documentation uses old added/removed format in using osqueryd docs - > https://github.com/facebook/osquery/blob/master/docs/wiki/introduction/using-osqueryd.md
Each query represents a monitored view of your operating system. The first time a scheduled query runs it logs every row in the resulting table with the "added" action. In this example, on an OS X laptop, after the first 60 seconds it would log:
```json
[
{"model":"XHCI Root Hub SS Simulation","vendor":"Apple Inc."},
{"model":"XHCI Root Hub USB 2.0 Simulation","vendor":"Apple Inc."},
{"model":"BRCM20702 Hub","vendor":"Apple Inc."},
{"model":"Internal Memory Card Reader","vendor":"Apple"},
{"model":"Apple Internal Keyboard \/ Trackpad","vendor":"Apple Inc."},
{"model":"Bluetooth USB Host Controller","vendor":"Apple Inc."}
]
```
If there are no USB devices added or removed to the laptop this query would never log a result again. The query would still run every 60 seconds but the results would match the previous run and thus no state change would be detected. If a USB memory stick was inserted and left in the laptop for 60 seconds the daemon would log:
```
[
{"model":"U3 Cruzer Micro","vendor":"SanDisk Corporation"}
]
```
Each line in the results is decorated with a bit more information as described in the [logging](../deployment/logging.md) guide. This includes time, hostname, added or removed action, etc.
|
non_process
|
documentation uses old added removed format in using osqueryd docs each query represents a monitored view of your operating system the first time a scheduled query runs it logs every row in the resulting table with the added action in this example on an os x laptop after the first seconds it would log json model xhci root hub ss simulation vendor apple inc model xhci root hub usb simulation vendor apple inc model hub vendor apple inc model internal memory card reader vendor apple model apple internal keyboard trackpad vendor apple inc model bluetooth usb host controller vendor apple inc if there are no usb devices added or removed to the laptop this query would never log a result again the query would still run every seconds but the results would match the previous run and thus no state change would be detected if a usb memory stick was inserted and left in the laptop for seconds the daemon would log model cruzer micro vendor sandisk corporation each line in the results is decorated with a bit more information as described in the deployment logging md guide this includes time hostname added or removed action etc
| 0
|
15,396
| 19,580,287,120
|
IssuesEvent
|
2022-01-04 20:19:44
|
2i2c-org/infrastructure
|
https://api.github.com/repos/2i2c-org/infrastructure
|
closed
|
Create a model for our hub capacity
|
type: enhancement :label: team-process
|
# Background
As we begin to support hub infrastructure for other communities, we will need to balance the time of each of our team members in a way that distributes work and makes our support of hubs efficient. There will likely be a non-linear model that answers "how many hubs can our team support at this moment in time?" We should understand this question because it will have a big impact on our sustainability model, as well as how to think about growing our team (or our sales efforts!).
# Expected timeline
I think that this will only become clearer once we actually start running several hubs for people. We'll need to develop a process as a team for operating/supporting those hubs (#306 #298) and then refine that process as our team and number of hubs grows.
I suspect that we won't have a clear picture of this for several months. What if we set a few checkpoints:
- 1 months after we hit 5+ hubs (to get our sea legs and understand any major changes that should be made)
- 3 months (after we've been running a couple of hubs for people and had a clearer picture)
- 9 months (we should have a pretty good answer to this question given our current structure)
# Steps to complete this goal
Have check-ins at each of the following targets:
- [ ] 1 months after we hit 5+ hubs (to get our sea legs and understand any major changes that should be made)
- [ ] 3 months (after we've been running a couple of hubs for people and had a clearer picture)
- [ ] 9 months (we should have a pretty good answer to this question given our current structure)
- [ ] Synthesize our learning in the documentation
|
1.0
|
Create a model for our hub capacity - # Background
As we begin to support hub infrastructure for other communities, we will need to balance the time of each of our team members in a way that distributes work and makes our support of hubs efficient. There will likely be a non-linear model that answers "how many hubs can our team support at this moment in time?" We should understand this question because it will have a big impact on our sustainability model, as well as how to think about growing our team (or our sales efforts!).
# Expected timeline
I think that this will only become clearer once we actually start running several hubs for people. We'll need to develop a process as a team for operating/supporting those hubs (#306 #298) and then refine that process as our team and number of hubs grows.
I suspect that we won't have a clear picture of this for several months. What if we set a few checkpoints:
- 1 months after we hit 5+ hubs (to get our sea legs and understand any major changes that should be made)
- 3 months (after we've been running a couple of hubs for people and had a clearer picture)
- 9 months (we should have a pretty good answer to this question given our current structure)
# Steps to complete this goal
Have check-ins at each of the following targets:
- [ ] 1 months after we hit 5+ hubs (to get our sea legs and understand any major changes that should be made)
- [ ] 3 months (after we've been running a couple of hubs for people and had a clearer picture)
- [ ] 9 months (we should have a pretty good answer to this question given our current structure)
- [ ] Synthesize our learning in the documentation
|
process
|
create a model for our hub capacity background as we begin to support hub infrastructure for other communities we will need to balance the time of each of our team members in a way that distributes work and makes our support of hubs efficient there will likely be a non linear model that answers how many hubs can our team support at this moment in time we should understand this question because it will have a big impact on our sustainability model as well as how to think about growing our team or our sales efforts expected timeline i think that this will only become clearer once we actually start running several hubs for people we ll need to develop a process as a team for operating supporting those hubs and then refine that process as our team and number of hubs grows i suspect that we won t have a clear picture of this for several months what if we set a few checkpoints months after we hit hubs to get our sea legs and understand any major changes that should be made months after we ve been running a couple of hubs for people and had a clearer picture months we should have a pretty good answer to this question given our current structure steps to complete this goal have check ins at each of the following targets months after we hit hubs to get our sea legs and understand any major changes that should be made months after we ve been running a couple of hubs for people and had a clearer picture months we should have a pretty good answer to this question given our current structure synthesize our learning in the documentation
| 1
|
417,014
| 12,154,746,809
|
IssuesEvent
|
2020-04-25 09:51:52
|
leinardi/pylint-pycharm
|
https://api.github.com/repos/leinardi/pylint-pycharm
|
closed
|
PluginException: Icon cannot be found in '/modules/modulesNode.png'
|
Priority: High Status: Accepted Type: Bug
|
**pylint-pycharm version:**
0.12.1
**description:**
pycharm issues this error on each plugin execution.
this is mostly an aesthetic bug since it doesn't seem to prevent the plugin from running.
**full traceback:**
```
com.intellij.diagnostic.PluginException: Icon cannot be found in '/modules/modulesNode.png', action 'class com.leinardi.pycharm.pylint.actions.ScanModule' [Plugin: com.leinardi.pycharm.pylint]
at com.intellij.openapi.actionSystem.impl.ActionManagerImpl.reportActionError(ActionManagerImpl.java:394)
at com.intellij.openapi.actionSystem.impl.ActionManagerImpl.reportActionError(ActionManagerImpl.java:389)
at com.intellij.openapi.actionSystem.impl.ActionManagerImpl.access$000(ActionManagerImpl.java:75)
at com.intellij.openapi.actionSystem.impl.ActionManagerImpl$1.compute(ActionManagerImpl.java:289)
at com.intellij.openapi.util.IconLoader$LazyIcon.getOrComputeIcon(IconLoader.java:1097)
at com.intellij.openapi.util.IconLoader$LazyIcon.paintIcon(IconLoader.java:1073)
at com.intellij.openapi.actionSystem.ex.ActionButtonLook.paintIcon(ActionButtonLook.java:147)
at com.intellij.openapi.actionSystem.ex.ActionButtonLook.paintIcon(ActionButtonLook.java:143)
at com.intellij.openapi.actionSystem.impl.ActionButton.paintButtonLook(ActionButton.java:404)
at com.intellij.openapi.actionSystem.impl.ActionButton.paintComponent(ActionButton.java:370)
at java.desktop/javax.swing.JComponent.paint(JComponent.java:1077)
at java.desktop/javax.swing.JComponent.paintChildren(JComponent.java:910)
at java.desktop/javax.swing.JComponent.paint(JComponent.java:1086)
at java.desktop/javax.swing.JComponent.paintChildren(JComponent.java:910)
at java.desktop/javax.swing.JComponent.paint(JComponent.java:1086)
at java.desktop/javax.swing.JComponent.paintChildren(JComponent.java:910)
at java.desktop/javax.swing.JComponent.paint(JComponent.java:1086)
at java.desktop/javax.swing.JComponent.paintChildren(JComponent.java:910)
at java.desktop/javax.swing.JComponent.paint(JComponent.java:1086)
at java.desktop/javax.swing.JComponent.paintChildren(JComponent.java:910)
at java.desktop/javax.swing.JComponent.paint(JComponent.java:1086)
at java.desktop/javax.swing.JComponent.paintChildren(JComponent.java:910)
at java.desktop/javax.swing.JComponent.paint(JComponent.java:1086)
at java.desktop/javax.swing.JComponent.paintChildren(JComponent.java:910)
at java.desktop/javax.swing.JComponent.paint(JComponent.java:1086)
at java.desktop/javax.swing.JComponent.paintChildren(JComponent.java:910)
at java.desktop/javax.swing.JComponent.paint(JComponent.java:1086)
at java.desktop/javax.swing.JLayeredPane.paint(JLayeredPane.java:590)
at java.desktop/javax.swing.JComponent.paintChildren(JComponent.java:910)
at java.desktop/javax.swing.JComponent.paint(JComponent.java:1086)
at java.desktop/javax.swing.JLayeredPane.paint(JLayeredPane.java:590)
at java.desktop/javax.swing.JComponent.paintChildren(JComponent.java:910)
at java.desktop/javax.swing.JComponent.paint(JComponent.java:1086)
at java.desktop/javax.swing.JComponent.paintToOffscreen(JComponent.java:5263)
at java.desktop/javax.swing.BufferStrategyPaintManager.paint(BufferStrategyPaintManager.java:246)
at java.desktop/javax.swing.RepaintManager.paint(RepaintManager.java:1333)
at java.desktop/javax.swing.JComponent._paintImmediately(JComponent.java:5211)
at java.desktop/javax.swing.JComponent.paintImmediately(JComponent.java:5021)
at java.desktop/javax.swing.RepaintManager$4.run(RepaintManager.java:868)
at java.desktop/javax.swing.RepaintManager$4.run(RepaintManager.java:851)
at java.base/java.security.AccessController.doPrivileged(Native Method)
at java.base/java.security.ProtectionDomain$JavaSecurityAccessImpl.doIntersectionPrivilege(ProtectionDomain.java:85)
at java.desktop/javax.swing.RepaintManager.paintDirtyRegions(RepaintManager.java:851)
at java.desktop/javax.swing.RepaintManager.paintDirtyRegions(RepaintManager.java:826)
at java.desktop/javax.swing.RepaintManager.prePaintDirtyRegions(RepaintManager.java:775)
at java.desktop/javax.swing.RepaintManager$ProcessingRunnable.run(RepaintManager.java:1901)
at java.desktop/java.awt.event.InvocationEvent.dispatch(InvocationEvent.java:313)
at java.desktop/java.awt.EventQueue.dispatchEventImpl(EventQueue.java:776)
at java.desktop/java.awt.EventQueue$4.run(EventQueue.java:727)
at java.desktop/java.awt.EventQueue$4.run(EventQueue.java:721)
at java.base/java.security.AccessController.doPrivileged(Native Method)
at java.base/java.security.ProtectionDomain$JavaSecurityAccessImpl.doIntersectionPrivilege(ProtectionDomain.java:85)
at java.desktop/java.awt.EventQueue.dispatchEvent(EventQueue.java:746)
at com.intellij.ide.IdeEventQueue.defaultDispatchEvent(IdeEventQueue.java:974)
at com.intellij.ide.IdeEventQueue._dispatchEvent(IdeEventQueue.java:847)
at com.intellij.ide.IdeEventQueue.lambda$null$8(IdeEventQueue.java:449)
at com.intellij.openapi.progress.impl.CoreProgressManager.computePrioritized(CoreProgressManager.java:739)
at com.intellij.ide.IdeEventQueue.lambda$dispatchEvent$9(IdeEventQueue.java:448)
at com.intellij.ide.IdeEventQueue.dispatchEvent(IdeEventQueue.java:492)
at java.desktop/java.awt.EventDispatchThread.pumpOneEventForFilters(EventDispatchThread.java:203)
at java.desktop/java.awt.EventDispatchThread.pumpEventsForFilter(EventDispatchThread.java:124)
at java.desktop/java.awt.EventDispatchThread.pumpEventsForHierarchy(EventDispatchThread.java:113)
at java.desktop/java.awt.EventDispatchThread.pumpEvents(EventDispatchThread.java:109)
at java.desktop/java.awt.EventDispatchThread.pumpEvents(EventDispatchThread.java:101)
at java.desktop/java.awt.EventDispatchThread.run(EventDispatchThread.java:90)
```
**pycharm version:**
```
PyCharm 2020.1 (Community Edition)
Build #PC-201.6668.115, built on April 7, 2020
Runtime version: 11.0.6+8-b765.25 amd64
VM: OpenJDK 64-Bit Server VM by JetBrains s.r.o
Linux 5.3.0-46-generic
GC: ParNew, ConcurrentMarkSweep
Memory: 1908M
Cores: 8
Non-Bundled Plugins: com.intellij.plugins.visualstudiokeymap, com.jetbrains.plugins.ini4idea, BashSupport, com.leinardi.pycharm.pylint
Current Desktop: ubuntu:GNOME
```
|
1.0
|
PluginException: Icon cannot be found in '/modules/modulesNode.png' - **pylint-pycharm version:**
0.12.1
**description:**
pycharm issues this error on each plugin execution.
this is mostly an aesthetic bug since it doesn't seem to prevent the plugin from running.
**full traceback:**
```
com.intellij.diagnostic.PluginException: Icon cannot be found in '/modules/modulesNode.png', action 'class com.leinardi.pycharm.pylint.actions.ScanModule' [Plugin: com.leinardi.pycharm.pylint]
at com.intellij.openapi.actionSystem.impl.ActionManagerImpl.reportActionError(ActionManagerImpl.java:394)
at com.intellij.openapi.actionSystem.impl.ActionManagerImpl.reportActionError(ActionManagerImpl.java:389)
at com.intellij.openapi.actionSystem.impl.ActionManagerImpl.access$000(ActionManagerImpl.java:75)
at com.intellij.openapi.actionSystem.impl.ActionManagerImpl$1.compute(ActionManagerImpl.java:289)
at com.intellij.openapi.util.IconLoader$LazyIcon.getOrComputeIcon(IconLoader.java:1097)
at com.intellij.openapi.util.IconLoader$LazyIcon.paintIcon(IconLoader.java:1073)
at com.intellij.openapi.actionSystem.ex.ActionButtonLook.paintIcon(ActionButtonLook.java:147)
at com.intellij.openapi.actionSystem.ex.ActionButtonLook.paintIcon(ActionButtonLook.java:143)
at com.intellij.openapi.actionSystem.impl.ActionButton.paintButtonLook(ActionButton.java:404)
at com.intellij.openapi.actionSystem.impl.ActionButton.paintComponent(ActionButton.java:370)
at java.desktop/javax.swing.JComponent.paint(JComponent.java:1077)
at java.desktop/javax.swing.JComponent.paintChildren(JComponent.java:910)
at java.desktop/javax.swing.JComponent.paint(JComponent.java:1086)
at java.desktop/javax.swing.JComponent.paintChildren(JComponent.java:910)
at java.desktop/javax.swing.JComponent.paint(JComponent.java:1086)
at java.desktop/javax.swing.JComponent.paintChildren(JComponent.java:910)
at java.desktop/javax.swing.JComponent.paint(JComponent.java:1086)
at java.desktop/javax.swing.JComponent.paintChildren(JComponent.java:910)
at java.desktop/javax.swing.JComponent.paint(JComponent.java:1086)
at java.desktop/javax.swing.JComponent.paintChildren(JComponent.java:910)
at java.desktop/javax.swing.JComponent.paint(JComponent.java:1086)
at java.desktop/javax.swing.JComponent.paintChildren(JComponent.java:910)
at java.desktop/javax.swing.JComponent.paint(JComponent.java:1086)
at java.desktop/javax.swing.JComponent.paintChildren(JComponent.java:910)
at java.desktop/javax.swing.JComponent.paint(JComponent.java:1086)
at java.desktop/javax.swing.JComponent.paintChildren(JComponent.java:910)
at java.desktop/javax.swing.JComponent.paint(JComponent.java:1086)
at java.desktop/javax.swing.JLayeredPane.paint(JLayeredPane.java:590)
at java.desktop/javax.swing.JComponent.paintChildren(JComponent.java:910)
at java.desktop/javax.swing.JComponent.paint(JComponent.java:1086)
at java.desktop/javax.swing.JLayeredPane.paint(JLayeredPane.java:590)
at java.desktop/javax.swing.JComponent.paintChildren(JComponent.java:910)
at java.desktop/javax.swing.JComponent.paint(JComponent.java:1086)
at java.desktop/javax.swing.JComponent.paintToOffscreen(JComponent.java:5263)
at java.desktop/javax.swing.BufferStrategyPaintManager.paint(BufferStrategyPaintManager.java:246)
at java.desktop/javax.swing.RepaintManager.paint(RepaintManager.java:1333)
at java.desktop/javax.swing.JComponent._paintImmediately(JComponent.java:5211)
at java.desktop/javax.swing.JComponent.paintImmediately(JComponent.java:5021)
at java.desktop/javax.swing.RepaintManager$4.run(RepaintManager.java:868)
at java.desktop/javax.swing.RepaintManager$4.run(RepaintManager.java:851)
at java.base/java.security.AccessController.doPrivileged(Native Method)
at java.base/java.security.ProtectionDomain$JavaSecurityAccessImpl.doIntersectionPrivilege(ProtectionDomain.java:85)
at java.desktop/javax.swing.RepaintManager.paintDirtyRegions(RepaintManager.java:851)
at java.desktop/javax.swing.RepaintManager.paintDirtyRegions(RepaintManager.java:826)
at java.desktop/javax.swing.RepaintManager.prePaintDirtyRegions(RepaintManager.java:775)
at java.desktop/javax.swing.RepaintManager$ProcessingRunnable.run(RepaintManager.java:1901)
at java.desktop/java.awt.event.InvocationEvent.dispatch(InvocationEvent.java:313)
at java.desktop/java.awt.EventQueue.dispatchEventImpl(EventQueue.java:776)
at java.desktop/java.awt.EventQueue$4.run(EventQueue.java:727)
at java.desktop/java.awt.EventQueue$4.run(EventQueue.java:721)
at java.base/java.security.AccessController.doPrivileged(Native Method)
at java.base/java.security.ProtectionDomain$JavaSecurityAccessImpl.doIntersectionPrivilege(ProtectionDomain.java:85)
at java.desktop/java.awt.EventQueue.dispatchEvent(EventQueue.java:746)
at com.intellij.ide.IdeEventQueue.defaultDispatchEvent(IdeEventQueue.java:974)
at com.intellij.ide.IdeEventQueue._dispatchEvent(IdeEventQueue.java:847)
at com.intellij.ide.IdeEventQueue.lambda$null$8(IdeEventQueue.java:449)
at com.intellij.openapi.progress.impl.CoreProgressManager.computePrioritized(CoreProgressManager.java:739)
at com.intellij.ide.IdeEventQueue.lambda$dispatchEvent$9(IdeEventQueue.java:448)
at com.intellij.ide.IdeEventQueue.dispatchEvent(IdeEventQueue.java:492)
at java.desktop/java.awt.EventDispatchThread.pumpOneEventForFilters(EventDispatchThread.java:203)
at java.desktop/java.awt.EventDispatchThread.pumpEventsForFilter(EventDispatchThread.java:124)
at java.desktop/java.awt.EventDispatchThread.pumpEventsForHierarchy(EventDispatchThread.java:113)
at java.desktop/java.awt.EventDispatchThread.pumpEvents(EventDispatchThread.java:109)
at java.desktop/java.awt.EventDispatchThread.pumpEvents(EventDispatchThread.java:101)
at java.desktop/java.awt.EventDispatchThread.run(EventDispatchThread.java:90)
```
**pycharm version:**
```
PyCharm 2020.1 (Community Edition)
Build #PC-201.6668.115, built on April 7, 2020
Runtime version: 11.0.6+8-b765.25 amd64
VM: OpenJDK 64-Bit Server VM by JetBrains s.r.o
Linux 5.3.0-46-generic
GC: ParNew, ConcurrentMarkSweep
Memory: 1908M
Cores: 8
Non-Bundled Plugins: com.intellij.plugins.visualstudiokeymap, com.jetbrains.plugins.ini4idea, BashSupport, com.leinardi.pycharm.pylint
Current Desktop: ubuntu:GNOME
```
|
non_process
|
pluginexception icon cannot be found in modules modulesnode png pylint pycharm version description pycharm issues this error on each plugin execution this is mostly an aesthetic bug since it doesn t seem to prevent the plugin from running full traceback com intellij diagnostic pluginexception icon cannot be found in modules modulesnode png action class com leinardi pycharm pylint actions scanmodule at com intellij openapi actionsystem impl actionmanagerimpl reportactionerror actionmanagerimpl java at com intellij openapi actionsystem impl actionmanagerimpl reportactionerror actionmanagerimpl java at com intellij openapi actionsystem impl actionmanagerimpl access actionmanagerimpl java at com intellij openapi actionsystem impl actionmanagerimpl compute actionmanagerimpl java at com intellij openapi util iconloader lazyicon getorcomputeicon iconloader java at com intellij openapi util iconloader lazyicon painticon iconloader java at com intellij openapi actionsystem ex actionbuttonlook painticon actionbuttonlook java at com intellij openapi actionsystem ex actionbuttonlook painticon actionbuttonlook java at com intellij openapi actionsystem impl actionbutton paintbuttonlook actionbutton java at com intellij openapi actionsystem impl actionbutton paintcomponent actionbutton java at java desktop javax swing jcomponent paint jcomponent java at java desktop javax swing jcomponent paintchildren jcomponent java at java desktop javax swing jcomponent paint jcomponent java at java desktop javax swing jcomponent paintchildren jcomponent java at java desktop javax swing jcomponent paint jcomponent java at java desktop javax swing jcomponent paintchildren jcomponent java at java desktop javax swing jcomponent paint jcomponent java at java desktop javax swing jcomponent paintchildren jcomponent java at java desktop javax swing jcomponent paint jcomponent java at java desktop javax swing jcomponent paintchildren jcomponent java at java desktop javax swing jcomponent paint 
jcomponent java at java desktop javax swing jcomponent paintchildren jcomponent java at java desktop javax swing jcomponent paint jcomponent java at java desktop javax swing jcomponent paintchildren jcomponent java at java desktop javax swing jcomponent paint jcomponent java at java desktop javax swing jcomponent paintchildren jcomponent java at java desktop javax swing jcomponent paint jcomponent java at java desktop javax swing jlayeredpane paint jlayeredpane java at java desktop javax swing jcomponent paintchildren jcomponent java at java desktop javax swing jcomponent paint jcomponent java at java desktop javax swing jlayeredpane paint jlayeredpane java at java desktop javax swing jcomponent paintchildren jcomponent java at java desktop javax swing jcomponent paint jcomponent java at java desktop javax swing jcomponent painttooffscreen jcomponent java at java desktop javax swing bufferstrategypaintmanager paint bufferstrategypaintmanager java at java desktop javax swing repaintmanager paint repaintmanager java at java desktop javax swing jcomponent paintimmediately jcomponent java at java desktop javax swing jcomponent paintimmediately jcomponent java at java desktop javax swing repaintmanager run repaintmanager java at java desktop javax swing repaintmanager run repaintmanager java at java base java security accesscontroller doprivileged native method at java base java security protectiondomain javasecurityaccessimpl dointersectionprivilege protectiondomain java at java desktop javax swing repaintmanager paintdirtyregions repaintmanager java at java desktop javax swing repaintmanager paintdirtyregions repaintmanager java at java desktop javax swing repaintmanager prepaintdirtyregions repaintmanager java at java desktop javax swing repaintmanager processingrunnable run repaintmanager java at java desktop java awt event invocationevent dispatch invocationevent java at java desktop java awt eventqueue dispatcheventimpl eventqueue java at java desktop java awt 
eventqueue run eventqueue java at java desktop java awt eventqueue run eventqueue java at java base java security accesscontroller doprivileged native method at java base java security protectiondomain javasecurityaccessimpl dointersectionprivilege protectiondomain java at java desktop java awt eventqueue dispatchevent eventqueue java at com intellij ide ideeventqueue defaultdispatchevent ideeventqueue java at com intellij ide ideeventqueue dispatchevent ideeventqueue java at com intellij ide ideeventqueue lambda null ideeventqueue java at com intellij openapi progress impl coreprogressmanager computeprioritized coreprogressmanager java at com intellij ide ideeventqueue lambda dispatchevent ideeventqueue java at com intellij ide ideeventqueue dispatchevent ideeventqueue java at java desktop java awt eventdispatchthread pumponeeventforfilters eventdispatchthread java at java desktop java awt eventdispatchthread pumpeventsforfilter eventdispatchthread java at java desktop java awt eventdispatchthread pumpeventsforhierarchy eventdispatchthread java at java desktop java awt eventdispatchthread pumpevents eventdispatchthread java at java desktop java awt eventdispatchthread pumpevents eventdispatchthread java at java desktop java awt eventdispatchthread run eventdispatchthread java pycharm version pycharm community edition build pc built on april runtime version vm openjdk bit server vm by jetbrains s r o linux generic gc parnew concurrentmarksweep memory cores non bundled plugins com intellij plugins visualstudiokeymap com jetbrains plugins bashsupport com leinardi pycharm pylint current desktop ubuntu gnome
| 0
|
59,183
| 6,630,834,260
|
IssuesEvent
|
2017-09-25 02:50:21
|
steedos/apps
|
https://api.github.com/repos/steedos/apps
|
closed
|
In the workflow designer, when a node has an assigned approval position and that position's personnel are modified (deleted), the step's properties can no longer be edited
|
fix:Done test:OK
|

流程设计器中,已经配置好了处理人(指定审批岗位),当此审批岗位人员被删除或是调动时,流程设计器会提示:步骤中指定的处理人已删除或已停用。这时,我们需要对此节点就行修改,要点击右边的属性按钮。属性按钮无响应!
|
1.0
|
In the workflow designer, when a node has an assigned approval position and that position's personnel are modified (deleted), the step's properties can no longer be edited - ![image](https://user-images.githubusercontent.com/20884223/30154950-105dd412-93ea-11e7-8957-1b2ee5a1d7a0.png)
In the workflow designer, a handler (a designated approval position) has already been configured. When the personnel in that approval position are deleted or transferred, the designer shows the message: "The handler specified in this step has been deleted or deactivated." At that point we need to edit this node by clicking the Properties button on the right, but the Properties button does not respond!
|
non_process
|
in the workflow designer when a node has an assigned approval position and personnel are modified deleted the step s properties can no longer be edited in the workflow designer a handler designated approval position has been configured when personnel in that approval position are deleted or transferred the designer shows the handler specified in this step has been deleted or deactivated we need to edit this node by clicking the properties button on the right the properties button does not respond
| 0
|
30,948
| 5,889,756,334
|
IssuesEvent
|
2017-05-17 13:37:44
|
LDMW/app
|
https://api.github.com/repos/LDMW/app
|
closed
|
13/4 Call
|
discuss documentation
|
**Note: from memory, should have made notes (next time)**
The main points from the conversation were:
### Sprint planning
We will be working in a more agile way with everyone getting involved with time estimating each feature that will be built.
Fast iterations on each new feature.
### Why Wagtail?
Because it is a cms which has already been integrated in parts of the nhs, but there is no real tie to it (I'm actually quite happy to go with it though).
### Project clarity
There will be much more of a plan for next week during the project planning phase
### Wireframes




|
1.0
|
13/4 Call - **Note: from memory, should have made notes (next time)**
The main points from the conversation were:
### Sprint planning
We will be working in a more agile way with everyone getting involved with time estimating each feature that will be built.
Fast iterations on each new feature.
### Why Wagtail?
Because it is a cms which has already been integrated in parts of the nhs, but there is no real tie to it (I'm actually quite happy to go with it though).
### Project clarity
There will be much more of a plan for next week during the project planning phase
### Wireframes




|
non_process
|
call note from memory should have made notes next time the main points from the conversation were sprint planning we will be working in a more agile way with everyone getting involved with time estimating each feature that will be built fast iterations on each new feature why wagtail because it is a cms which has already been integrated in parts of the nhs but there is no real tie to it i m actually quite happy to go with it though project clarity there will be much more of a plan for next week during the project planning phase wireframes
| 0
|
212,741
| 7,242,280,583
|
IssuesEvent
|
2018-02-14 06:47:25
|
NuGet/Home
|
https://api.github.com/repos/NuGet/Home
|
closed
|
Engineering: NuGet.Client should produce nupkgs using SemVer 2.0.0
|
Area: Engineering Improvements Priority:2 Sprint 131
|
The NuGet.Client repo should make use of SemVer 2.0.0 by producing nupkgs with release labels that include `.` instead of `-` for proper ordering.
Current: `3.6.0-rc-1950`
Expected: `3.6.0-rc.1.1950`
|
1.0
|
Engineering: NuGet.Client should produce nupkgs using SemVer 2.0.0 - The NuGet.Client repo should make use of SemVer 2.0.0 by producing nupkgs with release labels that include `.` instead of `-` for proper ordering.
Current: `3.6.0-rc-1950`
Expected: `3.6.0-rc.1.1950`
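The ordering difference this issue asks for can be sketched with a small, hypothetical Python helper (not NuGet code): SemVer 2.0.0 splits the pre-release label on `.` and compares numeric identifiers numerically, whereas a single hyphenated label falls back to plain string comparison.

```python
# Hypothetical helper (not NuGet code) illustrating SemVer 2.0.0
# pre-release precedence: identifiers are split on '.', and numeric
# identifiers compare as integers, so rc.2 < rc.10 as intended.

def prerelease_key(label):
    key = []
    for ident in label.split("."):
        if ident.isdigit():
            key.append((0, int(ident), ""))  # numeric identifiers sort before
        else:
            key.append((1, 0, ident))        # alphanumeric ones, compared as text
    return key

# SemVer 2.0.0 style labels order numerically where possible:
print(sorted(["rc.10", "rc.2"], key=prerelease_key))   # ['rc.2', 'rc.10']

# The old single-identifier style is compared as one string,
# so 'rc-10' sorts before 'rc-2':
print(sorted(["rc-10", "rc-2"]))                        # ['rc-10', 'rc-2']
```

With the legacy `3.6.0-rc-1950` scheme the whole label is one identifier, which is why switching to dotted labels like `3.6.0-rc.1.1950` restores the expected ordering.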
|
non_process
|
engineering nuget client should produce nupkgs using semver the nuget client repo should make sure of semver by producing nupkgs with release labels that include instead of for proper ordering current rc expected rc
| 0
|
1,033
| 25,115,254,434
|
IssuesEvent
|
2022-11-09 01:03:05
|
jongfeel/BookReview
|
https://api.github.com/repos/jongfeel/BookReview
|
closed
|
Part 3, Chapter 6: A safety valve for handling complaints
|
2022 How to Win Friends & Influence People
|
### Chapter 6: A safety valve for handling complaints
Let the other person finish telling their own story first. They know far more about their own affairs and problems than you do. So ask them questions. Get them to talk to you.
---
Rule 6: Let the other person do most of the talking.
Let the other man do a great deal of the talking.
|
1.0
|
Part 3, Chapter 6: A safety valve for handling complaints - ### Chapter 6: A safety valve for handling complaints
Let the other person finish telling their own story first. They know far more about their own affairs and problems than you do. So ask them questions. Get them to talk to you.
---
Rule 6: Let the other person do most of the talking.
Let the other man do a great deal of the talking.
|
non_process
|
a safety valve for handling complaints a safety valve for handling complaints let the other person finish telling their own story first they know far more about their own affairs and problems than you do so ask them questions get them to talk to you rule let the other person do most of the talking let the other man do a great deal of the talking
| 0
|
421,589
| 28,348,841,911
|
IssuesEvent
|
2023-04-12 00:07:09
|
fga-eps-mds/2023-1-CAPJu-Doc
|
https://api.github.com/repos/fga-eps-mds/2023-1-CAPJu-Doc
|
opened
|
Update the Lean Inception board
|
documentation eps
|
# Update the Board
## Description
Update the board with the CAPJU information (through step 5 - User journey) and complete the remaining steps' information
<!-- ### Issue related to [US <Number>](link) <If any> -->
## How to solve <Not necessary>
Discuss with the team and validate with the PO based on the board at the link
## Acceptance criteria
- [ ] Update the data through the user journey (through step 5)
- [ ] Brainstorm activities
- [ ] Do the Technical, Business, and UX review
- [ ] Do the Sequencer step, defining a new MVP and subsequent increments
- [ ] Update the MVP Canvas with a summary of everything
|
1.0
|
Update the Lean Inception board - # Update the Board
## Description
Update the board with the CAPJU information (through step 5 - User journey) and complete the remaining steps' information
<!-- ### Issue related to [US <Number>](link) <If any> -->
## How to solve <Not necessary>
Discuss with the team and validate with the PO based on the board at the link
## Acceptance criteria
- [ ] Update the data through the user journey (through step 5)
- [ ] Brainstorm activities
- [ ] Do the Technical, Business, and UX review
- [ ] Do the Sequencer step, defining a new MVP and subsequent increments
- [ ] Update the MVP Canvas with a summary of everything
|
non_process
|
update the lean inception board update the board description update the board with the capju information through step user journey and complete the remaining steps information how to solve discuss with the team and validate with the po based on the board at the link acceptance criteria update the data through the user journey through step brainstorm activities do the technical business and ux review do the sequencer step defining a new mvp and subsequent increments update the mvp canvas with a summary of everything
| 0
|
12,656
| 15,026,002,684
|
IssuesEvent
|
2021-02-01 21:57:49
|
2i2c-org/team-compass
|
https://api.github.com/repos/2i2c-org/team-compass
|
closed
|
Tech Team Update: 2021-01-27
|
team-process
|
Hey @2i2c-org/tech-team - we're a couple of days late for our latest updates. Sorry about that! Can folks fill out the [HackMD](https://hackmd.io/i2Siurp1TkmPYgn3ZgxFQw) with their own updates? ✨
- **Updates HackMD**: https://hackmd.io/i2Siurp1TkmPYgn3ZgxFQw
- **Team Sync history**: https://2i2c.org/team-compass/team/tech/sync/
# ToDo
- [x] Clean up the [HackMD](https://hackmd.io/i2Siurp1TkmPYgn3ZgxFQw) for this update
- [x] Ping the team members in [`#tech-updates`](https://2i2c.slack.com/archives/C01GLCC1VCN)
- [x] Wait 2-3 days
- [x] Copy/paste into the `team-compass` repository
- [x] Clean up the HackMD
- [x] Link to new updates in `team-compass/` in [`#tech-updates`](https://2i2c.slack.com/archives/C01GLCC1VCN)
|
1.0
|
Tech Team Update: 2021-01-27 - Hey @2i2c-org/tech-team - we're a couple of days late for our latest updates. Sorry about that! Can folks fill out the [HackMD](https://hackmd.io/i2Siurp1TkmPYgn3ZgxFQw) with their own updates? ✨
- **Updates HackMD**: https://hackmd.io/i2Siurp1TkmPYgn3ZgxFQw
- **Team Sync history**: https://2i2c.org/team-compass/team/tech/sync/
# ToDo
- [x] Clean up the [HackMD](https://hackmd.io/i2Siurp1TkmPYgn3ZgxFQw) for this update
- [x] Ping the team members in [`#tech-updates`](https://2i2c.slack.com/archives/C01GLCC1VCN)
- [x] Wait 2-3 days
- [x] Copy/paste into the `team-compass` repository
- [x] Clean up the HackMD
- [x] Link to new updates in `team-compass/` in [`#tech-updates`](https://2i2c.slack.com/archives/C01GLCC1VCN)
|
process
|
tech team update hey org tech team we re a couple of days late for our latest updates sorry about that can folks fill out the with their own updates ✨ updates hackmd team sync history todo clean up the for this update ping the team members in wait days copy paste into the team compass repository clean up the hackmd link to new updates in team compass in
| 1
|
13,274
| 15,757,795,002
|
IssuesEvent
|
2021-03-31 05:51:57
|
kubeflow/internal-acls
|
https://api.github.com/repos/kubeflow/internal-acls
|
closed
|
Presubmit test to prevent Github-sync from being broken by inconsistent member lists
|
kind/bug kind/process priority/p1
|
/kind process
Follow up on #344
We've seen a couple of times that Github sync is broken when a member is added to a team but not the Kubeflow org.
For example:
```
$ kubectl logs github-sync-1600450800-gz94v -n github-admin
{"component":"peribolos","file":"prow/flagutil/github.go:78","func":"k8s.io/test-infra/prow/flagutil.(*GitHubOptions).Validate","level":"warning","msg":"It doesn't look like you are using ghproxy to cache API calls to GitHub! This has become a required component of Prow and other components will soon be allowed to add features that may rapidly consume API ratelimit without caching. Starting May 1, 2020 use Prow components without ghproxy at your own risk! https://github.com/kubernetes/test-infra/tree/master/ghproxy#ghproxy","time":"2020-09-18T17:40:21Z"}
{"client":"github","component":"peribolos","file":"prow/github/client.go:563","func":"k8s.io/test-infra/prow/github.(*client).log","level":"info","msg":"Throttle(300, 100)","time":"2020-09-18T17:40:21Z"}
{"component":"peribolos","file":"prow/cmd/peribolos/main.go:818","func":"main.configureOrg","level":"info","msg":"Skipping org metadata configuration","time":"2020-09-18T17:40:21Z"}
{"client":"github","component":"peribolos","file":"prow/github/client.go:563","func":"k8s.io/test-infra/prow/github.(*client).log","level":"info","msg":"ListOrgInvitations(kubeflow)","time":"2020-09-18T17:40:21Z"}
{"client":"github","component":"peribolos","file":"prow/github/client.go:563","func":"k8s.io/test-infra/prow/github.(*client).log","level":"info","msg":"User()","time":"2020-09-18T17:40:21Z"}
{"client":"github","component":"peribolos","file":"prow/github/client.go:563","func":"k8s.io/test-infra/prow/github.(*client).log","level":"info","msg":"ListOrgMembers(kubeflow, admin)","time":"2020-09-18T17:40:21Z"}
{"client":"github","component":"peribolos","file":"prow/github/client.go:563","func":"k8s.io/test-infra/prow/github.(*client).log","level":"info","msg":"ListOrgMembers(kubeflow, member)","time":"2020-09-18T17:40:21Z"}
{"component":"peribolos","file":"prow/cmd/peribolos/main.go:201","func":"main.main","level":"fatal","msg":"Configuration failed: failed to configure kubeflow members: all team members/maintainers must also be org members: karkumar","time":"2020-09-18T17:40:24Z"}
```
Such errors should be caught earlier if we had some presubmit test to check on this.
/assign @Bobgy @rmgogogo
/cc @jlewi
|
1.0
|
Presubmit test to prevent Github-sync from being broken by inconsistent member lists - /kind process
Follow up on #344
We've seen a couple of times that Github sync is broken when a member is added to a team but not the Kubeflow org.
For example:
```
$ kubectl logs github-sync-1600450800-gz94v -n github-admin
{"component":"peribolos","file":"prow/flagutil/github.go:78","func":"k8s.io/test-infra/prow/flagutil.(*GitHubOptions).Validate","level":"warning","msg":"It doesn't look like you are using ghproxy to cache API calls to GitHub! This has become a required component of Prow and other components will soon be allowed to add features that may rapidly consume API ratelimit without caching. Starting May 1, 2020 use Prow components without ghproxy at your own risk! https://github.com/kubernetes/test-infra/tree/master/ghproxy#ghproxy","time":"2020-09-18T17:40:21Z"}
{"client":"github","component":"peribolos","file":"prow/github/client.go:563","func":"k8s.io/test-infra/prow/github.(*client).log","level":"info","msg":"Throttle(300, 100)","time":"2020-09-18T17:40:21Z"}
{"component":"peribolos","file":"prow/cmd/peribolos/main.go:818","func":"main.configureOrg","level":"info","msg":"Skipping org metadata configuration","time":"2020-09-18T17:40:21Z"}
{"client":"github","component":"peribolos","file":"prow/github/client.go:563","func":"k8s.io/test-infra/prow/github.(*client).log","level":"info","msg":"ListOrgInvitations(kubeflow)","time":"2020-09-18T17:40:21Z"}
{"client":"github","component":"peribolos","file":"prow/github/client.go:563","func":"k8s.io/test-infra/prow/github.(*client).log","level":"info","msg":"User()","time":"2020-09-18T17:40:21Z"}
{"client":"github","component":"peribolos","file":"prow/github/client.go:563","func":"k8s.io/test-infra/prow/github.(*client).log","level":"info","msg":"ListOrgMembers(kubeflow, admin)","time":"2020-09-18T17:40:21Z"}
{"client":"github","component":"peribolos","file":"prow/github/client.go:563","func":"k8s.io/test-infra/prow/github.(*client).log","level":"info","msg":"ListOrgMembers(kubeflow, member)","time":"2020-09-18T17:40:21Z"}
{"component":"peribolos","file":"prow/cmd/peribolos/main.go:201","func":"main.main","level":"fatal","msg":"Configuration failed: failed to configure kubeflow members: all team members/maintainers must also be org members: karkumar","time":"2020-09-18T17:40:24Z"}
```
Such errors should be caught earlier if we had some presubmit test to check on this.
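A minimal sketch of such a presubmit check, assuming the org and team member lists have already been parsed out of the peribolos config (the function name and data below are hypothetical):

```python
# Hypothetical presubmit check mirroring the invariant peribolos enforces:
# every team member/maintainer must also be an org member.

def find_non_org_members(org_members, teams):
    """Return {team: [members missing from the org]} for any violations."""
    org = set(org_members)
    violations = {}
    for team, members in teams.items():
        missing = sorted(set(members) - org)
        if missing:
            violations[team] = missing
    return violations

# Example mirroring the failure in the logs above:
teams = {"wg-automl": ["alice", "karkumar"]}
print(find_non_org_members(["alice", "bob"], teams))  # {'wg-automl': ['karkumar']}
```

Running a check like this in CI would surface the inconsistency at review time instead of breaking the nightly sync.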
/assign @Bobgy @rmgogogo
/cc @jlewi
|
process
|
presubmit test to prevent github sync from being broken by inconsistent member lists kind process follow up on we ve seen a couple of times that github sync is broken when a member is added to a team but not the kubeflow org for example kubectl logs github sync n github admin component peribolos file prow flagutil github go func io test infra prow flagutil githuboptions validate level warning msg it doesn t look like you are using ghproxy to cache api calls to github this has become a required component of prow and other components will soon be allowed to add features that may rapidly consume api ratelimit without caching starting may use prow components without ghproxy at your own risk client github component peribolos file prow github client go func io test infra prow github client log level info msg throttle time component peribolos file prow cmd peribolos main go func main configureorg level info msg skipping org metadata configuration time client github component peribolos file prow github client go func io test infra prow github client log level info msg listorginvitations kubeflow time client github component peribolos file prow github client go func io test infra prow github client log level info msg user time client github component peribolos file prow github client go func io test infra prow github client log level info msg listorgmembers kubeflow admin time client github component peribolos file prow github client go func io test infra prow github client log level info msg listorgmembers kubeflow member time component peribolos file prow cmd peribolos main go func main main level fatal msg configuration failed failed to configure kubeflow members all team members maintainers must also be org members karkumar time such errors should be caught earlier if we had some presubmit test to check on this assign bobgy rmgogogo cc jlewi
| 1
|
10,268
| 13,124,712,763
|
IssuesEvent
|
2020-08-06 04:40:01
|
didi/mpx
|
https://api.github.com/repos/didi/mpx
|
closed
|
[Bug report]构建为swan的时候,全局配置属性丢失
|
processing
|
**问题描述**
mpx构建为百度小程序时,丢失全局配置属性:networkTimeout、permission、requiredBackgroundModes
原因如下:
https://github.com/didi/mpx/blob/c8b6f09491abf3553db90c2442099f9f94022962/packages/webpack-plugin/lib/platform/json/wx/index.js#L210-L220
在这个文件里做了删除。
**建议**
各家小程序都在持续更新迭代,所以希望mpx官方也定期检查配置,^@^
|
1.0
|
[Bug report]构建为swan的时候,全局配置属性丢失 - **问题描述**
mpx构建为百度小程序时,丢失全局配置属性:networkTimeout、permission、requiredBackgroundModes
原因如下:
https://github.com/didi/mpx/blob/c8b6f09491abf3553db90c2442099f9f94022962/packages/webpack-plugin/lib/platform/json/wx/index.js#L210-L220
在这个文件里做了删除。
**建议**
各家小程序都在持续更新迭代,所以希望mpx官方也定期检查配置,^@^
|
process
|
构建为swan的时候,全局配置属性丢失 问题描述 mpx构建为百度小程序时,丢失全局配置属性:networktimeout、permission、requiredbackgroundmodes 原因如下: 在这个文件里做了删除。 建议 各家小程序都在持续更新迭代,所以希望mpx官方也定期检查配置,
| 1
|
339,571
| 10,256,254,447
|
IssuesEvent
|
2019-08-21 17:13:30
|
NuGet/Home
|
https://api.github.com/repos/NuGet/Home
|
closed
|
PackagePathResolver.GetInstalledPath doesn't work with relative paths
|
Priority:2 Sprint 157 Type:Bug
|
Either make sure `PackagePathResolver.Root` is absolute [here](https://github.com/NuGet/NuGet.Client/blob/3803820961f4d61c06d07b179dab1d0439ec0d91/src/NuGet.Core/NuGet.Packaging/PackageExtraction/PackagePathHelper.cs#L123) or just make it absolute in the PackagePathResolver constructor.
|
1.0
|
PackagePathResolver.GetInstalledPath doesn't work with relative paths - Either make sure `PackagePathResolver.Root` is absolute [here](https://github.com/NuGet/NuGet.Client/blob/3803820961f4d61c06d07b179dab1d0439ec0d91/src/NuGet.Core/NuGet.Packaging/PackageExtraction/PackagePathHelper.cs#L123) or just make it absolute in the PackagePathResolver constructor.
|
non_process
|
packagepathresolver getinstalledpath doesn t work with relative paths either make sure packagepathresolver root is absolute or just make it absolute in the packagepathresolver constructor
| 0
|
105,353
| 13,181,071,331
|
IssuesEvent
|
2020-08-12 13:47:42
|
JamesOwers/midi_degradation_toolkit
|
https://api.github.com/repos/JamesOwers/midi_degradation_toolkit
|
closed
|
df_to_csv and csv_to_df should probably be in the same place
|
design
|
I'd go for having them both in that "midi" package and renaming it perhaps? (Currently csv_to_df is data_structures.read_note_csv)
Essentially, mdtk.midi (renamed) would be for file I/O and conversion, while mdtk.data_structures would be about doing things with dataframes.
|
1.0
|
df_to_csv and csv_to_df should probably be in the same place - I'd go for having them both in that "midi" package and renaming it perhaps? (Currently csv_to_df is data_structures.read_note_csv)
Essentially, mdtk.midi (renamed) would be for file I/O and conversion, while mdtk.data_structures would be about doing things with dataframes.
|
non_process
|
df to csv and csv to df should probably be in the same place i d go for having them both in that midi package and renaming it perhaps currently csv to df is data structures read note csv essentially mdtk midi renamed would be for file i o and conversion while mdtk data structures would be about doing things with dataframes
| 0
|
73,811
| 8,940,639,185
|
IssuesEvent
|
2019-01-24 00:34:16
|
Opentrons/opentrons
|
https://api.github.com/repos/Opentrons/opentrons
|
opened
|
Thermocycler: Define Temperature Cycles (API)
|
WIP api design feature medium
|
As a Thermocycler user, I would like to be able to define temperature cycles via API.
## Acceptance Criteria
- [ ] User able to define a 'cycle' consisting of an arbitrary number of steps
- [ ] A step includes a defined temperature, temperature ramp rate, and time at that temperature
- [ ] If no temperature ramp rate is defined, default to max
- [ ] User able to define a number of cycles
## Open Questions
- How to handle Thermocycler hold and storage temperatures used before and after the cycle? Assumption: these will be defined as individual steps
- How to handle 'steps' that do not consist of 'temperatures' and 'time at temp' (i.e. are there other actions users might include? Need to research further)
- How to handle incrementing temperature per cycle (do we prioritize this? one way to handle this might be allowing users to write their own 'for' loops with individual user-facing 'setTemp' commands')
- How should users define ramp rates? Defining linear rates (e.g. degrees/second) would seem to provide the most control and reproducibility, but several other thermocyclers only allow users to define ramp rate as a % of max. Is there a hardware limitation driving this?
- How should the API method to define a cycle be patterned? For example:
- One function in which cycle properties and iterations are params
- Following a builder pattern
- etc.
- Consider (1) consistency with rest of API (2) ease of understanding (3) flexibility and future-proofing
## Design
- No user interface, but tagging @pantslakz because ^^^ has UI implications
|
1.0
|
Thermocycler: Define Temperature Cycles (API) - As a Thermocycler user, I would like to be able to define temperature cycles via API.
## Acceptance Criteria
- [ ] User able to define a 'cycle' consisting of an arbitrary number of steps
- [ ] A step includes a defined temperature, temperature ramp rate, and time at that temperature
- [ ] If no temperature ramp rate is defined, default to max
- [ ] User able to define a number of cycles
## Open Questions
- How to handle Thermocycler hold and storage temperatures used before and after the cycle? Assumption: these will be defined as individual steps
- How to handle 'steps' that do not consist of 'temperatures' and 'time at temp' (i.e. are there other actions users might include? Need to research further)
- How to handle incrementing temperature per cycle (do we prioritize this? one way to handle this might be allowing users to write their own 'for' loops with individual user-facing 'setTemp' commands')
- How should users define ramp rates? Defining linear rates (e.g. degrees/second) would seem to provide the most control and reproducibility, but several other thermocyclers only allow users to define ramp rate as a % of max. Is there a hardware limitation driving this?
- How should the API method to define a cycle be patterned? For example:
- One function in which cycle properties and iterations are params
- Following a builder pattern
- etc.
- Consider (1) consistency with rest of API (2) ease of understanding (3) flexibility and future-proofing
## Design
- No user interface, but tagging @pantslakz because ^^^ has UI implications
|
non_process
|
thermocycler define temperature cycles api as a thermocycler user i would like to be able to define temperature cycles via api acceptance criteria user able to define a cycle consisting of an arbitrary number of steps a step includes a defined temperature temperature ramp rate and time at that temperature if no temperature ramp rate is defined default to max user able to define a number of cycles open questions how to handle thermocycler hold and storage temperatures used before and after the cycle assumption these will be defined as individual steps how to handle steps that do not consist of temperatures and time at temp i e are there other actions users might include need to research further how to handle incrementing temperature per cycle do we prioritize this one way to handle this might be allowing users to write their own for loops with individual user facing settemp commands how should users define ramp rates defining linear rates e g degrees second would seem to provide the most control and reproducibility but several other thermocyclers only allow users to define ramp rate as a of max is there a hardware limitation driving this how should the api method to define a cycle be patterned for example one function in which cycle properties and iterations are params following a builder pattern etc consider consistency with rest of api ease of understanding flexibility and future proofing design no user interface but tagging pantslakz because has ui implications
| 0
|
6,976
| 10,127,500,294
|
IssuesEvent
|
2019-08-01 10:21:56
|
bisq-network/bisq
|
https://api.github.com/repos/bisq-network/bisq
|
closed
|
User experience: add "Start payment" before "Payment started" button
|
in:gui in:trade-process was:dropped
|
For a new user, "Payment started" button without any clarification may be somewhat confusing. It is good that once you press it, it asks you if you have made a payment or not, but maybe it would be better if there was some instruction before that button, such as "Start payment".
|
1.0
|
User experience: add "Start payment" before "Payment started" button - For a new user, "Payment started" button without any clarification may be somewhat confusing. It is good that once you press it, it asks you if you have made a payment or not, but maybe it would be better if there was some instruction before that button, such as "Start payment".
|
process
|
user experience add start payment before payment started button for a new user payment started button without any clarification may be somewhat confusing it is good that once you press it it asks you if you have made a payment or not but maybe it would be better if there was some instruction before that button such as start payment
| 1
|
3,815
| 6,800,316,550
|
IssuesEvent
|
2017-11-02 13:37:15
|
syndesisio/syndesis-ui
|
https://api.github.com/repos/syndesisio/syndesis-ui
|
closed
|
Update third party dependencies
|
dev process enhancement Priority - High
|
Noticed while looking at https://github.com/syndesisio/syndesis-ui/issues/934 that our dependency versions are ancient at this point and many could use a bump up to a newer version.
|
1.0
|
Update third party dependencies - Noticed while looking at https://github.com/syndesisio/syndesis-ui/issues/934 that our dependency versions are ancient at this point and many could use a bump up to a newer version.
|
process
|
update third party dependencies noticed while looking at that our dependency versions are ancient at this point and many could use a bump up to a newer version
| 1
|
239,913
| 7,800,159,509
|
IssuesEvent
|
2018-06-09 05:42:57
|
tine20/Tine-2.0-Open-Source-Groupware-and-CRM
|
https://api.github.com/repos/tine20/Tine-2.0-Open-Source-Groupware-and-CRM
|
closed
|
0007568:
do not send iMIP-messages via ActiveSync
|
ActiveSync Bug Mantis high priority
|
**Reported by pschuele on 10 Dec 2012 14:25**
**Version:** Milan (2012.03.7)
do not send iMIP-messages via ActiveSync
- check attachments, do not send message with Felamimail_Model_Message::CONTENT_TYPE_CALENDAR attachment
|
1.0
|
0007568:
do not send iMIP-messages via ActiveSync - **Reported by pschuele on 10 Dec 2012 14:25**
**Version:** Milan (2012.03.7)
do not send iMIP-messages via ActiveSync
- check attachments, do not send message with Felamimail_Model_Message::CONTENT_TYPE_CALENDAR attachment
|
non_process
|
do not send imip messages via activesync reported by pschuele on dec version milan do not send imip messages via activesync check attachments do not send message with felamimail model message content type calendar attachment
| 0
|
17,373
| 23,198,521,665
|
IssuesEvent
|
2022-08-01 18:54:21
|
vectordotdev/vector
|
https://api.github.com/repos/vectordotdev/vector
|
closed
|
Make Vector more scriptable
|
meta: idea needs: approval needs: requirements domain: processing
|
In this issue I want to discuss on high level adding scripting APIs to Vector. It might not be the top priority at the moment, but I'm creating this issue now to give us enough time to think about it and discuss it.
## Introduction
### Goals and scope of this issue
The goal is to define how ideal APIs should look like, even if they cannot be easily implemented at the moment. This would allow us to have the whole picture in advance to ensure that when we implement separate scripting-related features they play together well.
In the text below JavaScript will be used as the scripting language because I'm familiar with it and it can be implemented on top of QuickJS engine (see #721). Most of the ideas described here can be translated to some other scripting language, for example Lua, if we find out that it fits better.
### Intended usage of scripting
I want to highlight that _scripting is intended to be used for non-standard things and can't replace native sources/transforms/sinks_ because a scripting language will always be an order of magnitude slower than native Rust code. However, it still could be indispensable to users who need to do something custom.
Additionally, if we actually arrive at the point where scripting is flexible enough, it would be possible to prototype some new features as scripts before actual high-performant and user-friendly implementation in Rust.
## Scriptable components
### Overview
We can have all three possible types of scriptable components:
* scriptable _source_ for generating events from user code
* scriptable _transform_ for changing, augmenting, or multiplying events
* scriptable _sink_ for storing events in non-standard ways
Below there are examples of how they can be used.
### Config structure
Each of the components needs to have some kind of a _handler_ function which implements the logic. The config could either
* load the source from a file
```toml
[component]
handler = "function_name"
path = "script_path.js"
```
where the path by default is relative to the path of the config file, and not to the current working directory to ensure that running `vector --config <config path>` works with any working directory.
* or contain inlined source as a string:
```toml
[component]
handler = "function name"
source = "<actual JS source>"
```
I also find it reasonable to allow usage of anonymous handlers for simple cases, where entire source contains (and evaluates to) a single function. In that case the handler field should be skipped and the config could look just like
```toml
[component]
source = "function (...) {}"
```
### Sources
The general idea of is similar to #992, but instead of a shell command a JavaScript function is periodically executed and generates either actual event or promise that resolves to an event.
* Just a counter with state, can be used for tests:
```javascript
let counter = 0;
function handler() {
return ++counter;
}
```
* HTTP API reader with [fetch API](https://developer.mozilla.org/en-US/docs/Web/API/Fetch_API):
```javascript
async function handler() {
const res = await fetch("http://...");
const data = await res.json();
return {message: data.message};
}
```
* File reader. Here a decision has to be made on filesystem APIs, as they are not standard. This example uses Node-style [readFileSync](https://nodejs.org/api/fs.html#fs_fs_readfilesync_path_options), while QuickJS provides more [low-level filesystem functions](https://bellard.org/quickjs/quickjs.html#std-module). In particular, it can be used to read values from `/sys` or `/proc` filesystems:
```javascript
function handler() {
const content = fs.readFileSync("/sys/class/gpio/gpio3");
return {state: content};
}
```
It should also be possible to returned multiple events at once by returning an array or no events by returning `null`.
### Transforms
Scripted transforms are a generalization of #721 with support of promises in addition to normal events. The promise support is mostly needed for the fetch API, which can be used as described below. The promise can resolve to a single event, an array or events, or `null`. Returning actual event (or array or `null`) instead of promises needs to be supported too.
Examples:
* Currency rate conversion:
```javascript
let currencyRate = 0;
let currencyRateFetchedAt = new Date(0);
async function getCurrencyRate() {
const now = new Date();
if (now - currencyRateFetchedAt > 1000 * 60 * 60) { // cache rate for one hour
const res = await fetch("http://currency-api/...");
const data = await res.json();
currencyRate = data.rate;
currencyRateFetchedAt = now;
}
return currencyRate;
}
async function handler(event) {
return {...event, price: event.price * (await getCurrencyRate())};
}
```
* Merging events from different sources. For example, there could be two sources, one of which produces events containing current room temperature and another containing current atmospheric pressure. They can be combined using the following config:
```toml
[transforms.combiner]
type = "javascript"
inputs = ["temperature_source", "pressure_source"]
handler = "handler"
path = "..."
```
with script source looking like
```javascript
let temperature = null;
let pressure = null;
function handler(event) {
if (event.hasOwnProperty("temperature")) {
temperature = event.temperature;
}
if (event.hasOwnProperty("pressure")) {
pressure = event.pressure;
}
if (temperature !== null && pressure !== null) {
return {
timestamp: event.timestamp,
message: `Temperature: ${temperature}, pressure: ${pressure}`
}
}
}
```
### Sinks
While scriptable sinks seems to be less important to me than sources and transforms, I think the main use case here not covered by native sinks can be making multi-step HTTP requests, writing to the filesystem using complex logic, or invoking command-line programs with arguments combined using complex logic. The handler should take an `events` array containing a batch of events and process them either synchronously or asynchronously.
Example:
* Send HTTP requests requiring temporary authentication tokens:
```javascript
let token = null
async function handler(events) {
if (token === null) {
// send authentication request with login and password from environment variables
// and receive the token
token = await authenticate();
}
// send actual request using the token
await makeRequest(events, token);
}
```
## Proposed API groups to be implemented
From examples and use cases listed above, I think the following list of APIs provided to the user scripts would be useful:
* HTTP requests (for JavaScript it is [fetch API](https://developer.mozilla.org/en-US/docs/Web/API/Fetch_API))
* Access to environment variables
* Access to the filesystem
* Invocation of other programs (similar to `system`/`popen`)
* Logging facilities (ability to log to Vector's standard log, could be useful for debugging)
## Questions
* Should there be optional timeouts for handlers, which would make Vector restart the scripting engine if the timeout is exceeded?
* Are there missing API groups which could be provided too?
|
1.0
|
Make Vector more scriptable - In this issue I want to discuss on high level adding scripting APIs to Vector. It might not be the top priority at the moment, but I'm creating this issue now to give us enough time to think about it and discuss it.
## Introduction
### Goals and scope of this issue
The goal is to define how ideal APIs should look like, even if they cannot be easily implemented at the moment. This would allow us to have the whole picture in advance to ensure that when we implement separate scripting-related features they play together well.
In the text below JavaScript will be used as the scripting language because I'm familiar with it and it can be implemented on top of QuickJS engine (see #721). Most of the ideas described here can be translated to some other scripting language, for example Lua, if we find out that it fits better.
### Intended usage of scripting
I want to highlight that _scripting is intended to be used for non-standard things and can't replace native sources/transforms/sinks_ because a scripting language will always be an order of magnitude slower than native Rust code. However, it still could be indispensable to users who need to do something custom.
Additionally, if we actually arrive at the point where scripting is flexible enough, it would be possible to prototype some new features as scripts before actual high-performant and user-friendly implementation in Rust.
## Scriptable components
### Overview
We can have all three possible types of scriptable components:
* scriptable _source_ for generating events from user code
* scriptable _transform_ for changing, augmenting, or multiplying events
* scriptable _sink_ for storing events in non-standard ways
Below there are examples of how they can be used.
### Config structure
Each of the components needs to have some kind of a _handler_ function which implements the logic. The config could either
* load the source from a file
```toml
[component]
handler = "function_name"
path = "script_path.js"
```
where the path by default is relative to the path of the config file, and not to the current working directory to ensure that running `vector --config <config path>` works with any working directory.
* or contain inlined source as a string:
```toml
[component]
handler = "function name"
source = "<actual JS source>"
```
I also find it reasonable to allow usage of anonymous handlers for simple cases, where entire source contains (and evaluates to) a single function. In that case the handler field should be skipped and the config could look just like
```toml
[component]
source = "function (...) {}"
```
### Sources
The general idea of is similar to #992, but instead of a shell command a JavaScript function is periodically executed and generates either actual event or promise that resolves to an event.
* Just a counter with state, can be used for tests:
```javascript
let counter = 0;
function handler() {
return ++counter;
}
```
* HTTP API reader with [fetch API](https://developer.mozilla.org/en-US/docs/Web/API/Fetch_API):
```javascript
async function handler() {
const res = await fetch("http://...");
const data = await res.json();
return {message: data.message};
}
```
* File reader. Here a decision has to be made on filesystem APIs, as they are not standard. This example uses Node-style [readFileSync](https://nodejs.org/api/fs.html#fs_fs_readfilesync_path_options), while QuickJS provides more [low-level filesystem functions](https://bellard.org/quickjs/quickjs.html#std-module). In particular, it can be used to read values from `/sys` or `/proc` filesystems:
```javascript
function handler() {
const content = fs.readFileSync("/sys/class/gpio/gpio3");
return {state: content};
}
```
It should also be possible to returned multiple events at once by returning an array or no events by returning `null`.
### Transforms
Scripted transforms are a generalization of #721 with support of promises in addition to normal events. The promise support is mostly needed for the fetch API, which can be used as described below. The promise can resolve to a single event, an array or events, or `null`. Returning actual event (or array or `null`) instead of promises needs to be supported too.
Examples:
* Currency rate conversion:
```javascript
let currencyRate = 0;
let currencyRateFetchedAt = new Date(0);
async function getCurrencyRate() {
const now = new Date();
if (now - currencyRateFetchedAt > 1000 * 60 * 60) { // cache rate for one hour
const res = await fetch("http://currency-api/...");
const data = await res.json();
currencyRate = data.rate;
currencyRateFetchedAt = now;
}
return currencyRate;
}
async function handler(event) {
return {...event, price: event.price * (await getCurrencyRate())};
}
```
* Merging events from different sources. For example, there could be two sources, one of which produces events containing current room temperature and another containing current atmospheric pressure. They can be combined using the following config:
```toml
[transforms.combiner]
type = "javascript"
inputs = ["temperature_source", "pressure_source"]
handler = "handler"
path = "..."
```
with script source looking like
```javascript
let temperature = null;
let pressure = null;
function handler(event) {
if (event.hasOwnProperty("temperature")) {
temperature = event.temperature;
}
if (event.hasOwnProperty("pressure")) {
pressure = event.pressure;
}
if (temperature !== null && pressure !== null) {
return {
timestamp: event.timestamp,
message: `Temperature: ${temperature}, pressure: ${pressure}`
}
}
}
```
### Sinks
While scriptable sinks seems to be less important to me than sources and transforms, I think the main use case here not covered by native sinks can be making multi-step HTTP requests, writing to the filesystem using complex logic, or invoking command-line programs with arguments combined using complex logic. The handler should take an `events` array containing a batch of events and process them either synchronously or asynchronously.
Example:
* Send HTTP requests requiring temporary authentication tokens:
```javascript
let token = null
async function handler(events) {
if (token === null) {
// send authentication request with login and password from environment variables
// and receive the token
token = await authenticate();
}
// send actual request using the token
await makeRequest(events, token);
}
```
## Proposed API groups to be implemented
From examples and use cases listed above, I think the following list of APIs provided to the user scripts would be useful:
* HTTP requests (for JavaScript it is [fetch API](https://developer.mozilla.org/en-US/docs/Web/API/Fetch_API))
* Access to environment variables
* Access to the filesystem
* Invocation of other programs (similar to `system`/`popen`)
* Logging facilities (ability to log to Vector's standard log, could be useful for debugging)
## Questions
* Should there be optional timeouts for handlers, which would make Vector restart the scripting engine if the timeout is exceeded?
* Are there missing API groups which could be provided too?
|
process
|
make vector more scriptable in this issue i want to discuss on high level adding scripting apis to vector it might not be the top priority at the moment but i m creating this issue now to give us enough time to think about it and discuss it introduction goals and scope of this issue the goal is to define how ideal apis should look like even if they cannot be easily implemented at the moment this would allow us to have the whole picture in advance to ensure that when we implement separate scripting related features they play together well in the text below javascript will be used as the scripting language because i m familiar with it and it can be implemented on top of quickjs engine see most of the ideas described here can be translated to some other scripting language for example lua if we find out that it fits better intended usage of scripting i want to highlight that scripting is intended to be used for non standard things and can t replace native sources transforms sinks because a scripting language will always be an order of magnitude slower than native rust code however it still could be indispensable to users who need to do something custom additionally if we actually arrive at the point where scripting is flexible enough it would be possible to prototype some new features as scripts before actual high performant and user friendly implementation in rust scriptable components overview we can have all three possible types of scriptable components scriptable source for generating events from user code scriptable transform for changing augmenting or multiplying events scriptable sink for storing events in non standard ways below there are examples of how they can be used config structure each of the components needs to have some kind of a handler function which implements the logic the config could either load the source from a file toml handler function name path script path js where the path by default is relative to the path of the config file and not to the 
current working directory to ensure that running vector config works with any working directory or contain inlined source as a string toml handler function name source i also find it reasonable to allow usage of anonymous handlers for simple cases where entire source contains and evaluates to a single function in that case the handler field should be skipped and the config could look just like toml source function sources the general idea of is similar to but instead of a shell command a javascript function is periodically executed and generates either actual event or promise that resolves to an event just a counter with state can be used for tests javascript let counter function handler return counter http api reader with javascript async function handler const res await fetch const data await res json return message data message file reader here a decision has to be made on filesystem apis as they are not standard this example uses node style while quickjs provides more in particular it can be used to read values from sys or proc filesystems javascript function handler const content fs readfilesync sys class gpio return state content it should also be possible to returned multiple events at once by returning an array or no events by returning null transforms scripted transforms are a generalization of with support of promises in addition to normal events the promise support is mostly needed for the fetch api which can be used as described below the promise can resolve to a single event an array or events or null returning actual event or array or null instead of promises needs to be supported too examples currency rate conversion javascript let currencyrate let currencyratefetchedat new date async function getcurrencyrate const now new date if now currencyratefetchedat cache rate for one hour const res await fetch const data await res json currencyrate data rate currencyratefetchedat now return currencyrate async function handler event return event price event 
price await getcurrencyrate merging events from different sources for example there could be two sources one of which produces events containing current room temperature and another containing current atmospheric pressure they can be combined using the following config toml type javascript inputs handler handler path with script source looking like javascript let temperature null let pressure null function handler event if event hasownproperty temperature temperature event temperature if event hasownproperty pressure pressure event pressure if temperature null pressure null return timestamp event timestamp message temperature temperature pressure pressure sinks while scriptable sinks seems to be less important to me than sources and transforms i think the main use case here not covered by native sinks can be making multi step http requests writing to the filesystem using complex logic or invoking command line programs with arguments combined using complex logic the handler should take an events array containing a batch of events and process them either synchronously or asynchronously example send http requests requiring temporary authentication tokens javascript let token null async function handler events if token null send authentication request with login and password from environment variables and receive the token token await authenticate send actual request using the token await makerequest events token proposed api groups to be implemented from examples and use cases listed above i think the following list of apis provided to the user scripts would be useful http requests for javascript it is access to environment variables access to the filesystem invocation of other programs similar to system popen logging facilities ability to log to vector s standard log could be useful for debugging questions should there be optional timeouts for handlers which would make vector restart the scripting engine if the timeout is exceeded are there missing api groups which 
could be provided too
| 1
|
95,858
| 16,112,865,488
|
IssuesEvent
|
2021-04-28 01:00:51
|
bci-oss/keycloak
|
https://api.github.com/repos/bci-oss/keycloak
|
opened
|
CVE-2021-23382 (Medium) detected in multiple libraries
|
security vulnerability
|
## CVE-2021-23382 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>postcss-6.0.23.tgz</b>, <b>postcss-6.0.1.tgz</b>, <b>postcss-7.0.27.tgz</b></p></summary>
<p>
<details><summary><b>postcss-6.0.23.tgz</b></p></summary>
<p>Tool for transforming styles with JS plugins</p>
<p>Library home page: <a href="https://registry.npmjs.org/postcss/-/postcss-6.0.23.tgz">https://registry.npmjs.org/postcss/-/postcss-6.0.23.tgz</a></p>
<p>Path to dependency file: keycloak/themes/src/main/resources/theme/keycloak.v2/account/src/package.json</p>
<p>Path to vulnerable library: keycloak/themes/src/main/resources/theme/keycloak.v2/account/src/node_modules/postcss-modules-extract-imports/node_modules/postcss/package.json,keycloak/themes/src/main/resources/theme/keycloak.v2/account/src/node_modules/postcss-modules-local-by-default/node_modules/postcss/package.json,keycloak/themes/src/main/resources/theme/keycloak.v2/account/src/node_modules/postcss-modules-scope/node_modules/postcss/package.json,keycloak/themes/src/main/resources/theme/keycloak.v2/account/src/node_modules/postcss-modules-values/node_modules/postcss/package.json</p>
<p>
Dependency Hierarchy:
- rollup-plugin-postcss-2.5.0.tgz (Root Library)
- postcss-modules-1.5.0.tgz
- css-modules-loader-core-1.1.0.tgz
- postcss-modules-scope-1.1.0.tgz
- :x: **postcss-6.0.23.tgz** (Vulnerable Library)
</details>
<details><summary><b>postcss-6.0.1.tgz</b></p></summary>
<p>Tool for transforming styles with JS plugins</p>
<p>Library home page: <a href="https://registry.npmjs.org/postcss/-/postcss-6.0.1.tgz">https://registry.npmjs.org/postcss/-/postcss-6.0.1.tgz</a></p>
<p>Path to dependency file: keycloak/themes/src/main/resources/theme/keycloak.v2/account/src/package.json</p>
<p>Path to vulnerable library: keycloak/themes/src/main/resources/theme/keycloak.v2/account/src/node_modules/css-modules-loader-core/node_modules/postcss/package.json</p>
<p>
Dependency Hierarchy:
- rollup-plugin-postcss-2.5.0.tgz (Root Library)
- postcss-modules-1.5.0.tgz
- css-modules-loader-core-1.1.0.tgz
- :x: **postcss-6.0.1.tgz** (Vulnerable Library)
</details>
<details><summary><b>postcss-7.0.27.tgz</b></p></summary>
<p>Tool for transforming styles with JS plugins</p>
<p>Library home page: <a href="https://registry.npmjs.org/postcss/-/postcss-7.0.27.tgz">https://registry.npmjs.org/postcss/-/postcss-7.0.27.tgz</a></p>
<p>Path to dependency file: keycloak/themes/src/main/resources/theme/keycloak.v2/account/src/package.json</p>
<p>Path to vulnerable library: keycloak/themes/src/main/resources/theme/keycloak.v2/account/src/node_modules/postcss/package.json</p>
<p>
Dependency Hierarchy:
- rollup-plugin-postcss-2.5.0.tgz (Root Library)
- :x: **postcss-7.0.27.tgz** (Vulnerable Library)
</details>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
The package postcss before 8.2.13 are vulnerable to Regular Expression Denial of Service (ReDoS) via getAnnotationURL() and loadAnnotation() in lib/previous-map.js. The vulnerable regexes are caused mainly by the sub-pattern \/\*\s* sourceMappingURL=(.*).
<p>Publish Date: 2021-04-26
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-23382>CVE-2021-23382</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.3</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: Low
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-23382">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-23382</a></p>
<p>Release Date: 2021-04-26</p>
<p>Fix Resolution: postcss - 8.2.13</p>
</p>
</details>
<p></p>
<!-- <REMEDIATE>{"isOpenPROnVulnerability":true,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"javascript/Node.js","packageName":"postcss","packageVersion":"6.0.23","packageFilePaths":["/themes/src/main/resources/theme/keycloak.v2/account/src/package.json"],"isTransitiveDependency":true,"dependencyTree":"rollup-plugin-postcss:2.5.0;postcss-modules:1.5.0;css-modules-loader-core:1.1.0;postcss-modules-scope:1.1.0;postcss:6.0.23","isMinimumFixVersionAvailable":true,"minimumFixVersion":"postcss - 8.2.13"},{"packageType":"javascript/Node.js","packageName":"postcss","packageVersion":"6.0.1","packageFilePaths":["/themes/src/main/resources/theme/keycloak.v2/account/src/package.json"],"isTransitiveDependency":true,"dependencyTree":"rollup-plugin-postcss:2.5.0;postcss-modules:1.5.0;css-modules-loader-core:1.1.0;postcss:6.0.1","isMinimumFixVersionAvailable":true,"minimumFixVersion":"postcss - 8.2.13"},{"packageType":"javascript/Node.js","packageName":"postcss","packageVersion":"7.0.27","packageFilePaths":["/themes/src/main/resources/theme/keycloak.v2/account/src/package.json"],"isTransitiveDependency":true,"dependencyTree":"rollup-plugin-postcss:2.5.0;postcss:7.0.27","isMinimumFixVersionAvailable":true,"minimumFixVersion":"postcss - 8.2.13"}],"baseBranches":["master"],"vulnerabilityIdentifier":"CVE-2021-23382","vulnerabilityDetails":"The package postcss before 8.2.13 are vulnerable to Regular Expression Denial of Service (ReDoS) via getAnnotationURL() and loadAnnotation() in lib/previous-map.js. The vulnerable regexes are caused mainly by the sub-pattern \\/\\*\\s* sourceMappingURL\u003d(.*).","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-23382","cvss3Severity":"medium","cvss3Score":"5.3","cvss3Metrics":{"A":"Low","AC":"Low","PR":"None","S":"Unchanged","C":"None","UI":"None","AV":"Network","I":"None"},"extraData":{}}</REMEDIATE> -->
|
True
|
CVE-2021-23382 (Medium) detected in multiple libraries - ## CVE-2021-23382 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>postcss-6.0.23.tgz</b>, <b>postcss-6.0.1.tgz</b>, <b>postcss-7.0.27.tgz</b></p></summary>
<p>
<details><summary><b>postcss-6.0.23.tgz</b></p></summary>
<p>Tool for transforming styles with JS plugins</p>
<p>Library home page: <a href="https://registry.npmjs.org/postcss/-/postcss-6.0.23.tgz">https://registry.npmjs.org/postcss/-/postcss-6.0.23.tgz</a></p>
<p>Path to dependency file: keycloak/themes/src/main/resources/theme/keycloak.v2/account/src/package.json</p>
<p>Path to vulnerable library: keycloak/themes/src/main/resources/theme/keycloak.v2/account/src/node_modules/postcss-modules-extract-imports/node_modules/postcss/package.json,keycloak/themes/src/main/resources/theme/keycloak.v2/account/src/node_modules/postcss-modules-local-by-default/node_modules/postcss/package.json,keycloak/themes/src/main/resources/theme/keycloak.v2/account/src/node_modules/postcss-modules-scope/node_modules/postcss/package.json,keycloak/themes/src/main/resources/theme/keycloak.v2/account/src/node_modules/postcss-modules-values/node_modules/postcss/package.json</p>
<p>
Dependency Hierarchy:
- rollup-plugin-postcss-2.5.0.tgz (Root Library)
- postcss-modules-1.5.0.tgz
- css-modules-loader-core-1.1.0.tgz
- postcss-modules-scope-1.1.0.tgz
- :x: **postcss-6.0.23.tgz** (Vulnerable Library)
</details>
<details><summary><b>postcss-6.0.1.tgz</b></p></summary>
<p>Tool for transforming styles with JS plugins</p>
<p>Library home page: <a href="https://registry.npmjs.org/postcss/-/postcss-6.0.1.tgz">https://registry.npmjs.org/postcss/-/postcss-6.0.1.tgz</a></p>
<p>Path to dependency file: keycloak/themes/src/main/resources/theme/keycloak.v2/account/src/package.json</p>
<p>Path to vulnerable library: keycloak/themes/src/main/resources/theme/keycloak.v2/account/src/node_modules/css-modules-loader-core/node_modules/postcss/package.json</p>
<p>
Dependency Hierarchy:
- rollup-plugin-postcss-2.5.0.tgz (Root Library)
- postcss-modules-1.5.0.tgz
- css-modules-loader-core-1.1.0.tgz
- :x: **postcss-6.0.1.tgz** (Vulnerable Library)
</details>
<details><summary><b>postcss-7.0.27.tgz</b></p></summary>
<p>Tool for transforming styles with JS plugins</p>
<p>Library home page: <a href="https://registry.npmjs.org/postcss/-/postcss-7.0.27.tgz">https://registry.npmjs.org/postcss/-/postcss-7.0.27.tgz</a></p>
<p>Path to dependency file: keycloak/themes/src/main/resources/theme/keycloak.v2/account/src/package.json</p>
<p>Path to vulnerable library: keycloak/themes/src/main/resources/theme/keycloak.v2/account/src/node_modules/postcss/package.json</p>
<p>
Dependency Hierarchy:
- rollup-plugin-postcss-2.5.0.tgz (Root Library)
- :x: **postcss-7.0.27.tgz** (Vulnerable Library)
</details>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
The package postcss before 8.2.13 are vulnerable to Regular Expression Denial of Service (ReDoS) via getAnnotationURL() and loadAnnotation() in lib/previous-map.js. The vulnerable regexes are caused mainly by the sub-pattern \/\*\s* sourceMappingURL=(.*).
<p>Publish Date: 2021-04-26
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-23382>CVE-2021-23382</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.3</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: Low
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-23382">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-23382</a></p>
<p>Release Date: 2021-04-26</p>
<p>Fix Resolution: postcss - 8.2.13</p>
</p>
</details>
<p></p>
<!-- <REMEDIATE>{"isOpenPROnVulnerability":true,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"javascript/Node.js","packageName":"postcss","packageVersion":"6.0.23","packageFilePaths":["/themes/src/main/resources/theme/keycloak.v2/account/src/package.json"],"isTransitiveDependency":true,"dependencyTree":"rollup-plugin-postcss:2.5.0;postcss-modules:1.5.0;css-modules-loader-core:1.1.0;postcss-modules-scope:1.1.0;postcss:6.0.23","isMinimumFixVersionAvailable":true,"minimumFixVersion":"postcss - 8.2.13"},{"packageType":"javascript/Node.js","packageName":"postcss","packageVersion":"6.0.1","packageFilePaths":["/themes/src/main/resources/theme/keycloak.v2/account/src/package.json"],"isTransitiveDependency":true,"dependencyTree":"rollup-plugin-postcss:2.5.0;postcss-modules:1.5.0;css-modules-loader-core:1.1.0;postcss:6.0.1","isMinimumFixVersionAvailable":true,"minimumFixVersion":"postcss - 8.2.13"},{"packageType":"javascript/Node.js","packageName":"postcss","packageVersion":"7.0.27","packageFilePaths":["/themes/src/main/resources/theme/keycloak.v2/account/src/package.json"],"isTransitiveDependency":true,"dependencyTree":"rollup-plugin-postcss:2.5.0;postcss:7.0.27","isMinimumFixVersionAvailable":true,"minimumFixVersion":"postcss - 8.2.13"}],"baseBranches":["master"],"vulnerabilityIdentifier":"CVE-2021-23382","vulnerabilityDetails":"The package postcss before 8.2.13 are vulnerable to Regular Expression Denial of Service (ReDoS) via getAnnotationURL() and loadAnnotation() in lib/previous-map.js. The vulnerable regexes are caused mainly by the sub-pattern \\/\\*\\s* sourceMappingURL\u003d(.*).","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-23382","cvss3Severity":"medium","cvss3Score":"5.3","cvss3Metrics":{"A":"Low","AC":"Low","PR":"None","S":"Unchanged","C":"None","UI":"None","AV":"Network","I":"None"},"extraData":{}}</REMEDIATE> -->
|
non_process
|
cve medium detected in multiple libraries cve medium severity vulnerability vulnerable libraries postcss tgz postcss tgz postcss tgz postcss tgz tool for transforming styles with js plugins library home page a href path to dependency file keycloak themes src main resources theme keycloak account src package json path to vulnerable library keycloak themes src main resources theme keycloak account src node modules postcss modules extract imports node modules postcss package json keycloak themes src main resources theme keycloak account src node modules postcss modules local by default node modules postcss package json keycloak themes src main resources theme keycloak account src node modules postcss modules scope node modules postcss package json keycloak themes src main resources theme keycloak account src node modules postcss modules values node modules postcss package json dependency hierarchy rollup plugin postcss tgz root library postcss modules tgz css modules loader core tgz postcss modules scope tgz x postcss tgz vulnerable library postcss tgz tool for transforming styles with js plugins library home page a href path to dependency file keycloak themes src main resources theme keycloak account src package json path to vulnerable library keycloak themes src main resources theme keycloak account src node modules css modules loader core node modules postcss package json dependency hierarchy rollup plugin postcss tgz root library postcss modules tgz css modules loader core tgz x postcss tgz vulnerable library postcss tgz tool for transforming styles with js plugins library home page a href path to dependency file keycloak themes src main resources theme keycloak account src package json path to vulnerable library keycloak themes src main resources theme keycloak account src node modules postcss package json dependency hierarchy rollup plugin postcss tgz root library x postcss tgz vulnerable library found in base branch master vulnerability details the package 
postcss before are vulnerable to regular expression denial of service redos via getannotationurl and loadannotation in lib previous map js the vulnerable regexes are caused mainly by the sub pattern s sourcemappingurl publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact low for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution postcss isopenpronvulnerability true ispackagebased true isdefaultbranch true packages istransitivedependency true dependencytree rollup plugin postcss postcss modules css modules loader core postcss modules scope postcss isminimumfixversionavailable true minimumfixversion postcss packagetype javascript node js packagename postcss packageversion packagefilepaths istransitivedependency true dependencytree rollup plugin postcss postcss modules css modules loader core postcss isminimumfixversionavailable true minimumfixversion postcss packagetype javascript node js packagename postcss packageversion packagefilepaths istransitivedependency true dependencytree rollup plugin postcss postcss isminimumfixversionavailable true minimumfixversion postcss basebranches vulnerabilityidentifier cve vulnerabilitydetails the package postcss before are vulnerable to regular expression denial of service redos via getannotationurl and loadannotation in lib previous map js the vulnerable regexes are caused mainly by the sub pattern s sourcemappingurl vulnerabilityurl
| 0
|
213,553
| 16,524,589,591
|
IssuesEvent
|
2021-05-26 18:20:09
|
openservicemesh/osm
|
https://api.github.com/repos/openservicemesh/osm
|
closed
|
test: pkg/certificate/providers/certmanager/certificate_manager.go - delete cert from cache
|
area/tests size/XS
|
In `pkg/certificate/providers/certmanager/certificate_manager.go`, the unit test coverage is low for the deleteFromCache() func. See highlighted lines below.

Scenarios not covered:
* cert is deleted from cache
ref #1489
|
1.0
|
test: pkg/certificate/providers/certmanager/certificate_manager.go - delete cert from cache - In `pkg/certificate/providers/certmanager/certificate_manager.go`, the unit test coverage is low for the deleteFromCache() func. See highlighted lines below.

Scenarios not covered:
* cert is deleted from cache
ref #1489
|
non_process
|
test pkg certificate providers certmanager certificate manager go delete cert from cache in pkg certificate providers certmanager certificate manager go the unit test coverage is low for the deletefromcache func see highlighted lines below scenarios not covered cert is deleted from cache ref
| 0
|
26,751
| 13,099,849,368
|
IssuesEvent
|
2020-08-03 22:39:56
|
arborx/ArborX
|
https://api.github.com/repos/arborx/ArborX
|
closed
|
Examine interface and performance implications of having a query index
|
API enhancement performance
|
Currently, the only way to access the index of a query is to have a user attach it. In many situations, we know the index itself and do not need user info to process it. There are use cases where we need this index. Therefore, we need to see if it makes sense to always have it and treat it ourselves.
|
True
|
Examine interface and performance implications of having a query index - Currently, the only way to access the index of a query is to have a user attach it. In many situations, we know the index itself and do not need user info to process it. There are use cases where we need this index. Therefore, we need to see if it makes sense to always have it and treat it ourselves.
|
non_process
|
examine interface and performance implications of having a query index currently the only way to access the index of a query is to have a user attach it in many situations we know the index itself and do not need user info to process it there are use cases where we need this index therefore we need to see if it makes sense to always have it and treat it ourselves
| 0
|
4,765
| 7,632,081,196
|
IssuesEvent
|
2018-05-05 10:56:19
|
allinurl/goaccess
|
https://api.github.com/repos/allinurl/goaccess
|
closed
|
Cronjob issues
|
log-processing question
|
Hello. Thanks for Goaccess. The following command works without a cron job:
```
zcat -f /var/log/apache2/access.log /var/log/apache2/access.log.{1..16} | goaccess -o /var/www/html/analytics123456789.html
```
When it runs on a cron job, it only seems to take into account the first access log `/var/log/apache2/access.log` and doesn't get data for the rest.
|
1.0
|
Cronjob issues - Hello. Thanks for Goaccess. The following command works without a cron job:
```
zcat -f /var/log/apache2/access.log /var/log/apache2/access.log.{1..16} | goaccess -o /var/www/html/analytics123456789.html
```
When it runs on a cron job, it only seems to take into account the first access log `/var/log/apache2/access.log` and doesn't get data for the rest.
|
process
|
cronjob issues hello thanks for goaccess the following command works without a cron job zcat f var log access log var log access log goaccess o var www html html when it runs on a cron job it only seems to take into account the first access log var log access log and doesn t get data for the rest
| 1
|
550,213
| 16,107,080,342
|
IssuesEvent
|
2021-04-27 16:07:15
|
dmwm/WMCore
|
https://api.github.com/repos/dmwm/WMCore
|
closed
|
[py2py3] Apply changes from `pylint --py3k` to files involved in #10012
|
High Priority New Feature Python3
|
**Impact of the new feature**
Files involved in #10012
**Is your feature request related to a problem? Please describe.**
#10012 was not checked against `pylint --py3k`, whose report about the files involved in that PR is
```plaintext
************* Module src.python.WMCore.DataStructs.File
W: 90, 4: __cmp__ method defined (cmp-method)
************* Module src.python.WMCore.DataStructs.MathStructs.ContinuousSummaryHistogram
W: 99,17: round built-in referenced (round-builtin)
************* Module src.python.WMCore.DataStructs.Run
W: 59, 4: __cmp__ method defined (cmp-method)
W: 59, 4: __cmp__ method defined (cmp-method)
W: 90, 4: __cmp__ method defined (cmp-method)
W: 99,17: round built-in referenced (round-builtin)
```
We should then modernize the files mentioned in the report. This will likely involve some refactoring, since a change in how `__cmp__` is honored is involved.
**Describe the solution you'd like**
We should likely port the behavior of `__cmp__` to new `__lt__` and `__eq__` methods, following the guide provided by [fedora](https://portingguide.readthedocs.io/en/latest/comparisons.html#rich-comparisons) .
**Describe alternatives you've considered**
There is not much else we can do about this.
**Additional context**
This should not require any change in third-party libraries
|
1.0
|
[py2py3] Apply changes from `pylint --py3k` to files involved in #10012 - **Impact of the new feature**
Files involved in #10012
**Is your feature request related to a problem? Please describe.**
#10012 was not checked against `pylint --py3k`, whose report about the files involved in that PR is
```plaintext
************* Module src.python.WMCore.DataStructs.File
W: 90, 4: __cmp__ method defined (cmp-method)
************* Module src.python.WMCore.DataStructs.MathStructs.ContinuousSummaryHistogram
W: 99,17: round built-in referenced (round-builtin)
************* Module src.python.WMCore.DataStructs.Run
W: 59, 4: __cmp__ method defined (cmp-method)
W: 59, 4: __cmp__ method defined (cmp-method)
W: 90, 4: __cmp__ method defined (cmp-method)
W: 99,17: round built-in referenced (round-builtin)
```
We should then modernize the files mentioned in the report. This will likely involve some refactoring, since a change in how `__cmp__` is honored is involved.
**Describe the solution you'd like**
We should likely port the behavior of `__cmp__` to new `__lt__` and `__eq__` methods, following the guide provided by [fedora](https://portingguide.readthedocs.io/en/latest/comparisons.html#rich-comparisons) .
**Describe alternatives you've considered**
There is not much else we can do about this.
**Additional context**
This should not require any change in third-party libraries
|
non_process
|
apply changes from pylint to files involved in impact of the new feature files involved in is your feature request related to a problem please describe was not checked against pylint whose report about the files involved in that pr is plaintext module src python wmcore datastructs file w cmp method defined cmp method module src python wmcore datastructs mathstructs continuoussummaryhistogram w round built in referenced round builtin module src python wmcore datastructs run w cmp method defined cmp method w cmp method defined cmp method w cmp method defined cmp method w round built in referenced round builtin we should then modernize the files mentioned in the report this will likely involve some refactoring since a change in how cmp is honored is involved describe the solution you d like we should likely port the behavior of cmp to new lt and eq methods following the guide provided by describe alternatives you ve considered there is not much else we can do about this additional context this should not require any change in third party libraries
| 0
|
282,841
| 30,889,441,667
|
IssuesEvent
|
2023-08-04 02:43:48
|
maddyCode23/linux-4.1.15
|
https://api.github.com/repos/maddyCode23/linux-4.1.15
|
reopened
|
WS-2021-0334 (High) detected in linux-stable-rtv4.1.33
|
Mend: dependency security vulnerability
|
## WS-2021-0334 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>linux-stable-rtv4.1.33</b></p></summary>
<p>
<p>Julia Cartwright's fork of linux-stable-rt.git</p>
<p>Library home page: <a href=https://git.kernel.org/pub/scm/linux/kernel/git/julia/linux-stable-rt.git>https://git.kernel.org/pub/scm/linux/kernel/git/julia/linux-stable-rt.git</a></p>
<p>Found in HEAD commit: <a href="https://github.com/maddyCode23/linux-4.1.15/commit/f1f3d2b150be669390b32dfea28e773471bdd6e7">f1f3d2b150be669390b32dfea28e773471bdd6e7</a></p>
<p>Found in base branch: <b>master</b></p></p>
</details>
</p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Source Files (2)</summary>
<p></p>
<p>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/net/netfilter/nf_synproxy_core.c</b>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/net/netfilter/nf_synproxy_core.c</b>
</p>
</details>
<p></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png?' width=19 height=20> Vulnerability Details</summary>
<p>
Linux/Kernel in versions v5.13-rc1 to v5.13-rc6 is vulnerable to out of bounds when parsing TCP options
<p>Publish Date: 2021-05-31
<p>URL: <a href=https://github.com/gregkh/linux/commit/6defc77d48eff74075b80ad5925061b2fc010d98>WS-2021-0334</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>8.4</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://osv.dev/vulnerability/UVI-2021-1000919">https://osv.dev/vulnerability/UVI-2021-1000919</a></p>
<p>Release Date: 2021-05-31</p>
<p>Fix Resolution: v5.4.128</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
True
|
WS-2021-0334 (High) detected in linux-stable-rtv4.1.33 - ## WS-2021-0334 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>linux-stable-rtv4.1.33</b></p></summary>
<p>
<p>Julia Cartwright's fork of linux-stable-rt.git</p>
<p>Library home page: <a href=https://git.kernel.org/pub/scm/linux/kernel/git/julia/linux-stable-rt.git>https://git.kernel.org/pub/scm/linux/kernel/git/julia/linux-stable-rt.git</a></p>
<p>Found in HEAD commit: <a href="https://github.com/maddyCode23/linux-4.1.15/commit/f1f3d2b150be669390b32dfea28e773471bdd6e7">f1f3d2b150be669390b32dfea28e773471bdd6e7</a></p>
<p>Found in base branch: <b>master</b></p></p>
</details>
</p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Source Files (2)</summary>
<p></p>
<p>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/net/netfilter/nf_synproxy_core.c</b>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/net/netfilter/nf_synproxy_core.c</b>
</p>
</details>
<p></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png?' width=19 height=20> Vulnerability Details</summary>
<p>
Linux/Kernel in versions v5.13-rc1 to v5.13-rc6 is vulnerable to out of bounds when parsing TCP options
<p>Publish Date: 2021-05-31
<p>URL: <a href=https://github.com/gregkh/linux/commit/6defc77d48eff74075b80ad5925061b2fc010d98>WS-2021-0334</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>8.4</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://osv.dev/vulnerability/UVI-2021-1000919">https://osv.dev/vulnerability/UVI-2021-1000919</a></p>
<p>Release Date: 2021-05-31</p>
<p>Fix Resolution: v5.4.128</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
non_process
|
ws high detected in linux stable ws high severity vulnerability vulnerable library linux stable julia cartwright s fork of linux stable rt git library home page a href found in head commit a href found in base branch master vulnerable source files net netfilter nf synproxy core c net netfilter nf synproxy core c vulnerability details linux kernel in versions to is vulnerable to out of bounds when parsing tcp options publish date url a href cvss score details base score metrics exploitability metrics attack vector local attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution step up your open source security game with mend
| 0
|
474,146
| 13,653,333,470
|
IssuesEvent
|
2020-09-27 12:14:06
|
Trinityyi/boardeaux
|
https://api.github.com/repos/Trinityyi/boardeaux
|
closed
|
Design and style boards and cards
|
Priority: normal enhancement
|
- [x] Implement a baseline for CSS styling
- [x] Organize CSS code and handle styling conventions
- [x] Add icons/iconfont
- [x] Style cards
- [x] Style modals
- [x] Style menu
- [x] Style boards
|
1.0
|
Design and style boards and cards - - [x] Implement a baseline for CSS styling
- [x] Organize CSS code and handle styling conventions
- [x] Add icons/iconfont
- [x] Style cards
- [x] Style modals
- [x] Style menu
- [x] Style boards
|
non_process
|
design and style boards and cards implement a baseline for css styling organize css code and handle styling conventions add icons iconfont style cards style modals style menu style boards
| 0
|
85,152
| 24,524,888,323
|
IssuesEvent
|
2022-10-11 12:26:26
|
microsoft/fluentui
|
https://api.github.com/repos/microsoft/fluentui
|
closed
|
[Feature]: Enable screener checks to run from the screener proxy
|
Area: Build System Type: Epic CI
|
### Library
React Components / v9 (@fluentui/react-components)
### Describe the feature that you would like added
Make screener checks runs to be triggered by the screener proxy instead of ADO using the GitHub API.
- [x] Convert the Azure DevOps job that runs the screener checks to a GitHub Action; | https://github.com/microsoft/fluentui/pull/24106
- [x] Remove ADO jobs regarding screener checks; | https://github.com/microsoft/fluentui/pull/24178
- [ ] Trigger screener checks using the GitHub REST API.
### Have you discussed this feature with our team
@ling1726
### Additional context
_No response_
### Validations
- [X] Check that there isn't already an issue that request the same feature to avoid creating a duplicate.
|
1.0
|
[Feature]: Enable screener checks to run from the screener proxy - ### Library
React Components / v9 (@fluentui/react-components)
### Describe the feature that you would like added
Make screener checks runs to be triggered by the screener proxy instead of ADO using the GitHub API.
- [x] Convert the Azure DevOps job that runs the screener checks to a GitHub Action; | https://github.com/microsoft/fluentui/pull/24106
- [x] Remove ADO jobs regarding screener checks; | https://github.com/microsoft/fluentui/pull/24178
- [ ] Trigger screener checks using the GitHub REST API.
### Have you discussed this feature with our team
@ling1726
### Additional context
_No response_
### Validations
- [X] Check that there isn't already an issue that request the same feature to avoid creating a duplicate.
|
non_process
|
enable screener checks to run from the screener proxy library react components fluentui react components describe the feature that you would like added make screener checks runs to be triggered by the screener proxy instead of ado using the github api convert the azure devops job that runs the screener checks to a github action remove ado jobs regarding screener checks trigger screener checks using the github rest api have you discussed this feature with our team additional context no response validations check that there isn t already an issue that request the same feature to avoid creating a duplicate
| 0
|
2,554
| 4,913,767,104
|
IssuesEvent
|
2016-11-23 13:39:33
|
esdee1902/TA05_K21T03_Team3.6
|
https://api.github.com/repos/esdee1902/TA05_K21T03_Team3.6
|
closed
|
As a player, I want to hear sounds when the ball hits the wall or paddles.
|
Requirement
|
1. Find and download sounds: 10 minutes
2. Learn how to add them to the source code: 30 minutes
3. Test their effect on the game: 10 minutes
Total: 50 minutes
|
1.0
|
As a player, I want to hear sounds when the ball hits the wall or paddles. - 1. Find and download sounds: 10 minutes
2. Learn how to add them to the source code: 30 minutes
3. Test their effect on the game: 10 minutes
Total: 50 minutes
|
non_process
|
as a player i want to hear sounds when the ball hits the wall or paddles find and download sounds minutes learn how to add them to the source code minutes test their effect on the game minutes total minutes
| 0
|
80,750
| 23,296,430,545
|
IssuesEvent
|
2022-08-06 16:47:48
|
bevyengine/bevy
|
https://api.github.com/repos/bevyengine/bevy
|
closed
|
Adding DefaultPlugins to the App panics with `default-features = false`
|
C-Bug A-Build-System A-App
|
## Bevy version
0.8
## What you did
Use Bevy without default features and add the `DefaultPlugins` to the `App`
`cargo.toml`:
```toml
[dependencies.bevy]
version = "0.8.0"
default-features = false
features = ["dynamic"]
```
`main.rs`:
```rust
use bevy::prelude::*;
fn main() {
App::new().add_plugins(DefaultPlugins).run();
}
```
## What went wrong
The code panics with the following backtrace:
```
thread 'main' panicked at 'Requested resource bevy_asset::asset_server::AssetServer does not exist in the `World`.
Did you forget to add it using `app.insert_resource` / `app.init_resource`?
Resources are also implicitly added via `app.add_event`,
and can be added by plugins.', /home/felix/.cargo/registry/src/github.com-1ecc6299db9ec823/bevy_asset-0.8.0/src/assets.rs:324:43
stack backtrace:
0: rust_begin_unwind
at /rustc/1b57946a405d5b2a87e612335db033edb2c3427f/library/std/src/panicking.rs:584:5
1: core::panicking::panic_fmt
at /rustc/1b57946a405d5b2a87e612335db033edb2c3427f/library/core/src/panicking.rs:142:14
2: bevy_ecs::world::World::resource
at /home/felix/.cargo/registry/src/github.com-1ecc6299db9ec823/bevy_ecs-0.8.0/src/world/mod.rs:817:21
3: <bevy_app::app::App as bevy_asset::assets::AddAsset>::add_asset
at /home/felix/.cargo/registry/src/github.com-1ecc6299db9ec823/bevy_asset-0.8.0/src/assets.rs:324:32
4: <bevy_render::RenderPlugin as bevy_app::plugin::Plugin>::build
at /home/felix/.cargo/registry/src/github.com-1ecc6299db9ec823/bevy_render-0.8.0/src/lib.rs:130:9
5: bevy_app::plugin_group::PluginGroupBuilder::finish
at /home/felix/.cargo/registry/src/github.com-1ecc6299db9ec823/bevy_app-0.8.0/src/plugin_group.rs:135:21
6: bevy_app::app::App::add_plugins
at /home/felix/.cargo/registry/src/github.com-1ecc6299db9ec823/bevy_app-0.8.0/src/app.rs:810:9
7: viewer::main
at ./bin/viewer/main.rs:20:5
8: core::ops::function::FnOnce::call_once
at /rustc/1b57946a405d5b2a87e612335db033edb2c3427f/library/core/src/ops/function.rs:248:5
note: Some details are omitted, run with `RUST_BACKTRACE=full` for a verbose backtrace.
```
## Additional information
It looks like the recent change that removed `bevy_asset` from `no-default-features` caused this issue as adding `bevy_asset` to the features functions as a workaround.
|
1.0
|
Adding DefaultPlugins to the App panics with `default-features = false` - ## Bevy version
0.8
## What you did
Use Bevy without default features and add the `DefaultPlugins` to the `App`
`cargo.toml`:
```toml
[dependencies.bevy]
version = "0.8.0"
default-features = false
features = ["dynamic"]
```
`main.rs`:
```rust
use bevy::prelude::*;
fn main() {
App::new().add_plugins(DefaultPlugins).run();
}
```
## What went wrong
The code panics with the following backtrace:
```
thread 'main' panicked at 'Requested resource bevy_asset::asset_server::AssetServer does not exist in the `World`.
Did you forget to add it using `app.insert_resource` / `app.init_resource`?
Resources are also implicitly added via `app.add_event`,
and can be added by plugins.', /home/felix/.cargo/registry/src/github.com-1ecc6299db9ec823/bevy_asset-0.8.0/src/assets.rs:324:43
stack backtrace:
0: rust_begin_unwind
at /rustc/1b57946a405d5b2a87e612335db033edb2c3427f/library/std/src/panicking.rs:584:5
1: core::panicking::panic_fmt
at /rustc/1b57946a405d5b2a87e612335db033edb2c3427f/library/core/src/panicking.rs:142:14
2: bevy_ecs::world::World::resource
at /home/felix/.cargo/registry/src/github.com-1ecc6299db9ec823/bevy_ecs-0.8.0/src/world/mod.rs:817:21
3: <bevy_app::app::App as bevy_asset::assets::AddAsset>::add_asset
at /home/felix/.cargo/registry/src/github.com-1ecc6299db9ec823/bevy_asset-0.8.0/src/assets.rs:324:32
4: <bevy_render::RenderPlugin as bevy_app::plugin::Plugin>::build
at /home/felix/.cargo/registry/src/github.com-1ecc6299db9ec823/bevy_render-0.8.0/src/lib.rs:130:9
5: bevy_app::plugin_group::PluginGroupBuilder::finish
at /home/felix/.cargo/registry/src/github.com-1ecc6299db9ec823/bevy_app-0.8.0/src/plugin_group.rs:135:21
6: bevy_app::app::App::add_plugins
at /home/felix/.cargo/registry/src/github.com-1ecc6299db9ec823/bevy_app-0.8.0/src/app.rs:810:9
7: viewer::main
at ./bin/viewer/main.rs:20:5
8: core::ops::function::FnOnce::call_once
at /rustc/1b57946a405d5b2a87e612335db033edb2c3427f/library/core/src/ops/function.rs:248:5
note: Some details are omitted, run with `RUST_BACKTRACE=full` for a verbose backtrace.
```
## Additional information
It looks like the recent change that removed `bevy_asset` from `no-default-features` caused this issue as adding `bevy_asset` to the features functions as a workaround.
|
non_process
|
adding defaultplugins to the app panics with default features false bevy version what you did use bevy without default features and add the defaultplugins to the app cargo toml toml version default features false features main rs rust use bevy prelude fn main app new add plugins defaultplugins run what went wrong the code panics with the following backtrace thread main panicked at requested resource bevy asset asset server assetserver does not exist in the world did you forget to add it using app insert resource app init resource resources are also implicitly added via app add event and can be added by plugins home felix cargo registry src github com bevy asset src assets rs stack backtrace rust begin unwind at rustc library std src panicking rs core panicking panic fmt at rustc library core src panicking rs bevy ecs world world resource at home felix cargo registry src github com bevy ecs src world mod rs add asset at home felix cargo registry src github com bevy asset src assets rs build at home felix cargo registry src github com bevy render src lib rs bevy app plugin group plugingroupbuilder finish at home felix cargo registry src github com bevy app src plugin group rs bevy app app app add plugins at home felix cargo registry src github com bevy app src app rs viewer main at bin viewer main rs core ops function fnonce call once at rustc library core src ops function rs note some details are omitted run with rust backtrace full for a verbose backtrace additional information it looks like the recent change that removed bevy asset from no default features caused this issue as adding bevy asset to the features functions as a workaround
| 0
|
7,077
| 10,227,248,737
|
IssuesEvent
|
2019-08-16 20:12:06
|
googleapis/google-cloud-cpp-spanner
|
https://api.github.com/repos/googleapis/google-cloud-cpp-spanner
|
closed
|
docker cmake-super build compiles with -j1
|
priority: p2 type: process
|
The docker cmake-super build takes significantly longer than other builds (2-3x). From the logs we see:
`make[3]: warning: jobserver unavailable: using -j1. Add '+' to parent make rule.`
@coryan suggests we could use ninja or CMAKE_BUILD_PARALLEL_LEVEL if we move to cmake-3.12+ https://github.com/googleapis/google-cloud-cpp/issues/2278
|
1.0
|
docker cmake-super build compiles with -j1 - The docker cmake-super build takes significantly longer than other builds (2-3x). From the logs we see:
`make[3]: warning: jobserver unavailable: using -j1. Add '+' to parent make rule.`
@coryan suggests we could use ninja or CMAKE_BUILD_PARALLEL_LEVEL if we move to cmake-3.12+ https://github.com/googleapis/google-cloud-cpp/issues/2278
|
process
|
docker cmake super build compiles with the docker cmake super build takes significantly longer than other builds from the logs we see make warning jobserver unavailable using add to parent make rule coryan suggests we could use ninja or cmake build parallel level if we move to cmake
| 1
|
71,166
| 18,501,470,503
|
IssuesEvent
|
2021-10-19 14:10:47
|
NOAA-EMC/NCEPLIBS-g2c
|
https://api.github.com/repos/NOAA-EMC/NCEPLIBS-g2c
|
closed
|
Skip the 1.6.3 release - next release should be 1.6.4
|
build
|
The NCO has done a 1.6.3 release of g2clib. To avoid confusion, we will call the next release 1.6.4.
|
1.0
|
Skip the 1.6.3 release - next release should be 1.6.4 - The NCO has done a 1.6.3 release of g2clib. To avoid confusion, we will call the next release 1.6.4.
|
non_process
|
skip the release next release should be the nco has done a release of to avoid confusion we will call the next release
| 0
|
13,240
| 15,707,747,013
|
IssuesEvent
|
2021-03-26 19:22:38
|
correctcomputation/checkedc-clang
|
https://api.github.com/repos/correctcomputation/checkedc-clang
|
opened
|
`ClangTool::run` `chdir` call corrupts internal Clang include file path cache (?)
|
Upstream bug clang preprocessor command-line
|
As part of the change to expand macros before running 3C, I tried to change `convert_project` so that instead of (1) passing an adjusted version of the union of all compiler options seen in the compilation database to 3C via `-extra-arg-before`, it (2) lets 3C read the options directly from the compilation database. This is because approach (1) may be wrong if different translation units have different compiler options, and I was more concerned about this as `convert_project` started to have more direct interaction with the preprocessor. Importantly, the adjustment in (1) included expanding relative paths in `-I` options to absolute paths based on the working directory of the translation unit carrying the options.
Unfortunately, this change seemed to cause our icecast benchmark to trigger a bug in Clang LibTooling. The symptom looks like this:
```
2021-03-26 14:52:27.200 INFO generate_ccommands - run3C: Running:/home/matt/3c-3.wt/build/bin/3c -dump-stats -p /home/matt/benchmarks/icecast-2.4.4/compile_commands.json -extra-arg=-w -base-dir="/home/matt/benchmarks/icecast-2.4.4" -output-dir="/home/matt/benchmarks/icecast-2.4.4/out.checked" /home/matt/benchmarks/icecast-2.4.4/src/format_flac.c /home/matt/benchmarks/icecast-2.4.4/src/format_ogg.c /home/matt/benchmarks/icecast-2.4.4/src/format_kate.c /home/matt/benchmarks/icecast-2.4.4/src/main.c /home/matt/benchmarks/icecast-2.4.4/src/format_mp3.c /home/matt/benchmarks/icecast-2.4.4/src/sighandler.c /home/matt/benchmarks/icecast-2.4.4/src/global.c /home/matt/benchmarks/icecast-2.4.4/src/cfgfile.c /home/matt/benchmarks/icecast-2.4.4/src/format_ebml.c /home/matt/benchmarks/icecast-2.4.4/src/event.c /home/matt/benchmarks/icecast-2.4.4/src/auth_htpasswd.c /home/matt/benchmarks/icecast-2.4.4/src/refbuf.c /home/matt/benchmarks/icecast-2.4.4/src/avl/avl.c /home/matt/benchmarks/icecast-2.4.4/src/format_vorbis.c /home/matt/benchmarks/icecast-2.4.4/src/connection.c /home/matt/benchmarks/icecast-2.4.4/src/util.c /home/matt/benchmarks/icecast-2.4.4/src/admin.c /home/matt/benchmarks/icecast-2.4.4/src/log/log.c /home/matt/benchmarks/icecast-2.4.4/src/format_opus.c /home/matt/benchmarks/icecast-2.4.4/src/thread/thread.c /home/matt/benchmarks/icecast-2.4.4/src/client.c /home/matt/benchmarks/icecast-2.4.4/src/timing/timing.c /home/matt/benchmarks/icecast-2.4.4/src/net/resolver.c /home/matt/benchmarks/icecast-2.4.4/src/stats.c /home/matt/benchmarks/icecast-2.4.4/src/net/sock.c /home/matt/benchmarks/icecast-2.4.4/src/source.c /home/matt/benchmarks/icecast-2.4.4/src/slave.c /home/matt/benchmarks/icecast-2.4.4/src/format_skeleton.c /home/matt/benchmarks/icecast-2.4.4/src/logging.c /home/matt/benchmarks/icecast-2.4.4/src/fserve.c /home/matt/benchmarks/icecast-2.4.4/src/auth.c /home/matt/benchmarks/icecast-2.4.4/src/format_midi.c /home/matt/benchmarks/icecast-2.4.4/src/md5.c 
/home/matt/benchmarks/icecast-2.4.4/src/format.c /home/matt/benchmarks/icecast-2.4.4/src/xslt.c /home/matt/benchmarks/icecast-2.4.4/src/httpp/httpp.c
avl.c:33:11: fatal error: cannot open file '../config.h': No such file or directory
#include <config.h>
^
avl.c:33:11: fatal error: cannot open file '../config.h': No such file or directory
#include <config.h>
^
[...more similar errors...]
```
My rough theory is as follows: Clang has a cache where the first time it sees `#include STR` (where `STR` is of the form `<PATH>` or `"PATH"`), it searches the include path for the first matching file and caches the path at which it found the file (to a first approximation, the concatenation of the `-I` directory with `PATH`). If Clang later sees `#include STR` again, it tries to open the cached path directly and raises a fatal error (seen above) if it fails. The problem arises when the cached path is relative, which can occur if the directory path specified via `-I` was relative. `ClangTool::run` iterates over the specified translation units, and for each one, it does a `chdir` to the working directory specified in the compilation database but (apparently) does not invalidate the cache. Consequently, if different translation units have different working directories, the preprocessor may try and fail to open a cached relative path because the working directory is different than it was when the path was added to the cache, when instead the preprocessor should do the include search over. Surprisingly, #488 did not fix the problem because `ClangTool::buildASTs` still calls `ClangTool::run` internally (!).
Here is [the original benchmark workflow run in which the problem appeared](https://github.com/correctcomputation/actions/runs/2203501812?check_suite_focus=true) (though the logs will probably expire from GitHub soon). It should be possible to reproduce the problem by re-running that revision of the `preprocess-before-conversion` workflow (https://github.com/correctcomputation/actions/commit/7651529638585a2c23f576ee08398ede5af6f239) on the corresponding revision of the `preprocess-before-conversion` branch of this repository (c113b1dbf76bd51d0b5326594b47ee9f8e12a686). We could probably construct a smaller test case with a compilation database with two entries (and presumably that's what we would do if we wanted to add a regression test for the problem to 3C), but I don't want to take the time to do that now.
In a web search, I found a few reports of similar-looking problems ([1](https://lists.llvm.org/pipermail/cfe-dev/2013-December/033883.html), [2](https://lists.llvm.org/pipermail/cfe-users/2015-April/000681.html)), but it doesn't appear that anyone has tracked down the details and formally reported the bug in [the Clang bug database](https://bugs.llvm.org/). We could do so if we wish.
Ultimately, we'll probably want to fix or work around this problem somehow so that end users get correct behavior when running 3C on a compilation database like that of icecast. For now, I'm planning to work around the problem in `convert_project` by restoring the legacy behavior of passing `-extra-arg-before` to 3C, but only for the absolute versions of `-I` options. Since we use `-extra-arg-before`, this will ensure that every included file is found via an absolute `-I` directory before we reach the relative ones in the compilation database, so the cached path will be absolute, avoiding the problem. In principle, this could be wrong if different translation units have different sets of resolved `-I` directories: if we apply the union of the `-I` directories to all translation units, then a translation unit could use a file from an `-I` directory that was not supposed to be active for that translation unit, when it was intended to use a file from a later `-I` directory instead. However, I don't believe this happens in any of our current benchmarks.
|
1.0
|
`ClangTool::run` `chdir` call corrupts internal Clang include file path cache (?) - As part of the change to expand macros before running 3C, I tried to change `convert_project` so that instead of (1) passing an adjusted version of the union of all compiler options seen in the compilation database to 3C via `-extra-arg-before`, it (2) lets 3C read the options directly from the compilation database. This is because approach (1) may be wrong if different translation units have different compiler options, and I was more concerned about this as `convert_project` started to have more direct interaction with the preprocessor. Importantly, the adjustment in (1) included expanding relative paths in `-I` options to absolute paths based on the working directory of the translation unit carrying the options.
Unfortunately, this change seemed to cause our icecast benchmark to trigger a bug in Clang LibTooling. The symptom looks like this:
```
2021-03-26 14:52:27.200 INFO generate_ccommands - run3C: Running:/home/matt/3c-3.wt/build/bin/3c -dump-stats -p /home/matt/benchmarks/icecast-2.4.4/compile_commands.json -extra-arg=-w -base-dir="/home/matt/benchmarks/icecast-2.4.4" -output-dir="/home/matt/benchmarks/icecast-2.4.4/out.checked" /home/matt/benchmarks/icecast-2.4.4/src/format_flac.c /home/matt/benchmarks/icecast-2.4.4/src/format_ogg.c /home/matt/benchmarks/icecast-2.4.4/src/format_kate.c /home/matt/benchmarks/icecast-2.4.4/src/main.c /home/matt/benchmarks/icecast-2.4.4/src/format_mp3.c /home/matt/benchmarks/icecast-2.4.4/src/sighandler.c /home/matt/benchmarks/icecast-2.4.4/src/global.c /home/matt/benchmarks/icecast-2.4.4/src/cfgfile.c /home/matt/benchmarks/icecast-2.4.4/src/format_ebml.c /home/matt/benchmarks/icecast-2.4.4/src/event.c /home/matt/benchmarks/icecast-2.4.4/src/auth_htpasswd.c /home/matt/benchmarks/icecast-2.4.4/src/refbuf.c /home/matt/benchmarks/icecast-2.4.4/src/avl/avl.c /home/matt/benchmarks/icecast-2.4.4/src/format_vorbis.c /home/matt/benchmarks/icecast-2.4.4/src/connection.c /home/matt/benchmarks/icecast-2.4.4/src/util.c /home/matt/benchmarks/icecast-2.4.4/src/admin.c /home/matt/benchmarks/icecast-2.4.4/src/log/log.c /home/matt/benchmarks/icecast-2.4.4/src/format_opus.c /home/matt/benchmarks/icecast-2.4.4/src/thread/thread.c /home/matt/benchmarks/icecast-2.4.4/src/client.c /home/matt/benchmarks/icecast-2.4.4/src/timing/timing.c /home/matt/benchmarks/icecast-2.4.4/src/net/resolver.c /home/matt/benchmarks/icecast-2.4.4/src/stats.c /home/matt/benchmarks/icecast-2.4.4/src/net/sock.c /home/matt/benchmarks/icecast-2.4.4/src/source.c /home/matt/benchmarks/icecast-2.4.4/src/slave.c /home/matt/benchmarks/icecast-2.4.4/src/format_skeleton.c /home/matt/benchmarks/icecast-2.4.4/src/logging.c /home/matt/benchmarks/icecast-2.4.4/src/fserve.c /home/matt/benchmarks/icecast-2.4.4/src/auth.c /home/matt/benchmarks/icecast-2.4.4/src/format_midi.c /home/matt/benchmarks/icecast-2.4.4/src/md5.c 
/home/matt/benchmarks/icecast-2.4.4/src/format.c /home/matt/benchmarks/icecast-2.4.4/src/xslt.c /home/matt/benchmarks/icecast-2.4.4/src/httpp/httpp.c
avl.c:33:11: fatal error: cannot open file '../config.h': No such file or directory
#include <config.h>
^
avl.c:33:11: fatal error: cannot open file '../config.h': No such file or directory
#include <config.h>
^
[...more similar errors...]
```
My rough theory is as follows: Clang has a cache where the first time it sees `#include STR` (where `STR` is of the form `<PATH>` or `"PATH"`), it searches the include path for the first matching file and caches the path at which it found the file (to a first approximation, the concatenation of the `-I` directory with `PATH`). If Clang later sees `#include STR` again, it tries to open the cached path directly and raises a fatal error (seen above) if it fails. The problem arises when the cached path is relative, which can occur if the directory path specified via `-I` was relative. `ClangTool::run` iterates over the specified translation units, and for each one, it does a `chdir` to the working directory specified in the compilation database but (apparently) does not invalidate the cache. Consequently, if different translation units have different working directories, the preprocessor may try and fail to open a cached relative path because the working directory is different than it was when the path was added to the cache, when instead the preprocessor should do the include search over. Surprisingly, #488 did not fix the problem because `ClangTool::buildASTs` still calls `ClangTool::run` internally (!).
Here is [the original benchmark workflow run in which the problem appeared](https://github.com/correctcomputation/actions/runs/2203501812?check_suite_focus=true) (though the logs will probably expire from GitHub soon). It should be possible to reproduce the problem by re-running that revision of the `preprocess-before-conversion` workflow (https://github.com/correctcomputation/actions/commit/7651529638585a2c23f576ee08398ede5af6f239) on the corresponding revision of the `preprocess-before-conversion` branch of this repository (c113b1dbf76bd51d0b5326594b47ee9f8e12a686). We could probably construct a smaller test case with a compilation database with two entries (and presumably that's what we would do if we wanted to add a regression test for the problem to 3C), but I don't want to take the time to do that now.
In a web search, I found a few reports of similar-looking problems ([1](https://lists.llvm.org/pipermail/cfe-dev/2013-December/033883.html), [2](https://lists.llvm.org/pipermail/cfe-users/2015-April/000681.html)), but it doesn't appear that anyone has tracked down the details and formally reported the bug in [the Clang bug database](https://bugs.llvm.org/). We could do so if we wish.
Ultimately, we'll probably want to fix or work around this problem somehow so that end users get correct behavior when running 3C on a compilation database like that of icecast. For now, I'm planning to work around the problem in `convert_project` by restoring the legacy behavior of passing `-extra-arg-before` to 3C, but only for the absolute versions of `-I` options. Since we use `-extra-arg-before`, this will ensure that every included file is found via an absolute `-I` directory before we reach the relative ones in the compilation database, so the cached path will be absolute, avoiding the problem. In principle, this could be wrong if different translation units have different sets of resolved `-I` directories: if we apply the union of the `-I` directories to all translation units, then a translation unit could use a file from an `-I` directory that was not supposed to be active for that translation unit, when it was intended to use a file from a later `-I` directory instead. However, I don't believe this happens in any of our current benchmarks.
|
process
|
clangtool run chdir call corrupts internal clang include file path cache as part of the change to expand macros before running i tried to change convert project so that instead of passing an adjusted version of the union of all compiler options seen in the compilation database to via extra arg before it lets read the options directly from the compilation database this is because approach may be wrong if different translation units have different compiler options and i was more concerned about this as convert project started to have more direct interaction with the preprocessor importantly the adjustment in included expanding relative paths in i options to absolute paths based on the working directory of the translation unit carrying the options unfortunately this change seemed to cause our icecast benchmark to trigger a bug in clang libtooling the symptom looks like this info generate ccommands running home matt wt build bin dump stats p home matt benchmarks icecast compile commands json extra arg w base dir home matt benchmarks icecast output dir home matt benchmarks icecast out checked home matt benchmarks icecast src format flac c home matt benchmarks icecast src format ogg c home matt benchmarks icecast src format kate c home matt benchmarks icecast src main c home matt benchmarks icecast src format c home matt benchmarks icecast src sighandler c home matt benchmarks icecast src global c home matt benchmarks icecast src cfgfile c home matt benchmarks icecast src format ebml c home matt benchmarks icecast src event c home matt benchmarks icecast src auth htpasswd c home matt benchmarks icecast src refbuf c home matt benchmarks icecast src avl avl c home matt benchmarks icecast src format vorbis c home matt benchmarks icecast src connection c home matt benchmarks icecast src util c home matt benchmarks icecast src admin c home matt benchmarks icecast src log log c home matt benchmarks icecast src format opus c home matt benchmarks icecast src thread thread c home 
matt benchmarks icecast src client c home matt benchmarks icecast src timing timing c home matt benchmarks icecast src net resolver c home matt benchmarks icecast src stats c home matt benchmarks icecast src net sock c home matt benchmarks icecast src source c home matt benchmarks icecast src slave c home matt benchmarks icecast src format skeleton c home matt benchmarks icecast src logging c home matt benchmarks icecast src fserve c home matt benchmarks icecast src auth c home matt benchmarks icecast src format midi c home matt benchmarks icecast src c home matt benchmarks icecast src format c home matt benchmarks icecast src xslt c home matt benchmarks icecast src httpp httpp c avl c fatal error cannot open file config h no such file or directory include avl c fatal error cannot open file config h no such file or directory include my rough theory is as follows clang has a cache where the first time it sees include str where str is of the form or path it searches the include path for the first matching file and caches the path at which it found the file to a first approximation the concatenation of the i directory with path if clang later sees include str again it tries to open the cached path directly and raises a fatal error seen above if it fails the problem arises when the cached path is relative which can occur if the directory path specified via i was relative clangtool run iterates over the specified translation units and for each one it does a chdir to the working directory specified in the compilation database but apparently does not invalidate the cache consequently if different translation units have different working directories the preprocessor may try and fail to open a cached relative path because the working directory is different than it was when the path was added to the cache when instead the preprocessor should do the include search over surprisingly did not fix the problem because clangtool buildasts still calls clangtool run internally here 
is though the logs will probably expire from github soon it should be possible to reproduce the problem by re running that revision of the preprocess before conversion workflow on the corresponding revision of the preprocess before conversion branch of this repository we could probably construct a smaller test case with a compilation database with two entries and presumably that s what we would do if we wanted to add a regression test for the problem to but i don t want to take the time to do that now in a web search i found a few reports of similar looking problems but it doesn t appear that anyone has tracked down the details and formally reported the bug in we could do so if we wish ultimately we ll probably want to fix or work around this problem somehow so that end users get correct behavior when running on a compilation database like that of icecast for now i m planning to work around the problem in convert project by restoring the legacy behavior of passing extra arg before to but only for the absolute versions of i options since we use extra arg before this will ensure that every included file is found via an absolute i directory before we reach the relative ones in the compilation database so the cached path will be absolute avoiding the problem in principle this could be wrong if different translation units have different sets of resolved i directories if we apply the union of the i directories to all translation units then a translation unit could use a file from an i directory that was not supposed to be active for that translation unit when it was intended to use a file from a later i directory instead however i don t believe this happens in any of our current benchmarks
| 1
|
18,941
| 24,901,739,161
|
IssuesEvent
|
2022-10-28 21:48:47
|
googleapis/python-bigquery
|
https://api.github.com/repos/googleapis/python-bigquery
|
closed
|
Run checks to use latest supported Python (3.9)
|
api: bigquery type: process
|
In `setup.py` and in README we declare that we support `Python >= 3.6`, but the nox test sessions only use Python up to 3.8. Python 3.9 is also missing from the `classifiers` list in `setup.py`.
We should bump the maximum versions to `3.9` where applicable. If tests fail in 3.9, we should bound the `python_requires` version in `setup.py` until the failures are fixed (though - bound it in any case and manually increase only after we confirm that it works with the new minor Python release?).
If we first need to setup Kokoro environment to support Python 3.9 test environment, let's do that and keep this ticket open as a reminder.
cc: @tswast @busunkim96
|
1.0
|
Run checks to use latest supported Python (3.9) - In `setup.py` and in README we declare that we support `Python >= 3.6`, but the nox test sessions only use Python up to 3.8. Python 3.9 is also missing from the `classifiers` list in `setup.py`.
We should bump the maximum versions to `3.9` where applicable. If tests fail in 3.9, we should bound the `python_requires` version in `setup.py` until the failures are fixed (though - bound it in any case and manually increase only after we confirm that it works with the new minor Python release?).
If we first need to setup Kokoro environment to support Python 3.9 test environment, let's do that and keep this ticket open as a reminder.
cc: @tswast @busunkim96
|
process
|
run checks to use latest supported python in setup py and in readme we declare that we support python but the nox test sessions only use python up to python is also missing from the classifiers list in setup py we should bump the maximum versions to where applicable if tests fail in we should bound the python requires version in setup py until the failures are fixed though bound it in any case and manually increase only after we confirm that it works with the new minor python release if we first need to setup kokoro environment to support python test environment let s do that and keep this ticket open as a reminder cc tswast
| 1
|
9,910
| 12,950,137,587
|
IssuesEvent
|
2020-07-19 12:02:19
|
dita-ot/dita-ot
|
https://api.github.com/repos/dita-ot/dita-ot
|
closed
|
Cryptic error when XHTML output generate copy outer param set to not allowed value
|
feature good first issue preprocess priority/low stale
|
If I process DITA OT XHTML using DITA OT 2.x and the copy outer parameter set to:
```
-Dgenerate.copy.outer=2
```
I obtain a very cryptic error at some point like:
```
BUILD FAILED
C:\wade\DITA-OT\build.xml:41: The following error occurred while executing this line:
C:\wade\DITA-OT\plugins\org.dita.base\build_preprocess.xml:78: Failed to run pipeline: null
at org.dita.dost.invoker.ExtensibleAntInvoker.execute(ExtensibleAntInvoker.java:199)
at org.apache.tools.ant.UnknownElement.execute(UnknownElement.java:292)
at sun.reflect.GeneratedMethodAccessor4.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source)
at java.lang.reflect.Method.invoke(Unknown Source)
at org.apache.tools.ant.dispatch.DispatchUtils.execute(DispatchUtils.java:106)
at org.apache.tools.ant.Task.perform(Task.java:348)
at org.apache.tools.ant.Target.execute(Target.java:435)
at org.apache.tools.ant.Target.performTasks(Target.java:456)
at org.apache.tools.ant.Project.executeSortedTargets(Project.java:1393)
at org.apache.tools.ant.helper.SingleCheckExecutor.executeTargets(SingleCheckExecutor.java:38)
at org.apache.tools.ant.Project.executeTargets(Project.java:1248)
at org.apache.tools.ant.taskdefs.Ant.execute(Ant.java:441)
at org.apache.tools.ant.taskdefs.CallTarget.execute(CallTarget.java:105)
at org.apache.tools.ant.UnknownElement.execute(UnknownElement.java:292)
at sun.reflect.GeneratedMethodAccessor4.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source)
at java.lang.reflect.Method.invoke(Unknown Source)
at org.apache.tools.ant.dispatch.DispatchUtils.execute(DispatchUtils.java:106)
at org.apache.tools.ant.Task.perform(Task.java:348)
at org.apache.tools.ant.Target.execute(Target.java:435)
at org.apache.tools.ant.Target.performTasks(Target.java:456)
at org.apache.tools.ant.Project.executeSortedTargets(Project.java:1393)
at org.apache.tools.ant.Project.executeTarget(Project.java:1364)
at org.apache.tools.ant.helper.DefaultExecutor.executeTargets(DefaultExecutor.java:41)
at org.apache.tools.ant.Project.executeTargets(Project.java:1248)
at org.apache.tools.ant.Main.runBuild(Main.java:851)
at org.apache.tools.ant.Main.startAnt(Main.java:235)
at org.apache.tools.ant.launch.Launcher.run(Launcher.java:280)
at org.apache.tools.ant.launch.Launcher.main(Launcher.java:109)
Caused by: org.dita.dost.exception.DITAOTException
at org.dita.dost.module.GenMapAndTopicListModule.execute(GenMapAndTopicListModule.java:248)
at org.dita.dost.pipeline.PipelineFacade.execute(PipelineFacade.java:68)
at org.dita.dost.invoker.ExtensibleAntInvoker.execute(ExtensibleAntInvoker.java:193)
... 29 more
Caused by: java.lang.IllegalArgumentException
at org.dita.dost.util.Job$Generate.get(Job.java:802)
at org.dita.dost.util.Job.setGeneratecopyouter(Job.java:848)
at org.dita.dost.module.GenMapAndTopicListModule.parseInputParameters(GenMapAndTopicListModule.java:333)
at org.dita.dost.module.GenMapAndTopicListModule.execute(GenMapAndTopicListModule.java:232)
... 31 more
```
with DITA OT 3.x I obtain a similar failure:
\dita-ot-3x-plugin-master\dita- ot-3.x\build.xml:45: The following error occurred while executing this line:
dita-ot-3x-plugin-master\dita-ot-3.x\plugins\org.dita.base\build_preprocess2.xml:71: java.lang.IllegalArgumentException
at org.dita.dost.util.Job$Generate.get(Job.java:856)
at org.dita.dost.util.Job.setGeneratecopyouter(Job.java:902)
at org.dita.dost.module.reader.AbstractReaderModule.parseInputParameters(AbstractReaderModule.java:230)
at org.dita.dost.module.reader.MapReaderModule.execute(MapReaderModule.java:46)
at org.dita.dost.ant.ExtensibleAntInvoker.execute(ExtensibleAntInvoker.java:169)
at org.apache.tools.ant.UnknownElement.execute(UnknownElement.java:292)
at sun.reflect.GeneratedMethodAccessor6.invoke(Unknown Source)
|
1.0
|
Cryptic error when XHTML output generate copy outer param set to not allowed value - If I process DITA OT XHTML using DITA OT 2.x and the copy outer parameter set to:
```
-Dgenerate.copy.outer=2
```
I obtain a very cryptic error at some point like:
```
BUILD FAILED
C:\wade\DITA-OT\build.xml:41: The following error occurred while executing this line:
C:\wade\DITA-OT\plugins\org.dita.base\build_preprocess.xml:78: Failed to run pipeline: null
at org.dita.dost.invoker.ExtensibleAntInvoker.execute(ExtensibleAntInvoker.java:199)
at org.apache.tools.ant.UnknownElement.execute(UnknownElement.java:292)
at sun.reflect.GeneratedMethodAccessor4.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source)
at java.lang.reflect.Method.invoke(Unknown Source)
at org.apache.tools.ant.dispatch.DispatchUtils.execute(DispatchUtils.java:106)
at org.apache.tools.ant.Task.perform(Task.java:348)
at org.apache.tools.ant.Target.execute(Target.java:435)
at org.apache.tools.ant.Target.performTasks(Target.java:456)
at org.apache.tools.ant.Project.executeSortedTargets(Project.java:1393)
at org.apache.tools.ant.helper.SingleCheckExecutor.executeTargets(SingleCheckExecutor.java:38)
at org.apache.tools.ant.Project.executeTargets(Project.java:1248)
at org.apache.tools.ant.taskdefs.Ant.execute(Ant.java:441)
at org.apache.tools.ant.taskdefs.CallTarget.execute(CallTarget.java:105)
at org.apache.tools.ant.UnknownElement.execute(UnknownElement.java:292)
at sun.reflect.GeneratedMethodAccessor4.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source)
at java.lang.reflect.Method.invoke(Unknown Source)
at org.apache.tools.ant.dispatch.DispatchUtils.execute(DispatchUtils.java:106)
at org.apache.tools.ant.Task.perform(Task.java:348)
at org.apache.tools.ant.Target.execute(Target.java:435)
at org.apache.tools.ant.Target.performTasks(Target.java:456)
at org.apache.tools.ant.Project.executeSortedTargets(Project.java:1393)
at org.apache.tools.ant.Project.executeTarget(Project.java:1364)
at org.apache.tools.ant.helper.DefaultExecutor.executeTargets(DefaultExecutor.java:41)
at org.apache.tools.ant.Project.executeTargets(Project.java:1248)
at org.apache.tools.ant.Main.runBuild(Main.java:851)
at org.apache.tools.ant.Main.startAnt(Main.java:235)
at org.apache.tools.ant.launch.Launcher.run(Launcher.java:280)
at org.apache.tools.ant.launch.Launcher.main(Launcher.java:109)
Caused by: org.dita.dost.exception.DITAOTException
at org.dita.dost.module.GenMapAndTopicListModule.execute(GenMapAndTopicListModule.java:248)
at org.dita.dost.pipeline.PipelineFacade.execute(PipelineFacade.java:68)
at org.dita.dost.invoker.ExtensibleAntInvoker.execute(ExtensibleAntInvoker.java:193)
... 29 more
Caused by: java.lang.IllegalArgumentException
at org.dita.dost.util.Job$Generate.get(Job.java:802)
at org.dita.dost.util.Job.setGeneratecopyouter(Job.java:848)
at org.dita.dost.module.GenMapAndTopicListModule.parseInputParameters(GenMapAndTopicListModule.java:333)
at org.dita.dost.module.GenMapAndTopicListModule.execute(GenMapAndTopicListModule.java:232)
... 31 more
```
with DITA OT 3.x I obtain a similar failure:
\dita-ot-3x-plugin-master\dita- ot-3.x\build.xml:45: The following error occurred while executing this line:
dita-ot-3x-plugin-master\dita-ot-3.x\plugins\org.dita.base\build_preprocess2.xml:71: java.lang.IllegalArgumentException
at org.dita.dost.util.Job$Generate.get(Job.java:856)
at org.dita.dost.util.Job.setGeneratecopyouter(Job.java:902)
at org.dita.dost.module.reader.AbstractReaderModule.parseInputParameters(AbstractReaderModule.java:230)
at org.dita.dost.module.reader.MapReaderModule.execute(MapReaderModule.java:46)
at org.dita.dost.ant.ExtensibleAntInvoker.execute(ExtensibleAntInvoker.java:169)
at org.apache.tools.ant.UnknownElement.execute(UnknownElement.java:292)
at sun.reflect.GeneratedMethodAccessor6.invoke(Unknown Source)
|
process
|
cryptic error when xhtml output generate copy outer param set to not allowed value if i process dita ot xhtml using dita ot x and the copy outer parameter set to dgenerate copy outer i obtain a very cryptic error at some point like build failed c wade dita ot build xml the following error occurred while executing this line c wade dita ot plugins org dita base build preprocess xml failed to run pipeline null at org dita dost invoker extensibleantinvoker execute extensibleantinvoker java at org apache tools ant unknownelement execute unknownelement java at sun reflect invoke unknown source at sun reflect delegatingmethodaccessorimpl invoke unknown source at java lang reflect method invoke unknown source at org apache tools ant dispatch dispatchutils execute dispatchutils java at org apache tools ant task perform task java at org apache tools ant target execute target java at org apache tools ant target performtasks target java at org apache tools ant project executesortedtargets project java at org apache tools ant helper singlecheckexecutor executetargets singlecheckexecutor java at org apache tools ant project executetargets project java at org apache tools ant taskdefs ant execute ant java at org apache tools ant taskdefs calltarget execute calltarget java at org apache tools ant unknownelement execute unknownelement java at sun reflect invoke unknown source at sun reflect delegatingmethodaccessorimpl invoke unknown source at java lang reflect method invoke unknown source at org apache tools ant dispatch dispatchutils execute dispatchutils java at org apache tools ant task perform task java at org apache tools ant target execute target java at org apache tools ant target performtasks target java at org apache tools ant project executesortedtargets project java at org apache tools ant project executetarget project java at org apache tools ant helper defaultexecutor executetargets defaultexecutor java at org apache tools ant project executetargets project java at 
org apache tools ant main runbuild main java at org apache tools ant main startant main java at org apache tools ant launch launcher run launcher java at org apache tools ant launch launcher main launcher java caused by org dita dost exception ditaotexception at org dita dost module genmapandtopiclistmodule execute genmapandtopiclistmodule java at org dita dost pipeline pipelinefacade execute pipelinefacade java at org dita dost invoker extensibleantinvoker execute extensibleantinvoker java more caused by java lang illegalargumentexception at org dita dost util job generate get job java at org dita dost util job setgeneratecopyouter job java at org dita dost module genmapandtopiclistmodule parseinputparameters genmapandtopiclistmodule java at org dita dost module genmapandtopiclistmodule execute genmapandtopiclistmodule java more with dita ot x i obtain a similar failure dita ot plugin master dita ot x build xml the following error occurred while executing this line dita ot plugin master dita ot x plugins org dita base build xml java lang illegalargumentexception at org dita dost util job generate get job java at org dita dost util job setgeneratecopyouter job java at org dita dost module reader abstractreadermodule parseinputparameters abstractreadermodule java at org dita dost module reader mapreadermodule execute mapreadermodule java at org dita dost ant extensibleantinvoker execute extensibleantinvoker java at org apache tools ant unknownelement execute unknownelement java at sun reflect invoke unknown source
| 1
|
84,425
| 15,721,437,312
|
IssuesEvent
|
2021-03-29 03:06:36
|
mycomplexsoul/delta
|
https://api.github.com/repos/mycomplexsoul/delta
|
opened
|
CVE-2020-28502 (High) detected in xmlhttprequest-ssl-1.5.5.tgz
|
security vulnerability
|
## CVE-2020-28502 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>xmlhttprequest-ssl-1.5.5.tgz</b></p></summary>
<p>XMLHttpRequest for Node</p>
<p>Library home page: <a href="https://registry.npmjs.org/xmlhttprequest-ssl/-/xmlhttprequest-ssl-1.5.5.tgz">https://registry.npmjs.org/xmlhttprequest-ssl/-/xmlhttprequest-ssl-1.5.5.tgz</a></p>
<p>Path to dependency file: delta/package.json</p>
<p>Path to vulnerable library: delta/node_modules/xmlhttprequest-ssl/package.json</p>
<p>
Dependency Hierarchy:
- karma-5.2.3.tgz (Root Library)
- socket.io-2.3.0.tgz
- socket.io-client-2.3.0.tgz
- engine.io-client-3.4.3.tgz
- :x: **xmlhttprequest-ssl-1.5.5.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/mycomplexsoul/delta/commit/6dc7c3e3f97e8e28471c127e33ba7e52892e0f53">6dc7c3e3f97e8e28471c127e33ba7e52892e0f53</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
This affects the package xmlhttprequest before 1.7.0; all versions of package xmlhttprequest-ssl. Provided requests are sent synchronously (async=False on xhr.open), malicious user input flowing into xhr.send could result in arbitrary code being injected and run.
<p>Publish Date: 2021-03-05
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-28502>CVE-2020-28502</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>8.1</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: High
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-28502">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-28502</a></p>
<p>Release Date: 2021-03-05</p>
<p>Fix Resolution: xmlhttprequest - 1.7.0</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
True
|
CVE-2020-28502 (High) detected in xmlhttprequest-ssl-1.5.5.tgz - ## CVE-2020-28502 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>xmlhttprequest-ssl-1.5.5.tgz</b></p></summary>
<p>XMLHttpRequest for Node</p>
<p>Library home page: <a href="https://registry.npmjs.org/xmlhttprequest-ssl/-/xmlhttprequest-ssl-1.5.5.tgz">https://registry.npmjs.org/xmlhttprequest-ssl/-/xmlhttprequest-ssl-1.5.5.tgz</a></p>
<p>Path to dependency file: delta/package.json</p>
<p>Path to vulnerable library: delta/node_modules/xmlhttprequest-ssl/package.json</p>
<p>
Dependency Hierarchy:
- karma-5.2.3.tgz (Root Library)
- socket.io-2.3.0.tgz
- socket.io-client-2.3.0.tgz
- engine.io-client-3.4.3.tgz
- :x: **xmlhttprequest-ssl-1.5.5.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/mycomplexsoul/delta/commit/6dc7c3e3f97e8e28471c127e33ba7e52892e0f53">6dc7c3e3f97e8e28471c127e33ba7e52892e0f53</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
This affects the package xmlhttprequest before 1.7.0; all versions of package xmlhttprequest-ssl. Provided requests are sent synchronously (async=False on xhr.open), malicious user input flowing into xhr.send could result in arbitrary code being injected and run.
<p>Publish Date: 2021-03-05
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-28502>CVE-2020-28502</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>8.1</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: High
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-28502">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-28502</a></p>
<p>Release Date: 2021-03-05</p>
<p>Fix Resolution: xmlhttprequest - 1.7.0</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
non_process
|
cve high detected in xmlhttprequest ssl tgz cve high severity vulnerability vulnerable library xmlhttprequest ssl tgz xmlhttprequest for node library home page a href path to dependency file delta package json path to vulnerable library delta node modules xmlhttprequest ssl package json dependency hierarchy karma tgz root library socket io tgz socket io client tgz engine io client tgz x xmlhttprequest ssl tgz vulnerable library found in head commit a href found in base branch master vulnerability details this affects the package xmlhttprequest before all versions of package xmlhttprequest ssl provided requests are sent synchronously async false on xhr open malicious user input flowing into xhr send could result in arbitrary code being injected and run publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity high privileges required none user interaction none scope unchanged impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution xmlhttprequest step up your open source security game with whitesource
| 0
|
365,739
| 25,549,936,283
|
IssuesEvent
|
2022-11-29 22:28:44
|
cal-itp/benefits
|
https://api.github.com/repos/cal-itp/benefits
|
closed
|
Make it clear which terminology to use
|
documentation deliverable
|
There have been some discussions that have come up around terminology within the Benefits project:
- "Benefit" and "discount" - cc https://github.com/cal-itp/mobility-marketplace/issues/370
- What are [the different things the user selects between](https://www.figma.com/proto/SeSd3LaLd6WkbEYhmtKpO3/Benefits-(IAL2-Login.gov)?node-id=5957%3A23559&scaling=scale-down&page-id=5450%3A21241&starting-point-node-id=5902%3A24813)?
- Is there / should there be a catch-all term that encompasses both?
- The model in the app is `EligibilityType`.
- This one might be large/important enough to create a (non-architectural) [decision record](https://18f.gsa.gov/2021/07/06/architecture_decision_records_helpful_now_invaluable_later/) around.
- [Identity "verification" and "proofing"](https://cal-itp.slack.com/archives/CG64NU5C7/p1663020296383499)
I could see a Glossary/Terminology section of the [style guide](https://www.figma.com/proto/SeSd3LaLd6WkbEYhmtKpO3/Benefits-(IAL2-Login.gov)?node-id=4942%3A17385&scaling=scale-down&page-id=4890%3A17182) or the [docs site](https://docs.calitp.org/benefits/).
## Acceptance Criteria
- [ ] For some "core terms" (listed above), it is clear when to use one versus another
## Additional context
<!-- Include information about scope, time frame, person who requested the task, links to resources -->
## What is the definition of done?
|
1.0
|
Make it clear which terminology to use - There have been some discussions that have come up around terminology within the Benefits project:
- "Benefit" and "discount" - cc https://github.com/cal-itp/mobility-marketplace/issues/370
- What are [the different things the user selects between](https://www.figma.com/proto/SeSd3LaLd6WkbEYhmtKpO3/Benefits-(IAL2-Login.gov)?node-id=5957%3A23559&scaling=scale-down&page-id=5450%3A21241&starting-point-node-id=5902%3A24813)?
- Is there / should there be a catch-all term that encompasses both?
- The model in the app is `EligibilityType`.
- This one might be large/important enough to create a (non-architectural) [decision record](https://18f.gsa.gov/2021/07/06/architecture_decision_records_helpful_now_invaluable_later/) around.
- [Identity "verification" and "proofing"](https://cal-itp.slack.com/archives/CG64NU5C7/p1663020296383499)
I could see a Glossary/Terminology section of the [style guide](https://www.figma.com/proto/SeSd3LaLd6WkbEYhmtKpO3/Benefits-(IAL2-Login.gov)?node-id=4942%3A17385&scaling=scale-down&page-id=4890%3A17182) or the [docs site](https://docs.calitp.org/benefits/).
## Acceptance Criteria
- [ ] For some "core terms" (listed above), it is clear when to use one versus another
## Additional context
<!-- Include information about scope, time frame, person who requested the task, links to resources -->
## What is the definition of done?
|
non_process
|
make it clear which terminology to use there have been some discussions that have come up around terminology within the benefits project benefit and discount cc what are is there should there be a catch all term that encompasses both the model in the app is eligibilitytype this one might be large important enough to create a non architectural around i could see a glossary terminology section of the or the acceptance criteria for some core terms listed above it is clear when to use one versus another additional context what is the definition of done
| 0
|
124,642
| 26,502,265,655
|
IssuesEvent
|
2023-01-18 11:10:55
|
salmenf/webwriter
|
https://api.github.com/repos/salmenf/webwriter
|
opened
|
Schema-based parsing at all system boundaries
|
code quality core
|
External data (document formats, packages, etc.) should be parsed so data in the system is uniform.
|
1.0
|
Schema-based parsing at all system boundaries - External data (document formats, packages, etc.) should be parsed so data in the system is uniform.
|
non_process
|
schema based parsing at all system boundaries external data document formats packages etc should be parsed so data in the system is uniform
| 0
|
15,883
| 20,071,519,068
|
IssuesEvent
|
2022-02-04 07:37:39
|
plazi/treatmentBank
|
https://api.github.com/repos/plazi/treatmentBank
|
closed
|
figures not linked since Feb 1.
|
help wanted invalid processing BLR
|
@gsautter is there a reason that the figures in the recently batch processed articles (since Feb 1) are not linked to the images?
Could be that there is a Zenodo issue? https://tb.plazi.org/GgServer/dioStats/stats?outputFields=doc.articleUuid+doc.gbifId+doc.zenodoDepId&groupingFields=doc.articleUuid+doc.gbifId+doc.zenodoDepId&FP-doc.articleUuid=FFC07C53B27DBE37FFD19038FFEAFFE2&format=HTML
e.g. https://tb.plazi.org/GgServer/summary/FFC07C53B27DBE37FFD19038FFEAFFE2

https://tb.plazi.org/GgServer/summary/FFFF7518FFBD1432FF8A2D7CA4693666a

https://tb.plazi.org/GgServer/summary/FF9F895C2F5BFFD79154FFEFFFFEFFC4

|
1.0
|
figures not linked since Feb 1. - @gsautter is there a reason that the figures in the recently batch processed articles (since Feb 1) are not linked to the images?
Could be that there is a Zenodo issue? https://tb.plazi.org/GgServer/dioStats/stats?outputFields=doc.articleUuid+doc.gbifId+doc.zenodoDepId&groupingFields=doc.articleUuid+doc.gbifId+doc.zenodoDepId&FP-doc.articleUuid=FFC07C53B27DBE37FFD19038FFEAFFE2&format=HTML
e.g. https://tb.plazi.org/GgServer/summary/FFC07C53B27DBE37FFD19038FFEAFFE2

https://tb.plazi.org/GgServer/summary/FFFF7518FFBD1432FF8A2D7CA4693666a

https://tb.plazi.org/GgServer/summary/FF9F895C2F5BFFD79154FFEFFFFEFFC4

|
process
|
figures not linked since feb gsautter is there a reason that the figures in the recently batch processed articles since feb are not linked to the images could be that there is a zenodo issue e g
| 1
|
65,534
| 7,885,386,058
|
IssuesEvent
|
2018-06-27 12:18:37
|
WordPress/gutenberg
|
https://api.github.com/repos/WordPress/gutenberg
|
closed
|
Create unified UI for block variants
|
Chrome Needs Design Feedback
|
Coming out of #773 and relating to other issues like #317, #728, and especially #522. There seems to be a need for a consistent style for choosing between variations on a type of block (e.g. the two types of blockquotes). I can't find a ticket tackling this across all blocks, so I hope this is an appropriate new ticket.
Right now, the convention appears to be numbered styles:

However, that is _really_ similar to how Heading hierarchical levels are communicated and I also don't think that solution can scale.
If there are 6 blockquote styles, remembering which is which will be very challenging. Nielsen Norman Group's recommendation for interface design is to [never use an icon without a text label](https://www.nngroup.com/articles/icon-usability/). That's obviously not feasible for a project like this, but I really think block variations are a moment where the recommendation deserves heeding.
Here's a terrible mockup:

Hopefully that at least communicates the idea. In my experience, naming variations like this also increases consistency of formatting across the site—to the great benefit of the user—since styles are somewhat more semantic (think of "Warning" or "Disclaimer" paragraph styles instead of P1, P2, etc.).
We're building a system here to help people communicate, not just format text in a vacuum. I think this change will improve that experience greatly.
|
1.0
|
Create unified UI for block variants - Coming out of #773 and relating to other issues like #317, #728, and especially #522. There seems to be a need for a consistent style for choosing between variations on a type of block (e.g. the two types of blockquotes). I can't find a ticket tackling this across all blocks, so I hope this is an appropriate new ticket.
Right now, the convention appears to be numbered styles:

However, that is _really_ similar to how Heading hierarchical levels are communicated and I also don't think that solution can scale.
If there are 6 blockquote styles, remembering which is which will be very challenging. Nielsen Norman Group's recommendation for interface design is to [never use an icon without a text label](https://www.nngroup.com/articles/icon-usability/). That's obviously not feasible for a project like this, but I really think block variations are a moment where the recommendation deserves heeding.
Here's a terrible mockup:

Hopefully that at least communicates the idea. In my experience, naming variations like this also increases consistency of formatting across the site—to the great benefit of the user—since styles are somewhat more semantic (think of "Warning" or "Disclaimer" paragraph styles instead of P1, P2, etc.).
We're building a system here to help people communicate, not just format text in a vacuum. I think this change will improve that experience greatly.
|
non_process
|
create unified ui for block variants coming out of and relating to other issues like and especially there seems to be a need for a consistent style for choosing between variations on a type of block e g the two types of blockquotes i can t find a ticket tackling this across all blocks so i hope this is an appropriate new ticket right now the convention appears to be numbered styles however that is really similar to how heading hierarchical levels are communicated and i also don t think that solution can scale if there are blockquote styles remembering which is which will be very challenging nielsen norman group s recommendation for interface design is to that s obviously not feasible for a project like this but i really think block variations are a moment where the recommendation deserves heeding here s a terrible mockup hopefully that at least communicates the idea in my experience naming variations like this also increases consistency of formatting across the site—to the great benefit of the user—since styles are somewhat more semantic think of warning or disclaimer paragraph styles instead of etc we re building a system here to help people communicate not just format text in a vacuum i think this change will improve that experience greatly
| 0
|
17,202
| 22,779,632,849
|
IssuesEvent
|
2022-07-08 18:07:25
|
hashgraph/hedera-json-rpc-relay
|
https://api.github.com/repos/hashgraph/hedera-json-rpc-relay
|
reopened
|
docker service does not start, lerna missing?
|
bug P2 process
|
### Description
Hi @Nana-EC
There seems to be an issue when I try to run the relay node as a docker service. I hope the issue is obvious once you see the logs :) Perhaps the issue is resolved once you have `lerna` as a `dependency` instead of a `devDependency`.
```sh
$ docker-compose up
relay_1 |
relay_1 | > start
relay_1 | > npx lerna exec --scope @hashgraph/json-rpc-server -- npm run start
relay_1 |
Need to install the following packages:
relay_1 | lerna
```
### Steps to reproduce
1. run docker-compose up with the given docker compose file
### Additional context
docker-compose.yaml
```yaml
version: "3.8"
services:
relay:
image: "ghcr.io/hashgraph/hedera-json-rpc-relay:0.1.0-rc3"
ports:
- 7546:7546
tty: true
volumes:
- ./deploy/config/hedera-relay-local/.env:/home/node/app/.env
```
### Hedera network
testnet
### Version
v0.1.0-rc3
### Operating system
Linux
|
1.0
|
docker service does not start, lerna missing? - ### Description
Hi @Nana-EC
There seems to be an issue when I try to run the relay node as a docker service. I hope the issue is obvious once you see the logs :) Perhaps the issue is resolved once you have `lerna` as a `dependency` instead of a `devDependency`.
```sh
$ docker-compose up
relay_1 |
relay_1 | > start
relay_1 | > npx lerna exec --scope @hashgraph/json-rpc-server -- npm run start
relay_1 |
Need to install the following packages:
relay_1 | lerna
```
### Steps to reproduce
1. run docker-compose up with the given docker compose file
### Additional context
docker-compose.yaml
```yaml
version: "3.8"
services:
relay:
image: "ghcr.io/hashgraph/hedera-json-rpc-relay:0.1.0-rc3"
ports:
- 7546:7546
tty: true
volumes:
- ./deploy/config/hedera-relay-local/.env:/home/node/app/.env
```
### Hedera network
testnet
### Version
v0.1.0-rc3
### Operating system
Linux
|
process
|
docker service does not start lerna missing description hi nana ec there seems to be an issue when i try to run the relay node as a docker service i hope the issue is obvious once you see the logs perhaps the issue is resolved once you have lerna as a dependency instead of a devdependency sh docker compose up relay relay start relay npx lerna exec scope hashgraph json rpc server npm run start relay need to install the following packages relay lerna steps to reproduce run docker compose up with the given docker compose file additional context docker compose yaml yaml version services relay image ghcr io hashgraph hedera json rpc relay ports tty true volumes deploy config hedera relay local env home node app env hedera network testnet version operating system linux
| 1
|
219,894
| 17,117,650,200
|
IssuesEvent
|
2021-07-11 17:37:30
|
phetsims/gravity-and-orbits
|
https://api.github.com/repos/phetsims/gravity-and-orbits
|
closed
|
CT designed API changes detected
|
priority:2-high status:ready-for-review type:automated-testing
|
```
gravity-and-orbits : phet-io-api-compatibility : unbuilt
https://bayes.colorado.edu/continuous-testing/ct-snapshots/1623335973957/gravity-and-orbits/gravity-and-orbits_en.html?continuousTest=%7B%22test%22%3A%5B%22gravity-and-orbits%22%2C%22phet-io-api-compatibility%22%2C%22unbuilt%22%5D%2C%22snapshotName%22%3A%22snapshot-1623335973957%22%2C%22timestamp%22%3A1623340144390%7D&ea&brand=phet-io&phetioStandalone&phetioCompareAPI
Query: ea&brand=phet-io&phetioStandalone&phetioCompareAPI
Uncaught Error: Assertion failed: Designed API changes detected, please roll them back or revise the reference API:
gravityAndOrbits.general.model.simInfo._data.initialState differs. Expected: {"dataStreamVersion":"1.0.0","phetioCommandProcessorProtocol":"1.0.0","repoName":"gravity-and-orbits","screenPropertyValue":"gravityAndOrbits.homeScreen","screens":[{"name":"Home","phetioID":"gravityAndOrbits.homeScreen"},{"name":"Model","phetioID":"gravityAndOrbits.modelScreen"},{"name":"To Scale","phetioID":"gravityAndOrbits.toScaleScreen"}],"simName":"Gravity and Orbits","simVersion":"1.5.0-dev.4","wrapperMetadata":null}, actual: {"simName":"Gravity and Orbits","screens":[{"name":"Home","phetioID":"gravityAndOrbits.homeScreen"},{"name":"Model","phetioID":"gravityAndOrbits.modelScreen"},{"name":"To Scale","phetioID":"gravityAndOrbits.toScaleScreen"}],"simVersion":"1.5.0-dev.6","repoName":"gravity-and-orbits","screenPropertyValue":"gravityAndOrbits.homeScreen","wrapperMetadata":null,"dataStreamVersion":"1.0.0","phetioCommandProcessorProtocol":"1.0.0"}
Error: Assertion failed: Designed API changes detected, please roll them back or revise the reference API:
gravityAndOrbits.general.model.simInfo._data.initialState differs. Expected: {"dataStreamVersion":"1.0.0","phetioCommandProcessorProtocol":"1.0.0","repoName":"gravity-and-orbits","screenPropertyValue":"gravityAndOrbits.homeScreen","screens":[{"name":"Home","phetioID":"gravityAndOrbits.homeScreen"},{"name":"Model","phetioID":"gravityAndOrbits.modelScreen"},{"name":"To Scale","phetioID":"gravityAndOrbits.toScaleScreen"}],"simName":"Gravity and Orbits","simVersion":"1.5.0-dev.4","wrapperMetadata":null}, actual: {"simName":"Gravity and Orbits","screens":[{"name":"Home","phetioID":"gravityAndOrbits.homeScreen"},{"name":"Model","phetioID":"gravityAndOrbits.modelScreen"},{"name":"To Scale","phetioID":"gravityAndOrbits.toScaleScreen"}],"simVersion":"1.5.0-dev.6","repoName":"gravity-and-orbits","screenPropertyValue":"gravityAndOrbits.homeScreen","wrapperMetadata":null,"dataStreamVersion":"1.0.0","phetioCommandProcessorProtocol":"1.0.0"}
at window.assertions.assertFunction (https://bayes.colorado.edu/continuous-testing/ct-snapshots/1623335973957/assert/js/assert.js:25:13)
at XMLHttpRequest.<anonymous> (https://bayes.colorado.edu/continuous-testing/ct-snapshots/1623335973957/phet-io/js/phetioEngine.js:321:23)
id: Bayes Chrome
Snapshot from 6/10/2021, 8:39:33 AM
```
|
1.0
|
CT designed API changes detected - ```
gravity-and-orbits : phet-io-api-compatibility : unbuilt
https://bayes.colorado.edu/continuous-testing/ct-snapshots/1623335973957/gravity-and-orbits/gravity-and-orbits_en.html?continuousTest=%7B%22test%22%3A%5B%22gravity-and-orbits%22%2C%22phet-io-api-compatibility%22%2C%22unbuilt%22%5D%2C%22snapshotName%22%3A%22snapshot-1623335973957%22%2C%22timestamp%22%3A1623340144390%7D&ea&brand=phet-io&phetioStandalone&phetioCompareAPI
Query: ea&brand=phet-io&phetioStandalone&phetioCompareAPI
Uncaught Error: Assertion failed: Designed API changes detected, please roll them back or revise the reference API:
gravityAndOrbits.general.model.simInfo._data.initialState differs. Expected: {"dataStreamVersion":"1.0.0","phetioCommandProcessorProtocol":"1.0.0","repoName":"gravity-and-orbits","screenPropertyValue":"gravityAndOrbits.homeScreen","screens":[{"name":"Home","phetioID":"gravityAndOrbits.homeScreen"},{"name":"Model","phetioID":"gravityAndOrbits.modelScreen"},{"name":"To Scale","phetioID":"gravityAndOrbits.toScaleScreen"}],"simName":"Gravity and Orbits","simVersion":"1.5.0-dev.4","wrapperMetadata":null}, actual: {"simName":"Gravity and Orbits","screens":[{"name":"Home","phetioID":"gravityAndOrbits.homeScreen"},{"name":"Model","phetioID":"gravityAndOrbits.modelScreen"},{"name":"To Scale","phetioID":"gravityAndOrbits.toScaleScreen"}],"simVersion":"1.5.0-dev.6","repoName":"gravity-and-orbits","screenPropertyValue":"gravityAndOrbits.homeScreen","wrapperMetadata":null,"dataStreamVersion":"1.0.0","phetioCommandProcessorProtocol":"1.0.0"}
Error: Assertion failed: Designed API changes detected, please roll them back or revise the reference API:
gravityAndOrbits.general.model.simInfo._data.initialState differs. Expected: {"dataStreamVersion":"1.0.0","phetioCommandProcessorProtocol":"1.0.0","repoName":"gravity-and-orbits","screenPropertyValue":"gravityAndOrbits.homeScreen","screens":[{"name":"Home","phetioID":"gravityAndOrbits.homeScreen"},{"name":"Model","phetioID":"gravityAndOrbits.modelScreen"},{"name":"To Scale","phetioID":"gravityAndOrbits.toScaleScreen"}],"simName":"Gravity and Orbits","simVersion":"1.5.0-dev.4","wrapperMetadata":null}, actual: {"simName":"Gravity and Orbits","screens":[{"name":"Home","phetioID":"gravityAndOrbits.homeScreen"},{"name":"Model","phetioID":"gravityAndOrbits.modelScreen"},{"name":"To Scale","phetioID":"gravityAndOrbits.toScaleScreen"}],"simVersion":"1.5.0-dev.6","repoName":"gravity-and-orbits","screenPropertyValue":"gravityAndOrbits.homeScreen","wrapperMetadata":null,"dataStreamVersion":"1.0.0","phetioCommandProcessorProtocol":"1.0.0"}
at window.assertions.assertFunction (https://bayes.colorado.edu/continuous-testing/ct-snapshots/1623335973957/assert/js/assert.js:25:13)
at XMLHttpRequest.<anonymous> (https://bayes.colorado.edu/continuous-testing/ct-snapshots/1623335973957/phet-io/js/phetioEngine.js:321:23)
id: Bayes Chrome
Snapshot from 6/10/2021, 8:39:33 AM
```
|
non_process
|
ct designed api changes detected gravity and orbits phet io api compatibility unbuilt query ea brand phet io phetiostandalone phetiocompareapi uncaught error assertion failed designed api changes detected please roll them back or revise the reference api gravityandorbits general model siminfo data initialstate differs expected datastreamversion phetiocommandprocessorprotocol reponame gravity and orbits screenpropertyvalue gravityandorbits homescreen screens simname gravity and orbits simversion dev wrappermetadata null actual simname gravity and orbits screens simversion dev reponame gravity and orbits screenpropertyvalue gravityandorbits homescreen wrappermetadata null datastreamversion phetiocommandprocessorprotocol error assertion failed designed api changes detected please roll them back or revise the reference api gravityandorbits general model siminfo data initialstate differs expected datastreamversion phetiocommandprocessorprotocol reponame gravity and orbits screenpropertyvalue gravityandorbits homescreen screens simname gravity and orbits simversion dev wrappermetadata null actual simname gravity and orbits screens simversion dev reponame gravity and orbits screenpropertyvalue gravityandorbits homescreen wrappermetadata null datastreamversion phetiocommandprocessorprotocol at window assertions assertfunction at xmlhttprequest id bayes chrome snapshot from am
| 0
|
8,282
| 11,447,519,499
|
IssuesEvent
|
2020-02-06 00:03:56
|
parcel-bundler/parcel
|
https://api.github.com/repos/parcel-bundler/parcel
|
closed
|
posthtml-expressions breaks vue single file components
|
:bug: Bug HTML Preprocessing Stale Vue
|
<!---
Thanks for filing an issue 😄 ! Before you submit, please read the following:
Search open/closed issues before submitting since someone might have asked the same thing before!
-->
# 🐛 bug report
<!--- Provide a general summary of the issue here -->
When configuring `posthtml-expressions` in a project with Vue's single file components, the components are processed with posthtml before vue, and variables are replaced with "undefined".
Or something close to that. Anyway, it breaks the build.
## 🎛 Configuration (.babelrc, package.json, cli command)
<!--- If describing a bug, tell us what your babel configuration looks like -->
You can see an error replication here:
https://github.com/samuelgozi/parcel-posthtml-vue-error
Please note that the `src/index.html` file has a `<if condition="false"> ... </if>` tag in it that should not be rendered when `posthtml-expressions` is working.
The fastest way to verify that the error is caused by the `posthtml-expressions` plugin, is to comment out the plugin from `.posthtmlrc`, or just changing the name of the file(example `.posthtmlrc123`)
## 🤔 Expected Behavior
Vue files should not be touched by posthtml at all.
## 😯 Current Behavior
It seems like posthtml tries to process Vue files before vue itself, and all variable calls inside `{{ var }}` are replaced by `undefined`.
No errors show anywhere (I didn't manage to reproduce any).
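A minimal stand-in (not posthtml's actual implementation) illustrating the symptom: an expression pass that runs with no locals stringifies missing variables, so Vue's `{{ msg }}` bindings come out as the literal string "undefined".

```javascript
// Toy expression renderer: replaces {{ name }} with String(locals[name]).
// With empty locals (the plugin knows nothing about Vue component state),
// every binding becomes the literal string "undefined".
function renderExpressions(template, locals = {}) {
  return template.replace(/\{\{\s*(\w+)\s*\}\}/g, (_, name) => String(locals[name]));
}

console.log(renderExpressions('<p>{{ msg }}</p>'));                // <p>undefined</p>
console.log(renderExpressions('<p>{{ msg }}</p>', { msg: 'hi' })); // <p>hi</p>
```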
## 💁 Possible Solution
No idea; I don't know the internals of Parcel JS, but if someone can point me in the right direction, I'll take a look.
It may help to know that in the full project where I encountered this error, a stack trace showed that it has something to do with `pipeline.js:30`.
Adding `console.log(err)` before that line might help.
## 🌍 Your Environment
<!--- Include as many relevant details about the environment you experienced the bug in -->
| Software | Version(s) |
| ---------------- | ---------- |
| Parcel |1.12.3|
| Node |12.6.0|
| npm/Yarn | 1.17.3|
| Operating System | MacOS Mojave |
<!-- Love parcel? Please consider supporting our collective:
👉 https://opencollective.com/parcel/donate -->
|
1.0
|
posthtml-expressions breaks vue single file components - <!---
Thanks for filing an issue 😄 ! Before you submit, please read the following:
Search open/closed issues before submitting since someone might have asked the same thing before!
-->
# 🐛 bug report
<!--- Provide a general summary of the issue here -->
When configuring `posthtml-expressions` in a project with Vue's single file components, the components are processed with posthtml before vue, and variables are replaced with "undefined".
Or something close to that. Anyway, it breaks the build.
## 🎛 Configuration (.babelrc, package.json, cli command)
<!--- If describing a bug, tell us what your babel configuration looks like -->
You can see an error replication here:
https://github.com/samuelgozi/parcel-posthtml-vue-error
Please note that the `src/index.html` file has a `<if condition="false"> ... </if>` tag in it that should not be rendered when `posthtml-expressions` is working.
The fastest way to verify that the error is caused by the `posthtml-expressions` plugin, is to comment out the plugin from `.posthtmlrc`, or just changing the name of the file(example `.posthtmlrc123`)
## 🤔 Expected Behavior
Vue files should not be touched by posthtml at all.
## 😯 Current Behavior
It seems like posthtml tries to process Vue files before vue itself, and all variable calls inside `{{ var }}` are replaced by `undefined`.
No errors show anywhere (I didn't manage to reproduce any).
## 💁 Possible Solution
No idea; I don't know the internals of Parcel JS, but if someone can point me in the right direction, I'll take a look.
It may help to know that in the full project where I encountered this error, a stack trace showed that it has something to do with `pipeline.js:30`.
Adding `console.log(err)` before that line might help.
## 🌍 Your Environment
<!--- Include as many relevant details about the environment you experienced the bug in -->
| Software | Version(s) |
| ---------------- | ---------- |
| Parcel |1.12.3|
| Node |12.6.0|
| npm/Yarn | 1.17.3|
| Operating System | MacOS Mojave |
<!-- Love parcel? Please consider supporting our collective:
👉 https://opencollective.com/parcel/donate -->
|
process
|
posthtml expressions breaks vue single file components thanks for filing an issue 😄 before you submit please read the following search open closed issues before submitting since someone might have asked the same thing before 🐛 bug report when configuring posthtml expressions in a project with vue s single file components the components are processed with posthtml before vue and variables are replaced with undefined or something close to that anyways it breaks the build 🎛 configuration babelrc package json cli command you can see an error replication here please note that the src index html file has a tag in it that should not be rendered when posthtml expressions is working the fastest way to verify that the error is caused by the posthtml expressions plugin is to comment out the plugin from posthtmlrc or just changing the name of the file example 🤔 expected behavior vue files should not be touched by posthtml at all 😯 current behavior it seems like posthtml tries to process vue files before vue itself and all variable calls inside var are replaced by undefined no errors show anywhere i didnt manage to reproduce any 💁 possible solution no idea i dont know the internals of parcel js but if someone can point me in the right direction ill take a look maybe will help that in the full project where i encountered this error a stack trace showed that it has something to do with pipeline js adding before that line console log err might help 🌍 your environment software version s parcel node npm yarn operating system macos mojave love parcel please consider supporting our collective 👉
| 1
|
216,083
| 16,628,317,797
|
IssuesEvent
|
2021-06-03 12:37:29
|
simon-ritchie/apysc
|
https://api.github.com/repos/simon-ritchie/apysc
|
closed
|
Adjust document code block execution implementation to append jslib optional arguments
|
documentation enhancement
|
- Point to the common js lib directory (to reduce duplicated js lib files)
- Skip js lib exporting
|
1.0
|
Adjust document code block execution implementation to append jslib optional arguments - - Point to the common js lib directory (to reduce duplicated js lib files)
- Skip js lib exporting
|
non_process
|
adjust document code block execution implementation to append jslib optional arguments point to the common js lib directory to reduce duplicated js lib files skip js lib exporting
| 0
|
27,258
| 5,327,170,033
|
IssuesEvent
|
2017-02-15 08:13:51
|
nordsoftware/react-boilerplate
|
https://api.github.com/repos/nordsoftware/react-boilerplate
|
opened
|
Async/await
|
documentation enhancement
|
Async/await is not just for managing promises. It is a way to control synchronous/asynchronous execution over parts of the code.
Note that **await** may only be used in functions marked with the **async** keyword.
```js
// with
function sleep(ms) {
// Promise here is just to accomplish the sleep functionality
return new Promise(resolve => setTimeout(resolve, ms));
}
async function bar(x) {
await sleep(2000); // wait for 2 seconds then return
return x;
}
// this is incorrect: await is used without the function being declared async, and will throw an error
function foo(x) {
await bar(x);
}
// this is correct
async function foo(x) {
await bar(x);
}
```
By marking the function as async, we are able to use await (or not) to control whether the functions called inside will be executed in a synchronous or asynchronous manner.
```js
// will return 2x after 2 seconds
async function add1(x) {
var a = bar(x);
var b = bar(x);
return await a + await b;
}
// returns 2x after 4 seconds.
async function add2(x) {
var a = await bar(x);
var b = await bar(x); // starts after a is done
return a + b;
}
```
This makes it possible to change the sync/async flow simply by adding or omitting await before the function call.
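The two patterns above can be run side by side (plain Node.js; delays shortened from the 2-second example):

```javascript
function sleep(ms) {
  return new Promise(resolve => setTimeout(resolve, ms));
}

async function bar(x) {
  await sleep(50); // stand-in for the 2-second delay above
  return x;
}

// Both bar() calls start immediately; total wait is ~50 ms.
async function addConcurrent(x) {
  const a = bar(x);
  const b = bar(x);
  return await a + await b;
}

// The second bar() starts only after the first resolves; ~100 ms total.
async function addSequential(x) {
  const a = await bar(x);
  const b = await bar(x);
  return a + b;
}

addConcurrent(3).then(r => console.log(r)); // 6
addSequential(3).then(r => console.log(r)); // 6
```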
|
1.0
|
Async/await - Async/await is not just for managing promises. It is a way to control synchronous/asynchronous execution over parts of the code.
Note that **await** may only be used in functions marked with the **async** keyword.
```js
// with
function sleep(ms) {
// Promise here is just to accomplish the sleep functionality
return new Promise(resolve => setTimeout(resolve, ms));
}
async function bar(x) {
await sleep(2000); // wait for 2 seconds then return
return x;
}
// this is incorrect: await is used without the function being declared async, and will throw an error
function foo(x) {
await bar(x);
}
// this is correct
async function foo(x) {
await bar(x);
}
```
By marking the function as async, we are able to use await (or not) to control whether the functions called inside will be executed in a synchronous or asynchronous manner.
```js
// will return 2x after 2 seconds
async function add1(x) {
var a = bar(x);
var b = bar(x);
return await a + await b;
}
// returns 2x after 4 seconds.
async function add2(x) {
var a = await bar(x);
var b = await bar(x); // starts after a is done
return a + b;
}
```
This makes it possible to change the sync/async flow simply by adding or omitting await before the function call.
|
non_process
|
async await async await is not just for managing promises it is the way to control synchronous asynchronous execution on parts of the code note that await may only be used in functions marked with the async keyword js with function sleep ms promise here is just to accomplish the sleep functionality return new promise resolve settimeout resolve ms async function bar x await sleep wait for seconds then return return x this is incorrect await is used without function being declared as async will throw an error function foo x await bar x this is correct async function foo x await bar x by marking the function as async we are able to use await or not to control if the functions called inside will be executed in synchronous or asynchronous manner js will return after seconds async function x var a bar x var b bar x return await a await b returns after seconds async function x var a await bar x var b await bar x starts after a is done return a b this makes it possible to change the sync async flow simply by adding or omitting await before the function call
| 0
|
233,912
| 19,086,056,759
|
IssuesEvent
|
2021-11-29 06:12:30
|
boostcampwm-2021/iOS06-MateRunner
|
https://api.github.com/repos/boostcampwm-2021/iOS06-MateRunner
|
opened
|
[Unit Test] TeamRunningResultViewModel
|
test
|
## 🗣 Description
- Tests that the Output is returned correctly for the given Input.
```swift
struct Input {
let viewDidLoadEvent: Observable<Void>
let closeButtonDidTapEvent: Observable<Void>
let emojiButtonDidTapEvent: Observable<Void>
}
struct Output {
var dateTime: String
var dayOfWeekAndTime: String
var headerText: String
var userDistance: String
var calorie: String
var time: String
var userNickname: String
var totalDistance: String
var contributionRate: String
var points: [CLLocationCoordinate2D]
var region: Region
var canceledResultShouldShow: Bool
var selectedEmoji: PublishRelay<String> = PublishRelay<String>()
var saveFailAlertShouldShow: PublishRelay<Bool> = PublishRelay<Bool>()
}
```
## 📋 Checklist
> Checklist of items to implement for this issue
- [ ] Unit tests
|
1.0
|
[Unit Test] TeamRunningResultViewModel - ## 🗣 Description
- Tests that the Output is returned correctly for the given Input.
```swift
struct Input {
let viewDidLoadEvent: Observable<Void>
let closeButtonDidTapEvent: Observable<Void>
let emojiButtonDidTapEvent: Observable<Void>
}
struct Output {
var dateTime: String
var dayOfWeekAndTime: String
var headerText: String
var userDistance: String
var calorie: String
var time: String
var userNickname: String
var totalDistance: String
var contributionRate: String
var points: [CLLocationCoordinate2D]
var region: Region
var canceledResultShouldShow: Bool
var selectedEmoji: PublishRelay<String> = PublishRelay<String>()
var saveFailAlertShouldShow: PublishRelay<Bool> = PublishRelay<Bool>()
}
```
## 📋 Checklist
> Checklist of items to implement for this issue
- [ ] Unit tests
|
non_process
|
teamrunningresultviewmodel 🗣 설명 input에 대해 output이 정상적으로 반환되는지 테스트합니다 swift struct input let viewdidloadevent observable let closebuttondidtapevent observable let emojibuttondidtapevent observable struct output var datetime string var dayofweekandtime string var headertext string var userdistance string var calorie string var time string var usernickname string var totaldistance string var contributionrate string var points var region region var canceledresultshouldshow bool var selectedemoji publishrelay publishrelay var savefailalertshouldshow publishrelay publishrelay 📋 체크리스트 구현해야하는 이슈 체크리스트 단위테스트
| 0
|
10,982
| 13,783,261,463
|
IssuesEvent
|
2020-10-08 18:56:04
|
ORNL-AMO/AMO-Tools-Desktop
|
https://api.github.com/repos/ORNL-AMO/AMO-Tools-Desktop
|
closed
|
Efficiency Improvement
|
Calculator Process Heating
|
Graphs like O2 enrichment
Y = Savings
X = Combustion preheat temp, Flue gas temp, O2 in flue
|
1.0
|
Efficiency Improvement - Graphs like O2 enrichment
Y = Savings
X = Combustion preheat temp, Flue gas temp, O2 in flue
|
process
|
efficiency improvement graphs like enrichment y savings x combustion preheat temp flue gas temp in flue
| 1
|
118,668
| 11,985,758,418
|
IssuesEvent
|
2020-04-07 18:04:54
|
logicahealth/covid-19
|
https://api.github.com/repos/logicahealth/covid-19
|
closed
|
Add additional licensing statements to IG landing page.
|
documentation enhancement
|
Submitted OBO Carol Macumber of Clinical Architecture and agreed to be Stan:
For any specification that references external terminologies, it’s HL7’s policy to include the following Copyright statement.
"This HL7 specification contains and references intellectual property owned by third parties ("Third Party IP"). Implementers and testers of this specification SHALL abide by the license requirements for each terminology content artifact utilized within a functioning implementation. Terminology licenses SHALL be obtained from the Third Party IP owner for each code system and/or other specified artifact used. It is the sole responsibility of each organization deploying or testing this specification to ensure their implementations comply with licensing requirements of each Third Party IP."
Thanks,
Carol (as vice-chair of HL7 Terminology Authority)
|
1.0
|
Add additional licensing statements to IG landing page. - Submitted OBO Carol Macumber of Clinical Architecture and agreed to be Stan:
For any specification that references external terminologies, it’s HL7’s policy to include the following Copyright statement.
"This HL7 specification contains and references intellectual property owned by third parties ("Third Party IP"). Implementers and testers of this specification SHALL abide by the license requirements for each terminology content artifact utilized within a functioning implementation. Terminology licenses SHALL be obtained from the Third Party IP owner for each code system and/or other specified artifact used. It is the sole responsibility of each organization deploying or testing this specification to ensure their implementations comply with licensing requirements of each Third Party IP."
Thanks,
Carol (as vice-chair of HL7 Terminology Authority)
|
non_process
|
add additional licensing statements to ig landing page submitted obo carol macumber of clinical architecture and agreed to be stan for any specification that references external terminologies it’s ’s policy to include the following copyright statement this specification contains and references intellectual property owned by third parties third party ip implementers and testers of this specification shall abide by the license requirements for each terminology content artifact utilized within a functioning implementation terminology licenses shall be obtained from the third party ip owner for each code system and or other specified artifact used it is the sole responsibility of each organization deploying or testing this specification to ensure their implementations comply with licensing requirements of each third party ip thanks carol as vice chair of terminology authority
| 0
|
605
| 3,074,885,578
|
IssuesEvent
|
2015-08-20 10:11:07
|
sysown/proxysql-0.2
|
https://api.github.com/repos/sysown/proxysql-0.2
|
opened
|
Make query retry optional
|
CONNECTION POOL enhancement MYSQL PROTOCOL QUERY PROCESSOR
|
## Why
ProxySQL is now able to re-execute queries if they fail because they were killed or the server has gone away.
This feature should be optional
## What
* [ ] add variable mysql-query_retries_on_failure
* [ ] make a new field in mysql_query_rules
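A hedged sketch (not ProxySQL internals; names are hypothetical) of the behaviour the two checkboxes describe: retry only up to a configured count, with 0 disabling retries entirely.

```javascript
// Retries runQuery up to retriesOnFailure extra times; rethrows the last
// error once the budget is exhausted. retriesOnFailure = 0 means "no retry".
async function executeWithRetry(runQuery, retriesOnFailure) {
  let lastError;
  for (let attempt = 0; attempt <= retriesOnFailure; attempt++) {
    try {
      return await runQuery();
    } catch (err) {
      lastError = err;
    }
  }
  throw lastError;
}

// Example: fails twice ("server has gone away"), succeeds on the third try.
let calls = 0;
const flaky = async () => {
  calls += 1;
  if (calls < 3) throw new Error("server has gone away");
  return "row";
};

executeWithRetry(flaky, 2).then(r => console.log(r)); // row
```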
|
1.0
|
Make query retry optional - ## Why
ProxySQL is now able to re-execute queries if they fail because they were killed or the server has gone away.
This feature should be optional
## What
* [ ] add variable mysql-query_retries_on_failure
* [ ] make a new field in mysql_query_rules
|
process
|
make query retry optional why proxysql is now able to re execute queries if these fails because killed or the server has gone away this feature should be optional what add variable mysql query retries on failure make a new field in mysql query rules
| 1
|
14,525
| 17,620,701,885
|
IssuesEvent
|
2021-08-18 14:59:11
|
geneontology/go-ontology
|
https://api.github.com/repos/geneontology/go-ontology
|
closed
|
Changes to GO:0039580 suppression by virus of host PKR activity
|
multi-species process
|
Hi @pmasson55
GO:0039580 suppression by virus of host PKR activity is mapped to ~KW-1223~ https://www.uniprot.org/keywords/1102 and https://viralzone.expasy.org/554
Looks like proteins annotated to that term inhibit PKR, which is a kinase that regulates eukaryotic translation initiation factor 2.
So - a mapping to 'GO:0039611 suppression by virus of host translation initiation factor activity' + GO:0030291 protein serine/threonine kinase inhibitor activity seems more appropriate, what do you think?
Also - I think this is a type of suppression of innate immune response, also because it is mediated by interferon?
Thanks, Pascale
|
1.0
|
Changes to GO:0039580 suppression by virus of host PKR activity - Hi @pmasson55
GO:0039580 suppression by virus of host PKR activity is mapped to ~KW-1223~ https://www.uniprot.org/keywords/1102 and https://viralzone.expasy.org/554
Looks like proteins annotated to that term inhibit PKR, which is a kinase that regulates eukaryotic translation initiation factor 2.
So - a mapping to 'GO:0039611 suppression by virus of host translation initiation factor activity' + GO:0030291 protein serine/threonine kinase inhibitor activity seems more appropriate, what do you think?
Also - I think this is a type of suppression of innate immune response, also because it is mediated by interferon?
Thanks, Pascale
|
process
|
changes to go suppression by virus of host pkr activity hi go suppression by virus of host pkr activity is mapped to kw and looks like proteins annotated to that term inhibit pkr which is a kinase that regulates eukaryotic translation initiation factor so a mapping to go suppression by virus of host translation initiation factor activity go protein serine threonine kinase inhibitor activity seems more appropriate what do you think also i think this is a type of suppression of innate immune response also because it is mediated by interferon thanks pascale
| 1
|
1,470
| 4,049,448,673
|
IssuesEvent
|
2016-05-23 14:15:14
|
brucemiller/LaTeXML
|
https://api.github.com/repos/brucemiller/LaTeXML
|
closed
|
A bug in mathtools.sty.ltxml?
|
bug packages postprocessing
|
If I compile the below code via
`latexmlc --mathimages --destination=test-out.xml test.tex`
then I get an error: the image of the formula can’t be created. If I delete the (unused) package mathtools, then everything goes fine. (The real problem happens in a larger TEX file where I indeed need mathtools, and where the aligned formula has more lines.)
Thanks for your input! Simon
```
\documentclass{article}
\usepackage{mathtools}
\begin{document}
\begin{equation}
\left\{\begin{aligned}
&x=\sqrt{2},
\end{aligned}\right.
\end{equation}
\end{document}
```
|
1.0
|
A bug in mathtools.sty.ltxml? - If I compile the below code via
`latexmlc --mathimages --destination=test-out.xml test.tex`
then I get an error: the image of the formula can’t be created. If I delete the (unused) package mathtools, then everything goes fine. (The real problem happens in a larger TEX file where I indeed need mathtools, and where the aligned formula has more lines.)
Thanks for your input! Simon
```
\documentclass{article}
\usepackage{mathtools}
\begin{document}
\begin{equation}
\left\{\begin{aligned}
&x=\sqrt{2},
\end{aligned}\right.
\end{equation}
\end{document}
```
|
process
|
a bug in mathtools sty ltxml if i compile the below code via latexmlc mathimages destination test out xml test tex then i get an error the image of the formula can’t be created if i delete the unused package mathtools then everything goes fine the real problem happens in a larger tex file where i indeed need mathtools and where the aligned formula has more lines thanks for your input simon documentclass article usepackage mathtools begin document begin equation left begin aligned x sqrt end aligned right end equation end document
| 1
|
681,651
| 23,319,592,746
|
IssuesEvent
|
2022-08-08 15:13:41
|
SeekyCt/ppcdis
|
https://api.github.com/repos/SeekyCt/ppcdis
|
opened
|
Improve tail call detection
|
bug enhancement high priority
|
In a lot of cases, tail calls are missed. A branch should never happen to a stwu r1 outside of tail calls, so that could fix a lot of the current cases. Performing partial finalisation of tags before doing the tail call postprocessing could help too
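The `stwu r1` heuristic can be sketched on a toy instruction model (assumption: simplified records, not ppcdis's real data structures):

```javascript
// If a plain branch lands on `stwu r1, -N(r1)` (the instruction that opens a
// new stack frame), the branch must be a tail call, not an intra-function jump.
const instructions = new Map([
  [0x100, { mnemonic: "b", target: 0x200 }],
  [0x200, { mnemonic: "stwu", operands: "r1, -0x10(r1)" }],
]);

function isTailCall(branch) {
  const target = instructions.get(branch.target);
  return branch.mnemonic === "b" &&
    target !== undefined &&
    target.mnemonic === "stwu" &&
    target.operands.startsWith("r1");
}

console.log(isTailCall(instructions.get(0x100))); // true
```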
|
1.0
|
Improve tail call detection - In a lot of cases, tail calls are missed. A branch should never happen to a stwu r1 outside of tail calls, so that could fix a lot of the current cases. Performing partial finalisation of tags before doing the tail call postprocessing could help too
|
non_process
|
improve tail call detection in a lot of cases tail calls are missed a branch should never happen to a stwu outside of tail calls so that could fix a lot of the current cases performing partial finalisation of tags before doing the tail call postprocessing could help too
| 0
|
2,297
| 5,116,217,883
|
IssuesEvent
|
2017-01-07 01:18:36
|
dotnet/corefx
|
https://api.github.com/repos/dotnet/corefx
|
closed
|
Test failure: System.ServiceProcess.Tests.SafeServiceControllerTests/GetServices
|
area-System.ServiceProcess test-run-core
|
Opened on behalf of @jiangzeng
The test `System.ServiceProcess.Tests.SafeServiceControllerTests/GetServices` has failed.
KeyIso.CanStop\r
Expected: True\r
Actual: False
Stack Trace:
at System.ServiceProcess.Tests.SafeServiceControllerTests.GetServices()
Build : Master - 20161215.04 (Core Tests)
Failing configurations:
- Windows.7.Amd64
- AnyCPU-Release
Details;
https://mc.dot.net/#/product/netcore/master/source/official~2Fcorefx~2Fmaster~2F/type/test~2Ffunctional~2Fcli~2F/build/20161215.04/workItem/System.ServiceProcess.ServiceController.Tests/analysis/xunit/System.ServiceProcess.Tests.SafeServiceControllerTests~2FGetServices
|
1.0
|
Test failure: System.ServiceProcess.Tests.SafeServiceControllerTests/GetServices - Opened on behalf of @jiangzeng
The test `System.ServiceProcess.Tests.SafeServiceControllerTests/GetServices` has failed.
KeyIso.CanStop\r
Expected: True\r
Actual: False
Stack Trace:
at System.ServiceProcess.Tests.SafeServiceControllerTests.GetServices()
Build : Master - 20161215.04 (Core Tests)
Failing configurations:
- Windows.7.Amd64
- AnyCPU-Release
Details;
https://mc.dot.net/#/product/netcore/master/source/official~2Fcorefx~2Fmaster~2F/type/test~2Ffunctional~2Fcli~2F/build/20161215.04/workItem/System.ServiceProcess.ServiceController.Tests/analysis/xunit/System.ServiceProcess.Tests.SafeServiceControllerTests~2FGetServices
|
process
|
test failure system serviceprocess tests safeservicecontrollertests getservices opened on behalf of jiangzeng the test system serviceprocess tests safeservicecontrollertests getservices has failed keyiso canstop r expected true r actual false stack trace at system serviceprocess tests safeservicecontrollertests getservices build master core tests failing configurations windows anycpu release details
| 1
|
254,736
| 8,087,342,951
|
IssuesEvent
|
2018-08-09 01:10:01
|
StrangeLoopGames/EcoIssues
|
https://api.github.com/repos/StrangeLoopGames/EcoIssues
|
closed
|
USER ISSUE: Bullrush seed showing water when planted and then one died
|
Low Priority
|
**Version:** 0.7.0.0 beta staging-7da08558


they still have 6.6 hours to grow to maturity, surely it shouldn't die this early in its life?
|
1.0
|
USER ISSUE: Bullrush seed showing water when planted and then one died - **Version:** 0.7.0.0 beta staging-7da08558


they still have 6.6 hours to grow to maturity, surely it shouldn't die this early in its life?
|
non_process
|
user issue bullrush seed showing water when planted and then one died version beta staging they still have hours to grow to maturity surely it shouldn t diie this early in its life
| 0
|
239,694
| 7,799,930,661
|
IssuesEvent
|
2018-06-09 02:16:22
|
RobotLocomotion/drake
|
https://api.github.com/repos/RobotLocomotion/drake
|
closed
|
ros messages for csail-toyota simulator
|
configuration: ros priority: backlog team: cars type: feature request
|
from felix:
it will probably be helpful to have these msgs types available in the simulator:
http://docs.ros.org/api/nav_msgs/html/msg/Odometry.html
http://docs.ros.org/api/sensor_msgs/html/msg/LaserScan.html
http://docs.ros.org/api/geometry_msgs/html/msg/PoseWithCovarianceStamped.html
http://docs.ros.org/api/sensor_msgs/html/msg/Imu.html
|
1.0
|
ros messages for csail-toyota simulator - from felix:
it will probably be helpful to have these msgs types available in the simulator:
http://docs.ros.org/api/nav_msgs/html/msg/Odometry.html
http://docs.ros.org/api/sensor_msgs/html/msg/LaserScan.html
http://docs.ros.org/api/geometry_msgs/html/msg/PoseWithCovarianceStamped.html
http://docs.ros.org/api/sensor_msgs/html/msg/Imu.html
|
non_process
|
ros messages for csail toyota simulator from felix it will probably be helpful to have these msgs types available in the simulator
| 0
|
382,226
| 11,302,687,981
|
IssuesEvent
|
2020-01-17 18:16:41
|
joyent/conch
|
https://api.github.com/repos/joyent/conch
|
closed
|
validation_state_member schema migration must be rewritten
|
database high priority v3.0
|
migration 0129 takes way too long to run now on production data. Mitigation steps must be taken to allow the v2->v3 migration to complete in a timely fashion.
I recommend:
- move validation_state_member and validation_result tables aside to a separate database, and truncate those tables in the db backup being restored to the main database
- after the api server is back up and running, perform the schema migrations offline, then load the new data into the production database.
The unfortunate side effect here is that validation results will be unavailable for all existing devices until the process is completed. However, new device reports and validations will continue to be processed, and a device's existing status can still be seen because validation results are copied into device.health.
|
1.0
|
validation_state_member schema migration must be rewritten - migration 0129 takes way too long to run now on production data. Mitigation steps must be taken to allow the v2->v3 migration to complete in a timely fashion.
I recommend:
- move validation_state_member and validation_result tables aside to a separate database, and truncate those tables in the db backup being restored to the main database
- after the api server is back up and running, perform the schema migrations offline, then load the new data into the production database.
The unfortunate side effect here is that validation results will be unavailable for all existing devices until the process is completed. However, new device reports and validations will continue to be processed, and a device's existing status can still be seen because validation results are copied into device.health.
|
non_process
|
validation state member schema migration must be rewritten migration takes way too long to run now on production data mitigation steps must be taken to allow the migration to complete in a timely fashion i recommend move validation state member and validation result tables aside to a separate database and truncate those tables in the db backup being restored to the main database after the api server is back up and running perform the schema migrations offline then load the new data into the production database the unfortunate side effect here is that validation results will be unavailable for all existing devices until the process is completed however new device reports and validations will continue to be processed and a device s existing status can still be seen because validation results are copied into device health
| 0
|
626,082
| 19,784,737,897
|
IssuesEvent
|
2022-01-18 04:29:21
|
lokka30/LevelledMobs
|
https://api.github.com/repos/lokka30/LevelledMobs
|
closed
|
Add Levelling Strategy: 'Ender Dragon Kill Count'
|
type: improvement priority: normal status: confirmed
|
Suggested by: `Hugo5551`
This makes the ender dragon's level depend on the amount of times it has been killed.
|
1.0
|
Add Levelling Strategy: 'Ender Dragon Kill Count' - Suggested by: `Hugo5551`
This makes the ender dragon's level depend on the amount of times it has been killed.
|
non_process
|
add levelling strategy ender dragon kill count suggested by this makes the ender dragon s level depend on the amount of times it has been killed
| 0
|
34,055
| 14,261,046,660
|
IssuesEvent
|
2020-11-20 10:43:30
|
SwissDataScienceCenter/renku-graph
|
https://api.github.com/repos/SwissDataScienceCenter/renku-graph
|
closed
|
Define default CPU requests/limits
|
all-graph-services improvement
|
It would be nice to define a default CPU limit (and potentially a request) to match the defined memory limit for the pods.
|
1.0
|
Define default CPU requests/limits - It would be nice to define a default CPU limit (and potentially a request) to match the defined memory limit for the pods.
|
non_process
|
define default cpu requests limits it would be nice to define a default cpu limit and potentially a request to match the defined memory limit for the pods
| 0
|
46,529
| 6,020,733,073
|
IssuesEvent
|
2017-06-07 17:05:35
|
calvaryQC/website-public
|
https://api.github.com/repos/calvaryQC/website-public
|
closed
|
Audio Sermons Page
|
✭ redesign
|
This is a page that will be the audio library of all sermons taught from Calvary Chapel of Queen Creek speakers for Sundays, Wednesdays, and special events.
Sermons should be:
- [x] Under the Media Menu Header
- [x] Move all current data to new site
- [x] Apply new layout
|
1.0
|
Audio Sermons Page - This is a page that will be the audio library of all sermons taught from Calvary Chapel of Queen Creek speakers for Sundays, Wednesdays, and special events.
Sermons should be:
- [x] Under the Media Menu Header
- [x] Move all current data to new site
- [x] Apply new layout
|
non_process
|
audio sermons page this is a page that will be the audio library of all sermons taught from calvary chapel of queen creek speakers for sundays wednesdays and special events sermons should be under the media menu header move all current data to new site apply new layout
| 0
|
32,909
| 27,086,553,488
|
IssuesEvent
|
2023-02-14 17:24:10
|
phpmyadmin/scripts
|
https://api.github.com/repos/phpmyadmin/scripts
|
closed
|
PHP upgrade is required for demo server
|
infrastructure
|
PHP 8.1.0+ is required.
Currently installed version is: 7.4.33
|
1.0
|
PHP upgrade is required for demo server - PHP 8.1.0+ is required.
Currently installed version is: 7.4.33
|
non_process
|
php upgrade is required for demo server php is required currently installed version is
| 0
|
130,331
| 18,155,767,335
|
IssuesEvent
|
2021-09-27 01:12:27
|
benlazarine/cas-overlay
|
https://api.github.com/repos/benlazarine/cas-overlay
|
opened
|
CVE-2019-12400 (Medium) detected in xmlsec-2.0.5.jar
|
security vulnerability
|
## CVE-2019-12400 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>xmlsec-2.0.5.jar</b></p></summary>
<p>Apache XML Security for Java supports XML-Signature Syntax and Processing,
W3C Recommendation 12 February 2002, and XML Encryption Syntax and
Processing, W3C Recommendation 10 December 2002. As of version 1.4,
the library supports the standard Java API JSR-105: XML Digital Signature APIs.</p>
<p>Path to dependency file: cas-overlay/pom.xml</p>
<p>Path to vulnerable library: /root/.m2/repository/org/apache/santuario/xmlsec/2.0.5/xmlsec-2.0.5.jar</p>
<p>
Dependency Hierarchy:
- cas-server-support-oauth-webflow-5.3.7.jar (Root Library)
- pac4j-saml-3.4.0.jar
- xmlsectool-2.0.0.jar
- :x: **xmlsec-2.0.5.jar** (Vulnerable Library)
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
In version 2.0.3 Apache Santuario XML Security for Java, a caching mechanism was introduced to speed up creating new XML documents using a static pool of DocumentBuilders. However, if some untrusted code can register a malicious implementation with the thread context class loader first, then this implementation might be cached and re-used by Apache Santuario - XML Security for Java, leading to potential security flaws when validating signed documents, etc. The vulnerability affects Apache Santuario - XML Security for Java 2.0.x releases from 2.0.3 and all 2.1.x releases before 2.1.4.
<p>Publish Date: 2019-08-23
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2019-12400>CVE-2019-12400</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: Low
- Privileges Required: Low
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: High
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="http://santuario.apache.org/secadv.data/CVE-2019-12400.asc?version=1&modificationDate=1566573083000&api=v2">http://santuario.apache.org/secadv.data/CVE-2019-12400.asc?version=1&modificationDate=1566573083000&api=v2</a></p>
<p>Release Date: 2019-08-23</p>
<p>Fix Resolution: 2.1.4</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
True
|
CVE-2019-12400 (Medium) detected in xmlsec-2.0.5.jar - ## CVE-2019-12400 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>xmlsec-2.0.5.jar</b></p></summary>
<p>Apache XML Security for Java supports XML-Signature Syntax and Processing,
W3C Recommendation 12 February 2002, and XML Encryption Syntax and
Processing, W3C Recommendation 10 December 2002. As of version 1.4,
the library supports the standard Java API JSR-105: XML Digital Signature APIs.</p>
<p>Path to dependency file: cas-overlay/pom.xml</p>
<p>Path to vulnerable library: /root/.m2/repository/org/apache/santuario/xmlsec/2.0.5/xmlsec-2.0.5.jar</p>
<p>
Dependency Hierarchy:
- cas-server-support-oauth-webflow-5.3.7.jar (Root Library)
- pac4j-saml-3.4.0.jar
- xmlsectool-2.0.0.jar
- :x: **xmlsec-2.0.5.jar** (Vulnerable Library)
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
In version 2.0.3 Apache Santuario XML Security for Java, a caching mechanism was introduced to speed up creating new XML documents using a static pool of DocumentBuilders. However, if some untrusted code can register a malicious implementation with the thread context class loader first, then this implementation might be cached and re-used by Apache Santuario - XML Security for Java, leading to potential security flaws when validating signed documents, etc. The vulnerability affects Apache Santuario - XML Security for Java 2.0.x releases from 2.0.3 and all 2.1.x releases before 2.1.4.
<p>Publish Date: 2019-08-23
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2019-12400>CVE-2019-12400</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: Low
- Privileges Required: Low
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: High
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="http://santuario.apache.org/secadv.data/CVE-2019-12400.asc?version=1&modificationDate=1566573083000&api=v2">http://santuario.apache.org/secadv.data/CVE-2019-12400.asc?version=1&modificationDate=1566573083000&api=v2</a></p>
<p>Release Date: 2019-08-23</p>
<p>Fix Resolution: 2.1.4</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
non_process
|
cve medium detected in xmlsec jar cve medium severity vulnerability vulnerable library xmlsec jar apache xml security for java supports xml signature syntax and processing recommendation february and xml encryption syntax and processing recommendation december as of version the library supports the standard java api jsr xml digital signature apis path to dependency file cas overlay pom xml path to vulnerable library root repository org apache santuario xmlsec xmlsec jar dependency hierarchy cas server support oauth webflow jar root library saml jar xmlsectool jar x xmlsec jar vulnerable library vulnerability details in version apache santuario xml security for java a caching mechanism was introduced to speed up creating new xml documents using a static pool of documentbuilders however if some untrusted code can register a malicious implementation with the thread context class loader first then this implementation might be cached and re used by apache santuario xml security for java leading to potential security flaws when validating signed documents etc the vulnerability affects apache santuario xml security for java x releases from and all x releases before publish date url a href cvss score details base score metrics exploitability metrics attack vector local attack complexity low privileges required low user interaction none scope unchanged impact metrics confidentiality impact none integrity impact high availability impact none for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution step up your open source security game with whitesource
| 0
|
13,884
| 16,654,744,413
|
IssuesEvent
|
2021-06-05 10:08:41
|
GoogleCloudPlatform/fda-mystudies
|
https://api.github.com/repos/GoogleCloudPlatform/fda-mystudies
|
closed
|
[PM] Responsive issue > Multiple logos are getting displayed where the device width is small
|
Bug P2 Participant manager Process: Fixed Process: Tested dev
|
Responsive issue > Multiple logos are getting displayed where the device width is small

|
2.0
|
[PM] Responsive issue > Multiple logos are getting displayed where the device width is small - Responsive issue > Multiple logos are getting displayed where the device width is small

|
process
|
responsive issue multiple logos are getting displayed where the device width is small responsive issue multiple logos are getting displayed where the device width is small
| 1
|
30,046
| 5,996,271,758
|
IssuesEvent
|
2017-06-03 12:41:14
|
rekcuFniarB/forum-theprodigy-ru
|
https://api.github.com/repos/rekcuFniarB/forum-theprodigy-ru
|
opened
|
In the profile, long links in the list of recent posts and comments break the forum layout
|
bug Component-UI Milestone-Release2 Priority-High Type-Defect
|

screenshot attached
|
1.0
|
In the profile, long links in the list of recent posts and comments break the forum layout - 
screenshot attached
|
non_process
|
in the profile long links in the list of recent posts and comments break the forum layout screenshot attached
| 0
|
7,650
| 10,738,746,747
|
IssuesEvent
|
2019-10-29 15:15:50
|
prisma/prisma2
|
https://api.github.com/repos/prisma/prisma2
|
closed
|
Make Prisma generate independent of environment
|
bug/2-confirmed kind/bug process/candidate
|
`prisma generate` fails when a particular environment variable is not available. The reason(s) for this are:
```groovy
datasource ds {
provider = env("DB_PROVIDER")
url = env("DB_URL")
}
```
If `DB_PROVIDER` is used, then the generate CLI **needs** to resolve it to proceed as it needs to know the data source. A potential solution is to dis-allow environment variables in provider. This issue: https://github.com/prisma/photonjs/issues/184 also ties into this discussion.
**Decision:** Disallow env vars in `provider`
If `DB_URL` is used, then for SQLite, we need to be able to resolve the relative path to the SQLite file at generate time, to be able to use it at runtime. A potential solution is to limit SQLite to absolute paths but that is not the best DX.
**Decision:** Do not resolve SQLite path when generation
Had a quick chat with @schickling and the result is above marked decisions.
---
Another alternative is to fail only on the required env vars (like provider) but let others pass through. That would require the datamodel parser to bake the field information in the parse tree and that can be used to fail conditionally.
---
This information is from top of my head and might be outdated, I will sync with @timsuchanek on this.
|
1.0
|
Make Prisma generate independent of environment - `prisma generate` fails when a particular environment variable is not available. The reason(s) for this are:
```groovy
datasource ds {
provider = env("DB_PROVIDER")
url = env("DB_URL")
}
```
If `DB_PROVIDER` is used, then the generate CLI **needs** to resolve it to proceed as it needs to know the data source. A potential solution is to dis-allow environment variables in provider. This issue: https://github.com/prisma/photonjs/issues/184 also ties into this discussion.
**Decision:** Disallow env vars in `provider`
If `DB_URL` is used, then for SQLite, we need to be able to resolve the relative path to the SQLite file at generate time, to be able to use it at runtime. A potential solution is to limit SQLite to absolute paths but that is not the best DX.
**Decision:** Do not resolve SQLite path when generation
Had a quick chat with @schickling and the result is above marked decisions.
---
Another alternative is to fail only on the required env vars (like provider) but let others pass through. That would require the datamodel parser to bake the field information in the parse tree and that can be used to fail conditionally.
---
This information is from top of my head and might be outdated, I will sync with @timsuchanek on this.
|
process
|
make prisma generate independent of environment prisma generate fails when a particular environment variable is not available the reason s for this are groovy datasource ds provider env db provider url env db url if db provider is used then the generate cli needs to resolve it to proceed as it needs to know the data source a potential solution is to dis allow environment variables in provider this issue also ties into this discussion decision disallow env vars in provider if db url is used then for sqlite we need to be able to resolve the relative path to the sqlite file at generate time to be able to use it at runtime a potential solution is to limit sqlite to absolute paths but that is not the best dx decision do not resolve sqlite path when generation had a quick chat with schickling and the result is above marked decisions another alternative is to fail only on the required env vars like provider but let others pass through that would require the datamodel parser to bake the field information in the parse tree and that can be used to fail conditionally this information is from top of my head and might be outdated i will sync with timsuchanek on this
| 1
|
21,384
| 3,702,306,929
|
IssuesEvent
|
2016-02-29 16:23:05
|
owncloud/core
|
https://api.github.com/repos/owncloud/core
|
opened
|
Make it possible to unshare all shares, or a subset, in one click
|
app:files design enhancement - proposed feature:sharing
|
**User type**: Logged-in
**User level**: All
### Description
<!--
Please try to give as much information as you can about your request
-->
From 9.0, all reshares are shown in the owner's sharing tab and there is no visual distinction between shares and reshares. Owners can only guess to which of their sharee a specific reshare belongs to (if a sharee reshares outside of the group and so on)
#### Proposition
1. Allow a file owner to remove all shares and reshares in one go by ticking a box. Wasn't it like that prior to 9.0?
1. Group reshares under each top share in a collapsible tree view
1. Allow a file owner to remove one share and all its reshares (a branch of the tree) in one go by ticking a box instead of having to pick individually
### Benefit / value
<!--
Please explain how it could benefit users of the app, other apps or 3rd party services
-->
Making the life of sharers easier, by grouping shares to be able to quickly identify the top shares and letting owners perform some actions in one click.
----
@rullzer @PVince81 @MorrisJobke @schiesbn @jancborchardt
|
1.0
|
Make it possible to unshare all shares, or a subset, in one click - **User type**: Logged-in
**User level**: All
### Description
<!--
Please try to give as much information as you can about your request
-->
From 9.0, all reshares are shown in the owner's sharing tab and there is no visual distinction between shares and reshares. Owners can only guess to which of their sharee a specific reshare belongs to (if a sharee reshares outside of the group and so on)
#### Proposition
1. Allow a file owner to remove all shares and reshares in one go by ticking a box. Wasn't it like that prior to 9.0?
1. Group reshares under each top share in a collapsible tree view
1. Allow a file owner to remove one share and all its reshares (a branch of the tree) in one go by ticking a box instead of having to pick individually
### Benefit / value
<!--
Please explain how it could benefit users of the app, other apps or 3rd party services
-->
Making the life of sharers easier, by grouping shares to be able to quickly identify the top shares and letting owners perform some actions in one click.
----
@rullzer @PVince81 @MorrisJobke @schiesbn @jancborchardt
|
non_process
|
make it possible to unshare all shares or a subset in one click user type logged in user level all description please try to give as much information as you can about your request from all reshares are shown in the owner s sharing tab and there is no visual distinction between shares and reshares owners can only guess to which of their sharee a specific reshare belongs to if a sharee reshares outside of the group and so on proposition allow a file owner to remove all shares and reshares in one go by ticking a box wasn t it like that prior to group reshares under each top share in a collapsible tree view allow a file owner to remove one share and all its reshares a branch of the tree in one go by ticking a box instead of having to pick individually benefit value please explain how it could benefit users of the app other apps or party services making the life of sharers easier by grouping shares to be able to quickly identify the top shares and letting owners perform some actions in one click rullzer morrisjobke schiesbn jancborchardt
| 0
|
13,687
| 16,444,947,031
|
IssuesEvent
|
2021-05-20 18:24:32
|
googleapis/python-spanner-django
|
https://api.github.com/repos/googleapis/python-spanner-django
|
closed
|
Change tests parallelizing mechanism
|
api: spanner priority: p1 type: process
|
While working on the last PR I've got in the situation when kokoro checks passed (green), but after some time their status became red (because some parallelized tests failed - some can take [more than 20 min.](https://source.cloud.google.com/results/invocations/0417017c-7716-4076-87b7-d2a5dd5a1d48/targets)). This can cause problems with "automerge" (bot that runs checks and merges PR automatically, if they are green). Automerge [is used](https://github.com/googleapis/python-spanner/pull/133) in the original Spanner repo, which means this API will be using it too. And, after all, it's inconvenient when checks are green, but can become red in future (~35 min.).
|
1.0
|
Change tests parallelizing mechanism - While working on the last PR I've got in the situation when kokoro checks passed (green), but after some time their status became red (because some parallelized tests failed - some can take [more than 20 min.](https://source.cloud.google.com/results/invocations/0417017c-7716-4076-87b7-d2a5dd5a1d48/targets)). This can cause problems with "automerge" (bot that runs checks and merges PR automatically, if they are green). Automerge [is used](https://github.com/googleapis/python-spanner/pull/133) in the original Spanner repo, which means this API will be using it too. And, after all, it's inconvenient when checks are green, but can become red in future (~35 min.).
|
process
|
change tests parallelizing mechanism while working on the last pr i ve got in the situation when kokoro checks passed green but after some time their status became red because some parallelized tests failed some can take this can cause problems with automerge bot that runs checks and merges pr automatically if they are green automerge in the original spanner repo which means this api will be using it too and after all it s inconvenient when checks are green but can become red in future min
| 1
|
18,176
| 24,224,281,486
|
IssuesEvent
|
2022-09-26 13:21:17
|
altillimity/SatDump
|
https://api.github.com/repos/altillimity/SatDump
|
closed
|
multiple batch decoding no result
|
bug Processing
|
I have noticed this problem on earlier versions, but in #465 its real problem. I have batch file to offline decode recorded files (VHF, Meteor M2). Using satdump-ui.exe no problem, those are decoded each, one by one manually, but batch file processing fails to decode, created no files and .cadu have zero size. It looks like it needs that ui window to process it properly. You can test it with batch file decode.bat containing something like this:
satdump.exe meteor_m2_lrpt baseband meteor_2202250621_137102000Hz.u8.wav products out_satdump\meteor_2202250621_137102000Hz.u8 -samplerate 240000 -baseband_format w8
satdump.exe meteor_m2_lrpt baseband meteor_2202250443_137102000Hz.u8.wav products out_satdump\meteor_2202250443_137102000Hz.u8 -samplerate 240000 -baseband_format w8
when satdump.exe is replaced by satdump-ui.exe everything is correctly decoded, but the windows need to be manually closed to continue the batch
|
1.0
|
multiple batch decoding no result - I have noticed this problem on earlier versions, but in #465 its real problem. I have batch file to offline decode recorded files (VHF, Meteor M2). Using satdump-ui.exe no problem, those are decoded each, one by one manually, but batch file processing fails to decode, created no files and .cadu have zero size. It looks like it needs that ui window to process it properly. You can test it with batch file decode.bat containing something like this:
satdump.exe meteor_m2_lrpt baseband meteor_2202250621_137102000Hz.u8.wav products out_satdump\meteor_2202250621_137102000Hz.u8 -samplerate 240000 -baseband_format w8
satdump.exe meteor_m2_lrpt baseband meteor_2202250443_137102000Hz.u8.wav products out_satdump\meteor_2202250443_137102000Hz.u8 -samplerate 240000 -baseband_format w8
when satdump.exe is replaced by satdump-ui.exe everything is correctly decoded, but the windows need to be manually closed to continue the batch
|
process
|
multiple batch decoding no result i have noticed this problem on earlier versions but in its real problem i have batch file to offline decode recorded files vhf meteor using satdump ui exe no problem those are decoded each one by one manually but batch file processing fails to decode created no files and cadu have zero size it looks like it needs that ui window to process it properly you can test it with batch file decode bat containing something like this satdump exe meteor lrpt baseband meteor wav products out satdump meteor samplerate baseband format satdump exe meteor lrpt baseband meteor wav products out satdump meteor samplerate baseband format when satdump exe is replaced by satdump ui exe everyting is correctly decoded but windows needs to be manually closed to continue batch
| 1
|
521,578
| 15,111,914,991
|
IssuesEvent
|
2021-02-08 21:05:46
|
CDH-Studio/I-Talent
|
https://api.github.com/repos/CDH-Studio/I-Talent
|
closed
|
Frontend yarn related errors
|
bug high priority
|
**Describe the bug**
A clear and concise description of what the bug is.
there are currently 372 errors and 3 warnings from our packages.
**To Reproduce**
run yarn check
**Expected behavior**
A clear and concise description of what you expected to happen.
**Screenshots**
If applicable, add screenshots to help explain your problem.

**Additional context**
Add any other context about the problem here.
|
1.0
|
Frontend yarn related errors - **Describe the bug**
A clear and concise description of what the bug is.
there are currently 372 errors and 3 warnings from our packages.
**To Reproduce**
run yarn check
**Expected behavior**
A clear and concise description of what you expected to happen.
**Screenshots**
If applicable, add screenshots to help explain your problem.

**Additional context**
Add any other context about the problem here.
|
non_process
|
frontend yarn related errors describe the bug a clear and concise description of what the bug is there are currently errors and warnings from our packages to reproduce run yarn check expected behavior a clear and concise description of what you expected to happen screenshots if applicable add screenshots to help explain your problem additional context add any other context about the problem here
| 0
|
576,056
| 17,070,133,098
|
IssuesEvent
|
2021-07-07 12:21:56
|
webcompat/web-bugs
|
https://api.github.com/repos/webcompat/web-bugs
|
closed
|
newstalk1130.iheart.com - site is not usable
|
browser-firefox-ios os-ios priority-normal
|
<!-- @browser: Firefox iOS 34.2 -->
<!-- @ua_header: Mozilla/5.0 (iPhone; CPU iPhone OS 14_6 like Mac OS X) AppleWebKit/605.1.15 (KHTML, like Gecko) FxiOS/34.2 Mobile/15E148 Safari/605.1.15 -->
<!-- @reported_with: mobile-reporter -->
<!-- @public_url: https://github.com/webcompat/web-bugs/issues/79345 -->
<!-- @extra_labels: browser-firefox-ios -->
**URL**: https://newstalk1130.iheart.com/
**Browser / Version**: Firefox iOS 34.2
**Operating System**: iOS 14.6
**Tested Another Browser**: No
**Problem type**: Site is not usable
**Description**: Page not loading correctly
**Steps to Reproduce**:
Tried to open from search. .
<details>
<summary>Browser Configuration</summary>
<ul>
<li>None</li>
</ul>
</details>
_From [webcompat.com](https://webcompat.com/) with ❤️_
|
1.0
|
newstalk1130.iheart.com - site is not usable - <!-- @browser: Firefox iOS 34.2 -->
<!-- @ua_header: Mozilla/5.0 (iPhone; CPU iPhone OS 14_6 like Mac OS X) AppleWebKit/605.1.15 (KHTML, like Gecko) FxiOS/34.2 Mobile/15E148 Safari/605.1.15 -->
<!-- @reported_with: mobile-reporter -->
<!-- @public_url: https://github.com/webcompat/web-bugs/issues/79345 -->
<!-- @extra_labels: browser-firefox-ios -->
**URL**: https://newstalk1130.iheart.com/
**Browser / Version**: Firefox iOS 34.2
**Operating System**: iOS 14.6
**Tested Another Browser**: No
**Problem type**: Site is not usable
**Description**: Page not loading correctly
**Steps to Reproduce**:
Tried to open from search. .
<details>
<summary>Browser Configuration</summary>
<ul>
<li>None</li>
</ul>
</details>
_From [webcompat.com](https://webcompat.com/) with ❤️_
|
non_process
|
iheart com site is not usable url browser version firefox ios operating system ios tested another browser no problem type site is not usable description page not loading correctly steps to reproduce tried to open from search browser configuration none from with ❤️
| 0
|
12,249
| 14,767,407,123
|
IssuesEvent
|
2021-01-10 06:34:08
|
Big-Joe-Channel/edit-repo
|
https://api.github.com/repos/Big-Joe-Channel/edit-repo
|
opened
|
EP
|
assigned new mission processing
|
# EP94
## Notices
- Video Title:工作去小公司还是大公司 (在2021年)/ Work for a small company or a large one?
- Thumbnail Title:小公司总监来告诉你升职之道
- Raw video: 2020-10-29_08-48-23 [点击下载](https://drive.google.com/file/d/1Mg8s0ufQuzOyCDO-CO9r4jaWOyrjp3B_/view?usp=sharing)
- Comments:bigjoe: zhiyao有时候有点啰嗦,该切就切,该加速就加速
## Todo-List
1. [ ] 海报制作,文案的字要醒目,且非常非常大
- [ ] raw文件调色,输出调色后待剪辑素材
- [ ] 添加片头
- [ ] 选择片头配乐
- [ ] 加工片首音乐的音量及渐入
- [ ] **剪辑正片**
- [ ] 插入High-Light字幕
- [ ] 插入配图
- [ ] 插入推荐点赞动画
- [ ] 插入点赞动画去底色
- [ ] 选择片尾音乐
- [ ] 加工片尾音乐的音量及渐入
- [ ] 插入推荐入群二维码动画
- [ ] 插入志愿者介绍
- [ ] 插入加入志愿者申请邮箱
- [ ] 制作片尾动画
- [ ] 插入片尾图片
- [ ] 添加转场动画
- [ ] 输出无字幕影片
- [ ] 使用《讯飞听见字幕》制作字幕,导入上一步结果,获取SRT
- [ ] 导入字幕SRT
- [ ] 校验字幕时间及内容
- [ ] 导出带字幕影片
- [ ] 导出SRT字幕文件
- [ ] 导出TXT字幕文件
- [ ] 上传影片、SRT、TXT、海报
- [ ] 向监制申请审核
|
1.0
|
EP - # EP94
## Notices
- Video Title:工作去小公司还是大公司 (在2021年)/ Work for a small company or a large one?
- Thumbnail Title:小公司总监来告诉你升职之道
- Raw video: 2020-10-29_08-48-23 [点击下载](https://drive.google.com/file/d/1Mg8s0ufQuzOyCDO-CO9r4jaWOyrjp3B_/view?usp=sharing)
- Comments:bigjoe: zhiyao有时候有点啰嗦,该切就切,该加速就加速
## Todo-List
1. [ ] 海报制作,文案的字要醒目,且非常非常大
- [ ] raw文件调色,输出调色后待剪辑素材
- [ ] 添加片头
- [ ] 选择片头配乐
- [ ] 加工片首音乐的音量及渐入
- [ ] **剪辑正片**
- [ ] 插入High-Light字幕
- [ ] 插入配图
- [ ] 插入推荐点赞动画
- [ ] 插入点赞动画去底色
- [ ] 选择片尾音乐
- [ ] 加工片尾音乐的音量及渐入
- [ ] 插入推荐入群二维码动画
- [ ] 插入志愿者介绍
- [ ] 插入加入志愿者申请邮箱
- [ ] 制作片尾动画
- [ ] 插入片尾图片
- [ ] 添加转场动画
- [ ] 输出无字幕影片
- [ ] 使用《讯飞听见字幕》制作字幕,导入上一步结果,获取SRT
- [ ] 导入字幕SRT
- [ ] 校验字幕时间及内容
- [ ] 导出带字幕影片
- [ ] 导出SRT字幕文件
- [ ] 导出TXT字幕文件
- [ ] 上传影片、SRT、TXT、海报
- [ ] 向监制申请审核
|
process
|
ep notices video title:工作去小公司还是大公司 ( ) work for a small company or a large one thumbnail title:小公司总监来告诉你升职之道 raw video comments:bigjoe zhiyao有时候有点啰嗦,该切就切,该加速就加速 todo list 海报制作,文案的字要醒目,且非常非常大 raw文件调色,输出调色后待剪辑素材 添加片头 选择片头配乐 加工片首音乐的音量及渐入 剪辑正片 插入high light字幕 插入配图 插入推荐点赞动画 插入点赞动画去底色 选择片尾音乐 加工片尾音乐的音量及渐入 插入推荐入群二维码动画 插入志愿者介绍 插入加入志愿者申请邮箱 制作片尾动画 插入片尾图片 添加转场动画 输出无字幕影片 使用《讯飞听见字幕》制作字幕,导入上一步结果,获取srt 导入字幕srt 校验字幕时间及内容 导出带字幕影片 导出srt字幕文件 导出txt字幕文件 上传影片、srt、txt、海报 向监制申请审核
| 1
|
4,587
| 4,467,217,523
|
IssuesEvent
|
2016-08-25 03:10:36
|
conda-forge/status
|
https://api.github.com/repos/conda-forge/status
|
closed
|
CircleCI login issues
|
CircleCI degraded performance
|
CircleCI has reported an increase in login issues. See this [announcement]( http://status.circleci.com/incidents/swr6g9rbjdbj ). If this is affecting you, please let us know.
|
True
|
CircleCI login issues - CircleCI has reported an increase in login issues. See this [announcement]( http://status.circleci.com/incidents/swr6g9rbjdbj ). If this is affecting you, please let us know.
|
non_process
|
circleci login issues circleci has reported an increase in login issues see this if this is affecting you please let us know
| 0
|
269,306
| 28,960,080,423
|
IssuesEvent
|
2023-05-10 01:13:30
|
Nivaskumark/kernel_v4.19.72
|
https://api.github.com/repos/Nivaskumark/kernel_v4.19.72
|
reopened
|
CVE-2023-1118 (High) detected in linuxlinux-4.19.83
|
Mend: dependency security vulnerability
|
## CVE-2023-1118 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>linuxlinux-4.19.83</b></p></summary>
<p>
<p>Apache Software Foundation (ASF)</p>
<p>Library home page: <a href=https://mirrors.edge.kernel.org/pub/linux/kernel/v4.x/?wsslib=linux>https://mirrors.edge.kernel.org/pub/linux/kernel/v4.x/?wsslib=linux</a></p>
<p>Found in HEAD commit: <a href="https://github.com/Nivaskumark/kernel_v4.19.72/commit/7665378059da10e5ded7efbfccc88efce9e51dfc">7665378059da10e5ded7efbfccc88efce9e51dfc</a></p>
<p>Found in base branch: <b>master</b></p></p>
</details>
</p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Source Files (2)</summary>
<p></p>
<p>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/drivers/media/rc/ene_ir.c</b>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/drivers/media/rc/ene_ir.c</b>
</p>
</details>
<p></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png?' width=19 height=20> Vulnerability Details</summary>
<p>
A flaw use after free in the Linux kernel integrated infrared receiver/transceiver driver was found in the way user detaching rc device. A local user could use this flaw to crash the system or potentially escalate their privileges on the system.
<p>Publish Date: 2023-03-02
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2023-1118>CVE-2023-1118</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.8</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: Low
- Privileges Required: Low
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://www.linuxkernelcves.com/cves/CVE-2023-1118">https://www.linuxkernelcves.com/cves/CVE-2023-1118</a></p>
<p>Release Date: 2023-03-02</p>
<p>Fix Resolution: v4.14.308,v4.19.276,v5.4.235,v5.10.173,v5.15.99,v6.1.16,v6.2.3,v6.3-rc1</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
True
|
CVE-2023-1118 (High) detected in linuxlinux-4.19.83 - ## CVE-2023-1118 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>linuxlinux-4.19.83</b></p></summary>
<p>
<p>Apache Software Foundation (ASF)</p>
<p>Library home page: <a href=https://mirrors.edge.kernel.org/pub/linux/kernel/v4.x/?wsslib=linux>https://mirrors.edge.kernel.org/pub/linux/kernel/v4.x/?wsslib=linux</a></p>
<p>Found in HEAD commit: <a href="https://github.com/Nivaskumark/kernel_v4.19.72/commit/7665378059da10e5ded7efbfccc88efce9e51dfc">7665378059da10e5ded7efbfccc88efce9e51dfc</a></p>
<p>Found in base branch: <b>master</b></p></p>
</details>
</p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Source Files (2)</summary>
<p></p>
<p>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/drivers/media/rc/ene_ir.c</b>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/drivers/media/rc/ene_ir.c</b>
</p>
</details>
<p></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png?' width=19 height=20> Vulnerability Details</summary>
<p>
A flaw use after free in the Linux kernel integrated infrared receiver/transceiver driver was found in the way user detaching rc device. A local user could use this flaw to crash the system or potentially escalate their privileges on the system.
<p>Publish Date: 2023-03-02
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2023-1118>CVE-2023-1118</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.8</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: Low
- Privileges Required: Low
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://www.linuxkernelcves.com/cves/CVE-2023-1118">https://www.linuxkernelcves.com/cves/CVE-2023-1118</a></p>
<p>Release Date: 2023-03-02</p>
<p>Fix Resolution: v4.14.308,v4.19.276,v5.4.235,v5.10.173,v5.15.99,v6.1.16,v6.2.3,v6.3-rc1</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
non_process
|
cve high detected in linuxlinux cve high severity vulnerability vulnerable library linuxlinux apache software foundation asf library home page a href found in head commit a href found in base branch master vulnerable source files drivers media rc ene ir c drivers media rc ene ir c vulnerability details a flaw use after free in the linux kernel integrated infrared receiver transceiver driver was found in the way user detaching rc device a local user could use this flaw to crash the system or potentially escalate their privileges on the system publish date url a href cvss score details base score metrics exploitability metrics attack vector local attack complexity low privileges required low user interaction none scope unchanged impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution step up your open source security game with mend
| 0
|
314,258
| 9,594,613,913
|
IssuesEvent
|
2019-05-09 14:20:57
|
geosolutions-it/geonode-mapstore-client
|
https://api.github.com/repos/geosolutions-it/geonode-mapstore-client
|
opened
|
Add /gs/ path in GeoServer regex to request capabilities for a single layer
|
Priority: High bug
|
Currently client send a request for gatCapabilities to the whole GeoServer if it deosn't match the default regex provided by MapStore.
The default regex is `/\/[\w- ]*geoserver[\w- ]*\//`, we need to change it to `/\/[\w- ]*geoserver[\w- ]*\/|\/[\w- ]*gs[\w- ]*\//` to support /gs/ requests.
requires https://github.com/geosolutions-it/MapStore2/issues/3738
|
1.0
|
Add /gs/ path in GeoServer regex to request capabilities for a single layer - Currently client send a request for gatCapabilities to the whole GeoServer if it deosn't match the default regex provided by MapStore.
The default regex is `/\/[\w- ]*geoserver[\w- ]*\//`, we need to change it to `/\/[\w- ]*geoserver[\w- ]*\/|\/[\w- ]*gs[\w- ]*\//` to support /gs/ requests.
requires https://github.com/geosolutions-it/MapStore2/issues/3738
|
non_process
|
add gs path in geoserver regex to request capabilities for a single layer currently client send a request for gatcapabilities to the whole geoserver if it deosn t match the default regex provided by mapstore the default regex is geoserver we need to change it to geoserver gs to support gs requests requires
| 0
|
127,931
| 12,343,434,628
|
IssuesEvent
|
2020-05-15 04:00:57
|
swimlane/PSAttck
|
https://api.github.com/repos/swimlane/PSAttck
|
opened
|
Update documentation to focus on contextual data
|
documentation
|
Per comment on Reddit, main documentation should focus more on the contextual data aspect instead of strictly MITRE ATT&CK access/data.
|
1.0
|
Update documentation to focus on contextual data - Per comment on Reddit, main documentation should focus more on the contextual data aspect instead of strictly MITRE ATT&CK access/data.
|
non_process
|
update documentation to focus on contextual data per comment on reddit main documentation should focus more on the contextual data aspect instead of strictly mitre att ck access data
| 0
|
39
| 2,496,105,586
|
IssuesEvent
|
2015-01-06 17:04:55
|
cakephp/cakephp
|
https://api.github.com/repos/cakephp/cakephp
|
closed
|
Console text formatting issue with newlines present
|
Defect
|
Consider the following console option parser config:
```php
return parent::getOptionParser()
->description("A Shell to do some basic database maintenance for you. Bla blub.
Use -d -v (dry-run and verbose mode) to only display queries but not execute them.")
->...;
```
As you an see, the `Bla blub` is almost at the end of the line-length limit, but not quite.
We have a manual newline prior to the last sentence (before `Use -d -v ...`).
One would expect:
```
A Shell to do some basic database maintenance for you. Bla blub.
Use -d -v (dry-run and verbose mode) to only display queries but not
execute them.
```
The output, though is:
```
A Shell to do some basic database maintenance for you. Bla blub.
Use -d
-v (dry-run and verbose mode) to only display queries but not execute
them.
```
With longer parapgrahs the output can get really messy.
I suspect, because it thinks it needs to calculate it upon
A Shell to do some basic database maintenance for you. Bla blub.\nUse -d
as a non broken line.
It should probably explode first on `\n`, and then apply the line-length formatting.
This way existing newlines work side by side with the then introduced new ones.
|
1.0
|
Console text formatting issue with newlines present - Consider the following console option parser config:
```php
return parent::getOptionParser()
->description("A Shell to do some basic database maintenance for you. Bla blub.
Use -d -v (dry-run and verbose mode) to only display queries but not execute them.")
->...;
```
As you an see, the `Bla blub` is almost at the end of the line-length limit, but not quite.
We have a manual newline prior to the last sentence (before `Use -d -v ...`).
One would expect:
```
A Shell to do some basic database maintenance for you. Bla blub.
Use -d -v (dry-run and verbose mode) to only display queries but not
execute them.
```
The output, though is:
```
A Shell to do some basic database maintenance for you. Bla blub.
Use -d
-v (dry-run and verbose mode) to only display queries but not execute
them.
```
With longer parapgrahs the output can get really messy.
I suspect, because it thinks it needs to calculate it upon
A Shell to do some basic database maintenance for you. Bla blub.\nUse -d
as a non broken line.
It should probably explode first on `\n`, and then apply the line-length formatting.
This way existing newlines work side by side with the then introduced new ones.
|
non_process
|
console text formatting issue with newlines present consider the following console option parser config php return parent getoptionparser description a shell to do some basic database maintenance for you bla blub use d v dry run and verbose mode to only display queries but not execute them as you an see the bla blub is almost at the end of the line length limit but not quite we have a manual newline prior to the last sentence before use d v one would expect a shell to do some basic database maintenance for you bla blub use d v dry run and verbose mode to only display queries but not execute them the output though is a shell to do some basic database maintenance for you bla blub use d v dry run and verbose mode to only display queries but not execute them with longer parapgrahs the output can get really messy i suspect because it thinks it needs to calculate it upon a shell to do some basic database maintenance for you bla blub nuse d as a non broken line it should probably explode first on n and then apply the line length formatting this way existing newlines work side by side with the then introduced new ones
| 0
|
10,827
| 13,609,588,687
|
IssuesEvent
|
2020-09-23 05:40:07
|
qgis/QGIS
|
https://api.github.com/repos/qgis/QGIS
|
closed
|
Wrong GRASS processing description files
|
Bug Feedback Processing
|
in line 6:
"QgsProcessingParameterEnum|sort|Sort output by distance|asc;desc|0"
I think it should be something like this:
"QgsProcessingParameterEnum|sort|Sort output by distance|asc;desc|False|0"
or:
"QgsProcessingParameterEnum|sort|Sort output by distance|asc;desc|True|0"
|
1.0
|
Wrong GRASS processing description files - in line 6:
"QgsProcessingParameterEnum|sort|Sort output by distance|asc;desc|0"
I think it should be something like this:
"QgsProcessingParameterEnum|sort|Sort output by distance|asc;desc|False|0"
or:
"QgsProcessingParameterEnum|sort|Sort output by distance|asc;desc|True|0"
|
process
|
wrong grass processing description files in line qgsprocessingparameterenum sort sort output by distance asc desc i think it should be something like this qgsprocessingparameterenum sort sort output by distance asc desc false or qgsprocessingparameterenum sort sort output by distance asc desc true
| 1
|
8,920
| 12,031,685,927
|
IssuesEvent
|
2020-04-13 10:15:48
|
qgis/QGIS
|
https://api.github.com/repos/qgis/QGIS
|
closed
|
Heatmap not correct
|
Bug High Priority Processing Regression
|
Noticed that the heatmap tool gives very different results from 3.10 to 3.12 - is it supposed to do that?
|
1.0
|
Heatmap not correct - Noticed that the heatmap tool gives very different results from 3.10 to 3.12 - is it supposed to do that?
|
process
|
heatmap not correct noticed that the heatmap tool gives very different results from to is it supposed to do that
| 1
|
16,528
| 21,554,931,507
|
IssuesEvent
|
2022-04-30 08:40:07
|
MSc-AI/aiproject
|
https://api.github.com/repos/MSc-AI/aiproject
|
closed
|
Data PreProcessing
|
enhancement FileOperations Pre Processing
|
- [x] Read Train Data Set
- [x] Read Test Data Set
- [x] Fill in blanks by averaging
- [x] Validation
- [x] Feature Selection
- [x] Feature Extraction
|
1.0
|
Data PreProcessing - - [x] Read Train Data Set
- [x] Read Test Data Set
- [x] Fill in blanks by averaging
- [x] Validation
- [x] Feature Selection
- [x] Feature Extraction
|
process
|
data preprocessing read train data set read test data set fill in blanks by averaging validation feature selection feature extraction
| 1
|
57,606
| 7,084,748,129
|
IssuesEvent
|
2018-01-11 08:26:35
|
BSData/wh40k
|
https://api.github.com/repos/BSData/wh40k
|
closed
|
[Anon] Bug report: Imperium - Astra Militarum.catz
|
Bug - choice Imperium - Astra Militarum Resolution - by design
|
**File:** Imperium - Astra Militarum.catz
**BattleScribe version:** 2.01.13
**Platform:** Android
**Dropbox:** Yes
**Description:** Tank commanders do not have any options for vehicle wargear such as track guards etc.
|
1.0
|
[Anon] Bug report: Imperium - Astra Militarum.catz - **File:** Imperium - Astra Militarum.catz
**BattleScribe version:** 2.01.13
**Platform:** Android
**Dropbox:** Yes
**Description:** Tank commanders do not have any options for vehicle wargear such as track guards etc.
|
non_process
|
bug report imperium astra militarum catz file imperium astra militarum catz battlescribe version platform android dropbox yes description tank commanders do not have any options for vehicle wargear such as track guards etc
| 0
|
311,173
| 26,773,923,132
|
IssuesEvent
|
2023-01-31 15:52:05
|
spring-projects/spring-framework
|
https://api.github.com/repos/spring-projects/spring-framework
|
closed
|
Access recorded ApplicationEvents across all threads in the TestContext framework
|
in: test type: enhancement
|
<!--
!!! For Security Vulnerabilities, please go to https://spring.io/security-policy !!!
-->
**Affects:** 5.3.22
---
<!--
Thanks for taking the time to create an issue. Please read the following:
- Questions should be asked on Stack Overflow.
- For bugs, specify affected versions and explain what you are trying to do.
- For enhancements, provide context and describe the problem.
Issue or Pull Request? Create only one, not both. GitHub treats them as the same.
If unsure, start with an issue, and if you submit a pull request later, the
issue will be closed as superseded.
-->
#### Feature Request:
When using `@RecordApplicationEvents`, currently the `ApplicationEventsHolder` uses a `ThreadLocal` to save published events. Can this be extended to also retrieve events across all threads?
Use cases:
- Assert on async events
- Whether or not the event itself was async, when using an async test library like Awaitility in this way:
`await().untilAsserted(() -> assertThat(applicationEvents.stream())..... );`
The assertion runs in a different thread (so that Awaitility can time and interrupt if necessary), so no events are found whatsoever.
|
1.0
|
Access recorded ApplicationEvents across all threads in the TestContext framework - <!--
!!! For Security Vulnerabilities, please go to https://spring.io/security-policy !!!
-->
**Affects:** 5.3.22
---
<!--
Thanks for taking the time to create an issue. Please read the following:
- Questions should be asked on Stack Overflow.
- For bugs, specify affected versions and explain what you are trying to do.
- For enhancements, provide context and describe the problem.
Issue or Pull Request? Create only one, not both. GitHub treats them as the same.
If unsure, start with an issue, and if you submit a pull request later, the
issue will be closed as superseded.
-->
#### Feature Request:
When using `@RecordApplicationEvents`, currently the `ApplicationEventsHolder` uses a `ThreadLocal` to save published events. Can this be extended to also retrieve events across all threads?
Use cases:
- Assert on async events
- Whether or not the event itself was async, when using an async test library like Awaitility in this way:
`await().untilAsserted(() -> assertThat(applicationEvents.stream())..... );`
The assertion runs in a different thread (so that Awaitility can time and interrupt if necessary), so no events are found whatsoever.
|
non_process
|
access recorded applicationevents across all threads in the testcontext framework for security vulnerabilities please go to affects thanks for taking the time to create an issue please read the following questions should be asked on stack overflow for bugs specify affected versions and explain what you are trying to do for enhancements provide context and describe the problem issue or pull request create only one not both github treats them as the same if unsure start with an issue and if you submit a pull request later the issue will be closed as superseded feature request when using recordapplicationevents currently the applicationeventsholder uses a threadlocal to save published events can this be extended to also retrieve events across all threads use cases assert on async events whether or not the event itself was async when using an async test library like awaitility in this way await untilasserted assertthat applicationevents stream the assertion runs in a different thread so that awaitility can time and interrupt if necessary so no events are found whatsoever
| 0
|
13,835
| 16,557,439,166
|
IssuesEvent
|
2021-05-28 15:25:57
|
fourMs/MGT-python
|
https://api.github.com/repos/fourMs/MGT-python
|
opened
|
ffmpeg_cmd and ffmpeg_cmd_async should raise exception if something goes wrong
|
bug compatibility enhancement
|
...instead of misleadingly printing 'Progress 100%'. The exception should print stderr.
|
True
|
ffmpeg_cmd and ffmpeg_cmd_async should raise exception if something goes wrong - ...instead of misleadingly printing 'Progress 100%'. The exception should print stderr.
|
non_process
|
ffmpeg cmd and ffmpeg cmd async should raise exception if something goes wrong instead of misleadingly printing progress the exception should print stderr
| 0
|
669,497
| 22,628,421,216
|
IssuesEvent
|
2022-06-30 12:47:46
|
ballerina-platform/ballerina-dev-website
|
https://api.github.com/repos/ballerina-platform/ballerina-dev-website
|
closed
|
Update Liquid Syntax Properties for the Swan Lake Releases
|
Priority/Highest Type/NewFeature Area/CommonPages
|
## Description
Need to update the liquid syntax properties (in the `meta-data.json` file) for the two update and patch Swan Lake releases.
## Related website/documentation area
Area/CommonPages
<!--Area/BBEs-->
<!--Area/HomePageSamples-->
<!--Area/LearnPages-->
<!--Area/CommonPages-->
<!--Area/Backend-->
<!--Area/UIUX-->
<!--Area/Workflows-->
<!--Area/Blog-->
## Describe the problem(s)
> A detailed description of the purpose of this new feature.
## Describe your solution(s)
> A detailed description of how this new feature will solve the problem described above.
## Related issue(s) (optional)
> Any related issues such as sub tasks and issues reported in other repositories (e.g., component repositories), similar problems, etc.
## Suggested label(s) (optional)
> Optional comma-separated list of suggested labels. Non committers can’t assign labels to issues, and thereby, this will help issue creators who are not a committer to suggest possible labels.
## Suggested assignee(s) (optional)
> Optional comma-separated list of suggested team members who should attend the issue. Non committers can’t assign issues to assignees, and thereby, this will help issue creators who are not a committer to suggest possible assignees.
|
1.0
|
Update Liquid Syntax Properties for the Swan Lake Releases - ## Description
Need to update the liquid syntax properties (in the `meta-data.json` file) for the two update and patch Swan Lake releases.
## Related website/documentation area
Area/CommonPages
<!--Area/BBEs-->
<!--Area/HomePageSamples-->
<!--Area/LearnPages-->
<!--Area/CommonPages-->
<!--Area/Backend-->
<!--Area/UIUX-->
<!--Area/Workflows-->
<!--Area/Blog-->
## Describe the problem(s)
> A detailed description of the purpose of this new feature.
## Describe your solution(s)
> A detailed description of how this new feature will solve the problem described above.
## Related issue(s) (optional)
> Any related issues such as sub tasks and issues reported in other repositories (e.g., component repositories), similar problems, etc.
## Suggested label(s) (optional)
> Optional comma-separated list of suggested labels. Non committers can’t assign labels to issues, and thereby, this will help issue creators who are not a committer to suggest possible labels.
## Suggested assignee(s) (optional)
> Optional comma-separated list of suggested team members who should attend the issue. Non committers can’t assign issues to assignees, and thereby, this will help issue creators who are not a committer to suggest possible assignees.
|
non_process
|
update liquid syntax properties for the swan lake releases description need to update the liquid syntax properties in the meta data json file for the two update and patch swan lake releases related website documentation area area commonpages describe the problem s a detailed description of the purpose of this new feature describe your solution s a detailed description of how this new feature will solve the problem described above related issue s optional any related issues such as sub tasks and issues reported in other repositories e g component repositories similar problems etc suggested label s optional optional comma separated list of suggested labels non committers can’t assign labels to issues and thereby this will help issue creators who are not a committer to suggest possible labels suggested assignee s optional optional comma separated list of suggested team members who should attend the issue non committers can’t assign issues to assignees and thereby this will help issue creators who are not a committer to suggest possible assignees
| 0
|
105,482
| 16,652,721,692
|
IssuesEvent
|
2021-06-05 01:02:58
|
rsoreq/kendo-ui-core
|
https://api.github.com/repos/rsoreq/kendo-ui-core
|
opened
|
CVE-2020-28469 (Medium) detected in glob-parent-2.0.0.tgz
|
security vulnerability
|
## CVE-2020-28469 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>glob-parent-2.0.0.tgz</b></p></summary>
<p>Strips glob magic from a string to provide the parent path</p>
<p>Library home page: <a href="https://registry.npmjs.org/glob-parent/-/glob-parent-2.0.0.tgz">https://registry.npmjs.org/glob-parent/-/glob-parent-2.0.0.tgz</a></p>
<p>Path to dependency file: kendo-ui-core/package.json</p>
<p>Path to vulnerable library: kendo-ui-core/node_modules/glob-parent/package.json</p>
<p>
Dependency Hierarchy:
- browser-sync-2.9.12.tgz (Root Library)
- chokidar-1.7.0.tgz
- :x: **glob-parent-2.0.0.tgz** (Vulnerable Library)
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
This affects the package glob-parent before 5.1.2. The enclosure regex used to check for strings ending in enclosure containing path separator.
<p>Publish Date: 2021-06-03
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-28469>CVE-2020-28469</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.3</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: Low
- Integrity Impact: None
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-28469">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-28469</a></p>
<p>Release Date: 2021-06-03</p>
<p>Fix Resolution: glob-parent - 5.1.2</p>
</p>
</details>
<p></p>
<!-- <REMEDIATE>{"isOpenPROnVulnerability":false,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"javascript/Node.js","packageName":"glob-parent","packageVersion":"2.0.0","packageFilePaths":["/package.json"],"isTransitiveDependency":true,"dependencyTree":"browser-sync:2.9.12;chokidar:1.7.0;glob-parent:2.0.0","isMinimumFixVersionAvailable":true,"minimumFixVersion":"glob-parent - 5.1.2"}],"baseBranches":["master"],"vulnerabilityIdentifier":"CVE-2020-28469","vulnerabilityDetails":"This affects the package glob-parent before 5.1.2. The enclosure regex used to check for strings ending in enclosure containing path separator.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-28469","cvss3Severity":"medium","cvss3Score":"5.3","cvss3Metrics":{"A":"None","AC":"Low","PR":"None","S":"Unchanged","C":"Low","UI":"None","AV":"Network","I":"None"},"extraData":{}}</REMEDIATE> -->
|
True
|
CVE-2020-28469 (Medium) detected in glob-parent-2.0.0.tgz - ## CVE-2020-28469 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>glob-parent-2.0.0.tgz</b></p></summary>
<p>Strips glob magic from a string to provide the parent path</p>
<p>Library home page: <a href="https://registry.npmjs.org/glob-parent/-/glob-parent-2.0.0.tgz">https://registry.npmjs.org/glob-parent/-/glob-parent-2.0.0.tgz</a></p>
<p>Path to dependency file: kendo-ui-core/package.json</p>
<p>Path to vulnerable library: kendo-ui-core/node_modules/glob-parent/package.json</p>
<p>
Dependency Hierarchy:
- browser-sync-2.9.12.tgz (Root Library)
- chokidar-1.7.0.tgz
- :x: **glob-parent-2.0.0.tgz** (Vulnerable Library)
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
This affects the package glob-parent before 5.1.2. The enclosure regex used to check for strings ending in enclosure containing path separator.
<p>Publish Date: 2021-06-03
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-28469>CVE-2020-28469</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.3</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: Low
- Integrity Impact: None
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-28469">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-28469</a></p>
<p>Release Date: 2021-06-03</p>
<p>Fix Resolution: glob-parent - 5.1.2</p>
</p>
</details>
<p></p>
<!-- <REMEDIATE>{"isOpenPROnVulnerability":false,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"javascript/Node.js","packageName":"glob-parent","packageVersion":"2.0.0","packageFilePaths":["/package.json"],"isTransitiveDependency":true,"dependencyTree":"browser-sync:2.9.12;chokidar:1.7.0;glob-parent:2.0.0","isMinimumFixVersionAvailable":true,"minimumFixVersion":"glob-parent - 5.1.2"}],"baseBranches":["master"],"vulnerabilityIdentifier":"CVE-2020-28469","vulnerabilityDetails":"This affects the package glob-parent before 5.1.2. The enclosure regex used to check for strings ending in enclosure containing path separator.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-28469","cvss3Severity":"medium","cvss3Score":"5.3","cvss3Metrics":{"A":"None","AC":"Low","PR":"None","S":"Unchanged","C":"Low","UI":"None","AV":"Network","I":"None"},"extraData":{}}</REMEDIATE> -->
|
non_process
|
cve medium detected in glob parent tgz cve medium severity vulnerability vulnerable library glob parent tgz strips glob magic from a string to provide the parent path library home page a href path to dependency file kendo ui core package json path to vulnerable library kendo ui core node modules glob parent package json dependency hierarchy browser sync tgz root library chokidar tgz x glob parent tgz vulnerable library found in base branch master vulnerability details this affects the package glob parent before the enclosure regex used to check for strings ending in enclosure containing path separator publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact low integrity impact none availability impact none for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution glob parent isopenpronvulnerability false ispackagebased true isdefaultbranch true packages istransitivedependency true dependencytree browser sync chokidar glob parent isminimumfixversionavailable true minimumfixversion glob parent basebranches vulnerabilityidentifier cve vulnerabilitydetails this affects the package glob parent before the enclosure regex used to check for strings ending in enclosure containing path separator vulnerabilityurl
| 0
|
14,538
| 17,650,733,209
|
IssuesEvent
|
2021-08-20 12:53:10
|
timo-reymann/intellij-mjml-support
|
https://api.github.com/repos/timo-reymann/intellij-mjml-support
|
closed
|
Make it easier to know which snapshot contains a feature
|
process
|
Currently, whenever someone requests a feature, it's hard to tell which snapshot contains which changes.
It would be cool to have some kind of integration from the snapshot build to the github issue it is referencing.
Idea:
After each snapshot deploy:
1. Get version info from JetBrains
2. Check if there is a comment on the issue for the version
2.1 already exists -> delete comment
3. Create a comment stating sth like "hey the latest snapshot explicitly mentioning this feature is: xxx"
|
1.0
|
Make it easier to know which snapshot contains a feature - Currently, whenever someone requests a feature, it's hard to tell which snapshot contains which changes.
It would be cool to have some kind of integration from the snapshot build to the github issue it is referencing.
Idea:
After each snapshot deploy:
1. Get version info from JetBrains
2. Check if there is a comment on the issue for the version
2.1 already exists -> delete comment
3. Create a comment stating sth like "hey the latest snapshot explicitly mentioning this feature is: xxx"
|
process
|
make it easier to know which snapshot contains a feature currently whenever someone requests a feature its hard to tell which snapshot contains which changes it would be cool to have some kind of integration from the snapshot build to the github issue it is referencing idea after each snapshot deploy get version info from jetbrains check if there is a comment on the issue for the version already exists delete comment create a comment stating sth like hey the latest snapshot explicitly mentioning this feature is xxx
| 1
|
9,657
| 12,640,293,943
|
IssuesEvent
|
2020-06-16 02:34:26
|
neuropsychology/NeuroKit
|
https://api.github.com/repos/neuropsychology/NeuroKit
|
opened
|
Methods of Signal Decomposition (blind source separation)
|
signal processing :chart_with_upwards_trend:
|
## Multichannel
- [x] Empirical Mode Decomposition (EMD): based on https://github.com/laszukdawid/PyEMD
- [ ] ICA
## Single-channel
- [x] Empirical Mode Decomposition (EMD): based on https://github.com/laszukdawid/PyEMD
- [ ] Single-channel ICA (SCICA): [Davies, M. E., & James, C. J. (2007). Source separation using single channel ICA. Signal Processing, 87(8), 1819-1832.](https://www.sciencedirect.com/science/article/abs/pii/S0165168407000151)
- [ ] Ma's Method: [Ma, H. G., Jiang, Q. B., Liu, Z. Q., Liu, G., & Ma, Z. Y. (2010). A novel blind source separation method for single-channel signal. Signal processing, 90(12), 3232-3241.](https://www.sciencedirect.com/science/article/abs/pii/S0165168410002318)
- [ ] Lu's Method: [Lu, G., Xiao, M., Wei, P., & Zhang, H. (2015). A new method of blind source separation using single-channel ICA based on higher-order statistics. Mathematical Problems in Engineering, 2015.](https://www.hindawi.com/journals/mpe/2015/439264/)
|
1.0
|
Methods of Signal Decomposition (blind source separation) - ## Multichannel
- [x] Empirical Mode Decomposition (EMD): based on https://github.com/laszukdawid/PyEMD
- [ ] ICA
## Single-channel
- [x] Empirical Mode Decomposition (EMD): based on https://github.com/laszukdawid/PyEMD
- [ ] Single-channel ICA (SCICA): [Davies, M. E., & James, C. J. (2007). Source separation using single channel ICA. Signal Processing, 87(8), 1819-1832.](https://www.sciencedirect.com/science/article/abs/pii/S0165168407000151)
- [ ] Ma's Method: [Ma, H. G., Jiang, Q. B., Liu, Z. Q., Liu, G., & Ma, Z. Y. (2010). A novel blind source separation method for single-channel signal. Signal processing, 90(12), 3232-3241.](https://www.sciencedirect.com/science/article/abs/pii/S0165168410002318)
- [ ] Lu's Method: [Lu, G., Xiao, M., Wei, P., & Zhang, H. (2015). A new method of blind source separation using single-channel ICA based on higher-order statistics. Mathematical Problems in Engineering, 2015.](https://www.hindawi.com/journals/mpe/2015/439264/)
|
process
|
methods of signal decomposition blind source separation multichannel empirical mode decomposition emd based on ica single channel empirical mode decomposition emd based on single channel ica scica ma s method lu s method
| 1
|
88,070
| 3,771,254,589
|
IssuesEvent
|
2016-03-16 17:01:37
|
ox-it/ords
|
https://api.github.com/repos/ox-it/ords
|
opened
|
Leaving 'Name' field blank when adding a new primary key to a table causes problems
|
bug David Paine Priority-High
|
To reproduce this bug:
- Upload a database containing at least one table (I did this with CSV files; I haven't specifically tested with other formats).
- Open the schema designer. Click a table to select it, and click the 'Keys' button.
- Click 'Remove key' to remove the existing primary key.
- Click 'Add key', select a field to be the new primary key from the list, then click the left arrows to move it into the 'Fields in key' column.
- Leave the 'Name' field blank.
- Click 'OK'
At this stage, you will get a pop-up error message saying 'There was an error when dealing with your request. See the error console for more details'. However, there's no further indication of what the error is, and no clues about where the error console is or how one might view it.
After dismissing the error message, it _looks_ as though everything has worked as it should: the field just selected as the new primary key is in bold.
However, if you save changes, exit the schema designer, and re-open it, it becomes plain that there _is_ a problem. The field name is no longer in bold - so the table no longer has a primary key.
This problem can be avoided by putting something in the 'Name' field when adding the new primary key: if that's done, the error message doesn't appear, and the primary key stays in place after exiting and re-entering the schema designer. However, it's far from obvious that this is what you need to do, and the current error message is not at all helpful.
There are two possible solutions to this that I can see:
1. The ORDS supplies a name for the key automatically, so the user doesn't have to type anything in. This would be my preferred option if it's possible: it's not at all clear why keys need a name anyway, or what purpose is served by giving them one. If this could all be handled without the user having to worry about it, that would smooth this process. (ORDS seems to manage to generate names for primary keys under other circumstances - it's only when an existing key is changed that it becomes an issue.)
2. Alternatively, if the user clicks 'OK' without supplying a key name, they get a warning message explaining what the problem is, and can't close the dialogue box without adding a name (or, I suppose, cancelling the whole operation). I am happy to write this warning message if I'm given access to it.
|
1.0
|
Leaving 'Name' field blank when adding a new primary key to a table causes problems - To reproduce this bug:
- Upload a database containing at least one table (I did this with CSV files; I haven't specifically tested with other formats).
- Open the schema designer. Click a table to select it, and click the 'Keys' button.
- Click 'Remove key' to remove the existing primary key.
- Click 'Add key', select a field to be the new primary key from the list, then click the left arrows to move it into the 'Fields in key' column.
- Leave the 'Name' field blank.
- Click 'OK'
At this stage, you will get a pop-up error message saying 'There was an error when dealing with your request. See the error console for more details'. However, there's no further indication of what the error is, and no clues about where the error console is or how one might view it.
After dismissing the error message, it _looks_ as though everything has worked as it should: the field just selected as the new primary key is in bold.
However, if you save changes, exit the schema designer, and re-open it, it becomes plain that there _is_ a problem. The field name is no longer in bold - so the table no longer has a primary key.
This problem can be avoided by putting something in the 'Name' field when adding the new primary key: if that's done, the error message doesn't appear, and the primary key stays in place after exiting and re-entering the schema designer. However, it's far from obvious that this is what you need to do, and the current error message is not at all helpful.
There are two possible solutions to this that I can see:
1. The ORDS supplies a name for the key automatically, so the user doesn't have to type anything in. This would be my preferred option if it's possible: it's not at all clear why keys need a name anyway, or what purpose is served by giving them one. If this could all be handled without the user having to worry about it, that would smooth this process. (ORDS seems to manage to generate names for primary keys under other circumstances - it's only when an existing key is changed that it becomes an issue.)
2. Alternatively, if the user clicks 'OK' without supplying a key name, they get a warning message explaining what the problem is, and can't close the dialogue box without adding a name (or, I suppose, cancelling the whole operation). I am happy to write this warning message if I'm given access to it.
|
non_process
|
leaving name field blank when adding a new primary key to a table causes problems to reproduce this bug upload a database containing at least one table i did this with csv files i haven t specifically tested with other formats open the schema designer click a table to select it and click the keys button click remove key to remove the existing primary key click add key select a field to be the new primary key from the list then click the left arrows to move it into the fields in key column leave the name field blank click ok at this stage you will get a pop up error message saying there was an error when dealing with your request see the error console for more details however there s no further indication of what the error is and no clues about where the error console is or how one might view it after dismissing the error message it looks as though everything has worked as it should the field just selected as the new primary key is in bold however if you save changes exit the schema designer and re open it it becomes plain that there is a problem the field name is no longer in bold so the table no longer has a primary key this problem can be avoided by putting something in the name field when adding the new primary key if that s done the error message doesn t appear and the primary key stays in place after exiting and re entering the schema designer however it s far from obvious that this is what you need to do and the current error message is not at all helpful there are two possible solutions to this that i can see the ords supplies a name for the key automatically so the user doesn t have to type anything in this would be my preferred option if it s possible it s not at all clear why keys need a name anyway or what purpose is served by giving them one if this could all be handled without the user having to worry about it that would smooth this process ords seems to manage to generate names for primary keys under other circumstances it s only when an existing key is changed that it becomes an issue alternatively if the user clicks ok without supplying a key name they get a warning message explaining what the problem is and can t close the dialogue box without adding a name or i suppose cancelling the whole operation i am happy to write this warning message if i m given access to it
| 0
|
15,712
| 19,848,791,291
|
IssuesEvent
|
2022-01-21 09:55:56
|
ooi-data/CE09OSSM-MFD35-05-PCO2WB000-recovered_inst-pco2w_abc_instrument_blank
|
https://api.github.com/repos/ooi-data/CE09OSSM-MFD35-05-PCO2WB000-recovered_inst-pco2w_abc_instrument_blank
|
opened
|
🛑 Processing failed: ValueError
|
process
|
## Overview
`ValueError` found in `processing_task` task during run ended on 2022-01-21T09:55:55.374214.
## Details
Flow name: `CE09OSSM-MFD35-05-PCO2WB000-recovered_inst-pco2w_abc_instrument_blank`
Task name: `processing_task`
Error type: `ValueError`
Error message: not enough values to unpack (expected 3, got 0)
<details>
<summary>Traceback</summary>
```
Traceback (most recent call last):
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/ooi_harvester/processor/pipeline.py", line 165, in processing
final_path = finalize_data_stream(
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/ooi_harvester/processor/__init__.py", line 84, in finalize_data_stream
append_to_zarr(mod_ds, final_store, enc, logger=logger)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/ooi_harvester/processor/__init__.py", line 357, in append_to_zarr
_append_zarr(store, mod_ds)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/ooi_harvester/processor/utils.py", line 187, in _append_zarr
existing_arr.append(var_data.values)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/xarray/core/variable.py", line 519, in values
return _as_array_or_item(self._data)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/xarray/core/variable.py", line 259, in _as_array_or_item
data = np.asarray(data)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/dask/array/core.py", line 1541, in __array__
x = self.compute()
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/dask/base.py", line 288, in compute
(result,) = compute(self, traverse=False, **kwargs)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/dask/base.py", line 571, in compute
results = schedule(dsk, keys, **kwargs)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/dask/threaded.py", line 79, in get
results = get_async(
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/dask/local.py", line 507, in get_async
raise_exception(exc, tb)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/dask/local.py", line 315, in reraise
raise exc
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/dask/local.py", line 220, in execute_task
result = _execute_task(task, data)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/dask/core.py", line 119, in _execute_task
return func(*(_execute_task(a, cache) for a in args))
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/dask/array/core.py", line 116, in getter
c = np.asarray(c)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/xarray/core/indexing.py", line 357, in __array__
return np.asarray(self.array, dtype=dtype)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/xarray/core/indexing.py", line 551, in __array__
self._ensure_cached()
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/xarray/core/indexing.py", line 548, in _ensure_cached
self.array = NumpyIndexingAdapter(np.asarray(self.array))
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/xarray/core/indexing.py", line 521, in __array__
return np.asarray(self.array, dtype=dtype)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/xarray/core/indexing.py", line 422, in __array__
return np.asarray(array[self.key], dtype=None)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/xarray/coding/variables.py", line 70, in __array__
return self.func(self.array)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/xarray/coding/variables.py", line 137, in _apply_mask
data = np.asarray(data, dtype=dtype)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/xarray/core/indexing.py", line 422, in __array__
return np.asarray(array[self.key], dtype=None)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/xarray/backends/zarr.py", line 73, in __getitem__
return array[key.tuple]
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/zarr/core.py", line 673, in __getitem__
return self.get_basic_selection(selection, fields=fields)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/zarr/core.py", line 798, in get_basic_selection
return self._get_basic_selection_nd(selection=selection, out=out,
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/zarr/core.py", line 841, in _get_basic_selection_nd
return self._get_selection(indexer=indexer, out=out, fields=fields)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/zarr/core.py", line 1135, in _get_selection
lchunk_coords, lchunk_selection, lout_selection = zip(*indexer)
ValueError: not enough values to unpack (expected 3, got 0)
```
</details>
|
1.0
|
🛑 Processing failed: ValueError - ## Overview
`ValueError` found in `processing_task` task during run ended on 2022-01-21T09:55:55.374214.
## Details
Flow name: `CE09OSSM-MFD35-05-PCO2WB000-recovered_inst-pco2w_abc_instrument_blank`
Task name: `processing_task`
Error type: `ValueError`
Error message: not enough values to unpack (expected 3, got 0)
<details>
<summary>Traceback</summary>
```
Traceback (most recent call last):
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/ooi_harvester/processor/pipeline.py", line 165, in processing
final_path = finalize_data_stream(
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/ooi_harvester/processor/__init__.py", line 84, in finalize_data_stream
append_to_zarr(mod_ds, final_store, enc, logger=logger)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/ooi_harvester/processor/__init__.py", line 357, in append_to_zarr
_append_zarr(store, mod_ds)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/ooi_harvester/processor/utils.py", line 187, in _append_zarr
existing_arr.append(var_data.values)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/xarray/core/variable.py", line 519, in values
return _as_array_or_item(self._data)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/xarray/core/variable.py", line 259, in _as_array_or_item
data = np.asarray(data)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/dask/array/core.py", line 1541, in __array__
x = self.compute()
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/dask/base.py", line 288, in compute
(result,) = compute(self, traverse=False, **kwargs)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/dask/base.py", line 571, in compute
results = schedule(dsk, keys, **kwargs)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/dask/threaded.py", line 79, in get
results = get_async(
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/dask/local.py", line 507, in get_async
raise_exception(exc, tb)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/dask/local.py", line 315, in reraise
raise exc
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/dask/local.py", line 220, in execute_task
result = _execute_task(task, data)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/dask/core.py", line 119, in _execute_task
return func(*(_execute_task(a, cache) for a in args))
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/dask/array/core.py", line 116, in getter
c = np.asarray(c)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/xarray/core/indexing.py", line 357, in __array__
return np.asarray(self.array, dtype=dtype)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/xarray/core/indexing.py", line 551, in __array__
self._ensure_cached()
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/xarray/core/indexing.py", line 548, in _ensure_cached
self.array = NumpyIndexingAdapter(np.asarray(self.array))
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/xarray/core/indexing.py", line 521, in __array__
return np.asarray(self.array, dtype=dtype)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/xarray/core/indexing.py", line 422, in __array__
return np.asarray(array[self.key], dtype=None)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/xarray/coding/variables.py", line 70, in __array__
return self.func(self.array)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/xarray/coding/variables.py", line 137, in _apply_mask
data = np.asarray(data, dtype=dtype)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/xarray/core/indexing.py", line 422, in __array__
return np.asarray(array[self.key], dtype=None)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/xarray/backends/zarr.py", line 73, in __getitem__
return array[key.tuple]
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/zarr/core.py", line 673, in __getitem__
return self.get_basic_selection(selection, fields=fields)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/zarr/core.py", line 798, in get_basic_selection
return self._get_basic_selection_nd(selection=selection, out=out,
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/zarr/core.py", line 841, in _get_basic_selection_nd
return self._get_selection(indexer=indexer, out=out, fields=fields)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/zarr/core.py", line 1135, in _get_selection
lchunk_coords, lchunk_selection, lout_selection = zip(*indexer)
ValueError: not enough values to unpack (expected 3, got 0)
```
</details>
|
process
|
🛑 processing failed valueerror overview valueerror found in processing task task during run ended on details flow name recovered inst abc instrument blank task name processing task error type valueerror error message not enough values to unpack expected got traceback traceback most recent call last file srv conda envs notebook lib site packages ooi harvester processor pipeline py line in processing final path finalize data stream file srv conda envs notebook lib site packages ooi harvester processor init py line in finalize data stream append to zarr mod ds final store enc logger logger file srv conda envs notebook lib site packages ooi harvester processor init py line in append to zarr append zarr store mod ds file srv conda envs notebook lib site packages ooi harvester processor utils py line in append zarr existing arr append var data values file srv conda envs notebook lib site packages xarray core variable py line in values return as array or item self data file srv conda envs notebook lib site packages xarray core variable py line in as array or item data np asarray data file srv conda envs notebook lib site packages dask array core py line in array x self compute file srv conda envs notebook lib site packages dask base py line in compute result compute self traverse false kwargs file srv conda envs notebook lib site packages dask base py line in compute results schedule dsk keys kwargs file srv conda envs notebook lib site packages dask threaded py line in get results get async file srv conda envs notebook lib site packages dask local py line in get async raise exception exc tb file srv conda envs notebook lib site packages dask local py line in reraise raise exc file srv conda envs notebook lib site packages dask local py line in execute task result execute task task data file srv conda envs notebook lib site packages dask core py line in execute task return func execute task a cache for a in args file srv conda envs notebook lib site packages dask array core py line in getter c np asarray c file srv conda envs notebook lib site packages xarray core indexing py line in array return np asarray self array dtype dtype file srv conda envs notebook lib site packages xarray core indexing py line in array self ensure cached file srv conda envs notebook lib site packages xarray core indexing py line in ensure cached self array numpyindexingadapter np asarray self array file srv conda envs notebook lib site packages xarray core indexing py line in array return np asarray self array dtype dtype file srv conda envs notebook lib site packages xarray core indexing py line in array return np asarray array dtype none file srv conda envs notebook lib site packages xarray coding variables py line in array return self func self array file srv conda envs notebook lib site packages xarray coding variables py line in apply mask data np asarray data dtype dtype file srv conda envs notebook lib site packages xarray core indexing py line in array return np asarray array dtype none file srv conda envs notebook lib site packages xarray backends zarr py line in getitem return array file srv conda envs notebook lib site packages zarr core py line in getitem return self get basic selection selection fields fields file srv conda envs notebook lib site packages zarr core py line in get basic selection return self get basic selection nd selection selection out out file srv conda envs notebook lib site packages zarr core py line in get basic selection nd return self get selection indexer indexer out out fields fields file srv conda envs notebook lib site packages zarr core py line in get selection lchunk coords lchunk selection lout selection zip indexer valueerror not enough values to unpack expected got
| 1
|
11,379
| 14,221,474,502
|
IssuesEvent
|
2020-11-17 15:44:42
|
bazelbuild/bazel
|
https://api.github.com/repos/bazelbuild/bazel
|
closed
|
Incremental builds broken when changing rules: Old target files still present in bazel-bin.
|
team-Core type: support / not a bug (process) untriaged
|
### Description of the problem / feature request:
I triggered a bug (see below for how to reproduce) where Bazel didn't give me a safe incremental build. The issue is that I could make a mistake such that my build was all OK until I made a fresh checkout.
The core reason is that old target files are left behind in bazel-bin even though a change to a BUILD or .bzl file has renamed the output file. This confuses the user - which in my case is the old Makefile, modified to call Bazel and copy some targets from bazel-bin into where Make expects the targets to end up, so that we can gradually convert from Make to Bazel.
The main reason why I am looking at Bazel as a replacement for Make in the first place is that we can rely on incremental builds: no need to clean our workspaces on the build server; Bazel is supposed to handle _all_ changes correctly. If a developer makes a mistake in a BUILD or .bzl file, it will be caught in the pre-push build and not merged into master. The pre-push build is also just an incremental build.
With this bug, I have an example of an error which I (and therefore others) could make, where a broken build is left undiscovered and merged into master, and potentially first discovered when someone makes a new checkout.
### Feature requests: what underlying problem are you trying to solve with this feature?
Ensure incremental builds when BUILD and .bzl files are changed.
### Bugs: what's the simplest, easiest way to reproduce this bug? Please provide a minimal example if possible.
First create a BUILD file with
```
genrule(name="foo",
outs=["foo.txt"],
cmd="echo foobar > $@")
```
run ``bazel build foo.txt``
Now change (imagine a git commit here) the BUILD file to
```
genrule(name="foo.txt",
outs=["foo.txt.txt"],
cmd="echo something > $@")
```
run ``bazel build foo.txt``
Notice that foo.txt is _still_ present in bazel-bin - and the build command still works. There is no way the scripts or the developer running Bazel can catch the above error. The change to the BUILD file only takes effect when you make a clean checkout.
The key argument for Bazel, namely supporting safe incremental builds, is thus gone.
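One way a wrapper script could at least detect this situation is to compare the files it expects to copy out of bazel-bin against the outputs the target currently declares (obtainable, for example, by querying the target with Bazel). The sketch below is a hypothetical helper, not existing Bazel tooling; the function name and inputs are assumptions for illustration:

```python
import os

# Hypothetical helper: `expected_by_make` lists the files the legacy Makefile
# copies out of bazel-bin; `declared_outputs` is what Bazel reports the target
# produces today. Anything expected but no longer declared is likely stale.
def stale_expectations(expected_by_make, declared_outputs):
    """Return expected files that no rule declares anymore."""
    declared = {os.path.basename(p) for p in declared_outputs}
    return [p for p in expected_by_make if os.path.basename(p) not in declared]

# With the renamed rule from the example above, foo.txt is flagged as stale
# even though a leftover copy still sits in bazel-bin:
print(stale_expectations(["foo.txt"], ["bazel-bin/foo.txt.txt"]))  # ['foo.txt']
```

A check like this would make the wrapper fail loudly at the rename instead of silently copying the leftover file.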
### What operating system are you running Bazel on?
Linux, Ubuntu 16.04
### What's the output of `bazel info release`?
release 3.7.0
### If `bazel info release` returns "development version" or "(@non-git)", tell us how you built Bazel.
### What's the output of `git remote get-url origin ; git rev-parse master ; git rev-parse HEAD` ?
### Have you found anything relevant by searching the web?
No
### Any other information, logs, or outputs that you want to share?
|
1.0
|
Incremental builds broken when changing rules: Old target files still present in bazel-bin.
|
process
|
| 1
|
308,656
| 26,620,926,379
|
IssuesEvent
|
2023-01-24 11:09:27
|
microsoft/vscode
|
https://api.github.com/repos/microsoft/vscode
|
closed
|
Test: Detect unsafe terminal profiles
|
testplan-item
|
Refs: #167721, #125387
- [x] windows @dbaeumer
Complexity: 3
[Create Issue](https://github.com/microsoft/vscode/issues/new?body=Testing+%23172085%0A%0A&assignees=Tyriar)
---
Shells with "unsafe paths", those writable by multiple users, can now be detected. Check https://github.com/microsoft/vscode/pull/170193 for detailed instructions on the new UX. To verify, install the following shells and ensure they are detected and warn the user that it's potentially unsafe:
- Cygwin installed to either `c:\cygwin` or `c:\cygwin64`
- MSYS2 bash installed to `c:\msys64`
- Cmder
- Installed to `c:\cmder` should warn
- Installed to a manually set up `CMDER_ROOT` variable should be detected and _not_ warn
|
1.0
|
non_process
|
| 0
|