Unnamed: 0
int64 0
832k
| id
float64 2.49B
32.1B
| type
stringclasses 1
value | created_at
stringlengths 19
19
| repo
stringlengths 7
112
| repo_url
stringlengths 36
141
| action
stringclasses 3
values | title
stringlengths 1
744
| labels
stringlengths 4
574
| body
stringlengths 9
211k
| index
stringclasses 10
values | text_combine
stringlengths 96
211k
| label
stringclasses 2
values | text
stringlengths 96
188k
| binary_label
int64 0
1
|
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
232,342
| 18,869,746,383
|
IssuesEvent
|
2021-11-13 01:19:58
|
haskell/haskell-language-server
|
https://api.github.com/repos/haskell/haskell-language-server
|
closed
|
Cannot apply hlint hints when a trailing comment is on the same line.
|
type: bug status: needs repro component: hlint plugin status: regression test needed
|
### Your environment
haskell-language-server 1.4
Which OS do you use: NixOS
Which lsp-client do you use: haskell-lsp in emacs
Describe your project (alternative: link to the project): https://github.com/shapr/hlsexamples/blob/main/src/Examples.hs#L14
### Steps to reproduce
Insert the line `biggest items = foldr1 max items -- the line above will show two hlint hints, "eta reduce" and "use maximum"
` into a file, and attempt to apply any of the hlint hints.
If you move the comment to a different line, the hlint hints apply with no problems.
### Expected behaviour
The hints should be applied.
### Actual behaviour
The hints are not applied.
|
1.0
|
Cannot apply hlint hints when a trailing comment is on the same line. -
### Your environment
haskell-language-server 1.4
Which OS do you use: NixOS
Which lsp-client do you use: haskell-lsp in emacs
Describe your project (alternative: link to the project): https://github.com/shapr/hlsexamples/blob/main/src/Examples.hs#L14
### Steps to reproduce
Insert the line `biggest items = foldr1 max items -- the line above will show two hlint hints, "eta reduce" and "use maximum"
` into a file, and attempt to apply any of the hlint hints.
If you move the comment to a different line, the hlint hints apply with no problems.
### Expected behaviour
The hints should be applied.
### Actual behaviour
The hints are not applied.
|
non_process
|
cannot apply hlint hints when a trailing comment is on the same line your environment haskell language server which os do you use nixos which lsp client do you use haskell lsp in emacs describe your project alternative link to the project steps to reproduce insert the line biggest items max items the line above will show two hlint hints eta reduce and use maximum into a file and attempt to apply any of the hlint hints if you move the comment to a different line the hlint hints apply with no problems expected behaviour the hints should be applied actual behaviour the hints are not applied
| 0
|
17,441
| 23,265,912,942
|
IssuesEvent
|
2022-08-04 17:21:28
|
MPMG-DCC-UFMG/C01
|
https://api.github.com/repos/MPMG-DCC-UFMG/C01
|
opened
|
Transparência - Detalhes do coletor/Baixar Imagens
|
[1] Requisito [0] Desenvolvimento [2] Média Prioridade [3] Processamento Dinâmico
|
## Comportamento Esperado
Espera-se que a configuração de "baixar imagens" se aplique também às coletas que usam processamento dinâmico.
## Comportamento Atual
Ao configurar um coletor dinâmico com essa ferramenta, devemos garantir que todas as imagens das páginas, estando em elementos dinâmicos ou não, sejam baixadas.
## Passos para reproduzir o erro
Não se aplica.
## Sistema
- MP ou local: ambos
- Branch específica: master
- Sistema diferente: não
## Screenshots
Não se aplica.
|
1.0
|
Transparência - Detalhes do coletor/Baixar Imagens - ## Comportamento Esperado
Espera-se que a configuração de "baixar imagens" se aplique também às coletas que usam processamento dinâmico.
## Comportamento Atual
Ao configurar um coletor dinâmico com essa ferramenta, devemos garantir que todas as imagens das páginas, estando em elementos dinâmicos ou não, sejam baixadas.
## Passos para reproduzir o erro
Não se aplica.
## Sistema
- MP ou local: ambos
- Branch específica: master
- Sistema diferente: não
## Screenshots
Não se aplica.
|
process
|
transparência detalhes do coletor baixar imagens comportamento esperado espera se que a configuração de baixar imagens se aplique também às coletas que usam processamento dinâmico comportamento atual ao configurar um coletor dinâmico com essa ferramenta devemos garantir que todas as imagens das páginas estando em elementos dinâmicos ou não sejam baixadas passos para reproduzir o erro não se aplica sistema mp ou local ambos branch específica master sistema diferente não screenshots não se aplica
| 1
|
16,761
| 21,929,499,217
|
IssuesEvent
|
2022-05-23 08:29:41
|
qgis/QGIS
|
https://api.github.com/repos/qgis/QGIS
|
closed
|
[processing] Crash closing alg/model GUI after run from history (with workaround)
|
Processing High Priority Bug Crash/Data Corruption
|
**Describe the bug**
Systematic crash every time I close the algorithm after run ONLY IF algorithm is executed from processing history
**the workaround**
to avoid the crash follow instruction as in https://github.com/qgis/QGIS/issues/40258#issuecomment-853785613
**How to Reproduce**
Here a model with a kind of alternative execution basing on expression. This execution run correctly, but cash qgis after closing alg ONLY if alg is run from processing history!
[tests_history_crash.zip](https://github.com/qgis/QGIS/files/5598348/tests_history_crash.zip)
see also simplified model:
[tests_history_crash3.zip](https://github.com/qgis/QGIS/files/6585204/tests_history_crash3.zip)
steps to reproduce
1. install attached zipped model
2. create a polygon (the alg do a buffer)
3. run the model from processing, the algorithm end successfully
4. close alg interface => all it's ok
5. open processing history
6. open last executed algorithm
7. run the algorithm, the algorithm end successfully
8. close algorithm interface => crash!
**This is lldb traceback**
* thread #1, name = 'qgis', stop reason = signal SIGSEGV: invalid address (fault address: 0x0)
* frame #0: 0x00007ffff1feaf04 libstdc++.so.6`__dynamic_cast + 68
frame #1: 0x00007fffa8b64adf _analysis.so`::sipSubClass_QgsGraphDirector(sipCppRet=0x00007fffffff9690) at qgsgraphdirector.sip:28:59
frame #2: 0x00007fffaa7dcc69 sip.cpython-38-x86_64-linux-gnu.so`___lldb_unnamed_symbol26$$sip.cpython-38-x86_64-linux-gnu.so + 169
frame #3: 0x00007fffaa7e43fb sip.cpython-38-x86_64-linux-gnu.so`sip_api_convert_from_type + 299
frame #4: 0x00007fffa3f028cb QtCore.cpython-38-x86_64-linux-gnu.so`___lldb_unnamed_symbol6127$$QtCore.cpython-38-x86_64-linux-gnu.so + 203
frame #5: 0x00007fffa3f02393 QtCore.cpython-38-x86_64-linux-gnu.so`___lldb_unnamed_symbol6124$$QtCore.cpython-38-x86_64-linux-gnu.so + 275
frame #6: 0x00007fffa3f0261b QtCore.cpython-38-x86_64-linux-gnu.so`___lldb_unnamed_symbol6125$$QtCore.cpython-38-x86_64-linux-gnu.so + 171
frame #7: 0x00007fffa3f026d8 QtCore.cpython-38-x86_64-linux-gnu.so`___lldb_unnamed_symbol6126$$QtCore.cpython-38-x86_64-linux-gnu.so + 104
frame #8: 0x00007fffa3f141af QtCore.cpython-38-x86_64-linux-gnu.so`___lldb_unnamed_symbol6364$$QtCore.cpython-38-x86_64-linux-gnu.so + 95
frame #9: 0x00007fffa3f02a62 QtCore.cpython-38-x86_64-linux-gnu.so`___lldb_unnamed_symbol6127$$QtCore.cpython-38-x86_64-linux-gnu.so + 610
frame #10: 0x00007fffa3f02393 QtCore.cpython-38-x86_64-linux-gnu.so`___lldb_unnamed_symbol6124$$QtCore.cpython-38-x86_64-linux-gnu.so + 275
frame #11: 0x00007fffa3f0261b QtCore.cpython-38-x86_64-linux-gnu.so`___lldb_unnamed_symbol6125$$QtCore.cpython-38-x86_64-linux-gnu.so + 171
frame #12: 0x00007fffa3f026d8 QtCore.cpython-38-x86_64-linux-gnu.so`___lldb_unnamed_symbol6126$$QtCore.cpython-38-x86_64-linux-gnu.so + 104
frame #13: 0x00007fffa3f141af QtCore.cpython-38-x86_64-linux-gnu.so`___lldb_unnamed_symbol6364$$QtCore.cpython-38-x86_64-linux-gnu.so + 95
frame #14: 0x00007fffa3f02a62 QtCore.cpython-38-x86_64-linux-gnu.so`___lldb_unnamed_symbol6127$$QtCore.cpython-38-x86_64-linux-gnu.so + 610
frame #15: 0x00007fffa3f02393 QtCore.cpython-38-x86_64-linux-gnu.so`___lldb_unnamed_symbol6124$$QtCore.cpython-38-x86_64-linux-gnu.so + 275
frame #16: 0x00007fffa3f0261b QtCore.cpython-38-x86_64-linux-gnu.so`___lldb_unnamed_symbol6125$$QtCore.cpython-38-x86_64-linux-gnu.so + 171
frame #17: 0x00007fffa3f026d8 QtCore.cpython-38-x86_64-linux-gnu.so`___lldb_unnamed_symbol6126$$QtCore.cpython-38-x86_64-linux-gnu.so + 104
frame #18: 0x00007fffa3f141af QtCore.cpython-38-x86_64-linux-gnu.so`___lldb_unnamed_symbol6364$$QtCore.cpython-38-x86_64-linux-gnu.so + 95
frame #19: 0x00007fffa3f02a62 QtCore.cpython-38-x86_64-linux-gnu.so`___lldb_unnamed_symbol6127$$QtCore.cpython-38-x86_64-linux-gnu.so + 610
frame #20: 0x00007fffa3f02393 QtCore.cpython-38-x86_64-linux-gnu.so`___lldb_unnamed_symbol6124$$QtCore.cpython-38-x86_64-linux-gnu.so + 275
frame #21: 0x00007fffa3f0261b QtCore.cpython-38-x86_64-linux-gnu.so`___lldb_unnamed_symbol6125$$QtCore.cpython-38-x86_64-linux-gnu.so + 171
frame #22: 0x00007fffa3f026d8 QtCore.cpython-38-x86_64-linux-gnu.so`___lldb_unnamed_symbol6126$$QtCore.cpython-38-x86_64-linux-gnu.so + 104
frame #23: 0x00007fffa3f141af QtCore.cpython-38-x86_64-linux-gnu.so`___lldb_unnamed_symbol6364$$QtCore.cpython-38-x86_64-linux-gnu.so + 95
frame #24: 0x00007fffa3f02a62 QtCore.cpython-38-x86_64-linux-gnu.so`___lldb_unnamed_symbol6127$$QtCore.cpython-38-x86_64-linux-gnu.so + 610
frame #25: 0x00007fffa3f02393 QtCore.cpython-38-x86_64-linux-gnu.so`___lldb_unnamed_symbol6124$$QtCore.cpython-38-x86_64-linux-gnu.so + 275
frame #26: 0x00007fffa3f0261b QtCore.cpython-38-x86_64-linux-gnu.so`___lldb_unnamed_symbol6125$$QtCore.cpython-38-x86_64-linux-gnu.so + 171
frame #27: 0x00007fffa3f026d8 QtCore.cpython-38-x86_64-linux-gnu.so`___lldb_unnamed_symbol6126$$QtCore.cpython-38-x86_64-linux-gnu.so + 104
frame #28: 0x00007fffa3f141af QtCore.cpython-38-x86_64-linux-gnu.so`___lldb_unnamed_symbol6364$$QtCore.cpython-38-x86_64-linux-gnu.so + 95
frame #29: 0x00007fffaa7e0612 sip.cpython-38-x86_64-linux-gnu.so`___lldb_unnamed_symbol114$$sip.cpython-38-x86_64-linux-gnu.so + 130
frame #30: 0x00007fffa28e98bc _gui.so`::meth_QgsProcessingAlgorithmDialogBase_results(sipSelf=0x00007fff54e45550, sipArgs=0x00007fffc8161040) at sip_guipart2.cpp:162751:41
frame #31: 0x00007fffa8552357 libpython3.8.so.1.0`___lldb_unnamed_symbol3727$$libpython3.8.so.1.0 + 103
frame #32: 0x00007fffa85527db libpython3.8.so.1.0`_PyObject_MakeTpCall + 171
frame #33: 0x00007fffa831edf3 libpython3.8.so.1.0`___lldb_unnamed_symbol25$$libpython3.8.so.1.0 + 307
frame #34: 0x00007fffa8326ef6 libpython3.8.so.1.0`_PyEval_EvalFrameDefault + 32134
frame #35: 0x00007fffa8474d3b libpython3.8.so.1.0`_PyEval_EvalCodeWithName + 2299
frame #36: 0x00007fffa8551de4 libpython3.8.so.1.0`_PyFunction_Vectorcall + 148
frame #37: 0x00007fffa831ed6d libpython3.8.so.1.0`___lldb_unnamed_symbol25$$libpython3.8.so.1.0 + 173
frame #38: 0x00007fffa8326ef6 libpython3.8.so.1.0`_PyEval_EvalFrameDefault + 32134
frame #39: 0x00007fffa8474d3b libpython3.8.so.1.0`_PyEval_EvalCodeWithName + 2299
frame #40: 0x00007fffa84750c2 libpython3.8.so.1.0`PyEval_EvalCodeEx + 66
frame #41: 0x00007fffa84754af libpython3.8.so.1.0`PyEval_EvalCode + 31
frame #42: 0x00007fffa8435f0c libpython3.8.so.1.0`___lldb_unnamed_symbol2209$$libpython3.8.so.1.0 + 124
frame #43: 0x00007fffa8436269 libpython3.8.so.1.0`PyRun_StringFlags + 121
frame #44: 0x00007fffa8479980 libpython3.8.so.1.0`___lldb_unnamed_symbol2482$$libpython3.8.so.1.0 + 272
frame #45: 0x00007fffa85093a7 libpython3.8.so.1.0`___lldb_unnamed_symbol3111$$libpython3.8.so.1.0 + 103
frame #46: 0x00007fffa831ed6d libpython3.8.so.1.0`___lldb_unnamed_symbol25$$libpython3.8.so.1.0 + 173
frame #47: 0x00007fffa832046d libpython3.8.so.1.0`_PyEval_EvalFrameDefault + 4861
frame #48: 0x00007fffa832a06b libpython3.8.so.1.0`___lldb_unnamed_symbol31$$libpython3.8.so.1.0 + 107
frame #49: 0x00007fffa8552aeb libpython3.8.so.1.0`___lldb_unnamed_symbol3728$$libpython3.8.so.1.0 + 459
frame #50: 0x00007fffa8552500 libpython3.8.so.1.0`PyVectorcall_Call + 96
frame #51: 0x00007fffa3f10070 QtCore.cpython-38-x86_64-linux-gnu.so`___lldb_unnamed_symbol6306$$QtCore.cpython-38-x86_64-linux-gnu.so + 96
frame #52: 0x00007fffa3f10518 QtCore.cpython-38-x86_64-linux-gnu.so`___lldb_unnamed_symbol6307$$QtCore.cpython-38-x86_64-linux-gnu.so + 264
frame #53: 0x00007fffa3f1080e QtCore.cpython-38-x86_64-linux-gnu.so`___lldb_unnamed_symbol6314$$QtCore.cpython-38-x86_64-linux-gnu.so + 94
frame #54: 0x00007fffa3f112d7 QtCore.cpython-38-x86_64-linux-gnu.so`___lldb_unnamed_symbol6329$$QtCore.cpython-38-x86_64-linux-gnu.so + 103
frame #55: 0x00007ffff23d53bc libQt5Core.so.5`QMetaObject::activate(QObject*, int, int, void**) + 2188
frame #56: 0x00007ffff32352c9 libQt5Widgets.so.5`QAbstractItemView::doubleClicked(QModelIndex const&) + 57
frame #57: 0x00007ffff32af4ab libQt5Widgets.so.5`QTreeView::mouseDoubleClickEvent(QMouseEvent*) + 891
frame #58: 0x00007fffa1379ffb QtWidgets.cpython-38-x86_64-linux-gnu.so`___lldb_unnamed_symbol2225$$QtWidgets.cpython-38-x86_64-linux-gnu.so + 155
frame #59: 0x00007ffff301186b libQt5Widgets.so.5`QWidget::event(QEvent*) + 2107
frame #60: 0x00007ffff30bed52 libQt5Widgets.so.5`QFrame::event(QEvent*) + 34
frame #61: 0x00007ffff3243482 libQt5Widgets.so.5`QAbstractItemView::viewportEvent(QEvent*) + 114
frame #62: 0x00007ffff32b162f libQt5Widgets.so.5`QTreeView::viewportEvent(QEvent*) + 63
frame #63: 0x00007fffa137baeb QtWidgets.cpython-38-x86_64-linux-gnu.so`___lldb_unnamed_symbol2257$$QtWidgets.cpython-38-x86_64-linux-gnu.so + 155
frame #64: 0x00007ffff23a964b libQt5Core.so.5`QCoreApplicationPrivate::sendThroughObjectEventFilters(QObject*, QEvent*) + 155
frame #65: 0x00007ffff2fcea55 libQt5Widgets.so.5`QApplicationPrivate::notify_helper(QObject*, QEvent*) + 117
frame #66: 0x00007ffff2fd8343 libQt5Widgets.so.5`QApplication::notify(QObject*, QEvent*) + 1411
frame #67: 0x00007ffff41c397f libqgis_core.so.3.16.1`QgsApplication::notify(this=0x00007fffffffd470, receiver=0x0000555558b5b660, event=0x00007fffffffb2c0) at qgsapplication.cpp:458:32
frame #68: 0x00007ffff23a993a libQt5Core.so.5`QCoreApplication::notifyInternal2(QObject*, QEvent*) + 394
frame #69: 0x00007ffff2fd7457 libQt5Widgets.so.5`QApplicationPrivate::sendMouseEvent(QWidget*, QMouseEvent*, QWidget*, QWidget*, QWidget**, QPointer<QWidget>&, bool, bool) + 439
frame #70: 0x00007ffff302d35d libQt5Widgets.so.5`___lldb_unnamed_symbol975$$libQt5Widgets.so.5 + 717
frame #71: 0x00007ffff30301ec libQt5Widgets.so.5`___lldb_unnamed_symbol985$$libQt5Widgets.so.5 + 460
frame #72: 0x00007ffff2fcea66 libQt5Widgets.so.5`QApplicationPrivate::notify_helper(QObject*, QEvent*) + 134
frame #73: 0x00007ffff2fd80f0 libQt5Widgets.so.5`QApplication::notify(QObject*, QEvent*) + 816
frame #74: 0x00007ffff41c397f libqgis_core.so.3.16.1`QgsApplication::notify(this=0x00007fffffffd470, receiver=0x000055555a0d8000, event=0x00007fffffffb850) at qgsapplication.cpp:458:32
frame #75: 0x00007ffff23a993a libQt5Core.so.5`QCoreApplication::notifyInternal2(QObject*, QEvent*) + 394
frame #76: 0x00007ffff299f71c libQt5Gui.so.5`QGuiApplicationPrivate::processMouseEvent(QWindowSystemInterfacePrivate::MouseEvent*) + 1580
frame #77: 0x00007ffff29a110b libQt5Gui.so.5`QGuiApplicationPrivate::processWindowSystemEvent(QWindowSystemInterfacePrivate::WindowSystemEvent*) + 603
frame #78: 0x00007ffff297b35b libQt5Gui.so.5`QWindowSystemInterface::sendWindowSystemEvents(QFlags<QEventLoop::ProcessEventsFlag>) + 187
frame #79: 0x00007fffe25d732e libQt5XcbQpa.so.5`___lldb_unnamed_symbol448$$libQt5XcbQpa.so.5 + 30
frame #80: 0x00007fffeaa3ffbd libglib-2.0.so.0`g_main_context_dispatch + 637
frame #81: 0x00007fffeaa40240 libglib-2.0.so.0`___lldb_unnamed_symbol197$$libglib-2.0.so.0 + 528
frame #82: 0x00007fffeaa402e3 libglib-2.0.so.0`g_main_context_iteration + 51
frame #83: 0x00007ffff2401565 libQt5Core.so.5`QEventDispatcherGlib::processEvents(QFlags<QEventLoop::ProcessEventsFlag>) + 101
frame #84: 0x00007ffff23a84db libQt5Core.so.5`QEventLoop::exec(QFlags<QEventLoop::ProcessEventsFlag>) + 299
frame #85: 0x00007ffff31d2c6d libQt5Widgets.so.5`QDialog::exec() + 461
frame #86: 0x00007fffa1577d28 QtWidgets.cpython-38-x86_64-linux-gnu.so`___lldb_unnamed_symbol12894$$QtWidgets.cpython-38-x86_64-linux-gnu.so + 408
frame #87: 0x00007fffa8552357 libpython3.8.so.1.0`___lldb_unnamed_symbol3727$$libpython3.8.so.1.0 + 103
frame #88: 0x00007fffa85527db libpython3.8.so.1.0`_PyObject_MakeTpCall + 171
frame #89: 0x00007fffa831edf3 libpython3.8.so.1.0`___lldb_unnamed_symbol25$$libpython3.8.so.1.0 + 307
frame #90: 0x00007fffa8326ef6 libpython3.8.so.1.0`_PyEval_EvalFrameDefault + 32134
frame #91: 0x00007fffa832a06b libpython3.8.so.1.0`___lldb_unnamed_symbol31$$libpython3.8.so.1.0 + 107
frame #92: 0x00007fffa8552aeb libpython3.8.so.1.0`___lldb_unnamed_symbol3728$$libpython3.8.so.1.0 + 459
frame #93: 0x00007fffa8552500 libpython3.8.so.1.0`PyVectorcall_Call + 96
frame #94: 0x00007fffa3f10070 QtCore.cpython-38-x86_64-linux-gnu.so`___lldb_unnamed_symbol6306$$QtCore.cpython-38-x86_64-linux-gnu.so + 96
frame #95: 0x00007fffa3f10518 QtCore.cpython-38-x86_64-linux-gnu.so`___lldb_unnamed_symbol6307$$QtCore.cpython-38-x86_64-linux-gnu.so + 264
frame #96: 0x00007fffa3f1080e QtCore.cpython-38-x86_64-linux-gnu.so`___lldb_unnamed_symbol6314$$QtCore.cpython-38-x86_64-linux-gnu.so + 94
frame #97: 0x00007fffa3f112d7 QtCore.cpython-38-x86_64-linux-gnu.so`___lldb_unnamed_symbol6329$$QtCore.cpython-38-x86_64-linux-gnu.so + 103
frame #98: 0x00007ffff23d53bc libQt5Core.so.5`QMetaObject::activate(QObject*, int, int, void**) + 2188
frame #99: 0x00007ffff2fc83e6 libQt5Widgets.so.5`QAction::triggered(bool) + 70
frame #100: 0x00007ffff2fcaaa2 libQt5Widgets.so.5`QAction::activate(QAction::ActionEvent) + 242
frame #101: 0x00007ffff30c4e10 libQt5Widgets.so.5`___lldb_unnamed_symbol1505$$libQt5Widgets.so.5 + 80
frame #102: 0x00007ffff30c5035 libQt5Widgets.so.5`QAbstractButton::mouseReleaseEvent(QMouseEvent*) + 229
frame #103: 0x00007ffff31b85fe libQt5Widgets.so.5`QToolButton::mouseReleaseEvent(QMouseEvent*) + 14
frame #104: 0x00007ffff30112b6 libQt5Widgets.so.5`QWidget::event(QEvent*) + 646
frame #105: 0x00007ffff31b86a8 libQt5Widgets.so.5`QToolButton::event(QEvent*) + 56
frame #106: 0x00007ffff2fcea66 libQt5Widgets.so.5`QApplicationPrivate::notify_helper(QObject*, QEvent*) + 134
frame #107: 0x00007ffff2fd8343 libQt5Widgets.so.5`QApplication::notify(QObject*, QEvent*) + 1411
frame #108: 0x00007ffff41c397f libqgis_core.so.3.16.1`QgsApplication::notify(this=0x00007fffffffd470, receiver=0x00005555581da9a0, event=0x00007fffffffc820) at qgsapplication.cpp:458:32
frame #109: 0x00007ffff23a993a libQt5Core.so.5`QCoreApplication::notifyInternal2(QObject*, QEvent*) + 394
frame #110: 0x00007ffff2fd7457 libQt5Widgets.so.5`QApplicationPrivate::sendMouseEvent(QWidget*, QMouseEvent*, QWidget*, QWidget*, QWidget**, QPointer<QWidget>&, bool, bool) + 439
frame #111: 0x00007ffff302d35d libQt5Widgets.so.5`___lldb_unnamed_symbol975$$libQt5Widgets.so.5 + 717
frame #112: 0x00007ffff30301ec libQt5Widgets.so.5`___lldb_unnamed_symbol985$$libQt5Widgets.so.5 + 460
frame #113: 0x00007ffff2fcea66 libQt5Widgets.so.5`QApplicationPrivate::notify_helper(QObject*, QEvent*) + 134
frame #114: 0x00007ffff2fd80f0 libQt5Widgets.so.5`QApplication::notify(QObject*, QEvent*) + 816
frame #115: 0x00007ffff41c397f libqgis_core.so.3.16.1`QgsApplication::notify(this=0x00007fffffffd470, receiver=0x000055555601ee40, event=0x00007fffffffcd40) at qgsapplication.cpp:458:32
frame #116: 0x00007ffff23a993a libQt5Core.so.5`QCoreApplication::notifyInternal2(QObject*, QEvent*) + 394
frame #117: 0x00007ffff299f7d3 libQt5Gui.so.5`QGuiApplicationPrivate::processMouseEvent(QWindowSystemInterfacePrivate::MouseEvent*) + 1763
frame #118: 0x00007ffff29a110b libQt5Gui.so.5`QGuiApplicationPrivate::processWindowSystemEvent(QWindowSystemInterfacePrivate::WindowSystemEvent*) + 603
frame #119: 0x00007ffff297b35b libQt5Gui.so.5`QWindowSystemInterface::sendWindowSystemEvents(QFlags<QEventLoop::ProcessEventsFlag>) + 187
frame #120: 0x00007fffe25d732e libQt5XcbQpa.so.5`___lldb_unnamed_symbol448$$libQt5XcbQpa.so.5 + 30
frame #121: 0x00007fffeaa3ffbd libglib-2.0.so.0`g_main_context_dispatch + 637
frame #122: 0x00007fffeaa40240 libglib-2.0.so.0`___lldb_unnamed_symbol197$$libglib-2.0.so.0 + 528
frame #123: 0x00007fffeaa402e3 libglib-2.0.so.0`g_main_context_iteration + 51
frame #124: 0x00007ffff2401565 libQt5Core.so.5`QEventDispatcherGlib::processEvents(QFlags<QEventLoop::ProcessEventsFlag>) + 101
frame #125: 0x00007ffff23a84db libQt5Core.so.5`QEventLoop::exec(QFlags<QEventLoop::ProcessEventsFlag>) + 299
frame #126: 0x00007ffff23b0246 libQt5Core.so.5`QCoreApplication::exec() + 150
frame #127: 0x000055555556aeb3 qgis`main(argc=1, argv=0x00007fffffffdcf8) at main.cpp:1637:26
frame #128: 0x00007ffff1d5c0b3 libc.so.6`__libc_start_main + 243
frame #129: 0x00005555555600ee qgis`_start + 46
**QGIS and OS versions**
QGIS version | 3.16.1-Hannover | QGIS code revision | 37972328b7
-- | -- | -- | --
Compiled against Qt | 5.12.8 | Running against Qt | 5.12.8
Compiled against GDAL/OGR | 3.0.4 | Running against GDAL/OGR | 3.0.4
Compiled against GEOS | 3.8.0-CAPI-1.13.1 | Running against GEOS | 3.8.0-CAPI-1.13.1
Compiled against SQLite | 3.31.1 | Running against SQLite | 3.31.1
PostgreSQL Client Version | 12.5 (Ubuntu 12.5-0ubuntu0.20.04.1) | SpatiaLite Version | 4.3.0a
QWT Version | 6.1.4 | QScintilla2 Version | 2.11.2
Compiled against PROJ | 6.3.1 | Running against PROJ | Rel. 6.3.1, February 10th, 2020
OS Version | Ubuntu 20.04.1 LTS | This copy of QGIS writes debugging output.
Active python plugins | plugin_reloader; IPyConsole; pluginbuilder3; copernicushub; firstaid; epyris; db_manager; processing; MetaSearch
|
1.0
|
[processing] Crash closing alg/model GUI after run from history (with workaround) - **Describe the bug**
Systematic crash every time I close the algorithm after run ONLY IF algorithm is executed from processing history
**the workaround**
to avoid the crash follow instruction as in https://github.com/qgis/QGIS/issues/40258#issuecomment-853785613
**How to Reproduce**
Here a model with a kind of alternative execution basing on expression. This execution run correctly, but cash qgis after closing alg ONLY if alg is run from processing history!
[tests_history_crash.zip](https://github.com/qgis/QGIS/files/5598348/tests_history_crash.zip)
see also simplified model:
[tests_history_crash3.zip](https://github.com/qgis/QGIS/files/6585204/tests_history_crash3.zip)
steps to reproduce
1. install attached zipped model
2. create a polygon (the alg do a buffer)
3. run the model from processing, the algorithm end successfully
4. close alg interface => all it's ok
5. open processing history
6. open last executed algorithm
7. run the algorithm, the algorithm end successfully
8. close algorithm interface => crash!
**This is lldb traceback**
* thread #1, name = 'qgis', stop reason = signal SIGSEGV: invalid address (fault address: 0x0)
* frame #0: 0x00007ffff1feaf04 libstdc++.so.6`__dynamic_cast + 68
frame #1: 0x00007fffa8b64adf _analysis.so`::sipSubClass_QgsGraphDirector(sipCppRet=0x00007fffffff9690) at qgsgraphdirector.sip:28:59
frame #2: 0x00007fffaa7dcc69 sip.cpython-38-x86_64-linux-gnu.so`___lldb_unnamed_symbol26$$sip.cpython-38-x86_64-linux-gnu.so + 169
frame #3: 0x00007fffaa7e43fb sip.cpython-38-x86_64-linux-gnu.so`sip_api_convert_from_type + 299
frame #4: 0x00007fffa3f028cb QtCore.cpython-38-x86_64-linux-gnu.so`___lldb_unnamed_symbol6127$$QtCore.cpython-38-x86_64-linux-gnu.so + 203
frame #5: 0x00007fffa3f02393 QtCore.cpython-38-x86_64-linux-gnu.so`___lldb_unnamed_symbol6124$$QtCore.cpython-38-x86_64-linux-gnu.so + 275
frame #6: 0x00007fffa3f0261b QtCore.cpython-38-x86_64-linux-gnu.so`___lldb_unnamed_symbol6125$$QtCore.cpython-38-x86_64-linux-gnu.so + 171
frame #7: 0x00007fffa3f026d8 QtCore.cpython-38-x86_64-linux-gnu.so`___lldb_unnamed_symbol6126$$QtCore.cpython-38-x86_64-linux-gnu.so + 104
frame #8: 0x00007fffa3f141af QtCore.cpython-38-x86_64-linux-gnu.so`___lldb_unnamed_symbol6364$$QtCore.cpython-38-x86_64-linux-gnu.so + 95
frame #9: 0x00007fffa3f02a62 QtCore.cpython-38-x86_64-linux-gnu.so`___lldb_unnamed_symbol6127$$QtCore.cpython-38-x86_64-linux-gnu.so + 610
frame #10: 0x00007fffa3f02393 QtCore.cpython-38-x86_64-linux-gnu.so`___lldb_unnamed_symbol6124$$QtCore.cpython-38-x86_64-linux-gnu.so + 275
frame #11: 0x00007fffa3f0261b QtCore.cpython-38-x86_64-linux-gnu.so`___lldb_unnamed_symbol6125$$QtCore.cpython-38-x86_64-linux-gnu.so + 171
frame #12: 0x00007fffa3f026d8 QtCore.cpython-38-x86_64-linux-gnu.so`___lldb_unnamed_symbol6126$$QtCore.cpython-38-x86_64-linux-gnu.so + 104
frame #13: 0x00007fffa3f141af QtCore.cpython-38-x86_64-linux-gnu.so`___lldb_unnamed_symbol6364$$QtCore.cpython-38-x86_64-linux-gnu.so + 95
frame #14: 0x00007fffa3f02a62 QtCore.cpython-38-x86_64-linux-gnu.so`___lldb_unnamed_symbol6127$$QtCore.cpython-38-x86_64-linux-gnu.so + 610
frame #15: 0x00007fffa3f02393 QtCore.cpython-38-x86_64-linux-gnu.so`___lldb_unnamed_symbol6124$$QtCore.cpython-38-x86_64-linux-gnu.so + 275
frame #16: 0x00007fffa3f0261b QtCore.cpython-38-x86_64-linux-gnu.so`___lldb_unnamed_symbol6125$$QtCore.cpython-38-x86_64-linux-gnu.so + 171
frame #17: 0x00007fffa3f026d8 QtCore.cpython-38-x86_64-linux-gnu.so`___lldb_unnamed_symbol6126$$QtCore.cpython-38-x86_64-linux-gnu.so + 104
frame #18: 0x00007fffa3f141af QtCore.cpython-38-x86_64-linux-gnu.so`___lldb_unnamed_symbol6364$$QtCore.cpython-38-x86_64-linux-gnu.so + 95
frame #19: 0x00007fffa3f02a62 QtCore.cpython-38-x86_64-linux-gnu.so`___lldb_unnamed_symbol6127$$QtCore.cpython-38-x86_64-linux-gnu.so + 610
frame #20: 0x00007fffa3f02393 QtCore.cpython-38-x86_64-linux-gnu.so`___lldb_unnamed_symbol6124$$QtCore.cpython-38-x86_64-linux-gnu.so + 275
frame #21: 0x00007fffa3f0261b QtCore.cpython-38-x86_64-linux-gnu.so`___lldb_unnamed_symbol6125$$QtCore.cpython-38-x86_64-linux-gnu.so + 171
frame #22: 0x00007fffa3f026d8 QtCore.cpython-38-x86_64-linux-gnu.so`___lldb_unnamed_symbol6126$$QtCore.cpython-38-x86_64-linux-gnu.so + 104
frame #23: 0x00007fffa3f141af QtCore.cpython-38-x86_64-linux-gnu.so`___lldb_unnamed_symbol6364$$QtCore.cpython-38-x86_64-linux-gnu.so + 95
frame #24: 0x00007fffa3f02a62 QtCore.cpython-38-x86_64-linux-gnu.so`___lldb_unnamed_symbol6127$$QtCore.cpython-38-x86_64-linux-gnu.so + 610
frame #25: 0x00007fffa3f02393 QtCore.cpython-38-x86_64-linux-gnu.so`___lldb_unnamed_symbol6124$$QtCore.cpython-38-x86_64-linux-gnu.so + 275
frame #26: 0x00007fffa3f0261b QtCore.cpython-38-x86_64-linux-gnu.so`___lldb_unnamed_symbol6125$$QtCore.cpython-38-x86_64-linux-gnu.so + 171
frame #27: 0x00007fffa3f026d8 QtCore.cpython-38-x86_64-linux-gnu.so`___lldb_unnamed_symbol6126$$QtCore.cpython-38-x86_64-linux-gnu.so + 104
frame #28: 0x00007fffa3f141af QtCore.cpython-38-x86_64-linux-gnu.so`___lldb_unnamed_symbol6364$$QtCore.cpython-38-x86_64-linux-gnu.so + 95
frame #29: 0x00007fffaa7e0612 sip.cpython-38-x86_64-linux-gnu.so`___lldb_unnamed_symbol114$$sip.cpython-38-x86_64-linux-gnu.so + 130
frame #30: 0x00007fffa28e98bc _gui.so`::meth_QgsProcessingAlgorithmDialogBase_results(sipSelf=0x00007fff54e45550, sipArgs=0x00007fffc8161040) at sip_guipart2.cpp:162751:41
frame #31: 0x00007fffa8552357 libpython3.8.so.1.0`___lldb_unnamed_symbol3727$$libpython3.8.so.1.0 + 103
frame #32: 0x00007fffa85527db libpython3.8.so.1.0`_PyObject_MakeTpCall + 171
frame #33: 0x00007fffa831edf3 libpython3.8.so.1.0`___lldb_unnamed_symbol25$$libpython3.8.so.1.0 + 307
frame #34: 0x00007fffa8326ef6 libpython3.8.so.1.0`_PyEval_EvalFrameDefault + 32134
frame #35: 0x00007fffa8474d3b libpython3.8.so.1.0`_PyEval_EvalCodeWithName + 2299
frame #36: 0x00007fffa8551de4 libpython3.8.so.1.0`_PyFunction_Vectorcall + 148
frame #37: 0x00007fffa831ed6d libpython3.8.so.1.0`___lldb_unnamed_symbol25$$libpython3.8.so.1.0 + 173
frame #38: 0x00007fffa8326ef6 libpython3.8.so.1.0`_PyEval_EvalFrameDefault + 32134
frame #39: 0x00007fffa8474d3b libpython3.8.so.1.0`_PyEval_EvalCodeWithName + 2299
frame #40: 0x00007fffa84750c2 libpython3.8.so.1.0`PyEval_EvalCodeEx + 66
frame #41: 0x00007fffa84754af libpython3.8.so.1.0`PyEval_EvalCode + 31
frame #42: 0x00007fffa8435f0c libpython3.8.so.1.0`___lldb_unnamed_symbol2209$$libpython3.8.so.1.0 + 124
frame #43: 0x00007fffa8436269 libpython3.8.so.1.0`PyRun_StringFlags + 121
frame #44: 0x00007fffa8479980 libpython3.8.so.1.0`___lldb_unnamed_symbol2482$$libpython3.8.so.1.0 + 272
frame #45: 0x00007fffa85093a7 libpython3.8.so.1.0`___lldb_unnamed_symbol3111$$libpython3.8.so.1.0 + 103
frame #46: 0x00007fffa831ed6d libpython3.8.so.1.0`___lldb_unnamed_symbol25$$libpython3.8.so.1.0 + 173
frame #47: 0x00007fffa832046d libpython3.8.so.1.0`_PyEval_EvalFrameDefault + 4861
frame #48: 0x00007fffa832a06b libpython3.8.so.1.0`___lldb_unnamed_symbol31$$libpython3.8.so.1.0 + 107
frame #49: 0x00007fffa8552aeb libpython3.8.so.1.0`___lldb_unnamed_symbol3728$$libpython3.8.so.1.0 + 459
frame #50: 0x00007fffa8552500 libpython3.8.so.1.0`PyVectorcall_Call + 96
frame #51: 0x00007fffa3f10070 QtCore.cpython-38-x86_64-linux-gnu.so`___lldb_unnamed_symbol6306$$QtCore.cpython-38-x86_64-linux-gnu.so + 96
frame #52: 0x00007fffa3f10518 QtCore.cpython-38-x86_64-linux-gnu.so`___lldb_unnamed_symbol6307$$QtCore.cpython-38-x86_64-linux-gnu.so + 264
frame #53: 0x00007fffa3f1080e QtCore.cpython-38-x86_64-linux-gnu.so`___lldb_unnamed_symbol6314$$QtCore.cpython-38-x86_64-linux-gnu.so + 94
frame #54: 0x00007fffa3f112d7 QtCore.cpython-38-x86_64-linux-gnu.so`___lldb_unnamed_symbol6329$$QtCore.cpython-38-x86_64-linux-gnu.so + 103
frame #55: 0x00007ffff23d53bc libQt5Core.so.5`QMetaObject::activate(QObject*, int, int, void**) + 2188
frame #56: 0x00007ffff32352c9 libQt5Widgets.so.5`QAbstractItemView::doubleClicked(QModelIndex const&) + 57
frame #57: 0x00007ffff32af4ab libQt5Widgets.so.5`QTreeView::mouseDoubleClickEvent(QMouseEvent*) + 891
frame #58: 0x00007fffa1379ffb QtWidgets.cpython-38-x86_64-linux-gnu.so`___lldb_unnamed_symbol2225$$QtWidgets.cpython-38-x86_64-linux-gnu.so + 155
frame #59: 0x00007ffff301186b libQt5Widgets.so.5`QWidget::event(QEvent*) + 2107
frame #60: 0x00007ffff30bed52 libQt5Widgets.so.5`QFrame::event(QEvent*) + 34
frame #61: 0x00007ffff3243482 libQt5Widgets.so.5`QAbstractItemView::viewportEvent(QEvent*) + 114
frame #62: 0x00007ffff32b162f libQt5Widgets.so.5`QTreeView::viewportEvent(QEvent*) + 63
frame #63: 0x00007fffa137baeb QtWidgets.cpython-38-x86_64-linux-gnu.so`___lldb_unnamed_symbol2257$$QtWidgets.cpython-38-x86_64-linux-gnu.so + 155
frame #64: 0x00007ffff23a964b libQt5Core.so.5`QCoreApplicationPrivate::sendThroughObjectEventFilters(QObject*, QEvent*) + 155
frame #65: 0x00007ffff2fcea55 libQt5Widgets.so.5`QApplicationPrivate::notify_helper(QObject*, QEvent*) + 117
frame #66: 0x00007ffff2fd8343 libQt5Widgets.so.5`QApplication::notify(QObject*, QEvent*) + 1411
frame #67: 0x00007ffff41c397f libqgis_core.so.3.16.1`QgsApplication::notify(this=0x00007fffffffd470, receiver=0x0000555558b5b660, event=0x00007fffffffb2c0) at qgsapplication.cpp:458:32
frame #68: 0x00007ffff23a993a libQt5Core.so.5`QCoreApplication::notifyInternal2(QObject*, QEvent*) + 394
frame #69: 0x00007ffff2fd7457 libQt5Widgets.so.5`QApplicationPrivate::sendMouseEvent(QWidget*, QMouseEvent*, QWidget*, QWidget*, QWidget**, QPointer<QWidget>&, bool, bool) + 439
frame #70: 0x00007ffff302d35d libQt5Widgets.so.5`___lldb_unnamed_symbol975$$libQt5Widgets.so.5 + 717
frame #71: 0x00007ffff30301ec libQt5Widgets.so.5`___lldb_unnamed_symbol985$$libQt5Widgets.so.5 + 460
frame #72: 0x00007ffff2fcea66 libQt5Widgets.so.5`QApplicationPrivate::notify_helper(QObject*, QEvent*) + 134
frame #73: 0x00007ffff2fd80f0 libQt5Widgets.so.5`QApplication::notify(QObject*, QEvent*) + 816
frame #74: 0x00007ffff41c397f libqgis_core.so.3.16.1`QgsApplication::notify(this=0x00007fffffffd470, receiver=0x000055555a0d8000, event=0x00007fffffffb850) at qgsapplication.cpp:458:32
frame #75: 0x00007ffff23a993a libQt5Core.so.5`QCoreApplication::notifyInternal2(QObject*, QEvent*) + 394
frame #76: 0x00007ffff299f71c libQt5Gui.so.5`QGuiApplicationPrivate::processMouseEvent(QWindowSystemInterfacePrivate::MouseEvent*) + 1580
frame #77: 0x00007ffff29a110b libQt5Gui.so.5`QGuiApplicationPrivate::processWindowSystemEvent(QWindowSystemInterfacePrivate::WindowSystemEvent*) + 603
frame #78: 0x00007ffff297b35b libQt5Gui.so.5`QWindowSystemInterface::sendWindowSystemEvents(QFlags<QEventLoop::ProcessEventsFlag>) + 187
frame #79: 0x00007fffe25d732e libQt5XcbQpa.so.5`___lldb_unnamed_symbol448$$libQt5XcbQpa.so.5 + 30
frame #80: 0x00007fffeaa3ffbd libglib-2.0.so.0`g_main_context_dispatch + 637
frame #81: 0x00007fffeaa40240 libglib-2.0.so.0`___lldb_unnamed_symbol197$$libglib-2.0.so.0 + 528
frame #82: 0x00007fffeaa402e3 libglib-2.0.so.0`g_main_context_iteration + 51
frame #83: 0x00007ffff2401565 libQt5Core.so.5`QEventDispatcherGlib::processEvents(QFlags<QEventLoop::ProcessEventsFlag>) + 101
frame #84: 0x00007ffff23a84db libQt5Core.so.5`QEventLoop::exec(QFlags<QEventLoop::ProcessEventsFlag>) + 299
frame #85: 0x00007ffff31d2c6d libQt5Widgets.so.5`QDialog::exec() + 461
frame #86: 0x00007fffa1577d28 QtWidgets.cpython-38-x86_64-linux-gnu.so`___lldb_unnamed_symbol12894$$QtWidgets.cpython-38-x86_64-linux-gnu.so + 408
frame #87: 0x00007fffa8552357 libpython3.8.so.1.0`___lldb_unnamed_symbol3727$$libpython3.8.so.1.0 + 103
frame #88: 0x00007fffa85527db libpython3.8.so.1.0`_PyObject_MakeTpCall + 171
frame #89: 0x00007fffa831edf3 libpython3.8.so.1.0`___lldb_unnamed_symbol25$$libpython3.8.so.1.0 + 307
frame #90: 0x00007fffa8326ef6 libpython3.8.so.1.0`_PyEval_EvalFrameDefault + 32134
frame #91: 0x00007fffa832a06b libpython3.8.so.1.0`___lldb_unnamed_symbol31$$libpython3.8.so.1.0 + 107
frame #92: 0x00007fffa8552aeb libpython3.8.so.1.0`___lldb_unnamed_symbol3728$$libpython3.8.so.1.0 + 459
frame #93: 0x00007fffa8552500 libpython3.8.so.1.0`PyVectorcall_Call + 96
frame #94: 0x00007fffa3f10070 QtCore.cpython-38-x86_64-linux-gnu.so`___lldb_unnamed_symbol6306$$QtCore.cpython-38-x86_64-linux-gnu.so + 96
frame #95: 0x00007fffa3f10518 QtCore.cpython-38-x86_64-linux-gnu.so`___lldb_unnamed_symbol6307$$QtCore.cpython-38-x86_64-linux-gnu.so + 264
frame #96: 0x00007fffa3f1080e QtCore.cpython-38-x86_64-linux-gnu.so`___lldb_unnamed_symbol6314$$QtCore.cpython-38-x86_64-linux-gnu.so + 94
frame #97: 0x00007fffa3f112d7 QtCore.cpython-38-x86_64-linux-gnu.so`___lldb_unnamed_symbol6329$$QtCore.cpython-38-x86_64-linux-gnu.so + 103
frame #98: 0x00007ffff23d53bc libQt5Core.so.5`QMetaObject::activate(QObject*, int, int, void**) + 2188
frame #99: 0x00007ffff2fc83e6 libQt5Widgets.so.5`QAction::triggered(bool) + 70
frame #100: 0x00007ffff2fcaaa2 libQt5Widgets.so.5`QAction::activate(QAction::ActionEvent) + 242
frame #101: 0x00007ffff30c4e10 libQt5Widgets.so.5`___lldb_unnamed_symbol1505$$libQt5Widgets.so.5 + 80
frame #102: 0x00007ffff30c5035 libQt5Widgets.so.5`QAbstractButton::mouseReleaseEvent(QMouseEvent*) + 229
frame #103: 0x00007ffff31b85fe libQt5Widgets.so.5`QToolButton::mouseReleaseEvent(QMouseEvent*) + 14
frame #104: 0x00007ffff30112b6 libQt5Widgets.so.5`QWidget::event(QEvent*) + 646
frame #105: 0x00007ffff31b86a8 libQt5Widgets.so.5`QToolButton::event(QEvent*) + 56
frame #106: 0x00007ffff2fcea66 libQt5Widgets.so.5`QApplicationPrivate::notify_helper(QObject*, QEvent*) + 134
frame #107: 0x00007ffff2fd8343 libQt5Widgets.so.5`QApplication::notify(QObject*, QEvent*) + 1411
frame #108: 0x00007ffff41c397f libqgis_core.so.3.16.1`QgsApplication::notify(this=0x00007fffffffd470, receiver=0x00005555581da9a0, event=0x00007fffffffc820) at qgsapplication.cpp:458:32
frame #109: 0x00007ffff23a993a libQt5Core.so.5`QCoreApplication::notifyInternal2(QObject*, QEvent*) + 394
frame #110: 0x00007ffff2fd7457 libQt5Widgets.so.5`QApplicationPrivate::sendMouseEvent(QWidget*, QMouseEvent*, QWidget*, QWidget*, QWidget**, QPointer<QWidget>&, bool, bool) + 439
frame #111: 0x00007ffff302d35d libQt5Widgets.so.5`___lldb_unnamed_symbol975$$libQt5Widgets.so.5 + 717
frame #112: 0x00007ffff30301ec libQt5Widgets.so.5`___lldb_unnamed_symbol985$$libQt5Widgets.so.5 + 460
frame #113: 0x00007ffff2fcea66 libQt5Widgets.so.5`QApplicationPrivate::notify_helper(QObject*, QEvent*) + 134
frame #114: 0x00007ffff2fd80f0 libQt5Widgets.so.5`QApplication::notify(QObject*, QEvent*) + 816
frame #115: 0x00007ffff41c397f libqgis_core.so.3.16.1`QgsApplication::notify(this=0x00007fffffffd470, receiver=0x000055555601ee40, event=0x00007fffffffcd40) at qgsapplication.cpp:458:32
frame #116: 0x00007ffff23a993a libQt5Core.so.5`QCoreApplication::notifyInternal2(QObject*, QEvent*) + 394
frame #117: 0x00007ffff299f7d3 libQt5Gui.so.5`QGuiApplicationPrivate::processMouseEvent(QWindowSystemInterfacePrivate::MouseEvent*) + 1763
frame #118: 0x00007ffff29a110b libQt5Gui.so.5`QGuiApplicationPrivate::processWindowSystemEvent(QWindowSystemInterfacePrivate::WindowSystemEvent*) + 603
frame #119: 0x00007ffff297b35b libQt5Gui.so.5`QWindowSystemInterface::sendWindowSystemEvents(QFlags<QEventLoop::ProcessEventsFlag>) + 187
frame #120: 0x00007fffe25d732e libQt5XcbQpa.so.5`___lldb_unnamed_symbol448$$libQt5XcbQpa.so.5 + 30
frame #121: 0x00007fffeaa3ffbd libglib-2.0.so.0`g_main_context_dispatch + 637
frame #122: 0x00007fffeaa40240 libglib-2.0.so.0`___lldb_unnamed_symbol197$$libglib-2.0.so.0 + 528
frame #123: 0x00007fffeaa402e3 libglib-2.0.so.0`g_main_context_iteration + 51
frame #124: 0x00007ffff2401565 libQt5Core.so.5`QEventDispatcherGlib::processEvents(QFlags<QEventLoop::ProcessEventsFlag>) + 101
frame #125: 0x00007ffff23a84db libQt5Core.so.5`QEventLoop::exec(QFlags<QEventLoop::ProcessEventsFlag>) + 299
frame #126: 0x00007ffff23b0246 libQt5Core.so.5`QCoreApplication::exec() + 150
frame #127: 0x000055555556aeb3 qgis`main(argc=1, argv=0x00007fffffffdcf8) at main.cpp:1637:26
frame #128: 0x00007ffff1d5c0b3 libc.so.6`__libc_start_main + 243
frame #129: 0x00005555555600ee qgis`_start + 46
**QGIS and OS versions**
QGIS version | 3.16.1-Hannover | QGIS code revision | 37972328b7
-- | -- | -- | --
Compiled against Qt | 5.12.8 | Running against Qt | 5.12.8
Compiled against GDAL/OGR | 3.0.4 | Running against GDAL/OGR | 3.0.4
Compiled against GEOS | 3.8.0-CAPI-1.13.1 | Running against GEOS | 3.8.0-CAPI-1.13.1
Compiled against SQLite | 3.31.1 | Running against SQLite | 3.31.1
PostgreSQL Client Version | 12.5 (Ubuntu 12.5-0ubuntu0.20.04.1) | SpatiaLite Version | 4.3.0a
QWT Version | 6.1.4 | QScintilla2 Version | 2.11.2
Compiled against PROJ | 6.3.1 | Running against PROJ | Rel. 6.3.1, February 10th, 2020
OS Version | Ubuntu 20.04.1 LTS | This copy of QGIS writes debugging output.
Active python plugins | plugin_reloader; IPyConsole; pluginbuilder3; copernicushub; firstaid; epyris; db_manager; processing; MetaSearch
|
process
|
crash closing alg model gui after run from history with workaround describe the bug systematic crash every time i close the algorithm after run only if algorithm is executed from processing history the workaround to avoid the crash follow instruction as in how to reproduce here a model with a kind of alternative execution basing on expression this execution run correctly but cash qgis after closing alg only if alg is run from processing history see also simplified model steps to reproduce install attached zipped model create a polygon the alg do a buffer run the model from processing the algorithm end successfully close alg interface all it s ok open processing history open last executed algorithm run the algorithm the algorithm end successfully close algorithm interface crash this is lldb traceback thread name qgis stop reason signal sigsegv invalid address fault address frame libstdc so dynamic cast frame analysis so sipsubclass qgsgraphdirector sipcppret at qgsgraphdirector sip frame sip cpython linux gnu so lldb unnamed sip cpython linux gnu so frame sip cpython linux gnu so sip api convert from type frame qtcore cpython linux gnu so lldb unnamed qtcore cpython linux gnu so frame qtcore cpython linux gnu so lldb unnamed qtcore cpython linux gnu so frame qtcore cpython linux gnu so lldb unnamed qtcore cpython linux gnu so frame qtcore cpython linux gnu so lldb unnamed qtcore cpython linux gnu so frame qtcore cpython linux gnu so lldb unnamed qtcore cpython linux gnu so frame qtcore cpython linux gnu so lldb unnamed qtcore cpython linux gnu so frame qtcore cpython linux gnu so lldb unnamed qtcore cpython linux gnu so frame qtcore cpython linux gnu so lldb unnamed qtcore cpython linux gnu so frame qtcore cpython linux gnu so lldb unnamed qtcore cpython linux gnu so frame qtcore cpython linux gnu so lldb unnamed qtcore cpython linux gnu so frame qtcore cpython linux gnu so lldb unnamed qtcore cpython linux gnu so frame qtcore cpython linux gnu so lldb unnamed qtcore cpython linux gnu so frame qtcore cpython linux gnu so lldb unnamed qtcore cpython linux gnu so frame qtcore cpython linux gnu so lldb unnamed qtcore cpython linux gnu so frame qtcore cpython linux gnu so lldb unnamed qtcore cpython linux gnu so frame qtcore cpython linux gnu so lldb unnamed qtcore cpython linux gnu so frame qtcore cpython linux gnu so lldb unnamed qtcore cpython linux gnu so frame qtcore cpython linux gnu so lldb unnamed qtcore cpython linux gnu so frame qtcore cpython linux gnu so lldb unnamed qtcore cpython linux gnu so frame qtcore cpython linux gnu so lldb unnamed qtcore cpython linux gnu so frame qtcore cpython linux gnu so lldb unnamed qtcore cpython linux gnu so frame qtcore cpython linux gnu so lldb unnamed qtcore cpython linux gnu so frame qtcore cpython linux gnu so lldb unnamed qtcore cpython linux gnu so frame qtcore cpython linux gnu so lldb unnamed qtcore cpython linux gnu so frame qtcore cpython linux gnu so lldb unnamed qtcore cpython linux gnu so frame sip cpython linux gnu so lldb unnamed sip cpython linux gnu so frame gui so meth qgsprocessingalgorithmdialogbase results sipself sipargs at sip cpp frame so lldb unnamed so frame so pyobject maketpcall frame so lldb unnamed so frame so pyeval evalframedefault frame so pyeval evalcodewithname frame so pyfunction vectorcall frame so lldb unnamed so frame so pyeval evalframedefault frame so pyeval evalcodewithname frame so pyeval evalcodeex frame so pyeval evalcode frame so lldb unnamed so frame so pyrun stringflags frame so lldb unnamed so frame so lldb unnamed so frame so lldb unnamed so frame so pyeval evalframedefault frame so lldb unnamed so frame so lldb unnamed so frame so pyvectorcall call frame qtcore cpython linux gnu so lldb unnamed qtcore cpython linux gnu so frame qtcore cpython linux gnu so lldb unnamed qtcore cpython linux gnu so frame qtcore cpython linux gnu so lldb unnamed qtcore cpython linux gnu so frame qtcore cpython linux gnu so lldb unnamed qtcore cpython linux gnu so frame so qmetaobject activate qobject int int void frame so qabstractitemview doubleclicked qmodelindex const frame so qtreeview mousedoubleclickevent qmouseevent frame qtwidgets cpython linux gnu so lldb unnamed qtwidgets cpython linux gnu so frame so qwidget event qevent frame so qframe event qevent frame so qabstractitemview viewportevent qevent frame so qtreeview viewportevent qevent frame qtwidgets cpython linux gnu so lldb unnamed qtwidgets cpython linux gnu so frame so qcoreapplicationprivate sendthroughobjecteventfilters qobject qevent frame so qapplicationprivate notify helper qobject qevent frame so qapplication notify qobject qevent frame libqgis core so qgsapplication notify this receiver event at qgsapplication cpp frame so qcoreapplication qobject qevent frame so qapplicationprivate sendmouseevent qwidget qmouseevent qwidget qwidget qwidget qpointer bool bool frame so lldb unnamed so frame so lldb unnamed so frame so qapplicationprivate notify helper qobject qevent frame so qapplication notify qobject qevent frame libqgis core so qgsapplication notify this receiver event at qgsapplication cpp frame so qcoreapplication qobject qevent frame so qguiapplicationprivate processmouseevent qwindowsysteminterfaceprivate mouseevent frame so qguiapplicationprivate processwindowsystemevent qwindowsysteminterfaceprivate windowsystemevent frame so qwindowsysteminterface sendwindowsystemevents qflags frame so lldb unnamed so frame libglib so g main context dispatch frame libglib so lldb unnamed libglib so frame libglib so g main context iteration frame so qeventdispatcherglib processevents qflags frame so qeventloop exec qflags frame so qdialog exec frame qtwidgets cpython linux gnu so lldb unnamed qtwidgets cpython linux gnu so frame so lldb unnamed so frame so pyobject maketpcall frame so lldb unnamed so frame so pyeval evalframedefault frame so lldb unnamed so frame so lldb unnamed so frame so pyvectorcall call frame qtcore cpython linux gnu so lldb unnamed qtcore cpython linux gnu so frame qtcore cpython linux gnu so lldb unnamed qtcore cpython linux gnu so frame qtcore cpython linux gnu so lldb unnamed qtcore cpython linux gnu so frame qtcore cpython linux gnu so lldb unnamed qtcore cpython linux gnu so frame so qmetaobject activate qobject int int void frame so qaction triggered bool frame so qaction activate qaction actionevent frame so lldb unnamed so frame so qabstractbutton mousereleaseevent qmouseevent frame so qtoolbutton mousereleaseevent qmouseevent frame so qwidget event qevent frame so qtoolbutton event qevent frame so qapplicationprivate notify helper qobject qevent frame so qapplication notify qobject qevent frame libqgis core so qgsapplication notify this receiver event at qgsapplication cpp frame so qcoreapplication qobject qevent frame so qapplicationprivate sendmouseevent qwidget qmouseevent qwidget qwidget qwidget qpointer bool bool frame so lldb unnamed so frame so lldb unnamed so frame so qapplicationprivate notify helper qobject qevent frame so qapplication notify qobject qevent frame libqgis core so qgsapplication notify this receiver event at qgsapplication cpp frame so qcoreapplication qobject qevent frame so qguiapplicationprivate processmouseevent qwindowsysteminterfaceprivate mouseevent frame so qguiapplicationprivate processwindowsystemevent qwindowsysteminterfaceprivate windowsystemevent frame so qwindowsysteminterface sendwindowsystemevents qflags frame so lldb unnamed so frame libglib so g main context dispatch frame libglib so lldb unnamed libglib so frame libglib so g main context iteration frame so qeventdispatcherglib processevents qflags frame so qeventloop exec qflags frame so qcoreapplication exec frame qgis main argc argv at main cpp frame libc so libc start main frame qgis start qgis and os versions qgis version hannover qgis code revision compiled against qt running against qt compiled against gdal ogr running against gdal ogr compiled against geos capi running against geos capi compiled against sqlite running against sqlite postgresql client version ubuntu spatialite version qwt version version compiled against proj running against proj rel february os version ubuntu lts this copy of qgis writes debugging output active python plugins plugin reloader ipyconsole copernicushub firstaid epyris db manager processing metasearch
| 1
|
7,467
| 10,563,401,502
|
IssuesEvent
|
2019-10-04 20:52:26
|
pelias/pelias
|
https://api.github.com/repos/pelias/pelias
|
closed
|
Paris locality has different hiearchies in Geonames vs WOF
|
processed
|
Searching for [Paris, France](http://pelias.github.io/compare/#/v1/search%3Ftext=paris,%20france) results in
```
1) Paris, France (geonames locality) <--- see table below
2) Paris, France (geonames county)
3) Paris, France (whosonfirst locality) <--- see table below
4) Paris 15, Paris, France
5) Paris 18, Paris, France
6) Paris 20, Paris, France
7) Paris 19, Paris, France
8) Paris 13, Paris, France
9) Paris 16, Paris, France
10) Paris 17, Paris, France
```
| prop | val:geonames | val:whosonfirst |
| --- | --- | --- |
| layer | locality | locality |
| source | geonames | whosonfirst |
| name | Paris | Paris |
| country_gid | 85633147 | 85633147 |
| macroregion_gid | 404227465 | 404227465 |
| region_gid | 85683497 | 85683497 |
| macrocounty_gid | 404227749 | 404227749 |
| **_county**_ | **_Paris-4E-Arrondissement**_ | **_Paris-7E-Arrondissement**_ |
| **_county_gid**_ | **_102067591**_ | **_102068177**_ |
| locality | Paris | Paris |
| **_locality_gid**_ | **_whosonfirst:locality:2988506**_ | **_whosonfirst:locality:101751119**_ |
This causes the deduper to fail at query time and so we see what looks to be identical results.
|
1.0
|
Paris locality has different hiearchies in Geonames vs WOF - Searching for [Paris, France](http://pelias.github.io/compare/#/v1/search%3Ftext=paris,%20france) results in
```
1) Paris, France (geonames locality) <--- see table below
2) Paris, France (geonames county)
3) Paris, France (whosonfirst locality) <--- see table below
4) Paris 15, Paris, France
5) Paris 18, Paris, France
6) Paris 20, Paris, France
7) Paris 19, Paris, France
8) Paris 13, Paris, France
9) Paris 16, Paris, France
10) Paris 17, Paris, France
```
| prop | val:geonames | val:whosonfirst |
| --- | --- | --- |
| layer | locality | locality |
| source | geonames | whosonfirst |
| name | Paris | Paris |
| country_gid | 85633147 | 85633147 |
| macroregion_gid | 404227465 | 404227465 |
| region_gid | 85683497 | 85683497 |
| macrocounty_gid | 404227749 | 404227749 |
| **_county**_ | **_Paris-4E-Arrondissement**_ | **_Paris-7E-Arrondissement**_ |
| **_county_gid**_ | **_102067591**_ | **_102068177**_ |
| locality | Paris | Paris |
| **_locality_gid**_ | **_whosonfirst:locality:2988506**_ | **_whosonfirst:locality:101751119**_ |
This causes the deduper to fail at query time and so we see what looks to be identical results.
|
process
|
paris locality has different hiearchies in geonames vs wof searching for results in paris france geonames locality see table below paris france geonames county paris france whosonfirst locality see table below paris paris france paris paris france paris paris france paris paris france paris paris france paris paris france paris paris france prop val geonames val whosonfirst layer locality locality source geonames whosonfirst name paris paris country gid macroregion gid region gid macrocounty gid county paris arrondissement paris arrondissement county gid locality paris paris locality gid whosonfirst locality whosonfirst locality this causes the deduper to fail at query time and so we see what looks to be identical results
| 1
|
16,137
| 20,403,384,940
|
IssuesEvent
|
2022-02-23 00:29:12
|
BCDevOps/nr-apm-stack
|
https://api.github.com/repos/BCDevOps/nr-apm-stack
|
closed
|
Switch from daily to monthly index
|
stack/lambda/event-stream-processing stack/elastic-search sme-recommendation use-case/access-logs
|
- [ ] Update Index template from 1 to 4 shards
- [ ] Update Lambda Code index name assignment
|
1.0
|
Switch from daily to monthly index - - [ ] Update Index template from 1 to 4 shards
- [ ] Update Lambda Code index name assignment
|
process
|
switch from daily to monthly index update index template from to shards update lambda code index name assignment
| 1
|
12,968
| 15,344,521,900
|
IssuesEvent
|
2021-02-28 01:39:33
|
Today-I-Learn/backend-study
|
https://api.github.com/repos/Today-I-Learn/backend-study
|
closed
|
프로세스 스케줄러의 종류에 대해서 설명해주실 수 있나요?
|
OS process
|
### 프로세스 스케줄러의 종류에 대해서 설명해주실 수 있나요?
- [x] 단기, 중기, 장기 스케줄러에 대해서
- [x] 각각의 스케줄러를 구분하는 기준에 대해서
- [x] 프로세스의 상태와 스케줄러가 영향을 미치는 범위에 대해서
|
1.0
|
프로세스 스케줄러의 종류에 대해서 설명해주실 수 있나요? - ### 프로세스 스케줄러의 종류에 대해서 설명해주실 수 있나요?
- [x] 단기, 중기, 장기 스케줄러에 대해서
- [x] 각각의 스케줄러를 구분하는 기준에 대해서
- [x] 프로세스의 상태와 스케줄러가 영향을 미치는 범위에 대해서
|
process
|
프로세스 스케줄러의 종류에 대해서 설명해주실 수 있나요 프로세스 스케줄러의 종류에 대해서 설명해주실 수 있나요 단기 중기 장기 스케줄러에 대해서 각각의 스케줄러를 구분하는 기준에 대해서 프로세스의 상태와 스케줄러가 영향을 미치는 범위에 대해서
| 1
|
56,866
| 14,078,797,137
|
IssuesEvent
|
2020-11-04 14:03:50
|
delaval-htps/ProjetDevJava
|
https://api.github.com/repos/delaval-htps/ProjetDevJava
|
opened
|
CVE-2019-11358 (Medium) detected in multiple libraries
|
security vulnerability
|
## CVE-2019-11358 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>jquery-3.3.1.js</b>, <b>jquery-3.2.1.js</b>, <b>jquery-3.2.1.min.js</b></p></summary>
<p>
<details><summary><b>jquery-3.3.1.js</b></p></summary>
<p>JavaScript library for DOM operations</p>
<p>Library home page: <a href="https://cdnjs.cloudflare.com/ajax/libs/jquery/3.3.1/jquery.js">https://cdnjs.cloudflare.com/ajax/libs/jquery/3.3.1/jquery.js</a></p>
<p>Path to dependency file: ProjetDevJava/ProjetDevJava/src/main/webapp/resources/docs/javadoc/constant-values.html</p>
<p>Path to vulnerable library: ProjetDevJava/ProjetDevJava/src/main/webapp/resources/docs/javadoc/jquery/jquery-3.3.1.js,ProjetDevJava/ProjetDevJava/src/main/webapp/resources/docs/javadoc/fr/doriandelaval/config/../../../jquery/jquery-3.3.1.js,ProjetDevJava/ProjetDevJava/src/main/webapp/resources/docs/javadoc/fr/doriandelaval/config/class-use/../../../../jquery/jquery-3.3.1.js,ProjetDevJava/ProjetDevJava/src/main/webapp/resources/docs/javadoc/fr/doriandelaval/controller/class-use/../../../../jquery/jquery-3.3.1.js,ProjetDevJava/ProjetDevJava/src/main/webapp/resources/docs/javadoc/fr/doriandelaval/controller/../../../jquery/jquery-3.3.1.js,ProjetDevJava/ProjetDevJava/src/main/webapp/resources/docs/javadoc/fr/doriandelaval/dao/class-use/../../../../jquery/jquery-3.3.1.js,ProjetDevJava/ProjetDevJava/src/main/webapp/resources/docs/javadoc/index-files/../jquery/jquery-3.3.1.js,ProjetDevJava/ProjetDevJava/src/main/webapp/resources/docs/javadoc/jquery/external/jquery/jquery.js,ProjetDevJava/ProjetDevJava/src/main/webapp/resources/docs/javadoc/fr/doriandelaval/service/class-use/../../../../jquery/jquery-3.3.1.js,ProjetDevJava/ProjetDevJava/src/main/webapp/resources/docs/javadoc/jquery/jquery-3.3.1.js,ProjetDevJava/ProjetDevJava/src/main/webapp/resources/docs/javadoc/fr/doriandelaval/entity/../../../jquery/jquery-3.3.1.js,ProjetDevJava/ProjetDevJava/src/main/webapp/resources/docs/javadoc/fr/doriandelaval/service/../../../jquery/jquery-3.3.1.js,ProjetDevJava/ProjetDevJava/src/main/webapp/resources/docs/javadoc/fr/doriandelaval/dao/../../../jquery/jquery-3.3.1.js,ProjetDevJava/ProjetDevJava/src/main/webapp/resources/docs/javadoc/fr/doriandelaval/entity/class-use/../../../../jquery/jquery-3.3.1.js</p>
<p>
Dependency Hierarchy:
- :x: **jquery-3.3.1.js** (Vulnerable Library)
</details>
<details><summary><b>jquery-3.2.1.js</b></p></summary>
<p>JavaScript library for DOM operations</p>
<p>Library home page: <a href="https://cdnjs.cloudflare.com/ajax/libs/jquery/3.2.1/jquery.js">https://cdnjs.cloudflare.com/ajax/libs/jquery/3.2.1/jquery.js</a></p>
<p>Path to vulnerable library: ProjetDevJava/ProjetDevJava/src/main/webapp/resources/docs/html/_static/jquery-3.2.1.js,ProjetDevJava/docs/build/html/_static/jquery-3.2.1.js</p>
<p>
Dependency Hierarchy:
- :x: **jquery-3.2.1.js** (Vulnerable Library)
</details>
<details><summary><b>jquery-3.2.1.min.js</b></p></summary>
<p>JavaScript library for DOM operations</p>
<p>Library home page: <a href="https://cdnjs.cloudflare.com/ajax/libs/jquery/3.2.1/jquery.min.js">https://cdnjs.cloudflare.com/ajax/libs/jquery/3.2.1/jquery.min.js</a></p>
<p>Path to vulnerable library: ProjetDevJava/ProjetDevJava/src/main/webapp/resources/docs/html/_static/jquery.js,ProjetDevJava/ProjetDevJava/src/main/webapp/resources/docs/html/_static/jquery.js,ProjetDevJava/docs/build/html/_static/jquery.js,ProjetDevJava/docs/build/html/_static/jquery.js</p>
<p>
Dependency Hierarchy:
- :x: **jquery-3.2.1.min.js** (Vulnerable Library)
</details>
<p>Found in HEAD commit: <a href="https://github.com/delaval-htps/ProjetDevJava/commit/427ee54f9d003857decfb23ad22eedb05a12faa1">427ee54f9d003857decfb23ad22eedb05a12faa1</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
jQuery before 3.4.0, as used in Drupal, Backdrop CMS, and other products, mishandles jQuery.extend(true, {}, ...) because of Object.prototype pollution. If an unsanitized source object contained an enumerable __proto__ property, it could extend the native Object.prototype.
<p>Publish Date: 2019-04-20
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2019-11358>CVE-2019-11358</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>6.1</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: Required
- Scope: Changed
- Impact Metrics:
- Confidentiality Impact: Low
- Integrity Impact: Low
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-11358">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-11358</a></p>
<p>Release Date: 2019-04-20</p>
<p>Fix Resolution: 3.4.0</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
True
|
CVE-2019-11358 (Medium) detected in multiple libraries - ## CVE-2019-11358 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>jquery-3.3.1.js</b>, <b>jquery-3.2.1.js</b>, <b>jquery-3.2.1.min.js</b></p></summary>
<p>
<details><summary><b>jquery-3.3.1.js</b></p></summary>
<p>JavaScript library for DOM operations</p>
<p>Library home page: <a href="https://cdnjs.cloudflare.com/ajax/libs/jquery/3.3.1/jquery.js">https://cdnjs.cloudflare.com/ajax/libs/jquery/3.3.1/jquery.js</a></p>
<p>Path to dependency file: ProjetDevJava/ProjetDevJava/src/main/webapp/resources/docs/javadoc/constant-values.html</p>
<p>Path to vulnerable library: ProjetDevJava/ProjetDevJava/src/main/webapp/resources/docs/javadoc/jquery/jquery-3.3.1.js,ProjetDevJava/ProjetDevJava/src/main/webapp/resources/docs/javadoc/fr/doriandelaval/config/../../../jquery/jquery-3.3.1.js,ProjetDevJava/ProjetDevJava/src/main/webapp/resources/docs/javadoc/fr/doriandelaval/config/class-use/../../../../jquery/jquery-3.3.1.js,ProjetDevJava/ProjetDevJava/src/main/webapp/resources/docs/javadoc/fr/doriandelaval/controller/class-use/../../../../jquery/jquery-3.3.1.js,ProjetDevJava/ProjetDevJava/src/main/webapp/resources/docs/javadoc/fr/doriandelaval/controller/../../../jquery/jquery-3.3.1.js,ProjetDevJava/ProjetDevJava/src/main/webapp/resources/docs/javadoc/fr/doriandelaval/dao/class-use/../../../../jquery/jquery-3.3.1.js,ProjetDevJava/ProjetDevJava/src/main/webapp/resources/docs/javadoc/index-files/../jquery/jquery-3.3.1.js,ProjetDevJava/ProjetDevJava/src/main/webapp/resources/docs/javadoc/jquery/external/jquery/jquery.js,ProjetDevJava/ProjetDevJava/src/main/webapp/resources/docs/javadoc/fr/doriandelaval/service/class-use/../../../../jquery/jquery-3.3.1.js,ProjetDevJava/ProjetDevJava/src/main/webapp/resources/docs/javadoc/jquery/jquery-3.3.1.js,ProjetDevJava/ProjetDevJava/src/main/webapp/resources/docs/javadoc/fr/doriandelaval/entity/../../../jquery/jquery-3.3.1.js,ProjetDevJava/ProjetDevJava/src/main/webapp/resources/docs/javadoc/fr/doriandelaval/service/../../../jquery/jquery-3.3.1.js,ProjetDevJava/ProjetDevJava/src/main/webapp/resources/docs/javadoc/fr/doriandelaval/dao/../../../jquery/jquery-3.3.1.js,ProjetDevJava/ProjetDevJava/src/main/webapp/resources/docs/javadoc/fr/doriandelaval/entity/class-use/../../../../jquery/jquery-3.3.1.js</p>
<p>
Dependency Hierarchy:
- :x: **jquery-3.3.1.js** (Vulnerable Library)
</details>
<details><summary><b>jquery-3.2.1.js</b></p></summary>
<p>JavaScript library for DOM operations</p>
<p>Library home page: <a href="https://cdnjs.cloudflare.com/ajax/libs/jquery/3.2.1/jquery.js">https://cdnjs.cloudflare.com/ajax/libs/jquery/3.2.1/jquery.js</a></p>
<p>Path to vulnerable library: ProjetDevJava/ProjetDevJava/src/main/webapp/resources/docs/html/_static/jquery-3.2.1.js,ProjetDevJava/docs/build/html/_static/jquery-3.2.1.js</p>
<p>
Dependency Hierarchy:
- :x: **jquery-3.2.1.js** (Vulnerable Library)
</details>
<details><summary><b>jquery-3.2.1.min.js</b></p></summary>
<p>JavaScript library for DOM operations</p>
<p>Library home page: <a href="https://cdnjs.cloudflare.com/ajax/libs/jquery/3.2.1/jquery.min.js">https://cdnjs.cloudflare.com/ajax/libs/jquery/3.2.1/jquery.min.js</a></p>
<p>Path to vulnerable library: ProjetDevJava/ProjetDevJava/src/main/webapp/resources/docs/html/_static/jquery.js,ProjetDevJava/ProjetDevJava/src/main/webapp/resources/docs/html/_static/jquery.js,ProjetDevJava/docs/build/html/_static/jquery.js,ProjetDevJava/docs/build/html/_static/jquery.js</p>
<p>
Dependency Hierarchy:
- :x: **jquery-3.2.1.min.js** (Vulnerable Library)
</details>
<p>Found in HEAD commit: <a href="https://github.com/delaval-htps/ProjetDevJava/commit/427ee54f9d003857decfb23ad22eedb05a12faa1">427ee54f9d003857decfb23ad22eedb05a12faa1</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
jQuery before 3.4.0, as used in Drupal, Backdrop CMS, and other products, mishandles jQuery.extend(true, {}, ...) because of Object.prototype pollution. If an unsanitized source object contained an enumerable __proto__ property, it could extend the native Object.prototype.
<p>Publish Date: 2019-04-20
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2019-11358>CVE-2019-11358</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>6.1</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: Required
- Scope: Changed
- Impact Metrics:
- Confidentiality Impact: Low
- Integrity Impact: Low
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-11358">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-11358</a></p>
<p>Release Date: 2019-04-20</p>
<p>Fix Resolution: 3.4.0</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
non_process
|
cve medium detected in multiple libraries cve medium severity vulnerability vulnerable libraries jquery js jquery js jquery min js jquery js javascript library for dom operations library home page a href path to dependency file projetdevjava projetdevjava src main webapp resources docs javadoc constant values html path to vulnerable library projetdevjava projetdevjava src main webapp resources docs javadoc jquery jquery js projetdevjava projetdevjava src main webapp resources docs javadoc fr doriandelaval config jquery jquery js projetdevjava projetdevjava src main webapp resources docs javadoc fr doriandelaval config class use jquery jquery js projetdevjava projetdevjava src main webapp resources docs javadoc fr doriandelaval controller class use jquery jquery js projetdevjava projetdevjava src main webapp resources docs javadoc fr doriandelaval controller jquery jquery js projetdevjava projetdevjava src main webapp resources docs javadoc fr doriandelaval dao class use jquery jquery js projetdevjava projetdevjava src main webapp resources docs javadoc index files jquery jquery js projetdevjava projetdevjava src main webapp resources docs javadoc jquery external jquery jquery js projetdevjava projetdevjava src main webapp resources docs javadoc fr doriandelaval service class use jquery jquery js projetdevjava projetdevjava src main webapp resources docs javadoc jquery jquery js projetdevjava projetdevjava src main webapp resources docs javadoc fr doriandelaval entity jquery jquery js projetdevjava projetdevjava src main webapp resources docs javadoc fr doriandelaval service jquery jquery js projetdevjava projetdevjava src main webapp resources docs javadoc fr doriandelaval dao jquery jquery js projetdevjava projetdevjava src main webapp resources docs javadoc fr doriandelaval entity class use jquery jquery js dependency hierarchy x jquery js vulnerable library jquery js javascript library for dom operations library home page a href path to vulnerable library projetdevjava projetdevjava src main webapp resources docs html static jquery js projetdevjava docs build html static jquery js dependency hierarchy x jquery js vulnerable library jquery min js javascript library for dom operations library home page a href path to vulnerable library projetdevjava projetdevjava src main webapp resources docs html static jquery js projetdevjava projetdevjava src main webapp resources docs html static jquery js projetdevjava docs build html static jquery js projetdevjava docs build html static jquery js dependency hierarchy x jquery min js vulnerable library found in head commit a href found in base branch master vulnerability details jquery before as used in drupal backdrop cms and other products mishandles jquery extend true because of object prototype pollution if an unsanitized source object contained an enumerable proto property it could extend the native object prototype publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction required scope changed impact metrics confidentiality impact low integrity impact low availability impact none for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution step up your open source security game with whitesource
| 0
|
19,950
| 26,426,781,873
|
IssuesEvent
|
2023-01-14 09:17:31
|
aliasadidev/vscode-npm-gui
|
https://api.github.com/repos/aliasadidev/vscode-npm-gui
|
closed
|
[bug] csproj property being modified incorrectly
|
bug In Process
|
Not sure if this is related to #34 or was an existing issue. Also not sure if multiple properties are affected or this one is an edge case.
The empty `<OutputPath></OutputPath>` property is being changed to `<OutputPathh/>` with an extra 'h', and another 'h' is added each time the update button is used. (Even if on the latest version and nothing is actually updating).
This example csproj file with a diff after doing an update: (with additional entries for testing)
```diff
<Project Sdk="Microsoft.NET.Sdk">
<PropertyGroup>
<TargetFramework>netstandard2.0</TargetFramework>
<DebugType>embedded</DebugType>
<Version>1.0.0</Version>
</PropertyGroup>
<PropertyGroup Condition="'$(Configuration)|$(Platform)'=='Debug|AnyCPU'">
<DocumentationFile>.\Project.xml</DocumentationFile>
- <OutputPath></OutputPath>
+ <OutputPathh/>
</PropertyGroup>
<PropertyGroup Condition="'$(Configuration)|$(Platform)'=='Release|AnyCPU'">
<DocumentationFile>.\Project.xml</DocumentationFile>
- <OutputPath/>
+ <OutputPathh/>
</PropertyGroup>
<PropertyGroup Condition="'$(Configuration)|$(Platform)'=='Release|AnyCPU'">
<DocumentationFile>.\Project.xml</DocumentationFile>
- <OutputPathhhhhh/>
+ <OutputPathhhhhhh/>
</PropertyGroup>
<ItemGroup>
<PackageReference Include="Dapper" Version="2.0.123" />
<PackageReference Include="MySql.Data" Version="8.0.31" />
<PackageReference Include="Npgsql" Version="6.0.7" />
<PackageReference Include="System.Configuration.ConfigurationManager" Version="6.0.1" />
<PackageReference Include="System.Data.SqlClient" Version="4.8.4" />
</ItemGroup>
<Target Name="PrepublishScript" BeforeTargets="PrepareForPublish">
<ItemGroup>
<DocFile Include="bin\$(Configuration)\$(TargetFramework)\*.xml" />
</ItemGroup>
<Copy SourceFiles="@(DocFile)" DestinationFolder="$(PublishDir)" SkipUnchangedFiles="false" />
</Target>
</Project>
```
|
1.0
|
[bug] csproj property being modified incorrectly - Not sure if this is related to #34 or was an existing issue. Also not sure if multiple properties are affected or this one is an edge case.
The empty `<OutputPath></OutputPath>` property is being changed to `<OutputPathh/>` with an extra 'h', and another 'h' is added each time the update button is used. (Even if on the latest version and nothing is actually updating).
This example csproj file with a diff after doing an update: (with additional entries for testing)
```diff
<Project Sdk="Microsoft.NET.Sdk">
<PropertyGroup>
<TargetFramework>netstandard2.0</TargetFramework>
<DebugType>embedded</DebugType>
<Version>1.0.0</Version>
</PropertyGroup>
<PropertyGroup Condition="'$(Configuration)|$(Platform)'=='Debug|AnyCPU'">
<DocumentationFile>.\Project.xml</DocumentationFile>
- <OutputPath></OutputPath>
+ <OutputPathh/>
</PropertyGroup>
<PropertyGroup Condition="'$(Configuration)|$(Platform)'=='Release|AnyCPU'">
<DocumentationFile>.\Project.xml</DocumentationFile>
- <OutputPath/>
+ <OutputPathh/>
</PropertyGroup>
<PropertyGroup Condition="'$(Configuration)|$(Platform)'=='Release|AnyCPU'">
<DocumentationFile>.\Project.xml</DocumentationFile>
- <OutputPathhhhhh/>
+ <OutputPathhhhhhh/>
</PropertyGroup>
<ItemGroup>
<PackageReference Include="Dapper" Version="2.0.123" />
<PackageReference Include="MySql.Data" Version="8.0.31" />
<PackageReference Include="Npgsql" Version="6.0.7" />
<PackageReference Include="System.Configuration.ConfigurationManager" Version="6.0.1" />
<PackageReference Include="System.Data.SqlClient" Version="4.8.4" />
</ItemGroup>
<Target Name="PrepublishScript" BeforeTargets="PrepareForPublish">
<ItemGroup>
<DocFile Include="bin\$(Configuration)\$(TargetFramework)\*.xml" />
</ItemGroup>
<Copy SourceFiles="@(DocFile)" DestinationFolder="$(PublishDir)" SkipUnchangedFiles="false" />
</Target>
</Project>
```
|
process
|
csproj property being modified incorrectly not sure if this is related to or was an existing issue also not sure if multiple properties are affected or this one is an edge case the empty property is being changed to with an extra h and another h is added each time the update button is used even if on the latest version and nothing is actually updating this example csproj file with a diff after doing an update with additional entries for testing diff embedded project xml project xml project xml
| 1
|
3,120
| 6,151,194,507
|
IssuesEvent
|
2017-06-28 01:27:37
|
metabase/metabase
|
https://api.github.com/repos/metabase/metabase
|
closed
|
Drag to filter not working for nested queries
|
Bug Nested Queries Query Processor
|
steps to repro that drag-to-filter issue:
- save a question that has at least a metric column and a date column (it might also be the case that it must have a date filter as part of its definition, e.g. “birthday is between 1980 and 1990”). mine was just the People table with that `between` filter on it
- create a new quesiton using that saved question, and do something like count of rows by birthdate, then plot it as a line or bar chart
- try using the drag-to-filter
- you’ll see two things that are wrong:
1. you’ll incorrectly get a second breakout instead of it just adding the filter (this is a separate issue that i reported already),
2. you’ll get this error message: “Must be an OrderableValue or OrderableValuePlaceholder”
|
1.0
|
Drag to filter not working for nested queries - steps to repro that drag-to-filter issue:
- save a question that has at least a metric column and a date column (it might also be the case that it must have a date filter as part of its definition, e.g. “birthday is between 1980 and 1990”). mine was just the People table with that `between` filter on it
- create a new quesiton using that saved question, and do something like count of rows by birthdate, then plot it as a line or bar chart
- try using the drag-to-filter
- you’ll see two things that are wrong:
1. you’ll incorrectly get a second breakout instead of it just adding the filter (this is a separate issue that i reported already),
2. you’ll get this error message: “Must be an OrderableValue or OrderableValuePlaceholder”
|
process
|
drag to filter not working for nested queries steps to repro that drag to filter issue save a question that has at least a metric column and a date column it might also be the case that it must have a date filter as part of its definition e g “birthday is between and ” mine was just the people table with that between filter on it create a new quesiton using that saved question and do something like count of rows by birthdate then plot it as a line or bar chart try using the drag to filter you’ll see two things that are wrong you’ll incorrectly get a second breakout instead of it just adding the filter this is a separate issue that i reported already you’ll get this error message “must be an orderablevalue or orderablevalueplaceholder”
| 1
|
9,031
| 8,507,691,046
|
IssuesEvent
|
2018-10-30 19:48:05
|
terraform-providers/terraform-provider-aws
|
https://api.github.com/repos/terraform-providers/terraform-provider-aws
|
closed
|
CloudWatch Alarm ignores Threshold when comparison op is LessThan*
|
bug service/cloudwatch
|
If you try to create a cloudwatch alarm that steps down on say, CPU Reservation for an ECS cluster, it creates something like this in AWS:
https://screencast.com/t/iNySsTAjcG
As compared to the GreaterThan version:
https://screencast.com/t/BZrmNmV4yQkB
### Terraform Version
10.8.0
### Affected Resource(s)
Please list the resources as a list, for example:
- aws_cloudwatch_metric_alarm
If this issue appears to affect multiple resources, it may be an issue with Terraform's core, so please mention this.
### Terraform Configuration Files
```hcl
variable "name" {
description = "The name of the ECS Cluster."
}
variable "min_size" {
description = "The min number of EC2 Instances to run in the ECS Cluster."
default=1
}
variable "max_size" {
description = "The max number of EC2 Instances to run in the ECS Cluster."
default=2
}
variable "instance_type" {
description = "The type of EC2 Instance to deploy in the ECS Cluster (e.g. t2.micro)."
}
variable "vpc_id" {
description = "The ID of the VPC in which to deploy the ECS Cluster."
}
variable "subnet_ids" {
description = "The subnet IDs in which to deploy the EC2 Instances of the ECS Cluster."
}
variable "key_pair_name" {
description = "The name of an EC2 Key Pair to associate with each EC2 Instance in the ECS Cluster. Leave blank to not associate a Key Pair."
default = "amdirent-aws"
}
variable "allow_ssh_from_cidr_blocks" {
description = "The list of CIDR-formatted IP address ranges from which the EC2 Instances in the ECS Cluster should accept SSH connections."
type = "list"
default = ["0.0.0.0/0"]
}
# ---------------------------------------------------------------------------------------------------------------------
# CREATE AN ECS CLUSTER
# ---------------------------------------------------------------------------------------------------------------------
resource "aws_ecs_cluster" "example_cluster" {
name = "${var.name}"
}
# ---------------------------------------------------------------------------------------------------------------------
# DEPLOY AN AUTO SCALING GROUP (ASG)
# Each EC2 Instance in the ASG will register as an ECS Cluster Instance.
# ---------------------------------------------------------------------------------------------------------------------
resource "aws_autoscaling_policy" "high_cpu_policy" {
name = "${var.name}-high-cpu-asg-policy"
adjustment_type = "ChangeInCapacity"
autoscaling_group_name = "${aws_autoscaling_group.ecs_cluster_instances.name}"
policy_type="StepScaling"
step_adjustment {
scaling_adjustment = -1
metric_interval_lower_bound = 0
}
}
resource "aws_cloudwatch_metric_alarm" "unused-cpu-trigger" {
alarm_name = "${var.name}-too-much-cpu-asg-alarm"
comparison_operator = "LessThanOrEqualToThreshold"
evaluation_periods = "2"
metric_name = "CPUReservation"
namespace = "AWS/ECS"
period = "60"
statistic = "Average"
threshold = "60"
dimensions {
ClusterName="${var.name}"
}
alarm_description = "This metric monitors too much CPU for new service tasks"
alarm_actions = ["${aws_autoscaling_policy.high_cpu_policy.arn}"]
}
resource "aws_autoscaling_group" "ecs_cluster_instances" {
name = "${var.name}-ecs-asg"
min_size = "${var.min_size}"
max_size = "${var.max_size}"
launch_configuration = "${aws_launch_configuration.ecs_instance.name}"
vpc_zone_identifier = ["${split(",", var.subnet_ids)}"]
tag {
key = "Name"
value = "${var.name}"
propagate_at_launch = true
}
enabled_metrics = ["GroupMinSize", "GroupMaxSize", "GroupDesiredCapacity", "GroupInServiceInstances", "GroupPendingInstances", "GroupTerminatingInstances", "GroupStandbyInstances", "GroupTotalInstances"]
}
# Fetch the AWS ECS Optimized Linux AMI. Note that if you've never launched this AMI before, you have to accept the
# terms and conditions on this webpage or the EC2 instances will fail to launch:
# https://aws.amazon.com/marketplace/pp/B00U6QTYI2
data "aws_ami" "ecs" {
most_recent = true
owners = ["amazon"]
filter {
name = "name"
values = ["amzn-ami-*-amazon-ecs-optimized"]
}
}
resource "aws_launch_configuration" "ecs_instance" {
name_prefix = "${var.name}-ec2"
instance_type = "${var.instance_type}"
key_name = "${var.key_pair_name}"
iam_instance_profile = "${aws_iam_instance_profile.ecs_instance.name}"
security_groups = ["${aws_security_group.ecs_instance.id}"]
image_id = "${data.aws_ami.ecs.id}"
associate_public_ip_address = false
enable_monitoring = true
# A shell script that will execute when on each EC2 instance when it first boots to configure the ECS Agent to talk
# to the right ECS cluster
user_data = "${data.template_file.user_data.rendered}"
# https://terraform.io/docs/configuration/resources.html
lifecycle {
create_before_destroy = true
}
}
data "template_file" "user_data" {
template = <<EOF
#!/bin/bash
echo "ECS_CLUSTER=${var.name}" >> /etc/ecs/ecs.config
EOF
}
# ---------------------------------------------------------------------------------------------------------------------
# CREATE AN IAM ROLE FOR EACH INSTANCE IN THE CLUSTER
# We export the IAM role ID as an output variable so users of this module can attach custom policies.
# ---------------------------------------------------------------------------------------------------------------------
resource "aws_iam_role" "ecs_instance" {
name = "${var.name}_ecs_instance_role"
assume_role_policy = "${data.aws_iam_policy_document.ecs_instance.json}"
# aws_iam_instance_profile.ecs_instance sets create_before_destroy to true, which means every resource it depends on,
# including this one, must also set the create_before_destroy flag to true, or you'll get a cyclic dependency error.
lifecycle {
create_before_destroy = true
}
}
data "aws_iam_policy_document" "ecs_instance" {
statement {
effect = "Allow"
actions = ["sts:AssumeRole"]
principals {
type = "Service"
identifiers = ["ec2.amazonaws.com", "application-autoscaling.amazonaws.com"]
}
}
}
resource "aws_iam_role_policy" "ecr_pull" {
name = "${var.name}-ecr-reader-for-ecs-instance-policy"
role = "${aws_iam_role.ecs_instance.id}"
policy = <<EOF
{
"Version": "2012-10-17",
"Statement": [{
"Effect": "Allow",
"Action": [
"ecr:GetAuthorizationToken",
"ecr:BatchCheckLayerAvailability",
"ecr:GetDownloadUrlForLayer",
"ecr:GetRepositoryPolicy",
"ecr:DescribeRepositories",
"ecr:ListImages",
"ecr:DescribeImages",
"ecr:BatchGetImage"
],
"Resource": "*"
}]
}
EOF
}
# To attach an IAM Role to an EC2 Instance, you use an IAM Instance Profile
resource "aws_iam_instance_profile" "ecs_instance" {
name = "${var.name}"
role = "${aws_iam_role.ecs_instance.name}"
# aws_launch_configuration.ecs_instance sets create_before_destroy to true, which means every resource it depends on,
# including this one, must also set the create_before_destroy flag to true, or you'll get a cyclic dependency error.
lifecycle {
create_before_destroy = true
}
}
# ---------------------------------------------------------------------------------------------------------------------
# ATTACH IAM POLICIES TO THE IAM ROLE
# The IAM policy allows an ECS Agent running on each EC2 Instance to communicate with the ECS scheduler.
# ---------------------------------------------------------------------------------------------------------------------
resource "aws_iam_role_policy" "ecs_cluster_permissions" {
name = "${var.name}-ecs-cluster-permissions"
role = "${aws_iam_role.ecs_instance.id}"
policy = "${data.aws_iam_policy_document.ecs_cluster_permissions.json}"
}
data "aws_iam_policy_document" "ecs_cluster_permissions" {
statement {
effect = "Allow"
resources = ["*"]
actions = [
"ecs:CreateCluster",
"ecs:DeregisterContainerInstance",
"ecs:DiscoverPollEndpoint",
"ecs:Poll",
"ecs:RegisterContainerInstance",
"ecs:StartTelemetrySession",
"ecs:Submit*",
"ecs:UpdateService",
"cloudwatch:DescribeAlarms",
"ecs:DescribeServices"
]
}
}
# ---------------------------------------------------------------------------------------------------------------------
# CREATE A SECURITY GROUP THAT CONTROLS WHAT TRAFFIC CAN GO IN AND OUT OF THE CLUSTER
# We export the ID of the group as an output variable so users of this module can attach custom rules.
# ---------------------------------------------------------------------------------------------------------------------
resource "aws_security_group" "ecs_instance" {
name = "${var.name} ECS Cluster"
description = "Security group for the EC2 instances in the ECS cluster ${var.name}"
vpc_id = "${var.vpc_id}"
# aws_launch_configuration.ecs_instance sets create_before_destroy to true, which means every resource it depends on,
# including this one, must also set the create_before_destroy flag to true, or you'll get a cyclic dependency error.
lifecycle {
create_before_destroy = true
}
}
resource "aws_security_group_rule" "all_outbound_all" {
type = "egress"
from_port = 0
to_port = 0
protocol = "-1"
cidr_blocks = ["0.0.0.0/0"]
security_group_id = "${aws_security_group.ecs_instance.id}"
}
resource "aws_security_group_rule" "all_inbound_all" {
type = "ingress"
from_port = 0
to_port = 0
protocol = "-1"
cidr_blocks = ["0.0.0.0/0"]
security_group_id = "${aws_security_group.ecs_instance.id}"
}
```
### Debug Output
https://gist.github.com/bhgames/ca2fd82dac7ef64d08c37a1451689d53
### Panic Output
Nope
### Expected Behavior
Should have set "60" as the proper top threshold so I had 60 >= CPU Reservation >= -Infinity instead of the weird >= CPU Res >= -Infinity which doesnt work. Have to go in to edit by hand.
### Actual Behavior
Puts in >= CPU Res >= -Infinity which doesnt work. Have to go in to edit by hand to add the threshold.
### Steps to Reproduce
Please list the steps required to reproduce the issue, for example:
1. `terraform apply`
### Important Factoids
Are there anything atypical about your accounts that we should know? For example: Running in EC2 Classic? Custom version of OpenStack? Tight ACLs?
### References
Are there any other GitHub issues (open or closed) or Pull Requests that should be linked here? For example:
- GH-1234
|
1.0
|
CloudWatch Alarm ignores Threshold when comparison op is LessThan* - If you try to create a cloudwatch alarm that steps down on say, CPU Reservation for an ECS cluster, it creates something like this in AWS:
https://screencast.com/t/iNySsTAjcG
As compared to the GreaterThan version:
https://screencast.com/t/BZrmNmV4yQkB
### Terraform Version
10.8.0
### Affected Resource(s)
Please list the resources as a list, for example:
- aws_cloudwatch_metric_alarm
If this issue appears to affect multiple resources, it may be an issue with Terraform's core, so please mention this.
### Terraform Configuration Files
```hcl
variable "name" {
description = "The name of the ECS Cluster."
}
variable "min_size" {
description = "The min number of EC2 Instances to run in the ECS Cluster."
default=1
}
variable "max_size" {
description = "The max number of EC2 Instances to run in the ECS Cluster."
default=2
}
variable "instance_type" {
description = "The type of EC2 Instance to deploy in the ECS Cluster (e.g. t2.micro)."
}
variable "vpc_id" {
description = "The ID of the VPC in which to deploy the ECS Cluster."
}
variable "subnet_ids" {
description = "The subnet IDs in which to deploy the EC2 Instances of the ECS Cluster."
}
variable "key_pair_name" {
description = "The name of an EC2 Key Pair to associate with each EC2 Instance in the ECS Cluster. Leave blank to not associate a Key Pair."
default = "amdirent-aws"
}
variable "allow_ssh_from_cidr_blocks" {
description = "The list of CIDR-formatted IP address ranges from which the EC2 Instances in the ECS Cluster should accept SSH connections."
type = "list"
default = ["0.0.0.0/0"]
}
# ---------------------------------------------------------------------------------------------------------------------
# CREATE AN ECS CLUSTER
# ---------------------------------------------------------------------------------------------------------------------
resource "aws_ecs_cluster" "example_cluster" {
name = "${var.name}"
}
# ---------------------------------------------------------------------------------------------------------------------
# DEPLOY AN AUTO SCALING GROUP (ASG)
# Each EC2 Instance in the ASG will register as an ECS Cluster Instance.
# ---------------------------------------------------------------------------------------------------------------------
resource "aws_autoscaling_policy" "high_cpu_policy" {
name = "${var.name}-high-cpu-asg-policy"
adjustment_type = "ChangeInCapacity"
autoscaling_group_name = "${aws_autoscaling_group.ecs_cluster_instances.name}"
policy_type="StepScaling"
step_adjustment {
scaling_adjustment = -1
metric_interval_lower_bound = 0
}
}
resource "aws_cloudwatch_metric_alarm" "unused-cpu-trigger" {
alarm_name = "${var.name}-too-much-cpu-asg-alarm"
comparison_operator = "LessThanOrEqualToThreshold"
evaluation_periods = "2"
metric_name = "CPUReservation"
namespace = "AWS/ECS"
period = "60"
statistic = "Average"
threshold = "60"
dimensions {
ClusterName="${var.name}"
}
alarm_description = "This metric monitors too much CPU for new service tasks"
alarm_actions = ["${aws_autoscaling_policy.high_cpu_policy.arn}"]
}
resource "aws_autoscaling_group" "ecs_cluster_instances" {
name = "${var.name}-ecs-asg"
min_size = "${var.min_size}"
max_size = "${var.max_size}"
launch_configuration = "${aws_launch_configuration.ecs_instance.name}"
vpc_zone_identifier = ["${split(",", var.subnet_ids)}"]
tag {
key = "Name"
value = "${var.name}"
propagate_at_launch = true
}
enabled_metrics = ["GroupMinSize", "GroupMaxSize", "GroupDesiredCapacity", "GroupInServiceInstances", "GroupPendingInstances", "GroupTerminatingInstances", "GroupStandbyInstances", "GroupTotalInstances"]
}
# Fetch the AWS ECS Optimized Linux AMI. Note that if you've never launched this AMI before, you have to accept the
# terms and conditions on this webpage or the EC2 instances will fail to launch:
# https://aws.amazon.com/marketplace/pp/B00U6QTYI2
data "aws_ami" "ecs" {
most_recent = true
owners = ["amazon"]
filter {
name = "name"
values = ["amzn-ami-*-amazon-ecs-optimized"]
}
}
resource "aws_launch_configuration" "ecs_instance" {
name_prefix = "${var.name}-ec2"
instance_type = "${var.instance_type}"
key_name = "${var.key_pair_name}"
iam_instance_profile = "${aws_iam_instance_profile.ecs_instance.name}"
security_groups = ["${aws_security_group.ecs_instance.id}"]
image_id = "${data.aws_ami.ecs.id}"
associate_public_ip_address = false
enable_monitoring = true
# A shell script that will execute when on each EC2 instance when it first boots to configure the ECS Agent to talk
# to the right ECS cluster
user_data = "${data.template_file.user_data.rendered}"
# https://terraform.io/docs/configuration/resources.html
lifecycle {
create_before_destroy = true
}
}
data "template_file" "user_data" {
template = <<EOF
#!/bin/bash
echo "ECS_CLUSTER=${var.name}" >> /etc/ecs/ecs.config
EOF
}
# ---------------------------------------------------------------------------------------------------------------------
# CREATE AN IAM ROLE FOR EACH INSTANCE IN THE CLUSTER
# We export the IAM role ID as an output variable so users of this module can attach custom policies.
# ---------------------------------------------------------------------------------------------------------------------
resource "aws_iam_role" "ecs_instance" {
name = "${var.name}_ecs_instance_role"
assume_role_policy = "${data.aws_iam_policy_document.ecs_instance.json}"
# aws_iam_instance_profile.ecs_instance sets create_before_destroy to true, which means every resource it depends on,
# including this one, must also set the create_before_destroy flag to true, or you'll get a cyclic dependency error.
lifecycle {
create_before_destroy = true
}
}
data "aws_iam_policy_document" "ecs_instance" {
statement {
effect = "Allow"
actions = ["sts:AssumeRole"]
principals {
type = "Service"
identifiers = ["ec2.amazonaws.com", "application-autoscaling.amazonaws.com"]
}
}
}
resource "aws_iam_role_policy" "ecr_pull" {
name = "${var.name}-ecr-reader-for-ecs-instance-policy"
role = "${aws_iam_role.ecs_instance.id}"
policy = <<EOF
{
"Version": "2012-10-17",
"Statement": [{
"Effect": "Allow",
"Action": [
"ecr:GetAuthorizationToken",
"ecr:BatchCheckLayerAvailability",
"ecr:GetDownloadUrlForLayer",
"ecr:GetRepositoryPolicy",
"ecr:DescribeRepositories",
"ecr:ListImages",
"ecr:DescribeImages",
"ecr:BatchGetImage"
],
"Resource": "*"
}]
}
EOF
}
# To attach an IAM Role to an EC2 Instance, you use an IAM Instance Profile
resource "aws_iam_instance_profile" "ecs_instance" {
name = "${var.name}"
role = "${aws_iam_role.ecs_instance.name}"
# aws_launch_configuration.ecs_instance sets create_before_destroy to true, which means every resource it depends on,
# including this one, must also set the create_before_destroy flag to true, or you'll get a cyclic dependency error.
lifecycle {
create_before_destroy = true
}
}
# ---------------------------------------------------------------------------------------------------------------------
# ATTACH IAM POLICIES TO THE IAM ROLE
# The IAM policy allows an ECS Agent running on each EC2 Instance to communicate with the ECS scheduler.
# ---------------------------------------------------------------------------------------------------------------------
resource "aws_iam_role_policy" "ecs_cluster_permissions" {
name = "${var.name}-ecs-cluster-permissions"
role = "${aws_iam_role.ecs_instance.id}"
policy = "${data.aws_iam_policy_document.ecs_cluster_permissions.json}"
}
data "aws_iam_policy_document" "ecs_cluster_permissions" {
statement {
effect = "Allow"
resources = ["*"]
actions = [
"ecs:CreateCluster",
"ecs:DeregisterContainerInstance",
"ecs:DiscoverPollEndpoint",
"ecs:Poll",
"ecs:RegisterContainerInstance",
"ecs:StartTelemetrySession",
"ecs:Submit*",
"ecs:UpdateService",
"cloudwatch:DescribeAlarms",
"ecs:DescribeServices"
]
}
}
# ---------------------------------------------------------------------------------------------------------------------
# CREATE A SECURITY GROUP THAT CONTROLS WHAT TRAFFIC CAN GO IN AND OUT OF THE CLUSTER
# We export the ID of the group as an output variable so users of this module can attach custom rules.
# ---------------------------------------------------------------------------------------------------------------------
resource "aws_security_group" "ecs_instance" {
name = "${var.name} ECS Cluster"
description = "Security group for the EC2 instances in the ECS cluster ${var.name}"
vpc_id = "${var.vpc_id}"
# aws_launch_configuration.ecs_instance sets create_before_destroy to true, which means every resource it depends on,
# including this one, must also set the create_before_destroy flag to true, or you'll get a cyclic dependency error.
lifecycle {
create_before_destroy = true
}
}
resource "aws_security_group_rule" "all_outbound_all" {
type = "egress"
from_port = 0
to_port = 0
protocol = "-1"
cidr_blocks = ["0.0.0.0/0"]
security_group_id = "${aws_security_group.ecs_instance.id}"
}
resource "aws_security_group_rule" "all_inbound_all" {
type = "ingress"
from_port = 0
to_port = 0
protocol = "-1"
cidr_blocks = ["0.0.0.0/0"]
security_group_id = "${aws_security_group.ecs_instance.id}"
}
```
### Debug Output
https://gist.github.com/bhgames/ca2fd82dac7ef64d08c37a1451689d53
### Panic Output
Nope
### Expected Behavior
Should have set "60" as the proper top threshold so I had 60 >= CPU Reservation >= -Infinity instead of the weird >= CPU Res >= -Infinity which doesnt work. Have to go in to edit by hand.
### Actual Behavior
Puts in >= CPU Res >= -Infinity which doesnt work. Have to go in to edit by hand to add the threshold.
### Steps to Reproduce
Please list the steps required to reproduce the issue, for example:
1. `terraform apply`
### Important Factoids
Are there anything atypical about your accounts that we should know? For example: Running in EC2 Classic? Custom version of OpenStack? Tight ACLs?
### References
Are there any other GitHub issues (open or closed) or Pull Requests that should be linked here? For example:
- GH-1234
|
non_process
|
cloudwatch alarm ignores threshold when comparison op is lessthan if you try to create a cloudwatch alarm that steps down on say cpu reservation for an ecs cluster it creates something like this in aws as compared to the greaterthan version terraform version affected resource s please list the resources as a list for example aws cloudwatch metric alarm if this issue appears to affect multiple resources it may be an issue with terraform s core so please mention this terraform configuration files hcl variable name description the name of the ecs cluster variable min size description the min number of instances to run in the ecs cluster default variable max size description the max number of instances to run in the ecs cluster default variable instance type description the type of instance to deploy in the ecs cluster e g micro variable vpc id description the id of the vpc in which to deploy the ecs cluster variable subnet ids description the subnet ids in which to deploy the instances of the ecs cluster variable key pair name description the name of an key pair to associate with each instance in the ecs cluster leave blank to not associate a key pair default amdirent aws variable allow ssh from cidr blocks description the list of cidr formatted ip address ranges from which the instances in the ecs cluster should accept ssh connections type list default create an ecs cluster resource aws ecs cluster example cluster name var name deploy an auto scaling group asg each instance in the asg will register as an ecs cluster instance resource aws autoscaling policy high cpu policy name var name high cpu asg policy adjustment type changeincapacity autoscaling group name aws autoscaling group ecs cluster instances name policy type stepscaling step adjustment scaling adjustment metric interval lower bound resource aws cloudwatch metric alarm unused cpu trigger alarm name var name too much cpu asg alarm comparison operator lessthanorequaltothreshold evaluation periods metric name cpureservation namespace aws ecs period statistic average threshold dimensions clustername var name alarm description this metric monitors too much cpu for new service tasks alarm actions resource aws autoscaling group ecs cluster instances name var name ecs asg min size var min size max size var max size launch configuration aws launch configuration ecs instance name vpc zone identifier tag key name value var name propagate at launch true enabled metrics fetch the aws ecs optimized linux ami note that if you ve never launched this ami before you have to accept the terms and conditions on this webpage or the instances will fail to launch data aws ami ecs most recent true owners filter name name values resource aws launch configuration ecs instance name prefix var name instance type var instance type key name var key pair name iam instance profile aws iam instance profile ecs instance name security groups image id data aws ami ecs id associate public ip address false enable monitoring true a shell script that will execute when on each instance when it first boots to configure the ecs agent to talk to the right ecs cluster user data data template file user data rendered lifecycle create before destroy true data template file user data template eof bin bash echo ecs cluster var name etc ecs ecs config eof create an iam role for each instance in the cluster we export the iam role id as an output variable so users of this module can attach custom policies resource aws iam role ecs instance name var name ecs instance role assume role policy data aws iam policy document ecs instance json aws iam instance profile ecs instance sets create before destroy to true which means every resource it depends on including this one must also set the create before destroy flag to true or you ll get a cyclic dependency error lifecycle create before destroy true data aws iam policy document ecs instance statement effect allow actions principals type service identifiers resource aws iam role policy ecr pull name var name ecr reader for ecs instance policy role aws iam role ecs instance id policy eof version statement effect allow action ecr getauthorizationtoken ecr batchchecklayeravailability ecr getdownloadurlforlayer ecr getrepositorypolicy ecr describerepositories ecr listimages ecr describeimages ecr batchgetimage resource eof to attach an iam role to an instance you use an iam instance profile resource aws iam instance profile ecs instance name var name role aws iam role ecs instance name aws launch configuration ecs instance sets create before destroy to true which means every resource it depends on including this one must also set the create before destroy flag to true or you ll get a cyclic dependency error lifecycle create before destroy true attach iam policies to the iam role the iam policy allows an ecs agent running on each instance to communicate with the ecs scheduler resource aws iam role policy ecs cluster permissions name var name ecs cluster permissions role aws iam role ecs instance id policy data aws iam policy document ecs cluster permissions json data aws iam policy document ecs cluster permissions statement effect allow resources actions ecs createcluster ecs deregistercontainerinstance ecs discoverpollendpoint ecs poll ecs registercontainerinstance ecs starttelemetrysession ecs submit ecs updateservice cloudwatch describealarms ecs describeservices create a security group that controls what traffic can go in and out of the cluster we export the id of the group as an output variable so users of this module can attach custom rules resource aws security group ecs instance name var name ecs cluster description security group for the instances in the ecs cluster var name vpc id var vpc id aws launch configuration ecs instance sets create before destroy to true which means every resource it depends on including this one must also set the create before destroy flag to true or you ll get a cyclic dependency error lifecycle create before destroy true resource aws security group rule all outbound all type egress from port to port protocol cidr blocks security group id aws security group ecs instance id resource aws security group rule all inbound all type ingress from port to port protocol cidr blocks security group id aws security group ecs instance id debug output panic output nope expected behavior should have set as the proper top threshold so i had cpu reservation infinity instead of the weird cpu res infinity which doesnt work have to go in to edit by hand actual behavior puts in cpu res infinity which doesnt work have to go in to edit by hand to add the threshold steps to reproduce please list the steps required to reproduce the issue for example terraform apply important factoids are there anything atypical about your accounts that we should know for example running in classic custom version of openstack tight acls references are there any other github issues open or closed or pull requests that should be linked here for example gh
| 0
|
7,840
| 11,083,161,794
|
IssuesEvent
|
2019-12-13 13:53:17
|
microsoft/botframework-solutions
|
https://api.github.com/repos/microsoft/botframework-solutions
|
closed
|
Productivity Skills - Proactive Scenarios
|
Needs Mockup Needs Requirements Needs User Story Status: Blocked
|
## User Story
As ...
I want ...
so that ...
## Acceptance Criteria
*List your requirements.*
- [ ] ...
- [ ] ...
- [ ] ...
## Resources
*Provide any screenshots or conversation mockups to support your request.*
*Learn how to mock up bot conversations with [.chat files](http://aka.ms/bfchatfile).*
## Notes
*Provide any additional information.*
<!-- Internal Use Only
### Time Estimations
⏱ Design: ___
⏱ Development: ___
⏱ Tests: ___
⏱ Documentation: ___
-->
Prioritise key scenarios with partner teams then design support for flows.
- [ ] Upcoming meeting
- [ ] Proactive new mail from your team
|
1.0
|
Productivity Skills - Proactive Scenarios - ## User Story
As ...
I want ...
so that ...
## Acceptance Criteria
*List your requirements.*
- [ ] ...
- [ ] ...
- [ ] ...
## Resources
*Provide any screenshots or conversation mockups to support your request.*
*Learn how to mock up bot conversations with [.chat files](http://aka.ms/bfchatfile).*
## Notes
*Provide any additional information.*
<!-- Internal Use Only
### Time Estimations
⏱ Design: ___
⏱ Development: ___
⏱ Tests: ___
⏱ Documentation: ___
-->
Prioritise key scenarios with partner teams then design support for flows.
- [ ] Upcoming meeting
- [ ] Proactive new mail from your team
|
non_process
|
productivity skills proactive scenarios user story as i want so that acceptance criteria list your requirements resources provide any screenshots or conversation mockups to support your request learn how to mock up bot conversations with notes provide any additional information internal use only time estimations ⏱ design ⏱ development ⏱ tests ⏱ documentation prioritise key scenarios with partner teams then design support for flows upcoming meeting proactive new mail from your team
| 0
|
70,651
| 13,520,838,780
|
IssuesEvent
|
2020-09-15 05:51:16
|
topcoder-platform/community-app
|
https://api.github.com/repos/topcoder-platform/community-app
|
closed
|
[$30] Member's with special characters in their handle unable to register to any challenge
|
P1 Prod Env QA Pass V5-Challenge-API-Integration tcx_Assigned tcx_FixAccepted v5-intgration-sub-code waiting-for-api-fix
|
**Describe the bug**
On clicking on Register i get an error message saying "ERROR: Failed to register for challenge"
**To Reproduce**
Steps to reproduce the behavior:
1. Go to https://www.topcoder.com/challenges/c2a1465d-78cb-42b8-8f88-3d1ae386d3c5?tab=details
2. Click on Register
4. See error
**Expected behavior**
Getting registered for the challenge without any error
**Screenshots**

**Desktop (please complete the following information):**
- OS: Linux 18.04
- Browser chrome
**Additional context**
Attached chrome log
[log.txt](https://github.com/topcoder-platform/community-app/files/5100887/log.txt)
|
1.0
|
[$30] Member's with special characters in their handle unable to register to any challenge - **Describe the bug**
On clicking on Register i get an error message saying "ERROR: Failed to register for challenge"
**To Reproduce**
Steps to reproduce the behavior:
1. Go to https://www.topcoder.com/challenges/c2a1465d-78cb-42b8-8f88-3d1ae386d3c5?tab=details
2. Click on Register
4. See error
**Expected behavior**
Getting registered for the challenge without any error
**Screenshots**

**Desktop (please complete the following information):**
- OS: Linux 18.04
- Browser chrome
**Additional context**
Attached chrome log
[log.txt](https://github.com/topcoder-platform/community-app/files/5100887/log.txt)
|
non_process
|
member s with special characters in their handle unable to register to any challenge describe the bug on clicking on register i get an error message saying error failed to register for challenge to reproduce steps to reproduce the behavior go to click on register see error expected behavior getting registered for the challenge without any error screenshots desktop please complete the following information os linux browser chrome additional context attached chrome log
| 0
|
512,203
| 14,890,234,367
|
IssuesEvent
|
2021-01-20 22:41:49
|
PyTorchLightning/pytorch-lightning-bolts
|
https://api.github.com/repos/PyTorchLightning/pytorch-lightning-bolts
|
closed
|
can't import the VAE or CPCV2 model
|
Priority model
|
I want to use a pretrained VAE or CPCV2 model to generate features for a image.
According to [HOW TO USE MODELS](https://pytorch-lightning-bolts.readthedocs.io/en/latest/models_howto.html), I wrote the follow python file
```python
from pl_bolts.models.autoencoders import VAE
model1 = VAE(pretrained='imagenet2012')
encoder = model1.encoder
encoder.freeze()
```
However, error occurred
```shell
Traceback (most recent call last):
File "pytorch-lightning-feature-exactor.py", line 3, in <module>
model1 = VAE(pretrained='imagenet2012')
TypeError: __init__() missing 1 required positional argument: 'input_height'
```
And when I just import the CPCV2 model as follows:
```python
from pl_bolts.models.self_supervised import CPCV2
model2 = CPCV2(encoder='resnet18', pretrained='imagenet128').freeze()
```
also error occurred
```shell
Traceback (most recent call last):
File "pytorch-lightning-feature-exactor.py", line 11, in <module>
model2 = CPCV2(encoder='resnet18', pretrained='imagenet128').freeze()
File "/home/wt/.local/lib/python3.8/site-packages/pl_bolts/models/self_supervised/cpc/cpc_module.py", line 79, in __init__
assert datamodule
AssertionError
```
why?
My python and pytorch is
```shell
$ python --version
Python 3.8.3
>>> import torch
>>> torch.__version__
'1.7.1+cpu'
>>>
>>> import torchvision
>>> torchvision.__version__
'0.8.2+cpu'
>>>
```
and install the pytorch-lightning by
```shell
pip install pytorch-lightning
pip install pytorch-lightning-bolts
```
|
1.0
|
can't import the VAE or CPCV2 model - I want to use a pretrained VAE or CPCV2 model to generate features for a image.
According to [HOW TO USE MODELS](https://pytorch-lightning-bolts.readthedocs.io/en/latest/models_howto.html), I wrote the follow python file
```python
from pl_bolts.models.autoencoders import VAE
model1 = VAE(pretrained='imagenet2012')
encoder = model1.encoder
encoder.freeze()
```
However, error occurred
```shell
Traceback (most recent call last):
File "pytorch-lightning-feature-exactor.py", line 3, in <module>
model1 = VAE(pretrained='imagenet2012')
TypeError: __init__() missing 1 required positional argument: 'input_height'
```
And when I just import the CPCV2 model as follows:
```python
from pl_bolts.models.self_supervised import CPCV2
model2 = CPCV2(encoder='resnet18', pretrained='imagenet128').freeze()
```
also error occurred
```shell
Traceback (most recent call last):
File "pytorch-lightning-feature-exactor.py", line 11, in <module>
model2 = CPCV2(encoder='resnet18', pretrained='imagenet128').freeze()
File "/home/wt/.local/lib/python3.8/site-packages/pl_bolts/models/self_supervised/cpc/cpc_module.py", line 79, in __init__
assert datamodule
AssertionError
```
why?
My python and pytorch is
```shell
$ python --version
Python 3.8.3
>>> import torch
>>> torch.__version__
'1.7.1+cpu'
>>>
>>> import torchvision
>>> torchvision.__version__
'0.8.2+cpu'
>>>
```
and install the pytorch-lightning by
```shell
pip install pytorch-lightning
pip install pytorch-lightning-bolts
```
|
non_process
|
can t import the vae or model i want to use a pretrained vae or model to generate features for a image according to i wrote the follow python file python from pl bolts models autoencoders import vae vae pretrained encoder encoder encoder freeze however error occurred shell traceback most recent call last file pytorch lightning feature exactor py line in vae pretrained typeerror init missing required positional argument input height and when i just import the model as follows python from pl bolts models self supervised import encoder pretrained freeze also error occurred shell traceback most recent call last file pytorch lightning feature exactor py line in encoder pretrained freeze file home wt local lib site packages pl bolts models self supervised cpc cpc module py line in init assert datamodule assertionerror why? my python and pytorch is shell python version python import torch torch version cpu import torchvision torchvision version cpu and install the pytorch lightning by shell pip install pytorch lightning pip install pytorch lightning bolts
| 0
|
708
| 3,203,355,092
|
IssuesEvent
|
2015-10-02 18:34:53
|
johnhbenetech/listTest
|
https://api.github.com/repos/johnhbenetech/listTest
|
opened
|
product.SMS_Water_Pump
|
feature.Geospatial feature.Sensor process.Micro_Transaction technology.Cloud technology.Hardware technology.Narrow_Band_Communication user.Poverty_Global_South user.Underserved vertical.Environment vertical.Quality_of_Life
|
Impact
Need
Mission fit
Legalilty
Money: current
Money: future
Cost
Talent
Technical Risk
Champion
Community: users
Community: partners
Portfolio fit
Exit strategies
Data/proof
|
1.0
|
product.SMS_Water_Pump - Impact
Need
Mission fit
Legalilty
Money: current
Money: future
Cost
Talent
Technical Risk
Champion
Community: users
Community: partners
Portfolio fit
Exit strategies
Data/proof
|
process
|
product sms water pump impact need mission fit legalilty money current money future cost talent technical risk champion community users community partners portfolio fit exit strategies data proof
| 1
|
18,745
| 11,047,025,110
|
IssuesEvent
|
2019-12-09 18:04:10
|
cityofaustin/atd-data-tech
|
https://api.github.com/repos/cityofaustin/atd-data-tech
|
closed
|
Rubric for project prioritization
|
Project: DTS Service Delivery Service: PM Type: Operations
|
We need a way to prioritize projects to ensure that DTS resources are allocated in an efficient, equitable, impactful way. Having a standard rubric where we can assign points based on predetermined criteria would bring consistency and transparency to the process.
For reference:
- [A rubric the Austin.gov team made back in the day to prioritize departments transitions to the new website](https://docs.google.com/document/d/19i3c9gwMoRhZWKflvge4natqcsuIk3cYwA1bjK7J-fA/edit).
- @danielhonker and @sarahscottr are working on a [Service Access project](https://cityofaustin.gitbook.io/service-access/) focused on assessing services for accessibility and prioritizing services for redesign and digitization. Here is their [framework for prioritization](https://docs.google.com/presentation/d/16WAgW2fV5xxdtuvIoTtIf0y6YKOAUrpsihuGsOmRiiA/edit#slide=id.g5d0db05e8e_0_112) and the [factors they are considering in each bucket](https://airtable.com/shrWn1TrwCKi8Mp8o/tblqUpz9CiyxuX4U1?blocks=hide).
- [Austin Strategic Mobility Plan](http://www.austintexas.gov/asmp) guides the City of Austin’s mobility-related policies, programs, projects, and investments for the next 20+ years.
- [Austin Strategic Direction 2023](https://www.austintexas.gov/financeonline/afo_content.cfm?s=73&p=160) outlines a shared vision and six priority Strategic Outcomes.
- Draft intake/eval process from USDS Veteran's Affairs:
> # Headlines
> * **Project name** -
> * **Project owner/discovery lead** -
> * **Top user stories** - "As a [user], in order to accomplish [a goal], I need a [thing/change/feature/product].”
> * **Initial "Definition of Done"** -
> * How do we know when we're ready for the next iteration/phase?
> * What next steps or resources are needed to advance the goals of the project?
> * **Notable updates** -
> # Discovery or project roles/skills needed
> These are necessary skills for a discovery sprint. Please list proposed team members, their ~% commitment, the skills/role they fulfill, and any gaps we could help fill:
> * Project owner/discovery lead -
> * Design (e.g. user experience, visual, content, etc.) -
> * Technology expertise of appropriate flavors -
> * Policy savvy -
> * Product management -
> * Communication/presentation skills -
> # Key factors for choosing work
> * **Impact** - Does this improve the lives of veterans and their communities? (directly or indirectly) This should be a descriptive mission statement that explains the potential impact to Veterans. If a holistic sense of impact is unknown pre-Discovery, use any data available to you, & update as you discover!
> * **Team interest** - Who from USDS is interested in working on this?
> * **Fit for USDS VA** - Our skills and [USDS values](https://www.usds.gov/values)
> * **Institutional support**- Who from the biz and IT will partner with us and continue to manage it post launch?
> # Additional factors to consider
> These help us evaluate the work and our options.
> * **Known** - How confident are we that we understand the shape of the problem? (Rate on a scale of 1-5?)
> * **Type of involvement** - What are different options for supporting the work?
> * DSVA Scopes/Discovery - What questions are you trying to answer?
> * DSVA Manages (BPA, BAC, Other Contract) - What are our acceptance criteria? How are we measuring success?
> * DSVA Supports (Other VA team)) - What needs to exist for us to feel comfortable stepping back?
> * **Estimated timeline, notable upcoming events** (especially new statutory deadlines, commitments made)
> * **Metrics** - Key Performance Indicators, metrics, data we can use to track goals
> * **Known risks** - What could lead to a negative, undesirable outcome for the project or people impacted? This is from lean product design that helps us identify red flags for further discovery (we don’t prioritize low risk; we track risk).
> # Reference links
> * GitHub repo, priorities.md, contract documentation, roadmaps and planning materials, key briefing materials
> * How can USDS folks best discuss or learn about the project? ex: slack channel, comment in issue, attend next biweekly meeting
## Tracy's Thoughts
**DTS Work Scoring System**
What is the purpose of scoring DTS work/issues?
- Determining a request's value, and prioritizing DTS work
- Should we have the purpose formally stated somewhere for any reason?
o Tracy's Thoughts: Yes, it should be stated formally somewhere (maybe project brief), so customers understand their place in the backlog, and DTS is able to defend its prioritization methods.
What work/issues should be scored?
- Projects, features, anything else?
What is our scoring method/approach?
- [Weighted Scoring](https://hygger.io/blog/prioritize-product-features-criteria-weighted-scoring/) -- based on scoring criteria
- [RICE](https://hygger.io/blog/4-powerful-factors-rice-scoring-model/) -- Reach, Impact, Confidence, Effort
- [ICE](https://hygger.io/blog/ice-method-helps-choose-better-product-features/) -- Impact, Confidence, Ease
When do we perform scoring?
- When completing the Project Intake Document during the Project Discovery Meeting?
o The doc may need updating to allow us to collect the scoring info needed.
o [Workflow Diagram](https://www.draw.io/?state=%7B%22ids%22:%5B%221CAxAypAhB8dve1FnthN4LvIyVfKpTFKA%22%5D,%22action%22:%22open%22,%22userId%22:%22104144569414369680128%22%7D#G1CAxAypAhB8dve1FnthN4LvIyVfKpTFKA)
o [Project Intake](https://docs.google.com/document/d/1hYIp2gMbeukzTsu7GF--mLwwKyQIE5FLtrKQZI0A5gY/edit#heading=h.b5b3w7oomxfq)
Does the scoring completely determine an issue's priority in the backlog?
- Tracy's Thoughts: We've had our discovery meeting, the scoring is complete.... What next? Will we reorganize the backlog according to the issue's score placement? Or will other things need to be taken into consideration?
- Add a step in the workflow diagram for prioritizing the issue?
SD23 & ASMP Discussion
- Determine the criteria that should be scored
- Who should be scoring these criteria?
o Us, or send it to ASMP staff to score?
|
2.0
|
Rubric for project prioritization - We need a way to prioritize projects to ensure that DTS resources are allocated in an efficient, equitable, impactful way. Having a standard rubric where we can assign points based on predetermined criteria would bring consistency and transparency to the process.
For reference:
- [A rubric the Austin.gov team made back in the day to prioritize departments transitions to the new website](https://docs.google.com/document/d/19i3c9gwMoRhZWKflvge4natqcsuIk3cYwA1bjK7J-fA/edit).
- @danielhonker and @sarahscottr are working on a [Service Access project](https://cityofaustin.gitbook.io/service-access/) focused on assessing services for accessibility and prioritizing services for redesign and digitization. Here is their [framework for prioritization](https://docs.google.com/presentation/d/16WAgW2fV5xxdtuvIoTtIf0y6YKOAUrpsihuGsOmRiiA/edit#slide=id.g5d0db05e8e_0_112) and the [factors they are considering in each bucket](https://airtable.com/shrWn1TrwCKi8Mp8o/tblqUpz9CiyxuX4U1?blocks=hide).
- [Austin Strategic Mobility Plan](http://www.austintexas.gov/asmp) guides the City of Austin’s mobility-related policies, programs, projects, and investments for the next 20+ years.
- [Austin Strategic Direction 2023](https://www.austintexas.gov/financeonline/afo_content.cfm?s=73&p=160) outlines a shared vision and six priority Strategic Outcomes.
- Draft intake/eval process from USDS Veteran's Affairs:
> # Headlines
> * **Project name** -
> * **Project owner/discovery lead** -
> * **Top user stories** - "As a [user], in order to accomplish [a goal], I need a [thing/change/feature/product].”
> * **Initial "Definition of Done"** -
> * How do we know when we're ready for the next iteration/phase?
> * What next steps or resources are needed to advance the goals of the project?
> * **Notable updates** -
> # Discovery or project roles/skills needed
> These are necessary skills for a discovery sprint. Please list proposed team members, their ~% commitment, the skills/role they fulfill, and any gaps we could help fill:
> * Project owner/discovery lead -
> * Design (e.g. user experience, visual, content, etc.) -
> * Technology expertise of appropriate flavors -
> * Policy savvy -
> * Product management -
> * Communication/presentation skills -
> # Key factors for choosing work
> * **Impact** - Does this improve the lives of veterans and their communities? (directly or indirectly) This should be a descriptive mission statement that explains the potential impact to Veterans. If a holistic sense of impact is unknown pre-Discovery, use any data available to you, & update as you discover!
> * **Team interest** - Who from USDS is interested in working on this?
> * **Fit for USDS VA** - Our skills and [USDS values](https://www.usds.gov/values)
> * **Institutional support**- Who from the biz and IT will partner with us and continue to manage it post launch?
> # Additional factors to consider
> These help us evaluate the work and our options.
> * **Known** - How confident are we that we understand the shape of the problem? (Rate on a scale of 1-5?)
> * **Type of involvement** - What are different options for supporting the work?
> * DSVA Scopes/Discovery - What questions are you trying to answer?
> * DSVA Manages (BPA, BAC, Other Contract) - What are our acceptance criteria? How are we measuring success?
> * DSVA Supports (Other VA team)) - What needs to exist for us to feel comfortable stepping back?
> * **Estimated timeline, notable upcoming events** (especially new statutory deadlines, commitments made)
> * **Metrics** - Key Performance Indicators, metrics, data we can use to track goals
> * **Known risks** - What could lead to a negative, undesirable outcome for the project or people impacted? This is from lean product design that helps us identify red flags for further discovery (we don’t prioritize low risk; we track risk).
> # Reference links
> * GitHub repo, priorities.md, contract documentation, roadmaps and planning materials, key briefing materials
> * How can USDS folks best discuss or learn about the project? ex: slack channel, comment in issue, attend next biweekly meeting
## Tracy's Thoughts
**DTS Work Scoring System**
What is the purpose of scoring DTS work/issues?
- Determining a request's value, and prioritizing DTS work
- Should we have the purpose formally stated somewhere for any reason?
o Tracy's Thoughts: Yes, it should be stated formally somewhere (maybe project brief), so customers understand their place in the backlog, and DTS is able to defend its prioritization methods.
What work/issues should be scored?
- Projects, features, anything else?
What is our scoring method/approach?
- [Weighted Scoring](https://hygger.io/blog/prioritize-product-features-criteria-weighted-scoring/) -- based on scoring criteria
- [RICE](https://hygger.io/blog/4-powerful-factors-rice-scoring-model/) -- Reach, Impact, Confidence, Effort
- [ICE](https://hygger.io/blog/ice-method-helps-choose-better-product-features/) -- Impact, Confidence, Ease
When do we perform scoring?
- When completing the Project Intake Document during the Project Discovery Meeting?
o The doc may need updating to allow us to collect the scoring info needed.
o [Workflow Diagram](https://www.draw.io/?state=%7B%22ids%22:%5B%221CAxAypAhB8dve1FnthN4LvIyVfKpTFKA%22%5D,%22action%22:%22open%22,%22userId%22:%22104144569414369680128%22%7D#G1CAxAypAhB8dve1FnthN4LvIyVfKpTFKA)
o [Project Intake](https://docs.google.com/document/d/1hYIp2gMbeukzTsu7GF--mLwwKyQIE5FLtrKQZI0A5gY/edit#heading=h.b5b3w7oomxfq)
Does the scoring completely determine an issue's priority in the backlog?
- Tracy's Thoughts: We've had our discovery meeting, the scoring is complete.... What next? Will we reorganize the backlog according to the issue's score placement? Or will other things need to be taken into consideration?
- Add a step in the workflow diagram for prioritizing the issue?
SD23 & ASMP Discussion
- Determine the criteria that should be scored
- Who should be scoring these criteria?
o Us, or send it to ASMP staff to score?
|
non_process
|
rubric for project prioritization we need a way to prioritize projects to ensure that dts resources are allocated in an efficient equitable impactful way having a standard rubric where we can assign points based on predetermined criteria would bring consistency and transparency to the process for reference danielhonker and sarahscottr are working on a focused on assessing services for accessibility and prioritizing services for redesign and digitization here is their and the guides the city of austin’s mobility related policies programs projects and investments for the next years outlines a shared vision and six priority strategic outcomes draft intake eval process from usds veteran s affairs headlines project name project owner discovery lead top user stories as a in order to accomplish i need a ” initial definition of done how do we know when we re ready for the next iteration phase what next steps or resources are needed to advance the goals of the project notable updates discovery or project roles skills needed these are necessary skills for a discovery sprint please list proposed team members their commitment the skills role they fulfill and any gaps we could help fill project owner discovery lead design e g user experience visual content etc technology expertise of appropriate flavors policy savvy product management communication presentation skills key factors for choosing work impact does this improve the lives of veterans and their communities directly or indirectly this should be a descriptive mission statement that explains the potential impact to veterans if a holistic sense of impact is unknown pre discovery use any data available to you update as you discover team interest who from usds is interested in working on this fit for usds va our skills and institutional support who from the biz and it will partner with us and continue to manage it post launch additional factors to consider these help us evaluate the work and our options known how confident are we that we understand the shape of the problem rate on a scale of type of involvement what are different options for supporting the work dsva scopes discovery what questions are you trying to answer dsva manages bpa bac other contract what are our acceptance criteria how are we measuring success dsva supports other va team what needs to exist for us to feel comfortable stepping back estimated timeline notable upcoming events especially new statutory deadlines commitments made metrics key performance indicators metrics data we can use to track goals known risks what could lead to a negative undesirable outcome for the project or people impacted this is from lean product design that helps us identify red flags for further discovery we don’t prioritize low risk we track risk reference links github repo priorities md contract documentation roadmaps and planning materials key briefing materials how can usds folks best discuss or learn about the project ex slack channel comment in issue attend next biweekly meeting tracy s thoughts dts work scoring system what is the purpose of scoring dts work issues determining a request s value and prioritizing dts work should we have the purpose formally stated somewhere for any reason o tracy s thoughts yes it should be stated formally somewhere maybe project brief so customers understand their place in the backlog and dts is able to defend its prioritization methods what work issues should be scored projects features anything else what is our scoring method approach based on scoring criteria reach impact confidence effort impact confidence ease when do we perform scoring when completing the project intake document during the project discovery meeting o the doc may need updating to allow us to collect the scoring info needed o o does the scoring completely determine an issue s priority in the backlog tracy s thoughts we ve had our discovery meeting the scoring is complete what next will we reorganize the backlog according to the issue s score placement or will other things need to be taken into consideration add a step in the workflow diagram for prioritizing the issue asmp discussion determine the criteria that should be scored who should be scoring these criteria o us or send it to asmp staff to score
| 0
|
3,849
| 6,808,543,088
|
IssuesEvent
|
2017-11-04 04:21:09
|
Great-Hill-Corporation/quickBlocks
|
https://api.github.com/repos/Great-Hill-Corporation/quickBlocks
|
reopened
|
ethName: names.txt should not be in the cache
|
status-inprocess tools-ethName type-bug
|
It messes up the ability to do testing for one thing. If I delete the ~/.quickBlocks folder, and then put the /Volumes/ folder back in, it fails. Always look for names.txt in ~/.quickBlocks. Anything that cannot be reproduced from source should not be in the cache.
|
1.0
|
ethName: names.txt should not be in the cache - It messes up the ability to do testing for one thing. If I delete the ~/.quickBlocks folder, and then put the /Volumes/ folder back in, it fails. Always look for names.txt in ~/.quickBlocks. Anything that cannot be reproduced from source should not be in the cache.
|
process
|
ethname names txt should not be in the cache it messes up the ability to do testing for one thing if i delete the quickblocks folder and then put the volumes folder back in it fails always look for names txt in quickblocks anything that cannot be reproduced from source should not be in the cache
| 1
|
553,068
| 16,343,083,037
|
IssuesEvent
|
2021-05-13 01:52:29
|
magento/magento2
|
https://api.github.com/repos/magento/magento2
|
closed
|
Issue with Zero Checkout Subtotal Magento2.4
|
Component: Admin Component: Order grid Component: order status Issue: needs update Priority: P3 Progress: PR in progress Reported on 2.4.1 Reproduced on 2.4.x stale issue
|
**Magento version**
Magento2.4.1
**Step To Reproduce**
1 Please Set Zero Subtotal checkout new order status to Processing ( https://imgur.com/a/mrJQyTN )
2.create a virtual Product with Price 0
3.then add to cart that product and place an order with a total of 0
4. then check order status in admin orders grid ( https://imgur.com/a/syfqkkX )
**Actual Result:-**
Order Status is closed
**Expected Result:-**
Order Status Should be Processing
|
1.0
|
Issue with Zero Checkout Subtotal Magento2.4 - **Magento version**
Magento2.4.1
**Step To Reproduce**
1 Please Set Zero Subtotal checkout new order status to Processing ( https://imgur.com/a/mrJQyTN )
2.create a virtual Product with Price 0
3.then add to cart that product and place an order with a total of 0
4. then check order status in admin orders grid ( https://imgur.com/a/syfqkkX )
**Actual Result:-**
Order Status is closed
**Expected Result:-**
Order Status Should be Processing
|
non_process
|
issue with zero checkout subtotal magento version step to reproduce please set zero subtotal checkout new order status to processing create a virtual product with price then add to cart that product and place an order with a total of then check order status in admin orders grid actual result order status is closed expected result order status should be processing
| 0
|
3,127
| 6,159,364,785
|
IssuesEvent
|
2017-06-29 00:01:25
|
rancher/rancher
|
https://api.github.com/repos/rancher/rancher
|
closed
|
Test, Bug fix, and Document controlling RBAC via auth json files
|
area/access-control kind/feature priority/-1 process/cherry-pick process/cherry-picked status/resolved status/to-test version/1.6
|
There is currently a feature that would allow admins to override our auth controls by bind mounting a json file into the rancher/server contianer.
The file would follow the same format as our [project-auth.json file](https://github.com/rancher/cattle/blob/4b2210b4eedf1790410ac51aa67a8100737a5cf1/resources/content/schema/project/project-auth.json).
We specifically want to prove that using this you can turn off the ability for users to create projects
|
2.0
|
Test, Bug fix, and Document controlling RBAC via auth json files - There is currently a feature that would allow admins to override our auth controls by bind mounting a json file into the rancher/server contianer.
The file would follow the same format as our [project-auth.json file](https://github.com/rancher/cattle/blob/4b2210b4eedf1790410ac51aa67a8100737a5cf1/resources/content/schema/project/project-auth.json).
We specifically want to prove that using this you can turn off the ability for users to create projects
|
process
|
test bug fix and document controlling rbac via auth json files there is currently a feature that would allow admins to override our auth controls by bind mounting a json file into the rancher server contianer the file would follow the same format as our we specifically want to prove that using this you can turn off the ability for users to create projects
| 1
|
11,823
| 14,645,377,892
|
IssuesEvent
|
2020-12-26 07:18:25
|
kubeflow/examples
|
https://api.github.com/repos/kubeflow/examples
|
closed
|
Cleanup OWNERs and orphaned examples
|
area/examples kind/process lifecycle/frozen lifecycle/stale priority/p2
|
I think we need to identify and cleanup orphaned examples.
Here's my suggestion for what a process might look like
1. Each example (i.e. subdirectory) should have an OWNERs file listing the OWNERs for that example
1. Any example for which we are unable to find active OWNERs committed to maintaining it we should delete
1. We should establish some expectation around maintenance; e.g. keep it updated to work with every minor release
* Ideally we'd require CI for each example but that might be too high bar.
Anyone interested in helping to drive this process?
/cc @jbottum @elviraux
|
1.0
|
Cleanup OWNERs and orphaned examples - I think we need to identify and cleanup orphaned examples.
Here's my suggestion for what a process might look like
1. Each example (i.e. subdirectory) should have an OWNERs file listing the OWNERs for that example
1. Any example for which we are unable to find active OWNERs committed to maintaining it we should delete
1. We should establish some expectation around maintenance; e.g. keep it updated to work with every minor release
* Ideally we'd require CI for each example but that might be too high bar.
Anyone interested in helping to drive this process?
/cc @jbottum @elviraux
|
process
|
cleanup owners and orphaned examples i think we need to identify and cleanup orphaned examples here s my suggestion for what a process might look like each example i e subdirectory should have an owners file listing the owners for that example any example for which we are unable to find active owners committed to maintaining it we should delete we should establish some expectation around maintenance e g keep it updated to work with every minor release ideally we d require ci for each example but that might be too high bar anyone interested in helping to drive this process cc jbottum elviraux
| 1
|
516,681
| 14,986,355,819
|
IssuesEvent
|
2021-01-28 21:08:07
|
docker-mailserver/docker-mailserver
|
https://api.github.com/repos/docker-mailserver/docker-mailserver
|
opened
|
[BUG] Using setup.sh config dkim results in a malformed mail.private RSA key
|
kind/bug meta/needs triage priority/medium
|
# Bug Report
The setup.sh tool seems to generate a private key file that results in the following error when attempting to send mail:
Jan 29 07:30:30 mail opendkim[240]: A0BBA1B5D94: SSL error:0909006C:PEM routines:get_name:no start line
Jan 29 07:30:30 mail opendkim[240]: A0BBA1B5D94: dkim_eom(): resource unavailable: PEM_read_bio_PrivateKey() failed
## Context
Use the setup.sh tool to generate the dkim private and public keys as so:
```BASH
./setup.sh config dkim {key size}
```END
Will result in both a mail.private and mail.txt files being created. Configure your dkim records on your DNS server as required and proceed to test with a mail client. Watch the logs from within the mail server container using
```BASH
tail -f /var/log/mail/mail.log
```END
As you attempt to send mail from your email client, if you see the following error in your logs, it's a sign of a malformed mail.private file
```
Jan 29 07:30:30 mail opendkim[240]: A0BBA1B5D94: SSL error:0909006C:PEM routines:get_name:no start line
Jan 29 07:30:30 mail opendkim[240]: A0BBA1B5D94: dkim_eom(): resource unavailable:
```
Note: A workaround is to manually edit the file on your docker host machine and add a carriage return after the space character after the -----BEGIN RSA PRIVATE KEY----- text string. And repeat before the space before the -----END RSA PRIVATE KEY-----
### What is affected by this bug?
Ability to send emails from a remote client using a pre-created email address.
### When does this occur?
Upon send of an email from a mail client
### Expected Behavior
A well formed private key that can be parsed by the SSL library
## Your Environment
- docker-compose: version 1.25.0
- server OS: ubuntu 20.04
|
1.0
|
[BUG] Using setup.sh config dkim results in a malformed mail.private RSA key - # Bug Report
The setup.sh tool seems to generate a private key file that results in the following error when attempting to send mail:
Jan 29 07:30:30 mail opendkim[240]: A0BBA1B5D94: SSL error:0909006C:PEM routines:get_name:no start line
Jan 29 07:30:30 mail opendkim[240]: A0BBA1B5D94: dkim_eom(): resource unavailable: PEM_read_bio_PrivateKey() failed
## Context
Use the setup.sh tool to generate the dkim private and public keys as so:
```BASH
./setup.sh config dkim {key size}
```END
Will result in both a mail.private and mail.txt files being created. Configure your dkim records on your DNS server as required and proceed to test with a mail client. Watch the logs from within the mail server container using
```BASH
tail -f /var/log/mail/mail.log
```END
As you attempt to send mail from your email client, if you see the following error in your logs, it's a sign of a malformed mail.private file
```
Jan 29 07:30:30 mail opendkim[240]: A0BBA1B5D94: SSL error:0909006C:PEM routines:get_name:no start line
Jan 29 07:30:30 mail opendkim[240]: A0BBA1B5D94: dkim_eom(): resource unavailable:
```
Note: A workaround is to manually edit the file on your docker host machine and add a carriage return after the space character after the -----BEGIN RSA PRIVATE KEY----- text string. And repeat before the space before the -----END RSA PRIVATE KEY-----
### What is affected by this bug?
Ability to send emails from a remote client using a pre-created email address.
### When does this occur?
Upon send of an email from a mail client
### Expected Behavior
A well formed private key that can be parsed by the SSL library
## Your Environment
- docker-compose: version 1.25.0
- server OS: ubuntu 20.04
|
non_process
|
using setup sh config dkim results in a malformed mail private rsa key bug report the setup sh tool seems to generate a private key file that results in the following error when attempting to send mail jan mail opendkim ssl error pem routines get name no start line jan mail opendkim dkim eom resource unavailable pem read bio privatekey failed context use the setup sh tool to generate the dkim private and public keys as so bash setup sh config dkim key size end will result in both a mail private and mail txt files being created configure your dkim records on your dns server as required and proceed to test with a mail client watch the logs from within the mail server container using bash tail f var log mail mail log end as you attempt to send mail from your email client if you see the following error in your logs it s a sign of a malformed mail private file jan mail opendkim ssl error pem routines get name no start line jan mail opendkim dkim eom resource unavailable note a workaround is to manually edit the file on your docker host machine and add a carriage return after the space character after the begin rsa private key text string and repeat before the space before the end rsa private key what is affected by this bug ability to send emails from a remote client using a pre created email address when does this occur upon send of an email from a mail client expected behavior a well formed private key that can be parsed by the ssl library your environment docker compose version server os ubuntu
| 0
|
3,877
| 6,812,724,638
|
IssuesEvent
|
2017-11-06 05:20:12
|
Great-Hill-Corporation/quickBlocks
|
https://api.github.com/repos/Great-Hill-Corporation/quickBlocks
|
closed
|
getBlock (by hash) does not work
|
libs-etherlib status-inprocess type-bug
|
getBlock function that supports query by 'hash' calls into queryBlock which takes blockNumber as a string. This function will never work. It needs to call into the node using getBlockByHash. Note--quickBlocks will never be able to get this block from cache because we do not store a map from hash to number and we do store blocks by number.
//-------------------------------------------------------------------------
bool getBlock(CBlock& block, const SFHash& hash) {
bool ret = queryBlock(block, hash, true, true);
return ret;
}
|
1.0
|
getBlock (by hash) does not work - getBlock function that supports query by 'hash' calls into queryBlock which takes blockNumber as a string. This function will never work. It needs to call into the node using getBlockByHash. Note--quickBlocks will never be able to get this block from cache because we do not store a map from hash to number and we do store blocks by number.
//-------------------------------------------------------------------------
bool getBlock(CBlock& block, const SFHash& hash) {
bool ret = queryBlock(block, hash, true, true);
return ret;
}
|
process
|
getblock by hash does not work getblock function that supports query by hash calls into queryblock which takes blocknumber as a string this function will never work it needs to call into the node using getblockbyhash note quickblocks will never be able to get this block from cache because we do not store a map from hash to number and we do store blocks by number bool getblock cblock block const sfhash hash bool ret queryblock block hash true true return ret
| 1
|
11,832
| 14,655,429,539
|
IssuesEvent
|
2020-12-28 11:00:07
|
GoogleCloudPlatform/fda-mystudies
|
https://api.github.com/repos/GoogleCloudPlatform/fda-mystudies
|
closed
|
[PM] [Dev] Apps > App User Registry > Enrolled pop-up > Order of studies enrolled should be in descending order for iOS users
|
Bug P2 Participant manager Process: Fixed Process: Tested QA Process: Tested dev
|
Steps:
1. From iOS user join study A
2. Again join study B
3. Navigate to Apps > App User Registry > Enrolled pop-up
4. Observe the order of studies displayed
A/R: Latest joined study displaying in bottom i.e B displaying below A
E/R: Latest joined/withdrawn study should be in descending order i.e latest should be on top
Note: Issue not observed for Android users

|
3.0
|
[PM] [Dev] Apps > App User Registry > Enrolled pop-up > Order of studies enrolled should be in descending order for iOS users - Steps:
1. From iOS user join study A
2. Again join study B
3. Navigate to Apps > App User Registry > Enrolled pop-up
4. Observe the order of studies displayed
A/R: Latest joined study displaying in bottom i.e B displaying below A
E/R: Latest joined/withdrawn study should be in descending order i.e latest should be on top
Note: Issue not observed for Android users

|
process
|
apps app user registry enrolled pop up order of studies enrolled should be in descending order for ios users steps from ios user join study a again join study b navigate to apps app user registry enrolled pop up observe the order of studies displayed a r latest joined study displaying in bottom i e b displaying below a e r latest joined withdrawn study should be in descending order i e latest should be on top note issue not observed for android users
| 1
|
129,087
| 18,070,804,732
|
IssuesEvent
|
2021-09-21 02:30:29
|
dreamboy9/ofbiz
|
https://api.github.com/repos/dreamboy9/ofbiz
|
closed
|
WS-2018-0232 (Medium) detected in underscore.string-2.1.1.tgz - autoclosed
|
security vulnerability
|
## WS-2018-0232 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>underscore.string-2.1.1.tgz</b></p></summary>
<p>String manipulation extensions for Underscore.js javascript library.</p>
<p>Library home page: <a href="https://registry.npmjs.org/underscore.string/-/underscore.string-2.1.1.tgz">https://registry.npmjs.org/underscore.string/-/underscore.string-2.1.1.tgz</a></p>
<p>Path to dependency file: ofbiz/framework/images/webapp/images/jquery/plugins/validate/package.json</p>
<p>Path to vulnerable library: ofbiz/framework/images/webapp/images/jquery/plugins/validate/node_modules/underscore.string/package.json</p>
<p>
Dependency Hierarchy:
- grunt-0.3.17.tgz (Root Library)
- :x: **underscore.string-2.1.1.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/dreamboy9/ofbiz/commit/9f6ed39589395d00f1d69228cb50a7987ba11512">9f6ed39589395d00f1d69228cb50a7987ba11512</a></p>
<p>Found in base branch: <b>trunk</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Underscore.string, before 3.3.5, is vulnerable to Regular Expression Denial of Service (ReDoS).
<p>Publish Date: 2018-10-03
<p>URL: <a href=https://github.com/epeli/underscore.string/commit/f486cd684c94c12db48b45d52b1472a1b9661029>WS-2018-0232</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 2 Score Details (<b>5.0</b>)</summary>
<p>
Base Score Metrics not available</p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://www.npmjs.com/advisories/745">https://www.npmjs.com/advisories/745</a></p>
<p>Release Date: 2018-12-30</p>
<p>Fix Resolution: 3.3.5</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
True
|
WS-2018-0232 (Medium) detected in underscore.string-2.1.1.tgz - autoclosed - ## WS-2018-0232 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>underscore.string-2.1.1.tgz</b></p></summary>
<p>String manipulation extensions for Underscore.js javascript library.</p>
<p>Library home page: <a href="https://registry.npmjs.org/underscore.string/-/underscore.string-2.1.1.tgz">https://registry.npmjs.org/underscore.string/-/underscore.string-2.1.1.tgz</a></p>
<p>Path to dependency file: ofbiz/framework/images/webapp/images/jquery/plugins/validate/package.json</p>
<p>Path to vulnerable library: ofbiz/framework/images/webapp/images/jquery/plugins/validate/node_modules/underscore.string/package.json</p>
<p>
Dependency Hierarchy:
- grunt-0.3.17.tgz (Root Library)
- :x: **underscore.string-2.1.1.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/dreamboy9/ofbiz/commit/9f6ed39589395d00f1d69228cb50a7987ba11512">9f6ed39589395d00f1d69228cb50a7987ba11512</a></p>
<p>Found in base branch: <b>trunk</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Underscore.string, before 3.3.5, is vulnerable to Regular Expression Denial of Service (ReDoS).
<p>Publish Date: 2018-10-03
<p>URL: <a href=https://github.com/epeli/underscore.string/commit/f486cd684c94c12db48b45d52b1472a1b9661029>WS-2018-0232</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 2 Score Details (<b>5.0</b>)</summary>
<p>
Base Score Metrics not available</p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://www.npmjs.com/advisories/745">https://www.npmjs.com/advisories/745</a></p>
<p>Release Date: 2018-12-30</p>
<p>Fix Resolution: 3.3.5</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
non_process
|
ws medium detected in underscore string tgz autoclosed ws medium severity vulnerability vulnerable library underscore string tgz string manipulation extensions for underscore js javascript library library home page a href path to dependency file ofbiz framework images webapp images jquery plugins validate package json path to vulnerable library ofbiz framework images webapp images jquery plugins validate node modules underscore string package json dependency hierarchy grunt tgz root library x underscore string tgz vulnerable library found in head commit a href found in base branch trunk vulnerability details underscore string before is vulnerable to regular expression denial of service redos publish date url a href cvss score details base score metrics not available suggested fix type upgrade version origin a href release date fix resolution step up your open source security game with whitesource
| 0
|
4,083
| 7,036,837,859
|
IssuesEvent
|
2017-12-28 11:09:17
|
nerdalize/nerd
|
https://api.github.com/repos/nerdalize/nerd
|
opened
|
Install a CustomResourceDefinition for datasets during development
|
Dev Process
|
The basic idea is that we use Kubernetes [Custom Resources](https://kubernetes.io/docs/concepts/api-extension/custom-resources/) to store datasets in the kubernetes clusters. These dataset resources store metadata and are (a) reference(s) to were the actual data (blobs) lives. For now we only support our public-s3-sandbox-without-deduplication as a place for actual storage but in the future more can be supported. In terms of Metadata we would mostly want the size of a dataset to be stored but in the future one can image tags or up/down progress information to be available in dataset resources as well.
As an example the custom resource definition could look like this, take from [here](https://github.com/kubernetes/sample-controller/tree/master/artifacts/examples), also see: [here](https://kubernetes.io/docs/tasks/access-kubernetes-api/extend-api-custom-resource-definitions/):
```yaml
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
name: datasets.nerdalize.com
spec:
group: nerdalize.com
version: v1alpha1
names:
kind: Dataset
plural: datasets
scope: Namespaced
```
|
1.0
|
Install a CustomResourceDefinition for datasets during development - The basic idea is that we use Kubernetes [Custom Resources](https://kubernetes.io/docs/concepts/api-extension/custom-resources/) to store datasets in the kubernetes clusters. These dataset resources store metadata and are (a) reference(s) to were the actual data (blobs) lives. For now we only support our public-s3-sandbox-without-deduplication as a place for actual storage but in the future more can be supported. In terms of Metadata we would mostly want the size of a dataset to be stored but in the future one can image tags or up/down progress information to be available in dataset resources as well.
As an example the custom resource definition could look like this, take from [here](https://github.com/kubernetes/sample-controller/tree/master/artifacts/examples), also see: [here](https://kubernetes.io/docs/tasks/access-kubernetes-api/extend-api-custom-resource-definitions/):
```yaml
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
name: datasets.nerdalize.com
spec:
group: nerdalize.com
version: v1alpha1
names:
kind: Dataset
plural: datasets
scope: Namespaced
```
|
process
|
install a customresourcedefinition for datasets during development the basic idea is that we use kubernetes to store datasets in the kubernetes clusters these dataset resources store metadata and are a reference s to were the actual data blobs lives for now we only support our public sandbox without deduplication as a place for actual storage but in the future more can be supported in terms of metadata we would mostly want the size of a dataset to be stored but in the future one can image tags or up down progress information to be available in dataset resources as well as an example the custom resource definition could look like this take from also see yaml apiversion apiextensions io kind customresourcedefinition metadata name datasets nerdalize com spec group nerdalize com version names kind dataset plural datasets scope namespaced
| 1
|
461,346
| 13,228,757,056
|
IssuesEvent
|
2020-08-18 06:53:33
|
wso2/docs-apim
|
https://api.github.com/repos/wso2/docs-apim
|
closed
|
[3.2.0][Authentication][Federating OAuth Applications]-Missing SSO Documentation
|
API-M-3.2.0 Priority/Normal
|
### Description:
The SSO documentation under the topic "Pre-Requisite" is missing.
### Content Positioning in Documentation:
Root documentation: https://apim.docs.wso2.com/en/next/learn/api-security/api-authentication/advanced-topics/federating-oauth-applications/
SSO Documentation(Missing documentation): https://apim.docs.wso2.com/en/next/learn/api-security/api-authentication/advanced-topics/sso/okta-as-an-external-idp-using-oidc
|
1.0
|
[3.2.0][Authentication][Federating OAuth Applications]-Missing SSO Documentation - ### Description:
The SSO documentation under the topic "Pre-Requisite" is missing.
### Content Positioning in Documentation:
Root documentation: https://apim.docs.wso2.com/en/next/learn/api-security/api-authentication/advanced-topics/federating-oauth-applications/
SSO Documentation(Missing documentation): https://apim.docs.wso2.com/en/next/learn/api-security/api-authentication/advanced-topics/sso/okta-as-an-external-idp-using-oidc
|
non_process
|
missing sso documentation description the sso documentation under the topic pre requisite is missing content positioning in documentation root documentation sso documentation missing documentation
| 0
|
282,447
| 30,889,335,888
|
IssuesEvent
|
2023-08-04 02:34:29
|
maddyCode23/linux-4.1.15
|
https://api.github.com/repos/maddyCode23/linux-4.1.15
|
reopened
|
CVE-2020-9391 (Medium) detected in linux-stable-rtv4.1.33, linux-stable-rtv4.1.33
|
Mend: dependency security vulnerability
|
## CVE-2020-9391 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>linux-stable-rtv4.1.33</b>, <b>linux-stable-rtv4.1.33</b></p></summary>
<p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png?' width=19 height=20> Vulnerability Details</summary>
<p>
An issue was discovered in the Linux kernel 5.4 and 5.5 through 5.5.6 on the AArch64 architecture. It ignores the top byte in the address passed to the brk system call, potentially moving the memory break downwards when the application expects it to move upwards, aka CID-dcde237319e6. This has been observed to cause heap corruption with the GNU C Library malloc implementation.
<p>Publish Date: 2020-02-25
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2020-9391>CVE-2020-9391</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: Low
- Privileges Required: Low
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://www.linuxkernelcves.com/cves/CVE-2020-9391">https://www.linuxkernelcves.com/cves/CVE-2020-9391</a></p>
<p>Release Date: 2020-02-25</p>
<p>Fix Resolution: v5.6-rc3,v5.4.23,v5.5.7</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
True
|
CVE-2020-9391 (Medium) detected in linux-stable-rtv4.1.33, linux-stable-rtv4.1.33 - ## CVE-2020-9391 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>linux-stable-rtv4.1.33</b>, <b>linux-stable-rtv4.1.33</b></p></summary>
<p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png?' width=19 height=20> Vulnerability Details</summary>
<p>
An issue was discovered in the Linux kernel 5.4 and 5.5 through 5.5.6 on the AArch64 architecture. It ignores the top byte in the address passed to the brk system call, potentially moving the memory break downwards when the application expects it to move upwards, aka CID-dcde237319e6. This has been observed to cause heap corruption with the GNU C Library malloc implementation.
<p>Publish Date: 2020-02-25
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2020-9391>CVE-2020-9391</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: Low
- Privileges Required: Low
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://www.linuxkernelcves.com/cves/CVE-2020-9391">https://www.linuxkernelcves.com/cves/CVE-2020-9391</a></p>
<p>Release Date: 2020-02-25</p>
<p>Fix Resolution: v5.6-rc3,v5.4.23,v5.5.7</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
non_process
|
cve medium detected in linux stable linux stable cve medium severity vulnerability vulnerable libraries linux stable linux stable vulnerability details an issue was discovered in the linux kernel and through on the architecture it ignores the top byte in the address passed to the brk system call potentially moving the memory break downwards when the application expects it to move upwards aka cid this has been observed to cause heap corruption with the gnu c library malloc implementation publish date url a href cvss score details base score metrics exploitability metrics attack vector local attack complexity low privileges required low user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution step up your open source security game with mend
| 0
|
116,338
| 17,362,936,094
|
IssuesEvent
|
2021-07-30 00:28:58
|
RobKraft/SQLInjectionDetection
|
https://api.github.com/repos/RobKraft/SQLInjectionDetection
|
closed
|
CVE-2019-0548 (High) detected in multiple libraries - autoclosed
|
security vulnerability
|
## CVE-2019-0548 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>microsoft.netcore.dotnethostresolver.2.1.0.nupkg</b>, <b>microsoft.netcore.dotnethostpolicy.2.1.0.nupkg</b>, <b>microsoft.netcore.app.2.1.0.nupkg</b>, <b>microsoft.netcore.dotnetapphost.2.1.0.nupkg</b></p></summary>
<p>
<details><summary><b>microsoft.netcore.dotnethostresolver.2.1.0.nupkg</b></p></summary>
<p>Provides an implementation of framework resolution strategy used by Microsoft.NETCore.DotNetHost
ca...</p>
<p>Library home page: <a href="https://api.nuget.org/packages/microsoft.netcore.dotnethostresolver.2.1.0.nupkg">https://api.nuget.org/packages/microsoft.netcore.dotnethostresolver.2.1.0.nupkg</a></p>
<p>Path to dependency file: SQLInjectionDetection/SampleSQLInjectionDetectionApp.csproj</p>
<p>Path to vulnerable library: /home/wss-scanner/.nuget/packages/microsoft.netcore.dotnethostresolver/2.1.0/microsoft.netcore.dotnethostresolver.2.1.0.nupkg</p>
<p>
Dependency Hierarchy:
- microsoft.netcore.app.2.1.0.nupkg (Root Library)
- microsoft.netcore.dotnethostpolicy.2.1.0.nupkg
- :x: **microsoft.netcore.dotnethostresolver.2.1.0.nupkg** (Vulnerable Library)
</details>
<details><summary><b>microsoft.netcore.dotnethostpolicy.2.1.0.nupkg</b></p></summary>
<p>Provides a CoreCLR hosting policy implementation -- configuration settings, assembly paths and assem...</p>
<p>Library home page: <a href="https://api.nuget.org/packages/microsoft.netcore.dotnethostpolicy.2.1.0.nupkg">https://api.nuget.org/packages/microsoft.netcore.dotnethostpolicy.2.1.0.nupkg</a></p>
<p>Path to dependency file: SQLInjectionDetection/SampleSQLInjectionDetectionApp.csproj</p>
<p>Path to vulnerable library: /home/wss-scanner/.nuget/packages/microsoft.netcore.dotnethostpolicy/2.1.0/microsoft.netcore.dotnethostpolicy.2.1.0.nupkg</p>
<p>
Dependency Hierarchy:
- microsoft.netcore.app.2.1.0.nupkg (Root Library)
- :x: **microsoft.netcore.dotnethostpolicy.2.1.0.nupkg** (Vulnerable Library)
</details>
<details><summary><b>microsoft.netcore.app.2.1.0.nupkg</b></p></summary>
<p>A set of .NET API's that are included in the default .NET Core application model.
caa7b7e2bad98e56a...</p>
<p>Library home page: <a href="https://api.nuget.org/packages/microsoft.netcore.app.2.1.0.nupkg">https://api.nuget.org/packages/microsoft.netcore.app.2.1.0.nupkg</a></p>
<p>Path to dependency file: SQLInjectionDetection/SampleSQLInjectionDetectionApp.csproj</p>
<p>Path to vulnerable library: canner/.nuget/packages/microsoft.netcore.app/2.1.0/microsoft.netcore.app.2.1.0.nupkg</p>
<p>
Dependency Hierarchy:
- :x: **microsoft.netcore.app.2.1.0.nupkg** (Vulnerable Library)
</details>
<details><summary><b>microsoft.netcore.dotnetapphost.2.1.0.nupkg</b></p></summary>
<p>Provides the .NET Core app bootstrapper intended for use in the application directory
caa7b7e2bad98...</p>
<p>Library home page: <a href="https://api.nuget.org/packages/microsoft.netcore.dotnetapphost.2.1.0.nupkg">https://api.nuget.org/packages/microsoft.netcore.dotnetapphost.2.1.0.nupkg</a></p>
<p>Path to dependency file: SQLInjectionDetection/SampleSQLInjectionDetectionApp.csproj</p>
<p>Path to vulnerable library: /home/wss-scanner/.nuget/packages/microsoft.netcore.dotnetapphost/2.1.0/microsoft.netcore.dotnetapphost.2.1.0.nupkg</p>
<p>
Dependency Hierarchy:
- microsoft.netcore.app.2.1.0.nupkg (Root Library)
- microsoft.netcore.dotnethostpolicy.2.1.0.nupkg
- microsoft.netcore.dotnethostresolver.2.1.0.nupkg
- :x: **microsoft.netcore.dotnetapphost.2.1.0.nupkg** (Vulnerable Library)
</details>
<p>Found in HEAD commit: <a href="https://github.com/RobKraft/SQLInjectionDetection/commit/2c98fdaa19c7685b3ad73649ba836b018c42a9c4">2c98fdaa19c7685b3ad73649ba836b018c42a9c4</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
A denial of service vulnerability exists when ASP.NET Core improperly handles web requests, aka "ASP.NET Core Denial of Service Vulnerability." This affects ASP.NET Core 2.2, ASP.NET Core 2.1. This CVE ID is unique from CVE-2019-0564.
<p>Publish Date: 2019-01-08
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2019-0548>CVE-2019-0548</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/aspnet/Announcements/issues/335">https://github.com/aspnet/Announcements/issues/335</a></p>
<p>Release Date: 2019-01-08</p>
<p>Fix Resolution: Microsoft.AspNetCore.SignalR - 1.1.0; Microsoft.AspNetCore.Server.Kestrel.Transport.Sockets - 2.2.1; Microsoft.AspNetCore.Server.IIS - 2.2.1; Microsoft.AspNetCore.Server.IISIntegration - 2.2.1;Microsoft.AspNetCore.Server.Kestrel.Core - 2.1.7</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
True
|
CVE-2019-0548 (High) detected in multiple libraries - autoclosed - ## CVE-2019-0548 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>microsoft.netcore.dotnethostresolver.2.1.0.nupkg</b>, <b>microsoft.netcore.dotnethostpolicy.2.1.0.nupkg</b>, <b>microsoft.netcore.app.2.1.0.nupkg</b>, <b>microsoft.netcore.dotnetapphost.2.1.0.nupkg</b></p></summary>
<p>
<details><summary><b>microsoft.netcore.dotnethostresolver.2.1.0.nupkg</b></p></summary>
<p>Provides an implementation of framework resolution strategy used by Microsoft.NETCore.DotNetHost
ca...</p>
<p>Library home page: <a href="https://api.nuget.org/packages/microsoft.netcore.dotnethostresolver.2.1.0.nupkg">https://api.nuget.org/packages/microsoft.netcore.dotnethostresolver.2.1.0.nupkg</a></p>
<p>Path to dependency file: SQLInjectionDetection/SampleSQLInjectionDetectionApp.csproj</p>
<p>Path to vulnerable library: /home/wss-scanner/.nuget/packages/microsoft.netcore.dotnethostresolver/2.1.0/microsoft.netcore.dotnethostresolver.2.1.0.nupkg</p>
<p>
Dependency Hierarchy:
- microsoft.netcore.app.2.1.0.nupkg (Root Library)
- microsoft.netcore.dotnethostpolicy.2.1.0.nupkg
- :x: **microsoft.netcore.dotnethostresolver.2.1.0.nupkg** (Vulnerable Library)
</details>
<details><summary><b>microsoft.netcore.dotnethostpolicy.2.1.0.nupkg</b></p></summary>
<p>Provides a CoreCLR hosting policy implementation -- configuration settings, assembly paths and assem...</p>
<p>Library home page: <a href="https://api.nuget.org/packages/microsoft.netcore.dotnethostpolicy.2.1.0.nupkg">https://api.nuget.org/packages/microsoft.netcore.dotnethostpolicy.2.1.0.nupkg</a></p>
<p>Path to dependency file: SQLInjectionDetection/SampleSQLInjectionDetectionApp.csproj</p>
<p>Path to vulnerable library: /home/wss-scanner/.nuget/packages/microsoft.netcore.dotnethostpolicy/2.1.0/microsoft.netcore.dotnethostpolicy.2.1.0.nupkg</p>
<p>
Dependency Hierarchy:
- microsoft.netcore.app.2.1.0.nupkg (Root Library)
- :x: **microsoft.netcore.dotnethostpolicy.2.1.0.nupkg** (Vulnerable Library)
</details>
<details><summary><b>microsoft.netcore.app.2.1.0.nupkg</b></p></summary>
<p>A set of .NET API's that are included in the default .NET Core application model.
caa7b7e2bad98e56a...</p>
<p>Library home page: <a href="https://api.nuget.org/packages/microsoft.netcore.app.2.1.0.nupkg">https://api.nuget.org/packages/microsoft.netcore.app.2.1.0.nupkg</a></p>
<p>Path to dependency file: SQLInjectionDetection/SampleSQLInjectionDetectionApp.csproj</p>
<p>Path to vulnerable library: canner/.nuget/packages/microsoft.netcore.app/2.1.0/microsoft.netcore.app.2.1.0.nupkg</p>
<p>
Dependency Hierarchy:
- :x: **microsoft.netcore.app.2.1.0.nupkg** (Vulnerable Library)
</details>
<details><summary><b>microsoft.netcore.dotnetapphost.2.1.0.nupkg</b></p></summary>
<p>Provides the .NET Core app bootstrapper intended for use in the application directory
caa7b7e2bad98...</p>
<p>Library home page: <a href="https://api.nuget.org/packages/microsoft.netcore.dotnetapphost.2.1.0.nupkg">https://api.nuget.org/packages/microsoft.netcore.dotnetapphost.2.1.0.nupkg</a></p>
<p>Path to dependency file: SQLInjectionDetection/SampleSQLInjectionDetectionApp.csproj</p>
<p>Path to vulnerable library: /home/wss-scanner/.nuget/packages/microsoft.netcore.dotnetapphost/2.1.0/microsoft.netcore.dotnetapphost.2.1.0.nupkg</p>
<p>
Dependency Hierarchy:
- microsoft.netcore.app.2.1.0.nupkg (Root Library)
- microsoft.netcore.dotnethostpolicy.2.1.0.nupkg
- microsoft.netcore.dotnethostresolver.2.1.0.nupkg
- :x: **microsoft.netcore.dotnetapphost.2.1.0.nupkg** (Vulnerable Library)
</details>
<p>Found in HEAD commit: <a href="https://github.com/RobKraft/SQLInjectionDetection/commit/2c98fdaa19c7685b3ad73649ba836b018c42a9c4">2c98fdaa19c7685b3ad73649ba836b018c42a9c4</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
A denial of service vulnerability exists when ASP.NET Core improperly handles web requests, aka "ASP.NET Core Denial of Service Vulnerability." This affects ASP.NET Core 2.2, ASP.NET Core 2.1. This CVE ID is unique from CVE-2019-0564.
<p>Publish Date: 2019-01-08
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2019-0548>CVE-2019-0548</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/aspnet/Announcements/issues/335">https://github.com/aspnet/Announcements/issues/335</a></p>
<p>Release Date: 2019-01-08</p>
<p>Fix Resolution: Microsoft.AspNetCore.SignalR - 1.1.0; Microsoft.AspNetCore.Server.Kestrel.Transport.Sockets - 2.2.1; Microsoft.AspNetCore.Server.IIS - 2.2.1; Microsoft.AspNetCore.Server.IISIntegration - 2.2.1;Microsoft.AspNetCore.Server.Kestrel.Core - 2.1.7</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
non_process
|
cve high detected in multiple libraries autoclosed cve high severity vulnerability vulnerable libraries microsoft netcore dotnethostresolver nupkg microsoft netcore dotnethostpolicy nupkg microsoft netcore app nupkg microsoft netcore dotnetapphost nupkg microsoft netcore dotnethostresolver nupkg provides an implementation of framework resolution strategy used by microsoft netcore dotnethost ca library home page a href path to dependency file sqlinjectiondetection samplesqlinjectiondetectionapp csproj path to vulnerable library home wss scanner nuget packages microsoft netcore dotnethostresolver microsoft netcore dotnethostresolver nupkg dependency hierarchy microsoft netcore app nupkg root library microsoft netcore dotnethostpolicy nupkg x microsoft netcore dotnethostresolver nupkg vulnerable library microsoft netcore dotnethostpolicy nupkg provides a coreclr hosting policy implementation configuration settings assembly paths and assem library home page a href path to dependency file sqlinjectiondetection samplesqlinjectiondetectionapp csproj path to vulnerable library home wss scanner nuget packages microsoft netcore dotnethostpolicy microsoft netcore dotnethostpolicy nupkg dependency hierarchy microsoft netcore app nupkg root library x microsoft netcore dotnethostpolicy nupkg vulnerable library microsoft netcore app nupkg a set of net api s that are included in the default net core application model library home page a href path to dependency file sqlinjectiondetection samplesqlinjectiondetectionapp csproj path to vulnerable library canner nuget packages microsoft netcore app microsoft netcore app nupkg dependency hierarchy x microsoft netcore app nupkg vulnerable library microsoft netcore dotnetapphost nupkg provides the net core app bootstrapper intended for use in the application directory library home page a href path to dependency file sqlinjectiondetection samplesqlinjectiondetectionapp csproj path to vulnerable library home wss scanner nuget packages microsoft netcore dotnetapphost microsoft netcore dotnetapphost nupkg dependency hierarchy microsoft netcore app nupkg root library microsoft netcore dotnethostpolicy nupkg microsoft netcore dotnethostresolver nupkg x microsoft netcore dotnetapphost nupkg vulnerable library found in head commit a href found in base branch master vulnerability details a denial of service vulnerability exists when asp net core improperly handles web requests aka asp net core denial of service vulnerability this affects asp net core asp net core this cve id is unique from cve publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution microsoft aspnetcore signalr microsoft aspnetcore server kestrel transport sockets microsoft aspnetcore server iis microsoft aspnetcore server iisintegration microsoft aspnetcore server kestrel core step up your open source security game with whitesource
| 0
|
410,172
| 27,772,139,108
|
IssuesEvent
|
2023-03-16 15:03:54
|
clash-lang/clash-compiler
|
https://api.github.com/repos/clash-lang/clash-compiler
|
closed
|
Malformed haddock in `clash-prelude`
|
documentation
|
The API documentation for [`head` and `tail`](https://hackage.haskell.org/package/clash-prelude-1.6.3/docs/Clash-Sized-Vector.html#v:head) for the current stable version of Clash is slightly malformed, resulting in the documentation being rendered badly. I see:

Looking at the source, the functions causing problems attempt to use the preprocessor to change what is displayed:
```haskell
{- | Extract the first element of a vector
>>> head (1:>2:>3:>Nil)
1
#if __GLASGOW_HASKELL__ >= 900
>>> head Nil
<BLANKLINE>
<interactive>:...
• Couldn't match type ‘1’ with ‘0’
Expected: Vec (0 + 1) a
Actual: Vec 0 a
• In the first argument of ‘head’, namely ‘Nil’
In the expression: head Nil
In an equation for ‘it’: it = head Nil
#else
>>> head Nil
<BLANKLINE>
<interactive>:...
• Couldn't match type ‘1’ with ‘0’
Expected type: Vec (0 + 1) a
Actual type: Vec 0 a
• In the first argument of ‘head’, namely ‘Nil’
In the expression: head Nil
In an equation for ‘it’: it = head Nil
#endif
-}
```
|
1.0
|
Malformed haddock in `clash-prelude` - The API documentation for [`head` and `tail`](https://hackage.haskell.org/package/clash-prelude-1.6.3/docs/Clash-Sized-Vector.html#v:head) for the current stable version of Clash is slightly malformed, resulting in the documentation being rendered badly. I see:

Looking at the source, the functions causing problems attempt to use the preprocessor to change what is displayed:
```haskell
{- | Extract the first element of a vector
>>> head (1:>2:>3:>Nil)
1
#if __GLASGOW_HASKELL__ >= 900
>>> head Nil
<BLANKLINE>
<interactive>:...
• Couldn't match type ‘1’ with ‘0’
Expected: Vec (0 + 1) a
Actual: Vec 0 a
• In the first argument of ‘head’, namely ‘Nil’
In the expression: head Nil
In an equation for ‘it’: it = head Nil
#else
>>> head Nil
<BLANKLINE>
<interactive>:...
• Couldn't match type ‘1’ with ‘0’
Expected type: Vec (0 + 1) a
Actual type: Vec 0 a
• In the first argument of ‘head’, namely ‘Nil’
In the expression: head Nil
In an equation for ‘it’: it = head Nil
#endif
-}
```
|
non_process
|
malformed haddock in clash prelude the api documentation for for the current stable version of clash is slightly malformed resulting in the documentation being rendered badly i see looking at the source the functions causing problems attempt to use the preprocessor to change what is displayed haskell extract the first element of a vector head nil if glasgow haskell head nil • couldn t match type ‘ ’ with ‘ ’ expected vec a actual vec a • in the first argument of ‘head’ namely ‘nil’ in the expression head nil in an equation for ‘it’ it head nil else head nil • couldn t match type ‘ ’ with ‘ ’ expected type vec a actual type vec a • in the first argument of ‘head’ namely ‘nil’ in the expression head nil in an equation for ‘it’ it head nil endif
| 0
|
782,203
| 27,490,093,831
|
IssuesEvent
|
2023-03-04 14:10:23
|
clt313/SuperballVR
|
https://api.github.com/repos/clt313/SuperballVR
|
closed
|
Fix the player movement bug
|
priority: high bug
|
There's a common bug that involves players colliding with objects but then being pushed in some direction forever or until colliding with something else. This definitely can't be in the release candidate.
Idea for a first implementation: add a script to the XRRig prefab that sets the player's velocity to zero if they're not trying to move.
|
1.0
|
Fix the player movement bug - There's a common bug that involves players colliding with objects but then being pushed in some direction forever or until colliding with something else. This definitely can't be in the release candidate.
Idea for a first implementation: add a script to the XRRig prefab that sets the player's velocity to zero if they're not trying to move.
|
non_process
|
fix the player movement bug there s a common bug that involves players colliding with objects but then being pushed in some direction forever or until colliding with something else this definitely can t be in the release candidate idea for a first implementation add a script to the xrrig prefab that sets the player s velocity to zero if they re not trying to move
| 0
|
69,636
| 15,022,125,309
|
IssuesEvent
|
2021-02-01 16:35:16
|
MValle21/snipe-it
|
https://api.github.com/repos/MValle21/snipe-it
|
opened
|
WS-2020-0144 (High) detected in laravel/framework-v5.5.48
|
security vulnerability
|
## WS-2020-0144 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>laravel/framework-v5.5.48</b></p></summary>
<p></p>
<p>
Dependency Hierarchy:
- barryvdh/laravel-cors-v0.11.4 (Root Library)
- :x: **laravel/framework-v5.5.48** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/MValle21/snipe-it/commit/6937a6413a72bc7060c19b8195035c2b9504cbd2">6937a6413a72bc7060c19b8195035c2b9504cbd2</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Application's using the "cookie" session driver were the primary applications affected by this vulnerability. Since we have not yet released a security release for the Laravel 5.5 version of the framework, we recommend that all applications running Laravel 5.5 and earlier do not use the "cookie" session driver in their production deployments.
<p>Publish Date: 2020-07-27
<p>URL: <a href=https://blog.laravel.com/laravel-cookie-security-releases>WS-2020-0144</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>9.8</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<!-- <REMEDIATE>{"isOpenPROnVulnerability":true,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"PHP","packageName":"laravel/framework","packageVersion":"v5.5.48","packageFilePaths":[],"isTransitiveDependency":true,"dependencyTree":"barryvdh/laravel-cors:v0.11.4;laravel/framework:v5.5.48","isMinimumFixVersionAvailable":false}],"baseBranches":["master"],"vulnerabilityIdentifier":"WS-2020-0144","vulnerabilityDetails":"Application\u0027s using the \"cookie\" session driver were the primary applications affected by this vulnerability. Since we have not yet released a security release for the Laravel 5.5 version of the framework, we recommend that all applications running Laravel 5.5 and earlier do not use the \"cookie\" session driver in their production deployments.","vulnerabilityUrl":"https://blog.laravel.com/laravel-cookie-security-releases","cvss3Severity":"high","cvss3Score":"9.8","cvss3Metrics":{"A":"High","AC":"Low","PR":"None","S":"Unchanged","C":"High","UI":"None","AV":"Network","I":"High"},"extraData":{}}</REMEDIATE> -->
|
True
|
WS-2020-0144 (High) detected in laravel/framework-v5.5.48 - ## WS-2020-0144 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>laravel/framework-v5.5.48</b></p></summary>
<p></p>
<p>
Dependency Hierarchy:
- barryvdh/laravel-cors-v0.11.4 (Root Library)
- :x: **laravel/framework-v5.5.48** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/MValle21/snipe-it/commit/6937a6413a72bc7060c19b8195035c2b9504cbd2">6937a6413a72bc7060c19b8195035c2b9504cbd2</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Application's using the "cookie" session driver were the primary applications affected by this vulnerability. Since we have not yet released a security release for the Laravel 5.5 version of the framework, we recommend that all applications running Laravel 5.5 and earlier do not use the "cookie" session driver in their production deployments.
<p>Publish Date: 2020-07-27
<p>URL: <a href=https://blog.laravel.com/laravel-cookie-security-releases>WS-2020-0144</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>9.8</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<!-- <REMEDIATE>{"isOpenPROnVulnerability":true,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"PHP","packageName":"laravel/framework","packageVersion":"v5.5.48","packageFilePaths":[],"isTransitiveDependency":true,"dependencyTree":"barryvdh/laravel-cors:v0.11.4;laravel/framework:v5.5.48","isMinimumFixVersionAvailable":false}],"baseBranches":["master"],"vulnerabilityIdentifier":"WS-2020-0144","vulnerabilityDetails":"Application\u0027s using the \"cookie\" session driver were the primary applications affected by this vulnerability. Since we have not yet released a security release for the Laravel 5.5 version of the framework, we recommend that all applications running Laravel 5.5 and earlier do not use the \"cookie\" session driver in their production deployments.","vulnerabilityUrl":"https://blog.laravel.com/laravel-cookie-security-releases","cvss3Severity":"high","cvss3Score":"9.8","cvss3Metrics":{"A":"High","AC":"Low","PR":"None","S":"Unchanged","C":"High","UI":"None","AV":"Network","I":"High"},"extraData":{}}</REMEDIATE> -->
|
non_process
|
ws high detected in laravel framework ws high severity vulnerability vulnerable library laravel framework dependency hierarchy barryvdh laravel cors root library x laravel framework vulnerable library found in head commit a href found in base branch master vulnerability details application s using the cookie session driver were the primary applications affected by this vulnerability since we have not yet released a security release for the laravel version of the framework we recommend that all applications running laravel and earlier do not use the cookie session driver in their production deployments publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href isopenpronvulnerability true ispackagebased true isdefaultbranch true packages istransitivedependency true dependencytree barryvdh laravel cors laravel framework isminimumfixversionavailable false basebranches vulnerabilityidentifier ws vulnerabilitydetails application using the cookie session driver were the primary applications affected by this vulnerability since we have not yet released a security release for the laravel version of the framework we recommend that all applications running laravel and earlier do not use the cookie session driver in their production deployments vulnerabilityurl
| 0
|
18,651
| 24,581,085,803
|
IssuesEvent
|
2022-10-13 15:39:43
|
GoogleCloudPlatform/fda-mystudies
|
https://api.github.com/repos/GoogleCloudPlatform/fda-mystudies
|
closed
|
[Consent API] [PM] Data sharing permissions is getting displayed as N/A in the participant manager in the following scenario
|
Bug P0 Participant manager Process: Fixed Process: Tested QA Process: Tested dev
|
**Steps:**
1. Install the mobile app
2. Sign in / Sign up
3. Enroll to the study
4. Now, Go back to SB
5. Select 'Enforce e-consent flow again for enrolled participants' in the SB and Publish the study
6. Open the mobile app and submit the updated consent flow
7. Login to PM
8. Go to participants details screen and Verify
**AR:** Data sharing permissions is getting displayed as N/A in the participant manager in the following scenario
**ER:**
Data sharing permissions should get displayed as,
1. Provided > If participants has provided the data sharing permission in the mobile app
2. Not Provided > If participants has not provided the data sharing permission in the mobile app
3. Not Applicable > If there is no data sharing record available for the participant

|
3.0
|
[Consent API] [PM] Data sharing permissions is getting displayed as N/A in the participant manager in the following scenario - **Steps:**
1. Install the mobile app
2. Sign in / Sign up
3. Enroll to the study
4. Now, Go back to SB
5. Select 'Enforce e-consent flow again for enrolled participants' in the SB and Publish the study
6. Open the mobile app and submit the updated consent flow
7. Login to PM
8. Go to participants details screen and Verify
**AR:** Data sharing permissions is getting displayed as N/A in the participant manager in the following scenario
**ER:**
Data sharing permissions should get displayed as,
1. Provided > If participants has provided the data sharing permission in the mobile app
2. Not Provided > If participants has not provided the data sharing permission in the mobile app
3. Not Applicable > If there is no data sharing record available for the participant

|
process
|
data sharing permissions is getting displayed as n a in the participant manager in the following scenario steps install the mobile app sign in sign up enroll to the study now go back to sb select enforce e consent flow again for enrolled participants in the sb and publish the study open the mobile app and submit the updated consent flow login to pm go to participants details screen and verify ar data sharing permissions is getting displayed as n a in the participant manager in the following scenario er data sharing permissions should get displayed as provided if participants has provided the data sharing permission in the mobile app not provided if participants has not provided the data sharing permission in the mobile app not applicable if there is no data sharing record available for the participant
| 1
|
410,665
| 27,796,803,197
|
IssuesEvent
|
2023-03-17 13:09:27
|
unoplatform/uno
|
https://api.github.com/repos/unoplatform/uno
|
closed
|
AdaptiveTrigger option in Docs menu not clickable
|
triage/untriaged difficulty/tbd kind/documentation
|
<!-- Please only use this template for reporting issues with the documentation where the fix isn't clear. We greatly appreciate it when people send in pull-requests with fixes. If there's any friction, apart from knowledge, that's preventing you from doing so please let us know below. -->
## On which page?
Fundamentals > Development > Uno features > Uno.UI.Toolkit > AdaptiveTrigger
## What's wrong?
AdaptiveTrigger is not clickable
## Any feedback?
|
1.0
|
AdaptiveTrigger option in Docs menu not clickable - <!-- Please only use this template for reporting issues with the documentation where the fix isn't clear. We greatly appreciate it when people send in pull-requests with fixes. If there's any friction, apart from knowledge, that's preventing you from doing so please let us know below. -->
## On which page?
Fundamentals > Development > Uno features > Uno.UI.Toolkit > AdaptiveTrigger
## What's wrong?
AdaptiveTrigger is not clickable
## Any feedback?
|
non_process
|
adaptivetrigger option in docs menu not clickable on which page fundamentals development uno features uno ui toolkit adaptivetrigger what s wrong adaptivetrigger is not clickable any feedback
| 0
|
112,282
| 17,087,321,639
|
IssuesEvent
|
2021-07-08 13:26:24
|
jgeraigery/experian-java
|
https://api.github.com/repos/jgeraigery/experian-java
|
opened
|
CVE-2014-0114 (Medium) detected in commons-beanutils-1.9.3.jar
|
security vulnerability
|
## CVE-2014-0114 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>commons-beanutils-1.9.3.jar</b></p></summary>
<p>Apache Commons BeanUtils provides an easy-to-use but flexible wrapper around reflection and introspection.</p>
<p>Library home page: <a href="https://commons.apache.org/proper/commons-beanutils/">https://commons.apache.org/proper/commons-beanutils/</a></p>
<p>Path to dependency file: experian-java/MavenWorkspace/bis-services-lib/bis-services-base/pom.xml</p>
<p>Path to vulnerable library: canner/.m2/repository/commons-beanutils/commons-beanutils/1.9.3/commons-beanutils-1.9.3.jar</p>
<p>
Dependency Hierarchy:
- :x: **commons-beanutils-1.9.3.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/jgeraigery/experian-java/commit/9ade2a959068cca30ecfdbb254939af6f67affb1">9ade2a959068cca30ecfdbb254939af6f67affb1</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Apache Commons BeanUtils, as distributed in lib/commons-beanutils-1.8.0.jar in Apache Struts 1.x through 1.3.10 and in other products requiring commons-beanutils through 1.9.2, does not suppress the class property, which allows remote attackers to "manipulate" the ClassLoader and execute arbitrary code via the class parameter, as demonstrated by the passing of this parameter to the getClass method of the ActionForm object in Struts 1.
<p>Publish Date: 2014-04-30
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2014-0114>CVE-2014-0114</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: N/A
- Attack Complexity: N/A
- Privileges Required: N/A
- User Interaction: N/A
- Scope: N/A
- Impact Metrics:
- Confidentiality Impact: N/A
- Integrity Impact: N/A
- Availability Impact: N/A
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2014-0114">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2014-0114</a></p>
<p>Release Date: 2014-04-30</p>
<p>Fix Resolution: commons-beanutils:commons-beanutils:1.9.4;org.apache.struts:struts2-core:2.0.5</p>
</p>
</details>
<p></p>
<!-- <REMEDIATE>{"isOpenPROnVulnerability":true,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"Java","groupId":"commons-beanutils","packageName":"commons-beanutils","packageVersion":"1.9.3","packageFilePaths":["/MavenWorkspace/bis-services-lib/bis-services-base/pom.xml"],"isTransitiveDependency":false,"dependencyTree":"commons-beanutils:commons-beanutils:1.9.3","isMinimumFixVersionAvailable":true,"minimumFixVersion":"commons-beanutils:commons-beanutils:1.9.4;org.apache.struts:struts2-core:2.0.5"}],"baseBranches":["master"],"vulnerabilityIdentifier":"CVE-2014-0114","vulnerabilityDetails":"Apache Commons BeanUtils, as distributed in lib/commons-beanutils-1.8.0.jar in Apache Struts 1.x through 1.3.10 and in other products requiring commons-beanutils through 1.9.2, does not suppress the class property, which allows remote attackers to \"manipulate\" the ClassLoader and execute arbitrary code via the class parameter, as demonstrated by the passing of this parameter to the getClass method of the ActionForm object in Struts 1.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2014-0114","cvss3Severity":"medium","cvss3Score":"5.5","cvss3Metrics":{"A":"N/A","AC":"N/A","PR":"N/A","S":"N/A","C":"N/A","UI":"N/A","AV":"N/A","I":"N/A"},"extraData":{}}</REMEDIATE> -->
|
True
|
CVE-2014-0114 (Medium) detected in commons-beanutils-1.9.3.jar - ## CVE-2014-0114 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>commons-beanutils-1.9.3.jar</b></p></summary>
<p>Apache Commons BeanUtils provides an easy-to-use but flexible wrapper around reflection and introspection.</p>
<p>Library home page: <a href="https://commons.apache.org/proper/commons-beanutils/">https://commons.apache.org/proper/commons-beanutils/</a></p>
<p>Path to dependency file: experian-java/MavenWorkspace/bis-services-lib/bis-services-base/pom.xml</p>
<p>Path to vulnerable library: canner/.m2/repository/commons-beanutils/commons-beanutils/1.9.3/commons-beanutils-1.9.3.jar</p>
<p>
Dependency Hierarchy:
- :x: **commons-beanutils-1.9.3.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/jgeraigery/experian-java/commit/9ade2a959068cca30ecfdbb254939af6f67affb1">9ade2a959068cca30ecfdbb254939af6f67affb1</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Apache Commons BeanUtils, as distributed in lib/commons-beanutils-1.8.0.jar in Apache Struts 1.x through 1.3.10 and in other products requiring commons-beanutils through 1.9.2, does not suppress the class property, which allows remote attackers to "manipulate" the ClassLoader and execute arbitrary code via the class parameter, as demonstrated by the passing of this parameter to the getClass method of the ActionForm object in Struts 1.
<p>Publish Date: 2014-04-30
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2014-0114>CVE-2014-0114</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: N/A
- Attack Complexity: N/A
- Privileges Required: N/A
- User Interaction: N/A
- Scope: N/A
- Impact Metrics:
- Confidentiality Impact: N/A
- Integrity Impact: N/A
- Availability Impact: N/A
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2014-0114">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2014-0114</a></p>
<p>Release Date: 2014-04-30</p>
<p>Fix Resolution: commons-beanutils:commons-beanutils:1.9.4;org.apache.struts:struts2-core:2.0.5</p>
</p>
</details>
<p></p>
<!-- <REMEDIATE>{"isOpenPROnVulnerability":true,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"Java","groupId":"commons-beanutils","packageName":"commons-beanutils","packageVersion":"1.9.3","packageFilePaths":["/MavenWorkspace/bis-services-lib/bis-services-base/pom.xml"],"isTransitiveDependency":false,"dependencyTree":"commons-beanutils:commons-beanutils:1.9.3","isMinimumFixVersionAvailable":true,"minimumFixVersion":"commons-beanutils:commons-beanutils:1.9.4;org.apache.struts:struts2-core:2.0.5"}],"baseBranches":["master"],"vulnerabilityIdentifier":"CVE-2014-0114","vulnerabilityDetails":"Apache Commons BeanUtils, as distributed in lib/commons-beanutils-1.8.0.jar in Apache Struts 1.x through 1.3.10 and in other products requiring commons-beanutils through 1.9.2, does not suppress the class property, which allows remote attackers to \"manipulate\" the ClassLoader and execute arbitrary code via the class parameter, as demonstrated by the passing of this parameter to the getClass method of the ActionForm object in Struts 1.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2014-0114","cvss3Severity":"medium","cvss3Score":"5.5","cvss3Metrics":{"A":"N/A","AC":"N/A","PR":"N/A","S":"N/A","C":"N/A","UI":"N/A","AV":"N/A","I":"N/A"},"extraData":{}}</REMEDIATE> -->
|
non_process
|
cve medium detected in commons beanutils jar cve medium severity vulnerability vulnerable library commons beanutils jar apache commons beanutils provides an easy to use but flexible wrapper around reflection and introspection library home page a href path to dependency file experian java mavenworkspace bis services lib bis services base pom xml path to vulnerable library canner repository commons beanutils commons beanutils commons beanutils jar dependency hierarchy x commons beanutils jar vulnerable library found in head commit a href found in base branch master vulnerability details apache commons beanutils as distributed in lib commons beanutils jar in apache struts x through and in other products requiring commons beanutils through does not suppress the class property which allows remote attackers to manipulate the classloader and execute arbitrary code via the class parameter as demonstrated by the passing of this parameter to the getclass method of the actionform object in struts publish date url a href cvss score details base score metrics exploitability metrics attack vector n a attack complexity n a privileges required n a user interaction n a scope n a impact metrics confidentiality impact n a integrity impact n a availability impact n a for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution commons beanutils commons beanutils org apache struts core isopenpronvulnerability true ispackagebased true isdefaultbranch true packages istransitivedependency false dependencytree commons beanutils commons beanutils isminimumfixversionavailable true minimumfixversion commons beanutils commons beanutils org apache struts core basebranches vulnerabilityidentifier cve vulnerabilitydetails apache commons beanutils as distributed in lib commons beanutils jar in apache struts x through and in other products requiring commons beanutils through does not suppress the class property which allows remote attackers to manipulate the classloader and execute arbitrary code via the class parameter as demonstrated by the passing of this parameter to the getclass method of the actionform object in struts vulnerabilityurl
| 0
|
823,310
| 30,989,635,069
|
IssuesEvent
|
2023-08-09 02:43:31
|
ferncabrera/open_ims
|
https://api.github.com/repos/ferncabrera/open_ims
|
closed
|
Setting up a production environment (for our app under development)
|
critical priority one ci/cd
|
This is a big one..... more of a placeholder ticket for now but we are leaning heavily towards using Skaffold to deploy our app in prod.
|
1.0
|
Setting up a production environment (for our app under development) - This is a big one..... more of a placeholder ticket for now but we are leaning heavily towards using Skaffold to deploy our app in prod.
|
non_process
|
setting up a production environment for our app under development this is a big one more of a placeholder ticket for now but we are leaning heavily towards using skaffold to deploy our app in prod
| 0
|
21,821
| 30,316,705,484
|
IssuesEvent
|
2023-07-10 16:02:59
|
tdwg/dwc
|
https://api.github.com/repos/tdwg/dwc
|
closed
|
New Term - vitality
|
Term - add Class - Occurrence normative Process - complete Task Group - How Did It Die?
|
## New term
* Submitter: Sophia Ratcliffe
* Efficacy Justification (why is this term necessary?): There is a requirement that species records express whether the organism was dead or alive at the time of the observation or collection. Currently no such explicit term exists in Darwin Core.
* Demand Justification (name at least two organizations that independently need this term):
- NBN Trust
- Ocean Biodiversity Information System (OBIS)
- The Field Museum
- Global Genome Biodiversity Network (GGBN)
* Stability Justification (what concerns are there that this might affect existing implementations?): None
* Implications for dwciri: namespace (does this change affect a dwciri term version)?: The term `dwciri:vitality` will be created simultaneously with the term in the main DwC namespace: `dwc:vitality`.
Proposed attributes of the new term:
* Term name (in lowerCamelCase for properties, UpperCamelCase for classes): `vitality`
* Organized in Class (e.g., Occurrence, Event, Location, Taxon): `Occurrence`
* Definition of the term (normative): An indication of whether the organism was alive or dead at the time of collection or observation
* Usage comments (recommendations regarding content, etc., not normative): Recommended best practice is to use a controlled vocabulary. Intended to be used with records having a dwc:basisOfRecord of `PreservedSpecimen`, `MaterialSample`, or `HumanObservation`.
* Examples (not normative, for the `dwc:` namespace term only): `alive`, `dead`, `mixedLot`, `uncertain`, `notAssessed`
* Refines (identifier of the broader term this term refines; normative): None
* Replaces (identifier of the existing term that would be deprecated and replaced by this term; normative): None
* ABCD 2.06 (XPATH of the equivalent term in ABCD or EFG; not normative): I was not able to find an equivalent term
NOTE:
The Darwin Core Maintenance Group has assessed that more work is required to have a consensus on a controlled vocabulary, so that part of this proposal did not pass the 2023-02-12 public review cycle. Because of this, and in an effort to make the term available as soon as possible, the following originally proposed Comments section has be replaced by what is seen in the final proposal, above:
* Usage comments (recommendations regarding content, etc., not normative): Recommended best practice is to use a term from the controlled vocabulary intended to be used with this term, listed at <http://rs.tdwg.org/dwc/doc/vit/> Generally intended to be used with a `dwc:basisOfRecord` of `PreservedSpecimen`, `MaterialSample`, or `HumanObservation`.
The Vitality Controlled Vocabulary to be used as values for this term can be viewed [here](https://github.com/tdwg/rs.tdwg.org/blob/vitality/process/page_build_scripts/vitality.md) (note: the URL listed in the comments will not dereference until after ratification). `dwc:vitality` will use the controlled value string for the appropriate concept in this vocabulary as its value. `dwciri:vitality` will use the unabbreviated term IRI as its value.
|
1.0
|
New Term - vitality - ## New term
* Submitter: Sophia Ratcliffe
* Efficacy Justification (why is this term necessary?): There is a requirement that species records express whether the organism was dead or alive at the time of the observation or collection. Currently no such explicit term exists in Darwin Core.
* Demand Justification (name at least two organizations that independently need this term):
- NBN Trust
- Ocean Biodiversity Information System (OBIS)
- The Field Museum
- Global Genome Biodiversity Network (GGBN)
* Stability Justification (what concerns are there that this might affect existing implementations?): None
* Implications for dwciri: namespace (does this change affect a dwciri term version)?: The term `dwciri:vitality` will be created simultaneously with the term in the main DwC namespace: `dwc:vitality`.
Proposed attributes of the new term:
* Term name (in lowerCamelCase for properties, UpperCamelCase for classes): `vitality`
* Organized in Class (e.g., Occurrence, Event, Location, Taxon): `Occurrence`
* Definition of the term (normative): An indication of whether the organism was alive or dead at the time of collection or observation
* Usage comments (recommendations regarding content, etc., not normative): Recommended best practice is to use a controlled vocabulary. Intended to be used with records having a dwc:basisOfRecord of `PreservedSpecimen`, `MaterialSample`, or `HumanObservation`.
* Examples (not normative, for the `dwc:` namespace term only): `alive`, `dead`, `mixedLot`, `uncertain`, `notAssessed`
* Refines (identifier of the broader term this term refines; normative): None
* Replaces (identifier of the existing term that would be deprecated and replaced by this term; normative): None
* ABCD 2.06 (XPATH of the equivalent term in ABCD or EFG; not normative): I was not able to find an equivalent term
NOTE:
The Darwin Core Maintenance Group has assessed that more work is required to have a consensus on a controlled vocabulary, so that part of this proposal did not pass the 2023-02-12 public review cycle. Because of this, and in an effort to make the term available as soon as possible, the following originally proposed Comments section has be replaced by what is seen in the final proposal, above:
* Usage comments (recommendations regarding content, etc., not normative): Recommended best practice is to use a term from the controlled vocabulary intended to be used with this term, listed at <http://rs.tdwg.org/dwc/doc/vit/> Generally intended to be used with a `dwc:basisOfRecord` of `PreservedSpecimen`, `MaterialSample`, or `HumanObservation`.
The Vitality Controlled Vocabulary to be used as values for this term can be viewed [here](https://github.com/tdwg/rs.tdwg.org/blob/vitality/process/page_build_scripts/vitality.md) (note: the URL listed in the comments will not dereference until after ratification). `dwc:vitality` will use the controlled value string for the appropriate concept in this vocabulary as its value. `dwciri:vitality` will use the unabbreviated term IRI as its value.
|
process
|
new term vitality new term submitter sophia ratcliffe efficacy justification why is this term necessary there is a requirement that species records express whether the organism was dead or alive at the time of the observation or collection currently no such explicit term exists in darwin core demand justification name at least two organizations that independently need this term nbn trust ocean biodiversity information system obis the field museum global genome biodiversity network ggbn stability justification what concerns are there that this might affect existing implementations none implications for dwciri namespace does this change affect a dwciri term version the term dwciri vitality will be created simultaneously with the term in the main dwc namespace dwc vitality proposed attributes of the new term term name in lowercamelcase for properties uppercamelcase for classes vitality organized in class e g occurrence event location taxon occurrence definition of the term normative an indication of whether the organism was alive or dead at the time of collection or observation usage comments recommendations regarding content etc not normative recommended best practice is to use a controlled vocabulary intended to be used with records having a dwc basisofrecord of preservedspecimen materialsample or humanobservation examples not normative for the dwc namespace term only alive dead mixedlot uncertain notassessed refines identifier of the broader term this term refines normative none replaces identifier of the existing term that would be deprecated and replaced by this term normative none abcd xpath of the equivalent term in abcd or efg not normative i was not able to find an equivalent term note the darwin core maintenance group has assessed that more work is required to have a consensus on a controlled vocabulary so that part of this proposal did not pass the public review cycle because of this and in an effort to make the term available as soon as possible the following originally proposed comments section has be replaced by what is seen in the final proposal above usage comments recommendations regarding content etc not normative recommended best practice is to use a term from the controlled vocabulary intended to be used with this term listed at generally intended to be used with a dwc basisofrecord of preservedspecimen materialsample or humanobservation the vitality controlled vocabulary to be used as values for this term can be viewed note the url listed in the comments will not dereference until after ratification dwc vitality will use the controlled value string for the appropriate concept in this vocabulary as its value dwciri vitality will use the unabbreviated term iri as its value
| 1
|
286,807
| 8,793,492,777
|
IssuesEvent
|
2018-12-21 20:09:18
|
OregonDigital/OD2
|
https://api.github.com/repos/OregonDigital/OD2
|
reopened
|
Plain-text passwords being leaked into request params
|
Bug Priority - High
|
### Descriptive summary
Request parameters for user logins from Devise are being submitted in plain-text in our Honeycomb events.

Honeycomb scrapes several objects within each request to build its events. this includes request.params, which is where user information is getting stored (request.params.user).
|
1.0
|
Plain-text passwords being leaked into request params - ### Descriptive summary
Request parameters for user logins from Devise are being submitted in plain-text in our Honeycomb events.

Honeycomb scrapes several objects within each request to build its events. this includes request.params, which is where user information is getting stored (request.params.user).
|
non_process
|
plain text passwords being leaked into request params descriptive summary request parameters for user logins from devise are being submitted in plain text in our honeycomb events honeycomb scrapes several objects within each request to build its events this includes request params which is where user information is getting stored request params user
| 0
|
16,459
| 21,336,885,847
|
IssuesEvent
|
2022-04-18 15:34:31
|
GoogleCloudPlatform/fda-mystudies
|
https://api.github.com/repos/GoogleCloudPlatform/fda-mystudies
|
closed
|
[Android] Issue with the automated notifications
|
Bug P1 iOS Android Process: Fixed Process: Tested QA Process: Tested dev
|
When new study is available for the mobile apps,
**AR:**
When clicked on automated notification , navigating to study list screen
**ER:**
When clicked on automated notification for Enrolled study , Should navigate to activities list screen
When clicked on automated notification for New study, Should navigate to overview screen
**Note:**
Issue also needs to be fixed when new activities are available for the mobile apps
https://user-images.githubusercontent.com/86007179/143416830-9fb32421-3c81-4d00-9aff-af73cd9512ff.mp4
|
3.0
|
[Android] Issue with the automated notifications - When new study is available for the mobile apps,
**AR:**
When clicked on automated notification , navigating to study list screen
**ER:**
When clicked on automated notification for Enrolled study , Should navigate to activities list screen
When clicked on automated notification for New study, Should navigate to overview screen
**Note:**
Issue also needs to be fixed when new activities are available for the mobile apps
https://user-images.githubusercontent.com/86007179/143416830-9fb32421-3c81-4d00-9aff-af73cd9512ff.mp4
|
process
|
issue with the automated notifications when new study is available for the mobile apps ar when clicked on automated notification navigating to study list screen er when clicked on automated notification for enrolled study should navigate to activities list screen when clicked on automated notification for new study should navigate to overview screen note issue also needs to be fixed when new activities are available for the mobile apps
| 1
|
13,387
| 15,864,708,615
|
IssuesEvent
|
2021-04-08 14:02:57
|
90301/TextReplace
|
https://api.github.com/repos/90301/TextReplace
|
closed
|
CSV Combine
|
Log Processor
|
Take X registers / files, go line by line and combine both files into a single CSV with a specified delimiter.
Ex:
```
CSVCombine(C:\Text1.txt,2,3,COMMA)
Line1,x,y
Line2,c,v
Line3,d,
```
|
1.0
|
CSV Combine - Take X registers / files, go line by line and combine both files into a single CSV with a specified delimiter.
Ex:
```
CSVCombine(C:\Text1.txt,2,3,COMMA)
Line1,x,y
Line2,c,v
Line3,d,
```
|
process
|
csv combine take x registers files go line by line and combine both files into a single csv with a specified delimiter ex csvcombine c txt comma x y c v d
| 1
|
16,740
| 21,900,314,877
|
IssuesEvent
|
2022-05-20 12:49:28
|
camunda/zeebe-process-test
|
https://api.github.com/repos/camunda/zeebe-process-test
|
opened
|
Assertion `.hasPassed()` should accept a list as parameter
|
kind/feature team/process-automation
|
**Description**
While `.isWaitingAtElements(String... elementIds)` is available with a list of elements, `.hasPassedElement(String elementId)` accepts only a single element.
It's more likely that you will check for a list of passed elements then for several elements waiting at when you progress in your process instance.
Please add an assertion like `.hasPassedElements(String... elementIds)`.
|
1.0
|
Assertion `.hasPassed()` should accept a list as parameter - **Description**
While `.isWaitingAtElements(String... elementIds)` is available with a list of elements, `.hasPassedElement(String elementId)` accepts only a single element.
It's more likely that you will check for a list of passed elements then for several elements waiting at when you progress in your process instance.
Please add an assertion like `.hasPassedElements(String... elementIds)`.
|
process
|
assertion haspassed should accept a list as parameter description while iswaitingatelements string elementids is available with a list of elements haspassedelement string elementid accepts only a single element it s more likely that you will check for a list of passed elements then for several elements waiting at when you progress in your process instance please add an assertion like haspassedelements string elementids
| 1
|
223,774
| 17,629,710,943
|
IssuesEvent
|
2021-08-19 06:04:04
|
elastic/kibana
|
https://api.github.com/repos/elastic/kibana
|
closed
|
[test-failed]: Chrome X-Pack UI Functional Tests1.x-pack/test/functional/apps/visualize/feature_controls/visualize_security·ts - Visualize visualize feature controls security global visualize all privileges shows visualize navlink
|
Team:KibanaApp failed-test test-cloud :KibanaApp/fix-it-week v7.14.0
|
**Version: 7.14.0**
**Class: Chrome X-Pack UI Functional Tests1.x-pack/test/functional/apps/visualize/feature_controls/visualize_security·ts**
**Stack Trace:**
```
Error: expected [ 'Overview', 'Visualize Library' ] to sort of equal [ 'Overview', 'Visualize Library', 'Stack Management' ]
at Assertion.assert (/var/lib/jenkins/workspace/elastic+estf-cloud-kibana-tests/JOB/xpackGrp3/TASK/saas_run_kibana_tests/node/ess-testing/ci/cloud/common/build/kibana/node_modules/@kbn/expect/expect.js:100:11)
at Assertion.eql (/var/lib/jenkins/workspace/elastic+estf-cloud-kibana-tests/JOB/xpackGrp3/TASK/saas_run_kibana_tests/node/ess-testing/ci/cloud/common/build/kibana/node_modules/@kbn/expect/expect.js:244:8)
at Context.<anonymous> (test/functional/apps/visualize/feature_controls/visualize_security.ts:84:29)
at runMicrotasks (<anonymous>)
at processTicksAndRejections (internal/process/task_queues.js:95:5)
at Object.apply (/var/lib/jenkins/workspace/elastic+estf-cloud-kibana-tests/JOB/xpackGrp3/TASK/saas_run_kibana_tests/node/ess-testing/ci/cloud/common/build/kibana/node_modules/@kbn/test/target_node/functional_test_runner/lib/mocha/wrap_function.js:87:16) {
actual: '[\n "Overview"\n "Visualize Library"\n]',
expected: '[\n "Overview"\n "Visualize Library"\n "Stack Management"\n]',
showDiff: true
}
```
**Other test failures:**
_Test Report: https://internal-ci.elastic.co/view/Stack%20Tests/job/elastic+estf-cloud-kibana-tests/2013/testReport/_
|
2.0
|
[test-failed]: Chrome X-Pack UI Functional Tests1.x-pack/test/functional/apps/visualize/feature_controls/visualize_security·ts - Visualize visualize feature controls security global visualize all privileges shows visualize navlink - **Version: 7.14.0**
**Class: Chrome X-Pack UI Functional Tests1.x-pack/test/functional/apps/visualize/feature_controls/visualize_security·ts**
**Stack Trace:**
```
Error: expected [ 'Overview', 'Visualize Library' ] to sort of equal [ 'Overview', 'Visualize Library', 'Stack Management' ]
at Assertion.assert (/var/lib/jenkins/workspace/elastic+estf-cloud-kibana-tests/JOB/xpackGrp3/TASK/saas_run_kibana_tests/node/ess-testing/ci/cloud/common/build/kibana/node_modules/@kbn/expect/expect.js:100:11)
at Assertion.eql (/var/lib/jenkins/workspace/elastic+estf-cloud-kibana-tests/JOB/xpackGrp3/TASK/saas_run_kibana_tests/node/ess-testing/ci/cloud/common/build/kibana/node_modules/@kbn/expect/expect.js:244:8)
at Context.<anonymous> (test/functional/apps/visualize/feature_controls/visualize_security.ts:84:29)
at runMicrotasks (<anonymous>)
at processTicksAndRejections (internal/process/task_queues.js:95:5)
at Object.apply (/var/lib/jenkins/workspace/elastic+estf-cloud-kibana-tests/JOB/xpackGrp3/TASK/saas_run_kibana_tests/node/ess-testing/ci/cloud/common/build/kibana/node_modules/@kbn/test/target_node/functional_test_runner/lib/mocha/wrap_function.js:87:16) {
actual: '[\n "Overview"\n "Visualize Library"\n]',
expected: '[\n "Overview"\n "Visualize Library"\n "Stack Management"\n]',
showDiff: true
}
```
**Other test failures:**
_Test Report: https://internal-ci.elastic.co/view/Stack%20Tests/job/elastic+estf-cloud-kibana-tests/2013/testReport/_
|
non_process
|
chrome x pack ui functional x pack test functional apps visualize feature controls visualize security·ts visualize visualize feature controls security global visualize all privileges shows visualize navlink version class chrome x pack ui functional x pack test functional apps visualize feature controls visualize security·ts stack trace error expected to sort of equal at assertion assert var lib jenkins workspace elastic estf cloud kibana tests job task saas run kibana tests node ess testing ci cloud common build kibana node modules kbn expect expect js at assertion eql var lib jenkins workspace elastic estf cloud kibana tests job task saas run kibana tests node ess testing ci cloud common build kibana node modules kbn expect expect js at context test functional apps visualize feature controls visualize security ts at runmicrotasks at processticksandrejections internal process task queues js at object apply var lib jenkins workspace elastic estf cloud kibana tests job task saas run kibana tests node ess testing ci cloud common build kibana node modules kbn test target node functional test runner lib mocha wrap function js actual expected showdiff true other test failures test report
| 0
|
8,583
| 11,755,265,843
|
IssuesEvent
|
2020-03-13 09:10:48
|
Altinn/altinn-studio
|
https://api.github.com/repos/Altinn/altinn-studio
|
closed
|
Implement application with two-task process
|
area/process kind/user-story solution/apps
|
## Description
To support Sirius and other applications with complex workflows we need to be able to show that an application with multiple steps is supported by Altinn Apps. An app with the illustrated workflow should be implemented, successfully tested and code made available for application developers that wish to implement a multistep application.
## Screenshots

## Considerations
## Acceptance criteria
- The two tasks are not of the same type
- There is a seperate GUI for each of the tasks
- There are restrictions on who can do what with various data elements on each step
## Development tasks
- [x] Create application in Altinn Studio
- [x] Update policy.xml
- [x] Update data restrictions in applicationmetadata.json
- [x] Deploy application
- [x] Fix all road blocks
- [x] Documentation; Application repository can be referred to for developers that wish to create a two-task process app.
- [x] Manual test
|
1.0
|
Implement application with two-task process - ## Description
To support Sirius and other applications with complex workflows we need to be able to show that an application with multiple steps is supported by Altinn Apps. An app with the illustrated workflow should be implemented, successfully tested and code made available for application developers that wish to implement a multistep application.
## Screenshots

## Considerations
## Acceptance criteria
- The two tasks are not of the same type
- There is a seperate GUI for each of the tasks
- There are restrictions on who can do what with various data elements on each step
## Development tasks
- [x] Create application in Altinn Studio
- [x] Update policy.xml
- [x] Update data restrictions in applicationmetadata.json
- [x] Deploy application
- [x] Fix all road blocks
- [x] Documentation; Application repository can be referred to for developers that wish to create a two-task process app.
- [x] Manual test
|
process
|
implement application with two task process description to support sirius and other applications with complex workflows we need to be able to show that an application with multiple steps is supported by altinn apps an app with the illustrated workflow should be implemented successfully tested and code made available for application developers that wish to implement a multistep application screenshots considerations acceptance criteria the two tasks are not of the same type there is a seperate gui for each of the tasks there are restrictions on who can do what with various data elements on each step development tasks create application in altinn studio update policy xml update data restrictions in applicationmetadata json deploy application fix all road blocks documentation application repository can be referred to for developers that wish to create a two task process app manual test
| 1
|
5,468
| 8,335,490,724
|
IssuesEvent
|
2018-09-28 02:36:59
|
mozilla-tw/ScreenshotGo
|
https://api.github.com/repos/mozilla-tw/ScreenshotGo
|
closed
|
Exclude tablet support since open Beta
|
P0 process
|
1. (UX) UX team to check, get a tablet to check if the layout looks ok.
2. (Product) decide if we will exclude tablet support in GooglePlay.
3. (Dev) configure the blacklist on GooglePlay.
|
1.0
|
Exclude tablet support since open Beta - 1. (UX) UX team to check, get a tablet to check if the layout looks ok.
2. (Product) decide if we will exclude tablet support in GooglePlay.
3. (Dev) configure the blacklist on GooglePlay.
|
process
|
exclude tablet support since open beta ux ux team to check get a tablet to check if the layout looks ok product decide if we will exclude tablet support in googleplay dev configure the blacklist on googleplay
| 1
|
77,187
| 3,506,269,927
|
IssuesEvent
|
2016-01-08 05:09:30
|
OregonCore/OregonCore
|
https://api.github.com/repos/OregonCore/OregonCore
|
closed
|
use munt (BB #248)
|
migrated Priority: Medium Type: Bug
|
This issue was migrated from bitbucket.
**Original Reporter:**
**Original Date:** 31.07.2010 19:00:17 GMT+0000
**Original Priority:** major
**Original Type:** bug
**Original State:** invalid
**Direct Link:** https://bitbucket.org/oregon/oregoncore/issues/248
<hr>
Can't use munt in some location ,like orgirimmar
|
1.0
|
use munt (BB #248) - This issue was migrated from bitbucket.
**Original Reporter:**
**Original Date:** 31.07.2010 19:00:17 GMT+0000
**Original Priority:** major
**Original Type:** bug
**Original State:** invalid
**Direct Link:** https://bitbucket.org/oregon/oregoncore/issues/248
<hr>
Can't use munt in some location ,like orgirimmar
|
non_process
|
use munt bb this issue was migrated from bitbucket original reporter original date gmt original priority major original type bug original state invalid direct link can t use munt in some location like orgirimmar
| 0
|
160,161
| 20,099,649,786
|
IssuesEvent
|
2022-02-07 01:19:12
|
venkateshreddypala/AngOCR
|
https://api.github.com/repos/venkateshreddypala/AngOCR
|
opened
|
CVE-2020-28469 (High) detected in glob-parent-3.1.0.tgz, glob-parent-2.0.0.tgz
|
security vulnerability
|
## CVE-2020-28469 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>glob-parent-3.1.0.tgz</b>, <b>glob-parent-2.0.0.tgz</b></p></summary>
<p>
<details><summary><b>glob-parent-3.1.0.tgz</b></p></summary>
<p>Strips glob magic from a string to provide the parent directory path</p>
<p>Library home page: <a href="https://registry.npmjs.org/glob-parent/-/glob-parent-3.1.0.tgz">https://registry.npmjs.org/glob-parent/-/glob-parent-3.1.0.tgz</a></p>
<p>Path to dependency file: /AngOCR/ui/package.json</p>
<p>Path to vulnerable library: /ui/node_modules/@angular-devkit/core/node_modules/glob-parent/package.json</p>
<p>
Dependency Hierarchy:
- karma-3.1.1.tgz (Root Library)
- chokidar-2.0.4.tgz
- :x: **glob-parent-3.1.0.tgz** (Vulnerable Library)
</details>
<details><summary><b>glob-parent-2.0.0.tgz</b></p></summary>
<p>Strips glob magic from a string to provide the parent path</p>
<p>Library home page: <a href="https://registry.npmjs.org/glob-parent/-/glob-parent-2.0.0.tgz">https://registry.npmjs.org/glob-parent/-/glob-parent-2.0.0.tgz</a></p>
<p>Path to dependency file: /AngOCR/ui/package.json</p>
<p>Path to vulnerable library: /ui/node_modules/glob-parent/package.json</p>
<p>
Dependency Hierarchy:
- compiler-cli-5.2.11.tgz (Root Library)
- chokidar-1.7.0.tgz
- :x: **glob-parent-2.0.0.tgz** (Vulnerable Library)
</details>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
This affects the package glob-parent before 5.1.2. The enclosure regex used to check for strings ending in enclosure containing path separator.
<p>Publish Date: 2021-06-03
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-28469>CVE-2020-28469</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-28469">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-28469</a></p>
<p>Release Date: 2021-06-03</p>
<p>Fix Resolution (glob-parent): 5.1.2</p>
<p>Direct dependency fix Resolution (karma): 4.2.0</p><p>Fix Resolution (glob-parent): 5.1.2</p>
<p>Direct dependency fix Resolution (@angular/compiler-cli): 9.0.0-rc.12</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
True
|
CVE-2020-28469 (High) detected in glob-parent-3.1.0.tgz, glob-parent-2.0.0.tgz - ## CVE-2020-28469 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>glob-parent-3.1.0.tgz</b>, <b>glob-parent-2.0.0.tgz</b></p></summary>
<p>
<details><summary><b>glob-parent-3.1.0.tgz</b></p></summary>
<p>Strips glob magic from a string to provide the parent directory path</p>
<p>Library home page: <a href="https://registry.npmjs.org/glob-parent/-/glob-parent-3.1.0.tgz">https://registry.npmjs.org/glob-parent/-/glob-parent-3.1.0.tgz</a></p>
<p>Path to dependency file: /AngOCR/ui/package.json</p>
<p>Path to vulnerable library: /ui/node_modules/@angular-devkit/core/node_modules/glob-parent/package.json</p>
<p>
Dependency Hierarchy:
- karma-3.1.1.tgz (Root Library)
- chokidar-2.0.4.tgz
- :x: **glob-parent-3.1.0.tgz** (Vulnerable Library)
</details>
<details><summary><b>glob-parent-2.0.0.tgz</b></p></summary>
<p>Strips glob magic from a string to provide the parent path</p>
<p>Library home page: <a href="https://registry.npmjs.org/glob-parent/-/glob-parent-2.0.0.tgz">https://registry.npmjs.org/glob-parent/-/glob-parent-2.0.0.tgz</a></p>
<p>Path to dependency file: /AngOCR/ui/package.json</p>
<p>Path to vulnerable library: /ui/node_modules/glob-parent/package.json</p>
<p>
Dependency Hierarchy:
- compiler-cli-5.2.11.tgz (Root Library)
- chokidar-1.7.0.tgz
- :x: **glob-parent-2.0.0.tgz** (Vulnerable Library)
</details>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
This affects the package glob-parent before 5.1.2. The enclosure regex used to check for strings ending in enclosure containing path separator.
<p>Publish Date: 2021-06-03
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-28469>CVE-2020-28469</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-28469">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-28469</a></p>
<p>Release Date: 2021-06-03</p>
<p>Fix Resolution (glob-parent): 5.1.2</p>
<p>Direct dependency fix Resolution (karma): 4.2.0</p><p>Fix Resolution (glob-parent): 5.1.2</p>
<p>Direct dependency fix Resolution (@angular/compiler-cli): 9.0.0-rc.12</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
non_process
|
cve high detected in glob parent tgz glob parent tgz cve high severity vulnerability vulnerable libraries glob parent tgz glob parent tgz glob parent tgz strips glob magic from a string to provide the parent directory path library home page a href path to dependency file angocr ui package json path to vulnerable library ui node modules angular devkit core node modules glob parent package json dependency hierarchy karma tgz root library chokidar tgz x glob parent tgz vulnerable library glob parent tgz strips glob magic from a string to provide the parent path library home page a href path to dependency file angocr ui package json path to vulnerable library ui node modules glob parent package json dependency hierarchy compiler cli tgz root library chokidar tgz x glob parent tgz vulnerable library vulnerability details this affects the package glob parent before the enclosure regex used to check for strings ending in enclosure containing path separator publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution glob parent direct dependency fix resolution karma fix resolution glob parent direct dependency fix resolution angular compiler cli rc step up your open source security game with whitesource
| 0
|
13,540
| 16,082,078,963
|
IssuesEvent
|
2021-04-26 06:39:57
|
Vanuatu-National-Statistics-Office/vnso-RAP-tradeStats-materials
|
https://api.github.com/repos/Vanuatu-National-Statistics-Office/vnso-RAP-tradeStats-materials
|
opened
|
Updated code to use CSV
|
coding data processing help wanted
|
- Blank column in raw data that emerges, seems to be column X. How can I search Dataframe and remove empty column?
- Uploading in CSV loose leading zeros. Have changed format of columns in csv (ex 0000), however how can I do this in R?
- I've tried a few things but converting dates doesn't seem to be working. Closest I got was to have days with 4 figures (0008 for 8th)
- BEC still an issue have downloaded UN file and then mapped to our coding structure and definitions, there are duplicates though which mean when merge observations jumps up. I will go back and get rid of duplicates and re-try
|
1.0
|
Updated code to use CSV - - Blank column in raw data that emerges, seems to be column X. How can I search Dataframe and remove empty column?
- Uploading in CSV loose leading zeros. Have changed format of columns in csv (ex 0000), however how can I do this in R?
- I've tried a few things but converting dates doesn't seem to be working. Closest I got was to have days with 4 figures (0008 for 8th)
- BEC still an issue have downloaded UN file and then mapped to our coding structure and definitions, there are duplicates though which mean when merge observations jumps up. I will go back and get rid of duplicates and re-try
|
process
|
updated code to use csv blank column in raw data that emerges seems to be column x how can i search dataframe and remove empty column uploading in csv loose leading zeros have changed format of columns in csv ex however how can i do this in r i ve tried a few things but converting dates doesn t seem to be working closest i got was to have days with figures for bec still an issue have downloaded un file and then mapped to our coding structure and definitions there are duplicates though which mean when merge observations jumps up i will go back and get rid of duplicates and re try
| 1
|
5,508
| 8,376,905,833
|
IssuesEvent
|
2018-10-05 21:37:30
|
rchain/bounties
|
https://api.github.com/repos/rchain/bounties
|
closed
|
O> Preparation of invoices 201802
|
invoice-process zz-Operations
|
Prepare invoices for RAM's (RChain Active Members) for work done in February
Measure of Completion: Gsheet-invoices for 201802 in personal spreadsheet of RAM's
Date of Completion: 04-01-2018
Suggested Budget: $500
- [x] Create TAB "Rewards 201802" [Declaration](https://docs.google.com/spreadsheets/d/19Y5aEZelHa2JISCXB6jViTiRx9H0cCmha8arYz7vbg0/edit#gid=46872367&range=A1:G4) _Access is only for RChainFin_
- [x] Create personal TABs (201802-githubname) in [Declaration](https://docs.google.com/spreadsheets/d/19Y5aEZelHa2JISCXB6jViTiRx9H0cCmha8arYz7vbg0/edit#gid=1690363621&range=A2:C3)
- [x] Copy personal TABs to personal invoice Gsheet
- [x] Send personalized email to RAM's with Gsheet
- [x] ops@rchain.coop receives and checks PDF-invoices
- [ ] invoices@rchain.coop receives PDF-invoices and creates payment register
- [ ] Ops enters payments in Gnosis-Multisig
- [ ] COO confirms payments
|
1.0
|
O> Preparation of invoices 201802 - Prepare invoices for RAM's (RChain Active Members) for work done in February
Measure of Completion: Gsheet-invoices for 201802 in personal spreadsheet of RAM's
Date of Completion: 04-01-2018
Suggested Budget: $500
- [x] Create TAB "Rewards 201802" [Declaration](https://docs.google.com/spreadsheets/d/19Y5aEZelHa2JISCXB6jViTiRx9H0cCmha8arYz7vbg0/edit#gid=46872367&range=A1:G4) _Access is only for RChainFin_
- [x] Create personal TABs (201802-githubname) in [Declaration](https://docs.google.com/spreadsheets/d/19Y5aEZelHa2JISCXB6jViTiRx9H0cCmha8arYz7vbg0/edit#gid=1690363621&range=A2:C3)
- [x] Copy personal TABs to personal invoice Gsheet
- [x] Send personalized email to RAM's with Gsheet
- [x] ops@rchain.coop receives and checks PDF-invoices
- [ ] invoices@rchain.coop receives PDF-invoices and creates payment register
- [ ] Ops enters payments in Gnosis-Multisig
- [ ] COO confirms payments
|
process
|
o preparation of invoices prepare invoices for ram s rchain active members for work done in february measure of completion gsheet invoices for in personal spreadsheet of ram s date of completion suggested budget create tab rewards access is only for rchainfin create personal tabs githubname in copy personal tabs to personal invoice gsheet send personalized email to ram s with gsheet ops rchain coop receives and checks pdf invoices invoices rchain coop receives pdf invoices and creates payment register ops enters payments in gnosis multisig coo confirms payments
| 1
|
15,468
| 19,681,988,701
|
IssuesEvent
|
2022-01-11 17:39:48
|
slsa-framework/slsa
|
https://api.github.com/repos/slsa-framework/slsa
|
opened
|
Consider using Netlify for hosting
|
process
|
To make pull requests easier to review, I think we should consider setting up [Netlify](https://netlify.com).
* At a minimum, we can use a free personal account to enable automatic pull request previews. It's a really nice feature. For example, see https://github.com/MarkLodato/slsa/pull/14 (https://deploy-preview-14--marklodato-slsa.netlify.app/). This requires granting the Netlify app write permission to issues and PRs (but not commit access) so I'd like approval from @slsa-framework/slsa-steering-committee before doing this.
* We can also consider switching our hosting to Netlify instead of GitHub pages. This would have some fairly minor benefits, mostly making [redirects](https://docs.netlify.com/routing/redirects/) easier and allowing other templating engines (or custom Jekyll plugins) without a custom GitHub Actions setup. The cost would be to reconfigure our DNS. Not sure it's worth the effort.
* Finally, we could apply for a free [open source plan](https://www.netlify.com/legal/open-source-policy) if we put a Netlify blub on the homepage or footer.
Personally, I think we should start by enabling the PR previews and see if that's sufficient. I'd be the only admin since we don't have a paid (or open source) plan, but that is probably good enough for preview purposes.
Thoughts?
|
1.0
|
Consider using Netlify for hosting - To make pull requests easier to review, I think we should consider setting up [Netlify](https://netlify.com).
* At a minimum, we can use a free personal account to enable automatic pull request previews. It's a really nice feature. For example, see https://github.com/MarkLodato/slsa/pull/14 (https://deploy-preview-14--marklodato-slsa.netlify.app/). This requires granting the Netlify app write permission to issues and PRs (but not commit access) so I'd like approval from @slsa-framework/slsa-steering-committee before doing this.
* We can also consider switching our hosting to Netlify instead of GitHub pages. This would have some fairly minor benefits, mostly making [redirects](https://docs.netlify.com/routing/redirects/) easier and allowing other templating engines (or custom Jekyll plugins) without a custom GitHub Actions setup. The cost would be to reconfigure our DNS. Not sure it's worth the effort.
* Finally, we could apply for a free [open source plan](https://www.netlify.com/legal/open-source-policy) if we put a Netlify blub on the homepage or footer.
Personally, I think we should start by enabling the PR previews and see if that's sufficient. I'd be the only admin since we don't have a paid (or open source) plan, but that is probably good enough for preview purposes.
Thoughts?
|
process
|
consider using netlify for hosting to make pull requests easier to review i think we should consider setting up at a minimum we can use a free personal account to enable automatic pull request previews it s a really nice feature for example see this requires granting the netlify app write permission to issues and prs but not commit access so i d like approval from slsa framework slsa steering committee before doing this we can also consider switching our hosting to netlify instead of github pages this would have some fairly minor benefits mostly making easier and allowing other templating engines or custom jekyll plugins without a custom github actions setup the cost would be to reconfigure our dns not sure it s worth the effort finally we could apply for a free if we put a netlify blub on the homepage or footer personally i think we should start by enabling the pr previews and see if that s sufficient i d be the only admin since we don t have a paid or open source plan but that is probably good enough for preview purposes thoughts
| 1
|
319,196
| 23,760,135,980
|
IssuesEvent
|
2022-09-01 08:10:41
|
Unity-Technologies/com.unity.multiplayer.docs
|
https://api.github.com/repos/Unity-Technologies/com.unity.multiplayer.docs
|
closed
|
Bitesize samples links to broken Client Driven Sample page
|
documentation IN JIRA
|
Client Driven Sample link is broken depending on how you arrive at the About Bitesize Samples page.
If you navigate to About Bitesize Samples from the link on the homepage, or from pasting the URL in your browser (https://docs-multiplayer.unity3d.com/docs/learn/bitesize-introduction) you get redirected to a URL with "index.html" postfix, and the Client Driven Sample link will be broken.
If you navigate to About Bitesize Samples from the getting started page (https://docs-multiplayer.unity3d.com/docs/getting-started/about) you will NOT be redirected to a URL with index.html, and the link to Client Driven Sample will work fine.
Repro Steps:
1. Visit https://docs-multiplayer.unity3d.com/
2. Click "Learn more about Bitesize"
3. Note you are redirected to https://docs-multiplayer.unity3d.com/docs/learn/bitesize-introduction/index.html which ends in "index.html", even though the link didn't have this
4. Click on "Client Driven Sample"
Actual Result:
You are redirected to https://docs-multiplayer.unity3d.com/docs/learn/bitesize-introduction/bitesize-clientdriven which results in a 404
Expected:
https://docs-multiplayer.unity3d.com/docs/learn/bitesize-clientdriven is loaded and you can view the content
|
1.0
|
Bitesize samples links to broken Client Driven Sample page - Client Driven Sample link is broken depending on how you arrive at the About Bitesize Samples page.
If you navigate to About Bitesize Samples from the link on the homepage, or from pasting the URL in your browser (https://docs-multiplayer.unity3d.com/docs/learn/bitesize-introduction) you get redirected to a URL with "index.html" postfix, and the Client Driven Sample link will be broken.
If you navigate to About Bitesize Samples from the getting started page (https://docs-multiplayer.unity3d.com/docs/getting-started/about) you will NOT be redirected to a URL with index.html, and the link to Client Driven Sample will work fine.
Repro Steps:
1. Visit https://docs-multiplayer.unity3d.com/
2. Click "Learn more about Bitesize"
3. Note you are redirected to https://docs-multiplayer.unity3d.com/docs/learn/bitesize-introduction/index.html which ends in "index.html", even though the link didn't have this
4. Click on "Client Driven Sample"
Actual Result:
You are redirected to https://docs-multiplayer.unity3d.com/docs/learn/bitesize-introduction/bitesize-clientdriven which results in a 404
Expected:
https://docs-multiplayer.unity3d.com/docs/learn/bitesize-clientdriven is loaded and you can view the content
|
non_process
|
bitesize samples links to broken client driven sample page client driven sample link is broken depending on how you arrive at the about bitesize samples page if you navigate to about bitesize samples from the link on the homepage or from pasting the url in your browser you get redirected to a url with index html postfix and the client driven sample link will be broken if you navigate to about bitesize samples from the getting started page you will not be redirected to a url with index html and the link to client driven sample will work fine repro steps visit click learn more about bitesize note you are redirected to which ends in index html even though the link didn t have this click on client driven sample actual result you are redirected to which results in a expected is loaded and you can view the content
| 0
|
325,713
| 24,058,620,126
|
IssuesEvent
|
2022-09-16 19:31:59
|
tablelandnetwork/js-tableland-cli
|
https://api.github.com/repos/tablelandnetwork/js-tableland-cli
|
opened
|
Feature: Support local fetch, diff, and upload of SQL tables
|
documentation enhancement
|
**Is your feature request related to a problem? Please describe.**
- It would be very useful to be able to fetch a (set of) table(s) from tableland, and load them locally in an SQLite database
- From here, users could mutate the database, calculate a diff from the original, and even normalize the statements
- After this, users should be able to then "push" the diff to tableland as a set of SQL statements
**Describe the solution you'd like**
It would be fine if the above things happen less automagically. For instance, perhaps there is a download to SQLite command. Then editing is manual. They could use standard SQLite tools to diff. Then use a custom "check" command to normalize the SQL statements. Then use the previously mentioned run feature to send the new diff commands from a file.
|
1.0
|
Feature: Support local fetch, diff, and upload of SQL tables - **Is your feature request related to a problem? Please describe.**
- It would be very useful to be able to fetch a (set of) table(s) from tableland, and load them locally in an SQLite database
- From here, users could mutate the database, calculate a diff from the original, and even normalize the statements
- After this, users should be able to then "push" the diff to tableland as a set of SQL statements
**Describe the solution you'd like**
It would be fine if the above things happen less automagically. For instance, perhaps there is a download to SQLite command. Then editing is manual. They could use standard SQLite tools to diff. Then use a custom "check" command to normalize the SQL statements. Then use the previously mentioned run feature to send the new diff commands from a file.
|
non_process
|
feature support local fetch diff and upload of sql tables is your feature request related to a problem please describe it would be very useful to be able to fetch a set of table s from tableland and load them locally in an sqlite database from here users could mutate the database calculate a diff from the original and even normalize the statements after this users should be able to then push the diff to tableland as a set of sql statements describe the solution you d like it would be fine if the above things happen less automagically for instance perhaps there is a download to sqlite command then editing is manual they could use standard sqlite tools to diff then use a custom check command to normalize the sql statements then use the previously mentioned run feature to send the new diff commands from a file
| 0
|
9,518
| 12,499,044,636
|
IssuesEvent
|
2020-06-01 19:25:20
|
qgis/QGIS
|
https://api.github.com/repos/qgis/QGIS
|
closed
|
BIGTIFF error is not obvious in processing log
|
Bug Feedback Processing
|
<!--
Bug fixing and feature development is a community responsibility, and not the responsibility of the QGIS project alone.
If this bug report or feature request is high-priority for you, we suggest engaging a QGIS developer or support organisation and financially sponsoring a fix
Checklist before submitting
- [ ] Search through existing issue reports and gis.stackexchange.com to check whether the issue already exists
- [ ] Test with a [clean new user profile](https://docs.qgis.org/testing/en/docs/user_manual/introduction/qgis_configuration.html?highlight=profile#working-with-user-profiles).
- [ ] Create a light and self-contained sample dataset and project file which demonstrates the issue
-->
**Describe the bug**
<!-- A clear and concise description of what the bug is. -->
When running an algorithm that produces a tiff that is greater than 4GB an error is created in the log stating that BIGTIFF=YES must be used but the algorithm continues and switches back to the Parameters tab after finishing as would be expected in an error free run. While this is usually obvious because the resulting tiff will be missing parts there are many cases where it wouldn't be caught.
**How to Reproduce**
Run an algorithm that produces a TIFF that is greater that 4GB and observe log. (I'm not sure if you also need to change the desired compression from the default or not as it seems like this https://github.com/qgis/QGIS/issues/13496 would have set BIGTIFF to be used if needed in default)
Error in log:

Report at bottom of log:

**QGIS and OS versions**
QGIS version
3.12.2-București
QGIS code revision
8a1fb33634
Compiled against Qt
5.11.2
Running against Qt
5.11.2
Compiled against GDAL/OGR
3.0.4
Running against GDAL/OGR
3.0.4
Compiled against GEOS
3.8.1-CAPI-1.13.3
Running against GEOS
3.8.1-CAPI-1.13.3
Compiled against SQLite
3.29.0
Running against SQLite
3.29.0
PostgreSQL Client Version
11.5
SpatiaLite Version
4.3.0
QWT Version
6.1.3
QScintilla2 Version
2.10.8
Compiled against PROJ
6.3.1
Running against PROJ
Rel. 6.3.2, May 1st, 2020
OS Version
Windows 10 (10.0)
Active python plugins
DataPlotly;
GroupStats;
pickLayer;
PluginLoadTimes;
processing_r;
quick_map_services;
toolbargenerator;
workbench;
db_manager;
processing
<!-- In the QGIS Help menu -> About, click in the table, Ctrl+A and then Ctrl+C. Finally paste here -->
**Additional context**
<!-- Add any other context about the problem here. -->
|
1.0
|
BIGTIFF error is not obvious in processing log - <!--
Bug fixing and feature development is a community responsibility, and not the responsibility of the QGIS project alone.
If this bug report or feature request is high-priority for you, we suggest engaging a QGIS developer or support organisation and financially sponsoring a fix
Checklist before submitting
- [ ] Search through existing issue reports and gis.stackexchange.com to check whether the issue already exists
- [ ] Test with a [clean new user profile](https://docs.qgis.org/testing/en/docs/user_manual/introduction/qgis_configuration.html?highlight=profile#working-with-user-profiles).
- [ ] Create a light and self-contained sample dataset and project file which demonstrates the issue
-->
**Describe the bug**
<!-- A clear and concise description of what the bug is. -->
When running an algorithm that produces a tiff that is greater than 4GB an error is created in the log stating that BIGTIFF=YES must be used but the algorithm continues and switches back to the Parameters tab after finishing as would be expected in an error free run. While this is usually obvious because the resulting tiff will be missing parts there are many cases where it wouldn't be caught.
**How to Reproduce**
Run an algorithm that produces a TIFF that is greater that 4GB and observe log. (I'm not sure if you also need to change the desired compression from the default or not as it seems like this https://github.com/qgis/QGIS/issues/13496 would have set BIGTIFF to be used if needed in default)
Error in log:

Report at bottom of log:

**QGIS and OS versions**
QGIS version
3.12.2-București
QGIS code revision
8a1fb33634
Compiled against Qt
5.11.2
Running against Qt
5.11.2
Compiled against GDAL/OGR
3.0.4
Running against GDAL/OGR
3.0.4
Compiled against GEOS
3.8.1-CAPI-1.13.3
Running against GEOS
3.8.1-CAPI-1.13.3
Compiled against SQLite
3.29.0
Running against SQLite
3.29.0
PostgreSQL Client Version
11.5
SpatiaLite Version
4.3.0
QWT Version
6.1.3
QScintilla2 Version
2.10.8
Compiled against PROJ
6.3.1
Running against PROJ
Rel. 6.3.2, May 1st, 2020
OS Version
Windows 10 (10.0)
Active python plugins
DataPlotly;
GroupStats;
pickLayer;
PluginLoadTimes;
processing_r;
quick_map_services;
toolbargenerator;
workbench;
db_manager;
processing
<!-- In the QGIS Help menu -> About, click in the table, Ctrl+A and then Ctrl+C. Finally paste here -->
**Additional context**
<!-- Add any other context about the problem here. -->
|
process
|
bigtiff error is not obvious in processing log bug fixing and feature development is a community responsibility and not the responsibility of the qgis project alone if this bug report or feature request is high priority for you we suggest engaging a qgis developer or support organisation and financially sponsoring a fix checklist before submitting search through existing issue reports and gis stackexchange com to check whether the issue already exists test with a create a light and self contained sample dataset and project file which demonstrates the issue describe the bug when running an algorithm that produces a tiff that is greater than an error is created in the log stating that bigtiff yes must be used but the algorithm continues and switches back to the parameters tab after finishing as would be expected in an error free run while this is usually obvious because the resulting tiff will be missing parts there are many cases where it wouldn t be caught how to reproduce run an algorithm that produces a tiff that is greater that and observe log i m not sure if you also need to change the desired compression from the default or not as it seems like this would have set bigtiff to be used if needed in default error in log report at bottom of log qgis and os versions qgis version bucurești qgis code revision compiled against qt running against qt compiled against gdal ogr running against gdal ogr compiled against geos capi running against geos capi compiled against sqlite running against sqlite postgresql client version spatialite version qwt version version compiled against proj running against proj rel may os version windows active python plugins dataplotly groupstats picklayer pluginloadtimes processing r quick map services toolbargenerator workbench db manager processing about click in the table ctrl a and then ctrl c finally paste here additional context
| 1
|
425,641
| 12,343,609,730
|
IssuesEvent
|
2020-05-15 04:37:29
|
eternagame/eternagame.org
|
https://api.github.com/repos/eternagame/eternagame.org
|
closed
|
"Enter Lab" carousel button should go to lab page, not puzzle progression
|
High Priority bug good first issue
|
1) For any existing player who currently has lab access, entering the lab from the main page should **not** go to the puzzle progression. It should go to some page that makes it clear what labs, if any, are currently open.
2)It looks like that would be http://eternadev.org/labs?size=27&filters=active, in the current design, but the filter doesn't seem to be implemented.
|
1.0
|
"Enter Lab" carousel button should go to lab page, not puzzle progression - 1) For any existing player who currently has lab access, entering the lab from the main page should **not** go to the puzzle progression. It should go to some page that makes it clear what labs, if any, are currently open.
2)It looks like that would be http://eternadev.org/labs?size=27&filters=active, in the current design, but the filter doesn't seem to be implemented.
|
non_process
|
enter lab carousel button should go to lab page not puzzle progression for any existing player who currently has lab access entering the lab from the main page should not go to the puzzle progression it should go to some page that makes it clear what labs if any are currently open it looks like that would be in the current design but the filter doesn t seem to be implemented
| 0
|
10,300
| 13,152,067,194
|
IssuesEvent
|
2020-08-09 20:06:48
|
oleksis/machine-learning-articles
|
https://api.github.com/repos/oleksis/machine-learning-articles
|
opened
|
Speed up your Data Cleaning and Preprocessing with klib
|
Data Preprocessing
|
## TL;DR
Use klib to speed up data cleaning and preprocessing
### Article Link
https://towardsdatascience.com/speed-up-your-data-cleaning-and-preprocessing-with-klib-97191d320f80
## Key Takeaways
This package seems useful but still needs to try out to know how useful it is
## Useful Tools
* [klib](https://github.com/akanz1/klib)
|
1.0
|
Speed up your Data Cleaning and Preprocessing with klib - ## TL;DR
Use klib to speed up data cleaning and preprocessing
### Article Link
https://towardsdatascience.com/speed-up-your-data-cleaning-and-preprocessing-with-klib-97191d320f80
## Key Takeaways
This package seems useful but still needs to try out to know how useful it is
## Useful Tools
* [klib](https://github.com/akanz1/klib)
|
process
|
speed up your data cleaning and preprocessing with klib tl dr use klib to speed up data cleaning and preprocessing article link key takeaways this package seems useful but still needs to try out to know how useful it is useful tools
| 1
|
13,047
| 15,387,857,700
|
IssuesEvent
|
2021-03-03 10:03:08
|
prisma/prisma
|
https://api.github.com/repos/prisma/prisma
|
closed
|
[Introspection] Error: [libs\sql-schema-describer\src\mysql.rs:351:34] non_unique
|
bug/1-repro-available kind/bug process/candidate status/needs-action team/migrations topic: introspection topic: introspection error reporting
|
<!-- If required, please update the title to be clear and descriptive -->
Command: `prisma introspect`
Version: `2.6.2`
Binary Version: `6a8054bb549e4cc23f157b0010cb2e95cb2637fb`
Report: https://prisma-errors.netlify.app/report/11189
OS: `x64 win32 10.0.19042`
JS Stacktrace:
```
Error: [libs\sql-schema-describer\src\mysql.rs:351:34] non_unique
at IntrospectionEngine.<anonymous> (C:\temp\...\...\node_modules\@prisma\cli\build\index.js:2:1828886)
at Generator.throw (<anonymous>)
at rejected (C:\temp\...\...\node_modules\@prisma\cli\build\index.js:2:1825889)
at processTicksAndRejections (internal/process/task_queues.js:97:5)
```
Rust Stacktrace:
```
0: <unknown>
1: <unknown>
2: <unknown>
3: <unknown>
4: <unknown>
5: <unknown>
6: <unknown>
7: <unknown>
8: <unknown>
9: <unknown>
10: <unknown>
11: <unknown>
12: <unknown>
13: <unknown>
14: <unknown>
15: <unknown>
16: <unknown>
17: <unknown>
18: <unknown>
19: <unknown>
20: <unknown>
21: <unknown>
22: <unknown>
23: <unknown>
24: <unknown>
25: <unknown>
26: <unknown>
27: <unknown>
28: <unknown>
29: <unknown>
30: <unknown>
31: <unknown>
32: BaseThreadInitThunk
33: RtlUserThreadStart
```
|
1.0
|
[Introspection] Error: [libs\sql-schema-describer\src\mysql.rs:351:34] non_unique - <!-- If required, please update the title to be clear and descriptive -->
Command: `prisma introspect`
Version: `2.6.2`
Binary Version: `6a8054bb549e4cc23f157b0010cb2e95cb2637fb`
Report: https://prisma-errors.netlify.app/report/11189
OS: `x64 win32 10.0.19042`
JS Stacktrace:
```
Error: [libs\sql-schema-describer\src\mysql.rs:351:34] non_unique
at IntrospectionEngine.<anonymous> (C:\temp\...\...\node_modules\@prisma\cli\build\index.js:2:1828886)
at Generator.throw (<anonymous>)
at rejected (C:\temp\...\...\node_modules\@prisma\cli\build\index.js:2:1825889)
at processTicksAndRejections (internal/process/task_queues.js:97:5)
```
Rust Stacktrace:
```
0: <unknown>
1: <unknown>
2: <unknown>
3: <unknown>
4: <unknown>
5: <unknown>
6: <unknown>
7: <unknown>
8: <unknown>
9: <unknown>
10: <unknown>
11: <unknown>
12: <unknown>
13: <unknown>
14: <unknown>
15: <unknown>
16: <unknown>
17: <unknown>
18: <unknown>
19: <unknown>
20: <unknown>
21: <unknown>
22: <unknown>
23: <unknown>
24: <unknown>
25: <unknown>
26: <unknown>
27: <unknown>
28: <unknown>
29: <unknown>
30: <unknown>
31: <unknown>
32: BaseThreadInitThunk
33: RtlUserThreadStart
```
|
process
|
error non unique command prisma introspect version binary version report os js stacktrace error non unique at introspectionengine c temp node modules prisma cli build index js at generator throw at rejected c temp node modules prisma cli build index js at processticksandrejections internal process task queues js rust stacktrace basethreadinitthunk rtluserthreadstart
| 1
|
713,377
| 24,526,909,713
|
IssuesEvent
|
2022-10-11 13:45:45
|
WordPress/openverse
|
https://api.github.com/repos/WordPress/openverse
|
closed
|
Add shellcheck to all repositories
|
good first issue help wanted 🟨 priority: medium ✨ goal: improvement 🤖 aspect: dx
|
## Problem
<!-- Describe a problem solved by this feature; or delete the section entirely. -->
We've made a few easy to avoid mistakes in shell scripts
https://github.com/WordPress/openverse-frontend/pull/1640
https://github.com/WordPress/openverse-api/pull/869
These can be avoided using shellcheck and following it's basic advice (always adding `set -e`, actual syntax checks, etc).
## Description
<!-- Describe the feature and how it solves the problem. -->
Add shellcheck to all repositories. Even if any of these repositories don't have shell scripts, let's add it now before we introduce a shell script and forget about this. It doesn't hurt, after all.
- [x] WordPress/openverse (https://github.com/WordPress/openverse/pull/300)
- [x] WordPress/openverse-catalog (https://github.com/WordPress/openverse-catalog/pull/718)
- [x] WordPress/openverse-api (https://github.com/WordPress/openverse-api/pull/922)
- [x] WordPress/openverse-frontend (https://github.com/WordPress/openverse-frontend/pull/1862)
## Side note
It'd be nice if we used pre-commit on all repositories instead of having the frontend repository be the odd ball using `husky`. If we used pre-commit everywhere then we could just sync the pre-commit configuration and automagically have things working for all languages across all repositories the same way. I'll open another issue to think about that for the frontend repo.
## Implementation
<!-- Replace the [ ] with [x] to check the box. -->
- [ ] 🙋 I would be interested in implementing this feature.
|
1.0
|
Add shellcheck to all repositories - ## Problem
<!-- Describe a problem solved by this feature; or delete the section entirely. -->
We've made a few easy to avoid mistakes in shell scripts
https://github.com/WordPress/openverse-frontend/pull/1640
https://github.com/WordPress/openverse-api/pull/869
These can be avoided using shellcheck and following it's basic advice (always adding `set -e`, actual syntax checks, etc).
## Description
<!-- Describe the feature and how it solves the problem. -->
Add shellcheck to all repositories. Even if any of these repositories don't have shell scripts, let's add it now before we introduce a shell script and forget about this. It doesn't hurt, after all.
- [x] WordPress/openverse (https://github.com/WordPress/openverse/pull/300)
- [x] WordPress/openverse-catalog (https://github.com/WordPress/openverse-catalog/pull/718)
- [x] WordPress/openverse-api (https://github.com/WordPress/openverse-api/pull/922)
- [x] WordPress/openverse-frontend (https://github.com/WordPress/openverse-frontend/pull/1862)
## Side note
It'd be nice if we used pre-commit on all repositories instead of having the frontend repository be the odd ball using `husky`. If we used pre-commit everywhere then we could just sync the pre-commit configuration and automagically have things working for all languages across all repositories the same way. I'll open another issue to think about that for the frontend repo.
## Implementation
<!-- Replace the [ ] with [x] to check the box. -->
- [ ] 🙋 I would be interested in implementing this feature.
|
non_process
|
add shellcheck to all repositories problem we ve made a few easy to avoid mistakes in shell scripts these can be avoided using shellcheck and following it s basic advice always adding set e actual syntax checks etc description add shellcheck to all repositories even if any of these repositories don t have shell scripts let s add it now before we introduce a shell script and forget about this it doesn t hurt after all wordpress openverse wordpress openverse catalog wordpress openverse api wordpress openverse frontend side note it d be nice if we used pre commit on all repositories instead of having the frontend repository be the odd ball using husky if we used pre commit everywhere then we could just sync the pre commit configuration and automagically have things working for all languages across all repositories the same way i ll open another issue to think about that for the frontend repo implementation 🙋 i would be interested in implementing this feature
| 0
|
53,293
| 6,715,294,151
|
IssuesEvent
|
2017-10-13 20:30:56
|
teamdigitale/teamdigitale.governo.it
|
https://api.github.com/repos/teamdigitale/teamdigitale.governo.it
|
closed
|
Voce di menù "blog"
|
bug design
|
La voce di menù "blog" risulta evidenziata in home page, nessuna voce di menù deve essere evidenziata in home page
|
1.0
|
Voce di menù "blog" - La voce di menù "blog" risulta evidenziata in home page, nessuna voce di menù deve essere evidenziata in home page
|
non_process
|
voce di menù blog la voce di menù blog risulta evidenziata in home page nessuna voce di menù deve essere evidenziata in home page
| 0
|
62,149
| 14,656,443,759
|
IssuesEvent
|
2020-12-28 13:26:18
|
fu1771695yongxie/angular.js
|
https://api.github.com/repos/fu1771695yongxie/angular.js
|
opened
|
CVE-2016-10735 (Medium) detected in bootstrap-3.3.7.tgz, bootstrap-3.1.1.tgz
|
security vulnerability
|
## CVE-2016-10735 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>bootstrap-3.3.7.tgz</b>, <b>bootstrap-3.1.1.tgz</b></p></summary>
<p>
<details><summary><b>bootstrap-3.3.7.tgz</b></p></summary>
<p>The most popular front-end framework for developing responsive, mobile first projects on the web.</p>
<p>Library home page: <a href="https://registry.npmjs.org/bootstrap/-/bootstrap-3.3.7.tgz">https://registry.npmjs.org/bootstrap/-/bootstrap-3.3.7.tgz</a></p>
<p>Path to dependency file: angular.js/yarn.lock</p>
<p>Path to vulnerable library: angular.js/yarn.lock</p>
<p>
Dependency Hierarchy:
- angular-benchpress-0.2.2.tgz (Root Library)
- :x: **bootstrap-3.3.7.tgz** (Vulnerable Library)
</details>
<details><summary><b>bootstrap-3.1.1.tgz</b></p></summary>
<p>Sleek, intuitive, and powerful front-end framework for faster and easier web development.</p>
<p>Library home page: <a href="https://registry.npmjs.org/bootstrap/-/bootstrap-3.1.1.tgz">https://registry.npmjs.org/bootstrap/-/bootstrap-3.1.1.tgz</a></p>
<p>Path to dependency file: angular.js/yarn.lock</p>
<p>Path to vulnerable library: angular.js/yarn.lock</p>
<p>
Dependency Hierarchy:
- :x: **bootstrap-3.1.1.tgz** (Vulnerable Library)
</details>
<p>Found in HEAD commit: <a href="https://github.com/fu1771695yongxie/angular.js/commit/1b1b50185bc6258e96ba849a00be9ba44d9ac05c">1b1b50185bc6258e96ba849a00be9ba44d9ac05c</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
In Bootstrap 3.x before 3.4.0 and 4.x-beta before 4.0.0-beta.2, XSS is possible in the data-target attribute, a different vulnerability than CVE-2018-14041.
<p>Publish Date: 2019-01-09
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2016-10735>CVE-2016-10735</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>6.1</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: Required
- Scope: Changed
- Impact Metrics:
- Confidentiality Impact: Low
- Integrity Impact: Low
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/twbs/bootstrap/issues/20184">https://github.com/twbs/bootstrap/issues/20184</a></p>
<p>Release Date: 2019-01-09</p>
<p>Fix Resolution: 3.4.0</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
True
|
CVE-2016-10735 (Medium) detected in bootstrap-3.3.7.tgz, bootstrap-3.1.1.tgz - ## CVE-2016-10735 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>bootstrap-3.3.7.tgz</b>, <b>bootstrap-3.1.1.tgz</b></p></summary>
<p>
<details><summary><b>bootstrap-3.3.7.tgz</b></p></summary>
<p>The most popular front-end framework for developing responsive, mobile first projects on the web.</p>
<p>Library home page: <a href="https://registry.npmjs.org/bootstrap/-/bootstrap-3.3.7.tgz">https://registry.npmjs.org/bootstrap/-/bootstrap-3.3.7.tgz</a></p>
<p>Path to dependency file: angular.js/yarn.lock</p>
<p>Path to vulnerable library: angular.js/yarn.lock</p>
<p>
Dependency Hierarchy:
- angular-benchpress-0.2.2.tgz (Root Library)
- :x: **bootstrap-3.3.7.tgz** (Vulnerable Library)
</details>
<details><summary><b>bootstrap-3.1.1.tgz</b></p></summary>
<p>Sleek, intuitive, and powerful front-end framework for faster and easier web development.</p>
<p>Library home page: <a href="https://registry.npmjs.org/bootstrap/-/bootstrap-3.1.1.tgz">https://registry.npmjs.org/bootstrap/-/bootstrap-3.1.1.tgz</a></p>
<p>Path to dependency file: angular.js/yarn.lock</p>
<p>Path to vulnerable library: angular.js/yarn.lock</p>
<p>
Dependency Hierarchy:
- :x: **bootstrap-3.1.1.tgz** (Vulnerable Library)
</details>
<p>Found in HEAD commit: <a href="https://github.com/fu1771695yongxie/angular.js/commit/1b1b50185bc6258e96ba849a00be9ba44d9ac05c">1b1b50185bc6258e96ba849a00be9ba44d9ac05c</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
In Bootstrap 3.x before 3.4.0 and 4.x-beta before 4.0.0-beta.2, XSS is possible in the data-target attribute, a different vulnerability than CVE-2018-14041.
<p>Publish Date: 2019-01-09
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2016-10735>CVE-2016-10735</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>6.1</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: Required
- Scope: Changed
- Impact Metrics:
- Confidentiality Impact: Low
- Integrity Impact: Low
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/twbs/bootstrap/issues/20184">https://github.com/twbs/bootstrap/issues/20184</a></p>
<p>Release Date: 2019-01-09</p>
<p>Fix Resolution: 3.4.0</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
non_process
|
cve medium detected in bootstrap tgz bootstrap tgz cve medium severity vulnerability vulnerable libraries bootstrap tgz bootstrap tgz bootstrap tgz the most popular front end framework for developing responsive mobile first projects on the web library home page a href path to dependency file angular js yarn lock path to vulnerable library angular js yarn lock dependency hierarchy angular benchpress tgz root library x bootstrap tgz vulnerable library bootstrap tgz sleek intuitive and powerful front end framework for faster and easier web development library home page a href path to dependency file angular js yarn lock path to vulnerable library angular js yarn lock dependency hierarchy x bootstrap tgz vulnerable library found in head commit a href found in base branch master vulnerability details in bootstrap x before and x beta before beta xss is possible in the data target attribute a different vulnerability than cve publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction required scope changed impact metrics confidentiality impact low integrity impact low availability impact none for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution step up your open source security game with whitesource
| 0
|
15,413
| 19,600,371,812
|
IssuesEvent
|
2022-01-06 00:07:52
|
GoogleCloudPlatform/python-docs-samples
|
https://api.github.com/repos/GoogleCloudPlatform/python-docs-samples
|
closed
|
composer.rest.get_dag_prefix_test: test_get_dag_prefix failed
|
priority: p2 type: process api: composer samples flakybot: issue
|
Note: #6369 was also for this test, but it was closed more than 10 days ago. So, I didn't mark it flaky.
----
commit: 5ceff2520ecced7e1284f5ce0592b93dafcf3817
buildURL: [Build Status](https://source.cloud.google.com/results/invocations/63899ce4-0591-4f40-81b5-ddef69dd0b95), [Sponge](http://sponge2/63899ce4-0591-4f40-81b5-ddef69dd0b95)
status: failed
<details><summary>Test output</summary><br><pre>Traceback (most recent call last):
File "/workspace/composer/rest/get_dag_prefix_test.py", line 26, in test_get_dag_prefix
get_dag_prefix(PROJECT, COMPOSER_LOCATION, COMPOSER_ENVIRONMENT)
File "/workspace/composer/rest/get_dag_prefix.py", line 47, in get_dag_prefix
print(environment_data['config']['dagGcsPrefix'])
KeyError: 'config'</pre></details>
|
1.0
|
composer.rest.get_dag_prefix_test: test_get_dag_prefix failed - Note: #6369 was also for this test, but it was closed more than 10 days ago. So, I didn't mark it flaky.
----
commit: 5ceff2520ecced7e1284f5ce0592b93dafcf3817
buildURL: [Build Status](https://source.cloud.google.com/results/invocations/63899ce4-0591-4f40-81b5-ddef69dd0b95), [Sponge](http://sponge2/63899ce4-0591-4f40-81b5-ddef69dd0b95)
status: failed
<details><summary>Test output</summary><br><pre>Traceback (most recent call last):
File "/workspace/composer/rest/get_dag_prefix_test.py", line 26, in test_get_dag_prefix
get_dag_prefix(PROJECT, COMPOSER_LOCATION, COMPOSER_ENVIRONMENT)
File "/workspace/composer/rest/get_dag_prefix.py", line 47, in get_dag_prefix
print(environment_data['config']['dagGcsPrefix'])
KeyError: 'config'</pre></details>
|
process
|
composer rest get dag prefix test test get dag prefix failed note was also for this test but it was closed more than days ago so i didn t mark it flaky commit buildurl status failed test output traceback most recent call last file workspace composer rest get dag prefix test py line in test get dag prefix get dag prefix project composer location composer environment file workspace composer rest get dag prefix py line in get dag prefix print environment data keyerror config
| 1
|
4,929
| 7,795,449,869
|
IssuesEvent
|
2018-06-08 08:09:45
|
StrikeNP/trac_test
|
https://api.github.com/repos/StrikeNP/trac_test
|
closed
|
Update nightly tests to use plotgen 3 (Trac #145)
|
Migrated from Trac post_processing senkbeil@uwm.edu task
|
The nightly tests will need to be updated to use Plotgen 3. At minimum, the call to plotgen will need to be updated to:
```text
plotgen -an CLUBB_current/ CLUBB_previous/ output
```
Attachments:
Migrated from http://carson.math.uwm.edu/trac/clubb/ticket/145
```json
{
"status": "closed",
"changetime": "2009-09-02T20:41:22",
"description": "The nightly tests will need to be updated to use Plotgen 3. At minimum, the call to plotgen will need to be updated to:\n\n{{{\nplotgen -an CLUBB_current/ CLUBB_previous/ output\n}}}",
"reporter": "senkbeil@uwm.edu",
"cc": "",
"resolution": "Verified by V. Larson",
"_ts": "1251924082000000",
"component": "post_processing",
"summary": "Update nightly tests to use plotgen 3",
"priority": "major",
"keywords": "",
"time": "2009-07-23T21:50:20",
"milestone": "Plotgen 3.0",
"owner": "senkbeil@uwm.edu",
"type": "task"
}
```
|
1.0
|
Update nightly tests to use plotgen 3 (Trac #145) - The nightly tests will need to be updated to use Plotgen 3. At minimum, the call to plotgen will need to be updated to:
```text
plotgen -an CLUBB_current/ CLUBB_previous/ output
```
Attachments:
Migrated from http://carson.math.uwm.edu/trac/clubb/ticket/145
```json
{
"status": "closed",
"changetime": "2009-09-02T20:41:22",
"description": "The nightly tests will need to be updated to use Plotgen 3. At minimum, the call to plotgen will need to be updated to:\n\n{{{\nplotgen -an CLUBB_current/ CLUBB_previous/ output\n}}}",
"reporter": "senkbeil@uwm.edu",
"cc": "",
"resolution": "Verified by V. Larson",
"_ts": "1251924082000000",
"component": "post_processing",
"summary": "Update nightly tests to use plotgen 3",
"priority": "major",
"keywords": "",
"time": "2009-07-23T21:50:20",
"milestone": "Plotgen 3.0",
"owner": "senkbeil@uwm.edu",
"type": "task"
}
```
|
process
|
update nightly tests to use plotgen trac the nightly tests will need to be updated to use plotgen at minimum the call to plotgen will need to be updated to text plotgen an clubb current clubb previous output attachments migrated from json status closed changetime description the nightly tests will need to be updated to use plotgen at minimum the call to plotgen will need to be updated to n n nplotgen an clubb current clubb previous output n reporter senkbeil uwm edu cc resolution verified by v larson ts component post processing summary update nightly tests to use plotgen priority major keywords time milestone plotgen owner senkbeil uwm edu type task
| 1
|
10,986
| 13,783,700,906
|
IssuesEvent
|
2020-10-08 19:38:55
|
pystatgen/sgkit
|
https://api.github.com/repos/pystatgen/sgkit
|
closed
|
Project logo
|
process + tools
|
There's no rush but at some point it would be useful to have a project logo.
We can look through the NumFOCUS [sponsored](https://numfocus.org/sponsored-projects) and [affiliated](https://numfocus.org/sponsored-projects/affiliated-projects) projects for inspiration.
[Open Logos](http://openlogos.org/) is a cool project from [Aras Atasaygın](https://twitter.com/arasatasaygin) to make logos available for free to OSS projects.
99designs also collects [examples](https://99designs.com/inspiration/logos/open-source) of OSS logos created by their community.
Open Source Design has a [jobs board](https://opensourcedesign.net/jobs/) that seems to be a good place to post a request for a logo.
|
1.0
|
Project logo - There's no rush but at some point it would be useful to have a project logo.
We can look through the NumFOCUS [sponsored](https://numfocus.org/sponsored-projects) and [affiliated](https://numfocus.org/sponsored-projects/affiliated-projects) projects for inspiration.
[Open Logos](http://openlogos.org/) is a cool project from [Aras Atasaygın](https://twitter.com/arasatasaygin) to make logos available for free to OSS projects.
99designs also collects [examples](https://99designs.com/inspiration/logos/open-source) of OSS logos created by their community.
Open Source Design has a [jobs board](https://opensourcedesign.net/jobs/) that seems to be a good place to post a request for a logo.
|
process
|
project logo there s no rush but at some point it would be useful to have a project logo we can look through the numfocus and projects for inspiration is a cool project from to make logos available for free to oss projects also collects of oss logos created by their community open source design has a that seems to be a good place to post a request for a logo
| 1
|
12,655
| 15,025,277,266
|
IssuesEvent
|
2021-02-01 20:51:06
|
qgis/QGIS
|
https://api.github.com/repos/qgis/QGIS
|
closed
|
Profiles from Lines, SAGA tool, produces invalid files.
|
Bug Processing
|
Author Name: **Gabriel De Luca** (@gabriel-de-luca)
Original Redmine Issue: [21294](https://issues.qgis.org/issues/21294)
Affected QGIS version: 3.4.4
Redmine category:processing/saga
Assignee: Giovanni Manghi
---
The _Profiles from Lines_ SAGA tool produces invalid files.
In SAGA GIS, at the command line (@ta_profile 4@):
* If you want to save _Each Line as a New Profile_, you need to use the `-SPLIT` and the `-PROFILES` options. Files will be saved as _profile_01.shp_, _profile_02.shp_, etc.
* If not, don't use `-SPLIT` option and use `-PROFILE` option instead. File will be saved as _profile.shp_.
In QGIS, processing toolbox:
* Both `-PROFILE` and `-PROFILES` options are always used, so one of them always returns an error (an empty layer that can not be loaded).
* If you tick on _Each Line as a New Profile_ (is ticked by default), `-SPLIT` option is used, and the files are saved as _PROFILE.shp1_, _PROFILE.shp2_, etc. That produced two errors (one for the `-PROFILE` empty layer and other for the invalid extensions).
|
1.0
|
Profiles from Lines, SAGA tool, produces invalid files. - Author Name: **Gabriel De Luca** (@gabriel-de-luca)
Original Redmine Issue: [21294](https://issues.qgis.org/issues/21294)
Affected QGIS version: 3.4.4
Redmine category:processing/saga
Assignee: Giovanni Manghi
---
The _Profiles from Lines_ SAGA tool produces invalid files.
In SAGA GIS, at the command line (@ta_profile 4@):
* If you want to save _Each Line as a New Profile_, you need to use the `-SPLIT` and the `-PROFILES` options. Files will be saved as _profile_01.shp_, _profile_02.shp_, etc.
* If not, don't use `-SPLIT` option and use `-PROFILE` option instead. File will be saved as _profile.shp_.
In QGIS, processing toolbox:
* Both `-PROFILE` and `-PROFILES` options are always used, so one of them always returns an error (an empty layer that can not be loaded).
* If you tick on _Each Line as a New Profile_ (is ticked by default), `-SPLIT` option is used, and the files are saved as _PROFILE.shp1_, _PROFILE.shp2_, etc. That produced two errors (one for the `-PROFILE` empty layer and other for the invalid extensions).
|
process
|
profiles from lines saga tool produces invalid files author name gabriel de luca gabriel de luca original redmine issue affected qgis version redmine category processing saga assignee giovanni manghi the profiles from lines saga tool produces invalid files in saga gis at the command line ta profile if you want to save each line as a new profile you need to use the split and the profiles options files will be saved as profile shp profile shp etc if not don t use split option and use profile option instead file will be saved as profile shp in qgis processing toolbox both profile and profiles options are always used so one of them always returns an error an empty layer that can not be loaded if you tick on each line as a new profile is ticked by default split option is used and the files are saved as profile profile etc that produced two errors one for the profile empty layer and other for the invalid extensions
| 1
|
19,760
| 26,131,381,909
|
IssuesEvent
|
2022-12-29 05:43:17
|
nodejs/node
|
https://api.github.com/repos/nodejs/node
|
opened
|
Entry points specified as absolute URLs fail to load
|
module process cli esm
|
### Version
19.3.0
### Platform
Darwin Geoffreys-MacBook-Pro.local 22.2.0 Darwin Kernel Version 22.2.0: Fri Nov 11 02:08:47 PST 2022; root:xnu-8792.61.2~4/RELEASE_X86_64 x86_64
### Subsystem
module, esm, process, cli
### What steps will reproduce the bug?
With a checkout of the `node` repo at `~/Sites/node`:
```shell
$ pwd
/Users/geoffrey
$ node file:///Users/geoffrey/Sites/node/test/fixtures/es-modules/mjs-file.mjs
node:internal/modules/cjs/loader:1042
throw err;
^
Error: Cannot find module '/Users/geoffrey/file:/Users/geoffrey/Sites/node/test/fixtures/es-modules/mjs-file.mjs'
at Module._resolveFilename (node:internal/modules/cjs/loader:1039:15)
at Module._load (node:internal/modules/cjs/loader:885:27)
at Function.executeUserEntryPoint [as runMain] (node:internal/modules/run_main:82:12)
at node:internal/main/run_main_module:23:47 {
code: 'MODULE_NOT_FOUND',
requireStack: []
}
Node.js v19.3.0
```
Note the `/Users/geoffrey/file:/Users/geoffrey/Sites/node/test/fixtures/es-modules/mjs-file.mjs` constructed specifier. This doesn’t make sense.
### How often does it reproduce? Is there a required condition?
Every time.
### What is the expected behavior?
Absolute file URLs should be allowable as program entry points.
### What do you see instead?
```
Error: Cannot find module '/Users/geoffrey/file:/Users/geoffrey/Sites/node/test/fixtures/es-modules/mjs-file.mjs'
```
### Additional information
I understand per https://nodejs.org/api/cli.html#program-entry-point the entry point is parsed by the _CommonJS_ resolver, even when it passes the criteria for being loaded by the ESM one; but it doesn’t make sense that either resolver would be constructing an invalid path.
It’s also counterintuitive that `file:///Users/geoffrey/Sites/node/test/fixtures/es-modules/mjs-file.mjs` should be acceptable input for `--loader` and `--import` but not as the main entry point. For example, all of these are valid:
- `node --import file:///Users/geoffrey/Sites/node/test/fixtures/es-modules/mjs-file.mjs --eval ';'`
- `node --loader file:///Users/geoffrey/Sites/node/test/fixtures/es-modules/mjs-file.mjs --eval ';'`
- `node --import file:///Users/geoffrey/Sites/node/test/fixtures/es-modules/mjs-file.mjs ./Sites/node/test/fixtures/es-modules/mjs-file.mjs`
- `node --loader file:///Users/geoffrey/Sites/node/test/fixtures/es-modules/mjs-file.mjs ./Sites/node/test/fixtures/es-modules/mjs-file.mjs`
cc @aduh95 @JakobJingleheimer @nodejs/modules @nodejs/loaders
|
1.0
|
Entry points specified as absolute URLs fail to load - ### Version
19.3.0
### Platform
Darwin Geoffreys-MacBook-Pro.local 22.2.0 Darwin Kernel Version 22.2.0: Fri Nov 11 02:08:47 PST 2022; root:xnu-8792.61.2~4/RELEASE_X86_64 x86_64
### Subsystem
module, esm, process, cli
### What steps will reproduce the bug?
With a checkout of the `node` repo at `~/Sites/node`:
```shell
$ pwd
/Users/geoffrey
$ node file:///Users/geoffrey/Sites/node/test/fixtures/es-modules/mjs-file.mjs
node:internal/modules/cjs/loader:1042
throw err;
^
Error: Cannot find module '/Users/geoffrey/file:/Users/geoffrey/Sites/node/test/fixtures/es-modules/mjs-file.mjs'
at Module._resolveFilename (node:internal/modules/cjs/loader:1039:15)
at Module._load (node:internal/modules/cjs/loader:885:27)
at Function.executeUserEntryPoint [as runMain] (node:internal/modules/run_main:82:12)
at node:internal/main/run_main_module:23:47 {
code: 'MODULE_NOT_FOUND',
requireStack: []
}
Node.js v19.3.0
```
Note the `/Users/geoffrey/file:/Users/geoffrey/Sites/node/test/fixtures/es-modules/mjs-file.mjs` constructed specifier. This doesn’t make sense.
### How often does it reproduce? Is there a required condition?
Every time.
### What is the expected behavior?
Absolute file URLs should be allowable as program entry points.
### What do you see instead?
```
Error: Cannot find module '/Users/geoffrey/file:/Users/geoffrey/Sites/node/test/fixtures/es-modules/mjs-file.mjs'
```
### Additional information
I understand per https://nodejs.org/api/cli.html#program-entry-point the entry point is parsed by the _CommonJS_ resolver, even when it passes the criteria for being loaded by the ESM one; but it doesn’t make sense that either resolver would be constructing an invalid path.
It’s also counterintuitive that `file:///Users/geoffrey/Sites/node/test/fixtures/es-modules/mjs-file.mjs` should be acceptable input for `--loader` and `--import` but not as the main entry point. For example, all of these are valid:
- `node --import file:///Users/geoffrey/Sites/node/test/fixtures/es-modules/mjs-file.mjs --eval ';'`
- `node --loader file:///Users/geoffrey/Sites/node/test/fixtures/es-modules/mjs-file.mjs --eval ';'`
- `node --import file:///Users/geoffrey/Sites/node/test/fixtures/es-modules/mjs-file.mjs ./Sites/node/test/fixtures/es-modules/mjs-file.mjs`
- `node --loader file:///Users/geoffrey/Sites/node/test/fixtures/es-modules/mjs-file.mjs ./Sites/node/test/fixtures/es-modules/mjs-file.mjs`
cc @aduh95 @JakobJingleheimer @nodejs/modules @nodejs/loaders
|
process
|
entry points specified as absolute urls fail to load version platform darwin geoffreys macbook pro local darwin kernel version fri nov pst root xnu release subsystem module esm process cli what steps will reproduce the bug with a checkout of the node repo at sites node shell pwd users geoffrey node file users geoffrey sites node test fixtures es modules mjs file mjs node internal modules cjs loader throw err error cannot find module users geoffrey file users geoffrey sites node test fixtures es modules mjs file mjs at module resolvefilename node internal modules cjs loader at module load node internal modules cjs loader at function executeuserentrypoint node internal modules run main at node internal main run main module code module not found requirestack node js note the users geoffrey file users geoffrey sites node test fixtures es modules mjs file mjs constructed specifier this doesn’t make sense how often does it reproduce is there a required condition every time what is the expected behavior absolute file urls should be allowable as program entry points what do you see instead error cannot find module users geoffrey file users geoffrey sites node test fixtures es modules mjs file mjs additional information i understand per the entry point is parsed by the commonjs resolver even when it passes the criteria for being loaded by the esm one but it doesn’t make sense that either resolver would be constructing an invalid path it’s also counterintuitive that file users geoffrey sites node test fixtures es modules mjs file mjs should be acceptable input for loader and import but not as the main entry point for example all of these are valid node import file users geoffrey sites node test fixtures es modules mjs file mjs eval node loader file users geoffrey sites node test fixtures es modules mjs file mjs eval node import file users geoffrey sites node test fixtures es modules mjs file mjs sites node test fixtures es modules mjs file mjs node loader file users geoffrey sites node test fixtures es modules mjs file mjs sites node test fixtures es modules mjs file mjs cc jakobjingleheimer nodejs modules nodejs loaders
| 1
|
143,474
| 19,180,303,439
|
IssuesEvent
|
2021-12-04 09:05:02
|
AlexRogalskiy/java-patterns
|
https://api.github.com/repos/AlexRogalskiy/java-patterns
|
opened
|
CVE-2021-3795 (High) detected in semver-regex-1.0.0.tgz
|
security vulnerability
|
## CVE-2021-3795 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>semver-regex-1.0.0.tgz</b></p></summary>
<p>Regular expression for matching semver versions</p>
<p>Library home page: <a href="https://registry.npmjs.org/semver-regex/-/semver-regex-1.0.0.tgz">https://registry.npmjs.org/semver-regex/-/semver-regex-1.0.0.tgz</a></p>
<p>Path to dependency file: java-patterns/package.json</p>
<p>Path to vulnerable library: java-patterns/node_modules/semver-regex/package.json</p>
<p>
Dependency Hierarchy:
- validate-commit-msg-2.14.0.tgz (Root Library)
- :x: **semver-regex-1.0.0.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/AlexRogalskiy/java-patterns/commit/850df196d336da76298c7b0f9c02003e236df715">850df196d336da76298c7b0f9c02003e236df715</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
semver-regex is vulnerable to Inefficient Regular Expression Complexity
<p>Publish Date: 2021-09-15
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-3795>CVE-2021-3795</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/sindresorhus/semver-regex/releases/tag/v4.0.1">https://github.com/sindresorhus/semver-regex/releases/tag/v4.0.1</a></p>
<p>Release Date: 2021-09-15</p>
<p>Fix Resolution: semver-regex - 3.1.3,4.0.1</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
True
|
CVE-2021-3795 (High) detected in semver-regex-1.0.0.tgz - ## CVE-2021-3795 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>semver-regex-1.0.0.tgz</b></p></summary>
<p>Regular expression for matching semver versions</p>
<p>Library home page: <a href="https://registry.npmjs.org/semver-regex/-/semver-regex-1.0.0.tgz">https://registry.npmjs.org/semver-regex/-/semver-regex-1.0.0.tgz</a></p>
<p>Path to dependency file: java-patterns/package.json</p>
<p>Path to vulnerable library: java-patterns/node_modules/semver-regex/package.json</p>
<p>
Dependency Hierarchy:
- validate-commit-msg-2.14.0.tgz (Root Library)
- :x: **semver-regex-1.0.0.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/AlexRogalskiy/java-patterns/commit/850df196d336da76298c7b0f9c02003e236df715">850df196d336da76298c7b0f9c02003e236df715</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
semver-regex is vulnerable to Inefficient Regular Expression Complexity
<p>Publish Date: 2021-09-15
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-3795>CVE-2021-3795</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/sindresorhus/semver-regex/releases/tag/v4.0.1">https://github.com/sindresorhus/semver-regex/releases/tag/v4.0.1</a></p>
<p>Release Date: 2021-09-15</p>
<p>Fix Resolution: semver-regex - 3.1.3,4.0.1</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
non_process
|
cve high detected in semver regex tgz cve high severity vulnerability vulnerable library semver regex tgz regular expression for matching semver versions library home page a href path to dependency file java patterns package json path to vulnerable library java patterns node modules semver regex package json dependency hierarchy validate commit msg tgz root library x semver regex tgz vulnerable library found in head commit a href found in base branch master vulnerability details semver regex is vulnerable to inefficient regular expression complexity publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution semver regex step up your open source security game with whitesource
| 0
|
10,896
| 13,673,970,575
|
IssuesEvent
|
2020-09-29 10:34:56
|
geneontology/go-ontology
|
https://api.github.com/repos/geneontology/go-ontology
|
closed
|
Membrane remodeling and Double membrane vesicle viral factory assembly
|
New term request multi-species process
|
Dear GO curators,
Would it be possible to create
1. new_GO:
Name: membrane remodeling activity
Ontology: molecular_function
Synonyms: Membrane rearrangement activity
Ex: PMID: 32555292 (title: GTP hydrolysis by Synechocystis IM30 does not decisively affect its membrane remodeling activity)
Definition: Any protein that can remodel or rearrange membranes.
----
And also GO has a cellular component termed :
"double membrane vesicle viral factory", would it be possible to create:
2. new_GO:
Name: double membrane vesicle viral factory assembly
Ontology Biological_process
Definition: Any protein involved in the assembly of a cytoplasmic viral factory that consists of a double-membrane bound vesicle.
Thanks for your help, you can ask if you need more information,
Patrick
|
1.0
|
Membrane remodeling and Double membrane vesicle viral factory assembly - Dear GO curators,
Would it be possible to create
1. new_GO:
Name: membrane remodeling activity
Ontology: molecular_function
Synonyms: Membrane rearrangement activity
Ex: PMID: 32555292 (title: GTP hydrolysis by Synechocystis IM30 does not decisively affect its membrane remodeling activity)
Definition: Any protein that can remodel or rearrange membranes.
----
And also GO has a cellular component termed :
"double membrane vesicle viral factory", would it be possible to create:
2. new_GO:
Name: double membrane vesicle viral factory assembly
Ontology Biological_process
Definition: Any protein involved in the assembly of a cytoplasmic viral factory that consists of a double-membrane bound vesicle.
Thanks for your help, you can ask if you need more information,
Patrick
|
process
|
membrane remodeling and double membrane vesicle viral factory assembly dear go curators would it be possible to create new go name membrane remodeling activity ontology molecular function synonyms membrane rearrangement activity ex pmid title gtp hydrolysis by synechocystis does not decisively affect its membrane remodeling activity definition any protein that can remodel or rearrange membranes and also go has a cellular component termed double membrane vesicle viral factory would it be possible to create new go name double membrane vesicle viral factory assembly ontology biological process definition any protein involved in the assembly of a cytoplasmic viral factory that consists of a double membrane bound vesicle thanks for your help you can ask if you need more information patrick
| 1
|
9,393
| 12,395,541,245
|
IssuesEvent
|
2020-05-20 18:51:11
|
kubernetes/minikube
|
https://api.github.com/repos/kubernetes/minikube
|
closed
|
experiment with google cloud build
|
area/testing kind/process priority/backlog
|
https://cloud.google.com/cloud-build/docs/automating-builds/run-builds-on-github
I am curious if we can offload some of our tests to google cloud build.
to take out as much out of jenkins to more transparent process which is in the github source control.
|
1.0
|
experiment with google cloud build - https://cloud.google.com/cloud-build/docs/automating-builds/run-builds-on-github
I am curious if we can offload some of our tests to google cloud build.
to take out as much out of jenkins to more transparent process which is in the github source control.
|
process
|
experiment with google cloud build i am curious if we can offload some of our tests to google cloud build to take out as much out of jenkins to more transparent process which is in the github source control
| 1
|
4,724
| 5,258,360,502
|
IssuesEvent
|
2017-02-02 23:07:32
|
librenms/librenms
|
https://api.github.com/repos/librenms/librenms
|
closed
|
Restricted user seeing information about routers he does not have access too under Devices/Ports
|
Security WebUI
|
I want to give my customers access to use Librenms. But I only want to limit the user to see specific routers.
When I do that, and log in as that user. Under Devices/Ports on the device, I can see information on the right side about routers within the same Subnet or using same RFC 1918 subnet. Then it lists up our MGMT IP addresses/hostnames that we use in Librenms.
So for example Vlan 1, that is on many routers. And they all have 192.168.1.0/24 subnet. So that router sees every single other router information because they are using same IP subnet.
That is ok, except I only want the routers he has access to too be listed up there. Not those he does not have access too.
There is one other router in this picture that is 'bold' that he has access too. And it is active link. Other routers are just gray and not link.

|
True
|
Restricted user seeing information about routers he does not have access too under Devices/Ports - I want to give my customers access to use Librenms. But I only want to limit the user to see specific routers.
When I do that, and log in as that user. Under Devices/Ports on the device, I can see information on the right side about routers within the same Subnet or using same RFC 1918 subnet. Then it lists up our MGMT IP addresses/hostnames that we use in Librenms.
So for example Vlan 1, that is on many routers. And they all have 192.168.1.0/24 subnet. So that router sees every single other router information because they are using same IP subnet.
That is ok, except I only want the routers he has access to too be listed up there. Not those he does not have access too.
There is one other router in this picture that is 'bold' that he has access too. And it is active link. Other routers are just gray and not link.

|
non_process
|
restricted user seeing information about routers he does not have access too under devices ports i want to give my customers access to use librenms but i only want to limit the user to see specific routers when i do that and log in as that user under devices ports on the device i can see information on the right side about routers within the same subnet or using same rfc subnet then it lists up our mgmt ip addresses hostnames that we use in librenms so for example vlan that is on many routers and they all have subnet so that router sees every single other router information because they are using same ip subnet that is ok except i only want the routers he has access to too be listed up there not those he does not have access too there is one other router in this picture that is bold that he has access too and it is active link other routers are just gray and not link
| 0
|
14,622
| 17,766,735,930
|
IssuesEvent
|
2021-08-30 08:30:03
|
googleapis/nodejs-googleapis-common
|
https://api.github.com/repos/googleapis/nodejs-googleapis-common
|
closed
|
Dependency Dashboard
|
type: process
|
This issue contains a list of Renovate updates and their statuses.
## Awaiting Schedule
These updates are awaiting their schedule. Click on a checkbox to get an update now.
- [ ] <!-- unschedule-branch=renovate/actions-setup-node-2.x -->chore(deps): update actions/setup-node action to v2
## Ignored or Blocked
These are blocked by an existing closed PR and will not be recreated unless you click a checkbox below.
- [ ] <!-- recreate-branch=renovate/gts-3.x -->[chore(deps): update dependency gts to v3](../pull/333)
- [ ] <!-- recreate-branch=renovate/mocha-9.x -->[chore(deps): update dependency mocha to v9](../pull/392) (`mocha`, `@types/mocha`)
- [ ] <!-- recreate-branch=renovate/ts-loader-9.x -->[chore(deps): update dependency ts-loader to v9](../pull/381)
- [ ] <!-- recreate-branch=renovate/webpack-5.x -->[chore(deps): update dependency webpack to v5](../pull/334)
---
- [ ] <!-- manual job -->Check this box to trigger a request for Renovate to run again on this repository
|
1.0
|
Dependency Dashboard - This issue contains a list of Renovate updates and their statuses.
## Awaiting Schedule
These updates are awaiting their schedule. Click on a checkbox to get an update now.
- [ ] <!-- unschedule-branch=renovate/actions-setup-node-2.x -->chore(deps): update actions/setup-node action to v2
## Ignored or Blocked
These are blocked by an existing closed PR and will not be recreated unless you click a checkbox below.
- [ ] <!-- recreate-branch=renovate/gts-3.x -->[chore(deps): update dependency gts to v3](../pull/333)
- [ ] <!-- recreate-branch=renovate/mocha-9.x -->[chore(deps): update dependency mocha to v9](../pull/392) (`mocha`, `@types/mocha`)
- [ ] <!-- recreate-branch=renovate/ts-loader-9.x -->[chore(deps): update dependency ts-loader to v9](../pull/381)
- [ ] <!-- recreate-branch=renovate/webpack-5.x -->[chore(deps): update dependency webpack to v5](../pull/334)
---
- [ ] <!-- manual job -->Check this box to trigger a request for Renovate to run again on this repository
|
process
|
dependency dashboard this issue contains a list of renovate updates and their statuses awaiting schedule these updates are awaiting their schedule click on a checkbox to get an update now chore deps update actions setup node action to ignored or blocked these are blocked by an existing closed pr and will not be recreated unless you click a checkbox below pull pull mocha types mocha pull pull check this box to trigger a request for renovate to run again on this repository
| 1
|
17,774
| 23,701,687,353
|
IssuesEvent
|
2022-08-29 19:38:47
|
googleapis/google-cloud-go
|
https://api.github.com/repos/googleapis/google-cloud-go
|
closed
|
storage: remove OLM Age condition reflection
|
api: storage type: process
|
Once the autogen breaking change is made to make Age in OLM conditions from `int64` to `*int64` we should revert changes made in https://github.com/googleapis/google-cloud-go/pull/6230
|
1.0
|
storage: remove OLM Age condition reflection - Once the autogen breaking change is made to make Age in OLM conditions from `int64` to `*int64` we should revert changes made in https://github.com/googleapis/google-cloud-go/pull/6230
|
process
|
storage remove olm age condition reflection once the autogen breaking change is made to make age in olm conditions from to we should revert changes made in
| 1
|
14,399
| 17,421,637,314
|
IssuesEvent
|
2021-08-04 02:35:01
|
Figma-Linux/figma-linux
|
https://api.github.com/repos/Figma-Linux/figma-linux
|
closed
|
Window does not snap to edges
|
Kind: Renderer Process Priority: Low Status: To Do Type: Enhancement
|
* App version: 0.7.1
* The output of screenfetch:
```
/:-------------:\ noelle@fedora
:-------------------:: -------------
:-----------/shhOHbmp---:\ OS: Fedora 33 (Workstation Edition) x86_64
/-----------omMMMNNNMMD ---: Host: AB350M-DS3H V2
:-----------sMMMMNMNMP. ---: Kernel: 5.10.7-200.fc33.x86_64
:-----------:MMMdP------- ---\ Uptime: 2 hours, 2 mins
,------------:MMMd-------- ---: Packages: 1750 (rpm), 4 (flatpak)
:------------:MMMd------- .---: Shell: bash 5.0.17
:---- oNMMMMMMMMMNho .----: Resolution: 1920x1080
:-- .+shhhMMMmhhy++ .------/ DE: GNOME 3.38.3
:- -------:MMMd--------------: WM: Mutter
:- --------/MMMd-------------; WM Theme: Adwaita
:- ------/hMMMy------------: Theme: Adwaita [GTK2/3]
:-- :dMNdhhdNMMNo------------; Icons: Adwaita [GTK2/3]
:---:sdNMMMMNds:------------: Terminal: gnome-terminal
:------:://:-------------:: CPU: AMD Ryzen 5 2600 (12) @ 3.600GHz
:---------------------:// GPU: AMD ATI Radeon RX 460/560D / Pro 450/
Memory: 3626MiB / 7963MiB
```
* Type of installed package (Snap, AppImage, deb, rpm, pacman): RPM
**Bug description**
When dragging the window to the left, right or top edge of the screen, the window does not tile or maximize.
**Screenshots**

Expected behavior:

**Additional Context**
The window snaps properly when dragging with the Super button held down.
|
1.0
|
Window does not snap to edges - * App version: 0.7.1
* The output of screenfetch:
```
/:-------------:\ noelle@fedora
:-------------------:: -------------
:-----------/shhOHbmp---:\ OS: Fedora 33 (Workstation Edition) x86_64
/-----------omMMMNNNMMD ---: Host: AB350M-DS3H V2
:-----------sMMMMNMNMP. ---: Kernel: 5.10.7-200.fc33.x86_64
:-----------:MMMdP------- ---\ Uptime: 2 hours, 2 mins
,------------:MMMd-------- ---: Packages: 1750 (rpm), 4 (flatpak)
:------------:MMMd------- .---: Shell: bash 5.0.17
:---- oNMMMMMMMMMNho .----: Resolution: 1920x1080
:-- .+shhhMMMmhhy++ .------/ DE: GNOME 3.38.3
:- -------:MMMd--------------: WM: Mutter
:- --------/MMMd-------------; WM Theme: Adwaita
:- ------/hMMMy------------: Theme: Adwaita [GTK2/3]
:-- :dMNdhhdNMMNo------------; Icons: Adwaita [GTK2/3]
:---:sdNMMMMNds:------------: Terminal: gnome-terminal
:------:://:-------------:: CPU: AMD Ryzen 5 2600 (12) @ 3.600GHz
:---------------------:// GPU: AMD ATI Radeon RX 460/560D / Pro 450/
Memory: 3626MiB / 7963MiB
```
* Type of installed package (Snap, AppImage, deb, rpm, pacman): RPM
**Bug description**
When dragging the window to the left, right or top edge of the screen, the window does not tile or maximize.
**Screenshots**

Expected behavior:

**Additional Context**
The window snaps properly when dragging with the Super button held down.
|
process
|
window does not snap to edges app version the output of screenfetch noelle fedora shhohbmp os fedora workstation edition ommmmnnnmmd host smmmmnmnmp kernel mmmdp uptime hours mins mmmd packages rpm flatpak mmmd shell bash onmmmmmmmmmnho resolution shhhmmmmhhy de gnome mmmd wm mutter mmmd wm theme adwaita hmmmy theme adwaita dmndhhdnmmno icons adwaita sdnmmmmnds terminal gnome terminal cpu amd ryzen gpu amd ati radeon rx pro memory type of installed package snap appimage deb rpm pacman rpm bug description when dragging the window to the left right or top edge of the screen the window does not tile or maximize screenshots expected behavior additional context the window snaps properly when dragging with the super button held down
| 1
|
17,794
| 23,722,967,962
|
IssuesEvent
|
2022-08-30 16:52:45
|
hashgraph/hedera-mirror-node
|
https://api.github.com/repos/hashgraph/hedera-mirror-node
|
closed
|
Release checklist 0.62
|
enhancement process
|
### Problem
We need a checklist to verify the release is rolled out successfully.
### Solution
## Preparation
- [x] Milestone field populated on relevant [issues](https://github.com/hashgraph/hedera-mirror-node/issues?q=is%3Aclosed+no%3Amilestone+sort%3Aupdated-desc)
- [x] Nothing open for [milestone](https://github.com/hashgraph/hedera-mirror-node/issues?q=is%3Aopen+sort%3Aupdated-desc+milestone%3A0.62.0)
- [x] GitHub checks for branch are passing
- [x] Automated Kubernetes deployment successful
- [x] Tag release
- [x] Upload release artifacts
- [x] Publish release
## Integration
- [x] Deploy to VM
## Performance
- [x] Deploy to Kubernetes
- [x] Deploy to VM
- [x] gRPC API performance tests
- [x] Importer performance tests
- [x] REST API performance tests
- [x] Migrations tested against mainnet clone
## Previewnet
- [x] Deploy to VM
## Staging
- [x] Deploy to Kubernetes EU
- [x] Deploy to Kubernetes NA
## Testnet
- [x] Deploy to VM
## Mainnet
- [x] Deploy to Kubernetes EU
- [x] Deploy to Kubernetes NA
- [x] Deploy to VM
- [x] Deploy to ETL
### Alternatives
_No response_
|
1.0
|
Release checklist 0.62 - ### Problem
We need a checklist to verify the release is rolled out successfully.
### Solution
## Preparation
- [x] Milestone field populated on relevant [issues](https://github.com/hashgraph/hedera-mirror-node/issues?q=is%3Aclosed+no%3Amilestone+sort%3Aupdated-desc)
- [x] Nothing open for [milestone](https://github.com/hashgraph/hedera-mirror-node/issues?q=is%3Aopen+sort%3Aupdated-desc+milestone%3A0.62.0)
- [x] GitHub checks for branch are passing
- [x] Automated Kubernetes deployment successful
- [x] Tag release
- [x] Upload release artifacts
- [x] Publish release
## Integration
- [x] Deploy to VM
## Performance
- [x] Deploy to Kubernetes
- [x] Deploy to VM
- [x] gRPC API performance tests
- [x] Importer performance tests
- [x] REST API performance tests
- [x] Migrations tested against mainnet clone
## Previewnet
- [x] Deploy to VM
## Staging
- [x] Deploy to Kubernetes EU
- [x] Deploy to Kubernetes NA
## Testnet
- [x] Deploy to VM
## Mainnet
- [x] Deploy to Kubernetes EU
- [x] Deploy to Kubernetes NA
- [x] Deploy to VM
- [x] Deploy to ETL
### Alternatives
_No response_
|
process
|
release checklist problem we need a checklist to verify the release is rolled out successfully solution preparation milestone field populated on relevant nothing open for github checks for branch are passing automated kubernetes deployment successful tag release upload release artifacts publish release integration deploy to vm performance deploy to kubernetes deploy to vm grpc api performance tests importer performance tests rest api performance tests migrations tested against mainnet clone previewnet deploy to vm staging deploy to kubernetes eu deploy to kubernetes na testnet deploy to vm mainnet deploy to kubernetes eu deploy to kubernetes na deploy to vm deploy to etl alternatives no response
| 1
|
6,377
| 9,428,635,908
|
IssuesEvent
|
2019-04-12 02:06:56
|
googleapis/nodejs-bigtable
|
https://api.github.com/repos/googleapis/nodejs-bigtable
|
closed
|
Convert library to TypeScript
|
type: feature request type: process
|
This needs to happen in steps:
- [x] Rename all the files to `*.ts` and introduce a compile step
- [x] Get the linter and formatter working
- [ ] Enable noImplicitAny and types for all src/ files
- [ ] Update the package.json to ship the types
- [ ] Enable noImplicitAny everywhere
|
1.0
|
Convert library to TypeScript - This needs to happen in steps:
- [x] Rename all the files to `*.ts` and introduce a compile step
- [x] Get the linter and formatter working
- [ ] Enable noImplicitAny and types for all src/ files
- [ ] Update the package.json to ship the types
- [ ] Enable noImplicitAny everywhere
|
process
|
convert library to typescript this needs to happen in steps rename all the files to ts and introduce a compile step get the linter and formatter working enable noimplicitany and types for all src files update the package json to ship the types enable noimplicitany everywhere
| 1
|
577,195
| 17,105,068,879
|
IssuesEvent
|
2021-07-09 16:24:57
|
bcgov/entity
|
https://api.github.com/repos/bcgov/entity
|
opened
|
NameX Advanced Search - remove timeout, and add "stop search" button
|
ENTITY NameX Priority2
|
## background:
Staff concern:
> In Namex, the search function is very sensitive and it won’t find part of a name, which we need to be able to continue to do. If we’re trying to search “Maple Leaf Brewing”, and type “Maple Leaf Brew” in the search, it won’t find the NR(s). What should generate is the screenshot in the example above^, and instead, this is what we see, “No data available”:
- this is the result of the application timing out, as before in good practise should reduce the submitted by time, we can either add the submitted by default time to a shorter period like 5 years would return quicker results. Also can add a timeout pop up so you know why it didn’t return a result
* Would you be ok with default setting to 5 years, so it doesn’t time out on you every time?
* Elizabeth usually sets it to 90 days if it is recent. But often she uses “all”. Not all the time, but i sometimes do set it when i need to search all the history.
* I think knowing that 5 years will be faster, is good to know.
* Maybe we can just increase the timeout so the “all” works?
* Sounds like everyone agrees that 5 years is a good default setting for the search.
* **Scott proposal: remove the timeout completely and add a stop search button**
TO DO:
- [ ]
|
1.0
|
NameX Advanced Search - remove timeout, and add "stop search" button - ## background:
Staff concern:
> In Namex, the search function is very sensitive and it won’t find part of a name, which we need to be able to continue to do. If we’re trying to search “Maple Leaf Brewing”, and type “Maple Leaf Brew” in the search, it won’t find the NR(s). What should generate is the screenshot in the example above^, and instead, this is what we see, “No data available”:
- this is the result of the application timing out, as before in good practise should reduce the submitted by time, we can either add the submitted by default time to a shorter period like 5 years would return quicker results. Also can add a timeout pop up so you know why it didn’t return a result
* Would you be ok with default setting to 5 years, so it doesn’t time out on you every time?
* Elizabeth usually sets it to 90 days if it is recent. But often she uses “all”. Not all the time, but i sometimes do set it when i need to search all the history.
* I think knowing that 5 years will be faster, is good to know.
* Maybe we can just increase the timeout so the “all” works?
* Sounds like everyone agrees that 5 years is a good default setting for the search.
* **Scott proposal: remove the timeout completely and add a stop search button**
TO DO:
- [ ]
|
non_process
|
namex advanced search remove timeout and add stop search button background staff concern in namex the search function is very sensitive and it won’t find part of a name which we need to be able to continue to do if we’re trying to search “maple leaf brewing” and type “maple leaf brew” in the search it won’t find the nr s what should generate is the screenshot in the example above and instead this is what we see “no data available” this is the result of the application timing out as before in good practise should reduce the submitted by time we can either add the submitted by default time to a shorter period like years would return quicker results also can add a timeout pop up so you know why it didn’t return a result would you be ok with default setting to years so it doesn’t time out on you every time elizabeth usually sets it to days if it is recent but often she uses “all” not all the time but i sometimes do set it when i need to search all the history i think knowing that years will be faster is good to know maybe we can just increase the timeout so the “all” works sounds like everyone agrees that years is a good default setting for the search scott proposal remove the timeout completely and add a stop search button to do
| 0
|
435,352
| 30,495,543,180
|
IssuesEvent
|
2023-07-18 10:32:39
|
Orange-OpenSource/floss-toolbox
|
https://api.github.com/repos/Orange-OpenSource/floss-toolbox
|
opened
|
Project - Add security policy file
|
documentation next-release
|
A SECURITY policy file is missing and it could to have one in case of security issues
|
1.0
|
Project - Add security policy file - A SECURITY policy file is missing and it could to have one in case of security issues
|
non_process
|
project add security policy file a security policy file is missing and it could to have one in case of security issues
| 0
|
6,214
| 9,125,296,101
|
IssuesEvent
|
2019-02-24 12:33:09
|
FACK1/ReservationSystem
|
https://api.github.com/repos/FACK1/ReservationSystem
|
opened
|
General Bugs
|
bug inProcess
|
- [ ] Logo bug.
- [ ] Handle error in every axios endpoint catch.
- [ ] Hide approve button when the user is not admin.
|
1.0
|
General Bugs - - [ ] Logo bug.
- [ ] Handle error in every axios endpoint catch.
- [ ] Hide approve button when the user is not admin.
|
process
|
general bugs logo bug handle error in every axios endpoint catch hide approve button when the user is not admin
| 1
|
185,536
| 15,024,871,772
|
IssuesEvent
|
2021-02-01 20:15:29
|
BCDevOps/developer-experience
|
https://api.github.com/repos/BCDevOps/developer-experience
|
closed
|
Privileged access model for community partners
|
documentation security
|
**Describe the issue**
Description and communication of access requirements and reasoning around granting privileged platform access to community partners
**Which Sprint Priority is this issue related to?**
N/A
**Additional context**
Add any other context, attachments or screenshots
**Definition of done**
- [x] Description documented
- [ ] Communication relayed to community (RC, Sprint Review)
- [ ] Tracking and review process of privileged access
|
1.0
|
Privileged access model for community partners - **Describe the issue**
Description and communication of access requirements and reasoning around granting privileged platform access to community partners
**Which Sprint Priority is this issue related to?**
N/A
**Additional context**
Add any other context, attachments or screenshots
**Definition of done**
- [x] Description documented
- [ ] Communication relayed to community (RC, Sprint Review)
- [ ] Tracking and review process of privileged access
|
non_process
|
privileged access model for community partners describe the issue description and communication of access requirements and reasoning around granting privileged platform access to community partners which sprint priority is this issue related to n a additional context add any other context attachments or screenshots definition of done description documented communication relayed to community rc sprint review tracking and review process of privileged access
| 0
|
783,794
| 27,546,340,272
|
IssuesEvent
|
2023-03-07 12:04:38
|
carvel-dev/kapp-controller
|
https://api.github.com/repos/carvel-dev/kapp-controller
|
closed
|
dry-run option for "install" and "add" commands
|
enhancement carvel-accepted cli priority/unprioritized-backlog
|
**Describe the problem/challenge you have**
When teaching new folks about what kctrl is doing behind the scenes to get them more familiar with the gitops workflows it enables, it's a two step process currently.
1. Run install package / add repository commands
2. kubectl get those items and explain what they are
**Describe the solution you'd like**
Ideally we'd have a dry run option that simply outputs the yaml that it's attempting to install/add.
We could eventually consider adding this to update commands as well, but I think initially it'd be most useful and simplest to dry-run on the create commands.
---
Vote on this request
This is an invitation to the community to vote on issues, to help us prioritize our backlog. Use the "smiley face" up to the right of this comment to vote.
👍 "I would like to see this addressed as soon as possible"
👎 "There are other more important things to focus on right now"
We are also happy to receive and review Pull Requests if you want to help working on this issue.
|
1.0
|
dry-run option for "install" and "add" commands - **Describe the problem/challenge you have**
When teaching new folks about what kctrl is doing behind the scenes to get them more familiar with the gitops workflows it enables, it's a two step process currently.
1. Run install package / add repository commands
2. kubectl get those items and explain what they are
**Describe the solution you'd like**
Ideally we'd have a dry run option that simply outputs the yaml that it's attempting to install/add.
We could eventually consider adding this to update commands as well, but I think initially it'd be most useful and simplest to dry-run on the create commands.
---
Vote on this request
This is an invitation to the community to vote on issues, to help us prioritize our backlog. Use the "smiley face" up to the right of this comment to vote.
👍 "I would like to see this addressed as soon as possible"
👎 "There are other more important things to focus on right now"
We are also happy to receive and review Pull Requests if you want to help working on this issue.
|
non_process
|
dry run option for install and add commands describe the problem challenge you have when teaching new folks about what kctrl is doing behind the scenes to get them more familiar with the gitops workflows it enables it s a two step process currently run install package add repository commands kubectl get those items and explain what they are describe the solution you d like ideally we d have a dry run option that simply outputs the yaml that it s attempting to install add we could eventually consider adding this to update commands as well but i think initially it d be most useful and simplest to dry run on the create commands vote on this request this is an invitation to the community to vote on issues to help us prioritize our backlog use the smiley face up to the right of this comment to vote 👍 i would like to see this addressed as soon as possible 👎 there are other more important things to focus on right now we are also happy to receive and review pull requests if you want to help working on this issue
| 0
|
10,363
| 13,185,666,800
|
IssuesEvent
|
2020-08-12 21:55:45
|
googleapis/python-container
|
https://api.github.com/repos/googleapis/python-container
|
closed
|
Release as production/stable
|
api: container type: process
|
Package name: **FIXME**
Current release: **alpha**
Proposed release: **GA**
## Instructions
Check the lists below, adding tests / documentation as required. Once all the "required" boxes are ticked, please create a release and close this issue.
## Required
- [x] 28 days elapsed since last beta release with new API surface
- [x] Server API is GA
- [x] Package API is stable, and we can commit to backward compatibility
- [ ] All dependencies are GA
## Optional
- [ ] Most common / important scenarios have descriptive samples
- [ ] Public manual methods have at least one usage sample each (excluding overloads)
- [ ] Per-API README includes a full description of the API
- [ ] Per-API README contains at least one “getting started” sample using the most common API scenario
- [ ] Manual code has been reviewed by API producer
- [ ] Manual code has been reviewed by a DPE responsible for samples
- [ ] 'Client Libraries' page is added to the product documentation in 'APIs & Reference' section of the product's documentation on Cloud Site
|
1.0
|
Release as production/stable - Package name: **FIXME**
Current release: **alpha**
Proposed release: **GA**
## Instructions
Check the lists below, adding tests / documentation as required. Once all the "required" boxes are ticked, please create a release and close this issue.
## Required
- [x] 28 days elapsed since last beta release with new API surface
- [x] Server API is GA
- [x] Package API is stable, and we can commit to backward compatibility
- [ ] All dependencies are GA
## Optional
- [ ] Most common / important scenarios have descriptive samples
- [ ] Public manual methods have at least one usage sample each (excluding overloads)
- [ ] Per-API README includes a full description of the API
- [ ] Per-API README contains at least one “getting started” sample using the most common API scenario
- [ ] Manual code has been reviewed by API producer
- [ ] Manual code has been reviewed by a DPE responsible for samples
- [ ] 'Client Libraries' page is added to the product documentation in 'APIs & Reference' section of the product's documentation on Cloud Site
|
process
|
release as production stable package name fixme current release alpha proposed release ga instructions check the lists below adding tests documentation as required once all the required boxes are ticked please create a release and close this issue required days elapsed since last beta release with new api surface server api is ga package api is stable and we can commit to backward compatibility all dependencies are ga optional most common important scenarios have descriptive samples public manual methods have at least one usage sample each excluding overloads per api readme includes a full description of the api per api readme contains at least one “getting started” sample using the most common api scenario manual code has been reviewed by api producer manual code has been reviewed by a dpe responsible for samples client libraries page is added to the product documentation in apis reference section of the product s documentation on cloud site
| 1
|
988
| 4,756,341,225
|
IssuesEvent
|
2016-10-24 13:45:39
|
ansible/ansible-modules-core
|
https://api.github.com/repos/ansible/ansible-modules-core
|
closed
|
Fix minor typos in modules
|
affects_2.3 docs_report networking waiting_on_maintainer
|
##### ISSUE TYPE
- Documentation Report
##### COMPONENT NAME
ios_template
The ios_template module has the following text:
```
Deprecated in 2.2. Use eos_config instead
```
I'm guessing this wasn't an attempt to sell more Arista switches :-)
|
True
|
Fix minor typos in modules - ##### ISSUE TYPE
- Documentation Report
##### COMPONENT NAME
ios_template
The ios_template module has the following text:
```
Deprecated in 2.2. Use eos_config instead
```
I'm guessing this wasn't an attempt to sell more Arista switches :-)
|
non_process
|
fix minor typos in modules issue type documentation report component name ios template the ios template module has the following text deprecated in use eos config instead i m guessing this wasn t an attempt to sell more arista switches
| 0
|
126,048
| 10,374,419,926
|
IssuesEvent
|
2019-09-09 09:34:01
|
microsoft/AzureStorageExplorer
|
https://api.github.com/repos/microsoft/AzureStorageExplorer
|
opened
|
Hide the paging when there is less than a page of disks on one resource group editor
|
:gear: disks 🧪 testing
|
**Storage Explorer Version:** 1.10.0
**Build:** 20190909.2
**Branch:** master
**Platform/OS:** Windows 10/ Linux Ubuntu 19.04/macOS High Sierra
**Architecture:** ia32/x64
**Regression From:** Not a regression
**Steps to reproduce:**
1. Expand 'Disks' node -> Open an empty resource group.
2. Check the bottom-right of the opened editor.
**Expect Experience:**
There is no paging.
**Actual Experience:**
The paging shows but it is disabled.

|
1.0
|
Hide the paging when there is less than a page of disks on one resource group editor - **Storage Explorer Version:** 1.10.0
**Build:** 20190909.2
**Branch:** master
**Platform/OS:** Windows 10/ Linux Ubuntu 19.04/macOS High Sierra
**Architecture:** ia32/x64
**Regression From:** Not a regression
**Steps to reproduce:**
1. Expand 'Disks' node -> Open an empty resource group.
2. Check the bottom-right of the opened editor.
**Expect Experience:**
There is no paging.
**Actual Experience:**
The paging shows but it is disabled.

|
non_process
|
hide the paging when there is less than a page of disks on one resource group editor storage explorer version build branch master platform os windows linux ubuntu macos high sierra architecture regression from not a regression steps to reproduce expand disks node open an empty resource group check the bottom right of the opened editor expect experience there is no paging actual experience the paging shows but it is disabled
| 0
|
9,321
| 12,338,260,809
|
IssuesEvent
|
2020-05-14 16:10:45
|
DiSSCo/user-stories
|
https://api.github.com/repos/DiSSCo/user-stories
|
opened
|
to select all DiSSCo records from a certain species
|
2. Collection Management 2. University/Research institute 4. Data processing ICEDIG-SURVEY Specimen level
|
As a Scientist I want to cross-check data between specimens of the same species so that I can flag outliers and correct record data where necessary for this I need to select all DiSSCo records from a certain species
|
1.0
|
to select all DiSSCo records from a certain species - As a Scientist I want to cross-check data between specimens of the same species so that I can flag outliers and correct record data where necessary for this I need to select all DiSSCo records from a certain species
|
process
|
to select all dissco records from a certain species as a scientist i want to cross check data between specimens of the same species so that i can flag outliers and correct record data where necessary for this i need to select all dissco records from a certain species
| 1
|
17,083
| 22,587,525,965
|
IssuesEvent
|
2022-06-28 16:29:15
|
GoogleCloudPlatform/emblem
|
https://api.github.com/repos/GoogleCloudPlatform/emblem
|
opened
|
New Process: Merger verifies staging
|
type: process priority: p0 persona: maintainer
|
Adopt & document a new process that either the PR author or merger must verify a change is shipped to staging as expected, or reopen / create an issue to mark that a change didn't deploy as expected.
This will ensure changes ship as expected, indicating the implementation is complete.
|
1.0
|
New Process: Merger verifies staging - Adopt & document a new process that either the PR author or merger must verify a change is shipped to staging as expected, or reopen / create an issue to mark that a change didn't deploy as expected.
This will ensure changes ship as expected, indicating the implementation is complete.
|
process
|
new process merger verifies staging adopt document a new process that either the pr author or merger must verify a change is shipped to staging as expected or reopen create an issue to mark that a change didn t deploy as expected this will ensure changes ship as expected indicating the implementation is complete
| 1
|
237,158
| 26,078,827,939
|
IssuesEvent
|
2022-12-25 01:22:23
|
Nivaskumark/CVE-2020-0133-packages_apps_Settings_fix
|
https://api.github.com/repos/Nivaskumark/CVE-2020-0133-packages_apps_Settings_fix
|
reopened
|
CVE-2020-0024 (High) detected in Settingsandroid-10.0.0_r46
|
security vulnerability
|
## CVE-2020-0024 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>Settingsandroid-10.0.0_r46</b></p></summary>
<p>
<p>Library home page: <a href=https://android.googlesource.com/platform/packages/apps/Settings>https://android.googlesource.com/platform/packages/apps/Settings</a></p>
<p>Found in HEAD commit: <a href="https://github.com/Nivaskumark/CVE-2020-0133-packages_apps_Settings/commit/1748b6f1992cad25d616560fd635a4df1dd27d36">1748b6f1992cad25d616560fd635a4df1dd27d36</a></p>
<p>Found in base branch: <b>master</b></p></p>
</details>
</p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Source Files (1)</summary>
<p></p>
<p>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/src/com/android/settings/SettingsActivity.java</b>
</p>
</details>
<p></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
In onCreate of SettingsBaseActivity.java, there is a possible unauthorized setting modification due to a permissions bypass. This could lead to local escalation of privilege with no additional execution privileges needed. User interaction is needed for exploitation.Product: AndroidVersions: Android-8.1 Android-9 Android-10 Android-8.0Android ID: A-137015265
<p>Publish Date: 2020-05-14
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2020-0024>CVE-2020-0024</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.8</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: Required
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Release Date: 2020-05-05</p>
<p>Fix Resolution: android-10.0.0_r34,android-8.1.0_r73</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
True
|
CVE-2020-0024 (High) detected in Settingsandroid-10.0.0_r46 - ## CVE-2020-0024 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>Settingsandroid-10.0.0_r46</b></p></summary>
<p>
<p>Library home page: <a href=https://android.googlesource.com/platform/packages/apps/Settings>https://android.googlesource.com/platform/packages/apps/Settings</a></p>
<p>Found in HEAD commit: <a href="https://github.com/Nivaskumark/CVE-2020-0133-packages_apps_Settings/commit/1748b6f1992cad25d616560fd635a4df1dd27d36">1748b6f1992cad25d616560fd635a4df1dd27d36</a></p>
<p>Found in base branch: <b>master</b></p></p>
</details>
</p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Source Files (1)</summary>
<p></p>
<p>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/src/com/android/settings/SettingsActivity.java</b>
</p>
</details>
<p></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
In onCreate of SettingsBaseActivity.java, there is a possible unauthorized setting modification due to a permissions bypass. This could lead to local escalation of privilege with no additional execution privileges needed. User interaction is needed for exploitation.Product: AndroidVersions: Android-8.1 Android-9 Android-10 Android-8.0Android ID: A-137015265
<p>Publish Date: 2020-05-14
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2020-0024>CVE-2020-0024</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.8</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: Required
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Release Date: 2020-05-05</p>
<p>Fix Resolution: android-10.0.0_r34,android-8.1.0_r73</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
non_process
|
cve high detected in settingsandroid cve high severity vulnerability vulnerable library settingsandroid library home page a href found in head commit a href found in base branch master vulnerable source files src com android settings settingsactivity java vulnerability details in oncreate of settingsbaseactivity java there is a possible unauthorized setting modification due to a permissions bypass this could lead to local escalation of privilege with no additional execution privileges needed user interaction is needed for exploitation product androidversions android android android android id a publish date url a href cvss score details base score metrics exploitability metrics attack vector local attack complexity low privileges required none user interaction required scope unchanged impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version release date fix resolution android android step up your open source security game with mend
| 0
|
168,270
| 14,144,543,906
|
IssuesEvent
|
2020-11-10 16:34:40
|
alteryx/evalml
|
https://api.github.com/repos/alteryx/evalml
|
closed
|
Update documentation from np.array to np.ndarray
|
documentation
|
Nuanced and maybe unnecessary, but np.array refers to [the method used to create an array](https://numpy.org/doc/stable/reference/generated/numpy.array.html) and np.ndarray is the actual data structure.
|
1.0
|
Update documentation from np.array to np.ndarray - Nuanced and maybe unnecessary, but np.array refers to [the method used to create an array](https://numpy.org/doc/stable/reference/generated/numpy.array.html) and np.ndarray is the actual data structure.
|
non_process
|
update documentation from np array to np ndarray nuanced and maybe unnecessary but np array refers to and np ndarray is the actual data structure
| 0
|
217,532
| 7,324,954,567
|
IssuesEvent
|
2018-03-03 02:30:51
|
NCEAS/metacat
|
https://api.github.com/repos/NCEAS/metacat
|
closed
|
Data diretory
|
Category: metacat Component: Bugzilla-Id Priority: Normal Status: Resolved Tracker: Bug
|
---
Author Name: **Jing Tao** (Jing Tao)
Original Redmine Issue: 465, https://projects.ecoinformatics.org/ecoinfo/issues/465
Original Date: 2002-04-08
Original Assignee: Jing Tao
---
Now if the Metacat doesn't have permssion, it could not create the data
direcotry automaticly. Uer should create the data directory and change it
permission manually.
We should let it created automaticly.
|
1.0
|
Data diretory - ---
Author Name: **Jing Tao** (Jing Tao)
Original Redmine Issue: 465, https://projects.ecoinformatics.org/ecoinfo/issues/465
Original Date: 2002-04-08
Original Assignee: Jing Tao
---
Now if the Metacat doesn't have permssion, it could not create the data
direcotry automaticly. Uer should create the data directory and change it
permission manually.
We should let it created automaticly.
|
non_process
|
data diretory author name jing tao jing tao original redmine issue original date original assignee jing tao now if the metacat doesn t have permssion it could not create the data direcotry automaticly uer should create the data directory and change it permission manually we should let it created automaticly
| 0
|
4,483
| 7,344,516,233
|
IssuesEvent
|
2018-03-07 14:54:12
|
UKHomeOffice/dq-aws-transition
|
https://api.github.com/repos/UKHomeOffice/dq-aws-transition
|
opened
|
Test End-to-End Job_55_SMM_ACL Wherescape Job in Prod
|
DQ Data Pipeline Production SSM processing
|
Task Estimate: 3 hours
All tasks complete and expected files and data
- [x] End-to-End Job_55_SMM_ACL tested
- [x] Batch 1 data tested
- [x] Batch 2, 3 data tested
- [x] Batch 4 data tested
- [x] Job running in Prod from Wherescape
|
1.0
|
Test End-to-End Job_55_SMM_ACL Wherescape Job in Prod - Task Estimate: 3 hours
All tasks complete and expected files and data
- [x] End-to-End Job_55_SMM_ACL tested
- [x] Batch 1 data tested
- [x] Batch 2, 3 data tested
- [x] Batch 4 data tested
- [x] Job running in Prod from Wherescape
|
process
|
test end to end job smm acl wherescape job in prod task estimate hours all tasks complete and expected files and data end to end job smm acl tested batch data tested batch data tested batch data tested job running in prod from wherescape
| 1
|
67,969
| 8,204,193,788
|
IssuesEvent
|
2018-09-03 05:00:17
|
brave/browser-laptop
|
https://api.github.com/repos/brave/browser-laptop
|
closed
|
Feature Request : Keyboard shortcuts for bookmarks in bookmarks toolbar
|
design feature/bookmarks feature/keyboard-shortcuts stale suggestion
|
**Did you search for similar issues before submitting this one?**
Yes
**Describe the issue you encountered:**
A user from support has requested keyboard shortcuts for quick navigation of bookmarks in the bookmarks toolbar like Safari.
**Expected behavior:**
Users should be able to quickly navigate their favorite sites via keyboard accelerators.
- Platform (Win7, 8, 10? macOS? Linux distro?):
All
- Any related issues:
https://linkbubble.zendesk.com/agent/tickets/6313
|
1.0
|
Feature Request : Keyboard shortcuts for bookmarks in bookmarks toolbar - **Did you search for similar issues before submitting this one?**
Yes
**Describe the issue you encountered:**
A user from support has requested keyboard shortcuts for quick navigation of bookmarks in the bookmarks toolbar like Safari.
**Expected behavior:**
Users should be able to quickly navigate their favorite sites via keyboard accelerators.
- Platform (Win7, 8, 10? macOS? Linux distro?):
All
- Any related issues:
https://linkbubble.zendesk.com/agent/tickets/6313
|
non_process
|
feature request keyboard shortcuts for bookmarks in bookmarks toolbar did you search for similar issues before submitting this one yes describe the issue you encountered a user from support has requested keyboard shortcuts for quick navigation of bookmarks in the bookmarks toolbar like safari expected behavior users should be able to quickly navigate their favorite sites via keyboard accelerators platform macos linux distro all any related issues
| 0
|
1,922
| 4,759,036,572
|
IssuesEvent
|
2016-10-24 21:15:23
|
nodejs/node
|
https://api.github.com/repos/nodejs/node
|
closed
|
Impossible to listen to beforeExit in single-tick --eval
|
confirmed-bug process
|
<!--
Thank you for reporting an issue.
Please fill in as much of the template below as you're able.
Version: output of `node -v`
Platform: output of `uname -a` (UNIX), or version and 32 or 64-bit (Windows)
Subsystem: if known, please specify affected core module name
If possible, please provide code that demonstrates the problem, keeping it as
simple and free of external dependencies as you are able.
-->
* **Version**: v2.2.0+ (?)
* **Platform**: all
* **Subsystem**: process
<!-- Enter your issue details below this comment. -->
The following exits __without__ printing:
`node -e "process.on('beforeExit', () => process._rawDebug('hi'))"`
However, this __does__ print:
node -e "process.on('beforeExit', () => process._rawDebug('hi')); setImmediate(()=>{})"
--
I did some debugging and using this patch I get the following output:
```diff
diff --git a/lib/events.js b/lib/events.js
index d676580..16ee82c 100644
--- a/lib/events.js
+++ b/lib/events.js
@@ -138,6 +138,8 @@ EventEmitter.prototype.emit = function emit(type) {
var needDomainExit = false;
var doError = (type === 'error');
+ // process._rawDebug((new Error()).stack)
+
events = this._events;
if (events)
doError = (doError && events.error == null);
@@ -169,6 +171,9 @@ EventEmitter.prototype.emit = function emit(type) {
handler = events[type];
+ process._rawDebug('01.5:' + type)
+ process._rawDebug(typeof handler)
+
if (!handler)
return false;
@@ -234,6 +239,9 @@ function _addListener(target, type, listener, prepend) {
}
if (!existing) {
+ process._rawDebug('@@@@')
+ // process._rawDebug((new Error()).stack)
+ // process._rawDebug(listener.toString())
// Optimize the case of one listener. Don't need the extra array object.
existing = events[type] = listener;
++target._eventsCount;
diff --git a/lib/internal/process/next_tick.js b/lib/internal/process/next_tick.js
index 529645a..5f43ba0 100644
--- a/lib/internal/process/next_tick.js
+++ b/lib/internal/process/next_tick.js
@@ -138,6 +138,8 @@ function setupNextTick() {
}
function nextTick(callback) {
+ process._rawDebug('#####')
+ process._rawDebug((new Error()).stack)
if (typeof callback !== 'function')
throw new TypeError('callback is not a function');
// on the way out, don't bother. it won't get fired anyway.
```
```
./node -e "process.on('beforeExit', () => process._rawDebug('hi'))"
@@@@
@@@@
01.5:newListener
function
@@@@
#####
Error
at process.nextTick (internal/process/next_tick.js:142:24)
at evalScript (bootstrap_node.js:344:13)
at run (bootstrap_node.js:110:11)
at run (bootstrap_node.js:382:7)
at startup (bootstrap_node.js:109:9)
at bootstrap_node.js:497:3
01.5:beforeExit
undefined
01.5:newListener
function
@@@@
01.5:exit
undefined
```
That is, __`beforeExit` fires before the listener is attached__.
--
Turns out that `at evalScript (bootstrap_node.js:344:13)` leads to [this code and comment in `evalScript()`](https://github.com/nodejs/node/blob/2804518174b806da345f0924642c3f04fc39c30e/lib/internal/bootstrap_node.js#L341-L347):
```js
// Defer evaluation for a tick. This is a workaround for deferred
// events not firing when evaluating scripts from the command line,
// see https://github.com/nodejs/node/issues/1600.
process.nextTick(function() {
const result = module._compile(script, `${name}-wrapper`);
if (process._print_eval) console.log(result);
});
```
That comment leads back to https://github.com/nodejs/node/issues/1600 - `net.listen does not emit 'listening' event when in --eval mode`.
That was fixed in https://github.com/nodejs/node/commit/93a44d5228b2e1a885f6279f06c4175c174246be by @bnoordhuis and reviewed by @trevnorris resulting in this code in `evalScript()`.
--
Not sure how to fix this right now, but it seems like a deeper bug somewhere relating to startup and nextTick.
Refs: https://github.com/nodejs/node/pull/1793 & https://github.com/nodejs/node/issues/1600 & also the older https://github.com/nodejs/node-v0.x-archive/issues/14168
Edit: found by @cxreg
|
1.0
|
Impossible to listen to beforeExit in single-tick --eval - <!--
Thank you for reporting an issue.
Please fill in as much of the template below as you're able.
Version: output of `node -v`
Platform: output of `uname -a` (UNIX), or version and 32 or 64-bit (Windows)
Subsystem: if known, please specify affected core module name
If possible, please provide code that demonstrates the problem, keeping it as
simple and free of external dependencies as you are able.
-->
* **Version**: v2.2.0+ (?)
* **Platform**: all
* **Subsystem**: process
<!-- Enter your issue details below this comment. -->
The following exits __without__ printing:
`node -e "process.on('beforeExit', () => process._rawDebug('hi'))"`
However, this __does__ print:
node -e "process.on('beforeExit', () => process._rawDebug('hi')); setImmediate(()=>{})"
--
I did some debugging and using this patch I get the following output:
```diff
diff --git a/lib/events.js b/lib/events.js
index d676580..16ee82c 100644
--- a/lib/events.js
+++ b/lib/events.js
@@ -138,6 +138,8 @@ EventEmitter.prototype.emit = function emit(type) {
var needDomainExit = false;
var doError = (type === 'error');
+ // process._rawDebug((new Error()).stack)
+
events = this._events;
if (events)
doError = (doError && events.error == null);
@@ -169,6 +171,9 @@ EventEmitter.prototype.emit = function emit(type) {
handler = events[type];
+ process._rawDebug('01.5:' + type)
+ process._rawDebug(typeof handler)
+
if (!handler)
return false;
@@ -234,6 +239,9 @@ function _addListener(target, type, listener, prepend) {
}
if (!existing) {
+ process._rawDebug('@@@@')
+ // process._rawDebug((new Error()).stack)
+ // process._rawDebug(listener.toString())
// Optimize the case of one listener. Don't need the extra array object.
existing = events[type] = listener;
++target._eventsCount;
diff --git a/lib/internal/process/next_tick.js b/lib/internal/process/next_tick.js
index 529645a..5f43ba0 100644
--- a/lib/internal/process/next_tick.js
+++ b/lib/internal/process/next_tick.js
@@ -138,6 +138,8 @@ function setupNextTick() {
}
function nextTick(callback) {
+ process._rawDebug('#####')
+ process._rawDebug((new Error()).stack)
if (typeof callback !== 'function')
throw new TypeError('callback is not a function');
// on the way out, don't bother. it won't get fired anyway.
```
```
./node -e "process.on('beforeExit', () => process._rawDebug('hi'))"
@@@@
@@@@
01.5:newListener
function
@@@@
#####
Error
at process.nextTick (internal/process/next_tick.js:142:24)
at evalScript (bootstrap_node.js:344:13)
at run (bootstrap_node.js:110:11)
at run (bootstrap_node.js:382:7)
at startup (bootstrap_node.js:109:9)
at bootstrap_node.js:497:3
01.5:beforeExit
undefined
01.5:newListener
function
@@@@
01.5:exit
undefined
```
That is, __`beforeExit` fires before the listener is attached__.
--
Turns out that `at evalScript (bootstrap_node.js:344:13)` leads to [this code and comment in `evalScript()`](https://github.com/nodejs/node/blob/2804518174b806da345f0924642c3f04fc39c30e/lib/internal/bootstrap_node.js#L341-L347):
```js
// Defer evaluation for a tick. This is a workaround for deferred
// events not firing when evaluating scripts from the command line,
// see https://github.com/nodejs/node/issues/1600.
process.nextTick(function() {
const result = module._compile(script, `${name}-wrapper`);
if (process._print_eval) console.log(result);
});
```
That comment leads back to https://github.com/nodejs/node/issues/1600 - `net.listen does not emit 'listening' event when in --eval mode`.
That was fixed in https://github.com/nodejs/node/commit/93a44d5228b2e1a885f6279f06c4175c174246be by @bnoordhuis and reviewed by @trevnorris resulting in this code in `evalScript()`.
--
Not sure how to fix this right now, but it seems like a deeper bug somewhere relating to startup and nextTick.
Refs: https://github.com/nodejs/node/pull/1793 & https://github.com/nodejs/node/issues/1600 & also the older https://github.com/nodejs/node-v0.x-archive/issues/14168
Edit: found by @cxreg
|
process
|
impossible to listen to beforeexit in single tick eval thank you for reporting an issue please fill in as much of the template below as you re able version output of node v platform output of uname a unix or version and or bit windows subsystem if known please specify affected core module name if possible please provide code that demonstrates the problem keeping it as simple and free of external dependencies as you are able version platform all subsystem process the following exits without printing node e process on beforeexit process rawdebug hi however this does print node e process on beforeexit process rawdebug hi setimmediate i did some debugging and using this patch i get the following output diff diff git a lib events js b lib events js index a lib events js b lib events js eventemitter prototype emit function emit type var needdomainexit false var doerror type error process rawdebug new error stack events this events if events doerror doerror events error null eventemitter prototype emit function emit type handler events process rawdebug type process rawdebug typeof handler if handler return false function addlistener target type listener prepend if existing process rawdebug process rawdebug new error stack process rawdebug listener tostring optimize the case of one listener don t need the extra array object existing events listener target eventscount diff git a lib internal process next tick js b lib internal process next tick js index a lib internal process next tick js b lib internal process next tick js function setupnexttick function nexttick callback process rawdebug process rawdebug new error stack if typeof callback function throw new typeerror callback is not a function on the way out don t bother it won t get fired anyway node e process on beforeexit process rawdebug hi newlistener function error at process nexttick internal process next tick js at evalscript bootstrap node js at run bootstrap node js at run bootstrap node js at startup bootstrap node js at bootstrap node js beforeexit undefined newlistener function exit undefined that is beforeexit fires before the listener is attached turns out that at evalscript bootstrap node js leads to js defer evaluation for a tick this is a workaround for deferred events not firing when evaluating scripts from the command line see process nexttick function const result module compile script name wrapper if process print eval console log result that comment leads back to net listen does not emit listening event when in eval mode that was fixed in by bnoordhuis and reviewed by trevnorris resulting in this code in evalscript not sure how to fix this right now but it seems like a deeper bug somewhere relating to startup and nexttick refs also the older edit found by cxreg
| 1
|
1,212
| 3,715,967,336
|
IssuesEvent
|
2016-03-03 05:31:02
|
shakeelmohamed/Today-Was
|
https://api.github.com/repos/shakeelmohamed/Today-Was
|
closed
|
Consider reworking CSS
|
process
|
Also add a grunt task to minify, which should be run before commiting
A couple potential npm modules are:
* http://learnboost.github.io/stylus/
* http://www.myth.io/
|
1.0
|
Consider reworking CSS - Also add a grunt task to minify, which should be run before commiting
A couple potential npm modules are:
* http://learnboost.github.io/stylus/
* http://www.myth.io/
|
process
|
consider reworking css also add a grunt task to minify which should be run before commiting a couple potential npm modules are
| 1
|
99,431
| 30,451,962,532
|
IssuesEvent
|
2023-07-16 12:15:26
|
orbeon/orbeon-forms
|
https://api.github.com/repos/orbeon/orbeon-forms
|
opened
|
Only if in PDF, only if not in PDF common visibility
|
Module: Form Builder Type: RFE
|
This is a shortcut for `fr:mode() = 'pdf'` and `not(fr:mode() = 'pdf')`. Part of #2281.
[+1 from customer](https://3.basecamp.com/3600924/buckets/23878674/messages/6344372682)
|
1.0
|
Only if in PDF, only if not in PDF common visibility - This is a shortcut for `fr:mode() = 'pdf'` and `not(fr:mode() = 'pdf')`. Part of #2281.
[+1 from customer](https://3.basecamp.com/3600924/buckets/23878674/messages/6344372682)
|
non_process
|
only if in pdf only if not in pdf common visibility this is a shortcut for fr mode pdf and not fr mode pdf part of
| 0
|
133,936
| 12,558,104,018
|
IssuesEvent
|
2020-06-07 14:56:52
|
jamesoff/simplemonitor
|
https://api.github.com/repos/jamesoff/simplemonitor
|
closed
|
typing.NoReturn not fully supported in Python 3.5 and 3.6
|
documentation
|
`typing.NoReturn` was only added in 3.5.4 and 3.6.2 (https://docs.python.org/3/library/typing.html#typing.NoReturn).
On older patch releases of Python the code raises an ÌmportError`, obviously.
I suggest to not use `typing.NoReturn` or to make more clear that Python >= 3.5.4 or Python 3.6.2 is required.
It took me some time to figure this out on Debian Stretch which only has Python 3.5.3 :(.
Occurrences:
https://github.com/jamesoff/simplemonitor/blob/917a08a437bff687f8acf41a1aea3499fc40b34f/simplemonitor/Alerters/alerter.py#L9
https://github.com/jamesoff/simplemonitor/blob/917a08a437bff687f8acf41a1aea3499fc40b34f/simplemonitor/Monitors/monitor.py#L19
|
1.0
|
typing.NoReturn not fully supported in Python 3.5 and 3.6 - `typing.NoReturn` was only added in 3.5.4 and 3.6.2 (https://docs.python.org/3/library/typing.html#typing.NoReturn).
On older patch releases of Python the code raises an ÌmportError`, obviously.
I suggest to not use `typing.NoReturn` or to make more clear that Python >= 3.5.4 or Python 3.6.2 is required.
It took me some time to figure this out on Debian Stretch which only has Python 3.5.3 :(.
Occurrences:
https://github.com/jamesoff/simplemonitor/blob/917a08a437bff687f8acf41a1aea3499fc40b34f/simplemonitor/Alerters/alerter.py#L9
https://github.com/jamesoff/simplemonitor/blob/917a08a437bff687f8acf41a1aea3499fc40b34f/simplemonitor/Monitors/monitor.py#L19
|
non_process
|
typing noreturn not fully supported in python and typing noreturn was only added in and on older patch releases of python the code raises an ìmporterror obviously i suggest to not use typing noreturn or to make more clear that python or python is required it took me some time to figure this out on debian stretch which only has python occurrences
| 0
|
12,367
| 14,894,649,643
|
IssuesEvent
|
2021-01-21 07:54:51
|
panther-labs/panther
|
https://api.github.com/repos/panther-labs/panther
|
opened
|
Support AWS Config logs
|
p1 story team:data processing
|
### Description
Support AWS Config logs: https://docs.aws.amazon.com/config/latest/developerguide/WhatIsConfig.html
### Related Services
panther-log-processor
### Acceptance Criteria
- Panther supports AWS Config logs
|
1.0
|
Support AWS Config logs - ### Description
Support AWS Config logs: https://docs.aws.amazon.com/config/latest/developerguide/WhatIsConfig.html
### Related Services
panther-log-processor
### Acceptance Criteria
- Panther supports AWS Config logs
|
process
|
support aws config logs description support aws config logs related services panther log processor acceptance criteria panther supports aws config logs
| 1
|
106,377
| 11,485,767,607
|
IssuesEvent
|
2020-02-11 08:30:59
|
cbdevnet/midimonster
|
https://api.github.com/repos/cbdevnet/midimonster
|
reopened
|
Add new command-line arguments to manpage
|
documentation
|
In particular `-v` (version), `-b` (backend option override) and `-i` (instance option override)
|
1.0
|
Add new command-line arguments to manpage - In particular `-v` (version), `-b` (backend option override) and `-i` (instance option override)
|
non_process
|
add new command line arguments to manpage in particular v version b backend option override and i instance option override
| 0
|
19,047
| 25,049,483,559
|
IssuesEvent
|
2022-11-05 17:51:38
|
Aryan20/Logomenu
|
https://api.github.com/repos/Aryan20/Logomenu
|
closed
|
Custom command for extension application
|
bug enhancement In Process
|
Hello,
I would like to request for a feature if possible.
When clicking on "Extensions" into the menu, it bring back the default's gnome manager, I was wondering if there will be a way to add a custom command just like `gnome-terminal` and `gnome-software`.
```sh
# Custom command could be:
flatpak run com.mattjakeman.ExtensionManager
```
Edit: Update command since I make a double `flatpak run` command...
*For the background story:*
I'm using the flatpak application "Extension manager" (https://flathub.org/apps/details/com.mattjakeman.ExtensionManager) to handle gnome shell extensions and installation instead of default gnome's one (easier to install extensions when using flatpak browser that does not currently support chrome's gnome shell extension).
There is no rush about it, I just wanted to know if you could make it or not at all.
Anyway, thanks you for your work and time on this extension!
|
1.0
|
Custom command for extension application - Hello,
I would like to request for a feature if possible.
When clicking on "Extensions" into the menu, it bring back the default's gnome manager, I was wondering if there will be a way to add a custom command just like `gnome-terminal` and `gnome-software`.
```sh
# Custom command could be:
flatpak run com.mattjakeman.ExtensionManager
```
Edit: Update command since I make a double `flatpak run` command...
*For the background story:*
I'm using the flatpak application "Extension manager" (https://flathub.org/apps/details/com.mattjakeman.ExtensionManager) to handle gnome shell extensions and installation instead of default gnome's one (easier to install extensions when using flatpak browser that does not currently support chrome's gnome shell extension).
There is no rush about it, I just wanted to know if you could make it or not at all.
Anyway, thanks you for your work and time on this extension!
|
process
|
custom command for extension application hello i would like to request for a feature if possible when clicking on extensions into the menu it bring back the default s gnome manager i was wondering if there will be a way to add a custom command just like gnome terminal and gnome software sh custom command could be flatpak run com mattjakeman extensionmanager edit update command since i make a double flatpak run command for the background story i m using the flatpak application extension manager to handle gnome shell extensions and installation instead of default gnome s one easier to install extensions when using flatpak browser that does not currently support chrome s gnome shell extension there is no rush about it i just wanted to know if you could make it or not at all anyway thanks you for your work and time on this extension
| 1
|
184,393
| 21,784,896,488
|
IssuesEvent
|
2022-05-14 01:43:15
|
yhuangsh/50pm
|
https://api.github.com/repos/yhuangsh/50pm
|
closed
|
WS-2019-0331 (Medium) detected in handlebars-4.1.1.tgz - autoclosed
|
security vulnerability
|
## WS-2019-0331 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>handlebars-4.1.1.tgz</b></p></summary>
<p>Handlebars provides the power necessary to let you build semantic templates effectively with no frustration</p>
<p>Library home page: <a href="https://registry.npmjs.org/handlebars/-/handlebars-4.1.1.tgz">https://registry.npmjs.org/handlebars/-/handlebars-4.1.1.tgz</a></p>
<p>Path to dependency file: /50pm/frontend/50pm/package.json</p>
<p>Path to vulnerable library: 50pm/frontend/50pm/node_modules/handlebars/package.json</p>
<p>
Dependency Hierarchy:
- react-scripts-2.1.8.tgz (Root Library)
- jest-23.6.0.tgz
- jest-cli-23.6.0.tgz
- istanbul-api-1.3.7.tgz
- istanbul-reports-1.5.1.tgz
- :x: **handlebars-4.1.1.tgz** (Vulnerable Library)
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Arbitrary Code Execution vulnerability found in handlebars before 4.5.2. Lookup helper fails to validate templates. Attack may submit templates that execute arbitrary JavaScript in the system.
<p>Publish Date: 2019-11-13
<p>URL: <a href=https://github.com/wycats/handlebars.js/commit/d54137810a49939fd2ad01a91a34e182ece4528e>WS-2019-0331</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 2 Score Details (<b>5.0</b>)</summary>
<p>
Base Score Metrics not available</p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://www.npmjs.com/advisories/1316">https://www.npmjs.com/advisories/1316</a></p>
<p>Release Date: 2019-12-05</p>
<p>Fix Resolution: handlebars - 4.5.2</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
True
|
WS-2019-0331 (Medium) detected in handlebars-4.1.1.tgz - autoclosed - ## WS-2019-0331 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>handlebars-4.1.1.tgz</b></p></summary>
<p>Handlebars provides the power necessary to let you build semantic templates effectively with no frustration</p>
<p>Library home page: <a href="https://registry.npmjs.org/handlebars/-/handlebars-4.1.1.tgz">https://registry.npmjs.org/handlebars/-/handlebars-4.1.1.tgz</a></p>
<p>Path to dependency file: /50pm/frontend/50pm/package.json</p>
<p>Path to vulnerable library: 50pm/frontend/50pm/node_modules/handlebars/package.json</p>
<p>
Dependency Hierarchy:
- react-scripts-2.1.8.tgz (Root Library)
- jest-23.6.0.tgz
- jest-cli-23.6.0.tgz
- istanbul-api-1.3.7.tgz
- istanbul-reports-1.5.1.tgz
- :x: **handlebars-4.1.1.tgz** (Vulnerable Library)
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Arbitrary Code Execution vulnerability found in handlebars before 4.5.2. Lookup helper fails to validate templates. Attack may submit templates that execute arbitrary JavaScript in the system.
<p>Publish Date: 2019-11-13
<p>URL: <a href=https://github.com/wycats/handlebars.js/commit/d54137810a49939fd2ad01a91a34e182ece4528e>WS-2019-0331</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 2 Score Details (<b>5.0</b>)</summary>
<p>
Base Score Metrics not available</p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://www.npmjs.com/advisories/1316">https://www.npmjs.com/advisories/1316</a></p>
<p>Release Date: 2019-12-05</p>
<p>Fix Resolution: handlebars - 4.5.2</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
non_process
|
ws medium detected in handlebars tgz autoclosed ws medium severity vulnerability vulnerable library handlebars tgz handlebars provides the power necessary to let you build semantic templates effectively with no frustration library home page a href path to dependency file frontend package json path to vulnerable library frontend node modules handlebars package json dependency hierarchy react scripts tgz root library jest tgz jest cli tgz istanbul api tgz istanbul reports tgz x handlebars tgz vulnerable library vulnerability details arbitrary code execution vulnerability found in handlebars before lookup helper fails to validate templates attack may submit templates that execute arbitrary javascript in the system publish date url a href cvss score details base score metrics not available suggested fix type upgrade version origin a href release date fix resolution handlebars step up your open source security game with whitesource
| 0
|
240,482
| 7,802,295,552
|
IssuesEvent
|
2018-06-10 10:48:00
|
redbadger/pride-london-app
|
https://api.github.com/repos/redbadger/pride-london-app
|
opened
|
Incorrect day on parade information screen
|
:bomb: high priority bug
|
## Description of issue

> The date should read "Saturday 7th July"
We should double check if we have hard coded Sunday anywhere else and fix that too.
|
1.0
|
Incorrect day on parade information screen - ## Description of issue

> The date should read "Saturday 7th July"
We should double check if we have hard coded Sunday anywhere else and fix that too.
|
non_process
|
incorrect day on parade information screen description of issue the date should read saturday july we should double check if we have hard coded sunday anywhere else and fix that too
| 0
|
12,565
| 14,981,784,508
|
IssuesEvent
|
2021-01-28 15:14:30
|
prisma/prisma
|
https://api.github.com/repos/prisma/prisma
|
opened
|
Keep @@ignore and @ignore during Re-Introspection
|
kind/improvement process/candidate team/migrations
|
These directives can be user provided or derived by introspection.
Introspection should check for their presence in the previous schema and reapply them during reintrospection to not accidentally expose entities the user chose to ignore in the Client.
|
1.0
|
Keep @@ignore and @ignore during Re-Introspection - These directives can be user provided or derived by introspection.
Introspection should check for their presence in the previous schema and reapply them during reintrospection to not accidentally expose entities the user chose to ignore in the Client.
|
process
|
keep ignore and ignore during re introspection these directives can be user provided or derived by introspection introspection should check for their presence in the previous schema and reapply them during reintrospection to not accidentally expose entities the user chose to ignore in the client
| 1
|
2,224
| 5,073,461,799
|
IssuesEvent
|
2016-12-27 08:51:01
|
metabase/metabase
|
https://api.github.com/repos/metabase/metabase
|
opened
|
Named aggregations are missing `cols` metadata
|
Priority/P1 Query Processor
|
`["named", ...]` aggregations column metadata doesn't contain the correct `base_type`, `special_type`, or `source` properties. This is preventing these queries from being graphed / formatted correctly.
Example query:
```
{
"source_table": 1,
"aggregation": [["named", ["count"], "foo"]],
"breakout": [["datetime_field", ["field-id", 1], "as", "day"]]
}
```
Expected `data.cols[1]`:
```
{
"description":null,
"table_id":null,
"special_type":"type/Number",
"name":"foo",
"source":"aggregation",
"extra_info":{},
"id":null,
"target":null,
"display_name":"foo",
"base_type":"type/Integer"
}
```
Actual `data.cols[1]`:
```
{
"description":null,
"table_id":null,
"special_type":null,
"name":"foo",
"extra_info":{},
"id":null,
"target":null,
"preview_display":true,
"display_name":"foo",
"base_type":"type/*"
}
```
|
1.0
|
Named aggregations are missing `cols` metadata - `["named", ...]` aggregations column metadata doesn't contain the correct `base_type`, `special_type`, or `source` properties. This is preventing these queries from being graphed / formatted correctly.
Example query:
```
{
"source_table": 1,
"aggregation": [["named", ["count"], "foo"]],
"breakout": [["datetime_field", ["field-id", 1], "as", "day"]]
}
```
Expected `data.cols[1]`:
```
{
"description":null,
"table_id":null,
"special_type":"type/Number",
"name":"foo",
"source":"aggregation",
"extra_info":{},
"id":null,
"target":null,
"display_name":"foo",
"base_type":"type/Integer"
}
```
Actual `data.cols[1]`:
```
{
"description":null,
"table_id":null,
"special_type":null,
"name":"foo",
"extra_info":{},
"id":null,
"target":null,
"preview_display":true,
"display_name":"foo",
"base_type":"type/*"
}
```
|
process
|
named aggregations are missing cols metadata aggregations column metadata doesn t contain the correct base type special type or source properties this is preventing these queries from being graphed formatted correctly example query source table aggregation foo breakout as day expected data cols description null table id null special type type number name foo source aggregation extra info id null target null display name foo base type type integer actual data cols description null table id null special type null name foo extra info id null target null preview display true display name foo base type type
| 1
|
9,974
| 13,018,790,233
|
IssuesEvent
|
2020-07-26 19:08:01
|
ION28/BLUESPAWN
|
https://api.github.com/repos/ION28/BLUESPAWN
|
closed
|
Hunt T1038 and T1073 (DLL Search Order Hijacking/Sideloading)
|
difficulty/hard lang/c++ mode/hunt mode/monitor module/processes priority/high type/enhancement
|
https://attack.mitre.org/techniques/T1038/
and
https://attack.mitre.org/techniques/T1073/ (specifically see some of the APT group examples)
Also investigate: HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Session Manager\KnownDLLs (note PowerSploit-PowerUp excludes these kinds of DLLs: https://github.com/PowerShellMafia/PowerSploit/blob/master/Privesc/PowerUp.ps1#L2369)
|
1.0
|
Hunt T1038 and T1073 (DLL Search Order Hijacking/Sideloading) - https://attack.mitre.org/techniques/T1038/
and
https://attack.mitre.org/techniques/T1073/ (specifically see some of the APT group examples)
Also investigate: HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Session Manager\KnownDLLs (note PowerSploit-PowerUp excludes these kinds of DLLs: https://github.com/PowerShellMafia/PowerSploit/blob/master/Privesc/PowerUp.ps1#L2369)
|
process
|
hunt and dll search order hijacking sideloading and specifically see some of the apt group examples also investigate hkey local machine system currentcontrolset control session manager knowndlls note powersploit powerup excludes these kinds of dlls
| 1
|
790,178
| 27,818,015,373
|
IssuesEvent
|
2023-03-18 22:40:43
|
conan-io/conan
|
https://api.github.com/repos/conan-io/conan
|
closed
|
nested tools.vcvars() ignores arch change
|
type: look into priority: low stage: queue complex: medium
|
The code below doesn't work as one would expect:
```python
with tools.vcvars(self.settings, arch="x86_64"):
self.run("cl")
with tools.vcvars(self.settings, arch="x86"):
self.run("cl")
```
result:
```
Microsoft (R) C/C++ Optimizing Compiler Version 19.15.26726 for x64
Copyright (C) Microsoft Corporation. All rights reserved.
usage: cl [ option... ] filename... [ /link linkoption... ]
Microsoft (R) C/C++ Optimizing Compiler Version 19.15.26726 for x64
Copyright (C) Microsoft Corporation. All rights reserved.
usage: cl [ option... ] filename... [ /link linkoption... ]
```
(note `x64` printed by second `cl`). The reason I'm asking for this is following. Consider a test recipe:
```python
from conans import ConanFile, tools
class TestConan(ConanFile):
name = "test"
version = "0.0.1"
settings = "os_build", "compiler", "arch_build"
def build(self):
if self.settings.compiler == "Visual Studio":
with tools.vcvars(self.settings, arch="x86"):
self.run("cl")
```
Then I build it with `conan create`, 32-bit compiler is called. It's OK. But then I try to use conan package tools:
```python
from conans import tools
from conan.packager import ConanMultiPackager
if __name__ == "__main__":
builder = ConanMultiPackager()
builder.add()
builder.run()
```
I get output:
```
test/0.0.1@user/channel: Calling build()
Microsoft (R) C/C++ Optimizing Compiler Version 19.15.26726 for x64
Copyright (C) Microsoft Corporation. All rights reserved.
usage: cl [ option... ] filename... [ /link linkoption... ]
```
64-bit environment is set. That's because `ConanMultiPackager()` calls `tools.vcvars()` itself and the second call of `tools.vcvars()` from the recipe is just ignored.
|
1.0
|
nested tools.vcvars() ignores arch change - The code below doesn't work as one would expect:
```python
with tools.vcvars(self.settings, arch="x86_64"):
self.run("cl")
with tools.vcvars(self.settings, arch="x86"):
self.run("cl")
```
result:
```
Microsoft (R) C/C++ Optimizing Compiler Version 19.15.26726 for x64
Copyright (C) Microsoft Corporation. All rights reserved.
usage: cl [ option... ] filename... [ /link linkoption... ]
Microsoft (R) C/C++ Optimizing Compiler Version 19.15.26726 for x64
Copyright (C) Microsoft Corporation. All rights reserved.
usage: cl [ option... ] filename... [ /link linkoption... ]
```
(note `x64` printed by second `cl`). The reason I'm asking for this is following. Consider a test recipe:
```python
from conans import ConanFile, tools
class TestConan(ConanFile):
name = "test"
version = "0.0.1"
settings = "os_build", "compiler", "arch_build"
def build(self):
if self.settings.compiler == "Visual Studio":
with tools.vcvars(self.settings, arch="x86"):
self.run("cl")
```
Then I build it with `conan create`, 32-bit compiler is called. It's OK. But then I try to use conan package tools:
```python
from conans import tools
from conan.packager import ConanMultiPackager
if __name__ == "__main__":
builder = ConanMultiPackager()
builder.add()
builder.run()
```
I get output:
```
test/0.0.1@user/channel: Calling build()
Microsoft (R) C/C++ Optimizing Compiler Version 19.15.26726 for x64
Copyright (C) Microsoft Corporation. All rights reserved.
usage: cl [ option... ] filename... [ /link linkoption... ]
```
64-bit environment is set. That's because `ConanMultiPackager()` calls `tools.vcvars()` itself and the second call of `tools.vcvars()` from the recipe is just ignored.
|
non_process
|
nested tools vcvars ignores arch change the code below doesn t work as one would expect python with tools vcvars self settings arch self run cl with tools vcvars self settings arch self run cl result microsoft r c c optimizing compiler version for copyright c microsoft corporation all rights reserved usage cl filename microsoft r c c optimizing compiler version for copyright c microsoft corporation all rights reserved usage cl filename note printed by second cl the reason i m asking for this is following consider a test recipe python from conans import conanfile tools class testconan conanfile name test version settings os build compiler arch build def build self if self settings compiler visual studio with tools vcvars self settings arch self run cl then i build it with conan create bit compiler is called it s ok but then i try to use conan package tools python from conans import tools from conan packager import conanmultipackager if name main builder conanmultipackager builder add builder run i get output test user channel calling build microsoft r c c optimizing compiler version for copyright c microsoft corporation all rights reserved usage cl filename bit environment is set that s because conanmultipackager calls tools vcvars itself and the second call of tools vcvars from the recipe is just ignored
| 0
|
115,668
| 11,884,581,885
|
IssuesEvent
|
2020-03-27 17:55:15
|
brazil-data-cube/wtss-spec
|
https://api.github.com/repos/brazil-data-cube/wtss-spec
|
closed
|
Missing start_date and end_date in the section time_series - README.rst
|
documentation
|
The ``query`` object is not matching with request parameters.
``http://myserver/wtss/time_series?coverage=mod13q1&attributes=red,nir&longitude=-54.0&latitude=-5.0&start_date=2000-02-18&end_date=2000-03-21``
```json
"query": {
"coverage": "mod13q1",
"attributes": [ "red", "nir" ],
"longitude": -54,
"latitude": -5
}
```
should be:
```json
"query": {
"coverage": "mod13q1",
"attributes": [ "red", "nir" ],
"longitude": -54,
"latitude": -5,
"start_date": "2000-02-18",
"end_date": "2000-03-21"
}
```
|
1.0
|
Missing start_date and end_date in the section time_series - README.rst - The ``query`` object is not matching with request parameters.
``http://myserver/wtss/time_series?coverage=mod13q1&attributes=red,nir&longitude=-54.0&latitude=-5.0&start_date=2000-02-18&end_date=2000-03-21``
```json
"query": {
"coverage": "mod13q1",
"attributes": [ "red", "nir" ],
"longitude": -54,
"latitude": -5
}
```
should be:
```json
"query": {
"coverage": "mod13q1",
"attributes": [ "red", "nir" ],
"longitude": -54,
"latitude": -5,
"start_date": "2000-02-18",
"end_date": "2000-03-21"
}
```
|
non_process
|
missing start date and end date in the section time series readme rst the query object is not matching with request parameters json query coverage attributes longitude latitude should be json query coverage attributes longitude latitude start date end date
| 0
|
10,474
| 13,247,808,536
|
IssuesEvent
|
2020-08-19 17:52:54
|
department-of-veterans-affairs/notification-api
|
https://api.github.com/repos/department-of-veterans-affairs/notification-api
|
opened
|
Approach with storing assets for VANotify
|
Process Task Release
|
Filip to add details on the options that were discussed.
*Notes:*
- The existing approach for storing VA assets needs to be resolved to work in Production
- Interim approach for storing assets is in card #145
- Sync with Filip on options that he has already discussed for the interim, and the limitations with the VA
- Document the decision on how/where to store assets for VANotify
|
1.0
|
Approach with storing assets for VANotify - Filip to add details on the options that were discussed.
*Notes:*
- The existing approach for storing VA assets needs to be resolved to work in Production
- Interim approach for storing assets is in card #145
- Sync with Filip on options that he has already discussed for the interim, and the limitations with the VA
- Document the decision on how/where to store assets for VANotify
|
process
|
approach with storing assets for vanotify filip to add details on the options that were discussed notes the existing approach for storing va assets needs to be resolved to work in production interim approach for storing assets is in card sync with filip on options that he has already discussed for the interim and the limitations with the va document the decision on how where to store assets for vanotify
| 1
|
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.