Schema (column, dtype, value range or class count):

| column | dtype | range / classes |
|---|---|---|
| Unnamed: 0 | int64 | 0 to 832k |
| id | float64 | 2.49B to 32.1B |
| type | stringclasses | 1 value |
| created_at | stringlengths | 19 to 19 |
| repo | stringlengths | 7 to 112 |
| repo_url | stringlengths | 36 to 141 |
| action | stringclasses | 3 values |
| title | stringlengths | 1 to 744 |
| labels | stringlengths | 4 to 574 |
| body | stringlengths | 9 to 211k |
| index | stringclasses | 10 values |
| text_combine | stringlengths | 96 to 211k |
| label | stringclasses | 2 values |
| text | stringlengths | 96 to 188k |
| binary_label | int64 | 0 to 1 |
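The `label` and `binary_label` columns appear to encode the same target: in the preview rows below, `label` "process" always co-occurs with `binary_label` 1 and "non_process" with 0. A minimal sketch of that mapping, inferred from the preview rows rather than documented by the dataset:

```python
# Hypothetical helper: the mapping is inferred from the preview rows
# ("process" appears with binary_label 1, "non_process" with 0);
# the dataset itself does not document this convention.
LABEL_TO_BINARY = {"process": 1, "non_process": 0}

def to_binary_label(label: str) -> int:
    """Map the string class in `label` to the `binary_label` int column."""
    return LABEL_TO_BINARY[label]
```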
---
Unnamed: 0: 17,181
id: 22,762,361,086
type: IssuesEvent
created_at: 2022-07-07 22:40:23
repo: Carlosmtp/DomuzSGI
repo_url: https://api.github.com/repos/Carlosmtp/DomuzSGI
action: closed
title: Add columns to the process indicators
labels: Enhancement High Process Management Reports Management
body: - [x] Create a goal column in the process indicators - [x] Send the perodic_reports in the function that queries the processes - [x] Create the process-periodic_records table with the fields fecha:date, eficiencia:float, and the id of the process it refers to
index: 1.0
text_combine: Add columns to the process indicators - - [x] Create a goal column in the process indicators - [x] Send the perodic_reports in the function that queries the processes - [x] Create the process-periodic_records table with the fields fecha:date, eficiencia:float, and the id of the process it refers to
label: process
text: add columns to the process indicators create goal column in the process indicators send the perodic reports in the function that queries the processes create the process periodic records table with fields fecha date eficiencia float and id of the process it refers to
binary_label: 1
---
Unnamed: 0: 1,657
id: 4,287,664,497
type: IssuesEvent
created_at: 2016-07-16 22:32:35
repo: pwittchen/NetworkEvents
repo_url: https://api.github.com/repos/pwittchen/NetworkEvents
action: closed
title: Release v. 2.1.4
labels: release process
body: **Initial release notes**: - changed implementation of the `OnlineChecker` in `OnlineCheckerImpl` class. Now it pings remote host. - added `android.permission.INTERNET` to `AndroidManifest.xml` - added back `NetworkHelper` class with static method `boolean isConnectedToWiFiOrMobileNetwork(context)` - updated sample apps **Things to do**: - [x] update documentation in `README.md` - [x] bump library version to 2.1.4 - [x] upload Archives to Maven Central Repository - [x] close and release artifact on Nexus - [x] update gh-pages - [x] update `CHANGELOG.md` after Maven Sync - [x] bump library version in `README.md` after Maven Sync - [x] write deprecation note in `README.md` - [x] create new GitHub release **Important note**: After this release NetworkEvents library will be **deprecated and no longer maintained** in favor of [ReactiveNetwork](https://github.com/pwittchen/ReactiveNetwork) project.
index: 1.0
text_combine: Release v. 2.1.4 - **Initial release notes**: - changed implementation of the `OnlineChecker` in `OnlineCheckerImpl` class. Now it pings remote host. - added `android.permission.INTERNET` to `AndroidManifest.xml` - added back `NetworkHelper` class with static method `boolean isConnectedToWiFiOrMobileNetwork(context)` - updated sample apps **Things to do**: - [x] update documentation in `README.md` - [x] bump library version to 2.1.4 - [x] upload Archives to Maven Central Repository - [x] close and release artifact on Nexus - [x] update gh-pages - [x] update `CHANGELOG.md` after Maven Sync - [x] bump library version in `README.md` after Maven Sync - [x] write deprecation note in `README.md` - [x] create new GitHub release **Important note**: After this release NetworkEvents library will be **deprecated and no longer maintained** in favor of [ReactiveNetwork](https://github.com/pwittchen/ReactiveNetwork) project.
label: process
text: release v initial release notes changed implementation of the onlinechecker in onlinecheckerimpl class now it pings remote host added android permission internet to androidmanifest xml added back networkhelper class with static method boolean isconnectedtowifiormobilenetwork context updated sample apps things to do update documentation in readme md bump library version to upload archives to maven central repository close and release artifact on nexus update gh pages update changelog md after maven sync bump library version in readme md after maven sync write deprecation note in readme md create new github release important note after this release networkevents library will be deprecated and no longer maintained in favor of project
binary_label: 1
---
Unnamed: 0: 17,123
id: 22,638,917,381
type: IssuesEvent
created_at: 2022-06-30 22:23:21
repo: open-telemetry/opentelemetry-collector-contrib
repo_url: https://api.github.com/repos/open-telemetry/opentelemetry-collector-contrib
action: opened
title: [processor/transform] Add option to define TQL with declarative syntax
labels: priority:p2 comp: transformprocessor
body: **Is your feature request related to a problem? Please describe.** Currently the only way to interact with the Telemetry Query Language is to use the language's SQL-style syntax: ```yaml set(attribute["test"], "pass") where attribute["syntax style"] == "SQL" ``` This works great for the transform processor since it is a new component with no new users and no existing patterns, but for other components this may be a barrier to entry. With the TQL being moved to its own package (#11751) it needs to be as accessible as possible. If existing components want to take advantage of the language, it will be more natural to use a declarative syntax instead of the SQL syntax. **Describe the solution you'd like** The TQL package should be able to interpret the SQL-like syntax that it can today AND the ability to interpret a declarative syntax. At least to start, I do not think it should allow both types of input at the same time; it should be one or the other. All outputs and functionality of the package should remain the same. The only change should be its ability to interpret a new type of input. **Additional context** [A declarative syntax for the telemetry query language has already been discussed in the collector's processing doc](https://github.com/open-telemetry/opentelemetry-collector/blob/main/docs/processing.md#declarative-configuration)
index: 1.0
text_combine: [processor/transform] Add option to define TQL with declarative syntax - **Is your feature request related to a problem? Please describe.** Currently the only way to interact with the Telemetry Query Language is to use the language's SQL-style syntax: ```yaml set(attribute["test"], "pass") where attribute["syntax style"] == "SQL" ``` This works great for the transform processor since it is a new component with no new users and no existing patterns, but for other components this may be a barrier to entry. With the TQL being moved to its own package (#11751) it needs to be as accessible as possible. If existing components want to take advantage of the language, it will be more natural to use a declarative syntax instead of the SQL syntax. **Describe the solution you'd like** The TQL package should be able to interpret the SQL-like syntax that it can today AND the ability to interpret a declarative syntax. At least to start, I do not think it should allow both types of input at the same time; it should be one or the other. All outputs and functionality of the package should remain the same. The only change should be its ability to interpret a new type of input. **Additional context** [A declarative syntax for the telemetry query language has already been discussed in the collector's processing doc](https://github.com/open-telemetry/opentelemetry-collector/blob/main/docs/processing.md#declarative-configuration)
label: process
text: add option to define tql with declarative syntax is your feature request related to a problem please describe currently the only way to interact with the telemetry query language is to use the language s sql style syntax yaml set attribute pass where attribute sql this works great for the transform processor since it is a new component with no new users and no existing patterns but for other components this may be a barrier of entry with the tql being moved to is own package it needs to be as accessible as possible if existing components want to take advantage of the language it will be more natural to use a declarative syntax instead of the sql syntax describe the solution you d like the tql package should be able to interpret the sql like syntax that it can today and the ability to interpret a declarative syntax at least to start i do not think it should allow both types of input at the same time it should be one or the other all outputs and functionality of the package should remain the same the only change should be its ability to interpret a new type of input additional context
binary_label: 1
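For illustration only, a declarative form of the SQL-style statement quoted in the issue above might look like the following. This YAML shape is entirely hypothetical: the issue does not propose a concrete syntax, and none of the field names below come from TQL.

```yaml
# SQL-style TQL statement, as quoted in the issue:
#   set(attribute["test"], "pass") where attribute["syntax style"] == "SQL"
# One hypothetical declarative equivalent (invented field names):
- operation: set
  target: attribute["test"]
  value: "pass"
  condition:
    field: attribute["syntax style"]
    equals: "SQL"
```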
---
Unnamed: 0: 44,519
id: 12,223,146,245
type: IssuesEvent
created_at: 2020-05-02 16:19:17
repo: scipy/scipy
repo_url: https://api.github.com/repos/scipy/scipy
action: closed
title: ValueError 'k exceeds matrix dimensions' for sparse.diagonal() when 0 in sparse.shape
labels: defect good first issue scipy.sparse
body: When a sparse matrix has a 0 in its shape, such as `(0, 0)`, `(0, 1)` or `(1, 0)`, calling `diagonal()` fails. This differs from `np.diag` on the equivalent dense array, which succeeds, returning an empty array. The best behaviour here seems like it'd be open for debate, but it's unfortunate that the default `diagonal()` method doesn't work on every square sparse matrix. It can require adding special cases/conditionals around `.diagonal()` calls, such as https://github.com/stellargraph/stellargraph/pull/1378. #### Reproducing code example: Minimal: ```python import scipy.sparse as sps import numpy as np m = sps.csr_matrix((0, 0)) print(np.diag(m.todense()).shape) # (0,) m.diagonal() # ValueError: k exceeds matrix dimensions ``` "Complete" tests: ```python import scipy.sparse as sps # check all the sparse matrix classes classes = [ sps.bsr_matrix, sps.coo_matrix, sps.csc_matrix, sps.csr_matrix, sps.dia_matrix, sps.dok_matrix, sps.lil_matrix, ] for cls in classes: try: cls((0,0)).diagonal() except Exception as e: msg = e else: msg = None print(f"{cls.__name__}: {msg!r}") # For completeness, non-(0, 0) empty matrices m = sps.csr_matrix((1, 0)) m.diagonal() # ValueError: k exceeds matrix dimensions m = sps.csr_matrix((0, 1)) m.diagonal() # ValueError: k exceeds matrix dimensions ``` #### Error message: Minimal example: ``` (0,) --------------------------------------------------------------------------- ValueError Traceback (most recent call last) <ipython-input-44-106e34b7d64c> in <module> 2 print(np.diag(m.todense()).shape) 3 ----> 4 m.diagonal() ~/.pyenv/versions/3.6.9/lib/python3.6/site-packages/scipy/sparse/compressed.py in diagonal(self, k) 531 rows, cols = self.shape 532 if k <= -rows or k >= cols: --> 533 raise ValueError("k exceeds matrix dimensions") 534 fn = getattr(_sparsetools, self.format + "_diagonal") 535 y = np.empty(min(rows + min(k, 0), cols - max(k, 0)), ValueError: k exceeds matrix dimensions ``` Output of the loop in the "complete" example: ``` bsr_matrix: ValueError('k exceeds matrix dimensions',) coo_matrix: ValueError('k exceeds matrix dimensions',) csc_matrix: ValueError('k exceeds matrix dimensions',) csr_matrix: ValueError('k exceeds matrix dimensions',) dia_matrix: ValueError('k exceeds matrix dimensions',) dok_matrix: ValueError('k exceeds matrix dimensions',) lil_matrix: ValueError('k exceeds matrix dimensions',) ``` #### Scipy/Numpy/Python version information: ``` 1.4.1 1.17.4 sys.version_info(major=3, minor=6, micro=9, releaselevel='final', serial=0) ```
index: 1.0
text_combine: ValueError 'k exceeds matrix dimensions' for sparse.diagonal() when 0 in sparse.shape - When a sparse matrix has a 0 in its shape, such as `(0, 0)`, `(0, 1)` or `(1, 0)`, calling `diagonal()` fails. This differs from `np.diag` on the equivalent dense array, which succeeds, returning an empty array. The best behaviour here seems like it'd be open for debate, but it's unfortunate that the default `diagonal()` method doesn't work on every square sparse matrix. It can require adding special cases/conditionals around `.diagonal()` calls, such as https://github.com/stellargraph/stellargraph/pull/1378. #### Reproducing code example: Minimal: ```python import scipy.sparse as sps import numpy as np m = sps.csr_matrix((0, 0)) print(np.diag(m.todense()).shape) # (0,) m.diagonal() # ValueError: k exceeds matrix dimensions ``` "Complete" tests: ```python import scipy.sparse as sps # check all the sparse matrix classes classes = [ sps.bsr_matrix, sps.coo_matrix, sps.csc_matrix, sps.csr_matrix, sps.dia_matrix, sps.dok_matrix, sps.lil_matrix, ] for cls in classes: try: cls((0,0)).diagonal() except Exception as e: msg = e else: msg = None print(f"{cls.__name__}: {msg!r}") # For completeness, non-(0, 0) empty matrices m = sps.csr_matrix((1, 0)) m.diagonal() # ValueError: k exceeds matrix dimensions m = sps.csr_matrix((0, 1)) m.diagonal() # ValueError: k exceeds matrix dimensions ``` #### Error message: Minimal example: ``` (0,) --------------------------------------------------------------------------- ValueError Traceback (most recent call last) <ipython-input-44-106e34b7d64c> in <module> 2 print(np.diag(m.todense()).shape) 3 ----> 4 m.diagonal() ~/.pyenv/versions/3.6.9/lib/python3.6/site-packages/scipy/sparse/compressed.py in diagonal(self, k) 531 rows, cols = self.shape 532 if k <= -rows or k >= cols: --> 533 raise ValueError("k exceeds matrix dimensions") 534 fn = getattr(_sparsetools, self.format + "_diagonal") 535 y = np.empty(min(rows + min(k, 0), cols - max(k, 0)), ValueError: k exceeds matrix dimensions ``` Output of the loop in the "complete" example: ``` bsr_matrix: ValueError('k exceeds matrix dimensions',) coo_matrix: ValueError('k exceeds matrix dimensions',) csc_matrix: ValueError('k exceeds matrix dimensions',) csr_matrix: ValueError('k exceeds matrix dimensions',) dia_matrix: ValueError('k exceeds matrix dimensions',) dok_matrix: ValueError('k exceeds matrix dimensions',) lil_matrix: ValueError('k exceeds matrix dimensions',) ``` #### Scipy/Numpy/Python version information: ``` 1.4.1 1.17.4 sys.version_info(major=3, minor=6, micro=9, releaselevel='final', serial=0) ```
label: non_process
text: valueerror k exceeds matrix dimensions for sparse diagonal when in sparse shape when a sparse matrix has a in its shape such as or calling diagonal fails this differs to np diag on the equivalent dense array which succeeds returning an empty array the best behaviour here seems like it d be open for debate but it s unfortunate that the default diagonal method doesn t work on every square sparse matrix it can require adding special cases conditionals around diagonal calls such as reproducing code example minimal python import scipy sparse as sps import numpy as np m sps csr matrix print np diag m todense shape m diagonal valueerror k exceeds matrix dimensions complete tests python import scipy sparse as sps check all the sparse matrix classes classes sps bsr matrix sps coo matrix sps csc matrix sps csr matrix sps dia matrix sps dok matrix sps lil matrix for cls in classes try cls diagonal except exception as e msg e else msg none print f cls name msg r for completeness non empty matrices m sps csr matrix m diagonal valueerror k exceeds matrix dimensions m sps csr matrix m diagonal valueerror k exceeds matrix dimensions error message minimal example valueerror traceback most recent call last in print np diag m todense shape m diagonal pyenv versions lib site packages scipy sparse compressed py in diagonal self k rows cols self shape if k cols raise valueerror k exceeds matrix dimensions fn getattr sparsetools self format diagonal y np empty min rows min k cols max k valueerror k exceeds matrix dimensions output of the loop in the complete example bsr matrix valueerror k exceeds matrix dimensions coo matrix valueerror k exceeds matrix dimensions csc matrix valueerror k exceeds matrix dimensions csr matrix valueerror k exceeds matrix dimensions dia matrix valueerror k exceeds matrix dimensions dok matrix valueerror k exceeds matrix dimensions lil matrix valueerror k exceeds matrix dimensions scipy numpy python version information sys version info major minor micro releaselevel final serial
binary_label: 0
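The traceback in the scipy row above shows both the failing bounds check and the diagonal-length formula scipy computes afterwards, `min(rows + min(k, 0), cols - max(k, 0))`. A minimal sketch of that formula with a zero clamp, so that empty shapes would yield a zero-length diagonal instead of an error (illustrative only, not the actual scipy fix):

```python
def diag_length(rows, cols, k=0):
    # Diagonal-length formula quoted in the traceback above, with a
    # max(0, ...) clamp so shapes containing 0 give an empty diagonal
    # rather than tripping the "k exceeds matrix dimensions" check.
    return max(0, min(rows + min(k, 0), cols - max(k, 0)))
```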
---
Unnamed: 0: 276,800
id: 24,021,054,494
type: IssuesEvent
created_at: 2022-09-15 07:41:43
repo: chshersh/tool-sync
repo_url: https://api.github.com/repos/chshersh/tool-sync
action: opened
title: Create 'AssetError' and test 'select_asset' function
labels: good first issue test refactoring
body: This function selects an asset to download based on config: https://github.com/chshersh/tool-sync/blob/b79eeb91cfdc3a122e6693d503117f68ff1fb44e/src/model/tool.rs#L72-L86 Currently, it returns `Result<Asset, String>` but the goal is to return a custom type for error: `Result<Asset, AssetError>`. The idea is to replace `String` with a custom constructor and add tests for these cases. - [ ] Create a new enum with two constructors - [ ] Implement a `display()` function for this enum - [ ] Replace `String` with enum - [ ] Write tests
index: 1.0
text_combine: Create 'AssetError' and test 'select_asset' function - This function selects an asset to download based on config: https://github.com/chshersh/tool-sync/blob/b79eeb91cfdc3a122e6693d503117f68ff1fb44e/src/model/tool.rs#L72-L86 Currently, it returns `Result<Asset, String>` but the goal is to return a custom type for error: `Result<Asset, AssetError>`. The idea is to replace `String` with a custom constructor and add tests for these cases. - [ ] Create a new enum with two constructors - [ ] Implement a `display()` function for this enum - [ ] Replace `String` with enum - [ ] Write tests
label: non_process
text: create asseterror and test select asset function this function selects an asset to download based on config currently it returns result but the goal is to return a custom type for error result asset asseterror the idea is to replace string a custom constructor and add tests for these cases create a new enum with two constructors implement a display function for this enum replace string with enum write tests
binary_label: 0
---
Unnamed: 0: 367,242
id: 25,728,661,215
type: IssuesEvent
created_at: 2022-12-07 18:28:14
repo: Fiserv/Support
repo_url: https://api.github.com/repos/Fiserv/Support
action: closed
title: Documentation section not rendering in Dev Studio
labels: bug documentation Severity - High Priority - High BankingHub
body: # Reporting new issue for BankingHub tenant Documentation section of Banking Hub is not displaying in Dev instance of Developer Studio. https://dev-developerstudio.fiserv.com/product/BankingHub Please refer to the screenshot below. ![image](https://user-images.githubusercontent.com/81968767/206167767-4c2e1d91-289c-4551-96ee-be529d0e3682.png) cc: @rahravin @bobburghardt
index: 1.0
text_combine: Documentation section not rendering in Dev Studio - # Reporting new issue for BankingHub tenant Documentation section of Banking Hub is not displaying in Dev instance of Developer Studio. https://dev-developerstudio.fiserv.com/product/BankingHub Please refer to the screenshot below. ![image](https://user-images.githubusercontent.com/81968767/206167767-4c2e1d91-289c-4551-96ee-be529d0e3682.png) cc: @rahravin @bobburghardt
label: non_process
text: documentation section not rendering in dev studio reporting new issue for bankinghub tenant documentation section of banking hub is not displaying in dev instance of developer studio please refer the screenshot below cc rahravin bobburghardt
binary_label: 0
---
Unnamed: 0: 16,931
id: 5,310,799,515
type: IssuesEvent
created_at: 2017-02-12 22:59:54
repo: oppia/oppia
repo_url: https://api.github.com/repos/oppia/oppia
action: opened
title: ImageClickInput: Allow learners to respond with a set/sequence of points
labels: loc: full-stack owner: @tjiang11 TODO: code TODO: tech (instructions) type: feature (minor)
body: Currently, learners can only respond with one point in an ImageClickInput; it would be nice to allow learners to respond with a set of points ("Pick all the fruits") and a sequence of points ("Pick the items in order of density"). This could be done as an entirely new widget or integrated with the existing ImageClickInput interaction. Originally part of #531
index: 1.0
text_combine: ImageClickInput: Allow learners to respond with a set/sequence of points - Currently, learners can only respond with one point in an ImageClickInput; it would be nice to allow learners to respond with a set of points ("Pick all the fruits") and a sequence of points ("Pick the items in order of density"). This could be done as an entirely new widget or integrated with the existing ImageClickInput interaction. Originally part of #531
label: non_process
text: imageclickinput allow learners to respond with a set sequence of points currently learners can only respond with one point in an imageclickinput it would be nice to allow learners to respond with a set of points pick all the fruits and a sequence of points pick the items in order of density this could be done as an entirely new widget or integrated with the existing imageclickinput interaction originally part of
binary_label: 0
---
Unnamed: 0: 180,713
id: 30,554,442,667
type: IssuesEvent
created_at: 2023-07-20 10:41:25
repo: DeveloperAcademy-POSTECH/MC3-Team11-BeyondThe3F
repo_url: https://api.github.com/repos/DeveloperAcademy-POSTECH/MC3-Team11-BeyondThe3F
action: closed
title: [Design] Improve component reusability
labels: Design
body: ## Issue - Rework existing components into reusable code to reduce duplication ## To-Do - [ ] Remove duplicated components - [ ] Inject color and text dependencies
index: 1.0
text_combine: [Design] Improve component reusability - ## Issue - Rework existing components into reusable code to reduce duplication ## To-Do - [ ] Remove duplicated components - [ ] Inject color and text dependencies
label: non_process
text: component reusability rework issue rework existing components into reusable code to reduce duplication to do remove duplicated components inject color text dependencies
binary_label: 0
---
Unnamed: 0: 353,734
id: 25,133,750,669
type: IssuesEvent
created_at: 2022-11-09 16:56:32
repo: extratone/WindowsIowa
repo_url: https://api.github.com/repos/extratone/WindowsIowa
action: opened
title: Windows Iowa Theme for Blink Shell
labels: documentation
body: ![iPad 13-9](https://user-images.githubusercontent.com/43663476/200892458-f7ebb9ed-1a9f-41c5-82d1-3dce0c6daf7f.png) ```js t.prefs_.set('color-palette-overrides',["#050387", "#ff2320", "#00ff00", "#f5ff6f", "#2934b3", "#1f0022", "#c4f7f9", "#fff3e4", "#000051", "#ff1f1e", "#00ff00", "#f5ff6f", "#6869f3", "#ed0073", "#93fdff", "#ffffff"]); t.prefs_.set('foreground-color', "#ffffff"); t.prefs_.set('background-color', "#00006f"); t.prefs_.set('cursor-color', "#ffffff"); ```
index: 1.0
text_combine: Windows Iowa Theme for Blink Shell - ![iPad 13-9](https://user-images.githubusercontent.com/43663476/200892458-f7ebb9ed-1a9f-41c5-82d1-3dce0c6daf7f.png) ```js t.prefs_.set('color-palette-overrides',["#050387", "#ff2320", "#00ff00", "#f5ff6f", "#2934b3", "#1f0022", "#c4f7f9", "#fff3e4", "#000051", "#ff1f1e", "#00ff00", "#f5ff6f", "#6869f3", "#ed0073", "#93fdff", "#ffffff"]); t.prefs_.set('foreground-color', "#ffffff"); t.prefs_.set('background-color', "#00006f"); t.prefs_.set('cursor-color', "#ffffff"); ```
label: non_process
text: windows iowa theme for blink shell js t prefs set color palette overrides t prefs set foreground color ffffff t prefs set background color t prefs set cursor color ffffff
binary_label: 0
---
Unnamed: 0: 9,929
id: 12,966,752,642
type: IssuesEvent
created_at: 2020-07-21 01:25:15
repo: googleapis/java-spanner
repo_url: https://api.github.com/repos/googleapis/java-spanner
action: reopened
title: Update Dependencies (Renovate Bot)
labels: api: spanner type: process
body: This [master issue](https://renovatebot.com/blog/master-issue) contains a list of Renovate updates and their statuses. ## Closed/Ignored These updates were closed unmerged and will not be recreated unless you click a checkbox below. - [ ] <!-- recreate-branch=renovate/org.apache.commons-commons-lang3-3.x -->[deps: update dependency org.apache.commons:commons-lang3 to v3.11](../pull/356) --- <details><summary>Advanced</summary> - [ ] <!-- manual job -->Check this box to trigger a request for Renovate to run again on this repository </details>
index: 1.0
text_combine: Update Dependencies (Renovate Bot) - This [master issue](https://renovatebot.com/blog/master-issue) contains a list of Renovate updates and their statuses. ## Closed/Ignored These updates were closed unmerged and will not be recreated unless you click a checkbox below. - [ ] <!-- recreate-branch=renovate/org.apache.commons-commons-lang3-3.x -->[deps: update dependency org.apache.commons:commons-lang3 to v3.11](../pull/356) --- <details><summary>Advanced</summary> - [ ] <!-- manual job -->Check this box to trigger a request for Renovate to run again on this repository </details>
label: process
text: update dependencies renovate bot this contains a list of renovate updates and their statuses closed ignored these updates were closed unmerged and will not be recreated unless you click a checkbox below pull advanced check this box to trigger a request for renovate to run again on this repository
binary_label: 1
---
Unnamed: 0: 2,393
id: 2,611,713,688
type: IssuesEvent
created_at: 2015-02-27 08:25:46
repo: dambileh/Abblar
repo_url: https://api.github.com/repos/dambileh/Abblar
action: closed
title: As an app I need organisation category screen
labels: Design Ready
body: This will simply list different categories of messages (channels) that the user can subscribe to. We probably need some sort of a list with a checkbox next to each item. Just to give you an idea, take Telkom as an example: it could have "Fault report", "Promotions", and other such categories
index: 1.0
text_combine: As an app I need organisation category screen - This will simply list different categories of messages (channels) that the user can subscribe to. We probably need some sort of a list with a checkbox next to each item. Just to give you an idea, take Telkom as an example: it could have "Fault report", "Promotions", and other such categories
label: non_process
text: as an app i need organisation category screen this will simply list different categories of messages channels that the user can subscribe to we probably need some sort of a list with a checkbox next to each item just to give you an idea take telkom as an example it could have fault report promotions and etc categories
binary_label: 0
---
Unnamed: 0: 177,873
id: 13,750,962,699
type: IssuesEvent
created_at: 2020-10-06 12:46:27
repo: enthought/traitsui
repo_url: https://api.github.com/repos/enthought/traitsui
action: closed
title: Occasional error from QAbstractItemModel events upon test tear down
labels: component: test suite
body: The following error occurs once from this travis build (https://travis-ci.org/github/enthought/traitsui/jobs/706873557) : ``` ====================================================================== FAIL: test_run (test_all_examples.TestExample) (file_path='/home/travis/build/enthought/traitsui/examples/demo/Advanced/ListStrEditor_demo.py') ---------------------------------------------------------------------- Traceback (most recent call last): File "/home/travis/build/enthought/traitsui/integrationtests/test_all_examples.py", line 293, in test_run run_file(file_path) AttributeError: 'NoneType' object has no attribute 'auto_add' During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/home/travis/build/enthought/traitsui/integrationtests/test_all_examples.py", line 300, in test_run file_path, exc, message AssertionError: Executing /home/travis/build/enthought/traitsui/examples/demo/Advanced/ListStrEditor_demo.py failed with exception 'NoneType' object has no attribute 'auto_add' Traceback (most recent call last): File "/home/travis/build/enthought/traitsui/integrationtests/test_all_examples.py", line 293, in test_run run_file(file_path) File "/home/travis/build/enthought/traitsui/integrationtests/test_all_examples.py", line 277, in run_file exec(content, globals) File "<string>", line 50, in <module> File "/home/travis/build/enthought/traitsui/integrationtests/test_all_examples.py", line 238, in replaced_configure_traits process_cascade_events() File "/home/travis/.edm/envs/traitsui-test-3.6-pyqt/lib/python3.6/contextlib.py", line 88, in __exit__ next(self.gen) File "/home/travis/.edm/envs/traitsui-test-3.6-pyqt/lib/python3.6/site-packages/traitsui/tests/_tools.py", line 67, in store_exceptions_on_all_threads raise exceptions[0] File "/home/travis/.edm/envs/traitsui-test-3.6-pyqt/lib/python3.6/site-packages/traitsui/qt4/list_str_model.py", line 53, in rowCount if editor.factory.auto_add: AttributeError: 'NoneType' object has no attribute 'auto_add' ``` This is similar to the ones seen in https://github.com/enthought/traitsui/issues/854 for `ImageEnumEditor` and the one worked around in https://github.com/enthought/traitsui/pull/897 for `TabularEditor`. The fix for #854 was to replace/unset the instance of `QAbstractItemModel` from the view widget in the editor's dispose. See https://github.com/enthought/traitsui/pull/974
index: 1.0
text_combine: Occasional error from QAbstractItemModel events upon test tear down - The following error occurs once from this travis build (https://travis-ci.org/github/enthought/traitsui/jobs/706873557) : ``` ====================================================================== FAIL: test_run (test_all_examples.TestExample) (file_path='/home/travis/build/enthought/traitsui/examples/demo/Advanced/ListStrEditor_demo.py') ---------------------------------------------------------------------- Traceback (most recent call last): File "/home/travis/build/enthought/traitsui/integrationtests/test_all_examples.py", line 293, in test_run run_file(file_path) AttributeError: 'NoneType' object has no attribute 'auto_add' During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/home/travis/build/enthought/traitsui/integrationtests/test_all_examples.py", line 300, in test_run file_path, exc, message AssertionError: Executing /home/travis/build/enthought/traitsui/examples/demo/Advanced/ListStrEditor_demo.py failed with exception 'NoneType' object has no attribute 'auto_add' Traceback (most recent call last): File "/home/travis/build/enthought/traitsui/integrationtests/test_all_examples.py", line 293, in test_run run_file(file_path) File "/home/travis/build/enthought/traitsui/integrationtests/test_all_examples.py", line 277, in run_file exec(content, globals) File "<string>", line 50, in <module> File "/home/travis/build/enthought/traitsui/integrationtests/test_all_examples.py", line 238, in replaced_configure_traits process_cascade_events() File "/home/travis/.edm/envs/traitsui-test-3.6-pyqt/lib/python3.6/contextlib.py", line 88, in __exit__ next(self.gen) File "/home/travis/.edm/envs/traitsui-test-3.6-pyqt/lib/python3.6/site-packages/traitsui/tests/_tools.py", line 67, in store_exceptions_on_all_threads raise exceptions[0] File "/home/travis/.edm/envs/traitsui-test-3.6-pyqt/lib/python3.6/site-packages/traitsui/qt4/list_str_model.py", line 53, in rowCount if editor.factory.auto_add: AttributeError: 'NoneType' object has no attribute 'auto_add' ``` This is similar to the ones seen in https://github.com/enthought/traitsui/issues/854 for `ImageEnumEditor` and the one worked around in https://github.com/enthought/traitsui/pull/897 for `TabularEditor`. The fix for #854 was to replace/unset the instance of `QAbstractItemModel` from the view widget in the editor's dispose. See https://github.com/enthought/traitsui/pull/974
label: non_process
text: occasional error from qabstractitemmodel events upon test tear down the following error occurs once from this travis build fail test run test all examples testexample file path home travis build enthought traitsui examples demo advanced liststreditor demo py traceback most recent call last file home travis build enthought traitsui integrationtests test all examples py line in test run run file file path attributeerror nonetype object has no attribute auto add during handling of the above exception another exception occurred traceback most recent call last file home travis build enthought traitsui integrationtests test all examples py line in test run file path exc message assertionerror executing home travis build enthought traitsui examples demo advanced liststreditor demo py failed with exception nonetype object has no attribute auto add traceback most recent call last file home travis build enthought traitsui integrationtests test all examples py line in test run run file file path file home travis build enthought traitsui integrationtests test all examples py line in run file exec content globals file line in file home travis build enthought traitsui integrationtests test all examples py line in replaced configure traits process cascade events file home travis edm envs traitsui test pyqt lib contextlib py line in exit next self gen file home travis edm envs traitsui test pyqt lib site packages traitsui tests tools py line in store exceptions on all threads raise exceptions file home travis edm envs traitsui test pyqt lib site packages traitsui list str model py line in rowcount if editor factory auto add attributeerror nonetype object has no attribute auto add this is similar to the ones seen in for imageenumeditor and the one worked around in for tabulareditor the fix for was to replace unset the instance of qabstractitemmodel from the view widget in the editor s dispose see
binary_label: 0
107,181
13,440,491,581
IssuesEvent
2020-09-08 01:05:35
logseq/logseq
https://api.github.com/repos/logseq/logseq
closed
connectedpapers' UI design is very beautiful; logseq could take it as a reference
design
connectedpapers' UI design is very beautiful and logseq could take it as a reference. roam research's UI learns from workflowy, but connectedpapers' interface design, such as the animation effects, page layout, and metaphors, also looks great. https://www.connectedpapers.com/main/f4b5b7a08649811db655bc0fbc4d33b608617bab/MoFeCoNi/graph ![image](https://user-images.githubusercontent.com/69051504/92406469-b264cf80-f16a-11ea-95bc-5372c7c37cc7.png)
1.0
connectedpapers' UI design is very beautiful; logseq could take it as a reference - connectedpapers' UI design is very beautiful and logseq could take it as a reference. roam research's UI learns from workflowy, but connectedpapers' interface design, such as the animation effects, page layout, and metaphors, also looks great. https://www.connectedpapers.com/main/f4b5b7a08649811db655bc0fbc4d33b608617bab/MoFeCoNi/graph ![image](https://user-images.githubusercontent.com/69051504/92406469-b264cf80-f16a-11ea-95bc-5372c7c37cc7.png)
non_process
connectedpapers ui design is very beautiful logseq could take it as a reference connectedpapers ui design is very beautiful and logseq could take it as a reference roam research ui learns from workflowy but connectedpapers interface design such as animation effects page layout and metaphors also looks great
0
14,441
17,498,017,914
IssuesEvent
2021-08-10 05:07:53
dotnet/runtime
https://api.github.com/repos/dotnet/runtime
closed
Test failure System.ServiceProcess.Tests.ServiceControllerTests.Stop_FalseArg_WithDependentServices_ThrowsInvalidOperationException
arch-x86 area-System.ServiceProcess os-windows
Run: [runtime-libraries-coreclr outerloop 20210615.3](https://dev.azure.com/dnceng/public/_build/results?buildId=1187626&view=ms.vss-test-web.build-test-results-tab&runId=35678872&resultId=104107&paneView=debug) Failed test: ``` net6.0-windows-Release-x86-CoreCLR_release-Windows.7.Amd64.Open - System.ServiceProcess.Tests.ServiceControllerTests.Stop_FalseArg_WithDependentServices_ThrowsInvalidOperationException ``` **Error message:** ``` Assert.Throws() Failure Expected: typeof(System.InvalidOperationException) Actual: (No exception was thrown) Stack trace at System.ServiceProcess.Tests.ServiceControllerTests.Stop_FalseArg_WithDependentServices_ThrowsInvalidOperationException() in /_/src/libraries/System.ServiceProcess.ServiceController/tests/ServiceControllerTests.netcoreapp.cs:line 16 ```
1.0
Test failure System.ServiceProcess.Tests.ServiceControllerTests.Stop_FalseArg_WithDependentServices_ThrowsInvalidOperationException - Run: [runtime-libraries-coreclr outerloop 20210615.3](https://dev.azure.com/dnceng/public/_build/results?buildId=1187626&view=ms.vss-test-web.build-test-results-tab&runId=35678872&resultId=104107&paneView=debug) Failed test: ``` net6.0-windows-Release-x86-CoreCLR_release-Windows.7.Amd64.Open - System.ServiceProcess.Tests.ServiceControllerTests.Stop_FalseArg_WithDependentServices_ThrowsInvalidOperationException ``` **Error message:** ``` Assert.Throws() Failure Expected: typeof(System.InvalidOperationException) Actual: (No exception was thrown) Stack trace at System.ServiceProcess.Tests.ServiceControllerTests.Stop_FalseArg_WithDependentServices_ThrowsInvalidOperationException() in /_/src/libraries/System.ServiceProcess.ServiceController/tests/ServiceControllerTests.netcoreapp.cs:line 16 ```
process
test failure system serviceprocess tests servicecontrollertests stop falsearg withdependentservices throwsinvalidoperationexception run failed test windows release coreclr release windows open system serviceprocess tests servicecontrollertests stop falsearg withdependentservices throwsinvalidoperationexception error message assert throws failure expected typeof system invalidoperationexception actual no exception was thrown stack trace at system serviceprocess tests servicecontrollertests stop falsearg withdependentservices throwsinvalidoperationexception in src libraries system serviceprocess servicecontroller tests servicecontrollertests netcoreapp cs line
1
111,835
4,489,039,409
IssuesEvent
2016-08-30 09:33:23
Victoire/victoire
https://api.github.com/repos/Victoire/victoire
opened
It's not possible to edit a widget list with a DQL query inside
bug Priority : Medium
Error 500 when i click on the widget (edition)
1.0
It's not possible to edit a widget list with a DQL query inside - Error 500 when i click on the widget (edition)
non_process
it s not possible to edit a widget list with a dql query inside error when i click on the widget edition
0
66,965
8,059,810,153
IssuesEvent
2018-08-02 23:55:33
kbase/narrative
https://api.github.com/repos/kbase/narrative
closed
Graphic design tweaks round 1
enhancement graphic design minor
These are mostly minor and is not a comprehensive list. But some of the consistency issues should be fixed. Design Throughout - Separator lines are too pale, they should be #CECECE - for all the lists (the data list, the apps & methods list), remove the separator at the very top Header - put spacing or even divider between account icon and narrative buttons - make the buttons round if we are mostly going for round buttons - after saving the narrative, make the title non-italic (italic for unsaved only) Left Panel - Make the tabs color #2196F3 - Make inactive tab text color #BBDEFB - Make active tab indicator #10CE34 Data Panel - The “Add Data” button, change colors to #E53935 on hover Data Slideout - the “add” buttons should be consistent (in example data, the button is persistent, in other tabs, they appear on hover) - icons need to be the same as the icons in the data panel - filter options and searching needs to be consistent through all the tabs - import, need to fix the tabs (should not show the first level of tabs when showing the secondary level of tabs, see the mockups) Working/Notebook Area - bottom buttons are too subtle, maybe try blue #2196F3 (with opacity still 0.5) - the markdown cells shouldn’t have that top panel styling like that, it can be a simple horizontal line. There's sort of mix of styles going on in general.
1.0
Graphic design tweaks round 1 - These are mostly minor and is not a comprehensive list. But some of the consistency issues should be fixed. Design Throughout - Separator lines are too pale, they should be #CECECE - for all the lists (the data list, the apps & methods list), remove the separator at the very top Header - put spacing or even divider between account icon and narrative buttons - make the buttons round if we are mostly going for round buttons - after saving the narrative, make the title non-italic (italic for unsaved only) Left Panel - Make the tabs color #2196F3 - Make inactive tab text color #BBDEFB - Make active tab indicator #10CE34 Data Panel - The “Add Data” button, change colors to #E53935 on hover Data Slideout - the “add” buttons should be consistent (in example data, the button is persistent, in other tabs, they appear on hover) - icons need to be the same as the icons in the data panel - filter options and searching needs to be consistent through all the tabs - import, need to fix the tabs (should not show the first level of tabs when showing the secondary level of tabs, see the mockups) Working/Notebook Area - bottom buttons are too subtle, maybe try blue #2196F3 (with opacity still 0.5) - the markdown cells shouldn’t have that top panel styling like that, it can be a simple horizontal line. There's sort of mix of styles going on in general.
non_process
graphic design tweaks round these are mostly minor and is not a comprehensive list but some of the consistency issues should be fixed design throughout separator lines are too pale they should be cecece for all the lists the data list the apps methods list remove the separator at the very top header put spacing or even divider between account icon and narrative buttons make the buttons round if we are mostly going for round buttons after saving the narrative make the title non italic italic for unsaved only left panel make the tabs color make inactive tab text color bbdefb make active tab indicator data panel the “add data” button change colors to on hover data slideout the “add” buttons should be consistent in example data the button is persistent in other tabs they appear on hover icons need to be the same as the icons in the data panel filter options and searching needs to be consistent through all the tabs import need to fix the tabs should not show the first level of tabs when showing the secondary level of tabs see the mockups working notebook area bottom buttons are too subtle maybe try blue with opacity still the markdown cells shouldn’t have that top panel styling like that it can be a simple horizontal line there s sort of mix of styles going on in general
0
13,800
16,554,004,781
IssuesEvent
2021-05-28 11:56:00
GoogleCloudPlatform/fda-mystudies
https://api.github.com/repos/GoogleCloudPlatform/fda-mystudies
closed
[PM] Need to increase detectable area for Password Criteria icon (?) in change Password screen
Bug P2 Participant manager Process: Fixed Process: Tested QA Process: Tested dev
Steps: 1. Login into Participant manager 2. Navigate to My Account section 3. Click on Change Password link 4. Verify the password criteria icon in change password screen **A/R**:- Detectable area for Password Criteria icon is very less. **E/R**:- Detectable area should be increase for Password Criteria icon for better user experience **Note**:- Need to fix the above issue wherever Password criteria icon appears. ![image](https://user-images.githubusercontent.com/60500517/118836324-5e0d1480-b8e1-11eb-9340-1aabc44a526c.png)
3.0
[PM] Need to increase detectable area for Password Criteria icon (?) in change Password screen - Steps: 1. Login into Participant manager 2. Navigate to My Account section 3. Click on Change Password link 4. Verify the password criteria icon in change password screen **A/R**:- Detectable area for Password Criteria icon is very less. **E/R**:- Detectable area should be increase for Password Criteria icon for better user experience **Note**:- Need to fix the above issue wherever Password criteria icon appears. ![image](https://user-images.githubusercontent.com/60500517/118836324-5e0d1480-b8e1-11eb-9340-1aabc44a526c.png)
process
need to increase detectable area for password criteria icon in change password screen steps login into participant manager navigate to my account section click on change password link verify the password criteria icon in change password screen a r detectable area for password criteria icon is very less e r detectable area should be increase for password criteria icon for better user experience note need to fix the above issue wherever password criteria icon appears
1
6,768
9,905,872,942
IssuesEvent
2019-06-27 12:41:16
spring-projects/spring-hateoas
https://api.github.com/repos/spring-projects/spring-hateoas
closed
Switch to Spring's ServerWebExchangeContextFilter.
in: core process: in progress stack: webflux
This was completed on #983 and merged to `master`. But due to instabilities with Spring Framework upstream and Spring Data downstream, the effort was moved off to a branch to delay until we're all ready to accept it.
1.0
Switch to Spring's ServerWebExchangeContextFilter. - This was completed on #983 and merged to `master`. But due to instabilities with Spring Framework upstream and Spring Data downstream, the effort was moved off to a branch to delay until we're all ready to accept it.
process
switch to spring s serverwebexchangecontextfilter this was completed on and merged to master but due to instabilities with spring framework upstream and spring data downstream the effort was moved off to a branch to delay until we re all ready to accept it
1
5,324
8,139,530,661
IssuesEvent
2018-08-20 18:01:11
jfmcbrayer/brutaldon
https://api.github.com/repos/jfmcbrayer/brutaldon
closed
Multi-account support
enhancement inprocess
This would require a Django login and registration for brutaldon. And cross-account actions would require quite a bit of refactoring. But otherwise not hard.
1.0
Multi-account support - This would require a Django login and registration for brutaldon. And cross-account actions would require quite a bit of refactoring. But otherwise not hard.
process
multi account support this would require a django login and registration for brutaldon and cross account actions would require quite a bit of refactoring but otherwise not hard
1
4,507
7,350,301,481
IssuesEvent
2018-03-08 13:56:04
shobrook/BitVision
https://api.github.com/repos/shobrook/BitVision
closed
Instead of using Bitstamp price data, use data averaged across multiple exchanges
medium priority preprocessing
We want to avoid exchange-specific biases.
1.0
Instead of using Bitstamp price data, use data averaged across multiple exchanges - We want to avoid exchange-specific biases.
process
instead of using bitstamp price data use data averaged across multiple exchanges we want to avoid exchange specific biases
1
5,499
8,364,474,082
IssuesEvent
2018-10-03 23:13:34
w3c/w3process
https://api.github.com/repos/w3c/w3process
closed
Written notification?
AC-review Process2019Candidate
In section 3.6 Resignation from a group, it says "On written notification from an Advisory Committee representative...". Does the notification from SysBot (when an AC rep unhooks someone from the WG/IG) qualify as "written notification"?
1.0
Written notification? - In section 3.6 Resignation from a group, it says "On written notification from an Advisory Committee representative...". Does the notification from SysBot (when an AC rep unhooks someone from the WG/IG) qualify as "written notification"?
process
written notification in section resignation from a group it says on written notification from an advisory committee representative does the notification from sysbot when an ac rep unhooks someone from the wg ig qualify as written notification
1
11,969
14,730,443,029
IssuesEvent
2021-01-06 13:13:24
GoogleCloudPlatform/fda-mystudies
https://api.github.com/repos/GoogleCloudPlatform/fda-mystudies
closed
To change term 'user' to 'admin' in the error and message codes of PM, also in PM emails
Feature request P1 Participant manager Process: Release 2 Process: Tested QA Process: Tested dev
Replace the word user with admin in the error and message codes of PM. Also, we need to replace them in the emails of PM.
3.0
To change term 'user' to 'admin' in the error and message codes of PM, also in PM emails - Replace the word user with admin in the error and message codes of PM. Also, we need to replace them in the emails of PM.
process
to change term user to admin in the error and message codes of pm also in pm emails replace the word user with admin in the error and message codes of pm also we need to replace them in the emails of pm
1
517,612
15,017,110,680
IssuesEvent
2021-02-01 10:25:46
webcompat/web-bugs
https://api.github.com/repos/webcompat/web-bugs
closed
xnxx.com - site is not usable
browser-focus-geckoview engine-gecko ml-needsdiagnosis-false priority-important
<!-- @browser: Firefox Mobile 84.0 --> <!-- @ua_header: Mozilla/5.0 (Android 9; Mobile; rv:84.0) Gecko/84.0 Firefox/84.0 --> <!-- @reported_with: unknown --> <!-- @public_url: https://github.com/webcompat/web-bugs/issues/66445 --> <!-- @extra_labels: browser-focus-geckoview --> **URL**: http://xnxx.com **Browser / Version**: Firefox Mobile 84.0 **Operating System**: Android 9 **Tested Another Browser**: Yes Other **Problem type**: Site is not usable **Description**: Browser unsupported **Steps to Reproduce**: Videos doesn't display and unable to open site <details> <summary>Browser Configuration</summary> <ul> <li>None</li> </ul> </details> _From [webcompat.com](https://webcompat.com/) with ❤️_
1.0
xnxx.com - site is not usable - <!-- @browser: Firefox Mobile 84.0 --> <!-- @ua_header: Mozilla/5.0 (Android 9; Mobile; rv:84.0) Gecko/84.0 Firefox/84.0 --> <!-- @reported_with: unknown --> <!-- @public_url: https://github.com/webcompat/web-bugs/issues/66445 --> <!-- @extra_labels: browser-focus-geckoview --> **URL**: http://xnxx.com **Browser / Version**: Firefox Mobile 84.0 **Operating System**: Android 9 **Tested Another Browser**: Yes Other **Problem type**: Site is not usable **Description**: Browser unsupported **Steps to Reproduce**: Videos doesn't display and unable to open site <details> <summary>Browser Configuration</summary> <ul> <li>None</li> </ul> </details> _From [webcompat.com](https://webcompat.com/) with ❤️_
non_process
xnxx com site is not usable url browser version firefox mobile operating system android tested another browser yes other problem type site is not usable description browser unsupported steps to reproduce videos doesn t display and unable to open site browser configuration none from with ❤️
0
196,157
22,441,002,875
IssuesEvent
2022-06-21 01:16:58
JMD60260/fetchmeaband
https://api.github.com/repos/JMD60260/fetchmeaband
opened
CVE-2022-32209 (Medium) detected in rails-html-sanitizer-1.3.0.gem
security vulnerability
## CVE-2022-32209 - Medium Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>rails-html-sanitizer-1.3.0.gem</b></p></summary> <p>HTML sanitization for Rails applications</p> <p>Library home page: <a href="https://rubygems.org/gems/rails-html-sanitizer-1.3.0.gem">https://rubygems.org/gems/rails-html-sanitizer-1.3.0.gem</a></p> <p> Dependency Hierarchy: - coffee-rails-4.2.2.gem (Root Library) - railties-5.2.3.gem - actionpack-5.2.3.gem - :x: **rails-html-sanitizer-1.3.0.gem** (Vulnerable Library) <p>Found in HEAD commit: <a href="https://github.com/JMD60260/fetchmeaband/commit/430b5f2947d45ada69dc047ea870d3c988006344">430b5f2947d45ada69dc047ea870d3c988006344</a></p> <p>Found in base branch: <b>master</b></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary> <p> A possible XSS vulnerability has been discovered in rails-html-sanitizer before 1.4.3. This allows an attacker to inject content if the application developer has overridden the sanitizer's allowed tags to allow both `select` and `style` elements. Code is only impacted if allowed tags are being overridden. This may be done via application configuration. 
<p>Publish Date: 2022-06-02 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2022-32209>CVE-2022-32209</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>6.1</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: Required - Scope: Changed - Impact Metrics: - Confidentiality Impact: Low - Integrity Impact: Low - Availability Impact: None </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://hackerone.com/reports/1530898">https://hackerone.com/reports/1530898</a></p> <p>Release Date: 2022-06-02</p> <p>Fix Resolution: rails-html-sanitizer - 1.4.3</p> </p> </details> <p></p> *** Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
True
CVE-2022-32209 (Medium) detected in rails-html-sanitizer-1.3.0.gem - ## CVE-2022-32209 - Medium Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>rails-html-sanitizer-1.3.0.gem</b></p></summary> <p>HTML sanitization for Rails applications</p> <p>Library home page: <a href="https://rubygems.org/gems/rails-html-sanitizer-1.3.0.gem">https://rubygems.org/gems/rails-html-sanitizer-1.3.0.gem</a></p> <p> Dependency Hierarchy: - coffee-rails-4.2.2.gem (Root Library) - railties-5.2.3.gem - actionpack-5.2.3.gem - :x: **rails-html-sanitizer-1.3.0.gem** (Vulnerable Library) <p>Found in HEAD commit: <a href="https://github.com/JMD60260/fetchmeaband/commit/430b5f2947d45ada69dc047ea870d3c988006344">430b5f2947d45ada69dc047ea870d3c988006344</a></p> <p>Found in base branch: <b>master</b></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary> <p> A possible XSS vulnerability has been discovered in rails-html-sanitizer before 1.4.3. This allows an attacker to inject content if the application developer has overridden the sanitizer's allowed tags to allow both `select` and `style` elements. Code is only impacted if allowed tags are being overridden. This may be done via application configuration. 
<p>Publish Date: 2022-06-02 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2022-32209>CVE-2022-32209</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>6.1</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: Required - Scope: Changed - Impact Metrics: - Confidentiality Impact: Low - Integrity Impact: Low - Availability Impact: None </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://hackerone.com/reports/1530898">https://hackerone.com/reports/1530898</a></p> <p>Release Date: 2022-06-02</p> <p>Fix Resolution: rails-html-sanitizer - 1.4.3</p> </p> </details> <p></p> *** Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
non_process
cve medium detected in rails html sanitizer gem cve medium severity vulnerability vulnerable library rails html sanitizer gem html sanitization for rails applications library home page a href dependency hierarchy coffee rails gem root library railties gem actionpack gem x rails html sanitizer gem vulnerable library found in head commit a href found in base branch master vulnerability details a possible xss vulnerability has been discovered in rails html sanitizer before this allows an attacker to inject content if the application developer has overridden the sanitizer s allowed tags to allow both select and style elements code is only impacted if allowed tags are being overridden this may be done via application configuration publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction required scope changed impact metrics confidentiality impact low integrity impact low availability impact none for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution rails html sanitizer step up your open source security game with mend
0
53,045
13,260,844,386
IssuesEvent
2020-08-20 18:51:25
icecube-trac/tix4
https://api.github.com/repos/icecube-trac/tix4
closed
boost port needs to fail, if it can't find python "devel" parts (Trac #625)
Migrated from Trac defect tools/ports
<details> <summary><em>Migrated from <a href="https://code.icecube.wisc.edu/projects/icecube/ticket/625">https://code.icecube.wisc.edu/projects/icecube/ticket/625</a>, reported by negaand owned by nega</em></summary> <p> ```json { "status": "closed", "changetime": "2014-10-22T17:41:41", "_ts": "1413999701734819", "description": "", "reporter": "nega", "cc": "", "resolution": "wontfix", "time": "2011-04-28T20:34:33", "component": "tools/ports", "summary": "boost port needs to fail, if it can't find python \"devel\" parts", "priority": "minor", "keywords": "", "milestone": "", "owner": "nega", "type": "defect" } ``` </p> </details>
1.0
boost port needs to fail, if it can't find python "devel" parts (Trac #625) - <details> <summary><em>Migrated from <a href="https://code.icecube.wisc.edu/projects/icecube/ticket/625">https://code.icecube.wisc.edu/projects/icecube/ticket/625</a>, reported by negaand owned by nega</em></summary> <p> ```json { "status": "closed", "changetime": "2014-10-22T17:41:41", "_ts": "1413999701734819", "description": "", "reporter": "nega", "cc": "", "resolution": "wontfix", "time": "2011-04-28T20:34:33", "component": "tools/ports", "summary": "boost port needs to fail, if it can't find python \"devel\" parts", "priority": "minor", "keywords": "", "milestone": "", "owner": "nega", "type": "defect" } ``` </p> </details>
non_process
boost port needs to fail if it can t find python devel parts trac migrated from json status closed changetime ts description reporter nega cc resolution wontfix time component tools ports summary boost port needs to fail if it can t find python devel parts priority minor keywords milestone owner nega type defect
0
35,931
6,510,826,645
IssuesEvent
2017-08-25 06:40:58
magenta-aps/mora
https://api.github.com/repos/magenta-aps/mora
opened
Documentation of the authentication mechanism
documentation
We need to add a section about the authentication mechanism to the README file
1.0
Documentation of the authentication mechanism - We need to add a section about the authentication mechanism to the README file
non_process
documentation of the authentication mechanism we need to add a section about the authentication mechanism to the readme file
0
21,953
30,452,539,832
IssuesEvent
2023-07-16 13:31:00
h4sh5/pypi-auto-scanner
https://api.github.com/repos/h4sh5/pypi-auto-scanner
opened
etm-dgraham 5.1.13 has 4 GuardDog issues
guarddog exec-base64 silent-process-execution
https://pypi.org/project/etm-dgraham https://inspector.pypi.io/project/etm-dgraham ```{ "dependency": "etm-dgraham", "version": "5.1.13", "result": { "issues": 4, "errors": {}, "results": { "exec-base64": [ { "location": "etm-dgraham-5.1.13/bump.py:121", "code": " check_output(f\"git commit -a -m '{tmsg}'\")", "message": "This package contains a call to the `eval` function with a `base64` encoded string as argument.\nThis is a common method used to hide a malicious payload in a module as static analysis will not decode the\nstring.\n" }, { "location": "etm-dgraham-5.1.13/bump.py:123", "code": " check_output(f\"git tag -a -f '{new_version}' -m '{version_info}'\")", "message": "This package contains a call to the `eval` function with a `base64` encoded string as argument.\nThis is a common method used to hide a malicious payload in a module as static analysis will not decode the\nstring.\n" }, { "location": "etm-dgraham-5.1.13/bump.py:128", "code": " check_output(f\"git commit -a --amend -m '{tmsg}'\")", "message": "This package contains a call to the `eval` function with a `base64` encoded string as argument.\nThis is a common method used to hide a malicious payload in a module as static analysis will not decode the\nstring.\n" } ], "silent-process-execution": [ { "location": "etm-dgraham-5.1.13/etm/view.py:1558", "code": " pid = subprocess.Popen(parts, stdin=subprocess.DEVNULL, stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL).pid", "message": "This package is silently executing an external binary, redirecting stdout, stderr and stdin to /dev/null" } ] }, "path": "/tmp/tmpajgh5bnl/etm-dgraham" } }```
1.0
etm-dgraham 5.1.13 has 4 GuardDog issues - https://pypi.org/project/etm-dgraham https://inspector.pypi.io/project/etm-dgraham ```{ "dependency": "etm-dgraham", "version": "5.1.13", "result": { "issues": 4, "errors": {}, "results": { "exec-base64": [ { "location": "etm-dgraham-5.1.13/bump.py:121", "code": " check_output(f\"git commit -a -m '{tmsg}'\")", "message": "This package contains a call to the `eval` function with a `base64` encoded string as argument.\nThis is a common method used to hide a malicious payload in a module as static analysis will not decode the\nstring.\n" }, { "location": "etm-dgraham-5.1.13/bump.py:123", "code": " check_output(f\"git tag -a -f '{new_version}' -m '{version_info}'\")", "message": "This package contains a call to the `eval` function with a `base64` encoded string as argument.\nThis is a common method used to hide a malicious payload in a module as static analysis will not decode the\nstring.\n" }, { "location": "etm-dgraham-5.1.13/bump.py:128", "code": " check_output(f\"git commit -a --amend -m '{tmsg}'\")", "message": "This package contains a call to the `eval` function with a `base64` encoded string as argument.\nThis is a common method used to hide a malicious payload in a module as static analysis will not decode the\nstring.\n" } ], "silent-process-execution": [ { "location": "etm-dgraham-5.1.13/etm/view.py:1558", "code": " pid = subprocess.Popen(parts, stdin=subprocess.DEVNULL, stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL).pid", "message": "This package is silently executing an external binary, redirecting stdout, stderr and stdin to /dev/null" } ] }, "path": "/tmp/tmpajgh5bnl/etm-dgraham" } }```
process
etm dgraham has guarddog issues dependency etm dgraham version result issues errors results exec location etm dgraham bump py code check output f git commit a m tmsg message this package contains a call to the eval function with a encoded string as argument nthis is a common method used to hide a malicious payload in a module as static analysis will not decode the nstring n location etm dgraham bump py code check output f git tag a f new version m version info message this package contains a call to the eval function with a encoded string as argument nthis is a common method used to hide a malicious payload in a module as static analysis will not decode the nstring n location etm dgraham bump py code check output f git commit a amend m tmsg message this package contains a call to the eval function with a encoded string as argument nthis is a common method used to hide a malicious payload in a module as static analysis will not decode the nstring n silent process execution location etm dgraham etm view py code pid subprocess popen parts stdin subprocess devnull stdout subprocess devnull stderr subprocess devnull pid message this package is silently executing an external binary redirecting stdout stderr and stdin to dev null path tmp etm dgraham
1
90,023
8,224,943,689
IssuesEvent
2018-09-06 14:57:14
magento-engcom/msi
https://api.github.com/repos/magento-engcom/msi
closed
[Configuration-Catalog-Products-Configurable product] Configurable Product created with text swatch attribute configuration and Default Source assigned by Admin user
MFTF (Functional Test Coverage)
1. Login to backend as admin 2. Go to Catalog -> Categories 3. Select Default Category on Categories Tree 4. Click button "Add Subcategory" 5. set "Enable Category" to "Yes" 6. set "Include in Menu" to "Yes" 7. Fill in "Category Name" = "Category 1" 8. Click button "Save" 9. Success message "You saved the category." appears 10. Verify that Category 1 appeared in Categories tree as subcategory of Default category 11. Login to backend as admin 12. Go to Stores -> Manage Sources 13. Click button "Add New Source" 14. fill New Source data in General tab - Name, Code etc: * name = Test Source 1 * code = test_source_1 15. Set your New Source "Is Enabled" = Yes 16. Fill all fields with data of your New Source in Contact Info tab 17. Fill in address data of your New Source in Address Data tab: * Country: United States * State/Province: California (CA) * City: Culver City * Street: 6161 West Centinela Avenue * Postсode: 90230 18. Set In "Carriers" tab "Use global Shipping configuration" to Yes 19. Click button "Save & close" 20. Confirmation message "The Source has been saved" appears 21. verify that data in all tabs is correct 22. Login to backend as admin 23. Go to Catalog -> Products 24. Click button "Add Configurable Product" 25. set "Enable Product" to "Yes" 26. fill Name = "Configurable Product 1" 27. set Price = "10" 28. set Quantity = "100" 29. set Weight = "1" 30. select Category = "Category 1" 31. in "Product in Websites" tab select "Main Website" 32. click button 'Create Configuration" 33. On page "Step 1" - click button 'Create New Attribute" 34. Fill in "Default label" - "Text swatch attribute" 35. Select in "Catalog Input Type for Store Owner" = Text Swatch 36. Add two text swatches with labels - "Text 1" and "Text 2" 37. click button "Save attribute" 38. Select "Text swatch attribute" from Attributes grid 39. click button "Next" 40. On page "Step 2" click on "Select all" 41. click button "Next" 42. 
On page "Step 3" select "Apply single quantity to each SKUs" 43. Click button "Assign Sources" 44. In Assign Sources grid select "Default Source" and click "Done" 45. set Quantity = 100 46. click button "Next" 47. On page "Step 4" click button "Generate Products" 48. click button "Save" 49. Success message "You saved the product." appears 50. Verify that created variation products are present on "Configuration" tab 51. Go to "Home page" 52. Open "Category 1" page 53. verify "Configurable Product 1" is present 54. verify that "Name" and "Price" are correct 55. Open "Configurable Product 1" page 56. verify that "Name" and "SKU" are correct 57. verify that text swatch options "Text 1" and "Text 2" are present ---- HipTest Original ID: 1432167
1.0
[Configuration-Catalog-Products-Configurable product] Configurable Product created with text swatch attribute configuration and Default Source assigned by Admin user - 1. Login to backend as admin 2. Go to Catalog -> Categories 3. Select Default Category on Categories Tree 4. Click button "Add Subcategory" 5. set "Enable Category" to "Yes" 6. set "Include in Menu" to "Yes" 7. Fill in "Category Name" = "Category 1" 8. Click button "Save" 9. Success message "You saved the category." appears 10. Verify that Category 1 appeared in Categories tree as subcategory of Default category 11. Login to backend as admin 12. Go to Stores -> Manage Sources 13. Click button "Add New Source" 14. fill New Source data in General tab - Name, Code etc: * name = Test Source 1 * code = test_source_1 15. Set your New Source "Is Enabled" = Yes 16. Fill all fields with data of your New Source in Contact Info tab 17. Fill in address data of your New Source in Address Data tab: * Country: United States * State/Province: California (CA) * City: Culver City * Street: 6161 West Centinela Avenue * Postсode: 90230 18. Set In "Carriers" tab "Use global Shipping configuration" to Yes 19. Click button "Save & close" 20. Confirmation message "The Source has been saved" appears 21. verify that data in all tabs is correct 22. Login to backend as admin 23. Go to Catalog -> Products 24. Click button "Add Configurable Product" 25. set "Enable Product" to "Yes" 26. fill Name = "Configurable Product 1" 27. set Price = "10" 28. set Quantity = "100" 29. set Weight = "1" 30. select Category = "Category 1" 31. in "Product in Websites" tab select "Main Website" 32. click button 'Create Configuration" 33. On page "Step 1" - click button 'Create New Attribute" 34. Fill in "Default label" - "Text swatch attribute" 35. Select in "Catalog Input Type for Store Owner" = Text Swatch 36. Add two text swatches with labels - "Text 1" and "Text 2" 37. click button "Save attribute" 38. 
Select "Text swatch attribute" from Attributes grid 39. click button "Next" 40. On page "Step 2" click on "Select all" 41. click button "Next" 42. On page "Step 3" select "Apply single quantity to each SKUs" 43. Click button "Assign Sources" 44. In Assign Sources grid select "Default Source" and click "Done" 45. set Quantity = 100 46. click button "Next" 47. On page "Step 4" click button "Generate Products" 48. click button "Save" 49. Success message "You saved the product." appears 50. Verify that created variation products are present on "Configuration" tab 51. Go to "Home page" 52. Open "Category 1" page 53. verify "Configurable Product 1" is present 54. verify that "Name" and "Price" are correct 55. Open "Configurable Product 1" page 56. verify that "Name" and "SKU" are correct 57. verify that text swatch options "Text 1" and "Text 2" are present ---- HipTest Original ID: 1432167
non_process
configurable product created with text swatch attribute configuration and default source assigned by admin user login to backend as admin go to catalog categories select default category on categories tree click button add subcategory set enable category to yes set include in menu to yes fill in category name category click button save success message you saved the category appears verify that category appeared in categories tree as subcategory of default category login to backend as admin go to stores manage sources click button add new source fill new source data in general tab name code etc name test source code test source set your new source is enabled yes fill all fields with data of your new source in contact info tab fill in address data of your new source in address data tab country united states state province california ca city culver city street west centinela avenue postсode set in carriers tab use global shipping configuration to yes click button save close confirmation message the source has been saved appears verify that data in all tabs is correct login to backend as admin go to catalog products click button add configurable product set enable product to yes fill name configurable product set price set quantity set weight select category category in product in websites tab select main website click button create configuration on page step click button create new attribute fill in default label text swatch attribute select in catalog input type for store owner text swatch add two text swatches with labels text and text click button save attribute select text swatch attribute from attributes grid click button next on page step click on select all click button next on page step select apply single quantity to each skus click button assign sources in assign sources grid select default source and click done set quantity click button next on page step click button generate products click button save success message you saved the product appears verify 
that created variation products are present on configuration tab go to home page open category page verify configurable product is present verify that name and price are correct open configurable product page verify that name and sku are correct verify that text swatch options text and text are present hiptest original id
0
7,155
3,934,829,554
IssuesEvent
2016-04-26 00:50:00
jens-maus/yam
https://api.github.com/repos/jens-maus/yam
closed
Send mails works only when "outgoing" folder is active
#major @normal bug fixed Mail Filter nightly build
**Originally by _stellan.pistoor@gmx.de_ on 2014-03-19 17:14:27 +0100** ___ If not the "outgoing" folder is active send mails doesn`t work (no progress window appears, mail(s) isn`t sent). When I select the "outgoing" folder it works as expected.
1.0
Send mails works only when "outgoing" folder is active - **Originally by _stellan.pistoor@gmx.de_ on 2014-03-19 17:14:27 +0100** ___ If not the "outgoing" folder is active send mails doesn`t work (no progress window appears, mail(s) isn`t sent). When I select the "outgoing" folder it works as expected.
non_process
send mails works only when outgoing folder is active originally by stellan pistoor gmx de on if not the outgoing folder is active send mails doesn t work no progress window appears mail s isn t sent when i select the outgoing folder it works as expected
0
340,628
24,662,808,822
IssuesEvent
2022-10-18 08:05:14
catalystneuro/neuroconv
https://api.github.com/repos/catalystneuro/neuroconv
closed
Simplify conversions with multiple interfaces
documentation enhancement
Right now we have the conversion gallery which as simple as possible illustrates how to do conversion with a single interface: https://neuroconv.readthedocs.io/en/main/conversion_examples_gallery/conversion_example_gallery.html#extracellular-electrophysiology On the other hand, our workwflow for handling conversions with multiple interfaces relies on the `NWBConverter` object for which we also have another specific tutorial: https://neuroconv.readthedocs.io/en/main/user_guide/nwbconverter.html In my view, the use of the `NWBConverter` is not as simple as it could be. To be specific, two complexities that I see to the end user are the following: 1) They need to define a class themselves and inherit from the converter 2) To define all the properties they need to use a structure of nested dictionaries for the source data, conversion options and metadata. I feel that this creates an unnecessary gap in complexity between A single interface conversion and a multiple interface conversion. Moreover, the workflow for multiple interfaces does not build on the single interface one which seems like a lost purpose. Concretely, the single interface conversions rely on their specific interfaces for conversion whereas the multiple ones rely on the NWBConverter object. **It would be great if we could simplify the step from single to multiple interface and that the multiple interface conversions built on what we have in the conversion gallery already**. I want to propose some solutions that go in this direction: 1. The first solution is to adapt the `NWBConverter` object to take as an input previously initialized interfaces. This allows us to mostly rely on the machinery that we have already in place while at the same time it allows users to pass their already initialized interfaces (that they copy-paste from the conversion gallery) to the NWBConveter object to build more complex pipelines. #164 shows a prototype for this. 2. 
@bendichter has mentioned a couple of times that he takes inspiration from `scikit-learn` and I think we can follow the same course here. We could use something like their [pipeline objects ](https://scikit-learn.org/stable/modules/compose.html) for users to concatenate conversions. Of course, the pipelines would rely on `NWBConverter` behind the scenes and this is just a matter of simplifying the interaction with the object. The advantage of this approach is that it relies on another widely used data model that is out there in a popular library so we can leverage this knowledge from the community. 3. Since we introduced contexts to handle writing most of our conversion can and some do return an nwb-file with the data already attached to them. We could leverage this capability and instruct the users that they can chain their single interface conversions to attach data from other interfaces to the same file. Example of pipeline: ``` build nwb_file with pynwb or by running the first conversion with `nwbfile_path` instead. # Step 1 nwbfile = interface1.run_conversion(nwbfile) # Step 2 nwbfile = interface2.run_conversion(nwbfile) # Step 3 save the file to disk # Step 4 ``` The drawback of this last approach is that we lost all the work that we did with the NWBConverter object. Moreover, we have less control of how the data is combined and therefore is harder to test.
1.0
Simplify conversions with multiple interfaces - Right now we have the conversion gallery which as simple as possible illustrates how to do conversion with a single interface: https://neuroconv.readthedocs.io/en/main/conversion_examples_gallery/conversion_example_gallery.html#extracellular-electrophysiology On the other hand, our workwflow for handling conversions with multiple interfaces relies on the `NWBConverter` object for which we also have another specific tutorial: https://neuroconv.readthedocs.io/en/main/user_guide/nwbconverter.html In my view, the use of the `NWBConverter` is not as simple as it could be. To be specific, two complexities that I see to the end user are the following: 1) They need to define a class themselves and inherit from the converter 2) To define all the properties they need to use a structure of nested dictionaries for the source data, conversion options and metadata. I feel that this creates an unnecessary gap in complexity between A single interface conversion and a multiple interface conversion. Moreover, the workflow for multiple interfaces does not build on the single interface one which seems like a lost purpose. Concretely, the single interface conversions rely on their specific interfaces for conversion whereas the multiple ones rely on the NWBConverter object. **It would be great if we could simplify the step from single to multiple interface and that the multiple interface conversions built on what we have in the conversion gallery already**. I want to propose some solutions that go in this direction: 1. The first solution is to adapt the `NWBConverter` object to take as an input previously initialized interfaces. This allows us to mostly rely on the machinery that we have already in place while at the same time it allows users to pass their already initialized interfaces (that they copy-paste from the conversion gallery) to the NWBConveter object to build more complex pipelines. #164 shows a prototype for this. 2. 
@bendichter has mentioned a couple of times that he takes inspiration from `scikit-learn` and I think we can follow the same course here. We could use something like their [pipeline objects ](https://scikit-learn.org/stable/modules/compose.html) for users to concatenate conversions. Of course, the pipelines would rely on `NWBConverter` behind the scenes and this is just a matter of simplifying the interaction with the object. The advantage of this approach is that it relies on another widely used data model that is out there in a popular library so we can leverage this knowledge from the community. 3. Since we introduced contexts to handle writing most of our conversion can and some do return an nwb-file with the data already attached to them. We could leverage this capability and instruct the users that they can chain their single interface conversions to attach data from other interfaces to the same file. Example of pipeline: ``` build nwb_file with pynwb or by running the first conversion with `nwbfile_path` instead. # Step 1 nwbfile = interface1.run_conversion(nwbfile) # Step 2 nwbfile = interface2.run_conversion(nwbfile) # Step 3 save the file to disk # Step 4 ``` The drawback of this last approach is that we lost all the work that we did with the NWBConverter object. Moreover, we have less control of how the data is combined and therefore is harder to test.
non_process
simplify conversions with multiple interfaces right now we have the conversion gallery which as simple as possible illustrates how to do conversion with a single interface on the other hand our workwflow for handling conversions with multiple interfaces relies on the nwbconverter object for which we also have another specific tutorial in my view the use of the nwbconverter is not as simple as it could be to be specific two complexities that i see to the end user are the following they need to define a class themselves and inherit from the converter to define all the properties they need to use a structure of nested dictionaries for the source data conversion options and metadata i feel that this creates an unnecessary gap in complexity between a single interface conversion and a multiple interface conversion moreover the workflow for multiple interfaces does not build on the single interface one which seems like a lost purpose concretely the single interface conversions rely on their specific interfaces for conversion whereas the multiple ones rely on the nwbconverter object it would be great if we could simplify the step from single to multiple interface and that the multiple interface conversions built on what we have in the conversion gallery already i want to propose some solutions that go in this direction the first solution is to adapt the nwbconverter object to take as an input previously initialized interfaces this allows us to mostly rely on the machinery that we have already in place while at the same time it allows users to pass their already initialized interfaces that they copy paste from the conversion gallery to the nwbconveter object to build more complex pipelines shows a prototype for this bendichter has mentioned a couple of times that he takes inspiration from scikit learn and i think we can follow the same course here we could use something like their for users to concatenate conversions of course the pipelines would rely on nwbconverter behind 
the scenes and this is just a matter of simplifying the interaction with the object the advantage of this approach is that it relies on another widely used data model that is out there in a popular library so we can leverage this knowledge from the community since we introduced contexts to handle writing most of our conversion can and some do return an nwb file with the data already attached to them we could leverage this capability and instruct the users that they can chain their single interface conversions to attach data from other interfaces to the same file example of pipeline build nwb file with pynwb or by running the first conversion with nwbfile path instead step nwbfile run conversion nwbfile step nwbfile run conversion nwbfile step save the file to disk step the drawback of this last approach is that we lost all the work that we did with the nwbconverter object moreover we have less control of how the data is combined and therefore is harder to test
0
26,068
7,781,158,457
IssuesEvent
2018-06-05 22:41:46
hashicorp/packer
https://api.github.com/repos/hashicorp/packer
closed
WinRM timouts on Creating encrypted win 2016 AMI on AWS
bug builder/amazon communicator/winrm question
- Packer version from `packer version` 1.2.3 - Host platform Windows - **Debug log output from `PACKER_LOG=1 packer build template.json`. ``` C:\Users\rahul18564\Desktop\2018\packer>packer build -debug sample13.json Debug mode enabled. Builds will not be parallelized. amazon-ebs output will be in this color. ==> amazon-ebs: Prevalidating AMI Name: Microsoft Windows Server 2016 ==> amazon-ebs: Pausing after run of step 'StepPreValidate'. Press enter to continue. amazon-ebs: Found Image ID: ami-f0df538f ==> amazon-ebs: Pausing after run of step 'StepSourceAMIInfo'. Press enter to continue. ==> amazon-ebs: Using existing SSH private key ==> amazon-ebs: Pausing after run of step 'StepKeyPair'. Press enter to continue. ==> amazon-ebs: Pausing after run of step 'StepSecurityGroup'. Press enter to continue. ==> amazon-ebs: Pausing after run of step 'stepCleanupVolumes'. Press enter to continue. ==> amazon-ebs: Launching a source AWS instance... ==> amazon-ebs: Adding tags to source instance amazon-ebs: Adding tag: "Name": "Packer Builder" amazon-ebs: Instance ID: i-0498408e22b2fa231 ==> amazon-ebs: Waiting for instance (i-0498408e22b2fa231) to become ready... amazon-ebs: Private IP: 10.23.3.61 ==> amazon-ebs: Pausing after run of step 'StepRunSourceInstance'. Press enter to continue. ==> amazon-ebs: Skipping waiting for password since WinRM password set... ==> amazon-ebs: Pausing after run of step 'StepGetPassword'. Press enter to continue. 
==> amazon-ebs: Waiting for WinRM to become available...** ``` Please paste this in a gist https://gist.github.com The Scripts ``` { "builders": [{ "type": "amazon-ebs", "access_key": “”, "secret_key": “”, "region": "us-east-1", "ssh_keypair_name": "packer_testing", "ssh_private_key_file": "packer_testing.pem", "source_ami": "ami-f0df538f", "instance_type": "m3.medium", "ami_name": "Microsoft Windows Server 2016 ", "user_data_file": "./ec2-userdata1.ps1", "communicator": "winrm", "winrm_username": "admin_raxxxxx", "winrm_password": "xxxxxxxxx", "winrm_timeout": "1h", "winrm_use_ssl": true, "winrm_insecure": true, "winrm_use_ntlm": true, "ssh_interface": "private_dns", "vpc_id": "xxxxxxxx", "subnet_id": "xxxxxxxxx", "security_group_id": "xxxxxxxx" }] } ``` Powershell Scripts ``` <powershell> write-output "Running User Data Script" write-host "(host) Running User Data Script" Set-ExecutionPolicy Unrestricted -Scope LocalMachine -Force -ErrorAction Ignore # Don't set this before Set-ExecutionPolicy as it throws an error $ErrorActionPreference = "stop" # Remove HTTP listener Remove-Item -Path WSMan:\Localhost\listener\listener* -Recurse $Cert = New-SelfSignedCertificate -CertstoreLocation Cert:\LocalMachine\My -DnsName "packer" New-Item -Path WSMan:\LocalHost\Listener -Transport HTTPS -Address * -CertificateThumbPrint $Cert.Thumbprint -Force # WinRM write-output "Setting up WinRM" write-host "(host) setting up WinRM" cmd.exe /c winrm quickconfig -q cmd.exe /c winrm set "winrm/config" '@{MaxTimeoutms="1800000"}' cmd.exe /c winrm set "winrm/config/winrs" '@{MaxMemoryPerShellMB="1024"}' cmd.exe /c winrm set "winrm/config/service" '@{AllowUnencrypted="true"}' cmd.exe /c winrm set "winrm/config/client" '@{AllowUnencrypted="true"}' cmd.exe /c winrm set "winrm/config/service/auth" '@{Basic="true"}' cmd.exe /c winrm set "winrm/config/client/auth" '@{Basic="true"}' cmd.exe /c winrm set "winrm/config/service/auth" '@{CredSSP="true"}' cmd.exe /c winrm set 
"winrm/config/listener?Address=*+Transport=HTTPS" "@{Port=`"5986`";Hostname=`"packer`";CertificateThumbprint=`"$($Cert.Thumbprint)`"}" cmd.exe /c netsh advfirewall firewall set rule group="remote administration" new enable=yes cmd.exe /c netsh firewall add portopening TCP 5986 "Port 5986" cmd.exe /c net stop winrm cmd.exe /c sc config winrm start= auto cmd.exe /c net start winrm ```
1.0
WinRM timouts on Creating encrypted win 2016 AMI on AWS - - Packer version from `packer version` 1.2.3 - Host platform Windows - **Debug log output from `PACKER_LOG=1 packer build template.json`. ``` C:\Users\rahul18564\Desktop\2018\packer>packer build -debug sample13.json Debug mode enabled. Builds will not be parallelized. amazon-ebs output will be in this color. ==> amazon-ebs: Prevalidating AMI Name: Microsoft Windows Server 2016 ==> amazon-ebs: Pausing after run of step 'StepPreValidate'. Press enter to continue. amazon-ebs: Found Image ID: ami-f0df538f ==> amazon-ebs: Pausing after run of step 'StepSourceAMIInfo'. Press enter to continue. ==> amazon-ebs: Using existing SSH private key ==> amazon-ebs: Pausing after run of step 'StepKeyPair'. Press enter to continue. ==> amazon-ebs: Pausing after run of step 'StepSecurityGroup'. Press enter to continue. ==> amazon-ebs: Pausing after run of step 'stepCleanupVolumes'. Press enter to continue. ==> amazon-ebs: Launching a source AWS instance... ==> amazon-ebs: Adding tags to source instance amazon-ebs: Adding tag: "Name": "Packer Builder" amazon-ebs: Instance ID: i-0498408e22b2fa231 ==> amazon-ebs: Waiting for instance (i-0498408e22b2fa231) to become ready... amazon-ebs: Private IP: 10.23.3.61 ==> amazon-ebs: Pausing after run of step 'StepRunSourceInstance'. Press enter to continue. ==> amazon-ebs: Skipping waiting for password since WinRM password set... ==> amazon-ebs: Pausing after run of step 'StepGetPassword'. Press enter to continue. 
==> amazon-ebs: Waiting for WinRM to become available...** ``` Please paste this in a gist https://gist.github.com The Scripts ``` { "builders": [{ "type": "amazon-ebs", "access_key": “”, "secret_key": “”, "region": "us-east-1", "ssh_keypair_name": "packer_testing", "ssh_private_key_file": "packer_testing.pem", "source_ami": "ami-f0df538f", "instance_type": "m3.medium", "ami_name": "Microsoft Windows Server 2016 ", "user_data_file": "./ec2-userdata1.ps1", "communicator": "winrm", "winrm_username": "admin_raxxxxx", "winrm_password": "xxxxxxxxx", "winrm_timeout": "1h", "winrm_use_ssl": true, "winrm_insecure": true, "winrm_use_ntlm": true, "ssh_interface": "private_dns", "vpc_id": "xxxxxxxx", "subnet_id": "xxxxxxxxx", "security_group_id": "xxxxxxxx" }] } ``` Powershell Scripts ``` <powershell> write-output "Running User Data Script" write-host "(host) Running User Data Script" Set-ExecutionPolicy Unrestricted -Scope LocalMachine -Force -ErrorAction Ignore # Don't set this before Set-ExecutionPolicy as it throws an error $ErrorActionPreference = "stop" # Remove HTTP listener Remove-Item -Path WSMan:\Localhost\listener\listener* -Recurse $Cert = New-SelfSignedCertificate -CertstoreLocation Cert:\LocalMachine\My -DnsName "packer" New-Item -Path WSMan:\LocalHost\Listener -Transport HTTPS -Address * -CertificateThumbPrint $Cert.Thumbprint -Force # WinRM write-output "Setting up WinRM" write-host "(host) setting up WinRM" cmd.exe /c winrm quickconfig -q cmd.exe /c winrm set "winrm/config" '@{MaxTimeoutms="1800000"}' cmd.exe /c winrm set "winrm/config/winrs" '@{MaxMemoryPerShellMB="1024"}' cmd.exe /c winrm set "winrm/config/service" '@{AllowUnencrypted="true"}' cmd.exe /c winrm set "winrm/config/client" '@{AllowUnencrypted="true"}' cmd.exe /c winrm set "winrm/config/service/auth" '@{Basic="true"}' cmd.exe /c winrm set "winrm/config/client/auth" '@{Basic="true"}' cmd.exe /c winrm set "winrm/config/service/auth" '@{CredSSP="true"}' cmd.exe /c winrm set 
"winrm/config/listener?Address=*+Transport=HTTPS" "@{Port=`"5986`";Hostname=`"packer`";CertificateThumbprint=`"$($Cert.Thumbprint)`"}" cmd.exe /c netsh advfirewall firewall set rule group="remote administration" new enable=yes cmd.exe /c netsh firewall add portopening TCP 5986 "Port 5986" cmd.exe /c net stop winrm cmd.exe /c sc config winrm start= auto cmd.exe /c net start winrm ```
non_process
winrm timouts on creating encrypted win ami on aws packer version from packer version host platform windows debug log output from packer log packer build template json c users desktop packer packer build debug json debug mode enabled builds will not be parallelized amazon ebs output will be in this color amazon ebs prevalidating ami name microsoft windows server amazon ebs pausing after run of step stepprevalidate press enter to continue amazon ebs found image id ami amazon ebs pausing after run of step stepsourceamiinfo press enter to continue amazon ebs using existing ssh private key amazon ebs pausing after run of step stepkeypair press enter to continue amazon ebs pausing after run of step stepsecuritygroup press enter to continue amazon ebs pausing after run of step stepcleanupvolumes press enter to continue amazon ebs launching a source aws instance amazon ebs adding tags to source instance amazon ebs adding tag name packer builder amazon ebs instance id i amazon ebs waiting for instance i to become ready amazon ebs private ip amazon ebs pausing after run of step steprunsourceinstance press enter to continue amazon ebs skipping waiting for password since winrm password set amazon ebs pausing after run of step stepgetpassword press enter to continue amazon ebs waiting for winrm to become available please paste this in a gist the scripts builders type amazon ebs access key “” secret key “” region us east ssh keypair name packer testing ssh private key file packer testing pem source ami ami instance type medium ami name microsoft windows server user data file communicator winrm winrm username admin raxxxxx winrm password xxxxxxxxx winrm timeout winrm use ssl true winrm insecure true winrm use ntlm true ssh interface private dns vpc id xxxxxxxx subnet id xxxxxxxxx security group id xxxxxxxx powershell scripts write output running user data script write host host running user data script set executionpolicy unrestricted scope localmachine force erroraction ignore 
don t set this before set executionpolicy as it throws an error erroractionpreference stop remove http listener remove item path wsman localhost listener listener recurse cert new selfsignedcertificate certstorelocation cert localmachine my dnsname packer new item path wsman localhost listener transport https address certificatethumbprint cert thumbprint force winrm write output setting up winrm write host host setting up winrm cmd exe c winrm quickconfig q cmd exe c winrm set winrm config maxtimeoutms cmd exe c winrm set winrm config winrs maxmemorypershellmb cmd exe c winrm set winrm config service allowunencrypted true cmd exe c winrm set winrm config client allowunencrypted true cmd exe c winrm set winrm config service auth basic true cmd exe c winrm set winrm config client auth basic true cmd exe c winrm set winrm config service auth credssp true cmd exe c winrm set winrm config listener address transport https port hostname packer certificatethumbprint cert thumbprint cmd exe c netsh advfirewall firewall set rule group remote administration new enable yes cmd exe c netsh firewall add portopening tcp port cmd exe c net stop winrm cmd exe c sc config winrm start auto cmd exe c net start winrm
0
15,628
2,611,495,314
IssuesEvent
2015-02-27 05:35:02
chrsmith/hedgewars
https://api.github.com/repos/chrsmith/hedgewars
closed
map size, no X and Y limit but a power of x and y that is the limit
auto-migrated Priority-Medium Type-Enhancement
``` Please provide any additional information below. hey, i wonder if it is possible to have a map where the max size limit aren't just X =4096 and Y =2048 but are calcul bye the power of the two widht and height (x and y) we could set instead a max X and max Y, and max power of XandY explemple : actually its 4096*2048 that give = 8388608 the next map could be any size at condition that X*Y is 8388608 max : 1000*8388 or 2048*4096 etc... i ask it in irc and forum : nemo aswered me that is it possible ( i think) but could be problematic for smartphone user, koda ask me to create this issue to be able track it do you understand me ? ( my english is not that good ) ``` Original issue reported on code.google.com by `sphrixcl...@gmail.com` on 3 Jun 2012 at 10:21
1.0
map size, no X and Y limit but a power of x and y that is the limit - ``` Please provide any additional information below. hey, i wonder if it is possible to have a map where the max size limit aren't just X =4096 and Y =2048 but are calcul bye the power of the two widht and height (x and y) we could set instead a max X and max Y, and max power of XandY explemple : actually its 4096*2048 that give = 8388608 the next map could be any size at condition that X*Y is 8388608 max : 1000*8388 or 2048*4096 etc... i ask it in irc and forum : nemo aswered me that is it possible ( i think) but could be problematic for smartphone user, koda ask me to create this issue to be able track it do you understand me ? ( my english is not that good ) ``` Original issue reported on code.google.com by `sphrixcl...@gmail.com` on 3 Jun 2012 at 10:21
non_process
map size no x and y limit but a power of x and y that is the limit please provide any additional information below hey i wonder if it is possible to have a map where the max size limit aren t just x and y but are calcul bye the power of the two widht and height x and y we could set instead a max x and max y and max power of xandy explemple actually its that give the next map could be any size at condition that x y is max or etc i ask it in irc and forum nemo aswered me that is it possible i think but could be problematic for smartphone user koda ask me to create this issue to be able track it do you understand me my english is not that good original issue reported on code google com by sphrixcl gmail com on jun at
0
245,322
20,762,002,056
IssuesEvent
2022-03-15 16:57:29
Uuvana-Studios/longvinter-windows-client
https://api.github.com/repos/Uuvana-Studios/longvinter-windows-client
closed
inventory in chests issue
bug Not Tested
the inventory chest in my house has been resetting to previous contents and i'm losing everything i'm putting in there as i loot and try to progress.
1.0
inventory in chests issue - the inventory chest in my house has been resetting to previous contents and i'm losing everything i'm putting in there as i loot and try to progress.
non_process
inventory in chests issue the inventory chest in my house has been resetting to previous contents and i m losing everything i m putting in there as i loot and try to progress
0
440,838
12,704,779,732
IssuesEvent
2020-06-23 02:31:21
naphthasl/sakamoto
https://api.github.com/repos/naphthasl/sakamoto
closed
Add search functionality
enhancement medium priority
Navbar should have a simple search bar allowing users to search the contents of all the pages on the site.
1.0
Add search functionality - Navbar should have a simple search bar allowing users to search the contents of all the pages on the site.
non_process
add search functionality navbar should have a simple search bar allowing users to search the contents of all the pages on the site
0
21,464
29,500,395,048
IssuesEvent
2023-06-02 21:03:48
openslide/openslide
https://api.github.com/repos/openslide/openslide
closed
Consider disabling merge commits, requiring squash (and maybe rebase) only
enhancement development-process
https://github.com/openslide/openslide/settings Merge commits add a new commit without content and can clutter history. Squash commits convert an entire pull request into a single commit, keeping minor changes made during review time out of the committed history. It is a personal preference how this is configured, but I have found squash commits to work better than other styles when using code review. Rebase commits are a little better than merge commits, but still can generate extra commit history.
1.0
Consider disabling merge commits, requiring squash (and maybe rebase) only - https://github.com/openslide/openslide/settings Merge commits add a new commit without content and can clutter history. Squash commits convert an entire pull request into a single commit, keeping minor changes made during review time out of the committed history. It is a personal preference how this is configured, but I have found squash commits to work better than other styles when using code review. Rebase commits are a little better than merge commits, but still can generate extra commit history.
process
consider disabling merge commits requiring squash and maybe rebase only merge commits add a new commit without content and can clutter history squash commits convert an entire pull request into a single commit keeping minor changes made during review time out of the committed history it is a personal preference how this is configured but i have found squash commits to work better than other styles when using code review rebase commits are a little better than merge commits but still can generate extra commit history
1
1,902
4,728,385,901
IssuesEvent
2016-10-18 15:49:29
opentrials/opentrials
https://api.github.com/repos/opentrials/opentrials
closed
Accept identifiers with whitespaces like "ISRCTN 47772397"
4. Ready for Review Collectors Processors
For example, the trial http://www.isrctn.com/ISRCTN47772397 has 16 publications on PubMed according to ISRCTN's page. However, we have only [8 in our database](http://explorer.opentrials.net/trials/021c89e1-2d13-4e90-93e5-25439c804802). The publications we don't have are: * http://www.ncbi.nlm.nih.gov/pubmed/15297138 * http://www.ncbi.nlm.nih.gov/pubmed/15622611 * http://www.ncbi.nlm.nih.gov/pubmed/17155990 * http://www.ncbi.nlm.nih.gov/pubmed/19540054 * http://www.ncbi.nlm.nih.gov/pubmed/19826203 * http://www.ncbi.nlm.nih.gov/pubmed/20443499 * http://www.ncbi.nlm.nih.gov/pubmed/21386140 * http://www.ncbi.nlm.nih.gov/pubmed/22119373 * http://www.ncbi.nlm.nih.gov/pubmed/22520267 (Apparently we have a publication that they don't have as well) The reason we didn't find any of those is because they're adding a whitespace in the identifier (`ISRCTN 47772397` instead of `ISRCTN47772397`). Considering this, we should: * Change [our regexps](https://github.com/opentrials/processors/blob/ce16f94/processors/base/helpers/__init__.py#L131) to accept whitespaces inbetween the identifiers; * Add the publications from ISRCTN; @opentrials/research Maybe it's worthy to send a message to the authors? As far as I know, there should be no whitespaces in the identifiers, so this can be considered a bug on their side.
1.0
Accept identifiers with whitespaces like "ISRCTN 47772397" - For example, the trial http://www.isrctn.com/ISRCTN47772397 has 16 publications on PubMed according to ISRCTN's page. However, we have only [8 in our database](http://explorer.opentrials.net/trials/021c89e1-2d13-4e90-93e5-25439c804802). The publications we don't have are: * http://www.ncbi.nlm.nih.gov/pubmed/15297138 * http://www.ncbi.nlm.nih.gov/pubmed/15622611 * http://www.ncbi.nlm.nih.gov/pubmed/17155990 * http://www.ncbi.nlm.nih.gov/pubmed/19540054 * http://www.ncbi.nlm.nih.gov/pubmed/19826203 * http://www.ncbi.nlm.nih.gov/pubmed/20443499 * http://www.ncbi.nlm.nih.gov/pubmed/21386140 * http://www.ncbi.nlm.nih.gov/pubmed/22119373 * http://www.ncbi.nlm.nih.gov/pubmed/22520267 (Apparently we have a publication that they don't have as well) The reason we didn't find any of those is because they're adding a whitespace in the identifier (`ISRCTN 47772397` instead of `ISRCTN47772397`). Considering this, we should: * Change [our regexps](https://github.com/opentrials/processors/blob/ce16f94/processors/base/helpers/__init__.py#L131) to accept whitespaces inbetween the identifiers; * Add the publications from ISRCTN; @opentrials/research Maybe it's worthy to send a message to the authors? As far as I know, there should be no whitespaces in the identifiers, so this can be considered a bug on their side.
process
accept identifiers with whitespaces like isrctn for example the trial has publications on pubmed according to isrctn s page however we have only the publications we don t have are apparently we have a publication that they don t have as well the reason we didn t find any of those is because they re adding a whitespace in the identifier isrctn instead of considering this we should change to accept whitespaces inbetween the identifiers add the publications from isrctn opentrials research maybe it s worthy to send a message to the authors as far as i know there should be no whitespaces in the identifiers so this can be considered a bug on their side
1
315,844
27,110,809,719
IssuesEvent
2023-02-15 15:11:49
containers/podman-desktop
https://api.github.com/repos/containers/podman-desktop
opened
Implement/extend UI test for the running application covering basic e2e scenario
kind/enhancement area/tests
### Is your enhancement related to a problem? Please describe We already have a playwright e2e test available. I would like to look at the framework used, play around and extend covered scenario. ### Describe the solution you'd like Extend existing test or add new one that would test installed application instead of running it from the codebase. ### Describe alternatives you've considered _No response_ ### Additional context _No response_
1.0
Implement/extend UI test for the running application covering basic e2e scenario - ### Is your enhancement related to a problem? Please describe We already have a playwright e2e test available. I would like to look at the framework used, play around and extend covered scenario. ### Describe the solution you'd like Extend existing test or add new one that would test installed application instead of running it from the codebase. ### Describe alternatives you've considered _No response_ ### Additional context _No response_
non_process
implement extend ui test for the running application covering basic scenario is your enhancement related to a problem please describe we already have a playwright test available i would like to look at the framework used play around and extend covered scenario describe the solution you d like extend existing test or add new one that would test installed application instead of running it from the codebase describe alternatives you ve considered no response additional context no response
0
11,240
14,015,261,680
IssuesEvent
2020-10-29 13:06:04
tdwg/dwc
https://api.github.com/repos/tdwg/dwc
closed
Change term - https://dwc.tdwg.org/pw/#dwcpw_p045
Class - Occurrence Process - implement Term - change
## Change term * Submitter: John Wieczorek * Justification (why is this change necessary?): grammatical error * Proponents (who needs this change): Everyone Proposed new attributes of the term: Change 'their' to 'there' in * Definition of the term: Organisms transported and released by humans in a (semi)natural environment with the intention that they should live there without further human aid.
1.0
Change term - https://dwc.tdwg.org/pw/#dwcpw_p045 - ## Change term * Submitter: John Wieczorek * Justification (why is this change necessary?): grammatical error * Proponents (who needs this change): Everyone Proposed new attributes of the term: Change 'their' to 'there' in * Definition of the term: Organisms transported and released by humans in a (semi)natural environment with the intention that they should live there without further human aid.
process
change term change term submitter john wieczorek justification why is this change necessary grammatical error proponents who needs this change everyone proposed new attributes of the term change their to there in definition of the term organisms transported and released by humans in a semi natural environment with the intention that they should live there without further human aid
1
289,014
21,731,147,884
IssuesEvent
2022-05-11 12:08:27
finalstate/WOLviaREST
https://api.github.com/repos/finalstate/WOLviaREST
closed
Very minimalist. Needs at least installation and usage docs
documentation
Needs at least installation and usage docs
1.0
Very minimalist. Needs at least installation and usage docs - Needs at least installation and usage docs
non_process
very minimalist needs at least installation and usage docs needs at least installation and usage docs
0
13,550
16,092,446,723
IssuesEvent
2021-04-26 18:28:55
unicode-org/icu4x
https://api.github.com/repos/unicode-org/icu4x
closed
Add roadmap
C-process T-docs-tests
We have an ICU4X quarter-by-quarter roadmap written up in some docs. We should put it publicly on the repo so that we can point people to it.
1.0
Add roadmap - We have an ICU4X quarter-by-quarter roadmap written up in some docs. We should put it publicly on the repo so that we can point people to it.
process
add roadmap we have an quarter by quarter roadmap written up in some docs we should put it publicly on the repo so that we can point people to it
1
256,425
19,412,488,813
IssuesEvent
2021-12-20 11:08:51
nhsx/AIF_Allocation_Tool
https://api.github.com/repos/nhsx/AIF_Allocation_Tool
opened
Add support email more prominently
documentation
``` Need more help? For queries on CCG Allocations or suggestions for ACRA’s work programme please email: england.revenue-allocations@nhs.net ```
1.0
Add support email more prominently - ``` Need more help? For queries on CCG Allocations or suggestions for ACRA’s work programme please email: england.revenue-allocations@nhs.net ```
non_process
add support email more prominently need more help for queries on ccg allocations or suggestions for acra’s work programme please email england revenue allocations nhs net
0
16,894
22,196,744,209
IssuesEvent
2022-06-07 07:39:37
camunda/zeebe
https://api.github.com/repos/camunda/zeebe
reopened
Extend gateway's create process instance request mapping with start instructions
team/process-automation
The gateway maps the incoming gRPC CreateProcessInstanceRequest into a ProcessInstanceCreation record and sends it to the Broker. We need to extend this mapping with the start instructions. Let's not map the start instructions for CreateProcessInstanceWithResult. Blocked by #9388, #9396
1.0
Extend gateway's create process instance request mapping with start instructions - The gateway maps the incoming gRPC CreateProcessInstanceRequest into a ProcessInstanceCreation record and sends it to the Broker. We need to extend this mapping with the start instructions. Let's not map the start instructions for CreateProcessInstanceWithResult. Blocked by #9388, #9396
process
extend gateway s create process instance request mapping with start instructions the gateway maps the incoming grpc createprocessinstancerequest into a processinstancecreation record and sends it to the broker we need to extend this mapping with the start instructions let s not map the start instructions for createprocessinstancewithresult blocked by
1
64,804
18,909,787,128
IssuesEvent
2021-11-16 13:00:53
scipy/scipy
https://api.github.com/repos/scipy/scipy
closed
BUG: dimension doesn't change after indexing sparse matrices, unlike numpy arrays
defect
Invalid: follows `np.matrix` behavior.
1.0
BUG: dimension doesn't change after indexing sparse matrices, unlike numpy arrays - Invalid: follows `np.matrix` behavior.
non_process
bug dimension doesn t change after indexing sparse matrices unlike numpy arrays invalid follows np matrix behavior
0
169,255
13,131,606,722
IssuesEvent
2020-08-06 17:19:40
elastic/kibana
https://api.github.com/repos/elastic/kibana
closed
[test-failed]: Chrome X-Pack UI Functional Tests1.x-pack/test/functional/apps/monitoring/beats/beat_detail·js - Monitoring app beats detail "before all" hook for "cluster status bar shows correct information"
failed-test test-cloud
**Version: 7.9.0** **Class: Chrome X-Pack UI Functional Tests1.x-pack/test/functional/apps/monitoring/beats/beat_detail·js** **Stack Trace:** Error: retry.try timeout: TimeoutError: Waiting for element to be located By(css selector, [data-test-subj="clusterItemContainerBeats"] [data-test-subj="beatsListing"]) Wait timed out after 10016ms at /var/lib/jenkins/workspace/elastic+estf-cloud-kibana-tests/JOB/xpackGrp4/TASK/saas_run_kibana_tests/node/linux-immutable/ci/cloud/common/build/kibana/node_modules/selenium-webdriver/lib/webdriver.js:842:17 at process._tickCallback (internal/process/next_tick.js:68:7) at onFailure (/var/lib/jenkins/workspace/elastic+estf-cloud-kibana-tests/JOB/xpackGrp4/TASK/saas_run_kibana_tests/node/linux-immutable/ci/cloud/common/build/kibana/test/common/services/retry/retry_for_success.ts:28:9) at retryForSuccess (/var/lib/jenkins/workspace/elastic+estf-cloud-kibana-tests/JOB/xpackGrp4/TASK/saas_run_kibana_tests/node/linux-immutable/ci/cloud/common/build/kibana/test/common/services/retry/retry_for_success.ts:68:13) _Test Report: https://internal-ci.elastic.co/view/Stack%20Tests/job/elastic+estf-cloud-kibana-tests/495/testReport/_
2.0
[test-failed]: Chrome X-Pack UI Functional Tests1.x-pack/test/functional/apps/monitoring/beats/beat_detail·js - Monitoring app beats detail "before all" hook for "cluster status bar shows correct information" - **Version: 7.9.0** **Class: Chrome X-Pack UI Functional Tests1.x-pack/test/functional/apps/monitoring/beats/beat_detail·js** **Stack Trace:** Error: retry.try timeout: TimeoutError: Waiting for element to be located By(css selector, [data-test-subj="clusterItemContainerBeats"] [data-test-subj="beatsListing"]) Wait timed out after 10016ms at /var/lib/jenkins/workspace/elastic+estf-cloud-kibana-tests/JOB/xpackGrp4/TASK/saas_run_kibana_tests/node/linux-immutable/ci/cloud/common/build/kibana/node_modules/selenium-webdriver/lib/webdriver.js:842:17 at process._tickCallback (internal/process/next_tick.js:68:7) at onFailure (/var/lib/jenkins/workspace/elastic+estf-cloud-kibana-tests/JOB/xpackGrp4/TASK/saas_run_kibana_tests/node/linux-immutable/ci/cloud/common/build/kibana/test/common/services/retry/retry_for_success.ts:28:9) at retryForSuccess (/var/lib/jenkins/workspace/elastic+estf-cloud-kibana-tests/JOB/xpackGrp4/TASK/saas_run_kibana_tests/node/linux-immutable/ci/cloud/common/build/kibana/test/common/services/retry/retry_for_success.ts:68:13) _Test Report: https://internal-ci.elastic.co/view/Stack%20Tests/job/elastic+estf-cloud-kibana-tests/495/testReport/_
non_process
chrome x pack ui functional x pack test functional apps monitoring beats beat detail·js monitoring app beats detail before all hook for cluster status bar shows correct information version class chrome x pack ui functional x pack test functional apps monitoring beats beat detail·js stack trace error retry try timeout timeouterror waiting for element to be located by css selector wait timed out after at var lib jenkins workspace elastic estf cloud kibana tests job task saas run kibana tests node linux immutable ci cloud common build kibana node modules selenium webdriver lib webdriver js at process tickcallback internal process next tick js at onfailure var lib jenkins workspace elastic estf cloud kibana tests job task saas run kibana tests node linux immutable ci cloud common build kibana test common services retry retry for success ts at retryforsuccess var lib jenkins workspace elastic estf cloud kibana tests job task saas run kibana tests node linux immutable ci cloud common build kibana test common services retry retry for success ts test report
0
345,734
30,836,969,363
IssuesEvent
2023-08-02 08:05:08
strimzi/strimzi-kafka-operator
https://api.github.com/repos/strimzi/strimzi-kafka-operator
opened
Re-factor applying NetworkPolicy with copying secrets
System tests
We should refactor our current way of creating an auxiliary namespace, application of NP and copying of pull secret (in system tests) and follow the rule DRY. ```java KubeClusterResource.getInstance().createNamespace(CollectorElement.createCollectorElement(simpleClassName), Constants.TEST_SUITE_NAMESPACE); NetworkPolicyResource.applyDefaultNetworkPolicySettings(extensionContext, Collections.singletonList(Constants.TEST_SUITE_NAMESPACE)); StUtils.copyImagePullSecrets(Constants.TEST_SUITE_NAMESPACE); ```
1.0
Re-factor applying NetworkPolicy with copying secrets - We should refactor our current way of creating an auxiliary namespace, application of NP and copying of pull secret (in system tests) and follow the rule DRY. ```java KubeClusterResource.getInstance().createNamespace(CollectorElement.createCollectorElement(simpleClassName), Constants.TEST_SUITE_NAMESPACE); NetworkPolicyResource.applyDefaultNetworkPolicySettings(extensionContext, Collections.singletonList(Constants.TEST_SUITE_NAMESPACE)); StUtils.copyImagePullSecrets(Constants.TEST_SUITE_NAMESPACE); ```
non_process
re factor applying networkpolicy with copying secrets we should refactor our current way of creating an auxiliary namespace application of np and copying of pull secret in system tests and follow the rule dry java kubeclusterresource getinstance createnamespace collectorelement createcollectorelement simpleclassname constants test suite namespace networkpolicyresource applydefaultnetworkpolicysettings extensioncontext collections singletonlist constants test suite namespace stutils copyimagepullsecrets constants test suite namespace
0
12,076
14,739,938,487
IssuesEvent
2021-01-07 08:13:07
kdjstudios/SABillingGitlab
https://api.github.com/repos/kdjstudios/SABillingGitlab
closed
Send Emails - Error Missing Attachment
anc-process anp-1 ant-bug ant-enhancement ant-parent/primary
In GitLab by @kdjstudios on Sep 28, 2018, 10:24 Attachments at the site level for the email invoices are causing errors: - We need to know why these attachments are suddenly disappearing? Users do not have access to delete these files. - We need to update the "send emails" functionality to generate an error to the user to allow them to fix the attachment. - Error text "We are unable to process sending emails due to an issue with your email attachment file. Please edit the attachment file in the edit site settings page."
1.0
Send Emails - Error Missing Attachment - In GitLab by @kdjstudios on Sep 28, 2018, 10:24 Attachments at the site level for the email invoices are causing errors: - We need to know why these attachments are suddenly disappearing? Users do not have access to delete these files. - We need to update the "send emails" functionality to generate an error to the user to allow them to fix the attachment. - Error text "We are unable to process sending emails due to an issue with your email attachment file. Please edit the attachment file in the edit site settings page."
process
send emails error missing attachment in gitlab by kdjstudios on sep attachments at the site level for the email invoices are causing errors we need to know why these attachments are suddenly disappearing users do not have access to delete these files we need to update the send emails functionality to generate an error to the user to allow them to fix the attachment error text we are unable to process sending emails due to an issue with your email attachment file please edit the attachment file in the edit site settings page
1
16,985
12,159,228,869
IssuesEvent
2020-04-26 08:09:36
microsoft/react-native-windows
https://api.github.com/repos/microsoft/react-native-windows
opened
Stop Publishing ReactUwp NuGet Package
Area: Infrastructure
We publish ReactUwp as a NuGet package for Stellar. We're no longer supporting ReactUwp for 0.62. We should stop publishing the package.
1.0
Stop Publishing ReactUwp NuGet Package - We publish ReactUwp as a NuGet package for Stellar. We're no longer supporting ReactUwp for 0.62. We should stop publishing the package.
non_process
stop publishing reactuwp nuget package we publish reactuwp as a nuget package for stellar we re no longer supporting reactuwp for we should stop publishing the package
0
75,071
9,200,024,986
IssuesEvent
2019-03-07 16:09:35
coreos/fedora-coreos-tracker
https://api.github.com/repos/coreos/fedora-coreos-tracker
closed
Host Installer for Fedora CoreOS (bare metal)
design priority/medium
Being that we are planning to boot from a common "image" on first boot in Fedora CoreOS we'd like an installer that can get that image onto a disk for a bare metal environment (cloud/VM environments should be using related image artifacts or pre-uploaded cloud artifacts). Anaconda can do this (i.e. write a pre-baked image to disk), but might be overkill for what we actually need considering we don't really want any customizations done by the installer and all of them performed by ignition on first boot. Container Linux in the past has used a small script (basically wrapping dd) as their installer. Let's come up with a strategy for a host installer for Fedora CoreOS and implement it.
1.0
Host Installer for Fedora CoreOS (bare metal) - Being that we are planning to boot from a common "image" on first boot in Fedora CoreOS we'd like an installer that can get that image onto a disk for a bare metal environment (cloud/VM environments should be using related image artifacts or pre-uploaded cloud artifacts). Anaconda can do this (i.e. write a pre-baked image to disk), but might be overkill for what we actually need considering we don't really want any customizations done by the installer and all of them performed by ignition on first boot. Container Linux in the past has used a small script (basically wrapping dd) as their installer. Let's come up with a strategy for a host installer for Fedora CoreOS and implement it.
non_process
host installer for fedora coreos bare metal being that we are planning to boot from a common image on first boot in fedora coreos we d like an installer that can get that image onto a disk for a bare metal environment cloud vm environments should be using related image artifacts or pre uploaded cloud artifacts anaconda can do this i e write a pre baked image to disk but might be overkill for what we actually need considering we don t really want any customizations done by the installer and all of them performed by ignition on first boot container linux in the past has used a small script basically wrapping dd as their installer let s come up with a strategy for a host installer for fedora coreos and implement it
0
7,392
10,519,283,598
IssuesEvent
2019-09-29 16:53:18
fluent/fluent-bit
https://api.github.com/repos/fluent/fluent-bit
closed
Stream processor Missing sub-key in Aggregate function
enhancement work-in-process
## Bug Report **Describe the bug** This gives an error: "SELECT key['nestedkey'], SUM(key['nestedsumkey']) FROM STREAM:instream GROUP BY key['nestedkey']" **To Reproduce** - Rubular link if applicable: - Example log message if applicable: ``` json structure: { "key": { "nestedkey": "SomeKey", "nestedsumkey": 23 } } ``` **Expected behavior** It should be possible to aggregate using nested keys. In this case it generates an error. It would also be good if it could follow SQL standard so you could group on a subpart of columns but in the SELECT have all columns and aggregate function on the specific column, like: "SELECT notnestedkey, key['nestedkey1'], key['nestedkey2'], key['nestedkey3'], sum( key['nestedsumkey']) FROM STREAM:instream group by notnestedkey, key['nestedkey1'];" **Screenshots** **Your Environment** <!--- Include as many relevant details about the environment you experienced the bug in --> * Version used: 1.1.3 * Configuration: TCP input and stream processor and STDOUT. * Environment name and version (e.g. Kubernetes? What version?): * Server type and version: Linux * Operating System and version: Debian 9.9 * Filters and plugins: TCP input plugin, Stream processor and Stdout Plugin **Additional context** This issue could help achieve aggregations of duplicate rows and minimize network trafic. This should be helpful for others to have as well.
1.0
Stream processor Missing sub-key in Aggregate function - ## Bug Report **Describe the bug** This gives an error: "SELECT key['nestedkey'], SUM(key['nestedsumkey']) FROM STREAM:instream GROUP BY key['nestedkey']" **To Reproduce** - Rubular link if applicable: - Example log message if applicable: ``` json structure: { "key": { "nestedkey": "SomeKey", "nestedsumkey": 23 } } ``` **Expected behavior** It should be possible to aggregate using nested keys. In this case it generates an error. It would also be good if it could follow SQL standard so you could group on a subpart of columns but in the SELECT have all columns and aggregate function on the specific column, like: "SELECT notnestedkey, key['nestedkey1'], key['nestedkey2'], key['nestedkey3'], sum( key['nestedsumkey']) FROM STREAM:instream group by notnestedkey, key['nestedkey1'];" **Screenshots** **Your Environment** <!--- Include as many relevant details about the environment you experienced the bug in --> * Version used: 1.1.3 * Configuration: TCP input and stream processor and STDOUT. * Environment name and version (e.g. Kubernetes? What version?): * Server type and version: Linux * Operating System and version: Debian 9.9 * Filters and plugins: TCP input plugin, Stream processor and Stdout Plugin **Additional context** This issue could help achieve aggregations of duplicate rows and minimize network trafic. This should be helpful for others to have as well.
process
stream processor missing sub key in aggregate function bug report describe the bug this gives an error select key sum key from stream instream group by key to reproduce rubular link if applicable example log message if applicable json structure key nestedkey somekey nestedsumkey expected behavior it should be possible to aggregate using nested keys in this case it generates an error it would also be good if it could follow sql standard so you could group on a subpart of columns but in the select have all columns and aggregate function on the specific column like select notnestedkey key key key sum key from stream instream group by notnestedkey key screenshots your environment version used configuration tcp input and stream processor and stdout environment name and version e g kubernetes what version server type and version linux operating system and version debian filters and plugins tcp input plugin stream processor and stdout plugin additional context this issue could help achieve aggregations of duplicate rows and minimize network trafic this should be helpful for others to have as well
1
15,575
19,703,507,506
IssuesEvent
2022-01-12 19:08:16
googleapis/nodejs-dialogflow-cx
https://api.github.com/repos/googleapis/nodejs-dialogflow-cx
opened
Your .repo-metadata.json file has a problem 🤒
type: process repo-metadata: lint
You have a problem with your .repo-metadata.json file: Result of scan 📈: * api_shortname 'dialogflow-cx' invalid in .repo-metadata.json ☝️ Once you correct these problems, you can close this issue. Reach out to **go/github-automation** if you have any questions.
1.0
Your .repo-metadata.json file has a problem 🤒 - You have a problem with your .repo-metadata.json file: Result of scan 📈: * api_shortname 'dialogflow-cx' invalid in .repo-metadata.json ☝️ Once you correct these problems, you can close this issue. Reach out to **go/github-automation** if you have any questions.
process
your repo metadata json file has a problem 🤒 you have a problem with your repo metadata json file result of scan 📈 api shortname dialogflow cx invalid in repo metadata json ☝️ once you correct these problems you can close this issue reach out to go github automation if you have any questions
1
437
2,870,165,405
IssuesEvent
2015-06-06 22:17:39
neuropoly/spinalcordtoolbox
https://api.github.com/repos/neuropoly/spinalcordtoolbox
closed
CSA results are different between v1.1.2 and v2.0.3
priority: high sct_process_segmentation
https://sourceforge.net/p/spinalcordtoolbox/discussion/help/thread/ec2e6371/ data: sct_example_data/t2 syntax v1.1.2: ~~~~ sct_process_segmentation -i t2_seg.nii.gz -p compute_csa ~~~~ syntax v2.0.3: ~~~~ sct_process_segmentation -i t2_seg.nii.gz -p csa ~~~~ result v1.1.2 ~~~~ z=185: 80.9039201974 mm^2 z=186: 79.9035690101 mm^2 z=187: 81.8995062934 mm^2 z=188: 81.8979851359 mm^2 z=189: 82.8952288064 mm^2 z=190: 79.8975847077 mm^2 z=191: 79.8961806615 mm^2 z=192: 76.8987484115 mm^2 z=193: 78.8948734624 mm^2 z=194: 77.895254389 mm^2 ~~~~ result v2.0.3 ~~~~ z=185: 64.5376720147 mm^2 z=186: 64.7126068463 mm^2 z=187: 64.899578898 mm^2 z=188: 65.0991905586 mm^2 z=189: 65.3120442167 mm^2 z=190: 65.538742261 mm^2 z=191: 65.7798870803 mm^2 z=192: 66.0360810632 mm^2 z=193: 66.3079265983 mm^2 z=194: 66.5960260744 mm^2 ~~~~
1.0
CSA results are different between v1.1.2 and v2.0.3 - https://sourceforge.net/p/spinalcordtoolbox/discussion/help/thread/ec2e6371/ data: sct_example_data/t2 syntax v1.1.2: ~~~~ sct_process_segmentation -i t2_seg.nii.gz -p compute_csa ~~~~ syntax v2.0.3: ~~~~ sct_process_segmentation -i t2_seg.nii.gz -p csa ~~~~ result v1.1.2 ~~~~ z=185: 80.9039201974 mm^2 z=186: 79.9035690101 mm^2 z=187: 81.8995062934 mm^2 z=188: 81.8979851359 mm^2 z=189: 82.8952288064 mm^2 z=190: 79.8975847077 mm^2 z=191: 79.8961806615 mm^2 z=192: 76.8987484115 mm^2 z=193: 78.8948734624 mm^2 z=194: 77.895254389 mm^2 ~~~~ result v2.0.3 ~~~~ z=185: 64.5376720147 mm^2 z=186: 64.7126068463 mm^2 z=187: 64.899578898 mm^2 z=188: 65.0991905586 mm^2 z=189: 65.3120442167 mm^2 z=190: 65.538742261 mm^2 z=191: 65.7798870803 mm^2 z=192: 66.0360810632 mm^2 z=193: 66.3079265983 mm^2 z=194: 66.5960260744 mm^2 ~~~~
process
csa results are different between and data sct example data syntax sct process segmentation i seg nii gz p compute csa syntax sct process segmentation i seg nii gz p csa result z mm z mm z mm z mm z mm z mm z mm z mm z mm z mm result z mm z mm z mm z mm z mm z mm z mm z mm z mm z mm
1
169,808
6,417,870,592
IssuesEvent
2017-08-08 17:43:00
hackupc/backend
https://api.github.com/repos/hackupc/backend
closed
Add review applications view
enhancement low_priority
Admin only view similar to vote view that allows to manually accept knowing the current votes that the application has. - Ordering should be for votes and submission_dates. - Actions: Accept, Skip, Reject - Note that as skip is a possible action, there should be a parameter that saves how many applications have been skipped (or just remove skip). Reason: Actual admin dashboard doesn't show a lot of information to admins, so this would be an option to review applications one by one (after they have been voted)
1.0
Add review applications view - Admin only view similar to vote view that allows to manually accept knowing the current votes that the application has. - Ordering should be for votes and submission_dates. - Actions: Accept, Skip, Reject - Note that as skip is a possible action, there should be a parameter that saves how many applications have been skipped (or just remove skip). Reason: Actual admin dashboard doesn't show a lot of information to admins, so this would be an option to review applications one by one (after they have been voted)
non_process
add review applications view admin only view similar to vote view that allows to manually accept knowing the current votes that the application has ordering should be for votes and submission dates actions accept skip reject note that as skip is a possible action there should be a parameter that saves how many applications have been skipped or just remove skip reason actual admin dashboard doesn t show a lot of information to admins so this would be an option to review applications one by one after they have been voted
0
747,836
26,100,651,854
IssuesEvent
2022-12-27 06:26:46
bounswe/bounswe2022group4
https://api.github.com/repos/bounswe/bounswe2022group4
closed
Mobile: Doctor comments should be highlighted.
Category - To Do Category - Enhancement Priority - High Status: In Progress Language - Kotlin Team - Mobile Mobile
### Description: Doctors should have outshined comments. ### What to do: - [x] Doctor specific comment fragment ### Deadline 27.11.2022, 12.00(GMT+3)
1.0
Mobile: Doctor comments should be highlighted. - ### Description: Doctors should have outshined comments. ### What to do: - [x] Doctor specific comment fragment ### Deadline 27.11.2022, 12.00(GMT+3)
non_process
mobile doctor comments should be highlighted description doctors should have outshined comments what to do doctor specific comment fragment deadline gmt
0
740,964
25,775,798,193
IssuesEvent
2022-12-09 11:52:10
zephyrproject-rtos/zephyr
https://api.github.com/repos/zephyrproject-rtos/zephyr
closed
RPI Pico usb hangs up in interrupt handler for composite devices
bug priority: low area: Drivers area: USB platform: Raspberry Pi Pico
**Describe the bug** I am using the rp2040 with the cdc_acm_composite sample. When sending bytes the first few bytes to the first acm port they will be echoed to the second acm port. After a few more bytes the port will hang up. **To Reproduce** Steps to reproduce the behavior: 1. Build the cdc_acm_composite sample for the rp2040 2. Connect to ACM0 and ACM1 (or whatever they are called on your system) 3. Send bytes to ACM0 4. Observe the hangup **Expected behavior** All bytes should be echoed to ACM1 **Impact** Composite devices are not usable **Logs and console output** It seems like there is some unhandled interrupt in the udc_rpi_isr which never gets cleared, resulting in endless looping of the isr. gdb log: ``` Thread 1 received signal SIGINT, Interrupt. 0x100038bc in udc_rpi_isr (arg=<optimized out>) at /data/git/zephyr-workspace/zephyr/drivers/usb/device/usb_dc_rpi_pico.c:194 194 if (status & USB_INTS_DEV_CONN_DIS_BITS) { (gdb) bt #0 0x100038bc in udc_rpi_isr (arg=<optimized out>) at /data/git/zephyr-workspace/zephyr/drivers/usb/device/usb_dc_rpi_pico.c:194 #1 0x100034d4 in _isr_wrapper () at /data/git/zephyr-workspace/zephyr/arch/arm/core/aarch32/isr_wrapper.S:259 #2 <signal handler called> #3 0x100033da in arch_irq_unlock (key=0) at /data/git/zephyr-workspace/zephyr/include/zephyr/arch/arm/aarch32/asm_inline_gcc.h:91 #4 arch_swap (key=key@entry=0) at /data/git/zephyr-workspace/zephyr/arch/arm/core/aarch32/swap.c:44 #5 0x10005224 in z_swap_irqlock (key=0) at /data/git/zephyr-workspace/zephyr/kernel/include/kswap.h:185 #6 z_swap (key=..., lock=0x20001684) at /data/git/zephyr-workspace/zephyr/kernel/include/kswap.h:196 #7 z_pend_curr (lock=lock@entry=0x20001684, key=..., wait_q=wait_q@entry=0x20000a28 <z_usb_work_q+120>, timeout=timeout@entry=...) 
at /data/git/zephyr-workspace/zephyr/kernel/sched.c:842 #8 0x100055e4 in z_sched_wait (lock=lock@entry=0x20001684, key=..., key@entry=..., wait_q=wait_q@entry=0x20000a28 <z_usb_work_q+120>, timeout=timeout@entry=..., data=data@entry=0x0) at /data/git/zephyr-workspace/zephyr/kernel/sched.c:1893 #9 0x10004d5c in work_queue_main (workq_ptr=0x200009b0 <z_usb_work_q>, p2=<optimized out>, p3=<optimized out>) at /data/git/zephyr-workspace/zephyr/kernel/work.c:660 #10 0x10005c0e in z_thread_entry (entry=0x10004d21 <work_queue_main>, p1=<optimized out>, p2=<optimized out>, p3=<optimized out>) at /data/git/zephyr-workspace/zephyr/lib/os/thread_entry.c:36 #11 0x10005c0e in z_thread_entry (entry=0x0, p1=<optimized out>, p2=<optimized out>, p3=<optimized out>) at /data/git/zephyr-workspace/zephyr/lib/os/thread_entry.c:36 #12 0x10026040 in ?? () Backtrace stopped: previous frame identical to this frame (corrupt stack?) (gdb) b udc_rpi_isr Breakpoint 1 at 0x10003858: file /data/git/zephyr-workspace/zephyr/drivers/usb/device/usb_dc_rpi_pico.c, line 179. Note: automatically using hardware breakpoints for read-only addresses. (gdb) c Continuing. 
target halted due to debug-request, current mode: Thread xPSR: 0x01000000 pc: 0x00000138 msp: 0x20041f00 Thread 1 hit Breakpoint 1, udc_rpi_isr (arg=0x0) at /data/git/zephyr-workspace/zephyr/drivers/usb/device/usb_dc_rpi_pico.c:179 179 uint32_t status = usb_hw->ints; (gdb) n 183 if (status & USB_INTS_SETUP_REQ_BITS) { (gdb) n 189 if (status & USB_INTS_BUFF_STATUS_BITS) { (gdb) n 191 udc_rpi_handle_buff_status(); (gdb) n 194 if (status & USB_INTS_DEV_CONN_DIS_BITS) { (gdb) n 209 if (status & USB_INTS_BUS_RESET_BITS) { (gdb) n 230 if (status & USB_INTS_ERROR_DATA_SEQ_BITS) { (gdb) n 236 if (status ^ handled) { (gdb) s 237 LOG_ERR("unhandled IRQ: 0x%x", (uint)(status ^ handled)); ``` ``` *** Booting Zephyr OS build 1b26acca1e03 *** [00:00:00.005,000] <inf> cdc_acm_composite: Wait for DTR [00:00:04.317,000] <inf> usb_cdc_acm: Device disconnected [00:00:04.317,000] <inf> usb_cdc_acm: Device disconnected [00:00:04.393,000] <inf> usb_cdc_acm: Device disconnected [00:00:04.394,000] <inf> usb_cdc_acm: Device disconnected [00:00:04.478,000] <inf> usb_cdc_acm: Device configured [00:00:04.478,000] <inf> usb_cdc_acm: Device configured [00:00:28.834,000] <inf> cdc_acm_composite: DTR set, start test ``` **Environment (please complete the following information):** - OS: Linux - Toolchain: Zephyr-SDK 0.15.0, Zephyr 3.2 ** Note ** Sometimes this bug also occurs when using a single acm device
1.0
RPI Pico usb hangs up in interrupt handler for composite devices - **Describe the bug** I am using the rp2040 with the cdc_acm_composite sample. When sending bytes the first few bytes to the first acm port they will be echoed to the second acm port. After a few more bytes the port will hang up. **To Reproduce** Steps to reproduce the behavior: 1. Build the cdc_acm_composite sample for the rp2040 2. Connect to ACM0 and ACM1 (or whatever they are called on your system) 3. Send bytes to ACM0 4. Observe the hangup **Expected behavior** All bytes should be echoed to ACM1 **Impact** Composite devices are not usable **Logs and console output** It seems like there is some unhandled interrupt in the udc_rpi_isr which never gets cleared, resulting in endless looping of the isr. gdb log: ``` Thread 1 received signal SIGINT, Interrupt. 0x100038bc in udc_rpi_isr (arg=<optimized out>) at /data/git/zephyr-workspace/zephyr/drivers/usb/device/usb_dc_rpi_pico.c:194 194 if (status & USB_INTS_DEV_CONN_DIS_BITS) { (gdb) bt #0 0x100038bc in udc_rpi_isr (arg=<optimized out>) at /data/git/zephyr-workspace/zephyr/drivers/usb/device/usb_dc_rpi_pico.c:194 #1 0x100034d4 in _isr_wrapper () at /data/git/zephyr-workspace/zephyr/arch/arm/core/aarch32/isr_wrapper.S:259 #2 <signal handler called> #3 0x100033da in arch_irq_unlock (key=0) at /data/git/zephyr-workspace/zephyr/include/zephyr/arch/arm/aarch32/asm_inline_gcc.h:91 #4 arch_swap (key=key@entry=0) at /data/git/zephyr-workspace/zephyr/arch/arm/core/aarch32/swap.c:44 #5 0x10005224 in z_swap_irqlock (key=0) at /data/git/zephyr-workspace/zephyr/kernel/include/kswap.h:185 #6 z_swap (key=..., lock=0x20001684) at /data/git/zephyr-workspace/zephyr/kernel/include/kswap.h:196 #7 z_pend_curr (lock=lock@entry=0x20001684, key=..., wait_q=wait_q@entry=0x20000a28 <z_usb_work_q+120>, timeout=timeout@entry=...) 
at /data/git/zephyr-workspace/zephyr/kernel/sched.c:842 #8 0x100055e4 in z_sched_wait (lock=lock@entry=0x20001684, key=..., key@entry=..., wait_q=wait_q@entry=0x20000a28 <z_usb_work_q+120>, timeout=timeout@entry=..., data=data@entry=0x0) at /data/git/zephyr-workspace/zephyr/kernel/sched.c:1893 #9 0x10004d5c in work_queue_main (workq_ptr=0x200009b0 <z_usb_work_q>, p2=<optimized out>, p3=<optimized out>) at /data/git/zephyr-workspace/zephyr/kernel/work.c:660 #10 0x10005c0e in z_thread_entry (entry=0x10004d21 <work_queue_main>, p1=<optimized out>, p2=<optimized out>, p3=<optimized out>) at /data/git/zephyr-workspace/zephyr/lib/os/thread_entry.c:36 #11 0x10005c0e in z_thread_entry (entry=0x0, p1=<optimized out>, p2=<optimized out>, p3=<optimized out>) at /data/git/zephyr-workspace/zephyr/lib/os/thread_entry.c:36 #12 0x10026040 in ?? () Backtrace stopped: previous frame identical to this frame (corrupt stack?) (gdb) b udc_rpi_isr Breakpoint 1 at 0x10003858: file /data/git/zephyr-workspace/zephyr/drivers/usb/device/usb_dc_rpi_pico.c, line 179. Note: automatically using hardware breakpoints for read-only addresses. (gdb) c Continuing. 
target halted due to debug-request, current mode: Thread xPSR: 0x01000000 pc: 0x00000138 msp: 0x20041f00 Thread 1 hit Breakpoint 1, udc_rpi_isr (arg=0x0) at /data/git/zephyr-workspace/zephyr/drivers/usb/device/usb_dc_rpi_pico.c:179 179 uint32_t status = usb_hw->ints; (gdb) n 183 if (status & USB_INTS_SETUP_REQ_BITS) { (gdb) n 189 if (status & USB_INTS_BUFF_STATUS_BITS) { (gdb) n 191 udc_rpi_handle_buff_status(); (gdb) n 194 if (status & USB_INTS_DEV_CONN_DIS_BITS) { (gdb) n 209 if (status & USB_INTS_BUS_RESET_BITS) { (gdb) n 230 if (status & USB_INTS_ERROR_DATA_SEQ_BITS) { (gdb) n 236 if (status ^ handled) { (gdb) s 237 LOG_ERR("unhandled IRQ: 0x%x", (uint)(status ^ handled)); ``` ``` *** Booting Zephyr OS build 1b26acca1e03 *** [00:00:00.005,000] <inf> cdc_acm_composite: Wait for DTR [00:00:04.317,000] <inf> usb_cdc_acm: Device disconnected [00:00:04.317,000] <inf> usb_cdc_acm: Device disconnected [00:00:04.393,000] <inf> usb_cdc_acm: Device disconnected [00:00:04.394,000] <inf> usb_cdc_acm: Device disconnected [00:00:04.478,000] <inf> usb_cdc_acm: Device configured [00:00:04.478,000] <inf> usb_cdc_acm: Device configured [00:00:28.834,000] <inf> cdc_acm_composite: DTR set, start test ``` **Environment (please complete the following information):** - OS: Linux - Toolchain: Zephyr-SDK 0.15.0, Zephyr 3.2 ** Note ** Sometimes this bug also occurs when using a single acm device
non_process
rpi pico usb hangs up in interrupt handler for composite devices describe the bug i am using the with the cdc acm composite sample when sending bytes the first few bytes to the first acm port they will be echoed to the second acm port after a few more bytes the port will hang up to reproduce steps to reproduce the behavior build the cdc acm composite sample for the connect to and or whatever they are called on your system send bytes to observe the hangup expected behavior all bytes should be echoed to impact composite devices are not usable logs and console output it seems like there is some unhandled interrupt in the udc rpi isr which never gets cleared resulting in endless looping of the isr gdb log thread received signal sigint interrupt in udc rpi isr arg at data git zephyr workspace zephyr drivers usb device usb dc rpi pico c if status usb ints dev conn dis bits gdb bt in udc rpi isr arg at data git zephyr workspace zephyr drivers usb device usb dc rpi pico c in isr wrapper at data git zephyr workspace zephyr arch arm core isr wrapper s in arch irq unlock key at data git zephyr workspace zephyr include zephyr arch arm asm inline gcc h arch swap key key entry at data git zephyr workspace zephyr arch arm core swap c in z swap irqlock key at data git zephyr workspace zephyr kernel include kswap h z swap key lock at data git zephyr workspace zephyr kernel include kswap h z pend curr lock lock entry key wait q wait q entry timeout timeout entry at data git zephyr workspace zephyr kernel sched c in z sched wait lock lock entry key key entry wait q wait q entry timeout timeout entry data data entry at data git zephyr workspace zephyr kernel sched c in work queue main workq ptr at data git zephyr workspace zephyr kernel work c in z thread entry entry at data git zephyr workspace zephyr lib os thread entry c in z thread entry entry at data git zephyr workspace zephyr lib os thread entry c in backtrace stopped previous frame identical to this frame corrupt stack gdb b 
udc rpi isr breakpoint at file data git zephyr workspace zephyr drivers usb device usb dc rpi pico c line note automatically using hardware breakpoints for read only addresses gdb c continuing target halted due to debug request current mode thread xpsr pc msp thread hit breakpoint udc rpi isr arg at data git zephyr workspace zephyr drivers usb device usb dc rpi pico c t status usb hw ints gdb n if status usb ints setup req bits gdb n if status usb ints buff status bits gdb n udc rpi handle buff status gdb n if status usb ints dev conn dis bits gdb n if status usb ints bus reset bits gdb n if status usb ints error data seq bits gdb n if status handled gdb s log err unhandled irq x uint status handled booting zephyr os build cdc acm composite wait for dtr usb cdc acm device disconnected usb cdc acm device disconnected usb cdc acm device disconnected usb cdc acm device disconnected usb cdc acm device configured usb cdc acm device configured cdc acm composite dtr set start test environment please complete the following information os linux toolchain zephyr sdk zephyr note sometimes this bug also occurs when using a single acm device
0
24,607
12,131,691,343
IssuesEvent
2020-04-23 05:30:49
Azure/azure-sdk-for-net
https://api.github.com/repos/Azure/azure-sdk-for-net
opened
Provide Model Factory per .NET Mocking Guidelines
Client Cognitive Services FormRecognizer
https://azure.github.io/azure-sdk/dotnet_introduction.html#dotnet-mocking ✅ DO provide factory or builder for constructing model graphs returned from virtual service methods. Model types shouldn’t have public constructors. Instances of the model are typically returned from the client library, and are not constructed by the consumer of the library. Mock implementations need to create instances of model types. Implement a static class called <service>ModelFactory in the same namespace as the model types.
1.0
Provide Model Factory per .NET Mocking Guidelines - https://azure.github.io/azure-sdk/dotnet_introduction.html#dotnet-mocking ✅ DO provide factory or builder for constructing model graphs returned from virtual service methods. Model types shouldn’t have public constructors. Instances of the model are typically returned from the client library, and are not constructed by the consumer of the library. Mock implementations need to create instances of model types. Implement a static class called <service>ModelFactory in the same namespace as the model types.
non_process
provide model factory per net mocking guidelines ✅ do provide factory or builder for constructing model graphs returned from virtual service methods model types shouldn’t have public constructors instances of the model are typically returned from the client library and are not constructed by the consumer of the library mock implementations need to create instances of model types implement a static class called modelfactory in the same namespace as the model types
0
9,345
12,345,928,718
IssuesEvent
2020-05-15 09:49:22
tikv/tikv
https://api.github.com/repos/tikv/tikv
closed
UCP: Migrate scalar function `TimeToSec` from TiDB
challenge-program-2 component/coprocessor difficulty/easy sig/coprocessor
## Description Port the scalar function `TimeToSec` from TiDB to coprocessor. ## Score * 50 ## Mentor(s) * @sticnarf ## Recommended Skills * Rust programming ## Learning Materials Already implemented expressions ported from TiDB - https://github.com/tikv/tikv/tree/master/components/tidb_query/src/rpn_expr) - https://github.com/tikv/tikv/tree/master/components/tidb_query/src/expr)
2.0
UCP: Migrate scalar function `TimeToSec` from TiDB - ## Description Port the scalar function `TimeToSec` from TiDB to coprocessor. ## Score * 50 ## Mentor(s) * @sticnarf ## Recommended Skills * Rust programming ## Learning Materials Already implemented expressions ported from TiDB - https://github.com/tikv/tikv/tree/master/components/tidb_query/src/rpn_expr) - https://github.com/tikv/tikv/tree/master/components/tidb_query/src/expr)
process
ucp migrate scalar function timetosec from tidb description port the scalar function timetosec from tidb to coprocessor score mentor s sticnarf recommended skills rust programming learning materials already implemented expressions ported from tidb
1
209,955
23,730,981,154
IssuesEvent
2022-08-31 01:39:33
Baneeishaque/FinPro-ERP-Web
https://api.github.com/repos/Baneeishaque/FinPro-ERP-Web
closed
CVE-2018-14040 (Medium) detected in bootstrap-3.3.7.min.js - autoclosed
security vulnerability
## CVE-2018-14040 - Medium Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>bootstrap-3.3.7.min.js</b></p></summary> <p>The most popular front-end framework for developing responsive, mobile first projects on the web.</p> <p>Library home page: <a href="https://cdnjs.cloudflare.com/ajax/libs/twitter-bootstrap/3.3.7/js/bootstrap.min.js">https://cdnjs.cloudflare.com/ajax/libs/twitter-bootstrap/3.3.7/js/bootstrap.min.js</a></p> <p>Path to vulnerable library: /FinPro_ERP_Web/public/js/bootstrap.min.js,/FinPro_ERP_Web/vendor/phpunit/php-code-coverage/src/Report/Html/Renderer/Template/js/bootstrap.min.js</p> <p> Dependency Hierarchy: - :x: **bootstrap-3.3.7.min.js** (Vulnerable Library) <p>Found in HEAD commit: <a href="https://github.com/Baneeishaque/FinPro_ERP_Web/commit/3fb63435000ba9bb2bec7d5cebf21a3d61da6d18">3fb63435000ba9bb2bec7d5cebf21a3d61da6d18</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary> <p> In Bootstrap before 4.1.2, XSS is possible in the collapse data-parent attribute. <p>Publish Date: 2018-07-13 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2018-14040>CVE-2018-14040</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>6.1</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: Required - Scope: Changed - Impact Metrics: - Confidentiality Impact: Low - Integrity Impact: Low - Availability Impact: None </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. 
</p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://github.com/twbs/bootstrap/pull/26630">https://github.com/twbs/bootstrap/pull/26630</a></p> <p>Release Date: 2018-07-13</p> <p>Fix Resolution: org.webjars.npm:bootstrap:4.1.2,org.webjars:bootstrap:3.4.0</p> </p> </details> <p></p> *** Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
True
CVE-2018-14040 (Medium) detected in bootstrap-3.3.7.min.js - autoclosed - ## CVE-2018-14040 - Medium Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>bootstrap-3.3.7.min.js</b></p></summary> <p>The most popular front-end framework for developing responsive, mobile first projects on the web.</p> <p>Library home page: <a href="https://cdnjs.cloudflare.com/ajax/libs/twitter-bootstrap/3.3.7/js/bootstrap.min.js">https://cdnjs.cloudflare.com/ajax/libs/twitter-bootstrap/3.3.7/js/bootstrap.min.js</a></p> <p>Path to vulnerable library: /FinPro_ERP_Web/public/js/bootstrap.min.js,/FinPro_ERP_Web/vendor/phpunit/php-code-coverage/src/Report/Html/Renderer/Template/js/bootstrap.min.js</p> <p> Dependency Hierarchy: - :x: **bootstrap-3.3.7.min.js** (Vulnerable Library) <p>Found in HEAD commit: <a href="https://github.com/Baneeishaque/FinPro_ERP_Web/commit/3fb63435000ba9bb2bec7d5cebf21a3d61da6d18">3fb63435000ba9bb2bec7d5cebf21a3d61da6d18</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary> <p> In Bootstrap before 4.1.2, XSS is possible in the collapse data-parent attribute. 
<p>Publish Date: 2018-07-13 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2018-14040>CVE-2018-14040</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>6.1</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: Required - Scope: Changed - Impact Metrics: - Confidentiality Impact: Low - Integrity Impact: Low - Availability Impact: None </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://github.com/twbs/bootstrap/pull/26630">https://github.com/twbs/bootstrap/pull/26630</a></p> <p>Release Date: 2018-07-13</p> <p>Fix Resolution: org.webjars.npm:bootstrap:4.1.2,org.webjars:bootstrap:3.4.0</p> </p> </details> <p></p> *** Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
non_process
cve medium detected in bootstrap min js autoclosed cve medium severity vulnerability vulnerable library bootstrap min js the most popular front end framework for developing responsive mobile first projects on the web library home page a href path to vulnerable library finpro erp web public js bootstrap min js finpro erp web vendor phpunit php code coverage src report html renderer template js bootstrap min js dependency hierarchy x bootstrap min js vulnerable library found in head commit a href vulnerability details in bootstrap before xss is possible in the collapse data parent attribute publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction required scope changed impact metrics confidentiality impact low integrity impact low availability impact none for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution org webjars npm bootstrap org webjars bootstrap step up your open source security game with whitesource
0
13,340
15,801,045,539
IssuesEvent
2021-04-03 02:40:04
PyCQA/flake8
https://api.github.com/repos/PyCQA/flake8
closed
Deadlock with very large number of files
component:multiprocessing component:performance help wanted priority:high
In GitLab by @asottile on Nov 18, 2016, 11:35 ## Installing flake8 ``` $ virtualenv venv -ppython2.7 $ venv/bin/pip install flake8 $ ./venv/bin/flake8 --bug-report { "dependencies": [ { "dependency": "setuptools", "version": "3.4.4" } ], "platform": { "python_implementation": "CPython", "python_version": "2.7.6", "system": "Linux" }, "plugins": [ { "plugin": "mccabe", "version": "0.5.2" }, { "plugin": "pycodestyle", "version": "2.0.0" }, { "plugin": "pyflakes", "version": "1.2.3" } ], "version": "3.2.0" } ``` ## Issue Description With lots of files, flake8 deadlocks. Deadlock is confirmed by seeing all processes at either `wait` or `futex_wait` in `ps`. In the 2.x line this does not seem to be a problem. ## Reproduction (This is affecting a real repository with real files, I've simulated this with 100,001 blank files below). ``` mkdir -p bar touch bar/a{0..100000}.py flake8 bar -j8 ``` In a separate tab open: ``` watch -n.1 "ps -wwwelfy | grep $USER |grep flake8 | grep -v grep | grep -v watch" ``` In my case, after ~3 minutes the watch panel reaches this state: ``` Every .1s: ps -wwwelfy | grep asottile |grep flake8 | grep -v grep | grep -v watch Fri Nov 18 11:31:49 2016 S asottile 1484265 1029227 27 80 0 591748 171242 wait 11:21 pts/29 00:02:44 /tmp/fpp/venv/bin/python2.7 /tmp/fpp/venv/bin/flake8 bar -j8 S asottile 1487064 1484265 3 80 0 583904 190214 futex_ 11:22 pts/29 00:00:18 /tmp/fpp/venv/bin/python2.7 /tmp/fpp/venv/bin/flake8 bar -j8 S asottile 1487072 1484265 3 80 0 583920 190244 futex_ 11:22 pts/29 00:00:19 /tmp/fpp/venv/bin/python2.7 /tmp/fpp/venv/bin/flake8 bar -j8 S asottile 1487077 1484265 3 80 0 583916 190221 futex_ 11:22 pts/29 00:00:18 /tmp/fpp/venv/bin/python2.7 /tmp/fpp/venv/bin/flake8 bar -j8 S asottile 1487085 1484265 3 80 0 583920 190221 futex_ 11:22 pts/29 00:00:18 /tmp/fpp/venv/bin/python2.7 /tmp/fpp/venv/bin/flake8 bar -j8 S asottile 1487093 1484265 3 80 0 583924 190222 futex_ 11:22 pts/29 00:00:18 /tmp/fpp/venv/bin/python2.7 
/tmp/fpp/venv/bin/flake8 bar -j8 S asottile 1487097 1484265 3 80 0 583924 190222 futex_ 11:22 pts/29 00:00:18 /tmp/fpp/venv/bin/python2.7 /tmp/fpp/venv/bin/flake8 bar -j8 S asottile 1487110 1484265 3 80 0 583956 190221 futex_ 11:22 pts/29 00:00:18 /tmp/fpp/venv/bin/python2.7 /tmp/fpp/venv/bin/flake8 bar -j8 S asottile 1487111 1484265 3 80 0 583920 190222 futex_ 11:22 pts/29 00:00:18 /tmp/fpp/venv/bin/python2.7 /tmp/fpp/venv/bin/flake8 bar -j8 ``` ## Regression With an older `flake8`, this finishes much quicker and does not deadlock: ``` $ flake8 --version 2.6.2 (pycodestyle: 2.0.0, pyflakes: 1.2.3, mccabe: 0.5.2) CPython 2.7.6 on Linux $ time flake8 bar -j8 real 0m15.752s user 0m14.612s sys 0m1.104s ```
1.0
Deadlock with very large number of files - In GitLab by @asottile on Nov 18, 2016, 11:35 ## Installing flake8 ``` $ virtualenv venv -ppython2.7 $ venv/bin/pip install flake8 $ ./venv/bin/flake8 --bug-report { "dependencies": [ { "dependency": "setuptools", "version": "3.4.4" } ], "platform": { "python_implementation": "CPython", "python_version": "2.7.6", "system": "Linux" }, "plugins": [ { "plugin": "mccabe", "version": "0.5.2" }, { "plugin": "pycodestyle", "version": "2.0.0" }, { "plugin": "pyflakes", "version": "1.2.3" } ], "version": "3.2.0" } ``` ## Issue Description With lots of files, flake8 deadlocks. Deadlock is confirmed by seeing all processes at either `wait` or `futex_wait` in `ps`. In the 2.x line this does not seem to be a problem. ## Reproduction (This is affecting a real repository with real files, I've simulated this with 100,001 blank files below). ``` mkdir -p bar touch bar/a{0..100000}.py flake8 bar -j8 ``` In a separate tab open: ``` watch -n.1 "ps -wwwelfy | grep $USER |grep flake8 | grep -v grep | grep -v watch" ``` In my case, after ~3 minutes the watch panel reaches this state: ``` Every .1s: ps -wwwelfy | grep asottile |grep flake8 | grep -v grep | grep -v watch Fri Nov 18 11:31:49 2016 S asottile 1484265 1029227 27 80 0 591748 171242 wait 11:21 pts/29 00:02:44 /tmp/fpp/venv/bin/python2.7 /tmp/fpp/venv/bin/flake8 bar -j8 S asottile 1487064 1484265 3 80 0 583904 190214 futex_ 11:22 pts/29 00:00:18 /tmp/fpp/venv/bin/python2.7 /tmp/fpp/venv/bin/flake8 bar -j8 S asottile 1487072 1484265 3 80 0 583920 190244 futex_ 11:22 pts/29 00:00:19 /tmp/fpp/venv/bin/python2.7 /tmp/fpp/venv/bin/flake8 bar -j8 S asottile 1487077 1484265 3 80 0 583916 190221 futex_ 11:22 pts/29 00:00:18 /tmp/fpp/venv/bin/python2.7 /tmp/fpp/venv/bin/flake8 bar -j8 S asottile 1487085 1484265 3 80 0 583920 190221 futex_ 11:22 pts/29 00:00:18 /tmp/fpp/venv/bin/python2.7 /tmp/fpp/venv/bin/flake8 bar -j8 S asottile 1487093 1484265 3 80 0 583924 190222 futex_ 11:22 pts/29 00:00:18 
/tmp/fpp/venv/bin/python2.7 /tmp/fpp/venv/bin/flake8 bar -j8 S asottile 1487097 1484265 3 80 0 583924 190222 futex_ 11:22 pts/29 00:00:18 /tmp/fpp/venv/bin/python2.7 /tmp/fpp/venv/bin/flake8 bar -j8 S asottile 1487110 1484265 3 80 0 583956 190221 futex_ 11:22 pts/29 00:00:18 /tmp/fpp/venv/bin/python2.7 /tmp/fpp/venv/bin/flake8 bar -j8 S asottile 1487111 1484265 3 80 0 583920 190222 futex_ 11:22 pts/29 00:00:18 /tmp/fpp/venv/bin/python2.7 /tmp/fpp/venv/bin/flake8 bar -j8 ``` ## Regression With an older `flake8`, this finishes much quicker and does not deadlock: ``` $ flake8 --version 2.6.2 (pycodestyle: 2.0.0, pyflakes: 1.2.3, mccabe: 0.5.2) CPython 2.7.6 on Linux $ time flake8 bar -j8 real 0m15.752s user 0m14.612s sys 0m1.104s ```
process
deadlock with very large number of files in gitlab by asottile on nov installing virtualenv venv venv bin pip install venv bin bug report dependencies dependency setuptools version platform python implementation cpython python version system linux plugins plugin mccabe version plugin pycodestyle version plugin pyflakes version version issue description with lots of files deadlocks deadlock is confirmed by seeing all processes at either wait or futex wait in ps in the x line this does not seem to be a problem reproduction this is affecting a real repository with real files i ve simulated this with blank files below mkdir p bar touch bar a py bar in a separate tab open watch n ps wwwelfy grep user grep grep v grep grep v watch in my case after minutes the watch panel reaches this state every ps wwwelfy grep asottile grep grep v grep grep v watch fri nov s asottile wait pts tmp fpp venv bin tmp fpp venv bin bar s asottile futex pts tmp fpp venv bin tmp fpp venv bin bar s asottile futex pts tmp fpp venv bin tmp fpp venv bin bar s asottile futex pts tmp fpp venv bin tmp fpp venv bin bar s asottile futex pts tmp fpp venv bin tmp fpp venv bin bar s asottile futex pts tmp fpp venv bin tmp fpp venv bin bar s asottile futex pts tmp fpp venv bin tmp fpp venv bin bar s asottile futex pts tmp fpp venv bin tmp fpp venv bin bar s asottile futex pts tmp fpp venv bin tmp fpp venv bin bar regression with an older this finishes much quicker and does not deadlock version pycodestyle pyflakes mccabe cpython on linux time bar real user sys
1
16,644
5,266,423,909
IssuesEvent
2017-02-04 12:21:13
jba0040/GGJ17Burgos
https://api.github.com/repos/jba0040/GGJ17Burgos
opened
Ataque cuerpo a cuerpo: funcionalidad
code enhancement
Sistema de ataque cuerpo a cuerpo. El objetivo es que desplace ligeramente al enemigo. El ataque podrá cargarse para aumentar la distancia de desplazamiento del enemigo.
1.0
Ataque cuerpo a cuerpo: funcionalidad - Sistema de ataque cuerpo a cuerpo. El objetivo es que desplace ligeramente al enemigo. El ataque podrá cargarse para aumentar la distancia de desplazamiento del enemigo.
non_process
ataque cuerpo a cuerpo funcionalidad sistema de ataque cuerpo a cuerpo el objetivo es que desplace ligeramente al enemigo el ataque podrá cargarse para aumentar la distancia de desplazamiento del enemigo
0
14,108
16,998,222,779
IssuesEvent
2021-07-01 09:12:42
hochschule-darmstadt/openartbrowser
https://api.github.com/repos/hochschule-darmstadt/openartbrowser
opened
New attributes
User Interface etl process feature medium priority question
**Reason (Why?)** The wikidata datasource offers way more attributes to crawl. **Solution (What?)** Add the following attributes: - for Buildings: - Additional attributes are architect P84, which corresponds to the artist. - architectural style P194 corresponds to movement This is a good opportunity to change the models in crawler to match the models and the inheritance of properties in the frontend. [Add further attributes here] **Relation to other Issues** Provide additional context (link relevant issues) **Acceptance criteria** New attributes are added
1.0
New attributes - **Reason (Why?)** The wikidata datasource offers way more attributes to crawl. **Solution (What?)** Add the following attributes: - for Buildings: - Additional attributes are architect P84, which corresponds to the artist. - architectural style P194 corresponds to movement This is a good opportunity to change the models in crawler to match the models and the inheritance of properties in the frontend. [Add further attributes here] **Relation to other Issues** Provide additional context (link relevant issues) **Acceptance criteria** New attributes are added
process
new attributes reason why the wikidata datasource offers way more attributes to crawl solution what add the following attributes for buildings additional attributes are architect which corresponds to the artist architectural style corresponds to movement this is a good opportunity to change the models in crawler to match the models and the inheritance of properties in the frontend relation to other issues provide additional context link relevant issues acceptance criteria new attributes are added
1
2,135
4,974,558,330
IssuesEvent
2016-12-06 07:11:23
opentrials/opentrials
https://api.github.com/repos/opentrials/opentrials
closed
Setup continuous processing
Processors
- [ ] add strategy to process only updated data in `warehouse` - [ ] update `processors` stack to use this strategy and `make-initial-processing` to do not use - [ ] run `processors` stack on docker-cloud - [ ] remove `make-initial-processing` stack? all `processors` could check last day when objects was processed and start to process from this time - so only `processors` stack will be needed.
1.0
Setup continuous processing - - [ ] add strategy to process only updated data in `warehouse` - [ ] update `processors` stack to use this strategy and `make-initial-processing` to do not use - [ ] run `processors` stack on docker-cloud - [ ] remove `make-initial-processing` stack? all `processors` could check last day when objects was processed and start to process from this time - so only `processors` stack will be needed.
process
setup continuous processing add strategy to process only updated data in warehouse update processors stack to use this strategy and make initial processing to do not use run processors stack on docker cloud remove make initial processing stack all processors could check last day when objects was processed and start to process from this time so only processors stack will be needed
1
210,776
16,385,093,578
IssuesEvent
2021-05-17 09:23:50
TeamPneumatic/pnc-repressurized
https://api.github.com/repos/TeamPneumatic/pnc-repressurized
closed
Technical typos in the PNC:R handbook
Bug Documentation Fixed in Dev
### Minecraft Version 1.16.5 ### Forge Version 36.1.16 ### Mod Version 184 Some entries have certain strings that haven't been parsed correctly as contextual links. ![uhoh](https://user-images.githubusercontent.com/1103834/118428388-32d9b980-b684-11eb-8e90-bfad4245a6a5.png) So far I've seen this in: "Pneumatic Armor Overview", "Pneumatic Chestplate"
1.0
Technical typos in the PNC:R handbook - ### Minecraft Version 1.16.5 ### Forge Version 36.1.16 ### Mod Version 184 Some entries have certain strings that haven't been parsed correctly as contextual links. ![uhoh](https://user-images.githubusercontent.com/1103834/118428388-32d9b980-b684-11eb-8e90-bfad4245a6a5.png) So far I've seen this in: "Pneumatic Armor Overview", "Pneumatic Chestplate"
non_process
technical typos in the pnc r handbook minecraft version forge version mod version some entries have certain strings that haven t been parsed correctly as contextual links so far i ve seen this in pneumatic armor overview pneumatic chestplate
0
288,882
31,930,973,376
IssuesEvent
2023-09-19 07:24:02
Trinadh465/linux-4.1.15_CVE-2023-4128
https://api.github.com/repos/Trinadh465/linux-4.1.15_CVE-2023-4128
opened
CVE-2020-10732 (Medium) detected in linuxlinux-4.6
Mend: dependency security vulnerability
## CVE-2020-10732 - Medium Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>linuxlinux-4.6</b></p></summary> <p> <p>The Linux Kernel</p> <p>Library home page: <a href=https://mirrors.edge.kernel.org/pub/linux/kernel/v4.x/?wsslib=linux>https://mirrors.edge.kernel.org/pub/linux/kernel/v4.x/?wsslib=linux</a></p> <p>Found in HEAD commit: <a href="https://github.com/Trinadh465/linux-4.1.15_CVE-2023-4128/commit/0c6c8d8c809f697cd5fc581c6c08e9ad646c55a8">0c6c8d8c809f697cd5fc581c6c08e9ad646c55a8</a></p> <p>Found in base branch: <b>main</b></p></p> </details> </p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Source Files (2)</summary> <p></p> <p> <img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/fs/binfmt_elf.c</b> <img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/fs/binfmt_elf.c</b> </p> </details> <p></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png?' width=19 height=20> Vulnerability Details</summary> <p> A flaw was found in the Linux kernel's implementation of Userspace core dumps. This flaw allows an attacker with a local account to crash a trivial program and exfiltrate private kernel data. 
<p>Publish Date: 2020-06-12 <p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2020-10732>CVE-2020-10732</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>4.4</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Local - Attack Complexity: Low - Privileges Required: Low - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: Low - Integrity Impact: None - Availability Impact: Low </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://bugzilla.redhat.com/show_bug.cgi?id=CVE-2020-10732">https://bugzilla.redhat.com/show_bug.cgi?id=CVE-2020-10732</a></p> <p>Release Date: 2020-06-12</p> <p>Fix Resolution: v5.7</p> </p> </details> <p></p> *** Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
True
CVE-2020-10732 (Medium) detected in linuxlinux-4.6 - ## CVE-2020-10732 - Medium Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>linuxlinux-4.6</b></p></summary> <p> <p>The Linux Kernel</p> <p>Library home page: <a href=https://mirrors.edge.kernel.org/pub/linux/kernel/v4.x/?wsslib=linux>https://mirrors.edge.kernel.org/pub/linux/kernel/v4.x/?wsslib=linux</a></p> <p>Found in HEAD commit: <a href="https://github.com/Trinadh465/linux-4.1.15_CVE-2023-4128/commit/0c6c8d8c809f697cd5fc581c6c08e9ad646c55a8">0c6c8d8c809f697cd5fc581c6c08e9ad646c55a8</a></p> <p>Found in base branch: <b>main</b></p></p> </details> </p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Source Files (2)</summary> <p></p> <p> <img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/fs/binfmt_elf.c</b> <img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/fs/binfmt_elf.c</b> </p> </details> <p></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png?' width=19 height=20> Vulnerability Details</summary> <p> A flaw was found in the Linux kernel's implementation of Userspace core dumps. This flaw allows an attacker with a local account to crash a trivial program and exfiltrate private kernel data. 
<p>Publish Date: 2020-06-12 <p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2020-10732>CVE-2020-10732</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>4.4</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Local - Attack Complexity: Low - Privileges Required: Low - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: Low - Integrity Impact: None - Availability Impact: Low </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://bugzilla.redhat.com/show_bug.cgi?id=CVE-2020-10732">https://bugzilla.redhat.com/show_bug.cgi?id=CVE-2020-10732</a></p> <p>Release Date: 2020-06-12</p> <p>Fix Resolution: v5.7</p> </p> </details> <p></p> *** Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
non_process
cve medium detected in linuxlinux cve medium severity vulnerability vulnerable library linuxlinux the linux kernel library home page a href found in head commit a href found in base branch main vulnerable source files fs binfmt elf c fs binfmt elf c vulnerability details a flaw was found in the linux kernel s implementation of userspace core dumps this flaw allows an attacker with a local account to crash a trivial program and exfiltrate private kernel data publish date url a href cvss score details base score metrics exploitability metrics attack vector local attack complexity low privileges required low user interaction none scope unchanged impact metrics confidentiality impact low integrity impact none availability impact low for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution step up your open source security game with mend
0
162,050
12,609,066,663
IssuesEvent
2020-06-12 00:21:40
pantsbuild/pants
https://api.github.com/repos/pantsbuild/pants
opened
GoTestIntegrationTest.test_go_test_simple is flaky
flaky-test
``` __________________ GoTestIntegrationTest.test_go_test_simple ___________________ self = <pants_test.contrib.go.tasks.test_go_test_integration.GoTestIntegrationTest testMethod=test_go_test_simple> def test_go_test_simple(self): args = ["test", "contrib/go/examples/src/go/libA"] pants_run = self.run_pants(args) self.assert_success(pants_run) # libA depends on libB, so both tests should be run. self.assertRegex(pants_run.stdout_data, r"ok\s+libA") self.assertRegex(pants_run.stdout_data, r"ok\s+libB") # Run a second time and see that they are cached. # TODO: this is better done with a unit test, and as noted in #7188, testing interaction with a # remote cache should probably be added somewhere. pants_run = self.run_pants(args) > self.assert_success(pants_run) pants_test/contrib/go/tasks/test_go_test_integration.py:24: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ pants/testutil/pants_run_integration_test.py:499: in assert_success self.assert_result(pants_run, PANTS_SUCCEEDED_EXIT_CODE, expected=True, msg=msg) pants/testutil/pants_run_integration_test.py:520: in assert_result assertion(value, pants_run.returncode, error_msg) E AssertionError: 0 != 1 : /pyenv-docker-build/versions/3.6.8/bin/python3.6 -m pants --no-pantsrc --pants-workdir=/b/f/w/.pants.d/tmp/tmp_nh9byxd.pants.d --print-exception-stacktrace=True --kill-nailguns test contrib/go/examples/src/go/libA E returncode: 1 E stdout: E E stderr: E Scrubbed 
PYTHONPATH=/b/f/w:/home/nobody/.pex/code/da39a3ee5e6b4b0d3255bfef95601890afd80709:/home/nobody/.pex/code/da39a3ee5e6b4b0d3255bfef95601890afd80709:/b/f/w/test_runner.pex:/pyenv-docker-build/versions/3.6.8/lib/python36.zip:/pyenv-docker-build/versions/3.6.8/lib/python3.6:/pyenv-docker-build/versions/3.6.8/lib/python3.6/lib-dynload:/home/nobody/.pex/installed_wheels/49d4d88c0299db8e10f74886545884e6a9cacbb5/Pygments-2.6.1-py3-none-any.whl:/home/nobody/.pex/installed_wheels/e91da08a7091c46772d87f0c961a63ac85d4a810/attrs-19.3.0-py2.py3-none-any.whl:/home/nobody/.pex/installed_wheels/653b4b3b05183674ad07fe68096eeede4de276bc/backcall-0.2.0-py2.py3-none-any.whl:/home/nobody/.pex/installed_wheels/06f6f44bd42b03ce2144205f7d688faeebf58cf8/coverage-5.1-cp36-cp36m-manylinux1_x86_64.whl:/home/nobody/.pex/installed_wheels/a39764c5860f8844b13681b2f1513aaa97f3b754/decorator-4.4.2-py2.py3-none-any.whl:/home/nobody/.pex/installed_wheels/83e0d232d816c9c638ee3ed5e83b822f1ac9f099/icdiff-1.9.1-py3-none-any.whl:/home/nobody/.pex/installed_wheels/4355f6b0feb0d63bb6d38c864a4b3937f441ca17/importlib_metadata-1.6.1-py2.py3-none-any.whl:/home/nobody/.pex/installed_wheels/4ba72e96d18f2bf48a4d1ef9ac4b1d94140b57ca/zipp-2.1.0-py3-none-any.whl:/home/nobody/.pex/installed_wheels/1df9b21e44b0993eb2f05f18feda11f77ec6ac3b/ipdb-0.13.2-py3-none-any.whl:/home/nobody/.pex/installed_wheels/7fdc500ea8756f23c4018b03b5f2b4be2f84354d/ipython-7.15.0-py3-none-any.whl:/home/nobody/.pex/installed_wheels/c84a350be3c972331863155699d5cd3c97f70c2d/setuptools-44.0.0-py2.py3-none-any.whl:/home/nobody/.pex/installed_wheels/9f88e1d206d5445575b1567be5cf757944e97984/pickleshare-0.7.5-py2.py3-none-any.whl:/home/nobody/.pex/installed_wheels/586e21c71919cbbc17b646536392c6f315ea5e9d/prompt_toolkit-3.0.5-py3-none-any.whl:/home/nobody/.pex/installed_wheels/5426e3f5850211033c23d03969e29331156b9f54/pexpect-4.8.0-py2.py3-none-any.whl:/home/nobody/.pex/installed_wheels/a9d5bf6dea47e1eda6353297d98a30656a6e41f7/jedi-0.17.0-py2.py3-none-any
.whl:/home/nobody/.pex/installed_wheels/2141181f135fda3f826e75453c112fc9d4ab5943/traitlets-4.3.3-py2.py3-none-any.whl:/home/nobody/.pex/installed_wheels/d6d6fc4a263540aa4f357feff250ee78212587a7/wcwidth-0.2.4-py2.py3-none-any.whl:/home/nobody/.pex/installed_wheels/b022182801b9c7678c2a75da584161da521027f4/ptyprocess-0.6.0-py2.py3-none-any.whl:/home/nobody/.pex/installed_wheels/b39c6737e620e687416487869684d1f13030aa33/parso-0.7.0-py2.py3-none-any.whl:/home/nobody/.pex/installed_wheels/8a22673b2bbd32a36071077c315914d35928003b/ipython_genutils-0.2.0-py2.py3-none-any.whl:/home/nobody/.pex/installed_wheels/0cb007434fc9c1d2ee600dbd964088b69f5d670a/six-1.15.0-py2.py3-none-any.whl:/home/nobody/.pex/installed_wheels/333c248fbcfa0263b4db142c80776ec1857d0abe/more_itertools-8.3.0-py3-none-any.whl:/home/nobody/.pex/installed_wheels/9664f565aa9532c471a09b969ff7ad32ba129789/packaging-20.4-py2.py3-none-any.whl:/home/nobody/.pex/installed_wheels/66efc87a982c8ddaf7c60ae455d9d5b87d067259/pyparsing-2.4.7-py2.py3-none-any.whl:/home/nobody/.pex/installed_wheels/8046b1eee652ee862d8ec7a0ad17fdbe4c81b4ed/pluggy-0.13.1-py2.py3-none-any.whl:/home/nobody/.pex/installed_wheels/9d3c52ba8e1a020dc4a88b63fc8231d94000e6f2/pprintpp-0.4.0-py2.py3-none-any.whl:/home/nobody/.pex/installed_wheels/f12aba29a84270bc263e09ac2308b1902306c749/py-1.8.1-py2.py3-none-any.whl:/home/nobody/.pex/installed_wheels/4e3d983d0f0fc98f5b042644cab1b8385f4175d6/pytest_cov-2.8.1-py2.py3-none-any.whl:/home/nobody/.pex/installed_wheels/7ba07f4d109a12f8dfc840b5d7363dcfc543bfb0/pytest-5.3.5-py3-none-any.whl:/home/nobody/.pex/installed_wheels/b66fe775e573b6856100f8b093f0511037582574/pytest_icdiff-0.5-py3-none-any.whl:/home/nobody/.pex/installed_wheels/ff4d646f3154f680222b1c33cc0cd24cd85ffb17/pytest_timeout-1.3.4-py2.py3-none-any.whl:/home/nobody/.pex/installed_wheels/a76d3c478a716209a045d38c22f218d18d702bd8/Markdown-2.1.1-py3-none-any.whl:/home/nobody/.pex/installed_wheels/3fd0ae90f726055866ae80c5e52c27254b79add8/PyYAML-5.3.1-cp36-c
p36m-linux_x86_64.whl:/home/nobody/.pex/installed_wheels/49d4d88c0299db8e10f74886545884e6a9cacbb5/Pygments-2.6.1-py3-none-any.whl:/home/nobody/.pex/installed_wheels/09b1e6d512758006ed26e520287d9cdb2c7e791a/ansicolors-1.1.8-py2.py3-none-any.whl:/home/nobody/.pex/installed_wheels/f47f4b8964c35360502f697cf57b3393f0279095/beautifulsoup4-4.6.3-py3-none-any.whl:/home/nobody/.pex/installed_wheels/9af923ed9ad037e13a1dcce887288fa5daa4320a/certifi-2020.4.5.2-py2.py3-none-any.whl:/home/nobody/.pex/installed_wheels/67978da8bd78e479f0bd07d45a246cf2508ae1be/cffi-1.14.0-cp36-cp36m-manylinux1_x86_64.whl:/home/nobody/.pex/installed_wheels/4bf5fb0f6540ca05cb505b5bba08c37043075a90/pycparser-2.20-py2.py3-none-any.whl:/home/nobody/.pex/installed_wheels/7baca230fd8c611ab475ddc834063ed30538a784/chardet-3.0.4-py2.py3-none-any.whl:/home/nobody/.pex/installed_wheels/2eac0049ba89da81b29c164f2c54bf60fc64a775/cryptography-2.9.2-cp35-abi3-manylinux2010_x86_64.whl:/home/nobody/.pex/installed_wheels/0cb007434fc9c1d2ee600dbd964088b69f5d670a/six-1.15.0-py2.py3-none-any.whl:/home/nobody/.pex/installed_wheels/4942d49ca06cf2b496398adedf0447df3a8bed03/dataclasses-0.6-py3-none-any.whl:/home/nobody/.pex/installed_wheels/39c3714bfb00d713251571888ac69e1de544161a/docutils-0.16-py2.py3-none-any.whl:/home/nobody/.pex/installed_wheels/b3c554f04d9bb1c726351cf969e9301345897544/fasteners-0.15-py2.py3-none-any.whl:/home/nobody/.pex/installed_wheels/a25b3c1dec53262dddf6171e4c9b20039255684f/monotonic-1.5-py2.py3-none-any.whl:/home/nobody/.pex/installed_wheels/17874ccb364b6c5b47bdbb1b6d8b241435bfbcfe/idna-2.9-py2.py3-none-any.whl:/home/nobody/.pex/installed_wheels/717dc45ee6775628f1a9c3cb600d6b623ed40eb3/packaging-20.3-py2.py3-none-any.whl:/home/nobody/.pex/installed_wheels/66efc87a982c8ddaf7c60ae455d9d5b87d067259/pyparsing-2.4.7-py2.py3-none-any.whl:/home/nobody/.pex/installed_wheels/4972466d45eb3be6905de9fb88c046d6a21759c4/pathspec-0.8.0-py2.py3-none-any.whl:/home/nobody/.pex/installed_wheels/d123bc570f5cb5c52070590
b2bdbc8c558b6a66c/pex-2.1.11-py2.py3-none-any.whl:/home/nobody/.pex/installed_wheels/2bf1fc5c32c8ee05055ad3d9206415c986976acf/ply-3.11-py2.py3-none-any.whl:/home/nobody/.pex/installed_wheels/d8de0a1b41e12886ae8a5aecd728fffcfc1ebea1/psutil-5.7.0-cp36-cp36m-linux_x86_64.whl:/home/nobody/.pex/installed_wheels/56777feaefd555c3441aa884a6d4d8ffda60f946/py_zipkin-0.20.0-py2.py3-none-any.whl:/home/nobody/.pex/installed_wheels/aea358da742a78d20171fa98390e60283b71d04f/thriftpy2-0.4.11-cp36-cp36m-linux_x86_64.whl:/home/nobody/.pex/installed_wheels/b10d2b90ef5f6320f42d881be6234345f896e2f0/pyOpenSSL-19.1.0-py2.py3-none-any.whl:/home/nobody/.pex/installed_wheels/e34b43c3f9f7347f368458e68d40a5a1e70291d2/pystache-0.5.4-py3-none-any.whl:/home/nobody/.pex/installed_wheels/7499593a193c022ae86b0458ed7f8fd6f1d421bb/python_Levenshtein-0.12.0-cp36-cp36m-linux_x86_64.whl:/home/nobody/.pex/installed_wheels/c84a350be3c972331863155699d5cd3c97f70c2d/setuptools-44.0.0-py2.py3-none-any.whl:/home/nobody/.pex/installed_wheels/ff51658fdce695e5a9b78037cde8a3ebbf1c812a/pywatchman-1.4.1-cp36-cp36m-linux_x86_64.whl:/home/nobody/.pex/installed_wheels/5a5391a0d4858402ffabba10f3c8444de332269f/requests-2.23.0-py2.py3-none-any.whl:/home/nobody/.pex/installed_wheels/cf34d79fe2d84e646ced8d40080780a87f6ea0ce/urllib3-1.25.9-py2.py3-none-any.whl:/home/nobody/.pex/installed_wheels/cecbc5e55a38c8a52fece89c179c063c18eb0460/setproctitle-1.1.10-cp36-cp36m-linux_x86_64.whl:/home/nobody/.pex/installed_wheels/72a57f8c0def7dc1527d10698d70f44a577e9fff/toml-0.10.1-py2.py3-none-any.whl:/home/nobody/.pex/installed_wheels/997752670c4c1718d9e3c7af2f8e3960a5364511/twitter.common.confluence-0.3.11-py3-none-any.whl:/home/nobody/.pex/installed_wheels/1ef626456a6e0c505c40d182f6ca57a26c88bec1/twitter.common.log-0.3.11-py3-none-any.whl:/home/nobody/.pex/installed_wheels/ff0f47266faefbe0899ff79308333193d7bf44a4/twitter.common.dirutil-0.3.11-py3-none-any.whl:/home/nobody/.pex/installed_wheels/dd9e4fd73f3c8a75d79407cbbd32bade1b182542/tw
itter.common.options-0.3.11-py3-none-any.whl:/home/nobody/.pex/installed_wheels/c0e8a883e9c2eb43b38885befb3b691c3f6b561f/twitter.common.lang-0.3.11-py3-none-any.whl:/home/nobody/.pex/installed_wheels/d0a12ba117abec741977a4d9fd97af997eab9b14/typed_ast-1.4.1-cp36-cp36m-manylinux1_x86_64.whl:/home/nobody/.pex/installed_wheels/871624bc41050236e14619ab5f73539d495f557c/typing_extensions-3.7.4.2-py3-none-any.whl:/home/nobody/.pex/installed_wheels/82666a1d11c1a0924a6bac6c9d75f590c607b26d/www_authenticate-0.9.2-py3-none-any.whl:/b/f/w/test_runner.pex/.bootstrap from the environment. E 22:38:22 [INFO] waiting for pantsd to start... E 22:38:23 [INFO] pantsd started E 22:38:23 [ERROR] E 22:38:23 [ERROR] lost active connection to pantsd! E 22:38:23 [ERROR] abruptly lost active connection to pantsd runner: NailgunError('Problem talking to nailgun server (address: 127.0.0.1:38133): TruncatedHeaderError("Failed to read nailgun chunk header (TruncatedRead(\'Expected 5 bytes before socket shutdown, instead received 0\',)).",)', TruncatedHeaderError("Failed to read nailgun chunk header (TruncatedRead('Expected 5 bytes before socket shutdown, instead received 0',)).",)) E Remote exception: E timestamp: 2020-06-11T22:38:23.142874 E process title: pantsd [/b/f/w] E sys.argv: ['/b/f/w/pants/__main__.py', '--no-pantsrc', '--pants-workdir=/b/f/w/.pants.d/tmp/tmp_nh9byxd.pants.d', '--print-exception-stacktrace=True', '--kill-nailguns', 'test', 'contrib/go/examples/src/go/libA'] E pid: 6310 E Exception caught: (pants.engine.internals.scheduler.ExecutionError) E File "/b/f/w/pants/__main__.py", line 7, in <module> E pants_loader.main() E File "/b/f/w/pants/bin/pants_loader.py", line 94, in main E PantsLoader.run() E File "/b/f/w/pants/bin/pants_loader.py", line 90, in run E cls.load_and_execute(entrypoint) E File "/b/f/w/pants/bin/pants_loader.py", line 83, in load_and_execute E entrypoint_main() E File "/b/f/w/pants/pantsd/pants_daemon.py", line 452, in launch E 
PantsDaemon.create(OptionsBootstrapper.create()).run_sync() E File "/b/f/w/pants/pantsd/pants_daemon.py", line 447, in run_sync E self._run_services(self._services) E File "/b/f/w/pants/pantsd/pants_daemon.py", line 350, in _run_services E self._initialize_pid() E File "/b/f/w/pants/pantsd/pants_daemon.py", line 393, in _initialize_pid E os.path.relpath(pidfile_absolute, self._build_root) E File "/b/f/w/pants/pantsd/service/scheduler_service.py", line 120, in add_invalidation_glob E self._invalidation_globs_and_snapshot = (globs, self._get_snapshot(globs, poll=False)) E File "/b/f/w/pants/pantsd/service/scheduler_service.py", line 80, in _get_snapshot E Snapshot, subjects=[PathGlobs(globs)], poll=poll, timeout=timeout, E File "/b/f/w/pants/engine/internals/scheduler.py", line 549, in product_request E self._raise_on_error([t for _, t in throws]) E File "/b/f/w/pants/engine/internals/scheduler.py", line 489, in _raise_on_error E wrapped_exceptions=tuple(t.exc for t in throws), E E Exception message: 1 Exception encountered: E E Engine traceback: E in Snapshot(PathGlobs(globs=('!*.pyc', '!*_test.py', '!__pycache__/', '.pids/pantsd/pid', '3rdparty/**/requirements.txt', 'contrib/confluence/src/python', 'contrib/confluence/src/python/**', 'contrib/go/src/python', 'contrib/go/src/python/**', 'contrib/mypy/src/python', 'contrib/mypy/src/python/**', 'contrib/node/src/python', 'contrib/node/src/python/**', 'contrib/scrooge/src/python', 'contrib/scrooge/src/python/**', 'pants', 'pants-plugins/src/python', 'pants-plugins/src/python/**', 'pants.toml', 'pants.toml/**', 'pants/**', 'requirements.txt', 'src/rust/engine/**/*.rs', 'src/rust/engine/**/*.toml', 'test_runner.pex', 'test_runner.pex/**', 'test_runner.pex/.bootstrap', 'test_runner.pex/.bootstrap/**'), glob_match_error_behavior=<GlobMatchErrorBehavior.ignore: 'ignore'>, conjunction=<GlobExpansionConjunction.any_match: 'any_match'>, description_of_origin='')) E Traceback (no traceback): E <pants native internals> E 
Exception: Failed to scan directory "/b/f/w/.pids": No such file or directory (os error 2) E E Traceback (most recent call last): E File "/b/f/w/pants/java/nailgun_protocol.py", line 188, in read_chunk E header = cls._read_until(sock, cls.HEADER_BYTES) E File "/b/f/w/pants/java/nailgun_protocol.py", line 168, in _read_until E desired_size, len(buf) E pants.java.nailgun_protocol.NailgunProtocol.TruncatedRead: Expected 5 bytes before socket shutdown, instead received 0 E E During handling of the above exception, another exception occurred: E E Traceback (most recent call last): E File "/b/f/w/pants/java/nailgun_client.py", line 288, in execute E exit_code = self._session.execute(cwd, main_class, *args, **environment) E File "/b/f/w/pants/java/nailgun_client.py", line 141, in execute E return self._process_session() E File "/b/f/w/pants/java/nailgun_client.py", line 104, in _process_session E MaybeShutdownSocket(self._sock), return_bytes=True, timeout_object=self, E File "/b/f/w/pants/java/nailgun_protocol.py", line 290, in iter_chunks E maybe_shutdown_socket.socket, return_bytes E File "/b/f/w/pants/java/nailgun_protocol.py", line 190, in read_chunk E raise cls.TruncatedHeaderError("Failed to read nailgun chunk header ({!r}).".format(e)) E pants.java.nailgun_protocol.NailgunProtocol.TruncatedHeaderError: Failed to read nailgun chunk header (TruncatedRead('Expected 5 bytes before socket shutdown, instead received 0',)). 
E E During handling of the above exception, another exception occurred: E E Traceback (most recent call last): E File "/b/f/w/pants/bin/remote_pants_runner.py", line 125, in _run_pants_with_retry E return self._connect_and_execute(pantsd_handle) E File "/b/f/w/pants/bin/remote_pants_runner.py", line 181, in _connect_and_execute E return client.execute(self._args[0], self._args[1:], modified_env) E File "/b/f/w/pants/java/nailgun_client.py", line 292, in execute E address=self._address_string, wrapped_exc=e, E pants.java.nailgun_client.NailgunClient.NailgunError: ('Problem talking to nailgun server (address: 127.0.0.1:38133): TruncatedHeaderError("Failed to read nailgun chunk header (TruncatedRead(\'Expected 5 bytes before socket shutdown, instead received 0\',)).",)', TruncatedHeaderError("Failed to read nailgun chunk header (TruncatedRead('Expected 5 bytes before socket shutdown, instead received 0',)).",)) E E During handling of the above exception, another exception occurred: E E Traceback (most recent call last): E File "/b/f/w/pants/bin/pants_exe.py", line 36, in main E exit_code = runner.run(start_time) E File "/b/f/w/pants/bin/pants_runner.py", line 86, in run E return RemotePantsRunner(self.args, self.env, options_bootstrapper).run(start_time) E File "/b/f/w/pants/bin/remote_pants_runner.py", line 209, in run E return self._run_pants_with_retry(self._client.maybe_launch()) E File "/b/f/w/pants/bin/remote_pants_runner.py", line 148, in _run_pants_with_retry E raise self._extract_remote_exception(pantsd_handle.pid, e).with_traceback(traceback) E File "/b/f/w/pants/bin/remote_pants_runner.py", line 125, in _run_pants_with_retry E return self._connect_and_execute(pantsd_handle) E File "/b/f/w/pants/bin/remote_pants_runner.py", line 181, in _connect_and_execute E return client.execute(self._args[0], self._args[1:], modified_env) E File "/b/f/w/pants/java/nailgun_client.py", line 292, in execute E address=self._address_string, wrapped_exc=e, E 
pants.bin.remote_pants_runner.RemotePantsRunner.Terminated: abruptly lost active connection to pantsd runner: NailgunError('Problem talking to nailgun server (address: 127.0.0.1:38133): TruncatedHeaderError("Failed to read nailgun chunk header (TruncatedRead(\'Expected 5 bytes before socket shutdown, instead received 0\',)).",)', TruncatedHeaderError("Failed to read nailgun chunk header (TruncatedRead('Expected 5 bytes before socket shutdown, instead received 0',)).",)) E Remote exception: E timestamp: 2020-06-11T22:38:23.142874 E process title: pantsd [/b/f/w] E sys.argv: ['/b/f/w/pants/__main__.py', '--no-pantsrc', '--pants-workdir=/b/f/w/.pants.d/tmp/tmp_nh9byxd.pants.d', '--print-exception-stacktrace=True', '--kill-nailguns', 'test', 'contrib/go/examples/src/go/libA'] E pid: 6310 E Exception caught: (pants.engine.internals.scheduler.ExecutionError) E File "/b/f/w/pants/__main__.py", line 7, in <module> E pants_loader.main() E File "/b/f/w/pants/bin/pants_loader.py", line 94, in main E PantsLoader.run() E File "/b/f/w/pants/bin/pants_loader.py", line 90, in run E cls.load_and_execute(entrypoint) E File "/b/f/w/pants/bin/pants_loader.py", line 83, in load_and_execute E entrypoint_main() E File "/b/f/w/pants/pantsd/pants_daemon.py", line 452, in launch E PantsDaemon.create(OptionsBootstrapper.create()).run_sync() E File "/b/f/w/pants/pantsd/pants_daemon.py", line 447, in run_sync E self._run_services(self._services) E File "/b/f/w/pants/pantsd/pants_daemon.py", line 350, in _run_services E self._initialize_pid() E File "/b/f/w/pants/pantsd/pants_daemon.py", line 393, in _initialize_pid E os.path.relpath(pidfile_absolute, self._build_root) E File "/b/f/w/pants/pantsd/service/scheduler_service.py", line 120, in add_invalidation_glob E self._invalidation_globs_and_snapshot = (globs, self._get_snapshot(globs, poll=False)) E File "/b/f/w/pants/pantsd/service/scheduler_service.py", line 80, in _get_snapshot E Snapshot, subjects=[PathGlobs(globs)], poll=poll, 
timeout=timeout, E File "/b/f/w/pants/engine/internals/scheduler.py", line 549, in product_request E self._raise_on_error([t for _, t in throws]) E File "/b/f/w/pants/engine/internals/scheduler.py", line 489, in _raise_on_error E wrapped_exceptions=tuple(t.exc for t in throws), E E Exception message: 1 Exception encountered: E E Engine traceback: E in Snapshot(PathGlobs(globs=('!*.pyc', '!*_test.py', '!__pycache__/', '.pids/pantsd/pid', '3rdparty/**/requirements.txt', 'contrib/confluence/src/python', 'contrib/confluence/src/python/**', 'contrib/go/src/python', 'contrib/go/src/python/**', 'contrib/mypy/src/python', 'contrib/mypy/src/python/**', 'contrib/node/src/python', 'contrib/node/src/python/**', 'contrib/scrooge/src/python', 'contrib/scrooge/src/python/**', 'pants', 'pants-plugins/src/python', 'pants-plugins/src/python/**', 'pants.toml', 'pants.toml/**', 'pants/**', 'requirements.txt', 'src/rust/engine/**/*.rs', 'src/rust/engine/**/*.toml', 'test_runner.pex', 'test_runner.pex/**', 'test_runner.pex/.bootstrap', 'test_runner.pex/.bootstrap/**'), glob_match_error_behavior=<GlobMatchErrorBehavior.ignore: 'ignore'>, conjunction=<GlobExpansionConjunction.any_match: 'any_match'>, description_of_origin='')) E Traceback (no traceback): E <pants native internals> E Exception: Failed to scan directory "/b/f/w/.pids": No such file or directory (os error 2) ```
1.0
GoTestIntegrationTest.test_go_test_simple is flaky - ``` __________________ GoTestIntegrationTest.test_go_test_simple ___________________ self = <pants_test.contrib.go.tasks.test_go_test_integration.GoTestIntegrationTest testMethod=test_go_test_simple> def test_go_test_simple(self): args = ["test", "contrib/go/examples/src/go/libA"] pants_run = self.run_pants(args) self.assert_success(pants_run) # libA depends on libB, so both tests should be run. self.assertRegex(pants_run.stdout_data, r"ok\s+libA") self.assertRegex(pants_run.stdout_data, r"ok\s+libB") # Run a second time and see that they are cached. # TODO: this is better done with a unit test, and as noted in #7188, testing interaction with a # remote cache should probably be added somewhere. pants_run = self.run_pants(args) > self.assert_success(pants_run) pants_test/contrib/go/tasks/test_go_test_integration.py:24: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ pants/testutil/pants_run_integration_test.py:499: in assert_success self.assert_result(pants_run, PANTS_SUCCEEDED_EXIT_CODE, expected=True, msg=msg) pants/testutil/pants_run_integration_test.py:520: in assert_result assertion(value, pants_run.returncode, error_msg) E AssertionError: 0 != 1 : /pyenv-docker-build/versions/3.6.8/bin/python3.6 -m pants --no-pantsrc --pants-workdir=/b/f/w/.pants.d/tmp/tmp_nh9byxd.pants.d --print-exception-stacktrace=True --kill-nailguns test contrib/go/examples/src/go/libA E returncode: 1 E stdout: E E stderr: E Scrubbed 
PYTHONPATH=/b/f/w:/home/nobody/.pex/code/da39a3ee5e6b4b0d3255bfef95601890afd80709:/home/nobody/.pex/code/da39a3ee5e6b4b0d3255bfef95601890afd80709:/b/f/w/test_runner.pex:/pyenv-docker-build/versions/3.6.8/lib/python36.zip:/pyenv-docker-build/versions/3.6.8/lib/python3.6:/pyenv-docker-build/versions/3.6.8/lib/python3.6/lib-dynload:/home/nobody/.pex/installed_wheels/49d4d88c0299db8e10f74886545884e6a9cacbb5/Pygments-2.6.1-py3-none-any.whl:/home/nobody/.pex/installed_wheels/e91da08a7091c46772d87f0c961a63ac85d4a810/attrs-19.3.0-py2.py3-none-any.whl:/home/nobody/.pex/installed_wheels/653b4b3b05183674ad07fe68096eeede4de276bc/backcall-0.2.0-py2.py3-none-any.whl:/home/nobody/.pex/installed_wheels/06f6f44bd42b03ce2144205f7d688faeebf58cf8/coverage-5.1-cp36-cp36m-manylinux1_x86_64.whl:/home/nobody/.pex/installed_wheels/a39764c5860f8844b13681b2f1513aaa97f3b754/decorator-4.4.2-py2.py3-none-any.whl:/home/nobody/.pex/installed_wheels/83e0d232d816c9c638ee3ed5e83b822f1ac9f099/icdiff-1.9.1-py3-none-any.whl:/home/nobody/.pex/installed_wheels/4355f6b0feb0d63bb6d38c864a4b3937f441ca17/importlib_metadata-1.6.1-py2.py3-none-any.whl:/home/nobody/.pex/installed_wheels/4ba72e96d18f2bf48a4d1ef9ac4b1d94140b57ca/zipp-2.1.0-py3-none-any.whl:/home/nobody/.pex/installed_wheels/1df9b21e44b0993eb2f05f18feda11f77ec6ac3b/ipdb-0.13.2-py3-none-any.whl:/home/nobody/.pex/installed_wheels/7fdc500ea8756f23c4018b03b5f2b4be2f84354d/ipython-7.15.0-py3-none-any.whl:/home/nobody/.pex/installed_wheels/c84a350be3c972331863155699d5cd3c97f70c2d/setuptools-44.0.0-py2.py3-none-any.whl:/home/nobody/.pex/installed_wheels/9f88e1d206d5445575b1567be5cf757944e97984/pickleshare-0.7.5-py2.py3-none-any.whl:/home/nobody/.pex/installed_wheels/586e21c71919cbbc17b646536392c6f315ea5e9d/prompt_toolkit-3.0.5-py3-none-any.whl:/home/nobody/.pex/installed_wheels/5426e3f5850211033c23d03969e29331156b9f54/pexpect-4.8.0-py2.py3-none-any.whl:/home/nobody/.pex/installed_wheels/a9d5bf6dea47e1eda6353297d98a30656a6e41f7/jedi-0.17.0-py2.py3-none-any
.whl:/home/nobody/.pex/installed_wheels/2141181f135fda3f826e75453c112fc9d4ab5943/traitlets-4.3.3-py2.py3-none-any.whl:/home/nobody/.pex/installed_wheels/d6d6fc4a263540aa4f357feff250ee78212587a7/wcwidth-0.2.4-py2.py3-none-any.whl:/home/nobody/.pex/installed_wheels/b022182801b9c7678c2a75da584161da521027f4/ptyprocess-0.6.0-py2.py3-none-any.whl:/home/nobody/.pex/installed_wheels/b39c6737e620e687416487869684d1f13030aa33/parso-0.7.0-py2.py3-none-any.whl:/home/nobody/.pex/installed_wheels/8a22673b2bbd32a36071077c315914d35928003b/ipython_genutils-0.2.0-py2.py3-none-any.whl:/home/nobody/.pex/installed_wheels/0cb007434fc9c1d2ee600dbd964088b69f5d670a/six-1.15.0-py2.py3-none-any.whl:/home/nobody/.pex/installed_wheels/333c248fbcfa0263b4db142c80776ec1857d0abe/more_itertools-8.3.0-py3-none-any.whl:/home/nobody/.pex/installed_wheels/9664f565aa9532c471a09b969ff7ad32ba129789/packaging-20.4-py2.py3-none-any.whl:/home/nobody/.pex/installed_wheels/66efc87a982c8ddaf7c60ae455d9d5b87d067259/pyparsing-2.4.7-py2.py3-none-any.whl:/home/nobody/.pex/installed_wheels/8046b1eee652ee862d8ec7a0ad17fdbe4c81b4ed/pluggy-0.13.1-py2.py3-none-any.whl:/home/nobody/.pex/installed_wheels/9d3c52ba8e1a020dc4a88b63fc8231d94000e6f2/pprintpp-0.4.0-py2.py3-none-any.whl:/home/nobody/.pex/installed_wheels/f12aba29a84270bc263e09ac2308b1902306c749/py-1.8.1-py2.py3-none-any.whl:/home/nobody/.pex/installed_wheels/4e3d983d0f0fc98f5b042644cab1b8385f4175d6/pytest_cov-2.8.1-py2.py3-none-any.whl:/home/nobody/.pex/installed_wheels/7ba07f4d109a12f8dfc840b5d7363dcfc543bfb0/pytest-5.3.5-py3-none-any.whl:/home/nobody/.pex/installed_wheels/b66fe775e573b6856100f8b093f0511037582574/pytest_icdiff-0.5-py3-none-any.whl:/home/nobody/.pex/installed_wheels/ff4d646f3154f680222b1c33cc0cd24cd85ffb17/pytest_timeout-1.3.4-py2.py3-none-any.whl:/home/nobody/.pex/installed_wheels/a76d3c478a716209a045d38c22f218d18d702bd8/Markdown-2.1.1-py3-none-any.whl:/home/nobody/.pex/installed_wheels/3fd0ae90f726055866ae80c5e52c27254b79add8/PyYAML-5.3.1-cp36-c
p36m-linux_x86_64.whl:/home/nobody/.pex/installed_wheels/49d4d88c0299db8e10f74886545884e6a9cacbb5/Pygments-2.6.1-py3-none-any.whl:/home/nobody/.pex/installed_wheels/09b1e6d512758006ed26e520287d9cdb2c7e791a/ansicolors-1.1.8-py2.py3-none-any.whl:/home/nobody/.pex/installed_wheels/f47f4b8964c35360502f697cf57b3393f0279095/beautifulsoup4-4.6.3-py3-none-any.whl:/home/nobody/.pex/installed_wheels/9af923ed9ad037e13a1dcce887288fa5daa4320a/certifi-2020.4.5.2-py2.py3-none-any.whl:/home/nobody/.pex/installed_wheels/67978da8bd78e479f0bd07d45a246cf2508ae1be/cffi-1.14.0-cp36-cp36m-manylinux1_x86_64.whl:/home/nobody/.pex/installed_wheels/4bf5fb0f6540ca05cb505b5bba08c37043075a90/pycparser-2.20-py2.py3-none-any.whl:/home/nobody/.pex/installed_wheels/7baca230fd8c611ab475ddc834063ed30538a784/chardet-3.0.4-py2.py3-none-any.whl:/home/nobody/.pex/installed_wheels/2eac0049ba89da81b29c164f2c54bf60fc64a775/cryptography-2.9.2-cp35-abi3-manylinux2010_x86_64.whl:/home/nobody/.pex/installed_wheels/0cb007434fc9c1d2ee600dbd964088b69f5d670a/six-1.15.0-py2.py3-none-any.whl:/home/nobody/.pex/installed_wheels/4942d49ca06cf2b496398adedf0447df3a8bed03/dataclasses-0.6-py3-none-any.whl:/home/nobody/.pex/installed_wheels/39c3714bfb00d713251571888ac69e1de544161a/docutils-0.16-py2.py3-none-any.whl:/home/nobody/.pex/installed_wheels/b3c554f04d9bb1c726351cf969e9301345897544/fasteners-0.15-py2.py3-none-any.whl:/home/nobody/.pex/installed_wheels/a25b3c1dec53262dddf6171e4c9b20039255684f/monotonic-1.5-py2.py3-none-any.whl:/home/nobody/.pex/installed_wheels/17874ccb364b6c5b47bdbb1b6d8b241435bfbcfe/idna-2.9-py2.py3-none-any.whl:/home/nobody/.pex/installed_wheels/717dc45ee6775628f1a9c3cb600d6b623ed40eb3/packaging-20.3-py2.py3-none-any.whl:/home/nobody/.pex/installed_wheels/66efc87a982c8ddaf7c60ae455d9d5b87d067259/pyparsing-2.4.7-py2.py3-none-any.whl:/home/nobody/.pex/installed_wheels/4972466d45eb3be6905de9fb88c046d6a21759c4/pathspec-0.8.0-py2.py3-none-any.whl:/home/nobody/.pex/installed_wheels/d123bc570f5cb5c52070590
b2bdbc8c558b6a66c/pex-2.1.11-py2.py3-none-any.whl:/home/nobody/.pex/installed_wheels/2bf1fc5c32c8ee05055ad3d9206415c986976acf/ply-3.11-py2.py3-none-any.whl:/home/nobody/.pex/installed_wheels/d8de0a1b41e12886ae8a5aecd728fffcfc1ebea1/psutil-5.7.0-cp36-cp36m-linux_x86_64.whl:/home/nobody/.pex/installed_wheels/56777feaefd555c3441aa884a6d4d8ffda60f946/py_zipkin-0.20.0-py2.py3-none-any.whl:/home/nobody/.pex/installed_wheels/aea358da742a78d20171fa98390e60283b71d04f/thriftpy2-0.4.11-cp36-cp36m-linux_x86_64.whl:/home/nobody/.pex/installed_wheels/b10d2b90ef5f6320f42d881be6234345f896e2f0/pyOpenSSL-19.1.0-py2.py3-none-any.whl:/home/nobody/.pex/installed_wheels/e34b43c3f9f7347f368458e68d40a5a1e70291d2/pystache-0.5.4-py3-none-any.whl:/home/nobody/.pex/installed_wheels/7499593a193c022ae86b0458ed7f8fd6f1d421bb/python_Levenshtein-0.12.0-cp36-cp36m-linux_x86_64.whl:/home/nobody/.pex/installed_wheels/c84a350be3c972331863155699d5cd3c97f70c2d/setuptools-44.0.0-py2.py3-none-any.whl:/home/nobody/.pex/installed_wheels/ff51658fdce695e5a9b78037cde8a3ebbf1c812a/pywatchman-1.4.1-cp36-cp36m-linux_x86_64.whl:/home/nobody/.pex/installed_wheels/5a5391a0d4858402ffabba10f3c8444de332269f/requests-2.23.0-py2.py3-none-any.whl:/home/nobody/.pex/installed_wheels/cf34d79fe2d84e646ced8d40080780a87f6ea0ce/urllib3-1.25.9-py2.py3-none-any.whl:/home/nobody/.pex/installed_wheels/cecbc5e55a38c8a52fece89c179c063c18eb0460/setproctitle-1.1.10-cp36-cp36m-linux_x86_64.whl:/home/nobody/.pex/installed_wheels/72a57f8c0def7dc1527d10698d70f44a577e9fff/toml-0.10.1-py2.py3-none-any.whl:/home/nobody/.pex/installed_wheels/997752670c4c1718d9e3c7af2f8e3960a5364511/twitter.common.confluence-0.3.11-py3-none-any.whl:/home/nobody/.pex/installed_wheels/1ef626456a6e0c505c40d182f6ca57a26c88bec1/twitter.common.log-0.3.11-py3-none-any.whl:/home/nobody/.pex/installed_wheels/ff0f47266faefbe0899ff79308333193d7bf44a4/twitter.common.dirutil-0.3.11-py3-none-any.whl:/home/nobody/.pex/installed_wheels/dd9e4fd73f3c8a75d79407cbbd32bade1b182542/tw
itter.common.options-0.3.11-py3-none-any.whl:/home/nobody/.pex/installed_wheels/c0e8a883e9c2eb43b38885befb3b691c3f6b561f/twitter.common.lang-0.3.11-py3-none-any.whl:/home/nobody/.pex/installed_wheels/d0a12ba117abec741977a4d9fd97af997eab9b14/typed_ast-1.4.1-cp36-cp36m-manylinux1_x86_64.whl:/home/nobody/.pex/installed_wheels/871624bc41050236e14619ab5f73539d495f557c/typing_extensions-3.7.4.2-py3-none-any.whl:/home/nobody/.pex/installed_wheels/82666a1d11c1a0924a6bac6c9d75f590c607b26d/www_authenticate-0.9.2-py3-none-any.whl:/b/f/w/test_runner.pex/.bootstrap from the environment. E 22:38:22 [INFO] waiting for pantsd to start... E 22:38:23 [INFO] pantsd started E 22:38:23 [ERROR] E 22:38:23 [ERROR] lost active connection to pantsd! E 22:38:23 [ERROR] abruptly lost active connection to pantsd runner: NailgunError('Problem talking to nailgun server (address: 127.0.0.1:38133): TruncatedHeaderError("Failed to read nailgun chunk header (TruncatedRead(\'Expected 5 bytes before socket shutdown, instead received 0\',)).",)', TruncatedHeaderError("Failed to read nailgun chunk header (TruncatedRead('Expected 5 bytes before socket shutdown, instead received 0',)).",)) E Remote exception: E timestamp: 2020-06-11T22:38:23.142874 E process title: pantsd [/b/f/w] E sys.argv: ['/b/f/w/pants/__main__.py', '--no-pantsrc', '--pants-workdir=/b/f/w/.pants.d/tmp/tmp_nh9byxd.pants.d', '--print-exception-stacktrace=True', '--kill-nailguns', 'test', 'contrib/go/examples/src/go/libA'] E pid: 6310 E Exception caught: (pants.engine.internals.scheduler.ExecutionError) E File "/b/f/w/pants/__main__.py", line 7, in <module> E pants_loader.main() E File "/b/f/w/pants/bin/pants_loader.py", line 94, in main E PantsLoader.run() E File "/b/f/w/pants/bin/pants_loader.py", line 90, in run E cls.load_and_execute(entrypoint) E File "/b/f/w/pants/bin/pants_loader.py", line 83, in load_and_execute E entrypoint_main() E File "/b/f/w/pants/pantsd/pants_daemon.py", line 452, in launch E 
PantsDaemon.create(OptionsBootstrapper.create()).run_sync() E File "/b/f/w/pants/pantsd/pants_daemon.py", line 447, in run_sync E self._run_services(self._services) E File "/b/f/w/pants/pantsd/pants_daemon.py", line 350, in _run_services E self._initialize_pid() E File "/b/f/w/pants/pantsd/pants_daemon.py", line 393, in _initialize_pid E os.path.relpath(pidfile_absolute, self._build_root) E File "/b/f/w/pants/pantsd/service/scheduler_service.py", line 120, in add_invalidation_glob E self._invalidation_globs_and_snapshot = (globs, self._get_snapshot(globs, poll=False)) E File "/b/f/w/pants/pantsd/service/scheduler_service.py", line 80, in _get_snapshot E Snapshot, subjects=[PathGlobs(globs)], poll=poll, timeout=timeout, E File "/b/f/w/pants/engine/internals/scheduler.py", line 549, in product_request E self._raise_on_error([t for _, t in throws]) E File "/b/f/w/pants/engine/internals/scheduler.py", line 489, in _raise_on_error E wrapped_exceptions=tuple(t.exc for t in throws), E E Exception message: 1 Exception encountered: E E Engine traceback: E in Snapshot(PathGlobs(globs=('!*.pyc', '!*_test.py', '!__pycache__/', '.pids/pantsd/pid', '3rdparty/**/requirements.txt', 'contrib/confluence/src/python', 'contrib/confluence/src/python/**', 'contrib/go/src/python', 'contrib/go/src/python/**', 'contrib/mypy/src/python', 'contrib/mypy/src/python/**', 'contrib/node/src/python', 'contrib/node/src/python/**', 'contrib/scrooge/src/python', 'contrib/scrooge/src/python/**', 'pants', 'pants-plugins/src/python', 'pants-plugins/src/python/**', 'pants.toml', 'pants.toml/**', 'pants/**', 'requirements.txt', 'src/rust/engine/**/*.rs', 'src/rust/engine/**/*.toml', 'test_runner.pex', 'test_runner.pex/**', 'test_runner.pex/.bootstrap', 'test_runner.pex/.bootstrap/**'), glob_match_error_behavior=<GlobMatchErrorBehavior.ignore: 'ignore'>, conjunction=<GlobExpansionConjunction.any_match: 'any_match'>, description_of_origin='')) E Traceback (no traceback): E <pants native internals> E 
Exception: Failed to scan directory "/b/f/w/.pids": No such file or directory (os error 2) E E Traceback (most recent call last): E File "/b/f/w/pants/java/nailgun_protocol.py", line 188, in read_chunk E header = cls._read_until(sock, cls.HEADER_BYTES) E File "/b/f/w/pants/java/nailgun_protocol.py", line 168, in _read_until E desired_size, len(buf) E pants.java.nailgun_protocol.NailgunProtocol.TruncatedRead: Expected 5 bytes before socket shutdown, instead received 0 E E During handling of the above exception, another exception occurred: E E Traceback (most recent call last): E File "/b/f/w/pants/java/nailgun_client.py", line 288, in execute E exit_code = self._session.execute(cwd, main_class, *args, **environment) E File "/b/f/w/pants/java/nailgun_client.py", line 141, in execute E return self._process_session() E File "/b/f/w/pants/java/nailgun_client.py", line 104, in _process_session E MaybeShutdownSocket(self._sock), return_bytes=True, timeout_object=self, E File "/b/f/w/pants/java/nailgun_protocol.py", line 290, in iter_chunks E maybe_shutdown_socket.socket, return_bytes E File "/b/f/w/pants/java/nailgun_protocol.py", line 190, in read_chunk E raise cls.TruncatedHeaderError("Failed to read nailgun chunk header ({!r}).".format(e)) E pants.java.nailgun_protocol.NailgunProtocol.TruncatedHeaderError: Failed to read nailgun chunk header (TruncatedRead('Expected 5 bytes before socket shutdown, instead received 0',)). 
E E During handling of the above exception, another exception occurred: E E Traceback (most recent call last): E File "/b/f/w/pants/bin/remote_pants_runner.py", line 125, in _run_pants_with_retry E return self._connect_and_execute(pantsd_handle) E File "/b/f/w/pants/bin/remote_pants_runner.py", line 181, in _connect_and_execute E return client.execute(self._args[0], self._args[1:], modified_env) E File "/b/f/w/pants/java/nailgun_client.py", line 292, in execute E address=self._address_string, wrapped_exc=e, E pants.java.nailgun_client.NailgunClient.NailgunError: ('Problem talking to nailgun server (address: 127.0.0.1:38133): TruncatedHeaderError("Failed to read nailgun chunk header (TruncatedRead(\'Expected 5 bytes before socket shutdown, instead received 0\',)).",)', TruncatedHeaderError("Failed to read nailgun chunk header (TruncatedRead('Expected 5 bytes before socket shutdown, instead received 0',)).",)) E E During handling of the above exception, another exception occurred: E E Traceback (most recent call last): E File "/b/f/w/pants/bin/pants_exe.py", line 36, in main E exit_code = runner.run(start_time) E File "/b/f/w/pants/bin/pants_runner.py", line 86, in run E return RemotePantsRunner(self.args, self.env, options_bootstrapper).run(start_time) E File "/b/f/w/pants/bin/remote_pants_runner.py", line 209, in run E return self._run_pants_with_retry(self._client.maybe_launch()) E File "/b/f/w/pants/bin/remote_pants_runner.py", line 148, in _run_pants_with_retry E raise self._extract_remote_exception(pantsd_handle.pid, e).with_traceback(traceback) E File "/b/f/w/pants/bin/remote_pants_runner.py", line 125, in _run_pants_with_retry E return self._connect_and_execute(pantsd_handle) E File "/b/f/w/pants/bin/remote_pants_runner.py", line 181, in _connect_and_execute E return client.execute(self._args[0], self._args[1:], modified_env) E File "/b/f/w/pants/java/nailgun_client.py", line 292, in execute E address=self._address_string, wrapped_exc=e, E 
pants.bin.remote_pants_runner.RemotePantsRunner.Terminated: abruptly lost active connection to pantsd runner: NailgunError('Problem talking to nailgun server (address: 127.0.0.1:38133): TruncatedHeaderError("Failed to read nailgun chunk header (TruncatedRead(\'Expected 5 bytes before socket shutdown, instead received 0\',)).",)', TruncatedHeaderError("Failed to read nailgun chunk header (TruncatedRead('Expected 5 bytes before socket shutdown, instead received 0',)).",)) E Remote exception: E timestamp: 2020-06-11T22:38:23.142874 E process title: pantsd [/b/f/w] E sys.argv: ['/b/f/w/pants/__main__.py', '--no-pantsrc', '--pants-workdir=/b/f/w/.pants.d/tmp/tmp_nh9byxd.pants.d', '--print-exception-stacktrace=True', '--kill-nailguns', 'test', 'contrib/go/examples/src/go/libA'] E pid: 6310 E Exception caught: (pants.engine.internals.scheduler.ExecutionError) E File "/b/f/w/pants/__main__.py", line 7, in <module> E pants_loader.main() E File "/b/f/w/pants/bin/pants_loader.py", line 94, in main E PantsLoader.run() E File "/b/f/w/pants/bin/pants_loader.py", line 90, in run E cls.load_and_execute(entrypoint) E File "/b/f/w/pants/bin/pants_loader.py", line 83, in load_and_execute E entrypoint_main() E File "/b/f/w/pants/pantsd/pants_daemon.py", line 452, in launch E PantsDaemon.create(OptionsBootstrapper.create()).run_sync() E File "/b/f/w/pants/pantsd/pants_daemon.py", line 447, in run_sync E self._run_services(self._services) E File "/b/f/w/pants/pantsd/pants_daemon.py", line 350, in _run_services E self._initialize_pid() E File "/b/f/w/pants/pantsd/pants_daemon.py", line 393, in _initialize_pid E os.path.relpath(pidfile_absolute, self._build_root) E File "/b/f/w/pants/pantsd/service/scheduler_service.py", line 120, in add_invalidation_glob E self._invalidation_globs_and_snapshot = (globs, self._get_snapshot(globs, poll=False)) E File "/b/f/w/pants/pantsd/service/scheduler_service.py", line 80, in _get_snapshot E Snapshot, subjects=[PathGlobs(globs)], poll=poll, 
timeout=timeout, E File "/b/f/w/pants/engine/internals/scheduler.py", line 549, in product_request E self._raise_on_error([t for _, t in throws]) E File "/b/f/w/pants/engine/internals/scheduler.py", line 489, in _raise_on_error E wrapped_exceptions=tuple(t.exc for t in throws), E E Exception message: 1 Exception encountered: E E Engine traceback: E in Snapshot(PathGlobs(globs=('!*.pyc', '!*_test.py', '!__pycache__/', '.pids/pantsd/pid', '3rdparty/**/requirements.txt', 'contrib/confluence/src/python', 'contrib/confluence/src/python/**', 'contrib/go/src/python', 'contrib/go/src/python/**', 'contrib/mypy/src/python', 'contrib/mypy/src/python/**', 'contrib/node/src/python', 'contrib/node/src/python/**', 'contrib/scrooge/src/python', 'contrib/scrooge/src/python/**', 'pants', 'pants-plugins/src/python', 'pants-plugins/src/python/**', 'pants.toml', 'pants.toml/**', 'pants/**', 'requirements.txt', 'src/rust/engine/**/*.rs', 'src/rust/engine/**/*.toml', 'test_runner.pex', 'test_runner.pex/**', 'test_runner.pex/.bootstrap', 'test_runner.pex/.bootstrap/**'), glob_match_error_behavior=<GlobMatchErrorBehavior.ignore: 'ignore'>, conjunction=<GlobExpansionConjunction.any_match: 'any_match'>, description_of_origin='')) E Traceback (no traceback): E <pants native internals> E Exception: Failed to scan directory "/b/f/w/.pids": No such file or directory (os error 2) ```
non_process
gotestintegrationtest test go test simple is flaky gotestintegrationtest test go test simple self def test go test simple self args pants run self run pants args self assert success pants run liba depends on libb so both tests should be run self assertregex pants run stdout data r ok s liba self assertregex pants run stdout data r ok s libb run a second time and see that they are cached todo this is better done with a unit test and as noted in testing interaction with a remote cache should probably be added somewhere pants run self run pants args self assert success pants run pants test contrib go tasks test go test integration py pants testutil pants run integration test py in assert success self assert result pants run pants succeeded exit code expected true msg msg pants testutil pants run integration test py in assert result assertion value pants run returncode error msg e assertionerror pyenv docker build versions bin m pants no pantsrc pants workdir b f w pants d tmp tmp pants d print exception stacktrace true kill nailguns test contrib go examples src go liba e returncode e stdout e e stderr e scrubbed pythonpath b f w home nobody pex code home nobody pex code b f w test runner pex pyenv docker build versions lib zip pyenv docker build versions lib pyenv docker build versions lib lib dynload home nobody pex installed wheels pygments none any whl home nobody pex installed wheels attrs none any whl home nobody pex installed wheels backcall none any whl home nobody pex installed wheels coverage whl home nobody pex installed wheels decorator none any whl home nobody pex installed wheels icdiff none any whl home nobody pex installed wheels importlib metadata none any whl home nobody pex installed wheels zipp none any whl home nobody pex installed wheels ipdb none any whl home nobody pex installed wheels ipython none any whl home nobody pex installed wheels setuptools none any whl home nobody pex installed wheels pickleshare none any whl home nobody pex installed 
wheels prompt toolkit none any whl home nobody pex installed wheels pexpect none any whl home nobody pex installed wheels jedi none any whl home nobody pex installed wheels traitlets none any whl home nobody pex installed wheels wcwidth none any whl home nobody pex installed wheels ptyprocess none any whl home nobody pex installed wheels parso none any whl home nobody pex installed wheels ipython genutils none any whl home nobody pex installed wheels six none any whl home nobody pex installed wheels more itertools none any whl home nobody pex installed wheels packaging none any whl home nobody pex installed wheels pyparsing none any whl home nobody pex installed wheels pluggy none any whl home nobody pex installed wheels pprintpp none any whl home nobody pex installed wheels py none any whl home nobody pex installed wheels pytest cov none any whl home nobody pex installed wheels pytest none any whl home nobody pex installed wheels pytest icdiff none any whl home nobody pex installed wheels pytest timeout none any whl home nobody pex installed wheels markdown none any whl home nobody pex installed wheels pyyaml linux whl home nobody pex installed wheels pygments none any whl home nobody pex installed wheels ansicolors none any whl home nobody pex installed wheels none any whl home nobody pex installed wheels certifi none any whl home nobody pex installed wheels cffi whl home nobody pex installed wheels pycparser none any whl home nobody pex installed wheels chardet none any whl home nobody pex installed wheels cryptography whl home nobody pex installed wheels six none any whl home nobody pex installed wheels dataclasses none any whl home nobody pex installed wheels docutils none any whl home nobody pex installed wheels fasteners none any whl home nobody pex installed wheels monotonic none any whl home nobody pex installed wheels idna none any whl home nobody pex installed wheels packaging none any whl home nobody pex installed wheels pyparsing none any whl home 
nobody pex installed wheels pathspec none any whl home nobody pex installed wheels pex none any whl home nobody pex installed wheels ply none any whl home nobody pex installed wheels psutil linux whl home nobody pex installed wheels py zipkin none any whl home nobody pex installed wheels linux whl home nobody pex installed wheels pyopenssl none any whl home nobody pex installed wheels pystache none any whl home nobody pex installed wheels python levenshtein linux whl home nobody pex installed wheels setuptools none any whl home nobody pex installed wheels pywatchman linux whl home nobody pex installed wheels requests none any whl home nobody pex installed wheels none any whl home nobody pex installed wheels setproctitle linux whl home nobody pex installed wheels toml none any whl home nobody pex installed wheels twitter common confluence none any whl home nobody pex installed wheels twitter common log none any whl home nobody pex installed wheels twitter common dirutil none any whl home nobody pex installed wheels twitter common options none any whl home nobody pex installed wheels twitter common lang none any whl home nobody pex installed wheels typed ast whl home nobody pex installed wheels typing extensions none any whl home nobody pex installed wheels www authenticate none any whl b f w test runner pex bootstrap from the environment e waiting for pantsd to start e pantsd started e e lost active connection to pantsd e abruptly lost active connection to pantsd runner nailgunerror problem talking to nailgun server address truncatedheadererror failed to read nailgun chunk header truncatedread expected bytes before socket shutdown instead received truncatedheadererror failed to read nailgun chunk header truncatedread expected bytes before socket shutdown instead received e remote exception e timestamp e process title pantsd e sys argv e pid e exception caught pants engine internals scheduler executionerror e file b f w pants main py line in e pants loader main e 
file b f w pants bin pants loader py line in main e pantsloader run e file b f w pants bin pants loader py line in run e cls load and execute entrypoint e file b f w pants bin pants loader py line in load and execute e entrypoint main e file b f w pants pantsd pants daemon py line in launch e pantsdaemon create optionsbootstrapper create run sync e file b f w pants pantsd pants daemon py line in run sync e self run services self services e file b f w pants pantsd pants daemon py line in run services e self initialize pid e file b f w pants pantsd pants daemon py line in initialize pid e os path relpath pidfile absolute self build root e file b f w pants pantsd service scheduler service py line in add invalidation glob e self invalidation globs and snapshot globs self get snapshot globs poll false e file b f w pants pantsd service scheduler service py line in get snapshot e snapshot subjects poll poll timeout timeout e file b f w pants engine internals scheduler py line in product request e self raise on error e file b f w pants engine internals scheduler py line in raise on error e wrapped exceptions tuple t exc for t in throws e e exception message exception encountered e e engine traceback e in snapshot pathglobs globs pyc test py pycache pids pantsd pid requirements txt contrib confluence src python contrib confluence src python contrib go src python contrib go src python contrib mypy src python contrib mypy src python contrib node src python contrib node src python contrib scrooge src python contrib scrooge src python pants pants plugins src python pants plugins src python pants toml pants toml pants requirements txt src rust engine rs src rust engine toml test runner pex test runner pex test runner pex bootstrap test runner pex bootstrap glob match error behavior conjunction description of origin e traceback no traceback e e exception failed to scan directory b f w pids no such file or directory os error e e traceback most recent call last e file b f w pants 
java nailgun protocol py line in read chunk e header cls read until sock cls header bytes e file b f w pants java nailgun protocol py line in read until e desired size len buf e pants java nailgun protocol nailgunprotocol truncatedread expected bytes before socket shutdown instead received e e during handling of the above exception another exception occurred e e traceback most recent call last e file b f w pants java nailgun client py line in execute e exit code self session execute cwd main class args environment e file b f w pants java nailgun client py line in execute e return self process session e file b f w pants java nailgun client py line in process session e maybeshutdownsocket self sock return bytes true timeout object self e file b f w pants java nailgun protocol py line in iter chunks e maybe shutdown socket socket return bytes e file b f w pants java nailgun protocol py line in read chunk e raise cls truncatedheadererror failed to read nailgun chunk header r format e e pants java nailgun protocol nailgunprotocol truncatedheadererror failed to read nailgun chunk header truncatedread expected bytes before socket shutdown instead received e e during handling of the above exception another exception occurred e e traceback most recent call last e file b f w pants bin remote pants runner py line in run pants with retry e return self connect and execute pantsd handle e file b f w pants bin remote pants runner py line in connect and execute e return client execute self args self args modified env e file b f w pants java nailgun client py line in execute e address self address string wrapped exc e e pants java nailgun client nailgunclient nailgunerror problem talking to nailgun server address truncatedheadererror failed to read nailgun chunk header truncatedread expected bytes before socket shutdown instead received truncatedheadererror failed to read nailgun chunk header truncatedread expected bytes before socket shutdown instead received e e during handling 
of the above exception another exception occurred e e traceback most recent call last e file b f w pants bin pants exe py line in main e exit code runner run start time e file b f w pants bin pants runner py line in run e return remotepantsrunner self args self env options bootstrapper run start time e file b f w pants bin remote pants runner py line in run e return self run pants with retry self client maybe launch e file b f w pants bin remote pants runner py line in run pants with retry e raise self extract remote exception pantsd handle pid e with traceback traceback e file b f w pants bin remote pants runner py line in run pants with retry e return self connect and execute pantsd handle e file b f w pants bin remote pants runner py line in connect and execute e return client execute self args self args modified env e file b f w pants java nailgun client py line in execute e address self address string wrapped exc e e pants bin remote pants runner remotepantsrunner terminated abruptly lost active connection to pantsd runner nailgunerror problem talking to nailgun server address truncatedheadererror failed to read nailgun chunk header truncatedread expected bytes before socket shutdown instead received truncatedheadererror failed to read nailgun chunk header truncatedread expected bytes before socket shutdown instead received e remote exception e timestamp e process title pantsd e sys argv e pid e exception caught pants engine internals scheduler executionerror e file b f w pants main py line in e pants loader main e file b f w pants bin pants loader py line in main e pantsloader run e file b f w pants bin pants loader py line in run e cls load and execute entrypoint e file b f w pants bin pants loader py line in load and execute e entrypoint main e file b f w pants pantsd pants daemon py line in launch e pantsdaemon create optionsbootstrapper create run sync e file b f w pants pantsd pants daemon py line in run sync e self run services self services e file b f 
w pants pantsd pants daemon py line in run services e self initialize pid e file b f w pants pantsd pants daemon py line in initialize pid e os path relpath pidfile absolute self build root e file b f w pants pantsd service scheduler service py line in add invalidation glob e self invalidation globs and snapshot globs self get snapshot globs poll false e file b f w pants pantsd service scheduler service py line in get snapshot e snapshot subjects poll poll timeout timeout e file b f w pants engine internals scheduler py line in product request e self raise on error e file b f w pants engine internals scheduler py line in raise on error e wrapped exceptions tuple t exc for t in throws e e exception message exception encountered e e engine traceback e in snapshot pathglobs globs pyc test py pycache pids pantsd pid requirements txt contrib confluence src python contrib confluence src python contrib go src python contrib go src python contrib mypy src python contrib mypy src python contrib node src python contrib node src python contrib scrooge src python contrib scrooge src python pants pants plugins src python pants plugins src python pants toml pants toml pants requirements txt src rust engine rs src rust engine toml test runner pex test runner pex test runner pex bootstrap test runner pex bootstrap glob match error behavior conjunction description of origin e traceback no traceback e e exception failed to scan directory b f w pids no such file or directory os error
0
21,259
28,432,294,620
IssuesEvent
2023-04-15 00:02:44
metallb/metallb
https://api.github.com/repos/metallb/metallb
closed
maintenership: document how to update the website
process lifecycle-stale
We should document on the website how to upgrade the website. This came up here: https://github.com/metallb/metallb/pull/661#event-3624239546 The website is served from whatever is present in the `live-website` branch. The rules usually are: * The `live-website` branch points to some commit in the current minor release (as of this writing, v0.9) * You just force update the branch to point to the commit you want Assuming you just updated the website content in the 0.9 branch, you would normally do something like: * `git switch v0.9 && git branch -f live-website HEAD` * If all looks fine, `git push -f <remote-name> live-website` Netlify will automatically build a new release and publish it. But it is nice to verify (we don't have access to Netlify just yet)
1.0
maintenership: document how to update the website - We should document on the website how to upgrade the website. This came up here: https://github.com/metallb/metallb/pull/661#event-3624239546 The website is served from whatever is present in the `live-website` branch. The rules usually are: * The `live-website` branch points to some commit in the current minor release (as of this writing, v0.9) * You just force update the branch to point to the commit you want Assuming you just updated the website content in the 0.9 branch, you would normally do something like: * `git switch v0.9 && git branch -f live-website HEAD` * If all looks fine, `git push -f <remote-name> live-website` Netlify will automatically build a new release and publish it. But it is nice to verify (we don't have access to Netlify just yet)
process
maintenership document how to update the website we should document on the website how to upgrade the website this come up here the website is served from whatever is present in the live website branch the rules usually are the live website branch points to some commit in the current minor release as of this writing you just force update the branch to point to the commit you want assuming you just updated the website content in the branch you would normally do something like git switch git branch f live website head if all looks fine git push f live website netlify will automatically build a new release and publish it but is nice to verify we don t have access to netlify just yet
1
671,170
22,746,437,861
IssuesEvent
2022-07-07 09:34:44
feast-dev/feast
https://api.github.com/repos/feast-dev/feast
opened
Spark source unable to accept parquet file folder path
kind/bug priority/p2
Spark source has been working well with providing a single parquet file path. But when a folder path is provided it shows following error: To adjust logging level use sc.setLogLevel(newLevel). For SparkR, use setLogLevel(newLevel). 07/07/2022 02:43:18 PM ERROR:Spark read of file source failed. Traceback (most recent call last): File "C:\Users\Admin\anaconda3\envs\feast_21\lib\site-packages\feast\infra\offline_stores\contrib\spark_offline_store\spark_source.py", line 184, in get_table_query_string df = spark_session.read.format(self.file_format).load(self.path) File "C:\Users\Admin\anaconda3\envs\feast_21\lib\site-packages\pyspark\sql\readwriter.py", line 204, in load return self._df(self._jreader.load(path)) File "C:\Users\Admin\anaconda3\envs\feast_21\lib\site-packages\py4j\java_gateway.py", line 1304, in __call__ return_value = get_return_value( File "C:\Users\Admin\anaconda3\envs\feast_21\lib\site-packages\pyspark\sql\utils.py", line 117, in deco raise converted from None pyspark.sql.utils.AnalysisException: Path does not exist: file:/data Traceback (most recent call last): File "C:\Users\Admin\anaconda3\envs\feast_21\lib\site-packages\feast\infra\offline_stores\contrib\spark_offline_store\spark_source.py", line 184, in get_table_query_string df = spark_session.read.format(self.file_format).load(self.path) File "C:\Users\Admin\anaconda3\envs\feast_21\lib\site-packages\pyspark\sql\readwriter.py", line 204, in load return self._df(self._jreader.load(path)) File "C:\Users\Admin\anaconda3\envs\feast_21\lib\site-packages\py4j\java_gateway.py", line 1304, in __call__ return_value = get_return_value( File "C:\Users\Admin\anaconda3\envs\feast_21\lib\site-packages\pyspark\sql\utils.py", line 117, in deco raise converted from None pyspark.sql.utils.AnalysisException: Path does not exist: file:/data Traceback (most recent call last): File "C:\Users\Admin\anaconda3\envs\feast_21\lib\runpy.py", line 192, in _run_module_as_main return _run_code(code, main_globals, None, File 
"C:\Users\Admin\anaconda3\envs\feast_21\lib\runpy.py", line 85, in _run_code exec(code, run_globals) File "C:\Users\Admin\anaconda3\envs\feast_21\Scripts\feast.exe\__main__.py", line 7, in <module> File "C:\Users\Admin\anaconda3\envs\feast_21\lib\site-packages\click\core.py", line 1137, in __call__ return self.main(*args, **kwargs) File "C:\Users\Admin\anaconda3\envs\feast_21\lib\site-packages\click\core.py", line 1062, in main rv = self.invoke(ctx) File "C:\Users\Admin\anaconda3\envs\feast_21\lib\site-packages\click\core.py", line 1668, in invoke return _process_result(sub_ctx.command.invoke(sub_ctx)) File "C:\Users\Admin\anaconda3\envs\feast_21\lib\site-packages\click\core.py", line 1404, in invoke return ctx.invoke(self.callback, **ctx.params) File "C:\Users\Admin\anaconda3\envs\feast_21\lib\site-packages\click\core.py", line 763, in invoke return __callback(*args, **kwargs) File "C:\Users\Admin\anaconda3\envs\feast_21\lib\site-packages\click\decorators.py", line 26, in new_func return f(get_current_context(), *args, **kwargs) File "C:\Users\Admin\anaconda3\envs\feast_21\lib\site-packages\feast\cli.py", line 489, in apply_total_command apply_total(repo_config, repo, skip_source_validation) File "C:\Users\Admin\anaconda3\envs\feast_21\lib\site-packages\feast\usage.py", line 280, in wrapper raise exc.with_traceback(traceback) File "C:\Users\Admin\anaconda3\envs\feast_21\lib\site-packages\feast\usage.py", line 269, in wrapper return func(*args, **kwargs) File "C:\Users\Admin\anaconda3\envs\feast_21\lib\site-packages\feast\repo_operations.py", line 276, in apply_total apply_total_with_repo_instance( File "C:\Users\Admin\anaconda3\envs\feast_21\lib\site-packages\feast\repo_operations.py", line 234, in apply_total_with_repo_instance data_source.validate(store.config) File "C:\Users\Admin\anaconda3\envs\feast_21\lib\site-packages\feast\infra\offline_stores\contrib\spark_offline_store\spark_source.py", line 149, in validate self.get_table_column_names_and_types(config) 
File "C:\Users\Admin\anaconda3\envs\feast_21\lib\site-packages\feast\infra\offline_stores\contrib\spark_offline_store\spark_source.py", line 165, in get_table_column_names_and_types df = spark_session.sql(f"SELECT * FROM {self.get_table_query_string()}") File "C:\Users\Admin\anaconda3\envs\feast_21\lib\site-packages\feast\infra\offline_stores\contrib\spark_offline_store\spark_source.py", line 190, in get_table_query_string df.createOrReplaceTempView(tmp_table_name) UnboundLocalError: local variable 'df' referenced before assignment
1.0
Spark source unable to accept parquet file folder path - Spark source has been working well with providing a single parquet file path. But when a folder path is provided it shows following error: To adjust logging level use sc.setLogLevel(newLevel). For SparkR, use setLogLevel(newLevel). 07/07/2022 02:43:18 PM ERROR:Spark read of file source failed. Traceback (most recent call last): File "C:\Users\Admin\anaconda3\envs\feast_21\lib\site-packages\feast\infra\offline_stores\contrib\spark_offline_store\spark_source.py", line 184, in get_table_query_string df = spark_session.read.format(self.file_format).load(self.path) File "C:\Users\Admin\anaconda3\envs\feast_21\lib\site-packages\pyspark\sql\readwriter.py", line 204, in load return self._df(self._jreader.load(path)) File "C:\Users\Admin\anaconda3\envs\feast_21\lib\site-packages\py4j\java_gateway.py", line 1304, in __call__ return_value = get_return_value( File "C:\Users\Admin\anaconda3\envs\feast_21\lib\site-packages\pyspark\sql\utils.py", line 117, in deco raise converted from None pyspark.sql.utils.AnalysisException: Path does not exist: file:/data Traceback (most recent call last): File "C:\Users\Admin\anaconda3\envs\feast_21\lib\site-packages\feast\infra\offline_stores\contrib\spark_offline_store\spark_source.py", line 184, in get_table_query_string df = spark_session.read.format(self.file_format).load(self.path) File "C:\Users\Admin\anaconda3\envs\feast_21\lib\site-packages\pyspark\sql\readwriter.py", line 204, in load return self._df(self._jreader.load(path)) File "C:\Users\Admin\anaconda3\envs\feast_21\lib\site-packages\py4j\java_gateway.py", line 1304, in __call__ return_value = get_return_value( File "C:\Users\Admin\anaconda3\envs\feast_21\lib\site-packages\pyspark\sql\utils.py", line 117, in deco raise converted from None pyspark.sql.utils.AnalysisException: Path does not exist: file:/data Traceback (most recent call last): File "C:\Users\Admin\anaconda3\envs\feast_21\lib\runpy.py", line 192, in 
_run_module_as_main return _run_code(code, main_globals, None, File "C:\Users\Admin\anaconda3\envs\feast_21\lib\runpy.py", line 85, in _run_code exec(code, run_globals) File "C:\Users\Admin\anaconda3\envs\feast_21\Scripts\feast.exe\__main__.py", line 7, in <module> File "C:\Users\Admin\anaconda3\envs\feast_21\lib\site-packages\click\core.py", line 1137, in __call__ return self.main(*args, **kwargs) File "C:\Users\Admin\anaconda3\envs\feast_21\lib\site-packages\click\core.py", line 1062, in main rv = self.invoke(ctx) File "C:\Users\Admin\anaconda3\envs\feast_21\lib\site-packages\click\core.py", line 1668, in invoke return _process_result(sub_ctx.command.invoke(sub_ctx)) File "C:\Users\Admin\anaconda3\envs\feast_21\lib\site-packages\click\core.py", line 1404, in invoke return ctx.invoke(self.callback, **ctx.params) File "C:\Users\Admin\anaconda3\envs\feast_21\lib\site-packages\click\core.py", line 763, in invoke return __callback(*args, **kwargs) File "C:\Users\Admin\anaconda3\envs\feast_21\lib\site-packages\click\decorators.py", line 26, in new_func return f(get_current_context(), *args, **kwargs) File "C:\Users\Admin\anaconda3\envs\feast_21\lib\site-packages\feast\cli.py", line 489, in apply_total_command apply_total(repo_config, repo, skip_source_validation) File "C:\Users\Admin\anaconda3\envs\feast_21\lib\site-packages\feast\usage.py", line 280, in wrapper raise exc.with_traceback(traceback) File "C:\Users\Admin\anaconda3\envs\feast_21\lib\site-packages\feast\usage.py", line 269, in wrapper return func(*args, **kwargs) File "C:\Users\Admin\anaconda3\envs\feast_21\lib\site-packages\feast\repo_operations.py", line 276, in apply_total apply_total_with_repo_instance( File "C:\Users\Admin\anaconda3\envs\feast_21\lib\site-packages\feast\repo_operations.py", line 234, in apply_total_with_repo_instance data_source.validate(store.config) File "C:\Users\Admin\anaconda3\envs\feast_21\lib\site-packages\feast\infra\offline_stores\contrib\spark_offline_store\spark_source.py", 
line 149, in validate self.get_table_column_names_and_types(config) File "C:\Users\Admin\anaconda3\envs\feast_21\lib\site-packages\feast\infra\offline_stores\contrib\spark_offline_store\spark_source.py", line 165, in get_table_column_names_and_types df = spark_session.sql(f"SELECT * FROM {self.get_table_query_string()}") File "C:\Users\Admin\anaconda3\envs\feast_21\lib\site-packages\feast\infra\offline_stores\contrib\spark_offline_store\spark_source.py", line 190, in get_table_query_string df.createOrReplaceTempView(tmp_table_name) UnboundLocalError: local variable 'df' referenced before assignment
non_process
spark source unable to accept parquet file folder path spark source has been working well with providing a single parquet file path but when a folder path is provided it shows following error to adjust logging level use sc setloglevel newlevel for sparkr use setloglevel newlevel pm error spark read of file source failed traceback most recent call last file c users admin envs feast lib site packages feast infra offline stores contrib spark offline store spark source py line in get table query string df spark session read format self file format load self path file c users admin envs feast lib site packages pyspark sql readwriter py line in load return self df self jreader load path file c users admin envs feast lib site packages java gateway py line in call return value get return value file c users admin envs feast lib site packages pyspark sql utils py line in deco raise converted from none pyspark sql utils analysisexception path does not exist file data traceback most recent call last file c users admin envs feast lib site packages feast infra offline stores contrib spark offline store spark source py line in get table query string df spark session read format self file format load self path file c users admin envs feast lib site packages pyspark sql readwriter py line in load return self df self jreader load path file c users admin envs feast lib site packages java gateway py line in call return value get return value file c users admin envs feast lib site packages pyspark sql utils py line in deco raise converted from none pyspark sql utils analysisexception path does not exist file data traceback most recent call last file c users admin envs feast lib runpy py line in run module as main return run code code main globals none file c users admin envs feast lib runpy py line in run code exec code run globals file c users admin envs feast scripts feast exe main py line in file c users admin envs feast lib site packages click core py line in call return self main 
args kwargs file c users admin envs feast lib site packages click core py line in main rv self invoke ctx file c users admin envs feast lib site packages click core py line in invoke return process result sub ctx command invoke sub ctx file c users admin envs feast lib site packages click core py line in invoke return ctx invoke self callback ctx params file c users admin envs feast lib site packages click core py line in invoke return callback args kwargs file c users admin envs feast lib site packages click decorators py line in new func return f get current context args kwargs file c users admin envs feast lib site packages feast cli py line in apply total command apply total repo config repo skip source validation file c users admin envs feast lib site packages feast usage py line in wrapper raise exc with traceback traceback file c users admin envs feast lib site packages feast usage py line in wrapper return func args kwargs file c users admin envs feast lib site packages feast repo operations py line in apply total apply total with repo instance file c users admin envs feast lib site packages feast repo operations py line in apply total with repo instance data source validate store config file c users admin envs feast lib site packages feast infra offline stores contrib spark offline store spark source py line in validate self get table column names and types config file c users admin envs feast lib site packages feast infra offline stores contrib spark offline store spark source py line in get table column names and types df spark session sql f select from self get table query string file c users admin envs feast lib site packages feast infra offline stores contrib spark offline store spark source py line in get table query string df createorreplacetempview tmp table name unboundlocalerror local variable df referenced before assignment
0
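The feast traceback in the record above ends in `UnboundLocalError: local variable 'df' referenced before assignment`: the Spark read raises, the exception is logged and swallowed, and `df` is then used unconditionally. A minimal self-contained sketch of that bug pattern and one possible guard — `loader` here is a stand-in for the Spark read call, not feast's actual API:

```python
# Illustration of the UnboundLocalError pattern from the traceback above:
# a variable assigned only inside a `try` block is used after the
# exception has been caught and logged.
def get_table(path, loader):
    try:
        df = loader(path)          # may raise, leaving `df` unbound
    except FileNotFoundError:
        print("read of file source failed")
    return df                      # UnboundLocalError if loader raised

def get_table_fixed(path, loader):
    df = None
    try:
        df = loader(path)
    except FileNotFoundError:
        print("read of file source failed")
    if df is None:                 # fail loudly instead of crashing later
        raise ValueError(f"could not load {path}")
    return df
```

The guarded version surfaces the real cause (the unreadable path) instead of the confusing secondary `UnboundLocalError`.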
20,868
27,657,037,685
IssuesEvent
2023-03-12 03:48:57
open-telemetry/opentelemetry-collector-contrib
https://api.github.com/repos/open-telemetry/opentelemetry-collector-contrib
closed
[processor/k8sattributes] setting container.id on traces
processor/k8sattributes
### Component(s) processor/k8sattributes ### Describe the issue you're reporting Hi, I'm using the k8sattributes processor to enrich `container.id` on OTEL traces using `k8s.pod.ip` or `k8s.pod.uid` to match the pod and `k8s.container.name` and `k8s.container.restart_count` to match the container within the pod. I can appreciate that `k8s.container.restart_count` allows me to definitively match the container instance (if k8s restarts the container within the pod without restarting the pod). In practice, though, I wonder if this added complexity is necessary. For example, I can't find or think of an easy way to pass in the actual `k8s.container.restart_count` as an env variable from a k8s deployment yaml (i.e., k8s can't tell me this via downward API, and I can't think of a way to easily keep track of it myself in a persistent env variable). Perhaps I'm missing something obvious here? So barring that, if I hardcode `k8s.container.restart_count=0` as an env variable in the deployment yaml, that works unless of course k8s restarts the container. In practice, would it be reasonable to have a mode wherein the k8sattributes processor just assumes you want to match to the current running instance of the container? You obviously can't have 2 running at the same time, and I would think the only "glare" condition here is for traces emitted by the old container right before it died but which haven't yet been processed by the collector; if the new container immediately launches while those traces are in flight, they would be stamped by the k8sattributes processor with the wrong `container.id`. That said, this seems like a rare event, and even if it did occur, it is probably ok for most use caes? I'm suggesting that we have a mode in k8sattributes which will just assume you are talking about the latest instance of a given container in a pod if you don't explicitly set `k8s.container.restart_count`. PR forthcoming. thoughts?
1.0
[processor/k8sattributes] setting container.id on traces - ### Component(s) processor/k8sattributes ### Describe the issue you're reporting Hi, I'm using the k8sattributes processor to enrich `container.id` on OTEL traces using `k8s.pod.ip` or `k8s.pod.uid` to match the pod and `k8s.container.name` and `k8s.container.restart_count` to match the container within the pod. I can appreciate that `k8s.container.restart_count` allows me to definitively match the container instance (if k8s restarts the container within the pod without restarting the pod). In practice, though, I wonder if this added complexity is necessary. For example, I can't find or think of an easy way to pass in the actual `k8s.container.restart_count` as an env variable from a k8s deployment yaml (i.e., k8s can't tell me this via downward API, and I can't think of a way to easily keep track of it myself in a persistent env variable). Perhaps I'm missing something obvious here? So barring that, if I hardcode `k8s.container.restart_count=0` as an env variable in the deployment yaml, that works unless of course k8s restarts the container. In practice, would it be reasonable to have a mode wherein the k8sattributes processor just assumes you want to match to the current running instance of the container? You obviously can't have 2 running at the same time, and I would think the only "glare" condition here is for traces emitted by the old container right before it died but which haven't yet been processed by the collector; if the new container immediately launches while those traces are in flight, they would be stamped by the k8sattributes processor with the wrong `container.id`. That said, this seems like a rare event, and even if it did occur, it is probably ok for most use caes? I'm suggesting that we have a mode in k8sattributes which will just assume you are talking about the latest instance of a given container in a pod if you don't explicitly set `k8s.container.restart_count`. PR forthcoming. 
thoughts?
process
setting container id on traces component s processor describe the issue you re reporting hi i m using the processor to enrich container id on otel traces using pod ip or pod uid to match the pod and container name and container restart count to match the container within the pod i can appreciate that container restart count allows me to definitively match the container instance if restarts the container within the pod without restarting the pod in practice though i wonder if this added complexity is necessary for example i can t find or think of an easy way to pass in the actual container restart count as an env variable from a deployment yaml i e can t tell me this via downward api and i can t think of a way to easily keep track of it myself in a persistent env variable perhaps i m missing something obvious here so barring that if i hardcode container restart count as an env variable in the deployment yaml that works unless of course restarts the container in practice would it be reasonable to have a mode wherein the processor just assumes you want to match to the current running instance of the container you obviously can t have running at the same time and i would think the only glare condition here is for traces emitted by the old container right before it died but which haven t yet been processed by the collector if the new container immediately launches while those traces are in flight they would be stamped by the processor with the wrong container id that said this seems like a rare event and even if it did occur it is probably ok for most use caes i m suggesting that we have a mode in which will just assume you are talking about the latest instance of a given container in a pod if you don t explicitly set container restart count pr forthcoming thoughts
1
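The k8sattributes record above proposes that, when `k8s.container.restart_count` is not set, the processor could assume the caller means the latest running instance of the named container. A hedged sketch of that matching rule — the `match_container` helper and the dict shapes are invented for illustration and are not the processor's actual API:

```python
# Sketch of the proposed matching mode: explicit restart_count matches
# exactly; a missing restart_count falls back to the newest instance
# (highest restart count) of the named container in the pod.
def match_container(pod_containers, name, restart_count=None):
    candidates = [c for c in pod_containers if c["name"] == name]
    if not candidates:
        return None
    if restart_count is None:
        # no explicit count given: assume the latest instance
        return max(candidates, key=lambda c: c["restart_count"])
    for c in candidates:
        if c["restart_count"] == restart_count:
            return c
    return None
```

The only ambiguity, as the record notes, is in-flight traces from a just-killed container, which this fallback would stamp with the newest instance's id.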
46,767
13,055,973,241
IssuesEvent
2020-07-30 03:16:38
icecube-trac/tix2
https://api.github.com/repos/icecube-trac/tix2
opened
dst - test dst::dst16.py is failing across the board (Trac #1847)
Incomplete Migration Migrated from Trac combo reconstruction defect
Migrated from https://code.icecube.wisc.edu/ticket/1847 ```json { "status": "closed", "changetime": "2016-09-01T14:56:02", "description": "{{{\n187/425 Test #187: dst::dst16.py ..................................................***Failed 6.66 sec\nINFO (I3Tray): qify-dst: WritePFrame = True (I3Tray.py:226 in __call__)\nINFO (I3Tray): Adding Anonymous Module of type '<class 'icecube.dst.dsttest.CheckFrameIndex'>' with name 'CheckFrameIndex_0000' (I3Tray.py:75 in _create_name)\nINFO (I3Tray): Adding Anonymous Module of type 'CountFrames' with name 'CountFrames_0000' (I3Tray.py:75 in _create_name)\nINFO (I3Tray): Adding Anonymous Module of type 'Dump' with name 'Dump_0000' (I3Tray.py:75 in _create_name)\nINFO (I3Tray): Adding Anonymous Module of type 'CountFrames' with name 'CountFrames_0001' (I3Tray.py:75 in _create_name)\nINFO (I3Tray): Adding Anonymous Module of type '<class 'icecube.dst.dsttest.CheckFrameIndex'>' with name 'CheckFrameIndex_0001' (I3Tray.py:75 in _create_name)\nINFO (I3Module): Prefixing with filename/url /build/ports/test-data/sim/GeoCalibDetectorStatus_IC80_DC6.54655.i3.gz (I3InfiniteSource.cxx:55 in virtual void I3InfiniteSource::Configure())\nWARN (I3DSTModule16): I3DirectHitsValues 'PoleMuonLlhFitDirectHitsBaseC' will be ignored as per user request. (I3DSTModule16.cxx:117 in virtual void I3DSTModule16::Configure())\nINFO (dst): dst pixel size: 0.0337076 deg (HealPixCoordinates.cxx:68 in void HealPixCoordinate::ComputeBins(int))\nINFO (I3DSTExtractor16): Skipping frame (I3DSTExtractor16.cxx:116 in virtual void I3DSTExtractor16::Process())\nINFO (I3DSTExtractor16): Skipping frame (I3DSTExtractor16.cxx:116 in virtual void I3DSTExtractor16::Process())\nINFO (I3DSTExtractor16): Skipping frame (I3DSTExtractor16.cxx:116 in virtual void I3DSTExtractor16::Process())\nINFO (I3DSTExtractor16): Skipping frame (I3DSTExtractor16.cxx:116 in virtual void I3DSTExtractor16::Process())\nINFO (I3DSTExtractor16): header not writtten.. 
buffering (I3DSTExtractor16.cxx:124 in virtual void I3DSTExtractor16::Process())\nINFO (I3DSTExtractor16): header not writtten.. buffering (I3DSTExtractor16.cxx:124 in virtual void I3DSTExtractor16::Process())\nINFO (dst): dst pixel size: 0.0337076 deg (HealPixCoordinates.cxx:68 in void HealPixCoordinate::ComputeBins(int))\nFATAL (phys-services): Physics frames cannot be split (I3Splitter.cxx:36 in I3FramePtr I3Splitter::GetNextSubEvent(I3FramePtr))\nERROR (I3Module): UnpackDST: Exception thrown (I3Module.cxx:118 in void I3Module::Do(void (I3Module::*)()))\n------------------------- This is frame number 1 -------------------------\n[ I3Frame (TrayInfo):\n '2008-09-29T20:07:59.914225' [TrayInfo] ==> I3TrayInfo (13528)\n]\n------------------------- This is frame number 2 -------------------------\n[ I3Frame (Geometry):\n 'Gindex' [Geometry] ==> I3PODHolder<int> (unk)\n 'I3Geometry' [Geometry] ==> I3Geometry (436738)\n]\n------------------------- This is frame number 3 -------------------------\n[ I3Frame (Calibration):\n 'Cindex' [Calibration] ==> I3PODHolder<int> (unk)\n 'Gindex' [Geometry] ==> I3PODHolder<int> (unk)\n 'I3Calibration' [Calibration] ==> I3Calibration (120071635)\n 'I3Geometry' [Geometry] ==> I3Geometry (436738)\n]\n------------------------- This is frame number 4 -------------------------\n[ I3Frame (DetectorStatus):\n 'Cindex' [Calibration] ==> I3PODHolder<int> (unk)\n 'Dindex' [DetectorStatus] ==> I3PODHolder<int> (unk)\n 'Gindex' [Geometry] ==> I3PODHolder<int> (unk)\n 'I3Calibration' [Calibration] ==> I3Calibration (120071635)\n 'I3DetectorStatus' [DetectorStatus] ==> I3DetectorStatus (995670)\n 'I3Geometry' [Geometry] ==> I3Geometry (436738)\n]\nTraceback (most recent call last):\n File \"/Volumes/Macintosh_HD2/build/buildslave/barbas/OS_X_Yosemite/source/dst/resources/test/dst16.py\", line 64, in <module>\n tray.Execute(14)\n File \"/Volumes/Macintosh_HD2/build/buildslave/barbas/OS_X_Yosemite/build/lib/I3Tray.py\", line 238, in Execute\n 
super(I3Tray, self).Execute(args[0])\nRuntimeError: Physics frames cannot be split (in I3FramePtr I3Splitter::GetNextSubEvent(I3FramePtr))\n}}}", "reporter": "nega", "cc": "", "resolution": "duplicate", "_ts": "1472741762631521", "component": "combo reconstruction", "summary": "dst - test dst::dst16.py is failing across the board", "priority": "normal", "keywords": "dst tests", "time": "2016-09-01T14:51:34", "milestone": "", "owner": "juancarlos", "type": "defect" } ```
1.0
dst - test dst::dst16.py is failing across the board (Trac #1847) - Migrated from https://code.icecube.wisc.edu/ticket/1847 ```json { "status": "closed", "changetime": "2016-09-01T14:56:02", "description": "{{{\n187/425 Test #187: dst::dst16.py ..................................................***Failed 6.66 sec\nINFO (I3Tray): qify-dst: WritePFrame = True (I3Tray.py:226 in __call__)\nINFO (I3Tray): Adding Anonymous Module of type '<class 'icecube.dst.dsttest.CheckFrameIndex'>' with name 'CheckFrameIndex_0000' (I3Tray.py:75 in _create_name)\nINFO (I3Tray): Adding Anonymous Module of type 'CountFrames' with name 'CountFrames_0000' (I3Tray.py:75 in _create_name)\nINFO (I3Tray): Adding Anonymous Module of type 'Dump' with name 'Dump_0000' (I3Tray.py:75 in _create_name)\nINFO (I3Tray): Adding Anonymous Module of type 'CountFrames' with name 'CountFrames_0001' (I3Tray.py:75 in _create_name)\nINFO (I3Tray): Adding Anonymous Module of type '<class 'icecube.dst.dsttest.CheckFrameIndex'>' with name 'CheckFrameIndex_0001' (I3Tray.py:75 in _create_name)\nINFO (I3Module): Prefixing with filename/url /build/ports/test-data/sim/GeoCalibDetectorStatus_IC80_DC6.54655.i3.gz (I3InfiniteSource.cxx:55 in virtual void I3InfiniteSource::Configure())\nWARN (I3DSTModule16): I3DirectHitsValues 'PoleMuonLlhFitDirectHitsBaseC' will be ignored as per user request. 
(I3DSTModule16.cxx:117 in virtual void I3DSTModule16::Configure())\nINFO (dst): dst pixel size: 0.0337076 deg (HealPixCoordinates.cxx:68 in void HealPixCoordinate::ComputeBins(int))\nINFO (I3DSTExtractor16): Skipping frame (I3DSTExtractor16.cxx:116 in virtual void I3DSTExtractor16::Process())\nINFO (I3DSTExtractor16): Skipping frame (I3DSTExtractor16.cxx:116 in virtual void I3DSTExtractor16::Process())\nINFO (I3DSTExtractor16): Skipping frame (I3DSTExtractor16.cxx:116 in virtual void I3DSTExtractor16::Process())\nINFO (I3DSTExtractor16): Skipping frame (I3DSTExtractor16.cxx:116 in virtual void I3DSTExtractor16::Process())\nINFO (I3DSTExtractor16): header not writtten.. buffering (I3DSTExtractor16.cxx:124 in virtual void I3DSTExtractor16::Process())\nINFO (I3DSTExtractor16): header not writtten.. buffering (I3DSTExtractor16.cxx:124 in virtual void I3DSTExtractor16::Process())\nINFO (dst): dst pixel size: 0.0337076 deg (HealPixCoordinates.cxx:68 in void HealPixCoordinate::ComputeBins(int))\nFATAL (phys-services): Physics frames cannot be split (I3Splitter.cxx:36 in I3FramePtr I3Splitter::GetNextSubEvent(I3FramePtr))\nERROR (I3Module): UnpackDST: Exception thrown (I3Module.cxx:118 in void I3Module::Do(void (I3Module::*)()))\n------------------------- This is frame number 1 -------------------------\n[ I3Frame (TrayInfo):\n '2008-09-29T20:07:59.914225' [TrayInfo] ==> I3TrayInfo (13528)\n]\n------------------------- This is frame number 2 -------------------------\n[ I3Frame (Geometry):\n 'Gindex' [Geometry] ==> I3PODHolder<int> (unk)\n 'I3Geometry' [Geometry] ==> I3Geometry (436738)\n]\n------------------------- This is frame number 3 -------------------------\n[ I3Frame (Calibration):\n 'Cindex' [Calibration] ==> I3PODHolder<int> (unk)\n 'Gindex' [Geometry] ==> I3PODHolder<int> (unk)\n 'I3Calibration' [Calibration] ==> I3Calibration (120071635)\n 'I3Geometry' [Geometry] ==> I3Geometry (436738)\n]\n------------------------- This is frame number 4 
-------------------------\n[ I3Frame (DetectorStatus):\n 'Cindex' [Calibration] ==> I3PODHolder<int> (unk)\n 'Dindex' [DetectorStatus] ==> I3PODHolder<int> (unk)\n 'Gindex' [Geometry] ==> I3PODHolder<int> (unk)\n 'I3Calibration' [Calibration] ==> I3Calibration (120071635)\n 'I3DetectorStatus' [DetectorStatus] ==> I3DetectorStatus (995670)\n 'I3Geometry' [Geometry] ==> I3Geometry (436738)\n]\nTraceback (most recent call last):\n File \"/Volumes/Macintosh_HD2/build/buildslave/barbas/OS_X_Yosemite/source/dst/resources/test/dst16.py\", line 64, in <module>\n tray.Execute(14)\n File \"/Volumes/Macintosh_HD2/build/buildslave/barbas/OS_X_Yosemite/build/lib/I3Tray.py\", line 238, in Execute\n super(I3Tray, self).Execute(args[0])\nRuntimeError: Physics frames cannot be split (in I3FramePtr I3Splitter::GetNextSubEvent(I3FramePtr))\n}}}", "reporter": "nega", "cc": "", "resolution": "duplicate", "_ts": "1472741762631521", "component": "combo reconstruction", "summary": "dst - test dst::dst16.py is failing across the board", "priority": "normal", "keywords": "dst tests", "time": "2016-09-01T14:51:34", "milestone": "", "owner": "juancarlos", "type": "defect" } ```
non_process
dst test dst py is failing across the board trac migrated from json status closed changetime description test dst py failed sec ninfo qify dst writepframe true py in call ninfo adding anonymous module of type with name checkframeindex py in create name ninfo adding anonymous module of type countframes with name countframes py in create name ninfo adding anonymous module of type dump with name dump py in create name ninfo adding anonymous module of type countframes with name countframes py in create name ninfo adding anonymous module of type with name checkframeindex py in create name ninfo prefixing with filename url build ports test data sim geocalibdetectorstatus gz cxx in virtual void configure nwarn polemuonllhfitdirecthitsbasec will be ignored as per user request cxx in virtual void configure ninfo dst dst pixel size deg healpixcoordinates cxx in void healpixcoordinate computebins int ninfo skipping frame cxx in virtual void process ninfo skipping frame cxx in virtual void process ninfo skipping frame cxx in virtual void process ninfo skipping frame cxx in virtual void process ninfo header not writtten buffering cxx in virtual void process ninfo header not writtten buffering cxx in virtual void process ninfo dst dst pixel size deg healpixcoordinates cxx in void healpixcoordinate computebins int nfatal phys services physics frames cannot be split cxx in getnextsubevent nerror unpackdst exception thrown cxx in void do void n this is frame number n n n this is frame number n unk n n n this is frame number n unk n gindex unk n n n n this is frame number n unk n dindex unk n gindex unk n n n n ntraceback most recent call last n file volumes macintosh build buildslave barbas os x yosemite source dst resources test py line in n tray execute n file volumes macintosh build buildslave barbas os x yosemite build lib py line in execute n super self execute args nruntimeerror physics frames cannot be split in getnextsubevent n reporter nega cc resolution duplicate ts 
component combo reconstruction summary dst test dst py is failing across the board priority normal keywords dst tests time milestone owner juancarlos type defect
0
15,335
19,472,002,354
IssuesEvent
2021-12-24 03:51:20
emily-writes-poems/emily-writes-poems-processing
https://api.github.com/repos/emily-writes-poems/emily-writes-poems-processing
closed
migrate: create collection
script migration processing
build Javascript/Electron functionality from existing Python script `mongo_collection.py` insert new collection given collection id, collection name, and collection summary (optional)
1.0
migrate: create collection - build Javascript/Electron functionality from existing Python script `mongo_collection.py` insert new collection given collection id, collection name, and collection summary (optional)
process
migrate create collection build javascript electron functionality from existing python script mongo collection py insert new collection given collection id collection name and collection summary optional
1
1,786
4,519,124,961
IssuesEvent
2016-09-06 04:03:13
csdperic/csdp
https://api.github.com/repos/csdperic/csdp
opened
There is a field "reason" in offline, but it is missing on the web page; please add it
ncm process
![image](https://cloud.githubusercontent.com/assets/17737516/18261343/67dbc328-7429-11e6-89b7-6b5e863d9e7a.png) Add it on the web page; it is used to record the reasons for rreassign, tryagain and suspend ![image](https://cloud.githubusercontent.com/assets/17737516/18261349/9424cba0-7429-11e6-978a-ffcd10a628e1.png) In the Reason field, the display format is as follows: when suspend is clicked, fill in the Reason field with Suspend: xxxxxxxx; if it is a tryagain, display Tryagain: xxxxxx
1.0
There is a field "reason" in offline, but it is missing on the web page; please add it - ![image](https://cloud.githubusercontent.com/assets/17737516/18261343/67dbc328-7429-11e6-89b7-6b5e863d9e7a.png) Add it on the web page; it is used to record the reasons for rreassign, tryagain and suspend ![image](https://cloud.githubusercontent.com/assets/17737516/18261349/9424cba0-7429-11e6-978a-ffcd10a628e1.png) In the Reason field, the display format is as follows: when suspend is clicked, fill in the Reason field with Suspend: xxxxxxxx; if it is a tryagain, display Tryagain: xxxxxx
process
there is a field reason in offline but it is missing on the web page please add it add it on the web page it is used to record the reasons for rreassign tryagain and suspend in the reason field the display format is as follows when suspend is clicked fill in the reason field with suspend: xxxxxxxx if it is a tryagain display tryagain: xxxxxx
1
16,803
22,046,337,682
IssuesEvent
2022-05-30 02:26:36
EveryAgile/Backend
https://api.github.com/repos/EveryAgile/Backend
opened
[FEAT] Sign-up and login
processing feature
## Description Sign-up and login ## Checklist - [ ] Create an AWS free-tier account - [ ] Create the member entity and connect it to the DB (planning to use JPA) - [ ] Use JWT and create a token DB - [ ] Apply Spring Security - [ ] Create sign-up and login related controllers and DAO - [ ] Create sign-up and login related services - [ ] Error handling ## References ## Related discussion
1.0
[FEAT] Sign-up and login - ## Description Sign-up and login ## Checklist - [ ] Create an AWS free-tier account - [ ] Create the member entity and connect it to the DB (planning to use JPA) - [ ] Use JWT and create a token DB - [ ] Apply Spring Security - [ ] Create sign-up and login related controllers and DAO - [ ] Create sign-up and login related services - [ ] Error handling ## References ## Related discussion
process
sign up and login description sign up and login checklist create an aws free tier account create the member entity and connect it to the db planning to use jpa use jwt and create a token db apply spring security create sign up and login related controllers and dao create sign up and login related services error handling references related discussion
1
20,517
27,174,923,063
IssuesEvent
2023-02-17 23:52:49
0xPolygonMiden/miden-vm
https://api.github.com/repos/0xPolygonMiden/miden-vm
closed
Potential refactor of crypto operations
enhancement assembly processor
According to @bobbinth, we could make `mtree_set` a bit more efficient by changing semantics of `MRUPDATE` from: [V, d, i, R, NV] -> [NR, d, i, R, NV] to [V, d, i, R, NV] -> [V, d, i, NR, NV] This should shave off 3 VM cycles from `mtree_set` and get it down to 11 cycles, but: - It would complicate handling of constraints for this operation since we change items at odd stack positions. - It would make `mtree_cwm` operation quite a bit less efficient.
1.0
Potential refactor of crypto operations - According to @bobbinth, we could make `mtree_set` a bit more efficient by changing semantics of `MRUPDATE` from: [V, d, i, R, NV] -> [NR, d, i, R, NV] to [V, d, i, R, NV] -> [V, d, i, NR, NV] This should shave off 3 VM cycles from `mtree_set` and get it down to 11 cycles, but: - It would complicate handling of constraints for this operation since we change items at odd stack positions. - It would make `mtree_cwm` operation quite a bit less efficient.
process
potential refactor of crypto operations according to bobbinth we could make mtree set a bit more efficient by changing semantics of mrupdate from to this should shave off vm cycles from mtree set and get it down to cycles but it would complicate handling of constraints for this operation since we change items at odd stack positions it would make mtree cwm operation quite a bit less efficient
1
142,310
19,089,401,393
IssuesEvent
2021-11-29 10:22:22
tharun453/samples
https://api.github.com/repos/tharun453/samples
opened
CVE-2018-8292 (High) detected in system.net.http.4.3.0.nupkg, system.net.http.4.3.2.nupkg
security vulnerability
## CVE-2018-8292 - High Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>system.net.http.4.3.0.nupkg</b>, <b>system.net.http.4.3.2.nupkg</b></p></summary> <p> <details><summary><b>system.net.http.4.3.0.nupkg</b></p></summary> <p>Provides a programming interface for modern HTTP applications, including HTTP client components that...</p> <p>Library home page: <a href="https://api.nuget.org/packages/system.net.http.4.3.0.nupkg">https://api.nuget.org/packages/system.net.http.4.3.0.nupkg</a></p> <p>Path to dependency file: samples/machine-learning/tutorials/ProductSalesAnomalyDetection/ProductSalesAnomalyDetection.csproj</p> <p>Path to vulnerable library: /usr/share/dotnet/sdk/NuGetFallbackFolder/system.net.http/4.3.0/system.net.http.4.3.0.nupkg</p> <p> Dependency Hierarchy: - xunit.2.4.1.nupkg (Root Library) - xunit.assert.2.4.1.nupkg - netstandard.library.1.6.1.nupkg - :x: **system.net.http.4.3.0.nupkg** (Vulnerable Library) </details> <details><summary><b>system.net.http.4.3.2.nupkg</b></p></summary> <p>Provides a programming interface for modern HTTP applications, including HTTP client components that...</p> <p>Library home page: <a href="https://api.nuget.org/packages/system.net.http.4.3.2.nupkg">https://api.nuget.org/packages/system.net.http.4.3.2.nupkg</a></p> <p>Path to dependency file: samples/framework/libraries/net40-library/src/Library/Library.csproj</p> <p>Path to vulnerable library: nuget/packages/system.net.http/4.3.2/system.net.http.4.3.2.nupkg</p> <p> Dependency Hierarchy: - :x: **system.net.http.4.3.2.nupkg** (Vulnerable Library) </details> <p>Found in HEAD commit: <a href="https://github.com/tharun453/samples/commit/0d7f1931b9759c22f0469b959114a5d94f8f92e4">0d7f1931b9759c22f0469b959114a5d94f8f92e4</a></p> <p>Found in base branch: <b>main</b></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary> <p> An information disclosure vulnerability exists in .NET Core when authentication information is inadvertently exposed in a redirect, aka ".NET Core Information Disclosure Vulnerability." This affects .NET Core 2.1, .NET Core 1.0, .NET Core 1.1, PowerShell Core 6.0. <p>Publish Date: 2018-10-10 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2018-8292>CVE-2018-8292</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: High - Integrity Impact: None - Availability Impact: None </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://github.com/dotnet/announcements/issues/88">https://github.com/dotnet/announcements/issues/88</a></p> <p>Release Date: 2018-10-10</p> <p>Fix Resolution: System.Net.Http - 4.3.4;Microsoft.PowerShell.Commands.Utility - 6.1.0-rc.1</p> </p> </details> <p></p> *** Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
True
CVE-2018-8292 (High) detected in system.net.http.4.3.0.nupkg, system.net.http.4.3.2.nupkg - ## CVE-2018-8292 - High Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>system.net.http.4.3.0.nupkg</b>, <b>system.net.http.4.3.2.nupkg</b></p></summary> <p> <details><summary><b>system.net.http.4.3.0.nupkg</b></p></summary> <p>Provides a programming interface for modern HTTP applications, including HTTP client components that...</p> <p>Library home page: <a href="https://api.nuget.org/packages/system.net.http.4.3.0.nupkg">https://api.nuget.org/packages/system.net.http.4.3.0.nupkg</a></p> <p>Path to dependency file: samples/machine-learning/tutorials/ProductSalesAnomalyDetection/ProductSalesAnomalyDetection.csproj</p> <p>Path to vulnerable library: /usr/share/dotnet/sdk/NuGetFallbackFolder/system.net.http/4.3.0/system.net.http.4.3.0.nupkg</p> <p> Dependency Hierarchy: - xunit.2.4.1.nupkg (Root Library) - xunit.assert.2.4.1.nupkg - netstandard.library.1.6.1.nupkg - :x: **system.net.http.4.3.0.nupkg** (Vulnerable Library) </details> <details><summary><b>system.net.http.4.3.2.nupkg</b></p></summary> <p>Provides a programming interface for modern HTTP applications, including HTTP client components that...</p> <p>Library home page: <a href="https://api.nuget.org/packages/system.net.http.4.3.2.nupkg">https://api.nuget.org/packages/system.net.http.4.3.2.nupkg</a></p> <p>Path to dependency file: samples/framework/libraries/net40-library/src/Library/Library.csproj</p> <p>Path to vulnerable library: nuget/packages/system.net.http/4.3.2/system.net.http.4.3.2.nupkg</p> <p> Dependency Hierarchy: - :x: **system.net.http.4.3.2.nupkg** (Vulnerable Library) </details> <p>Found in HEAD commit: <a href="https://github.com/tharun453/samples/commit/0d7f1931b9759c22f0469b959114a5d94f8f92e4">0d7f1931b9759c22f0469b959114a5d94f8f92e4</a></p> <p>Found in base branch: <b>main</b></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary> <p> An information disclosure vulnerability exists in .NET Core when authentication information is inadvertently exposed in a redirect, aka ".NET Core Information Disclosure Vulnerability." This affects .NET Core 2.1, .NET Core 1.0, .NET Core 1.1, PowerShell Core 6.0. <p>Publish Date: 2018-10-10 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2018-8292>CVE-2018-8292</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: High - Integrity Impact: None - Availability Impact: None </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://github.com/dotnet/announcements/issues/88">https://github.com/dotnet/announcements/issues/88</a></p> <p>Release Date: 2018-10-10</p> <p>Fix Resolution: System.Net.Http - 4.3.4;Microsoft.PowerShell.Commands.Utility - 6.1.0-rc.1</p> </p> </details> <p></p> *** Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
non_process
cve high detected in system net http nupkg system net http nupkg cve high severity vulnerability vulnerable libraries system net http nupkg system net http nupkg system net http nupkg provides a programming interface for modern http applications including http client components that library home page a href path to dependency file samples machine learning tutorials productsalesanomalydetection productsalesanomalydetection csproj path to vulnerable library usr share dotnet sdk nugetfallbackfolder system net http system net http nupkg dependency hierarchy xunit nupkg root library xunit assert nupkg netstandard library nupkg x system net http nupkg vulnerable library system net http nupkg provides a programming interface for modern http applications including http client components that library home page a href path to dependency file samples framework libraries library src library library csproj path to vulnerable library nuget packages system net http system net http nupkg dependency hierarchy x system net http nupkg vulnerable library found in head commit a href found in base branch main vulnerability details an information disclosure vulnerability exists in net core when authentication information is inadvertently exposed in a redirect aka net core information disclosure vulnerability this affects net core net core net core powershell core publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact high integrity impact none availability impact none for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution system net http microsoft powershell commands utility rc step up your open source security game with whitesource
0
147,046
13,200,153,179
IssuesEvent
2020-08-14 07:39:03
MaSyMoS/masymos-core
https://api.github.com/repos/MaSyMoS/masymos-core
opened
test masymos docker, check documentation
check documentation help wanted
- test, if running - check docs for - is everything understandable - grammar and typo - is everything covered, did i miss something? - please report in this issue! - OS+version - docker version # ToDo - [ ] check docs: [Minimal Userguide to get startet](https://masymos.readthedocs.io/en/latest/main_setup.html) - [ ] check docs: [masymos-docker documentation, tweaks](https://masymos.readthedocs.io/en/latest/contrib_docker.html) - [ ] test: jar-builder on linux - [ ] test: server-integration on linux - [ ] test: jar-builder on windows - [ ] test: server-integration on windows - [ ] are all use cases covered? :information_source: @ronhenkel
1.0
test masymos docker, check documentation - - test, if running - check docs for - is everything understandable - grammar and typo - is everything covered, did i miss something? - please report in this issue! - OS+version - docker version # ToDo - [ ] check docs: [Minimal Userguide to get startet](https://masymos.readthedocs.io/en/latest/main_setup.html) - [ ] check docs: [masymos-docker documentation, tweaks](https://masymos.readthedocs.io/en/latest/contrib_docker.html) - [ ] test: jar-builder on linux - [ ] test: server-integration on linux - [ ] test: jar-builder on windows - [ ] test: server-integration on windows - [ ] are all use cases covered? :information_source: @ronhenkel
non_process
test masymos docker check documentation test if running check docs for is everything understandable grammar and typo is everything covered did i miss something please report in this issue os version docker version todo check docs check docs test jar builder on linux test server integration on linux test jar builder on windows test server integration on windows are all use cases covered information source ronhenkel
0
81,067
30,694,105,239
IssuesEvent
2023-07-26 17:12:33
idaholab/moose
https://api.github.com/repos/idaholab/moose
opened
Subdomain mesh adaptivity issues for finite volume problems
T: defect P: normal
## Bug Description Undesirable behavior and crashes are observed for finite volume problems that use mesh adaptivity on a subdomain. ## Steps to Reproduce Consider the test input in moose/test/tests/indicators/gradient_jump_indicator/gradient_jump_indicator_test.i. This problem is split into 2 subdomains: the left half with unknown variable u0 and the right half with unknown variable u1. Mesh adaptivity is imposed on the left half of the domain by defining an indicator on variable u0 and a marker on said indicator. The mesh for the initial condition is shown below: <img width="1232" alt="image" src="https://github.com/idaholab/moose/assets/31858053/1bd8553a-075d-4973-b694-e0caa00fdfd5"> The mesh for the last timestep is shown below: <img width="1232" alt="image" src="https://github.com/idaholab/moose/assets/31858053/aacd62ce-1c8e-441c-b556-9890b25e089f"> The undesirable feature of this result is that although mesh adaptivity is only defined for variable u0 on the left half of the domain, adaptivity appears to be used on the right half as well. This effect can be mitigated by adding `block = 0` to the marker. This results in the following mesh for the final time step: <img width="1232" alt="image" src="https://github.com/idaholab/moose/assets/31858053/e28030e1-3695-4238-85fc-8529acdc9d95"> Although not as drastic, the coarsening still propagates throughout the right half of the domain. When adding `block = 0` to the indicator, the problem crashes with `Segmentation fault: 11` in opt mode and eventually crashes in devel mode with the following message: ``` We caught a libMesh error in ThreadedElementLoopBase:Assertion `comp < this->n_comp(s,var)' failed. comp = 0 this->n_comp(s,var) = 0 ``` Finally, when attempting to use subdomain mesh adaptivity on a FV fluids problem that uses Rhie-Chow interpolation (see attached cask.txt), the problem crashes. In devel mode, the following message is given: ``` Assertion `it != _a.end()' failed We definitely should have found something at /Users/behnpa/projects/moose/modules/navier_stokes/src/userobjects/INSFVRhieChowInterpolator.C, line 455 ``` This error only seems to occur when running on more than one processor. ## Impact Prevents getting work done, as mesh adaptivity can be crucial for large simulations. [cask.txt](https://github.com/idaholab/moose/files/12175526/cask.txt)
1.0
Subdomain mesh adaptivity issues for finite volume problems - ## Bug Description Undesirable behavior and crashes are observed for finite volume problems that use mesh adaptivity on a subdomain. ## Steps to Reproduce Consider the test input in moose/test/tests/indicators/gradient_jump_indicator/gradient_jump_indicator_test.i. This problem is split into 2 subdomains: the left half with unknown variable u0 and the right half with unknown variable u1. Mesh adaptivity is imposed on the left half of the domain by defining an indicator on variable u0 and a marker on said indicator. The mesh for the initial condition is shown below: <img width="1232" alt="image" src="https://github.com/idaholab/moose/assets/31858053/1bd8553a-075d-4973-b694-e0caa00fdfd5"> The mesh for the last timestep is shown below: <img width="1232" alt="image" src="https://github.com/idaholab/moose/assets/31858053/aacd62ce-1c8e-441c-b556-9890b25e089f"> The undesirable feature of this result is that although mesh adaptivity is only defined for variable u0 on the left half of the domain, adaptivity appears to be used on the right half as well. This effect can be mitigated by adding `block = 0` to the marker. This results in the following mesh for the final time step: <img width="1232" alt="image" src="https://github.com/idaholab/moose/assets/31858053/e28030e1-3695-4238-85fc-8529acdc9d95"> Although not as drastic, the coarsening still propagates throughout the right half of the domain. When adding `block = 0` to the indicator, the problem crashes with `Segmentation fault: 11` in opt mode and eventually crashes in devel mode with the following message: ``` We caught a libMesh error in ThreadedElementLoopBase:Assertion `comp < this->n_comp(s,var)' failed. comp = 0 this->n_comp(s,var) = 0 ``` Finally, when attempting to use subdomain mesh adaptivity on a FV fluids problem that uses Rhie-Chow interpolation (see attached cask.txt), the problem crashes. In devel mode, the following message is given: ``` Assertion `it != _a.end()' failed We definitely should have found something at /Users/behnpa/projects/moose/modules/navier_stokes/src/userobjects/INSFVRhieChowInterpolator.C, line 455 ``` This error only seems to occur when running on more than one processor. ## Impact Prevents getting work done, as mesh adaptivity can be crucial for large simulations. [cask.txt](https://github.com/idaholab/moose/files/12175526/cask.txt)
non_process
subdomain mesh adaptivity issues for finite volume problems bug description undesirable behavior and crashes are observed for finite volume problems that use mesh adaptivity on a subdomain steps to reproduce consider the test input in moose test tests indicators gradient jump indicator gradient jump indicator test i this problem is split into subdomains the left half with unknown variable and the right half with unknown variable mesh adaptivity is imposed on the left half of the domain by defining an indicator on variable and a marker on said indicator the mesh for the initial condition is shown below img width alt image src the mesh for the last timestep is shown below img width alt image src the undesirable feature of this result is that although mesh adaptivity is only defined for variable on the left half of the domain adaptivity appears to be used on the right half as well this effect can be mitigated by adding block to the marker this results in the following mesh for the final time step img width alt image src although not as drastic the coarsening still propagates throughout the right half of the domain when adding block to the indicator the problem crashes with segmentation fault in opt mode and eventually crashes in devel mode with the following message we caught a libmesh error in threadedelementloopbase assertion comp n comp s var failed comp this n comp s var finally when attempting to use subdomain mesh adaptivity on a fv fluids problem that uses rhie chow interpolation see attached cask txt the problem crashes in devel mode the following message is given assertion it a end failed we definitely should have found something at users behnpa projects moose modules navier stokes src userobjects insfvrhiechowinterpolator c line this error only seems to occur when running on more than one processor impact prevents getting work done as mesh adaptivity can be crucial for large simulations
0
2,015
4,837,107,782
IssuesEvent
2016-11-08 21:36:05
cliffparnitzky/FormDependentMandatoryField
https://api.github.com/repos/cliffparnitzky/FormDependentMandatoryField
closed
Wrong error message for "is_empty" condition
Defect Improvement ⚙ - Processed
Hi Cliff, many thanks for your work, saved me from braindeath ;-) I would suggest to add another condition labeled "empty" or "is empty" to the dependent fields list. Error messages should be adapted accordingly (in German like "keine Werte in Feld xyz"). I my understanding, currently you could build it with "!=" as Condition and "*" as value which results in a wrong error message in the form. Andreas
1.0
Wrong error message for "is_empty" condition - Hi Cliff, many thanks for your work, saved me from braindeath ;-) I would suggest to add another condition labeled "empty" or "is empty" to the dependent fields list. Error messages should be adapted accordingly (in German like "keine Werte in Feld xyz"). I my understanding, currently you could build it with "!=" as Condition and "*" as value which results in a wrong error message in the form. Andreas
process
wrong error message for is empty condition hi cliff many thanks for your work saved me from braindeath i would suggest to add another condition labeled empty or is empty to the dependent fields list error messages should be adapted accordingly in german like keine werte in feld xyz i my understanding currently you could build it with as condition and as value which results in a wrong error message in the form andreas
1
21,991
30,485,794,952
IssuesEvent
2023-07-18 02:00:10
lizhihao6/get-daily-arxiv-noti
https://api.github.com/repos/lizhihao6/get-daily-arxiv-noti
opened
New submissions for Tue, 18 Jul 23
event camera white balance isp compression image signal processing image signal process raw raw image events camera color contrast events AWB
## Keyword: events ### Boundary-weighted logit consistency improves calibration of segmentation networks - **Authors:** Neerav Karani, Neel Dey, Polina Golland - **Subjects:** Computer Vision and Pattern Recognition (cs.CV) - **Arxiv link:** https://arxiv.org/abs/2307.08163 - **Pdf link:** https://arxiv.org/pdf/2307.08163 - **Abstract** Neural network prediction probabilities and accuracy are often only weakly-correlated. Inherent label ambiguity in training data for image segmentation aggravates such miscalibration. We show that logit consistency across stochastic transformations acts as a spatially varying regularizer that prevents overconfident predictions at pixels with ambiguous labels. Our boundary-weighted extension of this regularizer provides state-of-the-art calibration for prostate and heart MRI segmentation. ### Random Boxes Are Open-world Object Detectors - **Authors:** Yanghao Wang, Zhongqi Yue, Xian-Sheng Hua, Hanwang Zhang - **Subjects:** Computer Vision and Pattern Recognition (cs.CV) - **Arxiv link:** https://arxiv.org/abs/2307.08249 - **Pdf link:** https://arxiv.org/pdf/2307.08249 - **Abstract** We show that classifiers trained with random region proposals achieve state-of-the-art Open-world Object Detection (OWOD): they can not only maintain the accuracy of the known objects (w/ training labels), but also considerably improve the recall of unknown ones (w/o training labels). Specifically, we propose RandBox, a Fast R-CNN based architecture trained on random proposals at each training iteration, surpassing existing Faster R-CNN and Transformer based OWOD. Its effectiveness stems from the following two benefits introduced by randomness. First, as the randomization is independent of the distribution of the limited known objects, the random proposals become the instrumental variable that prevents the training from being confounded by the known objects. 
Second, the unbiased training encourages more proposal explorations by using our proposed matching score that does not penalize the random proposals whose prediction scores do not match the known objects. On two benchmarks: Pascal-VOC/MS-COCO and LVIS, RandBox significantly outperforms the previous state-of-the-art in all metrics. We also detail the ablations on randomization and loss designs. Codes are available at https://github.com/scuwyh2000/RandBox. ## Keyword: event camera ### Video Frame Interpolation with Stereo Event and Intensity Camera - **Authors:** Chao Ding, Mingyuan Lin, Haijian Zhang, Jianzhuang Liu, Lei Yu - **Subjects:** Computer Vision and Pattern Recognition (cs.CV) - **Arxiv link:** https://arxiv.org/abs/2307.08228 - **Pdf link:** https://arxiv.org/pdf/2307.08228 - **Abstract** The stereo event-intensity camera setup is widely applied to leverage the advantages of both event cameras with low latency and intensity cameras that capture accurate brightness and texture information. However, such a setup commonly encounters cross-modality parallax that is difficult to be eliminated solely with stereo rectification especially for real-world scenes with complex motions and varying depths, posing artifacts and distortion for existing Event-based Video Frame Interpolation (E-VFI) approaches. To tackle this problem, we propose a novel Stereo Event-based VFI (SE-VFI) network (SEVFI-Net) to generate high-quality intermediate frames and corresponding disparities from misaligned inputs consisting of two consecutive keyframes and event streams emitted between them. Specifically, we propose a Feature Aggregation Module (FAM) to alleviate the parallax and achieve spatial alignment in the feature domain. We then exploit the fused features accomplishing accurate optical flow and disparity estimation, and achieving better interpolated results through flow-based and synthesis-based ways. 
We also build a stereo visual acquisition system composed of an event camera and an RGB-D camera to collect a new Stereo Event-Intensity Dataset (SEID) containing diverse scenes with complex motions and varying depths. Experiments on public real-world stereo datasets, i.e., DSEC and MVSEC, and our SEID dataset demonstrate that our proposed SEVFI-Net outperforms state-of-the-art methods by a large margin. ## Keyword: events camera There is no result ## Keyword: white balance There is no result ## Keyword: color contrast There is no result ## Keyword: AWB ### INVE: Interactive Neural Video Editing - **Authors:** Jiahui Huang, Leonid Sigal, Kwang Moo Yi, Oliver Wang, Joon-Young Lee - **Subjects:** Computer Vision and Pattern Recognition (cs.CV) - **Arxiv link:** https://arxiv.org/abs/2307.07663 - **Pdf link:** https://arxiv.org/pdf/2307.07663 - **Abstract** We present Interactive Neural Video Editing (INVE), a real-time video editing solution, which can assist the video editing process by consistently propagating sparse frame edits to the entire video clip. Our method is inspired by the recent work on Layered Neural Atlas (LNA). LNA, however, suffers from two major drawbacks: (1) the method is too slow for interactive editing, and (2) it offers insufficient support for some editing use cases, including direct frame editing and rigid texture tracking. To address these challenges we leverage and adopt highly efficient network architectures, powered by hash-grids encoding, to substantially improve processing speed. In addition, we learn bi-directional functions between image-atlas and introduce vectorized editing, which collectively enables a much greater variety of edits in both the atlas and the frames directly. Compared to LNA, our INVE reduces the learning and inference time by a factor of 5, and supports various video editing operations that LNA cannot. 
We showcase the superiority of INVE over LNA in interactive video editing through a comprehensive quantitative and qualitative analysis, highlighting its numerous advantages and improved performance. For video results, please see https://gabriel-huang.github.io/inve/ ### Benchmarking fixed-length Fingerprint Representations across different Embedding Sizes and Sensor Types - **Authors:** Tim Rohwedder, Daile Osorio-Roig, Christian Rathgeb, Christoph Busch - **Subjects:** Computer Vision and Pattern Recognition (cs.CV) - **Arxiv link:** https://arxiv.org/abs/2307.08615 - **Pdf link:** https://arxiv.org/pdf/2307.08615 - **Abstract** Traditional minutiae-based fingerprint representations consist of a variable-length set of minutiae. This necessitates a more complex comparison causing the drawback of high computational cost in one-to-many comparison. Recently, deep neural networks have been proposed to extract fixed-length embeddings from fingerprints. In this paper, we explore to what extent fingerprint texture information contained in such embeddings can be reduced in terms of dimension while preserving high biometric performance. This is of particular interest since it would allow to reduce the number of operations incurred at comparisons. We also study the impact in terms of recognition performance of the fingerprint textural information for two sensor types, i.e. optical and capacitive. Furthermore, the impact of rotation and translation of fingerprint images on the extraction of fingerprint embeddings is analysed. Experimental results conducted on a publicly available database reveal an optimal embedding size of 512 feature elements for the texture-based embedding part of fixed-length fingerprint representations. In addition, differences in performance between sensor types can be perceived. 
## Keyword: ISP ### Dense Multitask Learning to Reconfigure Comics - **Authors:** Deblina Bhattacharjee, Sabine Süsstrunk, Mathieu Salzmann - **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Human-Computer Interaction (cs.HC) - **Arxiv link:** https://arxiv.org/abs/2307.08071 - **Pdf link:** https://arxiv.org/pdf/2307.08071 - **Abstract** In this paper, we develop a MultiTask Learning (MTL) model to achieve dense predictions for comics panels to, in turn, facilitate the transfer of comics from one publication channel to another by assisting authors in the task of reconfiguring their narratives. Our MTL method can successfully identify the semantic units as well as the embedded notion of 3D in comic panels. This is a significantly challenging problem because comics comprise disparate artistic styles, illustrations, layouts, and object scales that depend on the authors creative process. Typically, dense image-based prediction techniques require a large corpus of data. Finding an automated solution for dense prediction in the comics domain, therefore, becomes more difficult with the lack of ground-truth dense annotations for the comics images. To address these challenges, we develop the following solutions: 1) we leverage a commonly-used strategy known as unsupervised image-to-image translation, which allows us to utilize a large corpus of real-world annotations; 2) we utilize the results of the translations to develop our multitasking approach that is based on a vision transformer backbone and a domain transferable attention module; 3) we study the feasibility of integrating our MTL dense-prediction method with an existing retargeting method, thereby reconfiguring comics. 
### FourierHandFlow: Neural 4D Hand Representation Using Fourier Query Flow
- **Authors:** Jihyun Lee, Junbong Jang, Donghwan Kim, Minhyuk Sung, Tae-Kyun Kim
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV)
- **Arxiv link:** https://arxiv.org/abs/2307.08100
- **Pdf link:** https://arxiv.org/pdf/2307.08100
- **Abstract**
Recent 4D shape representations model continuous temporal evolution of implicit shapes by (1) learning query flows without leveraging shape and articulation priors or (2) decoding shape occupancies separately for each time value. Thus, they do not effectively capture implicit correspondences between articulated shapes or regularize jittery temporal deformations. In this work, we present FourierHandFlow, which is a spatio-temporally continuous representation for human hands that combines a 3D occupancy field with articulation-aware query flows represented as Fourier series. Given an input RGB sequence, we aim to learn a fixed number of Fourier coefficients for each query flow to guarantee smooth and continuous temporal shape dynamics. To effectively model spatio-temporal deformations of articulated hands, we compose our 4D representation based on two types of Fourier query flow: (1) pose flow that models query dynamics influenced by hand articulation changes via implicit linear blend skinning and (2) shape flow that models query-wise displacement flow. In the experiments, our method achieves state-of-the-art results on video-based 4D reconstruction while being computationally more efficient than the existing 3D/4D implicit shape representations. We additionally show our results on motion inter- and extrapolation and texture transfer using the learned correspondences of implicit shapes. To the best of our knowledge, FourierHandFlow is the first neural 4D continuous hand representation learned from RGB videos. The code will be publicly accessible.
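Learning "a fixed number of Fourier coefficients for each query flow" amounts to evaluating a truncated Fourier series over normalized time, which is smooth by construction. A minimal numpy sketch of that evaluation (shapes and names are assumptions for illustration, not the paper's code):

```python
import numpy as np

def fourier_flow(t: np.ndarray, a0: np.ndarray, coeffs: np.ndarray) -> np.ndarray:
    """Evaluate x(t) = a0 + sum_k (a_k cos(2*pi*k*t) + b_k sin(2*pi*k*t)).

    t: (T,) times in [0, 1]; a0: (3,) mean position;
    coeffs: (K, 2, 3) cosine/sine coefficients for K harmonics.
    """
    K = coeffs.shape[0]
    k = np.arange(1, K + 1)[:, None]                 # (K, 1) harmonic indices
    cos_basis = np.cos(2 * np.pi * k * t[None, :])   # (K, T)
    sin_basis = np.sin(2 * np.pi * k * t[None, :])   # (K, T)
    x = (a0[None, :]
         + np.einsum('kt,kd->td', cos_basis, coeffs[:, 0])
         + np.einsum('kt,kd->td', sin_basis, coeffs[:, 1]))
    return x                                         # (T, 3) smooth query trajectory

t = np.linspace(0.0, 1.0, 8)
traj = fourier_flow(t, np.zeros(3), np.zeros((4, 2, 3)))  # zero coefficients -> static query
print(traj.shape)  # (8, 3)
```

Because the representation is a finite set of coefficients rather than per-frame displacements, the trajectory can be queried at any continuous time value, which is what enables the motion interpolation and extrapolation results mentioned above.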
### Video Frame Interpolation with Stereo Event and Intensity Camera
- **Authors:** Chao Ding, Mingyuan Lin, Haijian Zhang, Jianzhuang Liu, Lei Yu
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV)
- **Arxiv link:** https://arxiv.org/abs/2307.08228
- **Pdf link:** https://arxiv.org/pdf/2307.08228
- **Abstract**
The stereo event-intensity camera setup is widely applied to leverage the advantages of both event cameras with low latency and intensity cameras that capture accurate brightness and texture information. However, such a setup commonly encounters cross-modality parallax that is difficult to eliminate solely with stereo rectification, especially for real-world scenes with complex motions and varying depths, posing artifacts and distortion for existing Event-based Video Frame Interpolation (E-VFI) approaches. To tackle this problem, we propose a novel Stereo Event-based VFI (SE-VFI) network (SEVFI-Net) to generate high-quality intermediate frames and corresponding disparities from misaligned inputs consisting of two consecutive keyframes and event streams emitted between them. Specifically, we propose a Feature Aggregation Module (FAM) to alleviate the parallax and achieve spatial alignment in the feature domain. We then exploit the fused features to accomplish accurate optical flow and disparity estimation, and achieve better interpolated results through flow-based and synthesis-based ways. We also build a stereo visual acquisition system composed of an event camera and an RGB-D camera to collect a new Stereo Event-Intensity Dataset (SEID) containing diverse scenes with complex motions and varying depths. Experiments on public real-world stereo datasets, i.e., DSEC and MVSEC, and our SEID dataset demonstrate that our proposed SEVFI-Net outperforms state-of-the-art methods by a large margin.
## Keyword: image signal processing

There is no result

## Keyword: image signal process

There is no result

## Keyword: compression

### Extreme Image Compression using Fine-tuned VQGAN Models
- **Authors:** Qi Mao, Tinghan Yang, Yinuo Zhang, Shuyin Pan, Meng Wang, Shiqi Wang, Siwei Ma
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Image and Video Processing (eess.IV)
- **Arxiv link:** https://arxiv.org/abs/2307.08265
- **Pdf link:** https://arxiv.org/pdf/2307.08265
- **Abstract**
Recent advances in generative compression methods have demonstrated remarkable progress in enhancing the perceptual quality of compressed data, especially in scenarios with low bitrates. Nevertheless, their efficacy and applicability in achieving extreme compression ratios ($<0.1$ bpp) still remain constrained. In this work, we propose a simple yet effective coding framework by introducing vector quantization (VQ)-based generative models into the image compression domain. The main insight is that the codebook learned by the VQGAN model yields strong expressive capacity, facilitating efficient compression of continuous information in the latent space while maintaining reconstruction quality. Specifically, an image can be represented as VQ-indices by finding the nearest codeword, and these indices can be encoded into bitstreams using lossless compression methods. We then propose clustering a pre-trained large-scale codebook into smaller codebooks using the K-means algorithm. This enables images to be represented as diverse ranges of VQ-index maps, resulting in variable bitrates and different levels of reconstruction quality. Extensive qualitative and quantitative experiments on various datasets demonstrate that the proposed framework outperforms state-of-the-art codecs in terms of perceptual quality-oriented metrics and human perception under extremely low bitrates.
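The two operations in this pipeline — nearest-codeword quantization and K-means clustering of a pre-trained codebook into a smaller one — can be sketched in a few lines. This is a toy illustration with made-up sizes and hypothetical function names, not the authors' implementation:

```python
import numpy as np

def quantize(latents: np.ndarray, codebook: np.ndarray) -> np.ndarray:
    """Map each latent vector to the index of its nearest codeword (L2 distance)."""
    d = ((latents[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)  # (N, C) distances
    return d.argmin(axis=1)                                          # VQ-indices

def shrink_codebook(codebook: np.ndarray, k: int, iters: int = 20) -> np.ndarray:
    """Cluster a large codebook into k centroids with plain K-means."""
    rng = np.random.default_rng(0)
    centroids = codebook[rng.choice(len(codebook), k, replace=False)]
    for _ in range(iters):
        assign = quantize(codebook, centroids)       # reuse nearest-codeword search
        for j in range(k):
            members = codebook[assign == j]
            if len(members):
                centroids[j] = members.mean(axis=0)  # recenter each cluster
    return centroids

cb = np.random.default_rng(1).normal(size=(256, 8))  # toy "pre-trained" codebook
small = shrink_codebook(cb, 16)                      # 16 codewords -> 4-bit indices
idx = quantize(np.random.default_rng(2).normal(size=(10, 8)), small)
print(idx.shape)  # (10,)
```

Shrinking the codebook from 256 to 16 entries halves the bits per index (8 to 4), which is the mechanism behind the variable-bitrate behavior described in the abstract.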
### Distributed bundle adjustment with block-based sparse matrix compression for super large scale datasets
- **Authors:** Maoteng Zheng, Nengcheng Chen, Junfeng Zhu, Xiaoru Zeng, Huanbin Qiu, Yuyao Jiang, Xingyue Lu, Hao Qu
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Distributed, Parallel, and Cluster Computing (cs.DC)
- **Arxiv link:** https://arxiv.org/abs/2307.08383
- **Pdf link:** https://arxiv.org/pdf/2307.08383
- **Abstract**
We propose a distributed bundle adjustment (DBA) method using the exact Levenberg-Marquardt (LM) algorithm for super large-scale datasets. Most of the existing methods partition the global map into small submaps and conduct bundle adjustment within them. In order to fit the parallel framework, they use approximate solutions instead of the LM algorithm, and such methods often give sub-optimal results. Different from them, we utilize the exact LM algorithm to conduct global bundle adjustment, where the formation of the reduced camera system (RCS) is parallelized and executed in a distributed way. To store the large RCS, we compress it with a block-based sparse matrix compression format (BSMC), which fully exploits its block structure. The BSMC format also enables the distributed storage and updating of the global RCS. The proposed method is extensively evaluated and compared with state-of-the-art pipelines using both synthetic and real datasets. Preliminary results demonstrate the efficient memory usage and vast scalability of the proposed method compared with the baselines. For the first time, we conducted parallel bundle adjustment using the LM algorithm on a real dataset with 1.18 million images and a synthetic dataset with 10 million images (about 500 times the size handled by the state-of-the-art LM-based BA) on a distributed computing system.
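The core of block-based sparse storage is to keep only the non-zero camera blocks plus block-row/column indices, in the spirit of the classic block-compressed-row layout. A toy numpy sketch (the block size, indices, and names here are illustrative assumptions, not the paper's BSMC format):

```python
import numpy as np

# Toy reduced camera system: a 3x3 grid of 6x6 camera blocks, with non-zero
# blocks only at block positions (0,0), (0,2), and (2,2).
B = 6
rng = np.random.default_rng(0)
blocks = rng.normal(size=(3, B, B))   # only the 3 non-zero blocks are stored
col_idx = np.array([0, 2, 2])         # block-column of each stored block
row_ptr = np.array([0, 2, 2, 3])      # row_ptr[i]:row_ptr[i+1] -> blocks of block-row i

def block_matvec(x: np.ndarray) -> np.ndarray:
    """Multiply the block-sparse matrix by a vector, touching only stored blocks."""
    y = np.zeros(3 * B)
    for i in range(3):
        for p in range(row_ptr[i], row_ptr[i + 1]):
            j = col_idx[p]
            y[i * B:(i + 1) * B] += blocks[p] @ x[j * B:(j + 1) * B]
    return y

y = block_matvec(np.ones(3 * B))
print(y.shape)  # (18,)
```

Here only 3 x 36 = 108 entries are stored instead of the 324 of the dense matrix, and the saving grows with the sparsity of the RCS; the same layout also makes it natural to assign disjoint block-rows to different machines, which is the distributed-storage point made in the abstract.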
## Keyword: RAW

### Flow-Guided Controllable Line Drawing Generation
- **Authors:** Chengyu Fang, Xianfeng Han
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Multimedia (cs.MM)
- **Arxiv link:** https://arxiv.org/abs/2307.07540
- **Pdf link:** https://arxiv.org/pdf/2307.07540
- **Abstract**
In this paper, we investigate the problem of automatically controllable artistic character line drawing generation from photographs by proposing a Vector Flow Aware and Line Controllable Image-to-Image Translation architecture, which can be viewed as an appealing intersection between Artificial Intelligence and Arts. Specifically, we first present an Image-to-Flow network (I2FNet) to efficiently and robustly create the vector flow field in a learning-based manner, which provides a direction guide for drawing lines. Then, we introduce our well-designed Double Flow Generator (DFG) framework to fuse features from the learned vector flow and the input image flow, guaranteeing the spatial coherence of lines. Meanwhile, in order to allow for controllable character line drawing generation, we integrate a Line Control Matrix (LCM) into DFG and train a Line Control Regressor (LCR) to synthesize drawings with different styles by elaborately controlling the level of details, such as thickness, smoothness, and continuity, of lines. Finally, we design a Fourier Transformation Loss to further constrain the character line generation from a frequency-domain point of view. Quantitative and qualitative experiments demonstrate that our approach can obtain superior performance in producing high-resolution character line-drawing images with perceptually realistic characteristics.
### INVE: Interactive Neural Video Editing
- **Authors:** Jiahui Huang, Leonid Sigal, Kwang Moo Yi, Oliver Wang, Joon-Young Lee
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV)
- **Arxiv link:** https://arxiv.org/abs/2307.07663
- **Pdf link:** https://arxiv.org/pdf/2307.07663
- **Abstract**
We present Interactive Neural Video Editing (INVE), a real-time video editing solution, which can assist the video editing process by consistently propagating sparse frame edits to the entire video clip. Our method is inspired by the recent work on Layered Neural Atlas (LNA). LNA, however, suffers from two major drawbacks: (1) the method is too slow for interactive editing, and (2) it offers insufficient support for some editing use cases, including direct frame editing and rigid texture tracking. To address these challenges we leverage and adopt highly efficient network architectures, powered by hash-grids encoding, to substantially improve processing speed. In addition, we learn bi-directional functions between the image and the atlas and introduce vectorized editing, which collectively enable a much greater variety of edits in both the atlas and the frames directly. Compared to LNA, our INVE reduces the learning and inference time by a factor of 5, and supports various video editing operations that LNA cannot. We showcase the superiority of INVE over LNA in interactive video editing through a comprehensive quantitative and qualitative analysis, highlighting its numerous advantages and improved performance. For video results, please see https://gabriel-huang.github.io/inve/

### ExposureDiffusion: Learning to Expose for Low-light Image Enhancement
- **Authors:** Yufei Wang, Yi Yu, Wenhan Yang, Lanqing Guo, Lap-Pui Chau, Alex C. Kot, Bihan Wen
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Image and Video Processing (eess.IV)
- **Arxiv link:** https://arxiv.org/abs/2307.07710
- **Pdf link:** https://arxiv.org/pdf/2307.07710
- **Abstract**
Previous raw image-based low-light image enhancement methods predominantly relied on feed-forward neural networks to learn deterministic mappings from low-light to normally-exposed images. However, they failed to capture critical distribution information, leading to visually undesirable results. This work addresses the issue by seamlessly integrating a diffusion model with a physics-based exposure model. Different from a vanilla diffusion model that has to perform Gaussian denoising, with the injected physics-based exposure model, our restoration process can directly start from a noisy image instead of pure noise. As such, our method obtains significantly improved performance and reduced inference time compared with vanilla diffusion models. To make full use of the advantages of different intermediate steps, we further propose an adaptive residual layer that effectively screens out side-effects in the iterative refinement when the intermediate results are already well-exposed. The proposed framework is compatible with real-paired datasets, real/synthetic noise models, and different backbone networks. We evaluate the proposed method on various public benchmarks, achieving promising results with consistent improvements using different exposure models and backbones. Besides, the proposed method achieves better generalization capacity for unseen amplifying ratios and better performance than a larger feedforward neural model when few parameters are adopted.
### Prawn Morphometrics and Weight Estimation from Images using Deep Learning for Landmark Localization
- **Authors:** Alzayat Saleh, Md Mehedi Hasan, Herman W Raadsma, Mehar S Khatkar, Dean R Jerry, Mostafa Rahimi Azghadi
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV)
- **Arxiv link:** https://arxiv.org/abs/2307.07732
- **Pdf link:** https://arxiv.org/pdf/2307.07732
- **Abstract**
Accurate weight estimation and morphometric analyses are useful in aquaculture for optimizing feeding, predicting harvest yields, identifying desirable traits for selective breeding, grading processes, and monitoring the health status of production animals. However, the collection of phenotypic data through traditional manual approaches at industrial scales and in real-time is time-consuming, labour-intensive, and prone to errors. Digital imaging of individuals and subsequent training of prediction models using Deep Learning (DL) has the potential to rapidly and accurately acquire phenotypic data from aquaculture species. In this study, we applied a novel DL approach to automate weight estimation and morphometric analysis using the black tiger prawn (Penaeus monodon) as a model crustacean. The DL approach comprises two main components: a feature extraction module that efficiently combines low-level and high-level features using the Kronecker product operation, followed by a landmark localization module that uses these features to predict the coordinates of key morphological points (landmarks) on the prawn body. Once these landmarks were extracted, weight was estimated using a weight regression module based on the extracted landmarks, implemented as a fully connected network. For morphometric analyses, we utilized the detected landmarks to derive five important prawn traits. Principal Component Analysis (PCA) was also used to identify landmark-derived distances, which were found to be highly correlated with shape features such as body length and width.
We evaluated our approach on a large dataset of 8164 images of the black tiger prawn (Penaeus monodon) collected from Australian farms. Our experimental results demonstrate that the novel DL approach outperforms existing DL methods in terms of accuracy, robustness, and efficiency.

### EmoSet: A Large-scale Visual Emotion Dataset with Rich Attributes
- **Authors:** Jingyuan Yang, Qiruin Huang, Tingting Ding, Dani Lischinski, Daniel Cohen-Or, Hui Huang
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV)
- **Arxiv link:** https://arxiv.org/abs/2307.07961
- **Pdf link:** https://arxiv.org/pdf/2307.07961
- **Abstract**
Visual Emotion Analysis (VEA) aims at predicting people's emotional responses to visual stimuli. This is a promising, yet challenging, task in affective computing, which has drawn increasing attention in recent years. Most of the existing work in this area focuses on feature design, while little attention has been paid to dataset construction. In this work, we introduce EmoSet, the first large-scale visual emotion dataset annotated with rich attributes, which is superior to existing datasets in four aspects: scale, annotation richness, diversity, and data balance. EmoSet comprises 3.3 million images in total, with 118,102 of these images carefully labeled by human annotators, making it five times larger than the largest existing dataset. EmoSet includes images from social networks, as well as artistic images, and it is well balanced between different emotion categories. Motivated by psychological studies, in addition to emotion category, each image is also annotated with a set of describable emotion attributes: brightness, colorfulness, scene type, object class, facial expression, and human action, which can help understand visual emotions in a precise and interpretable way.
The relevance of these emotion attributes is validated by analyzing the correlations between them and visual emotion, as well as by designing an attribute module to help visual emotion recognition. We believe EmoSet will bring some key insights and encourage further research in visual emotion analysis and understanding. The data and code will be released after the publication of this work.

### Planting a SEED of Vision in Large Language Model
- **Authors:** Yuying Ge, Yixiao Ge, Ziyun Zeng, Xintao Wang, Ying Shan
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV)
- **Arxiv link:** https://arxiv.org/abs/2307.08041
- **Pdf link:** https://arxiv.org/pdf/2307.08041
- **Abstract**
We present SEED, an elaborate image tokenizer that empowers Large Language Models (LLMs) with the emergent ability to SEE and Draw at the same time. Research on image tokenizers has previously reached an impasse, as frameworks employing quantized visual tokens have lost prominence due to subpar performance and convergence in multimodal comprehension (compared to BLIP-2, etc.) or generation (compared to Stable Diffusion, etc.). Despite the limitations, we remain confident in its natural capacity to unify visual and textual representations, facilitating scalable multimodal training with LLM's original recipe. In this study, we identify two crucial principles for the architecture and training of SEED that effectively ease subsequent alignment with LLMs. (1) Image tokens should be independent of 2D physical patch positions and instead be produced with a 1D causal dependency, exhibiting intrinsic interdependence that aligns with the left-to-right autoregressive prediction mechanism in LLMs. (2) Image tokens should capture high-level semantics consistent with the degree of semantic abstraction in words, and be optimized for both discriminativeness and reconstruction during the tokenizer training phase.
As a result, the off-the-shelf LLM is able to perform both image-to-text and text-to-image generation by incorporating our SEED through efficient LoRA tuning. Comprehensive multimodal pretraining and instruction tuning, which may yield improved results, are reserved for future investigation. This version of SEED was trained in 5.7 days using only 64 V100 GPUs and 5M publicly available image-text pairs. Our preliminary study emphasizes the great potential of discrete visual tokens in versatile multimodal LLMs and the importance of proper image tokenizers in broader research.

### Zero-Shot Image Harmonization with Generative Model Prior
- **Authors:** Jianqi Chen, Zhengxia Zou, Yilan Zhang, Keyan Chen, Zhenwei Shi
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV)
- **Arxiv link:** https://arxiv.org/abs/2307.08182
- **Pdf link:** https://arxiv.org/pdf/2307.08182
- **Abstract**
Recent image harmonization methods have demonstrated promising results. However, due to their heavy reliance on a large number of composite images, these works are expensive in the training phase and often fail to generalize to unseen images. In this paper, we draw lessons from human behavior and come up with a zero-shot image harmonization method. Specifically, in the harmonization process, a human mainly utilizes their long-term prior on harmonious images and makes a composite image close to that prior. To imitate this, we resort to pretrained generative models for the prior of natural images. For guidance of the harmonization direction, we propose an Attention-Constraint Text which is optimized to well illustrate the image environments. Some further designs are introduced for preserving the foreground content structure. The resulting framework, highly consistent with human behavior, can achieve harmonious results without burdensome training. Extensive experiments have demonstrated the effectiveness of our approach, and we have also explored some interesting applications.
### Benchmarking fixed-length Fingerprint Representations across different Embedding Sizes and Sensor Types
- **Authors:** Tim Rohwedder, Daile Osorio-Roig, Christian Rathgeb, Christoph Busch
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV)
- **Arxiv link:** https://arxiv.org/abs/2307.08615
- **Pdf link:** https://arxiv.org/pdf/2307.08615
- **Abstract**
Traditional minutiae-based fingerprint representations consist of a variable-length set of minutiae. This necessitates a more complex comparison and incurs high computational cost in one-to-many comparison. Recently, deep neural networks have been proposed to extract fixed-length embeddings from fingerprints. In this paper, we explore to what extent the fingerprint texture information contained in such embeddings can be reduced in dimension while preserving high biometric performance. This is of particular interest since it would allow reducing the number of operations incurred at comparison. We also study the impact of the fingerprint textural information on recognition performance for two sensor types, i.e. optical and capacitive. Furthermore, the impact of rotation and translation of fingerprint images on the extraction of fingerprint embeddings is analysed. Experimental results conducted on a publicly available database reveal an optimal embedding size of 512 feature elements for the texture-based embedding part of fixed-length fingerprint representations. In addition, differences in performance between sensor types can be perceived.

## Keyword: raw image

### ExposureDiffusion: Learning to Expose for Low-light Image Enhancement
- **Authors:** Yufei Wang, Yi Yu, Wenhan Yang, Lanqing Guo, Lap-Pui Chau, Alex C. Kot, Bihan Wen
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Image and Video Processing (eess.IV)
- **Arxiv link:** https://arxiv.org/abs/2307.07710
- **Pdf link:** https://arxiv.org/pdf/2307.07710
- **Abstract**
Previous raw image-based low-light image enhancement methods predominantly relied on feed-forward neural networks to learn deterministic mappings from low-light to normally-exposed images. However, they failed to capture critical distribution information, leading to visually undesirable results. This work addresses the issue by seamlessly integrating a diffusion model with a physics-based exposure model. Different from a vanilla diffusion model that has to perform Gaussian denoising, with the injected physics-based exposure model, our restoration process can directly start from a noisy image instead of pure noise. As such, our method obtains significantly improved performance and reduced inference time compared with vanilla diffusion models. To make full use of the advantages of different intermediate steps, we further propose an adaptive residual layer that effectively screens out side-effects in the iterative refinement when the intermediate results are already well-exposed. The proposed framework is compatible with real-paired datasets, real/synthetic noise models, and different backbone networks. We evaluate the proposed method on various public benchmarks, achieving promising results with consistent improvements using different exposure models and backbones. Besides, the proposed method achieves better generalization capacity for unseen amplifying ratios and better performance than a larger feedforward neural model when few parameters are adopted.
# New submissions for Tue, 18 Jul 23

## Keyword: events

### Boundary-weighted logit consistency improves calibration of segmentation networks
- **Authors:** Neerav Karani, Neel Dey, Polina Golland
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV)
- **Arxiv link:** https://arxiv.org/abs/2307.08163
- **Pdf link:** https://arxiv.org/pdf/2307.08163
- **Abstract**
Neural network prediction probabilities and accuracy are often only weakly correlated. Inherent label ambiguity in training data for image segmentation aggravates such miscalibration. We show that logit consistency across stochastic transformations acts as a spatially varying regularizer that prevents overconfident predictions at pixels with ambiguous labels. Our boundary-weighted extension of this regularizer provides state-of-the-art calibration for prostate and heart MRI segmentation.

### Random Boxes Are Open-world Object Detectors
- **Authors:** Yanghao Wang, Zhongqi Yue, Xian-Sheng Hua, Hanwang Zhang
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV)
- **Arxiv link:** https://arxiv.org/abs/2307.08249
- **Pdf link:** https://arxiv.org/pdf/2307.08249
- **Abstract**
We show that classifiers trained with random region proposals achieve state-of-the-art Open-world Object Detection (OWOD): they can not only maintain the accuracy of the known objects (w/ training labels), but also considerably improve the recall of unknown ones (w/o training labels). Specifically, we propose RandBox, a Fast R-CNN based architecture trained on random proposals at each training iteration, surpassing existing Faster R-CNN and Transformer-based OWOD. Its effectiveness stems from the following two benefits introduced by randomness. First, as the randomization is independent of the distribution of the limited known objects, the random proposals become the instrumental variable that prevents the training from being confounded by the known objects.
Second, the unbiased training encourages more proposal explorations by using our proposed matching score that does not penalize the random proposals whose prediction scores do not match the known objects. On two benchmarks, Pascal-VOC/MS-COCO and LVIS, RandBox significantly outperforms the previous state-of-the-art in all metrics. We also detail the ablations on randomization and loss designs. Codes are available at https://github.com/scuwyh2000/RandBox.

## Keyword: event camera

### Video Frame Interpolation with Stereo Event and Intensity Camera
- **Authors:** Chao Ding, Mingyuan Lin, Haijian Zhang, Jianzhuang Liu, Lei Yu
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV)
- **Arxiv link:** https://arxiv.org/abs/2307.08228
- **Pdf link:** https://arxiv.org/pdf/2307.08228
- **Abstract**
The stereo event-intensity camera setup is widely applied to leverage the advantages of both event cameras with low latency and intensity cameras that capture accurate brightness and texture information. However, such a setup commonly encounters cross-modality parallax that is difficult to eliminate solely with stereo rectification, especially for real-world scenes with complex motions and varying depths, posing artifacts and distortion for existing Event-based Video Frame Interpolation (E-VFI) approaches. To tackle this problem, we propose a novel Stereo Event-based VFI (SE-VFI) network (SEVFI-Net) to generate high-quality intermediate frames and corresponding disparities from misaligned inputs consisting of two consecutive keyframes and event streams emitted between them. Specifically, we propose a Feature Aggregation Module (FAM) to alleviate the parallax and achieve spatial alignment in the feature domain. We then exploit the fused features to accomplish accurate optical flow and disparity estimation, and achieve better interpolated results through flow-based and synthesis-based ways.
We also build a stereo visual acquisition system composed of an event camera and an RGB-D camera to collect a new Stereo Event-Intensity Dataset (SEID) containing diverse scenes with complex motions and varying depths. Experiments on public real-world stereo datasets, i.e., DSEC and MVSEC, and our SEID dataset demonstrate that our proposed SEVFI-Net outperforms state-of-the-art methods by a large margin.

## Keyword: events camera

There is no result

## Keyword: white balance

There is no result

## Keyword: color contrast

There is no result

## Keyword: AWB

### INVE: Interactive Neural Video Editing
- **Authors:** Jiahui Huang, Leonid Sigal, Kwang Moo Yi, Oliver Wang, Joon-Young Lee
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV)
- **Arxiv link:** https://arxiv.org/abs/2307.07663
- **Pdf link:** https://arxiv.org/pdf/2307.07663
- **Abstract**
We present Interactive Neural Video Editing (INVE), a real-time video editing solution, which can assist the video editing process by consistently propagating sparse frame edits to the entire video clip. Our method is inspired by the recent work on Layered Neural Atlas (LNA). LNA, however, suffers from two major drawbacks: (1) the method is too slow for interactive editing, and (2) it offers insufficient support for some editing use cases, including direct frame editing and rigid texture tracking. To address these challenges we leverage and adopt highly efficient network architectures, powered by hash-grids encoding, to substantially improve processing speed. In addition, we learn bi-directional functions between the image and the atlas and introduce vectorized editing, which collectively enable a much greater variety of edits in both the atlas and the frames directly. Compared to LNA, our INVE reduces the learning and inference time by a factor of 5, and supports various video editing operations that LNA cannot.
We showcase the superiority of INVE over LNA in interactive video editing through a comprehensive quantitative and qualitative analysis, highlighting its numerous advantages and improved performance. For video results, please see https://gabriel-huang.github.io/inve/

### Benchmarking fixed-length Fingerprint Representations across different Embedding Sizes and Sensor Types
- **Authors:** Tim Rohwedder, Daile Osorio-Roig, Christian Rathgeb, Christoph Busch
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV)
- **Arxiv link:** https://arxiv.org/abs/2307.08615
- **Pdf link:** https://arxiv.org/pdf/2307.08615
- **Abstract**
Traditional minutiae-based fingerprint representations consist of a variable-length set of minutiae. This necessitates a more complex comparison and incurs high computational cost in one-to-many comparison. Recently, deep neural networks have been proposed to extract fixed-length embeddings from fingerprints. In this paper, we explore to what extent the fingerprint texture information contained in such embeddings can be reduced in dimension while preserving high biometric performance. This is of particular interest since it would allow reducing the number of operations incurred at comparison. We also study the impact of the fingerprint textural information on recognition performance for two sensor types, i.e. optical and capacitive. Furthermore, the impact of rotation and translation of fingerprint images on the extraction of fingerprint embeddings is analysed. Experimental results conducted on a publicly available database reveal an optimal embedding size of 512 feature elements for the texture-based embedding part of fixed-length fingerprint representations. In addition, differences in performance between sensor types can be perceived.
## Keyword: ISP

### Dense Multitask Learning to Reconfigure Comics
- **Authors:** Deblina Bhattacharjee, Sabine Süsstrunk, Mathieu Salzmann
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Human-Computer Interaction (cs.HC)
- **Arxiv link:** https://arxiv.org/abs/2307.08071
- **Pdf link:** https://arxiv.org/pdf/2307.08071
- **Abstract**
In this paper, we develop a MultiTask Learning (MTL) model to achieve dense predictions for comics panels and, in turn, facilitate the transfer of comics from one publication channel to another by assisting authors in the task of reconfiguring their narratives. Our MTL method can successfully identify the semantic units as well as the embedded notion of 3D in comic panels. This is a significantly challenging problem because comics comprise disparate artistic styles, illustrations, layouts, and object scales that depend on the authors' creative process. Typically, dense image-based prediction techniques require a large corpus of data. Finding an automated solution for dense prediction in the comics domain therefore becomes more difficult given the lack of ground-truth dense annotations for comics images. To address these challenges, we develop the following solutions: 1) we leverage a commonly-used strategy known as unsupervised image-to-image translation, which allows us to utilize a large corpus of real-world annotations; 2) we utilize the results of the translations to develop our multitasking approach, which is based on a vision transformer backbone and a domain-transferable attention module; 3) we study the feasibility of integrating our MTL dense-prediction method with an existing retargeting method, thereby reconfiguring comics.
### FourierHandFlow: Neural 4D Hand Representation Using Fourier Query Flow
- **Authors:** Jihyun Lee, Junbong Jang, Donghwan Kim, Minhyuk Sung, Tae-Kyun Kim
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV)
- **Arxiv link:** https://arxiv.org/abs/2307.08100
- **Pdf link:** https://arxiv.org/pdf/2307.08100
- **Abstract** Recent 4D shape representations model continuous temporal evolution of implicit shapes by (1) learning query flows without leveraging shape and articulation priors or (2) decoding shape occupancies separately for each time value. Thus, they do not effectively capture implicit correspondences between articulated shapes or regularize jittery temporal deformations. In this work, we present FourierHandFlow, which is a spatio-temporally continuous representation for human hands that combines a 3D occupancy field with articulation-aware query flows represented as Fourier series. Given an input RGB sequence, we aim to learn a fixed number of Fourier coefficients for each query flow to guarantee smooth and continuous temporal shape dynamics. To effectively model spatio-temporal deformations of articulated hands, we compose our 4D representation based on two types of Fourier query flow: (1) pose flow that models query dynamics influenced by hand articulation changes via implicit linear blend skinning and (2) shape flow that models query-wise displacement flow. In the experiments, our method achieves state-of-the-art results on video-based 4D reconstruction while being computationally more efficient than the existing 3D/4D implicit shape representations. We additionally show our results on motion inter- and extrapolation and texture transfer using the learned correspondences of implicit shapes. To the best of our knowledge, FourierHandFlow is the first neural 4D continuous hand representation learned from RGB videos. The code will be publicly accessible.
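The "fixed number of Fourier coefficients per query flow" idea can be sketched as evaluating a truncated Fourier series at a normalized time; because the output is a finite sum of sines and cosines in t, temporal smoothness is built in. The coefficient layout, harmonic count, and the absence of a constant term below are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def fourier_flow(coeffs_cos, coeffs_sin, t):
    """Evaluate a truncated Fourier series flow at normalized time t in [0, 1].

    coeffs_cos, coeffs_sin: (K, 3) learned coefficients, one row per harmonic,
    one column per 3D offset axis (hypothetical layout).
    Returns the (3,) displacement of a single query point at time t.
    """
    K = coeffs_cos.shape[0]
    k = np.arange(1, K + 1)[:, None]          # harmonic indices, shape (K, 1)
    return (coeffs_cos * np.cos(2 * np.pi * k * t)
            + coeffs_sin * np.sin(2 * np.pi * k * t)).sum(axis=0)
```

Since the basis functions are defined for any real t, the same coefficients also give a flow value between observed frames, which is what makes interpolation and extrapolation natural for such a representation.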
### Video Frame Interpolation with Stereo Event and Intensity Camera
- **Authors:** Chao Ding, Mingyuan Lin, Haijian Zhang, Jianzhuang Liu, Lei Yu
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV)
- **Arxiv link:** https://arxiv.org/abs/2307.08228
- **Pdf link:** https://arxiv.org/pdf/2307.08228
- **Abstract** The stereo event-intensity camera setup is widely applied to leverage the advantages of both event cameras with low latency and intensity cameras that capture accurate brightness and texture information. However, such a setup commonly encounters cross-modality parallax that is difficult to be eliminated solely with stereo rectification especially for real-world scenes with complex motions and varying depths, posing artifacts and distortion for existing Event-based Video Frame Interpolation (E-VFI) approaches. To tackle this problem, we propose a novel Stereo Event-based VFI (SE-VFI) network (SEVFI-Net) to generate high-quality intermediate frames and corresponding disparities from misaligned inputs consisting of two consecutive keyframes and event streams emitted between them. Specifically, we propose a Feature Aggregation Module (FAM) to alleviate the parallax and achieve spatial alignment in the feature domain. We then exploit the fused features accomplishing accurate optical flow and disparity estimation, and achieving better interpolated results through flow-based and synthesis-based ways. We also build a stereo visual acquisition system composed of an event camera and an RGB-D camera to collect a new Stereo Event-Intensity Dataset (SEID) containing diverse scenes with complex motions and varying depths. Experiments on public real-world stereo datasets, i.e., DSEC and MVSEC, and our SEID dataset demonstrate that our proposed SEVFI-Net outperforms state-of-the-art methods by a large margin.
## Keyword: image signal processing

There is no result

## Keyword: image signal process

There is no result

## Keyword: compression

### Extreme Image Compression using Fine-tuned VQGAN Models
- **Authors:** Qi Mao, Tinghan Yang, Yinuo Zhang, Shuyin Pan, Meng Wang, Shiqi Wang, Siwei Ma
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Image and Video Processing (eess.IV)
- **Arxiv link:** https://arxiv.org/abs/2307.08265
- **Pdf link:** https://arxiv.org/pdf/2307.08265
- **Abstract** Recent advances in generative compression methods have demonstrated remarkable progress in enhancing the perceptual quality of compressed data, especially in scenarios with low bitrates. Nevertheless, their efficacy and applicability in achieving extreme compression ratios ($<0.1$ bpp) still remain constrained. In this work, we propose a simple yet effective coding framework by introducing vector quantization (VQ)-based generative models into the image compression domain. The main insight is that the codebook learned by the VQGAN model yields strong expressive capacity, facilitating efficient compression of continuous information in the latent space while maintaining reconstruction quality. Specifically, an image can be represented as VQ-indices by finding the nearest codeword, which can be encoded using lossless compression methods into bitstreams. We then propose clustering a pre-trained large-scale codebook into smaller codebooks using the K-means algorithm. This enables images to be represented as diverse ranges of VQ-indices maps, resulting in variable bitrates and different levels of reconstruction quality. Extensive qualitative and quantitative experiments on various datasets demonstrate that the proposed framework outperforms the state-of-the-art codecs in terms of perceptual quality-oriented metrics and human perception under extremely low bitrates.
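The core encoding step described in the compression abstract, representing latents as indices of their nearest codewords, is easy to sketch. The shapes and the plain nearest-neighbour search below are illustrative; the paper additionally clusters a pre-trained codebook with K-means to trade bitrate against quality:

```python
import numpy as np

def quantize_latents(latents, codebook):
    """Map each latent vector to the index of its nearest codeword.

    latents:  (N, D) continuous latents from an encoder.
    codebook: (K, D) learned codewords.
    Returns an (N,) array of integer VQ-indices, which can then be passed
    to a lossless entropy coder to form the bitstream.
    """
    # Squared Euclidean distance expanded as ||x||^2 - 2 x.c + ||c||^2,
    # computed for all latent/codeword pairs at once.
    d = (
        (latents ** 2).sum(axis=1, keepdims=True)
        - 2.0 * latents @ codebook.T
        + (codebook ** 2).sum(axis=1)
    )
    return d.argmin(axis=1)
```

A smaller codebook (fewer rows in `codebook`) means fewer bits per index, which is the lever the K-means clustering of the pre-trained codebook pulls to reach the variable-bitrate regime.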
### Distributed bundle adjustment with block-based sparse matrix compression for super large scale datasets
- **Authors:** Maoteng Zheng, Nengcheng Chen, Junfeng Zhu, Xiaoru Zeng, Huanbin Qiu, Yuyao Jiang, Xingyue Lu, Hao Qu
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Distributed, Parallel, and Cluster Computing (cs.DC)
- **Arxiv link:** https://arxiv.org/abs/2307.08383
- **Pdf link:** https://arxiv.org/pdf/2307.08383
- **Abstract** We propose a distributed bundle adjustment (DBA) method using the exact Levenberg-Marquardt (LM) algorithm for super large-scale datasets. Most of the existing methods partition the global map to small ones and conduct bundle adjustment in the submaps. In order to fit the parallel framework, they use approximate solutions instead of the LM algorithm. However, those methods often give sub-optimal results. Different from them, we utilize the exact LM algorithm to conduct global bundle adjustment where the formation of the reduced camera system (RCS) is actually parallelized and executed in a distributed way. To store the large RCS, we compress it with a block-based sparse matrix compression format (BSMC), which fully exploits its block feature. The BSMC format also enables the distributed storage and updating of the global RCS. The proposed method is extensively evaluated and compared with the state-of-the-art pipelines using both synthetic and real datasets. Preliminary results demonstrate the efficient memory usage and vast scalability of the proposed method compared with the baselines. For the first time, we conducted parallel bundle adjustment using LM algorithm on a real datasets with 1.18 million images and a synthetic dataset with 10 million images (about 500 times that of the state-of-the-art LM-based BA) on a distributed computing system.
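A toy version of block-wise sparse storage illustrates why exploiting the block structure of the reduced camera system saves memory: only the nonzero b x b blocks are stored, and only those blocks are touched during a matrix-vector product. This dictionary-of-blocks layout is a simplification for illustration, not the paper's actual BSMC format:

```python
import numpy as np

class BlockSparseMatrix:
    """Dictionary-of-blocks sparse matrix: only nonzero b x b blocks are
    stored, keyed by (block_row, block_col)."""

    def __init__(self, n_blocks, b):
        self.n_blocks, self.b = n_blocks, b
        self.blocks = {}                      # (i, j) -> (b, b) ndarray

    def add_block(self, i, j, block):
        # Accumulate, so repeated contributions to one block are summed,
        # mirroring how per-observation terms build up a camera system.
        self.blocks[(i, j)] = self.blocks.get((i, j), 0.0) + block

    def matvec(self, x):
        """Multiply by a dense vector, visiting only the stored blocks."""
        b = self.b
        y = np.zeros(self.n_blocks * b)
        for (i, j), blk in self.blocks.items():
            y[i * b:(i + 1) * b] += blk @ x[j * b:(j + 1) * b]
        return y
```

Because each worker only needs the blocks it owns, a layout like this also lends itself to the distributed storage and updating the abstract describes.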
## Keyword: RAW

### Flow-Guided Controllable Line Drawing Generation
- **Authors:** Chengyu Fang, Xianfeng Han
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Multimedia (cs.MM)
- **Arxiv link:** https://arxiv.org/abs/2307.07540
- **Pdf link:** https://arxiv.org/pdf/2307.07540
- **Abstract** In this paper, we investigate the problem of automatically controllable artistic character line drawing generation from photographs by proposing a Vector Flow Aware and Line Controllable Image-to-Image Translation architecture, which can be viewed as an appealing intersection between Artificial Intelligence and Arts. Specifically, we first present an Image-to-Flow network (I2FNet) to efficiently and robustly create the vector flow field in a learning-based manner, which can provide a direction guide for drawing lines. Then, we introduce our well-designed Double Flow Generator (DFG) framework to fuse features from learned vector flow and input image flow guaranteeing the spatial coherence of lines. Meanwhile, in order to allow for controllable character line drawing generation, we integrate a Line Control Matrix (LCM) into DFG and train a Line Control Regressor (LCR) to synthesize drawings with different styles by elaborately controlling the level of details, such as thickness, smoothness, and continuity, of lines. Finally, we design a Fourier Transformation Loss to further constrain the character line generation from the frequency domain view of the point. Quantitative and qualitative experiments demonstrate that our approach can obtain superior performance in producing high-resolution character line-drawing images with perceptually realistic characteristics.
### INVE: Interactive Neural Video Editing
- **Authors:** Jiahui Huang, Leonid Sigal, Kwang Moo Yi, Oliver Wang, Joon-Young Lee
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV)
- **Arxiv link:** https://arxiv.org/abs/2307.07663
- **Pdf link:** https://arxiv.org/pdf/2307.07663
- **Abstract** We present Interactive Neural Video Editing (INVE), a real-time video editing solution, which can assist the video editing process by consistently propagating sparse frame edits to the entire video clip. Our method is inspired by the recent work on Layered Neural Atlas (LNA). LNA, however, suffers from two major drawbacks: (1) the method is too slow for interactive editing, and (2) it offers insufficient support for some editing use cases, including direct frame editing and rigid texture tracking. To address these challenges we leverage and adopt highly efficient network architectures, powered by hash-grids encoding, to substantially improve processing speed. In addition, we learn bi-directional functions between image-atlas and introduce vectorized editing, which collectively enables a much greater variety of edits in both the atlas and the frames directly. Compared to LNA, our INVE reduces the learning and inference time by a factor of 5, and supports various video editing operations that LNA cannot. We showcase the superiority of INVE over LNA in interactive video editing through a comprehensive quantitative and qualitative analysis, highlighting its numerous advantages and improved performance. For video results, please see https://gabriel-huang.github.io/inve/

### ExposureDiffusion: Learning to Expose for Low-light Image Enhancement
- **Authors:** Yufei Wang, Yi Yu, Wenhan Yang, Lanqing Guo, Lap-Pui Chau, Alex C. Kot, Bihan Wen
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Image and Video Processing (eess.IV)
- **Arxiv link:** https://arxiv.org/abs/2307.07710
- **Pdf link:** https://arxiv.org/pdf/2307.07710
- **Abstract** Previous raw image-based low-light image enhancement methods predominantly relied on feed-forward neural networks to learn deterministic mappings from low-light to normally-exposed images. However, they failed to capture critical distribution information, leading to visually undesirable results. This work addresses the issue by seamlessly integrating a diffusion model with a physics-based exposure model. Different from a vanilla diffusion model that has to perform Gaussian denoising, with the injected physics-based exposure model, our restoration process can directly start from a noisy image instead of pure noise. As such, our method obtains significantly improved performance and reduced inference time compared with vanilla diffusion models. To make full use of the advantages of different intermediate steps, we further propose an adaptive residual layer that effectively screens out the side-effect in the iterative refinement when the intermediate results have been already well-exposed. The proposed framework can work with both real-paired datasets, SOTA noise models, and different backbone networks. Note that, the proposed framework is compatible with real-paired datasets, real/synthetic noise models, and different backbone networks. We evaluate the proposed method on various public benchmarks, achieving promising results with consistent improvements using different exposure models and backbones. Besides, the proposed method achieves better generalization capacity for unseen amplifying ratios and better performance than a larger feedforward neural model when few parameters are adopted.
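The "start from a noisy image instead of pure noise" idea in the ExposureDiffusion abstract can be sketched on the schedule side: pick the diffusion timestep whose cumulative noise level best matches the observation, then run only the remaining reverse steps. The linear beta schedule and the SNR-matching heuristic below are illustrative assumptions, not the paper's physics-based exposure model:

```python
import numpy as np

# Hypothetical DDPM-style variance schedule.
T = 1000
betas = np.linspace(1e-4, 2e-2, T)
alpha_bar = np.cumprod(1.0 - betas)            # cumulative signal retention

def matching_timestep(observed_snr):
    """Return the timestep whose schedule SNR, sqrt(alpha_bar/(1-alpha_bar)),
    is closest to the signal-to-noise ratio estimated from the input image
    (the SNR estimate itself is assumed to come from elsewhere)."""
    schedule_snr = np.sqrt(alpha_bar / (1.0 - alpha_bar))
    return int(np.argmin(np.abs(schedule_snr - observed_snr)))
```

A nearly clean input then maps to an early timestep, so most of the reverse chain is skipped, which is one way to see where the reduced inference time comes from.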
### Prawn Morphometrics and Weight Estimation from Images using Deep Learning for Landmark Localization
- **Authors:** Alzayat Saleh, Md Mehedi Hasan, Herman W Raadsma, Mehar S Khatkar, Dean R Jerry, Mostafa Rahimi Azghadi
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV)
- **Arxiv link:** https://arxiv.org/abs/2307.07732
- **Pdf link:** https://arxiv.org/pdf/2307.07732
- **Abstract** Accurate weight estimation and morphometric analyses are useful in aquaculture for optimizing feeding, predicting harvest yields, identifying desirable traits for selective breeding, grading processes, and monitoring the health status of production animals. However, the collection of phenotypic data through traditional manual approaches at industrial scales and in real-time is time-consuming, labour-intensive, and prone to errors. Digital imaging of individuals and subsequent training of prediction models using Deep Learning (DL) has the potential to rapidly and accurately acquire phenotypic data from aquaculture species. In this study, we applied a novel DL approach to automate weight estimation and morphometric analysis using the black tiger prawn (Penaeus monodon) as a model crustacean. The DL approach comprises two main components: a feature extraction module that efficiently combines low-level and high-level features using the Kronecker product operation; followed by a landmark localization module that then uses these features to predict the coordinates of key morphological points (landmarks) on the prawn body. Once these landmarks were extracted, weight was estimated using a weight regression module based on the extracted landmarks using a fully connected network. For morphometric analyses, we utilized the detected landmarks to derive five important prawn traits. Principal Component Analysis (PCA) was also used to identify landmark-derived distances, which were found to be highly correlated with shape features such as body length, and width. We evaluated our approach on a large dataset of 8164 images of the Black tiger prawn (Penaeus monodon) collected from Australian farms. Our experimental results demonstrate that the novel DL approach outperforms existing DL methods in terms of accuracy, robustness, and efficiency.

### EmoSet: A Large-scale Visual Emotion Dataset with Rich Attributes
- **Authors:** Jingyuan Yang, Qiruin Huang, Tingting Ding, Dani Lischinski, Daniel Cohen-Or, Hui Huang
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV)
- **Arxiv link:** https://arxiv.org/abs/2307.07961
- **Pdf link:** https://arxiv.org/pdf/2307.07961
- **Abstract** Visual Emotion Analysis (VEA) aims at predicting people's emotional responses to visual stimuli. This is a promising, yet challenging, task in affective computing, which has drawn increasing attention in recent years. Most of the existing work in this area focuses on feature design, while little attention has been paid to dataset construction. In this work, we introduce EmoSet, the first large-scale visual emotion dataset annotated with rich attributes, which is superior to existing datasets in four aspects: scale, annotation richness, diversity, and data balance. EmoSet comprises 3.3 million images in total, with 118,102 of these images carefully labeled by human annotators, making it five times larger than the largest existing dataset. EmoSet includes images from social networks, as well as artistic images, and it is well balanced between different emotion categories. Motivated by psychological studies, in addition to emotion category, each image is also annotated with a set of describable emotion attributes: brightness, colorfulness, scene type, object class, facial expression, and human action, which can help understand visual emotions in a precise and interpretable way. The relevance of these emotion attributes is validated by analyzing the correlations between them and visual emotion, as well as by designing an attribute module to help visual emotion recognition. We believe EmoSet will bring some key insights and encourage further research in visual emotion analysis and understanding. The data and code will be released after the publication of this work.

### Planting a SEED of Vision in Large Language Model
- **Authors:** Yuying Ge, Yixiao Ge, Ziyun Zeng, Xintao Wang, Ying Shan
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV)
- **Arxiv link:** https://arxiv.org/abs/2307.08041
- **Pdf link:** https://arxiv.org/pdf/2307.08041
- **Abstract** We present SEED, an elaborate image tokenizer that empowers Large Language Models (LLMs) with the emergent ability to SEE and Draw at the same time. Research on image tokenizers has previously reached an impasse, as frameworks employing quantized visual tokens have lost prominence due to subpar performance and convergence in multimodal comprehension (compared to BLIP-2, etc.) or generation (compared to Stable Diffusion, etc.). Despite the limitations, we remain confident in its natural capacity to unify visual and textual representations, facilitating scalable multimodal training with LLM's original recipe. In this study, we identify two crucial principles for the architecture and training of SEED that effectively ease subsequent alignment with LLMs. (1) Image tokens should be independent of 2D physical patch positions and instead be produced with a 1D causal dependency, exhibiting intrinsic interdependence that aligns with the left-to-right autoregressive prediction mechanism in LLMs. (2) Image tokens should capture high-level semantics consistent with the degree of semantic abstraction in words, and be optimized for both discriminativeness and reconstruction during the tokenizer training phase. As a result, the off-the-shelf LLM is able to perform both image-to-text and text-to-image generation by incorporating our SEED through efficient LoRA tuning. Comprehensive multimodal pretraining and instruction tuning, which may yield improved results, are reserved for future investigation. This version of SEED was trained in 5.7 days using only 64 V100 GPUs and 5M publicly available image-text pairs. Our preliminary study emphasizes the great potential of discrete visual tokens in versatile multimodal LLMs and the importance of proper image tokenizers in broader research.

### Zero-Shot Image Harmonization with Generative Model Prior
- **Authors:** Jianqi Chen, Zhengxia Zou, Yilan Zhang, Keyan Chen, Zhenwei Shi
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV)
- **Arxiv link:** https://arxiv.org/abs/2307.08182
- **Pdf link:** https://arxiv.org/pdf/2307.08182
- **Abstract** Recent image harmonization methods have demonstrated promising results. However, due to their heavy reliance on a large number of composite images, these works are expensive in the training phase and often fail to generalize to unseen images. In this paper, we draw lessons from human behavior and come up with a zero-shot image harmonization method. Specifically, in the harmonization process, a human mainly utilizes his long-term prior on harmonious images and makes a composite image close to that prior. To imitate that, we resort to pretrained generative models for the prior of natural images. For the guidance of the harmonization direction, we propose an Attention-Constraint Text which is optimized to well illustrate the image environments. Some further designs are introduced for preserving the foreground content structure. The resulting framework, highly consistent with human behavior, can achieve harmonious results without burdensome training. Extensive experiments have demonstrated the effectiveness of our approach, and we have also explored some interesting applications.
### Benchmarking fixed-length Fingerprint Representations across different Embedding Sizes and Sensor Types
- **Authors:** Tim Rohwedder, Daile Osorio-Roig, Christian Rathgeb, Christoph Busch
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV)
- **Arxiv link:** https://arxiv.org/abs/2307.08615
- **Pdf link:** https://arxiv.org/pdf/2307.08615
- **Abstract** Traditional minutiae-based fingerprint representations consist of a variable-length set of minutiae. This necessitates a more complex comparison causing the drawback of high computational cost in one-to-many comparison. Recently, deep neural networks have been proposed to extract fixed-length embeddings from fingerprints. In this paper, we explore to what extent fingerprint texture information contained in such embeddings can be reduced in terms of dimension while preserving high biometric performance. This is of particular interest since it would allow to reduce the number of operations incurred at comparisons. We also study the impact in terms of recognition performance of the fingerprint textural information for two sensor types, i.e. optical and capacitive. Furthermore, the impact of rotation and translation of fingerprint images on the extraction of fingerprint embeddings is analysed. Experimental results conducted on a publicly available database reveal an optimal embedding size of 512 feature elements for the texture-based embedding part of fixed-length fingerprint representations. In addition, differences in performance between sensor types can be perceived.

## Keyword: raw image

### ExposureDiffusion: Learning to Expose for Low-light Image Enhancement
- **Authors:** Yufei Wang, Yi Yu, Wenhan Yang, Lanqing Guo, Lap-Pui Chau, Alex C. Kot, Bihan Wen
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Image and Video Processing (eess.IV)
- **Arxiv link:** https://arxiv.org/abs/2307.07710
- **Pdf link:** https://arxiv.org/pdf/2307.07710
- **Abstract** Previous raw image-based low-light image enhancement methods predominantly relied on feed-forward neural networks to learn deterministic mappings from low-light to normally-exposed images. However, they failed to capture critical distribution information, leading to visually undesirable results. This work addresses the issue by seamlessly integrating a diffusion model with a physics-based exposure model. Different from a vanilla diffusion model that has to perform Gaussian denoising, with the injected physics-based exposure model, our restoration process can directly start from a noisy image instead of pure noise. As such, our method obtains significantly improved performance and reduced inference time compared with vanilla diffusion models. To make full use of the advantages of different intermediate steps, we further propose an adaptive residual layer that effectively screens out the side-effect in the iterative refinement when the intermediate results have been already well-exposed. The proposed framework can work with both real-paired datasets, SOTA noise models, and different backbone networks. Note that, the proposed framework is compatible with real-paired datasets, real/synthetic noise models, and different backbone networks. We evaluate the proposed method on various public benchmarks, achieving promising results with consistent improvements using different exposure models and backbones. Besides, the proposed method achieves better generalization capacity for unseen amplifying ratios and better performance than a larger feedforward neural model when few parameters are adopted.
the first neural continuous hand representation learned from rgb videos the code will be publicly accessible video frame interpolation with stereo event and intensity camera authors chao ding mingyuan lin haijian zhang jianzhuang liu lei yu subjects computer vision and pattern recognition cs cv arxiv link pdf link abstract the stereo event intensity camera setup is widely applied to leverage the advantages of both event cameras with low latency and intensity cameras that capture accurate brightness and texture information however such a setup commonly encounters cross modality parallax that is difficult to be eliminated solely with stereo rectification especially for real world scenes with complex motions and varying depths posing artifacts and distortion for existing event based video frame interpolation e vfi approaches to tackle this problem we propose a novel stereo event based vfi se vfi network sevfi net to generate high quality intermediate frames and corresponding disparities from misaligned inputs consisting of two consecutive keyframes and event streams emitted between them specifically we propose a feature aggregation module fam to alleviate the parallax and achieve spatial alignment in the feature domain we then exploit the fused features accomplishing accurate optical flow and disparity estimation and achieving better interpolated results through flow based and synthesis based ways we also build a stereo visual acquisition system composed of an event camera and an rgb d camera to collect a new stereo event intensity dataset seid containing diverse scenes with complex motions and varying depths experiments on public real world stereo datasets i e dsec and mvsec and our seid dataset demonstrate that our proposed sevfi net outperforms state of the art methods by a large margin keyword image signal processing there is no result keyword image signal process there is no result keyword compression extreme image compression using fine tuned vqgan models 
authors qi mao tinghan yang yinuo zhang shuyin pan meng wang shiqi wang siwei ma subjects computer vision and pattern recognition cs cv image and video processing eess iv arxiv link pdf link abstract recent advances in generative compression methods have demonstrated remarkable progress in enhancing the perceptual quality of compressed data especially in scenarios with low bitrates nevertheless their efficacy and applicability in achieving extreme compression ratios bpp still remain constrained in this work we propose a simple yet effective coding framework by introducing vector quantization vq based generative models into the image compression domain the main insight is that the codebook learned by the vqgan model yields strong expressive capacity facilitating efficient compression of continuous information in the latent space while maintaining reconstruction quality specifically an image can be represented as vq indices by finding the nearest codeword which can be encoded using lossless compression methods into bitstreams we then propose clustering a pre trained large scale codebook into smaller codebooks using the k means algorithm this enables images to be represented as diverse ranges of vq indices maps resulting in variable bitrates and different levels of reconstruction quality extensive qualitative and quantitative experiments on various datasets demonstrate that the proposed framework outperforms the state of the art codecs in terms of perceptual quality oriented metrics and human perception under extremely low bitrates distributed bundle adjustment with block based sparse matrix compression for super large scale datasets authors maoteng zheng nengcheng chen junfeng zhu xiaoru zeng huanbin qiu yuyao jiang xingyue lu hao qu subjects computer vision and pattern recognition cs cv distributed parallel and cluster computing cs dc arxiv link pdf link abstract we propose a distributed bundle adjustment dba method using the exact levenberg marquardt lm algorithm 
for super large scale datasets most of the existing methods partition the global map to small ones and conduct bundle adjustment in the submaps in order to fit the parallel framework they use approximate solutions instead of the lm algorithm however those methods often give sub optimal results different from them we utilize the exact lm algorithm to conduct global bundle adjustment where the formation of the reduced camera system rcs is actually parallelized and executed in a distributed way to store the large rcs we compress it with a block based sparse matrix compression format bsmc which fully exploits its block feature the bsmc format also enables the distributed storage and updating of the global rcs the proposed method is extensively evaluated and compared with the state of the art pipelines using both synthetic and real datasets preliminary results demonstrate the efficient memory usage and vast scalability of the proposed method compared with the baselines for the first time we conducted parallel bundle adjustment using lm algorithm on a real datasets with million images and a synthetic dataset with million images about times that of the state of the art lm based ba on a distributed computing system keyword raw flow guided controllable line drawing generation authors chengyu fang xianfeng han subjects computer vision and pattern recognition cs cv multimedia cs mm arxiv link pdf link abstract in this paper we investigate the problem of automatically controllable artistic character line drawing generation from photographs by proposing a vector flow aware and line controllable image to image translation architecture which can be viewed as an appealing intersection between artificial intelligence and arts specifically we first present an image to flow network to efficiently and robustly create the vector flow field in a learning based manner which can provide a direction guide for drawing lines then we introduce our well designed double flow generator dfg 
framework to fuse features from learned vector flow and input image flow guaranteeing the spatial coherence of lines meanwhile in order to allow for controllable character line drawing generation we integrate a line control matrix lcm into dfg and train a line control regressor lcr to synthesize drawings with different styles by elaborately controlling the level of details such as thickness smoothness and continuity of lines finally we design a fourier transformation loss to further constrain the character line generation from the frequency domain view of the point quantitative and qualitative experiments demonstrate that our approach can obtain superior performance in producing high resolution character line drawing images with perceptually realistic characteristics inve interactive neural video editing authors jiahui huang leonid sigal kwang moo yi oliver wang joon young lee subjects computer vision and pattern recognition cs cv arxiv link pdf link abstract we present interactive neural video editing inve a real time video editing solution which can assist the video editing process by consistently propagating sparse frame edits to the entire video clip our method is inspired by the recent work on layered neural atlas lna lna however suffers from two major drawbacks the method is too slow for interactive editing and it offers insufficient support for some editing use cases including direct frame editing and rigid texture tracking to address these challenges we leverage and adopt highly efficient network architectures powered by hash grids encoding to substantially improve processing speed in addition we learn bi directional functions between image atlas and introduce vectorized editing which collectively enables a much greater variety of edits in both the atlas and the frames directly compared to lna our inve reduces the learning and inference time by a factor of and supports various video editing operations that lna cannot we showcase the superiority of inve over 
lna in interactive video editing through a comprehensive quantitative and qualitative analysis highlighting its numerous advantages and improved performance for video results please see exposurediffusion learning to expose for low light image enhancement authors yufei wang yi yu wenhan yang lanqing guo lap pui chau alex c kot bihan wen subjects computer vision and pattern recognition cs cv image and video processing eess iv arxiv link pdf link abstract previous raw image based low light image enhancement methods predominantly relied on feed forward neural networks to learn deterministic mappings from low light to normally exposed images however they failed to capture critical distribution information leading to visually undesirable results this work addresses the issue by seamlessly integrating a diffusion model with a physics based exposure model different from a vanilla diffusion model that has to perform gaussian denoising with the injected physics based exposure model our restoration process can directly start from a noisy image instead of pure noise as such our method obtains significantly improved performance and reduced inference time compared with vanilla diffusion models to make full use of the advantages of different intermediate steps we further propose an adaptive residual layer that effectively screens out the side effect in the iterative refinement when the intermediate results have been already well exposed the proposed framework can work with both real paired datasets sota noise models and different backbone networks note that the proposed framework is compatible with real paired datasets real synthetic noise models and different backbone networks we evaluate the proposed method on various public benchmarks achieving promising results with consistent improvements using different exposure models and backbones besides the proposed method achieves better generalization capacity for unseen amplifying ratios and better performance than a larger 
feedforward neural model when few parameters are adopted prawn morphometrics and weight estimation from images using deep learning for landmark localization authors alzayat saleh md mehedi hasan herman w raadsma mehar s khatkar dean r jerry mostafa rahimi azghadi subjects computer vision and pattern recognition cs cv arxiv link pdf link abstract accurate weight estimation and morphometric analyses are useful in aquaculture for optimizing feeding predicting harvest yields identifying desirable traits for selective breeding grading processes and monitoring the health status of production animals however the collection of phenotypic data through traditional manual approaches at industrial scales and in real time is time consuming labour intensive and prone to errors digital imaging of individuals and subsequent training of prediction models using deep learning dl has the potential to rapidly and accurately acquire phenotypic data from aquaculture species in this study we applied a novel dl approach to automate weight estimation and morphometric analysis using the black tiger prawn penaeus monodon as a model crustacean the dl approach comprises two main components a feature extraction module that efficiently combines low level and high level features using the kronecker product operation followed by a landmark localization module that then uses these features to predict the coordinates of key morphological points landmarks on the prawn body once these landmarks were extracted weight was estimated using a weight regression module based on the extracted landmarks using a fully connected network for morphometric analyses we utilized the detected landmarks to derive five important prawn traits principal component analysis pca was also used to identify landmark derived distances which were found to be highly correlated with shape features such as body length and width we evaluated our approach on a large dataset of images of the black tiger prawn penaeus monodon collected 
from australian farms our experimental results demonstrate that the novel dl approach outperforms existing dl methods in terms of accuracy robustness and efficiency emoset a large scale visual emotion dataset with rich attributes authors jingyuan yang qiruin huang tingting ding dani lischinski daniel cohen or hui huang subjects computer vision and pattern recognition cs cv arxiv link pdf link abstract visual emotion analysis vea aims at predicting people s emotional responses to visual stimuli this is a promising yet challenging task in affective computing which has drawn increasing attention in recent years most of the existing work in this area focuses on feature design while little attention has been paid to dataset construction in this work we introduce emoset the first large scale visual emotion dataset annotated with rich attributes which is superior to existing datasets in four aspects scale annotation richness diversity and data balance emoset comprises million images in total with of these images carefully labeled by human annotators making it five times larger than the largest existing dataset emoset includes images from social networks as well as artistic images and it is well balanced between different emotion categories motivated by psychological studies in addition to emotion category each image is also annotated with a set of describable emotion attributes brightness colorfulness scene type object class facial expression and human action which can help understand visual emotions in a precise and interpretable way the relevance of these emotion attributes is validated by analyzing the correlations between them and visual emotion as well as by designing an attribute module to help visual emotion recognition we believe emoset will bring some key insights and encourage further research in visual emotion analysis and understanding the data and code will be released after the publication of this work planting a seed of vision in large language model 
authors yuying ge yixiao ge ziyun zeng xintao wang ying shan subjects computer vision and pattern recognition cs cv arxiv link pdf link abstract we present seed an elaborate image tokenizer that empowers large language models llms with the emergent ability to see and draw at the same time research on image tokenizers has previously reached an impasse as frameworks employing quantized visual tokens have lost prominence due to subpar performance and convergence in multimodal comprehension compared to blip etc or generation compared to stable diffusion etc despite the limitations we remain confident in its natural capacity to unify visual and textual representations facilitating scalable multimodal training with llm s original recipe in this study we identify two crucial principles for the architecture and training of seed that effectively ease subsequent alignment with llms image tokens should be independent of physical patch positions and instead be produced with a causal dependency exhibiting intrinsic interdependence that aligns with the left to right autoregressive prediction mechanism in llms image tokens should capture high level semantics consistent with the degree of semantic abstraction in words and be optimized for both discriminativeness and reconstruction during the tokenizer training phase as a result the off the shelf llm is able to perform both image to text and text to image generation by incorporating our seed through efficient lora tuning comprehensive multimodal pretraining and instruction tuning which may yield improved results are reserved for future investigation this version of seed was trained in days using only gpus and publicly available image text pairs our preliminary study emphasizes the great potential of discrete visual tokens in versatile multimodal llms and the importance of proper image tokenizers in broader research zero shot image harmonization with generative model prior authors jianqi chen zhengxia zou yilan zhang keyan chen 
zhenwei shi subjects computer vision and pattern recognition cs cv arxiv link pdf link abstract recent image harmonization methods have demonstrated promising results however due to their heavy reliance on a large number of composite images these works are expensive in the training phase and often fail to generalize to unseen images in this paper we draw lessons from human behavior and come up with a zero shot image harmonization method specifically in the harmonization process a human mainly utilizes his long term prior on harmonious images and makes a composite image close to that prior to imitate that we resort to pretrained generative models for the prior of natural images for the guidance of the harmonization direction we propose an attention constraint text which is optimized to well illustrate the image environments some further designs are introduced for preserving the foreground content structure the resulting framework highly consistent with human behavior can achieve harmonious results without burdensome training extensive experiments have demonstrated the effectiveness of our approach and we have also explored some interesting applications benchmarking fixed length fingerprint representations across different embedding sizes and sensor types authors tim rohwedder daile osorio roig christian rathgeb christoph busch subjects computer vision and pattern recognition cs cv arxiv link pdf link abstract traditional minutiae based fingerprint representations consist of a variable length set of minutiae this necessitates a more complex comparison causing the drawback of high computational cost in one to many comparison recently deep neural networks have been proposed to extract fixed length embeddings from fingerprints in this paper we explore to what extent fingerprint texture information contained in such embeddings can be reduced in terms of dimension while preserving high biometric performance this is of particular interest since it would allow to reduce the 
number of operations incurred at comparisons we also study the impact in terms of recognition performance of the fingerprint textural information for two sensor types i e optical and capacitive furthermore the impact of rotation and translation of fingerprint images on the extraction of fingerprint embeddings is analysed experimental results conducted on a publicly available database reveal an optimal embedding size of feature elements for the texture based embedding part of fixed length fingerprint representations in addition differences in performance between sensor types can be perceived keyword raw image exposurediffusion learning to expose for low light image enhancement authors yufei wang yi yu wenhan yang lanqing guo lap pui chau alex c kot bihan wen subjects computer vision and pattern recognition cs cv image and video processing eess iv arxiv link pdf link abstract previous raw image based low light image enhancement methods predominantly relied on feed forward neural networks to learn deterministic mappings from low light to normally exposed images however they failed to capture critical distribution information leading to visually undesirable results this work addresses the issue by seamlessly integrating a diffusion model with a physics based exposure model different from a vanilla diffusion model that has to perform gaussian denoising with the injected physics based exposure model our restoration process can directly start from a noisy image instead of pure noise as such our method obtains significantly improved performance and reduced inference time compared with vanilla diffusion models to make full use of the advantages of different intermediate steps we further propose an adaptive residual layer that effectively screens out the side effect in the iterative refinement when the intermediate results have been already well exposed the proposed framework can work with both real paired datasets sota noise models and different backbone networks note that 
the proposed framework is compatible with real paired datasets real synthetic noise models and different backbone networks we evaluate the proposed method on various public benchmarks achieving promising results with consistent improvements using different exposure models and backbones besides the proposed method achieves better generalization capacity for unseen amplifying ratios and better performance than a larger feedforward neural model when few parameters are adopted
1
52,619
22,321,575,044
IssuesEvent
2022-06-14 06:59:20
Azure/azure-sdk-for-net
https://api.github.com/repos/Azure/azure-sdk-for-net
closed
[BUG][Storage mgmt Track2] RestoreBlobRanges' output TimeToRestore is always 1/1/0001 12:00:00 AM +00:00
Storage Service Attention Client customer-reported question needs-team-attention
### Library name and version Azure.ResourceManager.Storage 1.0.0-beta.9 ### Describe the bug In RestoreBlobRanges' output, no matter what the user input of TimeToRestore is, the output TimeToRestore's value is always 1/1/0001 12:00:00 AM +00:00. This happens no matter if we put WaitUtil.Started or WaitUtil.Completed when calling the method. ### Expected behavior For RestoreBlobRanges, the TimeToRestore in the output BlobRestoreStatus object should be the same as the TimeToRestore value specified in BlobRestoreContent object when calling RestoreBlobRanges. Say the input is 1/1/2022 15:23:03 AM +00:00, then the output should be the same. ### Actual behavior RestoreBlobRanges' output TimeToRestore is 1/1/0001 12:00:00 AM +00:00, but the input is not. ![image](https://user-images.githubusercontent.com/100746763/172806392-55ef1f22-f427-4777-bd87-06450deacf7e.png) The response returned by the server seems to be correct and has the right time: ``` GET https://management.azure.com/subscriptions/***/providers/Microsoft.Storage/locations/eastus2euap/asyncoperations/97cea899-6954-400d-8ac0-5bbbc547ff10?monitor=true&api-version=2021-08-01 HTTP/1.1 CommandName: Restore-AzStorageBlobRange ParameterSetName: AccountName x-ms-client-request-id: ea2a7a0a-d3aa-4912-b4ac-ce1169dc6363 x-ms-return-client-request-id: true Authorization: Bearer *** Host: management.azure.com HTTP/1.1 200 OK Cache-Control: no-cache Pragma: no-cache Content-Length: 277 Content-Type: application/json Expires: -1 x-ms-client-request-id: ea2a7a0a-d3aa-4912-b4ac-ce1169dc6363 x-ms-request-id: 0818c420-1797-4b42-af4e-c0fd775fc311 Strict-Transport-Security: max-age=31536000; includeSubDomains x-ms-ratelimit-remaining-subscription-reads: 11990 Server: Microsoft-Azure-Storage-Resource-Provider/1.0,Microsoft-HTTPAPI/2.0 Microsoft-HTTPAPI/2.0 x-ms-correlation-request-id: 81c45bec-795f-490f-9564-a2463e1e3e6f x-ms-routing-request-id: SOUTHEASTASIA:20220609T080208Z:81c45bec-795f-490f-9564-a2463e1e3e6f 
X-Content-Type-Options: nosniff Date: Thu, 09 Jun 2022 08:02:07 GMT {"status":"Complete","restoreId":"97cea899-6954-400d-8ac0-5bbbc547ff10","parameters":{"timetoRestore":"2022-06-09T07:57:41.3435008Z","blobRanges":[{"startRange":"container1/blob1","endRange":"container2/blob2"},{"startRange":"container3/blob3","endRange":"container4/blob4"}]}} ``` ### Reproduction Steps ``` var restoreLro = account.RestoreBlobRanges( WaitUntil.Started, new Track2Models.BlobRestoreContent( this.TimeToRestore, PSBlobRestoreRange.ParseBlobRestoreRanges(this.BlobRestoreRange)) ); // Wait for result to be returned and check for TimeToRestore value. var result = restoreLro.WaitForCompletion().Value; ``` This is the code we use to reproduce the bug - first call the method with `WaitUtil.Started`, and then `WaitForCompletion`. However, if we use `WaitUtil.Completed`, the bug still exists. Thus it can also be reproduced by ``` // Wait for it to return and check for TimeToRestore value var restoreLro = account.RestoreBlobRanges( WaitUntil.Completed, new Track2Models.BlobRestoreContent( this.TimeToRestore, PSBlobRestoreRange.ParseBlobRestoreRanges(this.BlobRestoreRange)) ); ``` ### Environment _No response_
1.0
[BUG][Storage mgmt Track2] RestoreBlobRanges' output TimeToRestore is always 1/1/0001 12:00:00 AM +00:00 - ### Library name and version Azure.ResourceManager.Storage 1.0.0-beta.9 ### Describe the bug In RestoreBlobRanges' output, no matter what the user input of TimeToRestore is, the output TimeToRestore's value is always 1/1/0001 12:00:00 AM +00:00. This happens no matter if we put WaitUtil.Started or WaitUtil.Completed when calling the method. ### Expected behavior For RestoreBlobRanges, the TimeToRestore in the output BlobRestoreStatus object should be the same as the TimeToRestore value specified in BlobRestoreContent object when calling RestoreBlobRanges. Say the input is 1/1/2022 15:23:03 AM +00:00, then the output should be the same. ### Actual behavior RestoreBlobRanges' output TimeToRestore is 1/1/0001 12:00:00 AM +00:00, but the input is not. ![image](https://user-images.githubusercontent.com/100746763/172806392-55ef1f22-f427-4777-bd87-06450deacf7e.png) The response returned by the server seems to be correct and has the right time: ``` GET https://management.azure.com/subscriptions/***/providers/Microsoft.Storage/locations/eastus2euap/asyncoperations/97cea899-6954-400d-8ac0-5bbbc547ff10?monitor=true&api-version=2021-08-01 HTTP/1.1 CommandName: Restore-AzStorageBlobRange ParameterSetName: AccountName x-ms-client-request-id: ea2a7a0a-d3aa-4912-b4ac-ce1169dc6363 x-ms-return-client-request-id: true Authorization: Bearer *** Host: management.azure.com HTTP/1.1 200 OK Cache-Control: no-cache Pragma: no-cache Content-Length: 277 Content-Type: application/json Expires: -1 x-ms-client-request-id: ea2a7a0a-d3aa-4912-b4ac-ce1169dc6363 x-ms-request-id: 0818c420-1797-4b42-af4e-c0fd775fc311 Strict-Transport-Security: max-age=31536000; includeSubDomains x-ms-ratelimit-remaining-subscription-reads: 11990 Server: Microsoft-Azure-Storage-Resource-Provider/1.0,Microsoft-HTTPAPI/2.0 Microsoft-HTTPAPI/2.0 x-ms-correlation-request-id: 81c45bec-795f-490f-9564-a2463e1e3e6f 
x-ms-routing-request-id: SOUTHEASTASIA:20220609T080208Z:81c45bec-795f-490f-9564-a2463e1e3e6f X-Content-Type-Options: nosniff Date: Thu, 09 Jun 2022 08:02:07 GMT {"status":"Complete","restoreId":"97cea899-6954-400d-8ac0-5bbbc547ff10","parameters":{"timetoRestore":"2022-06-09T07:57:41.3435008Z","blobRanges":[{"startRange":"container1/blob1","endRange":"container2/blob2"},{"startRange":"container3/blob3","endRange":"container4/blob4"}]}} ``` ### Reproduction Steps ``` var restoreLro = account.RestoreBlobRanges( WaitUntil.Started, new Track2Models.BlobRestoreContent( this.TimeToRestore, PSBlobRestoreRange.ParseBlobRestoreRanges(this.BlobRestoreRange)) ); // Wait for result to be returned and check for TimeToRestore value. var result = restoreLro.WaitForCompletion().Value; ``` This is the code we use to reproduce the bug - first call the method with `WaitUtil.Started`, and then `WaitForCompletion`. However, if we use `WaitUtil.Completed`, the bug still exists. Thus it can also be reproduced by ``` // Wait for it to return and check for TimeToRestore value var restoreLro = account.RestoreBlobRanges( WaitUntil.Completed, new Track2Models.BlobRestoreContent( this.TimeToRestore, PSBlobRestoreRange.ParseBlobRestoreRanges(this.BlobRestoreRange)) ); ``` ### Environment _No response_
non_process
restoreblobranges output timetorestore is always am library name and version azure resourcemanager storage beta describe the bug in restoreblobranges output no matter what the user input of timetorestore is the output timetorestore s value is always am this happens no matter if we put waitutil started or waitutil completed when calling the method expected behavior for restoreblobranges the timetorestore in the output blobrestorestatus object should be the same as the timetorestore value specified in blobrestorecontent object when calling restoreblobranges say the input is am then the output should be the same actual behavior restoreblobranges output timetorestore is am but the input is not the response returned by the server seems to be correct and has the right time get http commandname restore azstorageblobrange parametersetname accountname x ms client request id x ms return client request id true authorization bearer host management azure com http ok cache control no cache pragma no cache content length content type application json expires x ms client request id x ms request id strict transport security max age includesubdomains x ms ratelimit remaining subscription reads server microsoft azure storage resource provider microsoft httpapi microsoft httpapi x ms correlation request id x ms routing request id southeastasia x content type options nosniff date thu jun gmt status complete restoreid parameters timetorestore blobranges reproduction steps var restorelro account restoreblobranges waituntil started new blobrestorecontent this timetorestore psblobrestorerange parseblobrestoreranges this blobrestorerange wait for result to be returned and check for timetorestore value var result restorelro waitforcompletion value this is the code we use to reproduce the bug first call the method with waitutil started and then waitforcompletion however if we use waitutil completed the bug still exists thus it can also be reproduced by wait for it to return and check for 
timetorestore value var restorelro account restoreblobranges waituntil completed new blobrestorecontent this timetorestore psblobrestorerange parseblobrestoreranges this blobrestorerange environment no response
0
269,475
23,444,996,105
IssuesEvent
2022-08-15 18:39:37
PowerShell/PowerShellGet
https://api.github.com/repos/PowerShell/PowerShellGet
closed
Review ProjectUri/Url and IconUri/Url in Publish-PSResource
feature_request Needs Testing
### Summary of the new feature / enhancement See: https://github.com/PowerShell/PowerShellGet/pull/551#issuecomment-1020034298 Ensure that 'ProjectUri' or 'ProjectUrl' and 'IconUri' or 'IconUrl' are being used in the appropriate spots and are being referenced correctly when reading from module manifests and writing to nuspecs. ### Proposed technical implementation details (optional) _No response_
1.0
Review ProjectUri/Url and IconUri/Url in Publish-PSResource - ### Summary of the new feature / enhancement See: https://github.com/PowerShell/PowerShellGet/pull/551#issuecomment-1020034298 Ensure that 'ProjectUri' or 'ProjectUrl' and 'IconUri' and 'IconUrl' are being used in the appropriate spots and are being referenced correctly when reading from module manifests and writing to nuspecs. ### Proposed technical implementation details (optional) _No response_
non_process
review projecturi url and iconuri url in publish psresource summary of the new feature enhancement see ensure that projecturi or projecturl and iconuri and iconurl are being used in the appropriate spots and are being referenced correctly when reading from module manifests and writing to nuspecs proposed technical implementation details optional no response
0
10,820
8,183,521,817
IssuesEvent
2018-08-29 09:18:21
primefaces/primefaces
https://api.github.com/repos/primefaces/primefaces
closed
fileUpload: filename returned by UploadedFile should be sanitized
6.2.9 enhancement security
## 1) Environment - PrimeFaces version: primefaces-6.2.RC2-snapshot - Application server + version: WildFly 11 - Affected browsers: all ## 2) Expected behavior The filename returned by `UploadedFile ` should be sanitized. Otherwise, this may lead to path traversal vulnerabilities if developers use `UploadedFile.getFileName` to write to the file system - which is common practice and likely to happen. Maybe `NativeUploadedFile.write` could provide some protection as well, e.g. by providing a properly defined interface and strict separation of path and filename. ## 3) Actual behavior Some complicated escaping seems to be done in `NativeUploadedFile.getContentDispositionFileName`, however it is still possible for the attacker to send a specially crafted POST request that contains e.g. `.. ` or paths in the filename. In the worst case malware could be uploaded to system locations or server config files could be overridden. The filename could for example be sanitized by means of `FilenameUtils.getName ` from Apache commons-io. 
## 4) Steps to reproduce Open the sample XHTML, choose a file to upload, switch to console and enter the following code: ```javascript var formData = new FormData(); formData.append($("form").attr("id"),$("form").attr("id")); formData.append($("input[type=file]").attr("id"),$("input[type=file]")[0].files[0],'../../windows/malware.dll'); formData.append($("button").attr("id"),""); formData.append("javax.faces.ViewState",$("input[name$=ViewState]").val()); var request = new XMLHttpRequest(); request.open("POST","fileUpload.xhtml"); request.send(formData); ``` ## 5) Sample XHTML ```xml <html xmlns="http://www.w3.org/1999/xhtml" xmlns:h="http://java.sun.com/jsf/html" xmlns:p="http://primefaces.org/ui"> <h:head> <title>fileUpload test</title> </h:head> <h:form enctype="multipart/form-data"> <p:fileUpload value="#{fileUploadView.file}" mode="simple" skinSimple="true"/> <p:commandButton value="Submit" ajax="false" actionListener="#{fileUploadView.upload}" /> </h:form> </html> ``` ## 6) Sample bean ```java package de.test.primefaces; import java.io.Serializable; import javax.enterprise.context.SessionScoped; import javax.inject.Named; import org.primefaces.model.UploadedFile; @Named("fileUploadView") @SessionScoped public class FileUploadView implements Serializable { private UploadedFile file; public UploadedFile getFile() { return file; } public void setFile(UploadedFile file) { this.file = file; } public void upload() { if(file != null) { try { file.write("c:\\application_server_root\\upload_dir" + file.getFileName()); //potential path traversal } catch (Exception e) { e.printStackTrace(); } } } } ```
True
fileUpload: filename returned by UploadedFile should be sanitized - ## 1) Environment - PrimeFaces version: primefaces-6.2.RC2-snapshot - Application server + version: WildFly 11 - Affected browsers: all ## 2) Expected behavior The filename returned by `UploadedFile ` should be sanitized. Otherwise, this may lead to path traversal vulnerabilities if developers use `UploadedFile.getFileName` to write to the file system - which is common practice and likely to happen. Maybe `NativeUploadedFile.write` could provide some protection as well, e.g. by providing a properly defined interface and strict separation of path and filename. ## 3) Actual behavior Some complicated escaping seems to be done in `NativeUploadedFile.getContentDispositionFileName`, however it is still possible for the attacker to send a specially crafted POST request that contains e.g. `.. ` or paths in the filename. In the worst case malware could be uploaded to system locations or server config files could be overridden. The filename could for example be sanitized by means of `FilenameUtils.getName ` from Apache commons-io. 
## 4) Steps to reproduce Open the sample XHTML, choose a file to upload, switch to console and enter the following code: ```javascript var formData = new FormData(); formData.append($("form").attr("id"),$("form").attr("id")); formData.append($("input[type=file]").attr("id"),$("input[type=file]")[0].files[0],'../../windows/malware.dll'); formData.append($("button").attr("id"),""); formData.append("javax.faces.ViewState",$("input[name$=ViewState]").val()); var request = new XMLHttpRequest(); request.open("POST","fileUpload.xhtml"); request.send(formData); ``` ## 5) Sample XHTML ```xml <html xmlns="http://www.w3.org/1999/xhtml" xmlns:h="http://java.sun.com/jsf/html" xmlns:p="http://primefaces.org/ui"> <h:head> <title>fileUpload test</title> </h:head> <h:form enctype="multipart/form-data"> <p:fileUpload value="#{fileUploadView.file}" mode="simple" skinSimple="true"/> <p:commandButton value="Submit" ajax="false" actionListener="#{fileUploadView.upload}" /> </h:form> </html> ``` ## 6) Sample bean ```java package de.test.primefaces; import java.io.Serializable; import javax.enterprise.context.SessionScoped; import javax.inject.Named; import org.primefaces.model.UploadedFile; @Named("fileUploadView") @SessionScoped public class FileUploadView implements Serializable { private UploadedFile file; public UploadedFile getFile() { return file; } public void setFile(UploadedFile file) { this.file = file; } public void upload() { if(file != null) { try { file.write("c:\\application_server_root\\upload_dir" + file.getFileName()); //potential path traversal } catch (Exception e) { e.printStackTrace(); } } } } ```
non_process
fileupload filename returned by uploadedfile should be sanitized environment primefaces version primefaces snapshot application server version wildfly affected browsers all expected behavior the filename returned by uploadedfile should be sanitized otherwise this may lead to path traversal vulnerabilities if developers use uploadedfile getfilename to write to the file system which is common practice and likely to happen maybe nativeuploadedfile write could provide some protection as well e g by providing a properly defined interface and strict separation of path and filename actual behavior some complicated escaping seems to be done in nativeuploadedfile getcontentdispositionfilename however it is still possible for the attacker to send a specially crafted post request that contains e g or paths in the filename in the worst case malware could be uploaded to system locations or server config files could be overridden the filename could for example be sanitized by means of filenameutils getname from apache commons io steps to reproduce open the sample xhtml choose a file to upload switch to console and enter the following code javascript var formdata new formdata formdata append form attr id form attr id formdata append input attr id input files windows malware dll formdata append button attr id formdata append javax faces viewstate input val var request new xmlhttprequest request open post fileupload xhtml request send formdata sample xhtml xml html xmlns xmlns h xmlns p fileupload test sample bean java package de test primefaces import java io serializable import javax enterprise context sessionscoped import javax inject named import org primefaces model uploadedfile named fileuploadview sessionscoped public class fileuploadview implements serializable private uploadedfile file public uploadedfile getfile return file public void setfile uploadedfile file this file file public void upload if file null try file write c application server root upload dir file 
getfilename potential path traversal catch exception e e printstacktrace
0
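The fix this record recommends (sanitizing the client-supplied filename with something like `FilenameUtils.getName` before writing to disk) can be sketched outside Java as well. A minimal Python sketch follows; the helper name `sanitize_filename` is an illustrative assumption, not PrimeFaces API:

```python
import ntpath
import posixpath


def sanitize_filename(client_supplied: str) -> str:
    """Strip any directory components (POSIX or Windows style) from an
    untrusted upload filename, mirroring what FilenameUtils.getName does."""
    # ntpath splits on both '/' and '\\', so a traversal payload such as
    # '../../windows/malware.dll' reduces to its final component; the
    # posixpath pass is a belt-and-braces second strip.
    name = posixpath.basename(ntpath.basename(client_supplied))
    # Reject names that are still traversal tokens or empty.
    if name in ("", ".", ".."):
        raise ValueError("invalid filename")
    return name
```

Joining the sanitized name to a fixed upload directory (rather than concatenating the raw client string) is what closes the path-traversal hole the record demonstrates.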
15,925
20,142,304,940
IssuesEvent
2022-02-09 01:20:54
brucemiller/LaTeXML
https://api.github.com/repos/brucemiller/LaTeXML
opened
Model and styling for figures found in minipages
enhancement postprocessing schema
I will start collecting examples in an issue, ideally I go to 5-10 different articles in arXiv, to see the broad flavor of markup we get with minipages. Currently my CSS is consistently wrong for them, but I would like to double-check the underlying HTML is also sensible. I'll recruit help, as I'm looking into details. For starters, combinations with `<span>` elements with class `ltx_inline-para ltx_minipage` are curious. Examples (starting with one): * [quant-ph/0510032](https://ar5iv.org/html/quant-ph/0510032#S3.SS2.p6.1)
1.0
Model and styling for figures found in minipages - I will start collecting examples in an issue, ideally I go to 5-10 different articles in arXiv, to see the broad flavor of markup we get with minipages. Currently my CSS is consistently wrong for them, but I would like to double-check the underlying HTML is also sensible. I'll recruit help, as I'm looking into details. For starters, combinations with `<span>` elements with class `ltx_inline-para ltx_minipage` are curious. Examples (starting with one): * [quant-ph/0510032](https://ar5iv.org/html/quant-ph/0510032#S3.SS2.p6.1)
process
model and styling for figures found in minipages i will start collecting examples in an issue ideally i go to different articles in arxiv to see the broad flavor of markup we get with minipages currently my css is consistently wrong for them but i would like to double check the underlying html is also sensible i ll recruit help as i m looking into details for starters combinations with elements with class ltx inline para ltx minipage are curious examples starting with one
1
10,418
13,210,736,096
IssuesEvent
2020-08-15 18:33:27
km4ack/patmenu2
https://api.github.com/repos/km4ack/patmenu2
closed
Station List download recommendation
bug in process
I mistakenly chose to update my station list while offline. The result was my station list file was replaced with an empty list. I would suggest the update script check to see if the new list is successfully downloaded before erasing the old list. If you don't have internet access and hit the wrong button, you will lose all your ARDOP/Packet station info.
1.0
Station List download recommendation - I mistakenly chose to update my station list while offline. The result was my station list file was replaced with an empty list. I would suggest the update script check to see if the new list is successfully downloaded before erasing the old list. If you don't have internet access and hit the wrong button, you will lose all your ARDOP/Packet station info.
process
station list download recommendation i mistakenly chose to update my station list while offline the result was my station list file was replaced with an empty list i would suggest the update script check to see if the new list is successfully downloaded before erasing the old list if you don t have internet access and hit the wrong button you will lose all your ardop packet station info
1
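The behavior this record asks for — verify the new list downloaded successfully before erasing the old one — is the classic download-to-temp-then-atomic-replace pattern. A small Python sketch under that assumption (the function name `safe_update` and the `fetch` callback are hypothetical, not from the patmenu2 scripts, which are shell):

```python
import os
import tempfile


def safe_update(path: str, fetch) -> bool:
    """Replace the station list at `path` only after a new copy has been
    fetched successfully, so a failed (offline) download never erases
    the old data."""
    try:
        data = fetch()  # may raise OSError when there is no connectivity
    except OSError:
        return False  # keep the old list untouched
    # Write the new list to a temp file in the same directory, then swap
    # it in atomically; readers never observe an empty/partial file.
    fd, tmp = tempfile.mkstemp(dir=os.path.dirname(path) or ".")
    with os.fdopen(fd, "wb") as f:
        f.write(data)
    os.replace(tmp, path)
    return True
```

`os.replace` is atomic on the same filesystem, which is why the temp file is created next to the target rather than in `/tmp`.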
495,594
14,284,675,146
IssuesEvent
2020-11-23 12:52:09
SupremeObsidian/ProjectManager
https://api.github.com/repos/SupremeObsidian/ProjectManager
opened
Mobs falling apart when hit
High Priority bug
All of our mobs' geometry comes apart when they are hit by a player. This might be a problem in the future, especially if it's something that has to do with how we animate or create our models.
1.0
Mobs falling apart when hit - All of our mobs' geometry comes apart when they are hit by a player. This might be a problem in the future, especially if it's something that has to do with how we animate or create our models.
non_process
mobs falling apart when hit all of our mobs geometry comes apart when they are hit by a player this might be a problem in the future especially if it s something that has to do with how we animate or create our models
0
19,121
25,171,933,958
IssuesEvent
2022-11-11 04:42:09
emily-writes-poems/emily-writes-poems-processing
https://api.github.com/repos/emily-writes-poems/emily-writes-poems-processing
closed
react: feature page
processing
have these at least functioning from the react page: - [x] create new feature - [x] display all features table
1.0
react: feature page - have these at least functioning from the react page: - [x] create new feature - [x] display all features table
process
react feature page have these at least functioning from the react page create new feature display all features table
1
9,218
12,254,378,263
IssuesEvent
2020-05-06 08:22:49
python-trio/trio
https://api.github.com/repos/python-trio/trio
closed
Put process-creation into a thread
subprocesses
[Original title: Is process creation *actually* non-blocking?] In https://github.com/python-trio/trio/issues/1104 I wrote: > the actual process startup is synchronous, so you could just as well have a synchronous version But uh... it just occurred to me that I'm actually not sure if this is true! I mean, right now we just use `subprocess.Popen`, which is indeed a synchronous interface. And on Unix, spawning a new process and getting a handle on it is generally super cheap – it's just `fork`. The `exec` is expensive, but that happens after the child has split off – the parent doesn't wait for it. But on Windows, you call `CreateProcess`, which I think might block the caller while doing all the disk access to set up the new process? Process creation on Windows are notoriously slow, and I don't know how much of that the parent process has to sit and wait for before `CreateProcess` can return. And even on Unix, you use `vfork`, in which case the parent process is blocked until the `exec`. And on recent Pythons, `subprocess` uses `posix_spawn`. On Linux this might use `vfork` (I'm not actually sure?). And on macOS it uses a native `posix_spawn` syscall, so who knows what that does. Again, this *might* not be a big deal... maybe the parent gets to go again the instant the child calls `exec`, or sooner, without having to wait for any disk access or anything. But I'm not sure! So... we should figure this out. Because if process creation is slow enough that we need to treat it as a blocking operation, we might need to change the process API to give it an async constructor. (Presumably by making `Process.__init__` private, and adding `await trio.open_process(...)` – similar to how we handle files.)
1.0
Put process-creation into a thread - [Original title: Is process creation *actually* non-blocking?] In https://github.com/python-trio/trio/issues/1104 I wrote: > the actual process startup is synchronous, so you could just as well have a synchronous version But uh... it just occurred to me that I'm actually not sure if this is true! I mean, right now we just use `subprocess.Popen`, which is indeed a synchronous interface. And on Unix, spawning a new process and getting a handle on it is generally super cheap – it's just `fork`. The `exec` is expensive, but that happens after the child has split off – the parent doesn't wait for it. But on Windows, you call `CreateProcess`, which I think might block the caller while doing all the disk access to set up the new process? Process creation on Windows are notoriously slow, and I don't know how much of that the parent process has to sit and wait for before `CreateProcess` can return. And even on Unix, you use `vfork`, in which case the parent process is blocked until the `exec`. And on recent Pythons, `subprocess` uses `posix_spawn`. On Linux this might use `vfork` (I'm not actually sure?). And on macOS it uses a native `posix_spawn` syscall, so who knows what that does. Again, this *might* not be a big deal... maybe the parent gets to go again the instant the child calls `exec`, or sooner, without having to wait for any disk access or anything. But I'm not sure! So... we should figure this out. Because if process creation is slow enough that we need to treat it as a blocking operation, we might need to change the process API to give it an async constructor. (Presumably by making `Process.__init__` private, and adding `await trio.open_process(...)` – similar to how we handle files.)
process
put process creation into a thread in i wrote the actual process startup is synchronous so you could just as well have a synchronous version but uh it just occurred to me that i m actually not sure if this is true i mean right now we just use subprocess popen which is indeed a synchronous interface and on unix spawning a new process and getting a handle on it is generally super cheap – it s just fork the exec is expensive but that happens after the child has split off – the parent doesn t wait for it but on windows you call createprocess which i think might block the caller while doing all the disk access to set up the new process process creation on windows are notoriously slow and i don t know how much of that the parent process has to sit and wait for before createprocess can return and even on unix you use vfork in which case the parent process is blocked until the exec and on recent pythons subprocess uses posix spawn on linux this might use vfork i m not actually sure and on macos it uses a native posix spawn syscall so who knows what that does again this might not be a big deal maybe the parent gets to go again the instant the child calls exec or sooner without having to wait for any disk access or anything but i m not sure so we should figure this out because if process creation is slow enough that we need to treat it as a blocking operation we might need to change the process api to give it an async constructor presumably by making process init private and adding await trio open process – similar to how we handle files
1
2,833
5,786,157,114
IssuesEvent
2017-05-01 08:57:13
qgis/QGIS-Documentation
https://api.github.com/repos/qgis/QGIS-Documentation
closed
[FEATURE][processing] New algorithm for calculating feature bounding boxes
Processing Text
Original commit: https://github.com/qgis/QGIS/commit/0815ddd766cea48cf9c2eddde656943f8a1ecfda by nyalldawson (cherry-picked from bd8db5d156071b308f9e091bc444857424879f06)
1.0
[FEATURE][processing] New algorithm for calculating feature bounding boxes - Original commit: https://github.com/qgis/QGIS/commit/0815ddd766cea48cf9c2eddde656943f8a1ecfda by nyalldawson (cherry-picked from bd8db5d156071b308f9e091bc444857424879f06)
process
new algorithm for calculating feature bounding boxes original commit by nyalldawson cherry picked from
1
20,022
26,499,969,196
IssuesEvent
2023-01-18 09:32:29
dotnet/runtime
https://api.github.com/repos/dotnet/runtime
closed
Process.WaitForExit() deadlock when running multiple processes on linux with .net7
area-System.Diagnostics.Process untriaged
### Description Running multiple processes concurrently using System.Diagnostics.Process causes a deadlock on linux with .net7. I checked [this](https://github.com/dotnet/runtime/issues/51277), but I believe it is something else. ### Reproduction Steps A simple example to reproduce deadlock is [here](https://github.com/kirnosenko/processdeadlock). ### Expected behavior no deadlock ### Actual behavior deadlock ### Regression? _No response_ ### Known Workarounds _No response_ ### Configuration .net7 on x64 alpine linux container ### Other information I tested it in different environments: .net6 + windows = fine .net6 + linux = fine .net7 + windows = fine .net7 + linux = deadlock
1.0
Process.WaitForExit() deadlock when running multiple processes on linux with .net7 - ### Description Running multiple processes concurrently using System.Diagnostics.Process causes a deadlock on linux with .net7. I checked [this](https://github.com/dotnet/runtime/issues/51277), but I believe it is something else. ### Reproduction Steps A simple example to reproduce deadlock is [here](https://github.com/kirnosenko/processdeadlock). ### Expected behavior no deadlock ### Actual behavior deadlock ### Regression? _No response_ ### Known Workarounds _No response_ ### Configuration .net7 on x64 alpine linux container ### Other information I tested it in different environments: .net6 + windows = fine .net6 + linux = fine .net7 + windows = fine .net7 + linux = deadlock
process
process waitforexit deadlock when running multiple processes on linux with description running multiple processes concurrently using system diagnostics process causes a deadlock on linux with i checked but i believe it is something else reproduction steps a simple example to reproduce deadlock is expected behavior no deadlock actual behavior deadlock regression no response known workarounds no response configuration on alpine linux container other information i tested it in different environments windows fine linux fine windows fine linux deadlock
1
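The repro pattern this record describes (spawn several children concurrently, then wait on each) is language-neutral; a Python analogue of the same pattern, which completes without deadlock on CPython, looks like this (the helper name `spawn_and_wait` is hypothetical, not from the .NET repro repository):

```python
import subprocess
import sys
from concurrent.futures import ThreadPoolExecutor


def spawn_and_wait(n: int) -> list:
    """Start n child processes concurrently and wait for each to exit,
    returning their exit codes -- the concurrent Popen+wait shape the
    .NET repro exercises via Process.Start/WaitForExit."""
    def run(_):
        proc = subprocess.Popen(
            [sys.executable, "-c", "pass"], stdout=subprocess.DEVNULL
        )
        return proc.wait()

    # One thread per child so all waits are genuinely concurrent.
    with ThreadPoolExecutor(max_workers=n) as pool:
        return list(pool.map(run, range(n)))
```

The sketch only illustrates the access pattern; the deadlock itself was specific to the .NET 7 Linux runtime's child-reaping logic, not to the pattern.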
50,655
13,550,102,994
IssuesEvent
2020-09-17 09:08:28
symfony/symfony
https://api.github.com/repos/symfony/symfony
closed
[Security] New LoginFormAuthenticator fails on InsufficientAuthenticationException with missing AuthenticationEntryPoint
Bug Security Status: Needs Review
**Symfony version(s) affected**: 5.1 **Description** I've replaced my guard login authenticator with a version based on the `AbstractLoginFormAuthenticator` and enabled the AuthenticationManager. Everyhing works fine except hitting a page which requires `IS_AUTHENTICATED_REMEMBERED` without actual being logged in. This triggers an `AccessDeniedException` which gets caught by the firewall ExceptionListener. This creates an `InsufficientAuthenticationException` and tries to `startAuthentication()` with it. That method then throws an HttpException because there's not AuthenticationEntryPoint defined: ~~~php if (null === $this->authenticationEntryPoint) { throw new HttpException(Response::HTTP_UNAUTHORIZED, $authException->getMessage(), $authException, [], $authException->getCode()); } ~~~ The guard system correctly uses the configured entry point to redirect to my login page. **How to reproduce** 1. Build a login form based on the new authenticator 2. Build a page requiring a authenticated user 3. Hit the page w/o authentication **Possible Solution** Define an entry point for the authenticator? Sorry if I missed some doc. I've checked the blog post and Wouter's additional article, but couldn't find help. Kind regards Matthias
True
[Security] New LoginFormAuthenticator fails on InsufficientAuthenticationException with missing AuthenticationEntryPoint - **Symfony version(s) affected**: 5.1 **Description** I've replaced my guard login authenticator with a version based on the `AbstractLoginFormAuthenticator` and enabled the AuthenticationManager. Everyhing works fine except hitting a page which requires `IS_AUTHENTICATED_REMEMBERED` without actual being logged in. This triggers an `AccessDeniedException` which gets caught by the firewall ExceptionListener. This creates an `InsufficientAuthenticationException` and tries to `startAuthentication()` with it. That method then throws an HttpException because there's not AuthenticationEntryPoint defined: ~~~php if (null === $this->authenticationEntryPoint) { throw new HttpException(Response::HTTP_UNAUTHORIZED, $authException->getMessage(), $authException, [], $authException->getCode()); } ~~~ The guard system correctly uses the configured entry point to redirect to my login page. **How to reproduce** 1. Build a login form based on the new authenticator 2. Build a page requiring a authenticated user 3. Hit the page w/o authentication **Possible Solution** Define an entry point for the authenticator? Sorry if I missed some doc. I've checked the blog post and Wouter's additional article, but couldn't find help. Kind regards Matthias
non_process
new loginformauthenticator fails on insufficientauthenticationexception with missing authenticationentrypoint symfony version s affected description i ve replaced my guard login authenticator with a version based on the abstractloginformauthenticator and enabled the authenticationmanager everyhing works fine except hitting a page which requires is authenticated remembered without actual being logged in this triggers an accessdeniedexception which gets caught by the firewall exceptionlistener this creates an insufficientauthenticationexception and tries to startauthentication with it that method then throws an httpexception because there s not authenticationentrypoint defined php if null this authenticationentrypoint throw new httpexception response http unauthorized authexception getmessage authexception authexception getcode the guard system correctly uses the configured entry point to redirect to my login page how to reproduce build a login form based on the new authenticator build a page requiring a authenticated user hit the page w o authentication possible solution define an entry point for the authenticator sorry if i missed some doc i ve checked the blog post and wouter s additional article but couldn t find help kind regards matthias
0
9,832
12,828,086,804
IssuesEvent
2020-07-06 19:49:04
googleapis/code-suggester
https://api.github.com/repos/googleapis/code-suggester
opened
Framework-core: handle existing PRs and existing branches
type: process
- [ ] When there is an existing PR on an up-stream repository and a PR from the same branch and same down-stream repository is opened, there will be an error thrown. Ensure that the latest version is made into a PR. - [ ] When there is an existing branch and someone tries to apply changes onto an existing branch, optionally overwrite the existing branch. If overwrite is enabled, then warn the user. Otherwise gracefully fail. ### Additional Information This is for branches on a fork from an upstream-repository, and PRs from a fork on an upstream-repository
1.0
Framework-core: handle existing PRs and existing branches - - [ ] When there is an existing PR on an up-stream repository and a PR from the same branch and same down-stream repository is opened, there will be an error thrown. Ensure that the latest version is made into a PR. - [ ] When there is an existing branch and someone tries to apply changes onto an existing branch, optionally overwrite the existing branch. If overwrite is enabled, then warn the user. Otherwise gracefully fail. ### Additional Information This is for branches on a fork from an upstream-repository, and PRs from a fork on an upstream-repository
process
framework core handle existing prs and existing branches when there is an existing pr on an up stream repository and a pr from the same branch and same down stream repository is opened there will be an error thrown ensure that the latest version is made into a pr when there is an existing branch and someone tries to apply changes onto an existing branch optionally overwrite the existing branch if overwrite is enabled then warn the user otherwise gracefully fail additional information this is for branches on a fork from an upstream repository and prs from a fork on an upstream repository
1
16,316
20,971,536,155
IssuesEvent
2022-03-28 11:52:15
geneontology/go-ontology
https://api.github.com/repos/geneontology/go-ontology
closed
NTR modulation by symbiont of host system process
multi-species process
Hello, The multiorganism working group proposes to create a new term, 'modulation by symbiont of host system process ', to organize the 'modulation by symbiont of host process' branch, which would group: 'modulation by symbiont of host digestive system process' 'modulation by symbiont of host endocrine process' 'modulation by symbiont of host nervous system process' 'modulation by symbiont of host respiratory system process' +[Term] +id: GO:0140780 +name: modulation by symbiont of host system process +namespace: biological_process +def: "The process in which a symbiont organism effects a change in an anatomical system process of its host organism." [GOC:pg] +intersection_of: GO:0044003 ! modulation by symbiont of host process +intersection_of: regulates GO:0003008 ! system process +created_by: pg +creation_date: 2022-03-28T09:54:08Z
1.0
NTR modulation by symbiont of host system process - Hello, The multiorganism working group proposes to create a new term, 'modulation by symbiont of host system process ', to organize the 'modulation by symbiont of host process' branch, which would group: 'modulation by symbiont of host digestive system process' 'modulation by symbiont of host endocrine process' 'modulation by symbiont of host nervous system process' 'modulation by symbiont of host respiratory system process' +[Term] +id: GO:0140780 +name: modulation by symbiont of host system process +namespace: biological_process +def: "The process in which a symbiont organism effects a change in an anatomical system process of its host organism." [GOC:pg] +intersection_of: GO:0044003 ! modulation by symbiont of host process +intersection_of: regulates GO:0003008 ! system process +created_by: pg +creation_date: 2022-03-28T09:54:08Z
process
ntr modulation by symbiont of host system process hello the multiorganism working group proposes to create a new term modulation by symbiont of host system process to organize the modulation by symbiont of host process branch which would group modulation by symbiont of host digestive system process modulation by symbiont of host endocrine process modulation by symbiont of host nervous system process modulation by symbiont of host respiratory system process id go name modulation by symbiont of host system process namespace biological process def the process in which a symbiont organism effects a change in an anatomical system process of its host organism intersection of go modulation by symbiont of host process intersection of regulates go system process created by pg creation date
1
438,699
30,657,707,697
IssuesEvent
2023-07-25 13:12:46
eclipse-xpanse/xpanse
https://api.github.com/repos/eclipse-xpanse/xpanse
closed
review all API methods documentation
documentation api
Review all API methods documentation and update wherever necessary.
1.0
review all API methods documentation - Review all API methods documentation and update wherever necessary.
non_process
review all api methods documentation review all api methods documentation and update wherever necessary
0
12,018
14,738,444,683
IssuesEvent
2021-01-07 04:47:05
kdjstudios/SABillingGitlab
https://api.github.com/repos/kdjstudios/SABillingGitlab
closed
SA Billing - Rebillables / Vendor sort
anc-ops anc-process anc-report anc-ui anp-1 ant-bug ant-support
In GitLab by @kdjstudios on May 24, 2018, 12:47 **Submitted by:** Valerie Brown " <valerie.brown@answernet.com> **Helpdesk:** http://www.servicedesk.answernet.com/profiles/ticket/2018-05-24-44361/conversation **Server:** Internal (Both?) **Client/Site:** NA **Account:** NA **Issue:** I am having problems with the SA Billing program. If I try to sort by vendor name, it get an error occurred message. Can you please check out what the problem may be.
1.0
SA Billing - Rebillables / Vendor sort - In GitLab by @kdjstudios on May 24, 2018, 12:47 **Submitted by:** Valerie Brown " <valerie.brown@answernet.com> **Helpdesk:** http://www.servicedesk.answernet.com/profiles/ticket/2018-05-24-44361/conversation **Server:** Internal (Both?) **Client/Site:** NA **Account:** NA **Issue:** I am having problems with the SA Billing program. If I try to sort by vendor name, it get an error occurred message. Can you please check out what the problem may be.
process
sa billing rebillables vendor sort in gitlab by kdjstudios on may submitted by valerie brown helpdesk server internal both client site na account na issue i am having problems with the sa billing program if i try to sort by vendor name it get an error occurred message can you please check out what the problem may be
1
10,100
13,044,162,111
IssuesEvent
2020-07-29 03:47:29
tikv/tikv
https://api.github.com/repos/tikv/tikv
closed
UCP: Migrate scalar function `AddDatetimeAndString` from TiDB
challenge-program-2 component/coprocessor difficulty/easy sig/coprocessor
## Description Port the scalar function `AddDatetimeAndString` from TiDB to coprocessor. ## Score * 50 ## Mentor(s) * @mapleFU ## Recommended Skills * Rust programming ## Learning Materials Already implemented expressions ported from TiDB - https://github.com/tikv/tikv/tree/master/components/tidb_query/src/rpn_expr) - https://github.com/tikv/tikv/tree/master/components/tidb_query/src/expr)
2.0
UCP: Migrate scalar function `AddDatetimeAndString` from TiDB - ## Description Port the scalar function `AddDatetimeAndString` from TiDB to coprocessor. ## Score * 50 ## Mentor(s) * @mapleFU ## Recommended Skills * Rust programming ## Learning Materials Already implemented expressions ported from TiDB - https://github.com/tikv/tikv/tree/master/components/tidb_query/src/rpn_expr) - https://github.com/tikv/tikv/tree/master/components/tidb_query/src/expr)
process
ucp migrate scalar function adddatetimeandstring from tidb description port the scalar function adddatetimeandstring from tidb to coprocessor score mentor s maplefu recommended skills rust programming learning materials already implemented expressions ported from tidb
1
2,912
5,903,441,800
IssuesEvent
2017-05-19 06:45:27
inasafe/inasafe-realtime
https://api.github.com/repos/inasafe/inasafe-realtime
closed
Ordering issues in Realtime reports
bug realtime processor
Layer in map view were not ordered correctly. Need workaround. See original ticket at https://github.com/inasafe/inasafe/issues/3692 for further discussion.
1.0
Ordering issues in Realtime reports - Layer in map view were not ordered correctly. Need workaround. See original ticket at https://github.com/inasafe/inasafe/issues/3692 for further discussion.
process
ordering issues in realtime reports layer in map view were not ordered correctly need workaround see original ticket at for further discussion
1
36,866
18,019,567,129
IssuesEvent
2021-09-16 17:36:18
microsoft/react-native-windows
https://api.github.com/repos/microsoft/react-native-windows
closed
NuGet packages include symbols
enhancement Area: Performance Area: Developer Experience
The current nuget packages we produce (like v8jsi etc.) include PDBs which bloat the on-disk footprint, and 99% of developers won't need symbols for v8jsi, chakra, etc. These should instead be distributed via snupkg: https://docs.microsoft.com/en-us/nuget/create-packages/symbol-packages-snupkg ![image](https://user-images.githubusercontent.com/22989529/93245672-7a4c2680-f740-11ea-9216-c8800e542f8d.png) PDBs account for 1.3 GB: FullName | Length -- | -- ReactNative.V8Jsi.Windows.0.64.2\lib\win32\Debug\x64\v8jsi.dll.pdb | 170332160 ReactNative.V8Jsi.Windows.0.64.2\lib\win32\Debug\x86\v8jsi.dll.pdb | 168587264 ReactNative.V8Jsi.Windows.0.64.2\lib\win32\Release\x64\v8jsi.dll.pdb | 134377472 ReactNative.V8Jsi.Windows.0.64.2\lib\win32\Release\x86\v8jsi.dll.pdb | 130871296 Microsoft.ChakraCore.vc140.1.11.20\lib\native\v140\x86\release\ChakraCore.pdb | 70201344 Microsoft.ChakraCore.vc140.1.11.20\lib\native\v140\x86\debug\ChakraCore.pdb | 70201344 ReactWindows.ChakraCore.ARM64.1.11.20\lib\native\v140\arm64\release\ChakraCore.pdb | 69480448 ReactWindows.ChakraCore.ARM64.1.11.20\lib\native\v140\arm64\debug\ChakraCore.pdb | 69480448 Microsoft.ChakraCore.vc140.1.11.20\lib\native\v140\x64\debug\ChakraCore.pdb | 68505600 Microsoft.ChakraCore.vc140.1.11.20\lib\native\v140\x64\release\ChakraCore.pdb | 68505600 Microsoft.ChakraCore.vc140.1.11.20\lib\native\v140\arm\debug\ChakraCore.pdb | 55914496 Microsoft.ChakraCore.vc140.1.11.20\lib\native\v140\arm\release\ChakraCore.pdb | 55914496 ChakraCore.Debugger.0.0.0.44\lib\native\x86\Debug\ChakraCore.Debugger.pdb | 28020736 ChakraCore.Debugger.0.0.0.44\lib\native\x64\Debug\ChakraCore.Debugger.pdb | 27308032 ChakraCore.Debugger.0.0.0.44\lib\native\ARM64\Debug\ChakraCore.Debugger.pdb | 26341376 ChakraCore.Debugger.0.0.0.44\lib\native\ARM\Debug\ChakraCore.Debugger.pdb | 23670784 ReactNative.V8Jsi.Windows.0.64.2\lib\win32\Debug\arm64\v8jsi.dll.pdb | 14708736 ChakraCore.Debugger.0.0.0.44\lib\native\x64\Release\ChakraCore.Debugger.pdb | 13774848 
ChakraCore.Debugger.0.0.0.44\lib\native\ARM64\Release\ChakraCore.Debugger.pdb | 13316096 ChakraCore.Debugger.0.0.0.44\lib\native\x86\Release\ChakraCore.Debugger.pdb | 13152256 ChakraCore.Debugger.0.0.0.44\lib\native\ARM\Release\ChakraCore.Debugger.pdb | 10727424 ReactNative.V8Jsi.Windows.0.64.2\lib\win32\Release\arm64\v8jsi.dll.pdb | 9695232 ReactWindows.ChakraCore.ARM64.1.11.20\lib\native\v140\arm64\release\ch.pdb | 4804608 ReactWindows.ChakraCore.ARM64.1.11.20\lib\native\v140\arm64\debug\ch.pdb | 4804608 Microsoft.ChakraCore.vc140.1.11.20\lib\native\v140\x64\debug\ch.pdb | 4755456 Microsoft.ChakraCore.vc140.1.11.20\lib\native\v140\x64\release\ch.pdb | 4755456 Microsoft.ChakraCore.vc140.1.11.20\lib\native\v140\x86\debug\ch.pdb | 4616192 Microsoft.ChakraCore.vc140.1.11.20\lib\native\v140\x86\release\ch.pdb | 4616192 Microsoft.ChakraCore.vc140.1.11.20\lib\native\v140\arm\debug\ch.pdb | 4239360 Microsoft.ChakraCore.vc140.1.11.20\lib\native\v140\arm\release\ch.pdb | 4239360 Microsoft.googletest.v140.windesktop.msvcstl.static.rt-dyn.1.8.1\lib\native\v140\windesktop\msvcstl\static\rt-dyn\x64\Debug\gtest.pdb | 1429504 Microsoft.googletest.v140.windesktop.msvcstl.static.rt-dyn.1.8.1\lib\native\v140\windesktop\msvcstl\static\rt-dyn\x86\Debug\gtest.pdb | 1421312 Microsoft.googletest.v140.windesktop.msvcstl.static.rt-dyn.1.8.1\lib\native\v140\windesktop\msvcstl\static\rt-dyn\x64\Release\gtest.pdb | 1339392 Microsoft.googletest.v140.windesktop.msvcstl.static.rt-dyn.1.8.1\lib\native\v140\windesktop\msvcstl\static\rt-dyn\x86\Release\gtest.pdb | 1331200 Microsoft.googletest.v140.windesktop.msvcstl.static.rt-dyn.1.8.1\lib\native\v140\windesktop\msvcstl\static\rt-dyn\x86\Debug\gtest_main.pdb | 724992 Microsoft.googletest.v140.windesktop.msvcstl.static.rt-dyn.1.8.1\lib\native\v140\windesktop\msvcstl\static\rt-dyn\x64\Debug\gtest_main.pdb | 716800 
Microsoft.googletest.v140.windesktop.msvcstl.static.rt-dyn.1.8.1\lib\native\v140\windesktop\msvcstl\static\rt-dyn\x86\Release\gtest_main.pdb | 692224 Microsoft.googletest.v140.windesktop.msvcstl.static.rt-dyn.1.8.1\lib\native\v140\windesktop\msvcstl\static\rt-dyn\x64\Release\gtest_main.pdb | 684032
True
NuGet packages include symbols - The current nuget packages we produce (like v8jsi etc.) include PDBs which bloat the on-disk footprint, and 99% of developers won't need symbols for v8jsi, chakra, etc. These should instead be distributed via snupkg: https://docs.microsoft.com/en-us/nuget/create-packages/symbol-packages-snupkg ![image](https://user-images.githubusercontent.com/22989529/93245672-7a4c2680-f740-11ea-9216-c8800e542f8d.png) PDBs account for 1.3 GB: FullName | Length -- | -- ReactNative.V8Jsi.Windows.0.64.2\lib\win32\Debug\x64\v8jsi.dll.pdb | 170332160 ReactNative.V8Jsi.Windows.0.64.2\lib\win32\Debug\x86\v8jsi.dll.pdb | 168587264 ReactNative.V8Jsi.Windows.0.64.2\lib\win32\Release\x64\v8jsi.dll.pdb | 134377472 ReactNative.V8Jsi.Windows.0.64.2\lib\win32\Release\x86\v8jsi.dll.pdb | 130871296 Microsoft.ChakraCore.vc140.1.11.20\lib\native\v140\x86\release\ChakraCore.pdb | 70201344 Microsoft.ChakraCore.vc140.1.11.20\lib\native\v140\x86\debug\ChakraCore.pdb | 70201344 ReactWindows.ChakraCore.ARM64.1.11.20\lib\native\v140\arm64\release\ChakraCore.pdb | 69480448 ReactWindows.ChakraCore.ARM64.1.11.20\lib\native\v140\arm64\debug\ChakraCore.pdb | 69480448 Microsoft.ChakraCore.vc140.1.11.20\lib\native\v140\x64\debug\ChakraCore.pdb | 68505600 Microsoft.ChakraCore.vc140.1.11.20\lib\native\v140\x64\release\ChakraCore.pdb | 68505600 Microsoft.ChakraCore.vc140.1.11.20\lib\native\v140\arm\debug\ChakraCore.pdb | 55914496 Microsoft.ChakraCore.vc140.1.11.20\lib\native\v140\arm\release\ChakraCore.pdb | 55914496 ChakraCore.Debugger.0.0.0.44\lib\native\x86\Debug\ChakraCore.Debugger.pdb | 28020736 ChakraCore.Debugger.0.0.0.44\lib\native\x64\Debug\ChakraCore.Debugger.pdb | 27308032 ChakraCore.Debugger.0.0.0.44\lib\native\ARM64\Debug\ChakraCore.Debugger.pdb | 26341376 ChakraCore.Debugger.0.0.0.44\lib\native\ARM\Debug\ChakraCore.Debugger.pdb | 23670784 ReactNative.V8Jsi.Windows.0.64.2\lib\win32\Debug\arm64\v8jsi.dll.pdb | 14708736 
ChakraCore.Debugger.0.0.0.44\lib\native\x64\Release\ChakraCore.Debugger.pdb | 13774848 ChakraCore.Debugger.0.0.0.44\lib\native\ARM64\Release\ChakraCore.Debugger.pdb | 13316096 ChakraCore.Debugger.0.0.0.44\lib\native\x86\Release\ChakraCore.Debugger.pdb | 13152256 ChakraCore.Debugger.0.0.0.44\lib\native\ARM\Release\ChakraCore.Debugger.pdb | 10727424 ReactNative.V8Jsi.Windows.0.64.2\lib\win32\Release\arm64\v8jsi.dll.pdb | 9695232 ReactWindows.ChakraCore.ARM64.1.11.20\lib\native\v140\arm64\release\ch.pdb | 4804608 ReactWindows.ChakraCore.ARM64.1.11.20\lib\native\v140\arm64\debug\ch.pdb | 4804608 Microsoft.ChakraCore.vc140.1.11.20\lib\native\v140\x64\debug\ch.pdb | 4755456 Microsoft.ChakraCore.vc140.1.11.20\lib\native\v140\x64\release\ch.pdb | 4755456 Microsoft.ChakraCore.vc140.1.11.20\lib\native\v140\x86\debug\ch.pdb | 4616192 Microsoft.ChakraCore.vc140.1.11.20\lib\native\v140\x86\release\ch.pdb | 4616192 Microsoft.ChakraCore.vc140.1.11.20\lib\native\v140\arm\debug\ch.pdb | 4239360 Microsoft.ChakraCore.vc140.1.11.20\lib\native\v140\arm\release\ch.pdb | 4239360 Microsoft.googletest.v140.windesktop.msvcstl.static.rt-dyn.1.8.1\lib\native\v140\windesktop\msvcstl\static\rt-dyn\x64\Debug\gtest.pdb | 1429504 Microsoft.googletest.v140.windesktop.msvcstl.static.rt-dyn.1.8.1\lib\native\v140\windesktop\msvcstl\static\rt-dyn\x86\Debug\gtest.pdb | 1421312 Microsoft.googletest.v140.windesktop.msvcstl.static.rt-dyn.1.8.1\lib\native\v140\windesktop\msvcstl\static\rt-dyn\x64\Release\gtest.pdb | 1339392 Microsoft.googletest.v140.windesktop.msvcstl.static.rt-dyn.1.8.1\lib\native\v140\windesktop\msvcstl\static\rt-dyn\x86\Release\gtest.pdb | 1331200 Microsoft.googletest.v140.windesktop.msvcstl.static.rt-dyn.1.8.1\lib\native\v140\windesktop\msvcstl\static\rt-dyn\x86\Debug\gtest_main.pdb | 724992 Microsoft.googletest.v140.windesktop.msvcstl.static.rt-dyn.1.8.1\lib\native\v140\windesktop\msvcstl\static\rt-dyn\x64\Debug\gtest_main.pdb | 716800 
Microsoft.googletest.v140.windesktop.msvcstl.static.rt-dyn.1.8.1\lib\native\v140\windesktop\msvcstl\static\rt-dyn\x86\Release\gtest_main.pdb | 692224 Microsoft.googletest.v140.windesktop.msvcstl.static.rt-dyn.1.8.1\lib\native\v140\windesktop\msvcstl\static\rt-dyn\x64\Release\gtest_main.pdb | 684032
non_process
nuget packages include symbols the current nuget packages we produce like etc include pdbs which bloat the on disk footprint and of developers won t need symbols for chakra etc these should instead be distributed via snupkg pdbs account for gb fullname length reactnative windows lib debug dll pdb reactnative windows lib debug dll pdb reactnative windows lib release dll pdb reactnative windows lib release dll pdb microsoft chakracore lib native release chakracore pdb microsoft chakracore lib native debug chakracore pdb reactwindows chakracore lib native release chakracore pdb reactwindows chakracore lib native debug chakracore pdb microsoft chakracore lib native debug chakracore pdb microsoft chakracore lib native release chakracore pdb microsoft chakracore lib native arm debug chakracore pdb microsoft chakracore lib native arm release chakracore pdb chakracore debugger lib native debug chakracore debugger pdb chakracore debugger lib native debug chakracore debugger pdb chakracore debugger lib native debug chakracore debugger pdb chakracore debugger lib native arm debug chakracore debugger pdb reactnative windows lib debug dll pdb chakracore debugger lib native release chakracore debugger pdb chakracore debugger lib native release chakracore debugger pdb chakracore debugger lib native release chakracore debugger pdb chakracore debugger lib native arm release chakracore debugger pdb reactnative windows lib release dll pdb reactwindows chakracore lib native release ch pdb reactwindows chakracore lib native debug ch pdb microsoft chakracore lib native debug ch pdb microsoft chakracore lib native release ch pdb microsoft chakracore lib native debug ch pdb microsoft chakracore lib native release ch pdb microsoft chakracore lib native arm debug ch pdb microsoft chakracore lib native arm release ch pdb microsoft googletest windesktop msvcstl static rt dyn lib native windesktop msvcstl static rt dyn debug gtest pdb microsoft googletest windesktop msvcstl static rt dyn lib 
native windesktop msvcstl static rt dyn debug gtest pdb microsoft googletest windesktop msvcstl static rt dyn lib native windesktop msvcstl static rt dyn release gtest pdb microsoft googletest windesktop msvcstl static rt dyn lib native windesktop msvcstl static rt dyn release gtest pdb microsoft googletest windesktop msvcstl static rt dyn lib native windesktop msvcstl static rt dyn debug gtest main pdb microsoft googletest windesktop msvcstl static rt dyn lib native windesktop msvcstl static rt dyn debug gtest main pdb microsoft googletest windesktop msvcstl static rt dyn lib native windesktop msvcstl static rt dyn release gtest main pdb microsoft googletest windesktop msvcstl static rt dyn lib native windesktop msvcstl static rt dyn release gtest main pdb
0
21,304
28,499,639,541
IssuesEvent
2023-04-18 16:21:23
GoogleCloudPlatform/pgadapter
https://api.github.com/repos/GoogleCloudPlatform/pgadapter
closed
Difference in timezone for Egypt
type: process
``` ITPsqlTest.testTimestamptzParsing:794 Timestamp: 2041-06-21 17:45:31, Timezone: Egypt expected:<...041-06-21 17:45:31+0[3] (1 row) > but was:<...041-06-21 17:45:31+0[2] (1 row) ```
1.0
Difference in timezone for Egypt - ``` ITPsqlTest.testTimestamptzParsing:794 Timestamp: 2041-06-21 17:45:31, Timezone: Egypt expected:<...041-06-21 17:45:31+0[3] (1 row) > but was:<...041-06-21 17:45:31+0[2] (1 row) ```
process
difference in timezone for egypt itpsqltest testtimestamptzparsing timestamp timezone egypt expected row but was row
1
11,867
14,667,744,504
IssuesEvent
2020-12-29 19:25:17
modi-w/AutoVersionsDB
https://api.github.com/repos/modi-w/AutoVersionsDB
opened
The log process window is responding slowly when it runs on a big project with a lot of scripts.
area-UI process-discussion type-bug
**Describe the bug** The log process window is responding slowly when it runs on a big project with a lot of scripts. **To Reproduce** Steps to reproduce the behavior: 1. Define a big project with a lot of scripts with many scripts block. 2. Run all the scripts file with "Recreate" 3. after a while during the process, open the message window by clicking on the status box. 4. Try to scroll and see the slow responding **Action Items:** 1. 2. 3. **Updates** 1.
1.0
The log process window is responding slowly when it runs on a big project with a lot of scripts. - **Describe the bug** The log process window is responding slowly when it runs on a big project with a lot of scripts. **To Reproduce** Steps to reproduce the behavior: 1. Define a big project with a lot of scripts with many scripts block. 2. Run all the scripts file with "Recreate" 3. after a while during the process, open the message window by clicking on the status box. 4. Try to scroll and see the slow responding **Action Items:** 1. 2. 3. **Updates** 1.
process
the log process window is responding slowly when it runs on a big project with a lot of scripts describe the bug the log process window is responding slowly when it runs on a big project with a lot of scripts to reproduce steps to reproduce the behavior define a big project with a lot of scripts with many scripts block run all the scripts file with recreate after a while during the process open the message window by clicking on the status box try to scroll and see the slow responding action items updates
1
4,385
3,367,263,020
IssuesEvent
2015-11-22 01:33:08
openhab/openhab
https://api.github.com/repos/openhab/openhab
closed
can't setup IDE
build-or-ide question
https://github.com/openhab/openhab/wiki/IDE-Setup I read this tutorial , i install all required plugins on top of an existing Eclipse ,use this update site:http://yoxos.eclipsesource.com/userdata/profile/c5f3985b62c488f0df0dfbc369f9e057 there is an error while i isntalling it:"The installation cannot be completed as requested."
1.0
can't setup IDE - https://github.com/openhab/openhab/wiki/IDE-Setup I read this tutorial , i install all required plugins on top of an existing Eclipse ,use this update site:http://yoxos.eclipsesource.com/userdata/profile/c5f3985b62c488f0df0dfbc369f9e057 there is an error while i isntalling it:"The installation cannot be completed as requested."
non_process
can t setup ide i read this tutorial i install all required plugins on top of an existing eclipse use this update site there is an error while i isntalling it the installation cannot be completed as requested
0
9,815
25,277,944,945
IssuesEvent
2022-11-16 13:55:27
hzi-braunschweig/SORMAS-Project
https://api.github.com/repos/hzi-braunschweig/SORMAS-Project
opened
Clearly separate EJBs and services
refactoring change architecture
### Problem Description While working on #9708, I encountered a strange situation where the fact that EJBs and services receive their own `EntityManager` (via `AbstractBaseEjb` and `BaseAdoService` respectively) causes the creation of unnecessary transactions when you call from EJBs to services. In cases where an EJB accesses a lazy loaded many-to-many relation on an entity returned from a service, this access will result in a `LazyInitializationException` (this is reproducible across multiple entities) as the session of the returned entity will be always terminated. ### Proposed Change To avoid this behavior and to finally provide a clear separation of services and EJBs, I propose to remove the `EntityManager` from all EJBs and enforce this with an architecture test. Only services should be have access to the DB. ### Acceptance Criteria - [ ] `BaseAdoService` (or a `GenesisService`) should be the only class in the whole project which receives the `EntityManager` for the SORMAS persistence unit. ### Implementation Details - [ ] Write an architecture test - [ ] Remove the `EntityManager` - [ ] Compile - [ ] Move the code from EJBs to the corresponding service. This is really just copying stuff and creating a method with the same name which is called instead. - [ ] Repeat until done ### Additional Information We heavily rely on DTO projection where we convert an ADO with it all its lazy fields to a full DTO (i.e., `toDto`). I expect this to have a quite negative performance impact as the advantage of lazy-loading is negated. Once this ticket is implemented, we should focus on projecting less, or ideally only projecting the final result of a service call to a DTO.
1.0
Clearly separate EJBs and services - ### Problem Description While working on #9708, I encountered a strange situation where the fact that EJBs and services receive their own `EntityManager` (via `AbstractBaseEjb` and `BaseAdoService` respectively) causes the creation of unnecessary transactions when you call from EJBs to services. In cases where an EJB accesses a lazy loaded many-to-many relation on an entity returned from a service, this access will result in a `LazyInitializationException` (this is reproducible across multiple entities) as the session of the returned entity will be always terminated. ### Proposed Change To avoid this behavior and to finally provide a clear separation of services and EJBs, I propose to remove the `EntityManager` from all EJBs and enforce this with an architecture test. Only services should be have access to the DB. ### Acceptance Criteria - [ ] `BaseAdoService` (or a `GenesisService`) should be the only class in the whole project which receives the `EntityManager` for the SORMAS persistence unit. ### Implementation Details - [ ] Write an architecture test - [ ] Remove the `EntityManager` - [ ] Compile - [ ] Move the code from EJBs to the corresponding service. This is really just copying stuff and creating a method with the same name which is called instead. - [ ] Repeat until done ### Additional Information We heavily rely on DTO projection where we convert an ADO with it all its lazy fields to a full DTO (i.e., `toDto`). I expect this to have a quite negative performance impact as the advantage of lazy-loading is negated. Once this ticket is implemented, we should focus on projecting less, or ideally only projecting the final result of a service call to a DTO.
non_process
clearly separate ejbs and services problem description while working on i encountered a strange situation where the fact that ejbs and services receive their own entitymanager via abstractbaseejb and baseadoservice respectively causes the creation of unnecessary transactions when you call from ejbs to services in cases where an ejb accesses a lazy loaded many to many relation on an entity returned from a service this access will result in a lazyinitializationexception this is reproducible across multiple entities as the session of the returned entity will be always terminated proposed change to avoid this behavior and to finally provide a clear separation of services and ejbs i propose to remove the entitymanager from all ejbs and enforce this with an architecture test only services should be have access to the db acceptance criteria baseadoservice or a genesisservice should be the only class in the whole project which receives the entitymanager for the sormas persistence unit implementation details write an architecture test remove the entitymanager compile move the code from ejbs to the corresponding service this is really just copying stuff and creating a method with the same name which is called instead repeat until done additional information we heavily rely on dto projection where we convert an ado with it all its lazy fields to a full dto i e todto i expect this to have a quite negative performance impact as the advantage of lazy loading is negated once this ticket is implemented we should focus on projecting less or ideally only projecting the final result of a service call to a dto
0
60,477
25,147,969,166
IssuesEvent
2022-11-10 07:39:20
Azure/azure-sdk-for-java
https://api.github.com/repos/Azure/azure-sdk-for-java
closed
sdk/servicebus/azure-messaging-servicebus - cspell found spelling errors in package
Service Bus Client Spelling
Spell check scanning of package at `sdk/servicebus/azure-messaging-servicebus` detected spelling errors in the public API surface. This directory is opted out of PR spell checking in PR #31526 to keep PRs unblocked. ## What to do 1. Ensure Node.js is installed (https://nodejs.org/en/download/). 1. Delete the entry in `.vscode/cspell.json`'s `ignorePaths` field. It will look like: `sdk/servicebus/azure-messaging-servicebus/**`. You need to do this to enable checking the files. 1. From the root of the repo run spell check using `./eng/common/spelling/Invoke-Cspell.ps1 -ScanGlobs "sdk/servicebus/azure-messaging-servicebus/**"` 1. Fix detections according to http://aka.ms/azsdk/engsys/spellcheck use the "False positives" section to fix false positives 1. Check in changes (including the change to `.vscode/cspell.json` where the `ignorePaths` is updated to remove the entry for this service.). You may need to run `git add -f .vscode/cspell.json` to force adding the changes to the file in git. ## Spell checking output ``` ./sdk/servicebus/azure-messaging-servicebus/CHANGELOG.md:317:22 - Unknown word (Asyn) ./sdk/servicebus/azure-messaging-servicebus/CHANGELOG.md:327:64 - Unknown word (Asyn) ./sdk/servicebus/azure-messaging-servicebus/migration-guide.md:307:67 - Unknown word (automatica) ./sdk/servicebus/azure-messaging-servicebus/README.md:385:16 - Unknown word (Qpid) ./sdk/servicebus/azure-messaging-servicebus/README.md:385:31 - Unknown word (qpid) ./sdk/servicebus/azure-messaging-servicebus/README.md:385:54 - Unknown word (Qpid) ./sdk/servicebus/azure-messaging-servicebus/README.md:426:42 - Unknown word (implcit) ./sdk/servicebus/azure-messaging-servicebus/README.md:456:2 - Unknown word (qpid) ./sdk/servicebus/azure-messaging-servicebus/README.md:478:23 - Unknown word (implcit) ./sdk/servicebus/azure-messaging-servicebus/src/main/java/com/azure/messaging/servicebus/administration/models/AuthorizationRule.java:48:28 - Unknown word (authoriation) 
./sdk/servicebus/azure-messaging-servicebus/src/main/java/com/azure/messaging/servicebus/administration/models/CorrelationRuleFilter.java:35:43 - Unknown word (lexigraphical) ./sdk/servicebus/azure-messaging-servicebus/src/main/java/com/azure/messaging/servicebus/administration/models/SqlRuleAction.java:49:57 - Unknown word (deserialised) ./sdk/servicebus/azure-messaging-servicebus/src/main/java/com/azure/messaging/servicebus/administration/models/SqlRuleFilter.java:56:57 - Unknown word (deserialised) ./sdk/servicebus/azure-messaging-servicebus/src/main/java/com/azure/messaging/servicebus/FluxAutoLockRenew.java:44:44 - Unknown word (eceiver) ./sdk/servicebus/azure-messaging-servicebus/src/main/java/com/azure/messaging/servicebus/ServiceBusReceiverAsyncClient.java:1311:16 - Unknown word (releated) ./sdk/servicebus/azure-messaging-servicebus/src/main/java/com/azure/messaging/servicebus/ServiceBusReceiverAsyncClient.java:1312:54 - Unknown word (stucks) ./sdk/servicebus/azure-messaging-servicebus/src/main/java/com/azure/messaging/servicebus/ServiceBusReceiverClient.java:752:57 - Unknown word (alread) ./sdk/servicebus/azure-messaging-servicebus/src/main/java/com/azure/messaging/servicebus/ServiceBusSessionReceiver.java:103:25 - Unknown word (conniey) ./sdk/servicebus/azure-messaging-servicebus/src/samples/java/com/azure/messaging/servicebus/DeadletterQueueSample.java:49:21 - Unknown word (Kopernikus) ./sdk/servicebus/azure-messaging-servicebus/src/samples/java/com/azure/messaging/servicebus/DeadletterQueueSample.java:49:35 - Unknown word (Nikolaus) ./sdk/servicebus/azure-messaging-servicebus/src/samples/java/com/azure/messaging/servicebus/SendSessionMessageAsyncSample.java:77:57 - Unknown word (Guten) ./sdk/servicebus/azure-messaging-servicebus/src/samples/java/com/azure/messaging/servicebus/ServiceBusProcessorPeekLockReceiveSample.java:74:58 - Unknown word (unretriable) ```
1.0
sdk/servicebus/azure-messaging-servicebus - cspell found spelling errors in package - Spell check scanning of package at `sdk/servicebus/azure-messaging-servicebus` detected spelling errors in the public API surface. This directory is opted out of PR spell checking in PR #31526 to keep PRs unblocked. ## What to do 1. Ensure Node.js is installed (https://nodejs.org/en/download/). 1. Delete the entry in `.vscode/cspell.json`'s `ignorePaths` field. It will look like: `sdk/servicebus/azure-messaging-servicebus/**`. You need to do this to enable checking the files. 1. From the root of the repo run spell check using `./eng/common/spelling/Invoke-Cspell.ps1 -ScanGlobs "sdk/servicebus/azure-messaging-servicebus/**"` 1. Fix detections according to http://aka.ms/azsdk/engsys/spellcheck use the "False positives" section to fix false positives 1. Check in changes (including the change to `.vscode/cspell.json` where the `ignorePaths` is updated to remove the entry for this service.). You may need to run `git add -f .vscode/cspell.json` to force adding the changes to the file in git. 
## Spell checking output ``` ./sdk/servicebus/azure-messaging-servicebus/CHANGELOG.md:317:22 - Unknown word (Asyn) ./sdk/servicebus/azure-messaging-servicebus/CHANGELOG.md:327:64 - Unknown word (Asyn) ./sdk/servicebus/azure-messaging-servicebus/migration-guide.md:307:67 - Unknown word (automatica) ./sdk/servicebus/azure-messaging-servicebus/README.md:385:16 - Unknown word (Qpid) ./sdk/servicebus/azure-messaging-servicebus/README.md:385:31 - Unknown word (qpid) ./sdk/servicebus/azure-messaging-servicebus/README.md:385:54 - Unknown word (Qpid) ./sdk/servicebus/azure-messaging-servicebus/README.md:426:42 - Unknown word (implcit) ./sdk/servicebus/azure-messaging-servicebus/README.md:456:2 - Unknown word (qpid) ./sdk/servicebus/azure-messaging-servicebus/README.md:478:23 - Unknown word (implcit) ./sdk/servicebus/azure-messaging-servicebus/src/main/java/com/azure/messaging/servicebus/administration/models/AuthorizationRule.java:48:28 - Unknown word (authoriation) ./sdk/servicebus/azure-messaging-servicebus/src/main/java/com/azure/messaging/servicebus/administration/models/CorrelationRuleFilter.java:35:43 - Unknown word (lexigraphical) ./sdk/servicebus/azure-messaging-servicebus/src/main/java/com/azure/messaging/servicebus/administration/models/SqlRuleAction.java:49:57 - Unknown word (deserialised) ./sdk/servicebus/azure-messaging-servicebus/src/main/java/com/azure/messaging/servicebus/administration/models/SqlRuleFilter.java:56:57 - Unknown word (deserialised) ./sdk/servicebus/azure-messaging-servicebus/src/main/java/com/azure/messaging/servicebus/FluxAutoLockRenew.java:44:44 - Unknown word (eceiver) ./sdk/servicebus/azure-messaging-servicebus/src/main/java/com/azure/messaging/servicebus/ServiceBusReceiverAsyncClient.java:1311:16 - Unknown word (releated) ./sdk/servicebus/azure-messaging-servicebus/src/main/java/com/azure/messaging/servicebus/ServiceBusReceiverAsyncClient.java:1312:54 - Unknown word (stucks) 
./sdk/servicebus/azure-messaging-servicebus/src/main/java/com/azure/messaging/servicebus/ServiceBusReceiverClient.java:752:57 - Unknown word (alread) ./sdk/servicebus/azure-messaging-servicebus/src/main/java/com/azure/messaging/servicebus/ServiceBusSessionReceiver.java:103:25 - Unknown word (conniey) ./sdk/servicebus/azure-messaging-servicebus/src/samples/java/com/azure/messaging/servicebus/DeadletterQueueSample.java:49:21 - Unknown word (Kopernikus) ./sdk/servicebus/azure-messaging-servicebus/src/samples/java/com/azure/messaging/servicebus/DeadletterQueueSample.java:49:35 - Unknown word (Nikolaus) ./sdk/servicebus/azure-messaging-servicebus/src/samples/java/com/azure/messaging/servicebus/SendSessionMessageAsyncSample.java:77:57 - Unknown word (Guten) ./sdk/servicebus/azure-messaging-servicebus/src/samples/java/com/azure/messaging/servicebus/ServiceBusProcessorPeekLockReceiveSample.java:74:58 - Unknown word (unretriable) ```
non_process
sdk servicebus azure messaging servicebus cspell found spelling errors in package spell check scanning of package at sdk servicebus azure messaging servicebus detected spelling errors in the public api surface this directory is opted out of pr spell checking in pr to keep prs unblocked what to do ensure node js is installed delete the entry in vscode cspell json s ignorepaths field it will look like sdk servicebus azure messaging servicebus you need to do this to enable checking the files from the root of the repo run spell check using eng common spelling invoke cspell scanglobs sdk servicebus azure messaging servicebus fix detections according to use the false positives section to fix false positives check in changes including the change to vscode cspell json where the ignorepaths is updated to remove the entry for this service you may need to run git add f vscode cspell json to force adding the changes to the file in git spell checking output sdk servicebus azure messaging servicebus changelog md unknown word asyn sdk servicebus azure messaging servicebus changelog md unknown word asyn sdk servicebus azure messaging servicebus migration guide md unknown word automatica sdk servicebus azure messaging servicebus readme md unknown word qpid sdk servicebus azure messaging servicebus readme md unknown word qpid sdk servicebus azure messaging servicebus readme md unknown word qpid sdk servicebus azure messaging servicebus readme md unknown word implcit sdk servicebus azure messaging servicebus readme md unknown word qpid sdk servicebus azure messaging servicebus readme md unknown word implcit sdk servicebus azure messaging servicebus src main java com azure messaging servicebus administration models authorizationrule java unknown word authoriation sdk servicebus azure messaging servicebus src main java com azure messaging servicebus administration models correlationrulefilter java unknown word lexigraphical sdk servicebus azure messaging servicebus src main java com 
azure messaging servicebus administration models sqlruleaction java unknown word deserialised sdk servicebus azure messaging servicebus src main java com azure messaging servicebus administration models sqlrulefilter java unknown word deserialised sdk servicebus azure messaging servicebus src main java com azure messaging servicebus fluxautolockrenew java unknown word eceiver sdk servicebus azure messaging servicebus src main java com azure messaging servicebus servicebusreceiverasyncclient java unknown word releated sdk servicebus azure messaging servicebus src main java com azure messaging servicebus servicebusreceiverasyncclient java unknown word stucks sdk servicebus azure messaging servicebus src main java com azure messaging servicebus servicebusreceiverclient java unknown word alread sdk servicebus azure messaging servicebus src main java com azure messaging servicebus servicebussessionreceiver java unknown word conniey sdk servicebus azure messaging servicebus src samples java com azure messaging servicebus deadletterqueuesample java unknown word kopernikus sdk servicebus azure messaging servicebus src samples java com azure messaging servicebus deadletterqueuesample java unknown word nikolaus sdk servicebus azure messaging servicebus src samples java com azure messaging servicebus sendsessionmessageasyncsample java unknown word guten sdk servicebus azure messaging servicebus src samples java com azure messaging servicebus servicebusprocessorpeeklockreceivesample java unknown word unretriable
0